
Color calibration of ASI178 - success and a "failure"



I wanted to try color calibration of my ASI178 and finally came up with a "sensible" technique.

Usually, when color calibrating for a certain lighting type, photographers use a "ColorChecker Passport" or similar color chart. These are basically printed patches of known RGB values, and color calibration consists of shooting the chart and then adjusting the color balance in post processing to get as close as possible to those known values.

I wanted to do something similar, but these charts are rather expensive - about $100 or more online. There are cheaper knock-offs, but I did not want to spend any money since this won't be something I use regularly. Then I came up with an idea - one can download a similar color chart online. I just need a good display device capable of rendering a very small image of it with good color accuracy. Hm, what could that be? :D

Enter my cell phone. I've got a Xiaomi Mi A1 with a pretty decent screen - a 5.5" LTPS IPS panel - and colors look rather OK on it. Just a bit of math and I was set to go. The phone is placed about 10 meters from the telescope, and I'll be using my Mak102 and ASI178.

The field of view calculator at Astronomy Tools was used to quickly work out the FOV in degrees (0.33 degrees horizontally), and this calculator for the required distance:

http://www.1728.org/angsize.htm

I made the on-screen image of the color chart about 5 cm across.
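As a quick sanity check of the geometry (a minimal sketch; the 5 cm and 10 m figures are the ones above, and 0.33 degrees is the horizontal FOV from the calculator):

import math

# Angular size of a 5 cm chart seen from 10 m away
chart_size_m = 0.05
distance_m = 10.0

angle_deg = math.degrees(2 * math.atan(chart_size_m / (2 * distance_m)))
print(f"chart subtends {angle_deg:.3f} degrees")   # ~0.286 deg, inside the 0.33 deg FOV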

Here is the template used:

reference.png.24f5e71e98ab9005b4edb733fc7cec2f.png

My phone renders it pretty much the same as my Dell U2311H, except that colors are very slightly more vivid - in particular the blues - and the overall color temperature looks just a bit cooler. This is probably due to the LED backlight of the phone display. These differences are so subtle that they are barely noticeable, and smaller than the differences we are about to see below.

So I went and recorded a couple of frames with SharpCap, using the ASCOM driver instead of the native one.

Here is what came out of the camera as raw data directly mapped to RGB:

linear_unbalanced.png.83269e51ba1f424f9c608c51117b855b.png

Colors are obviously wrong because the image has not been color balanced. The first logical step would be to color balance it using simple per-channel ratios for R, G and B:

linear_simple_balance.png.1a23f608f0f95c9ca8c2684121ba8133.png

Luckily the bottom row is full of gray values, so that is rather easy to do. Notice that the bottom-left square is clipped. I saturated the green channel on that square (only that one) and did not notice it while shooting.
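A minimal sketch of that per-channel balancing (assuming the raw frame is already debayered into a linear RGB array; the function and variable names are mine, not from any particular package):

import numpy as np

def simple_white_balance(rgb_linear, gray_mask):
    """Scale R and B so the gray patches have equal channel means (G as reference)."""
    means = [rgb_linear[..., c][gray_mask].mean() for c in range(3)]
    r_scale = means[1] / means[0]
    b_scale = means[1] / means[2]
    balanced = rgb_linear.copy()
    balanced[..., 0] *= r_scale
    balanced[..., 2] *= b_scale
    return balanced, r_scale, b_scale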

Colors still don't look quite right compared to the reference chart. The gray gradient is not as it should be, and the colors are too dark and saturated compared to the reference. This is because the data is still linear, while sRGB - the color space these images are displayed in - requires a gamma of 2.2 (in fact it's a slightly more complicated gamma function, but for this test gamma 2.2 is a good enough approximation). Here is the simple color balancing encoded in sRGB with gamma 2.2:

simple_balance_gamma.png.fdeb389cb7fcbed975d27d884ce90e96.png
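The encoding step itself is just this (a sketch continuing from the balanced array above; the exact sRGB curve is discussed further down the thread):

import numpy as np

def encode_gamma_22(rgb_linear):
    """Approximate sRGB encoding with a plain 2.2 gamma (values assumed 0-1)."""
    return np.clip(rgb_linear, 0.0, 1.0) ** (1.0 / 2.2)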

Well, this is better - the gray gradient is now uniform - but the colors are still not quite as they should be. How do we fix this?

It turns out that the best color conversion is not simple channel balancing but applying a color conversion matrix. This is the same idea as a channel mixer, where each output channel depends on all three input channels.

I thus opened LibreOffice Calc, did some measurements on the reference and recorded images, and derived a transform matrix:

new red = 1.3516376483848*r+0.033549040292064*g-0.054423382816904*b

new green = -0.619460118029851*r+1.2330224800277*g-0.211822127381072*b

new blue = -0.178260528770039*r-0.088350168915713*g+1.19254273096779*b

You don't really need that many decimal digits - I just copied the numbers from LibreOffice Calc and that is the precision it gave me.
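Applying such a matrix to the linear data is then a one-liner per pixel (a sketch; the coefficients are the ones above, rounded):

import numpy as np

CCM = np.array([
    [ 1.3516,  0.0335, -0.0544],   # new red
    [-0.6195,  1.2330, -0.2118],   # new green
    [-0.1783, -0.0884,  1.1925],   # new blue
])

def apply_ccm(rgb_linear, ccm=CCM):
    """Multiply every pixel's linear (r, g, b) vector by the 3x3 matrix."""
    return rgb_linear @ ccm.T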

Once you do this matrix transform on linear data and do gamma correction, this is what you get:

calibration.png.aad4e3b3bf7f95faf3646fee00101fef.png

Colors are now the closest to the reference image; maybe blue and red are just a tad oversaturated, but I believe that is down to my phone.

With this new color conversion tool, I set out to check it in action. I shot Jupiter some time ago and of course the raw image from the camera desperately needed proper color balancing:

image.png.16c51565942f1fbea1393fe213412ceb.png

Too green and all wrong, right?

The auto color balance function of Registax gave me this (plus processing, of course):

image.png.78b00cf006db78cc42166c32309249c0.png

But I had a sense that the colors were too washed out and needed a bit more saturation. This was not even gamma corrected. This time I thought I'd give it the proper color treatment to see what I would get, and the result is somewhat disappointing.

jup_final.png.01360a3ad9608cb4f3b30d99b74bcef6.png

Why would I get a completely wrong color here? And then I realized: Jupiter is low at the moment, and the atmosphere has this well-known property - it makes the Sun look orange/red when setting, although it should be white/yellow. No wonder Jupiter has such a strong yellow/orange cast as well. The transformation worked perfectly and gave the actual color. In fact, if you observe this image in complete darkness without any other white surface, after some time it will start resembling the view of Jupiter at an eyepiece (the eye will try to color balance the image and make it slightly whiter, but it will still remain pale/yellowish).

In order to correct the color so that the planet looks as if it were shot from outside the atmosphere, I'll need to account for atmospheric extinction. I'll be thinking about how best to do that.

 

 


Interesting!

Could you combine the colour balancing transformation, your own derived transformation and gamma correction into one all-embracing transformation matrix - effectively a one-stop-shop colour correction matrix for any image taken with your 178?

Adrian

P.S. I've been trying to visualise your transformation matrix on a 3D rgb axis transforming r,g,b to r',g',b' - struggling with my visualisation tools. Have you tried any sort of visualisation - maybe in MATLAB?


11 minutes ago, Adreneline said:

Interesting!

Could you combine the colour balancing transformation, your own derived transformation and gamma correction into one all-embracing transformation matrix - effectively a one-stop-shop colour correction matrix for any image taken with your 178?

Adrian

P.S. I've been trying to visualise your transformation matrix on a 3D rgb axis transforming r,g,b to r',g',b' - struggling with my visualisation tools. Have you tried any sort of visualisation - maybe in MATLAB?

The color balancing transform and my own derived transform serve the same purpose, so there is not much point in combining the two.

You can pick either one. I did not mention what the scaling coefficients are:

Red is 1.985
Blue is 1.309

From the above examples, it looks like my measured and derived matrix gives better results - at least on these test colors (which should be good representatives of colors out there).

The gamma transform is nonlinear and therefore can't be encoded in a matrix. It is part of the sRGB standard and should be used only if the color space of the resulting image is sRGB. Computer standards say that any image without an embedded color profile should be treated as an sRGB image.

Gamma is basically a power law. Gamma 2.2 just means that linear RGB values in the 0-1 range get raised to the power 1/2.2 (the 2.2th root), and going from gamma corrected back to linear is the opposite - raise to the power 2.2.

That is an approximate transform. The true transform is a bit more complicated, and here it is written down:

image.png.f8963f681a51d7b86a6cf32c8dcd7aaa.png

So it is split into two parts - for very dark values the transform is linear, and for values larger than about 0.003 (in 0-1 space) it uses an exponent of 1/2.4.
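In code, the published sRGB piecewise curve (this is the standard definition, not anything camera specific) looks like this:

import numpy as np

def srgb_encode(lin):
    """Linear 0-1 values -> sRGB-encoded values using the exact piecewise curve."""
    lin = np.clip(lin, 0.0, 1.0)
    return np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin ** (1.0 / 2.4) - 0.055)

def srgb_decode(enc):
    """sRGB-encoded 0-1 values -> linear values (inverse of the above)."""
    enc = np.clip(enc, 0.0, 1.0)
    return np.where(enc <= 0.04045, enc / 12.92, ((enc + 0.055) / 1.055) ** 2.4)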

1024px-SRGB_gamma.svg.png

This image shows the difference between the actual gamma and the approximation - the red line is the actual sRGB gamma function and the dashed line is the 2.2 approximation. The blue line is the same as the red one, only plotted in log space, so you can clearly see the linear part as constant in log space.

BTW, the gamma function exists for the same reason the magnitude system exists: human vision is nonlinear with respect to intensity.

If you take RGB values of, for example, 0.2, 0.4 and 0.6 (each a gray, so (0.2, 0.2, 0.2), (0.4, 0.4, 0.4), (0.6, 0.6, 0.6)), you would expect to visually judge that the second color is brighter than the first by about the same amount that the third is brighter than the second. A scale of such colors should form a nice linear gradient. And it does - in a gamma corrected space such as sRGB.

The problem is that this does not happen in linear space. If you take three light sources giving off 2, 4 and 6 million photons per second and look at them, you won't see a nice linear brightness progression, although both the number of photons and the energy in watts (think of it as flux of some sort) given off by the sources scale linearly.

This is why we have the magnitude system for stars - which is also log based (a power law).

Camera "sees" is linear space and raw data recorded with camera of these three lights will record 2, 4 and 6 ADUs (or whatever other numbers depending on lens, distance and integration time) and if we directly convert that to sRGB we will see nice linear gradient on screen, but our eyes wont see it as linear in real life. Image on the screen will be wrong!

This is why it is important to do proper transforms, and for some reason we almost never do it in astronomical imaging (probably due to lack of understanding, or simply not caring about it - there is an old notion that there is no proper color in astrophotography anyway, which I think is wrong).

As for the 3D coordinate space - first you need to understand that the RGB space used for displays is actually smaller than the space of all colors. Many colors can't be reproduced on a computer screen; in fact, no pure-wavelength color can be reproduced on a computer screen (nor in almost any other color space apart from absolute color spaces). This means that a rainbow will always look different in real life compared to a photograph.

Some real-life colors end up with negative coordinates in RGB space, and since we can't emit negative light, these colors are simply outside of the RGB color space.

Similarly, with the above transformations there could be cases where we get results outside the sRGB color space. Examine the transform:

new green = -0.62*r+1.23*g-0.21*b

(this time rounded to two decimal places). It states that we should subtract a fraction of both raw_r and raw_b from raw_g to get linear green. What happens if raw_g is zero while raw_r and raw_b have some value? Clearly the result will be negative. Such a color is outside the linear sRGB color space.

Can it happen? Yes it can, but not as often as you might think. Here, look at the QE graph for the ASI178:

QE-ASI178-e1529580514772.jpg

The green response never falls to 0, so if raw_r is greater than 0 then so is raw_g, and similarly with raw_b. Still, there are cases where new green will be negative. Take for example Ha signal, and pretend it is at 650nm (it is 656nm in reality, but it is easier to read off 650nm values from the graph).

raw_r will be 0.8, raw_g will be about 0.16 and raw_b about 0.045.

Clearly

new green = -0.62 * 0.8 + 1.23 * 0.16 - 0.21 * 0.045 = -0.496 + 0.1968 - 0.00945 = -0.30865

So this color is outside the sRGB space under our transform - sRGB can't show a pure Ha signal. Is this true? Yes it is; look at the chromaticity diagram for sRGB:

CIExy1931_srgb_gamut.png

This is the xy chromaticity diagram. Each hue, irrespective of lightness, can be mapped to an xy coordinate like this. All colors that can be displayed by the sRGB standard fall inside the coloured triangle (which shows the mapping of actual colors). Along the edge of the horseshoe shape all pure wavelengths are mapped. If you look for 650 or 656, it will be somewhere on the right, next to the edge of the shape between 620 and 700 (these are nanometers, btw) - outside the sRGB triangle.

What do we do with colors that fall outside the sRGB color space when doing the color transform from our camera?

We should really map them to the closest color that can be shown. The simple but not the most accurate way would be to clip the results to the 0-1 range - in our example above the resulting green would simply be set to 0. Why is this not the best way? Because sRGB is not what we call a perceptually uniform color space, so the geometrically closest point that falls within our color space is not necessarily what we would perceive as the "closest color".

For example, take the pure blue, pure red and pure green colors of the sRGB space. Which one do you think is most similar to pure black? I would say blue. Yet they are geometrically the same distance from (0, 0, 0) - each one is at distance 1.

We can then modify the method of finding a replacement color: convert the color to a perceptually uniform color space and then find the closest color that is in the sRGB gamut.
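The naive option in code, for reference (a sketch; the better approach would search for the nearest in-gamut color in a perceptually uniform space such as Lab, which is more involved):

import numpy as np

def clip_to_gamut(rgb_linear):
    """Crudest gamut mapping: clamp out-of-range linear values to 0-1."""
    return np.clip(rgb_linear, 0.0, 1.0)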

Finally, to address the actual geometrical transform: I don't know what it looks like, and I'm not sure it makes much sense to know, as you can't relate it to color in any easy way. It is, however, a matrix transform, which means it is a linear transform. It may scale, rotate or shear the space, but straight lines remain straight - it won't do any sort of "warping" of the space.

One more thing to mention - the method of deriving the transform matrix. It is really nothing special. Most DSLR cameras have similar matrices defined for different white balancing scenarios (you can sometimes find these online, and I believe they are embedded in the camera raw files as well).

You take both the reference image and the recorded image and make sure they are in linear space. In this case that means applying the reverse gamma transform to the reference image, since the recorded raw image is already linear. You then measure the average R, G and B values in each square for both the reference and the recorded image.

You scale those RGB vectors to unit length for both the reference and the recorded image, and finally do a least-squares solve to find the matrix that converts the list of measured vectors into the reference vectors. I used ImageJ and LibreOffice Calc to do all of that.
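In Python the same least-squares step might look like this (a sketch, not the actual spreadsheet; measured and reference are the (N, 3) arrays of per-patch averages described above):

import numpy as np

def derive_ccm(measured, reference):
    """Least-squares fit of a 3x3 matrix M such that measured @ M.T ~ reference."""
    # Normalize each patch vector to unit length so the fit captures hue,
    # not overall brightness differences between the two images
    m = measured / np.linalg.norm(measured, axis=1, keepdims=True)
    r = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    X, *_ = np.linalg.lstsq(m, r, rcond=None)   # solves m @ X ~ r
    return X.T                                  # rows give the new red/green/blue equations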

 


1 hour ago, vlaiv said:

This is why it is important to do proper transforms and we almost never do it in astronomical imaging for some reason (probably lack of understanding or simply not caring about it - there is old notion that there is no proper color in astrophotography anyway - which I think is wrong).

A lot for me to absorb here, but on the point above and in particular "no proper colour" - hope this is not a red herring! - there seems to be an obsession with stretching r,g,b or ha,oiii,sii in unequal proportions when processing astro images, especially oiii, which is often stretched out of all proportion with ha. If it is preferable/necessary to carry out colour balancing, surely equal exposure times and equal stretching should be applied to each component, otherwise we are intentionally unbalancing the colours in the image.

I always try to expose each channel equally and then stretch each channel as equally as I can in PI, which means I tend to end up with a pastel image. Doing things like removing stars so I can stretch oiii more seems intrinsically wrong - or am I missing something?

Adrian


4 minutes ago, Adreneline said:

A lot for me to absorb here, but on the point above and in particular "no proper colour" - hope this is not a red herring! - there seems to be an obsession with stretching r,g,b or ha,oiii,sii in unequal proportions when processing astro images, especially oiii, which is often stretched out of all proportion with ha. If it is preferable/necessary to carry out colour balancing, surely equal exposure times and equal stretching should be applied to each component, otherwise we are intentionally unbalancing the colours in the image.

I always try to expose each channel equally and then stretch each channel as equally as I can in PI, which means I tend to end up with a pastel image. Doing things like removing stars so I can stretch oiii more seems intrinsically wrong - or am I missing something?

Adrian

We can talk about what proper color would mean in an astro image. No, there is no way to ensure 100% proper color as our eye would see it, simply because we want to show both faint and bright structure - we compress dynamic range that is beyond the human capacity to see at the same time. Humans can cover that much dynamic range, but the eye employs tricks to do it: in bright daylight our pupils contract, in very faint conditions they dilate and we also undergo dark adaptation. Color perception differs under these conditions, so we can't have it all at the same time.

Anyway - there is what we could call an "as scientifically accurate as possible" rendition of the image.

You don't necessarily need to expose each channel for the same time, simply because the camera won't have the same quantum efficiency in all bands - this is precisely why we derive these transform matrices. What you can do, however, is: if you expose for longer - meaning more subs of equal length - use average stacking (instead of addition, but average is probably the default in almost every astro stacking package). If you use different sub durations, just mathematically scale the data while it is still linear. You can think of it as calculating photon flux per second for each sub and using that as the resulting data (divide each stack by the corresponding sub duration in seconds), as in the small sketch below.
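A minimal sketch of that normalization (the names are mine):

def to_flux_per_second(stacked_channel, sub_duration_s):
    """Convert an average-stacked channel into ADU per second of exposure."""
    return stacked_channel / sub_duration_s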

When applying flats, make sure they are normalized so that full illumination equals 1.

The next step would be to apply the above-mentioned transform matrix to convert your raw RGB data into linear sRGB. The step after that would be to correct for atmospheric attenuation - this is what I'll be doing next for the data above. We want the image as it would look from space, not from here on Earth where it depends on the atmosphere and the target's altitude above the horizon (air mass).

In the end you have properly balanced linear sRGB data, and it is time to compose an RGB image out of it. The first step would be to create (or use, if you took the LRGB approach) a luminance file. You stretch that, not the colors. If you don't have one, you can calculate it from the linear sRGB data using the following formula:

image.png.88a620e34ff32064a35d48566b5fc605.png
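For reference, the standard relative-luminance formula for linear sRGB (Rec. 709 primaries), which is presumably what the attached formula shows, is:

Y = 0.2126 * R_linear + 0.7152 * G_linear + 0.0722 * B_linear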

The Y component of the XYZ color space is closely related to the eye's luminance perception. It is also related to the L part of the Lab color space (you can convert it to Lab L if you want perceptual uniformity, but this really does not matter if you are going to stretch the data anyway, as Lab L is essentially just a power law over linear data):

image.png.f3b2df5fdd7b60b2f564ec2d3002c8d8.png
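For reference, the standard CIE definition (presumably what the image above shows) is roughly:

L* = 116 * f(Y/Yn) - 16, where f(t) = t^(1/3) for t > (6/29)^3, with a linear segment below that threshold.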

You can see that lightness depends only on the Y component (and the Y of the reference white point) through a cube-root type function (again there is a split here - linear at low values and a power law otherwise, as it mimics human vision just like sRGB).

OK, so we have our luminance and we stretch it, denoise it, sharpen it - do all the processing until we have a monochromatic image we are happy with. The final step is to apply the color information to it.

We take our linear r, g and b, and the first step is to apply the inverse gamma function to our processed L to bring it back into the linear regime.

Next, for each pixel we normalize the linear rgb like this: r_norm = r / max(r,g,b), g_norm = g / max(r,g,b) and b_norm = b / max(r,g,b).

We get our final color components as:

r_lin = r_norm * l_linear
g_lin = g_norm * l_linear
b_lin = b_norm * l_linear

and we take r_lin, g_lin and b_lin and apply the forward gamma transform to get the final R, G and B in sRGB color space that represent our image.
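Put together, that recombination might look like this (a sketch of the procedure just described, reusing the srgb_encode/srgb_decode helpers from earlier; it assumes the stretched luminance L and the linear r, g, b are aligned arrays of the same size):

import numpy as np

def apply_color_to_luminance(L_stretched, r, g, b, eps=1e-12):
    """Transfer the stretched luminance onto the linear color, pixel by pixel."""
    l_linear = srgb_decode(L_stretched)           # bring processed L back to linear
    peak = np.maximum(np.maximum(r, g), b) + eps  # avoid division by zero
    r_lin = (r / peak) * l_linear
    g_lin = (g / peak) * l_linear
    b_lin = (b / peak) * l_linear
    return srgb_encode(np.stack([r_lin, g_lin, b_lin], axis=-1))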

BTW, doing narrowband imaging and then trying to compose it into a true color image is rather counterproductive. There is almost no color distinction between Ha and SII, as they are very close in wavelength (both are just red, and if you keep their true color you won't be able to tell them apart). You can true-color compose an Ha + OIII image, and for the most part it would have color that can be displayed in the sRGB system. However, as mentioned before, pure Ha and pure OIII can't be displayed in sRGB.

If you want to know what sort of color to expect from Ha and OIII data in true color, the chromaticity diagram is your friend:

image.png.9ffcfe4a1ef684fae5b0e7d1b8d3e758.png

Any combination of intensities of the Ha and OIII wavelengths is going to produce colors that lie along the black line in the image above - the line joining the points of the two wavelengths on the outer locus. Since only linear transforms are involved in the color spaces (the sRGB gamma is just an encoding step - there is still a linear sRGB part and it uses a regular matrix transform), any linear combination of intensities a * Ha + b * OIII forms a line, and under a matrix transform it remains a line, so every color we can possibly get lies along that line.

Btw, this is the reason sRGB colors all lie inside the triangle above - its vertices are the sRGB primaries (pure red, green and blue of sRGB space), and every color that can be made as a linear combination of them lies on lines joining them, which covers the whole triangle (green + red can create orange, and orange connected by a line with blue can create some pale magenta hue, or whatever other combination you like).

This means that if you image a nebula or galaxy with a DSLR / color camera and do proper color balance, you can actually end up with a whitish / green-teal type of color for Ha+OIII nebulosity.

For narrowband images it is still best to keep the false color approach and just explain which color maps to which gas (HSO, SHO, etc.).

 


I have occasionally calibrated the colour correction matrix (CCM) for a sensor, but always using a ColorChecker chart illuminated by sunlight. It's an interesting idea to use one displayed on a screen. Although the display won't have the same broad spectrum as the printed colour patches, I think it should still be possible to obtain a CCM that is in the right ballpark.

Your derived CCM is this:

  • new red = 1.3516376483848*r+0.033549040292064*g-0.054423382816904*b
  • new green = -0.619460118029851*r+1.2330224800277*g-0.211822127381072*b
  • new blue = -0.178260528770039*r-0.088350168915713*g+1.19254273096779*b

However, your figures leave me a bit puzzled, because when a CCM is applied to data that is already white balanced, the whites and greys should be relatively unaffected. Your CCM would result in a strong colour cast because the coefficients in each row do not sum to unity. Maybe you have combined white balancing into your CCM?

Mark


2 minutes ago, sharkmelley said:

Your figures leave me a bit puzzled because when a CCM is applied to data that is already white balanced, the whites and greys should be unaffected. However, your CCM would result in a colour cast because the coefficients in each row do not sum to unity. Maybe you have combined white balancing into your CCM?

I don't know the technical details of how a CCM should be defined (per standard, or what is usually done). I did not calculate the CCM on white balanced data - it was done on raw data without any other modification, so I suspect it contains the white balancing information "bundled" in.

Is it customary to separate the two - first apply white balancing as simple red and blue scaling factors, like I did in the example above to compare with the matrix approach, and then derive the CCM afterwards?

My approach was fairly simple, and I'm sure that is how a CCM is usually derived.

Measure the linear values of recorded and reference RGB. Scale those triplet vectors to unit length (to avoid the matrix scaling the values) and then use linear least squares to solve for the transform matrix.

I can now see how the CCM can be a separate thing. Similarly, I'll now need to derive an atmospheric correction matrix (ACM? :D ) depending on air mass and other factors, and it will also be a "stand-alone" transform that can be applied after the CCM while the data is still linear.

Here is the base code for calculating attenuation that I've found and plan to examine and adapt:

https://gitlab.in2p3.fr/ycopin/pyExtinction/-/blob/master/pyExtinction/AtmosphericExtinction.py

I really just need a simple attenuation-per-wavelength curve, so I can again take a set of (reasonable) spectra representing colors, apply atmospheric extinction to them and see what colors they turn into. Then the same least-squares approach as above.
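The basic relation is simple enough (a sketch of the standard magnitude-based extinction law; k(lambda) would come from a tabulated or modeled extinction curve such as the one in pyExtinction, and the numbers here are placeholders):

import numpy as np

def extinction_factor(k_mag_per_airmass, airmass):
    """Fraction of light transmitted for extinction coefficient k [mag/airmass]."""
    return 10.0 ** (-0.4 * k_mag_per_airmass * airmass)

# Example: rough k of 0.2 mag/airmass, target 30 degrees above the horizon
airmass = 1.0 / np.sin(np.radians(30))     # simple plane-parallel approximation
print(extinction_factor(0.2, airmass))     # ~0.69 of the light gets through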

 


Yes, it seems you have bundled white balancing and the CCM into a single matrix. There's nothing wrong with that, but it's customary to separate white balancing out from the CCM. You have calculated your matrix from a screen display, but screen displays are actually fairly blue because they are based on the D65 illuminant. So you have created a matrix that will not work so well for daylight - it will make daylight images look too red. However, if you separate the operations of white balancing and colour correction, you will have a CCM that works well over a wide range of conditions.

Mark


1 hour ago, sharkmelley said:

Yes, it seems you have bundled white balancing and the CCM into a single matrix. There's nothing wrong with that, but it's customary to separate white balancing out from the CCM. You have calculated your matrix from a screen display, but screen displays are actually fairly blue because they are based on the D65 illuminant. So you have created a matrix that will not work so well for daylight - it will make daylight images look too red. However, if you separate the operations of white balancing and colour correction, you will have a CCM that works well over a wide range of conditions.

Mark

Did you by any chance ever come across an atmospheric correction matrix (or calculate one), to save me the trouble of doing it?

I suspect it will differ with air mass, but just a few should be enough to cover most cases - say 90, 60, 45 and 30 degrees above the horizon.


8 minutes ago, sharkmelley said:

No, I haven't calculated any atmospheric correction. I simply adjust the white balance to the stars in the image.

Mark

Not really practical for planetary imaging, but I get your point.

