

The universe looks quite dull in true color :D



Just fiddled with color spaces and color calibration on one of my old captures (the data lacks quality) and this is the result:

[Image: result of the colour-calibrated reprocessing]

There are bluish parts in the outer arms of the galaxy, but due to poor data (both lum, which is rather better, and color data, taken with a small telescope and OSC camera) it does not show clearly - it's in the "shadows".

It could also be due to the choice of calibration stars and the derived transform - all 16 stars used for color calibration fall between ~4200 K and ~6200 K. I did not manage to find really hot stars in this image, and that could be the cause of the "poor" blue.


2 weeks later...

Natural colour is something that really interests me and I'm refining a workflow to do this.  But your image looks unusually dull for a natural colour image so I'm intrigued to know why. 

What camera are you using - is it an OSC? If OSC, are you using the appropriate colour correction matrix after performing the white balancing? If the camera is not OSC, how are you transforming the RGB data to the colour space you are using?

Are you also applying the gamma transfer function appropriate to the colour space? By the way, which colour space are you using? In my opinion, Adobe RGB is the easiest to use because it has a constant gamma, which means you can scale the RGB values up and down within the colour space without altering chromaticity/saturation. Scaling data whilst preserving natural colour is a big problem in sRGB, for instance.

How good was your background subtraction?  You need to get that pretty accurate before applying the colour space gamma transfer function.

If you are stretching the data in any way, after applying the colour space gamma, is this a colour preserving operation?  I ask this because most "curves" type stretching operations that people use will bleach the colour from the data.

Also, I would start with the RGB data alone.  Experiment and get this right before attempting to blend with luminance data.

Mark


I'll describe the complete process I used.

Since the image is intended for display on the web, sRGB is the color space of the result. Most systems should properly display an sRGB image, and most systems will treat an image without a profile as being an sRGB image.

Data for the above image come from two sources:

1. Lum data was captured with an 8" RC scope and an ASI1600 at a native ~0.5"/px. Seeing was particularly poor that evening, so I did quite a bit of binning. I also image from a red zone (bordering on white - SQM about 18.5), and the total integration is less than 4h.

2. Color data was captured with an ASI178MCC - a cooled OSC camera with small pixels. The scope used was a TS80 with a x0.79 FF/FR. This gives about 1.3"/px, but since we are talking about an OSC camera, each color was effectively captured at 2.6"/px. I split the channels, stacked each separately, and resampled the OSC data to match the Lum stack. I used an IDAS LPS P2 filter with this camera (and for lum, but that is not important for color).

Each channel (L, R, G and B) was wiped using the following method: I calculated the mean value of the pixels in the image and did 10 rounds of upper sigma rejection with a cut-off of sigma 3.0. This gave me a "mask" of background pixels. A linear fit was done on the background pixels and the result (a gradient map) was removed from the whole image. The process was repeated until the gradient map values were of the order of 1e-15 or less.
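In rough numpy terms, the wipe looks something like this - a minimal sketch, where the names and the pass limit are my own, not part of the original procedure:

```python
import numpy as np

def wipe_channel(img, n_sigma=3.0, reject_rounds=10, tol=1e-15, max_passes=50):
    """Remove a linear background gradient from a single channel.

    The background mask comes from repeated upper sigma rejection;
    a plane is least-squares fitted through the masked pixels and
    subtracted, until the fitted gradient is negligible.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])

    for _ in range(max_passes):
        # Upper sigma rejection: anything above mean + 3*sigma is
        # treated as an object; the rest is background.
        mask = np.ones(img.shape, dtype=bool)
        for _ in range(reject_rounds):
            bg = img[mask]
            mask &= img <= bg.mean() + n_sigma * bg.std()

        # Linear (planar) fit through background pixels only.
        coef, *_ = np.linalg.lstsq(A[mask.ravel()], img[mask], rcond=None)
        gradient = (A @ coef).reshape(h, w)
        img = img - gradient

        if np.abs(gradient).max() < tol:  # gradient map ~ 1e-15 or less
            break
    return img
```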

After that I did color calibration using 16 stars in the image.

B-V values were found for those stars (SDSS data set on SIMBAD), and the following formula was used to estimate star temperature:

[Image: formula for estimating star temperature from B-V]
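For reference, the widely used formula of this kind is the Ballesteros (2012) black-body approximation, which I'm assuming is the one in the image:

$$
T = 4600\,\mathrm{K}\left(\frac{1}{0.92\,(B-V)+1.7}+\frac{1}{0.92\,(B-V)+0.62}\right)
$$

As a sanity check, the solar value B-V ≈ 0.65 gives T ≈ 5800 K.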

This color space calculator was used to obtain sRGB values for each star temperature with a gamma of 1.0 (i.e. still in linear RGB space):

http://www.brucelindbloom.com/index.html?ColorCalculator.html

(D65 was used as per the sRGB standard, but a gamma of 1.0 was chosen in the options.)

These RGB values were normalized (each component vector divided by its magnitude to produce a unit vector in RGB space).

AstroImageJ was used for aperture photometry on those stars, and R, G and B values were recorded for each star. Again, these three-component vectors were normalized.

Least squares fitting was performed between the measured and reference values, and a transform matrix was derived.
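A sketch of this fit in numpy, assuming `measured` and `reference` are N x 3 arrays holding the star vectors, one row per calibration star (names are mine):

```python
import numpy as np

def normalize_rows(v):
    """Scale each RGB triplet to a unit vector."""
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def derive_transform(measured, reference):
    """Least-squares 3x3 matrix M such that M @ rgb_measured is as
    close as possible to rgb_reference over all calibration stars."""
    X, *_ = np.linalg.lstsq(normalize_rows(measured),
                            normalize_rows(reference), rcond=None)
    return X.T  # apply per pixel as M @ [r, g, b]
```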

The transform matrix was applied to each pixel of the wiped R, G and B to produce R', G' and B' channels - the calibrated colors.

The luminance data was stretched. R', G' and B' were divided by max(R', G', B') to obtain color ratios normalized so that one primary equals 1. Each of these ratios was multiplied by the stretched luminance data to get the final channels of the image.
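The composition step, again as a minimal sketch (array names are mine):

```python
import numpy as np

def compose(lum, rgb):
    """lum: stretched luminance (H, W) in [0, 1];
    rgb: calibrated linear colour (H, W, 3).
    Scale each pixel so its largest primary is 1, then modulate
    by luminance, so even star cores cannot clip."""
    peak = rgb.max(axis=-1, keepdims=True)
    ratios = np.divide(rgb, peak, out=np.zeros_like(rgb), where=peak > 0)
    return ratios * lum[..., None]
```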

In the end, a gamma of 2.2 was applied to each channel, as per the sRGB spec, to produce the final result.

The rationale for the processing is as follows:

Luminance carries intensity information that we can consider to be 0-100%, depending on how bright we want the "source" pixel to be. We adjust this by stretching. Star cores end up at 100% (a value of 1.0). Color information is in linear RGB after calibration. At first I wanted to do everything in XYZ color space - but since linear RGB is just a linear transform of XYZ, any scaling can equally be performed on it - it's the same really.

We then transform each pixel of calibrated color (still linear) into a ratio such that the largest primary is set to 1 (this is done by dividing each pixel by its max of R, G and B) - this lets us encode the color in RGB space without clipping, at maximum intensity in one of the primaries. Since we multiply each of these "ratio" pixels by the intensity, we get a value that represents the linear RGB that would have been recorded if the actual brightness of the light source were as it is in the stretched Lum. This way even star cores won't saturate - a common problem with "regular" processing.

In the end, in order for devices to represent the colors in the image properly, we need to transform to some color space. Most displays know how to handle sRGB, and it is the de facto standard for web graphics. Going from linear sRGB to sRGB just takes a gamma of 2.2 (in reality it is a slightly different formula, but gamma 2.2 approximates it very well) - if you want to be very precise you should use the following:

$$
C' = \begin{cases} 12.92\,C, & C \le 0.0031308 \\ 1.055\,C^{1/2.4} - 0.055, & C > 0.0031308 \end{cases}
$$

(C stands for each of R, G and B - linear sRGB values scaled to the 0-1 range, which is what the above procedure produces; C' is the encoded value.)
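The same transfer function in numpy:

```python
import numpy as np

def linear_to_srgb(c):
    """Exact sRGB encoding of linear values scaled to [0, 1]."""
    return np.where(c <= 0.0031308,
                    12.92 * c,
                    1.055 * np.power(c, 1.0 / 2.4) - 0.055)
```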


2 hours ago, sharkmelley said:

Natural colour is something that really interests me and I'm refining a workflow to do this. [...]

The following thread might be of interest to you as well. There is a bit more detail on the processing there, as well as some data on calibration (in later posts).


From your description, my understanding is that you are calibrating the white balance correctly for your data, but you are not applying the compromise colour correction matrix for your ASI178MCC camera. Unlike the DSLR world, I'm not sure anyone has ever generated these matrices for dedicated astro-cameras. As an aside, I'm seriously thinking about writing a PixInsight script that will generate one from the image of a ColorChecker chart.

The colour matrix reverses the mixing of colours caused by the response curves of the colour filters in the Bayer colour filter array. Without applying this matrix, colours will look washed out and unsaturated even though the white balance and gamma are correct. To understand more about its role, take a look at the web pages linked below; a small sketch of applying such a matrix follows the links:

https://www.strollswithmydog.com/raw-file-conversion-steps/

http://www.odelama.com/photo/Developing-a-RAW-Photo-by-hand/Developing-a-RAW-Photo-by-hand_Part-2/
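To illustrate the role the matrix plays - the numbers below are placeholders, not a measured matrix for any real camera:

```python
import numpy as np

# Illustrative compromise colour correction matrix. Each row sums
# to 1 so a white-balanced neutral stays neutral; the off-diagonal
# terms undo the colour mixing of the Bayer filter responses.
CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [ 0.0, -0.5,  1.5]])

def apply_ccm(rgb, ccm=CCM):
    """Apply a 3x3 CCM to white-balanced linear RGB of shape (H, W, 3)."""
    return rgb @ ccm.T
```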

Mark

 


3 hours ago, sharkmelley said:

From your description, my understanding is that you are calibrating the white balance correctly for your data, but you are not applying the compromise colour correction matrix for your ASI178MCC camera. [...]

 

Actually, that is exactly what I'm doing - not simple color balancing.

Simple color balancing involves independently scaling R, G and B.

I'm creating a color conversion matrix much like in the first article (and the subsequent link that takes you to another article that uses a ColorChecker Passport).

The only difference is that I'm not using 12 colors to derive the conversion matrix, but rather a set of stars in the image. Another difference is that the recommended process is:

Camera RGB space -> XYZ -> sRGB

with two conversion matrices (XYZ -> sRGB being known and defined by the sRGB standard). Since both matrices represent linear transforms, applying them one after another is the same as applying a single matrix that is the composition of the two.
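That collapsing of two matrices into one is just matrix multiplication (the camera matrix below is a placeholder; the XYZ -> sRGB matrix is the standard D65 one):

```python
import numpy as np

M_cam_to_xyz = np.eye(3)  # placeholder camera matrix, unknown here
M_xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                          [-0.9689,  1.8758,  0.0415],
                          [ 0.0557, -0.2040,  1.0570]])  # sRGB standard, D65

combined = M_xyz_to_srgb @ M_cam_to_xyz
pixel = np.array([0.2, 0.5, 0.3])
# Applying the two in sequence equals applying the combined matrix once.
assert np.allclose(combined @ pixel, M_xyz_to_srgb @ (M_cam_to_xyz @ pixel))
```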

I opted for the simpler approach of using a single matrix, and that's the one I'm deriving.

From the reference stars and their temperatures we can get the exact XYZ values of a black body of that temperature, and thus linear sRGB values. This gives us a precise color chart. It has an added bonus over using a "terrestrial" color chart to derive the transform - it removes the impact of atmospheric scattering. If you calibrate on a local source and then apply that transform to a stellar image, star colors will be shifted due to atmospheric scattering/attenuation (less blue). By using actual stellar RGB values and calibrating against what those RGB values should be for a black body, we incorporate the atmospheric response together with the instrument and camera response.
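A sketch of how those reference values can be generated, assuming the published CIE 1931 colour matching function table is available as an (N, 4) array of [wavelength_nm, xbar, ybar, zbar] rows with uniform wavelength spacing:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wl, T):
    """Black-body spectral radiance at wavelength wl (metres)."""
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T))

def blackbody_reference_rgb(T, cmf):
    """Integrate Planck's law against the CIE 1931 colour matching
    functions to get XYZ, convert to linear sRGB (D65), and
    normalize to a unit vector for the least-squares fit above."""
    wl = cmf[:, 0] * 1e-9
    B = planck(wl, T)
    # Uniform wavelength spacing assumed, and overall scale drops
    # out in the final normalization, so a plain sum suffices.
    XYZ = (B[:, None] * cmf[:, 1:4]).sum(axis=0)
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])  # XYZ -> linear sRGB
    rgb = M @ XYZ
    return rgb / np.linalg.norm(rgb)
```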

The problem with my approach is that it needs reliable data and lots of calibration stars to be accurate. At first, when I started "developing" the method, I used Stellarium as a source for the B-V index. It turned out that the catalog used by Stellarium is highly unreliable for this. Photometry of faint stars also brings in a bit of noise due to lower SNR, but using enough stars should improve the overall SNR as things average out.

One could argue that this choice of calibration colors is not enough to do a proper calibration, as it does not cover enough of the color gamut. If you look at the Planckian locus (https://en.wikipedia.org/wiki/Planckian_locus) you will see that it spans a very small region of the chromaticity diagram:

[Image: CIE 1931 chromaticity diagram showing the Planckian locus]

and this calls into question the derived matrices. On the other hand, except for a few narrow-band sources such as Ha, Hb and OIII, all sources of light out there are very close to this line - which is why we don't see green stuff in the universe. Therefore, for the colors present in the image, I would argue that stellar color calibration is sufficient to render proper colors (at least in images of star fields, clusters and galaxies - maybe not emission nebulae, but those are hardly inside the sRGB gamut anyway; I'm not sure about reflection nebulae, but I'm guessing pretty accurate color can be obtained by the above method - it is, after all, scattered starlight).

