
L-eXtreme with RGB camera - what are the 'correct' colors



Hello, recently I imaged the Pelican Nebula with my ASI071MC Pro, Esprit 120 and L-eXtreme filter.

My processing workflow is usually as follows:
In PixInsight, I do background extraction and general background work; sometimes I do some additional steps, but here I just stretched the image and wanted to take it into Photoshop.

My issue is this: PixInsight offers both a linked and an unlinked stretch. I processed a linked and an unlinked version in roughly the same way, but I don't know which of them gives the correct colors.

For reference, here is the unlinked stretch after background extraction:
[image: unlinked stretch after background extraction]

Here is the linked stretch:
[image: linked stretch]

Usually, when imaging RGB, the unlinked stretch always gave the 'correct' colors, so this is what I've always done, but here I'm a bit confused by my end results.
Here is the final unlinked stretch image (sort of final; I mostly did it to compare both images and see the potential of the data):
[image: final unlinked-stretch version]


Here is the final (processed the same as above) linked stretch image:
[image: final linked-stretch version]

 

The colors of the unlinked stretch, which in most cases are correct for me, seem pretty off here... I'm not sure where all this orange came from, and it doesn't make sense to me, considering that the L-eXtreme should pass mostly Ha and OIII (if I'm not wrong?).
The stack itself seems pretty reasonable to me as a bi-color stack representing the L-eXtreme, but the orange is still weird to me.
 

This is the unlinked stretch stack with ABE + some saturation:
[image: unlinked-stretch stack with ABE + some saturation]

As you can see, it goes orange quite fast, but is it correct?

Would love to learn some more about this. I know that 'correct colors' in space is a somewhat vague topic, but still, I think there should be some 'right' colors for this processing, and I don't want to just make up colors.

Here is the stack if anyone is interested:
https://drive.google.com/file/d/1snIAfOl53kuFEX24s2Zo48UtRFZR7l02/view

Thanks for the help :)


20 hours ago, msacco said:

Here is the stack if anyone is interested:
https://drive.google.com/file/d/1snIAfOl53kuFEX24s2Zo48UtRFZR7l02/view
I'm interested, but I don't own a copy of PI, so I would prefer a standard file format like 32-bit floating point FITS rather than XISF, which is a PixInsight-only file format.

On the color topic - well, the answer is rather complex.

The fact that this is in essence a narrowband image does not mean that it is necessarily false color. If the color is to be considered accurate - we must decide what color we are talking about.

- Actual color of the object and stars - well, no luck there; the filter used completely obliterated that color information.

- Color of the light that passed through the filter - we can talk about that color. Since these are emission-type objects with Ha and OIII emission lines - we can talk about those colors.

You can choose to do an actual narrowband color image - and you may be partially or fully successful in recreating the actual color of the captured light. That largely depends on the ratio of Ha to OIII signal.

Both the Ha and OIII colors are outside of the sRGB color gamut, and here we will talk only about the sRGB gamut, as I presume the intent is to display the image online and sRGB is the implied standard (we could also talk about other, wider-gamut color spaces - but only a few people would be able to use them, as that requires both a wide-gamut display and a properly set up operating system that can display wide-gamut images).

[image: CIE chromaticity diagram with the sRGB triangle and a black line from ~500nm to 656nm]

This is the chromaticity diagram that shows all the "colors" (at maximum brightness) that exist - with the actual colors shown only inside the sRGB color space. That is because the other colors (gray area) simply can't be properly displayed by your screen / the sRGB color space.

I have drawn a line on this diagram. It connects the ~500nm point on the spectral locus to the 656nm point - these are the OIII and Ha colors. Any light consisting of these two wavelengths (in any combination of strengths) will have a color that lies on that particular line.

The sRGB color space can only show those colors on the black line that fall inside the colored triangle. All colors further left or right along that line are too saturated a green/teal or too saturated a deep red - a computer screen simply can't display them.

If your image consists of combinations that lie inside the triangle - great, you'll be able to fully display the color of the light that passed through the filter. If not - we must use a trick. There are several ways to do it: we can simply clip the color along the line - if the color lies in the deep reds, we show it as the reddest color along the line that we can display, and similarly for the OIII side. Another approach is perceptual mapping - we instead choose the displayable color that looks most like the color we can't display - like a dark deep red for Ha. This process is known as gamut mapping.

You can also decide to do a false-color narrowband image - similar to SHO or HSO images - but you only captured two wavelengths, so you can only create a bicolor image. The above is also a bicolor image - but with accurate colors for the captured light.

With false color - you can choose any bicolor scheme you like - HOO or similar.

In any case - before you choose any of these - you actually need to extract the color data from your image. You used a color sensor with a duo-band filter, and you'll need some math to extract the actual data.

Look here:

[image: L-eXtreme transmission curve]

We can see that the L-eXtreme only passes ~500nm and ~656nm, but if we look at the QE graph for the ASI071:

[image: ASI071 quantum efficiency curves]

We can see that 500nm ends up being picked up by all three channels, and so does 656nm. In fact, we can write a couple of equations:

red = OIII * 4.5% + Ha * 78%

green = OIII * 68% + Ha * 9%

blue = OIII * 45% + Ha * 3%

From these you can get OIII and Ha in several different ways using pixel math - I recommend the following:

Ha = (red - green / 15.111) * 1.292

OIII = (green - red / 8.667)  * 1.46
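A minimal sketch of that extraction outside of PixInsight, in Python/NumPy (my illustration - the array names are assumptions, and the response percentages are the ones read off the graphs above; inverting the 2x2 system reproduces the pixel-math constants):

```python
import numpy as np

# Per-channel response to each emission line, read off the QE graph above:
#   red   = 0.045 * OIII + 0.78 * Ha
#   green = 0.680 * OIII + 0.09 * Ha
# (blue is ignored here; it adds little beyond what green gives for OIII)
M = np.array([[0.045, 0.78],
              [0.680, 0.09]])
M_inv = np.linalg.inv(M)  # inverting this system gives the constants in the pixel math above

def extract_ha_oiii(red, green):
    """Recover Ha and OIII images from the debayered red and green channels."""
    oiii = M_inv[0, 0] * red + M_inv[0, 1] * green
    ha   = M_inv[1, 0] * red + M_inv[1, 1] * green
    return ha, oiii

# Usage (hypothetical array names), e.g. for a simple HOO composition:
# ha, oiii = extract_ha_oiii(r_channel, g_channel)
# hoo = np.dstack([ha, oiii, oiii])
```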

If you want to get accurate color, then you need to convert the Ha + OIII values into an XYZ value and then convert XYZ to sRGB; otherwise - just use Ha and OIII to compose an RGB image like HOO or another bicolor combination.
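As a rough sketch of that accurate-color route (my own illustration, not from the post): weight CIE 1931 color-matching values at the two wavelengths by the extracted intensities, apply the standard XYZ-to-linear-sRGB matrix, and clip anything out of gamut - the crude clipping variant of the gamut mapping described above. The 656nm row is interpolated from tabulated 650/660nm values, so treat the numbers as approximate:

```python
import numpy as np

# Approximate CIE 1931 (2 deg) color-matching values (x̄, ȳ, z̄):
XYZ_OIII = np.array([0.0049, 0.3230, 0.2720])  # ~500 nm
XYZ_HA   = np.array([0.2123, 0.0794, 0.0000])  # ~656 nm (interpolated, approximate)

# Standard XYZ -> linear sRGB (D65) matrix
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def ha_oiii_to_srgb(ha, oiii):
    """Per-pixel Ha/OIII intensities -> linear sRGB, clipped to the displayable gamut."""
    xyz = oiii[..., None] * XYZ_OIII + ha[..., None] * XYZ_HA
    rgb = xyz @ XYZ_TO_SRGB.T
    # Crude gamut mapping: just clip. Perceptual mapping would instead pick the
    # nearest-looking in-gamut color. The sRGB display gamma is still to be applied.
    return np.clip(rgb, 0.0, 1.0)
```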

 


3 hours ago, R26 oldtimer said:

As this is actually a narrowband image, there isn't really a "correct" colour. All NB images are really false-colour representations. If I may suggest: split the channels, do a linear fit and then recombine.

Yep, I know that there is no truly 'correct' color, and I'm not even referring to 'correct color' in terms of what that area would look like to the naked eye - just, specifically for my results: is there anything wrong, or is it supposed to look that way? By correct color I mean that this is the data I captured, and that I'm not inventing colors out of nowhere.

I'll try that, thanks :)
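(For reference, a minimal sketch of that split / linear-fit / recombine step in NumPy - PixInsight's LinearFit process does the equivalent; the array names are assumptions:)

```python
import numpy as np

def linear_fit(channel, reference):
    """Least-squares scale + offset so `channel` best matches `reference`,
    like PixInsight's LinearFit applied before recombining channels."""
    slope, intercept = np.polyfit(channel.ravel(), reference.ravel(), 1)
    return slope * channel + intercept

# e.g. oiii_fit = linear_fit(oiii, ha) before recombining as HOO
```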

2 hours ago, vlaiv said:

I'm interested, but I don't own a copy of PI, so I would prefer a standard file format like 32-bit floating point FITS rather than XISF, which is a PixInsight-only file format.

…

Thanks a lot for the detailed comment. Obviously, as you suggested, I'm mostly referring to the color of the light that passes through the filter.

My question is, would the correct thing be to combine my results into some sort of HOO? And would having these orange colors I got be considered 'incorrect'?
Again, it's not that I expect the image to represent the 'true color' of the area; the only thing I want is to have correct colors in the sense of 'these colors are not added' - I want my image to be genuine to the way I captured the data, which is OSC with a dual-band filter.

Would the orange thing I got meet these requirements?

And as requested, here is the FITS stack :)
https://drive.google.com/file/d/12gzZhHJgppVhU7s--NQJwDYyo39EPZT1/view?usp=sharing

Thanks!


58 minutes ago, msacco said:

My question is, would the correct thing be to combine my results into some sort of HOO? And would having these orange colors I got be considered 'incorrect'?
Again, it's not that I expect the image to represent the 'true color' of the area; the only thing I want is to have correct colors in the sense of 'these colors are not added' - I want my image to be genuine to the way I captured the data, which is OSC with a dual-band filter.

Would the orange thing I got meet these requirements?

Ok, I understand now what you are after - you just want to make sure you preserved the color that came out of the camera as is, right?

Well, that is simply not possible. We stretch our data, and in doing so we change it. RGB ratios change when you stretch your data - and even if you stretch in a particular way that keeps the relative ratios, you are still going to change what we see as color. Different stretches of the same RGB ratio will produce different color sensations in our eye.

Bright orange is, well, orange, but dark orange is no longer orange - our eye sees it as brown. Although it is the same spectrum of light, we perceive it as a different hue.

Having said that - if you don't intentionally mess with the color - you should be able to preserve the raw color from the camera "in general" (that is not accurate in terms of color reproduction - but it can still be considered authentic, as made by your equipment). The level of processing involved will determine how "accurate", or rather how "preserved", the color is (and there are different metrics of "preservation" - one related to the light spectrum, for example, and one related to our perception).
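(A small numeric illustration of that point - mine, not from the post: a simple gamma-style stretch changes the red/green ratio, and therefore the perceived hue, even though no color was "added":)

```python
import numpy as np

# A pixel with a 5:1 red/green ratio...
rgb = np.array([0.50, 0.10, 0.02])

# ...after a simple gamma-style stretch:
stretched = rgb ** 0.5
print(stretched)                     # [0.707 0.316 0.141]
print(rgb[0] / rgb[1],               # 5.0  (before)
      stretched[0] / stretched[1])   # ~2.24 (after) - the hue has shifted
```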

If you look again at that image I posted - yes, an OIII + Ha combination can produce orange:

[image: chromaticity diagram showing the colors along the Ha-OIII mixing line]

It can range from a green/blue combination, across pale yellow, to orange and red.

In fact, when doing a bicolor image - we often intentionally change the color. That is because the OIII signal can often be much fainter than the Ha signal, and the image will end up being mostly red. If we want to show the OIII structure - we will often boost it compared to Ha, and that will shift the color on the above line towards the green end.

So here is what the "as is" raw color from the camera looks like:

[image: stack rendered with raw camera color - almost entirely red]

This is to be expected - the Ha signal is much stronger, and the image is almost completely red.

If you want to show the OIII a bit more clearly - well, you need to boost it separately. This is already deep in false-color territory - as we are adjusting the color components separately.

[image: stack rendered with OIII boosted]

Now that we have made the OIII stronger - it shows some structure: in the upper right corner there is a wall that still has dominant Ha, but the rest of the nebula shows OIII presence as well.

This shows that both of your versions are "correct" - it just depends on how you define correct. If you want the data as it came out of the camera - the linked stretch will provide that. If you want to emphasize the otherwise weaker OIII signal - use the unlinked stretch.
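(For the two stretch types themselves, a minimal sketch - my illustration, with arcsinh standing in for PixInsight's histogram transform: a linked stretch runs one transfer function over all channels, while an unlinked stretch first rescales each channel to its own full range, which is what boosts the weak OIII:)

```python
import numpy as np

def stretch(x):
    """Some non-linear stretch; arcsinh as a stand-in for a histogram transform."""
    return np.arcsinh(50 * x) / np.arcsinh(50)

def linked(rgb):
    # One transfer function for all channels: no channel is favored over another.
    return stretch(rgb)

def unlinked(rgb):
    # Each channel rescaled to its own full range first: faint channels get boosted.
    lo = rgb.min(axis=(0, 1), keepdims=True)
    hi = rgb.max(axis=(0, 1), keepdims=True)
    return stretch((rgb - lo) / (hi - lo))
```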

 


8 hours ago, vlaiv said:

Ok, I understand now what you are after - you just want to make sure you preserved the color that came out of the camera as is, right?

…

Thanks for the great comment once again! So is there an 'accepted standard' for which of them should be used? Or is it really just a matter of personal preference, and each of them could be considered 'okayish'?


2 hours ago, msacco said:

Thanks for the great comment once again! So is there an 'accepted standard' for which of them should be used? Or is it really just a matter of personal preference, and each of them could be considered 'okayish'?

No, there is no accepted standard.

I would argue that one should follow a set of technical standards when attempting to recreate the actual color of an object (within acceptable limits), but otherwise people are free to do what they like with their images - a sort of artistic freedom.

There were a couple of discussions on what is "allowed" in processing astrophotography without it being considered "cheating" / "making data up", and my view on that topic is rather conservative. I don't really like the clone stamp tool or the use of brushes or morphological filters for star reduction; I also prefer minimal noise reduction and minimal sharpening, if any. I also don't like boosting saturation - although most people are used to seeing images like that, over-saturated.


In PixInsight, an initial unlinked stretch will give you a good idea of what the data looks like once you start to stretch it. If you colour calibrate, i.e. align the RGB channels, then you can do the linked stretch to show what the aligned RGB will look like.


Do you need the filter to fight LP, or are you using it to hold down non-nebular light sources in order to find more structure in the nebulae? If you're not using it to fight LP, you might consider doing even a shortish run without it to obtain a natural-colour result, which you could then use to define colour in the final version - perhaps using the unfiltered data for star colour and reflection nebulosity?

Olly

