Is using SCNR to tame green in hubble palette images a good idea or a bad idea



I have a bit of a problem of image processing philosophy, and now I am tangled up in knots.

When I look at narrowband workflows, I see many people using SCNR in PixInsight to reduce the green from a strong Ha signal in the Hubble palette, often quite aggressively.

But it seems to me that this is useful data that you have painfully collected, and now you are throwing it away. You chose to map Ha to Green and now you don't like the result. But is it right to 'zap' the green?

Part of the problem is I don't really understand what SCNR does. When it eliminates a green pixel what does it replace it with?

Is a better approach to make a green colour mask of the image and then adjust its hue to be more pleasing? At least then you are maintaining the intensity of the signal, just shifting where it maps on the colour wheel.

Maybe I don't understand, but SCNR just seems a bit heavy handed.

Any thoughts from narrowband and Pixinsight gurus?


I'm neither an NB nor a PI guru, but here are a few fun facts :D

SHO maps Ha to green for a reason. Green "participates" the most in the luminance information of an image. Ha is often the strongest NB component, so mapping it to green gives the best SNR.


(Of the three primary components, R, G and B, green carries the most luminance information, over 80%, with red and blue at 54% and 44% respectively.)

The human eye/brain system is most sensitive to luminance information, and perceives noise in luminance most readily. We are perfectly able to watch a "black and white" movie or image, which is just luminance information, while the corresponding chrominance part would look flat and carry far less information.
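As a rough illustration of green's dominance in luminance, here is a sketch using the Rec. 709 luma coefficients (one common convention, chosen as an assumption here; the exact percentages quoted above may come from a different perceptual measure):

```python
# Relative luminance of the pure R, G and B primaries under the
# Rec. 709 convention (an assumption; other weightings exist).
def relative_luminance(r, g, b):
    # Rec. 709 weights for linear RGB
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

for name, rgb in [("red", (1, 0, 0)), ("green", (0, 1, 0)), ("blue", (0, 0, 1))]:
    print(name, relative_luminance(*rgb))
# green contributes roughly 3x as much as red and 10x as much as blue
```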

Check out this Ferrari image:

[image: Ferrari photo]

If we take luminance of that image:

[image: luminance channel of the Ferrari photo]

We see a lot of detail in the image (in fact, we see all of the detail in the image); we just don't see color. But look at what the chrominance information carries:

[image: chrominance channel of the Ferrari photo]

We can guess that it is some sort of car, probably a racing car, but we have no idea about the setting of the image or anything else.

To me, the idea of killing green in SHO images and trying to make the "Hubble palette" something that it is not is a complete abomination.

For the above reasons, and some technical ones, most cameras are simply most sensitive in the green part of the spectrum (that helps with daytime photography, and coincides both with the fact that we are most sensitive to green and with the fact that the green channel carries the most luminance information).

Since people don't know how to properly handle raw camera data and convert it to proper color, most astronomical images (which really are best started from raw data) have a green cast.

99% of objects in space are not green because they are stellar in nature (which means their color falls somewhere along the range of star colors, from orange-red through white to blue), and instead of learning how to properly deal with color, people came up with "hasta la vista green" or SCNR scripts or whatever.

In fact, people get so frustrated with the green they get from their raw data that they have started treating all green, even in false color images, as something unnatural.

In the end, my advice would be: if you don't mind going against popular expectations, don't kill the green in SHO - embrace it.


I agree with Vlaiv and think that, if you don't like green in astrophotos, don't use the Hubble Palette. As Vlaiv points out, in most cases the Ha, mapped to green in the Hubble palette, has by far the strongest signal. So what do you expect?

There is also a rationale behind the HP which is often overlooked: looking at where the true wavelength of each emission lies, the HP simply stretches the separation between them but leaves them in the original order, left to right. (In that sense it has something in common with the stretching of brightness where, again, the original order of captured brightnesses is respected.)

I thought this was well worth five minutes: 

Olly


I also do not like green that much in astrophotos, so I avoid the Hubble palette and use HSO instead, since the SII signal is often rather weak. Then I tweak the green towards yellow, which looks a bit more natural to me.


17 hours ago, Adreneline said:

This might be worth a read :)

I also use SCNR in HP images to remove magenta by first inverting the image, applying SCNR green, and then re-inverting the image.

HTH

Thanks for the link. SCNR is not quite as brutal a process as I had imagined. Clearly, it can be applied with some subtlety. At least I now have a better understanding of what SCNR does.
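For reference, SCNR's "average neutral" mode is commonly described as clamping green to the average of red and blue, per pixel. A hedged sketch of that idea (the exact PixInsight implementation may differ, and the `amount` blending here is an assumption):

```python
# Sketch of SCNR "average neutral", per pixel, linear RGB in 0..1.
def scnr_average_neutral(r, g, b, amount=1.0):
    # Green may not exceed the average of red and blue...
    g_limit = min(g, (r + b) / 2)
    # ...and 'amount' blends between the original and the clamped value.
    g_new = g * (1.0 - amount) + g_limit * amount
    return r, g_new, b

# A strongly green pixel is pulled back toward neutral:
print(scnr_average_neutral(0.2, 0.8, 0.3))   # (0.2, 0.25, 0.3)
```

So green is not replaced with something arbitrary; it is limited by the red and blue content of the same pixel, which is why a gentle amount can be quite subtle.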


17 hours ago, vlaiv said:

To me - the idea of killing green in SHO images and trying to make "Hubble palette" - something that it is not - is complete abomination.

Thanks for the explanation of the human visual system and the role of the green channel. I agree with your points, but aesthetically (rather than scientifically), dominant acid green does not look that good. I am looking for a way to preserve the Ha data whilst producing a more nuanced appearance. That is not the straight Hubble palette, but I have not found a general-purpose alternative yet.


6 hours ago, ollypenrice said:

I agree with Vlaiv and think that, if you don't like green in astrophotos, don't use the Hubble Palette. As Vlaiv points out, in most cases the Ha, mapped to green in the Hubble palette, has by far the strongest signal. So what do you expect?

Hi Olly,

Of course. When you map the strongest signal to the green channel you get a predominantly green image. However, it often does overwhelm the signals from the other filters, and as @vlaiv says, it is where the human visual system is most sensitive. The result can easily be an acid green image with little colour variation. I am looking for a process that preserves data, but allows stronger visualisation of the different narrowband components. I have found for some targets a 50:50 blend of HOO and SHO produces an image which better balances the filters. Perhaps I just need to mess around more with the palettes to find something that works for each specific image.


20 minutes ago, gorann said:

I also do not like green that much in astrophotos so I avoid the Hubble palette and use HSO instead since the Sii signal is often rather weak. Then I tweak the green towards yellow which looks a bit more natural to me.

Gorann - I'll give that a try. I have used HOS, but find it underwhelming on many targets. A 50:50 blend of HOO and SHO often gives an interesting starting point, but it depends on the target.


Although the RGB mapping is a given with the Hubble palette, you can still adjust hue to change the nature of each primary colour channel. Photoshop's Selective Colour gives three-way control over more than just red, green and blue, in fact.

Olly


Here is an interesting idea one might try:

Any color mapping between NB data and RGB color space will come as a transform matrix. This is due to the nature of light: it is linear, meaning you can scale it (multiply intensity by a constant) and add it (shine two lights at a sensor and the number of photons at each wavelength adds from the two sources).

As such, any transform applied to light needs to be linear, and in the case of vectors that is matrix multiplication.

That really means that

R = c1 * Ha + c2 * OIII + c3 * SII

G = c4 * Ha + c5 * OIII + c6 * SII

B = c7 * Ha + c8 * OIII + c9 * SII

Of course, the point is to find coefficients c1 ... c9 that give a pleasing result. One way of doing it is to go the "other way around". Take the RGB chromaticity chart and select your primaries for the image:

[image: CIE 1931 xy chromaticity diagram showing the sRGB gamut]

In the above chart you can select any three points to represent your Ha, OIII and SII signals. All colors in the image will be within the triangle specified by those three primaries. For example, say you want to avoid green and you want Ha to be yellow. You also don't really like pink and purple tones in the image - well, maybe select the following three primaries:

[image: three chosen primaries marked on the chromaticity chart]

Next, open your image processing application (GIMP, PS, or whichever has a color picker) and note down the RGB values of the colors you have chosen as primaries:


So SII has (1.0, 0.095, 0) as its RGB triplet, Ha has (1.0, 0.91, 0), and OIII has (0, 0.6, 1).

Final color will be given as:

color = Ha * (1, 0.91, 0) + OIII * (0, 0.6, 1) + SII * (1, 0.095, 0)

Now we group the first numbers in the parentheses, those being red, and write:

red = Ha * 1 + OIII * 0 + SII * 1 = Ha + SII

green = Ha * 0.91 + OIII * 0.6 + SII * 0.095

blue = Ha*0 + OIII * 1 + SII * 0 = OIII

There you go - we created our own palette with three primaries so that Ha is yellow, SII is red, and we avoided green and purple/pink colors.
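The whole recipe above can be sketched in a few lines of NumPy (names are illustrative; `ha`, `oiii` and `sii` are assumed to be equally stretched arrays in the 0-1 range):

```python
import numpy as np

# Rows are the chosen chromaticity-chart primaries for each NB channel.
M = np.array([
    [1.0, 0.91,  0.0],   # Ha   -> yellow
    [0.0, 0.60,  1.0],   # OIII -> cyan-blue
    [1.0, 0.095, 0.0],   # SII  -> red
])

def combine_palette(ha, oiii, sii):
    channels = np.stack([ha, oiii, sii], axis=-1)  # shape (..., 3)
    rgb = channels @ M                             # red = Ha + SII, blue = OIII
    return np.clip(rgb, 0.0, 1.0)                  # keep values in range
```

For a single pixel with Ha = 0.4, OIII = 0.2 and SII = 0.1 this gives red = 0.5, green = 0.4935 and blue = 0.2, matching the three equations above term by term.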


6 hours ago, vlaiv said:

There you go, we created our own palette with three primaries so that Ha is yellow, SII is red and we avoided green and purple / pink colors

Now that is clever!

Thanks for explaining. I was stumbling around trying different blends without any framework. This is excellent and will be tried as soon as possible!


I tried @vlaiv's methods on some IC1396 data. 

This is the image. Processed as normal, with no changes to colour balance except an overall increase in saturation and adjustments to contrast and brightness. Many other faults no doubt, but a nice range of colours.

[image: IC 1396, processed as described]


6 hours ago, old_eyes said:

This is the image. Processed as normal, with no changes to colour balance except an overall increase in saturation and adjustments to contrast and brightness. Many other faults no doubt, but a nice range of colours.

A bit too much green in there - more than I would expect from the selected primaries, but that is probably due to the increased saturation.

What is your processing workflow, and what does the image look like without the saturation boost?


I am not too worried about some green; it is the dominance I try to avoid.

This is the result after initial channel combination:

[image: IC 1396 after initial channel combination]

Process to this point was:

  1. WBPP calibration, registration and stacking.
  2. Dynamic crop to remove dodgy edges
  3. DBE
  4. EZ Denoise
  5. Masked Stretch
  6. Channel Combination (using your example weights, but dividing by the sum of the weights for each channel to normalise)

Processing after this point to get the final version:

  1. Curves transformation to increase brightness and contrast plus initial saturation boost
  2. Starnet to create starmask and starless version.
  3. Dark Structure Enhance
  4. Extract luminance from starless
  5. Further contrast and saturation boost with Curves Transformation
  6. MMT (6-layers) to reduce noise and slightly boost structure
  7. Dark Structure Enhance, MMT and CurvesTransformation on Luminance image to get as much detail as I can
  8. LRGB Combination
  9. Pixel Math to reunite starless and star_mask
  10. Export to JPG
  11. Tweak contrast and brightness of the JPG in Affinity

Quite happy to say I overdid the saturation. Something I am prone to. I have not yet mastered visualising what the XISF image will look like when crushed into a JPG or PNG.

Comparing the XISF with the JPG of the original channel combination posted above, they are significantly different. The JPG shows higher saturation and a different colour balance.

In fact, having now exported that intermediate image as a JPG I like the general balance of this one best. I might go back and do the MMT processing without further pushing saturation and contrast.


15 minutes ago, old_eyes said:

Channel Combination (using your example weights, but dividing by the sum of the weights for each channel to normalise)

This will slightly alter things. If you want to divide, divide all three by the same number. But you don't really need to divide - you can "clip", which is the standard operation in color space conversion.

In the above equations, red equals Ha + SII, which means it can easily be larger than 1 (or 100%, or 255, whichever range you use) - but all values larger than 1 are simply clipped to 1.

To be absolutely precise about it, we should really include gamma operations as well, since standard sRGB values are gamma encoded (sRGB gamma) - so we need to undo the gamma before calculating the above values and redo it when we are done composing the image. But that could be seen as "advanced" stuff, so I did not want to bring it into the discussion (that early) :D
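A minimal sketch of the clip-plus-gamma point, assuming a simple power of 2.2 as an approximation of the exact piecewise sRGB curve. The primaries read off a colour picker are sRGB-encoded, so they are linearised before building the mixing matrix, and the result is re-encoded at the end:

```python
import numpy as np

GAMMA = 2.2  # approximation of the piecewise sRGB transfer function

def srgb_to_linear(v):
    return np.power(np.asarray(v, dtype=float), GAMMA)

def linear_to_srgb(v):
    # clip to [0, 1] first (sums like Ha + SII can exceed 1), then encode
    return np.power(np.clip(v, 0.0, 1.0), 1.0 / GAMMA)

# Linearise the picker-read primaries before using them as matrix rows:
M = srgb_to_linear([[1.0, 0.91,  0.0],   # Ha
                    [0.0, 0.60,  1.0],   # OIII
                    [1.0, 0.095, 0.0]])  # SII

def combine(ha, oiii, sii):
    rgb_linear = np.stack([ha, oiii, sii], axis=-1) @ M
    return linear_to_srgb(rgb_linear)
```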

 


14 minutes ago, vlaiv said:

To be absolutely precise about it, we should really include gamma operations as well, since standard sRGB values are gamma encoded (sRGB gamma) - so we need to undo the gamma before calculating the above values and redo it when we are done composing the image. But that could be seen as "advanced" stuff, so I did not want to bring it into the discussion (that early) :D

Thanks for that. Your explanations are excellent, but I do sometimes feel I am clinging on to understanding by my fingertips!

If I take away the normalisation I get this:

[image: the combination without normalisation]

Brighter and warmer.

An interesting effect. I need to go back through the matrix maths to understand what I have done.

 


2 minutes ago, old_eyes said:

If I take away the normalisation I get this:

Ah, yes - more like what one would expect. We mapped Ha to yellow, and the image is predominantly yellow because Ha is the strongest.

I'm not sure I like this version, though. I like its brightness, but I prefer the color tone of the last one - the normalized one.

The actual color tone of the image will depend on the stretch of each channel. That is an important part: stretching each wavelength a bit differently will change the overall tone of the image.

I'm not even sure what the best approach is - whether to do the non-linear stretch before or after color combination.


14 minutes ago, vlaiv said:

I'm not even sure what the best approach is - whether to do the non-linear stretch before or after color combination.

I would have thought that to preserve colour balance we should combine and then stretch. If each colour was stretched separately, you could end up with an inauthentic colour, as each channel may have been stretched differently. Is my understanding correct?


The consensus seems to be to channel combine linear images for RGB and non-linear for narrowband, although I have not heard a good explanation of why.

I like using the masked stretch because it gives a similar background for each image, which should mean the combination is more neutral.

I think what I have done with the 'normalised' image is to reduce the red and green channels (by the division) and boost the blue, because its divisor was 1. I get a similar image if I simply multiply the OIII (blue) channel by 2, except there are more magenta stars.
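A quick arithmetic check of that observation, using the example palette weights from earlier in the thread: the per-channel divisors are 2 for red, about 1.6 for green and 1 for blue, so normalising each channel by its own sum is roughly the same as doubling blue relative to red.

```python
# Sum of each output channel's weights (Ha, OIII, SII contributions),
# using the example palette from earlier in the thread.
weights = {
    "red":   [1.0, 0.0, 1.0],
    "green": [0.91, 0.60, 0.095],
    "blue":  [0.0, 1.0, 0.0],
}
divisors = {channel: sum(w) for channel, w in weights.items()}
print(divisors)   # red: 2.0, green: ~1.605, blue: 1.0
```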


8 minutes ago, AstroMuni said:

I would have thought that to preserve colour balance we should combine and then stretch. If each colour was stretched separately, you could end up with an inauthentic colour, as each channel may have been stretched differently. Is my understanding correct?

I think that is right for broadband imaging, where you are picking up the 'real' colours of stars, nebulae etc. And even then you have to compensate for filter bandpass and detector quantum efficiency. That's why PixInsight has tools like Photometric Color Calibration, which matches the stars in your image to their known colours (from a database) and applies corrections to get a more 'realistic' white balance.

For narrowband, the general desire is to highlight the differences between the emission lines. For example, Ha and SII are both red, and the distinction is subtle to the human eye. In most narrowband imaging you want to make the difference more obvious, so you map the narrowband filter images not to their 'correct' colours but to something that stretches the differences. Hence all the different narrowband palettes that people use, and that @vlaiv is teaching me more about.

Some people like to replace the stars in narrowband images with RGB stars because they find the 'unnatural' star colours that result unpleasant. But I am also not sure about an image that has a clearly remapped nebula floating in a field of broadband stars. That looks equally odd to me.

Once you are no longer mapping to 'real' emission colours, it does not matter if you combine non-linear images. There is no reality to maintain, and it gives you the chance to adjust the relative strengths of the different signals in search of a more effective communication of the target structure and characteristics. 

As I said, it is not clear to me whether you should stretch first and then combine or vice-versa, but the consensus is that the job is easier if you stretch narrowband first. What you don't have to worry about is preserving colour balance, because there isn't any - except the balance you choose.


33 minutes ago, AstroMuni said:

I would have thought that to preserve colour balance we should combine and then stretch. If each colour was stretched separately, you could end up with an inauthentic colour, as each channel may have been stretched differently. Is my understanding correct?

You are partially right - but that relates to true color images.

With NB images, if we did that, we would end up with an almost monochromatic image. Ha usually has a much, much stronger signal than OIII, and SII in particular.

NB images are often processed so that each channel is treated separately and then they are combined in the end.

Even for true color images you need to be careful, as non-linear stretching causes issues with RGB ratios. This is the main reason people often end up boosting saturation at the end: a non-linear stretch de-saturates colors.

Say you do a power stretch (a simple curves stretch is in fact a power stretch, as is the middle slider in levels; in GIMP you can set the exponent in a box). Say you stretch by a power of 0.5 (that is, square root):

[graph: power-0.5 stretch curve]

That is how the curve usually looks (in the 0 - 1 range).

Imagine now you have RGB as 0.5 : 0.25 : 0.25, so the ratio of R to G is x2 (0.5 : 0.25).

After the stretch, you'll have ~0.70711 : 0.5 : 0.5 (each value raised to the power of 0.5, i.e. square root taken).

Now R to G is no longer x2 - it is ~1.4142.

Colors "came closer together": the contrast between color components is reduced, and that is essentially de-saturation.

What you can do for NB images is maintain the linearity of each component but scale them so that they are close in value. Instead of a non-linear stretch, you can linearly stretch each before combining them - but then you need to do this visually, so that the colors are nicely "balanced". The alternative is to do a regular stretch before combining them.
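The de-saturation effect is easy to verify numerically: a power-0.5 stretch shrinks the example pixel's R:G ratio from 2 down to the square root of 2.

```python
# Worked example of the ratio compression described above.
r, g, b = 0.5, 0.25, 0.25
print(r / g)              # 2.0 before the stretch

# Apply the power-0.5 (square-root) stretch to each channel:
r_s, g_s, b_s = (c ** 0.5 for c in (r, g, b))
print(r_s / g_s)          # ~1.4142 after: the colours move closer together
```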

