Monochrome imaging - Colour balance



I'm pretty new to processing monochrome imaging and I'm struggling to get the colour balance right.
The attached image is M31, taken with my ASI183MM and a Baader RGB filter set.

The colour seems a bit "off"??
Many images of M31 have a golden/yellow/brown centre fading into a blue colour towards the edges of the galaxy.

I use Photoshop for editing, and any advice would be appreciated.

[attached image: M31NewTry.jpg]


To a first approximation, colour calibration can be performed using any star in the image.

The first thing to avoid is stretching individual channels (curves/levels applied per channel). You should really stretch the luminance only and leave the colour as it is.

The other thing: perform colour calibration while the data is still linear.
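
To make "stretch luminance, keep colour" concrete, here is a minimal Python sketch of the idea (my illustration, not from the post; it assumes the stack is a linear float RGB array, and the power stretch is just a stand-in for your preferred stretch):

```python
import numpy as np

def stretch_luminance_only(rgb, gamma=0.25):
    """rgb: linear (H, W, 3) float array with values in [0, 1]."""
    # Rec.709 luminance of the linear data
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    stretched = lum ** gamma  # any non-linear stretch works here
    scale = np.divide(stretched, lum, out=np.zeros_like(lum), where=lum > 0)
    # The same multiplier on all three channels preserves the R:G:B ratios
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```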

It really depends on how complex a procedure you want to follow.

Photoshop might not be the best tool for this, as you need something that can calculate pixel values (or rather star intensities) correctly.

Here is a handy tool that you'll need:

http://www.brucelindbloom.com/index.html?ColorCalculator.html

[screenshot: ColorCalculator settings]

Set the values marked with red arrows as they appear in the screenshot, then enter the star temperature in the CCT field and press it (marked in blue), and you'll get linear RGB triplet values in the fields marked in green.

You need to scale your R, G and B channels until you get exactly those ratios for the star intensity (measured by photometry, or simply by selecting the star with a circular mask, computing statistics on all three channels, and comparing the sum or average pixel value in each channel).
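
As a sketch of that measure-and-scale step in Python (my example; the star position, mask radius and target triplet are hypothetical, the target being whatever the calculator outputs for the star's temperature):

```python
import numpy as np

def channel_scale_factors(rgb, x, y, radius, target_rgb):
    """rgb: linear (H, W, 3) float stack; (x, y), radius: circular mask
    around the chosen star; target_rgb: linear RGB triplet for the star's
    temperature, e.g. from the colour calculator above."""
    yy, xx = np.ogrid[:rgb.shape[0], :rgb.shape[1]]
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    measured = np.array([rgb[..., c][mask].sum() for c in range(3)])
    target = np.asarray(target_rgb, dtype=float)
    # Normalise both triplets to green; the per-channel multiplier is
    # then (target ratio) / (measured ratio)
    return (target / target[1]) / (measured / measured[1])

# e.g. rgb *= channel_scale_factors(rgb, 1024, 768, 8, (1.0, 0.97, 0.91))
```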

An alternative is to use software that attempts to do something similar for you automatically. The best and simplest choice would be Siril (it is free and does this in a single click).

https://siril.org/tutorials/tuto-scripts/

(check out the photometric colour calibration section)


Because I image from a dark site I don't have too much work to do on colour, but I do have a little. What I do (which is very little!) has also worked on guests' data from light-polluted sites showing a bright orange wash-out.

I just use Dynamic Background Extraction on the linear data in PixInsight. Sometimes I'll use Automatic Background Extraction when I can't really find any neutral background in the image. This usually gets me very close, though often a green bias remains at least somewhere in the image. This can be fixed with SCNR green, applied as sparingly as possible.

Being happier in Photoshop, I take the modified linear image into Ps for stretching. By placing a few colour sampler points (set to 5x5 average) on the image's background sky, I can see whether my background channels are equal, as I like them to be. I try to end up at 23/23/23 in RGB. If the background sky is right, I tend to find that the rest of the image will follow.
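
The numerical idea behind equalising the background reads roughly like this in Python (a sketch only, not Olly's actual Photoshop workflow; the whole-frame median is a crude stand-in for hand-placed sampler points):

```python
import numpy as np

def neutralise_background(rgb, target=23 / 255):
    """rgb: stretched (H, W, 3) float array with values in [0, 1]."""
    # Per-channel median as a rough background estimate; in practice
    # you would sample star-free patches of sky instead
    bg = np.median(rgb.reshape(-1, 3), axis=0)
    # Shift each channel so the background lands on a common value
    return np.clip(rgb - bg + target, 0.0, 1.0)
```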

I do think, though, that you will need one of the dedicated and astro-specific gradient tools in your workflow. Many astro processing packages offer something.

M31 is an oddball on colour. Unless worked on, it tends to have a fairly nondescript, murky and monochromatic look, located somewhere on the green-magenta axis. Prising the reds away from the blues is the trick, but don't overdo it.

Olly


4 hours ago, ollypenrice said:

 

M31 is an oddball on colour. Unless worked on, it tends to have a fairly nondescript, murky and monochromatic look, located somewhere on the green-magenta axis. Prising the reds away from the blues is the trick, but don't overdo it.

Olly

Thank you!

When you say "Prising the reds away from the blues is the trick, but don't overdo it", what do you mean?

Turning down the red in relation to the blue?


This is not a scientific approach (which is what I would recommend), but here is the image after gradient removal in Astro Pixel Processor and adjustment in StarTools and Affinity Photo. The usual fixes for colour adjustment in PI (Photometric Colour Calibration) and APP (Calibrate star colours) did not work on the image.

The adjustments were made in Affinity Photo with the HSL tool, Channel Mixer and Selective Colour. I have posted the Affinity Photo screenshots so you can see the changes to the histograms.

Original

[image: OriginalM31.jpg]

Modified

[image: ModifiedM31.jpg]

 


12 minutes ago, senzala said:

When you say "Prising the reds away from the blues is the trick, but don't overdo it", what do you mean?

Here is what M31 really looks like:

[image: reference photo of M31]

The inner part of the galaxy is full of yellow stars. There is some brown/darker colour in the dust lanes and Ha regions, and there are white/bluish young stars in the outer parts.

The colour of M31 is mostly composed of starlight, and stars shine with light that corresponds to their temperature:

[image: star colour as a function of temperature]

Since both the Ha regions and the young hot stars are relatively faint compared to the yellow core, it is hard to separate them nicely, and people often boost saturation to get those bluish stars - but it is very easy to overdo it.


1 hour ago, vlaiv said:

Here is what M31 really looks like:

[image: reference photo of M31]

The inner part of the galaxy is full of yellow stars. There is some brown/darker colour in the dust lanes and Ha regions, and there are white/bluish young stars in the outer parts.

The colour of M31 is mostly composed of starlight, and stars shine with light that corresponds to their temperature:

[image: star colour as a function of temperature]

Since both the Ha regions and the young hot stars are relatively faint compared to the yellow core, it is hard to separate them nicely, and people often boost saturation to get those bluish stars - but it is very easy to overdo it.

The image you post is an example of what I mean to aim for. I've worked on lots of M31 datasets and am working with a guest on his new set at the moment. As I said earlier, I've invariably noticed that they tend towards a fairly monochromatic colour in the stack, more or less green or tending towards magenta (which is one axis). What we should ideally see is a yellowish-red, stellar Population II core with traces of blue Population I stars in the spiral arms. Initially these colours are insufficiently colour-contrasted to stand out from the dull greenish initial appearance.

The colours of the image in your link are often latent in the greenish data, as a crude boost of saturation will show - but saturation is not the way to extract them. I only use it as a test to see what is latent in the data.

I use two methods to boost colour contrast in Photoshop, both saved as actions so they are one-click thereafter. I apply them as a layer so that the opacity slider lets me decide how much of the modification to retain.

1) The Lab colour trick. 

Image > Mode > Lab Colour.

In Channels, make the 'a' channel active, go to Image > Adjustments > Brightness/Contrast and boost the contrast massively (I use 30). Make the 'b' channel active and do the same.

Convert back to RGB colour.
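
Outside Photoshop, the same trick can be approximated in Python with scikit-image (my sketch; the 1.3 factor stands in for the +30 contrast boost, since Photoshop's exact contrast maths isn't public):

```python
import numpy as np
from skimage import color

def lab_colour_contrast(rgb, boost=1.3):
    """rgb: (H, W, 3) float array with values in [0, 1]."""
    lab = color.rgb2lab(rgb)  # L in [0, 100]; a, b roughly in [-128, 127]
    lab[..., 1:] *= boost     # stretch a and b away from neutral grey
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```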

2) The Soft Light Trick.

Make two copy layers, so there are three layers in all.

Change the blend mode of the top layer to Soft Light and flatten onto the middle layer.

Give the middle layer a slight blur (about 0.6 on the Gaussian blur slider), then change the blend mode to Colour.

Flatten.
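
A loose Python analogue of this action (my approximation, not a pixel-exact match for Photoshop's blend maths; the soft-light self-blend uses the "pegtop" formula, which with identical layers reduces to the smoothstep curve 3a^2 - 2a^3, and the Colour blend mode is emulated in Lab):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import color

def soft_light_colour_boost(rgb, blur_sigma=0.6):
    """rgb: (H, W, 3) float array with values in [0, 1]."""
    # Soft Light self-blend: deepens contrast, and with it colour contrast
    blended = 3 * rgb**2 - 2 * rgb**3
    # Slight blur across the spatial axes only
    blended = gaussian_filter(blended, sigma=(blur_sigma, blur_sigma, 0))
    # "Colour" blend mode: keep the original lightness, take a/b from the blend
    lab = color.rgb2lab(rgb)
    lab[..., 1:] = color.rgb2lab(blended)[..., 1:]
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```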

 

On a really reluctant, colour-weak image I might run three iterations of each, alternating the actions. Sometimes only one iteration is needed. The results are much more subtle than saturation boosts and don't provoke saturation's colour noise either.

Olly


20 minutes ago, ollypenrice said:

The image you post is an example of what I mean to aim for. I've worked on lots of M31 datasets and am working with a guest on his new set at the moment. As I said earlier, I've invariably noticed that they tend towards a fairly monochromatic colour in the stack, more or less green or tending towards magenta (which is one axis). What we should ideally see is a yellowish-red, stellar Population II core with traces of blue Population I stars in the spiral arms. Initially these colours are insufficiently colour-contrasted to stand out from the dull greenish initial appearance.

The colours of the image in your link are often latent in the greenish data, as a crude boost of saturation will show - but saturation is not the way to extract them. I only use it as a test to see what is latent in the data.

I think the "green issue" in astrophotography is a matter of misunderstanding.

People often reach for things like "Hasta La Vista Green" or SCNR in PI - and in my view that is simply the wrong thing to do.

Green is an essential component of the image - although it might not be obvious, it is the strongest component of brightness.

I've often posted this image that nicely shows the effect:

[image: demonstration of green's contribution to perceived brightness]

Another way to see the importance of green is to look at the RGB -> luminance transform. For linear sRGB -> XYZ it goes like this:

X = 0.4124*R + 0.3576*G + 0.1805*B
Y = 0.2126*R + 0.7152*G + 0.0722*B
Z = 0.0193*R + 0.1192*G + 0.9505*B

From this matrix we can see that the Y component (which is essentially luminance) is calculated from the RGB triplet as Y ≈ 0.21*R + 0.72*G + 0.07*B,

or in other words, luminance is over 70% green channel.

Artificially killing green really hurts the luminance information.
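
A quick back-of-the-envelope check of that claim (my arithmetic, using the Rec.709 weights above):

```python
r, g, b = 0.5, 0.5, 0.5                        # neutral mid-grey pixel
y_full = 0.2126 * r + 0.7152 * g + 0.0722 * b  # = 0.5
y_no_green = 0.2126 * r + 0.0722 * b           # ~ 0.14, over 70% of Y gone
print(y_full, y_no_green)
```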

The problem with green comes from a lack of colour calibration. The sRGB colour matching functions look like this:

[image: sRGB colour matching functions]

This represents an ideal sensor - one that is matched to our computer screens (or at least to the sRGB colour space that all images on the internet without an explicit colour space should be encoded in). The fact that some parts of the graph are negative has to do with the fact that sRGB is limited in gamut and can't display some colours properly. For example, it can never display the pure OIII colour properly, as it would need to produce "negative" red light for our eyes to see the accurate colour (which is impossible).

But the important thing is to compare that to typical sensor QE curves:

[image: typical CMOS sensor QE curves]

Or add RGB filters to a grey monochrome curve - in either case, green will be the strongest of the three, but for correct sRGB colour we need it to be the weakest of the three, as seen on the sRGB colour matching function graph.

Sensors simply produce a stronger green than they should for the colour space we are working in, and we need to compensate for that - we need to colour calibrate our sensor (each sensor has a different QE curve, but sRGB has a standard one).
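
As an illustration of what that compensation involves, here is a sketch of full-matrix colour calibration in Python (the matrix values are invented for the example; a real matrix must be derived for the specific sensor and filter set):

```python
import numpy as np

# Hypothetical camera-RGB -> linear-sRGB correction matrix; each row sums
# to 1 so that neutral grey stays neutral. Real values depend on the QE curves.
CCM = np.array([[ 1.60, -0.45, -0.15],
                [-0.30,  1.45, -0.15],
                [-0.05, -0.35,  1.40]])

def apply_ccm(raw_rgb, ccm=CCM):
    """raw_rgb: linear (H, W, 3) camera data. Each output channel mixes all
    three camera channels, unlike simple per-channel scaling."""
    h, w, _ = raw_rgb.shape
    return (raw_rgb.reshape(-1, 3) @ ccm.T).reshape(h, w, 3)
```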

30 minutes ago, ollypenrice said:

I use two methods to boost colour contrast in Photoshop, both saved as actions so they are one-click thereafter. I apply them as a layer so that the opacity slider lets me decide how much of the modification to retain.

1) The Lab colour trick. 

Image > Mode > Lab Colour.

In Channels, make the 'a' channel active, go to Image > Adjustments > Brightness/Contrast and boost the contrast massively (I use 30). Make the 'b' channel active and do the same.

Convert back to RGB colour.

The thing with saturation is that it is not a physical property of light.

This is something I think most people don't realise when working with images: there are two different "approaches" or "sides" to colour. One is physical and the other is perceptual.

Perhaps an easier analogy is temperature. With temperature we have a measurable quantity - the physical temperature of something. We say something is at 15°C, and that can be measured.

There is also a perceptual side to temperature: we can touch something and it will feel cold, warm, hot or freezing - a wide variety of terms, but the important thing is that how it feels does not depend solely on temperature. It also depends on other factors - the ambient conditions and the condition of our body (being the sensory system).

Colour is the same in this regard. What we perceive as colour depends on the conditions we observe in.

For example, if we take 6500K light and look at it in daylight / normal office conditions, it will be white light; but if we take the same light and view it in the evening, next to an incandescent light bulb or a fireplace, it will no longer look white - it will look bluish. The light has not changed; our perception did.

Thus we distinguish two different types of colour spaces: one relates to the physical side of things, while the other attempts to tackle the perceptual side. If we look at the terminology of CIECAM02, for example, what this means becomes clearer:

Quote

The two major parts of the model are its chromatic adaptation transform, CAT02, and its equations for calculating mathematical correlates for the six technically defined dimensions of color appearance: brightness (luminance), lightness, colorfulness, chroma, saturation, and hue.

https://en.wikipedia.org/wiki/CIECAM02

All those names - hue, saturation, chroma, brightness - are things that we perceive. There is nothing in the electromagnetic spectrum that says one light should be more saturated than another (at least nothing obvious; we need to derive complex mathematical equations to be able to calculate saturation).

Here is an explanation of the terms from the same page:

Quote

Brightness is the subjective appearance of how bright an object appears given its surroundings and how it is illuminated. Lightness is the subjective appearance of how light a color appears to be. Colorfulness is the degree of difference between a color and gray. Chroma is the colorfulness relative to the brightness of another color that appears white under similar viewing conditions. This allows for the fact that a surface of a given chroma displays increasing colorfulness as the level of illumination increases. Saturation is the colorfulness of a color relative to its own brightness. Hue is the degree to which a stimulus can be described as similar to or different from stimuli that are described as red, green, blue, and yellow, the so-called unique hues. The colors that make up an object’s appearance are best described in terms of lightness and chroma when talking about the colors that make up the object’s surface, and in terms of brightness, saturation and colorfulness when talking about the light that is emitted by or reflected off the object.

We are attempting to create uniform colour spaces - which means that the same numerical "distance" results in the same perceived difference in colour. If we boost saturation by 10 and then by another 10, we should perceive that increase in saturation as linear as well.

In any case, your choice of Lab colour space is appropriate for saturation manipulation, as it is intended to be a (relatively) uniform perceptual colour space; but CIECAM02 / CIECAM16 might be an even better colour space to do it in, as you'd have finer-grained control over what you want to do with the colour.

50 minutes ago, ollypenrice said:

On a really reluctant, colour-weak image I might run three iterations of each, alternating the actions. Sometimes only one iteration is needed. The results are much more subtle than saturation boosts and don't provoke saturation's colour noise either.

Here is the thing: there is no such thing as a colour-weak image. It is what it is - a certain spectrum of EM radiation - and if that spectrum happens to produce colour that is weak in colourfulness or saturation, then that is simply the case. If you take a photo of a pale beige interior like this:

[image: photo of a pale beige interior]

you don't want it to look like this:

[image: the same interior with the green suppressed and the colours exaggerated]

(I did not even do a good job of killing the green in the plant :D).

 


Sorry, a bit of a thread hijack here, but I think it's relevant.

@vlaiv Speaking of colour calibration, PixInsight has recently gained a new one: spectrophotometry-based colour calibration, which uses star spectra, camera QE curves and filter transmissions to perform colour calibration. https://pixinsight.com/doc/docs/SPCC/SPCC.html

What are your thoughts on this - is it essentially the approach you've been advocating for ages, or is it still an imprecise method?


2 minutes ago, The Lazy Astronomer said:

What are your thoughts on this - is it essentially the approach you've been advocating for ages, or is it still an imprecise method?

I just gave it a quick glance, and while the approach is very valid at its roots, it simply falls short of delivering accurate colour.

Here is the problem:

[screenshot: the white reference setting in the SPCC documentation]

As soon as someone starts introducing a white reference into colour calibration, it is a sure sign that something is not right.

I'll expand to make this a bit more understandable. There is no white light - or rather, there is no spectrum that can be identified as white. White is a product of our perception (very much like any other colour) - but more importantly, we see some light as white only under certain viewing conditions.

To use the temperature analogy from above: white is "warm". Will 20°C feel warm to the touch? Well, that depends. If you are in freezing arctic conditions and you touch something that is at 20°C, it will feel warm to the touch.

Similarly, we sometimes say the "white point" is D50 or D65 or some other illuminant. That just means that, given the conditions you are likely to view the image in, D65 (or D50) will look white to your brain.

Under other conditions it might look bluish or maybe even reddish (or rather yellowish). Whiteness also depends on intensity: shine some light that you see as white, then add the same source at much higher intensity; the first source will suddenly look grey rather than white.

Colour calibration is (or rather should be) a step independent of our perception. All observers must agree on the measured value, regardless of what their sensation of the phenomenon is. It should also be performed in an absolute colour space (one with no maximum value - values can be arbitrarily high).

Another problem is that R/G and B/G ratios are a simplified version of the colour calibration matrix - a "diagonal" matrix. Using the full matrix would give better (more accurate) results.
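
To make the diagonal-versus-full distinction concrete, here is a sketch of how a full matrix could be fitted from star measurements (my example; `measured` would come from photometry of stars in the image, `reference` from the linear RGB expected for their catalogue temperatures):

```python
import numpy as np

def fit_ccm(measured, reference):
    """measured, reference: (N, 3) arrays of linear RGB triplets for
    N >= 3 stars. Solves measured @ ccm ~ reference by least squares."""
    ccm, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return ccm

# Apply to an image: calibrated = (rgb.reshape(-1, 3) @ ccm).reshape(rgb.shape)
# A diagonal ccm reproduces simple R/G and B/G scaling; the off-diagonal
# terms capture the channel cross-talk that a full calibration corrects.
```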

 


On 24/11/2022 at 14:56, tomato said:

This is not a scientific approach (which is what I would recommend), but here is the image after gradient removal in Astro Pixel Processor and adjustment in StarTools and Affinity Photo. The usual fixes for colour adjustment in PI (Photometric Colour Calibration) and APP (Calibrate star colours) did not work on the image.

The adjustments were made in Affinity Photo with the HSL tool, Channel Mixer and Selective Colour. I have posted the Affinity Photo screenshots so you can see the changes to the histograms.

Original

[image: OriginalM31.jpg]

Modified

[image: ModifiedM31.jpg]

 

Thank you!
Is that my picture you used?
This is the colour I'm trying to create :)

The histograms look very similar to me?!

Did you blend the gradient map? 


 


Yes, I started with the image you posted at the start of the thread. You can see the distribution of the red and green pixels of the modified image is subtly different from the original.

I was aiming to match the colours of the example M31 posted by @vlaiv; it needs more work.


2 hours ago, tomato said:

Yes, I started with the image you posted at the start of the thread. You can see the distribution of the red and green pixels of the modified image is subtly different from the original.

I was aiming to match the colours of the example M31 posted by @vlaiv; it needs more work.

Did you use masks?

Sorry, I don't know Affinity Photo, but the adjustment layers seem similar.
Which colour profile would you recommend when editing astrophotography?


20 hours ago, senzala said:

Did you use masks?

Sorry, I don't know Affinity Photo, but the adjustment layers seem similar.
Which colour profile would you recommend when editing astrophotography?

No, masks were not used, just adjustments in HSL, selective colour and colour balance.

