
Creating Luminance From RGB



Hi,

I'm whiling away the short cloudy nights by trying to improve my processing skills. I have been re-watching Anna Morris's excellent video tutorials

https://www.eprisephoto.com/videofiles/h1b3af346#h107c11a6

She has a technique whereby she creates a luminance layer by combining R, G & B into an image, copying it and turning the copy into a greyscale image. It would mean that all data captured would be represented twice in the processed image (once in the RGB and once in the luminance). My question is, how would this compare with shooting separate luminance subs in terms of capturing detail? It would certainly mean more time spent on RGB capture if no luminance subs were needed.

I'd be grateful for any thoughts.

Cheers

StevieO


This is a somewhat complex topic and I'll try to answer your question - hopefully in an understandable way.

Let's first look at producing a mono image from RGB data.

There are multiple ways to produce a luminance layer from RGB data, and most of them reduce to this formula:

c1*R + c2*G + c3*B = mono value

The choice of c1, c2 and c3 determines the outcome. For example, suppose you want to produce a luminance layer from a mono camera and RGB filters that is the same as capturing with an L filter, and you happen to have RGB filters that split the 400-700nm range in a disjoint way and cover the whole range. In principle no filters do this exactly, but most come very close, as the following transmission graph shows:

[attached image: RGB filter transmission curves - a bit of overlap around 500nm and a bit of poor coverage at 580nm]

In that case c1, c2 and c3 will simply be 1 each, so the formula becomes:

R + G + B

That translates into: take all the light between 400nm and 500nm (blue), all between 500nm and 600nm (green), and all above 600nm up to 700nm (red), add them together, and you get all the light between 400nm and 700nm.
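As a minimal sketch of that formula (toy NumPy arrays standing in for stacked, calibrated channel data - not real data from the post):

```python
import numpy as np

# Toy stacked R, G and B frames (linear data) standing in for real channels.
rng = np.random.default_rng(0)
r, g, b = rng.random((3, 4, 4))

def make_lum(r, g, b, c1=1.0, c2=1.0, c3=1.0):
    """Generic synthetic luminance: c1*R + c2*G + c3*B."""
    return c1 * r + c2 * g + c3 * b

# With disjoint filters covering the whole 400-700nm range, c1 = c2 = c3 = 1:
lum = make_lum(r, g, b)
```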

There is another often-quoted approach to getting luminance data from color:

If you happen to have linear sRGB data (not the case with OSC or mono+RGB filters unless you color calibrate that data), then the coefficients c1, c2 and c3 have the values 0.2126, 0.7152 and 0.0722 (as defined by the sRGB standard).

Why such different numbers? Because in this case luminance is designed to mimic our eye's sensitivity to light - for the same physical intensity, we perceive green as the brightest of the three primaries, red as dimmer, and blue as dimmer still.

I often show this image as a demonstration of how different colors (different mixes of the three primaries - R, G and B) give a different sense of brightness:

[attached image: brightness comparison of pure red, green and blue swatches]

You can see that pure green has something like 80% brightness, red about 54% and blue only 44% (these percentages differ from the coefficients above because the image has the ~2.2 gamma of the sRGB standard applied, while the coefficients apply to linear values).

Therefore, if you want to mimic sensor response you should just add the colors together, but if you want brightness as we would perceive it, you should use different coefficients (which depend on the color space / color calibration of your RGB data - the example above is for linear sRGB).
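To make the two conventions concrete, here is a small sketch comparing the straight sum with the linear-sRGB weighted version (toy arrays; the coefficients are the sRGB standard values quoted above):

```python
import numpy as np

# Toy color-calibrated, linear sRGB channels.
rng = np.random.default_rng(0)
r, g, b = rng.random((3, 4, 4))

# Sensor-style luminance: straight sum, as with disjoint RGB filters.
sensor_lum = r + g + b

# Perceptual luminance per the sRGB (Rec. 709) standard.
perceived_lum = 0.2126 * r + 0.7152 * g + 0.0722 * b
```

Note that the sRGB coefficients sum to 1, so the perceptual luminance stays in the same range as the individual channels, while the straight sum can reach three times that range.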

There are other sets of coefficients you can use, with slightly different results. In regular image processing, because you will be applying a nonlinear stretch, there won't be much difference between approaches - the resulting image will be influenced more by your processing than by the choice of luminance method (sensor based, perceived-luminance based or something else).

For example, with a DSLR or OSC sensor you might consider different coefficients based on the fact that color sensors with a Bayer matrix have twice as many green pixels as red or blue - more data gathered in the green part of the spectrum means better SNR there. Couple that with the type of target you are shooting and you can choose c1, c2 and c3 for best SNR.

This gets us to the difference between LRGB and pure RGB.

Why do people use the LRGB approach rather than RGB when in principle both produce the same image? It comes down to SNR and perceived noise. It is a known fact that the human eye/brain system is more sensitive to variations in luminosity than to variations in color - we spot luminance noise more easily than noise in the color data.

Shooting L gets you better SNR than adding R, G and B together. This is because there are noise sources besides shot noise - dark current noise and read noise. Imagine the scenario above, with RGB filters that split the 400-700nm range. To gather all the light in that range you can either use the L filter for one 10-minute sub, or use each of the RGB filters for a 10-minute sub. The L sub carries the dark current noise of a 10-minute exposure and one dose of read noise, while the summed RGB subs carry three doses of read noise and 30 minutes of dark current.
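A quick back-of-the-envelope sketch of that noise budget (the read noise and dark current figures here are illustrative assumptions, not numbers from the post):

```python
import math

read_noise = 3.0      # e- per read (assumed)
dark_current = 0.005  # e-/s/pixel (assumed)
sub_length = 600      # s, one 10-minute sub

# One L sub: one dose of read noise, 10 minutes of dark current shot noise.
l_noise = math.sqrt(read_noise**2 + dark_current * sub_length)

# R+G+B summed: three doses of read noise, 30 minutes of dark current.
rgb_noise = math.sqrt(3 * read_noise**2 + 3 * dark_current * sub_length)

print(round(l_noise, 2), round(rgb_noise, 2))  # -> 3.46 6.0
```

Both versions capture the same signal, so with these assumed figures the summed-RGB luminance is sqrt(3) times noisier than the single L sub.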

This may not seem like a lot, but it has quite a bit of impact in some cases: when capturing faint signal (comparable to the read noise amplitude), or when shooting, say, a narrowband target with little or no light in the green part of the spectrum. In that case adding G to R and B just adds noise and no signal at all, so it would be better to produce luminance from R and B alone, leaving G out completely (setting the coefficient c2 to 0).

Thus LRGB vs RGB depends on many factors, and in the LRGB case the ratio of time spent on L versus RGB will affect the final result.

Like I've said, it is a fairly complex topic and there is no simple, straightforward answer.

However, for processing purposes it is better to have luminance and RGB information separate, as you can apply different processing to each (remember the eye/brain's sensitivity to noise in luminosity vs color). So even with only RGB data, creating an artificial luminance can help with processing - especially if you pay attention to how you create it, so as to maximize its SNR.

As a last point, I want to briefly talk about the "best of both worlds" - how to maximize the effectiveness of LRGB imaging. As far as I'm aware, no one currently uses this approach and it is not really supported in software, but it is something I'm working on:

When working with filters that "add up" to the full 400-700nm range, we have seen that R+G+B produces the same result as L (a bit lower SNR, but the same signal captured) - this can be useful. When you stack your L subs, you can include the R+G+B sums as well, provided your algorithm can handle subs of different SNR - this way you improve your L by adding more data.

That is not all. If you look at the equation L = R+G+B, you can easily see that rearranging it gives, for example, R = L - (G+B), and other combinations - which means you can augment your color data using L in a similar way.
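A toy numeric sketch of that rearrangement (made-up pixel values; a real implementation would weight the two estimates by their noise levels rather than averaging equally):

```python
# Toy pixel values where L = R + G + B holds, as with disjoint filters.
r_meas, g_meas, b_meas, l_meas = 10.2, 19.7, 30.4, 59.8

# Rearranging L = R + G + B gives a second, independent estimate of R:
r_from_l = l_meas - (g_meas + b_meas)

# Averaging the two estimates reduces noise in the red channel.
r_better = 0.5 * (r_meas + r_from_l)
print(round(r_from_l, 1), round(r_better, 2))  # -> 9.7 9.95
```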

The moral of this short final part is that although LRGB is often better than pure RGB, we are still not using its full potential in data reduction. Hopefully software that does will be available soon.

Hope this helps with understanding artificial vs "natural" luminance.


Wow, my gob is smacked, I wasn't expecting so comprehensive a reply, thank you!

I was especially struck by the improved SNR to be gained from capturing genuine Luminance subs and, if I interpret correctly, that adding a greyscale copy of an RGB image as luminance is likely to increase noise - rather like stacking multiple copies of the same sub together.

The last part about including RGB data in the Luminance stack was interesting, I will give that a go and see what happens.

Thanks again!

StevieO


1 hour ago, StevieO said:

adding a greyscale copy of an RGB image as luminance is likely to increase noise

Not sure what you mean by adding a grayscale copy of RGB as luminance, but if, for example, you have an OSC image (rather than the mono/RGB variant) and you want to apply an LRGB workflow to it, meaning:

1. create artificial luminance

2. process that luminance (stretch, denoise, all the fancy stuff one does)

3. apply the RGB ratios from the RGB data (after color calibration) to the luminance above

you could end up with a better looking image - less noisy.

Why is that? The thing with OSC data is that you have twice as many green pixels (data) as red or blue. If you apply the linear sRGB luminance transform given above, you take mostly green with just a bit of red and blue - and green has the best SNR because you have twice as much data - so your luminance will be a bit less noisy.
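One way to pick SNR-friendly coefficients is inverse-variance weighting. A sketch with assumed per-channel noise levels (the sqrt(2) advantage for green reflects the doubled pixel count of a Bayer matrix):

```python
import math

# Assumed noise standard deviations per channel in a stacked OSC image;
# green is lower because a Bayer matrix has twice as many green pixels.
sigma = {"r": 1.0, "g": 1.0 / math.sqrt(2), "b": 1.0}

# Inverse-variance weights maximize SNR for a flat (colorless) signal.
w = {ch: 1.0 / s**2 for ch, s in sigma.items()}
total = sum(w.values())
w = {ch: round(v / total, 3) for ch, v in w.items()}
print(w)  # -> {'r': 0.25, 'g': 0.5, 'b': 0.25}
```

Note how the weights end up leaning on green, much like the sRGB coefficients do, even though the reasoning here is purely about noise rather than perception.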

Another thing you can do when composing and processing LRGB-style: you can denoise the color data much more aggressively without blurring the final image, because most of the sharpness is carried by the luminance.
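Step 3 of the workflow above - applying the RGB ratios to the processed luminance - can be sketched like this (assumed array data; the eps guard against division by zero is my addition):

```python
import numpy as np

def lrgb_combine(lum, r, g, b, eps=1e-9):
    """Replace the brightness of the color data with the processed
    luminance while keeping the original R:G:B ratios per pixel."""
    s = r + g + b + eps
    return lum * r / s, lum * g / s, lum * b / s

# Toy example: a pixel with ratios 1:1:2 and processed luminance 2.0.
r2, g2, b2 = lrgb_combine(np.array([2.0]), np.array([1.0]),
                          np.array([1.0]), np.array([2.0]))
```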


That's been very helpful, thanks. Taking everything into account, I think I'll stick with what I've been doing, which is 50% Luminance and 50% RGB. After all that, I think creating a 'false' Luminance from a greyscaled RGB image would be noisier and less detailed than the real thing.

S. 


The luminance filter passes about three times the light passed by any one of the colour filters. If you are struggling to get faint signal above the noise you'll find real luminance does this best.

I've messed about with synthetic luminance added to real and concluded, quite honestly, that it isn't worth the bother. Even though 3 hours of synthetic lum ought to match 1 hour of real lum I never find that it does. On the other hand, the synthetic lum, having less signal, can often stand in for short lum subs for the repair of blown out cores etc. For that purpose I do find it useful - because it fails to match up to real lum in signal.

However, for OSC users it can be a very good idea to separate the processing functions into luminance and colour, because they have very different objectives even though the data comes from the same source. Process the synthetic lum for sharpness and the RGB for low detail and bold colour.

As ever I enjoyed Vlaiv's reply and greatly respect it. My position is just a pragmatic one.

Olly

Edited by ollypenrice

14 hours ago, HunterHarling said:

I found it is best to throw all the R, G, B, and L subs together in a stack. There is the same amount of signal as L and a bit less noise.

This surely can't be a good way to do it, though, because all the colour subs will be weighted at the same value as the luminance subs. They are very defective in signal compared with the L subs and they are more numerous, so their defective signal will outweigh the L signal where that is better.

I'd be more inclined to put all the R,G and B subs into a stack and make a greyscale output image called Synthetic Luminance. I think PI can read the noise level in this stack and the L stack and combine them at an appropriate weighting. Alternatively you could give the Synth Lum and the real Lum equivalent stretches (not stretching them right to the limit but till the backgrounds were the same.) Then paste one on top of the other in Ps and play the opacity slider till you got the lowest level of noise by looking into the image at a high zoom factor.

Olly
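The noise-based weighting Olly describes can be sketched as an inverse-variance blend (toy arrays; the sigma values stand in for noise estimates such as those from PixInsight's noise evaluation):

```python
import numpy as np

rng = np.random.default_rng(0)
real_lum = rng.random((4, 4))   # toy real luminance stack
synth_lum = rng.random((4, 4))  # toy synthetic luminance stack

# Assumed background noise estimates; synthetic lum ~sqrt(3)x noisier here.
sigma_real, sigma_synth = 1.0, 3**0.5

# Inverse-variance weighting: the lower-noise stack dominates the blend.
w_real = 1.0 / sigma_real**2
w_synth = 1.0 / sigma_synth**2
combined = (w_real * real_lum + w_synth * synth_lum) / (w_real + w_synth)
```

With these assumed sigmas the real lum gets 75% of the weight, which matches the intuition of nudging an opacity slider until the blend looks least noisy.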


6 hours ago, ollypenrice said:

This surely can't be a good way to do it, though, because all the colour subs will be weighted at the same value as the luminance subs. They are very defective in signal compared with the L subs and they are more numerous, so their defective signal will outweigh the L signal where that is better.

I'd be more inclined to put all the R,G and B subs into a stack and make a greyscale output image called Synthetic Luminance. I think PI can read the noise level in this stack and the L stack and combine them at an appropriate weighting. Alternatively you could give the Synth Lum and the real Lum equivalent stretches (not stretching them right to the limit but till the backgrounds were the same.) Then paste one on top of the other in Ps and play the opacity slider till you got the lowest level of noise by looking into the image at a high zoom factor.

Olly

Yes, this would probably work better. I might try this with some of my older data.
