
# Synthetic luminance

## Recommended Posts

Hello all,

I am wondering what the best way is to combine a synthetic luminance image, made from the RGB channels, with a real luminance image. Should you just combine the RGB and convert the image to grayscale, or should you white balance the RGB before converting it to grayscale?
Not sure it makes a real difference, but still...

Regards,

Alex

---

How do you want to combine them?

If you want to create additional subs for stacking alongside real luminance subs (to improve SNR, or to do some measurement), then it matters, and it depends on the camera used for both luminance and RGB. If the same mono camera was used, and R, G and B were shot with filters that divide the frequency range cleanly into sub-ranges (no overlap and no gaps), you can simply add one sub each of R, G and B to get one luminance sub. If a different camera or an OSC was used, or the filters have overlaps or gaps, the best approach would be to set up a 3x3 matrix and solve it approximately with the least-squares method, using the real luminance as a guide: you sample multiple stars and solve for the transformed R, G and B values that, added together, give the same amount of light as the luminance for those stars. Not an easy thing to do, and you would need to write custom software for it.
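
The least-squares idea above can be sketched with NumPy. This is a toy illustration, not the poster's software: the star fluxes below are invented numbers, and in practice you would measure them photometrically from registered subs.

```python
import numpy as np

# Hypothetical star photometry: each row holds the measured (R, G, B) flux
# of one star, and lum holds the flux of the same star in the real L sub.
rgb = np.array([
    [120.0,  95.0,  60.0],
    [300.0, 260.0, 180.0],
    [ 80.0, 110.0, 140.0],
    [500.0, 420.0, 310.0],
    [ 60.0,  70.0,  90.0],
])
lum = np.array([270.0, 730.0, 330.0, 1220.0, 220.0])

# Solve rgb @ w ~= lum in the least-squares sense; w gives per-channel
# weights so that wR*R + wG*G + wB*B matches the luminance response.
w, residuals, rank, sv = np.linalg.lstsq(rgb, lum, rcond=None)

# A synthetic luminance sub is then the weighted sum of the channel subs:
# syn_lum = w[0] * sub_R + w[1] * sub_G + w[2] * sub_B
```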

With this method you will end up with a bunch of lum subs of different SNR, so the stacking algorithm needs to deal with subs of different quality. In this case, a regular average and average-based methods will not produce the optimum result.
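
One standard way to combine subs of unequal quality is inverse-variance weighting, which is the optimal linear combination for independent Gaussian noise. A minimal sketch with synthetic stand-in images and assumed noise levels:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0

# Three hypothetical lum subs of the same field with different noise levels
# (e.g. a real L sub and two synthetic ones derived from RGB).
noise_sigmas = [2.0, 5.0, 8.0]
subs = [signal + rng.normal(0.0, s, size=(64, 64)) for s in noise_sigmas]

# Inverse-variance weighting: each sub contributes in proportion to
# 1/sigma^2, so noisier subs are down-weighted rather than averaged in 1:1.
weights = np.array([1.0 / s**2 for s in noise_sigmas])
weights /= weights.sum()
stack = sum(w * sub for w, sub in zip(weights, subs))

# For comparison: the plain average a naive stack would produce.
plain = np.mean(subs, axis=0)
```

The weighted stack ends up noticeably less noisy than the plain average because the sigma-8 sub contributes only a small fraction of the result.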

If, on the other hand, you have already stacked everything and you want to enhance the lum layer for the purpose of having a nice image, then it really does not matter. You can choose the weights of R, G and B by judging how much signal is in each: add more weight to the channels that have more signal in them. You can simply use a 1:1:1 ratio (in other words, add them up and divide by 3, or don't divide at all), or some other ratio; for example, for HII regions put more weight on the R and B channels, while for galaxies green is probably going to be strongest, so it should dominate. When you add the synthetic lum to the real lum, give it a bit less weight than the real lum (decide based on total integration time).

You can also colour balance before adding, but in my opinion that will produce a lower-quality result. Colour balancing tends to amplify weak signal, which is probably going to have lower SNR, and after amplification it gets treated equally to stronger signal (which has good SNR) - thus not providing the optimum SNR mix.
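
The weighting scheme described above takes only a few lines of NumPy. The channel weights and integration times below are hypothetical placeholders; the arrays stand in for your stacked channel images.

```python
import numpy as np

# Stand-in stacked channels (in practice, your stacked R, G, B and L images,
# normalised to [0, 1]).
rng = np.random.default_rng(1)
shape = (32, 32)
r, g, b, real_l = (rng.random(shape) for _ in range(4))

# Per-channel weights chosen by how much signal each holds; 1:1:1 is the
# simple default, but e.g. an HII region might get more weight on R and B.
wr, wg, wb = 1.0, 1.0, 1.0
syn_l = (wr * r + wg * g + wb * b) / (wr + wg + wb)

# Blend synthetic and real luminance, weighting by total integration time
# (hypothetical 3 h of RGB against 9 h of real L here).
t_syn, t_real = 3.0, 9.0
super_l = (t_syn * syn_l + t_real * real_l) / (t_syn + t_real)
```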


---

My approach is pragmatic.

I extract a Luminance channel from my white balanced RGB in AstroArt because it has a ready made command for this purpose. Whether or not this differs from simply converting the RGB to greyscale in Ps I'm afraid I don't know.

The next thing is to assess its usefulness. (I find this quite variable.) I stretch the real lum to about 90% of what I think it will give. I then stretch the Syn Lum to the same level so that the backgrounds are comparable. Invariably the syn lum is noisier. Once the backgrounds are the same I paste the syn lum onto the real and experiment with the opacity slider till I get the best looking result. Typically I find that a 3 hour syn lum over an hour's real lum might be worth adding at maybe 25% opacity. You might expect it to be about 50% but I never find it is. I don't know why.
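
Pasting the syn lum over the real lum at a given opacity (normal blend mode) amounts to a simple convex blend. A sketch with made-up arrays, using a crude median offset as a stand-in for matching the stretched backgrounds:

```python
import numpy as np

rng = np.random.default_rng(2)
real_l = rng.random((32, 32))  # stand-in for the stretched real lum
syn_l = rng.random((32, 32))   # stand-in for the stretched synthetic lum

# Match the synthetic lum's background level to the real lum before blending
# (a crude offset match on the median; real processing would stretch both
# so the backgrounds look comparable first).
syn_l = syn_l + (np.median(real_l) - np.median(syn_l))

# Pasting syn_l over real_l at 25% opacity in normal blend mode is:
opacity = 0.25
blended = (1.0 - opacity) * real_l + opacity * syn_l
```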

In imaging at high resolution I use nights of poorer seeing for shooting colour and only shoot L on good nights and at good elevation. In this case I won't normally bother with a syn lum at all.

Olly


---

Interesting points of view from both of you!

In my particular case, with the same camera and non-overlapping filters, it indeed makes sense to add all the data from the RGB channels, which should account for about the same amount of signal as an unfiltered sub.

It also makes sense that white balancing amplifies the weak signal.

After tonight I should have ~13-14 hours of luminance, taken through both an LP filter and a plain L filter, and 3-4 hours of each RGB channel. The shooting conditions varied a lot between sessions, and since I'm trying to determine a good way of combining the lum files, I also thought to throw in the RGB information. Maybe a 15:85 to 20:80 ratio of RGB:L should do.

I doubt it will make a big difference with my data, but the information you shared is valuable; I didn't find any simple explanation of this on the internet. Thank you!

---

Interesting topic! I'm going to add a little "how to do this in PixInsight" sort of question...

So, I have a set of calibrated and registered R, G, B and L subs, all 60s long, all at the same gain and binning. Can I somehow make a synthetic lum using the R, G and B subs and combine it with the L subs for a hopefully smoother/better-SNR L integration? What's the optimum way of doing this in PixInsight?

---

Yes, you can; I've experimented with various ways.

Combining all the subs together in one super image integration isn't likely to work, since some channels will be stronger than others and lots of subs from particular channels will get rejected. So you should probably run an integration on each individual channel first.

The PixInsight book suggests you can then produce a super luminance from your LRGB (or just RGB): linear-fit them all to the strongest channel first, then load them all into ImageIntegration with no rejection but noise-weighted, and try that. I've not been too impressed, tbh.
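
That linear-fit-then-noise-weighted-integration recipe can be sketched in NumPy. This mimics the idea of LinearFit plus a noise-weighted average, not the actual PixInsight implementation; the images and noise levels below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64)
base = rng.random(shape)

# Hypothetical stacked channels: each is a scaled/offset, noisier copy of
# the strongest channel (here, lum).
lum = base + rng.normal(0.0, 0.01, shape)
red = 0.6 * base + 0.1 + rng.normal(0.0, 0.03, shape)

def linear_fit(img, ref):
    """Least-squares scale and offset mapping img onto ref."""
    A = np.column_stack([img.ravel(), np.ones(img.size)])
    (a, c), *_ = np.linalg.lstsq(A, ref.ravel(), rcond=None)
    return a * img + c

red_fit = linear_fit(red, lum)

# Noise-weighted combination: inverse-variance weights, using the known
# noise levels here (0.03 scaled up by the ~1/0.6 fit); in practice you
# would estimate them, e.g. from a flat background patch.
sigmas = np.array([0.01, 0.05])
w = 1.0 / sigmas**2
w /= w.sum()
super_lum = w[0] * lum + w[1] * red_fit
```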

Another method I've used: combine and white balance your RGB, then, with the RGB working space tool, experiment with different weights for each channel before using the extract-luminance tool. You can judge what weights you'd like by noise evaluation, the contrast-vs-noise script and visual inspection. If you have L as well, you could mix that in with PixelMath: lum*weight + rgb_l*(1-weight), that kind of thing.
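
The PixelMath expression lum*weight + rgb_l*(1-weight), together with a custom-weighted luminance extraction, looks like this in NumPy. The channel weights and the mix weight are hypothetical choices you would tune by noise evaluation and inspection:

```python
import numpy as np

rng = np.random.default_rng(4)
r, g, b, lum = (rng.random((32, 32)) for _ in range(4))

# Luminance extraction with custom working-space weights (hypothetical
# values favouring green, as you might pick for a galaxy target).
wr, wg, wb = 0.2, 0.6, 0.2
rgb_l = wr * r + wg * g + wb * b

# PixelMath-style mix of real and RGB-derived luminance:
weight = 0.7
mixed = lum * weight + rgb_l * (1.0 - weight)
```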

To be honest, though, these days I just take lots of lum and only use the RGB for colour.


---
2 hours ago, glowingturnip said:


Agreed, I find synthetic lum from RGB quite unpredictable in terms of effectiveness. What can be useful, though, is exploiting the weaker RGB signal to make a syn L layer for repairing saturated parts of the L image. I just did this with the core of M51, for instance.
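
One way to sketch that repair: scale the synthetic lum so it matches the real lum just below saturation, then substitute it into the clipped pixels. The data here is synthetic, and the hard cut-over is a stand-in for the feathered blend you would use in practice.

```python
import numpy as np

rng = np.random.default_rng(5)
shape = (32, 32)

# Real lum with a saturated core (values clipped at 1.0) and an
# unsaturated synthetic lum derived from the RGB data.
real_l = np.clip(rng.random(shape) * 1.4, 0.0, 1.0)
syn_l = rng.random(shape)

saturated = real_l >= 1.0

# Estimate a scale factor from pixels just below saturation, where both
# images still carry valid signal (median keeps it robust to outliers).
near = (real_l > 0.9) & ~saturated
scale = np.median(real_l[near] / syn_l[near])

# Replace only the clipped pixels with the rescaled synthetic lum.
repaired = np.where(saturated, np.clip(scale * syn_l, 0.0, None), real_l)
```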

Olly
