Best way to stack different exposures?


opticalpath


I've often wondered what is the optimal way to combine different exposure images of the same object.

I don't mean combining summed short and long exposure stacks with masking to get the best of both worlds and achieve a good representation of high dynamic range subjects like the Orion Nebula. Rather, how, for example, should I combine (1x1) RGB images with unfiltered images in the same stack to produce a luminance image that's superior to a stack of only the unfiltered luminance shots? In this case, would adding in the somewhat lower S/N of the RGB images 'pull down the average' and result in a luminance image that's actually inferior to the luminance-only stack ....... or is it a case of any extra data is good, even lower S/N data? I'm assuming we can use stacking algorithms that normalise images for the effects of differing brightness so it's not a matter of just adding dimmer images to the stack, but a question of what happens to S/N.

What do people think? What does theory say ..... and what do you do in practice?

Adrian


You don't stack the L with the colour. You stack the L and the R and the G and the B into 4 images. Then combine the colours in whatever programme you have. I use AstroArt 5.

Process the RGB to get good saturation, colour balance and low noise. You don't need to chase detail, which makes the noise easier to control.

Now process the L chasing the detail and the faint signal. Process in zones so that regions of low signal (and stars) are not sharpened but may be noise reduced. The bright signal, excluding stars, can be sharpened and shouldn't need noise reduction.

Finally apply the L to the RGB as a top layer in Photoshop, set to blend mode Luminance. Before flattening these layers adjust the saturation in the bottom RGB layer in the light of how it is working under the L.

There are lots of other things you can do but that's the gist of it.

Olly

http://ollypenrice.smugmug.com/Other/Best-of-Les-Granges/22435624_WLMPTM#!i=2277139556&k=FGgG233


I know how to do LRGB, Olly :rolleyes: ....... been doing it for years. I probably didn't ask the question very clearly. What I was talking about was using full resolution RGB images twice: first to construct the colour layer in the normal way, and second adding them into the construction of the luminance stack with the L images. The R, G and B images all have an intrinsic luminance after all ...... why use it only for colour? Add it to the luminance too if it improves the S/N.

Take a simple case: I have 10 luminance images and 10+10+10 RGB images at the same resolution. Suppose the RGB images are well exposed and each one has a S/N that is just as high as each luminance image. What happens if I stack all 40 images and call that my luminance layer - would that be a better or worse luminance layer than one made from just the 10 luminance exposures? I'm certain the answer is "better", so I might use all 40 for luminance and the 30 RGB for colour.
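For the equal-S/N case the statistics are straightforward: averaging N equally noisy frames cuts the noise by the square root of N, so 40 frames should roughly double the S/N of 10. A quick numpy sketch with made-up numbers (the signal and noise values here are purely illustrative, not from any real subs):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0   # hypothetical per-frame signal level (ADU)
sigma = 10.0     # per-frame noise, assumed equal for L and RGB subs

def stack_snr(n_frames, n_pixels=100_000):
    """Average n_frames simulated frames and estimate S/N empirically."""
    frames = signal + sigma * rng.standard_normal((n_frames, n_pixels))
    stacked = frames.mean(axis=0)
    return stacked.mean() / stacked.std()

snr_10 = stack_snr(10)   # luminance-only stack
snr_40 = stack_snr(40)   # luminance plus equally well-exposed R, G and B subs
print(snr_10, snr_40)    # S/N roughly doubles, since sqrt(40/10) = 2
```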

The more difficult question arises in the more likely scenario where the S/N of the RGB images is somewhat poorer than that of the luminance images. In that case would adding them to Lum. improve it or make it worse? That's the question I'm posing.

The RGB example was just a particular case of the general question: if I mix shorter and longer exposures in the same stack, will the shorter exposures improve the overall S/N or will they make it worse? If they are very nearly equal exposure, surely adding the shorter ones will improve overall S/N. But suppose the shorter exposures are only 1/10th of the longer ones - would they improve the result ..... and if not, where's the cross-over?

Adrian


I know you know how to do LRGB so I re-wrote an answer after I'd re-read your question! Sorry about the confusion. I then couldn't edit the first post after a phone call!!

I understand just what you're saying and it's interesting. This is something I've tried;

1) Process an RGB as usual going for saturation, low noise, soft stretch, low detail.

2) Go back to linear RGB and greyscale it then process it to just past its noise limit as if it were normal Lum. This is synthetic L, shall we say. Put it on one side.

3) Process the Real Lum till it reaches about the same point as the Syn L, or just a bit beyond it. Take it just past its noise limit as well.

4) Paste the Syn L onto the Real L and adjust the opacity till you see the greatest reduction in noise. This, I think, is the key part of your question. How do you know how much Syn L to apply? This was the best method I could think of. I flattened at the level of the lowest noise and then...

5) Continue to stretch and sharpen the combined L till it can't give any more. Did this give more than the Real Lum alone? I think it did. But, here's the question I can't answer;

Is this really any different from applying the Real Lum to the normal RGB at slightly less than 100% opacity? I think it may be but really only because the processing has been rather different. The normal RGB was not processed for detail while the Syn L from the RGB was indeed processed for detail.

I haven't formed any final conclusions on whether it's worth going with the Syn L plus Real L scenario. My gut feeling is that it ought to work. I used this while trying to tame the noise in my Praying Ghost image, a desperate struggle with S/N ratio. I even shot some binned L to add only to the faint signal where all the noise was.

Olly


The toughest question, Olly, is what are we going to call this: (RGBL)RGB? :eek:

I would still like to understand the effect of summing unequal exposures in the same stack - let's say all unfiltered luminance to keep things simple. At one extreme, where the exposures are nearly equal, it must improve the S/N. At the other extreme, where the exposures are very widely different, presumably summing them together gives a poorer result than just summing the longer exposures alone (I think). Somewhere between those two extremes there must be a cross-over where mixing the exposures stops being a benefit and becomes a disadvantage; what are the opposing factors at play that cause it? Perhaps a little experiment .......
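For the simplest version of that experiment, an unweighted average of just two subs, the arithmetic has a clean answer. If one sub has noise sigma and the other has noise k times sigma (same signal after normalisation), the plain average helps whenever k is below the square root of 3 (about 1.73) and hurts beyond it. A back-of-envelope sketch, with hypothetical values:

```python
import numpy as np

sigma1 = 10.0   # noise in the longer exposure (arbitrary units)

# Unweighted average of two frames with the same signal: the signal is
# unchanged, while the noise becomes sqrt(sigma1^2 + (k*sigma1)^2) / 2.
for k in (1.0, 1.5, 1.8, 2.5):
    stacked_noise = np.hypot(sigma1, k * sigma1) / 2
    verdict = "helps" if stacked_noise < sigma1 else "hurts"
    print(f"k = {k}: stacked noise {stacked_noise:.2f} vs {sigma1:.2f} -> {verdict}")
```

With a proper inverse-variance weighted average instead of a plain one there is no cross-over at all: extra frames can never make the result worse, only contribute less.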

Adrian


If there is a reasonable way to measure or estimate the S/N ratio of two different stacks then you can weight them accordingly before combining them. AstroArt and Registar both allow you to weight two images before averaging them into one. No doubt lots of programmes do so. You can give one 40% and the other 60%. But how do you know how to weight them? Even if you had all the signal strength data at your fingertips and knew what to do with it (a big 'if' in my case...) there would still be the effect of the sky on the night to consider.

This is why, faced with this problem on occasion, I've taken both stacked images just past their noise limits in terms of stretching and then pasted one onto the other and adjusted the opacity to see where the lowest noise level seems to be. This does give a coherent and credible result. You might see the noise at its lowest with a 30-70 weighting or whatever, but the opacity slider does, as you go from 0% to 100%, produce a blended image that goes noisy-smooth-noisy. That, at least, is fairly encouraging in that it agrees with what might be expected. I've done this in informal collaborations where several imagers have contributed processed data and you need to know how to weight the contributions of each.

Olly


Interesting, Olly. I hadn't really thought about it in these terms but blending two layers in PS at 50% opacity must I suppose be mathematically the same as averaging them. So adjusting the opacity in the blend creates a weighted average, but with the benefit of being able to vary the weighting in real time and see the result immediately. Makes sense.
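Assuming Photoshop's Normal blend at partial opacity is plain linear interpolation between the two layers (which is how it is usually documented), the equivalence is easy to state. A minimal sketch, with toy pixel values:

```python
import numpy as np

def normal_blend(bottom, top, opacity):
    """'Normal' blend mode at partial opacity, assumed here to be
    linear interpolation: opacity*top + (1 - opacity)*bottom."""
    return opacity * top + (1.0 - opacity) * bottom

a = np.array([10.0, 20.0, 30.0])   # bottom layer (toy values)
b = np.array([30.0, 40.0, 50.0])   # top layer

half = normal_blend(a, b, 0.5)     # identical to the pixel-wise average
mix = normal_blend(a, b, 0.7)      # a 30/70 weighted average of the layers
```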

Adrian



The problem here is that I can't find (or can find but can't understand!) what exactly Blend Mode Normal does to greyscale images. Most of the descriptions of what it does concentrate on how it handles colour.

Here's a brain teaser from Wiki, for instance; http://en.wikipedia....iki/Blend_modes It's a great relief to know that we have plenty of Porter-Duff operations available to us. Ahem...

The mathematical terms are beyond me but maybe it is saying that in a greyscale a simple average will be performed. In practice it certainly looks that way. I've just realized, though, that it would be easy to confirm this using AstroArt or Registar. In AA, for instance, you can open two images and, in Arithmetic, select Average and then use the slider to weight the top and bottom image contributions. Now for the mental breakthrough! Roll of drums... In AA you could do this to the linear images because it gives you a good screen stretch (as in Pixinsight) which would allow you to assess the quality of different weightings in real time.

This is certainly something I'm going to try. It's so obvious yet it hadn't occurred to me till you set me off in thought. I'll let you know what transpires.

Olly



Re Porter-Duff operations ......... always have my cardboard cut-outs to hand :grin:

When you referred to linear images, Olly, I take it you mean the separate stacked, but unstretched, summed images, figuring out the optimal blend before applying non-linear operations? Clever!

I could do this in the 'pixel math' function of Maxim too; it allows you to add/subtract/multiply/divide two images and select the desired % of each in the mix .... and stretch the previewed result. Hmm, interesting!

BTW, I found a reference to 'averaging in layers' (or in 'apply image') in Jerry Lodriguss's book - extract here:

http://www.astropix.com/HTML/J_DIGIT/PSALIGN.HTM

that confirms 50% opacity normal blend = averaging.

Adrian


Just had another thought, Olly, building on your good idea to trial-blend the different stacks before stretching .......

If I do this in Maxim, playing around with the % mix, not only can I preview the results and judge subjectively what mix gives lowest noise, I can go one better. By setting the cursor mode to 'aperture' I get a read out of S/N for the area inside the aperture. So I can run the cursor over the preview of the mix and should get a direct read of S/N in any critical areas, so I could perhaps even fine-tune the %mix to give the optimal blend for particular areas of interest in the image - faint spiral arms, for example. At the least, it would be interesting to have comparative S/N numbers to supplement the Mk. 1 eyeball.
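The aperture readout itself amounts to something like mean-over-standard-deviation inside a small region. Exact definitions differ between programs, so this is only a rough stand-in (a hypothetical formula, not Maxim's actual one):

```python
import numpy as np

def aperture_snr(image, y, x, r=10):
    """Crude S/N estimate in a circular aperture: mean / std of the pixels.
    A stand-in for an aperture readout tool; real software may first
    separate the sky background from the object."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - y) ** 2 + (xx - x) ** 2 <= r ** 2
    patch = image[mask]
    return patch.mean() / patch.std()

rng = np.random.default_rng(1)
img = 50.0 + 5.0 * rng.standard_normal((200, 200))  # flat patch, S/N near 10
print(aperture_snr(img, 100, 100))
```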

Adrian


It would be handy if there was a measure that applied to the whole frame that could be used to compare SN of images, but I don't think it works that way. I think it's only meaningful to compare SN of small regions in each image; hence the aperture tool in Maxim that's used for it.

I haven't got AstroArt but I seem to remember something about SNR in the image statistics tool or window.

Adrian



Not sure that a global SN ratio is needed. It's in regions of low signal that it matters, surely?

Olly



That was my conclusion also. 'Global' SN doesn't really mean anything as SN varies hugely across the image. As you say - what matters is to be able to compare SN in the critically low signal regions. I was just trying to imagine some single measure that would somehow represent SN in those critical areas across the whole image, as a shortcut to sampling different areas manually in each image ..... though that wouldn't really be much of a chore in practice.

I tried adding a batch of 30 sec exposures to a batch of 150 sec exposures of the same object (I know they're short .... but I'm 15 miles from Heathrow!!) and though the shorter exposure stack made little or no improvement to SN, nor did it seem to 'dilute' the higher SN of the longer exposure batch.

Adrian


OK, latest update; Rik over on PAIG has said that he always uses a syn L from his RGB data to add to the L data and that Pixinsight can weight it appropriately using its noise analysis. I don't know how to do that but I need to find out! What is more, he says that it produces a better SN value than lum alone.

Rather an exciting discovery. I've raised this once or twice before but never had much reaction. From now on I'll be using a Syn L to boost my true L every time.

Olly


Replying to Olly's point that a global S/N ratio isn't needed: if you haven't done any non-linear stretching, then the S/N ratio will be the same everywhere.

The 'professional' way of doing this would be to match the photometric zero-point of the frames (basically you need the ratio of counts in the same objects on different frames). If you then linearly scale each image to have the same zero-point, and then measure the noise in a blank area of sky, you can combine them optimally by weighting (i.e. multiplying) each image by the inverse of the scaled noise (I can never remember if it is the variance or the sqrt of the variance, but you get the idea) and summing them up. This will give you the best SN in your stack.

So, to take another example: if you have a very short noisy exposure and a nice noise-free long one, and you weight this way, then you will get a very small, but non-zero, improvement in S/N by adding in the short exposure frame.
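The recipe above can be sketched in a few lines of numpy. The exposure and noise figures here are invented, and the frames are assumed already scaled to the same photometric zero-point:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
signal = 100.0

long_exp = signal + 5.0 * rng.standard_normal(n)    # deep frame, sigma = 5
short_exp = signal + 50.0 * rng.standard_normal(n)  # short frame: sigma = 50
                                                    # after zero-point scaling

# Inverse-variance weights, w proportional to 1/sigma^2.
w_long, w_short = 1 / 5.0**2, 1 / 50.0**2
combo = (w_long * long_exp + w_short * short_exp) / (w_long + w_short)

snr_long = long_exp.mean() / long_exp.std()
snr_combo = combo.mean() / combo.std()
# With these weights the S/N values add in quadrature, so the noisy short
# frame gives a small but non-negative gain, as described above.
print(snr_long, snr_combo)
```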

NigelM



I've done it a few times in the past (without any weighting) and it does seem to give a subjective improvement to the luminance image. One thing I have noticed is that it can of course change the relative luminosity of different coloured stars if the RGB stack is not well colour balanced. I particularly noticed that once when I added in the R and G images to the lum. stack, but omitted the blue images because of their poorer quality. No surprise then when blue stars came out fainter!

The only fly in the ointment in all this is if you generally shoot RGB binned, at a lower resolution than Luminance. That complicates things. I wonder if there could still be a benefit from adding upscaled RGB to boost the SN of the luminosity image, at the cost of some loss of resolution? On the other hand, if adding full-resolution RGB made a significant improvement to the luminance stack, perhaps it strengthens the case for shooting RGB at 1x1 in the first place.

Adrian


I wouldn't use binned RGB as luminance because it would wreck the resolution. However, if really battling noise I suppose the binned synthetic lum could be selectively blended into areas of low detail, faint signal and high noise. I did once shoot binned luminance for this purpose but had to be careful where I applied it.

Olly


So, playing devil's advocate .... is it worth shooting RGB unbinned then? Sure, it would take longer, but you get to use the data twice, maybe even save some time in Luminance exposures if you want. Some imagers do seem to swear by RGB only.

Adrian



Wouldn't the time be better spent capturing real Lum?


I never bin RGB for myself. I'll do it if guests really want to in order to work a little faster, but it deprives me of one of my favourite techniques, which is to remove lum from the stars and use RGB only, ideally given a special 'stars only' stretch to keep them small, colourful and not over-exposed. If you bin the RGB the stars will be naff and will need the L layer to clean them up. The best stars, without a doubt, are RGB only, but they must be in bin 1 (assuming a focal length which is best in bin 1, of course).

This thread makes me want to try a little less lum and relatively more RGB, from which I'll extract a syn L to add to the real L. It will only be a refinement at best but it's worth a bit of playing, I think.

Olly


Very interesting thread :) I've been debating whether to bin RGB 2x2 or not. With the MN190 and 214L+ I find the seeing generally limits me to 2-3 arcseconds or about 2 pixels and 2x2 binned subs seem pretty much as good as unbinned. So I've been using unbinned L and 2x2 binned RGB plus unbinned Ha, for galaxies. Then again would I be better with 2x2 binning everything in these conditions? Would this give me better S/N ratio for a given total time than unbinned?


Archived

This topic is now archived and is closed to further replies.
