
What are your views on mixing exposure times in stacks?


BrendanC


I ask because with my trusty old DSLR, I knew exactly how the camera behaved in different parts of the sky. To the (hang about, has a think, sticks tongue out) East, for example, I know I get more light pollution from the local village, and as I go towards the zenith this reduces very quickly.

So I could take, say, 120s exposures when shooting East, and on the same night do 180s facing West where there's less light pollution; and as I went up, I could increase from 120s to 180s and even 240s. Provided I calibrated like with like, using the correct darks, everything was fine.

HOWEVER, I'm still wrestling with getting my ASI1600 to behave with LRGB, and I'm wondering whether mixing durations like this just doesn't work with a more sensitive camera.

So, to mix (with correct calibrations, to get the most exposure), or not to mix (to keep everything consistent)? Your thoughts?


I have mixed 30s and 60s subs, and also two different gains, in a stack, and I don't think there were any issues.

Normalization did its thing and less than 1% was clipped in the end with sigma clipping, so I don't think there were any issues in that case.


Mixing sub length is absolutely fine and, for brighter objects, quite common. Given you are in the south-east UK, I am guessing you do not have a particularly dark sky, so long subs are probably not really necessary. I live in Bortle 5/6 and use 2-minute subs, but I could easily use 1 minute - just lots more data to process. You will not gain much from longer exposures in terms of S/N ratio unless in dark skies or narrowband. (Good video regarding exposure lengths below.)

https://www.google.com/search?q=dr+robin+glover+astrophotography&oq=dr&aqs=chrome.5.69i60l2j69i57j69i61j69i60j35i39l2j46i199i433i465i512.2130j0j4&client=tablet-android-asus-tpin&sourceid=chrome-mobile&ie=UTF-8#

 


It seems to me that you should make a stack of the longer subs, then a stack of the shorter ones, then combine the stacks with a weighting equivalent to their total exposure time. I'm happy to be corrected on this. Where's Vlaiv? 😁

Olly


@Clarkey I'm in a Bortle 4 zone, which isn't bad. I'm still trying to decide what the best exposure times are. Thanks for the link - I've seen the video before and it's excellent. I also have SharpCap Pro, which I intend to use for its Smart Histogram feature some day soon.

@ollypenrice Yes, that would be a good idea I guess, but then it would also come down to their respective quality scores, I suppose.


If you use shorter subs on one side of the pier because of light pollution, you should ask yourself if these subs really add to the image. Light pollution itself just raises the background level, but if you use a shorter exposure because of it, you record fewer photons from the main target. And because of the light pollution, those subs will also have more noise. The increased background is removed in processing, but the noise level is not, and no extra photons from the main target are added.

What I would do is to stack the short subs and long subs separately, as Olly suggested. Then I would compare the long-subs master image to the combined image. If there is no real improvement, I would probably not bother imaging on that side of the pier. To get the most out of a night, you can always have two targets on the same side of the meridian. Start with the target which passes the meridian first, and collect data on it until target no. 2 comes into view. Do this over multiple nights until you have enough data per target.

 


7 hours ago, wimvb said:

What I would do is to stack the short subs and long subs separately, as Olly suggested. Then I would compare the long-subs master image to the combined image. [...]
 

When combining data from entirely different runs, that's what I do. I stretch both stacks separately and to the same level, getting both backgrounds to the same value. I then co-register them and put one on top of the other in Photoshop layers. By moving the opacity slider and looking at the noise level in the blend, I can weight it to give the cleanest result. With the opacity at whatever it is at the point of least noise, I just flatten and save.

Olly

 


9 minutes ago, ollypenrice said:

With the opacity at whatever it is at the point of least noise, I just flatten and save

The opacity functions as a weight which is applied:

Stack1 × Opacity + Stack2 × (1 - Opacity)

In the case of a mostly uniform level of light pollution across the sky, the opacity should end up about midway. With the Moon about, or high clouds in one part of the sky, I imagine it could deviate. But if the opacity always ends up close to 0 or 1, the question arises whether it wouldn't be better to avoid imaging entirely in the part of the sky whose stack gets the lowest opacity.
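As a sanity check, here is a minimal numpy sketch of that weighted blend (all numbers illustrative: two background-matched stacks of the same scene, with noise levels 2.0 and 2.5). Sweeping the "opacity" finds the same least-noise weight that the slider finds by eye:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two co-registered, background-matched stacks of the same scene,
# differing only in noise level (illustrative sigmas of 2.0 and 2.5).
truth = np.full((512, 512), 100.0)
stack1 = truth + rng.normal(0.0, 2.0, truth.shape)
stack2 = truth + rng.normal(0.0, 2.5, truth.shape)

# Sweep the opacity p in: blend = p * stack1 + (1 - p) * stack2
opacities = np.linspace(0.0, 1.0, 101)
noise = [np.std(p * stack1 + (1.0 - p) * stack2) for p in opacities]

best = opacities[int(np.argmin(noise))]
print(f"least-noise opacity = {best:.2f}")
# Expected: 2.5^2 / (2^2 + 2.5^2) = 0.61, i.e. more weight on the cleaner stack.
```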


On 01/05/2022 at 09:49, wimvb said:

The opacity functions as a weight which is applied: Stack1 × Opacity + Stack2 × (1 - Opacity). [...] But if the opacity always ends up close to 0 or 1, the question arises whether it wouldn't be better to avoid imaging entirely in the part of the sky whose stack gets the lowest opacity.

I don't follow this, Wim. The opacity in Ps Layers just decides the relative weighting of the two stacks. If one stack is better than the other, this shows in real time as you move the slider. Sometimes one stack can be almost useless, in which case the best result will have that image's opacity set close to zero. Note: both stacks have been stretched to the same background value - you can't do this with linear stacks.

Olly


2 minutes ago, ollypenrice said:

I don't follow this, Wim. The opacity in Ps Layers just decides the relative weighting of the two stacks. [...]

That's what I meant. I guess I was just not clear about it.


Interesting feedback, thanks. I must admit I hadn't really considered not using half the sky! But doing two objects in the 'good' half makes sense.

Having said which, you could class the lower half as 'bad' too, which leaves me with just a quarter of the sky.


A bit late to the party, but here are my views (and the supporting math).

I think there are at least two types of mixing long and short exposures. I completely support both, but I think both currently lack proper software support.

The first approach is using "filler" subs: we take several very short subs and use data from those to replace the over-exposed/saturated parts of the long exposures. Given that we are working with a very strong signal here (otherwise it would not saturate pixels), there is little concern about SNR.

My view is that this should be done at the linear stage, in a particular way: stack the short subs, register that stack against each of the long subs, replace the saturated pixels, and then work with the long subs as we normally would. I'm not aware of any software solution that does this. We can instead replace over-exposed pixels in the final stack (see the sketch after this list), but that has two drawbacks:

1. We don't know exactly which pixels are over-exposed, because we used interpolation to align the subs for stacking, and that changes pixel values (resamples the image) - so we would need to replace all pixels above some threshold, like 90% of the maximum signal.

2. Resampling algorithms work best when the data is "natural", i.e. not clipped/saturated, so aligning subs that contain clipped signal is not an optimum solution.
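For illustration, here is a minimal numpy sketch of that fallback - replacing above-threshold pixels in the final long stack. The full-well value, threshold and exposure ratio are made-up numbers, and both stacks are assumed to be already co-registered and in the same linear units:

```python
import numpy as np

def fill_saturated(long_stack, short_stack, full_well=65535.0,
                   exposure_ratio=4.0, threshold=0.9):
    """Patch (near-)saturated pixels of a long-exposure stack with scaled
    data from a short-exposure stack of the same, co-registered field.
    threshold=0.9 mirrors the 'above 90% of max' criterion, since
    interpolation during alignment smears the exact clipping values."""
    filled = long_stack.astype(float)
    mask = filled >= threshold * full_well
    # Short subs collect exposure_ratio times fewer photons per pixel,
    # so scale them up to the long-exposure signal level.
    filled[mask] = short_stack[mask] * exposure_ratio
    return filled

# Toy example: a 120s stack with one clipped pixel, patched with 30s data.
long_stack = np.array([[1000.0, 65535.0], [30000.0, 50000.0]])
short_stack = np.array([[250.0, 20000.0], [7500.0, 12500.0]])
print(fill_saturated(long_stack, short_stack))
# [[ 1000. 80000.]
#  [30000. 50000.]]  <- only the clipped pixel is replaced
```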

The second approach is straightforward mixing of different exposures in a single stack.

This one is tricky, as there is no optimum solution in practice. There is an optimum solution in principle, but we can never find it, for lack of information. No currently implemented algorithm tackles this problem properly in my view, with the possible exception of the "Entropy Weighted Average (High Dynamic Range)" method used in DSS - but I haven't read the paper on that algorithm, so I'm not really sure what it does or whether it's any good for this particular case. It does say on the DSS technical info page:

Quote

It is particularly useful when stacking pictures taken with different exposure times and ISO speeds, and it creates an averaged picture with the best possible dynamic. To put it simply it avoids burning galaxies and nebula centers.

Here is what we need to know in order to see the extent of the problem:

1. Subs of different exposures will have different SNRs for the same signal level.

This is fairly easy to see, both with our own eyes and with the math. It is also the reason, as @ONIKKINEN pointed out, that we normalize our subs (we make them have the same signal level, regardless of the noise present in them).

2. There is no such thing as one SNR for an entire sub. SNR is a per-pixel metric rather than a per-image one. This is easy to see if we compare the SNR of any part of a nebula with the SNR of the background. For the nebula there will be some signal, so S > 0; noise is always present, so S/N > 0. But for the background there is no signal, so the signal is 0 and S/N is also zero regardless of the noise level (zero divided by any non-zero number is still zero). We already have two different SNR values. By changing the signal (like spiral arms and core - one is fainter, the other brighter) we again get different SNRs. In the end, we pretty much have a unique SNR for every pixel.

3. A straightforward average of two values with different SNRs will produce a suboptimal result.

Imagine we have two samples (you can extend this to subs, but subs have many more samples - for simplicity let's look at just two) - one with SNR 5 and the other with SNR 4. We have normalized our subs so they have the same signal - say that signal is set at 10e.

This means the first has 10e/2e = SNR 5 (there is 2e of noise for 10e of signal) and the second has 10e/2.5e = SNR 4 (there is 2.5e of noise for 10e of signal). Let's calculate the SNR of a simple average.

For the signal we will have (unsurprisingly): (10 + 10) / 2 = 10e

For the noise we will have: sqrt(2^2 + 2.5^2) / 2 = sqrt(4 + 6.25) / 2 = sqrt(10.25) / 2 = ~1.6

So the final SNR is 10 / ~1.6 = ~6.25

But is this the best solution? We can actually calculate the best solution with a bit of math; we just need to think about the problem. Given that the signal is the same in both samples, its average (regardless of the averaging coefficients) is going to be that same value. This means the best SNR is the one that minimizes the averaged noise.

We have two samples, and in order to average with some weights, those weights need to add up to 1.

If one of the weights is p, where p is in (0, 1), then the other is simply 1 - p.

The noise expression will then be:

sqrt((2 * p)^2 + (2.5 * (1 - p))^2), and we want the minimum of that expression. Let's tidy it up a bit:

sqrt(4 * p^2 + (2.5 - 2.5 * p)^2) = sqrt(4 * p^2 + 6.25 - 12.5 * p + 6.25 * p^2) = sqrt(10.25 * p^2 - 12.5 * p + 6.25)

That is the expression we need to minimize. How do we minimize it? We find the first derivative and see where it is equal to zero.

The first derivative is (20.5 * p - 12.5) / (2 * sqrt(10.25 * p^2 - 12.5 * p + 6.25))

For that expression to be equal to zero, 20.5 * p - 12.5 must be equal to zero (we can't divide by zero, so the sqrt expression in the denominator can't be the source of the zero).

From that we have p = 12.5 / 20.5 = 0.60975..., or roughly 0.61.

The other coefficient is then 1 - 0.61 = 0.39.

We can again calculate the SNR with these coefficients:

signal: 10 * 0.61 + 10 * 0.39 = 10 (not much surprise there)

noise: sqrt((2 * 0.61)^2 + (2.5 * 0.39)^2) = sqrt(1.4884 + 0.950625) = ~1.56174

The total SNR is then 10 / 1.56174 = ~6.4

By choosing the right coefficients we raised the resulting SNR from 6.25 to 6.4 in this example. The bigger the difference in SNR between the samples we stack, the bigger the difference between the regular average (with 1/N coefficients) and the optimally weighted one.
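Here is the same arithmetic in a few lines of numpy, for anyone who wants to play with it. It assumes, as above, that the noise in the two normalized samples is independent; the optimal weights are then just the inverse-variance weights, w ~ 1/noise^2, which is where the 0.61 comes from:

```python
import numpy as np

signal = 10.0                      # same normalized signal in both samples
sigma = np.array([2.0, 2.5])       # per-sample noise -> SNRs of 5 and 4

def stacked_snr(weights):
    """SNR of a weighted average of the two samples (independent noise)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # weights must add up to 1
    noise = np.sqrt(np.sum((w * sigma) ** 2))
    return signal / noise                    # weighted signal stays 10e

print(stacked_snr([0.5, 0.5]))     # plain average            -> ~6.25
w_opt = 1.0 / sigma ** 2           # inverse-variance weights
print(w_opt / w_opt.sum())         # -> [0.61..., 0.39...]
print(stacked_snr(w_opt))          # optimally weighted       -> ~6.40
```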

So what is the problem?

The problem is that all of this was per pixel. Each pixel in each sub has its own SNR, so we can't use simple per-image weights - the usual approach in stacking - because each pixel has a different SNR.

On top of that, we don't know the SNRs, even approximately, prior to stacking. Even after stacking we only have an estimate of each pixel's SNR, not the true value, simply because we don't have the true pixel values - only values polluted by noise. The more subs we stack, the closer to the true value we get, but we never get 100% of the way there.
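To illustrate that last point, here is a tiny simulation under the simplest possible assumption - pure Gaussian noise around a known true pixel value. The stacked estimate tightens roughly as 1/sqrt(N), but the error never quite reaches zero:

```python
import numpy as np

rng = np.random.default_rng(1)
true_value, sigma = 10.0, 2.0      # true pixel signal and per-sub noise

for n_subs in (4, 16, 64, 256):
    # 10000 repeated stacking experiments, each averaging n_subs subs
    subs = true_value + rng.normal(0.0, sigma, size=(n_subs, 10000))
    stacked = subs.mean(axis=0)
    err = np.abs(stacked - true_value).mean()
    print(f"N = {n_subs:3d}: mean |error| = {err:.3f} "
          f"(theory ~ {sigma / np.sqrt(n_subs) * np.sqrt(2 / np.pi):.3f})")
```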

The closest thing to an optimal solution of this problem that I've seen is actually an algorithm that I developed (using, in part, analysis similar to the above), and I've presented that algorithm, together with results, here on SGL.

 


I'm sorry Vlaiv, but when I read your posts, they're dazzlingly brilliant and all I can think to say is 'wow'.

My small contribution is that I have used the Entropy Weighted Average algo in DSS, with mixed results. Sometimes it was OK, but for complex images such as the Rosette Nebula, with lots of stars against a rich nebula background, it created very noticeable halos around the stars. A pity, because as you say it seems ideal for this situation (which is also why I used it). Also, DSS doesn't seem to be very actively developed any more.

I'll hand this thread over to folk who want to go into this deep dive, but before I do: have you ever worked with the developers of stacking and/or processing software? If not, you should - it seems you have a lot to offer, especially in relation to your algorithms and insights. I've been experimenting with Siril recently and am finding it measures up very nicely alongside the pay-for APP, and getting your ideas into an open-source solution such as Siril would be wonderful. Same with processing: I'm a big fan of StarTools, because it's so elegant and algo-driven as well as affordable, and I wonder what you could do to improve that too. Just an idea!


20 hours ago, BrendanC said:

Interesting feedback, thanks. I must admit I hadn't really considered not using half the sky! But doing two objects in the 'good' half makes sense. [...]

I'm not yet persuaded by this 'half of the sky' theory. The sky is some distance away :D and yet we're asked to believe that moving the telescope, what, 60cm to the right makes a radical difference to the LP present in the subs.

Really? I'm using LP here to mean skyglow, and I accept that there can also be sources of LP very close to the telescope. However, if they are sufficiently close to the telescope for a 60cm relocation to the right to make a difference, they must surely be close enough to be excludable by some kind of small light shield?

It seems to me that any LP changed by so small a movement of the OTA (which, after all, remains perfectly parallel with its former position after the flip) must be very close, and possibly within the observing site itself.

I suppose there are other possibilities, such as one source of LP being switched off at a time very similar to the time of your flips, but some coincidence is needed for that to be the case. Or perhaps you unwittingly begin your runs before astronomical darkness? You probably don't, and this would produce a gradual rather than a sudden improvement, though it would appear as a sudden one if you compared a 'first half of the night' stack with a 'second half of the night' stack.

Search for that light source and see if you can block it.

Olly


All I know is that, from experience with shooting to one side of my garden, where there is a buildup of local housing and streetlights, I could never expose as long with my DSLR as I could when shooting to the other side, which is more fields. The same goes for shooting close to the horizon versus nearer the zenith, though I understand that's also about air mass. I'm not talking 60cm here! There's obviously a continuum from one extreme to the other - it doesn't suddenly step up or down every 60cm, right?

Now that I'm using a much more sensitive camera, it would appear to have exposed a light leak in my focuser, which I think I've fixed. But that still leaves open the question regarding differences in sky conditions.

One way to test this, I've been thinking, is SharpCap's Smart Histogram feature. I have in mind a plan to create a grid of coordinates, have the scope slew to them, measure the sky brightness at each, and calculate an optimum exposure time. I might even do that tonight, and I'll share my results here.
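Something like this is what I have in mind - a rough Python sketch only: the sky measurement is a placeholder for whatever the capture software reports, and the exposure formula is just the commonly quoted form of Dr Glover's criterion (stop lengthening subs once sky shot noise swamps read noise), not SharpCap's actual Smart Histogram implementation:

```python
import numpy as np

def altaz_grid(alt_min=20, alt_max=80, alt_step=15, az_step=30):
    """Grid of (altitude, azimuth) pointings covering the visible sky."""
    return [(alt, az)
            for alt in range(alt_min, alt_max + 1, alt_step)
            for az in range(0, 360, az_step)]

def suggested_exposure(sky_e_per_s, read_noise_e=1.7, c=10.0):
    """Sub length beyond which read noise is negligible next to sky noise:
    t = c * RN^2 / sky_rate. c = 10 corresponds to accepting roughly a 5%
    noise penalty versus one giant sub; RN = 1.7e is a typical ASI1600
    figure at moderate gain. Both are assumptions, not SharpCap's numbers."""
    return c * read_noise_e ** 2 / sky_e_per_s

for alt, az in altaz_grid():
    # Placeholder: slew to (alt, az), take a test frame, and measure the
    # sky background rate in e-/pixel/s. Faked here as brighter low down.
    sky_rate = 0.5 + 2.0 * np.exp(-alt / 30.0)
    print(f"alt {alt:2d}, az {az:3d}: sky {sky_rate:.2f} e-/px/s, "
          f"suggested sub = {suggested_exposure(sky_rate):.0f} s")
```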


9 minutes ago, BrendanC said:

All I know is that, from experience with shooting to one side of my garden, where there is a buildup of local housing and streetlights, I could never expose as long with my DSLR as I could when shooting to the other side, which is more fields. [...]

Ah right - I thought you were finding a sudden change in LP after the flip. If it's a gradual change as the object moves west then, sure, eastern LP will diminish.

Olly


Ah, right - I've just re-read my original post and I can see that when I say 'shooting East' or 'shooting West' it could mean east/west of the pier, as in pre/post flip. Soz. I'm using my loose, decidedly amateurish terminology here to mean 'shooting over the local village where they've built a load of matchbox houses for the plebs' and 'shooting over the once-bucolic countryside which was all fields before the plebs came'.

