
3 or 5 min subs?!


smr


Is there any benefit in going for 5 minute subs? My data isn't clipping on 3 minute subs (the data is off the LHS of the histogram in APT) and I'm shooting with a 2600MC Pro and Z73 in Bortle 3/4.

But I'm wondering if 5 min subs would be better, as I'm imaging the Iris Nebula and there's lots of dust there.

Attached image to show histogram 

Iris.png


The last time I imaged the Iris was with my astro modified DSLR and I used ISO 800 at 180s exposures and it came out fine, picking up lots of the black dust.

With the sensitivity of your ASI2600MC, you should be fine using 180s, and if the focus goes off or there's high cloud then you're losing less exposure time for each sub. ;)


8 minutes ago, Budgie1 said:

The last time I imaged the Iris was with my astro modified DSLR and I used ISO 800 at 180s exposures and it came out fine, picking up lots of the black dust.

With the sensitivity of your ASI2600MC, you should be fine using 180s, and if the focus goes off or there's high cloud then you're losing less exposure time for each sub. ;)

Cheers Martin, your Bortle 2 skies must be stunning. I'm stunned by Bortle 3/4 skies - such a difference to 5/6 at home. 


Just now, smr said:

Cheers Martin, your Bortle 2 skies must be stunning. I'm stunned by Bortle 3/4 skies - such a difference to 5/6 at home. 

They are, naked eye Milky Way in detail is great. The downside is that at this latitude I have to stop imaging between May & August because we get Twilight All Night.

I guess there's pros & cons wherever you live. :D

Enjoy those skies!


Just measure it.

Your exposure length needs to be such that the background / LP noise is at least 3-5 times larger than the read noise.

Simple as that.

Take one sub - measure the LP level, convert to electrons, take the square root and compare to the published read noise value for the gain you are using.

Increase the sub length accordingly (depending on the measured ratio - mind the difference between signal and noise, which is the square root of the signal).
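As a rough sketch of that measurement in Python (the helper name and the numbers below are my own placeholders, not values from this thread):

```python
import math

def lp_to_read_noise_ratio(background_adu, e_per_adu, read_noise_e):
    """Convert a measured background level to electrons, take the square
    root to get the LP (shot) noise, and compare it to the read noise."""
    background_e = background_adu * e_per_adu   # ADU -> electrons
    lp_noise_e = math.sqrt(background_e)        # shot noise of the background
    return lp_noise_e / read_noise_e

# Hypothetical sub: 800 ADU background, 0.25 e/ADU, 1.5 e read noise
ratio = lp_to_read_noise_ratio(800, 0.25, 1.5)
print(f"LP noise is {ratio:.1f}x the read noise")  # aim for at least 3-5x
```

If the printed ratio is already 3-5 or more, your subs are long enough by this rule; if it is below, scale the exposure up until it clears the threshold.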


5 minutes ago, vlaiv said:

Just measure it.

Your exposure length needs to be such that background / LP noise is at least x3-5 times larger than read noise.

Simple as that.

Take one sub - measure LP levels, convert to electrons, take square root and compare to published read noise values for the gain you are using.

Increase sub length accordingly (depending on measured ratio - mind the difference between signal and noise which is square root of signal).

How do I go about this? Have you heard about the 'Brain' feature in Sharpcap? I've just learnt of it and started reading about it - that might help me by the sound of it 


22 hours ago, smr said:

How do I go about this? Have you heard about the 'Brain' feature in Sharpcap? I've just learnt of it and started reading about it - that might help me by the sound of it 

As far as I know - yes, Sharpcap has a feature (not sure what it is called) that can be used to determine the necessary exposure length.

If you want to do it yourself, the procedure is rather simple:

- take one of your raw subs and calibrate it properly.

- convert ADU values to electron values - you can read the e/ADU value from the FITS header or from data published by the camera manufacturer. This is a simple multiplication of pixel values by a constant.

- select a patch of sky without any target / nebulosity, as empty as possible (the odd faint star here or there should not be a problem). Measure the median value in that patch of sky. If the patch is completely void of objects you can also use the mean, but the median is better as it will remove the impact of a few stars if there are any.

- multiply 25 by the square of the read noise at your gain setting and divide by the background value above - this gives you the factor by which to multiply your current exposure length to get the proper one.

Example:

You have a sub that you debayered, since the camera in question is a 2600MC, exposed for 3 minutes at gain 100. The e/ADU value for this gain is ~0.25, so you multiply the pixel values by 0.25 to get the electron count (use the green channel).

Now you measure the background value and find that it is 200e; you also see from the ZWO graphs that there is 1.5e of read noise at gain 100.

The exposure length multiplier will be 25 * 1.5^2 / 200 = 56.25 / 200 = 0.28125

You exposed for 3 minutes, or 180s, when in fact it was enough to expose for 180 * 0.28125 = 50.625s, or ~50s.

(If you get a number greater than 1, that means a longer exposure than you used; smaller than 1 means a shorter exposure than you used.)
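The procedure and the worked example above condense into a few lines of Python (the function name is my own; the numbers are the ones from the example):

```python
def exposure_multiplier(read_noise_e, background_e, target_ratio=5):
    """Factor to scale the current sub length by so that the LP noise
    becomes target_ratio times the read noise (x5 gives the 25 above)."""
    return (target_ratio ** 2) * (read_noise_e ** 2) / background_e

# Values from the worked example: 1.5 e read noise, 200 e background
mult = exposure_multiplier(read_noise_e=1.5, background_e=200)
print(mult)        # 0.28125
print(180 * mult)  # 50.625 -> ~50 s is enough
```

Swapping `target_ratio=3` in for the x3 rule shrinks the required exposure further, at the cost of slightly more read-noise contribution.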


2 hours ago, vlaiv said:

As far as I know - yes, Sharpcap has the feature (not sure what it is called) - that can be used to determine necessary exposure length.

If you want to do it yourself - procedure is rather simple:

- take one of your raw subs and calibrate it properly.

- convert ADU values to electron values - you can read e/ADU value from fits header or from data published by manufacturer of the camera. This is simple multiplication of pixel values with constant.

- select patch of sky without any target / nebulosity and as empty as possible (odd faint star here or there should not be a problem). Measure median value in that patch of the sky. If patch is completely void of objects you can also use mean value, but median is better as it will remove impact of few stars if there are any.

- take 25 and multiply with squared value of read noise at given gain settings and divide with above value - this will give you factor needed to multiply current exposure length to get proper one.

Example:

You have sub that you debayered since camera in question is 2600mc, that you exposed for 3 minutes at gain of 100. e/ADU value for this gain is ~0.25, so you multiply pixel values with 0.25 to get electron count (use green channel).

Now you measure background value and you find that it is 200e, you also see that there is 1.5e of read noise at gain 100 from ZWO graphs.

Exposure length multiplier will be 25 * 1.5^2 / 200 = 56.25 / 200 = 0.28125

You exposed for 3 minutes or 180s when in fact it was enough to expose for 180 * 0.28125 = 50.625s or ~50s

(if you get number greater than 1 - that will mean longer exposure than you used, if you get smaller than 1 that will mean shorter exposure than you used).

Is there any benefit to the OP exposing for 180s over 50s? Is 50s the minimum amount of time to expose, and then anything longer increases SNR?


12 hours ago, Pitch Black Skies said:

Is there any benefit to the OP exposing for 180s over 50s? Is 50s the minimum amount of time to expose, and then anything longer increases SNR?

Do bear in mind that I just gave an example there - I'm not saying go for 50s over 3 minutes - unless you calculate so.

So what is the difference?

The only difference, if you image for the same total amount of time (like 1h in 60 one-minute exposures, 1h in 30 two-minute exposures, or 1h in 12 five-minute exposures), is the level of read noise.

All other noise sources just add up with integration time - they don't care about the number of subs - they only care about the total integration time.

Read noise is the only one that depends on the number of subs. For the same imaging time, if you use longer subs you will need fewer of them - and you will add less read noise.

Noise adds in a particular way - not like signal, which you just add together as A+B. Noise sources add like vectors, so it goes like this: square_root(A^2 + B^2)

or the square root of the sum of their squares.

This type of addition has an important property: if the things you are adding are significantly different in magnitude, the result is very close to the larger one.

Here is an example - say we have 1 and 10. If we add them regularly we get 1+10 = 11.

There is a noticeable difference between 10 and the result of 11.

Let's now add 1 and 10 the way noise adds - we will have: square_root(10^2 + 1^2) = sqrt(100+1) = sqrt(101) = ~10.05 (or more precisely 10.049875...)

In the first case we had a 10% increase, but in the second, while adding the same two numbers, the increase was only about 0.5%.

The larger the difference between two noise sources, the less impact the smaller one has.
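A quick numerical check of this (a throwaway sketch, not anything from the thread):

```python
import math

def add_in_quadrature(a, b):
    """Independent noise sources add as the square root of the sum of squares."""
    return math.sqrt(a ** 2 + b ** 2)

print(10 + 1)                    # plain signal addition: 11, a 10% jump over 10
print(add_in_quadrature(10, 1))  # noise addition: ~10.0499, only ~0.5% over 10
```

The second print makes the point: the smaller term barely moves the total once it is an order of magnitude below the larger one.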

The point of choosing a particular exposure length is to make the read noise smaller than the other noise sources. In particular, the approach above targets LP noise - that is why we measure background levels: to see how much LP there is (and, consequently, what the associated noise is).

There is a point where we can no longer distinguish between noise levels. This is found to be when the read noise is 3 to 5 times smaller than the LP noise.

Just as an exercise, we can calculate the total resulting noise increase in both cases:

1 and 3, for example (LP noise three times larger): sqrt(9+1) = sqrt(10) = ~3.1623

1 and 5: sqrt(25+1) = sqrt(26) = ~5.1

In the first case the increase is 5.4%, while in the second case it is 2%.

I usually like to use the x5 rule rather than x3 - just to be safe - although a ~5.5% increase in noise is imperceptible.

Here are examples:

[Attached image: two noise patches compared side by side]

Two different noise patches in the image above - the right one has a 25% higher noise level than the left. You should be able to tell the difference between the two.

[Attached image: the same comparison at a smaller noise difference]

Same thing, but here one patch has 5.5% higher noise than the other. You can't really tell the difference (maybe, just maybe).

With a 2% difference there is definitely no visual difference.

So there you go - that is how we select subs.

If you calculate your sub to be short, there is no drawback in going longer as far as image quality goes. (You will have fewer files, which is good for storage, but some algorithms like more data to work with, so get at least 20-30 subs in any case. Also, with longer subs, if something goes wrong you throw away more data.)


Just experiment, really, based on what you're imaging and your conditions (no one else can really advise on the latter, only on their own circumstances). I see diminishing returns on reaching the 3 minute plus mark in LRGB due to light pollution. Shooting narrowband, on the other hand, does benefit from subs of around 5 minutes plus. You could always take more images rather than extend the sub duration. I tried the Iris but couldn't do more than 2 minutes per image from a Bortle 7, so I only got the initial dust around the bright central star.


On 01/04/2022 at 22:08, smr said:

Is there any benefit in going for 5 minute subs ? My data isn't clipping on 3 minute subs (data is off the LHS of the histogram in APT) and I'm shooting with a 2600MC Pro, Z73 in Bortle 3/4. 

But I'm wondering if 5 min subs would be better as I'm imaging the Iris Nebula - and there's lots of dust there

Attached image to show histogram 

Iris.png

Honestly, I doubt that you even need 3 min subs; 2 min would be my pick and should bury the read noise at unity gain on that camera under that sky class. You just wouldn't need longer unless you want to reduce file storage. However, as there is little disadvantage to going longer with that camera, you essentially need to choose between ease of guiding and storage.


It would take you just two minutes to try a 5 minute sub over a 3 minute! The calculations will take more than that...  :D However, I'm not going to argue against understanding the theory since science advances using both theory and experiment. The best test in terms of simplicity and time economy would be to compare 3x5 with 5x3, I'd have thought?

Olly


2 hours ago, ollypenrice said:

It would take you just two minutes to try a 5 minute sub over a 3 minute! The calculations will take more than that...  :D However, I'm not going to argue against understanding the theory since science advances using both theory and experiment. The best test in terms of simplicity and time economy would be to compare 3x5 with 5x3, I'd have thought?

Olly

Because humans can be very very poor at judging things.

If I give you two subs - say one of 3 minutes and one of 2 minutes - and ask you the following question:

Imagine the first one being stacked 6 times and the second being stacked 9 times - which stack will have less total noise? Would you be able to tell just by looking at the subs?

The proper way to handle this situation is to use both theory and experiment - but in a correct way. Be sure that you know what you are measuring and why. It is enough to measure a single calibrated sub for the background signal level if you know your e/ADU and your read noise.

In fact, you can do it outside imaging time - on some of your old subs - and adjust the exposure time for future sessions.


I try to make my life simple and stick to 3 min subs for RGB (no filter) and 5 min subs for NB (IDAS NBZ filter) with my ASI2600MC at gain 100 on a RASA 8 at Bortle 2-3. But then I usually go for the faintest of faint objects with that set up. For something bright I would either go for shorter subs or gain 0. My thinking is that I should expose as long as possible to keep the number of subs manageable (and avoid spending half a day stacking), and I would only shorten the exposures if I see too many blown out stars.


@vlaiv I just ran some recent Atik 460 data through that handy formula above. LRGB were 300s subs and Ha was 600s.

If I've done it right, it says my L subs should only be 235s, R subs 1380s, G subs 308s and B subs 631s. The deal breaker for me is the Ha. Median background value of 350 ADU, so 94.5e.

25 x (5e)² = 625, so 625/94.5 = 6.613

So 600s x 6.613 = 3968s subs needed. Really? Where does the 25xRN come from btw? Why 25?


32 minutes ago, david_taurus83 said:

So 600s x 6.613 = 3968s subs needed. Really? Where does the 25xRN come from btw? Why 25?

Yes, really - that is not uncommon. The Atik has smaller pixels and still some read noise (compared to modern CMOS sensors).

In the CCD days it was not uncommon to use half hour long narrowband subs because of the low LP noise that narrowband filters ensure (they block most of the LP spectrum).

Why 25? That number is actually rather arbitrary, but the math is rather simple - noise adds like linearly independent vectors, so as the square root of the sum of squares.

We can calculate the read noise impact over just having LP noise rather easily and turn it into a formula.

Say that the read noise is X and that we have some number N (above I used N = 5 - so x5 higher LP noise compared to read noise).

We need to calculate the total noise - and that is straightforward:

sqrt(X^2 + (N*X)^2) = sqrt(X^2 + N^2*X^2) = sqrt(X^2 * (1+N^2)) = X * sqrt(1+N^2)

The total noise will be X * sqrt(1+N^2), while the increase over just the LP noise will be (X * sqrt(1+N^2)) / (X*N) (total divided by LP alone), or

sqrt(1+N^2) / N

Plugging in a number will tell you how much higher the resulting noise will be. Let's do it for some numbers - 1, 2, 3, 4, 5, 6 - to see what happens:

1 : sqrt(1+1) / 1 = sqrt(2) = 1.414... = 41.4% increase

2: sqrt(1+4)/2 = sqrt(5)/2 = 1.118... = 11.8% increase

3: sqrt(1+9)/3 = sqrt(10)/3 = 1.0541.... = 5.41% increase

4: sqrt(1+16)/4 = sqrt(17)/4 = 1.0308... = 3.08% increase

5: sqrt(1+25)/5 = sqrt(26)/5 = 1.0198... = 1.98% increase

6: sqrt(1+36)/6 = sqrt(37)/6 = 1.0138... = 1.38% increase
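The list above can be reproduced with a short loop (a sketch of the sqrt(1+N^2)/N formula derived earlier):

```python
import math

# Increase of total noise over pure LP noise, when LP noise is N x read noise
for n in range(1, 7):
    increase = math.sqrt(1 + n ** 2) / n
    print(f"x{n}: {(increase - 1) * 100:.2f}% increase")
```

The printed percentages shrink quickly with N, which is the diminishing-returns point made below.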

So you can use any of the above numbers if you accept a certain increase in noise over just having LP noise (the figures above are increases over a perfect camera with zero read noise).

Similarly to imaging time, at some point you enter the domain of diminishing returns.

There is very little improvement between 5 and 6 (which means quite a bit longer subs, as sub duration scales with the square of this number: 36/25 = 1.44, a 44% increase in sub duration), and it is questionable whether it is worth it.

Some people are OK with a ~5.5% increase and use the x3 rule instead of the x5 rule. I like to use the x5 rule - but again, if you can't make your subs long enough, don't sweat it: use as long as is sensible and make up the small SNR loss by adding a few more subs to the stack (if you really need to).


7 hours ago, vlaiv said:

Because humans can be very very poor at judging things.

If I give you two subs - like on 3 minutes and on of two minutes and ask you following question:

Imagine first one being stacked 6 times and second one being stacked 9 times - which one will have less total noise, would you be able to tell just by looking at the subs?

Proper way to handle this situation is to use both theory and experiment - but in a correct way. Be sure that you know what you are measuring and why. It is enough to measure just single calibrated sub for background signal level if you know your e/ADU and your read noise.

In fact - you can do it before imaging time - on some of your old subs and adjust exposure time for future sessions.

Sure, but I suggested a comparison, ideally, of 3x5 and 5x3, though longer comparisons would be better.  Now when I have a stacked dataset I don't measure it, I process it. I judge it on how well it behaves under processing because, at the end of this long process, I'm not going to measure the final image, I'm going to look at it. Along the way I'll find strengths and weaknesses in a dataset. It's not enough simply to spot them or measure them: what I want to know is which defects I can process out and which I can't. That tells me which defects I prefer! :D No defects is rarely an option.

So, as you know, I remain a pragmatist. One of my rigs produces uneven background sky at pixel scale (a scattering of over-dark pixels.) I can fix that. Another rig likes long subs but saturates stellar cores. I can fix that as well. What worries me are defects I can't fix. Experiment lets me find them and avoid them.

Olly


16 hours ago, vlaiv said:

Yes, really - that is nothing uncommon. Atik has smaller pixels and still some read noise (compared to modern CMOS sensors).

In CCD days - it was not uncommon to use half an hour long narrow band subs because of low LP noise that narrow band filters ensure (they block most of LP spectrum).

Why 25? That number is actually rather arbitrary, but math is rather simple - noise adds like linearly independent vectors - so square root of sum of squares.

We can calculate read noise impact over just having LP noise rather easily and turn it into formula.

Say that read noise is X and that we have some number N (above I used 5 as N - so x5 higher LP noise compared to read noise).

We need to calculate "total" read noise - and that is straightforward

sqrt(X^2 + (N*X)^2) = sqrt(X^2 + N^2*X^2) = sqrt(X^2 * (1+N^2)) = X * sqrt(1+N^2)

Total noise will be X * sqrt(1+N^2) while increase over just LP noise will be X * sqrt(1+N^2) / X*N (total divide with just LP) or

sqrt(1+N^2) / N

Plugging in number will tell you how much higher resulting number will be, let's do it for some numbers like 1,2,3,4,5,6 to see what happens:

1 : sqrt(1+1) / 1 = sqrt(2) = 1.414... = 41.4% increase

2: sqrt(1+4)/2 = sqrt(5)/2 = 1.118... = 11.8% increase

3: sqrt(1+9)/3 = sqrt(10)/3 = 1.0541.... = 5.41% increase

4: sqrt(1+16)/4 = sqrt(17)/4 = 1.0308... = 3.08% increase

5: sqrt(1+25)/5 = sqrt(26)/5 = 1.0198... = 1.98% increase

6: sqrt(1+36)/6 = sqrt(37)/6 = 1.0138... = 1.38% increase

So you see you can use any of above numbers if you accept certain increase in noise over just having LP noise (above are increases over perfect camera with 0 read noise).

Similarly to imaging time - at some point you enter domain of diminishing returns.

There is very small improvement between 5 and 6 (which is quite a bit longer sub as sub duration is related to square of this number 36/25 = 1.44 or 44% increase in sub duration) and there is question if it is worth it.

Some people are ok with ~5.5% increase - and use x3 rule instead of x5 rule. I like to use x5 rule - but again - if you can't make your subs long enough - don't sweat it, use as long as sensible, and make up small SNR loss by adding few more subs to stack (if you really need to).

I just don't understand the obsession with focusing on the background levels with little signal. If I'm imaging with an Ha filter I'm after that signal. I don't care what the background is doing. Noise can be dealt with in processing. If I took 1 hour long subs the stars would be big and bloated and the image would probably look washed out. Think I'll stick to 10 minutes!


1 hour ago, david_taurus83 said:

I just don't understand the obsession with focusing on the background levels with little signal. If I'm imaging with an Ha filter I'm after that signal. I don't care what the background is doing. Noise can be dealt with in processing. If I took 1 hour long subs the stars would be big and bloated and the image would probably look washed out. Think I'll stick to 10 minutes!

I see.

It is not an obsession with background levels. Read noise impacts the whole image. The fact that the target is not bright is actually working against you in this case.

If you want to show signal in the image it must be above the noise level, so all we really care about is SNR, or signal to noise ratio.

Here is the most important bit - this is what it is all about: "For a given amount of imaging time, the only thing that makes a difference to the final outcome in terms of SNR between using short and long subs is the level of read noise. If read noise were zero, there would be absolutely no difference between 0.1s subs and 1000s subs adding up to the same total imaging time."

Since there is read noise, we are simply trying to select a sub length that won't make much difference compared to one long single exposure. This depends on the other noise sources in the image. By comparing the read noise to the LP noise (which is a direct consequence of the background level) we can calculate its impact. The background is simply there and we can't do much about it (except use some sort of filtering, if applicable). We focus on it because it lets us calculate the impact of read noise on overall SNR - not because we only care about background levels.
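That claim can be illustrated with a toy stacking model (a back-of-envelope sketch; the sky rate and read noise figures are hypothetical, not measured values):

```python
import math

def stack_noise(sub_length_s, total_time_s, sky_rate_e_per_s, read_noise_e):
    """Total noise (in electrons) of a stack covering total_time_s, split
    into subs of sub_length_s. LP (shot) noise depends only on total time;
    read noise is paid once per sub, so it grows with the number of subs."""
    n_subs = total_time_s / sub_length_s
    lp_noise = math.sqrt(sky_rate_e_per_s * total_time_s)  # same for any split
    total_read_noise = read_noise_e * math.sqrt(n_subs)
    return math.sqrt(lp_noise ** 2 + total_read_noise ** 2)

# One hour total, hypothetical 1 e/s sky background, 1.5 e read noise
for sub in (30, 60, 180, 300):
    print(f"{sub:3d}s subs -> total noise {stack_noise(sub, 3600, 1.0, 1.5):.2f} e")
```

Running it shows the total noise converging as the subs get longer: once the read-noise term is buried under the LP noise, further lengthening the subs barely changes the result.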

