
Length of subs vs more subs



Am I right in thinking that if the histogram of my stacked images peaks about one third from the left, then my individual subs are the right exposure length, and that to increase quality / reduce noise I just take more subs?

If I increase the exposure time on each sub, I think I'd move the histogram peak to the right, and as I understand it, one third from the left is where I need it to be?


I'm no expert on this yet, and I may be corrected by people who know better, but I think it depends on the camera. If you have a lot of read noise it might be worth going for longer subs, provided you don't saturate any bright areas of the image; but in general, more subs with a histogram that doesn't clip on the left-hand side is the best way to go.


No.

The position of the histogram peak is related to this, but not as simply as a one-third rule or any other rule. People are often afraid that they won't make a good image if their histogram is too far left, or too far right, but in reality their image will be just fine (with adequate processing, of course).

The only difference between a few longer subs and many shorter subs lies in the read noise of the camera - specifically, in how the level of read noise compares to the levels of the other noise sources. Once the read noise amplitude becomes small compared to the other noises, the difference becomes too small to matter for all practical purposes.

What does that mean? Well, it depends on what camera / filters / sky conditions you have. If you have heavy light pollution, LP noise will quickly swamp read noise and you can use shorter exposures. The same goes for an uncooled camera - thermal noise will overtake read noise rather quickly.

If you have a cooled camera and you do narrowband imaging (which cuts down LP significantly), then it pays to go for really long exposures, like 20-30 minutes per sub.
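
To put numbers on this, here is a minimal sketch (my own illustration, not from this thread - the camera and sky figures are invented, all in electrons per pixel) of the standard SNR model, in which read noise is the only term paid once per sub rather than accumulating with total time:

```python
import numpy as np

# Minimal SNR sketch (illustrative figures only). For N subs of t seconds,
# signal and the time-dependent noises grow with total time N*t, while read
# noise (e- RMS) is added in quadrature once per sub.
def stack_snr(n_subs, sub_len_s, target_rate, sky_rate, dark_rate, read_noise):
    total_t = n_subs * sub_len_s
    signal = target_rate * total_t
    noise = np.sqrt(signal + sky_rate * total_t + dark_rate * total_t
                    + n_subs * read_noise**2)
    return signal / noise

# One hour total under heavy light pollution: short subs lose almost nothing.
print(stack_snr(12, 300, 0.1, 5.0, 0.01, 1.7))    # ~2.65 (few long subs)
print(stack_snr(720, 5, 0.1, 5.0, 0.01, 1.7))     # ~2.52 (many short subs)

# Dark narrowband sky: read noise now dominates and long subs pull ahead.
print(stack_snr(12, 300, 0.1, 0.01, 0.001, 1.7))  # ~17.3
print(stack_snr(720, 5, 0.1, 0.01, 0.001, 1.7))   # ~7.2
```

Under the bright sky the two stacks land within a few percent of each other; under the dark sky the per-sub read-noise term dominates, which is exactly why long narrowband subs pay off.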


The 1/3 rule is a guideline, nothing more. If your exposure is too short, the electrons generated by the dimmest parts of your target are too few, and it's hard for stacking to discriminate their values. If your exposure is too long, your sensor's "well" will fill with electrons, and the signal for that part of the image will be an undifferentiated full-scale value, regardless of how many subs you take and stack.
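
As a toy illustration of that full-well point (my own sketch, with made-up numbers): once two bright pixels both saturate in every sub, no amount of stacking can tell them apart again.

```python
import numpy as np

rng = np.random.default_rng(0)
full_well = 50_000                   # assumed well depth, e-
rates = np.array([900.0, 1200.0])    # two neighbouring bright pixels, e-/s

# 100 x 60 s subs: both pixels exceed the well every time -> identical stacks.
subs = np.clip(rng.poisson(rates * 60, size=(100, 2)), 0, full_well)
print(subs.mean(axis=0))             # [50000. 50000.] - differentiation lost

# 100 x 30 s subs: below the well, the brightness difference survives stacking.
subs = np.clip(rng.poisson(rates * 30, size=(100, 2)), 0, full_well)
print(subs.mean(axis=0))             # ~[27000. 36000.]
```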

Between those extremes there's a lot of wiggle room! And the peak isn't always the right guide to begin with. For many targets, the peak of the histogram corresponds to the background skyglow -- the most numerous pixels are those representing "black" sky (which isn't really black). But if nebulosity fills most of your frame, it's a different story.

It's a really complex topic, but to criminally oversimplify: for a sensor with an infinite capacity to record and discriminate photons, a single long exposure will have a better signal-to-noise ratio than a bunch of stacked ones adding up to the same length. However, longer exposures are more prone to satellite trails, aircraft flying exactly between you and your target, coyotes bumping the tripod, clouds, cable snags, cars illuminating you with headlights, wind gusts, periodic errors due to mechanical imperfections, fireflies alighting on the dew shield...and simply filling the well of the sensor. (If you class me as a coyote, I've had every one of those problems.)

So practical matters force us to shorten our exposures and do lots of them. I like Robin Glover's explanation of these issues.

Deep Sky Astrophotography with CMOS Cameras


3 minutes ago, rickwayne said:

If your exposure is too short, the electrons generated by the dimmest parts of your target are too few, and it's hard for stacking to discriminate their values.

This is a common misconception - there is nothing wrong with using very short exposures for DSO imaging. Even exposures that don't capture a single photon from the target can be stacked to produce good results. If a photon arrives every ten seconds, then yes, we will capture 6 photons per one-minute exposure on average, but we can also "capture" the same signal using 1-second exposures. Let me phrase it correctly and you will see why: on average, 1 out of 10 subs will capture a photon, so when we stack 10 such subs - if we use summing - we will have a total of 1 photon from the target (on average), and if we stack 60 of those 1-second subs we will get 6 photons. So again, 6 photons per minute.
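
A quick Monte Carlo check of this argument (my sketch, using the example rate above of one photon every 10 s): summing 60 one-second subs yields the same photon count, on average and in distribution, as a single 60-second sub.

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 0.1        # photons per second, i.e. one every 10 s
trials = 100_000

one_long = rng.poisson(rate * 60, size=trials)                # one 60 s sub
sum_short = rng.poisson(rate, size=(trials, 60)).sum(axis=1)  # 60 x 1 s, summed

print(one_long.mean(), sum_short.mean())   # both ~6 photons per minute
```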

Every sub captures signal, but it also captures noise - and that is why there is a difference between a lot of short subs and a few long subs: read noise. All the other noise sources act like signal - they accumulate over time, so they are the same in a sum of short subs and in a sum of long subs that add up to the same total imaging time. Signal adds up, noise adds up, and their ratio comes out the same.

If we ever manage to design a camera with zero read noise, it will work equally well with any exposure time. But you are right about everything else related to sub duration - less wasted imaging time if you need to discard a sub, and better statistical methods when you have more data to work with.


2 hours ago, vlaiv said:

This is a common misconception - there is nothing wrong with using very short exposures for DSO imaging. Even exposures that don't capture a single photon from the target can be stacked to produce good results. If a photon arrives every ten seconds, then yes, we will capture 6 photons per one-minute exposure on average, but we can also "capture" the same signal using 1-second exposures. Let me phrase it correctly and you will see why: on average, 1 out of 10 subs will capture a photon, so when we stack 10 such subs - if we use summing - we will have a total of 1 photon from the target (on average), and if we stack 60 of those 1-second subs we will get 6 photons. So again, 6 photons per minute.

@vlaiv Typically stacking is an "average" method - the fundamental question that niggles at my general understanding is that long exposures must capture a lot more faint detail, as you must be averaging a higher number of photons/electrons vs a short exposure. If that question makes sense, can you clarify where my thinking is likely going wrong?


I found this talk by Dr Robin Glover (SharpCap) useful. The background electron rate link in the video is http://tools.sharpcap.co.uk/ and if you're using a DSLR the read noise can be found at http://www.photonstophotos.net/Charts/RN_e.htm#Canon EOS 600D_14 (you need to select your camera).
 

https://youtu.be/3RH93UvP358?t=77

 

 


12 hours ago, geeklee said:

@vlaiv Typically stacking is an "average" method - the fundamental question that niggles at my general understanding is that long exposures must capture a lot more faint detail, as you must be averaging a higher number of photons/electrons vs a short exposure. If that question makes sense, can you clarify where my thinking is likely going wrong?

There is no real difference between stacking with average and stacking with summation. There is a difference in the sense that you will get different absolute values, but you will still get the same measured value (just in different units) and the same SNR. The average is just the sum divided by the number of samples, so mathematically the difference between sum stacking and average stacking is just multiplication by a constant.

In the measured-value sense it is best explained like this. Imagine you have 60 exposures of one second each. Stacking with average will give you a value that corresponds to a "per second" measurement - if you have calibrated the subs to read photons, it will be in photons per second. Stacking with sum will produce the same measured value, but this time expressed "per minute" - again, if you calibrated for photons, it will be photons per minute (or we might simply say ADU per second or ADU per minute if you did no calibration).

For SNR it is easy to see that the ratio of two numbers does not change if you divide both by the same constant - so the SNR does not change.
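
A small sketch of these two paragraphs (sub values invented): the sum stack and the average stack differ by exactly the number of subs, and the SNR is untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
subs = rng.normal(loc=6.0, scale=2.0, size=(60, 1000))  # 60 subs x 1000 pixels

summed = subs.sum(axis=0)      # "per minute" units
averaged = subs.mean(axis=0)   # "per second" units

print(np.allclose(summed, averaged * 60))   # True - just a constant factor
print(summed.mean() / summed.std(),         # same SNR either way
      averaged.mean() / averaged.std())
```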

Having said all that, let's address the level of captured detail in a few long subs vs many short subs - or rather, the myth that short subs don't record faint stuff. It is easier to look at it via stacking by summation, as that shows you capture on average the same number of photons in the same total time, although you capture far fewer photons per single short exposure. I gave that example above, but let's go with average to see what happens then (it will be the same thing):

Let's imagine we have a source that gives off 6e/min/px with our setup (we will omit the /px and just say the signal is uniform and we observe a single pixel). In a one-minute exposure we will have 6e on average, so if we stack 10 one-minute subs we will again have an average value of 6e.

6e/min means 1e every 10s, or a rate of 0.1e/s. Now imagine you have 600 subs, each 1s long. Each of them will have 0.1e. But hold on, you will say - electrons come in chunks and you can't have 0.1e - and I agree: you will have 0.1e per sub on average, or we could say that 9 out of 10 subs will have 0e and 1 out of 10 will have 1e (or some other distribution that averages out to 0.1).

Now we take those 600 subs and average them - what do we get? A value of 0.1, but since we are averaging 1s exposures, it is a value of 0.1e/s.

People will often say: in the first case you have a value of 6e and in the second case a value of 0.1e - surely 6e is larger than 0.1e, and therefore the signal is larger! But what about units? Put them in the same units: 6e/min = 1e/10s = 0.1e/s - they are the same value.

If read noise is 0, and all other noise sources depend on time - and they do, because they depend on quantities that grow with time - then a stack of many short exposures will be equal to a stack of a few long exposures (for the same total imaging time).

In the video you linked there is talk of a maximum sub duration beyond which there is practically no difference in achieved SNR. This maximum sub duration depends on the ratio of read noise to sky flux - once that ratio drops below a certain value, you can stop increasing the exposure. If we take read noise to be 0, then for any sky flux the ratio is zero and thus below the threshold - you can use any sub duration and the result will be the same.
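
As a rough sketch of that criterion (my paraphrase, not Glover's exact formula - the swamp factor k is an assumption, with figures around 3-10 commonly quoted): stop lengthening subs once the sky electrons collected per sub reach k times the read noise squared.

```python
# Minimum useful sub length under an assumed "swamp factor" k, i.e. the
# sub length at which sky electrons per pixel >= k * read_noise^2.
def min_useful_sub_s(read_noise_e, sky_rate_e_per_s, k=5):
    return k * read_noise_e**2 / sky_rate_e_per_s

print(min_useful_sub_s(1.7, 5.0))    # bright suburban sky: ~3 s already enough
print(min_useful_sub_s(1.7, 0.01))   # narrowband / dark sky: ~1445 s (~24 min)
```

Note how the dark-sky case lands right in the 20-30 minute range mentioned earlier in the thread.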

This means that there is signal in a sub of any duration - although it might seem like there is none.


21 minutes ago, vlaiv said:

People will often say: in the first case you have a value of 6e and in the second case a value of 0.1e - surely 6e is larger than 0.1e, and therefore the signal is larger! But what about units? Put them in the same units: 6e/min = 1e/10s = 0.1e/s - they are the same value.

Thanks @vlaiv for taking the time to lay this out - really appreciated. Fundamentally I understand the basic maths you've stated, and I think I follow absolute vs measured value with regard to time. I'm just not clear on why the measured value being "per second" (for example) affects this.

The quote above is definitely where my practical application of this is still falling down.

In my head, reading a 60s image will mean the pixel in question has 6e, which converts to an ADU of X (for example). Reading a 10s image will mean the pixel in question has 1e, which converts to an ADU of X/6. If you then take multiple exposures of each and average them, the 60s exposure should have a pixel with an ADU 6x larger (and thus a brighter pixel). I understand that they will both receive the same average e/s.

I know this is wrong, but the penny just hasn't dropped as to why. 😐


27 minutes ago, geeklee said:

the 60s exposure should have a pixel with an ADU 6x larger (and thus a brighter pixel).

Ah OK - yes, the brightness of a pixel is something that you assign, rather than something fixed in stone.

We do this in the stretching phase. Let's take the above example and make e/ADU = 1 for simplicity, and continue using e (less to type). You can decide that 6e is very bright and assign it a brightness value of 90%, or you can make it very dim and assign it a brightness value of 1% - that is up to how you present your image.

What is important in the image is the ratios of things rather than the absolute values. There are two important ratios in our images. The first is of course SNR, and the second is the ratio between signals - or the ratio between pixels, if we exclude noise. In principle these two are the same thing - except that in one case we have a ratio of two good things and in the other a ratio of a good thing and a bad thing :D

Now that we know we need to look at ratios, let's do the above example again with signal values to see what we get (leaving noise aside for the moment). Let's say that in the first case we have 6e and 3e (effectively 6e/min and 3e/min), and in the second case we have 0.1e/s and 0.05e/s. These are the same flux values - the same brightness values - if we compare them over the same amount of time (make both /s or /h or any other time unit).

But when we stretch our image and make 6e really bright - let's say 80% brightness - and we want to keep the ratios, so that anything with half the value gets half the brightness, we will assign 40% brightness to the 3e value.

Our image thus becomes 80% and 40%. (Btw, that is a linear stretch - opting not to keep ratios creates a non-linear stretch; when we want to show really faint stuff, we assign it larger brightness values than it would otherwise have if we kept the ratios the same.)

Now let's stretch our 0.1e and 0.05e image. Again we want the 0.1e part to be bright, so we assign it 80%; then we ought to assign 40% to 0.05e, because it is again half the intensity / brightness. We end up with 80% - 40%, the same image as above.
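
The same argument in code (a sketch using the example values above): linearly stretch each stack so its brightest value lands at 80%, and the two images come out identical.

```python
import numpy as np

long_stack = np.array([6.0, 3.0])     # e- values from one-minute subs
short_stack = np.array([0.1, 0.05])   # e-/s values, same scene

for img in (long_stack, short_stack):
    stretched = img * (0.8 / img.max())   # linear stretch: brightest -> 80%
    print(stretched)                      # [0.8 0.4] both times
```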

Absolute brightness values are not important for presenting the content of the image - only the ratios of intensities are. If you take an image and make it dimmer, it will still contain the same image. In fact, watch TV in a bright room and dim it down. It now looks too dim, and you may think it is a different image because you can't make out things in the shadows - but don't change anything on the TV; just pull the curtains, kill all the ambient light, and look at your TV now - it shows a nice image with all the details again.

Another way to put it: the image does not change if you look at it from a distance. Say you have a painting and you look at it from 2 meters away. Now move to 4 meters away - is it different? No? But the number of photons reaching your eyes just dropped by a factor of 4 (flux falls off with the square of the distance)! Same image regardless :D

 


1 hour ago, vlaiv said:

But when we stretch our image and make 6e really bright - let's say 80% brightness - and we want to keep the ratios, so that anything with half the value gets half the brightness, we will assign 40% brightness to the 3e value.

Our image thus becomes 80% and 40%. (Btw, that is a linear stretch - opting not to keep ratios creates a non-linear stretch; when we want to show really faint stuff, we assign it larger brightness values than it would otherwise have if we kept the ratios the same.)

Now let's stretch our 0.1e and 0.05e image. Again we want the 0.1e part to be bright, so we assign it 80%; then we ought to assign 40% to 0.05e, because it is again half the intensity / brightness. We end up with 80% - 40%, the same image as above.

Thanks again @vlaiv I think the penny has hopefully dropped.

So, taking two final stacked images (short vs long exposures)... If they were initially stretched in the same way (e.g. same levels/curves), the longer-exposure stack would look "brighter" and "show more detail", but only because the image started off with higher values in its pixels? Putting it another way, the stacked image from the shorter exposures needs additional stretching to get its brightness and detail to the same level, but presumably it can take this further stretching thanks to the related SNR?

If I've got this all wrong, I think I can just lose all hope and just accept it 😁


3 minutes ago, geeklee said:

Thanks again @vlaiv I think the penny has hopefully dropped.

So, taking two final stacked images (short vs long exposures)... If they were initially stretched in the same way (e.g. same levels/curves), the longer-exposure stack would look "brighter" and "show more detail", but only because the image started off with higher values in its pixels? Putting it another way, the stacked image from the shorter exposures needs additional stretching to get its brightness and detail to the same level, but presumably it can take this further stretching thanks to the related SNR?

If I've got this all wrong, I think I can just lose all hope and just accept it 😁

That is right. You don't need separate levels of stretch - you can use the same non-linear stretch with a simple multiplication before it, or you can use different levels of stretch; in that case the difference between the stretches will be that multiplicative constant.

The difference between these two images really is a multiplicative constant (provided there is no read noise and we have infinite SNR - or if we just observe signal and not noise) - the same multiplicative constant that differentiates stacking by summation from stacking by average. In the average case you end up dividing by the number of subs; if you want to go in reverse, from the average to the sum, you multiply by the number of stacked subs.

In the case of subs of different duration, the multiplicative constant is the ratio of the exposure lengths - that is all you need to bring the two stacks to the same value (provided they were stacked with the same method, for example average).
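
A quick sketch of that last point (rate invented, read noise not modelled): average stacks of 60 s and 10 s subs of the same scene line up once you multiply by the exposure ratio.

```python
import numpy as np

rng = np.random.default_rng(7)
rate = 0.1                                                      # e-/s
avg_60 = rng.poisson(rate * 60, size=(100, 1000)).mean(axis=0)  # ~6 e-/pixel
avg_10 = rng.poisson(rate * 10, size=(100, 1000)).mean(axis=0)  # ~1 e-/pixel

# The multiplicative constant is the ratio of exposure lengths: 60/10 = 6.
print(avg_60.mean(), (avg_10 * 6).mean())   # both ~6
```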


6 minutes ago, vlaiv said:

That is right. You don't need separate levels of stretch - you can use the same non-linear stretch with a simple multiplication before it, or you can use different levels of stretch; in that case the difference between the stretches will be that multiplicative constant.

The difference between these two images really is a multiplicative constant (provided there is no read noise and we have infinite SNR - or if we just observe signal and not noise) - the same multiplicative constant that differentiates stacking by summation from stacking by average. In the average case you end up dividing by the number of subs; if you want to go in reverse, from the average to the sum, you multiply by the number of stacked subs.

In the case of subs of different duration, the multiplicative constant is the ratio of the exposure lengths - that is all you need to bring the two stacks to the same value (provided they were stacked with the same method, for example average).

Thank you @vlaiv for seeing this through for me 👍


Yeah, I was really struggling for a response to this and eventually I had to drown my Inner Nerd and just make my response "he's right". 🙂

I think where I was having trouble was conflating some similar issues. In case anyone else has the same confusion, here's me: "It's analogous to blowing out highlights, but in the other direction, right? Once you get to zero, you're at the floor, you can't discriminate values below zero. So you're losing differentiation among the darkest tones."

Which is true...for a single exposure. So, other things being equal, six photons are six photons, whether captured in one exposure or 200. As vlaiv is careful to caveat, not all other things are, in fact, equal. But once you get past read noise, they're equal enough.
 

