
Lots of short subs vs fewer long subs


James


What are people's preferences? I used to try for decent-length subs (20 minutes plus), but with the weather being so grim and so few chances to image, I realised I'd never get all the data I needed to make a decent RGB or narrowband image, so I've reverted to collecting 5-minute subs, as I can get a fair number of them (four for every one 20-minute sub).

I've told myself that, statistically, the more subs the better for an accurate/smooth image, as all stacking methods rely on some form of statistical analysis to reject pixels. This usually revolves around working out the standard deviation of the data and averaging what's left after those parts of the data that are too many standard deviations away from the mean have been removed**. As in all things statistical, where there's doubt about the quality of individual data points, you overcome it with large amounts of data.
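
As a rough illustration of that rejection idea, here's a minimal numpy sketch (not what any particular stacking program actually implements; the kappa threshold and iteration count are placeholder choices):

```python
import numpy as np

def sigma_clipped_stack(subs, kappa=3.0, iterations=3):
    """Average a stack of registered subs, rejecting per-pixel values
    more than kappa standard deviations from the mean (satellite
    trails, cosmic ray hits, etc.)."""
    data = np.stack(subs).astype(np.float64)   # shape: (n_subs, h, w)
    mask = np.zeros(data.shape, dtype=bool)    # True = rejected pixel
    for _ in range(iterations):
        clipped = np.ma.masked_array(data, mask)
        mean = clipped.mean(axis=0).filled(0.0)
        std = clipped.std(axis=0).filled(np.inf)  # fully-rejected pixels: clip no further
        mask = np.abs(data - mean) > kappa * std
    return np.ma.masked_array(data, mask).mean(axis=0).filled(0.0)
```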

So, is it better to go for 50x 5-minute subs or, say, 12x 20-minute subs? I can see the argument from both sides. 5-minute subs on most objects need to be stretched - this requires good stacked data to start with, and therefore lots of subs, as stretching will just exacerbate any noise that's there. On the other hand, with 20-minute subs on some targets you don't need to stretch the data as much (or barely at all with some Ha targets!) and therefore won't exacerbate the noise.

Of course I know that lots of longer subs is the ideal, but what do you find - do the benefits of longer subs outweigh having fewer of them to stack...?

James

** The maths purists amongst us are probably grimacing at this description, sorry :embarassed::evil:



I'm not that much of an expert - I usually try subs short enough that the brightest parts show no sign of saturating. But I'm using a DSLR so I guess this is more of a problem than with an astro CCD camera.


Ultimately it is about achieving the best possible Signal to Noise Ratio (SNR). What it boils down to is that the maximum useful subexposure length depends on the brightness of the background vs. the brightness of the object. This paper explains the maths:

http://www.starrywonders.com/snr.html

One of its graphs is probably most relevant:

http://www.starrywonders.com/comparisons.jpg

Perhaps obviously, what it comes down to is that if you have a bright background relative to the object, then you should take shorter subexposures than if you have a darker background. The more light-polluted the sky, the brighter the background will be, so take shorter subs. Using a filter (narrowband or light pollution filter) will increase the relative brightness of the object vs. the background, enabling longer subs, as will moving to a darker site.

If the maths is a bit much, skip through to point 15, which provides a link to a spreadsheet into which you can enter some measurements for your camera and sky background to determine the optimum subexposure length. The practical point is that you need to measure the brightness of the sky background at the imaging site for each of the filters that you use; the optimum subexposure length will be longer for a Ha filter than for one of the RGB filters, for example.
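
For the sky background measurement itself, something this simple will do, assuming calibrated subs saved as FITS (a minimal sketch; the filename and crop region are placeholders - pick a patch free of stars and nebulosity):

```python
import numpy as np
from astropy.io import fits

frame = fits.getdata("red_filter_sub.fits").astype(np.float64)
patch = frame[100:200, 100:300]    # a star-free region of the calibrated sub
sky_adu = np.median(patch)         # median is robust to the odd faint star
print(f"Sky background: {sky_adu:.1f} ADU in this sub")
```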

The main issue is that you need to know your camera's gain and read-out noise (only a handful of CCD cameras are listed in the spreadsheet by default). The links on the page no longer point to an explanation of how to determine this, so try these instead:

The first link explains how to calculate your camera's gain by taking a bias and a couple of flats (which is what it means by "2 acquisitions"). You'll need some astro processing software to calculate the means and standard deviations; I use PixInsight, but I'm pretty sure something free like IRIS would do the job. Once you have the values, they even link their own calculator to determine the e-/ADU figure you need:

http://www.photometrics.com/resources/whitepapers/mean-variance.php
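
Spelled out, the gain calculation looks something like this - a minimal sketch of the standard mean-variance method, assuming two flats and two bias frames (the filenames are placeholders; differencing pairs of frames cancels the fixed-pattern component):

```python
import numpy as np
from astropy.io import fits

f1, f2 = (fits.getdata(n).astype(np.float64) for n in ("flat1.fits", "flat2.fits"))
b1, b2 = (fits.getdata(n).astype(np.float64) for n in ("bias1.fits", "bias2.fits"))

# Bias-corrected flat signal (ADU) over the shot-noise variance,
# taken from the variance of the difference frames
signal = (f1.mean() + f2.mean()) - (b1.mean() + b2.mean())
noise_var = (f1 - f2).var() - (b1 - b2).var()
gain = signal / noise_var    # e-/ADU
print(f"Gain: {gain:.2f} e-/ADU")
```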

Once you have the gain in e/ADU, you can then take a couple of Bias frames, do another simple bit of processing and use the calculator below to determine the camera's read noise, as described here:

http://www.photometrics.com/resources/imaging-tools/read-noise-calculator.php
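
Again as a minimal sketch, assuming two bias frames and the gain from the previous step (the sqrt(2) accounts for the difference frame combining the noise of two frames):

```python
import numpy as np
from astropy.io import fits

b1, b2 = (fits.getdata(n).astype(np.float64) for n in ("bias1.fits", "bias2.fits"))
gain = 0.54    # e-/ADU, from the mean-variance step above (placeholder value)

read_noise = gain * (b1 - b2).std() / np.sqrt(2)    # e- RMS
print(f"Read noise: {read_noise:.2f} e-")
```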

Finally, you can use the spreadsheet from the first paper, plugging in your measured sky brightness, camera gain and read noise, to calculate the maximum useful subexposure length:

http://www.starrywonders.com/subexposurecalculator_V3.xls
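
I can't reproduce the spreadsheet's internals here, but one common formulation solves for the sub length at which a single sub reaches a chosen fraction of the sky-limited SNR - a minimal sketch, where every number is a placeholder for your own measurements:

```python
read_noise = 5.7        # e- RMS, measured as above
gain = 0.54             # e-/ADU, measured as above
sky_adu_per_sec = 0.9   # measured sky background rate per pixel
target = 0.95           # fraction of the ideal (zero-read-noise) SNR to reach

# Per-sub noise^2 = sky*t + read_noise^2; requiring the sub to reach
# `target` of the read-noise-free SNR and solving for t gives:
sky_e_per_sec = sky_adu_per_sec * gain
t = (read_noise**2 / sky_e_per_sec) * target**2 / (1 - target**2)
print(f"Sub length for {target:.0%} of sky-limited SNR: {t / 60:.1f} minutes")
```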


Think I'll stick with trial and error :D I might have been able to grasp all that, take the measurements and do the calculations once but I'm afraid my poor old brain's past it now :shocked:


I tried plugging in some values to a similar calculator and it suggested 2 min subs in RGB and 4 min subs for Ha, given my bright sky background so I am not sure how far to trust these things. I typically shoot 5-7 mins for LRGB and 10-15 min for Ha with an SXV-H9 at f/5 and it seems to work okay for me.


I tried plugging in some values to a similar calculator and it suggested 2 min subs in RGB and 4 min subs for Ha, given my bright sky background so I am not sure how far to trust these things. I typically shoot 5-7 mins for LRGB and 10-15 min for Ha with an SXV-H9 at f/5 and it seems to work okay for me.

As per the graph that I linked, you can shoot a sub for as long as you like (until you start saturating pixels of course) and the SNR will continue to increase, but the curves show that it's a game of diminishing returns after a point. At a light polluted site you could easily get 90% of the possible SNR after 2 minutes. Carrying on for another 5 minutes is going to net you some portion of the remaining 10%, and to get to 100% you would have to shoot for an infinitely long time (not possible of course).
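
To put some illustrative numbers on that diminishing-returns curve, here is a minimal sketch (the read noise and sky rate are placeholder values for a bright suburban sky):

```python
import math

read_noise = 10.0      # e- RMS (placeholder)
sky_e_per_sec = 5.0    # sky electrons per pixel per second (bright sky)

for minutes in (0.5, 1, 2, 5, 10, 20):
    sky = sky_e_per_sec * minutes * 60
    frac = math.sqrt(sky / (sky + read_noise**2))   # fraction of ideal SNR
    print(f"{minutes:5.1f} min sub: {frac:.1%} of the sky-limited SNR")
```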

The choice of whether you want 80%, 90%, 95% or whatever of the available SNR is yours to make, assuming you have good enough tracking/guiding to shoot for longer. The other risk to consider is aircraft/satellite trails. Hopefully it is obvious that if one does heave into view during (say) a 20-minute period, it is better to ruin one of ten 2-minute subs than to ruin your only 20-minute sub.

Also, just to be clear, the paper does not suggest that shooting 4 hours' worth of (short) subs at a light-polluted site will produce the same SNR as shooting 4 hours of (longer) subs at a dark site. I will have to dig around, but I have some other links that demonstrate that if you can only shoot short subs at a light-polluted site, you would need vastly more of them than you would dark-site subs to get the same SNR.


Thanks for the links, Ian - I will look more closely at these when I can.

My simple approach to this is that really faint objects (or parts of objects) require longer exposures just to collect any usable data from them, so multiple shorter exposures will never 'catch up' with some objects. What I do like about shorter exposures is the potential for smoother images when stacked and fewer demands on your tracking.


What I do like about shorter exposures is the potential for smoother images when stacked and fewer demands on your tracking.

That's what I like about them too - getting 30+ subs in one night is fairly easy, and with a fast scope I can gather a fair amount of light. I don't currently have any flex issues with my side-by-side setup up to 10 minutes (haven't tried longer yet!), so tracking isn't a problem anyway.

The links Ian has posted are interesting, and whilst the maths has fogged my brain, it turns out that my wife (maths degree) finds it easy to understand :rolleyes: so I'm going through it all to see whether my finger-in-the-air 5-minute subs are appropriate/efficient for my circumstances, and I'll go from there.

Of course, the 'easy' solution is just to mount 3 scopes on the side by side setup and triple the data captured... :grin:

James


Of course, the 'easy' solution is just to mount 3 scopes on the side by side setup and triple the data captured... :grin:

James

Yes indeed, I'd like to do that :D I can see my bank balance running on empty for many years to come :D

Well, I have a twin wide-field DSO rig using two DSLRs and two telephoto lenses, one for Ha and the other for OIII, mounted on an aluminium plate on the dovetail, and it works well. I see no reason why it shouldn't work with two ED80s, for instance. The mounting would need to be good and solid, though, so that there isn't any differential flexing, either between the imaging scopes or with the guide scope.


I ran through the spreadsheet and got 4.2 minutes to reach 95% SNR, which fits nicely with my standard 5-minute subs. It turns out my background sky flux is horrible, though I did shoot over the city, so it's more or less the worst case :(


My simple approach to this is that really faint objects (or parts of objects) require longer exposures just to collect any usable data from them, so multiple shorter exposures will never 'catch up' with some objects.

Surprisingly, this is not true. Providing you can detect a single photon (or a fixed percentage of them) those faint parts will eventually appear no matter how short your subs. The presence of read-noise just slows you down a bit. You start getting into trouble if your setup requires several photons to trigger one ADU though ...

NigelM


Surprisingly, this is not true. Providing you can detect a single photon (or a fixed percentage of them) those faint parts will eventually appear no matter how short your subs. The presence of read-noise just slows you down a bit. You start getting into trouble if your setup requires several photons to trigger one ADU though ...

Wouldn't those details tend to get "averaged out" when you stack the frames though?

James


Didn't Earl try that with two scopes and end up deciding it just wouldn't work? I don't fully understand why not though.

I routinely run two telescopes and CCD cameras at the same time capturing different data from the same object and it works pretty well. Balance, accurate polar alignment and rigidity are vital though.



Wouldn't those details tend to get "averaged out" when you stack the frames though?

James

No. Simplistically speaking, as you stack more frames, the noise averages whilst the signal adds. That's what all those brain-hurty equations on the first page I linked boil down to.

So provided the signal is distinguishable from the noise, it is always possible to detect it. How long it takes and how clearly you can separate signal from noise depends on how much difference there is between the signal and the noise.
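
You can see that square-root-of-N behaviour with a quick simulation - a minimal sketch with placeholder numbers, using Gaussian noise for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
signal, sigma, n_subs = 2.0, 10.0, 100   # a faint signal buried in noise

# Stacking: averaging n_subs frames shrinks the noise by sqrt(n_subs)
# while the signal is unchanged, so SNR grows as sqrt(n_subs).
subs = signal + rng.normal(0.0, sigma, size=(n_subs, 100_000))
stack = subs.mean(axis=0)

print(f"single sub SNR: {signal / sigma:.2f}")                     # 0.20
print(f"stack of {n_subs} SNR: {stack.mean() / stack.std():.2f}")  # ~2.00
```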

Instinctively you already understand most of this:

- If you take a single short exposure of (say) M51 in a light-polluted sky, you will see the core regions, even if they barely show against the light pollution and look very grainy due to the various sources of noise.

- If you take a longer exposure, the sky gets brighter, but so does M51, more of its outer arms show up, and the image is less grainy because the signal to noise ratio (SNR) is greater.

- Of course you limit your exposure length because at some point the light pollution will start to saturate the pixels of your sensor.

The problem arises because at this point the common-sense (but mistaken) assumption is that anything that did not show up against the light pollution at that exposure length never will, because the background sky is 'brighter' than the faint parts of the galaxy.

This is not true, and if you apply some correct common sense it is blindingly obvious why - but that tends to get lost in the difficult equations:

Let's say an average of 400 photons per minute are arriving on each of your camera's pixels due to light pollution. Where there is no galaxy to be imaged, the total number of photons for that pixel will average 400.* Where even the faintest wisp of galaxy exists though, some extra photons will be arriving. It might be as few as 1 extra photon per minute. So the average number of photons per minute on that pixel is going to be 401, which is clearly more than 400, and so it is detectable.

The problem is the word 'average'. Each of these sources of photons also contains noise. The light pollution photons do not arrive in a constant, predictable stream; they splat randomly onto the sensor like bugs hitting your car windscreen on a summer evening. The same goes for the signal photons from the galaxy. The camera also adds electrons of its own in the form of dark current (which accumulates in a random, noisy way), plus read noise on top.

In order to tell whether that difference of 1 photon between 400 and 401 is a result of the object or whether it is just noise (i.e. a random fluctuation), you have to measure it for long enough to 'average' out the noise. That is basically what we mean by Signal to Noise Ratio (SNR). The higher the SNR, the more certain we are of the difference between signal and noise, and so the 'brighter' the faint object will be compared to the background and the less grainy it will be too.

This is basically what you are doing when you stack images, even if you don't understand the maths! The point is that even if only one extra photon per hour was arriving from an object, or one a year, or one a decade, you could (theoretically) measure it and detect the object. Practically there is a limit to what you can achieve, but mostly it is just a question of how many hours of exposure you can gather.
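
If you want to convince yourself, here is the 400 vs. 401 example as a quick simulation - a minimal sketch, with a deliberately extreme frame count so the 1-photon-per-minute difference emerges clearly:

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames = 20_000    # one-minute frames (a deliberately extreme integration)

background = rng.poisson(400, size=n_frames).mean()   # light pollution only
faint_wisp = rng.poisson(401, size=n_frames).mean()   # pollution + 1 photon/min

# The standard error of each mean is sqrt(400 / n_frames) ~ 0.14 photons,
# so the ~1 photon difference stands roughly five sigma clear of the noise.
print(f"background: {background:.2f}  wisp: {faint_wisp:.2f}")
```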

To leave you with a final thought, it's analogous to what those geniuses with the Large Hadron Collider were doing. They were chasing tiny, tiny signals in a whole mess of other signals and noise and it took them more than a year to gather enough data to reliably separate the signal from the noise to 'find' the Higgs Boson. (Nobody asked the Higgs if it thought it was lost though!) Once they had reached a 'five sigma' level of certainty, that meant there was only a one in 3.5 million chance that the cause was random noise rather than the signal they were looking for.

* I know there are other sources of photons such as sky glow, unresolved star background, etc. that also add a variable amount of extra photons, but I'm keeping things simple here.


I have a question.

If the exposure is too long, could the sky glow become too strong and therefore difficult to remove?

I am imaging M33 at the moment, which has a low surface brightness. I did a calculation in PI using the SkyLimitedExposure calculator and it suggests 7-minute subs, whereas I was using 10-minute subs. I've got 33 subs at the moment, but there is a lot of noise even though it is averaged. Would I be better off reducing my exposure time to 7 minutes to make the sky glow/noise easier to deal with? In theory I should still capture that 1 photon per hour, whether my exposure length is 1 minute or 60 minutes.

Apologies if this is answered above, I have gone through it all three times now :grin:


I've done the calculations as well: the gain is 0.54 e-/ADU (Canon 450D at ISO 400) and the read noise is 5.69 electrons. I got two sets of data in one night, one of M33 and the other in Auriga, both widefield (200mm FL). The background counts are quite different (no idea why, really): one set comes out at 400, the other at 550. So the recommendation comes out as either 6 or 9 minutes. Seeing as I image at 5 minutes, it looks like I need to go longer.


I've done the calculations as well: the gain is 0.54 e-/ADU (Canon 450D at ISO 400) and the read noise is 5.69 electrons. I got two sets of data in one night, one of M33 and the other in Auriga, both widefield (200mm FL). The background counts are quite different (no idea why, really): one set comes out at 400, the other at 550. So the recommendation comes out as either 6 or 9 minutes. Seeing as I image at 5 minutes, it looks like I need to go longer.

The Milky Way runs through one side of Auriga, so depending on what you were imaging, that may be the reason, as the sky will be intrinsically brighter there due to the unresolved background stars. Otherwise, check whether you were pointing into a more light-polluted part of the sky for one set rather than the other. In either case, the calculations would quite validly suggest different sub lengths to reach the same percentage of potential SNR where the sky is brighter or darker.


Empirically, I find that long subs are essential if you want to capture the faintest data from a dark site. You can take one-minute subs till you are blue in the face, but my experience says you won't get the IFN this way. I've always assumed it was a matter of accumulating less read noise by having fewer 'reads' in the total exposure. I can only apologise to the theoreticians, but I won't be shortening my long Ha subs. I'm perfectly convinced that they work.

Olly

