

Short Exposure DSO Imaging


nmoushon

Recommended Posts

The difference between landscape and astro photography is that landscapes have a large dynamic range, while astrophotography is multiple shades of black ;)

In landscape photography the signal is many orders of magnitude above the noise from the camera. In astrophotography the signal is often only just above the noise level. So we take 100 shots, calibrate them (removing what we are pretty confident is noise, using dark and bias frames), and then decide whether something is signal by how often and how brightly it appears across the individual images. For instance, if a certain pixel is only high on a couple of frames out of 100, it is probably noise and not signal (maybe a plane passing overhead); if, on the other hand, the pixel holds a constant value across 95 of the 100 frames, we can be highly confident that it is in fact signal.

On top of this, many astro images need to be exposed for many hours in total. Many of my images have a total exposure time of 3-4 hours, and a lot of the really good images run to 24 hours in total. You simply cannot leave the shutter open for 24 hours and not have the frame white out, regardless of how good your viewing conditions are (plus the target would have spent half of that time below the horizon).
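
To see why the stacking works, here's a rough illustration (a minimal numpy sketch, not anyone's actual workflow; the signal and read-noise figures are made-up numbers): a faint signal that is invisible in any single short sub becomes obvious once 100 of them are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 5.0        # assumed mean signal per sub, in electrons (made-up number)
read_noise = 8.0    # assumed read noise per sub, in electrons RMS (made-up number)
n_subs = 100

# One pixel across 100 short subs: photon (shot) noise plus read noise
subs = rng.poisson(signal, n_subs) + rng.normal(0.0, read_noise, n_subs)

print(f"single sub:        {subs[0]:6.2f}   (true signal = {signal})")
print(f"mean of {n_subs} subs:  {subs.mean():6.2f}   (noise averages down roughly 10x)")
```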



Think of it like this: put your hand in your pocket and pull it out full of dirt with a tiny speck of gold dust in it. If you only get a small handful of dirt and almost no gold, it has hardly any value. But if you collect a lot of dirt, you also collect more gold, and that starts to be worth something. It doesn't matter whether you get it all at once or have to reach into your pocket several times. It's the same here with a few long exposures versus many short ones.

On top of this, each time you put your hand in the pocket you introduce some additional dirt. The difference is that with a high read noise sensor you introduce a larger amount each time, whereas with a low read noise sensor you introduce less.

PS. This is an old thread.


It's probably true that exposure time needs to be long enough to pick up at least some signal before stacking many subs will improve signal/noise. The threshold will depend on the target, sky conditions, camera and scope/lens. I've recently started to have a go at spectroscopy :) A single 1s exposure of Castor is plenty long enough to acquire spectrum data using an F6 80mm APO and (mono) Minicam5s :)

Louise


Interesting point, Jack. I wonder what would happen, though, if you were in the unusual position of taking lots of exposures in daytime of something that did not change at all. Obviously it would have to be done in a lit studio environment, as the movement of the Sun would be a problem. I would think that stacking and then stretching the image would have a beneficial effect, though I don't know how it would compare to a properly exposed single shot. I suspect there will always be some kind of signal-to-noise hit (more "downstream" non-sensor noise), but that it would decrease with the number of images stacked.

Olly's earlier question about stacking and the square root issue got me thinking too. I'm not an imaging expert, but from a data and sampling perspective I'd expect the signal-to-noise ratio to be proportional to the square root of the sample size more generally. If that is so (and it might not be), then isn't increasing the exposure time of single subs susceptible to the same issue?

Billy.


Actually, to piggyback on this, when we talk about signal to noise ratios I've been assuming that what falls with the square root of the number of subs is the standard error of the mean for each pixel. Is this actually correct?
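
For what it's worth, a quick numerical check (just a sketch with made-up noise figures, not a proof) suggests yes: the scatter of the per-pixel mean falls roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 10.0   # per-sub noise on a single pixel, arbitrary units (made-up)

for n in (4, 16, 64, 256):
    # repeat the "stack of n subs" experiment 10,000 times and measure the scatter of the mean
    means = rng.normal(0.0, sigma, size=(10000, n)).mean(axis=1)
    print(f"N={n:3d}  measured SEM = {means.std():5.2f}   sigma/sqrt(N) = {sigma / np.sqrt(n):5.2f}")
```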


19 hours ago, jACK101 said:

I can understand the need to have different exposures for different brightness levels which when stacked give an acceptable result, but why does astrophotography seem to work to different rules?

Actually it works to the correct rules - it is just that these can usually be ignored in terrestrial photography! For instance, most daytime photographers consider ISO a way of changing the sensitivity of their camera to light. It isn't - it makes no difference whatsoever to the number of photons your camera detects. And all images would benefit from stacking, even landscapes, but the S/N ratio is so high in a single exposure that you probably aren't going to notice the benefit.

NigelM


2 hours ago, Thalestris24 said:

It's probably true that exposure time needs to be long enough to pick up at least some signal before stacking many subs will improve signal/noise.

Not really - if your target only emits one photon per hour and you take 3600 1sec exposures, then one of them will detect that photon and the other 3599 will have nothing, but at the end of the day the stack will have the correct signal in it.

NigelM


30 minutes ago, dph1nm said:

Not really - if your target only emits one photon per hour and you take 3600 1sec exposures, then one of them will detect that photon and the other 3599 will have nothing, but at the end of the day the stack will have the correct signal in it.

NigelM

I think you'd need more than 1 photon/hr to elicit a response from a typical amateur camera!


18 hours ago, Thalestris24 said:

I think you'd need more than 1 photon/hr to elicit a response from a typical amateur camera!

Providing you are working at unity gain then a DSLR is quite capable of detecting a single photon (although not with 100% efficiency). Of course, at 1 photon per hour you will need a lot of hours of exposure to overcome the noise, but in principle it could be done!

NigelM


If you are only getting 1 photon per hour then you also need to take into account the QE of the camera + filter at the wavelength of the photon. So for my 60D there is only a 20% chance of an Ha photon making it through to the sensor and being recorded...


9 minutes ago, dph1nm said:

Providing you are working at unity gain then a DSLR is quite capable of detecting a single photon (although not with 100% efficiency). Of course, at 1 photon per hour you will need a lot of hours of exposure to overcome the noise, but in principle it could be done!

NigelM

Well, I guess it depends on the QE of the sensor. If you had 100% QE then a single photon would be detected. If you have, more typically for a DSLR, 40% QE, then only 4 out of 10 photons will be detected, so single-photon sensitivity is not at all guaranteed.
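
As a toy calculation (using the 40% QE figure above, and treating each photon as an independent coin toss):

```python
qe = 0.40   # assumed quantum efficiency ('typical DSLR' figure from the post)
for n_photons in (1, 2, 5, 10):
    p_at_least_one = 1 - (1 - qe) ** n_photons
    print(f"{n_photons:2d} photon(s) arriving -> {p_at_least_one:.0%} chance of registering at least one")
```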

Louise


Just now, frugal said:

If you are only getting 1 photon per hour then you also need to take into account the QE of the camera + filter at the wavelength of the photon. So for my 60D there is only a 20% chance of an Ha photon making it through to the sensor and being recorded...

Snap! :)


On ‎05‎/‎04‎/‎2017 at 22:00, jACK101 said:

I cannot think of any situation where many hundreds of grossly underexposed frames when added together could give an acceptable result because the information would not be there on the film or sensor.

 

Think about it in terms of collecting photons, because that is what the sensor is doing. If you take 100 exposures of 1 sec you will collect the same number of photons (on average) as a single exposure of 100 sec. Once you add the 100 exposures together you have the same total of photons in both cases. The only difference in final image quality will be the read noise: 100 exposures added together will have 10x the read noise of the single 100 sec exposure. But read noise is no longer the problem it used to be - many modern sensors have read noise of less than 1 electron, especially those favoured by planetary imagers.
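
As a rough sanity check on that read-noise comparison, here is a back-of-the-envelope sketch (the flux and read-noise values are invented; the real numbers depend on your sensor and target):

```python
import numpy as np

flux = 2.0         # assumed photons per second landing on one pixel (made-up number)
read_noise = 5.0   # assumed read noise in electrons RMS per exposure (made-up number)

def snr(exposure_s, n_subs):
    signal = flux * exposure_s * n_subs     # total photons collected over all subs
    shot_var = signal                       # shot noise variance equals the photon count
    read_var = n_subs * read_noise ** 2     # read noise is added once per sub
    return signal / np.sqrt(shot_var + read_var)

print(f"1 x 100s : SNR = {snr(100, 1):.1f}")
print(f"100 x 1s : SNR = {snr(1, 100):.1f}")   # same photons, but 10x the read noise
```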

Mark


This possibly answers my original question which restarted this thread.  What form of stacking?  And it would appear to be 'Sum'.  Median would certainly lose occasional photon captures, and Average probably likewise.  So read noise notwithstanding, is it  Sum?  And is it sensible to subtract a master Dark which will hopefully remove faulty 'hot' pixels, and maybe some of the read noise?

Cheers,

Peter.


1 hour ago, petevasey said:

This possibly answers my original question which restarted this thread.  What form of stacking?  And it would appear to be 'Sum'.  Median would certainly lose occasional photon captures, and Average probably likewise.  So read noise notwithstanding, is it  Sum?  And is it sensible to subtract a master Dark which will hopefully remove faulty 'hot' pixels, and maybe some of the read noise?

Cheers,

Peter.

Use sum or average - it works out the same in the end if it is all being done in floating point arithmetic.  Yes it makes sense to subtract a master dark because that will remove the thermal fixed pattern.  As for read noise, any fixed pattern in the read noise will be contained in the master bias.  The rest of the read noise is uncorrelated random noise which you can't do anything about.
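
In code terms that boils down to something like the sketch below (assuming the subs are already aligned and loaded as arrays; the function name is just for illustration, and real stackers add normalisation and outlier rejection on top of this):

```python
import numpy as np

def calibrate_and_stack(lights, master_dark):
    """Dark-subtract each aligned light frame, then average in floating point.

    lights: array of shape (n_subs, H, W); master_dark: array of shape (H, W),
    taken at the same exposure and temperature as the lights.
    """
    calibrated = np.asarray(lights, dtype=np.float64) - master_dark
    # mean = sum / n_subs, so in floating point the two carry exactly the same information
    return calibrated.mean(axis=0)
```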

Mark


As Mark says, sum and average are the same in floating point, and are theoretically optimal in reducing noise. However, there are occasions when you want to do something else (but not median, which loses a lot of SNR). That something else might be sigma clipping, which gets rid of outliers (defined as values that are more than a certain number of sigmas (= standard deviations) away from the mean), then recomputes the mean. This does a good job with satellite and plane trails for instance.
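
A bare-bones version of that rejection step might look like the following (just a sketch of the idea; the stacking packages people actually use iterate per pixel with various refinements):

```python
import numpy as np

def sigma_clipped_mean(stack, kappa=3.0, iterations=3):
    """Per-pixel mean of a (n_subs, H, W) stack, rejecting values more than
    kappa standard deviations from the mean (plane trails, cosmic ray hits, ...)."""
    data = np.asarray(stack, dtype=np.float64).copy()
    for _ in range(iterations):
        mean = np.nanmean(data, axis=0)
        std = np.nanstd(data, axis=0)
        data[np.abs(data - mean) > kappa * std] = np.nan   # flag outliers as NaN
    return np.nanmean(data, axis=0)
```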

Martin


9 hours ago, petevasey said:

This possibly answers my original question which restarted this thread.  What form of stacking?  And it would appear to be 'Sum'.  Median would certainly lose occasional photon captures, and Average probably likewise.  So read noise notwithstanding, is it  Sum?  And is it sensible to subtract a master Dark which will hopefully remove faulty 'hot' pixels, and maybe some of the read noise?

Cheers,

Peter.

In the case of my setup, I have a 14-bit sensor.

DSS does 32-bit stacking.

This means it can stack over a quarter of a million 14-bit frames without losing data.

Now technically it gives me an 'average' result as it's scaled to give normal dynamic range from 0 to 2^32-1 instead of 0 to 2^14-1.

So the difference between sum and average matters little.

Median is useful with high-noise, low sub-count images as it ignores the outliers.

The best methods of stacking are the ones like sigma stacking that work out the standard deviation of each pixel, then ignore the outliers (aircraft trails, gamma ray bursts, random noise) and average the others, but they need a larger sample to work from.

(32-bit stacking may sound excessive, but the word length needs to be a multiple of 8 bits, and 16-bit stacking loses data if you stack more than four 14-bit frames.)
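
The arithmetic behind that last point, just to make it concrete:

```python
max_14bit = 2 ** 14 - 1    # 16383, the largest value a 14-bit RAW pixel can hold
max_16bit = 2 ** 16 - 1    # 65535
max_32bit = 2 ** 32 - 1

print(max_16bit // max_14bit)   # 4      -> only four saturated 14-bit frames fit in a 16-bit sum
print(max_32bit // max_14bit)   # 262160 -> "over a quarter of a million" frames fit in a 32-bit sum
```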


The answer is 'sum', because it is that which is doing the physics for you. Average (or median etc) only works because you are assuming identical length subs - but there is no need for subs to be the same length. The improvement in S/N comes from the increased number of photons detected. If I sum 1+10+100 sec subs I get the equivalent of a 111 second exposure (ignoring read noise) - if I averaged the three together I would not, and the resulting S/N would not be as good.

NigelM


4 hours ago, dph1nm said:

The answer is 'sum', because it is that which is doing the physics for you. Average (or median etc) only works because you are assuming identical length subs - but there is no need for subs to be the same length. The improvement in S/N comes from the increased number of photons detected. If I sum 1+10+100 sec subs I get the equivalent of a 111 second exposure (ignoring read noise) - if I averaged the three together I would not, and the resulting S/N would not be as good.

NigelM

If you imagine the final image as a 16-bit TIFF generated from 14-bit RAW files, it becomes clear that what you are doing is neither a true average nor a true sum.

For stacking 14-bit frames for output as a 16-bit TIFF, the simplest approach would be to:

 

  1. Sum the data.
  2. Divide the number of frames by four and round the result according to the carry.
  3. Divide the summed data by this figure.

That gives a result scaled to 16 bits with maximum resolution and minimum loss of data.

Obviously with more than four subs, this approach loses data.

I don't know how DSS does it, but it also has to deal with various calibration frames, each of which may be processed in different ways, and these can potentially lose data through rounding as well.

I know DSS uses 32 bits for its final data, so presumably it either uses a floating point format, normalises all calculations to a fixed decimal point, or works to a greater resolution and normalises the result to 32 bits. I note that it can save 32-bit TIFFs in integer or floating point format.

 

 


20 hours ago, dph1nm said:

The answer is 'sum', because it is that which is doing the physics for you. Average (or median etc) only works because you are assuming identical length subs - but there is no need for subs to be the same length. The improvement in S/N comes from the increased number of photons detected. If I sum 1+10+100 sec subs I get the equivalent of a 111 second exposure (ignoring read noise) - if I averaged the three together I would not, and the resulting S/N would not be as good.

NigelM

Sure, but if not using identical-length subs one would use a weighted average to get the same effect.
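
To illustrate (a toy example with a made-up photon rate, ignoring read noise): an exposure-weighted average of the per-second rates carries exactly the same information as the straight sum.

```python
import numpy as np

exposures = np.array([1.0, 10.0, 100.0])   # sub lengths in seconds
rate = 3.0                                  # assumed photons/sec on one pixel (made-up number)
counts = rate * exposures                   # what each sub collects, on average

plain_sum = counts.sum()                    # 333 photons, i.e. a "111s" exposure
weighted = np.average(counts / exposures, weights=exposures) * exposures.sum()
print(plain_sum, weighted)                  # identical - the weighting just undoes the scaling difference
```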

My point is that it can be confusing to provide both sum and average options as if they were fundamentally different beasts...

Martin


