
image stacking - adding v averaging


nytecam


Hi - what are the advantages, if any, of averaging a stack versus adding a series of images? Also, is the exposure time of an averaged series equal to a single frame, or to the cumulative stack as for added images? I was thinking of long-exposure DSOs rather than planetary imaging, where the advantages of averaging are well known :-)


Hi - what are the advantages, if any, of averaging a stack versus adding a series of images? Also, is the exposure time of an averaged series equal to a single frame, or to the cumulative stack as for added images? I was thinking of long-exposure DSOs rather than planetary imaging, where the advantages of averaging are well known :-)

From what I have read, averaging and adding the data in AP are almost the same thing. The disadvantage of these stacking algorithms is that the noise is not reduced as effectively as by some of the more sophisticated ones, such as SD, but those need a large number of subs to work effectively.

Regards,

A.G


I've always assumed that if you added you just added up the noise as well, but on going to the help page of AstroArt 5 it says that adding and averaging give the best S/N ratio while Sigma rejects outliers. I'm now curious about adding and will run some files through this routine!

Olly


I've always assumed that if you added you just added up the noise as well, but on going to the help page of AstroArt 5 it says that adding and averaging give the best S/N ratio while Sigma rejects outliers. I'm now curious about adding and will run some files through this routine!

Olly

Sigma reject has almost identical noise suppression to adding/averaging, so you won't gain anything. Adding noise is a bit odd: since it is random, it tends to cancel out. It is this effect that is behind the improvement we see in our images as we add more subs or exposure time.
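A quick Monte Carlo sketch of that cancellation (signal and noise levels are made-up illustrative numbers, not from the thread): each sub measures a fixed signal plus random noise, and averaging N subs leaves the signal alone while the noise shrinks by roughly sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0        # photons per sub (assumed)
noise_rms = 10.0      # random noise per sub (assumed)

def stack_snr(n_subs, n_trials=20000):
    """Average n_subs noisy frames per trial and estimate the stacked SNR."""
    subs = signal + noise_rms * rng.standard_normal((n_trials, n_subs))
    stacked = subs.mean(axis=1)
    return stacked.mean() / stacked.std()

snr_1 = stack_snr(1)     # about 10 (100/10)
snr_16 = stack_snr(16)   # a 16-sub stack gains roughly sqrt(16) = 4x
```

The same sqrt(N) behaviour is why doubling total exposure never doubles SNR.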


Hi - what are the advantages, if any, of averaging a stack versus adding a series of images? Also, is the exposure time of an averaged series equal to a single frame, or to the cumulative stack as for added images? I was thinking of long-exposure DSOs rather than planetary imaging, where the advantages of averaging are well known :-)

I don't think there is any practical difference, as long as the intermediate arithmetic is done in floating point to avoid any rounding/quantising errors.

 

Whether you combine a series of exposures in processing by summing them or by averaging them, the total exposure time is ..... the total exposure time - no difference.   For DSOs, the most important factor is signal to noise ratio.  Detecting faint information is another way of saying "being able to distinguish faint signal from noise", so the faintest detail you can discern with certainty depends entirely on S/N.  And the S/N of an average-combine is mathematically the same as the S/N of a sum-combine.
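A minimal numpy check of that claim, using assumed signal and noise levels: sum-combining and average-combining the very same subs give identical S/N, because dividing by N scales signal and noise equally.

```python
import numpy as np

rng = np.random.default_rng(1)
subs = 50.0 + 8.0 * rng.standard_normal((10000, 12))  # 12 noisy subs per trial

summed = subs.sum(axis=1)
averaged = subs.mean(axis=1)   # the same data divided by a constant (12)

snr_sum = summed.mean() / summed.std()
snr_avg = averaged.mean() / averaged.std()
# Dividing by N scales signal and noise identically, so the two SNRs are equal.
```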

Strictly speaking, to maximise S/N some form of weighting should be applied to individual sub-exposures so that particularly noisy subs make a lesser contribution than 'clean' subs to the combined result.
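A sketch of that weighting idea, using the standard inverse-variance weights (the thread doesn't name a scheme, and the noise levels here are made up): the noisier sub contributes less, and the weighted combine beats a plain average.

```python
import numpy as np

rng = np.random.default_rng(2)
signal = 100.0
sigmas = np.array([5.0, 5.0, 20.0])   # third sub is much noisier (assumed)
subs = signal + sigmas * rng.standard_normal((50000, 3))

plain = subs.mean(axis=1)                     # equal weights
w = 1.0 / sigmas**2                           # inverse-variance weights
weighted = (subs * w).sum(axis=1) / w.sum()   # noisy sub contributes least

snr_plain = plain.mean() / plain.std()
snr_weighted = weighted.mean() / weighted.std()
```

With one sub four times noisier than the others, the weighted stack roughly doubles the SNR of the naive average here.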

There's a good explanation in the PixInsight help pages:

http://pixinsight.com/doc/tools/ImageIntegration/ImageIntegration.html#description_001

Adrian


I used to understand all that maths but my brain is no longer what it was!  Maths was one of my best subjects and also played a part in my career in noise reduction, correlation and all that: detecting very faint signals in noise, where the signal was actually lower than the noise, and using correlation techniques to improve the S/N and lift the signal out of the noise.  This is just what we are doing with stacking software, where the faint image signal is lifted out of the noise.  Post-processing techniques further rely on correlation to reduce the noise, by examining the image signal pattern and applying algorithms to boost the S/N, reduce stars etc.  For more information, "Google is your friend" :D


Sigma reject has almost identical noise suppression to adding/averaging, so you won't gain anything. Adding noise is a bit odd: since it is random, it tends to cancel out. It is this effect that is behind the improvement we see in our images as we add more subs or exposure time.

Surely this is only true if you have no dither or polar misalignment. If you do use dither then Sigma will reject outliers like hot or dead pixels. It will also dispose of satellites, aircraft, etc.

Olly


The PixInsight help pages mentioned earlier give a good explanation of why SUM and AVG combines result in the same S/N ..... *quantitatively*

Adrian

Yes.  It was intended as a bit of a rhetorical question really.  Mind you, I can imagine some people reading that PI web page and glazing over within the first few sentences :D

James


I'm now thinking of "average" in terms of scaling - to maintain a brightness range...

Improving the SNR needs the addition of the signal to beat down the relative shot noise - the addition is effectively done first (i.e. add all the images), then divide by the number of subs (i.e. divide the summed total by a constant).

You get the same SNR outcome.....


Surely this is only true if you have no dither or polar misalignment. If you do use dither then Sigma will reject outliers like hot or dead pixels. It will also dispose of satellites, aircraft, etc.

Olly

But mathematically, hot/dead pixels, satellites, aircraft, cosmic rays etc. are not noise; they are signals. Sure, they are unwanted signals (much like dark current and bias signals*) that Sigma Clip (and similar algorithms) are able to reject. However, just as dark and bias calibration increases the noise and decreases the SNR (we are subtracting the signals of the calibration images from the total signal and adding their noise), Sigma Clip will decrease the gain in SNR compared to average/addition by choosing to reject some of the signal (the rejected pixels). The effect is usually small, but in the worst case (complete mismanagement of the parameters, or a very low number of subs) Sigma Clip/SD mask/etc. could be just as bad as median stacking (SNR = sqrt(2N/pi), about 0.8*sqrt(N), or 80% of the SNR of average/addition).

*flats have the same general behaviour but differ in the details
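A toy sigma-clip stack illustrating both points above: a satellite trail is an unwanted signal that the clip removes, at the cost of discarding a few genuine samples. The kappa value, trail brightness, and noise levels are made up for illustration, and this is a bare-bones clip (real tools iterate and use robust scale estimates).

```python
import numpy as np

rng = np.random.default_rng(3)
n_subs, n_pix = 20, 1000
stack = 100.0 + 10.0 * rng.standard_normal((n_subs, n_pix))
stack[5, 100:200] += 5000.0   # a bright satellite trail crossing one sub

def sigma_clip_mean(data, kappa=3.0):
    """Reject pixels further than kappa*sigma from the per-pixel median, then average the rest."""
    med = np.median(data, axis=0)
    keep = np.abs(data - med) <= kappa * data.std(axis=0)
    return (data * keep).sum(axis=0) / keep.sum(axis=0)

plain = stack.mean(axis=0)          # trail survives, diluted (5000/20 = +250)
clipped = sigma_clip_mean(stack)    # trail rejected, background preserved
```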


Very good point!

I think for "astrophotography", to get that good "picture", you have to separate in your mind the scientific "noise" from the unwanted-signal "noise".

I'm sure there are many defects, unwanted reflections, halos, background pollution etc. which get cleaned up in imaging... I don't get the same degree of freedom in spectroscopy. There's a pretty rigorous, well-established methodology which must be followed to gain acceptance... no fudging or sharpening allowed.


But mathematically, hot/dead pixels, satellites, aircraft, cosmic rays etc. are not noise; they are signals. Sure, they are unwanted signals (much like dark current and bias signals*) that Sigma Clip (and similar algorithms) are able to reject. However, just as dark and bias calibration increases the noise and decreases the SNR (we are subtracting the signals of the calibration images from the total signal and adding their noise), Sigma Clip will decrease the gain in SNR compared to average/addition by choosing to reject some of the signal (the rejected pixels). The effect is usually small, but in the worst case (complete mismanagement of the parameters, or a very low number of subs) Sigma Clip/SD mask/etc. could be just as bad as median stacking (SNR = sqrt(2N/pi), about 0.8*sqrt(N), or 80% of the SNR of average/addition).

*flats have the same general behaviour but differ in the details

I'm not a mathematician, I'm an imager! Noise, for me, means anything that isn't coming from the object out there in space. Try imaging the Witch Head Nebula. You'll be persecuted by geostationaries, several in any 10-minute sub. Take more than a dozen subs and they're 'Gone Baby Gone', provided you use a Sigma reject of quality (e.g. AstroArt 5, but not AstroArt 4). Every single sub in this image had top-to-bottom satellites on the right-hand side. Nightmare. They vanished using Sigma in AA5.

[Image: Witch Head Nebula]

Olly


I'm an electronics engineer, so I'm quite familiar with the meaning of signal to noise. In electronics noise is down to random processes in the components; it does not include components that you would call interference. So I suppose if you apply the term to astro-photography then items like satellites and aircraft trails and even light pollution should not count as noise. However, I don't think we are using the term here as a precise scientific measure. I tend to agree with Olly's definition that signal is the stuff you want and noise is everything else.

For what that's worth.

If it wasn't cloudy, contrary to the BBC weather prediction, I wouldn't have to interfere. So blame the BBC - that's all I'm saying.

Excuse me while I dismantle my gear and kick the pier. Knew I shouldn't have bought that new scope.

cheers

gaj


I'm an electronics engineer, so I'm quite familiar with the meaning of signal to noise. In electronics noise is down to random processes in the components; it does not include components that you would call interference. So I suppose if you apply the term to astro-photography then items like satellites and aircraft trails and even light pollution should not count as noise. However, I don't think we are using the term here as a precise scientific measure. I tend to agree with Olly's definition that signal is the stuff you want and noise is everything else.

Well, that is the way it is commonly used on most astrophoto forums. However, I can't help but wonder if that is precisely the reason there is so much confusion in most discussions about noise and related topics (stacking etc.).

I mean, using it the right way isn't any harder, and if you use the terms correctly the maths actually works. You have wanted and unwanted signals (dark current, satellites, etc.); all signals have noise; you can add/subtract/etc. signals (the reason why bias/darks/flats work and median/Sigma Clip can remove satellite trails); but the only way to handle noise is to increase the signal. That is pretty much it.
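A small numpy illustration of that bookkeeping, with made-up signal and noise levels: subtracting a dark frame removes its unwanted *signal* cleanly, but the two *noises* add in quadrature, so calibration always costs a little SNR.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
light = 500.0 + 10.0 * rng.standard_normal(n)  # object+sky+dark signal, noise 10
dark = 300.0 + 6.0 * rng.standard_normal(n)    # dark signal only, noise 6

calibrated = light - dark
mean_after = calibrated.mean()    # the unwanted signal subtracts: 500 - 300 = 200
noise_after = calibrated.std()    # but noise grows: sqrt(10**2 + 6**2) ~= 11.7 > 10
```

This is why master darks are averaged from many frames - the calibration noise term shrinks before it gets added to every light.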


I think the problem comes about because of the digitizing of the signal.

I don't know what the true numbers are, but let's say that 100 photons = 1 bit in the converter. For a given pixel, 99 photons will not register. If you have ten frames added, that should have been 990 photons, which could have been 9 bits. However, digitally it will still be represented as 0, because 10 frames of 0 is still 0. So if an exposure of 10 seconds captured 99 photons - i.e. not quite 1 bit - then 10 frames of 10 seconds will produce 0. That pixel will never register a single bit, no matter how many 10-second frames you take. However, if you expose for 100 seconds, then 990 photons will be captured and 9 bits will be registered. That's why we all bust a gut to get long exposures.

OK, I realise that photon capture would not be so predictable, but I'm just illustrating the problem. Now lob on top of that the fact that there is a background light level, and that the sensor is bunging in some random data, and it all gets a bit messy. You are looking for some very small changes, which have some random content, within much bigger numbers that also have a random nature. The target image can fortunately be considered as a periodic waveform (albeit a very complicated one), and therefore stacking can correlate the periodic part and extract it from the aperiodic.
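The 100-photons-per-count example above can be written out directly (the gain is hypothetical, as the post itself says): quantising each short sub before stacking loses everything below the threshold, while one long exposure collects the same photons before quantisation.

```python
# Toy model: assume 100 photons per output count in the converter (hypothetical gain).
photons_per_count = 100
photons_per_short_sub = 99        # just under the quantisation threshold

# Quantise each short sub first, then stack: every sub reads 0, so the sum is 0.
stacked_short = sum(photons_per_short_sub // photons_per_count for _ in range(10))

# One long exposure collects the same 990 photons *before* quantising: 9 counts.
long_exposure = (10 * photons_per_short_sub) // photons_per_count
```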

And that, my liege, is how we know the world to be banana-shaped (Monty Python and the Holy Grail).

Ohhh my head hurts!

cheers

gaj


that 100 photons = 1 bit in the converter

This depends on the gain, and an astro camera would never have this sort of gain - nor even a DSLR. It is quite easy to set 1 photon = 1 bit, or even more, so this digitisation problem does not occur. Check out 'unity gain' for DSLRs - it depends on the model, but it is usually around ISO 400-1600.

NigelM


That's right. With 16-bit digitisation and high quantum efficiency sensors, we're pretty close to counting individual photons, so I don't think we see that kind of quantisation problem; the typical noise contribution ensures that pixels always receive more than enough photons to exceed basic detection thresholds.

Detecting very faint stuff boils down to being able to clearly distinguish the object signal from the noise, in other words achieving a minimum SNR. Some say 3:1 is the absolute minimum SNR for certain detection. Typical observing sites are 'sky limited'; i.e. sky background is overwhelmingly the largest source of noise. At these sites arguably there is little if anything to be gained from long individual sub-exposures; the improvement in SNR from longer exposure rapidly becomes a diminishing return. (An exception would be when using narrow band filters.) On the other hand, at a very dark site like Olly's, sky background signal is very low and other noise sources dominate. In that case, exposure length can be extended much further, with considerable improvement in SNR, before a plateau of diminishing return is reached.
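A rough way to see that trade-off numerically, using a simple shot-noise-plus-read-noise model with made-up rates (all values illustrative, not measurements): for a fixed total time, longer subs only help to the extent that per-sub read noise matters against the sky.

```python
import math

def stack_snr(total_time, sub_length, obj_rate, sky_rate, read_noise):
    """SNR of a stack of subs covering total_time seconds, per-sub read noise included."""
    n_subs = total_time / sub_length
    signal = obj_rate * total_time
    noise = math.sqrt((obj_rate + sky_rate) * total_time + n_subs * read_noise**2)
    return signal / noise

# Bright sky (sky limited): going from 60 s to 600 s subs barely helps.
bright_60 = stack_snr(3600, 60, obj_rate=1.0, sky_rate=100.0, read_noise=10.0)
bright_600 = stack_snr(3600, 600, obj_rate=1.0, sky_rate=100.0, read_noise=10.0)

# Dark sky: read noise dominates short subs, so longer subs gain substantially.
dark_60 = stack_snr(3600, 60, obj_rate=1.0, sky_rate=1.0, read_noise=10.0)
dark_600 = stack_snr(3600, 600, obj_rate=1.0, sky_rate=1.0, read_noise=10.0)
```

Under these assumed rates the bright-sky gain is under one percent, while the dark-sky gain is tens of percent - the "plateau of diminishing return" arrives much later at a dark site.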

I personally found that I could increase sub-exposure times by a factor of at least 10 at Olly's, and still get a lower background count than at my home observatory :eek: . Makes me weep to think about it!

 

Adrian


To be fair, near-unity gain performance applies to high-end (mono) astro cameras. DSLR CMOS sensors suffer some spatial sampling inefficiency due to larger gaps between their photo-sites, and of course they are colour filtered, so quantum efficiency must be quite a bit lower; I've seen references to 20-30%.

Adrian

