
To stack or not to stack: 30 x 1s = 1 x 30s?


Martin Meredith


And an annotated version of that last image, compiled using PDM and SDSS. There are a few obvious galaxies in the SDSS image that are not detectable here, and clearly the SDSS signal is much stronger, able to support spectroscopic analysis of the objects, etc. But still, to be able to pull in some signal from most of what the 2.5-meter SDSS instrument detects using an 8" amateur scope is pretty cool. I wonder what it would take to get enough signal from some of these nearer galaxies to measure a spectrum? BTW, the majority of the 32 labeled galaxies here are part of the Hercules Supercluster at ~550 MLy, but there are a few background objects (they must be giant ellipticals or AGN to be visible) at more like 2 BLy.



Great stuff Alex!

It's quite possible that some of the stacking problems arise because the object is being confused with a fuzzy star. There is some processing in place to try to prevent that, but it may not always work 100% (as with all things image processing...).

Dense star fields will take longer to stack as the star-matching algorithm has to work harder (it does limit the number of stars) - do you know how many stars LL was reporting it was matching (in the stacking stats)?

If you have continuing issues, set LL to export all your exposures to FITS files, save them for a given stack, and PM me - we can sort out transferring them and I can take a look at what might be going on.

I will have to try the shorter exposure times myself next time out - I usually use 30s, but the nearer-to-real-time viewing of 10s exposures is very appealing!


  • 3 weeks later...

I've been reading (again) the excellent Keith Wiley article that Don recommended. Here's a quick summary. He covers three separate but related issues.

First, the question of whether mean and sum stacking are the same. I entirely agree with his argument that sum and mean stacking are the same thing if the internal representation of the data is in floating point. This last part is important and relevant to the current discussion, which is phrased in terms of summing but applies equally well to the mean stacking that I use in my images.
Second, he states that the "classical application of image stacking is to increase the signal-to-noise ratio". This is true, although what isn't mentioned is that stacking is not the only way to achieve this: it can also be done by increasing exposure. Stacking -- to my mind -- is not about increasing SNR per se, but about increasing SNR without making unrealistic assumptions about tracking, without over-exposing, and with the opportunity to reject transient artefacts (i.e. bad subs).
Third, he argues that stacking increases the dynamic range of the signal: too short an exposure risks not recording the faintest information, but pushing up the exposure time to deal with this risks saturation of brighter parts. The solution is to stack subs that are short enough to avoid saturation.
But the issue I'm raising is at the other end of the scale, as Alex points out, and concerns the representation of faint objects.  Wiley also deals with this. I'd like to quote the article here: "you take a bunch of images that are carefully exposed so as not to saturate the brightest parts. This means you honestly risk losing the dimmest parts. However, when you perform the stack, the dimmest parts accumulate into higher values that escape the floor of the dynamic range, while simultaneously increasing the dynamic range as the brightest parts get brighter and brighter as more images are added to the stack".
He then goes on to say "Now, it should be immediately obvious that there is something slightly wrong here. If the raw frames were exposed with a short enough time period to not gather the dim parts at all, because the dim parts were floored to zero, then how were they accumulated in the stack? In truth, if the value in a particular raw falls to zero, it will contribute nothing to the stack. However, imagine that the true value of a dim pixel is somewhere between zero and one. The digitization of the A/D converter will turn that value into a zero, right? Not necessarily. Remember, there is noise to contend with. The noise is helpful here, in that the recorded value of such a pixel will sometimes be zero and sometimes be one, and occasionally even two or three. This is true of a truly black pixel with no real light of course, but in the case of a dim pixel, the average of the Gaussian will be between zero and one, not actually zero. When you stack a series of samples of this pixel, some signal will actually accumulate, and the value will bump up above the floor value of the stacked image, which is simply one of course."
(BTW, this is why an internal representation in floating point is crucial.)
My reading is that stacking can help us overcome an apparent loss of information at the faint end, which in the context of the current discussion presumably permits us to get away with shorter subs than we might have expected, leading to all sorts of advantages for near real-time observing (including the use of longer FLs on alt-az mounts without field rotation, not hanging around for ever collecting darks, seeing the image develop at a faster rate). I guess the question becomes: what is the shortest sub I can get away with for each type of object? 
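
To convince myself of the quoted mechanism, here's a minimal Python sketch (the read noise, signal level and sub count are all invented for illustration). Even though every individual sub is digitised to whole ADUs, a pixel whose true value is 0.4 ADU ends up measurably brighter than a truly black one once enough subs are mean-stacked in floating point:

```python
import numpy as np

rng = np.random.default_rng(0)

read_noise = 1.5   # Gaussian read noise in ADU (assumed value)
n_subs = 1000      # number of short subs in the stack

def stacked_mean(true_signal):
    # Simulate n_subs exposures of a single pixel: add noise, then
    # digitise as the camera would (round, clip below zero).
    subs = true_signal + rng.normal(0.0, read_noise, n_subs)
    return np.clip(np.round(subs), 0, None).mean()

print(stacked_mean(0.0))   # truly black pixel: ~0.6 (clipped-noise floor)
print(stacked_mean(0.4))   # dim pixel at 0.4 ADU: ~0.8, clearly above it
```

The clipping at zero biases both values upwards, but the dim pixel still separates from the floor. And since the mean differs from the sum only by the constant factor n_subs, the same holds for sum stacking.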
Martin

This is the principle of 'dither', which can be used to achieve greater resolution by oversampling. I have tried to use it to get hi-res results from measuring equipment without much success, and various electronics experts told me it was because my noise wasn't truly random.

This assumes that the noise is Gaussian in its distribution - at such low levels I would guess it is more likely to be Poisson or some other distribution. It also needs to be truly random.

To really get this data out effectively you need to add truly random Gaussian noise to the signal BEFORE the A/D converter assigns it a value, and that would mean changing the architecture of the chip.

You also need enough samples so that, when converted back to integer form, the few registered photons are sufficient to push you up out of the noise.

I'm not saying this approach doesn't work, but (1) maximising the number of subs is likely to help and (2) the results may not be as good as simplified explanations of how tiny signals are recorded might suggest.

<edit> It seems CCD read noise does have a Poisson distribution - this means rare random high values, which will act to confound the rare high values from the data.
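
For what it's worth, the necessity of noise before digitisation is easy to see in a few lines of Python (signal level and noise SD invented):

```python
import numpy as np

rng = np.random.default_rng(1)
signal, n = 0.3, 10_000   # sub-threshold signal, number of samples

# Without noise ahead of the A/D, every sample quantises to zero
# and no amount of stacking can recover the signal.
print(np.round(np.full(n, signal)).mean())                 # 0.0

# With noise added before quantisation (classic dither), the mean
# of the digitised samples converges on the true sub-unit value.
print(np.round(signal + rng.normal(0.0, 1.0, n)).mean())   # ~0.3
```

(Real sensors also clip at zero, which biases things, and as noted the noise statistics matter - this just shows the mechanism.)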



Interesting thoughts, thanks. For larger values of the Poisson parameter (call it lambda), the Poisson distribution is pretty close to normal; and since the sum of two independent Poisson variables is also Poisson, with a lambda that is the sum of their lambdas, it follows that as more and more events are summed the distribution becomes more and more like a normal one. Whether this applies in our case is open to debate and will to some extent depend on other noise sources present before digitisation (sky noise, for instance), which may not fall into the very faint category. I have noticed that short subs often don't produce a very smooth histogram in the lower reaches, but sum stacking appears to restore some of the smoothness. There's definitely a need for more analysis of the pros and cons of applying these approaches to near real-time viewing, in the hope of squeezing out every last drop of signal while achieving as rapid an update as possible.
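
A quick numerical check of that convergence (the lambdas are arbitrary), using the fact that the skewness of a Poisson distribution is 1/sqrt(lambda):

```python
import numpy as np

rng = np.random.default_rng(2)

# The sum of independent Poisson variables is Poisson with the summed
# lambda, and skewness 1/sqrt(lambda) shrinks as lambda grows - so
# accumulated counts look more and more normal.
for lam in (0.5, 5.0, 50.0):
    x = rng.poisson(lam, 200_000).astype(float)
    skew = np.mean((x - x.mean()) ** 3) / x.std() ** 3
    print(f"lambda={lam:5.1f}  skew={skew:+.3f}  (theory {lam ** -0.5:.3f})")
```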

Martin


I do find this fascinating, despite a lack of practical experience. For the moment I can see myself being dependent on relatively short subs (no guiding) and therefore I am interested to see what stacking lots of images can achieve.

My camera is 12-bit, so in principle I should be able to stack 16 subs without any loss of resolution even using integer arithmetic, i.e. 16 x 1-minute exposures summed without any FP maths or cleverness should, in principle, be comparable to a single 16-minute sub, subject to the comments on noise sources above.
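
The headroom arithmetic behind that, assuming a plain unsigned 16-bit accumulator:

```python
# 12-bit camera: maximum value per sub is 2**12 - 1 = 4095.
# Sixteen full-scale subs sum to 16 * 4095 = 65520, which still fits
# in an unsigned 16-bit integer (max 65535) - the sum is exact, with
# no FP maths needed until you divide for the average at display time.
print(16 * (2**12 - 1))   # 65520
```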

But if I think about it, to gain a minimum extra count of 1 in the stack, I need one of the 16 frames to register an extra count. That means one pixel needs to randomly register a count for a signal that is only 1/16 of its nominal threshold.

I can see stacking detecting signals at half, or maybe a quarter, of the nominal threshold - but a sixteenth?

I've just done a quick simulation in Excel. I assigned 64 'pixels' a random level of dark noise with a mean of 10 and a standard deviation of 7. I then added in a 'signal' with mean levels of 1, 1/2, 1/4, etc. down to 1/128, and standard deviations of 0.7 x the mean.

I then totalled the 64 readings for each signal level and subtracted the total noise-only reading.

The numbers left represent what was detected of the 'signal', assuming the output resolution had been increased by 6 bits (a factor of 64).

Now I know this is somewhat artificial, but the results are interesting given that the magnitude of the 'noise' was ten times the largest signal.

signal size (input bits)    output signal (two samples)
1                                57  71
1/2                              33  38
1/4                               9  14
1/8                               8   6
1/16                              4   4
1/32                              2   2
1/64                              1   2
1/128                             0   0

OK, this is a gross over-simplification, and everything depends on the standard deviations of the noise and the signal (0.7 for the SD is probably a fair bet), but it does suggest that stacking n frames gives you a sporting chance of detecting a signal of magnitude 1/n, although there will still be some residual noise.
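
If I've read the spreadsheet right (the same 64 noise values reused at every signal level, and each reading rounded to an integer before totalling), the Python equivalent would look something like this:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix = 64   # 64 simulated pixel readings, as in the spreadsheet

# One fixed set of dark-noise values (mean 10, SD 7), shared by every
# signal level so the subtraction cancels it exactly.
noise = rng.normal(10.0, 7.0, n_pix)

for k in range(8):   # signal levels 1, 1/2, 1/4, ..., 1/128
    level = 1.0 / 2 ** k
    signal = rng.normal(level, 0.7 * level, n_pix)
    # round() plays the A/D converter; the noise acts as dither that
    # lets sub-unit signals leak into the rounded totals.
    detected = np.round(noise + signal).sum() - np.round(noise).sum()
    print(f"signal {level:8.5f} -> detected {detected:4.0f}")
```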


  • 1 month later...

To briefly re-open this thread, I just came across this image of M51, taken with (lots of) 1s subs for RGB and 4s subs for luminance -- pretty impressive and shows what can be achieved (in this case by a large Dob on an equatorial platform). I'm guessing these are not live stacked, but in principle they could be. The key ingredient seems to be the low read-noise camera.

Martin


  • 2 weeks later...

Martin,

Just joined SL and am catching up on the threads here. This is an excellent thread - after reading it I went "exactly!". It seems to tie in very well with my findings from using both the Lodestar and the ASI224. Great work.

A couple of comments on Emil's images that you linked above.

The 3 dominating factors for image quality when using a short-sub stacking approach to imaging or EAA are light pollution, read noise and seeing. Emil's results are so good because he gets around all 3. For the M51 he has used a low read-noise sensor, i.e. the 224, to add color data (along with the 174M, which is also relatively low noise - a 16" dob doesn't hurt either for resolution). He images from a rural location. And his approach is similar to lucky imaging, which gets around the seeing issue.

Overall, impressive results that truly demonstrate the value of stacking short exposures. I think as we get more sensors with lower read noise, the EAA/imaging paradigm will change quite dramatically. In theory, if the read noise falls below a certain threshold (say 0.1e- or 0.01e-) there will be absolutely no difference in exposure-time requirements between long and short subs, even for very faint objects, as even that single lonely photon will get recorded (and all your graphs will flatten out).
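
A back-of-envelope illustration of that threshold, using the usual CCD SNR equation (the object and sky rates are made-up numbers):

```python
import numpy as np

def snr(n_subs, total_t, obj=0.05, sky=0.2, read=1.0):
    # SNR of n_subs stacked subs totalling total_t seconds;
    # obj/sky in e-/s per pixel, read noise in e- RMS (all assumed).
    sig = obj * total_t
    return sig / np.sqrt(sig + sky * total_t + n_subs * read ** 2)

for read in (5.0, 1.0, 0.1):
    ratio = snr(60, 600, read=read) / snr(1, 600, read=read)
    print(f"read noise {read:3.1f} e-: 60x10s reaches "
          f"{100 * ratio:5.1f}% of the 1x600s SNR")
```

With these rates the 60 x 10s stack reaches roughly a third of the single-600s SNR at 5e- read noise, ~85% at 1e-, and over 99% at 0.1e- - which is exactly the flattening-out I mean.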

Hiten


Ant,

In my view the key difference between imaging and EAA is the amount of data collected and the image-processing workflow, not the imaging approach. Note that video cameras, even within a single exposure, are stacking multiple frames internally (they just don't have any alignment functionality).

Whether you are doing EAA or imaging, you need to collect sufficient photons for the purpose. For EAA we need to collect just enough photons that we can "view" the object; imaging requires much more data since the quality requirements are higher.

We can collect these photons as one long exposure, or as smaller sub-exposures combined by stacking - for example, one 60s exposure versus 6 x 10s subs. The benefit of the latter approach is that (with the appropriate SW) it gets around the limitations of your mount, e.g. alt-az mounts, which are typically not capable of exposures longer than a few seconds, and imperfectly polar-aligned EQ mounts.

The other critical difference is that in EAA there is no post-processing... you collect the data, view/process it "at the scope", and maybe capture a snapshot for sharing. This is exactly what you do with video cameras when you adjust color/contrast/brightness.

Just FYI, the M51 & NGC891 examples above are representative of imaging, not EAA, but I think Martin was using them to make a technical point about SNR.

Hiten



That M51 image just underlines the fact that I MUST get a ServoCat set up on my Dob. The combination of 450mm aperture at only 1900mm FL plus the Ultrastar should be amazing. Just have to find time to get it done...


Thanks for your kind words, Hiten. I must say the images you and others have shown recently with the ASI224 are really impressive. I'm going to sit this one out though, having had the Lodestar X2 for less than a year, but I'll keep an eye out for any future mono version, by which time maybe someone will have written an ASILive ;-)

Martin


Martin, I think the X2 is very sensitive and a fantastic EAA camera. I use it all the time, especially for going deep and for viewing galaxies where color is less important. A mono version of the 224 sensor would be killer though... I would buy it in a heartbeat.

