I like people who admit they don't know, because I don't know either :-) "Don't know" is the start of understanding.

I read Keith Wiley's explanation of why stacking works, and I don't agree :-) He says "photons that entered the telescope and accumulated in the CCD", when actually it's electrons that accumulate. OK, semantics. He then explains that by multiple sampling (stacking) the random noise fluctuations average out. That's right, but they also average out if you simply accumulate the electrons in the CCD and read them out once, at the end of the period over which you would otherwise stack. If you consider a single pixel, as he does, there's no difference, just as there's no difference in signal-to-noise between averaging and summing, as he rightly explains. If sampling at short intervals could improve S/N then, reductio ad absurdum, you could improve S/N indefinitely by ever finer sampling.

I started low-light imaging 9 years ago with true video CCDs, the Wat-120 and 120+. With those the image accumulated within the CCD itself, which also averaged out random electron noise, but it was sampled continuously at 50 Hz (PAL). Of course the dynamic range was limited to 8 bits, much less than with digital readout.

When I searched the forum I found a nice demonstration by Martin Meridith, "To stack or not to Stack", which I think shows that there is indeed no difference. The only case where I suppose stacking can help with noise is when the image is moving over the sensor. Apart from correcting for such movement, stacking might also reduce noise by sampling with different pixels.

I'm not asking anyone to justify stacking. Obviously it's practical, it works, and SL is a nice application. I would just like to know a little more about how stacking is implemented :-)
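
To make my point above concrete, here is a quick toy simulation (Python/NumPy) of a single pixel. It has nothing to do with how SL actually implements stacking, and all the numbers (flux, exposure times, read noise) are invented for illustration. With pure shot noise it gives essentially the same S/N whether you take one long exposure or sum many short sub-exposures; set read_noise above zero and the stack of short reads comes out slightly worse, because each readout adds its own noise.

```python
# Toy single-pixel comparison: one long on-chip accumulation vs. a stack of short subs.
# Purely illustrative assumptions; not SL's actual algorithm.
import numpy as np

rng = np.random.default_rng(42)

flux = 5.0          # assumed signal rate, electrons per second per pixel
total_time = 100.0  # total integration time in seconds
n_subs = 100        # number of short sub-exposures to stack
n_trials = 20000    # repeat the experiment many times to estimate mean and scatter
read_noise = 0.0    # electrons RMS per readout; try 5.0 to see the penalty of many reads

sub_time = total_time / n_subs

# One long exposure: a single Poisson draw of the accumulated electrons plus one readout.
long_exp = rng.poisson(flux * total_time, n_trials) \
           + rng.normal(0.0, read_noise, n_trials)

# Stack of short exposures: n_subs Poisson draws, each with its own readout, summed
# afterwards (averaging instead of summing rescales signal and noise identically).
subs = rng.poisson(flux * sub_time, (n_trials, n_subs)) \
       + rng.normal(0.0, read_noise, (n_trials, n_subs))
stacked = subs.sum(axis=1)

for name, data in [("single long exposure", long_exp), ("stack of short subs", stacked)]:
    snr = data.mean() / data.std()
    print(f"{name:22s} mean={data.mean():8.1f}  sigma={data.std():6.1f}  S/N={snr:5.1f}")
```

With read_noise = 0 both cases come out at S/N ≈ sqrt(flux × total_time) ≈ 22, which is just the shot-noise limit either way.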