
To stack or not to stack: 30 x 1s = 1 x 30s?


Martin Meredith


Nigel, much obliged for the calculation and suggestion.

I did wonder if things would be different in darker skies (and will have a go at the next opportunity). The upshot, it seems to me -- correct me if I'm wrong -- is that for any given sky brightness there may be little to gain from subs longer than a certain amount, and that this amount might actually be quite short. If so, it would be useful to know the quantity (based on an SQM reading) on any given occasion. I'm going to work through the relevant computations when I get a chance.

Martin



Encouraged by Nigel's remarks, I've carried out a calculation of signal-to-noise ratio (SNR) for various sub lengths, as a function of the sky brightness (SQM) and the magnitude of the target object. This data is for my scope and camera, and assumes seeing of 3" FWHM. I'm happy to explain how I did the calculation if anyone is interested, but essentially I compute the ratio of number of electrons from the target in the central pixel divided by the 4 noise sources (object, sky, dark current, readout) added in quadrature.
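For anyone who wants to play with the idea, here is a minimal sketch of that quadrature model in Python. The electron rates below are invented for illustration only, not measured Lodestar values:

```python
import math

def snr_stack(n_subs, sub_len, obj_rate, sky_rate, dark_rate, read_noise):
    """SNR in the central pixel for n_subs subs of sub_len seconds each.

    obj_rate, sky_rate and dark_rate are electrons/second; read_noise is
    electrons RMS per readout. The four noise sources add in quadrature;
    read noise is paid once per sub, so it enters n_subs times.
    """
    t_total = n_subs * sub_len
    signal = obj_rate * t_total
    noise = math.sqrt(obj_rate * t_total + sky_rate * t_total
                      + dark_rate * t_total + n_subs * read_noise ** 2)
    return signal / noise

# 30 x 1s vs 1 x 30s under bright (SQM ~17) skies -- illustrative rates:
short = snr_stack(30, 1, obj_rate=5, sky_rate=500, dark_rate=0.1, read_noise=10)
long_ = snr_stack(1, 30, obj_rate=5, sky_rate=500, dark_rate=0.1, read_noise=10)
print(f"relative SNR: {100 * short / long_:.1f}%")  # ~92%, in line with the plots
```

When the sky term dominates (bright skies), the extra read-noise terms from the short subs barely matter, which is exactly the pattern in the panels below.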

The first set of plots are for an overall exposure of 30s (my original case). Each panel shows the sky brightness, from SQM 17 (pretty poor) to 22 (pristine). Within each panel I'm plotting the relative SNR, defined as the ratio of SNR of the shorter subs to that of the single longer sub, expressed as a percentage. At 100%, the shorter subs have no detrimental effect on the final image quality. I'm plotting a separate coloured line for a number of sub lengths, from 1s to 30s. 
[attachment: relative SNR vs target magnitude for sub lengths 1s-30s, panels SQM 17-22, 30s total exposure]
My first test in this thread was in SQM 17.7 skies and involved a fairly bright object (M57). The simulation suggests that the 30 x 1s subs will have 90-95% of the SNR of the single 1 x 30s sub. Note that going up to 5s subs gets us to more or less 100% (i.e. no loss of SNR).
We can also see that as the target object gets fainter, the degradation due to decreasing sub length gets greater, up to a point. This backs up Alex's earlier intuition.
Further, as skies get darker (increasing SQM), the detrimental effect of shorter subs also increases, as per Nigel's implied point. Even so, for an overall exposure of 30s we're getting at least 80% of the SNR with 5s subs, and over 90% with 10s subs (light blue line).
The next plot shows a more realistic case of an overall exposure of 300s (similar to some of my later shots in this thread, e.g. post 15). Here we see that, not surprisingly, the degradation caused by the shorter subs is greater: with 10x as many readouts we pick up 10x the read-noise variance compared to the 30s case. My skies were 17.5 on that evening and I used 5s subs (red lines). For these conditions the degradation even for faint targets is quite small: we're operating at 93-95% of the best possible SNR. This ties up with the results themselves in that post.
[attachment: relative SNR plots as above, 300s total exposure]
In my post 22 I had a better SQM reading of 18.7. The plots suggest that fast stacking 5s subs here led to an efficiency of around 85%. Note that 1s subs would be nearer 40% (consider the comparison of Hickson 55 where 1s subs showed a clear deterioration).
In general the simulation ties in with the actual results, so assuming I haven't screwed up the calculations, what might we learn from this?
(1) In poor-average skies (SQM 17-18) and for a 5 minute overall exposure time, the loss in SNR from stacking 5s subs is under 10%, and the loss from 10s subs is negligible, even for very faint objects.
(2) In good skies (SQM 19-20), the efficiency falls progressively to about 70%. Nevertheless, even here doubling the sub length to 10s takes us back to 90%, which might be worthwhile if tracking for 30s is problematic.
(3) In the very best skies (SQM 21-22) there is a clear advantage to longer subs when seeking faint objects. Interestingly, the difference between a stack of 10 x 30s subs and a single 1 x 300s sub is not that great (about 92% of the SNR for the stacked case).
Those remarks need to be taken in the context of near real-time viewing as opposed to astrophotography where a few percent drop in SNR might be considered more important. 
These numbers may change for different apertures, read noise levels, CCD quantum efficiencies etc. but read noise is going to make the biggest difference. To show this, here's what happens in the 300s exposure case if read noise is 1e- rather than the 10e- I've assumed for the Lodestar.
[attachment: 300s exposure case recomputed with 1e- read noise]
Quite an improvement and maybe a view of the future for near real-time work? (I believe some cameras are already at these low read noise levels.) 
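The sensitivity to read noise alone can be seen from the same quadrature model reduced to a ratio (rates here are again invented, purely to show the scaling):

```python
import math

def rel_snr(n_subs, total_t, sky_rate, read_noise, obj_rate=0.5, dark_rate=0.1):
    """SNR of n_subs equal subs relative to one sub of the same total time.

    Rates in e-/s per pixel, read_noise in e- RMS. All values illustrative.
    Only the read-noise term differs between the two cases: it appears
    once for the single sub and n_subs times for the stack.
    """
    def noise(n):
        return math.sqrt((obj_rate + sky_rate + dark_rate) * total_t
                         + n * read_noise ** 2)
    return noise(1) / noise(n_subs)

# 300s total split into 60 x 5s subs:
for rn in (10, 1):
    print(f"read noise {rn:2d} e-: "
          f"{100 * rel_snr(60, 300, sky_rate=10, read_noise=rn):.1f}%")
# ~60% at 10e-, but ~99% at 1e- -- short subs become essentially free.
```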
[added in edit] Note that these are relative SNRs and not actual SNRs. In many cases (e.g. detecting mag 22 objects in 30s on a SQM 17 night) the actual SNR is going to be too low to be useful!
cheers
Martin

Martin,

Nice work building the SNR model - it does tie nicely to your observations. My takeaway from it is that, as a person who does urban EAA (i.e. the skies are ALWAYS poor), my ability to detect faint objects is obviously less than it would be under pristine skies, BUT, in the conditions I have to work with, my results will not be harmed by stacking very short exposures, even down to a few seconds each.

This also leads me to question if I or anyone in my situation really needs to be messing around with an equatorial mount. Given that LL and most other such programs can easily rotate subs while stacking, it seems there's little disadvantage to using a computerized alt-az mount, since there's no worry of visible field rotation within a single 5 second sub.


Thanks for the link Guy. 

Alex, I'd be very interested to hear of your experiences if you do try. Re EQ, although my mount (AZEQ6) supports both modes, I've never got round to using it in equatorial mode, and I routinely use quite lengthy exposures (30s at 800mm FL) which LL handles well. I have a 10" f6.3 mirror lying around that I hope to turn into a dedicated secondary-less EAA platform at some point, which will provide a significantly longer FL, and I'm hoping that shorter subs will be the way to go with that one. Failing that I could always polar align, but as I don't have an observatory I try to keep setup time to a minimum (and shorter subs for darks helps a lot here too…).

Martin


Martin - excellent plots. I have been meaning to do something like this for years and never got round to it.  I think they should be displayed every time someone in the imaging forum insists that you cannot do "real" DSO imaging with 30s subs!

NigelM


  • 2 weeks later...

Martin,

At long last I think I'm going to have a chance to try my hand at your fast stacking technique.  Work, travel, short summer nights, and the fact that June is the cloudiest month of the year around here have kept me out of the EAA game for a long while now!

If you have a chance, could you post the exact parameters you use for stacking in LL?  For example: There are a couple of possible noise reduction modes - do you use them?  Where do you set the maximum allowable pixel drift?  Do you always use mean stacking?

Thanks

Alex


Hi Alex

For the runs in this thread (and mainly nowadays post v0.10) I used mean stacking. I have a suspicion that median wouldn't work as well since it doesn't deliver the optimal SNR improvement, but I might be wrong.

What I find is critical -- and I only discovered this on Friday during a very long stack aimed at detecting the outer halo of M51 (*) -- is to adjust the Max Pixel Displacement. In the earlier runs I hadn't done this and found the stacked image undergoing a sudden deterioration after say 30-40 subs. Actually, this is described in Paul's users' guide when he talks about key frames, but of course I'd only skimmed the guide a year or so ago… So what I do is monitor the mean displacement value (under Status in the stacking tab) and increase the max pixel displacement to a value greater than this. I was doing this incrementally (every 10-20 subs) on Friday and haven't had the chance to see whether it is OK to set it once initially at a large value. It may be that setting too large a tolerance initially could lead to incorrect stacking, I suppose.

I'm not sure what you mean by noise reduction? Do you mean the nonlinear display modes? I mainly use x^0.25, occasionally experimenting with arcsinh (on the same object). Once you get used to finding the sweet spot on the brightness scale with these two modes there's no going back to linear.

I hope you get a chance to try it out and post some results. 

cheers

Martin

(*) In my M51 test I got up to 100+ subs before calling it a night, though in total I collected 250 of the same object for later post-processing in Nebulosity (hence posted in deep sky and not here). 


Martin,

Thanks for your suggestions (and nice M51 image).  The other control I was referring to in the stacking window was the "output filter", which has the possible options below:

[attachment: screenshot of the LL "output filter" options]

I think I understand your point about monitoring the max pixel displacement.  If each newly-stacked image is "indexed" back to the pixel map of the original sub, then the distance that subsequent subs have to shift to be aligned should grow over time (and faster if one's polar alignment is not good).  I wonder (question for Paul I suppose) how much more computationally expensive the alignment is when given a longer leash to find an optimal stack position?  If my supposition is true then guiding would be valuable even in this context of very short subs, no?


Hi Alex

Ok, no, I never use those options as I don't really like the smoothed look. Processing M51 in Nebulosity made me realise how tempting it is to sharpen though -- that is, until I saw what a mess it tends to make of the stars.

Your description of indexing back to the original frame is also my understanding (and ties up with Paul's documentation). One thing I'm not sure about is why the mean pixel displacement appears to grow more rapidly when using fast stacking, as it ought to grow at the same rate regardless (e.g. the change in image over 3 x 10s = the change over 1 x 30s). Perhaps it doesn't, and I've been using longer overall exposures than usual, or have never noticed the change in keyframe and accompanying image degradation before. But it is something I intend to keep an eye on from now on. Setting an initially high value doesn't seem to slow anything down, but I haven't tested it carefully. My worry is whether it increases the chances of misalignment.

I've never considered guiding as my aim is to cut everything back to the minimum kit necessary in general (I often observe away from the house so the less gear to cart around the better). I doubt guiding is really necessary in this style of observing, and less so for short subs. What would be really interesting would be to get the sub rate down to equal the slowest guide cam update rate and somehow use the Lodestar simultaneously as an imager and a guider. 

regards

Martin


Martin,

I had that same thought at one time - could you somehow have LL and e.g. PHD "looking" at the same Lodestar image data?  One intriguing possibility (probably would require a single app doing both stacking and guiding) would be to have the sequence:

1. Capture "key" frame

2. Capture next frame

3. Align next frame to stack

4. Compute tracking imprecision based on pixel shift required to align frame stack

5. Send command to mount to move to correct tracking error

6. Go to 2.

This would not be helpful for doing long-exposure subs, obviously, but should make "fast-stacking" for an almost unlimited time duration possible, without requiring really exact polar alignment.  I'll note this to Paul for his backlog of development ideas :-)
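Step 4 of the sequence above is essentially a plate-scale conversion. A minimal sketch (the 8.2 micron pixel size and 800 mm focal length are just example figures, roughly matching the kit discussed in this thread):

```python
def plate_scale(pixel_um, focal_mm):
    """Image scale in arcsec per pixel: 206265 arcsec/radian, with
    pixel size in microns and focal length in millimetres."""
    return 206.265 * pixel_um / focal_mm

def correction_arcsec(shift_px, pixel_um, focal_mm):
    """Convert a stacking alignment shift (dx, dy in pixels) into the
    pointing correction (arcsec) to feed back to the mount."""
    scale = plate_scale(pixel_um, focal_mm)
    return tuple(s * scale for s in shift_px)

# ~8.2 um pixels at 800 mm focal length -> ~2.1 arcsec/pixel
dx, dy = correction_arcsec((1.5, -0.8), pixel_um=8.2, focal_mm=800)
```

Mapping the correction onto RA/Dec pulse commands would still depend on camera orientation, which a real implementation would have to calibrate, much as a guiding program does.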

Alex


Hi Alex,

There is no real difference in computational effort between aligning a slightly misaligned image and a significantly misaligned one; the majority of the computation time goes into matching the stars in the image and transforming the new image to match the orientation of the stack (it uses Lanczos resampling). These costs are constant no matter the magnitude of the alignment.

The max pixel displacement is mainly a feature for alt-az mounts (or badly polar aligned EQ ones) to keep field rotation in check. As new images are always aligned to the first, after a while there is a big difference between the first and the latest (due to field rotation), so the new image won't contribute much to the stack as there isn't much overlap. So when the max displacement is reached, the stack is re-aligned to the new image, which becomes the new key frame, and subsequent images are aligned to the stack again.

The guiding idea is interesting and not something I have thought of, but a good one. Even alt-az would benefit as sometimes objects drift so the system could instruct the mount to re-align. You would need to use small exposures though. I'll certainly be thinking about it for the future!  :grin:

Sadly given the way USB works LL and PHD can't share the camera - it would be nice! Although maybe I could make LL 'repeat' the lodestar data and spoof a virtual camera - then PHD talks to that?!

The main developments at the moment:

1. Exposure colour channel masking - this allows you to use filters and combine multi-spectrum exposures (think the hubble palette) (V0.13)

2. Invert the image filter (V0.13)

3. I'd like to get a sharpen filter in there... I've experimented with a simple one but it was rubbish - needs more work (V0.14)

4. Allow multiple instances of the app to control multiple cameras (V0.14)

Paul


Hi Paul,

Thanks again for the great work on LL! It's been cloudy here so I haven't been able to try it yet but hope to soon!

Could you add a manual midpoint/grey point adjustment for the histogram in a future version? Miloslick for Mallincams implements a version of this which allows for a pretty large sized histogram so you can see small changes pretty easily, but without taking too much interface space up. Thank you in advance for considering this!

Cheers!

Greg A


Paul,

Thanks for the preview. I like the plate solver idea, especially if you could somehow start it with an approximate direction to constrain the area of the sky it was trying to search. The ones I've tried on line take a long time if they have to search for your image in the whole sky. I suppose seeding it with an approximate image scale could help constrain the search as well.

The other thing I really hope to try someday is a larger chip. I've been coveting the SX694, which has a 16mm diagonal chip with 4.5 micron pixels. It has very good QE, and I'm thinking if it can bin 2x2 "on chip" then it essentially becomes a giant Lodestar X2M - a 1.5 MP array of 9 micron pixels. So if you could put that binning feature somewhere in the development backlog... :-)

Alex


The guiding idea is interesting and not something I have thought of, but a good one. Even alt-az would benefit as sometimes objects drift so the system could instruct the mount to re-align. You would need to use small exposures though. I'll certainly be thinking about it for the future!  :grin:

Paul,

One possible shortcut to a similar effect might be to align the stack to the new sub, rather than the other way around.  In other words, rather than establishing the first sub as the key and mapping all subsequent subs back to it in pixel space, for each new sub compute the alignment of sub to preceding stack as you do now, but then update the coordinates of the stack in pixel space to match the new sub.  That way the next sub is never more than one exposure-length of drift displaced from the stack, rather than having that displacement factor continually growing over time.
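A toy calculation makes the difference between the two schemes clear (the steady 0.4 px of drift per sub is an invented figure):

```python
# Simulate 100 subs drifting a constant 0.4 px per sub.
drift_per_sub = 0.4
positions = [i * drift_per_sub for i in range(100)]

# Scheme A -- fixed key frame: every sub is registered back to the first,
# so the shift the aligner must find keeps growing.
fixed_shifts = [p - positions[0] for p in positions]

# Scheme B -- rolling reference: the stack is re-registered to each new sub,
# so each alignment only ever spans one sub's worth of drift.
rolling_shifts = [positions[i] - positions[i - 1] for i in range(1, 100)]

print(max(fixed_shifts))    # ~39.6 px -- grows without bound
print(max(rolling_shifts))  # ~0.4 px  -- bounded by the per-sub drift
```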

Alex 


So my experiments with "fast-stacking" were fairly pleasing.  I found that, for whatever reason, if I go shorter than about 10 sec subs, the stacking in LL starts to fall behind the image capture and eventually gives up.  So everything I did tonight was at 10 sec per sub, with 20 x 10 sec dark frames recorded first.

For those who are interested, all of this was done with my c8, 0.5X focal reducer, Astronomik CLS CCD LP filter, under white zone urban light pollution (Boston USA).  Cloudless skies, fair transparency, fair seeing at best, 1/4 waxing moon.

First up was the Draco triplet, NGC 5981, 5982, 5985.  I was pleased to see structure coming out in the face-on spiral (5985).  The blue line points to a mag 17.5 PGC galaxy that also came in nicely.  50x10 second subs.  I really enjoyed watching these images slowly getting better and better as the subs stacked up.

[attachment: Draco triplet NGC 5981/5982/5985, 50 x 10s]


To try the method out on an "easy" target, and one I'm very familiar with from a visual standpoint, I next looked at M13.  This is just 15 x 10 second subs.  I found that the arcsinh display option was really superior for this very bright object, whereas for galaxies I always get better results with x^0.25.

[attachment: M13, 15 x 10s]

This is a nice view for an urban astronomer, although I have to confess to preferring the direct view through my 18" newt under darker skies...


I find that the stacking algorithm struggles on globular clusters, especially this small one.  I had to take 20 subs to get 15 to stack, and at that point it looked to be on the edge of stopping altogether.


As a direct comparison to Martin's beautiful results under much darker Spanish skies, here's my best attempt at M51, 50 x 10 sec.

[attachment: M51, 50 x 10s]

I very much enjoyed watching the dark dust lanes come into sharper and sharper view as the subs accumulated.  Really drives home the "real-time" appeal of doing this.


Low surface brightness galaxies are the hardest thing for me to do in my light polluted environs.  The arms of the large but faint barred spiral NGC6140 are just barely visible after 40 subs.

[attachment: NGC 6140, 40 x 10s]


Finally I headed over to an area in the Hercules Supercluster centered on NGC6061.  I need to spend more time looking carefully at this image along with Martin's Pretty Deep Maps, but just at first glance there are a couple dozen galaxies visible here down to mag ~18 (referring to the equivalent SDSS image obtained through astrometry.net).  60 x 10 sec.

[attachment: NGC 6061 field in the Hercules Supercluster, 60 x 10s]

Still blows my mind that this can be done at all under visual mag ~3.3 skies...


Hi Alex

Thanks for all these shots, which are very informative and tie in with some things I've been seeing using this technique. I have to agree, it is amazing to pull out mag 17.5 galaxies under mag 3.3 skies with a half-moon.
Technically, for me the most interesting aspect of your images is the clear noise artefact you're getting when you go up to a large number of subs. This is aligned with the hot pixel drift -- linear in your case due to EQ mount I assume. You can see a similar artefact on some of my shots (e.g. the 64x10s case for the Shakhbazian 10 group). My artefacts are always curved, following field rotation. 
I've been thinking about what might cause these lines. It is interesting that they don't really show up in short stacks, and one (positive) reading is that they represent a kind of noise that is normally buried under sky noise, and which only shows up once stacking has beaten that noise down.
I believe it has something to do with imperfect dark current calibration -- maybe the bias component. If I take a bunch of darks and stretch to absolute limits I see some horizontal artefacts. Suppose dark subtraction is not perfect (too few darks, for example): after subtraction some low level of structured noise remains, invisible beneath the other noise sources. With stacking, this noise emerges (along with the galaxies!) and, since it has structure, it isn't reduced by taking the mean.
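A quick simulation shows the mechanism (the 1e- residual pattern and 10e- per-sub sky noise are invented numbers, chosen just to show the scaling):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 10_000
pattern = 1.0     # residual fixed-pattern amplitude after imperfect dark subtraction (e-)
sky_sigma = 10.0  # random sky/shot noise per sub (e- RMS)

def pattern_contrast(n_subs):
    """Fixed-pattern amplitude relative to the random noise left in a mean stack.
    The pattern is identical in every sub, so the mean preserves it, while the
    random noise averages down by sqrt(n_subs)."""
    residual = rng.normal(0, sky_sigma, (n_subs, n_pix)).mean(axis=0)
    return pattern / residual.std()

print(f"{pattern_contrast(1):.2f}")    # ~0.1: pattern buried under sky noise
print(f"{pattern_contrast(400):.2f}")  # ~2.0: pattern stands clear of the noise
```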
By moving into very long stacks, we are transitioning into AP territory without perhaps taking the care over calibration that APers do. The seams are showing! But by further study of calibration we can probably (eventually) deal with these issues. Time to reach for Berry and Burnell again. Meanwhile, I'm just enjoying the "nearer" real-time view!
BTW, I was surprised that LL doesn't keep up with sub-10s subs for these types of images which are not particularly star-rich. Perhaps this is a function of the processor? On my MacBook Air (1.7 GHz Intel Core i5) I can usually go as low as 2s without problems.
What I do find is that stacking of very dense star fields can -- occasionally -- be problematic. This is a separate (not fast stacking) issue and I have some data from last night so I'll bring it up in a separate thread.
cheers
Martin

Martin,

Thanks for the feedback.  I too noticed the linear "scratchiness" artifact (and yes I'm using an EQ mount).  As you note, it's only really objectionable when images are stretched to near the breaking point. Perhaps more dark subs are needed.  Or perhaps I'm not using them correctly?  I literally just pushed the "dark frames" button, ran 20 exposures, then switched back to light frames, uncapped the scope, and started collecting/stacking.  I'm not sure if there's a way to "stack" the darks to make a better master dark, or if perhaps that happens automatically?  

I am also surprised that my system has trouble stacking 5 sec subs - I'm running a MacBook Pro with 3.1 GHz i7 chip.  I am running PHD at the same time as my e-Finder / sometimes guider, which does I'm sure chew up some resources.  I can try quitting it first to see if that helps.  The stacking issue with dense star fields may be related to the difficulty I was having stacking NGC6229.  The brightest part of the core is not much bigger than one of the brightest field stars, and I was wondering if it was being captured as a fuzzy "star" by the stacking algorithm and making it hard to match.  Question for Paul I suppose.

Interesting counterpoint that M13 stacks beautifully.

Alex


