
Hey, how do I properly do DSO lucky imaging?



What does the workflow look like? I understand that the exposures are around 500 ms to 1 s. Do I need flats and darks like one would when doing normal DSO astrophotography? I have an f/6 dob and a ZWO 462MC; do I need a coma corrector?


You are also assuming or requiring a very low read noise per pixel and almost zero dark current in order to get the stacking benefits and desired signal to noise ratio in realistic timescales.

Fortunately this is possible from modern CMOS cameras now.


4 minutes ago, skybadger said:

You are also assuming or requiring a very low read noise per pixel and almost zero dark current in order to get the stacking benefits and desired signal to noise ratio in realistic timescales.

Fortunately this is possible from modern CMOS cameras now.

Yeah, I have an asi462, I assume that camera is good enough?


7 minutes ago, skybadger said:

You are also assuming or requiring a very low read noise per pixel and almost zero dark current in order to get the stacking benefits and desired signal to noise ratio in realistic timescales.

Fortunately this is possible from modern CMOS cameras now.

What programs should I use for processing? I've seen people use AutoStakkert for stacking because of its ability to reject distorted frames.


20 minutes ago, JokubasJar said:

What programs should I use for processing? I've seen people use AutoStakkert for stacking because of its ability to reject distorted frames.

Yes, AS!3 would be best I think, as there would be thousands of subs and manually rejecting bad ones would be tedious. Not sure how you would place alignment points; on stars, probably, unless there was some good nebula/galaxy structure in the subs and you could put points on that too. I would definitely take darks, as that wouldn't take long with the short exposures. I probably wouldn't bother with flats given the small sensor of the 462, and for the same reason I wouldn't bother with a coma corrector either.
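Since darks are so cheap at these exposure lengths, building and applying a master dark is straightforward with NumPy-style tools. A minimal Python sketch, using random arrays as stand-ins for real frames (all shapes and values here are hypothetical):

```python
import numpy as np

# Stand-in for 500 short dark frames (e.g. 1 s each, at the same gain
# and temperature as the lights), shape (n_frames, height, width).
rng = np.random.default_rng(1)
darks = rng.normal(100.0, 5.0, size=(500, 64, 64)).astype(np.float32)

# Median-combine: more robust to hot-pixel outliers than a mean.
master_dark = np.median(darks, axis=0)

# Calibrate a light frame by subtracting the master dark.
light = rng.normal(150.0, 5.0, size=(64, 64)).astype(np.float32)
calibrated = light - master_dark
```

With thousands of real subs, the same subtraction would be applied to each frame before quality estimation and stacking.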


21 minutes ago, JokubasJar said:

What programs should I use for processing? I've seen people use AutoStakkert for stacking because of its ability to reject distorted frames.

Yes Autostakkert works well. From my notes of trying it you need to use Image Stabilization = Surface and Quality Estimator = Local.

This thread has an example of an image. The examples I have seen usually have an exposure in the 5s-10s range for DSO. I have used it extensively for double stars and it works well for that too albeit with a much shorter exposure length.


  • 3 weeks later...

This is just a theoretical observation -- I have only used lucky imaging for planets. But...what benefit is there to a 5-second image? That's enough time for a whole bunch of atmospheric turbulence cycles. I mean, yes, seeing distortions are cumulative, but they accumulate very very quickly to an asymptote.

Now, if the goal is to reduce errors from guiding that's not up to the task, or tracking problems, OK, that makes sense. Those errors occur over much longer time scales.

Ready to be educated, here.


38 minutes ago, rickwayne said:

This is just a theoretical observation -- I have only used lucky imaging for planets. But...what benefit is there to a 5-second image? That's enough time for a whole bunch of atmospheric turbulence cycles. I mean, yes, seeing distortions are cumulative, but they accumulate very very quickly to an asymptote.

Now, if the goal is to reduce errors from guiding that's not up to the task, or tracking problems, OK, that makes sense. Those errors occur over much longer time scales.

Ready to be educated, here.

Yes you’re right I think, you’d need ideally much shorter exposures to gain something, but I think it depends on how long the ‘moments of calm’ are during your imaging session. 

My understanding is that planetary imaging relies on very short (<10 ms) subs to 'freeze' the seeing, i.e. exposing for a shorter period than the atmospheric coherence time. For DSO lucky imaging you're using much longer exposures of around 0.5-1.0 s, which aren't short enough to freeze the seeing; instead the hope is to catch the periodic moments of steady, calm air when the seeing improves momentarily (visual observers will be familiar with these brief spells of clarity) and grab a few frames during those periods (the 'lucky' bit).

By including only the best frames from the session you can improve on the resolution normally attained in long-exposure imaging, resolving down to the size of the seeing blur in those steady moments, which could well be under 1". On the flip side, the total exposure time would be low, so the image won't be very deep or smooth; it's only really suitable for brighter targets, or targets where you already have some long-exposure data to combine.
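For the truly 'frozen' short-exposure case there is even a rough formula: Fried (1978) estimated the probability that a single short exposure is near diffraction-limited as P ≈ 5.6 * exp(-0.1557 * (D/r0)^2) for D/r0 above roughly 3.5, where D is the aperture and r0 the atmospheric coherence length. A small Python sketch; the aperture and r0 values below are illustrative, not measured:

```python
import math

def lucky_probability(D, r0):
    """Fried's (1978) approximation for the chance that a short
    exposure is near diffraction-limited; valid for D/r0 >= ~3.5."""
    return 5.6 * math.exp(-0.1557 * (D / r0) ** 2)

# 200 mm aperture on a night with r0 = 40 mm (illustrative numbers):
p = lucky_probability(0.200, 0.040)
print(p)  # roughly 0.11, i.e. about 1 usable frame in 9
```

The 0.5-1 s DSO case doesn't freeze the seeing like this, but it's a softer version of the same selection idea.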
 


1 hour ago, rickwayne said:

This is just a theoretical observation -- I have only used lucky imaging for planets. But...what benefit is there to a 5-second image? That's enough time for a whole bunch of atmospheric turbulence cycles. I mean, yes, seeing distortions are cumulative, but they accumulate very very quickly to an asymptote.

Now, if the goal is to reduce errors from guiding that's not up to the task, or tracking problems, OK, that makes sense. Those errors occur over much longer time scales.

Ready to be educated, here.

Absolutely none :D

Well, actually that is not 100% true.

It provides one with two small benefits:

- it virtually removes the need for precise tracking. Even basic mounts should be able to perform well over a few seconds of exposure (one reason seeing is defined as the FWHM in a 2-second exposure: that takes mount performance out of the equation)

- much better granularity in selecting for smaller FWHM values. Spells of better seeing can last only a few seconds, and over the course of a regular DSO exposure they are averaged out with regular and worse seeing as the FWHM fluctuates about its mean value. If you take really short exposures of 1-2 seconds, you can select those spells of slightly better seeing to get a better overall FWHM after stacking - but that will of course result in many rejected subs because of worse FWHM.
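That granularity argument is easy to illustrate numerically. A toy Python simulation, with the per-sub FWHM series drawn from a normal distribution purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-sub seeing FWHM (arcsec) fluctuating about a 2.0" mean:
# one hour of 1 s subs.
fwhm = rng.normal(2.0, 0.3, size=3600)

# A single long exposure averages the seeing, landing near the mean.
# Short subs let you keep only the best spells instead:
best = np.sort(fwhm)[: int(0.1 * fwhm.size)]  # best 10% of subs

print(fwhm.mean())  # close to 2.0" - what a long sub would see
print(best.mean())  # well under the mean, at the cost of 90% rejection
```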

In any case - I'm not sure that lucky DSO imaging is a valid approach after all. I think that better-resolution images can be achieved with better use of algorithms and careful processing of the data. I believe that better SNR will beat a slightly smaller FWHM in regular stacking - especially if one utilizes "non-regular" stacking to get the best of both worlds: better SNR and a smaller resulting FWHM.
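The SNR side of that trade-off is just square-root arithmetic: stacking N equal subs improves SNR by sqrt(N), so keeping a fraction f of the frames costs a factor sqrt(f) in stacked SNR. For example:

```python
import math

total_subs = 3600         # e.g. one hour of 1 s subs
keep_fraction = 0.10      # best 10% selected by FWHM

snr_all = math.sqrt(total_subs)                   # stack everything
snr_best = math.sqrt(total_subs * keep_fraction)  # stack the best 10%

penalty = snr_all / snr_best
print(penalty)  # sqrt(10), i.e. ~3.2x less SNR for the sharper stack
```

Whether a roughly 3x SNR hit is worth a modest FWHM gain is exactly the question being debated here.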

 


I am tempted to try an experiment -- shoot RGB at normal sub-exposure times and accept that seeing will blur the image a fair bit, but then do as many gigabytes of luminance at video speeds as will fit on the disk. Getting the compute resources just to classify the frames and select the best ones, to say nothing of calibration and integration, would be a challenge. But maybe I could charm the necessary Amazon instances out of the University somehow. I mean, I have the administrative privileges, but jail has even fewer astrophotography opportunities than Wisconsin in December.


6 minutes ago, rickwayne said:

I am tempted to try an experiment -- shoot RGB at normal sub-exposure times and accept that seeing will blur the image a fair bit, but then do as many gigabytes of luminance at video speeds as will fit on the disk. Getting the compute resources just to classify the frames and select the best ones, to say nothing of calibration and integration, would be a challenge. But maybe I could charm the necessary Amazon instances out of the University somehow. I mean, I have the administrative privileges, but jail has even fewer astrophotography opportunities than Wisconsin in December.

That won't work.

I mean - the RGB part will, but luminance at video speeds won't.

It works for planetary because planets are really bright and a single 5-6 ms or even shorter sub contains enough signal for software to be able to align the subs for stacking. If you try that sort of exposure, or even a bit longer - like 20-30 ms - all you are likely to get is a lot of noise and maybe 1-2 brighter stars in the FOV.

Not enough to judge the quality of the sub (how tight the FWHM is - since these stars will be distorted by seeing and polluted by noise) and not enough to compute good alignment information.

In theory, you could do the following:

Take a bunch of very short subs (at video speeds) that fit in some short time frame, then take a brute-force approach: stack, without alignment, all combinations that contain at least some percentage of the subs, and then select the best combination. You will get a 2 s frame that you'll later be able to stack with other such frames.

But that would require massive processing power to do all the calculations in any sensible time frame.
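As a toy illustration of that windowed idea, here is a greedy stand-in for the brute-force search in Python: rank the subs inside one short window by a crude sharpness proxy and stack the best half, without alignment, into a single synthetic longer frame. Everything here (the variance-based proxy, the random arrays, the window size) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one 2 s window of 10 ms subs: 200 frames, 32x32 ROI.
window = rng.normal(10.0, 3.0, size=(200, 32, 32)).astype(np.float32)

# Crude per-sub sharpness proxy (hypothetical): image variance.
sharpness = window.var(axis=(1, 2))

# Greedy approximation to the brute-force combination search:
# keep the sharpest half of the window and mean-stack it.
best = np.argsort(sharpness)[-window.shape[0] // 2 :]
synthetic = window[best].mean(axis=0)  # one synthetic ~1 s frame
```

The true brute-force version described above would score many subsets per window and keep the best-scoring stack, which is where the compute cost explodes.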

