
High resolution deep sky imaging


alan4908


Yeah, it's a shame! Maybe some very clever person in the future will figure out some magical way of acquiring higher resolution images from simple telescopes. Still, I've plenty to get on with trying to overcome the challenges of normal imaging!

Louise


Yep, you'd need waveguides (read optical fibres) to get the light signal from both scopes to the camera.

Maybe if someone invents a light PHASE camera rather than a light INTENSITY camera, one day this may become possible.

Done already: holography. But there's little use, since even such a camera won't beat seeing.

Slightly off topic.

Closer to topic: when Göran and I processed images from the Liverpool Telescope last year, we always had quite fat stars. The scope images at 0.3"/pixel (20 m FL and 15 um pixels, binned 2x2). Even on a mountain top high above the clouds, terrestrial scopes and cameras are limited by seeing.

Since all sources that introduce "fuzziness" (decrease resolution) add up in much the same way that noise sources add up, each source adds a little to the overall fuzziness. And if you can control some sources, it's better not to let them be the limit. I.e., it's better to oversample (and fine-tune guiding) and let the seeing set the resolution than to undersample and let the camera set the resolution.

One other experience I gained from the LT exercise was that, at least in PixInsight, it's easier to process round stars than blocky stars. Also in favour of oversampling.
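To illustrate the "blur sources add like noise sources" point above, here's a rough sketch assuming the contributions combine in quadrature (exact for Gaussian profiles, and a common approximation otherwise; the example numbers are made up):

```python
import math

def total_fwhm(*sources):
    """Combine independent blur contributions (arcsec FWHM) in quadrature,
    the same way independent noise sources add."""
    return math.sqrt(sum(s ** 2 for s in sources))

# Hypothetical budget: seeing dominates, so the smaller camera and guiding
# contributions barely move the total - hence "don't let them be the limit".
seeing, guiding, sampling = 2.0, 0.5, 0.7
print(round(total_fwhm(seeing, guiding, sampling), 2))  # -> 2.18
```

With a 2" seeing term, even removing the 0.5" guiding contribution entirely only improves the total from about 2.18" to about 2.12".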


3 hours ago, wimvb said:

Closer to topic: when Göran and I processed images from the Liverpool Telescope last year, we always had quite fat stars. The scope images at 0.3"/pixel (20 m FL and 15 um pixels, binned 2x2). Even on a mountain top high above the clouds, terrestrial scopes and cameras are limited by seeing.

Since all sources that introduce "fuzziness" (decrease resolution) add up in much the same way that noise sources add up, each source adds a little to the overall fuzziness. And if you can control some sources, it's better not to let them be the limit. I.e., it's better to oversample (and fine-tune guiding) and let the seeing set the resolution than to undersample and let the camera set the resolution.

One other experience I gained from the LT exercise was that, at least in PixInsight, it's easier to process round stars than blocky stars. Also in favour of oversampling.

I agree that being oversampled is good, since this lets you capture those images when the seeing is better than average, and deconvolution allows you to recover the image in those areas with a high SNR. My question is how oversampled should you be in order to gain this benefit but not lose too much SNR?

If I take my current imaging set up, the diffraction limited angular resolution (in arc seconds) is given by 2.5E5 x wavelength/diameter. If I put in an average wavelength for green light of 540nm and my scope diameter of 150mm then I get 2.5E5 x 540E-09/150E-03 = 0.9 arc seconds. If I take the Stan Moore recommendation of sampling at x3.5 for a CCD then I need to sample at 0.26 arc seconds per pixel. (Sorry @vlaiv, I don't understand how to work out the corresponding number using your analysis - could you please comment?) Given that I'm currently at 0.7 arc seconds per pixel, the finest scale I should sample at is 0.26 arc seconds per pixel. However, if I go down to this diffraction limit then I will presumably be giving up too much SNR, which doesn't seem a good idea.
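The arithmetic above can be checked with a couple of lines. (The 2.5E5 constant is the Rayleigh criterion's 1.22 multiplied by ~206265 arcsec per radian, rounded.)

```python
# Diffraction-limited resolution and the Stan Moore x3.5 sampling recommendation.
wavelength = 540e-9   # average green light, metres
diameter = 150e-3     # aperture, metres

resolution = 2.5e5 * wavelength / diameter   # arcsec
sampling = resolution / 3.5                  # arcsec/pixel

print(f"{resolution:.2f} arcsec, sample at {sampling:.2f} arcsec/pixel")
# -> 0.90 arcsec, sample at 0.26 arcsec/pixel
```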

Alan 


2 hours ago, wimvb said:

Closer to topic: when Göran and I processed images from the Liverpool Telescope last year, we always had quite fat stars. The scope images at 0.3"/pixel (20 m FL and 15 um pixels, binned 2x2). Even on a mountain top high above the clouds, terrestrial scopes and cameras are limited by seeing.

Interesting. The biggest telescope in mainland Europe is about 50km from me. The ability to see its domes is a measure I use of air transparency (though obviously that doesn't tell me anything about looking up - just sideways :icon_biggrin:).
I occasionally check their website as they record lots of real-time stats. One of them that comes online occasionally is their pro-quality seeing measurement.
As a reference point, this is a screen capture I grabbed in June 2016. It's V-band data; their IR seeing tends to be a little better. (Attached screen capture: calar-alto-seeing-09jun2016.jpg)


2 hours ago, alan4908 said:

I agree that being oversampled is good, since this lets you capture those images when the seeing is better than average, and deconvolution allows you to recover the image in those areas with a high SNR. My question is how oversampled should you be in order to gain this benefit but not lose too much SNR?

If I take my current imaging set up, the diffraction limited angular resolution (in arc seconds) is given by 2.5E5 x wavelength/diameter. If I put in an average wavelength for green light of 540nm and my scope diameter of 150mm then I get 2.5E5 x 540E-09/150E-03 = 0.9 arc seconds. If I take the Stan Moore recommendation of sampling at x3.5 for a CCD then I need to sample at 0.26 arc seconds per pixel. (Sorry @vlaiv, I don't understand how to work out the corresponding number using your analysis - could you please comment?) Given that I'm currently at 0.7 arc seconds per pixel, the finest scale I should sample at is 0.26 arc seconds per pixel. However, if I go down to this diffraction limit then I will presumably be giving up too much SNR, which doesn't seem a good idea.

Alan 

Based on my calculations, here is what I would recommend as important resolutions for 150mm scope:

Critical sampling rate should be: 0.35"/pixel in green (510nm) - no additional detail can be captured with finer sampling, because of diffraction of light. According to my research the appropriate value is x4.8 per Airy disk diameter, or x2.4 per Airy disk radius (again, not x2, x3 or x3.3 as often quoted). There is a thread in the planetary imaging section that explains how I arrived at this figure (again, Fourier transform and MTF based on the Airy disk).

For long exposure astro photography, if you for example achieve average FWHM of 2", recommended resolution for being able to do deconvolution should be:

PI / sqrt( -2 * ln(fraction) / (FWHM / 2.355)^2 ) - here for fraction you should put 0.01, and for FWHM you should put 2" - that gives:  0.88"/pixel

At the moment, I recommend putting 0.1 as the fraction if you don't plan to do deconvolution (it will give you a coarser sampling resolution), and 0.05 down to 0.01 depending on how much deconvolution you plan to do - based on SNR; with good SNR, go with a lower value. These numbers are not entirely based on theory; they are common-sense values (not confirmed) - I concluded that your SNR must be really high in order to boost the highest frequencies by a factor of 100, and maybe that is even too high when I think about it. For planetary imaging, where the target is bright enough (SNR of 5-20 per frame x a couple of hundred frames gives SNR over 100 - something you rarely see in deep sky photos), sometimes you can boost the highest frequencies by a factor close to 100 before noise becomes obvious; on other occasions boosting over 20 starts to show noise (probably from stacking fewer frames, like 100-200, making SNR closer to 50).

So sort of common sense suggests that one should use a value of 1/SNR, or close to that, for the fraction.
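Vlaiv's expression above, transcribed directly into a small function (FWHM in arcsec; the 2.355 factor converts FWHM to the Gaussian sigma):

```python
import math

def recommended_sampling(fwhm, fraction):
    """Pixel scale (arcsec/pixel) at which the Gaussian PSF's spectrum has
    dropped to `fraction` of its peak at the sampling cutoff frequency."""
    sigma = fwhm / 2.355
    return math.pi / math.sqrt(-2 * math.log(fraction) / sigma ** 2)

print(round(recommended_sampling(2.0, 0.01), 2))  # -> 0.88, as quoted above
print(round(recommended_sampling(2.0, 0.1), 2))   # -> 1.24, coarser, no deconvolution
```

Note how the no-deconvolution fraction of 0.1 gives a noticeably coarser (larger) pixel scale for the same 2" seeing.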


13 hours ago, vlaiv said:

Based on my calculations, here is what I would recommend as important resolutions for 150mm scope:

Critical sampling rate should be: 0.35"/pixel in green (510nm) - no additional detail can be captured with finer sampling, because of diffraction of light. According to my research the appropriate value is x4.8 per Airy disk diameter, or x2.4 per Airy disk radius (again, not x2, x3 or x3.3 as often quoted). There is a thread in the planetary imaging section that explains how I arrived at this figure (again, Fourier transform and MTF based on the Airy disk).

For long exposure astro photography, if you for example achieve average FWHM of 2", recommended resolution for being able to do deconvolution should be:

PI / sqrt( -2 * ln(fraction) / (FWHM / 2.355)^2 ) - here for fraction you should put 0.01, and for FWHM you should put 2" - that gives:  0.88"/pixel

At the moment, I recommend putting 0.1 as the fraction if you don't plan to do deconvolution (it will give you a coarser sampling resolution), and 0.05 down to 0.01 depending on how much deconvolution you plan to do - based on SNR; with good SNR, go with a lower value. These numbers are not entirely based on theory; they are common-sense values (not confirmed) - I concluded that your SNR must be really high in order to boost the highest frequencies by a factor of 100, and maybe that is even too high when I think about it. For planetary imaging, where the target is bright enough (SNR of 5-20 per frame x a couple of hundred frames gives SNR over 100 - something you rarely see in deep sky photos), sometimes you can boost the highest frequencies by a factor close to 100 before noise becomes obvious; on other occasions boosting over 20 starts to show noise (probably from stacking fewer frames, like 100-200, making SNR closer to 50).

So sort of common sense suggests that one should use a value of 1/SNR, or close to that, for the fraction.

Vlaiv

Many thanks for your very detailed answer. Your analysis does suggest that there will be little chance of improving the resolution of my system by going to smaller camera pixels unless my FWHM is significantly decreased. For instance, if I take the lowest FWHM measured in a single frame of 600s duration - 1.49 arc seconds - assume that I will use deconvolution on areas of high SNR, and so use a fraction of 0.01 in your formula, I obtain a value of 0.7 arc seconds/pixel - precisely where I am at the moment.

Therefore the only (small) chance of increasing my current resolution would appear to be to go for much shorter sub-frame durations that would occasionally result in a lower FWHM - DSO lucky imaging. This implies that if I went down this path then I would need to search for a new camera with a much higher QE. However, the exposure required to make lucky imaging effective (less than 10s?), together with the condition that I need a very high SNR to make deconvolution effective, implies that this would work only on very bright parts of a DSO.

Alan 


2 hours ago, alan4908 said:

Vlaiv

Many thanks for your very detailed answer. Your analysis does suggest that there will be little chance of improving the resolution of my system by going to smaller camera pixels unless my FWHM is significantly decreased. For instance, if I take the lowest FWHM measured in a single frame of 600s duration - 1.49 arc seconds - assume that I will use deconvolution on areas of high SNR, and so use a fraction of 0.01 in your formula, I obtain a value of 0.7 arc seconds/pixel - precisely where I am at the moment.

Therefore the only (small) chance of increasing my current resolution would appear to be to go for much shorter sub-frame durations that would occasionally result in a lower FWHM - DSO lucky imaging. This implies that if I went down this path then I would need to search for a new camera with a much higher QE. However, the exposure required to make lucky imaging effective (less than 10s?), together with the condition that I need a very high SNR to make deconvolution effective, implies that this would work only on very bright parts of a DSO.

Alan 

Quite so. I'm planning a little experiment, and I'll make sure I share the results. My plan is to image a very high surface brightness target - M57 in this case - under the best possible conditions from my location (at the zenith, using narrow band filters) and to collect as many subs as possible (at least 4h in each of Ha and OIII). I'm going to do this with an 8" scope and an ASI1600 at native resolution, 0.5"/pixel (I usually bin my images taken at native resolution to get to 1"/pixel). I'm going to try to show that even "non lucky" DSO imaging can capture detail up to the critical resolution for a given scope. So my approach will be 60s subs, and two "special" algorithms - supersampled stacking + wavelet decomposition for frequency restoration (prior to stretch; I'll probably add a histogram stretch to the app as a screen transfer function, to be able to see how much each sub-band needs boosting).

So I aim for final resolution of 0.25"/pixel.

Capture itself is not going to be a problem, but there is quite a bit of work on the software side to get it all together.
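This isn't vlaiv's actual algorithm (which isn't spelled out in the thread) - just a minimal sketch of the supersampled-stacking idea: place each sub onto a finer grid before averaging. Alignment and the wavelet/frequency-restoration steps are omitted, and all numbers are made up:

```python
import numpy as np

def supersample_stack(subs, factor=2):
    """Minimal sketch: upscale each sub by an integer `factor` (nearest
    neighbour, i.e. pixel replication) and average. A real implementation
    would align dithered subs on the fine grid, which is what actually
    recovers sub-pixel detail."""
    h, w = subs[0].shape
    stacked = np.zeros((h * factor, w * factor))
    for sub in subs:
        stacked += np.kron(sub, np.ones((factor, factor)))  # replicate pixels
    return stacked / len(subs)

# Fake 64x64 "subs": flat sky of level 100 with Gaussian noise.
subs = [np.random.normal(100.0, 5.0, (64, 64)) for _ in range(10)]
fine = supersample_stack(subs)
print(fine.shape)  # -> (128, 128)
```

Stacking 60s subs this way at 2x gives the final 0.25"/pixel grid mentioned below from 0.5"/pixel native data.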


3 hours ago, alan4908 said:

I need a very high SNR to make deconvolution effective, implies that this would work only on very bright parts of a DSO.

No matter what camera you use, deconvolution always works best on the brightest structures. For these, you can safely replace fewer and longer subs with more and shorter subs. Just stay well above the read noise of your camera, and don't decrease the total integration time. This is the realm where cooled CMOS shines.

The main drawback with dso lucky imaging is that for this to work, you need hundreds of subs. If you want quality results, there's no shortcut; you can't decrease the total integration time. At 10 to 30 MB each (16 bit x 5 to 15 Mpixel), each imaging night will use a big chunk of storage space, and stacking software must be able to handle the large amount of data.

Even if you plan to keep just the best subs, you'd still need to evaluate them all. And you'd probably still need a few hundred in the final stack.
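The storage arithmetic above is easy to sketch (the sensor size and sub count here are hypothetical, picked from the ranges mentioned):

```python
# Back-of-envelope storage for one night of short-sub imaging.
megapixels = 12            # sensor size, Mpixel (mid-range of 5-15 quoted above)
bytes_per_pixel = 2        # 16-bit data
subs_per_night = 500       # "hundreds of subs"

sub_mb = megapixels * bytes_per_pixel        # MB per sub
night_gb = subs_per_night * sub_mb / 1024    # GB per night
print(f"{sub_mb} MB per sub, ~{night_gb:.1f} GB per night")
# -> 24 MB per sub, ~11.7 GB per night
```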

And (to get back on topic), you need a large-aperture, fast scope to collect those photons, especially with small pixels. The imaging efficiency of a setup is determined by

a (D/L)^2

where a is pixel area, D is aperture, and L is focal length. For a certain pixel scale, this translates to

(rD)^2

where r is the pixel size divided by the focal length (i.e. proportional to the pixel scale). For a higher-resolution rig (smaller r), you need a larger aperture to compensate for the loss in efficiency. There's always a tradeoff between efficiency and resolution.
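A quick numerical check of the a (D/L)^2 and (rD)^2 relation above - the two hypothetical setups below share the same pixel scale, and doubling the aperture at fixed r quadruples the efficiency:

```python
def efficiency(pixel_um, aperture_mm, focal_mm):
    """Relative photon-collecting efficiency a*(D/L)^2, where a is pixel area."""
    return pixel_um ** 2 * (aperture_mm / focal_mm) ** 2

def pixel_scale(pixel_um, focal_mm):
    """Arcsec per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# Hypothetical setups with identical pixel scale (~0.78"/pixel):
a = efficiency(3.8, 150, 1000)   # small pixels, 150 mm aperture  (~0.32)
b = efficiency(7.6, 300, 2000)   # bigger pixels, double aperture (~1.30)
print(pixel_scale(3.8, 1000), pixel_scale(7.6, 2000))  # same scale
print(b / a)  # -> 4.0: double D at fixed r gives 4x the efficiency
```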


On 17/05/2018 at 09:20, wimvb said:

I think that once camera technology gets there, we can all safely ditch guiding. (And unfortunately that nice mount of yours will then be overkill. :wink:)

Hmm, when doing planetary imaging you can usefully use a pixel scale about 1/3 of the theoretical resolution (sort of Nyquist+), but you generally need exposures in the tens of milliseconds to defeat the seeing, which kind of rules out DSOs until the 'zero noise camera' is invented (just after the perpetual motion machine).


Archived

This topic is now archived and is closed to further replies.
