Optimum resolution - "new recommended value" and examples



There are a few recommendations around the web regarding the optimum planetary resolution with respect to aperture of telescope.

I've found values x2, x3 and x3.3 pixels per airy disk.

While playing with ImageJ, generating different Airy disk patterns (obstructed, clear aperture, different sizes) and examining the results, I've come to the following conclusion:

Optimum resolution for planetary imaging, when wavelet post-processing is included, is x4.8 pixels per Airy disk diameter :D

My conclusion is based on the following: x2 sampling (per the Nyquist criterion for 2d sampling on a rectangular grid) of the cut-off frequency - the frequency at which a perfect Airy pattern's MTF drops to effectively zero.

If we look at the following graph of attenuation versus spatial frequency (the well known MTF curve, which is a 1d cross section of the power spectrum of the Airy pattern, for clear and obstructed apertures)

[image: bely-4.22.jpg]

(dashed line - clear aperture, solid line - obstructed aperture)

we can see that attenuation reaches about 99% as we approach the cut-off frequency - frequencies very close to cut-off survive at only ~1% of their original strength. For visual observation that is significant, because the contrast loss at those frequencies is severe, but when frequency restoration and enhancement (effectively contrast boosting) is applied via a wavelet algorithm, such a frequency can easily be restored by multiplying it by a factor of ~100 - something the eye can't do, but a computer can with the appropriate algorithm.

Now, in my simulation of the Airy disk influence, I've found that for a 200mm aperture with 25% central obstruction the cut-off wavelength is ~0.5333" - meaning the optimum sampling rate would be ~0.2666"/pixel, which, relative to the Airy disk size of ~1.28" (510nm light), equals x4.8 pixels per Airy disk diameter.
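These numbers are easy to reproduce with a few lines of Python (a sketch, assuming 510 nm light and 206265 arcseconds per radian; the theoretical cut-off comes out at ~0.526", slightly shorter than the ~0.533" measured here):

```python
ARCSEC_PER_RAD = 206265

def airy_diameter_arcsec(aperture_m, wavelength_m):
    # Angular diameter of the Airy disk: 2 * 1.22 * lambda / D (radians -> arcsec)
    return 2 * 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

def cutoff_period_arcsec(aperture_m, wavelength_m):
    # Shortest angular period passed by the aperture: lambda / D (radians -> arcsec)
    return wavelength_m / aperture_m * ARCSEC_PER_RAD

d = airy_diameter_arcsec(0.2, 510e-9)   # ~1.28" Airy diameter for 200mm
p = cutoff_period_arcsec(0.2, 510e-9)   # ~0.53" cut-off period
print(d, p, p / 2, d / (p / 2))         # Nyquist sampling ~0.26"/px, ratio 4.88
```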

To demonstrate these effects I've made a "virtual recording" of Jupiter (original high resolution image used: http://planetary.s3.amazonaws.com/assets/images/5-jupiter/20120906_jupiter_vgr1_global_caption.png)

sampled at x3pp, x4pp, x4.8pp, x5pp and x6pp (relative to the Airy diameter) from the high resolution image convolved with the Airy disk PSF. The images therefore represent a perfect 25% obstructed optical system with no seeing influence.
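The blur step of such a "virtual recording" can be sketched in numpy (illustrative only: the pupil radius and grid size here are arbitrary, and a real simulation has to match the PSF scale to the image scale):

```python
import numpy as np

def airy_psf(n=256, pupil_radius=32, obstruction=0.25):
    """Airy PSF of a circular aperture with a central obstruction,
    computed as |FFT(pupil)|^2 (Fraunhofer diffraction), normalized to sum 1."""
    y, x = np.indices((n, n)) - n // 2
    r = np.hypot(x, y)
    pupil = (r <= pupil_radius) & (r > pupil_radius * obstruction)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil.astype(float)))) ** 2
    return psf / psf.sum()

def blur(image, psf):
    # Circular convolution via FFT - blurs the "perfect" image with the PSF
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))
```

Blurring a delta image returns the PSF itself, which is a quick sanity check of the convolution.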

This is the reference image (no convolution, sampled at 6pp):

[image: reference.png]

And this is the result of convolution without frequency restoration (sampled at 6pp):

[image: jupiter.png]

Next I sampled at the different resolutions, applied wavelets to each, and rescaled for comparison (scaling was done with bicubic interpolation). Here are the results at 6pp, 5pp, 4.8pp, 4pp and 3pp, in that order from left to right:

[image: Montage.png]

It can be seen that the first 3 images look almost identical, while x4pp starts to show a little softening and x3pp (the usual recommendation for planetary) shows a distinct lack of detail compared to the others.

I also made a zoomed crop of the section around the GRS - this zoom uses nearest neighbor scaling (so individual pixels can be seen) - to evaluate small scale differences:

[image: Montage_zoom.png]

In the x4pp image contrast starts to suffer, and x3pp lacks both features and contrast, while the first 3 (6, 5, 4.8pp) show virtually no difference.

So, there ya go, it looks like x4.8 is the way to go with planetary imaging :D

 


I'm not versed in such things at all, but thinking what this means in practical terms for me...

I have just read that the linear size of the Airy disc depends only on the focal ratio (0.00124 mm times the focal ratio), and that its angular size is 260/diameter-in-mm arcsec (http://www.bbastrodesigns.com/AiryDisk.html). I don't know if that's accurate to begin with, but I wanted to work through the above in terms of what it might mean for the two scopes and one of the cameras that I own.

 

For example, my 12" 1520mm (f/5-ish) - I calculate airy disc as 0.0062 mm, 0.85". I've seen a theoretical resolving power value of 0.38", which is approximately half of that value.

4.8 pixels per airy disc would imply a linear pixel size of 6.2um/4.8 = 1.3um, or an angular pixel scale of 0.85"/4.8 = 0.177 arcsec/pix.

My camera, Datyson T7M, has 3.75um pixels, or 1.65pp. A 2x barlow takes me to 3.31pp, and a 3x barlow to 4.96pp. Suggesting perhaps a 3x barlow.

For my scope and camera (1520mm, 3.75um), I get a natural pixel scale of 0.51 arcsec/pix, x2 barlow to 0.25 arcsec/pix or x3 barlow to 0.17 arcsec/pix.
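This arithmetic follows from the standard pixel-scale formula, which can be wrapped in a small helper (a sketch; the numbers are the ones quoted above):

```python
def pixel_scale_arcsec(pixel_um, focal_length_mm, barlow=1.0):
    # 206.265 converts micrometres-per-millimetre to arcseconds
    return 206.265 * pixel_um / (focal_length_mm * barlow)

# 1520 mm scope with the 3.75 um Datyson T7M
print(pixel_scale_arcsec(3.75, 1520))      # ~0.51 "/px native
print(pixel_scale_arcsec(3.75, 1520, 2))   # ~0.25 "/px with a 2x barlow
print(pixel_scale_arcsec(3.75, 1520, 3))   # ~0.17 "/px with a 3x barlow
```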

 

For my DIY scope, 215mm 1635mm, f/7.6-ish...

AiryDisk - 0.0094mm, 1.209". 4.8pp implies 1.96um or 0.25".

Same camera, 2x barlow, 5.01pp or 0.237 arcsec/pix.

 

 

So using the Datyson T7M camera in my f/5 12", I should use a 3x barlow and if I were to keep my 8.5" f/7.6, a 2x barlow.

Is this how I should be going about applying what you've written above? I've not taken into account any central obstruction, and I seem to be rather oversampling (though I've read that's "good" for planets). Is the fact that both of these end up around f/15 just a coincidence? At 0.17 arcsec/pix for the 12", I get a field of view of 3.6' x 2.7' - that might be fun centering Jupiter.

Or am I misunderstanding this completely?


Yes, that would pretty much be it.

You take the Airy disk for blue (highest frequency, shortest wavelength) and find its size in arc seconds: the formula 1.22 * lambda / aperture gives the angle to the first minimum in radians (both lambda and aperture in meters), so multiply by 2 for the diameter and convert to arc seconds. Once you have the Airy disk diameter, divide by 4.8 - that gives the resolution you should be sampling at, and from that you calculate the required FL.

So for the first scope you mentioned:

2 * 1.22 * 0.000000450 / 0.3  radians = ~ 0.755" diameter of airy disk in blue (450 nm)

this should give you resolution of ~ 0.1573"/pixel.

For a camera with 3.75um pixels, the FL required for this resolution would be ~4940mm, so you are looking at a x3.25 barlow. You will not miss much by using a x3 barlow (no need to sample blue to the max - you can target green instead; blue will be undersampled a bit but it will not make too much difference).
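The recipe just described, in code form (a sketch; the small difference from the ~4940 mm quoted above is rounding in the intermediate values):

```python
ARCSEC_PER_RAD = 206265

def airy_diameter_arcsec(aperture_m, wavelength_m):
    # 1.22 * lambda / D is the angle to the first minimum; x2 for the diameter
    return 2 * 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

d = airy_diameter_arcsec(0.3, 450e-9)     # ~0.755" in blue for a 12" scope
scale = d / 4.8                            # ~0.157 "/px target sampling
fl_mm = 206.265 * 3.75 / scale             # focal length needed for 3.75 um pixels
print(d, scale, fl_mm, fl_mm / 1520)       # ~4900 mm, i.e. a ~3.2x barlow
```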

Now, this is all more theoretical than practical - it is the maximum achievable resolution, assuming perfect optics, perfect seeing and frequency reconstruction (with wavelets or something else).

As such it should be a guide to the maximum usable resolution, above which there is simply no point in going higher. In a real use case you might not even want to go that high - it requires exceptional seeing, and as sampling resolution increases, SNR per frame goes down - so you need plenty of good frames in the stack to recover SNR to levels acceptable for frequency reconstruction.

I did all of this analysis because I was always intrigued by the fact that there are so many different "optimal" resolutions out in the "wild", and I wondered which one is correct. Now, x2 per Airy disk is somewhat related to visual use - it is how the x50 per inch (or x2 per mm of aperture) rule for maximum useful magnification is derived, together with the average human eye resolution of 1 arc minute. But the eye can't do what computers do with frequency reconstruction - we can't make the eye amplify certain frequencies; it sees what it sees. That, I guess, is the reasoning behind x3 and x3.3: people realized that sampling at higher resolution does bring out more detail after wavelets (and there is a certain article that does a similar analysis to the above, but for some unknown reason quotes x3 for 2d Nyquist and misinterprets the cut-off wavelength that arises from the Airy PSF).

Central obstruction plays no part in the resolution limit for imaging. If you look at the MTF graph, an obstructed aperture has the same cut-off frequency; the only impact is somewhat stronger attenuation of certain spatial frequencies (larger details have a bit less contrast than in an unobstructed scope - this matters for visual, which is why you often hear that refractors, having no CO, give better contrast on planets). With frequency reconstruction this is of no importance.

 


Ooops :D

I just realized something. x2, x3 and x3.3 sampling was given in relation to airy disk radius, not diameter.

My example was talking about x4.8 of the Airy disk diameter. This means the actual value comparable with those listed is x2.4 of the Airy disk radius. So yes, if you feel confident in my analysis (and I'll post some more info to back all of this up tomorrow - it's getting a bit late now), use that resolution - it will give you better conditions for planetary imaging than the other quoted "optimal resolutions". It will provide more detail than x2, and higher SNR per frame than x3 and x3.3 (but the same amount of detail - the maximum possible for the given aperture).

 


Vlaiv,

Glad you found that error....

I think you need to revisit Suiter's "Star Testing Astronomical Telescopes". On p61 he presents a simplistic analysis of the Airy disk based on the Heisenberg uncertainty principle and comes up with an answer close to 1.22 * lambda / D.

I have yet to see an astronomical image showing the Airy disk. (On the bench a pinhole optical system is usually used to generate the Airy disk images normally seen in the texts.) It needs at least an f/30 system and absolutely perfect conditions. Suiter's work is based on VISUAL observation at high magnifications.

What we record is the PSF of the seeing disk.

Re sampling, see Suiter p41 and the MTF discussion in Chapter 3.4. Eversberg and Vollmann, in their "Spectroscopic Instrumentation" (p76), discuss at some length the issues of effective sampling - starting with the Nyquist criterion - and advocate a sampling rate of at least 3. (This again is based on the PSF rather than the "absolute" Airy disk.)

I'd also refer you to Schroeder's "Astronomical Optics". 

 


7 hours ago, Merlin66 said:


Hm, I did a similar analysis recently - Heisenberg vs the Huygens-Fresnel principle - and it turned out there is a numerical mismatch, but I have not looked at Suiter's example; I will have a look.

As far as I understand, a circular aperture in a telescope (obstructed or clear) produces the Airy pattern as its PSF at the focal plane, for a point source aligned with the optical axis. In classical terms (Huygens-Fresnel) this is because the aperture plane contains an infinite number of points acting as sources of spherical waves (all in phase, because the incoming wavefront hits them in phase), and the interference of those waves at the focal plane gives rise to the Airy pattern.

On the topic of Airy PSF vs Seeing PSF

- we are indeed applying wavelets to a stacked image that has been under the influence of seeing, but the final image, thanks to lucky imaging, has two blur components: the Airy PSF and a Gaussian distribution from the seeing. Each stacked frame was blurred by a particular seeing PSF, different for every frame, but when we align and stack such frames the central limit theorem applies - the sum of the individual random seeing PSFs tends toward a Gaussian shape. The telescope's Airy PSF, by contrast, is applied identically to every frame.

A property of a Gaussian PSF is that it attenuates high frequencies but has no cut-off point - all frequencies remain present and can therefore be restored by frequency restoration, since the power spectrum of a Gaussian PSF is itself Gaussian (the FFT of a Gaussian is a Gaussian). So the cut-off that applies is still the Airy PSF cut-off, and all the seeing-attenuated frequencies can be restored given enough frames.
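That claim is easy to check numerically (a 1d sketch: for a well-resolved Gaussian the DFT closely matches the analytic transform, which is again a Gaussian with sigma_f = n / (2 * pi * sigma)):

```python
import numpy as np

n, sigma = 1024, 8.0
x = np.arange(n) - n // 2
g = np.exp(-x**2 / (2 * sigma**2))                  # spatial Gaussian PSF
G = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g))))
G /= G.max()

sigma_f = n / (2 * np.pi * sigma)                   # analytic spectral width
model = np.exp(-x**2 / (2 * sigma_f**2))            # Gaussian model of the spectrum
print(np.max(np.abs(G - model)))                    # ~0: the spectrum is Gaussian, no hard cut-off
```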

On the matter of sampling and the Nyquist criterion in 2d - it is simple: on a square grid you need a x2 sampling rate (on a rectangular grid, x2 in x and x2 in y, but on a square grid the units are the same in both) to record any plane wave component. Imagine a sine wave along the x direction (constant in y) - this is the same as the 1d case, so x2 sampling records it; likewise for a sine wave along y. For a sine wave in an arbitrary direction, the projections onto the x and y axes have longer wavelengths, so each component has a lower frequency than a wave aligned with x or y and is properly sampled at x2.

[image: image.png]

So 2d Nyquist is clearly the same as 1d: on a square grid we need 2 samples per shortest wavelength we want to record.
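The argument can be restated numerically (a sketch; frequencies are chosen on exact FFT bins to avoid spectral leakage):

```python
import numpy as np

n = 64
kx, ky = 20, 15                      # integer frequency bins for a clean FFT
fx, fy = kx / n, ky / n              # per-axis frequencies, cycles per sample
f = np.hypot(fx, fy)                 # frequency along the wave's own direction
assert f < 0.5 and fx < 0.5 and fy < 0.5   # everything stays below Nyquist

yy, xx = np.indices((n, n))
wave = np.sin(2 * np.pi * (fx * xx + fy * yy))   # diagonal plane wave
spec = np.abs(np.fft.fft2(wave))
iy, ix = np.unravel_index(spec.argmax(), spec.shape)
print(ix, iy)                        # peak lands at (kx, ky) (or its conjugate bin)
```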

Now, I did not do my analysis mathematically, but I will present what I've done and how I came to the conclusion, which I believe is supported by the example above.

I generated the Airy PSF from the aperture, both clear and obstructed (25%), and made the Airy disk 100 units (pixels in this case) in diameter. Here is a screen shot of the clear aperture PSF (normal image, stretched image to show the diffraction rings, profile plot, and log profile plot for comparison with the Airy profile):

[image: Screenshot_5.jpg]

This is what the obstructed aperture looks like (same things shown in the same order):

[image: Screenshot_2.jpg]

You can clearly see energy going into the first diffraction ring - the cause of lower contrast when observing visually.

Now let's examine what the MTF graph looks like in both cases.

First the clear aperture:

[image: Screenshot_6.jpg]

And the obstructed one:

[image: Screenshot_1.jpg]

If you compare these with the graph in the first post, you will see that they match.

Now the question is: at what frequency/wavelength does the cut-off occur? We started with an Airy PSF whose disk diameter is 100 units. I did not calculate the cut-off but rather "measured" where it occurs. This image uses a log scale, so the cut-off point is where the central bright region ends:

[image: Screenshot_4.jpg]

"Measurement" tells us that the cut-off is at roughly 40.93 pixels per cycle. This means our sampling needs to be 40.93 / 2 = ~20.465 units.

How big is that in terms of the Airy diameter? The Airy diameter is 100 units, so dividing the two gives 100 / 20.465 = ~4.886.

So in this case we get a factor of ~x2.45 of the Airy disk radius for the proper sampling resolution.
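This measurement can be reproduced with a short numpy sketch (assumed parameters: a 512-pixel grid and a clear pupil of radius 32 px; the MTF is the pupil autocorrelation, so its support extends to 2R frequency bins):

```python
import numpy as np

n, R = 512, 32                                  # grid size, pupil radius in pixels
y, x = np.indices((n, n)) - n // 2
pupil = (np.hypot(x, y) <= R).astype(float)     # clear circular aperture

psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2        # Airy PSF
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
mtf /= mtf.max()

support = np.hypot(x, y)[mtf > 1e-6].max()      # cut-off radius, ~2R bins
cutoff_period = n / support                      # shortest period, px per cycle
airy_diameter = 2 * 1.22 * n / (2 * R)           # PSF first-zero diameter, px
print(cutoff_period / 2, airy_diameter / (cutoff_period / 2))  # sampling, ratio ~4.88
```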

When I did my original post I also measured this and found a rounded-off multiplier of x4.8 relative to the diameter, i.e. x2.4 per radius. So the true value is somewhere in the range x2.4 - x2.45. A rigorous mathematical analysis would be needed for the exact value, but I guess x2.4 is good enough as a guide - most people will deviate from it slightly anyway, since there is only a limited number of barlows out there :D

So to recap:

going x2 Airy disk radius - undersampling (you might miss out on some detail, though really not much - this is the 4pp example from the first post)

going x2.4 Airy disk radius - seems to be proper sampling resolution for planetary lucky imaging (if you buy into all of this :D )

going x3 or x3.3 - you will be oversampling, losing SNR with no additional detail gain (this is the 6pp image from the first post)

 

 


  • 1 year later...
5 hours ago, MarsG76 said:

Cool experiment.. so basically you blurred the original image using convolution and brought back the detail using wavelets?

 

Yes, the exact workflow was:

- Take original image and blur it with Airy pattern of certain aperture (and I included 25% central obstruction)

- Resample each copy of original blurred image to certain sampling rate

- Apply wavelets to each copy

- Resize images to the same size for easier comparison


Interesting work as ever @vlaiv. In a slightly different context, the paper "Detector sampling of optical/IR spectra: how many pixels per FWHM?" (https://arxiv.org/abs/1707.06455) may be of interest. While its focus (pun intended) is on slit spectrographs, planetary features can be pseudo-linear.

It quotes sampling in the range of 3-6 pixels per FWHM.

Regards Andrew

 


Just now, andrew s said:


Let's do a comparison to see what sort of numbers we get and whether the two match.

My proposal is to go for 2.4 pixels per Airy disk radius. The Airy disk radius is 1.22 * lambda / d, while the Gaussian approximation of the Airy pattern has a sigma of ~0.42 * lambda / d.

So the number of pixels per Gaussian sigma will be 2.4 * 0.42 / 1.22 = ~0.82623.

Sigma and FWHM are related by a factor of ~2.355, so pixels per FWHM should be ~1.946. This is close to my recommended value for deep sky imaging, which uses a "discard the 10% most attenuated frequencies" approach and a Gaussian approximation of the star profile; the value derived there is 1.6 pixels per FWHM.
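The conversion chain, written out (a sketch; sigma ~ 0.42 * lambda / d is the usual Gaussian fit to the Airy core):

```python
pixels_per_airy_radius = 2.4
# Airy radius = 1.22 * lambda / d; Gaussian fit to the core has sigma ~ 0.42 * lambda / d
pixels_per_sigma = pixels_per_airy_radius * 0.42 / 1.22
# Gaussian FWHM = 2 * sqrt(2 * ln 2) * sigma ~ 2.355 * sigma
pixels_per_fwhm = pixels_per_sigma * 2.355
print(pixels_per_sigma, pixels_per_fwhm)   # ~0.826 px/sigma, ~1.946 px/FWHM
```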

However, it is quite different from the 3-6 pixels per FWHM recommended in that paper. I'll have to look a bit deeper into the paper to see whether there is a particular reason the authors recommend those values, or whether I'm simply wrong somewhere :D

 


22 minutes ago, vlaiv said:


I think the main difference is that they are looking at errors in specific measurements, e.g. line positions, widths etc. This, I think, is more demanding.

For example from the paper "the HARPS planet-finder spectrograph (Mayor et al, 2003) uses 3.2 pixels/FWHM".

Regards Andrew 


28 minutes ago, andrew s said:


Yes indeed - there are a couple of differences between sampling the LSF in a spectrograph and planetary imaging.

You pointed out one significant difference - finding the line wavelength or line width matters in spectroscopy, but in planetary imaging we are interested in the image alone. Wavelength and line width are sub-pixel features, and pixel size (and phase) have a significant impact on the precision of measuring them.

The other difference is the LSF vs the Airy PSF. The Airy PSF is a band limited signal - there is a definite, hard cut-off point in the frequency domain imposed by the physics of the system - so it is a perfect fit for the Nyquist criterion for band limited signals. The LSF, on the other hand, can vary in shape, is often dominated by seeing (and hence Gaussian/Moffat in nature) and possibly "clipped" by the entrance slit. Such a function is probably not band limited relative to its FWHM, which is quite a bit larger than the aperture's Airy disk diameter. The authors raise this concern in the abstract; here is a quote:

Quote

The common maxim that 2 pixels/FWHM is the Nyquist limit is incorrect and most LSFs will exhibit some aliasing at this sample frequency

Aliasing happens when the signal is not band limited, or when you sample below twice the maximum frequency. Sampling replicates all frequency components; for a band limited signal these replicas ("harmonics") fall out of band and can safely be removed, because we know the signal is band limited. With a signal that is not band limited, the replicas of the high frequency components overlap the sampled signal, causing aliasing artifacts.

In the case of spectroscopy I certainly believe a higher sampling rate is beneficial, as discussed by the authors; to get the exact sampling rate needed, one should consider the shape of the LSF and pixel blur in relation to the required feature detection (line CWL, widths, etc ...).


  • 1 year later...

In practical terms, would this mean that I could try capturing e.g. Jupiter with a Skymax 180 and an ASI178 with good prospects of adequate sampling?

According to the FOV calculator in Astrotools, the Dawes limit is 0.64" and the sensor resolution 0.18"/px, which would mean a bit over 3x oversampling, if I haven't messed up the units?

N.F.

 


7 hours ago, nfotis said:


The diameter of the Airy disk for a 180mm scope is 1.43", so the optimum sampling rate would be ~0.3"/px.

With the ASI178 you will be at 0.18"/px - oversampling by almost a factor of two.

That is perfect if your ASI178 is the colour model; if it is mono, I would recommend that you bin x2.
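A quick check of these numbers (a sketch, assuming 510 nm light, the ASI178's 2.4 um pixels and the Skymax 180's 2700 mm focal length):

```python
ARCSEC_PER_RAD = 206265

aperture_m, wavelength_m = 0.18, 510e-9
airy_d = 2 * 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD   # ~1.43"
optimal = airy_d / 4.8                                            # ~0.30 "/px
native = 206.265 * 2.4 / 2700                                     # ~0.18 "/px
print(airy_d, optimal, native, optimal / native)   # sampling ~1.6x finer than optimal
```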


  • 2 months later...

Hi @vlaiv,

I really like your analysis and examples. I just started thinking about this, so here is a question:

In the post above you recommend double the sampling rate for a colour camera compared to a mono camera. Is this related to the Bayer filter these cameras use? Can you explain the argument for the doubled sampling rate?

Clear Skies


7 minutes ago, alex_stars said:


Hi, yes, it's related to the Bayer matrix. Under ideal conditions one would sample at twice the rate and then split the mosaic into RGB components, giving twice as many G subs as R and B. Processing would then be much like mono + filters.

The alternative is to do the full processing at twice the sampling rate and then reduce the image size by a factor of 2 at the end for the sharpest result (many people don't do this and tend to leave their images oversampled, which gives a blurry looking image - I don't like it, but most don't seem to mind).
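The "split into RGB components" step might look like this (a sketch assuming an RGGB Bayer layout; actual patterns vary by camera):

```python
import numpy as np

def split_bayer_rggb(frame):
    """Split a raw RGGB mosaic into R, G1, G2, B planes at half resolution.
    Each plane can then be processed much like a mono+filter channel."""
    return (frame[0::2, 0::2],   # R  - even rows, even columns
            frame[0::2, 1::2],   # G1 - even rows, odd columns
            frame[1::2, 0::2],   # G2 - odd rows, even columns
            frame[1::2, 1::2])   # B  - odd rows, odd columns

raw = np.arange(16).reshape(4, 4)          # toy 4x4 "mosaic"
r, g1, g2, b = split_bayer_rggb(raw)
print(r)                                    # [[0, 2], [8, 10]]
```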


Thanks @vlaiv for the explanation. With my current setup (180 Mak and an ASI224MC) I can get about 0.3"/px without a barlow and 0.15"/px with my barlow. So that should be just about optimal sampling.

Now I can experiment with the processing and try your suggested processing path for very sharp results. I also prefer sharp images to large ones.

Clear Skies!

 


On 04/07/2019 at 11:24, vlaiv said:


Hi @vlaiv,

I have another question on your workflow. When you applied the wavelet filter to each copy, did you use the same settings for all images, or did you "optimize" for each sampling resolution? If you applied the same wavelet settings to all resolutions, did you optimize them on the highest sampling (x6pp)?


13 minutes ago, alex_stars said:


It was quite long ago, but if I remember correctly I did each one separately - until I got the best looking image.

Btw, I've since managed to find the theoretical value, and it shows I was very close with this simulation and measurement.

f_c = 1 / (lambda * F#)

This is the expression for the cutoff frequency, found here: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency

Lambda is expressed in millimeters and F# is telescope F/number

Since the Airy disk size (radius to the first minimum) is given as:

X = 1.22 * lambda * F#

We can see that the maximum spatial frequency relates to the Airy disk radius by a factor of 1.22, so the Nyquist sampling rate relates to the radius by a factor of 2.44, and to the Airy disk diameter by a factor of 4.88 - very close to my measured value of 4.8.
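Written out as a one-screen check (lambda in millimeters, as in the linked article; the f/10 system is just an example, the ratio is independent of it):

```python
lam_mm, fnum = 510e-6, 10.0              # 510 nm light in an example f/10 system
f_c = 1.0 / (lam_mm * fnum)              # cut-off spatial frequency, cycles/mm
sample_mm = 1.0 / (2.0 * f_c)            # Nyquist sampling period on the sensor
airy_radius_mm = 1.22 * lam_mm * fnum    # linear radius to the first minimum
print(airy_radius_mm / sample_mm,        # 2.44 samples per Airy radius
      2 * airy_radius_mm / sample_mm)    # 4.88 samples per Airy diameter
```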

