
Aperture vs Seeing



Hi all,

In a different post I have been discussing the pros and cons of three different Newtonians of varying aperture and focal ratio for imaging DSOs, galaxies, comets etc., when two people posted saying that extra aperture is pointless unless you have perfect seeing, and mentioned that they had very similar results with their 5"-6" apos vs their large reflectors (14" ODK).

I thought this sounded a bit odd, and wasn't entirely sure whether this was a bias towards refractors or a genuinely valid comment, so I wanted to ask the good folk of SGL without starting a refractor vs Newtonian war.

For the three Newts I was looking at, the resolution of the setup with my CCD would range between 0.89" and 1.11" per pixel.

So say I selected the 10" F5 Newt @ 1250mm FL (0.89" per pixel), is this going to be useless in average seeing conditions?

Cheers - Rich.

 




The best answer I can offer comes in terms of some formulae and theory that I put together during my research on the topic. I'm aware that some people will disagree with this, and others will recommend practice over theory (a "just try things and figure out what works best" kind of approach), but in my view that is often not feasible.

Here is a bit of theory on the matter. I'm going to be discussing the maximum possible detail attainable with any given setup under given conditions. This sets the optimum sampling rate for that setup under those conditions, but it does not imply that different setups will not work; they will, and they do, as many amateurs out there work with all sorts of setups.

The actual detail in the image is closely related to the PSF of the system (the star profile) in a long exposure. This profile depends on a couple of things, most notably aperture size, seeing conditions, and mount tracking/guiding precision. In a perfect optical system, a circular aperture produces an Airy disk pattern from a single star, and the size of this pattern depends on the aperture. The pattern can be approximated by a Gaussian profile (good enough for this discussion). Seeing also produces a Gaussian-type blur in a long exposure (this follows from the central limit theorem). Tracking/guiding error can also be approximated by a Gaussian (for this one I lack a thorough justification, but empirical evidence supports it).

The resulting PSF is the convolution of these three Gaussian profiles. It is also a Gaussian, with the sigma of the resulting profile being the square root of the sum of squares of the individual sigmas. With this theoretical approach I can run some numbers to put things in perspective; we can compare a 10" scope to a 5" scope in various conditions.
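The sum-of-squares rule for combining Gaussian blurs can be checked numerically. A quick sketch (generic numpy, nothing scope-specific): convolving a sigma-3 Gaussian with a sigma-4 Gaussian should give a sigma-5 Gaussian.

```python
import numpy as np

# Convolve two Gaussians numerically and check the resulting width
# against sqrt(sigma1^2 + sigma2^2).
x = np.linspace(-50, 50, 2001)
dx = x[1] - x[0]

def gauss(sigma):
    return np.exp(-x**2 / (2 * sigma**2))

# Convolution of N(0, 3) with N(0, 4); result should be N(0, 5).
g = np.convolve(gauss(3.0), gauss(4.0), mode="same") * dx

# Standard deviation of the resulting profile via its second moment.
sigma = np.sqrt((x**2 * g).sum() / g.sum())
print(round(sigma, 2))  # 5.0, i.e. sqrt(3^2 + 4^2)
```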

Let's take three different seeing values: 1" FWHM being excellent, 1.5" FWHM being good, and 2" FWHM being average. What do these values represent? They represent the FWHM of a Gaussian profile fitted to a star in a 2-second exposure with a very large aperture (which minimizes the impact of aperture and guiding/tracking error). It is thought that in most conditions 2 seconds is enough to form a Gaussian profile. Sometimes it's not, especially in the presence of local thermals, when seeing can change in a couple of seconds, but here we discuss the value averaged over the course of the exposure.

Let's also take three different guide/track scenarios: an average entry-level mount with guide performance of 0.8" RMS, a mid-range mount with 0.5" RMS, and a top-tier mount with 0.2" RMS guide error.

Here are the results (each value is the expected Gaussian star profile FWHM):

Seeing FWHM | Guiding RMS | 10" aperture | 5" aperture
------------|-------------|--------------|------------
2.0"        | 0.8"        | 2.89"        | 3.27"
1.5"        | 0.8"        | 2.57"        | 2.99"
1.0"        | 0.8"        | 2.41"        | 2.77"
2.0"        | 0.5"        | 2.49"        | 2.92"
1.5"        | 0.5"        | 2.11"        | 2.60"
1.0"        | 0.5"        | 1.77"        | 2.35"
2.0"        | 0.2"        | 2.24"        | 2.70"
1.5"        | 0.2"        | 1.81"        | 2.36"
1.0"        | 0.2"        | 1.42"        | 2.08"

Sorry for the longer post. We can see from the data that the 5" scope produces similar results to the 10" scope under a variety of conditions; both produce values between 2" and 3" FWHM for the most part. However, the 5" simply can't match the resolution of the 10" scope when the seeing is great and the mount is very good (1" or less FWHM seeing and 0.5" or less RMS guiding, giving resulting star profiles below 2"), even under the best realistic circumstances (we won't count the case of no atmosphere and a perfect mount with no error).

You can translate the above star-profile FWHM values into an optimal sampling rate by dividing by 1.6; for example, a 2.4" FWHM star profile corresponds to about 1.5"/px. This tells you that you need exceptional seeing conditions and a top-tier mount if you are going to attempt to image at around 1"/px.

Just to clarify something: you can certainly image at all sorts of resolutions, including less than 1"/px. I do it; one of my setups is 0.5"/px. What the above tells you is that you will be oversampling and that your images will not look sharp (or as sharp as they could) at such resolutions. For this reason I recommend binning your image: this reduces the sampling rate and also increases SNR. You will not lose any detail if you bin up to the optimal resolution, and you can determine that by looking at the FWHM of your stack. The image will be smaller, but if you resample it to the original size it will look just as blurry as the original and no blurrier.
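Software 2x2 binning of the kind described can be sketched like this (a generic numpy sketch with a made-up function name; capture software and camera drivers usually offer their own binning too):

```python
import numpy as np

def bin2x2(img):
    """Software 2x2 binning: average each 2x2 pixel block.

    Halves the sampling rate in each axis and improves per-pixel SNR,
    since four noisy samples are averaged into one.
    """
    h, w = img.shape
    h -= h % 2  # trim odd row/column so blocks divide evenly
    w -= w % 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(img).shape)  # (2, 2)
```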

By the way, the discussion above has more to it: we have not covered pixel blur, and it applies to mono data only. OSC data is already sampled at half the resolution, and that should be taken into account when discussing detail.

HTH


Vlaiv,

Many thanks for taking the time and effort to write such an in-depth response. So from the calculations you have presented, unless I'm reading it wrong, in most scenarios the 10" will win on resolution. In terms of convolution, I'm just wondering how much of this can be countered in post-processing, i.e. creating a PSF star profile in PixInsight and then running deconvolution over the image.

As you mentioned, 2x binning could be an option for me if needed.

Cheers,

Richard.


Just now, Northernlight said:


People do use deconvolution and other frequency-restoration methods on their data, and yes, it can be done. One has to be careful with the deconvolution method used, though. Deconvolution methods are usually designed to be applied under certain conditions. For example, Lucy-Richardson deconvolution expects knowledge of the PSF and an image polluted with a mix of Poisson- and Gaussian-distributed noise. Using it on a stack of images is not its intended use, and one often has to guess the original PSF, approximating it with a Gaussian of a certain shape. There are also other frequency-restoration methods that work well, like wavelet sharpening; here too one must guess or experiment with the parameters.

These methods work by restoring attenuated high-frequency components of the image. Blur attenuates high-frequency components, like dividing a number by some value to get a smaller one; the inverse is multiplying the result by the same number. The problem is that the noise is independent of this attenuation "constant" (in quotes because each frequency gets a different attenuation factor): the signal is first attenuated and then polluted with noise, and the noise is not attenuated. When you reverse the process and multiply the signal to restore its original value, you also multiply the corresponding components of the noise, greatly increasing its magnitude.

This is something you might have noticed when processing images: when you apply any sort of sharpening, the noise becomes much more evident.

For sharpening to work best you need a very high SNR, which is why people often apply masked deconvolution/sharpening only to the parts of the image where the signal is strong and the SNR is good enough (like the main structure of a galaxy or nebula).

Indeed, if one is planning to do deconvolution/sharpening as a processing step to try to recover some of the attenuated frequencies, then one must sample at a higher resolution than I recommended above. The recommendation above is based on a 1/100 threshold: all frequencies attenuated to 1% or less of their original value can be discarded, since one would need a very high SNR to attempt to recover them (you must multiply by 100 or more, so the noise must also be very small compared to the signal).
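To make the idea concrete, here is a minimal 1-D Richardson-Lucy sketch (a known Gaussian PSF is assumed; this is an illustration of the algorithm, not the PixInsight implementation). It recovers a blurred "star" and, as discussed above, would amplify noise in the same way on real low-SNR data.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=50):
    """Minimal 1-D Richardson-Lucy deconvolution (known PSF, Poisson noise model)."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)  # avoid division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a single "star" with a Gaussian PSF, then try to recover it.
x = np.arange(-5, 6)
psf = np.exp(-x**2 / 2.0)
psf /= psf.sum()
star = np.zeros(101)
star[50] = 1.0
blurred = np.convolve(star, psf, mode="same")
restored = richardson_lucy_1d(blurred, psf)
print(restored.max() > blurred.max())  # True: the restored peak is sharper
```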


Vlaiv

I read your reply with some interest and was curious how you calculated the total FWHM from the individual components. Presumably you convert the scope aperture into resolving power using the Rayleigh formula (at 500nm?). Is there then some root-of-sum-of-squares calculation over 2 dimensions? I can't get the same numbers with a simple calculation.

Regards, RL


2 hours ago, rl said:


Here is the complete method.

The first thing to note is the relationship between FWHM and sigma for a Gaussian curve; it is a simple one: FWHM = sigma * 2.355.

The other "ingredient" you need is the Gaussian approximation to the Airy disk. In this case the angular sigma = 0.42 * lambda / aperture_diameter (with lambda being a mean value of 550nm in my calculations). This value is in radians and we need to convert it to arc seconds to match the other values. We are using the small-angle approximation here: sin(x) = tan(x) = x for very small angles.

So the resulting formula is sigma = 7200 * (0.42 * lambda_in_nm / (1000000 * aperture_diameter_in_mm)) * 180/PI, or sigma = 7200 * arctan(0.42 * lambda_in_nm / (1000000 * aperture_diameter_in_mm)) * 180/PI; both are valid for small angles and give the same value to the precision we are interested in.

Now that we have sigmas for all three components:

1. Seeing sigma, being the seeing FWHM / 2.355

2. Aperture sigma, given by the above Gaussian approximation of the Airy disk, from the aperture and a wavelength of 550nm (the middle of the 400-700nm range)

3. Guiding/tracking sigma, being the RMS value from PHD2 (this one I still don't have a rationale for, but it matches empirical evidence).

We simply take the square root of the sum of squares: sigma = sqrt(sigma1^2 + sigma2^2 + sigma3^2).

After we get the resulting sigma, we convert it back to FWHM by multiplying by 2.355.
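As a sketch, the method above can be put into a short Python function (the helper name is made up; the constants are exactly as stated in this post, and note that a later post in this thread argues the aperture constant should be 3600 rather than 7200):

```python
import math

SIGMA_TO_FWHM = 2.355  # FWHM = sigma * 2.355 for a Gaussian

def combined_fwhm_arcsec(seeing_fwhm, guide_rms, aperture_mm,
                         wavelength_nm=550.0, airy_factor=7200):
    """Expected star FWHM (arcsec) from seeing, guiding and aperture blur,
    combined as Gaussians (square root of the sum of squared sigmas)."""
    sigma_seeing = seeing_fwhm / SIGMA_TO_FWHM
    sigma_guide = guide_rms  # PHD2 RMS taken directly as sigma, per this post
    # Gaussian approximation of the Airy pattern, converted to arc seconds.
    # airy_factor=7200 reproduces the table earlier in this thread; a later
    # post argues the correct factor is 3600 (half this value).
    sigma_aperture = (airy_factor * 0.42 * wavelength_nm
                      / (1000000 * aperture_mm) * 180 / math.pi)
    sigma = math.sqrt(sigma_seeing**2 + sigma_guide**2 + sigma_aperture**2)
    return sigma * SIGMA_TO_FWHM

print(round(combined_fwhm_arcsec(2.0, 0.8, 254), 2))  # 10" scope: 2.89
print(round(combined_fwhm_arcsec(2.0, 0.8, 127), 2))  # 5" scope:  3.27
```

Dividing the returned FWHM by 1.6 then gives the optimal sampling rate discussed earlier.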

 

 


Looking back at the above formulae, I'm not sure they are correct!

I often get confused by this: "Airy disk size" sometimes means the diameter and sometimes the radius. We need to check the above Gaussian approximation of the Airy disk, because I think I overestimated the Gaussian sigma by a factor of 2 (I was looking at the formula for the radius instead of the diameter).

According to the Wikipedia article on the Airy disk, and other sources, the first minimum in the Airy pattern appears at sin(angle) = 1.22 * lambda / aperture_diameter; that would be the radius, and the diameter is therefore 2.44 * lambda / aperture_diameter. The Gaussian approximation (on the same page) gives sigma as 0.42 * lambda / aperture_diameter (there is actually another numerical constant, 0.45, depending on whether one approximates by volume or by peak intensity; I'm not sure which should be used, probably the one that gives the least sum-squared error).

Reference: https://en.wikipedia.org/wiki/Airy_disk

According to this page:

http://www.wilmslowastro.com/software/formulae.htm#Airy

The diameter of the Airy disk is given by 2.44 * lambda * F/ratio, and the angular size (which I think stands for the Airy diameter) is given by 2 * arctan(1.22 * lambda / aperture_diameter).

This means that sigma in radians would be 2 * arctan(0.42 * lambda / (2 * aperture_diameter)), i.e. two times the arctan of half the pattern, but we can use the small-angle approximation, so it becomes 2 * 0.42 * lambda / (2 * aperture_diameter); the twos cancel out, and we have 0.42 * lambda / aperture_diameter, which we need to convert from radians into arc seconds:

(0.42 * lambda / aperture_diameter) * 180 / PI * 60 * 60  = 3600 * (0.42 * lambda / aperture_diameter) * 180 / PI

where lambda and aperture_diameter are in the same units (meters, mm, um, ...; it does not matter which, since the unit cancels out, but we do need to convert one to the other because wavelength is specified in nm and aperture in mm).

(60 arc minutes in one degree, 60 arc seconds in one arc minute)

If the above is correct (and again, someone had better check it, as half the time I'm likely to get it wrong by a factor of 2), then the formula I used earlier is not correct, as it has 7200 * ... instead of 3600 * ...

Well, good news if this is now correct! The ratio of 5" to 10" scope resolving power won't change much, and one still needs very good conditions to see the difference, but it is actually feasible to image at 1"/px under good circumstances.

For example, with this reduced Airy sigma, a 10" scope can utilize 1"/px with 1" FWHM seeing and 0.5" RMS guiding.
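A quick check of that last claim, using the corrected 3600 factor (a sketch; all constants are from this thread):

```python
import math

SIGMA_TO_FWHM = 2.355
sigma_seeing = 1.0 / SIGMA_TO_FWHM  # 1" FWHM seeing
sigma_guide = 0.5                   # 0.5" RMS guiding
# Corrected aperture term: 3600 * (0.42 * lambda / D) * 180/PI,
# with lambda = 550nm and D = 254mm (10" aperture), both in mm.
sigma_aperture = 3600 * (0.42 * 550e-6 / 254) * 180 / math.pi

fwhm = SIGMA_TO_FWHM * math.sqrt(
    sigma_seeing**2 + sigma_guide**2 + sigma_aperture**2)
print(round(fwhm, 2), round(fwhm / 1.6, 2))  # ~1.61" FWHM, ~1.0"/px sampling
```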

 


I'm probably one of the people in question, so let's do an objective test. Which of these images do you prefer? Do you have a strong preference, a slight one, or do you see little to choose between them?

[Image 1]

[Image 2]

One was taken with a 14-inch reflector, one with a 5.5-inch refractor. The image scales were 0.92"/px (refractor) and 0.64"/px (reflector). They are close crops of the core of M101, prepared at the same screen size for presentation here.

Olly

 

 

 


3 minutes ago, Northernlight said:

Olly, put us out of our misery, which one is which 

That one is easy: the bottom one is the ODK, and not because of sharpness, but rather the telltale diffraction spikes barely showing on a few of the brightest stars.


I too think the bottom one is more detailed; looking at the group of "blobs" in the core (making a rough triangle), the bottom image shows clearer separation between them.

Looking forward to being able to do my own comparisons, as I have a 12" ODK incoming, and a 130mm f/7 TS Triplet. An Atik 16200 on the ODK and a Trius 694 on the TS Apo give very similar fields of view.


51 minutes ago, Northernlight said:

Olly, put us out of our misery, which one is which 

You've chosen the 14 inch image and should buy a large reflector and all the misery which will go with it! :icon_mrgreen:

In my original Astronomy Now article, September 2018, I concluded that in this comparison image the 14-inch had very slightly out-resolved the refractor. I'm entirely at a loss to see any difference worth getting excited about. Can anyone point to a small-scale structure present in one image and absent from the other? I felt that maybe I could see three points of light in the tiny central core from the 14-inch and only two from the refractor. I very much doubt that, if the images were not next to each other on the same page, anyone would consider them significantly different, but maybe I'm just a widefield imager at heart! The reflector image had about six hours more exposure (Kodak losing to Sony here).

48 minutes ago, vlaiv said:

That one is easy - bottom one is ODK, and not because of sharpness, but rather telltale diffraction spikes barely showing on few brightest stars.

Heh heh, it did occur to me to photoshop them out! I know what you lot are like. :BangHead:

*            *            *

So our conclusions seem to differ. I'm happy to accept the detail in the refractor image as being of the same order as that of the 14-inch, and the refractor is much easier to live with, maintenance being comparable with my coiffure routine: an occasional wipe with a damp cloth. Taking out an ODK mirror to clean it, putting it back, getting the optimizing lens distance just right again and re-collimating will be some people's idea of great fun, but for me it would be about as much fun as question time with Klaus Barbie...

Olly


Hi Olly, yep, got to agree with Vlaiv: the bottom one. But could that simply be down to different processing?

I agree refractors are so much easier to maintain, but I enjoy the tinkering, except when things go wrong, which they invariably do!

Both are superb images and have their merits, I'd be happy with either.


This is interesting. Guests who've looked at the images I used for the article have never expressed the preferences appearing in this thread; they've always found them insignificantly different. However, this is purely a matter of personal opinion, and if you prefer the 14-inch image then you do.

Olly


It depends how one perceives resolution; looking at the "big picture" might make it hard to notice.

Have a look at this, for example: it is a crop of one area that shows the increase in resolution, arranged as an animated GIF that blinks between the two, which makes the differences easier to see:

[animated GIF blinking between the two crops]

Looking at this example, do you get more of a sense of the resolution loss than from the big images above?


7 minutes ago, vlaiv said:

Have a look at this example: a crop of one area, arranged as an animated GIF to blink the differences. Do you get more of a sense of the resolution loss than from the big images above?

Maybe, but there is more Ha in the refractor image, which slightly confuses the issue. I think my point is that if we need to go to these lengths to find an advantage in the big scope, then it's a small advantage!

Olly


1 hour ago, vlaiv said:

To my eye, the bottom one clearly shows more "resolution"; the image looks sharper.

Yes.
An interesting question is whether the less distinct one could be photoshopped¹ to get it closer to the sharper one.

[1] other image enhancement tools are available


5 minutes ago, pete_l said:

Yes.
An interesting question is whether the less distinct one could be photoshopped¹ to get it closer to the sharper one.

[1] other image enhancement tools are available

If the intention were to make it resemble the other image, that might be possible, but my intention was to perform a sympathetic processing of each individual dataset.

Olly


9 minutes ago, pete_l said:

An interesting question is whether the less distinct one could be photoshopped to get it closer to the sharper one.

I don't think that would be possible. Let me explain why: looking at my blinking GIF, it's obvious that in the larger-aperture image there are sharp stars present.

In the smaller-aperture image those stars are blurred out. You can try to sharpen them up, but you would need to sharpen so much that the noise level around those stars would also rise, and the stars would be lost due to poor SNR; one would not be able to distinguish a sharp star from the surrounding (sharpened) noise.

In theory, given an infinite amount of time, one could take an image produced with a particular scope, blurred by seeing and mount tracking errors, and recover sharpness right up to the resolving limit of the telescope optics. But this requires infinite SNR, and in practice it is very hard to do. Planetary imagers sharpen their images up to the telescope's resolving power, and they do it by stacking tens of thousands of frames selected to be minimally blurred by the atmosphere. Each sub starts with very high SNR compared to subs in deep-sky imaging (often an SNR of 10 or more per sub), and stacking increases that SNR more than a hundredfold, making the resulting SNR on the order of 1000. This is why planetary images can be sharpened down to the theoretical resolving power of the telescope.

For all intents and purposes, in DSO imaging frequency restoration is limited by SNR, and people often sharpen only the brighter parts of the image, because in those parts the SNR is high enough to handle it.


4 hours ago, vlaiv said:

 

For all intents and purposes, in DSO imaging frequency restoration is limited by SNR, and people often sharpen only the brighter parts of the image.

Exactly this.

Olly

