
M51 SCT at f10 vs triplet at f7.5


Magnum

Recommended Posts

Thank you for your answers 😀 

3 hours ago, Magnum said:


With regard to what sampling Damien is using: his latest images are taken from Chile with either a 50cm f/15 Cassegrain, which has a focal length of 7500mm, paired with the ASI462MC camera, whose 2.9 micron pixels result in a pixel sampling of 0.08 arcseconds per pixel,

or the 106cm f/17 Cassegrain, which has a focal length of 18,000mm, paired with the ASI174MC, whose larger 5.87 micron pixels result in a similar pixel sampling of 0.07"/pixel. So yes, very close to the 0.05" figure.

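As a quick sanity check on those figures, here is a minimal Python sketch of the standard pixel-scale formula (my own illustration, not anything posted in the thread):

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Pixel sampling in arcsec/px: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

print(pixel_scale(2.9, 7500))    # ASI462MC on the 50cm f/15   -> ~0.08"/px
print(pixel_scale(5.87, 18000))  # ASI174MC on the 106cm f/17  -> ~0.07"/px
```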

The argument about 0.05" details being visible was regarding a 254mm scope. The Airy disc size would be 0.48".

The Airy disc of a 106cm scope is 0.115". Since we are not (as) seeing-limited with short exposures, I can see why it would make sense to sample at 0.07" with that scope, which isn't even half the size of the Airy disc. So this would agree with what Vlaiv explained about the Nyquist sampling theorem.

And with that scope a 0.05" detail is therefore less "blown up". (I don't consider lucky imaging processing here, only single ultra-short exposures.)
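For reference, the Airy disc figures above follow from 1.22 * lambda / D; a minimal sketch, assuming a wavelength around 500nm (my assumption, which lands close to the quoted numbers):

```python
import math

def airy_radius_arcsec(aperture_mm: float, wavelength_nm: float = 500.0) -> float:
    """Angular radius of the Airy disc (first minimum), 1.22 * lambda / D, in arcseconds."""
    radians = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return math.degrees(radians) * 3600

print(airy_radius_arcsec(254))   # ~0.50" for a 254mm scope
print(airy_radius_arcsec(1060))  # ~0.12" for the 106cm scope
```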

3 hours ago, Magnum said:

If I take the Barlow out, the image will not scale up to the same level and show the same detail, which is contrary to what was mentioned previously in this thread.


Good point, I have to think more about that :D

Your Jupiter image looks amazing btw!


17 hours ago, vlaiv said:


If one of the three components is very high, the other two won't make much difference.

If seeing is very poor you don't really care whether your guiding is poor or what the size of your aperture is, but if you are shooting in good seeing with a good mount, then aperture size actually makes a difference.

That is in part what we are seeing above - yes, 8" will out-resolve 5" in good seeing in long exposure astrophotography.


I really appreciate you taking the time and effort to explain this stuff in a simplified way.

So let's say the seeing blur is 2" and the Airy disc sizes for a 5" and an 8" scope are 0.96" and 0.61". This means the resulting FWHM would be 2.22" for the 5-inch scope and 2.09" for the 8-inch scope (I assume a guiding error sufficiently low to be negligible). Seems like just a tiny difference.

But could it be that for the 8" scope the curve falls more steeply before it hits the cut-off frequency? So that there is already more energy in those high frequencies right before the cut-off, resulting in more contrast even without sharpening.

Like that:


[image: sketch of the suggested MTF curves]

Edited by Bibabutzemann

3 hours ago, Bibabutzemann said:

So let's say the seeing blur is 2" and the Airy disc sizes for a 5" and an 8" scope are 0.96" and 0.61". This means the resulting FWHM would be 2.22" for the 5-inch scope and 2.09" for the 8-inch scope (I assume a guiding error sufficiently low to be negligible). Seems like just a tiny difference.

But could it be that for the 8" scope the curve falls more steeply before it hits the cut-off frequency? So that there is already more energy in those high frequencies right before the cut-off, resulting in more contrast even without sharpening.

You can't directly combine airy disk and seeing FWHM as numbers.

Let's go over that bit as well. An analytical solution for the exact combined FWHM is non-trivial since the Airy disk function is rather complex mathematically. It is therefore best to approximate all blurs with a Gaussian distribution.

When we say FWHM 2" for seeing, it simply means that it has a Gaussian distribution with FWHM of 2" (although seeing at any given moment is far from Gaussian, due to the central limit theorem it averages out to a Gaussian shape when you expose for long enough; the same happens with guide error).

Convolution (that is the way these blurs combine) of a Gaussian with a Gaussian is a Gaussian, and they combine so that their sigmas / FWHMs add in quadrature (square root of the sum of squares). Sigma is related to FWHM by a factor of ~2.355, so FWHM of 2" is the same thing as sigma 2/2.355 = ~0.85".

The Gaussian approximation of the Airy disk has sigma 0.42*lambda*f-ratio, as opposed to 1.22*lambda*f-ratio for the position of the first minimum, i.e. the radius of the Airy disk. When you simplify everything down (at lambda = 550nm), the expression for the sigma of the Gaussian approximation to the Airy disk is 47.65/aperture_size arcseconds, where aperture size is in mm.

Guide RMS should not be neglected as it is an important component (it is the sigma of the equivalent Gaussian).

Either make everything FWHM or everything sigma.

Example - 1.5" FWHM seeing, 200mm of aperture and 0.5" RMS guiding - what is resulting FWHM for diffraction limited optics?

1.5" FWHM seeing = ~0.637" RMS

0.5" RMS guiding = 0.5" RMS

200mm aperture = 47.65/200 = 0.23825" RMS

Total RMS = sqrt(0.637^2 + 0.5^2 + 0.23825^2) = sqrt(0.7125320625) = ~0.844" RMS = ~1.99" FWHM
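That recipe is easy to put into code; a minimal Python sketch reproducing the worked example (my own illustration, using the 550nm Airy approximation from above):

```python
import math

FWHM_PER_SIGMA = 2.355  # for a Gaussian, FWHM = 2.355 * sigma

def combined_fwhm(seeing_fwhm: float, aperture_mm: float, guide_rms: float) -> float:
    """Combine seeing, Gaussian-approximated Airy disk and guiding as sigmas in quadrature."""
    seeing_sigma = seeing_fwhm / FWHM_PER_SIGMA  # 1.5" FWHM -> ~0.637" RMS
    airy_sigma = 47.65 / aperture_mm             # 200mm     -> ~0.238" RMS
    total_sigma = math.sqrt(seeing_sigma**2 + guide_rms**2 + airy_sigma**2)
    return total_sigma * FWHM_PER_SIGMA

print(combined_fwhm(1.5, 200, 0.5))  # ~1.99" FWHM, as in the example above
```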

As far as what the seeing-limited frequency response looks like - well, you need to combine the above three PSFs with convolution.

The convolution theorem says that convolution in the spatial domain is equivalent to multiplication in the frequency domain. We need to see what sort of Fourier transform we get for each of these three types of filter and then multiply them.

Gaussian is simple - the Fourier transform of a Gaussian is a Gaussian, so the filter looks like this:

[image: frequency-domain profile of a Gaussian filter]

Just the right side is the interesting one (zero is centered on the profile itself): the filter starts off slowly, then falls rapidly, then eases off, approaching zero only at infinity. The wider the blur in the spatial domain, the narrower the filter itself (and vice versa).

The Airy disk filter is actually the MTF of the telescope:

[image: MTF curves for two unobstructed telescopes]

This is a graph of MTF for two telescopes (one half the diameter of the other). Both are unobstructed. An unobstructed telescope has an almost straight MTF (it is just a bit curved).

This graph clearly shows the cut-off frequency. A Gaussian does not have one - it extends to infinity. But telescope optics does, and it is well defined and depends on aperture size alone.

https://en.wikipedia.org/wiki/Spatial_cutoff_frequency

In the above images we have seen some interesting combinations of F/ratio and pixel size - but from the above link we can determine what sort of F/ratio we need for a given pixel size:

spatial cutoff frequency = 1 / (wavelength * F-ratio)

If we rearrange that formula and apply the Nyquist theorem we get this:

F/ratio = 2 * pixel_size / wavelength

where pixel_size and wavelength need to be in the same units, so either both in um or both in nm.

Is the Jupiter image properly sampled with the ASI462MC camera?

Optimum F/ratio = 2 * 2.9um / 0.4um = 14.5

So you need F/14.5 to capture down to 400nm with a 2.9um camera. The image is slightly oversampled at F/15, but the difference is really negligible.

Note that this is without any influence of atmosphere or tracking error, as very short exposures and lucky imaging are utilized.
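The same relation as a tiny sketch (again my own illustration of the formula above):

```python
def optimal_f_ratio(pixel_um: float, wavelength_um: float) -> float:
    """Critical F-ratio for Nyquist sampling at the diffraction cutoff: 2 * pixel / lambda."""
    return 2 * pixel_um / wavelength_um

print(optimal_f_ratio(2.9, 0.4))  # ASI462MC down to 400nm -> F/14.5
```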

For long exposure imaging we have to combine above graphs, and here is what it looks like:

[image: combined long-exposure MTF curves for different r0 values]

(here seeing is given by Fried parameter or coherence length - r0)

Look at how fast that bottom curve approaches zero and then stays there for a long time - that is due to the Gaussian shape. The telescope MTF above it is a bit more curved - that is because it has a central obstruction:

[image: MTF of a telescope with a central obstruction]

In any case, from the above graph you can see the rationale for the FWHM/1.6 parameter I gave earlier. It is not the point where all data is captured - it is the point where almost all data that can be used is captured - the rest is below the noise level.

[image: the same graph with the usable-frequency zone outlined]

I roughly outlined the "zone" where this happens. If we convert that to numbers in this example, 0.6 cycles per arcsecond makes the wavelength 1/0.6 = 1.6667 arcseconds, and we need to sample twice per wavelength, so about 0.833"/px in this case (it is below 1"/px as no mount error was added here, but for the most part you won't get below 1"/px in realistic amateur conditions for long exposure).
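In code form, the conversion from usable cutoff frequency to sampling rate is trivial (a sketch):

```python
def sampling_for_cutoff(cycles_per_arcsec: float) -> float:
    """Nyquist: two samples per cycle, so pixel scale = 1 / (2 * cutoff)."""
    return 1.0 / (2.0 * cycles_per_arcsec)

print(sampling_for_cutoff(0.6))  # ~0.83"/px for the outlined zone above
```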


6 hours ago, vlaiv said:

You can't directly combine airy disk and seeing FWHM as numbers. ...

This is all marvellous stuff from a theoretical point of view @vlaiv, but for me it really robs the process of some of its subjective discovery and wonder.

I’d prefer to be guided by theory at the high level, but sometimes love it when I or others try what we want to try and are surprised by the results, even if they don’t match up to that theory. I don’t have an infinite amount of time to be analysing what I get if it doesn’t correspond to what’s expected, if it’s a decent result (i.e. I can’t be bothered to ask “Why is it so good?? It shouldn’t be better than the frac???”). Especially in my case there are so many variables involved, and I’m not working with a kit setup where it’s easy to replicate a setup night after night (no observatory etc). If I find a method that works, I’ll use it, be glad my stars are round 😂, enjoy the incoming subs (yes, I still get that buzz when the subs download), and enjoy the results. Agonising over whether I’ve taken the path of least theoretical resistance is not something that really floats my boat, so to speak.

If the ACF has produced a better set of data than the smaller aperture frac, and keeps on doing so, then great, even if the theory says it shouldn't be the case.

I’m happy to get any decent data myself.  Life’s too short for in-depth analysis of why something was unexpectedly better…

I’m absolutely not criticising you here by the way - you obviously enjoy it, and are very knowledgeable, so fill your boots :D. In the meantime, I’ll carry on getting pretty pictures with my limited setup and enjoying the hell out of it in the process.

Cheers Lee for giving the ACF a run for its money.  And no you can’t have it.  Unless you have a nice camera you may be willing to trade?? 😙😋😆  

Thanks all!


Edited by AstroAdam

On 10/04/2022 at 12:12, vlaiv said:

You can't directly combine airy disk and seeing FWHM as numbers.

Let's go over that bit as well. An analytical solution for the exact combined FWHM is non-trivial since the Airy disk function is rather complex mathematically. It is therefore best to approximate all blurs with a Gaussian distribution.


Thanks again Vlaiv! This cleared lots of things up for me.

Now it also makes sense to me why Lee gets better results with the 2.5x Barlow: the theoretical optimum with the ASI224MC would be 3.75 x 5 = F/18.75, so without the Barlow it is undersampled at F/10, and with the 2.5x Barlow it is only a bit oversampled at F/25.
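In code, with the same formula vlaiv gave (a sketch; 400nm assumed as in his example):

```python
def optimal_f_ratio(pixel_um: float, wavelength_um: float = 0.4) -> float:
    # critical F-ratio for Nyquist sampling: F = 2 * pixel_size / wavelength
    return 2 * pixel_um / wavelength_um

print(optimal_f_ratio(3.75))  # ASI224MC -> F/18.75; F/10 undersamples, F/25 slightly oversamples
```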

On 10/04/2022 at 18:29, AstroAdam said:

This is all marvellous stuff from a theoretical point of view @vlaiv, but for me it really robs the process of some of its subjective discovery and wonder. ...

I agree that theoretical limits shouldn't discourage anyone from trying out setups that are not considered optimal. Like you said, there are many variables we could overlook 😀

But I think it's certainly helpful to know the limits, so when there are unexpected results one doesn't immediately jump to wrong conclusions. For example, a newer camera with more pixels may lead to better results even though the old camera was already oversampling.

That doesn't mean heavy oversampling is suddenly a good thing (physics won't change); the new camera simply has a more advanced sensor (higher QE, less noise etc.).


Also, I think learning about the science behind the instruments we are using adds another layer to this awesome hobby, just like understanding the science behind the objects we are imaging 😀


On 10/04/2022 at 04:33, ollypenrice said:

^^^ Great post and nice to hear praise for Photoshop, too. PI has some good tools but Photoshop remains an exquisite software package and should never be discounted.

Olly

Yes, and you taught me a lot of good Photoshop techniques when we visited in 2012. I've always applied every tool selectively since then.

Lee


By pure chance I just noticed these charts on High Point Scientific's website on choosing the optimal camera-scope combination vs your local best seeing, and looking at the above average and great seeing columns, it does indeed agree with the figures my research and experiments had been pointing to. With my 3.75um pixel camera it suggests a pixel sampling resolution of 0.75"/pixel in above average seeing, and 0.5"/pixel in great seeing.

Then in the next chart, for the same 3.75um pixel camera to reach the above sampling, it suggests a scope of 1000mm FL in above average seeing and a scope of 1600mm in great seeing.

And my original conclusion from my research and experience, to look for a scope between 1200 and 1400mm FL to edge out my current scope, sits right in the middle of their suggested range.

I purchased a used RC6 this morning with a FL of 1370mm, so I'm hopeful I will be able to improve on my 127mm triplet's resolution, but only on the very best nights. The rest of the time it should be no worse, just a bit slower at f/9 vs f/7.5, and I can always use a reducer to bring it back down to around 1000mm if it doesn't work out.

[images: High Point Scientific sampling charts for pixel scale vs seeing, and focal length vs seeing]
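For reference, the sampling the RC6 would give with a 3.75um camera, using the standard pixel-scale formula (a sketch; it lands right between the chart's 0.5"/px and 0.75"/px suggestions):

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    # arcseconds per pixel = 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um / focal_mm

print(pixel_scale(3.75, 1370))  # RC6 at native FL -> ~0.56"/px
print(pixel_scale(3.75, 1000))  # with a reducer   -> ~0.77"/px
```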


1 hour ago, vlaiv said:

@Magnum

According to

http://astronomy.tools/calculators/ccd_suitability

You can even go down to 0.33"/px - in good seeing:

[image: astronomy.tools CCD suitability calculator result]

Does this mean that I can take my 80mm scope and image at 0.5"/px if I have very good seeing around 1" FWHM?

Does this hold true even for people who use 60mm scopes, or a RedCat with 51mm of aperture?

Thanks for that link, that page supports what I've been saying all along. If you remember, I said we need to modify Nyquist for digital, and you replied "There is no such thing as Nyquist theorem modified for digital signals".

Well, the text below supports exactly what I said: that Nyquist is a starting point but it needs a slight tweak when dealing with square pixels, i.e. "modified"!

We don't have to agree on this, but I still don't appreciate that you said I was spreading myths, when highly regarded sources are saying the same thing.

[image: excerpt from the astronomy.tools page on sampling and square pixels]

Of course you wouldn't be able to achieve that with a RedCat, as it doesn't have the optical resolution to support it; that's the point of using larger aperture, longer focal length scopes. Also, they don't make astro cameras with pixels small enough to even get down to that sampling with the RedCat. Why do you think people buy large RC scopes in the first place? If I was buying a 12" RC then I would go for a camera with larger pixels. The camera I have seems ideal for both my 950mm triplet and my new RC6; it was a bit of overkill with the 8" f/10 SCT.

Lee


2 minutes ago, Magnum said:

Thanks for that link, that page supports what I've been saying all along. ...

Well, that is another example of a misinterpretation of the Nyquist theorem.

Why don't you have a look at the actual theorem and what it says in the 1D or multidimensional case:

https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

I asked the above question because an 80mm scope has critical sampling at about 0.64"/px, and smaller scopes at even coarser sampling than that.

Many people image with smaller scopes, and to say that under 1" FWHM seeing you should use 0.5"/px, even if your scope can't resolve that much under perfect conditions, is clearly nonsense.

(Not to mention the whole planetary high SNR and massive amount of sharpening vs DSO / low SNR and barely any sharpening at all thing.)
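A quick sketch of where the 0.64"/px figure comes from (my own illustration; lambda = 500nm is the assumption that reproduces the number):

```python
import math

def critical_sampling_arcsec(aperture_mm: float, wavelength_nm: float = 500.0) -> float:
    """The aperture passes nothing beyond D/lambda cycles per radian,
    so Nyquist sampling is lambda / (2 * D) radians per pixel."""
    rad_per_px = (wavelength_nm * 1e-9) / (2 * aperture_mm * 1e-3)
    return math.degrees(rad_per_px) * 3600

print(critical_sampling_arcsec(80))  # ~0.64"/px for an 80mm scope
print(critical_sampling_arcsec(51))  # ~1.01"/px for a 51mm RedCat
```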


13 minutes ago, vlaiv said:

Well, that is another example of a misinterpretation of the Nyquist theorem. ...

So your interpretation is correct and everyone else's interpretation is wrong? 

All I can say is that the Nyquist-Shannon theorem predates digital cameras with square pixels, so reading that alone is not enough to draw a conclusion; the section I pasted above from the astronomy.tools website explains exactly why it needs a tweak. It seems totally logical to me and makes more sense than anything else I've read in this thread.

I think we've done this to death now. People will read through the thread and come to their own conclusions, then maybe test it with their own experiments, which is a crucial step in the scientific method.


On 07/04/2022 at 21:13, vlaiv said:

Why do you think you'll tweak that advice?

You think that the above image is well sampled at 0.39"/px, although it is presented at 0.5"/px and has FWHM of 1.9"?

One of the two images below has been reduced to 50% in size and then resized back to match the original, which means it was sampled at 1"/px at that point. If there is detail in the image that needs a higher sampling rate than 1"/px, then that detail should suffer. Can you tell which of the two images was resized to 1"/px and then upscaled back to its original size? (Actually you should be able to tell, as it has less noise, but all the detail is still there.)

[image: comparison version 1]

[image: comparison version 2]

@Magnum

I don't understand any of the in-depth stuff discussed here, so I probably shouldn't chime in, but I simply cannot tell the difference between these 2 images. So what about this then - if you actually had detail at 0.5", it would have been lost when resampling to 1" and back?

I tried this too, by resampling your posted JPEG back and forth, and I really can't tell the difference in detail.
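That round-trip test is easy to reproduce with Pillow (a sketch; the filename is just a stand-in for the posted image):

```python
from PIL import Image

img = Image.open("posted_image.png")  # stand-in name for the image above
half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)  # down to 1"/px
back = half.resize(img.size, Image.LANCZOS)  # back up to the original sampling
back.save("roundtrip.png")  # blink against the original to compare detail
```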

Edited by ONIKKINEN

3 hours ago, Magnum said:

So your interpretation is correct and everyone else's interpretation is wrong? 

Which of "theirs" interpretations do you think is correct?

One that says that sampling should be half of seeing FWHM or one that says its better to go for 1/3 of seeing FWHM, and how do you explain the fact that 80mm telescope simply can't resolve down to either one of these two values in 1" FWHM seeing?

Fact that we are using pixels has certain impact that we did not discuss here, but I've shown before what extent it has - and it is to slightly lower resolution by adding so called pixel blur.

Theorem is very clear about what it says - neither of two sources explain how are their recommendation related to maximum frequency of band limited signal nor why is it band limited in the first place.
