MTF of a telescope



2 minutes ago, John said:

I understand what you mean. In reality though, how likely is that to actually happen in the world of amateur telescopes?

I think that one technique used by experienced astronomers is to "sort" through scopes to find "good" ones regardless of test reports 😃 or lack of.


41 minutes ago, jetstream said:

I think that one technique used by experienced astronomers is to "sort" through scopes to find "good" ones regardless of test reports 😃 or lack of.

I used the technique of buying them from somebody I knew personally, who was experienced and whom I trusted :smiley:

 


3 hours ago, vlaiv said:

Why is this better than say - Roddier analysis - which is essentially free if you have a camera?

I have no experience of either system. In fact, I had not heard of Roddier analysis before you mentioned it; it looks interesting, thank you.

Regards Andrew 


1 hour ago, JeremyS said:

I was enjoying the lull in the thread to digest the rich content 🤣

Me too, but I don't want the momentum to stop lol! I'm actually understanding (a tiny bit of) this stuff. Well, I think I do anyway 😃 but that's just my own subjective opinion :grin:


On 09/02/2021 at 11:28, vlaiv said:

Well, I would like to explore what sort of results we get from undersampled data.

There is a well-known drizzle algorithm that deals with this in imaging, but I believe it is often misused since people don't have control over the dither step, and I don't think it works well for random dither (the original Hubble documentation implies that the telescope is precisely pointed for dithers of a fraction of a pixel - like 1/3-pixel dithers).

In this case we can use an edge at an angle to give us the desired consistent dithering step - but we need to compensate for the fact that the edge is tilted.

I'm wondering if you implemented a drizzle bit - or anything similar - in your python code?

The method I proposed above, afocal with a mobile phone, will mix in the optical properties of both the eyepiece and the phone lens (although I'm not sure how much will be picked up at that scale) - but it will provide the desired sampling rate. A camera at prime focus is likely to undersample.

If we want a method that is reliable and easy for amateurs to use, we need to explore both options.

Sorry, I have little time these days to hang out on the forum, which is a pity.

I agree we should explore how much we can exploit the edge detection method. Let's keep at it. I will have to wait until my 125 mm APO gets delivered; I'm currently without a scope.

Regarding the drizzle: well, that is what I proposed in the beginning. You remember my post on using super-resolution. Now we have come full circle. Anyway, I agree we will have to deal with the undersampling, hence my suggestion to use super-resolution, which in this case is about the same as drizzle.

Exactly, we will need to have the edge at an angle to make use of a "drizzle" approach. Hence I argued from the start for using an edge that is not aligned with the sensor array.

And yes, this is why I run the alignment code on the tilted-edge data, to compensate for the fact that the edge is tilted.

I see our ideas are converging. That is nice to see. 👍

I will post more results as soon as I have my new scope and some time for daytime testing.
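
To make the "tilted edge as built-in dither" idea concrete, here is a minimal sketch of how a slanted edge yields an oversampled edge spread function (ESF): every image row crosses the edge at a slightly different sub-pixel position, so projecting all pixels onto the edge-normal direction and binning at a fraction of a pixel recovers the finer sampling, much like drizzle. This is not the actual code from either poster; the assumed geometry (edge passing through the image centre at row 0) and all names are illustrative.

```python
import numpy as np

def oversampled_esf(img, edge_angle_deg, bin_frac=0.25):
    """Build an oversampled edge spread function from a slanted-edge image.

    img            : 2D array with a dark-to-bright edge that is roughly vertical
    edge_angle_deg : tilt of the edge relative to the pixel columns
    bin_frac       : bin width in pixels (0.25 = 4x oversampling)
    """
    rows, cols = img.shape
    tan_a = np.tan(np.radians(edge_angle_deg))

    # Signed distance of every pixel from the tilted edge, in pixel units.
    # The edge is assumed to pass through column cols/2 at row 0 and to shift
    # by tan(angle) pixels per row - this is the "compensate for the tilt" step.
    y, x = np.mgrid[0:rows, 0:cols]
    dist = (x - cols / 2) - y * tan_a

    # Bin all pixels by sub-pixel distance -> oversampled ESF.
    bins = np.round(dist / bin_frac).astype(int)
    esf = np.zeros(bins.max() - bins.min() + 1)
    counts = np.zeros_like(esf)
    np.add.at(esf, bins.ravel() - bins.min(), img.ravel().astype(float))
    np.add.at(counts, bins.ravel() - bins.min(), 1)
    return esf / np.maximum(counts, 1)
```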


7 hours ago, alex_stars said:

I see our ideas are converging. That is nice to see. 👍

I will post more results as soon as I have my new scope and some time for daytime testing.

I think that I'm a bit confused now with all of this. Not the theory, but the fact that we all got different results. To add to that, in conversation with @jetstream I learned that some people expressed concern that WinRoddier might be giving wrong results - so I did a test on it, and indeed I got different values than the ones I used to generate the wavefront.

If I had not cross-checked my data with Aberrator, I would surely have concluded that I'm at fault - but the wavefront that I generate is the same as Aberrator 3.0's, and so is the MTF. A bunch of other things check out as well (like the expected star defocus size and such).

I'll run the tests once more - with a very simple wavefront - and post the findings here so we can discuss those as well.

I'm hoping that we will eventually have a simple test that amateurs can perform, that is reliable enough and produces the MTF or Strehl ratio for their scopes.


7 minutes ago, vlaiv said:

I'm hoping that we will eventually have a simple test that amateurs can perform, that is reliable enough and produces the MTF or Strehl ratio for their scopes.

I have confidence you will achieve this (or already have). Yes, the Roddier test has been questioned. I would absolutely like to have an easy-to-use, very accurate test method that no one can dispute.


I suspect that these approaches use some form of numerical FFT or its inverse, possibly without understanding the limitations of these with the data sets used.

Numerical methods are full of traps if you don't have a good understanding of what the constraints and assumptions are.

Regards Andrew 


I think we can keep going with the edge detection MTF approach; it just needs to be carried out carefully. Just to keep the ideas going, here is a link to a paper which discusses the effect of using CMOS sensors (as we have been) on the measured MTF.

https://core.ac.uk/download/pdf/12039012.pdf

I most certainly will keep on developing my code to test my scopes. And I am more than happy to discuss further details here.

If people are interested in running the python code themselves, we can think about making an open-source project out of it. However, I will not have the time to create a fancy GUI for people to use...
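
For anyone who wants to experiment before any open-source version exists, the core reduction from an oversampled edge profile to an MTF is only a few lines. This is a generic sketch of the standard ESF → LSF → MTF steps (differentiate, window, Fourier transform), not Alex's actual code; the windowing choice is one way to avoid the numerical-FFT traps mentioned above.

```python
import numpy as np

def mtf_from_esf(esf, oversample=4):
    """Generic slanted-edge reduction: ESF -> LSF -> MTF.

    esf        : 1D oversampled edge spread function
    oversample : samples per original pixel in the ESF
    Returns (frequencies in cycles/pixel, normalised MTF).
    """
    # Differentiate the edge profile to get the line spread function.
    lsf = np.gradient(esf)

    # Window the LSF so the noisy tails far from the edge do not
    # dominate the high frequencies after the FFT.
    lsf = lsf * np.hanning(lsf.size)

    # The MTF is the magnitude of the Fourier transform of the LSF,
    # normalised to 1 at zero frequency.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]

    # Frequency axis in cycles per original pixel.
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)
    return freqs, mtf
```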


Regarding the Roddier test, I think it is really hard to do properly. One has to be very careful with setting up the in- and out-of-focus images and measure the in- and out-of-focus distances very accurately to get consistent results. For me there is way more potential for error in that procedure than in shooting an edge target at distance and getting the focus right. Hmm.

@vlaiv, I look forward to seeing your comparison of the edge detection method with the Roddier test. Are you still aiming to do that?


24 minutes ago, andrew s said:

I suspect that these approaches use some form of numerical FFT or its inverse, possibly without understanding the limitations of these with the data sets used.

Numerical methods are full of traps if you don't have a good understanding of what the constraints and assumptions are.

Regards Andrew 

Could well be. I'm going to detail the exact procedure I used, and hopefully someone will point out the error if there is one.

What does baffle me is that I get consistent results with the analytical approach - my MTF looks exactly the same, the Airy pattern looks the same - and the edge method above works flawlessly when done the way the analytical math suggests.


Just now, alex_stars said:

Regarding the Roddier test, I think it is really hard to do properly. One has to be very careful with setting up the in- and out-of-focus images and measure the in- and out-of-focus distances very accurately to get consistent results. For me there is way more potential for error in that procedure than in shooting an edge target at distance and getting the focus right. Hmm.

@vlaiv, I look forward to seeing your comparison of the edge detection method with the Roddier test. Are you still aiming to do that?

That is a good point. I had not thought about it - but yes, it could be that the in/out focus was the issue.

I deliberately made a small difference in defocus. I used 22.1 waves of defocus in one case and 22.3 waves of defocus in the other direction - to simulate the fact that one can't repeat the exact defocus in both the in and out directions (unless using a motorized focuser?).

Maybe that was the problem?

I'm going to do another test - this time using a simple wavefront (just 1/4 PV spherical) and exact in/out focus. In the meantime, I'm going to copy the results that I posted to Gerry in a private discussion.

Simulation parameters:

I used 1/5 PV primary spherical and -1/8 PV oblique astigmatism to generate the wavefront.

80mm F/6 scope, 3.75µm camera, star at infinity, 500nm wavelength, 22 PV waves of defocus (in fact I used 22.1 and -22.3 waves of defocus to simulate the fact that it is hard to do exact in and out defocus, even when measuring the defocus pattern size - that is like a 2-3px difference in diameter).

Actual test images used: [image attachment]

Results: [image attachment]

1/5 PV spherical is equivalent to ~1/16.77 wave RMS. For a 500nm wavelength that gives a spherical OPD coefficient of 29.815nm (the test reported ~25.156nm).

1/8 PV oblique astigmatism is ~1/39.192 wave RMS. For a 500nm wavelength that translates to 12.758nm (the test reported 4.925nm).

The WinRoddier implementation of the Roddier test clearly overestimated the quality of the wavefront.

I'm surprised by the variation of the other Zernike terms - I did not use noise in this simulation.
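
For anyone wanting to check those conversions, the PV-to-RMS ratio follows from the shape of each Zernike term: primary spherical (proportional to 6ρ⁴ - 6ρ² + 1) has PV/RMS = 1.5√5 ≈ 3.354 and astigmatism (proportional to ρ²·sin 2θ) has PV/RMS = 2√6 ≈ 4.899, which reproduces the ~1/16.77 and ~1/39.19 RMS figures above. A small sketch:

```python
import numpy as np

wavelength_nm = 500.0

# PV-to-RMS ratios for the unit-RMS-normalised Zernike terms used above:
#   primary spherical   sqrt(5)*(6r^4 - 6r^2 + 1) spans [-0.5, 1]*sqrt(5) -> PV/RMS = 1.5*sqrt(5)
#   oblique astigmatism sqrt(6)*r^2*sin(2t)       spans [-1, 1]*sqrt(6)   -> PV/RMS = 2*sqrt(6)
pv_to_rms = {"spherical": 1.5 * np.sqrt(5), "astigmatism": 2 * np.sqrt(6)}

for term, pv_waves in [("spherical", 1 / 5), ("astigmatism", 1 / 8)]:
    rms_waves = pv_waves / pv_to_rms[term]
    print(f"{term}: {pv_waves} PV = 1/{1 / rms_waves:.2f} wave RMS"
          f" = {rms_waves * wavelength_nm:.2f} nm")
# spherical: 0.2 PV = 1/16.77 wave RMS = 29.81 nm
# astigmatism: 0.125 PV = 1/39.19 wave RMS = 12.76 nm
```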

 


1 hour ago, alex_stars said:

I think we can keep going with the edge detection MTF approach; it just needs to be carried out carefully. Just to keep the ideas going, here is a link to a paper which discusses the effect of using CMOS sensors (as we have been) on the measured MTF.

https://core.ac.uk/download/pdf/12039012.pdf

I most certainly will keep on developing my code to test my scopes. And I am more than happy to discuss further details here.

If people are interested in running the python code themselves, we can think about making an open-source project out of it. However, I will not have the time to create a fancy GUI for people to use...

This paper discusses sensor MTF.

Do you know what sort of differences we might expect when trying to evaluate optics MTF instead?


OK, I have now performed a simpler test and the results are the same - wrong. I think that I have also found a bug in WinRoddier.

The simulation is as before - except that I did not make a difference in defocus; both in and out focus are exactly at 22 PV waves. I also did not add the astigmatism term, but simply left things at 1/4 PV primary spherical.

Just as a check, I generated the same thing in Aberrator 3.0 - 80mm F/6 with 0.25 PV spherical - and generated the wavefront and MTF. Here is a comparison between my wavefront and MTF and those of Aberrator:

[image attachment: aberrator_wavefront]

[image attachment: generated_wavefront]

(The color scheme is a bit different - I used the "physics" LUT in ImageJ and it seems to be lacking violet a bit - but the numerical values are the same; both have 0.25 PV, so you can't really go wrong there.)

Here are MTFs:

[image attachment: aberrator_MTF]

[image attachment: generated_MTF]

Again - the same. Here are the test in/out-of-focus patterns:

[image attachment: in_out_focus]

And here is the result from WinRoddier:

[image attachment: WinRoddier results]

Again - the PV is 105nm, or 1/4.743, instead of 1/4 - WinRoddier again overestimated the quality of the wavefront. There is again a strange defocus term - but I guess that is just a quirk of the method and it should not be taken into account. Again, the other terms are not completely 0.

For a moment I thought that maybe I had entered the wavelength wrong - but no, it was 500nm all the time. In fact, I decided to change it to 600nm to see what happens:

[image attachment: WinRoddier results at 600nm]

And the strangest thing happened - the results stayed the same! Only the error expressed in wavelengths and the Strehl ratio changed???


Now that I think about it - maybe the above behaviour with wavelength is not a bug and is quite fine.

The defocus pattern is an image on the sensor, and the sensor pixel size is in units of length. We have all the data to reconstruct the wavefront in terms of length (µm, nm), and we only need the wavelength of light to translate that into relative units of "waves"?

So the question is - have I made an error in converting to length units? I don't think that is the case for two reasons:

1. In the first test we had two independent aberrations - spherical and astigmatism. If the problem was with the wavelength, then the ratio of measured and generated errors would be constant. However we have:

29.815nm vs 25.156nm, ratio of ~1.1852

and we have

12.758nm vs 4.925nm - ratio of ~2.59

2. There is a little utility that comes with WinRoddier to help you calculate how big you need to make the defocused star in order to get the desired defocus in waves:

[image attachment: WinRoddier defocus calculator]

So for the given test parameters it estimates a defocus pattern with a diameter of ~141px on screen:

[image attachment: measured defocus pattern]

The roughly measured out-of-focus pattern has ~140px (well, looking at the image I would say more like 138px - the measurement line is a bit outside the pattern in the top-right corner).

WinRoddier measures roughly the same:

[image attachment: WinRoddier defocus measurement]

(Note that this is a radius, so the diameter is twice the value.)
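
As an independent cross-check on that ~141px figure, the usual geometric relations for pure defocus give the same number: N waves of PV defocus corresponds to a longitudinal focus shift of roughly Δz = 8·N·λ·F², and the geometric blur diameter is Δz/F = 8·N·λ·F. A quick sketch, assuming those standard relations:

```python
# Geometric cross-check of the WinRoddier utility's ~141 px defocus diameter,
# assuming W_PV = dz / (8 F^2) and blur_diameter = dz / F.
n_waves    = 22        # PV waves of defocus
wavelength = 500e-9    # m
f_number   = 6.0       # 80 mm F/6
pixel      = 3.75e-6   # m

dz = 8 * n_waves * wavelength * f_number**2   # longitudinal defocus, ~3.2 mm
blur_diameter = dz / f_number                 # ~528 um
print(blur_diameter / pixel)                  # ~140.8 px - matches the ~141 px above
```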

 

 

 


3 hours ago, vlaiv said:

This paper discusses sensor MTF.

Do you know what sort of differences we might expect when trying to evaluate optics MTF instead?

I scanned the paper and I think they used the known lens MTF in the reduction. I would think any measurement we could make will be for the system: optics plus sensor. Maybe with a theoretical subtraction of the sensor MTF.

Regards Andrew 


1 hour ago, andrew s said:

I scanned the paper and I think they used the known lens MTF in the reduction. I would think any measurement we could make will be for the system: optics plus sensor. Maybe with a theoretical subtraction of the sensor MTF.

Regards Andrew 

I think that the sensor MTF can be made insignificant if we oversample. That might not be an option when working at prime focus, but it can be done with the afocal method. Then we just need to figure out how much the lens used for imaging is messing things up.

Here is what a perfect square pixel's MTF looks like when we sample at the optimum sampling rate:

[image attachment: square pixel MTF]

So yes, it does have a fall-off to about 0.7 at the maximum frequency (possibly exactly 1/√2). Indeed, that should be taken into account, even when we are at the optimum sampling rate.
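
For reference, the MTF of an ideal square pixel of width p is |sinc(f·p)| = |sin(πfp)/(πfp)|. Evaluating it at the Nyquist frequency of a critically sampled system gives 2/π ≈ 0.64, in the same ballpark as the ~0.7 read off the plot above. A minimal check, assuming a 100% fill-factor square pixel:

```python
import numpy as np

def pixel_mtf(freq_cycles_per_pixel):
    """MTF of an ideal square pixel aperture (pixel width = 1)."""
    # np.sinc(x) is the normalised sinc: sin(pi*x) / (pi*x)
    return np.abs(np.sinc(freq_cycles_per_pixel))

# With optimum sampling the optics cutoff sits at Nyquist (0.5 cycles/pixel):
print(pixel_mtf(0.5))   # 2/pi ~ 0.637
```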


1 hour ago, jetstream said:

Why is this?

Look at my next post:

https://stargazerslounge.com/topic/371424-mtf-of-a-telescope/?do=findComment&comment=4040601

2 hours ago, vlaiv said:

Now that I think about it - maybe the above behaviour with wavelength is not a bug and is quite fine.

The defocus pattern is an image on the sensor, and the sensor pixel size is in units of length. We have all the data to reconstruct the wavefront in terms of length (µm, nm), and we only need the wavelength of light to translate that into relative units of "waves"?

I think that the Roddier method calculates things in nanometers from the sensor pixel size and the other data in millimeters - like aperture size and focal length - and only uses the wavelength of light to give answers in "waves".

For example, 100nm at a 400nm wavelength is equal to 1/4 of a wave, because 400nm / 4 = 100nm, but the same 100nm is 1/6.56 of a wave at the Ha wavelength of 656nm, because 656nm / 6.56 = 100nm.

For that reason, changing the wavelength does not alter the absolute wavefront OPD (optical path difference - the deviation of the wavefront from the ideal in nanometers, i.e. how much the wavefront is lagging or advancing with respect to the ideal wavefront), but it does change the figures expressed in waves, and the Strehl ratio, which also depends on wavelength.
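
To illustrate why only the wave-denominated figures and the Strehl change when the analysis wavelength changes, here is a small sketch converting a fixed RMS OPD in nanometres to waves and to an approximate Strehl ratio. The Maréchal/Mahajan formula Strehl ≈ exp(-(2πσ/λ)²) is an assumption chosen for illustration; WinRoddier may compute Strehl differently.

```python
import numpy as np

def waves_and_strehl(rms_opd_nm, wavelength_nm):
    """Convert a fixed RMS wavefront error in nm to waves and approximate Strehl."""
    rms_waves = rms_opd_nm / wavelength_nm
    strehl = np.exp(-(2 * np.pi * rms_waves) ** 2)   # Marechal/Mahajan approximation
    return rms_waves, strehl

# The same 25 nm RMS error evaluated at two analysis wavelengths:
for wl in (500.0, 600.0):
    rms_w, s = waves_and_strehl(25.0, wl)
    print(f"{wl:.0f} nm: 1/{1 / rms_w:.1f} wave RMS, Strehl ~ {s:.3f}")
# 500 nm: 1/20.0 wave RMS, Strehl ~ 0.906
# 600 nm: 1/24.0 wave RMS, Strehl ~ 0.934
```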


5 hours ago, vlaiv said:

Do you know what sort of differences we might expect when trying to evaluate optics MTF instead?

Good question. I posted the paper to keep in mind that we observe with CMOS sensors and they themselves have an MTF which we will have to consider as well, not only the MTF of the optics.

No, I don't have a feeling yet for what we might expect, but I want to try out how well we can measure the MTF in real life...


I stepped away from this thread for a while and returned to find it covering 10 pages!

This is going to be incredibly useful. I already have an FFT algorithm which I wrote a few years ago as an Excel function for a different purpose (analysis of accelerometer data for a rowing boat - see my sig/blog where there are some more details, not of the signal processing, which was inconclusive, but of the acceleration data itself). I may well be able to bring this tool to bear on this.

I do have a couple of questions though:

- The straight edge? Should it be Black/White, or is any darker/lighter interface useable?

- Does it matter if the "dark" is simply out-of-focus background with the bright edge being in focus?

- I've attached a photo: will the edge indicated be useable for this? It's about 125m away, taken at prime focus with my Skymax180 at 2799mm FL and EOS 7Dmk2. And against a greenish background (fields). I've only attached a jpeg for now as I don't intend this to be "the" photo, it was getting dark and shutter speed was slow. I'll re-do it on a bright day.

Cheers and I love this thread, Magnus

[image attachment: _S7A5565_ElecPylon.jpg]


1 hour ago, Captain Magenta said:

I do have a couple of questions though:

- The straight edge? Should it be Black/White, or is any darker/lighter interface useable?

- Does it matter if the "dark" is simply out-of-focus background with the bright edge being in focus?

- I've attached a photo: will the edge indicated be useable for this? It's about 125m away, taken at prime focus with my Skymax180 at 2799mm FL and EOS 7Dmk2. And against a greenish background (fields). I've only attached a jpeg for now as I don't intend this to be "the" photo, it was getting dark and shutter speed was slow. I'll re-do it on a bright day.

I think we can certainly give it a go as is. The EOS 7D is going to slightly undersample, as it has a ~4.1µm pixel size - but it will be close enough.

I can work with the raw image and with the edge straight up (aligned with the sensor). Alex is probably going to want a slanted edge for his software. It will be interesting to compare results.
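
For context on the "slightly undersample" remark: the pixel size that places the diffraction cutoff at Nyquist is p = λ·F/2. For the Skymax 180 at 2799 mm focal length (roughly F/15.5) and an assumed 500 nm wavelength that works out to about 3.9 µm, just under the 7D Mk II's ~4.1 µm pixels. A quick sketch of that check:

```python
# Critical (Nyquist-at-cutoff) pixel size for the Skymax 180 / EOS 7D Mk II setup.
wavelength_um = 0.5                        # assume 500 nm green light
focal_length_mm = 2799.0
aperture_mm = 180.0
f_number = focal_length_mm / aperture_mm   # ~15.55

critical_pixel_um = wavelength_um * f_number / 2
print(critical_pixel_um)   # ~3.9 um; pixels larger than this (e.g. ~4.1 um) undersample slightly
```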

