
MTF of a telescope



11 minutes ago, Sunshine said:

Been trying to follow along, maybe someone can sum up all 6 pages of this thread in layman's terms,  thanks.

i'll be waiting........Hello?

I agree, and when I have time I will try to do that. Please hold on until after the weekend - I'm out of time right now, sorry about that.


4 minutes ago, Seelive said:

This initially started out as a seemingly interesting thread but I am slowly losing the will to live. Perhaps any interested parties could write a paper on the subject and submit it to a suitable journal so that it can be peer reviewed there rather than on this forum.

I don't really think that any of what has been discussed is anywhere near scientific enough to be published - but I do get your sentiment, and I acknowledge that it is my fault as well. Quite a bit of the discussion is too technical to be of interest to others.

11 minutes ago, Sunshine said:

Been trying to follow along, maybe someone can sum up all 6 pages of this thread in layman's terms,  thanks.

i'll be waiting........Hello?

I for one am rather happy with what I've learned in this discussion - that is, the Edge/Line Transfer Function method of deriving a section of the MTF.

I was hoping that this thread would explain some technical concepts that would help people better understand the optical performance of telescopes - what is possible and what is not - but I guess the topic is simply too technical, or those of us discussing it simply fail to convey it in plain language without too many technical terms.

Then there is the part where the debate gets heated, partly due to misunderstanding and partly due to differing interpretations of the evidence, or the lack of it.


13 minutes ago, Seelive said:

This initially started out as a seemingly interesting thread but I am slowly losing the will to live. Perhaps any interested parties could write a paper on the subject and submit it to a suitable journal so that it can be peer reviewed there rather than on this forum.

It's very easy - if you're not interested, check out other things.

I hope this informative thread continues.


I don't mind it at all; nothing about how the world works is easy when you take a scalpel to it. If a small percentage of us can follow along, then great. If anything, it motivates me to delve into it and understand a bit more.


6 minutes ago, alex_stars said:

From your edge spread simulation, I would love to see the horizontal cross section of that "data". Looks to me that this edge is spread over many pixels. Maybe you can show us that if you find time. I'm gonna have more

Sure, I'll do another round of simulation, detailing every step. I'll start with a PSF at the bare edge of sampling resolution. Here is the generated PSF:

image.png.c7e3b65f9d1b659350052b650fe8e4fe.png

As you can see, the disk is covered by something like 3 pixels and each ring is approximately one pixel wide. Although it might not seem like it, this actually properly samples the Airy pattern.

Here is the FFT of the above Airy pattern:

image.png.93e6703f422b17c896e185f6decc1dcc.png

and the associated MTF diagram - we can clearly see that the frequency cutoff is right at the edge of FFT space (pixel 256 from the center for a 512x512 image) - perfect sampling of the Airy disk.
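For readers following along, the "perfect sampling" claim can be written out (an editorial gloss using the standard diffraction result, not part of the original post): the incoherent cutoff of an unobstructed circular aperture lands exactly at the sensor's Nyquist frequency when the pixel pitch p satisfies

```latex
% Incoherent cutoff of a circular aperture (D = diameter, f = focal length, N = f/D):
\nu_c = \frac{D}{\lambda f} = \frac{1}{\lambda N},
\qquad
\text{critical sampling: } \nu_c = \nu_{\mathrm{Nyq}} = \frac{1}{2p}
\;\Longrightarrow\;
p = \frac{\lambda N}{2}
```

where λ is the wavelength and the Nyquist frequency corresponds to 0.5 cycles per pixel.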

Here is my edge convolved with the PSF, and the corresponding plot of its cross section:

image.png.86acb2e2e8eab89764a2e307e6fba2b3.png

You can see that there are about 10 samples defining the edge (the Airy disk is 3 pixels, so at least twice that on each side, plus the first few rings ...).
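For anyone wanting to reproduce this kind of simulation outside ImageJ, here is a minimal numpy/scipy sketch of the same idea. The 1.6 px first-ring radius and the 512 px grid are my assumptions chosen to mimic the figures above, not values taken from the original workflow:

```python
# Minimal sketch: build a near-critically-sampled Airy PSF, convolve a vertical
# step edge with it, and inspect one horizontal cross section (the ESF).
import numpy as np
from scipy.special import j1
from scipy.signal import fftconvolve

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)

r1 = 1.6                               # assumed first-dark-ring radius in pixels
v = 3.8317 * r / r1 + 1e-12            # 3.8317 = first zero of J1; offset avoids 0/0
psf = (2.0 * j1(v) / v) ** 2           # Airy intensity pattern
psf /= psf.sum()

edge = (x >= 0).astype(float)          # ideal vertical step edge
esf_img = fftconvolve(edge, psf, mode='same')

esf = esf_img[N // 2]                  # one horizontal cross section of the blurred edge
rise = np.count_nonzero((esf > 0.05) & (esf < 0.95))
print(f"edge transition spans roughly {rise} samples")
```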

Have to pause now to take the dog to the vet ... will continue later.


1 hour ago, Seelive said:

This initially started out as a seemingly interesting thread but I am slowly losing the will to live. Perhaps any interested parties could write a paper on the subject and submit it to a suitable journal so that it can be peer reviewed there rather than on this forum.

You don't have to read it if you're not interested. As long as it is within the rules, why not allow it to develop?

Regards Andrew 


1 hour ago, vlaiv said:

I understand now - you are talking about scaling property of Fourier transform

image.png.eb126bb7dac81c8c1db48f155ec2b52b.png
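(The formula in the image above is presumably the usual Fourier scaling, or "similarity", theorem; stated here for readers following along:)

```latex
% Scaling (similarity) theorem of the Fourier transform:
\mathcal{F}\{ f(a t) \}(\omega) \;=\; \frac{1}{|a|}\, F\!\left(\frac{\omega}{a}\right),
\qquad F(\omega) = \mathcal{F}\{ f(t) \}(\omega), \quad a \neq 0
```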

however, you need to understand that this holds exactly as written above - a time-squeezed (or, in this instance, spatially squeezed) function will indeed have a stretched FT, and we can clearly see this effect for a perfect aperture: a perfect aperture that is twice as large will have twice as high a cutoff frequency.

However - this property does not hold for functions that are different.

The PSF of an obstructed aperture is not simply a scaled PSF of an unobstructed aperture - the relative sizes of the peaks also change (the Strehl ratio changes, as does the encircled energy - the ratio of energy in the disk vs the rings), so the above no longer holds. We have two functions, and regardless of the fact that one looks to us like a "squeezed" or "narrower" version of the other, in the sense in which the scaling property holds, it is not - these are in fact two different functions and not a time-scaled single function. We can't apply the time-scaling property to different functions.

I did not assume a squeezed function; the obstructed PSF is the subtraction of two PSFs. It is that subtraction which causes the difference I am trying to defend.

I will try to construct a full argument and present it. I will avoid the transforms you use, as it is the assumptions in those (when applied to an obstructed aperture) that I am challenging.

This may take some time with various commitments. 

Regards Andrew 


I got lost around midway down page 2 but I'm still reading everything in the hope that some of it somehow sinks in :)

Image result for hangover maths meme

Edit: Damn that was supposed to be a GIF can't even get that right 🤣


47 minutes ago, alex_stars said:

Thanks for redoing your experiment and good luck with the dog, hope it's not something serious. No rush.

Nothing serious, thanks - a case of "over-licking" (the hair got - I don't know the term - threaded perhaps, but in a bad way) which created a little sore. We noticed the spot wasn't looking good, so it's just a few rounds of antibiotics and it's already getting better.

In any case, to continue - next I perform the differentiation and get the line spread function:

image.png.4c84414ea35d903faf947a07989f0ef3.png

And finally, taking the FFT of the line spread function produces a section of the MTF:

image.png.3c8d0679b24829c6e95b37d1d067c8cc.png

We again have the exact same MTF - but this time as a line cross section of the above 2D circular MTF - the graph is the same.
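A minimal 1D sketch of the chain described above (ESF → derivative → LSF → |FFT| → MTF section), in numpy rather than ImageJ; the tanh edge profile is only a stand-in for real data, not the simulated edge from the post:

```python
# Edge spread function -> derivative (line spread function) -> |FFT| -> MTF section.
import numpy as np

def mtf_from_esf(esf):
    lsf = np.gradient(esf)                   # differentiate the ESF to get the LSF
    lsf = lsf / lsf.sum()                    # normalise so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))           # magnitude of the FFT of the LSF
    freq = np.fft.rfftfreq(len(lsf), d=1.0)  # frequency axis in cycles per pixel
    return freq, mtf

# synthetic, noiseless edge just to show the shapes involved
x = np.arange(-64, 64)
esf = 0.5 * (1.0 + np.tanh(x / 2.0))         # stand-in for a measured edge profile
freq, mtf = mtf_from_esf(esf)
print(freq[:3], mtf[:3])                     # MTF starts at 1 and falls off with frequency
```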


23 minutes ago, andrew s said:

You don't have to read it if you're not interested. As long as it is within the rules, why not allow it to develop?

Regards Andrew 

I'm actually very interested, and now becoming somewhat amused by it. I'm all in favour of a good debate, but I'm just slightly put off by the seemingly "I'm right, you're wrong" polarisation that seems to be creeping in (although I suppose that is, and always will be, evident in any debate) 😕


6 minutes ago, Seelive said:

the seemingly "I'm right, you're wrong" polarisation that seems to be creeping in

Each party is expressing their position and justifying it by various means. If there were no differing positions there would be no educational debate, and I'm not sure why anyone would be put off by this, especially when they don't have to read it.


I think we are all respectful of each other's positions. It's hard to convey all the linguistic nuances in an online debate, especially while defending our own views. :)

Regards Andrew 


1 hour ago, vlaiv said:

We again have the exact same MTF - but this time as a line cross section of the above 2D circular MTF - the graph is the same.

Hi @vlaiv, good to hear your dog is fine. And a well-performed theoretical experiment (simulation). Now let us bring you into the real world, where you do not know the PSF to start with, where you cannot just convolve a perfect edge with a perfect PSF, AND where you have to deal with noise.

Here is some data for you - a slice through my measured edge (real camera, real telescope, real air, real target):

for_vlaiv_02.png.118ba70d1684e7c288bc930bf387dd5f.png

and here is a text file with the corresponding data:

edge_data_file.txt

and here is a 2D, 8-bit grayscale image of the same edge data (just vertically repeated). This has perfect vertical alignment with the sensor, as you asked for 😁

edge_test.png.6d6ebb5c5795c3e0af571df154c85720.png

Now please redo your experiment and show me your MTF for my scope.

I look forward to seeing your results.


13 minutes ago, alex_stars said:

8-bit grayscale image of the same edge data (just vertically repeated). This has perfect vertical alignment with the sensor, as you asked for

8-bit data is going to introduce a lot of quantization noise, and in general the samples are really noisy - which can be seen as alternating black and white lines.

image.png.1a56f6cceaed770bb4520d23981de29a.png

This is the result of the processing. It does not look like much, but I took the image you posted earlier in this thread as the exact image you captured - again, it is only 8 bit:

image.png.90eb27450fc97234c5798c0dcfeef586.png

Then I took a 512x512 region that has a fairly straight edge. I applied a differential kernel and got this:

image.png.486053426defddc5ae4e5ca66b7ed82c.png

and finally I took the FFT of that and got this:

image.png.0336292dcb0d34758b663e2dfdd56219.png

This is the zoomed central region. Since the edge is not perfectly vertical but angled, we get angled features in our MTF cross section - but we can still extract information from it:

image.png.aa57770ba73b549c9e84e0c56f37f61a.png

Now, I don't believe that the little "belly" that starts around 25 is genuine - I believe it is a "secondary spectrum", so to speak - pollution from other lines at an angle - but the first part is fairly strong - very good for only 8-bit data and a curvy edge.

This shows that you can't read data off the image until you have finished the complete procedure - it is best to leave the image as is and perform the operations on it.
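A rough numpy analogue of the steps just described (crop, horizontal derivative kernel, 2D FFT, central cut); this is a sketch under my own assumptions, not the actual ImageJ/FFTJ procedure used to produce the figures above:

```python
# 512x512 crop -> horizontal derivative kernel -> 2D FFT magnitude -> central cut.
import numpy as np
from scipy.ndimage import convolve

def mtf_section_2d(img):
    kernel = np.array([[-0.5, 0.0, 0.5]])        # simple horizontal difference kernel
    lsf2d = convolve(img.astype(float), kernel)  # per-row line spread function
    spec = np.abs(np.fft.fftshift(np.fft.fft2(lsf2d)))
    centre = spec.shape[0] // 2
    section = spec[centre, centre:]              # cut from DC outwards along the u-axis
    return section / section[0]                  # normalise so the section starts at 1
```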


1 hour ago, vlaiv said:

8-bit data is going to introduce a lot of quantization noise, and in general the samples are really noisy - which can be seen as alternating black and white lines.

Yes, welcome to reality.

1 hour ago, vlaiv said:

This shows that you can't read data off the image until you have finished the complete procedure - it is best to leave the image as is and perform the operations on it

I have no idea what you mean by this; maybe you can explain.

My processing program is capable of producing my posted MTF from a simple 8 bit image (colour though)

Here is the original image as it comes off the camera:

edge01.png.839da2a54f4f80dc1a1c30e127d3675f.png

And I don't need a straight edge or any higher bit-depth data, nor do I need any noise reduction or similar. Just the steps I explained and....

SMTF_4.png.2e10322a859351b485bdaeb2c46de909.png

Out comes the red line....

Hmm @vlaiv, what do you say? Can you reproduce the result with your approach?

 


15 minutes ago, alex_stars said:

My processing program is capable of producing my posted MTF from a simple 8 bit image (colour though)

But is it accurate?

I have a proposition for you - run your software on the following image:

edge_transfer_function.fits

So we can compare it to a known PSF/MTF and see what sort of result your software produces.


55 minutes ago, alex_stars said:

Horizontal scale is in pixels

And?

I really want to get to the bottom of this and confirm that doing things the way you do - a 1D derivative and a 1D FFT - produces a correct result, but I'm having difficulty doing so and, honestly, you are not helping me much with that.

For example:

I did the same thing you did - I took the last image that I posted here, took a readout of one horizontal line and converted that to numbers. I then entered those numbers into this:

https://scistatcalc.blogspot.com/2013/12/fft-calculator.html

Untitled.gif.0055dd03de87e5d11471efb8e7e97cb3.gif

I got two different results based on how many samples I chose - in both cases I tried to place the line in the middle of the set.

If we can't get a consistent result on the same (perfect) data using a method, how reliable is that method?

With the method as given by the underlying math, even if I take different cut-outs I still get the same result (the pixels only need to be properly converted into frequency units):

image.png.3403da8cf8f8beb4b9cd24645581395a.png
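(The conversion being referred to is presumably the usual DFT bin-to-frequency relation, added here for clarity:)

```latex
% Bin k of an N-sample cut with pixel pitch \Delta x corresponds to the spatial frequency
\nu_k = \frac{k}{N\,\Delta x} \;\text{ cycles per unit length}
\qquad\Bigl(\text{equivalently } \tfrac{k}{N} \text{ cycles per pixel}\Bigr),
\qquad k = 0, 1, \dots, \lfloor N/2 \rfloor
```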

By the way, the regular method is quite resilient to noise. Here it is polluted with Gaussian noise so as to produce an SNR of 100 (easily accomplished in a single exposure with a sensor having a full well capacity of about 15K):

image.png.c74faafad1a1e9d40d1148cbd530eb9b.png
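If anyone wants to repeat that noise test on a synthetic profile, a quick way (reusing the hypothetical `esf` array and `mtf_from_esf` helper from the earlier sketch, not anything from the actual ImageJ run) would be:

```python
# Pollute the synthetic edge with Gaussian noise at roughly SNR 100, then redo the MTF.
rng = np.random.default_rng(0)
noisy_esf = esf + rng.normal(0.0, esf.max() / 100.0, esf.shape)  # signal / sigma ~ 100
freq, mtf_noisy = mtf_from_esf(noisy_esf)
```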


7 hours ago, vlaiv said:

It should go up to 256px or rather up to frequency of 0.5 cycles per pixel

Good morning. Funny you should mention that now. You did not say anything about a pre-defined pixel scale beforehand, and now you claim my method is inaccurate because of that.

Here is what happens, and I assumed you knew this:

Your initial data looks like the graph below (a horizontal cross section of your image):

for_vlaiv_04.png.70a66b517a4bda040327697b8d9c0dcf.png

The beginning and end (the swings in opposite directions) have nothing to do with the problem. We are interested in the edge in the middle. So I cut the wings away.

However, when one processes data purely numerically, where you cut the data matters for the final scale. So initially I cut 50 data points each way and got the result I posted. If I cut 100 data points each way, the graph looks like this:

for_vlaiv_06.png.76a8bcde375fd349ceb0c3fa140d0956.png

Still a nice representation of the edge. Doing the numerical processing as discussed leads to this curve:

for_vlaiv_05.png.cbe36cf123ddf2d59813c267e4cf035d.png

Information-wise it is the same curve as the one originally posted (shown here for comparison):

for_vlaiv_03.png.431ad709050ceeb7244f407128fd78f3.png

However, the scale on the x-axis is different. Is that a problem? No. Why should it be? The two curves carry the same information, just represented on a different scale.

BTW, this is one of the reasons why people who talk about MTFs also scale the x-axis between 0 and 1 - then no confusion arises. Only when you want to compare different optical designs, as I did above, do you need to introduce a proper scale.
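A tiny numpy illustration of this scale point (the tanh edge and the cut widths are my stand-ins, not the posted data): the MTF estimates from two different cut lengths line up once the axis is expressed in cycles per pixel rather than bin index.

```python
import numpy as np

x = np.arange(-200, 200)
esf = 0.5 * (1.0 + np.tanh(x / 2.0))             # stand-in edge profile

for half_width in (50, 100):                     # keep 50 or 100 samples each side of the edge
    cut = esf[200 - half_width: 200 + half_width]
    lsf = np.gradient(cut)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(len(cut), d=1.0)      # cycles per pixel, independent of cut length
    # against bin index the two curves look different; against `freq` they agree
    print(half_width, round(float(np.interp(0.05, freq, mtf)), 4))
```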


7 hours ago, vlaiv said:

I really want to get to the bottom of this and confirm that doing things the way you do - a 1D derivative and a 1D FFT - produces a correct result, but I'm having difficulty doing so and, honestly, you are not helping me much with that.

@vlaiv, how very kind of you to say. I may offer some advice. You might consider listening to the things others tell you.

I can summarize at this point:

  • You have disregarded all peer-reviewed published science I presented
  • You still claim my method is inaccurate for no apparent reason as I managed to reproduce your result
  • You do not manage to reproduce my results, and complain about noise in the real world data and ... 8 bits ... and ....
  • Ah yes and you base all of your "claims" on simulations you do on your computer with some software (which is?)

Not sure where to take it next, but it's better to let it go (since, as you say, I am of no help).

Maybe I'll end with a hint. Any basic mathematical textbook on Fourier transforms will tell you (should you care to read it) that a one-dimensional Fourier transform is adequate for a one-dimensional problem (our edge-detection task).


1 hour ago, alex_stars said:

@vlaiv, how very kind of you to say. I may offer some advice. You might consider listening to the things others tell you.

If you don't mind, I would rather we tackle this as a team than as opposing sides. I think this method is a valid one; it is based on solid math, and I'm very interested in assessing its usability. You have working software, but please understand that I have reservations about its accuracy for a few reasons:

1. Your method deviates from the straightforward math that describes it.

2. I don't have proof for some of the claims, so I need to understand them more deeply (such as sampling at an angle and doing a 1D FFT) and, if possible, find proof of them.

3. The graph that you presented as the result of measurements does not quite fit the theory - it is "above" the theoretical maximum in two places:

image.png.a78c4e0977f8775cda4d310ec5529e4b.png

Quote

The small overshoot of the red curve above the green curve in the MTF plot is due to the data fitting in the Edge Spread Function. So consider the two curves (green and red) to be equal until the red one falls below the green one.

Now this raises my eyebrow - why should we need to do that, to what extent will this method produce such results, and how accurate is it?

1 hour ago, alex_stars said:

Ah yes and you base all of your "claims" on simulations you do on your computer with some software (which is?)

ImageJ - a free, Java-based software tool for scientific image manipulation. I use Fiji, which is a distribution loaded with assorted plugins - like FFTJ for Fast Fourier Transforms, ...

If you like, I can detail every single step I take so it can be reproduced by anyone (I sort of assume it is reproducible when I post screenshots of the steps, but I'm not sure whether I'm detailed enough).

1 hour ago, alex_stars said:

You have disregarded all peer-reviewed published science I presented

I don't have paid access to them and I don't intend to pay just for the sake of this discussion. I have had a look at the freely accessible one and quoted from it.

1 hour ago, alex_stars said:

Good morning. Funny you should mention that now. You did not say anything about a pre-defined pixel scale beforehand, and now you claim my method is inaccurate because of that.

First off, let me tell you that I've found a mathematical justification for using a 1D FFT in this case - but it is limited in scope. This is an example of what I mean about being helpful - it would have been so much easier if you had said "look Vlad, check out the 2D FT of separable functions" - then I would have done that and it would have saved us time.

So yes, in the case of a separable function - which here means a perfectly vertical straight edge (it can't have bends or be at an angle) - we can use a 1D FFT to produce a cross section of the MTF, because the FT of a product is the product of the FTs, and for a vertical straight edge we have f(x) * constant, so we end up with FT(f(x)) * delta(v) - which is just the 1D FT of f(x). I don't know whether there are other cases where this is applicable as well - feel free to point me in the right direction.
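Spelled out in symbols (my notation, just restating the separability argument above):

```latex
% For a separable function the 2D Fourier transform factors:
\mathcal{F}_{2D}\{\, g(x)\, h(y) \,\}(u,v) \;=\; G(u)\, H(v)
% A perfectly vertical edge is constant along y, i.e. h(y) = 1, so H(v) = \delta(v):
\mathcal{F}_{2D}\{\, g(x)\cdot 1 \,\}(u,v) \;=\; G(u)\,\delta(v)
```

So the 2D spectrum is confined to the u-axis, and its profile there is just the 1D FT of g(x) - which is why a 1D FFT of the edge profile gives the MTF cross section in this special case.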

On further thought, could it be that the scaling in the frequency domain has something to do with the angled edge? Here is why I think so - but we'll need to confirm it:

image.png.8bc3ecb746e6679b374640894faec915.png

Sampling the LSF at an angle produces stretching in the "time domain", so the FT will be shrunk in the frequency domain.
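One way to make that precise (my sketch, still to be confirmed as noted above): suppose the edge is tilted by an angle θ from vertical; a horizontal row then crosses it obliquely, so the profile read along the row is the true ESF stretched by 1/cos θ, and the scaling theorem quoted earlier compresses the measured spectrum accordingly:

```latex
e_{\mathrm{row}}(x) \;=\; e(x \cos\theta)
\quad\Longrightarrow\quad
\widehat{e}_{\mathrm{row}}(\nu) \;=\; \frac{1}{\cos\theta}\,
\widehat{e}\!\left(\frac{\nu}{\cos\theta}\right)
% so the apparent cutoff moves from \nu_c down to \nu_c \cos\theta
```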

I've been thinking about another thing; let me point out something in your data graph:

image.png.a78c4e0977f8775cda4d310ec5529e4b.png

The measured line does not end at 0. This might be a consequence of noise or some error in the data, but it could also be a consequence of under-sampling. At one point I remembered that you mentioned the image used is a colour image, and I figured it was made with an OSC sensor. Now, an OSC sensor has a Bayer matrix and as such samples at half the frequency of a mono sensor (at least for red and blue - green is probably somewhere in between, depending on the debayering used).

If the above is correct, then this method is not going to be as useful to the amateur community, as it would require very long focal lengths or very small pixels - which generally means use of a Barlow element, but then we are not measuring the scope, we are measuring the scope + Barlow. I know that part of the super-resolution you were mentioning is meant to tackle this issue, and that tilting the edge has to do with "drizzling" - however, I don't really think it works, unless I see some sort of proof of it (a mathematical proof or even a simulation).
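For what it's worth (my numbers, assuming a pixel pitch p and no help from debayering interpolation), the sampling argument is:

```latex
\nu_{\mathrm{Nyq,\,mono}} = \frac{1}{2p},
\qquad
\nu_{\mathrm{Nyq,\,R/B}} = \frac{1}{2\,(2p)} = \frac{1}{4p}
% red/blue photosites in a Bayer array are spaced 2p apart, so their Nyquist
% frequency is half that of a mono sensor with the same pixel pitch
```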

If you don't mind, I would like us to explore these things. I will, for my part, do some simulations and prepare data. If it is not too much trouble, maybe you could run that data through your software so we can see the result?

