Posts posted by alex_stars

  1. I think we can keep going with the edge detection MTF approach, it just needs to be carried out carefully. Just to keep the ideas going, here is a link to a paper which discusses the effect of using CMOS sensors (as we have been) on the measured MTF.

    https://core.ac.uk/download/pdf/12039012.pdf

    I most certainly will keep on developing my code to test my scopes. And I am more than happy to discuss further details here.

    If people are interested in running the python code themselves, we can think about making an open-source project out of it. However I will not have the time to create a fancy GUI for people to use...

  2. On 09/02/2021 at 11:28, vlaiv said:

    Well, I would like to explore what sort of results do we get from under sampled data.

    There is well known drizzle algorithm that deals with this in imaging, but I believe it is often misused since people don't have control over dither step and I don't think it works good for random dither (original Hubble documentation implies that telescope is precisely pointed for dithers of a fraction of pixel - like 1/3 pixel dithers)

    In this case - we can use edge at an angle to give us wanted consistent dithering step - but we need to compensate for the fact that edge is tilted.

    I'm wondering if you implemented drizzle bit - or anything similar in your python code?

    Method that I proposed above with afocal and mobile phone - will mix in both optical properties of eyepiece and phone lens (although not sure how much at that scale will be picked up) - but it will provide wanted sampling rate. Camera in prime focus is likely to under sample.

    If we want a method that is reliable and easy for amateurs to use - we need to explore both options.

    Sorry, little time these days to hang on the forum, which is a pity.

    I agree we should explore how much we can exploit the edge detection method. Let's keep at it. I will have to wait until my 125 mm APO gets delivered. Currently without a scope.

    Regarding drizzle: that is what I proposed at the beginning. You remember my post on using super-resolution; now we have come full circle. Anyway, I agree we will have to deal with the undersampling, hence my suggestion to use super-resolution, which in this case is about the same as drizzle.

    Exactly, we will need to have the edge at an angle, to make use of a "drizzle" approach. Hence I was arguing from the start to use an edge that is not aligned with the sensor array.

    And yes, this is why I do the alignment code of the tilted edge data, to compensate for the fact that the edge is tilted.
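The alignment of the tilted-edge data can be sketched roughly like this (a simplified stand-in for my actual code; the centroid-based edge detection and the integer-pixel shifts are illustrative choices, the real thing works at sub-pixel precision):

```python
import numpy as np

def align_rows(edge_img):
    """Align each row of a tilted-edge image to a common edge position.

    edge_img: 2D float array, dark-to-bright edge running roughly vertically.
    Returns the row-aligned image (integer-pixel shifts only, for brevity).
    """
    rows = edge_img.astype(float)
    # Edge location per row: centroid of the absolute row derivative.
    positions = []
    for r in rows:
        d = np.abs(np.diff(r))
        positions.append(np.sum(np.arange(d.size) * d) / np.sum(d))
    positions = np.array(positions)
    ref = positions.mean()
    # Shift each row so its edge lands on the mean edge position.
    aligned = np.array([np.roll(r, int(round(ref - p)))
                        for r, p in zip(rows, positions)])
    return aligned
```

After this step every row is (to within the shift resolution) an observation of the same edge, which is what makes the subsequent 1D treatment valid.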

    I see our ideas are converging. That is nice to see. 👍

    I will post more results as soon as I have my new scope and some time for daytime testing.

  3. 22 hours ago, vlaiv said:

    Ok, so here is complete idea:

    - Take straight sharp edge target (it can be just printed on a piece of paper). It is important that edge is high contrast and of uniform color (black / white), that edge is very sharp with no blurring of its own and that it is straight.

    - measure distance to target as best as you can

    - Take image of the target with high power eyepiece and your mobile phone (use adapter as any shake will void results) - also make sure your phone has good focus

    - Take additional image in same configuration (same distance, same eyepiece, etc ...) of something of known length - maybe a good ruler or similar - so we can calculate sampling rate in given configuration.

    From that data we can derive actual MTF or your scope + eyepiece and compare it to theoretical scope MTF?

    So I see you are now recommending the edge detection method for estimating the MTF. Great! Mission accomplished. 😀

    Is there anything left for us to sort out?

    I don't think it is very interesting for everybody if we continue discussing our curves; I can scale mine to look exactly like yours, which is not amazing but expected. Whether people code their programs in Python, because they can, or use ImageJ is beside the topic of this thread, don't you agree?

    I very much look forward to seeing your results from the 102 Skymax and how they compare to the theoretical curves.

    Since you use ImageJ, I was wondering whether you are aware of the plugin that performs this task. I have never tried it myself, but it looks decent.

    Slanted Edge MTF ImageJ

    I prefer to code in Python....

  4. 1 minute ago, vlaiv said:

    I wonder why we keep getting different results? Although, we did get the same curve from synthetic data, right?

    Do we? I think our results are converging, which I appreciate. Let me post a graph of my MTF with pixel scale soon. Today it's late and I won't be online until Monday, but I will post the graph then...

  5. 22 minutes ago, andrew s said:

    However, the recent comments on sampling reminded me that formally the MTF only applies to "shift invariant systems" and as I am sure you both know CCD detectors are not shift invariant as the result depends on where the image fall even with Nyquist sampling.

    Thanks for reminding us of that. I don't think it had been mentioned yet in the discussion.

  6. 49 minutes ago, jetstream said:

    Do you lose detail this way? at what f ratio are you sampling and what are you imaging?

    Somewhat; as we have discussed, we definitely lose contrast. My main point is that we have a hard time utilizing the really high spatial resolutions when we image, so we might not have to worry too much about what happens in the different scope designs at the very right side of the MTF graphs. Something like that.

    The image I used was taken at f/15 with my Mak, so at about 2700 mm focal length. I am aware that I could increase the pixel scale, use a Barlow and such... But I just wanted to make a point about the possible sampling of the MTF when we use a scope in real life.

    I took an image of a black edge on a white background, so a high-contrast target (I posted the image earlier), 170 m away from the scope, across a meadow in the early morning, to keep atmospheric disturbance small.

  7. 9 hours ago, vlaiv said:

    If this is so - then I don't think we have viable method for amateurs to use - scopes that are F/6 for example would need ~1 µm pixel size to properly sample without a barlow - and there are no such cameras.

    Interesting point you pick up on. Just for the sake of argument, and since you have put so much energy into proving things, I decided to run my real-world edge data without super-resolution.

    Here is the aligned edge data; you can see it is quite nicely aligned, so the 1D FFT is valid: it is a 1D problem.

    for_vlaiv_08.png.16d36fefd5bf8a2523a2d8aae02bd34f.png

    I do the whole processing without super-resolution and without any interpolation whatsoever (happy?). So we get this graph for the MTF.

    for_vlaiv_07.png.b5d3657cc4bc7534dc9191a2e47b34b5.png

    The green curve is completely correct; I have even re-assessed the pixel scale of my camera, so it is spot on where theory tells us the MTF should be. To be accurate I had to switch to a 600 nm wavelength, as this is where my camera has its highest QE, so I am just being fair to the sensor.

    As you can see, the measured MTF eventually goes down to zero, as it should. But, more importantly, it does so at a different location than the theoretical one.

    And now comes the really important part for us amateur astronomers. When we take images with our cameras, we sample the world with the sensors we put on the back of our scopes. They HAVE way too coarse pixel dimensions to sample an MTF completely.

    As you finally seem to agree with me, the red curve from about X=0.25 onward to the left is under-sampled (and thus quite useless), because our sensors (even the fancy ones with about 2 µm pixels) are far too coarse to sample the full spectrum.

    How does this translate to our scopes, their design and central obstructions?

    Well, in my case it does not matter whether I image with my Mak or with a 125 mm unobstructed APO; I will be able to sample about the same scales properly. The really fine details I always undersample. Not because the scope is bad, but because of the camera I have.
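A back-of-envelope comparison makes the point (the 3.75 µm pixel size is an illustrative assumption, not necessarily my camera's): compare the diffraction cutoff of a scope against the Nyquist limit of the sensor.

```python
# Back-of-envelope: diffraction cutoff of a scope vs. Nyquist limit of
# a camera pixel. The pixel size below is an illustrative assumption.

def cutoff_cycles_per_mm(f_ratio, wavelength_nm):
    """Diffraction cutoff frequency of an aperture: 1 / (lambda * N)."""
    wavelength_mm = wavelength_nm * 1e-6
    return 1.0 / (wavelength_mm * f_ratio)

def nyquist_cycles_per_mm(pixel_um):
    """Highest frequency a sensor can represent: 1 / (2 * pixel pitch)."""
    return 1.0 / (2.0 * pixel_um * 1e-3)

fc = cutoff_cycles_per_mm(15, 600)      # f/15 Mak at 600 nm: ~111 cycles/mm
fn = nyquist_cycles_per_mm(3.75)        # 3.75 um pixel: ~133 cycles/mm
print(f"cutoff  {fc:.0f} cycles/mm")
print(f"nyquist {fn:.0f} cycles/mm")
```

Run the same numbers for an f/6 scope and the cutoff (~278 cycles/mm) is far beyond that pixel's Nyquist limit, which is exactly the "~1 µm pixel" problem quoted above.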

    And now you might ask, what about the fancy images one sees from a C14?

    I will show you in my next post.


  8. 7 hours ago, vlaiv said:

    I really want to get to bottom of this and confirm that doing things the way you do it - 1d derivative and 1d FFT, is producing correct result - but I'm having difficulty in doing so and honestly - you are not helping me much with that.

    @vlaiv, how very kind of you to say. I may offer some advice. You might consider listening to the things others tell you.

    I can summarize at this point:

    • You have disregarded all peer-reviewed published science I presented
    • You still claim my method is inaccurate for no apparent reason as I managed to reproduce your result
    • You do not manage to reproduce my results, and complain about noise in the real world data and ... 8 bits ... and ....
    • Ah yes and you base all of your "claims" on simulations you do on your computer with some software (which is?)

    Not sure where to take it next, but better let it go (as I am of no help as you say).

    Maybe I should end with a hint. Any basic mathematical textbook on Fourier transforms will tell you (should you care to read it) that a one-dimensional Fourier transform is adequate for a one-dimensional problem (our edge detection task).

  9. 7 hours ago, vlaiv said:

    It should go up to 256px or rather up to frequency of 0.5 cycles per pixel

    Good morning. Funny you should mention that now. You did not say anything about a predefined pixel scale beforehand, and now you claim my method is inaccurate because of that.

    Here is what happens, and I assumed you know that:

    Your initial data looks like the graph below (horizontal cross section of your image)

    for_vlaiv_04.png.70a66b517a4bda040327697b8d9c0dcf.png

    The beginning and end (the swings in opposite directions) have nothing to do with the problem. We are interested in the edge in the middle. So I cut the wings away.

    However, when one processes data purely numerically, the location where you cut the data matters for the final scale. So initially I cut 50 data points on each side and got the result I posted. If I cut 100 data points on each side, the graph looks like this:

    for_vlaiv_06.png.76a8bcde375fd349ceb0c3fa140d0956.png

    Still a nice representation of the edge. Doing the numerical processing as discussed leads to this curve:

    for_vlaiv_05.png.cbe36cf123ddf2d59813c267e4cf035d.png

    Information wise it is the same curve as the originally posted one (here for comparison):

    for_vlaiv_03.png.431ad709050ceeb7244f407128fd78f3.png

    However, the scale on the x-axis is different. Is that a problem? No. Why should it be? The two curves carry the same information, just represented on a different scale.

    BTW, this is one of the reasons why people who work with MTFs also scale the x-axis between 0 and 1; then no confusion arises. Only when you want to compare different optical designs, as I did above, do you need to introduce a proper physical scale.
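A quick numerical illustration of the point (a toy Gaussian LSF and my own illustrative numbers, not the thread's data): cropping the data changes which frequencies the FFT bins land on, but plotted against cycles per pixel the two curves coincide.

```python
import numpy as np

# A Gaussian LSF, cropped to two different window lengths. The MTF bins
# land at k/N cycles per pixel, so the *bin index* scale differs with N,
# but plotted against physical frequency the curves are the same.
x = np.arange(-100, 100)
lsf = np.exp(-(x / 4.0) ** 2)

for half in (50, 100):
    seg = lsf[100 - half:100 + half]
    mtf = np.abs(np.fft.rfft(seg))
    mtf /= mtf[0]                       # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(seg.size)   # cycles/pixel, independent of crop
    # the value at 0.1 cycles/pixel is the same for both crops
    print(half, np.interp(0.1, freqs, mtf))
```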

  10. 1 hour ago, vlaiv said:

    8 bit is going to introduce much quantization noise in samples and in general samples are really noisy - which can be seen as alternating black and white lines.

    Yes, welcome to reality.

    1 hour ago, vlaiv said:

    This shows that you can't read off data from the image until you finish complete procedure - it is best to leave image as is and perform operations on it

    I have no idea what you mean by this, maybe you can explain.

    My processing program is capable of producing my posted MTF from a simple 8-bit image (colour, though).

    Here is the original image as it comes off the camera:

    edge01.png.839da2a54f4f80dc1a1c30e127d3675f.png

    And I don't need a straight edge or any higher-bit data, nor do I need any noise reduction or similar. Just the steps I explained and...

    SMTF_4.png.2e10322a859351b485bdaeb2c46de909.png

    Out comes the red line....

    Hmm @vlaiv, what do you say? Can you reproduce the result with your approach?

  11. 1 hour ago, vlaiv said:

    Where we again have exact same MTF - but this time as line cross section of above 2D circular MTF - graph is the same.

    Hi @vlaiv, good to hear your dog is fine. A well-performed theoretical experiment (simulation). Now let us bring you into the real world, where you do not know the PSF to start with, where you cannot just convolve a perfect edge with a perfect PSF, and where you have to deal with noise.

    Here is some data for you, which is a slice through my measured edge (real camera, real telescope, real air, real target)

    for_vlaiv_02.png.118ba70d1684e7c288bc930bf387dd5f.png

    and here would be a text file with the respective data:

    edge_data_file.txt

    and here is a 2D, 8-bit grayscale image of the same edge data (just vertically repeated). This has perfect vertical alignment with the sensor, as you asked for 😁

    edge_test.png.6d6ebb5c5795c3e0af571df154c85720.png

    Now please redo your experiment and show me your MTF for my scope.

    I look forward to seeing your results.

  12. 11 minutes ago, Sunshine said:

    Been trying to follow along, maybe someone can sum up all 6 pages of this thread in layman's terms, thanks.

    i'll be waiting........Hello?

    I agree, and when I have time I will try to do that. Please hold on until after the weekend... time is running out now... sorry about that.

  13. 6 minutes ago, Seelive said:

    Perhaps any interested parties could write a paper on the subject and submit it to a suitable journal so that it can be peer reviewed there rather than on this forum

    I fully agree, as my work has been and is based on already-published work. I did not expect such a detour... However, what can you do...

  14. 17 minutes ago, vlaiv said:

    I also said - don't use interpolation of any kind - it is not needed. In fact - look at previous post by Andrew - and remember a bit of quantum mechanics - "FT of narrow function is broad and vice versa" - you don't need to interpolate your data - just do an FFT and the cross section of the MTF will be broad enough.

    I love the fact that you remind me of quantum mechanics 😀. Given your comment, you are obviously talking about something completely different. Look, once more, and for the last time, these are my steps:

    1. Having taken an image of an edge target, I align and stack the line data (all those horizontal lines in ONE 2D image of the edge target; EACH line is an observation).
    2. During this alignment and stacking I may or may not deploy super-resolution (my choice). I can also interpolate the coarse data onto a finer grid in this step, or not.
    3. Now I have the ESF.
    4. Now I need to differentiate to get the LSF.
    5. Now I need to do an FFT to get the MTF
    6. And yes, either way the MTF will be broad enough; that has never been the issue (except for people who do not know how to do FFTs properly and struggle with the sampling limit).

    You see, my interpolation happens in step 2 above, just to reconstruct a better representation of the ESF. You suggest that I just do an FFT and have the MTF. If I did that, I would be doing an FFT of the ESF, which does not make sense at all; I would be skipping step 4 and the reconstruction of the LSF. We are obviously talking about different things.
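Steps 3 to 5 above can be sketched in a few lines (a minimal illustration, not my actual program; the function name `mtf_from_esf` is mine for this sketch):

```python
import numpy as np

def mtf_from_esf(esf):
    """Steps 3-5: differentiate the ESF to get the LSF, then FFT to the MTF.

    esf: 1D array, edge spread function (dark -> bright).
    Returns (frequencies in cycles/sample, normalized MTF).
    """
    lsf = np.diff(esf)                  # step 4: finite difference -> LSF
    mtf = np.abs(np.fft.rfft(lsf))      # step 5: 1D FFT magnitude
    mtf /= mtf[0]                       # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size)
    return freqs, mtf
```

Feeding it a smooth synthetic ESF (e.g. a cumulative Gaussian) returns the expected monotonically falling MTF.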

    25 minutes ago, vlaiv said:

    I ran simulation showing perfect match between two MTF getting methods in the post above - how did I not grasp the method?

    It is very good that you do simulations on your computer. However, I doubt your simulations can reconstruct the MTF from an edge target where the edge spread is represented over just a few pixels (say 10, as in my first example).

    I stated that you did not grasp the method because you suggested removing all the refined steps that make the method higher resolution. As it was you who initially doubted the quality of the method, I am still surprised you suggest that. Working with real-life data, one needs an edge target that is not aligned with the sensor, and in many cases one needs super-resolution to recover the edge function from the data... simple as that.

    2 hours ago, vlaiv said:

    Here is edge that has been convolved with PSF for perfect aperture

    From your edge spread simulation, I would love to see the horizontal cross section of that "data". It looks to me like this edge is spread over many pixels. Maybe you can show us, if you find the time. I'm going to have more ☕

  15. 55 minutes ago, vlaiv said:

    First is "super resolution" and need for linear interpolation of the sample versus sinc interpolation. Here I can't do much except to point you to the proof of Shannon-Nyquist sampling theorem and the fact that sinc interpolation is perfect restoration of band limited properly sampled function. There is mathematical proof for this - not sure what is there to discuss.

    I am well aware of the Shannon-Nyquist theorem. However if you compare my edge spread data with a sinc function

    SMTF_2.png.8ccb190690b78d808aa21c52915ae46b.pngindex.jpg.064e2a1dcf5941043f70642c23ff13d4.jpg

    Why would I want to fit a sinc function to my edge spread function? It simply does not make sense, Nyquist or not... A sigmoid function maybe, but you did not like that, remember, so I showed you a purely data-driven approach...

    Super-resolution, with its intermediate step of piecewise linear interpolation, is just a way to harness the data of many observations rather than a single one. And maybe you do not understand what I have explained: I do not curve-fit anything in my method, so it is purely data-based. All I do is prepare a 10x larger data array by piecewise linear interpolation of the initial data; there is no curve-fitting component to it. And I then fill it with data...
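To illustrate the idea (a simplified sketch, not my actual code; the function `superres_esf`, the known per-observation offsets, and the logistic test edge below are all illustrative assumptions): each coarse observation is placed on a 10x finer common grid by piecewise linear interpolation at its known sub-pixel offset, and the observations are then averaged.

```python
import numpy as np

def superres_esf(observations, offsets, factor=10):
    """Combine several coarse edge observations on a 10x finer grid.

    observations: list of 1D arrays (one edge profile per observation)
    offsets: known sub-pixel edge shift of each observation (in pixels)
    Each observation is placed on the fine grid by piecewise linear
    interpolation (np.interp) -- no curve fitting -- then all are averaged.
    """
    n = len(observations[0])
    fine_x = np.linspace(0, n - 1, (n - 1) * factor + 1)
    acc = np.zeros_like(fine_x)
    for obs, off in zip(observations, offsets):
        # sample j of this observation sits at position j + off on the
        # common axis; interpolate it onto the fine grid there
        acc += np.interp(fine_x, np.arange(n) + off, obs)
    return fine_x, acc / len(observations)
```

Because the offsets stagger the sample positions, the averaged fine-grid profile tracks the true edge more closely than any single coarse observation.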

    55 minutes ago, vlaiv said:

    I would really like to see your experiment done with following protocol:

    - shoot straight edge at vertical position (make sure it is as straight as possible and that it is indeed as close to vertical as possible) - use critical sampling rate for your pixel size (which you already have)

    - perform differential on image using simple kernel

    - do FFT of resulting image and then measure resulting line profile

    No need for super resolution / interpolation and curve fitting - this protocol is much simpler. Yes, note pixel scale so you can properly derive cut off frequency when plotting measured MTF against theoretical ones.

    Unfortunately this rather tells me that you have not yet grasped the essence of the edge spread function approach. You propose the exact opposite of what should be done.

    • You should exactly NOT align the sensor pixel array with the edge target; that is the worst sampling you can do given the limited resolution of your camera (and all cameras are limited).
    • I do just simple finite differencing, there is hardly a simpler "kernel", and may I remind you that you suggested that yourself
    • I do the FFT of the resulting image, which happens to be 1D in the case of an edge.
    • Your suggested protocol is indeed simpler, but it is the crudest approach there is to this measurement technique... So no, I will not do that; you are however free to do what you feel like...

    Another paper reference to help understand the rationale behind this method:

    "Comparison of MTF measurements using edge method: towards reference data set" (paper available here)

    image_oversampe01.png.d27fe128537bf345ae7fcf092d3258ff.png

    This is the classical oversampling idea and the plain reason why you would not want to align your edge vertically, parallel to the pixel array of your camera. You would lose so much data you could sample.

    image_oversampe02.png.91b42cadc4f6b26e08b602357e9232a8.png

    This is what I base my code on; I casually call it the Airbus method. You align multiple observations and thereby utilize more data.

    Ah yes, I have to clarify: the Airbus method does curve fitting, but I do not. The reason? When I apply super-resolution I harness so much data that I can reconstruct the LSF directly from the data, with an acceptable amount of noise...

    Hmm, maybe this helps.


  16. 1 hour ago, vlaiv said:

    which is clear indicator that this method does not produce correct results since it shows something impossible by physical laws as we understand them

    Regarding this statement: I did not show something that is impossible by the laws of physics. If it were impossible, I could not have shown it. Physics rules everything, no escape. I can state that, being a physicist myself.

    What you see in my graph of the MTFs is the effect of processing the data with super-resolution. This is possible because I observe the edge many times, with many small variations in each measurement. The same process is used when generating super-resolution images which suddenly show the license plates of cars driving past a camera, even though each single frame of the video is so blurry you can't read the plate. The combination of many observations makes the difference. You use this too when you lucky-image a planet, and I am sure you did not think that you were breaking the laws of physics by doing so (and your computer did not explode because it needed to 😄).

    What you see in the MTF graph is some extra "information" at very high spatial frequencies, so very small details in the image. What could those be? Hmm, I'd say it's the noise in the processed data. That noise is there and needs to be removed if you want to realistically measure the high frequency part of the MTF. I did not do that, as I know what it is and don't care about it. However one could.

    BTW, one can make this method a lot better than I did for this short demonstration, but the concept has been proven and it works. When I get my APO I will probably redo this and see how it compares. Then I might have time to build a noise filter into my software... we'll see.

  17. 41 minutes ago, vlaiv said:

    method is indeed almost as what he described - except: Don't try to fit functions to data and don't convert to 1D domain until you are done - FFT needs to be done in 2D domain and it needs to be done on "digital derivative" of ESF - which is LSF.

    Well, the method I described was not invented by me; it is built upon a large body of published scientific work, and it obviously works: many scientists use it to understand very complex optical systems. For example, the documentation of the Hubble Space Telescope lists measured Line Spread Functions (link here). So, just to make this crystal clear: this method works as described, to a very high degree of resolution.

    Regarding the FFTs: you can do them in any number of dimensions, they are just mathematics. You can do everything in 1D as I do, or step up to 2D and process there. The one thing you can't do is mix dimensions carelessly.

    I am not sure whether you now agree with me @vlaiv or not, but I have to cut this short, even though I admire the energy you put into experimentation. I wish I had that amount of spare time for my hobby. 👍

    You did find a misconception in my previous post, which is probably due to the fact that I write these summaries alongside work, most often hastily in my ☕ breaks (as now).

    I initially wrote that a line convolved with a PSF does not give the LSF. This is somewhat true but also misleading. The correct statement would be:

    Quote

    An infinitely long, infinitely narrow line convolved with a PSF does produce the LSF.

    And I have corrected my previous post.
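The corrected statement is easy to check numerically (my own toy example: a Gaussian PSF, with circular FFT convolution standing in for the "infinite" line): every horizontal cross-section of the convolved line equals the PSF integrated along the line direction, i.e. the LSF.

```python
import numpy as np

n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / 18.0)          # toy Gaussian PSF
psf /= psf.sum()

line = np.zeros((n, n))
line[:, n // 2] = 1.0                        # narrow vertical line

# circular convolution via FFT; periodicity mimics an "infinite" line
img = np.real(np.fft.ifft2(np.fft.fft2(line) * np.fft.fft2(psf)))

cross = img[0]                               # any horizontal cross-section
lsf = np.roll(psf.sum(axis=0), n // 2)       # PSF summed along y (FFT origin shift)
print(np.allclose(cross, lsf))               # True
```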

  18. Regarding the interpolation @vlaiv, I forgot to answer.

    for_vlaiv.png.085d06f83cdf47f54029dc1c596fffa4.png

    This is a zoom-in on the transition of the edge spread towards the white side (values close to 1) of the edge image (white is at the right side of the graph). The blue dots are the original pixel data from the camera, the green stars are the piecewise linear interpolation and the red crosses are a spline interpolation. I did not do a sinc as I don't see the rationale for doing so with this data.

    • As you can see, the green stars exactly follow the blue line (original data), now at 10x the horizontal resolution. However, no additional "information" has been added.
    • The red line (the spline) is also at 10x horizontal resolution, but that interpolation added spurious information to the data. See the red swings away from the blue line: that's what I mean by spurious information. It is a signal that was not there in the camera image.

    If I now align all observations of the edge and sum them together, especially at sub-pixel resolution, I can add information from other pixels of other edge observations to the points (green stars) in the graph above. Importantly, this is data that was present in other observations I made, not data I "created" by interpolation.

    So no, not all types of interpolation add "spurious" data.
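The same behaviour is easy to reproduce with a toy edge (my own illustrative numbers; the cubic polynomial is a stand-in for any higher-order interpolation such as a spline): piecewise linear interpolation can never leave the range of the original samples, while a higher-order scheme can overshoot near a sharp edge and inject signal that was never recorded.

```python
import numpy as np

x = np.arange(8.0)
esf = np.array([0., 0., 0., 0.05, 0.95, 1., 1., 1.])   # sharp toy edge

fine = np.arange(0, 7.0, 0.1)
linear = np.interp(fine, x, esf)                       # 10x grid, no overshoot

# cubic polynomial through the 4 samples around the edge (a stand-in for
# any higher-order / spline interpolation)
coeffs = np.polyfit(x[2:6], esf[2:6], 3)
cubic = np.polyval(coeffs, np.arange(2, 5.01, 0.1))

print(linear.min() >= 0 and linear.max() <= 1)         # True: stays in range
print(cubic.max() > 1)                                 # True: overshoots
```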

    Hope this helps.

  19. @vlaiv and all,

    Regarding the math connecting the LSF with the PSF and the MTF, I think this is beyond the general interest of the forum (just a guess) so I list some references one can easily access:

    The above three resources are roughly sorted by level of complexity in their explanations, easy ones first.

    I sincerely hope these help to shed light on the matter. However, this week I lack the time (lots of work) to answer in a detailed one-on-one manner, sorry for that.
