Everything posted by alex_stars

  1. I think we can keep going with the edge detection MTF approach; it just needs to be carried out carefully. Just to keep the ideas going, here is a link to a paper which discusses the effect of using CMOS sensors (as we have been) on creating an MTF: https://core.ac.uk/download/pdf/12039012.pdf I will most certainly keep developing my code to test my scopes, and I am more than happy to discuss further details here. If people are interested in running the Python code themselves, we can think about making an open-source project out of it. However, I will not have the time to create a fancy GUI for people to use...
  2. Sorry, little time these days to hang around on the forum, which is a pity. I agree we should explore how much we can exploit the edge detection method. Let's keep at it. I will have to wait until my 125 mm APO gets delivered; currently I am without a scope. Regarding the drizzle: well, that is what I proposed at the beginning. You remember my post on using super-resolution. Now we have come full circle. Anyway, I agree we will have to deal with the undersampling, hence my suggestion to use super-resolution, which in this case is about the same as drizzle. Exactly, we will need to have the edge at an angle to make use of a "drizzle" approach; hence I was arguing from the start to use an edge that is not aligned with the sensor array. And yes, this is why I do the alignment of the tilted edge data in my code, to compensate for the fact that the edge is tilted. I see our ideas are converging. That is nice to see.👍 I will post more results as soon as I have my new scope and some time for daytime testing.
  3. Here is the link to the thread that john started on the scope: https://stargazerslounge.com/topic/371904-tecnosky-125975-f78-fpl-53-ed-doublet-apo-🤔/ in case people want to follow up on the reviews. Guess I will post mine there too.
  4. As an update, I have decided and ordered the 125 mm f/7.8 from TS. Looking forward to trying it soon (it should be delivered within a week). BTW I saw that @johninderby could not resist the scope either and ordered one too.... 👍 I will post a review after some good nights of testing.
  5. Hi @johninderby and all, I also have one of those on order; it should be delivered next week. Looking forward to trying it out! Did you get yours already?
  6. So I see you are now recommending the edge detection method for estimating the MTF. Great! Mission accomplished. 😀 Is there anything left for us to sort out? I don't think it is very interesting for everybody if we continue the discussion of our curves; I can scale mine to look exactly like yours, and that is not very amazing. Actually, it is expected. Whether people code their programs in Python (because they can) or use ImageJ is beside the topic of this thread, don't you agree? I look forward very much to seeing your results from the 102 Skymax and how they compare to theoretical curves. Since you use ImageJ, I was wondering if you are aware of the plugin that performs that task. Never tried it myself, but it looks decent: Slanted Edge MTF ImageJ. I prefer to code in Python....
  7. Do we? I think we are converging with our results, which I appreciate. Let me post a graph of my MTF with pixel scale soon. Today it's late and I won't be online until Monday, but I will post the graph then.....
  8. Thanks for reminding us of that. I don't think it had been mentioned yet in the discussion.
  9. Somewhat; as we have discussed, we definitely lose contrast. My main point is that we have a hard time utilizing the really high spatial resolutions when we image, so we might not have to worry too much about what happens in the different scope designs at the very right side of the MTF graphs. Something like that. The image I used was taken at f/15 with my Mak, so at about 2700 mm. I am aware of the fact that I could increase the pixel scale, use a Barlow and such... But I just wanted to make a point about the possible sampling of the MTF when we use a scope in real life. I took an image of a black edge on a white background, so a high-contrast target (I posted the image earlier), and that was 170 m away from the scope, across a meadow in the early morning, to keep atmospheric disturbance small.
  10. Interesting point you pick up on. Look, just for the sake of argument, and as you put in sooo much energy to prove things, I decided to run my real-world edge data without super-resolution. Here is the aligned edge data; you can see it gets quite nicely aligned, so the 1D FFT is valid, it is a 1D problem. I do the whole processing without super-resolution and without any interpolation whatsoever (happy?). So we get this graph for the MTF. The green one is completely correct; I have even re-assessed the pixel scale of my camera, so that is spot on where theory tells us the MTF should be. To be correct I had to switch to 600 nm wavelength, as this is the wavelength at which my camera has the highest QE, so just being fair to the sensor. As you can see, the measured MTF eventually goes down to zero, as it should, BUT, more importantly, it does so at a different location than the theoretical one. And now comes the really important part for us amateur astronomers. When we take images with our cameras, we sample the world with the sensors we put on the back of our scopes. They HAVE way too coarse pixel dimensions to sample an MTF completely. As you finally seem to agree with me, the red curve from about X=0.25 onward to the left is undersampled (thus quite useless), because our sensors (even the fancy ones with about 2 µm pixels) are way too bad to sample the full spectrum. How does this translate to our scopes, their design and central obstructions? Well, in my case it does not matter if I image with my Mak or with a 125 mm unobstructed APO, I will be able to sample about the same scales properly. The really fine details I always undersample. Not because the scope is bad, it's because of the camera I have. And now you might ask, what about the fancy images one sees with a C14? I will show you in my next post.
  11. @vlaiv, how very kind of you to say. I may offer some advice: you might consider listening to the things others tell you. I can summarize at this point: you have disregarded all the peer-reviewed published science I presented; you still claim my method is inaccurate for no apparent reason, even though I managed to reproduce your result; you do not manage to reproduce my results, and complain about noise in the real-world data and ... 8 bits ... and ....; and, ah yes, you base all of your "claims" on simulations you do on your computer with some software (which is?). Not sure where to take it next, but better to let it go (as I am of no help, as you say). Maybe I'll end with a hint: any basic mathematical textbook on Fourier transforms will tell you (should you care to read it) that a one-dimensional Fourier transform is adequate for a one-dimensional problem (our edge detection task).
  12. Good morning. Funny you should mention that now. You did not say anything about a predefined pixel scale beforehand, and now you claim my method is inaccurate because of that. Here is what happens, and I assumed you knew that: your initial data looks like the graph below (a horizontal cross section of your image). The beginning and end (the swings in opposite directions) have nothing to do with the problem; we are interested in the edge in the middle, so I cut the wings away. However, when one processes data purely numerically, the location where you cut the data is important for the final scale. So initially I cut 50 data points each way and get the result I posted. If I cut 100 data points each way, the graph looks like this: still a nice representation of the edge. Doing the numerical processing as discussed leads to this curve: information-wise it is the same curve as the originally posted one (here for comparison), however the scale is different on the x-axis. Is that a problem? No. Why should it be? The two curves have the same information, just represented on a different scale (a small numerical sketch of this scaling point is appended after the last post below). BTW, this is one of the reasons why people who talk about MTFs also scale the x-axis between 0 and 1; then no confusion arises. Only when you want to compare different optical designs, as I did above, do you need to introduce a proper scale.
  13. You never fail to amaze me.... So I took your FITS-stored edge data and ran it through my processing. The horizontal scale is in pixels. And?
  14. Yes, welcome to reality. I have no idea what you mean by this; maybe you can explain. My processing program is capable of producing my posted MTF from a simple 8-bit image (colour, though). Here is the original image as it comes off the camera: and I don't need a straight edge or any higher-bit data, nor do I need any noise reduction or such things. Just the steps I explained and.... out comes the red line.... Hmm @vlaiv, what do you say? Can you reproduce the result with your approach?
  15. Hi @vlaiv, good to hear your dog is fine. And a well-performed theoretical experiment (simulation). Now let us bring you into the real world, where you do not know the PSF to start with, where you cannot just convolve your perfect edge with a perfect PSF, AND where you have to deal with noise. Here is some data for you, which is a slice through my measured edge (real camera, real telescope, real air, real target), and here would be a text file with the respective data: edge_data_file.txt and here would be a 2D, 8-bit grayscale image of the same edge data (just vertically repeated). This has perfect vertical alignment with the sensor, as you asked for 😁 Now please redo your experiment and show me your MTF for my scope. I look forward to seeing your results.
  16. Thanks for redoing your experiment and good luck with the dog, hope it's not something serious. No rush.
  17. I agree, and when I have time I will try to do that; please hold on until after the weekend.... time is running out now..... sorry for that....
  18. I fully agree, as my work has been and is based on already published work. I did not expect such a detour.... However, what can you do....
  19. I love the fact that you remind me of quantum mechanics. 😀 Given your comment, you obviously talk about something completely different. Look, once more, and for the last time, these are my steps: (1) I take an image of an edge target. (2) I align and stack the line data (it's all those horizontal lines in ONE 2D image of the line target; EACH line is an observation). During this alignment and stacking I can or cannot deploy super-resolution (my choice); I can also interpolate the coarse data to a finer grid in this step, or not, just to reconstruct a better representation of the ESF. (3) Now I have the ESF. (4) Now I differentiate to get the LSF. (5) Now I do an FFT to get the MTF. (A minimal code sketch of these steps is appended after the last post below.) And yes, either way the MTF will be broad enough; that has never been the issue (except for people who do not know how to do FFTs properly and struggle with the sampling limit). You see, my interpolation happens in step (2) above. You suggest to me that I just do an FFT and have the MTF. If I did that, I would do an FFT of the ESF, which does not make sense at all; by doing so I would skip step (4) and the reconstruction of the LSF. We obviously talk about different things. It is very good that you do simulations on your computer. However, I doubt you are able with your simulations to reconstruct the MTF from an edge target where the edge spread is represented over just a few pixels (say 10, as in my first example). I stated that you did not grasp the method because you suggested taking away all the refined steps that make such a method higher resolution. As it was you who doubted the quality of the method initially, I am still surprised that you suggest that. Working with real-life data, one needs a line target that is not aligned with the sensor, and in many cases one needs super-resolution to recover the edge function from the data..... simple as that. From your edge spread simulation, I would love to see the horizontal cross section of that "data". It looks to me like this edge is spread over many pixels. Maybe you can show us that if you find time. I'm gonna have more ☕
  20. I am well aware of the Shannon-Nyquist theorem. However, if you compare my edge spread data with a sinc function: why would I want to fit a sinc function to my edge spread function? It just does not make sense, Nyquist or not.... Maybe a sigmoid function, but you did not like that, remember, so I showed you a purely data-driven approach.... Super-resolution and its intermediate step of piecewise linear interpolation is just a way to harness the data of many observations and make more of them than of a single one. And maybe you do not understand what I have explained: I do not curve-fit anything in my method, so it is purely data-based. All I do is prepare a 10x larger data array by piecewise linear interpolation of the initial data; there is no curve-fitting component to it. And I then fill it with data.... Unfortunately this rather tells me that you have not grasped the essence of the edge spread function approach yet. You propose the exact opposite of what should be done: you should exactly NOT align the sensor pixel array with the edge target; that is the worst sampling you can do given the limited resolution of your camera (and all cameras are limited). I do just simple finite differencing, there is hardly a simpler "kernel", and may I remind you that you suggested that yourself. I do the FFT of the resulting data, which happens to be 1D in the case of an edge. Your suggested protocol is indeed simpler, but it is the crudest approach there is to this measurement technique.... So no, I will not do that; you are however free to do what you feel like.... Another paper reference to help understand the rationale behind this method: "Comparison of MTF measurements using edge method: towards reference data set" (paper available here). This would be the classical oversampling idea and the plain reason why you would not want to align your edge vertically, parallel to the pixel array of your camera: you would lose so much data you could otherwise sample. This is what I base my code on, the "Airbus method" as I leisurely call it, where you align multiple observations and thereby utilize more data (a rough sketch of this sub-pixel binning idea is appended after the last post below). Ah yes, and I have to clarify: the Airbus method does curve fitting, but I do not. The reason? When I apply super-resolution I harness so much data that I can reconstruct the LSF directly out of the data, with an acceptable amount of noise.... Hmm, maybe this helps.
  21. Regarding this statement: I did not show something that is impossible by the laws of physics. If it were impossible, I could not have shown it. Physics rules everything, no escape; I can state that, being a physicist myself. What you see in my graph of the MTFs is the effect of processing the data with super-resolution. This is possible because I observe the edge many times, with many small variations in each measurement. The same process is used when generating super-resolution images which suddenly show the license plates of cars driving by a camera, even though each single frame of the video is so blurry you can't see the license plate. The combination of many observations makes the difference. You use this too when you lucky-image a planet, and I am sure you did not think that you were breaking the laws of physics by doing so (and your computer explodes because it needs to 😄). What you see in the MTF graph is some extra "information" at very high spatial frequencies, so very small details in the image. What could those be? Hmm, I'd say it's the noise in the processed data. That noise is there and needs to be removed if you want to realistically measure the high-frequency part of the MTF. I did not do that, as I know what it is and don't care about it. However, one could. BTW, one can make this method a lot better than I did for this short demonstration, but the concept has been proven and it works. When I get my APO I will probably redo this and see if I can compare. Then I might have time to build some noise filtering into my software... we'll see.
  22. Well, the method I described was not invented by me; it is built upon a large body of published scientific work, and obviously it works: many scientists use it to understand very complex optical systems. For example, the documentation of the Hubble Space Telescope lists measured Line Spread Functions (link here). So, just to make this crystal clear, this method works as described to a very high degree of resolution. Regarding the FFTs, you can do them in any number of dimensions; they are just mathematics. You can do everything in 1D as I do, or step towards 2D and process there. The one thing you can't do is mix dimensions carelessly. I am not sure if you now agree with me @vlaiv or not, but I have to cut this short, even though I admire the energy you put into experimentation. I wish I had that amount of spare time for my hobby. 👍 You did find a misconception in my previous post, which is probably due to the fact that I write these summaries beside work, most often hastily in my ☕ breaks (as now). I initially wrote that a line convolved with a PSF does not give the LSF. This is somewhat true but also misleading. The correct statement would be: And I have corrected my previous post.
  23. Regarding the interpolation, @vlaiv, I forgot to answer. This is a zoom-in on the transition of the edge spread towards the white side (values close to 1) of the edge image (white is at the right side of the graph). The blue dots are the original pixel data from the camera, the green stars are the piecewise linear interpolation, and the red crosses are a spline interpolation. I did not do a sinc as I don't see the rationale for doing so with this data. As you can see, the green stars exactly follow the blue line (the original data), but now with 10x the horizontal resolution; no additional "information" has been added. The red line (the spline) is also at 10x horizontal resolution, but that interpolation added spurious information to the data. See the red swings away from the blue line: that's what I mean by spurious information. It is a signal that was not there in the camera image. If I now align all observations of the edge and sum them together, especially if I do this at sub-pixel resolution, I can add information from other pixels from other edge observations to the points (green stars) in the graph above. Importantly, this is data that was present in other observations I made, not data I "created" by interpolation. So no, not all types of interpolation generally add "spurious" data (a small code comparison of the two interpolations is appended after the last post below). Hope this helps.
  24. @vlaiv and all, regarding the math connecting the LSF with the PSF and the MTF, I think this is beyond the general interest of the forum (just a guess), so I list some references one can easily access: the York University web book, especially the part on the PSF (http://www.yorku.ca/eye/psf.htm); the Wikipedia article on the Optical Transfer Function (OTF -> neglect phase signal -> MTF) is also good; and an old but gold article on the relations of the LSF, PSF and MTF (article link) and the direct PDF link. The above three resources are roughly sorted by level of complexity in their explanations, easy ones first. I sincerely hope these help to shed light on the matter. However, this week I lack the time (lots of work) to answer in a detailed one-on-one manner, sorry for that.
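
Sketch for post 12: a small numerical illustration of the x-axis scaling point, assuming nothing beyond what the post describes. The number of samples kept after cutting the wings changes the FFT bin spacing, but converting bins to physical spatial frequency (np.fft.rfftfreq with the pixel pitch as sample spacing) puts differently cut curves on the same comparable axis. The Gaussian LSF below is a toy stand-in, not data from the thread.

```python
import numpy as np

pixel_pitch = 1.0                                    # one unit per pixel (placeholder)
n = np.arange(400, dtype=float)
lsf_full = np.exp(-0.5 * ((n - 200.0) / 5.0) ** 2)   # toy line spread function

for cut in (50, 100):                                # cut 50 or 100 points off each wing
    lsf = lsf_full[cut:-cut]
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                    # normalise to 1 at zero frequency
    freq = np.fft.rfftfreq(lsf.size, d=pixel_pitch)  # cycles per pixel
    # Read off the MTF at the same physical frequency for both cuts:
    # the bin index differs, but the value is (nearly) the same.
    print(cut, "points cut per wing:", round(float(np.interp(0.02, freq, mtf)), 4))
```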
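
Sketch for post 19: a minimal Python/NumPy illustration of the ESF -> LSF -> MTF steps listed there. This is not the poster's actual program; the synthetic edge image, the noise level and the pixel pitch are placeholder assumptions, and the edge is assumed to be already aligned with the sensor columns.

```python
import numpy as np

# Synthetic stand-in for a camera frame of a high-contrast edge target
# (dark left half, bright right half) with a little noise added.
h, w = 64, 128
img = np.zeros((h, w))
img[:, w // 2:] = 1.0
img += np.random.default_rng(0).normal(0.0, 0.01, img.shape)

# Step (2)/(3): stack the rows (each row is one observation of the edge) -> ESF.
esf = img.mean(axis=0)

# Step (4): differentiate the ESF -> LSF.
lsf = np.gradient(esf)

# Step (5): FFT of the LSF -> MTF (magnitude, normalised to 1 at zero frequency).
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]

pixel_pitch_um = 2.9                                  # placeholder pixel size, micrometres
freq = np.fft.rfftfreq(lsf.size, d=pixel_pitch_um)    # cycles per micrometre
print(freq[:3], mtf[:3])
```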
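
Sketch for post 20: a rough illustration of the sub-pixel oversampling ("Airbus method") idea on made-up data. Because a slightly tilted edge crosses each image row at a different sub-pixel position, projecting every sample onto a common grid with 10x finer spacing reconstructs the ESF at sub-pixel resolution without any curve fitting. The edge angle and image size are arbitrary choices for the illustration, not values from the thread.

```python
import numpy as np

rows, cols, slope = 64, 64, 0.05                     # synthetic tilted edge
x = np.arange(cols, dtype=float)
edge_pos = cols / 2 + slope * np.arange(rows)        # edge location in each row
img = (x[None, :] > edge_pos[:, None]).astype(float)

# Express every pixel's position relative to its own row's edge, then bin
# all samples onto a 0.1-pixel grid: the 10x super-resolved ESF.
dx = (x[None, :] - edge_pos[:, None]).ravel()
val = img.ravel()
bin_idx = np.round(dx * 10).astype(int)
bin_idx -= bin_idx.min()                             # make indices non-negative
counts = np.bincount(bin_idx)
esf_super = np.bincount(bin_idx, weights=val) / np.maximum(counts, 1)
```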
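
Sketch for post 23: a small comparison of piecewise linear interpolation (np.interp) and a cubic spline on a synthetic step-like ESF. The linear version follows the original samples exactly on a 10x finer grid, while the spline can swing past the data near the edge; the numbers are invented for the illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.arange(20, dtype=float)
esf = np.clip((x - 9.0) / 2.0, 0.0, 1.0)            # synthetic edge spread over ~2 pixels

x_fine = np.linspace(x[0], x[-1], 10 * x.size)      # 10x horizontal resolution
esf_linear = np.interp(x_fine, x, esf)              # piecewise linear: follows the data
esf_spline = CubicSpline(x, esf)(x_fine)            # cubic spline: may overshoot

print("linear max:", esf_linear.max())              # stays at 1.0
print("spline max:", esf_spline.max())              # typically exceeds 1.0 near the edge
```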