Everything posted by vlaiv

  1. Nothing serious, thanks - a case of "over licking" (hair got - I don't know the term - threaded perhaps - but in a bad way) that created a little sore. We noticed the spot was not looking good, so it's just a few rounds of antibiotics and it's already getting better. In any case, to continue - next I perform differentiation and get the line spread function: And finally, doing an FFT of the line spread function produces a section of the MTF: where we again have the exact same MTF - but this time as a line cross section of the above 2D circular MTF - the graph is the same.
  2. Sure, I'll do another round of simulation, detailing every step. I'll start with a PSF at the bare edge of sampling resolution. Here is the generated PSF: As you can see, the disk is covered by something like 3 pixels and each ring is approximately one pixel wide. Although it might not seem like it, this actually properly samples the Airy pattern. Here is the FFT of the above Airy pattern: and the associated MTF diagram - we can clearly see that the frequency cutoff is right at the edge of FFT space (pixel 256 from the center for a 512x512 image) - perfect sampling of the Airy disk. Here is my edge convolved with the PSF and the corresponding plot of the cross section: you can see that there are about 10 samples that define the edge (the Airy disk is 3, so at least twice that on each side + the first few rings) ... I have to pause now to take the dog to the vet ... will continue later.
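A critically sampled Airy PSF with its cutoff at the FFT edge can be reproduced with a short sketch. This is illustrative, not the author's code; the radial scale factor pi/2 is my choice, picked so that the MTF cutoff lands exactly at Nyquist.

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

n = 512
k = np.pi / 2          # radial scale: MTF cutoff lands exactly at Nyquist
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
r = np.hypot(x, y)

v = k * np.where(r == 0, 1.0, r)         # avoid division by zero at centre
psf = (2 * j1(v) / v) ** 2               # Airy pattern intensity
psf[r == 0] = 1.0                        # limit of (2*J1(v)/v)^2 at v = 0
psf /= psf.sum()

# MTF = |FT(PSF)|; read a radial section from the centre outward
mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
profile = mtf[n//2, n//2:]
```

The profile starts at 1, falls with the familiar slight sag and is essentially zero by the last frequency pixel - i.e. the cutoff sits at the edge of FFT space, as in the simulation described above.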
  3. I don't really think that any of what has been discussed is anywhere near scientific enough to be published - but I do get your sentiment, and I acknowledge that it is my fault as well. Quite a bit of the discussion is too technical to be of interest to others. I for one am rather happy with what I've learned in this discussion - namely the Edge/Line Transfer function method of deriving a section of the MTF. I was hoping that this thread would explain some technical concepts that would help people better understand the optical performance of telescopes and what is and isn't possible, but I guess the topic is simply too technical - or the people discussing it simply fail to convey it in plain language without too much use of technical terms. Then there is the part where the debate gets heated, partly due to misunderstanding and partly because of interpretation of evidence or the lack thereof.
  4. I understand now - you are talking about the scaling property of the Fourier transform. However, you need to understand that this holds only as written: a time-squeezed (or in this instance spatially squeezed) function will indeed have a stretched Fourier transform - and we can clearly see this effect with the perfect aperture: a perfect aperture that is twice as large will have twice as high a cutoff frequency. However, this property does not hold for functions that are actually different. The PSF of an obstructed aperture is not simply a scaled PSF of an unobstructed aperture - the relative sizes of the peaks also change (Strehl changes / encircled energy - the ratio of energy in the disk vs the rings) and the above no longer holds. We have two functions, and regardless of the fact that to us one looks like a "squeezed" or "narrower" version of the other, in the above sense where time scaling holds - it is not. Those are in fact two different functions and not a time-scaled single function. We can't apply the time scaling property to different functions.
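For reference, the scaling property being discussed is the standard one:

```latex
f(t) \;\longleftrightarrow\; F(\omega)
\quad\Longrightarrow\quad
f(at) \;\longleftrightarrow\; \frac{1}{|a|}\,F\!\left(\frac{\omega}{a}\right),
\qquad a \neq 0 .
```

It relates one function to a rescaled copy of itself; it says nothing about two functions that merely look like squeezed versions of each other, which is exactly the point made above.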
  5. I did not suggest you fit a sinc function. I simply said that if you want to interpolate your data to an accurate curve, you should use a sinc interpolation function over a linear interpolation function - as sinc provides perfect reconstruction for a band-limited signal, whereas linear interpolation attenuates the high frequencies of the signal and thus changes it. I also said - don't use interpolation of any kind; it is not needed. In fact, look at the previous post by Andrew and remember a bit of quantum mechanics - "the FT of a narrow function is broad and vice versa" - you don't need to interpolate your data - just do an FFT and the cross section of the MTF will be broad enough. I ran a simulation showing a perfect match between the two MTF-derivation methods in the post above - how did I not grasp the method?
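The difference between the two interpolators can be demonstrated in a few lines. This is a sketch under assumed values - `f0`, the window size and the test point are arbitrary choices of mine.

```python
import numpy as np

# Whittaker-Shannon (sinc) reconstruction vs linear interpolation of a
# band-limited signal sampled above the Nyquist rate (sample spacing = 1).
n = np.arange(-64, 65)                 # sample instants
f0 = 0.23                              # tone frequency, below Nyquist (0.5)
samples = np.sin(2 * np.pi * f0 * n)

def sinc_reconstruct(t):
    # x(t) = sum_k x[k] * sinc(t - k); np.sinc(x) = sin(pi x)/(pi x)
    return np.sum(samples * np.sinc(t - n))

t = 10.37                              # an off-grid evaluation point
exact = np.sin(2 * np.pi * f0 * t)
err_sinc = abs(sinc_reconstruct(t) - exact)
err_lin = abs(np.interp(t, n, samples) - exact)
```

Sinc reconstruction recovers the off-grid value to within the truncation error of the finite window, while linear interpolation visibly attenuates a tone this close to Nyquist.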
  6. I think that we can apply the same logic and, without assumptions but by means of mathematical proof, show that an obstructed aperture can't have higher frequencies than an unobstructed one. An obstructed aperture is actually the difference of one larger unobstructed and one smaller unobstructed aperture. From the linearity of the Fourier transform, it follows that the Fourier transform of the obstructed aperture is the difference of the Fourier transforms of the two unobstructed apertures. Since the smaller aperture has a lower cutoff frequency than the larger aperture, their difference simply can't produce frequencies higher than the cutoff frequency of the larger aperture (neither of the two apertures produces high enough frequencies for this to happen when we subtract - all higher frequencies are zero, and zero − zero = zero). Why did I emphasize the word assumption? You'll have to provide the proof of that, and a definition of narrower.
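The linearity argument is easy to verify numerically. This is a toy check with illustrative radii, not tied to any particular telescope.

```python
import numpy as np

# An annular (obstructed) aperture is a large disk minus a small disk,
# so by linearity its Fourier transform is the difference of their FTs.
n = 256
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
r = np.hypot(x, y)

big = (r <= 60).astype(float)          # assumed outer radius (pixels)
small = (r <= 20).astype(float)        # assumed central obstruction
annulus = big - small

diff = np.max(np.abs(np.fft.fft2(annulus) -
                     (np.fft.fft2(big) - np.fft.fft2(small))))
```

`diff` sits at floating-point noise level: the obstructed aperture's spectrum contains nothing beyond what the two clear apertures already contain.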
  7. I'm just baffled as to what could possibly baffle you about an obstructed aperture. From physics we know that the PSF is the power spectrum of the Fourier transform of the aperture - regardless of aperture shape. It works for circular, square or any other aperture type (even apodized, where we vary intensity, and aberrated, where we vary wavefront phase). The PSF acts on the image via convolution, and the MTF is the amplitude of the Fourier transform of the PSF. If you don't question any of the above for a clear aperture and you don't question the math - why do you question it for an obstructed aperture?
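The chain stated above (aperture -> PSF -> MTF) can be sketched for a clear circular pupil; the grid and pupil sizes here are illustrative.

```python
import numpy as np

# PSF = |FT(aperture)|^2 (any pupil shape works: obstructed, apodized...),
# MTF = |FT(PSF)|, normalised to 1 at zero frequency.
n = 256
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
aperture = (np.hypot(x, y) <= 32).astype(float)   # clear circular pupil

psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
psf /= psf.sum()                                   # normalise energy

mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
```

The same two FFT calls work unchanged for an obstructed pupil; only the `aperture` array changes, which is the point of the paragraph above.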
  8. There is no question that this method works - there is a very elegant and simple proof that it does - so I'm 100% with you on that - the Edge Transfer Function method works if it is applied correctly. The proof is simple and relies on 3 facts: the convolution theorem, the fact that the differential of an edge in the direction perpendicular to that edge is a line, and the fact that the 2D Fourier transform of a line going through the origin is a line perpendicular to it. The result of the method is the MTF along a single line - and yes, that is why we need this measurement in many orientations in order to reconstruct an approximation of the full MTF. There are a few things where we still have different opinions. First is "super resolution" and the need for linear interpolation of the samples versus sinc interpolation. Here I can't do much except point you to the proof of the Shannon-Nyquist sampling theorem and the fact that sinc interpolation is a perfect restoration of a band-limited, properly sampled function. There is mathematical proof of this - I'm not sure what there is to discuss. Linear interpolation is not - it introduces error into the sampled function, as it acts as a low pass filter with attenuation (sinc is a perfect low pass filter without attenuation - it just cuts off the higher order harmonic frequencies that arise from the pulse train; the FT of sinc is the box/rectangle function). Second is the fact that in your protocol for the experiment you are using a 1D Fourier transform. The result is not the same as doing a 2D Fourier transform on the whole image. For example, note the above post where I point Andrew to the difference between the 1D case and the 2D case with spherically symmetric functions - although a circular aperture and a rectangle are "similar", in the sense that the cross section of a circular aperture is a rectangle in 1D, the corresponding Fourier transforms no longer share this relationship. The cross section of the MTF is not the triangle function (there is a little sag).
I would really like to see your experiment done with the following protocol:
- shoot a straight edge in vertical position (make sure it is as straight as possible and that it is indeed as close to vertical as possible)
- use the critical sampling rate for your pixel size (which you already have)
- perform the differential on the image using a simple kernel
- do an FFT of the resulting image and then measure the resulting line profile
No need for super resolution / interpolation and curve fitting - this protocol is much simpler. Yes, note the pixel scale so you can properly derive the cutoff frequency when plotting the measured MTF against theoretical ones.
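The protocol above can be tried end-to-end on synthetic data. This is a sketch: a Gaussian PSF stands in for the telescope, and the blurred edge is generated analytically for simplicity; for a Gaussian blur the expected MTF is exp(-2·pi²·sigma²·nu²).

```python
import numpy as np
from scipy.special import erf

# Synthetic run of the protocol: vertical edge blurred by a Gaussian PSF,
# differentiated with a simple kernel, then 2D-FFT'd; the MTF section is
# read along the horizontal line through the origin.
n = 256
sigma = 3.0                               # assumed blur width in pixels
xs = np.arange(n) - n // 2
esf = 0.5 * (1 + erf(xs / (sigma * np.sqrt(2))))   # edge spread function
img = np.tile(esf, (n, 1))                # vertical edge image

# differentiate along x with a simple central-difference kernel -> LSF
lsf_img = np.zeros_like(img)
lsf_img[:, 1:-1] = 0.5 * (img[:, 2:] - img[:, :-2])

# 2D FFT; the MTF section lies on the line perpendicular to the edge
F = np.abs(np.fft.fftshift(np.fft.fft2(lsf_img)))
section = F[n // 2, n // 2:] / F[n // 2, n // 2]
```

The measured section matches the analytic Gaussian MTF to within the small bias of the central-difference kernel - no interpolation or curve fitting needed.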
  9. Just as an addition to the above Edge Transfer function method - here is a simulation of it for a perfect aperture: Here is an edge that has been convolved with the PSF for a perfect aperture: After that we perform differentiation to get the LSF: and we do an FFT of the LSF: Then we measure the resulting FFT of the LSF to get the MTF profile: As a comparison, here is the MTF derived directly from the PSF:
  10. With regards to that, I managed in the meantime to track down the relevant math. Look up the 2D Fourier transform in polar coordinates, the Fourier transform of the rectangular / brick function, and the Fourier transform of the sinc^2 function. In a nutshell:
1D case:
- rectangle -FT-> sinc
- sinc^2 -FT-> triangular function with clear cutoff
2D case: when switching to polar coordinates you end up multiplying with a Bessel function, and the corresponding pairs are:
- circular aperture (rectangle in radius / polar) -FT(power)-> Airy pattern
- Airy pattern -FT-> MTF with clear cutoff (and it resembles the triangular function)
Yes, there is a clear cutoff frequency for the clear aperture and it is well defined - all other values are really zero.
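Written out, with the standard closed form for the diffraction-limited circular-aperture MTF added for reference (frequency normalised to the cutoff):

```latex
\textbf{1D:}\quad
\operatorname{rect}(x) \xrightarrow{\ \mathcal{F}\ } \operatorname{sinc}(\nu),
\qquad
\operatorname{sinc}^2(x) \xrightarrow{\ \mathcal{F}\ } \operatorname{tri}(\nu)
\ \ \text{(zero for } |\nu| > 1\text{)}

\textbf{2D (circularly symmetric, via the Hankel transform with } J_1\textbf{):}\quad
\operatorname{circ}(r) \xrightarrow{\ |\mathcal{F}|^{2}\ } \text{Airy pattern}

\operatorname{MTF}(\nu) =
\begin{cases}
\dfrac{2}{\pi}\left(\arccos\nu - \nu\sqrt{1-\nu^{2}}\right), & 0 \le \nu \le 1,\\[4pt]
0, & \nu > 1 .
\end{cases}
```

The arccos form is what produces the near-triangular shape with the slight sag mentioned elsewhere in the thread, and it is identically zero past the cutoff.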
  11. It turns out that we can easily settle this by just examining the Wiki article on the Optical transfer function: https://en.wikipedia.org/wiki/Optical_transfer_function#Using_extended_test_objects_for_spatially_invariant_optics Here are some interesting quotes: This is true for any image - if we take an image of Mickey Mouse through a telescope, take the Fourier transform of it and divide that by the Fourier transform of the original image - we will get the Fourier transform of the PSF (the OTF). That is a simple consequence of the convolution theorem, which states that the Fourier transform of a convolution of two functions is equal to the product of the Fourier transforms of those functions - thus any image will do, as it holds for all images. The next quote on the wiki is an important one: The first sentence is very important - the FT of a line going through the origin is a line orthogonal to the first line. This means that if we use a line as the base image that we shoot through a telescope, we already have the FT of that image - it is again just a line. And the third ingredient is in the following quote (something that we've established): So here is how you should conduct the Edge Spread method test: Record a high contrast edge. Differentiate that image to get the LSF. Make sure that the center of the LSF is in the center of the image, take the 2D fast Fourier transform of that differentiated image and measure along the line that is perpendicular to the original LSF. I think that I owe @alex_stars an apology - the method is indeed almost as he described - except: don't try to fit functions to the data and don't convert to the 1D domain until you are done - the FFT needs to be done in the 2D domain and it needs to be done on the "digital derivative" of the ESF - which is the LSF.
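The quoted property - that the 2D FT of a line through the origin is a line perpendicular to it - is easy to confirm numerically with a toy image:

```python
import numpy as np

# 2D FFT of a vertical line through the centre: all of the energy should
# land on the horizontal line through the origin of frequency space.
n = 128
img = np.zeros((n, n))
img[:, n // 2] = 1.0                     # vertical line through the centre

F = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img))))

horizontal = F[n // 2, :].sum()          # energy on the perpendicular line
total = F.sum()
```

`horizontal` equals `total`: the transform is non-zero only along the perpendicular line, which is why the MTF section can be read directly along it.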
  12. @alex_stars, I apologize in advance if this post comes across as harsh or disrespectful. It is not aimed at you personally in any way. I believe that some of the claims are doing a disservice to people actually trying to grasp the subject and draw conclusions from it. First you present your findings with the Edge Spread Function method, and I quote: which assures people that the method you use is credible. Then you derive the following result graph: and I again quote: which is a clear indicator that this method does not produce correct results, since it shows something impossible under physical laws as we understand them (your measured MTF somehow has a higher cutoff frequency than is theoretically possible), but more importantly - it can lead people to believe that the theory is somehow flawed and that the experiment clearly demonstrates that. After I point out that the above method is flawed - by assuming that the Line Spread Function is a convolution of a line with the PSF, in which case the PSF and LSF can't be the same objects (and hence the MTF can't be the FT of the LSF if it is defined as the FT of the PSF) - you proceed to explain that the LSF is obtained by different means - using integration of the PSF along one coordinate (which would again make it different from the PSF). I then show that the method you used - taking the differential of the Edge Spread Function - produces the convolution of a line with the PSF, first in simulation, then as an actual mathematical proof. So I was right to assume that the LSF is indeed a line convolved with the PSF - which in turn renders the above method invalid - and in response you quote literature that clearly supports what I'm saying: (screenshot from the PDF you linked) From the above I can't but conclude that the method you used for testing a telescope does not produce the correct MTF and thus can't be used for comparison against the theoretical MTF or an MTF obtained by correct, established optical tests (the wavefront method).
  13. In fact the above can be easily proven:
(psf ∗ edge)(x, y) = ∫∫ psf(u, v) · edge(x − u, y − v) du dv
Now we take the derivative with respect to x; since only the edge depends on x, that gives
∂/∂x (psf ∗ edge)(x, y) = ∫∫ psf(u, v) · edge_x(x − u, y − v) du dv
The derivative of the edge with respect to x is a "Dirac delta line at x = crossover" - a line that runs vertically - so we end up with a convolution expression in which the psf convolves a vertical line. Hence the derivative of the convolved edge with respect to x is the same as the convolved line, and not the integral of the PSF in one coordinate.
  14. I just ran a test and found something very interesting ... If you take an edge, convolve it with the PSF and then use a kernel filter to "differentiate" it (a bit like you suggested - except I'm using a filter to differentiate) and read off the function, and if you take a line, convolve it with the PSF and read off the values / plot the graph - you get essentially two identical functions: (the X values are not the same because the edge and the line were not in the same place in the test images - but the curves match perfectly). To me this suggests that using the edge test, reading off the curve and then differentiating it will not produce an LSF that is the integral of the PSF in one variable.
  15. Ok, that explains my misunderstanding of what the LSF is. Could you explain in simple terms why the derivative of the ESF (being the cross section of an edge convolved with the PSF) is equal to the LSF - which is the integral of the PSF in one coordinate? I'm sure that it is explained in the book you linked to - but I would rather have it explained in a few sentences if possible than pay for that book just to find the answer. I understand this bit - except that I don't know how we reconstruct the MTF from the LSF - could you give an explanation of that as well? I'm failing to see how adding derived data as new data points offers greater precision for alignment than using the original data points (and using the derivation process as a part of the original alignment process). Think of it this way - if we have a set of measurements that we need to do a linear fit on, will the linear fit change if we add more points by interpolating the existing ones? Or in math terms: if the original set is X and the interpolated set is g(X), and we have the fitting as a function b(X, g(X)), it is still a function of X alone. Adding linearly interpolated data is adding spurious "data", the same as any other interpolation. Sinc interpolation, in fact, is not the same: the Shannon-Nyquist sampling theorem states that a band-limited signal can be perfectly reconstructed if the sampling frequency is at least twice the maximum frequency of the band-limited signal and one uses the sinc function for reconstruction (convolves delta functions at the sample points, scaled by the sample values, with the sinc function): https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem For an example of how linear interpolation actually performs data smoothing rather than reconstruction of the original - see here:
  16. If you don't mind, I have a couple of questions. There are a few things that I don't quite understand and would need clarification on. The LSF is essentially a 1D concept, right? Can you explain it in the context of a perfect circular aperture? More importantly, will a line under convolution with the PSF produce the LSF if we take a cross section of the line blurred with the PSF, perpendicular to the line itself? I might be wrong, but it looks like you are implying that? I can easily show that such an LSF simply can't be equal to the Airy pattern PSF. As we know, the Airy pattern has zeros - places where the PSF is equal to zero. The black vertical line is our line that will serve to produce the LSF once we convolve it with the Airy disk PSF. The yellow line is the line along which we sample the LSF. Point A is the intersection and the black circle marks the edge of the Airy disk. As such, the convolution at point A will indeed produce a 0 (zero) value at the orange point. However, there exists a point B (in fact, infinitely many such points) where the PSF convolution will not produce a 0 value at the orange point. Convolution will sum all these values - the Airy pattern has only non-negative values, and a sum of non-negative numbers where at least one is greater than zero is a value greater than zero. An LSF derived this way will have only positive values and no zeros, while the PSF contains zero values - hence the two can't be equal. What am I missing? Another question: if you are sampling at the critical sampling rate for the optical system (Nyquist theorem) - why do you need to form a "super resolution" image and use linear interpolation (instead of sinc interpolation, which will guarantee reproduction of a properly sampled band-limited original signal)? What is that achieving? Can you explain the rationale behind it?
  17. Since we are now on the related topic of optics tests - there is a test that people can do for little or no money (no money if they already have a planetary / guiding camera, but I think even a DSLR will work). @alex_stars outlined one approach that might be useful. I have not checked the proposed algorithm in the papers linked, but no doubt it is a useful method - at least for camera lenses (not sure if it has the required precision for telescope optics). There is another, similar approach that is based not on shooting a high contrast edge but rather a point source - either a real or an artificial star. It is called the Roddier test and it works by shooting the defocused pattern of a star - both inside and outside of focus - and then using Zernike polynomials and FFT to find the wavefront that corresponds to those patterns. Here I must emphasize that what I described is what I believe is happening - I have not read the actual paper by Roddier that describes the process, but I have used the method to test telescopes and actual usage is not very complicated at all. There is readily available software called WinRoddier for anyone wanting to try it out. My only reservation now is the usage of an artificial star. The software has an option to specify whether an artificial star was used and at what distance - but I don't remember choosing a telescope type. Today, as I was reading on Telescope Optics.net about spherical aberration due to close focus, I realized that it depends on the telescope type and that in some cases the distances needed far exceed what is practically possible - like the 700+ meters required for large and fast Newtonian telescopes. It also means that you can't calculate the spherical aberration due to close focus (to subtract from the measured wavefront) if you only know the distance to the source and not the telescope type. In any case, the test can be performed on a real star if seeing is decent.
  18. Yes, that was both a genuine question and attempted humor. Many entry level telescopes are described as "capable of showing the two main Jovian belts", I insisted that "marble spotting" is an alternative to planetary observation, and then there is the fact that you used a refractor ...
  19. I'm sure that with a DSLR you'll be able to take images at least as good as those. Do remember that due to the long focal length and the need for tracking, you'll need a special kind of processing step - binning your data after stacking. There are a couple of software solutions capable of doing this. I personally use ImageJ - a free software package that runs on Java.
  20. I'm terribly sorry if I've offended you, and I assure you that I have no urge to prove myself a "wise guy". On plenty of occasions I've been proven wrong, and I have no problem whatsoever accepting my mistakes and learning from them. If I strongly support a position, it is not because it is my position, but because it is strongly supported by the facts that I have and by logic. Either might be wrong - I might be operating under wrong assumptions or there might be a flaw in my logic. Again - I'm happy to be corrected on either or both accounts. In fact, I would be more than happy to explain the exact steps used in getting the two graphs above so we can explore them together and try to find the mistake in my method that led to the graphs being different when the expected outcome is the same - the PSF (as I would expect from your explanation). In any case, please accept my apology if I've offended you in any way.
  21. Not sure if my posts are perceived like that - but my personal goal is finding the truth. At this point my major interest is the discrepancy between what science and my personal (mostly imaging related, and admittedly limited) experience suggest, and what other people report. I would really like to know where I'm wrong about this if a 4" APO can perform on par with a 12" - or maybe it can't, maybe my simulations are just off and everything is "shifted" (my 4" sim actually describes a 2" telescope and my 12" sim a 4" telescope, because I got some quantity wrong or something). Maybe there is some observational bias at play here. Also note - we are not talking about the performance of the telescopes under the stars here - we are talking about the best possible scenario, with no atmospheric influence. With seeing in the equation, I'm sure there are conditions in which a 4" APO performs on par with a 12" Newtonian - however, I would expect the image in that case to be worse than both simulated best case images - again a discrepancy.
  22. My claim was not that a certain scientific method was not valid - my claim was that your method was not valid. Did you use the proposed algorithm for deriving the MTF from the edge profile, or did you just use a 1D Fourier transform of the 1D PSF profile that you got as the differential of the 1D profile of the edge? If you did the latter - look at the two PSF profiles that I got above. They are different: one is the real PSF profile used in the calculation and the other is the PSF derived using your method.
  23. Actually - these images will always win over a simulation, as the simulation tries to show what the telescope can deliver visually - without processing - and these images are processed. In particular, there is a part of processing that in a sense negates the MTF - sharpening. The point of sharpening is to do this: Well, that second MTF line did not come out right - but you get the point - sharpening corrects the "sag" of the MTF, all the way up to the cutoff point. For that reason, the image will look sharper after processing. In fact, look at this: left is a 5" apo image at the eyepiece, and right is the same image with wavelet adjustments in Registax. In reality a 5-inch scope will not be able to deliver the right-hand image at the eyepiece - but maybe it could deliver it when imaging.
  24. You only need a Canon T ring. It screws directly into the visual back end of the scope - it has a T2 thread on it. There is no suitable reducer for this combination - and it would not work properly anyway, because the Mak can illuminate only about a 26mm diameter and I'm guessing that the Canon 1200D has an APS-C sized sensor with a 28mm diagonal. You will get some vignetting as is, and you'll need flat fielding. A dew shield is a must on that scope - not because of dew, but because of the way it is baffled internally - not very well, so you need to make sure you have stray light protection. You can convert the AzGTI to an EQ type mount - and I would recommend that. With such a long focal length you will be sampling at a very high sampling rate, so you'll need to bin your data to get anything decent. Although your camera is 6000x4000 px, in reality you'll be getting images that are 1500x1000 or even 1000x666 - if you want to get decent images with this scope. This all means that you need very long exposures - at least a couple of minutes - to minimize the impact of read noise. That in turn means an EQ type mount. The AzGTI can be turned into an EQ mount - you'll need a counterweight and shaft, and an EQ wedge - look it up on SGL, there are a couple of threads discussing how it is done. You'll also want to guide, because the AzGTI is not a very precise mount - and even with guiding, don't expect very sharp results. The AzGTI is a very basic mount capable of only low resolution work (usually wide field - but with your scope this is not going to be wide field by any account). Go to astronomy tools and check the FOV of your scope - use 1300mm of FL if the Skymax 102 is not in the equipment database. It appears that the SkyMax is indeed in the equipment database and so is the Canon 1200D. It also says that the 1200D is not 6000x4000 - but you get the idea: even with a smaller pixel count, you'll still have to bin at least x4.
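The binning step mentioned here (and in post 19) is simple to do in software; a minimal sketch - the function name is mine, not from any particular package:

```python
import numpy as np

def bin_image(img, k):
    """Average non-overlapping k x k pixel blocks (edges trimmed to fit).
    x4 binning turns a 6000x4000 frame into 1500x1000 and improves
    per-pixel SNR by roughly a factor of k."""
    h, w = img.shape
    h, w = h - h % k, w - w % k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

demo = np.arange(24, dtype=float).reshape(4, 6)
binned = bin_image(demo, 2)            # shape (2, 3)
```

Averaging (rather than summing) keeps the pixel values in the original range, which is convenient when the stack is already normalised.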