
MTF of a telescope



@vlaiv and all,

Regarding the math connecting the LSF with the PSF and the MTF, I think this is beyond the general interest of the forum (just a guess), so I list some references one can easily access:

The above three resources are roughly sorted by level of complexity in their explanations, easy ones first.

I sincerely hope these help to shed light on the matter. However, this week I lack the time (lots of work) to answer in a detailed one-on-one manner, sorry for that.


Regarding the interpolation @vlaiv, I forgot to answer.

for_vlaiv.png.085d06f83cdf47f54029dc1c596fffa4.png

This is a zoom-in on the transition of the edge spread towards the white side (values close to 1) of the edge image (white is at the right side of the graph). The blue dots are the original pixel data from the camera, the green stars are a piecewise linear interpolation and the red crosses are a spline interpolation. I did not do a sinc as I don't see the rationale for doing so with this data.

  • As you can see, the green stars exactly follow the blue line (the original data), now with 10x the horizontal resolution. However, no additional "information" has been added.
  • The red line (the spline) is also at 10x horizontal resolution, but that interpolation added spurious information to the data: note how the red curve swings away from the blue line. That is what I mean by spurious information. It is a signal that was never there in the camera image.

If I now align all observations of the edge and sum them together, especially if I do this at sub-pixel resolution, I can add information from pixels in other edge observations to the points (green stars) in the graph above. Importantly, this is data that was present in other observations I made, not data I "created" by interpolation.

So no, not all types of interpolation add "spurious" data.
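To illustrate the point, here is a minimal sketch (with a made-up step standing in for real camera samples, assuming NumPy and SciPy are available) comparing the two interpolations near an edge:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical edge-spread samples: a sharp step sampled at integer pixels,
# values bounded in [0, 1] like normalised camera data
x = np.arange(10, dtype=float)
y = np.where(x < 5, 0.0, 1.0)

xf = np.linspace(0.0, 9.0, 91)        # 10x the horizontal resolution

y_lin = np.interp(xf, x, y)           # piecewise linear: stays within the data
y_spl = CubicSpline(x, y)(xf)         # cubic spline: rings around the step

print(y_lin.min(), y_lin.max())       # bounded by the original data
print(y_spl.min(), y_spl.max())       # swings outside [0, 1]
```

The spline's excursions outside [0, 1] are exactly the "spurious" swings discussed above; the linear interpolant never leaves the range of the measured data.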

Hope this helps.


@alex_stars, I apologize in advance if this post comes across as harsh or disrespectful. This is not aimed at you personally in any way.

I believe that some of the claims are doing a disservice to people actually trying to grasp the subject and draw conclusions from it.

First, you present your findings from the Edge Spread Function method, and I quote:

Quote

First off, this method has plenty of precision for practical purposes and is used for about any imaging device, from camera lenses, satellite based optical systems, telescopes to even MRI machines in medicine....

which assures people that the method you use is credible. You then derive the following result graph:

SMTF_4.png

and I again quote:

Quote

The interesting thing is that the measured MTF is better than the theoretical one from about 0.8 onward to the right.

which is a clear indicator that this method does not produce correct results, since it shows something impossible under physical law as we understand it (your measured MTF somehow has a higher cutoff frequency than is theoretically possible). More importantly, it can lead people to believe that the theory is somehow flawed and that experiment clearly demonstrates this.

After I pointed out that the above method is flawed - by assuming that the Line Spread Function is a line convolved with the PSF, in which case the PSF and LSF can't be the same objects (and hence the MTF can't be the FT of the LSF if it is defined as the FT of the PSF) - you proceeded to explain that the LSF is obtained by different means: by integrating the 2D PSF over one coordinate (which would again make it different from the PSF).

Quote

A line convoluted by a PSF does not produce the LSF. The LSF is an integration over one variable (i.e. coordinate) of the 2D PSF. You can read up on the mathematical relations in Digital Image Processing by Jonathan M. Blackledge (details here). My mistake was to not include the integration part, sorry for that. (I have corrected the original post.)

I then showed that the method you used - taking the differential of the Edge Spread Function - produces the convolution of a line with the PSF, first in simulation and then as an actual mathematical proof. So I was right to assume that the LSF is indeed a line convolved with the PSF, which in turn renders the above method invalid. In response, you quoted literature that clearly supports what I'm saying:

image.png.ce034580ccc5bf12b66bc385cdb1610b.png

(screenshot from the PDF you linked)

From the above I can only conclude that the method you used for testing a telescope does not produce a correct MTF, and thus can't be compared with the theoretical MTF or with an MTF obtained by correct, established optical tests (the wavefront method).


@vlaiv we have discussed this before, but the so-called theoretical cutoff is not, in my view, always valid. The psf of an obstructed aperture is thinner than that of an unobstructed aperture, but with more energy in the first ring. However, the narrow psf can't be correctly represented in the mtf if you constrain it to zero at the cutoff of the unobstructed aperture.

I just note this as our long discussion before did not lead to a conclusion. 

Regards Andrew 


It turns out that we can easily settle this just by examining the Wikipedia article on the optical transfer function:

https://en.wikipedia.org/wiki/Optical_transfer_function#Using_extended_test_objects_for_spatially_invariant_optics

Here are some interesting quotes:

Quote

When the aberrations can be assumed to be spatially invariant, alternative patterns can be used to determine the optical transfer function such as lines and edges. The corresponding transfer functions are referred to as the line-spread function and the edge-spread function, respectively. Such extended objects illuminate more pixels in the image, and can improve the measurement accuracy due to the larger signal-to-noise ratio. The optical transfer function is in this case calculated as the two-dimensional discrete Fourier transform of the image and divided by that of the extended object. Typically either a line or a black-white edge is used.

This is true for any image: if we take an image of Mickey Mouse through a telescope, take the Fourier transform of it, and divide that by the Fourier transform of the original image, we get the optical transfer function (the FT of the PSF). That is a simple consequence of the convolution theorem, which states that the Fourier transform of a convolution of two functions equals the product of the Fourier transforms of those functions; since this holds for all images, any image will do.
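A quick numerical check of the convolution theorem (with a random stand-in "object" and a small box blur standing in for a PSF, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
obj = rng.random((64, 64))              # any object image will do
psf = np.zeros((64, 64))
psf[:4, :4] = 1.0 / 16.0                # a small box blur standing in for a PSF

# blur the object with the PSF (circular convolution via the FFT)
img = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

# convolution theorem: FT(image) / FT(object) recovers FT(PSF), the OTF
otf = np.fft.fft2(img) / np.fft.fft2(obj)
err = np.max(np.abs(otf - np.fft.fft2(psf)))
print(err)                              # tiny: the transfer function is recovered
```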

The next quote on the wiki is an important one:

Quote

The two-dimensional Fourier transform of a line through the origin, is a line orthogonal to it and through the origin. The divisor is thus zero for all but a single dimension, by consequence, the optical transfer function can only be determined for a single dimension using a single line-spread function (LSF). If necessary, the two-dimensional optical transfer function can be determined by repeating the measurement with lines at various angles.

The line spread function can be found using two different methods. It can be found directly from an ideal line approximation provided by a slit test target or it can be derived from the edge spread function, discussed in the next sub section.

The first sentence is very important: the FT of a line through the origin is a line orthogonal to it.

This means that if we use a line as the base image that we shoot through the telescope, we already know the FT of that image: it is again just a line.

And the third ingredient is in the following quote (something that we've already established):

image.png.b3dff692b68e816b7d6586de6f665041.png

So here is how the edge spread method test should be conducted:

Record a high-contrast edge. Differentiate that image to get the LSF. Make sure the center of the LSF is at the center of the image, then take the fast Fourier transform of that differentiated image and measure along the line perpendicular to the original LSF.

I think that I owe @alex_stars an apology - the method is indeed almost as he described, except: don't try to fit functions to the data and don't convert to the 1D domain until you are done. The FFT needs to be done in the 2D domain, and on the "digital derivative" of the ESF, which is the LSF.
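A minimal sketch of this protocol on synthetic data (a Gaussian stand-in LSF, not a real telescope PSF; sizes are arbitrary, assuming NumPy):

```python
import numpy as np

n = 128
x = np.arange(n) - n // 2

# stand-in LSF (Gaussian, sigma = 2 px); a real one comes from the telescope
lsf_true = np.exp(-x**2 / (2 * 2.0**2))
esf_row = np.cumsum(lsf_true) / lsf_true.sum()   # ESF is the integral of the LSF
img = np.tile(esf_row, (n, 1))                   # image of a blurred vertical edge

# "digital derivative" across the edge turns the ESF image into an LSF image
lsf_img = np.diff(img, axis=1)

# 2D FFT: the energy lies on the line perpendicular to the original vertical line
mtf2d = np.abs(np.fft.fft2(lsf_img))

# MTF profile measured along that perpendicular line, normalised at DC
mtf = mtf2d[0, : (n - 1) // 2] / mtf2d[0, 0]
print(mtf[:5])
```

With this synthetic input, all rows of the 2D spectrum except the one perpendicular to the edge are empty, and the measured profile falls off monotonically, as an MTF should.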

 

Edited by vlaiv

20 minutes ago, andrew s said:

@vlaiv we have discussed this before, but the so-called theoretical cutoff is not, in my view, always valid. The psf of an obstructed aperture is thinner than that of an unobstructed aperture, but with more energy in the first ring. However, the narrow psf can't be correctly represented in the mtf if you constrain it to zero at the cutoff of the unobstructed aperture.

I just note this as our long discussion before did not lead to a conclusion. 

Regards Andrew 

With regard to that, I managed in the meantime to track down the relevant math.

Look up the 2D Fourier transform in polar coordinates, the Fourier transform of the rectangular/box function, and the Fourier transform of the sinc² function.

In a nutshell:

1D case:

rectangle -FT-> sinc

sinc² -FT-> triangle function with a clear cutoff

2D case:

When switching to polar coordinates you end up multiplying by a Bessel function

Screenshot_3.jpg.cc57155294198572f4dbfc77296cf5ce.jpg

and the corresponding 2D case is:

circular aperture (a rectangle in radius, in polar coordinates) -FT (power)-> Airy pattern

Airy pattern -FT-> MTF with a clear cutoff (which resembles the triangle function)

Yes, there is a clear cutoff frequency for a clear aperture and it is well defined: all values beyond it are really zero.
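The 1D case can be checked numerically in a few lines (a sketch, assuming NumPy; the rectangle width is an arbitrary choice):

```python
import numpy as np

n = 1024
x = np.arange(n) - n // 2
rect = (np.abs(x) <= 32).astype(float)        # 1D "aperture" (rectangle)

# FT of the rectangle gives a sinc-shaped field; its square is the 1D "PSF"
field = np.fft.fft(np.fft.ifftshift(rect))
psf1d = np.abs(field)**2                      # sinc^2

# FT of sinc^2 is the triangle function (the autocorrelation of the rectangle),
# identically zero beyond twice the rectangle's half-width: a hard cutoff
tri = np.real(np.fft.fftshift(np.fft.ifft(psf1d)))
tri /= tri.max()
outside = np.abs(x) > 2 * 32 + 1
print(tri[outside].max())                     # effectively zero
```

Beyond the cutoff the values are zero to machine precision, not merely small; inside, the profile is the exact triangle.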


Just as an addition to the above edge spread function method, here is a simulation of it for a perfect aperture:

image.png.3aea080f5e457eafc48082f2594d759e.png

Here is an edge that has been convolved with the PSF of a perfect aperture.

After that we differentiate to get the LSF:

image.png.31844f924403b13c7374b897e8973d08.png

and we take the FFT of the LSF:

image.png.ca46c1a14e9c6c886d71f2633393bea6.png

Then we measure the resulting FFT of the LSF to get the MTF profile:

image.png.6f2bf9415bd072795111f0848c2b558f.png

As a comparison, here is the MTF derived directly from the PSF:

image.png.0686a85ab4ff0daa7c73df966b6d5925.png


41 minutes ago, vlaiv said:

the method is indeed almost as he described, except: don't try to fit functions to the data and don't convert to the 1D domain until you are done. The FFT needs to be done in the 2D domain, and on the "digital derivative" of the ESF, which is the LSF.

Well, the method I described was not invented by me; it is built upon a large body of published scientific work, and it obviously works: many scientists use it to understand very complex optical systems. For example, the documentation of the Hubble Space Telescope lists measured Line Spread Functions (link here). So, just to make this crystal clear: this method works as described, to a very high degree of resolution.

Regarding the FFTs, you can do them in any number of dimensions; they are just mathematics. You can do everything in 1D, as I do, or step up to 2D and process there. The one thing you can't do is mix dimensions carelessly.

I am not sure if you now agree with me @vlaiv or not, but I have to cut this short, even though I admire the energy you put into experimentation. I wish I had that amount of spare time for my hobby. 👍

You did find a misconception in my previous post, which is probably due to the fact that I write these summaries alongside work, most often hastily in my breaks (as now).

I initially wrote that a line convolved with a PSF does not give the LSF. This is somewhat true but also misleading. The correct statement would be:

Quote

An infinitely long, infinitely narrow line convolved with a PSF does produce the LSF.

And I have corrected my previous post.


1 hour ago, vlaiv said:

With regard to that, I managed in the meantime to track down the relevant math.

Look up the 2D Fourier transform in polar coordinates, the Fourier transform of the rectangular/box function, and the Fourier transform of the sinc² function.

In a nutshell:

1D case:

rectangle -FT-> sinc

sinc² -FT-> triangle function with a clear cutoff

2D case:

When switching to polar coordinates you end up multiplying by a Bessel function

Screenshot_3.jpg.cc57155294198572f4dbfc77296cf5ce.jpg

and the corresponding 2D case is:

circular aperture (a rectangle in radius, in polar coordinates) -FT (power)-> Airy pattern

Airy pattern -FT-> MTF with a clear cutoff (which resembles the triangle function)

Yes, there is a clear cutoff frequency for a clear aperture and it is well defined: all values beyond it are really zero.

I don't disagree with this. My issue is when it is assumed that this holds for an obstructed aperture, without proof. I looked but could not find a justification either way.

Regards Andrew 

PS, to elaborate slightly: if the obstructed psf is narrower than the unobstructed one, it must contain higher frequencies, by a simple application of the scaling property of the Fourier transform.

Edited by andrew s

1 hour ago, vlaiv said:

which is a clear indicator that this method does not produce correct results, since it shows something impossible under physical law as we understand it

Regarding this statement: I did not show something that is impossible by the laws of physics. If it were impossible, I could not have shown it. Physics rules everything, no escape. I can state that, being a physicist myself.

What you see in my graph of the MTFs is the effect of processing the data with super-resolution. This is possible because I observe the edge many times, with small variations in each measurement. The same process is used when generating super-resolution images that suddenly show the license plates of cars driving past a camera, even though each single frame of the video is so blurry you can't read the plate. The combination of many observations makes the difference. You use this too when you lucky-image a planet, and I am sure you did not think you were breaking the laws of physics by doing so (or that your computer would explode because it needed to 😄).

What you see in the MTF graph is some extra "information" at very high spatial frequencies, i.e. very small details in the image. What could those be? Hmm, I'd say it's the noise in the processed data. That noise is there, and it needs to be removed if you want to realistically measure the high-frequency part of the MTF. I did not do that, as I know what it is and don't care about it. However, one could.

BTW, one can make this method a lot better than I did for this short demonstration, but the concept has been proven and it works. When I get my apo I will probably redo this and see if I can compare. Then I might find time to build a noise filter into my software... we'll see.


5 hours ago, alex_stars said:

@vlaiv and all,

Regarding the math connecting the LSF with the PSF and the MTF, I think this is beyond the general interest of the forum (just a guess) ...

... perhaps, but don't let that hold you and the others back, please.

I've always wondered what de-convolution (and convolution, obviously) was as a sharpening software tool option; now I have a much better idea.

Looking up convolution on Wikipedia, for example, is initially not very enlightening; it just gives proofs and definitions, leading one to think "yes, but so what".

This thread puts it all into superb context, so very useful. Thanks!

Magnus


27 minutes ago, alex_stars said:

I am not sure if you now agree with me @vlaiv or not, but I have to cut this short, even though I admire the energy you put into experimentation.

There is no question that this method works - there is a very elegant and simple proof that it does - so I'm 100% with you on that: the edge spread function method works if it is applied correctly.

The proof is simple and relies on three facts: the convolution theorem; the fact that the differential of an edge, taken in the direction perpendicular to that edge, is a line; and the fact that the 2D Fourier transform of a line through the origin is a line perpendicular to it.

The result of the method is the MTF along a single line - and yes, that is why we need this measurement at many orientations in order to reconstruct an approximation to the full MTF.

There are a few things where we still have different opinions.

The first is "super resolution" and the need for linear interpolation of the samples versus sinc interpolation. Here I can't do much except point you to the proof of the Shannon-Nyquist sampling theorem and the fact that sinc interpolation is a perfect restoration of a band-limited, properly sampled function. There is a mathematical proof of this - I'm not sure what there is to discuss.

Linear interpolation is not, and it introduces error into the sampled function, as it acts as a low-pass filter with attenuation (sinc is a perfect low-pass filter without attenuation: it just cuts off the higher-order harmonic frequencies that arise from the pulse train; the FT of sinc is the box/rectangle function).
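A small sketch of this attenuation (assuming NumPy; the test frequency and grid are arbitrary), comparing linear and truncated sinc interpolation of a band-limited signal against its exact values:

```python
import numpy as np

n = 256
t = np.arange(n)
f = 0.35                                   # cycles/sample, below Nyquist (0.5)
sig = np.sin(2 * np.pi * f * t)            # band-limited, properly sampled

tf = np.arange(0.0, n - 1, 0.1)            # 10x denser grid
exact = np.sin(2 * np.pi * f * tf)         # the underlying continuous signal

lin = np.interp(tf, t, sig)                                  # linear interpolation
snc = np.array([np.dot(sig, np.sinc(u - t)) for u in tf])    # sinc interpolation

# compare away from the array edges (sinc reconstruction is only exact for an
# infinite sample train; truncation affects the borders)
mid = (tf > 32) & (tf < n - 33)
err_lin = np.abs(lin - exact)[mid].max()
err_snc = np.abs(snc - exact)[mid].max()
print(err_lin, err_snc)                    # linear error is far larger
```

Close to Nyquist the chords of the linear interpolant cut well below the true peaks, while sinc interpolation tracks the signal to within the truncation error.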

The second is the fact that in your experimental protocol you use a 1D Fourier transform. The result is not the same as doing a 2D Fourier transform on the whole image. For example, note the post above where I point Andrew to the difference between the 1D case and the 2D case for circularly symmetric functions: although a circular aperture and a rectangle are "similar", in the sense that the cross section of a circular aperture is a rectangle in 1D, the corresponding Fourier transforms no longer share this. The cross section of the MTF is not the triangle function (there is a little sag).

I would really like to see your experiment done with the following protocol:

- shoot a straight edge in a vertical position (make sure it is as straight as possible and indeed as close to vertical as possible); use the critical sampling rate for your pixel size (which you already have)

- take the differential of the image using a simple kernel

- do an FFT of the resulting image and then measure the resulting line profile

No need for super-resolution / interpolation and curve fitting - this protocol is much simpler. Yes, note the pixel scale so you can properly derive the cutoff frequency when plotting the measured MTF against the theoretical ones.

 


35 minutes ago, andrew s said:

I don't disagree with this. My issue is when it is assumed that this holds for an obstructed aperture, without proof. I looked but could not find a justification either way.

Regards Andrew 

I'm just baffled by what could possibly be puzzling about the obstructed aperture.

From physics we know that the PSF is the power spectrum of the aperture's FT, regardless of the aperture's shape: it works for circular, square or any other aperture type (even apodized, where we vary the intensity, and aberrated, where we vary the wavefront phase).

The PSF acts on the image via convolution, so the frequency spectrum of the resulting image is the object's spectrum multiplied by the transfer function.

If you don't question any of the above for a clear aperture, and you don't question the math, why do you question it for an obstructed aperture?


We are repeating the debate we had before.

May I try a different approach. 

The principle of superposition lets us model an obstructed circular aperture by taking the psf of the unobstructed aperture and subtracting that of the obstruction.

This I contend leads to a narrower psf than the unobstructed one. Do you agree with this?

If you accept the psf is narrower, do you accept it must have higher frequency components and if not why not?

(The reason I question the obstructed case is that the maths employed has the cutoff built in from analysing the unobstructed case, without proof that this is OK. I can't find the maths to provide the justification.)

Can you answer the questions I pose to allow me to understand your position better.

Regards Andrew 


55 minutes ago, vlaiv said:

The first is "super resolution" and the need for linear interpolation of the samples versus sinc interpolation. Here I can't do much except point you to the proof of the Shannon-Nyquist sampling theorem and the fact that sinc interpolation is a perfect restoration of a band-limited, properly sampled function. There is a mathematical proof of this - I'm not sure what there is to discuss.

I am well aware of the Shannon-Nyquist theorem. However, if you compare my edge spread data with a sinc function:

SMTF_2.png.8ccb190690b78d808aa21c52915ae46b.pngindex.jpg.064e2a1dcf5941043f70642c23ff13d4.jpg

Why would I want to fit the sinc function to my edge spread function? It just does not make sense, Nyquist or not... Maybe a sigmoid function, but you did not like that, remember? So I showed you a purely data-driven approach...

Super-resolution, and its intermediate step of piecewise linear interpolation, is just a way to harness the data of many observations and make more of them than of a single one. And maybe you do not understand what I have explained: I do not curve-fit anything in my method, so it is purely data-based. All I do is prepare a 10x larger data array by piecewise linear interpolation of the initial data; there is no curve-fitting component to it. And I then fill it with data...

55 minutes ago, vlaiv said:

I would really like to see your experiment done with the following protocol:

- shoot a straight edge in a vertical position (make sure it is as straight as possible and indeed as close to vertical as possible); use the critical sampling rate for your pixel size (which you already have)

- take the differential of the image using a simple kernel

- do an FFT of the resulting image and then measure the resulting line profile

No need for super-resolution / interpolation and curve fitting - this protocol is much simpler. Yes, note the pixel scale so you can properly derive the cutoff frequency when plotting the measured MTF against the theoretical ones.

Unfortunately, this rather tells me that you have not yet grasped the essence of the edge spread function approach. You propose the exact opposite of what should be done.

  • You should exactly NOT align the sensor pixel array with the edge target; that is the worst sampling you can do given the limited resolution of your camera (and all cameras are limited).
  • I do just simple finite differencing - there is hardly a simpler "kernel" - and may I remind you that you suggested it yourself.
  • I do the FFT of the resulting image, which happens to be 1D in the case of an edge.
  • Your suggested protocol is indeed simpler, but it is the crudest approach there is to this measurement technique... So no, I will not do that; you are, however, free to do what you feel like...

Another paper reference to help understand the rationale behind this method:

"Comparison of MTF measurements using edge method: towards reference data set" (paper available here)

image_oversampe01.png.d27fe128537bf345ae7fcf092d3258ff.png

This would be the classical oversampling idea, and the plain reason why you would not want to align your edge vertically, parallel to the pixel array of your camera: you would lose so much data you could otherwise sample.

image_oversampe02.png.91b42cadc4f6b26e08b602357e9232a8.png

This is what I base my code on - the "Airbus method", as I leisurely call it - where you align multiple observations and thereby utilize more data.

Ah yes, and I have to clarify: the Airbus method does curve fitting, but I do not. The reason? When I apply super-resolution I harness so much data that I can reconstruct the LSF directly from the data, with an acceptable amount of noise...
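The projection-and-binning idea behind this can be sketched roughly as follows (a hypothetical logistic edge profile stands in for real data; the slope and 0.1 px bin width are arbitrary choices, assuming NumPy):

```python
import numpy as np

n, slope = 64, 0.1                          # edge advances 0.1 px per row
x = np.arange(n, dtype=float)

def esf(u):
    # hypothetical "true" edge profile (logistic), standing in for real data
    return 1.0 / (1.0 + np.exp(-2.0 * u))

# each row of the slanted-edge image sees the edge at a different sub-pixel
# position, so every row is an independent observation of the same ESF
rows = np.stack([esf(x - (n / 2 + slope * r)) for r in range(n)])

# projection step: express every pixel by its distance from the edge, then
# bin at 0.1 px to build a 10x oversampled ESF from single-pixel data
dist = np.concatenate([x - (n / 2 + slope * r) for r in range(n)])
vals = rows.ravel()
bins = np.round(dist * 10.0) / 10.0
centers = np.unique(bins)
esf_super = np.array([vals[bins == b].mean() for b in centers])
print(len(centers), "bins vs", n, "pixels per row")
```

The oversampled ESF has roughly ten bins per original pixel, built purely from measured values, which is the sense in which the method adds data rather than inventing it.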

Hmm, maybe this helps.

 

Edited by alex_stars

9 minutes ago, andrew s said:

We are repeating the debate we had before.

May I try a different approach. 

The principle of superposition lets us model an obstructed circular aperture by taking the psf of the unobstructed aperture and subtracting that of the obstruction.

This I contend leads to a narrower psf than the unobstructed one. Do you agree with this?

If you accept the psf is narrower, do you accept it must have higher frequency components and if not why not?

(The reason I question the obstructed case is that the maths employed has the cutoff built in from analysing the unobstructed case, without proof that this is OK. I can't find the maths to provide the justification.)

Can you answer the questions I pose to allow me to understand your position better.

Regards Andrew 

I think that we can apply the same logic and, without assumptions but by means of mathematical proof, show that an obstructed aperture can't have higher frequencies than an unobstructed one.

An obstructed aperture is actually the difference of one larger unobstructed aperture and one smaller unobstructed aperture.

From the linearity of the Fourier transform, it follows that the Fourier transform of the obstructed aperture is the difference of the Fourier transforms of the two unobstructed apertures.

Since the smaller aperture has a lower cutoff frequency than the larger one, their difference simply can't produce frequencies higher than the cutoff frequency of the larger aperture (neither of the two apertures produces high enough frequencies for this to happen when we subtract: all higher frequencies are zero, and zero - zero = zero).
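This argument is easy to check numerically (a sketch, assuming NumPy; the pupil radii are arbitrary): the MTF, being the pupil autocorrelation, vanishes beyond the clear-aperture cutoff for both pupils:

```python
import numpy as np

n = 256
y, x = np.mgrid[:n, :n] - n // 2
rho = np.sqrt(x**2 + y**2)

clear = (rho <= 16).astype(float)           # unobstructed pupil
obstructed = clear * (rho > 5)              # same pupil with a central obstruction

def mtf(pupil):
    psf = np.abs(np.fft.fft2(pupil))**2     # PSF = power spectrum of the pupil
    m = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
    return m / m.max()

beyond = rho > 2 * 16 + 1                   # past the clear-aperture cutoff
print(mtf(clear)[beyond].max(), mtf(obstructed)[beyond].max())  # both ~0
```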

Why did I emphasize the word "assumption"?

14 minutes ago, andrew s said:

This I contend leads to a narrower pdf than the unobstructed one. Do you agree with this?

You'll have to provide a proof of that, and a definition of "narrower".


5 minutes ago, alex_stars said:

Why would I want to fit the sinc function to my edge spread function? It just does not make sense, Nyquist or not... Maybe a sigmoid function, but you did not like that, remember? So I showed you a purely data-driven approach...

I did not suggest you fit a sinc function. I simply said that if you want to interpolate your data to an accurate curve, you should use sinc interpolation rather than linear interpolation, as sinc provides perfect reconstruction for a band-limited signal, whereas linear interpolation attenuates the high frequencies of the signal and thus changes it.

I also said: don't use interpolation of any kind - it is not needed. In fact, look at the previous post by Andrew, and remember a bit of quantum mechanics - "the FT of a narrow function is broad, and vice versa" - you don't need to interpolate your data: just do an FFT and the cross section of the MTF will be broad enough.

10 minutes ago, alex_stars said:

Unfortunately, this rather tells me that you have not yet grasped the essence of the edge spread function approach. You propose the exact opposite of what should be done.

I ran a simulation showing a perfect match between the two MTF methods in the post above - how did I not grasp the method?


9 minutes ago, vlaiv said:

 

You'll have to provide a proof of that, and a definition of "narrower".

See the diagram I posted and the link. I think you have to accept that it is narrower and that its first zero is inside that of the unobstructed aperture?

Narrower means that, if normalised, the obstructed central peak is inside that of the unobstructed one.

I want to know if you accept this before taking the next step.

Regards Andrew 


3 minutes ago, andrew s said:

See the diagram I posted and the link. I think you have to accept that it is narrower and that its first zero is inside that of the unobstructed aperture?

Narrower means that, if normalised, the obstructed central peak is inside that of the unobstructed one.

I want to know if you accept this before taking the next step.

Regards Andrew 

I understand now - you are talking about the scaling property of the Fourier transform:

image.png.eb126bb7dac81c8c1db48f155ec2b52b.png
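For reference, the standard scaling identity (presumably what the screenshot above shows) reads:

```latex
% Fourier scaling theorem: compressing a function stretches its spectrum.
% For $a \neq 0$, if $f(t)$ has transform $F(\omega)$, then
\mathcal{F}\{\, f(at) \,\}(\omega) \;=\; \frac{1}{|a|}\, F\!\left(\frac{\omega}{a}\right)
```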

however, you need to understand that this holds only as written above: a time-squeezed (or, in this instance, spatially squeezed) function will indeed have a stretched FT, and we can clearly see this effect for a perfect aperture - a perfect aperture that is twice as large will have twice as high a cutoff frequency.

However, this property does not hold for functions that are simply different.

The PSF of an obstructed aperture is not just a scaled PSF of an unobstructed aperture: the relative sizes of the peaks also change (the Strehl ratio changes, as does the encircled energy - the ratio of energy in the disk versus the rings), and the above no longer holds. We have two functions, and regardless of the fact that one looks to us like a "squeezed" or "narrower" version of the other, it is not a rescaling in the sense in which the scaling property applies: these are in fact two different functions, not a single spatially scaled function. We can't apply the scaling property to two different functions.


This initially started out as a seemingly interesting thread, but I am slowly losing the will to live. Perhaps any interested parties could write a paper on the subject and submit it to a suitable journal so that it can be peer reviewed there rather than on this forum.


17 minutes ago, vlaiv said:

I also said: don't use interpolation of any kind - it is not needed. In fact, look at the previous post by Andrew, and remember a bit of quantum mechanics - "the FT of a narrow function is broad, and vice versa" - you don't need to interpolate your data: just do an FFT and the cross section of the MTF will be broad enough.

I love the fact that you remind me of quantum mechanics 😀. Given your comment, you are obviously talking about something completely different. Look, once more, and for the last time: these are my steps:

  1. Having taken an image of an edge target, I align and stack the line data (it's all those horizontal lines in ONE 2D image of the edge target; EACH line is an observation).
  2. During this alignment and stacking I may or may not deploy super-resolution (my choice). I can also interpolate the coarse data to a finer grid in this step, or not.
  3. Now I have the ESF.
  4. Now I differentiate to get the LSF.
  5. Now I do an FFT to get the MTF.
  6. And yes, either way the MTF will be broad enough; that has never been the issue (except for people who do not know how to do FFTs properly and struggle with the sampling limit).

You see, my interpolation happens in step (2) above, just to reconstruct a better representation of the ESF. You suggest that I just do an FFT and have the MTF. If I did that, I would be doing an FFT of the ESF, which does not make sense at all: by doing so I would skip step 4 and the reconstruction of the LSF. We are obviously talking about different things.

25 minutes ago, vlaiv said:

I ran a simulation showing a perfect match between the two MTF methods in the post above - how did I not grasp the method?

It is very good that you do simulations on your computer. However, I doubt that your simulations can reconstruct the MTF from an edge target where the edge spread is represented over just a few pixels (say 10, as in my first example).

I stated that you did not grasp the method because you suggested taking away all the refined steps that make such a method high resolution. As it was you who initially doubted the quality of the method, I am still surprised that you suggest that. Working with real-life data, one needs an edge target that is not aligned with the sensor, and in many cases one needs super-resolution to recover the edge function from the data... simple as that.

2 hours ago, vlaiv said:

Here is an edge that has been convolved with the PSF of a perfect aperture.

From your edge spread simulation, I would love to see the horizontal cross section of that "data". It looks to me as if this edge is spread over many pixels. Maybe you can show us that if you find the time. I'm gonna have more


6 minutes ago, Seelive said:

Perhaps any interested parties could write a paper on the subject and submit it to a suitable journal so that it can be peer reviewed there rather than on this forum

I fully agree, as my work has been and is based on already published work. I did not expect such a detour... However, what can you do...

