
MTF of a telescope



1 hour ago, Carl Au said:

I am definitely confused now, but it is starting to feel like the point of this thread is telescope bashing (refractors in particular) and rubbishing people's gear. 

I stopped using cloudy nights because of this type of carry on...pity.

Don't stop using SGL Carl, cos you and I know that you can't beat a 4" refractor. 😂😈



1 hour ago, alex_stars said:

For me this discussion has gone too far; you seem to have the urge to prove yourself to be the "wise guy" on the forum and seem to take pleasure in criticizing the valid contributions of others... which is not very scientific....

Not sure why you think this - challenging positions is part of what discussion is, IMHO. Btw, Vlaiv and I had a good one with respect to aperture and light throughput, or "etendue". We took opposing positions on it, and I learned a lot from his side of it.


18 minutes ago, jetstream said:

If you mean those 2 belts in the marble then yes.

Yes, that was both a genuine question and an attempt at humor.

Many entry-level telescopes are described as "capable of showing the two main Jovian belts", I insisted that "marble spotting" is an alternative to planetary observation, and the fact that you used a refractor ... :D

 

Edited by vlaiv

3 minutes ago, vlaiv said:

Yes, that was both a genuine question and an attempt at humor.

Many entry-level telescopes are described as "capable of showing the two main Jovian belts", I insisted that "marble spotting" is an alternative to planetary observation, and the fact that you used a refractor ... :D

 

:grin:

Yes I started with the little peashooter refractors- the real scopes will be out soon :evil:

In reality this 90mm really surprised me, even though it takes the 2.4mm HR on the moon with ease.


1 hour ago, vlaiv said:

In any case, please accept my apology if I've offended you in any way.

Hi @vlaiv,

No need to apologize, I think I misunderstood your eagerness to find the truth as offence. Apologies from my side are in order!

It is good, however, that we have cleared up that misunderstanding, so we can proceed in finding an answer together.

I am currently working further on my algorithm and will present more as results come in.

Edited by alex_stars

It may be a lot more enjoyable to look through scopes rather than conduct forensic investigations into the way the dilithium crystals in the perpendicular cohesion of their tubular viscosity affects the resonating gamma waves 

entering the vertical objective end. Just making light of the situation, no offence to anyone of course.

Talk about a run-on sentence, I have no idea where to insert a period/comma in there.

Edited by Sunshine

3 minutes ago, jetstream said:

:grin:

Yes I started with the little peashooter refractors- the real scopes will be out soon :evil:

In reality this 90mm really surprised me, even though it takes the 2.4mm HR on the moon with ease.

Is an HR a good EP to use? I cannot obtain focus with a Vixen SLV 2.5 on my SD103S, but I can get focus with an HR 1.6 on the Moon (the SLV will have a Strehl around 0.9, versus 0.999 for the HR at centre field). Can you go down-market and see what happens using another EP in the spring time? 😃

Edited by Deadlake

2 minutes ago, Deadlake said:

Can you go down market and see what happens using another EP in the spring time? 

I should be able to test after this coming week. The extreme cold will twist up the lens cells on the 90mm SV and 120ED but not the TSA120. The first 2 will show astig sometimes under these conditions. It varies by cooldown - if I put them away in the seacan after a successful cooldown no astig will emerge; it's a crapshoot.

"Down market"  😀

I don't believe in this term! I have inexpensive Circle Ts that do very, very well, as does my 10 BCO. I have not reached the limit of barlowing this EP with the VIP - and I'll show you my hillbilly zoom I made with it lol!

Not saying you're saying this, but I don't believe that things have to be expensive to be good.


9 minutes ago, jetstream said:

I should be able to test after this coming week. The extreme cold will twist up the lens cells on the 90mm SV and 120ED but not the TSA120. The first 2 will show astig sometimes under these conditions. It varies by cooldown - if I put them away in the seacan after a successful cooldown no astig will emerge; it's a crapshoot.

"Down market"  😀

I don't believe in this term! I have inexpensive Circle Ts that do very, very well, as does my 10 BCO. I have not reached the limit of barlowing this EP with the VIP - and I'll show you my hillbilly zoom I made with it lol!

Not saying you're saying this, but I don't believe that things have to be expensive to be good.

A less performant EP then; agreed, value is important and relative. The Vixen HRs are good value compared with the Takahashi TOEs, which are a third more in price. 🙂  The SLVs are good value too, but have their limitations; in a larger aperture scope the 2.5 mm would work just fine.

Edited by Deadlake

1 minute ago, Deadlake said:

A less performant EP then; agreed, value is important and relative. The Vixen HRs are good value compared with the Takahashi TOEs, which are a third more in price. 🙂 

I used my best eyepieces for the scope test so that the results reflect the scope performance and not the eyepiece. I will rig up the ultra sharp, zero scatter Zeiss 25.1-6.7 zoom too...it loves the VIP and a zoom will show the "boundary" of the limits of the scope well.

Testing eyepieces could be a whole "nuther" matter. I like the idea of that camera rig to test the optics, I wonder what it costs.


It's easy to be misunderstood or misread even in the best of circumstances, especially if English is not your first language. I remember my first trip to Germany on business, where the works manager asked "Why are you here?". To English sensitivity it seemed abrupt, but it was down to a limited vocabulary. In fact he could not have been more helpful.

I now try not to assume intent or infer motivation. SGL is a good place for discussion.

Regards  Andrew 


1 minute ago, andrew s said:

It's easy to be misunderstood or misread even in the best of circumstances, especially if English is not your first language. I remember my first trip to Germany on business, where the works manager asked "Why are you here?". To English sensitivity it seemed abrupt, but it was down to a limited vocabulary. In fact he could not have been more helpful.

I now try not to assume intent or infer motivation. SGL is a good place for discussion.

Regards  Andrew 

How much is that optics testing camera set up Andrew?


1 minute ago, jetstream said:

How much is that optics testing camera set up Andrew?

I don't know, but Thorlabs sell general ones for £3k+. I have asked for a quote and will report back if I get an answer.

Regards Andrew 


12 minutes ago, andrew s said:

I don't know, but Thorlabs sell general ones for £3k+. I have asked for a quote and will report back if I get an answer.

Regards Andrew 

It's about 200-300 EUR for the optics test to measure the wavefront. Maybe we can find the details on another site like CN?


6 minutes ago, Deadlake said:

It's about 200-300 EUR for the optics test to measure the wavefront. Maybe we can find the details on another site like CN?

Sorry, Wellenform is around 500 EUR, but that's a full strip, rebuild and test for a triplet. I suspect the test alone is less, around 200-300 EUR, if memory serves...


Since we are now on the related topic of optics tests - there is a test that people can do for little or no money (no money if they already have a planetary/guiding camera, but I think even a DSLR will work).

@alex_stars outlined one approach that might be useful. I have not checked the proposed algorithm in the linked papers, but it is no doubt a useful method - at least for camera lenses (I'm not sure it has the required precision for telescope optics). There is another, similar approach that is based not on shooting a high-contrast edge but rather a point source - either a real or an artificial star.

It is called the Roddier test and it works by shooting the defocused pattern of a star - both inside and outside focus - and then using Zernike polynomials and an FFT to find the wavefront that corresponds to those patterns. Here I must emphasize that this is what I believe is happening - I have not read the actual paper by Roddier that describes the process, but I did use the method to test telescopes and the actual usage is not complicated at all.

There is readily available software called WinRoddier for anyone wanting to try it out.

My only reservation now is the use of an artificial star. The software has an option to specify whether an artificial star was used and at what distance - but I don't remember choosing a telescope type. Today, as I was reading on Telescope Optics.net about spherical aberration due to close focus, I realized that it depends on the telescope type and that in some cases the distances needed far exceed what is practically possible - like the 700+ metres required for large and fast Newtonian telescopes. It also means that you can't calculate the spherical aberration due to close focus, to subtract it from the measured wavefront, if you only know the distance to the source and not the telescope type.

In any case, the test can be performed on a real star if seeing is decent.


So, here is my second attempt to explain what can be done with an Edge Spread Function image. First off, this method has plenty of precision for practical purposes and is used for just about any imaging device, from camera lenses, satellite-based optical systems and telescopes to even MRI machines in medicine....

I will go through step by step and hope to explain each step in sufficient detail. I have implemented these steps in a Python program to have full control of the processing.

First you start with an edge target and take an image at the best possible focus, over the shortest practical ground distance, to limit atmospheric disturbance. I took mine 170 m away across a meadow this morning at 7:50 am.

[Attached image: edge01.png - the imaged edge target]

It's actually important that the edge is not completely straight. My setup works as is, but if you have a straight edge, make sure you tilt it with respect to the pixel array of your camera, to make use of super-resolution later on. If you just process a single pixel line (a horizontal line in the image), then you need to curve-fit the data with a known edge spread function (see my post before), but I have been told that was somehow not convincing.

To make use of the fact that I have imaged the same edge over 600 times in the image above (all the horizontal lines in the image), one can deploy super-resolution processing. The algorithm, initially conceived by Airbus if I remember correctly, does the following (a rough code sketch follows the list). For each horizontal line in the image:

  • Upscale the resolution by a chosen factor; I use 10x. This means that between each pair of actual pixels, 10 new samples are placed and the data is interpolated linearly.
  • Fit a sigmoid curve through the data. I use least squares as the measure of the fit.
  • The fitted sigmoid gives you an estimate of the offset of the current line relative to a chosen alignment coordinate in the image, at 10x sub-pixel resolution.
  • Shift the line data so it is aligned with the alignment target.

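Here is a rough sketch of those steps in Python (not my full program - the sigmoid model and helper names are just illustrative, and it assumes the edge image is already loaded as a 2D numpy array, rows being repeated observations of the edge, normalized to 0..1):

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, x0, k, lo, hi):
        # logistic model of an edge: low level -> high level around x0
        return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

    def super_resolved_esf(img, factor=10):
        n_rows, n_cols = img.shape
        x = np.arange(n_cols, dtype=float)
        x_fine = np.linspace(0.0, n_cols - 1.0, (n_cols - 1) * factor + 1)
        ref_centre = x_fine[x_fine.size // 2]      # common alignment coordinate
        acc = np.zeros_like(x_fine)
        for row in img:
            # step 1: upscale with piece-wise linear interpolation
            row_fine = np.interp(x_fine, x, row)
            # step 2: least-squares fit of a sigmoid to locate the edge
            p0 = (n_cols / 2.0, 1.0, row.min(), row.max())
            popt, _ = curve_fit(sigmoid, x_fine, row_fine, p0=p0, maxfev=10000)
            x0 = popt[0]
            # steps 3 and 4: shift the up-scaled row so its edge sits on the reference coordinate
            acc += np.interp(x_fine + (x0 - ref_centre), x_fine, row_fine)
        esf = acc / n_rows
        return x_fine, (esf - esf.min()) / (esf.max() - esf.min())   # normalized 0..1
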
This results in 10x super-resolution aligned edge data, which looks like this (I flipped the data to be consistent with standard edge-detection literature):

[Attached image: SMTF_1.png - aligned, super-resolved edge data]

Now one can sum up the aligned line data and get a nice representation of the Edge Spread Function the optics produce. This is a similar technique to the one we use when stacking planetary images. Normalize the summed-up line once again and you get the ESF:

[Attached image: SMTF_2.png - the normalized Edge Spread Function]

This is now in high resolution and, because of the "stacking", has almost no noise. We know that with perfect optics this would not be a smeared-out curve but an instant step from 0 to 1 where the edge is. However, we have to be careful now: this is not what you would normally be able to recover when imaging or viewing an object. Don't forget I work at 10x the native resolution, which is easy for a line target but a lot of computation when stacking planetary images.

Now comes the important part. When I differentiate this curve, I get the next graph. I now differentiate with finite differences on the real data, which increases the noise level:

[Attached image: SMTF_3.png - the Line Spread Function]

This is what's referred to as the Line Spread Function: the smearing/blurring/spreading of a perfect, infinitely narrow line caused by the optics. Now we can make a comparison to the point spread function.

  • In a 2D imaging plane, the spread a given optical device creates when transferring a perfect point source is referred to as the Point Spread Function. This also relates to the well-known Airy disk. (I am sure we are all familiar with this concept.)
  • In a 1D imaging plane, the spread of a perfect line transferred through a given optical system is called the Line Spread Function. It is the same concept, just one dimension lower.

In my case I make use of the fact that I can recover the Line Spread Function of any given optical system by imaging an edge target and then differentiating the image data. That's just simple maths, and if people take issue with that I refer them to the literature. However, there are a few things to remember:

  1. The LSF is not the same as the PSF. You would have to radially average the PSF and then integrate to get the LSF.
  2. If you assume a perfectly circularly symmetric optical system, the LSF and the PSF hold the same information with respect to the MTF. (The LSF is an integration of the PSF along one of its coordinates.)
  3. If you want to recover the 2D PSF for a given system, just image several edges at different orientations with respect to the optical plane and recover the full PSF (just rotate the telescope with respect to your target 😀, or the other way round if that is more convenient).

Now you can take a Fourier transform of the Line Spread Function and get the mysterious Modulation Transfer Function:

[Attached image: SMTF_4.png - measured vs theoretical MTF curves]
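
In numpy terms the last two steps (finite-difference derivative and Fourier transform) are short - a sketch, continuing from the alignment code above and assuming esf is the normalized super-resolved edge profile:

    import numpy as np

    dx = x_fine[1] - x_fine[0]                 # sub-pixel step of the super-resolved ESF
    lsf = np.gradient(esf, dx)                 # finite-difference derivative -> LSF
    lsf /= lsf.sum() * dx                      # normalize the LSF to unit area
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                              # MTF = 1 at zero spatial frequency
    freq = np.fft.rfftfreq(lsf.size, d=dx)     # spatial frequency axis, cycles per native pixel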

Here I plotted the theoretical MTF for my Skymax 180 at 530 nm in green, the measured MTF of my Mak (no curve fitting) in red - although that is at 10x super-resolution, so you won't get such a good result that often - and, for comparison, a theoretical MTF at 530 nm for a 125 mm apo.

The interesting thing is that the measured MTF is better than the theoretical one from about 0.8 onward to the right. This is due to the fact that I utilize super resolution. However I recover most of the MTF quite well with the measurement. Nice!

Nevertheless, I interpret this graph as follows: if I don't want to use the tedious super-resolution method, or have an atmosphere which hardly ever gives seeing below 1 arcsec, or if I observe visually, then I am just as well off with a 125 mm apo.

This is all of course for theoretical perfect optics of a given design. Collimation and optical defects are not considered at all.
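
For reference, the theoretical curves for a perfect, unobstructed circular aperture follow the standard diffraction-limited MTF formula - a sketch below; I assume something equivalent is behind the apo curve, while the obstructed Mak needs an extra term for the central obstruction:

    import numpy as np

    def perfect_circular_mtf(nu):
        # nu = spatial frequency / cutoff frequency, in 0..1
        nu = np.clip(nu, 0.0, 1.0)
        return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu ** 2))

    # cutoff frequency for a 125 mm aperture at 530 nm:
    D, lam = 0.125, 530e-9
    f_cutoff = D / lam            # ~236,000 cycles/radian, i.e. about 1.14 cycles/arcsec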

 

 

Edited by alex_stars

34 minutes ago, alex_stars said:

So, here is my second attempt to explain what can be done with an Edge Spread Function image.

If you don't mind, I have a couple of questions. There are a few things that I don't quite understand and would need clarification on.

35 minutes ago, alex_stars said:

In a 1D imaging plane, the spread of a perfect line transferred through a given optical system is called the Line Spread Function. It is the same concept, just one dimension lower

The LSF is essentially a 1D concept, right?

37 minutes ago, alex_stars said:

If you assume a perfectly circular symmetric optical system, the LSF and the PSF are the same,

Can you explain that in the context of a perfect circular aperture? More importantly, will a line under convolution by the PSF produce the LSF if we take a cross-section of the blurred line perpendicular to the line itself?
I might be wrong, but it looks like you are implying that?

I can easily show that such an LSF simply can't be equal to the Airy-pattern PSF. As we know, the Airy pattern has zeros - places where the PSF is equal to zero.

[Attached diagram: a vertical line, a sampling line, and the Airy disk edge, with points A and b marked]

The black vertical line is our line that will produce the LSF once we convolve it with the Airy-disk PSF. The yellow line is the one along which we sample the LSF. Point A is the intersection, and the black circle marks the edge of the Airy disk. Convolution at point A will indeed produce a 0 (zero) value at the orange point. However, there exists a point b (in fact, infinitely many such points) where the PSF convolution will not produce a 0 value at the orange point.

Convolution will sum all these values - the Airy pattern has only non-negative values - and a sum of non-negative numbers where at least one is greater than zero is itself greater than zero.

An LSF derived this way will have only positive values and no zeros, while the PSF contains zero values - hence the two can't be equal.
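
Here is a quick numeric check of that argument (a rough sketch - the Airy pattern is built as the standard (2*J1(v)/v)^2 form):

    import numpy as np
    from scipy.special import j1

    x = np.linspace(-10, 10, 801)
    xx, yy = np.meshgrid(x, x)
    v = np.pi * np.hypot(xx, yy)
    v[v == 0] = 1e-12                          # avoid 0/0 at the exact centre
    psf = (2.0 * j1(v) / v) ** 2               # Airy pattern, first zero ring at r ~ 1.22

    cross_section = psf[psf.shape[0] // 2, :]  # a straight cut through the PSF centre
    lsf = psf.sum(axis=0) * (x[1] - x[0])      # PSF integrated over one coordinate

    print(cross_section.min())                 # ~0 - the Airy rings really do reach zero
    print(lsf.min())                           # > 0 everywhere - the integrated profile has no zeros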

What am I missing?

Another question is:

If you are sampling at the critical sampling rate for the optical system (Nyquist theorem), why do you need to form a "super-resolution" image and use linear interpolation (instead of sinc interpolation, which guarantees reproduction of a band-limited original signal that is properly sampled)? What is that achieving? Can you explain the rationale behind it?


18 hours ago, vlaiv said:

If you don't mind, I have a couple of questions. There are a few things that I don't quite understand and would need clarification on.

Absolutely. Let's go through all the steps and see if I can clarify.

18 hours ago, vlaiv said:

The LSF is essentially a 1D concept, right?

Correct

18 hours ago, vlaiv said:

Can you explain that in the context of a perfect circular aperture? More importantly, will a line under convolution by the PSF produce the LSF if we take a cross-section of the blurred line perpendicular to the line itself?
I might be wrong, but it looks like you are implying that?

Yes.

18 hours ago, alex_stars said:

If you assume a perfectly circular symmetric optical system, the LSF and the PSF are the same,

What I meant by the above is that a perfectly circularly symmetric optical system is a system which is radially symmetric about the optical axis. All lenses and/or mirrors are perfectly aligned and have no variations along any given radial coordinate. Perfect optics. Spider vanes which hold a secondary mirror would make such a system non-symmetrical, and the PSF would also become radially non-symmetric (example here). However, in the radially symmetric case the PSF is radially symmetric and it would not matter at which angle you cut through the 2D PSF to get a line graph.

Now, how does this relate to the LSF? I see what caused the misunderstanding (which was me not being completely precise).

An infinitely long, infinitely narrow line convolved with the PSF does produce the LSF. The LSF is an integration over one variable (i.e. coordinate) of the 2D PSF. You can read up on the mathematical relations in Digital Image Processing by Jonathan M. Blackledge (details here). My mistake was not to include the integration part, sorry for that. (I have corrected the original post.)

Now, two things are important. First, the integration changes the look of the LSF in comparison with the PSF - that should be the puzzle piece you were missing because I forgot to write it. Secondly, in a radially (circularly) symmetric system the LSF holds enough information to reconstruct the MTF; you don't need the PSF (a small numeric check of this follows the list below).

However, a few simple reminders might help to understand the relations between the two:

  • In a 2D system (the imaging or observing plane), a point gets blurred by an optical system and becomes the PSF. As a 2D image is a large collection of points, we can convolve the image with the PSF to re-create the blurred image. Or, if we know the PSF precisely, we can reverse the process and de-blur an image by deconvolution.
  • In a 1D system (not like our images), an infinitely long line (which is the 1D equivalent of the point in 2D) gets blurred by an optical system and becomes the LSF. The same procedures as above are valid, just in 1D.
  • In a radially symmetric system, we don't really need to care about 2D when we want to understand its quality by the measure of an MTF. However, if we want to simulate its behaviour, we do need a 2D PSF to apply convolution and such.
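
As a small numeric sanity check of that last point about the MTF (a sketch, using a radially symmetric Gaussian as a stand-in PSF): the 1D Fourier transform of the LSF equals the central slice of the 2D Fourier transform of the PSF, so the LSF alone is enough to get the MTF of a symmetric system.

    import numpy as np

    n = 256
    x = np.arange(n) - n // 2
    xx, yy = np.meshgrid(x, x)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * 4.0 ** 2))   # radially symmetric stand-in PSF
    psf /= psf.sum()

    lsf = psf.sum(axis=0)                                   # integrate the PSF over y -> LSF
    mtf_from_lsf = np.abs(np.fft.fft(lsf))                  # 1D transform of the LSF
    mtf_slice_2d = np.abs(np.fft.fft2(psf)[0, :])           # central (zero vertical frequency) slice of the 2D MTF
    print(np.allclose(mtf_from_lsf, mtf_slice_2d))          # True
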
18 hours ago, vlaiv said:

If you are sampling at the critical sampling rate for the optical system (Nyquist theorem), why do you need to form a "super-resolution" image and use linear interpolation (instead of sinc interpolation, which guarantees reproduction of a band-limited original signal that is properly sampled)? What is that achieving? Can you explain the rationale behind it?

As you can see in my initial post, if I only use the edge spread directly from the camera sensor, I sample the edge spread with only a few points, and then I need to fit a theoretical edge spread function to the sparse data. If I apply super-resolution, which I can since I have several hundred observations of that edge in my image, then I can resolve two issues I face:

  1. As the actual edge is curved, I need to align the edge observations before "stacking". To get a good alignment, I want to work on a sub-pixel scale as the native resolution of the camera is rather coarse. To shift observations on a sub-pixel scale I need super-resolution.
  2. I use piece-wise linear interpolation between the original pixels as I don't want to add spurious "data" by interpolation, as I would by using a sinc or a spline or something. Let me remind you that I interpolate the raw camera data, so a sinc would not make sense. After the piece-wise linear interpolation, the sub-pixel information comes directly from the summation over the several hundred observations of the edge. That is what I prefer when I calculate the LSF and MTF numerically from the data.

I do hope that sheds some light on the method, and thanks for carefully reading over it and asking questions 👍

Do ask more if you feel like....

 

Edited by alex_stars
small error correction

40 minutes ago, alex_stars said:

A line convolved with a PSF does not produce the LSF. The LSF is an integration over one variable (i.e. coordinate) of the 2D PSF. You can read up on the mathematical relations in Digital Image Processing by Jonathan M. Blackledge (details here). My mistake was not to include the integration part, sorry for that. (I have corrected the original post.)

OK, that explains my misunderstanding of what the LSF is. Could you explain in simple terms why the derivative of the ESF (being the cross-section of an edge convolved with the PSF) is equal to the LSF - which is the integral of the PSF in one coordinate?

I'm sure that it is explained in the book you linked to, but I would rather have it explained in a few sentences, if possible, than pay for that book just to find the answer.

44 minutes ago, alex_stars said:

And secondly in a radially (circular) symmetric system the LSF holds enough information to reconstruct the MTF, you don't need the PSF.

I understand this bit - except that I don't know how we reconstruct the MTF from the LSF - could you give an explanation of that as well?

49 minutes ago, alex_stars said:

As the actual edge is curved, I need to align the edge observations before "stacking". To get a good alignment, I want to work on a sub-pixel scale as the native resolution of the camera is rather coarse. To shift observations on a sub-pixel scale I need super-resolution.

I'm failing to see how adding derived data as new data points offers greater precision for alignment than using the original data points (and making the derivation process part of the original alignment process). Think of it this way: if we have a set of measurements that we need to do a linear fit on, will the linear fit change if we add more points by interpolating the existing ones?

Or, in math terms: if the original set is X and the interpolated set is g(X), and we have the fit as a function b(X, g(X)), it is still a function of X alone.

52 minutes ago, alex_stars said:

I use piece-wise linear interpolation between the original pixels as I don't want to add spurious "data" by interpolation, as I would by using a sinc or a spline or something. Let me remind you that I interpolate the raw camera data, so a sinc would not make sense. After the piece-wise linear interpolation, the sub-pixel information comes directly from the summation over the several hundred observations of the edge. That is what I prefer when I calculate the LSF and MTF numerically from the data.

Adding linearly interpolated data is adding spurious "data" by interpolation, the same as any other interpolation. In fact, not quite the same: the Shannon-Nyquist sampling theorem states that a band-limited signal can be perfectly reconstructed if the sampling frequency is more than twice the maximum frequency of the band-limited signal and one uses the sinc function for reconstruction (convolve delta functions at the sample points, scaled by the sample values, with the sinc function) - a small numeric sketch follows the quote below.

https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

Quote

A mathematically ideal way to interpolate the sequence involves the use of sinc functions. Each sample in the sequence is replaced by a sinc function, centered on the time axis at the original location of the sample, nT, with the amplitude of the sinc function scaled to the sample value, x[n]. Subsequently, the sinc functions are summed into a continuous function. A mathematically equivalent method is to convolve one sinc function with a series of Dirac delta pulses, weighted by the sample values.
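
A minimal numeric sketch of the difference, assuming a band-limited sine sampled just above Nyquist:

    import numpy as np

    f_sig = 0.4                            # cycles per sample; the Nyquist limit is 0.5
    n = np.arange(64)
    samples = np.sin(2 * np.pi * f_sig * n)

    t = np.linspace(0, 63, 2001)           # dense grid for reconstruction
    truth = np.sin(2 * np.pi * f_sig * t)

    # sinc reconstruction: sum of sample-weighted, shifted sinc functions
    sinc_rec = np.sum(samples[:, None] * np.sinc(t[None, :] - n[:, None]), axis=0)
    # piece-wise linear reconstruction
    lin_rec = np.interp(t, n, samples)

    mask = (t > 8) & (t < 55)              # ignore edge effects of the finite sample window
    print(np.max(np.abs(sinc_rec - truth)[mask]))   # small - essentially the original signal
    print(np.max(np.abs(lin_rec - truth)[mask]))    # large - the high-frequency content is smoothed away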

For an example of how linear interpolation actually performs data smoothing rather than reconstructing the original - see here:

 


1 hour ago, alex_stars said:

A line convolved with a PSF does not produce the LSF. The LSF is an integration over one variable (i.e. coordinate) of the 2D PSF. You can read up on the mathematical relations in Digital Image Processing by Jonathan M. Blackledge (details here). My mistake was not to include the integration part, sorry for that. (I have corrected the original post.)

I just ran a test and found something very interesting ...

If you take an edge, convolve it with the PSF and then use a kernel filter to "differentiate" it (a bit like you suggested, except I'm using a filter to differentiate) and read off the function, and if you take a line, convolve it with the PSF and read off the values / plot the graph - you get essentially two identical functions:

[Attached plot: the differentiated convolved edge and the convolved line - the two curves overlap]

(The X values are not the same because the edge and the line were not in the same place in the test images - but the curves match perfectly.)

To me this suggests that using the edge test, reading off the curve and then differentiating it, will not produce the LSF, which is the integral of the PSF in one variable.
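
For anyone who wants to reproduce it, here is a rough sketch of that test (a Gaussian stands in for the PSF - the comparison itself does not depend on the PSF shape):

    import numpy as np
    from scipy.ndimage import gaussian_filter, correlate1d

    n = 256
    edge = np.zeros((n, n))
    edge[:, n // 2:] = 1.0                            # vertical edge
    line = np.zeros((n, n))
    line[:, n // 2] = 1.0                             # vertical line at the same position

    blurred_edge = gaussian_filter(edge, sigma=4.0)   # "convolve with the PSF"
    blurred_line = gaussian_filter(line, sigma=4.0)

    # differentiate the blurred edge along x with a simple kernel filter
    diff_edge = correlate1d(blurred_edge, [-0.5, 0.0, 0.5], axis=1)

    row = n // 2
    print(diff_edge[row].max(), blurred_line[row].max())     # essentially the same peak
    print(np.abs(diff_edge[row] - blurred_line[row]).max())  # small residual, from the half-pixel offset of the kernel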


In fact, the above can be easily proven:

(psf * edge)(x, y) = ∫∫ psf(u, v) · edge(x - u, y - v) du dv

Now we take the derivative with respect to x:

d/dx (psf * edge)(x, y) = ∫∫ psf(u, v) · edge'(x - u, y - v) du dv, since only the edge term depends on x.

The derivative of the edge with respect to x is a "Dirac delta line at x = crossover" - a line running vertically,

so we end up with a convolution expression in which the PSF convolves that vertical line.

Hence, the derivative of the convolved edge with respect to x is the same as the convolved line, and not the integral of the PSF in one coordinate.


6 hours ago, vlaiv said:

In fact, the above can be easily proven:

(psf * edge)(x, y) = ∫∫ psf(u, v) · edge(x - u, y - v) du dv

Now we take the derivative with respect to x:

d/dx (psf * edge)(x, y) = ∫∫ psf(u, v) · edge'(x - u, y - v) du dv, since only the edge term depends on x.

The derivative of the edge with respect to x is a "Dirac delta line at x = crossover" - a line running vertically,

so we end up with a convolution expression in which the PSF convolves that vertical line.

Hence, the derivative of the convolved edge with respect to x is the same as the convolved line, and not the integral of the PSF in one coordinate.

Wow - beyond me - but more importantly.......

Will it get rid of the clouds ? 🤣


58 minutes ago, dweller25 said:

Will it get rid of the clouds ?

Sorry to say, I am afraid it won't. It would be great, though. However, as a remedy for clouds I really suggest the ground-based observation targets (marbles, print-outs) that @vlaiv suggested. They are great fun and they showed me how good my scope actually is. 😀


8 hours ago, vlaiv said:

OK, that explains my misunderstanding of what the LSF is. Could you explain in simple terms why the derivative of the ESF (being the cross-section of an edge convolved with the PSF) is equal to the LSF - which is the integral of the PSF in one coordinate?

The idea starts with what we can easily observe. We want to observe a point and measure its spread, but that is in most cases (except in astronomy, where you have stars) really hard. The next best thing would be an infinitely narrow line, but that is really hard to observe too. So the idea was born (I think it was Léon Foucault's) that it is a lot easier to observe an edge (initially a real knife edge as a screen against the light).

Mathematically, an edge like the one in my pictures is called a step function. If the step function has a range of 0 to 1 (normalized, hence the normalization in my steps) and you differentiate it, you get a Dirac delta function, which is the perfect cross-section of an infinitely narrow line.

So instead of observing (or imaging) lines, we observe edges, in the realization that if we differentiate the result, it is as if we had observed a line. Put differently, if we are interested in the spread of a line, we can observe the spread of an edge (which is a lot easier to do) and measure the spread of the line by differentiating the spread of the edge.
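
A tiny illustration of that last point (a sketch with a made-up Gaussian blur standing in for the optics): differentiating a sharp step gives a single spike, the discrete stand-in for a Dirac delta, and differentiating a blurred step gives back the blur profile itself, which is exactly why the derivative of the ESF is the line response.

    import numpy as np

    x = np.arange(200)
    step = (x >= 100).astype(float)                      # ideal edge: 0 then 1
    blur = np.exp(-0.5 * (np.arange(-30, 31) / 5.0) ** 2)
    blur /= blur.sum()                                   # stand-in blur kernel ("the optics")

    esf = np.convolve(step, blur, mode="same")           # blurred edge = ESF
    print(np.count_nonzero(np.diff(step)))               # 1 - the derivative of the step is a single spike

    lsf = np.diff(esf)                                   # derivative of the blurred edge
    start = lsf.argmax() - blur.argmax()
    print(np.allclose(lsf[start:start + blur.size], blur))   # True - the blur profile is recovered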

