
Exit pupil and AFOV



6 minutes ago, jetstream said:

This has always puzzled me Andrew- thanks for clearing things up. From my completely unscientific observations I can say that stars can be made to disappear with too high a mag, whether because of seeing or other things. There seems to be a sweet spot mag wise for them.

Can I ask one more question Andrew, slightly off topic?

How much object information is in one unit of light- packet, photon or whatever? When I place my hand over part of my scope the image is still there as itself but suffers more diffraction- I think lol!

If you obstruct the telescope with your hand you dim the object and add diffraction effects. These would be best modeled by geometric optics for the obstruction and classical wave theory for the diffraction.

I don't think light packets or photons contribute any insight, and they would open up a whole new can of worms. Photons are among the hardest quantum objects to pin down; there are whole books on the subject.

Regards Andrew 


1 minute ago, andrew s said:

I don't think light packets or photons contribute any insight, and they would open up a whole new can of worms. Photons are among the hardest quantum objects to pin down; there are whole books on the subject.

It is complicated for sure and maybe I could start a thread in the physics section on it? The QED has been enlightening but just a synopsis of thoughts from members such as yourself would be fantastic.

One thing is for sure - I never realized before just how complicated the workings of seeing an image in a telescope actually are. For me it adds to the respect and admiration for my simple telescopes that allow the exploration of the sky.


@jetstream it has long been known that linear features can be seen more easily than, say, separating double stars. The Dawes and similar criteria are conventions or definitions. They were set either by observations in the past or by diktat. They have a useful role for comparing systems but they are not hard and fast limits.

Regards Andrew 


20 minutes ago, jetstream said:

"Understanding Resolution and Contrast

Two points it is important to understand are the resolution a telescope can provide, and how the contrast of the objects we are imaging is related to what can be recorded. Resolution is often quoted as the Dawes or Rayleigh criterion for a given aperture. The Dawes criterion refers to the separation of double stars of equal brightness in unobstructed apertures. The value is given by the following simple formula:

115/Aperture (mm). For example, a 254mm aperture telescope has a Dawes limit of 0.45 arc seconds. The Dawes limit is really of little use to the planetary observer, as it applies to stellar images. Planetary detail behaves quite differently, and the resolution that can be achieved is directly related to the contrast of the objects we are looking at. A great example from modern images is Saturn's very fine Encke division in ring A. The narrow gap has an actual width of just 325km - which converts to an apparent angular width at the ring ansae of just 0.05 arc seconds - well below the Dawes criterion of even a 50cm telescope. In fact, the division can be recorded in a 20cm telescope under excellent seeing, exceeding the Dawes limit by a factor of 11! How is this possible?

As mentioned above, the contrast of the features we are looking at is critical to how fine the detail is that we can record. The planets are extended objects, and the Dawes or Rayleigh criterion does not apply here, as these limits refer to point sources of equal brightness on a black background. In fact it is possible for the limit to be exceeded by anywhere up to around ten times on the Moon and planets depending on the contrast of the detail being observed/imaged."

From Peach.

What are your thoughts about telescope resolution in this context?
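As a quick sanity check of the numbers in the quoted passage, the Dawes formula can be evaluated directly (Python used here purely for illustration):

```python
# Dawes limit as quoted: 115/D, with D in mm, giving arcseconds
def dawes_limit_arcsec(aperture_mm):
    return 115.0 / aperture_mm

encke_arcsec = 0.05  # quoted apparent width of the Encke division at the ansae

for d_mm in (200, 254, 500):
    limit = dawes_limit_arcsec(d_mm)
    print(f"{d_mm} mm: Dawes limit {limit:.2f}\", Encke is {limit / encke_arcsec:.1f}x finer")
```

This reproduces the quoted 0.45" for 254mm and the "factor of 11" for a 20cm telescope (115/200 = 0.575", which is 11.5 times the 0.05" division).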

There are two things that I would like to address here. Mr Peach is right that astrophotography can resolve more detail on planets - but not for the reasons listed.

First - let's address what resolve means: In astronomy: "separate or distinguish between (closely adjacent objects)".

Detecting the Encke Division is not really resolving it at all. You are resolving two pieces of ring A, not resolving the division itself. It is the same as resolving two stars.

I'm not talking here about light or lack of light - that is not the point. You can still resolve two dark features - but you need to distinguish between the two dark features. If there were two divisions next to each other, and you recorded two divisions, then you would have resolved two divisions.

Recording the contrast drop from the Encke division is not resolving - in the same sense that seeing a single star instead of two stars is not resolving that star. Maybe there are 50 stars in there? Maybe 10,000 stars. How can we tell? We can't, because we did not resolve it.

Once you resolve a pair of stars, you can tell there are at least two stars there. Maybe there are more - but we did not resolve those.

In that sense, telescopes don't actually resolve the Encke division - but they do record it - in the same sense that you'll see a star if you observe a double star but don't resolve it.

Now onto the other part - photography will record / resolve more than the human eye can resolve with the same telescope.

This is due to two things - the first is "frame rate" and the atmosphere. We watch movies at 30fps and can't tell it is a series of single frames in succession - this tells you that our eye/brain "exposure" time is at least 1/30s. In fact, some people can see faster than this - there is an anecdote that someone saw the M1/Crab pulsar pulsate in a large telescope. The person was a pilot and could tell the difference from atmospheric influence. The Crab pulsar pulsates with a 33.5ms period - which means that light and dark each last for half of that. Some people can see flicker at 30fps.

In any case, exposures for planetary photography are often 5-6ms. That is much faster, and it is used to freeze the seeing. In other words, the human eye sees more atmospheric motion blur than the camera does, due to its longer "exposure".
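The timing figures in this post can be put side by side (a toy comparison using the numbers mentioned above, not a model of vision):

```python
# Assumed values from the discussion above
eye_integration_s = 1 / 30    # rough eye/brain "exposure" implied by 30fps flicker
camera_exposure_s = 0.005     # typical 5ms planetary video exposure
crab_period_s = 0.0335        # Crab pulsar period

ratio = eye_integration_s / camera_exposure_s
print(f"eye integrates ~{ratio:.1f}x longer than a 5ms camera exposure")
print(f"Crab pulsar on/off phases last ~{crab_period_s / 2 * 1000:.2f}ms each")
```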

Second thing is that images of planets are processed - contrast is enhanced and sharpening is performed.

Detail is really about contrast. For that reason resolving power is defined using two high-contrast features - black sky and very bright stars.

Sharpening can, in effect, sharpen the telescope's optics as well. That is something the human eye/brain can't do (efficiently? I'm sure there is some sharpening involved - but not in the way we think of it - the brain does all sorts of funny things to the image that we see).

Here is another screen shot from Mr Peach's website:

image.png.31f0add39ac01c97b7a69e002efb5c18.png

That talks about the sharpness of optics. This is the MTF of a telescope (with different levels of spherical aberration in this case).

What it does not tell is that image processing or sharpening in particular tends to do this:

image.png.6ada51441398e28d5dd2c2d85748724a.png

This graph shows how much detail loses contrast. Once this line reaches zero, no more detail can be seen, as all contrast has been lost. If you look at the obstructed telescope diagram vs the unobstructed one, you'll see something like this:

image.png.8c166677bea1ea5421768ce40cdbb09d.png

The more central obstruction you add, the more "dip" there is in this curve, and the more contrast is lost. This is why we say that a clear aperture gives the best contrast in the image.
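The shape of these curves can be reproduced numerically: the MTF of a diffraction-limited telescope is the autocorrelation of its pupil. A minimal NumPy sketch, with an assumed 33% central obstruction for the obstructed case:

```python
import numpy as np

def mtf_radial(obstruction_frac=0.0, n=256):
    """Radial MTF cut, computed as the autocorrelation of a (possibly annular) pupil."""
    y, x = np.mgrid[-2:2:n * 1j, -2:2:n * 1j]
    r = np.hypot(x, y)
    pupil = ((r <= 1.0) & (r >= obstruction_frac)).astype(float)
    # OTF = autocorrelation of the pupil, done via FFT (Wiener-Khinchin)
    otf = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)))
    otf /= otf.max()
    return otf[n // 2, n // 2:]  # from zero spatial frequency out to cutoff

clear = mtf_radial(0.0)
obstructed = mtf_radial(0.33)
mid = len(clear) // 4  # a mid-range spatial frequency
print("clear aperture has more mid-frequency contrast:", clear[mid] > obstructed[mid])
```

The obstructed curve shows exactly the mid-frequency "dip" discussed above, while both curves start at 1 and fall to zero at the cutoff.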

Now back to sharpening - sharpening just straightens this curve, restoring contrast and detail beyond what the scope can deliver to the human eye (which can't sharpen). How far you can restore this curve depends on how much noise there is in the image, because as you raise the curve, you raise the noise as well.

Important point - once the curve reaches 0, there is no straightening it up. Zero just means zero - the information is lost, and there is no way to recover it by sharpening (in math terms: any number times 0 is 0 - you can't guess the original number you multiplied by zero if your result is zero - it could have been any number).

 


4 minutes ago, vlaiv said:

The more central obstruction you add, the more "dip" there is in this curve, and the more contrast is lost.

Well, the graph shows a slight increase in contrast at high frequencies for obstructed telescopes - (one of) the reason(s) that many top lunar/planetary imagers don't worry about obstruction, IMHO.

What defines resolution? Do you need two similar objects or features to define it? I can see shade differences in there, like Encke - does this define resolution?


12 minutes ago, vlaiv said:

What it does not tell is that image processing or sharpening in particular tends to do this:

image.png.6ada51441398e28d5dd2c2d85748724a.png

I need an explanation Vlaiv, as I don't understand this. The red line seems to add information that must be limited by aperture, optics etc.? Am I thinking correctly?


On 05/05/2020 at 17:24, miguel87 said:

However, the exit pupil is an in-focus image, so the light not entering your pupil is just resolved stars at the edge of the FOV.

This is incorrect. The exit pupil is the beam of light for every part of the image formed by the eyepiece. If you have an exit pupil of 5 mm, that means that each star in the field is represented by a beam of parallel light 5 mm in diameter emerging from the eye lens of the eyepiece. All of those bundles of light converge to a point above the eye lens (forming a disc 5 mm wide), and the height of that point of convergence is the eyepiece's eye relief. So when your pupil meets the disk at the eye relief height, it occludes all parts of the image equally (assuming the eye pupil is smaller than the exit pupil).

Edited by Ags

16 minutes ago, jetstream said:

Well, the graph shows a slight increase in contrast at high frequencies for obstructed telescopes - (one of) the reason(s) that many top lunar/planetary imagers don't worry about obstruction, IMHO.

What defines resolution? Do you need two similar objects or features to define it? I can see shade differences in there, like Encke - does this define resolution?

You need to be careful with MTFs. The graph shows the amplitude but not the phase. If you image black and white stripes you can get the black and white swapped round due to the phase shift. I had a book with a good example in it, but I lent it out and have as of now not got it back.

Regards Andrew 


17 minutes ago, Ags said:

This is incorrect. The exit pupil is the beam of light for every part of the image formed by the eyepiece. If you have an exit pupil of 5 mm, that means that each star in the field is represented by a beam of parallel light 5 mm in diameter emerging from the eye lens of the eyepiece. All of those bundles of light converge to a point above the eye lens (forming a disc 5 mm wide), and the height of that point of convergence is the eyepiece's eye relief. So when your pupil meets the disk at the eye relief height, it occludes all parts of the image equally (assuming the eye pupil is smaller than the exit pupil).

You say parallel lines converge, but that is impossible.

Also, if every star is represented by a 5mm wide beam, they can't all occupy the same space or they would not be parallel.

There is information within the exit pupil that would be lost if it was a uniformly bright, plain white disc of light.

Also, if you use an eyepiece that has too big an exit pupil, you will notice that you cannot quite see all of the image at the edges. If all parts of the exit pupil contained all parts of the image, then even viewing the central 1mm portion would show the whole image.

 

Edited by miguel87

12 minutes ago, jetstream said:

Well, the graph shows a slight increase in contrast at high frequencies for obstructed telescopes - (one of) the reason(s) that many top lunar/planetary imagers don't worry about obstruction, IMHO.

The other graph, with my added red line, is why planetary imagers don't worry about CO, and the fact that the human eye/brain can't sharpen is why visual observers prefer a small CO.

 

8 minutes ago, jetstream said:

I need an explanation Vlaiv, as I don't understand this. The red line seems to add information that must be limited by aperture, optics etc.? Am I thinking correctly?

There is no adding of information there. There is only removal of information - and that happens at the point where the graph line hits 0.

The graph actually reads as follows:

For a given frequency (X axis), an image that is observed through a telescope and decomposed into frequencies (Fourier transform) will have that particular frequency attenuated by a certain amount.

It's like someone, selectively per frequency, put in some sort of ND filter and removed a percentage of the light at that frequency. What you end up with is slightly less intense light at that frequency. In fact, the height of the graph shows what percentage of the light remains.

As long as there is some light left, and as long as you know this curve (or can guess it), you can do the inverse on the recorded image - take that frequency and multiply it by the inverse of the number from the graph.

If a frequency was halved, then multiply it by 2 and you'll get the same intensity as before.

The only place you can't do that is where something was multiplied by 0. That is information removal, because nothing can get the original back: X * 0 = 0 => X = 0/0, but division by zero is undefined, so X can be anything - the information about the value of X is forever lost.

In the classical picture you can restore the image fully. In the quantum picture you can only restore the image up to a point, since there is uncertainty involved, and that uncertainty is noise. When the signal is very low, SNR is very low, and amplifying that signal won't increase SNR - it will remain the same. For that reason the red curve I drew above is not rectangular - it is still a bit curved - you can't beat the noise.

The more you stack and the better SNR you have, the more sharpening you can perform - but there is always a limit, or rather a two-part limit. One is information loss - you can't restore information that has been lost to multiplication by 0 - and the other is poor SNR: you can't restore signal that has been attenuated so much that the noise is bigger than the signal and SNR is <1 (in fact SNR needs to be >3 in order to start recognizing things).
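The argument above can be demonstrated in one dimension: attenuate the frequencies of a signal with a made-up MTF-like curve that hits zero, then invert it. Everything where the curve was non-zero comes back exactly (no noise is added here); what was multiplied by zero is gone for good. A sketch with assumed, illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(256)  # stand-in for one row of an image

# Illustrative attenuation curve: falls linearly, exactly 0 above frequency 0.375
freqs = np.fft.rfftfreq(256)
mtf = np.clip(1.0 - freqs / 0.375, 0.0, 1.0)

blurred = np.fft.irfft(np.fft.rfft(signal) * mtf, 256)

# "Straighten the curve": multiply each frequency by the inverse where possible
restored_ft = np.fft.rfft(blurred)
nonzero = mtf > 0
restored_ft[nonzero] /= mtf[nonzero]

# Frequencies the curve did not zero out are recovered exactly...
print(np.allclose(restored_ft[nonzero], np.fft.rfft(signal)[nonzero]))
# ...but the zeroed ones cannot be, so the full signal is not recovered
print(np.allclose(np.fft.irfft(restored_ft, 256), signal))
```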

Makes sense?


1 minute ago, andrew s said:

You need to be careful with MTFs. The graph shows the amplitude but not the phase. If you image black and white stripes you can get the black and white swapped round due to the phase shift. I had a book with a good example in it, but I lent it out and have as of now not got it back.

Regards Andrew 

Excellent, another piece of the puzzle (well, a puzzle for me) enters the picture. Further thoughts appreciated - what could the effects of this be on the graphs, or on visual observation vs the graphs?


15 minutes ago, miguel87 said:

You say parallel lines converge, but that is impossible.

The beams come from different parts of the eye lens, and converge to the exit pupil. But the light for each point on the image (e.g. a star) remains parallel and does not converge in itself - it is the light from the whole image that converges. They are "beams of parallel light", NOT "parallel beams of light".

Think of five people shining torches in your face. Each torch's light is (roughly) parallel, but the five beams of light converge on your face. That is what the light from five stars would do, each emerging from a different part of the eye lens and converging on your pupil.

Edited by Ags

3 minutes ago, vlaiv said:

Makes sense?

No :grin:

Well, kind of. I was just thinking simply: if I go up one unit and over to the aperture's MTF curve, it's different than if I go up a unit and over to the red line. Of course they meet at "1".

I do appreciate the thoughts on this Vlaiv.


I gather that most people disagree with me here and also that I am likely wrong about it all 😂

But I'm just stating how I understand it from my own experience and knowledge.

Apologies if I am frustrating anybody with all my comments.

And thanks to everyone for a very engaging conversation (even if it is out of my depth)

On a closely related but slightly different note:

We are discussing the issue of brightness being lost if an exit pupil is too big.

How about the field stop? If the field stop is smaller than the 'real image' created by the telescope, aren't we already losing aperture? Regardless of what the eyepiece then does after the field stop.

 


50 minutes ago, andrew s said:

They have a useful role for comparing systems but they are not hard and fast limits.

I am going to retire from the thread having gained more pieces of the puzzle and this quoted piece is a very important concept (for me). As is "diktat" in reference to accepted resolution definitions.

As I learn more and gain observing experience I seem to embrace that some limits are not hard and fast- it is quite enlightening.

Thanks Andrew.


4 minutes ago, miguel87 said:

I gather that most people disagree with me here and also that I am likely wrong about it all 😂

But I'm just stating how I understand it from my own experience and knowledge.

Apologies if I am frustrating anybody with all my comments.

And thanks to everyone for a very engaging conversation (even if it is out of my depth)

On a closely related but slightly different note:

We are discussing the issue of brightness being lost if an exit pupil is too big.

How about the field stop? If the field stop is smaller than the 'real image' created by the telescope, aren't we already losing aperture? Regardless of what the eyepiece then does after the field stop.

 

eyepiece.png

Here it is - this image explains it all.

If you have a star / point at some angle Alpha to the optical axis, the following will happen:

- all rays from that point will be parallel before they reach the aperture - all at the same angle

- after the objective they will start to converge, and they finally meet at the focal point - all the light from the original star falls onto a single point on the focal plane. This is why the star is in focus on a camera sensor (provided it is focused well), and it also means that the field stop won't remove any light - it only limits the angle that can be seen, as a bigger angle means the point on the focal plane is further away from the center.

- then the rays start to diverge (they just happily go on their own way, and since they came to a point they now continue to spread)

- the eyepiece catches those rays and makes them parallel again. A few things to note: the angle is now different - that is magnification. All the parallel rays occupy a certain "circle" - that was the aperture earlier, and now it is the exit pupil. The ratio of the angles, and the ratio of the sizes of these two pupils, is the magnification.

- the eye is the same kind of thing as the telescope - a device that again focuses parallel rays.

Thus the field stop can't act as an aperture stop, because all the rays from the aperture have been squeezed into a single point on the focal plane.
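The pupil/angle ratios described above reduce to the usual formulas. A small sketch with hypothetical telescope numbers (the 200mm f/6 scope and 24mm eyepiece are assumptions for illustration, not anything from the thread):

```python
# Standard relations: magnification = objective FL / eyepiece FL,
# exit pupil = aperture / magnification (equivalently eyepiece FL / focal ratio)
aperture_mm = 200.0
objective_fl_mm = 1200.0  # f/6
eyepiece_fl_mm = 24.0

magnification = objective_fl_mm / eyepiece_fl_mm
exit_pupil_mm = aperture_mm / magnification

print(f"magnification: {magnification:.0f}x, exit pupil: {exit_pupil_mm:.1f} mm")
# Same answer via eyepiece FL / focal ratio:
print(eyepiece_fl_mm / (objective_fl_mm / aperture_mm))
```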


23 minutes ago, vlaiv said:

eyepiece.png

Here it is - this image explains it all.

If you have a star / point at some angle Alpha to the optical axis, the following will happen:

- all rays from that point will be parallel before they reach the aperture - all at the same angle

- after the objective they will start to converge, and they finally meet at the focal point - all the light from the original star falls onto a single point on the focal plane. This is why the star is in focus on a camera sensor (provided it is focused well), and it also means that the field stop won't remove any light - it only limits the angle that can be seen, as a bigger angle means the point on the focal plane is further away from the center.

- then the rays start to diverge (they just happily go on their own way, and since they came to a point they now continue to spread)

- the eyepiece catches those rays and makes them parallel again. A few things to note: the angle is now different - that is magnification. All the parallel rays occupy a certain "circle" - that was the aperture earlier, and now it is the exit pupil. The ratio of the angles, and the ratio of the sizes of these two pupils, is the magnification.

- the eye is the same kind of thing as the telescope - a device that again focuses parallel rays.

Thus the field stop can't act as an aperture stop, because all the rays from the aperture have been squeezed into a single point on the focal plane.

I wish I understood all of that 😁

I think my problem is perhaps drawing a parallel between the telescope's primary image, created in an empty focuser tube, and the exit pupil of an eyepiece.

What you're saying is the exit pupil is not a 2D image but a 'window' that the image passes through?

I hope I am getting there?!

Edited by miguel87

3 minutes ago, miguel87 said:

I wish I understood all of that 😁

Want to go bit by bit and see where you get stuck?

From a point (for our purposes this can be a star) that is very distant (like, really distant), the incoming "rays" of light are parallel.

Do you understand why this is?


3 minutes ago, vlaiv said:

Want to go bit by bit and see where you get stuck?

From a point (for our purposes this can be a star) that is very distant (like, really distant), the incoming "rays" of light are parallel.

Do you understand why this is?

I try to understand this with the relative sizes of the star and the telescope aperture.

And if you had an aperture wider than the light source, then the lines would NOT be parallel?


42 minutes ago, jetstream said:

I am going to retire from the thread having gained more pieces of the puzzle and this quoted piece is a very important concept (for me). As is "diktat" in reference to accepted resolution definitions.

As I learn more and gain observing experience I seem to embrace that some limits are not hard and fast- it is quite enlightening.

Thanks Andrew.

My thoughts as well Gerry :smiley:


7 minutes ago, miguel87 said:

I try to understand this with the relative sizes of the star and the telescope aperture.

And if you had an aperture wider than the light source, then the lines would NOT be parallel?

Here is how to best understand it:

slope.jpg

The further away the object is, irrespective of the relative sizes of the object and the aperture, the smaller the angle between the two lines that connect the object to opposite sides of the aperture.

Here the left edge of each triangle is the aperture and the right vertex is the object. The aperture being small relative to the distance to the object means that the angle at the vertex is small. When the angle between two lines is very small, they are effectively parallel to you (and here I mean a very, very small angle - like a couple of light years vs 20cm of aperture - although we don't need to go that far: this holds for the Moon as well, although it is only 384,000km away).
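To attach numbers to this vertex angle (the 20cm aperture and the distances are assumed values for illustration):

```python
aperture_m = 0.2  # 20cm aperture
distances_m = {
    "Moon (384,000 km)": 3.84e8,
    "star at 4 light years": 4 * 9.461e15,
}

for name, distance in distances_m.items():
    # Small-angle approximation: angle ~ aperture / distance (radians),
    # converted to arcseconds (1 rad ~ 206265")
    angle_arcsec = (aperture_m / distance) * 206265
    print(f"{name}: aperture subtends {angle_arcsec:.2e} arcseconds")
```

Even for the Moon the angle is around a ten-thousandth of an arcsecond, so "effectively parallel" is a very good approximation.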

Makes sense?

 


7 minutes ago, vlaiv said:

Here is how to best understand it:

slope.jpg

The further away the object is, irrespective of the relative sizes of the object and the aperture, the smaller the angle between the two lines that connect the object to opposite sides of the aperture.

Here the left edge of each triangle is the aperture and the right vertex is the object. The aperture being small relative to the distance to the object means that the angle at the vertex is small. When the angle between two lines is very small, they are effectively parallel to you (and here I mean a very, very small angle - like a couple of light years vs 20cm of aperture - although we don't need to go that far: this holds for the Moon as well, although it is only 384,000km away).

Makes sense?

 

Yep, that makes sense - I can visualise looking at the object from either side of the main mirror. The angle of my gaze would not change for a star.

The same way I can watch the moon out the right hand window in my car and drive 100 miles on a straight road and it will still be there.

Edited by miguel87

2 minutes ago, miguel87 said:

Yep, that makes sense - I can visualise looking at the object from either side of the main mirror. The angle of my gaze would not change for a star.

The same way I can watch the moon out the right hand window in my car and drive 100 miles on a straight road and it will still be there.

Great

The next thing to realize - and here the Moon will be a great help - is that light from different points at infinity arrives at different angles.

Do this thought experiment.

Take a ruler and point it at the center of the Moon. When you look at one edge of the Moon, it is at a slight angle to that ruler. When you look at the other edge of the Moon, it is at a slight angle again, but to the other side.

The angle at which the parallel rays arrive at the aperture is related to where in the sky the point of origin is.

image.png.18fe0080e236edd0e9bac8c8261e661f.png

If the scope is aimed directly at a star, the parallel rays will arrive at 90 degrees to the aperture.

If the scope is not aimed directly at a star, this will happen:

image.png.bfe233389af63783abde49e2ebea754d.png

The rays will arrive at an angle to the front of the lens, and they will converge not directly behind the lens but a bit "lower" - still on the focal plane, but some distance from the center.

This is how an image forms at the focal plane of the telescope - the star in the center of the FOV is the one the scope is aiming at, while a star at the edge is at an angle to the telescope tube.
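This off-axis geometry can be quantified: a point at angle theta to the axis lands a distance f·tan(theta) from the center of the focal plane. A quick sketch (the 1000mm focal length is an assumed example):

```python
import math

focal_length_mm = 1000.0  # assumed example focal length

for off_axis_deg in (0.0, 0.25, 0.5, 1.0):
    # Image height on the focal plane: h = f * tan(theta)
    h_mm = focal_length_mm * math.tan(math.radians(off_axis_deg))
    print(f"star {off_axis_deg:.2f} deg off-axis -> image {h_mm:.2f} mm from center")
```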

