Magnification versus image ratio - rough calcs



I've just modified an old webcam by removing the lens so that it would fit inside my eyepiece holder.

I photographed a house that is 200m away and the image showed 2 courses of brick, which is roughly 200mm, giving a ratio of 1000:1.

When I viewed the same house with a 40mm eyepiece, 20 courses of brick were showing. The focal length of my scope is 2350mm (CPC925), which gives a magnification of about 59x.

Given that the webcam shows only 2 courses of brick, does that mean the magnification is 590x?

Why is this figure not 1000x, as per the image ratio?

I seem to have said this a lot recently :D  Magnification doesn't really mean anything when imaging.  There's nothing being magnified.  Magnification is a visual thing.

What you want to know about for imaging is "plate scale" or "image scale".  That is, how much of the sky appears on some given unit of the camera sensor size -- typically that might be arcseconds per mm or arcseconds per pixel.

To calculate the former you just divide 206265 by the focal length, so 206265 / 2350 in this case, or 87.8.  So each millimetre of your camera sensor covers 87.8 arcseconds of the target (or about one and a half arcminutes).  If you know the pixel size you can then work out how much is covered by one pixel and how much is covered by the sensor (effectively the field of view for the image).
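
A rough sketch of that sum in Python; the 5.6 micron pixel size is just an assumed figure for an old webcam, so substitute your camera's real value:

    focal_length_mm = 2350                      # CPC925
    scale_arcsec_per_mm = 206265 / focal_length_mm
    print(scale_arcsec_per_mm)                  # ~87.8 arcseconds per mm of sensor

    pixel_size_mm = 0.0056                      # assumed 5.6 micron pixels
    print(scale_arcsec_per_mm * pixel_size_mm)  # ~0.49 arcseconds per pixel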

James



As James said, the term magnification is meaningless in AP terms. Your telescope forms an image onto a sensor, and the size of that image is always the same, because it is solely a function of the scope's focal length and nothing else; that is the science of it. If you use a webcam for recording or viewing, which has a small sensor, you will see a small portion of the formed image; if you use a CCD with a larger sensor you will see more of the formed image; and if you use a DSLR you will see even more of it, but the size of the image is always the same. That is why, particularly in AP, we end up with at least a few scopes of different focal lengths, so that we can have differing image scales for different DSOs. If you want a larger FOV, use a larger sensor. Forget magnification in imaging.
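
To put rough numbers on "the size of the image depends only on focal length", here is a quick sketch using the Moon's half-degree diameter as the target (the two focal lengths are just illustrative):

    import math

    def image_size_mm(angular_size_deg, focal_length_mm):
        # Linear size of the image at the focal plane, whatever sensor sits there
        return focal_length_mm * math.radians(angular_size_deg)

    print(image_size_mm(0.5, 2350))  # ~20.5mm image of the Moon at 2350mm (CPC925)
    print(image_size_mm(0.5, 600))   # ~5.2mm image of the Moon at 600mm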

A.G


Presumably, the image will only be the same size if different sensors sit in the light path at the same point; I suspect the sensor in my DSLR sits further away than the sensor in my ZWO, so there will be an extra bit of focal length when using my DSLR.

Jd



This can be true in a Mak or SCT, but not in the general case.  The reason it's true for a Mak or SCT is that they contain multiple optical components and the effective focal length varies with the distance between those components.  Because they are focused by moving the primary mirror so that the focal plane coincides with the sensor, the focal length changes.  In general, with a newt or 'frac the focal plane stays in the same place and the focal length doesn't change -- you have to move the camera sensor to the focal plane, not the other way around.

(In this case, a flat mirror doesn't count as an "optical component".)

James


I do still struggle with the concept that there is no 'magnification' when I look at an image feed of Jupiter on my laptop coming from a camera at prime focus on a scope.

Jd

It's an interesting exercise just to sit and think about what it actually means in terms of the physics when you say you're looking at, say, Jupiter at 100x magnification.  What exactly is being magnified, and how do you know that?  And if you only think you know because someone else told you, how do you know they know what they were talking about? :D

James


JamesF: thanks for the replies. The trouble is I am quite thick when it comes to physics / maths / optics... in fact most things astro-related.

When I look at Jupiter with my eye in the night sky, I can't see any moons, just a bright dot.

When I look at Jupiter on my laptop via an imaging camera on my scope I can see colour and bands and moons and the GRS. So is that not somehow magnified compared to when I look at it with the naked eye?

Jd



It's a good question.  And not one to which I am sure I have the answer :)

The more I question the things I think I know, the less I realise I actually do know :D

James


Hi. I have just been trying to work through the same issues. I think I have some kind of answer, or at least a way of understanding it that makes sense to me - most of this came from other guys on the forum, so thanks to them.

When viewing directly with the eye, "magnification" as a term applied to binoculars and telescopes makes sense as a concept. If bins are 8x, this means that when looking through them the image at the back of your eye will be 8x bigger. If using a telescope, say 1000mm focal length with an 8mm eyepiece, the magnification will be 1000/8 = 125. So the image on your retina will be 125x larger than with the naked eye. Easy.

When using a camera sensor, things change. Assuming prime focus, i.e. using a telescope objective lens (or mirror) and no eyepiece, the image is formed on the camera sensor. It doesn't make any difference which sensor you use; the actual image size remains the same. If you use a small sensor, like a webcam's, the image may cover the entire sensor. When using a larger sensor, like that of a typical DSLR camera, that same image will only cover a small part of the sensor. Most PC/laptop set-ups are set to display images to fill your screen, "resizing" if necessary.

So, the image of a given object from the small sensor will appear larger. The "two brick example" illustrates this nicely.

The key point is that with the larger sensor, although your subject may appear smaller on screen, you can increase its size on your screen without losing clarity, because your image has better resolution.

In practical terms, this means:

Webcam image - looks bigger initially on screen, but if you try to increase the image size it will degrade.

DSLR image - looks smaller initially, but screen size can be increased without losing resolution.

The other key thing is that you will find it easier to locate your subject with the DSLR set-up, because you can see more sky with the bigger sensor.

Using a x2 Barlow will double the size of the image for either sensor.

Using eyepiece projection can also increase the image size.
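
To put some illustrative numbers on the webcam versus DSLR point, a quick sketch; the 3.6mm and 22.3mm chip widths and the 1000mm focal length are assumed, typical values rather than any particular kit:

    def fov_degrees(chip_mm, focal_length_mm):
        # True field of view across one chip dimension
        return (chip_mm * 57.3) / focal_length_mm

    focal_length = 1000                          # assumed 1000mm scope
    print(fov_degrees(3.6, focal_length))        # small webcam chip: ~0.21 degrees of sky
    print(fov_degrees(22.3, focal_length))       # APS-C DSLR chip: ~1.28 degrees of sky
    print(fov_degrees(22.3, focal_length * 2))   # add a 2x Barlow: FOV halves (~0.64 degrees)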

I think there is a reticence to use the term "magnification" for two reasons. Firstly because this is a relative term - the image of Jupiter from your scope isn't actually larger than Jupiter.  And secondly because magnifying things with an optical set-up like binoculars or a scope isn't the same as capturing an image on a sensor and then displaying it on a screen.

Hope that helps!



JamesF: thanks for the replies. The trouble is I am quite thick when it comes to physics / maths / optics... in fact most things astro-related.

When I look at Jupiter with my eye in the night sky, I can't see any moons, just a bright dot.

When I look at Jupiter on my laptop via an imaging camera on my scope I can see colour and bands and moons and the GRS. So is that not somehow magnified compared to when I look at it with the naked eye?

Jd

When looking at a distant object with your eye alone you are actually using a lens, the eye, which has a very small aperture and forms an image on the receptors at the back of your eyeball. When looking through a telescope you are using an eyepiece; the eyepiece acts as a magnifier, presenting an enlarged view of the real image formed at the telescope's focal plane. A sensor has no magnifying capability; it just records the light falling upon it. You see the moons and colour because the aperture of the scope is orders of magnitude larger than your eye's and grasps a lot more light, detail and colour. The human eye is very sensitive to detail but not so much to colour, which is why, when viewing DSOs, we see no real colour, at best hints of blue or green.

A.G



I'll add a question to this thread which may also show why people ask about magnification at prime focus.

We all treat 1x magnification as the size the standard eye can see an image at (much smaller for me as I have bad eyesight so glasses make the subject smaller).

Now when, like me, you have a 650mm focal length scope with a 10mm EP, you get a magnification of 65x.

What I think people (newbies like me) don't get their heads around is that when you do prime focus we are always told there is no magnification, other than that you have a 650mm lens.

So if there were no magnification, the image being projected through the scope, onto whatever it is being projected, would be the same size as you can see with the naked eye. But it isn't, because the image has been magnified; if it hadn't been, you wouldn't need a telescope.

Sorry if I'm making a few people swear at this point but there has to be a magnification element to start with.

Yes, I can see why people use coverage of the sky when talking about imaging, as it is far clearer than all the talk about magnification; since looking at the sky is like looking at a map, you say how much of an area you can see. However, even a map has a scale to it.



Remember that the pixels on your camera sensor are something like 0.004mm across. The pixels on your laptop screen are 0.25mm across. That is 60 times larger. So something that took up 1mm on the camera sensor would display as 6cm on the laptop.
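
As a rough sanity check on those figures (both pixel sizes are just typical round numbers):

    sensor_pixel_mm = 0.004   # typical small camera pixel
    screen_pixel_mm = 0.25    # typical laptop pixel pitch

    scale_up = screen_pixel_mm / sensor_pixel_mm
    print(scale_up)           # ~62x larger
    print(1.0 * scale_up)     # 1mm on the sensor becomes ~62mm (about 6cm) on screen at 1:1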



Here's an article explaining how to calculate the focal length of an SCT or Mak:

http://www.cloudynights.com/item.php?item_id=2410

It is for visual rather than imaging use, but it illustrates the point that you focus by moving the focal plane (by moving the mirror), and that changes the focal length of the scope.  For newts and refractors you focus by moving the camera to intercept the focal plane, so the focal length is fixed.

As others have said, magnification is a misleading term for imagers.  For visual it makes sense to work out the scale of the image relative to the naked eye, but for imaging you only care about the field of view and the pixel scale.

- The field of view is how much of the sky you can fit on to your camera chip.  All you care about here is whether you have a big enough field of view to cover your whole target or not. The field of view depends on the scope's effective focal length (i.e. its focal length multiplied by the effect of any barlow lens or focal reducer that you are using) and the size of the active part of the sensor chip. You calculate it as follows (ensure all units are in mm):

field of view height = (chip height x 57.3) / telescope effective focal length

field of view width = (chip width x 57.3) / telescope effective focal length

So, for example, I use a Canon 500D with a Skywatcher ED80 and a 0.85x focal reducer:

fov height = (14.89mm x 57.3) / (600mm x 0.85)

                 = 1.67 degrees

fov width = (22.3mm x 57.3) / (600mm x 0.85)

               = 2.51 degrees
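
The same field of view sums as a short Python sketch, using the figures above:

    def fov_degrees(chip_mm, focal_length_mm, corrector=1.0):
        # corrector = Barlow factor (>1) or focal reducer factor (<1)
        return (chip_mm * 57.3) / (focal_length_mm * corrector)

    print(fov_degrees(14.89, 600, 0.85))  # ~1.67 degrees high
    print(fov_degrees(22.3, 600, 0.85))   # ~2.51 degrees wide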

- The pixel scale is basically how much detail there is in the image.

pixel scale vertical = (fov height x 3600) / vertical resolution

                              = (1.67 x 3600) / 3168 pixels

                              = 1.9 arcseconds per pixel

pixel scale horizontal = (fov width x 3600) / horizontal resolution

                                 = (2.51 x 3600) / 4752

                                 = 1.9 arcseconds per pixel
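
And the pixel scale step, using the 500D's 4752 x 3168 resolution from above:

    def pixel_scale_arcsec(fov_deg, resolution_px):
        # Arcseconds of sky covered by one pixel along that dimension
        return (fov_deg * 3600) / resolution_px

    print(pixel_scale_arcsec(1.67, 3168))  # ~1.9 arcsec/pixel vertically
    print(pixel_scale_arcsec(2.51, 4752))  # ~1.9 arcsec/pixel horizontally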

(Note that most modern cameras have square pixels, so the horizontal and vertical pixel scales should be pretty much the same.  There are exceptions though and some cameras have rectangular pixels so the horizontal and vertical scales can be different in that case.  There are other ways to work out the pixel scale using the pixel size, see the link below for more).

The pixel scale tells you two things:

- Firstly it tells you how much 'detail' there might be in the image.  A small pixel scale will be more detailed than a large one.  So if you were using a camera with big pixels (say 8um square) and a scope with a focal length of 1000mm, you'd get a pixel scale of 1.65 arcseconds per pixel.  If you used a camera with small pixels (say 4um square) and a scope with a focal length of 500mm you'd get (guess what!) a pixel scale of 1.65 arcseconds per pixel.
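
That equivalence is easy to check with the pixel-size form of the calculation mentioned above (the 8 and 4 micron pixels are just the example values):

    def pixel_scale_from_pixel(pixel_um, focal_length_mm):
        # Arcseconds per pixel directly from pixel size and focal length
        return 206265 * (pixel_um / 1000.0) / focal_length_mm

    print(pixel_scale_from_pixel(8, 1000))  # ~1.65 arcsec/pixel
    print(pixel_scale_from_pixel(4, 500))   # ~1.65 arcsec/pixel - identical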

Now provided the target fits in to the field of view of both setups (question one above!) then you will have exactly the same amount of detail in each image, and you can zoom in and out to your heart's content in Photoshop or whatever.  Hence magnification is a bit of a meaningless concept when imaging.  You can make the area covered by the image larger by increasing the focal length, but you can achieve exactly the same effect by making the camera pixels smaller (subject to some limits discussed below).  So forget magnification!

- The second thing pixel scale helps you work out is whether you will in fact resolve the image to that level of detail or not.  This is a bit more complicated (and perhaps contentious) but:

- In a single image you can't beat the resolving power of the scope which depends on its aperture.  The theoretical method is the Rayleigh Criterion (http://en.wikipedia.org/wiki/Angular_resolution), but in practice we use the Dawes Limit (http://en.wikipedia.org/wiki/Dawes_limit).  Taking the aperture of the scope in cm, the Dawes limit is:

resolving power = 11.6 / aperture

So for my 80mm (8cm) 80ED, we get

resolving power = 11.6 / 8

                          = 1.45 arcseconds
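
Or as a one-liner (aperture in cm, result in arcseconds):

    def dawes_limit_arcsec(aperture_cm):
        return 11.6 / aperture_cm

    print(dawes_limit_arcsec(8))  # ~1.45 arcsec for an 80mm scope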

Second, we need to sample the image, the starting point for which is the Nyquist Theorem (http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem), which basically says that you have to take samples at twice the frequency of the highest frequency you want to resolve.  Now strictly speaking this applies to waveforms and infinitely small sample points, which isn't quite the case for images and cameras with pixels that are a lot larger than infinitely small, so there is some controversy.

The upshot though is that your pixel scale needs to be at least 2 times smaller (and probably 3 times smaller) than the smallest thing you want to resolve.  So with a pixel scale of 1.65 arcseconds per pixel I can reasonably expect to resolve details that are somewhere between 3.3 and 4.95 arcseconds in a single image.  Bear in mind though that if that figure is smaller than the Dawes limit, I won't resolve it regardless.
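
A quick sketch of that sampling check, using the 1.65 arcsec/pixel example and the 1.45 arcsec Dawes limit from above:

    pixel_scale = 1.65   # arcsec per pixel (example setup above)
    dawes = 1.45         # arcsec (80mm aperture)

    optimistic = max(2 * pixel_scale, dawes)     # 2x sampling factor
    conservative = max(3 * pixel_scale, dawes)   # 3x sampling factor
    print(optimistic, conservative)              # 3.3 and 4.95 arcsec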

The other factor when thinking about resolving power is seeing.  Seeing in the UK typically limits you to resolving things of between 2 and 4 arcseconds (anything smaller gets blurred by atmospheric turbulence).  It can occasionally be better, and can definitely be worse than this.

Finally, using image stacking techniques you can beat the seeing for planetary and lunar imaging by taking lots of (short exposure) images and averaging them out.  It's not that simple for long exposure imaging though.

If you need to calculate any of this (including other ways of determining fov and pixel scale) try the link in my signature below.


What I think people (newbies like me) don't get their heads around is that when you do prime focus we are always told there is no magnification, other than that you have a 650mm lens.

So if there were no magnification, the image being projected through the scope, onto whatever it is being projected, would be the same size as you can see with the naked eye. But it isn't, because the image has been magnified; if it hadn't been, you wouldn't need a telescope.

Sorry if I'm making a few people swear at this point but there has to be a magnification element to start with.

Yes, I can see why people use coverage of the sky when talking about imaging, as it is far clearer than all the talk about magnification; since looking at the sky is like looking at a map, you say how much of an area you can see. However, even a map has a scale to it.

Ah, now, this appears to open up another avenue of interest :)

If I've read you correctly what you're saying is that magnification might be deemed to have taken place because the image at the focal plane is larger than that which is present in the human eye.

But if you think about what a telescope does, it takes all the light that falls onto a large surface area -- the aperture of your telescope -- and compresses it down so the entire image will fit down (say) a 1.25" drawtube.  One might view that as entirely the opposite of what a layman might term magnification (someone who deals with optics might be quite happy to call a reduction "negative magnification", I'm sure).

I'm struggling to see why magnification should be related to eyes.  It's just the ratio of the focal lengths of the telescope and the eyepiece.  Which kind of breaks down if you're using a camera, because the camera has no focal length of its own, so you can't have a ratio.

I think I'm slowly fumbling my way towards the idea that when we talk about magnification what we're actually referring to is some sort of relationship between the aperture and the exit pupil size.  I need to go back to the maths and work that through to see if it has any potential for validity.
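
For what it's worth, the standard relationships do tie these quantities together; a quick sketch (the 80mm aperture, 600mm focal length and 10mm eyepiece are just an illustrative combination):

    aperture_mm = 80
    scope_fl_mm = 600
    eyepiece_fl_mm = 10

    magnification = scope_fl_mm / eyepiece_fl_mm   # 60x
    exit_pupil_mm = aperture_mm / magnification    # ~1.33mm
    print(magnification, exit_pupil_mm)
    print(aperture_mm / exit_pupil_mm)             # back to 60x: magnification = aperture / exit pupil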

James


Now provided the target fits in to the field of view of both setups (question one above!) then you will have exactly the same amount of detail in each image, and you can zoom in and out to your heart's content in Photoshop or whatever.  Hence magnification is a bit of a meaningless concept when imaging.  You can make the area covered by the image larger by increasing the focal length, but you can achieve exactly the same effect by making the camera pixels smaller (subject to some limits discussed below).  So forget magnification!

Sorry I should have said 'make the area covered by the image larger by decreasing the focal length'.


Remember that the pixels on your camera sensor are something like 0.004mm across. The pixels on your laptop screen are 0.25mm across. That is 60 times larger. So something that took up 1mm on the camera sensor would display as 6cm on the laptop.


So the computer is "magnifying" the image?

Jd


IanL, that's a lot of information to take in, but after reading it a couple of times and trying to find the specs for my 300D etc. I think I have something that makes sense.

Now, according to Canon's specs, I can use them as below, with a 2x Barlow in for the larger focal length.

Chip height: 15.1 mm, focal length 1300 mm, FOV 0.67 degrees
Chip width: 22.7 mm, focal length 1300 mm, FOV 1.00 degrees

What does it tell me and how does it help with magnification that is so hit and miss with imaging?

I can see that with the width of my camera chip I can see 1 degree of sky, and I know that the Moon is visible in around 0.5 degrees of sky. With my 10mm EP and no Barlow I also get around 1 degree FOV, as the Moon fills about half the width of the EP. So this is telling me, in magnification terms, that if the camera sensor through the Barlow is lined up correctly I should get the same FOV as the 10mm EP with no Barlow. This tells me that the actual image being recorded onto the camera (note, at the point of recording) would have been at around 32x magnification.

Now the moment you view that image on the computer everything scales up.

Screen height: 1050 pixels, pixel pitch 0.282 mm, screen size 296.1 mm, 50% image size 288.768 mm
Screen width: 1680 pixels, pixel pitch 0.282 mm, screen size 473.76 mm, 50% image size 433.152 mm

The table above now tells me that my image from the camera, displayed at 50% on my monitor at work, will be 433mm wide.
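
The same sum in a couple of lines, assuming the 300D's 3072 x 2048 pixel output and the 0.282mm monitor pixel pitch from the table:

    image_w_px, image_h_px = 3072, 2048   # Canon 300D output (assumed)
    display_scale = 0.5
    pixel_pitch_mm = 0.282

    print(image_w_px * display_scale * pixel_pitch_mm)  # ~433mm wide on screen
    print(image_h_px * display_scale * pixel_pitch_mm)  # ~289mm high on screen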

So I can see why people do not talk about magnification when talking about images, as how big the image looks in front of you depends entirely on how you are viewing it. However, where it does help is that if you know the FOV for the imaging device, as per the first calculations, you can compare it to the FOV you get with an EP, making it easier to instantly imagine how much of the sky you will capture. This may be easier for newbies like me, as I know from using the EP how much I can see when I look at an object, and I get a direct comparison with how much will be captured by the imaging device. For us it is easier to convert the 10mm or 20mm EP to magnification rather than to FOV in the sky, and then try to relate that to the width an object appears in the sky. Learning the widths of objects once you know the FOV can be a great advantage, as you can then look at maps etc. and see how much of the sky they cover before you even look through an EP.

The only part of the calculation I'm confused about is why you multiply the camera chip size by 57.3. Where does the 57.3 come from?

Paul



The basic formulae used for these calculations use angles measured in radians because it's a unit that works nicely mathematically.  There are 57.3 degrees in one radian, so it's just a conversion factor between the two units.  I often use 206265 instead, which is the same sort of conversion but into arcseconds rather than degrees.
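
Both conversion factors fall straight out of a one-line check in Python:

    import math

    print(math.degrees(1))         # 57.2958...  degrees in one radian
    print(math.degrees(1) * 3600)  # 206264.8... arcseconds in one radian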

James


Paul, I agree with your comments about the field of view; I think it is a useful practical comparison for people, even if not mathematically correct, though one might also have to state what the eyepiece's apparent field of view is and indicate the telescope's focal length. I suspect this is why it is easier to compare arcminutes of sky, as this is a fixed value which would be transferable between scopes.

Jd


I can see that with the width of my camera chip I can see 1 degree of sky, and I know that the Moon is visible in around 0.5 degrees of sky. With my 10mm EP and no Barlow I also get around 1 degree FOV, as the Moon fills about half the width of the EP. So this is telling me, in magnification terms, that if the camera sensor through the Barlow is lined up correctly I should get the same FOV as the 10mm EP with no Barlow. This tells me that the actual image being recorded onto the camera (note, at the point of recording) would have been at around 32x magnification.

But if you have a 10mm ep with a 50 degree field of view in that scope, and a 5mm ep with a 100 degree field of view in the same scope, the Moon might fill half of the field of view in both.  What then would you say your magnification figure is?

James


The basic formulae used for these calculations use angles measured in radians because it's a unit that works nicely mathematically.  There are 57.3 degrees in one radian, so it's just a conversion factor between the two units.  I often use 206265 instead, which is the same sort of conversion but into arcseconds rather than degrees.

James

But why multiply the chip size by 1 radian?


The formula for the image scale of the telescope is just 1/f, where f is the focal length, but the result of that formula is in radians per (whatever unit f was in).  So, say, a 500mm focal length telescope has a scale of 1/500, or 0.002 radians per mm.  Multiplying by 57.3 gives you 0.115 degrees per mm.  Multiplying by the width of the sensor in mm, say 22mm for a DSLR, gives just over 2.5 degrees of field of view across the sensor.

James

