
Imaging Scope Magnification? Stupid question maybe?



Just wanted to know: when imaging with a telescope, aren't you generally fixed at a certain magnification optically? How are close-up shots of nebulae, galaxies, etc. possible with smaller imaging telescopes? Do people just have telescopes with longer focal lengths? Or is it more down to the quality of the camera enabling better digital zoom?

I'm sure it's simple; I just haven't really been able to find an answer.

Thanks


Basically, the telescope focal length sets the image scale on the sensor. Focal reducers can reduce the scale to give a wider framing, and Barlows do the opposite. It's pretty much horses for courses though: choose the scope which gives you the best scale for your target; short focal length for larger nebulae, long focal length for small galaxies and planetary nebulae (PNs).

Of course, faster focal ratios are better for imaging, so longer focal length scopes tend to have larger apertures to keep the focal ratio as fast as possible.

Reader beware, I am a visual observer, not an imager. Let’s see how I did! 😉🤣 


Hi Cuto (is that your name?),

that certainly is not a stupid question (in my opinion stupid questions do not even exist!), but it is certainly not a simple one to answer. In fact you asked several questions, so let's start with the first one:

Quote

aren't you generally fixed at a certain magnification optically

Yes and no. Optically you are limited by what the OTA does for you, but the magnification also depends on the camera resolution, on the eyepiece (in the case of afocal imaging, i.e. through an eyepiece), or, when doing focal imaging (with the camera attached directly to the OTA), on whatever Barlow or reducer is used. Whether or not a certain magnification is useful depends on the physics behind the optics. I have written an article on this on a Dutch forum, which translates quite reasonably when opened in Chrome. In it I explain that optimal performance of a set-up is achieved when f# > 3 × |px| × (i), where f# is the focal ratio of the scope (focal length divided by aperture), |px| is the pixel size of the camera in µm, and (i) is the camera type: (i) = 1 for a monochrome camera and (i) = 2 for a colour camera with a Bayer pattern.
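To make the rule concrete, here is a minimal Python sketch of the formula above (the function name and the example pixel size are my own illustration choices, not from the article):

```python
# Hedged sketch of the f# > 3 x |px| x (i) rule quoted above.
def min_focal_ratio(pixel_size_um, colour=False):
    """Minimum focal ratio for 'optimal' sampling per the rule above;
    (i) = 1 for monochrome, (i) = 2 for a Bayer colour camera."""
    i = 2 if colour else 1
    return 3.0 * pixel_size_um * i

# Example: 3.76 um pixels (a common modern sensor size, my assumption)
print(min_focal_ratio(3.76))        # about f/11.3 for mono
print(min_focal_ratio(3.76, True))  # about f/22.6 for colour
```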

The reason for this formula lies in the smallest detail that can possibly be resolved by a scope of a given focal ratio. A star is imaged as a dot with a radius equal to 0.66 × f# [µm]. The smallest detail in a picture then also depends on whether you want to clearly separate two such dots or just be on the limit. There are three commonly used criteria for that, known as the Rayleigh, Dawes and Sparrow criteria (please follow the link for a clear explanation). Finally, when sampling, we have to comply with the Nyquist criterion, which says that we should sample at an interval of half the smallest detail.

We can increase magnification by adding a Barlow and/or using a camera with smaller pixels, but that will only cause oversampling and not provide additional detail. Oversampling does not hurt the image, but the light is spread over more pixels, resulting in longer exposure times. Undersampling does hurt, as it throws away detail, but it also reduces the exposure time.

Quote

how are the close up shots of nebula, galaxies, etc possible with smaller imaging telescopes

Well... basically the answer is: they are not. It is not possible to get more detail from a smaller telescope, as physics limits our capabilities. There is one exception: drizzling. With drizzling it is possible to recover some extra detail as long as enough images are taken. The reason is that the atmosphere and guiding are not perfect, so a star spreads its light at random over several pixels. My (very limited) experience is that drizzling does not gain much above a factor of about 1.5.

Quote

Do people just have telescopes with longer focal lengths?

Indeed, that is the solution. People do indeed use several scopes of varying focal lengths to image at different scales.

Quote

Or is it more down to the quality of camera to enable better quality digital zoom?

When the camera has the optimum pixel size as explained above, no oversampling occurs, and thus no gain is to be expected from digital zoom.

 

Hope this helps,

Nicolàs

 


3 hours ago, Cuto100200 said:

Just wanted to know: when imaging with a telescope, aren't you generally fixed at a certain magnification optically? How are close-up shots of nebulae, galaxies, etc. possible with smaller imaging telescopes? Do people just have telescopes with longer focal lengths? Or is it more down to the quality of the camera enabling better digital zoom?

I'm sure it's simple I just haven't really been able to find an answer 

Thanks

There is no such thing as magnification when imaging. Magnification amplifies angles and applies when talking about visual use, for example a telescope/eyepiece combination. When you view an image on your computer screen, the apparent zoom depends on how close you are to that screen: place it 50 cm away and it will look reasonably "zoomed in", but stand 10 m away and it will look very small. This shows that an image does not have a magnification.

A telescope + camera is a projection device: the telescope projects an image onto the sensor. Two things are important here. The first is field of view, which depends on telescope focal length and sensor size: how much of the sky are you projecting onto the image?

The second is sampling resolution. That depends on focal length and pixel size (or, if you prefer, on sensor pixel count and field of view; these quantities depend on one another).
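As a rough illustration of these two quantities, here is a small sketch (the example focal length, pixel size and sensor width are assumed values of mine, not from the post):

```python
# Image scale ("/px) and field of view from focal length, pixel size
# and sensor size; 206.265 and 3437.75 convert radians to arcseconds
# and arcminutes respectively, given the mm/um unit mix.
def image_scale_arcsec_per_px(focal_length_mm, pixel_size_um):
    return 206.265 * pixel_size_um / focal_length_mm

def fov_arcmin(focal_length_mm, sensor_size_mm):
    return 3437.75 * sensor_size_mm / focal_length_mm

# Hypothetical 800 mm scope, 3.76 um pixels, 23.5 mm wide sensor:
print(image_scale_arcsec_per_px(800, 3.76))  # ~0.97 "/px
print(fov_arcmin(800, 23.5))                 # ~101', about 1.7 degrees
```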

With sampling rate you can oversample, undersample, or sample just right (Goldilocks, anyone? :D ).

Undersampling is not so bad: it won't record all the detail that could be recorded, but it improves SNR.

Oversampling is rather bad: you are "spending" resolution on empty detail (no detail to be captured), and in doing so you lower your SNR (signal-to-noise ratio, probably the most important thing in imaging after the mount :D ).

Sampling just right is the way to go of course.

2 hours ago, inFINNity Deck said:

Yes and no. Optically you are limited by what the OTA does for you, but the magnification also depends on the camera resolution, on the eyepiece (in the case of afocal imaging, i.e. through an eyepiece), or, when doing focal imaging (with the camera attached directly to the OTA), on whatever Barlow or reducer is used. Whether or not a certain magnification is useful depends on the physics behind the optics. I have written an article on this on a Dutch forum, which translates quite reasonably when opened in Chrome. In it I explain that optimal performance of a set-up is achieved when f# > 3 × |px| × (i), where f# is the focal ratio of the scope (focal length divided by aperture), |px| is the pixel size of the camera in µm, and (i) is the camera type: (i) = 1 for a monochrome camera and (i) = 2 for a colour camera with a Bayer pattern.

The reason for this formula lies in the smallest detail that can possibly be resolved by a scope of a given focal ratio. A star is imaged as a dot with a radius equal to 0.66 × f# [µm]. The smallest detail in a picture then also depends on whether you want to clearly separate two such dots or just be on the limit. There are three commonly used criteria for that, known as the Rayleigh, Dawes and Sparrow criteria (please follow the link for a clear explanation). Finally, when sampling, we have to comply with the Nyquist criterion, which says that we should sample at an interval of half the smallest detail.

We can increase magnification by adding a Barlow and/or using a camera with smaller pixels, but that will only cause oversampling and not provide additional detail. Oversampling does not hurt the image, but the light is spread over more pixels, resulting in longer exposure times. Undersampling does hurt, as it throws away detail, but it also reduces the exposure time.

Ok, that strange F/ratio > 3 x pixel_size_in_um formula is just wrong.

If you are after the optimum sampling rate of a diffraction-limited system (without the influence of the atmosphere, i.e. for planetary / lucky-type imaging), then the correct sampling rate is 2.4 pixels per Airy disk radius (4.8 per Airy disk diameter).

On the other hand, if we are talking about the optimum sampling rate for long-exposure imaging, we need to look at the star FWHM; in this case the close-to-optimum sampling rate in arc seconds per pixel is FWHM / 1.6 (with FWHM expressed in arc seconds, so a 1.6" FWHM needs a 1"/px sampling rate).
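A small sketch of both rules as stated above (the function names, the green-light wavelength default and the example numbers are mine):

```python
# Diffraction-limited case: 4.8 px across an Airy disk of diameter
# 2.44 * lambda * f#, which fixes the largest useful pixel size.
def max_pixel_um_diffraction(f_ratio, wavelength_um=0.55):
    airy_diameter_um = 2.44 * wavelength_um * f_ratio
    return airy_diameter_um / 4.8

# Long-exposure case: optimum sampling of FWHM / 1.6 arcsec per pixel.
def optimum_sampling_arcsec(fwhm_arcsec):
    return fwhm_arcsec / 1.6

print(max_pixel_um_diffraction(10))  # ~2.8 um at f/10 in green light
print(optimum_sampling_arcsec(3.2))  # 2.0 "/px for a 3.2" FWHM
```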

You are correct that a colour sensor should be sampled at half the rate of a monochrome sensor, but many people don't process colour images / separate the colour information in a way that respects that anyway (they interpolate instead of just splitting the colour planes).

 


Dear vlaiv (is that your real name?),

Thanks for adding your view on magnification; that is indeed how I see it as well.

But then you wrote:

Quote

Ok, that strange F/ratio > 3 x pixel_size_in_um formula is just wrong.

If you are after the optimum sampling rate of a diffraction-limited system (without the influence of the atmosphere, i.e. for planetary / lucky-type imaging), then the correct sampling rate is 2.4 pixels per Airy disk radius (4.8 per Airy disk diameter).

Clearly you did not read my article (also see below); if you had, you would have understood why I express the focal ratio as a function of pixel size: the formula is based on the Airy disc of a given OTA in combination with a camera of a given pixel size. Yes, I did my calculations for a diffraction-limited system. Also please have a close look at the images/simulations I generated for my article; you will notice that they do substantiate my focal-ratio formula.

Although I never claimed to be right, I wonder how you arrive at a sampling rate of 2.4 pixels per Airy disk radius (I use 2.0 pixels per radius, based on Nyquist in combination with the Rayleigh criterion), so could you please explain the physics behind that figure of 2.4? As you will see in my article, there are quite a few rules of thumb around when it comes to determining the focal ratio, but most are mere guidelines and certainly not (completely) based on proper physics.

Your other remark:

Quote

On the other hand, if we are talking about the optimum sampling rate for long-exposure imaging, we need to look at the star FWHM; in this case the close-to-optimum sampling rate in arc seconds per pixel is FWHM / 1.6 (with FWHM expressed in arc seconds, so a 1.6" FWHM needs a 1"/px sampling rate).

That is exactly what I wrote in my article (hence my remark that you did not read it). Some people claim that the focal ratio should even be based on the FWHM, but as I explain in my article, this is incorrect, as the FWHM depends on the focal ratio, so the two are interrelated. The FWHM can thus only be used as guidance for binning.

Nicolàs

 


3 minutes ago, inFINNity Deck said:

Dear vlaiv (is that your real name?),

Thanks for adding your view on magnification; that is indeed how I see it as well.

But then you wrote:

Clearly you did not read my article (also see below); if you had, you would have understood why I express the focal ratio as a function of pixel size: the formula is based on the Airy disc of a given OTA in combination with a camera of a given pixel size. Yes, I did my calculations for a diffraction-limited system. Also please have a close look at the images/simulations I generated for my article; you will notice that they do substantiate my focal-ratio formula.

Although I never claimed to be right, I wonder how you arrive at a sampling rate of 2.4 pixels per Airy disk radius (I use 2.0 pixels per radius, based on Nyquist in combination with the Rayleigh criterion), so could you please explain the physics behind that figure of 2.4? As you will see in my article, there are quite a few rules of thumb around when it comes to determining the focal ratio, but most are mere guidelines and certainly not (completely) based on proper physics.

Your other remark:

That is exactly what I wrote in my article (hence my remark that you did not read it). Some people claim that the focal ratio should even be based on the FWHM, but as I explain in my article, this is incorrect, as the FWHM depends on the focal ratio, so the two are interrelated. The FWHM can thus only be used as guidance for binning.

Nicolàs

 

Name is Vladimir, but feel free to call me Vlad :D

With regard to the critical sampling rate: 2.4 px per Airy disk radius, or 4.8 px per Airy disk diameter, is based on the Nyquist sampling theorem. In fact, I did not do the maths, but rather ran a set of simulations with different Airy disks and used an FFT to determine the cut-off point in the frequency domain.

Many people assume that the ×2 in Nyquist has something to do with spatial features like the Rayleigh criterion or similar; it does not. Sometimes I read people saying that ×3 or even ×3.3 is better because we are dealing with the 2D rather than the 1D Nyquist case; again not true: ×2 the maximum frequency is the correct criterion in the 2D grid-sampling case as well.

This applies to a band-limited signal, and telescope optics does indeed provide a band-limited signal; the point is finding which frequency represents the cut-off.

If you generate the Airy disk PSF (that would be the FT of the aperture, so a simple circle, or perhaps an obstructed circle with/without spider support), you can also generate the MTF of it, which is the FT of the Airy disk. That represents the Airy disk PSF in the frequency domain. It is also a description of the blur the optical system introduces: how much attenuation there is for a certain spatial frequency.

[Figure: MTF of a circular aperture, with the cut-off frequency marked at 1 on the X axis]

In this case the X axis of the MTF that I marked is in relative frequency units, but in general it is in absolute frequency units, which can be expressed as cycles per arc second. At one point, here marked with 1, the MTF falls to 0 and there are no higher frequencies; all have been attenuated to 0. This is our cut-off point.

Mathematically, you need to take the Airy disk function, do a Fourier transform of it, equate the result to 0, and find at what frequency it reaches 0 in order to find the cut-off frequency. I might do that some day, but it is not an easy thing to do, as the Airy disk function is not trivial.

In any case, I did simulations with different Airy disk sizes and "measured" where the resulting cut-off point is (using FFTs to generate both the Airy disks and the MTFs). It turns out that the relationship is about ×2.4 of the Airy disk radius.
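The simulation described can be sketched in a few lines of NumPy (the grid size and aperture radius are arbitrary illustration values of mine, not vlaiv's actual parameters):

```python
import numpy as np

# Build a circular aperture, FFT it to get the PSF (Airy pattern),
# FFT the PSF to get the MTF, then find where the MTF drops to zero.
N = 1024
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
aperture = (x**2 + y**2 <= 32**2).astype(float)   # circle, radius 32 px

psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2  # Airy pattern
mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
mtf /= mtf.max()

# The MTF of a circular aperture is its autocorrelation, so it reaches
# zero at about twice the aperture radius in frequency pixels.
profile = mtf[N//2, N//2:]          # radial cut from the centre outwards
cutoff = np.argmax(profile < 1e-6)  # first effectively-zero frequency
print(cutoff)                       # roughly 2x the aperture radius
```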

Later I found out that this value could be related to the 1.22 constant of the first zero of the corresponding Bessel function, and that the actual value I measured above is 2.44 rather than 2.4, but I'll need to see some maths to confirm this.

As for the relationship to the FWHM: the theory behind it is the same, except that I used a Gaussian profile to model the PSF and did a Fourier transform of the Gaussian (which was easy to do), then looked at the cut-off frequency, which I took to be where a frequency is attenuated to less than 1% of its original value. From that frequency and the Nyquist criterion I determined the factor for the "optimum" sampling rate.

I wrote about both of these approaches here on SGL; let me see if I can find the corresponding threads.

 


I wonder whether this discussion is actually helping to inform the OP? It feels like, rather than clearly explaining some introductory concepts, we have dived into very in-depth imaging theory which, whilst interesting, may not be useful at this stage.


Hi Vlad,

thanks for your elaborate reply, that makes more sense. So basically we both did the same thing: we both ran simulations to verify/derive a figure, but we found a difference, and that is perhaps because we looked at it in different ways.

If you look in my article at figures 10 and 13, you can see that, for an object at the Rayleigh criterion, a sampling of 2.0 times per Airy-disc radius suffices to start seeing detail (in both figures I show the sampling as a function of Airy-disc radius and of |px|). I admit that it is not brilliantly rich detail at that level, and I therefore understand how you arrived at your figure of 2.4.

In my article, figure 14 shows an animation of a double circle, generated as a circular Rayleigh object. The animation repeatedly goes from undersampling to oversampling and back again. If you look at the value in the centre at the moment you see the double circle (dis)appear, you will know at what sampling rate the correct sampling is done, at least for your perception (these kinds of tests are rather subjective). For me this again is just over 3 × |px|, which corresponds to a sampling of 2.0 × the Airy-disc radius. So your 2.4 × Airy-disc radius sampling rate corresponds to my 3.7 × |px|; it is just slightly finer and therefore provides more detail than my 3 × |px| (2 × Airy-disc).

Nicolàs


32 minutes ago, Stu said:

I wonder whether this discussion is actually helping to inform the OP? It feels like rather than clearly explaining some introductory concepts, we have dived into very in depth imaging theory which, whilst interesting, may not be useful at this stage.

Hmmm, you have a point there for sure, but for me it is difficult to explain this clearly without going into it with a bit of depth. I tried to keep it simple, so apologies to the OP if this has become too technical...

cheers,

Nicolàs

 


11 minutes ago, inFINNity Deck said:

Hmmm, you have a point there for sure, but for me it is difficult to explain it clearly without going into it with a bit of depth. I tried to keep it simple. So apologies to the OP if this has become too technical...

cheers,

Nicolàs

 

Thanks Nicolàs


1 hour ago, Stu said:

I wonder whether this discussion is actually helping to inform the OP? It feels like rather than clearly explaining some introductory concepts, we have dived into very in depth imaging theory which, whilst interesting, may not be useful at this stage.

Notwithstanding this comment, a telescope produces a real image on the imaging sensor, and in so doing you can define a linear magnification.

Given the distance of astronomical targets, it is very much less than one!

Take the Moon, with a diameter of about 3500 km: it can be projected as an image of about 1 cm by a telescope, a magnification of about 3 × 10⁻⁹ (assuming I did the sums right).
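A quick sanity check of that sum (a throwaway sketch; values rounded as in the post):

```python
# Linear magnification = image size / object size for the Moon example.
moon_diameter_m = 3.5e6    # ~3500 km
image_diameter_m = 0.01    # ~1 cm projected image
magnification = image_diameter_m / moon_diameter_m
print(magnification)       # ~2.9e-9
```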

The longer the focal length the less the demagnification. 

Regards Andrew 

 


14 minutes ago, andrew s said:

Notwithstanding this comment, a telescope produces a real image on the imaging  sensor and in so doing you can define a linear magnification.  

Given the distance of astronomical targets it is very much less than one! 

Take the Moon, with a diameter of about 3500 km: it can be projected as an image of about 1 cm by a telescope, a magnification of about 3 × 10⁻⁹ (assuming I did the sums right).

The longer the focal length the less the demagnification. 

Regards Andrew 

 

I'm afraid I don't really follow your logic/argument here. You say that the imaging system has a linear magnification, but that would mean that any object of size X is mapped to an image of size Y.

If you check, the Moon and the Sun, while having vastly different diameters, form images of the same size.

1 hour ago, Stu said:

I wonder whether this discussion is actually helping to inform the OP? It feels like rather than clearly explaining some introductory concepts, we have dived into very in depth imaging theory which, whilst interesting, may not be useful at this stage.

Hopefully the OP will find a good enough answer in the non-technical parts, as I'm certain that all participants set off to answer the original question as best they could. I do agree that it is rather easy to slip into a much more technical discussion of the issue at hand. @inFINNity Deck, maybe we could continue this discussion over PMs once I have read the article/post you referred to, or perhaps @Stu could be so kind as to split the technical part into a thread of its own, if there is enough valuable info here for other members of SGL?


3 minutes ago, vlaiv said:

I'm afraid I don't really follow your logic/argument here. You say that the imaging system has a linear magnification, but that would mean that any object of size X is mapped to an image of size Y.

If you check, the Moon and the Sun, while having vastly different diameters, form images of the same size.

That's because they are at very different distances. Basic geometric optics gives the linear magnification as image size / object size = image distance / object distance (to a good approximation in this case).

The image sizes may be the same, but the magnification is different in the two cases.
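To illustrate (a sketch with rounded textbook values of my choosing): with a hypothetical 1 m focal length, the Moon and Sun give similar image sizes but very different magnifications.

```python
# m = image distance / object distance ~ focal length / object distance
# for very distant objects; image size = m * object size.
focal_length_m = 1.0
objects = {
    "Moon": (3.474e6, 3.844e8),   # (diameter m, distance m)
    "Sun": (1.392e9, 1.496e11),
}
for name, (diameter, distance) in objects.items():
    m = focal_length_m / distance
    print(name, m, diameter * m)  # magnification, image size (metres)
# Both image sizes come out near 9 mm, even though the magnifications
# differ by a factor of a few hundred.
```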

Regards Andrew 


2 minutes ago, andrew s said:

That's because they are at very different distances. Basic geometric optics gives the linear magnification as image size / object size = image distance / object distance (to a good approximation in this case).

The image sizes may be the same, but the magnification is different in the two cases.

Regards Andrew 

Or, in other words, a projection system from angles to linear distances, where a given angle always corresponds to the same distance (to a good approximation)?


1 minute ago, vlaiv said:

Or, in other words, a projection system from angles to linear distances, where a given angle always corresponds to the same distance (to a good approximation)?

Yes, because the object distance is, in this case, very much greater than the telescope focal length, so the image is, to a good approximation, at the focal point.

It is not generally true.

Regards Andrew 

 

