FOV and Pixel Scale Calculator


04Stefan07

Recommended Posts

I am looking at purchasing a 0.8x reducer for my telescope and wanted to check the FOV/pixel scale with my camera in SGP using the mosaic wizard. I know there are great calculators on the Astronomy Tools and Sky at Night websites, but I wanted to compare with SGP. SGP doesn't let you enter a focal reducer, so you have to work out the reduced focal length with an online calculator to get your new scale. Below is my setup.

- Explore Scientific 102 APO Triplet (714mm Focal Length)
- ZWO ASI178MM-Cooled

WITHOUT focal reducer
= 0.69"/pixel

WITH x0.8 focal reducer
= 0.87"/pixel

I just wanted to confirm if I am doing it correctly?

Thanks,
Stefan.
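[Editor's note] The numbers above can be checked with the standard pixel-scale formula, scale ≈ 206.265 × pixel size (µm) / focal length (mm). A minimal sketch, assuming the ASI178MM's published 2.4 µm pixel size:

```python
# Pixel scale check: scale ["/px] = 206.265 * pixel_size_um / focal_length_mm
PIXEL_UM = 2.4     # ASI178MM pixel size (ZWO spec)
FOCAL_MM = 714.0   # Explore Scientific 102 APO triplet

def pixel_scale(focal_mm, pixel_um):
    """Arcseconds per pixel for a given focal length and pixel size."""
    return 206.265 * pixel_um / focal_mm

native = pixel_scale(FOCAL_MM, PIXEL_UM)          # without reducer
reduced = pixel_scale(FOCAL_MM * 0.8, PIXEL_UM)   # with 0.8x reducer
print(f"native:  {native:.2f}\"/px")   # ~0.69"/px
print(f"reduced: {reduced:.2f}\"/px")  # ~0.87"/px
```

Both results round to the figures quoted in the post, so the calculation method is sound.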


29 minutes ago, 04Stefan07 said:

I am looking at purchasing a 0.8x reducer for my telescope and wanted to check out the FOV/scale with my camera in SGP using the mosaic wizard. … I just wanted to confirm if I am doing it correctly?

Yes, that looks correct. You won't be able to resolve better than 1.14"/pixel (Dawes limit = 116/102, without taking seeing into account) so you'll be undersampling. Not to mention a rather small FOV - 0.74 x 0.5 deg

Louise
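[Editor's note] Louise's figures can be reproduced directly: Dawes limit ≈ 116 / aperture (mm) in arcseconds, and the FOV follows from the ASI178MM's 3096 x 2080 sensor at the reduced scale. A rough check (exact rounding of the FOV may differ slightly from the figure quoted):

```python
# Dawes limit and field of view for the proposed setup
APERTURE_MM = 102
SCALE = 0.87              # "/px with the 0.8x reducer
SENSOR_PX = (3096, 2080)  # ASI178MM resolution

dawes = 116 / APERTURE_MM  # equivalently 11.6 / aperture in cm
fov_deg = tuple(n * SCALE / 3600 for n in SENSOR_PX)
print(f"Dawes limit: {dawes:.2f}\"")                       # ~1.14"
print(f"FOV: {fov_deg[0]:.2f} x {fov_deg[1]:.2f} deg")     # ~0.75 x 0.50 deg
```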


5 hours ago, Thalestris24 said:

Yes, that looks correct. You won't be able to resolve <1.14"/pixel (Dawes limit = 11.6/10.2 without taking seeing into account) so you'll be undersampling. Not to mention a rather small fov - 0.74.x 0.5 deg

Louise

Don't you mean oversampling Louise?

If you bin 2x2 then that helps out on the sampling rate

 


2 hours ago, newbie alert said:

Don't you mean oversampling Louise?

If you bin 2x2 then that helps out on the sampling rate

 

Yes, sorry - it was late. What I meant to emphasize was that you can't achieve a resolution finer than the Dawes limit allows, plus there's the seeing limitation on top of that.

Louise


1 hour ago, Thalestris24 said:

Yes, sorry - it was late. What I meant to emphasize was that you can't achieve a resolution greater than the Dawes Limit plus the seeing limitation.

Louise

I'm not too sure or straight in my head about the Dawes limit myself yet. I undersample at 3 (arcsec per pixel, I think it is?) and I don't get the blocky stars they say I should - maybe if I printed an image as a 50x40 print I might see it, but on a laptop or my phone I don't. I know plenty who undersample but don't take any notice of sampling rate. Not sure if it's a true representation of digital photography, but like I say it's not straight in my mind yet.


15 minutes ago, newbie alert said:

Not too sure or straight in my head about dawes limit myself yet..I undersample myself at 3 (arc secs per pixel, I think it is?) I don't get blocky stars as they say I should, maybe if I printed a image on a 50x40 print I may see it..but on a laptop or my phone I don't see it..know plenty that undersample but don't take any notice of sampling rate..not sure if it's a true representation of digital photography  but like I say it's not straight in my mind yet

As I understand it, the Dawes Limit represents the absolute maximum theoretical resolution of a circular aperture (in green light). The Rayleigh limit is more pragmatic. There is a calculator here:
https://astronomy.tools/calculators/telescope_capabilities

So for a 102mm aperture, the Dawes limit is 1.14" and the Rayleigh limit is 1.35". For an 80mm aperture the limits are coarser (larger values). However, typical seeing in the UK will reduce the achievable resolution to 2" or worse, so you can't hope to resolve stars that are closer together than that. How it affects your image will likely depend on the particular target, I think, though I've never personally tried any practical comparisons :). In any case, the seeing where I am is always poor and my guiding is never better than +/-2", and often only +/-4" (on my PHD2 graph). Others' mileage will vary :)

Louise
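[Editor's note] The two criteria mentioned can be written as simple functions of aperture: the empirical Dawes formula, and the Rayleigh criterion 1.22 λ/D evaluated at green light (the 550 nm wavelength here is an assumed value, matching the "(in green light)" caveat above):

```python
import math

def dawes_limit(aperture_mm):
    """Empirical Dawes double-star criterion, in arcseconds."""
    return 116 / aperture_mm

def rayleigh_limit(aperture_mm, wavelength_nm=550):
    """Rayleigh criterion 1.22 * lambda / D, converted to arcseconds."""
    return 1.22 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3) * 206265

for d in (80, 102):
    print(f"{d}mm: Dawes {dawes_limit(d):.2f}\", "
          f"Rayleigh {rayleigh_limit(d):.2f}\"")
```

For 102mm this gives ~1.14" and ~1.36", in line with the astronomy.tools calculator linked above.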

 


One should not equate the Dawes/Rayleigh criteria with sampling resolution - the topic is much more complicated than that.

I can briefly list things to consider.

Actual resolution in the image will depend on the following:

1. PSF of instrument used (this has direct link to Dawes/Rayleigh, or rather those criteria are based on PSF)
2. Seeing blur
3. Tracking / guide precision
4. Pixel blur

All of the above combine (convolve) to form the cumulative (long-exposure) PSF of the image. Such a PSF can be approximated with a Gaussian (or some other distribution like a Moffat; in general, for given parameters an infinite exposure converges to a particular distribution, while a limited one is well approximated by it). Given such a PSF, one can determine the optimum sampling rate using the Nyquist criterion - which operates in the frequency domain, so there is no simple, direct link between a spatial extent (like the Dawes limit, or the FWHM of a Gaussian) and the cutoff frequency. Mathematical analysis of the PSF is needed to determine what frequency component can be used as the cutoff point, with the Nyquist theorem then applied to it.

Oversampling offers both a benefit and a drawback - lower SNR (due to the discrete nature of light and the Poisson distribution) but higher resolution due to reduced pixel blur (the effective limit is set by the aperture PSF, but pixel blur is affected by pixel size: the mathematics requires sampling to be point-like, and with real sensors each pixel has a surface - the smaller the surface, the more point-like the sampling). A special type of binning can be used to overcome the SNR loss to some extent (at the expense of an increased read-noise component), while regular binning has the same effect as larger pixels - the same pixel blur.

Undersampling will never produce square-looking stars - this is a misconception arising from the interpolation algorithms used. Only the "nearest neighbour" resampling method gives "squares" when an image is magnified. Other resampling methods have different artifacts, with the sinc filter being a true reconstruction without any artifacts for a band-limited signal - the Lanczos filter is one of the best approximations to the sinc filter (which is infinite in extent and can't be computed exactly because it would require an infinite sensor: a band-limited signal is infinite in spatial extent, since it consists of a sum of sine waves that go off to infinity, and one needs an infinite number of sine waves to reproduce a non-periodic, frequency-limited function).

I've complicated things, haven't I? :D

Moral of the story:

The Dawes limit is not related to sampling frequency in a straightforward way, so it would be a mistake to suggest that if the Dawes limit is 1.14" then that is the sample rate that should be used.

Oversampling will become less of an issue with the development of low-read-noise cameras (it is already becoming less problematic with low-read-noise CMOS cameras - which happen to have smaller pixels anyway) and the adoption of proper binning algorithms.

Undersampling will not produce square stars - it is the resampling filter used (one particular one) that produces square stars. If we look at a star profile, a Gaussian for example, and do single-point sampling on it (thus using one pixel to record the star) - it will just be a single value; it will not be a square of any sort. How we decide to display this value determines whether it looks square or not.
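[Editor's note] The point about "square stars" being a display artifact can be demonstrated numerically: magnify the same undersampled star with nearest-neighbour replication versus linear interpolation. A sketch (the specific grid sizes are arbitrary choices for illustration):

```python
import numpy as np

# A heavily undersampled star: a 2D Gaussian sampled on a coarse 5x5 grid
xs = np.arange(5) - 2
star = np.exp(-(xs[:, None]**2 + xs[None, :]**2) / 0.5)

# Nearest-neighbour magnification just replicates each sample into a block,
# producing the "blocky" look - no new intensity levels appear.
nearest = np.kron(star, np.ones((8, 8)))

# Simple linear interpolation between the same samples gives smooth gradients.
fine = np.linspace(0, 4, 33)
rows = np.array([np.interp(fine, np.arange(5), r) for r in star])
bilinear = np.array([np.interp(fine, np.arange(5), c) for c in rows.T]).T

print(np.unique(nearest).size, "intensity levels after nearest-neighbour")
print(np.unique(np.round(bilinear, 6)).size, "intensity levels after bilinear")
```

The sampled data is identical in both cases; only the reconstruction filter differs, which is exactly vlaiv's argument.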

 


45 minutes ago, vlaiv said:

One should not equate Dawes/Rayleigh criteria to sample resolution, topic is much more complicated than this. …

Yeah, I think I was just trying to keep it simple, in terms of the maximum theoretical resolution being greater than what's practically achievable. It can obviously be a lot worse! I hope I was right in saying that if the calculated pixel scale is 0.87"/pixel then that resolution isn't achievable at 102mm, and even less so taking other factors into account? It doesn't stop anyone from taking an image, though.

Louise


26 minutes ago, Thalestris24 said:

Yeah, I think I was just trying to keep it simple and in terms of maximum theoretical resolution being greater than practically possible. It can obviously be a lot worse! I hope I was right in saying that if calculated pixel scale is 0.87"/pixel then that's not achievable at 102mm and even less so taking other factors into account? It doesn't stop anyone from doing an image.

Louise

Well, the theoretical minimum comes from planetary critical sampling, and it's about 0.5"/pixel.

Otherwise you are quite right - I would not go below 1.5"/pixel with such a setup in average seeing conditions, so not all is lost at 0.87"/pixel - it can be binned nicely to 1.74"/pixel, which is still quite right for "high-res" work with such a setup.

I can do quick calculations for the expected FWHM of such a setup - it looks like it will be in the range of 3 to 3.3" (depending on seeing; assuming typical seeing of 1.5" to 2", a 102mm aperture and 0.6" RMS guiding).

This value actually matches very well with sampling rates of 1.75 - 2"/pixel.
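[Editor's note] A crude version of this estimate adds the blur sources in quadrature (the true combination is a convolution, as vlaiv explained above, so this is only an approximation and comes out somewhat below his 3-3.3" figure; the Airy FWHM ≈ 1.02 λ/D formula and the 2.355 RMS-to-FWHM factor for a Gaussian are the assumed ingredients):

```python
import math

def airy_fwhm_arcsec(aperture_mm, wavelength_nm=550):
    """FWHM of the Airy pattern, ~1.02 * lambda / D, in arcseconds."""
    return 1.02 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3) * 206265

def total_fwhm(seeing_fwhm, aperture_mm, guide_rms):
    """Rough quadrature combination of seeing, aperture PSF and guiding."""
    guide_fwhm = 2.355 * guide_rms  # Gaussian RMS -> FWHM
    return math.sqrt(seeing_fwhm**2
                     + airy_fwhm_arcsec(aperture_mm)**2
                     + guide_fwhm**2)

for seeing in (1.5, 2.0):
    fwhm = total_fwhm(seeing, 102, 0.6)
    print(f"seeing {seeing}\": total FWHM ~ {fwhm:.1f}\"")
```

With these assumptions the result lands around 2.4-2.7", the same order as vlaiv's figure, and likewise points at sampling rates well above 0.87"/pixel.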


5 minutes ago, vlaiv said:

Well theoretical minimum can be found in planetary critical sampling, and it's about 0.5"/pixel. …

I don't fully understand binning with the ASI178MM. The manual suggests that using an ROI would be better, e.g. 2048 x 1080 rather than 3096 x 2080.


1 minute ago, Davey-T said:

Don't know if it's any help but I use ZWO ASI178MM binned 2X2 with my Lunt LS60 gives me around 2"/pixel which works OK.

On the Sun obviously :grin:

Dave

Hi Dave

The OP wants to use it with an F7/102mm, probably with a 0.8x reducer. Do you use binning rather than an ROI? And if binning, is that the 'hardware' binning ZWO mention, or software?

Louise


Hi Louise,

I use binning to fit the whole solar disc onto the sensor at a reasonable arcsec/pixel, then use an ROI to get a square frame - it only just fits in vertically.

Binned in FireCapture - I've not really looked into the ZWO in-camera binning; for all I know FireCapture could be using it :grin:

Dave


I don't really see how those two relate.

I'll briefly go over binning and give recommendations. There are two types of binning - hardware and software.

Hardware binning is reserved for CCDs. It can be thought of as having 4 buckets of water and pouring them into a large barrel before measuring how much water there is :D - the electrons are put in the same place prior to readout, which is possible because of the construction of CCD chips and the way they are read out. The consequence is that the 4 pixels are read out with their values already summed (the sum can't be split back into 4 pixels afterwards), and this value is polluted with only a single read-noise "dose".

Software binning is done at the pixel-value level - it can be done in the camera firmware (or drivers) or in software during processing. I prefer the latter, as it gives you the flexibility to control the process (there are several choices to make, which I'll explain).

The mathematics is much the same as above - we take 4 adjacent pixels and sum (or average - it's the same thing) their values. The difference from hardware binning is that the summing is done after readout, when each pixel has already had a read-noise "dose" applied to its value.

Summing 4 values has the same effect as stacking 4 frames - SNR goes up by a factor of 2 (the square root of the number of values averaged/summed).

If you consider what SNR a pixel twice the size of the original would have (x2 in height, x2 in width - hence x4 the surface, i.e. 4 adjacent pixels together), you will conclude that for hardware binning there is no difference, while for software binning there is a slight difference in read noise - the larger pixel would have one dose of read noise, while the 4 pixels have four doses.

In effect, when you bin the ASI178MM in software it is the same as having a camera:

with a pixel size of 4.8um (twice the size of 2.4um, but x4 the surface), a pixel count of 1548x1040 (height and width halved, so the total pixel count goes down by 4 - matching the surface increase), and read noise that is no longer 2.2e (or whatever it was at the gain setting used) but twice that - 4.4e.

The read-noise increase comes from adding 4 doses of read noise (software binning) - noise adds like linearly independent vectors, i.e. the square root of the sum of squares, which in this case turns out to be twice the original value (4 times the square of the original value, square-rooted, leaves 2x the original value).

ROI, on the other hand, just limits the usable FOV and number of pixels - it produces a smaller file, the same as in-camera software binning (maybe hence the recommendation), but an ROI will not affect the sampling rate.

The reason I don't like in-camera binning is that you can lose precision and introduce a bit more noise. Pixel values are 14-bit. When you add 4 values together you add 2 bits of information - hence you need 16 bits to store the sum. The camera might clip this back to 14 bits, so you lose some precision or rounding noise is introduced (a deviation from the expected value); since image formats are 16-bit this might not happen - it depends on the implementation (I have not checked this).

The other issue is that you have no control over the binning process - the only option is the simple add/average method. I'll list two more methods that are probably better.

The first is an adaptive read-noise method. CMOS sensors have different read noise across pixels - you can measure the standard deviation of bias frames to get this information. There is also "telegraph" noise - some pixels are affected by it and don't have a "smooth" Gaussian read-noise distribution, but rather a two- or even three-peaked one.

By examining this read-noise distribution you can assign weights to each pixel in the 2x2 group being binned, so as to optimise the noise contributions (noisier pixels contribute less, by an amount appropriate to their increased read noise).

The second method of binning is the one I would recommend for oversampled data (the reason we bin in the first place) - it can be thought of as splitting rather than binning. We don't produce 1 binned image, but rather 4 subs - each taking every other pixel (in X and Y), so each has twice as coarse a sampling resolution, but the pixel size stays the same. This relates to the pixel blur mentioned above - the larger the pixel, the more blur.

This way we get 4 subs, each sampled at half the resolution (same as binning); when we stack those 4 subs we get a x2 increase in SNR (same as regular binning) - but the pixel size is not doubled, so we don't introduce the pixel blur associated with a larger pixel.

HTH
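[Editor's note] Both variants described above are a few lines of numpy. This is a sketch of the idea, not any camera driver's actual implementation; the function names are made up for illustration:

```python
import numpy as np

def bin2x2(img):
    """Plain software 2x2 binning: average each 2x2 block.
    SNR improves x2; read noise of the binned pixel is x2 the per-pixel value."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop odd edges
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def split2x2(img):
    """The 'split' variant: four subs at half resolution, each taking every
    other pixel, pixel size unchanged - so no extra pixel blur is introduced."""
    return [img[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]

img = np.arange(16, dtype=float).reshape(4, 4)
binned = bin2x2(img)
subs = split2x2(img)
# Stacking (averaging) the four split subs reproduces the plain 2x2 average,
# confirming the same x2 SNR gain for both approaches.
assert np.allclose(np.mean(subs, axis=0), binned)
print(binned)
```

The difference between the two only shows up in the blur budget: the split subs are registered and stacked like ordinary frames, so the effective sampling aperture stays at one pixel rather than four.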


Given the OP's equipment, a 0.8x focal reducer and 2x2 binning give a sensible imaging resolution. Theory is all well and good, but there's nothing like a bit of trial and error.

Dave

PS: As mentioned I'm using it for solar so lack of signal is not generally a problem :grin:


Archived

This topic is now archived and is closed to further replies.
