
Pixel scale.


alan potts


As the 4th straight day of "no rain" pours down (2 inches has just fallen in 30 minutes, you could not see a thing), that's weather forecasters for you. I have some questions.

I see the words Pixel Scale used often in regard to a camera and scope combination. Firstly, what does it mean? I have scopes at 420mm, 641mm, 805mm and 1000mm, without going into the shortest version of my SC at 1920mm.

Secondly, how does one work it out, and thirdly, what is a good figure to aim for?

Not that I can vary this at present, having only a DSLR.

Alan


The generally recommended scale is 1 to 3 arc seconds per pixel, but that's not set in stone, so it's always worth trying whatever you have available.

The FLO astronomy tools calculator at the top of the page will work it all out for you.

Dave


45 minutes ago, alan potts said:

what is a good figure to aim for?

Apart from the excellent advice above, I'd also suggest you consider the capabilities of your mount - if you aim for a pixel scale of X arc seconds/pixel then your mount needs to track to about half this value. The other factors that will determine your lower limit are your local seeing conditions and your scope. Each of these factors acts as a blur which smears the detail of the object of interest.

So, for me, being interested in high resolution DSO imaging, my lower limit appears to be around 0.7 arc seconds/pixel. It would only make sense to decrease this target value if I went for a DSO lucky imaging approach, where the aim would be to decrease my average FWHM; however, since this would mean quite short exposures, a replacement camera would have to have a much higher QE.

Alan

 


For completeness of the above answers (all very good), I'll include some additional detail. One might say such detail is "too much", but it might be beneficial to people reading the thread.

Very often we talk about pixel scale in terms of resolution, and while the term resolution has many different meanings depending on context, one of those meanings matches pixel scale.

There are in fact two types of pixel scale that we can distinguish. The first is the sampling pixel scale, and the second is the "processing" pixel scale.

The sampling pixel scale is straightforward - how much of the sky does one pixel "cover"? This is a good definition if we treat the pixels on camera sensors as little squares, and sometimes it is useful to think in those terms.

Here is the formula for calculating it:

pixel scale (arcsec/pixel) = 206.265 × pixel size (µm) / focal length (mm)

taken from: http://www.wilmslowastro.com/software/formulae.htm#ARCSEC_PIXEL
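As a rough illustration of the formula with the focal lengths mentioned in the opening post (the 4.3 µm pixel size is only an assumed example DSLR value, not something stated in the thread):

```python
# Pixel scale in arcsec/pixel = 206.265 * pixel size (um) / focal length (mm)
PIXEL_SIZE_UM = 4.3  # assumed example DSLR pixel size, not from the thread

for focal_length_mm in (420, 641, 805, 1000, 1920):
    scale = 206.265 * PIXEL_SIZE_UM / focal_length_mm
    print(f"{focal_length_mm} mm -> {scale:.2f} arcsec/pixel")
```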

A better definition, which is almost equivalent but adds flexibility to our way of thinking, is: the sky covered by the distance between two adjacent sampling points. From this definition we will see why there is a need for two different pixel scales.

The calculation is the same, but this time we don't think about pixels as little squares with sides of a certain length; we rather think about points (without any dimension) spaced on a grid. The grid spacing is the pixel scale. This approach lets us observe a very interesting thing - there are many overlapping grids on top of that base grid of points.

For example, we can take every second point of the original grid (in both the X and Y directions) - it also forms a grid of sampling points with equal distances between samples, but this time the distances are twice as large as in the original grid, and there are 4 times fewer samples in each such grid. In fact, we can overlay 4 such grids to cover all sampling points.

So the sampling pixel scale is the distance between samples in the base grid. The processing pixel scale is something we can choose, because we can alter the pixel scale in various ways. One way is presented above - splitting the original image into 4 sub-images, which gives a coarser processing pixel scale (more arc seconds per pixel) than the sampling one. Another way is the drizzle algorithm, which creates a finer processing pixel scale than the sampling pixel scale. Then there is simple resampling (resizing the image) - we can arbitrarily alter the processing pixel scale compared to the sampling one.
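A minimal NumPy sketch of the sub-grid idea (the image here is just synthetic noise standing in for real data):

```python
import numpy as np

# Stand-in "image" sampled on the base grid
img = np.random.normal(100.0, 10.0, size=(4000, 4000))

# The four every-other-pixel sub-grids: each has twice the sample spacing
# and a quarter of the samples; together they cover every original point.
sub_grids = [img[0::2, 0::2], img[0::2, 1::2],
             img[1::2, 0::2], img[1::2, 1::2]]

print(img.shape, sub_grids[0].shape)  # (4000, 4000) (2000, 2000)
```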

Why is pixel scale important?

Different properties of the image are affected by both the sampling pixel scale and the processing pixel scale.

1. Per pixel SNR - the larger the pixel, the more light it gathers, because it corresponds to a larger patch of the sky (for a given focal length). This is related to the famous F/ratio and the F/ratio myth, because the sampling pixel scale depends on the focal length of the scope, while the F/ratio connects focal length to aperture size. A better description of the "speed" of a system is "aperture at a certain sampling pixel scale".

2. The size of the image (image resolution in terms of width x height in pixels) depends on the pixel scale for a chip of a given diagonal. This might be "obvious", but there is a reason I'm pointing it out. People often relate FOV to pixel scale; it is much better to relate FOV to the physical size of the sensor rather than to the pixel scale, because of the way we can treat the grid of samples.

3. The detail in the image depends on proper use of the pixel scale. Make the pixel scale too coarse and you will begin to lose detail that could have been captured, because you are not sampling finely enough. This is where the famous Nyquist sampling theorem comes in.

But what does all of the above mean in practical terms?

If you know the typical FWHM of the stars in your image, you can choose a sampling rate based on that so you lose no detail to undersampling. Anything at (FWHM in arc seconds / 1.6) or finer (a higher sampling rate means a lower number of arc seconds per pixel) and you will not be losing any detail. Your FWHM will of course depend on your guiding precision, atmospheric seeing and the aperture of your scope - you can't know it in advance, but you can get a feel for your average conditions.
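As a tiny worked example of that rule of thumb (the 3.2" FWHM is just an assumed figure for illustration):

```python
# Rule of thumb above: sample at (FWHM / 1.6) arcsec per pixel or finer
fwhm_arcsec = 3.2  # assumed example FWHM, not from the thread
target_scale = fwhm_arcsec / 1.6
print(f"Sample at {target_scale:.1f} arcsec/pixel or finer")  # 2.0 arcsec/pixel
```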

Oversampling - using a higher sampling rate than is needed - will reduce SNR per exposure and hence per total imaging time. You are simply spreading the light over more pixels, so each pixel receives less light: less signal, lower SNR. This is why oversampling is not good - it will not capture more detail, but it will reduce SNR.

Is oversampling always a bad thing? Actually no, but only if we return to the idea of overlapping grids. Take the following example: the ideal sampling rate is 1"/px but we are using 0.5"/px.

The pixels are 4x smaller than they should be (1 x 1 = 1; 0.5 x 0.5 = 0.25; 1 / 0.25 = 4; the surface area is 4 times smaller). This means that each gets 4x less light (on average), because the light is spread over 4 pixels compared to 1"/px. There is no extra detail to be captured by sampling finer than 1"/px, so there is no benefit there.

But what can we do?

We divide our base grid by taking every other pixel, as described above. We get 4 sub-grids; each has sampling points at twice the base spacing, and each has 4x fewer sampling points - but there are 4 such grids.

Now if we look at those sub-grids, they have a pixel scale of 1"/px - if we go by the definition that the pixel scale is not the size of a pixel but rather the separation between adjacent sampling points. So these grids are properly sampled. There are 4 of them, and if you stack those 4 grids, you get an SNR improvement by a factor of 2 (the square root of 4) - we end up recovering the SNR that we lost because of the smaller pixels.

I have described a process that is very similar to binning; in fact it has all the benefits of software binning but avoids the drawback of a larger pixel (the associated pixel blur). I have also shown how the sampling pixel scale can differ from the processing pixel scale - we oversampled the image at capture but returned to the proper sampling rate in processing, and did so with almost no SNR loss (there will be a small loss because of read noise, in the same way that hardware binning is better than software binning).
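Here is a minimal sketch of the split-and-stack idea on synthetic data (flat signal plus Gaussian noise, values purely illustrative), showing the noise dropping by roughly a factor of 2 after averaging the four sub-grids:

```python
import numpy as np

rng = np.random.default_rng(0)

# Oversampled frame: flat signal of 100 with noise of sigma 10
img = rng.normal(100.0, 10.0, size=(2000, 2000))

# Split into the four every-other-pixel sub-grids and average them;
# this is equivalent to 2x2 software binning (average form)
subs = [img[0::2, 0::2], img[0::2, 1::2], img[1::2, 0::2], img[1::2, 1::2]]
stacked = np.mean(subs, axis=0)

print(f"noise before: {img.std():.2f}")      # ~10
print(f"noise after:  {stacked.std():.2f}")  # ~5, i.e. SNR roughly doubles
```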

CMOS sensors allow for this kind of reasoning and processing because of their small read noise. It is a good thing that they do, because they also bring smaller pixels with them, and for larger telescopes that means oversampling.

 

 


There are various ways to think about it but if you open an image in a graphics programme and keep zooming in you will eventually reach pixellation, the point at which you no longer see natural looking shapes but, instead, see the individual squares which make up the image. If you were to take the same target binned 2x2 and again 1x1, the binned image would look pixellated far sooner than the unbinned one because the binned pixels are 4x as large as the unbinned ones so they look like squares at a lower level of screen zoom. If you then took the same target in a camera with smaller pixels than the first one you could zoom in further before seeing pixellation. 

Another way to think about it is to imagine the image projected by the same scope onto a series of camera chips with different pixel sizes. Let's say this is a circular nebula 1cm across as projected onto the chip. 1cm is 10,000 microns. On a camera with large 10 micron pixels the nebula will be 1000 camera pixels wide (10,000/10). On a camera with 5 micron pixels it will be 2000 camera pixels wide (10,000/5). On a modern CMOS with tiny 2.5 micron pixels it will be 4000 pixels wide. Remember the projected image is always the same size, but we are sampling it at different pixel scales; we are putting different numbers of pixels under it. What happens when we open these three images on the PC and view them at 100%? (This means seeing them 'full size', so that one camera pixel is given one screen pixel.)

10 micron pixel chip: the full size image of the nebula is 1000 screen pixels across. Zoomed in to 2000 screen pixels it will look 'blocky'; zoomed in to 4000 screen pixels it will look pixellated.

5 micron pixel chip: the full size image of the nebula is 2000 screen pixels across and looks fine at that. At 4000 screen pixels it will look 'blocky' but not pixellated.

2.5 micron pixel chip: the full size image is naturally 4000 screen pixels wide and looks fine at that.
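A quick sketch of the arithmetic in the three cases above (the 1cm nebula and the pixel sizes are the example values from the post, nothing else is assumed):

```python
# Projected image size is fixed by the scope; pixel count depends on pixel size
nebula_um = 10_000  # 1 cm projected onto the chip, in microns

for pixel_um in (10, 5, 2.5):
    width_px = nebula_um / pixel_um
    print(f"{pixel_um} micron pixels -> nebula is {width_px:.0f} camera pixels wide")
```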

However, you can't go on forever, because at some point your sky's seeing will blur out any new details in the more highly resolved image. For me that comes at something like 0.9 arcseconds per pixel on a good night, but on a bad one it might be 2 arcsecs per pixel. Once you try to image beyond what the seeing will allow, your object will get bigger on the screen but will contain no new details. This is often called 'empty resolution.'

At the other end of the scale I would not want to image at more than 3.5"PP. That still looks fine but really is the limit for viewing at 100%.

Olly
