

Pixel Scale


Scooot


I wonder if someone could clear up some confusion for me.

If my calculations are correct, the pixel scale of my Canon 450D with a 135mm lens is 7.945 arcseconds per pixel, being 206.2648 x 5.2 / 135 (pixel size in microns, focal length in mm).
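
For anyone who wants to check the arithmetic, here it is as a few lines of Python (a minimal sketch; 5.2 µm is the 450D's approximate pixel pitch):

```python
# Standard pixel-scale formula:
#   scale [arcsec/pixel] = 206.265 * pixel_size [um] / focal_length [mm]
pixel_size_um = 5.2       # Canon 450D pixel pitch (approx.)
focal_length_mm = 135.0   # lens focal length

scale = 206.265 * pixel_size_um / focal_length_mm
print(f"{scale:.3f} arcsec/pixel")  # -> 7.945
```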

However, I've uploaded an image to Astrometry.net, which has given me some great information, including a pixel scale of 33.1 arcseconds per pixel. The image covers a sky area of 8.34 x 5.12 degrees.

So the two figures, 7.945 and 33.1 arcseconds per pixel, are presumably measurements of two different things. Could anyone explain, please?

Many thanks.


20 minutes ago, jjosefsen said:

The pixel scale is definitely 7.945 arcsec/px.

If you could link the result from the solve on Astrometry.net we could probably find out what that other measurement is. :)

Thanks, here it is:

[Attachment: screenshot of the Astrometry.net solve result]


Is it possible the .png image you uploaded was downsized?

Astrometry.net calculates the arcsec/pixel according to the pixel count of the uploaded image. Its value of 33.1 arcsec/pixel is pretty close to the expected value for a 4x4-binned version of your raw image at 7.945 arcsec/pixel.
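
As a rough sanity check of that comparison (a tiny sketch using just the two figures from this thread):

```python
native_scale = 7.945     # arcsec/pixel calculated for the full-resolution image
reported_scale = 33.1    # arcsec/pixel reported by Astrometry.net

factor = reported_scale / native_scale
print(f"Implied downsize factor: {factor:.2f}x")  # -> ~4.17, i.e. close to a 4x4 bin
```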

I'm a bit out of touch with Windows these days, but I think you can right-click the .png image file you uploaded and read the file properties, which will show the pixel count.


8 minutes ago, Oddsocks said:

Is it possible the .png image you uploaded was downsized?

Astrometry.net calculates the arcsec/pixel according to the pixel count of the uploaded image. Its value of 33.1 arcsec/pixel is pretty close to the expected value for a 4x4-binned version of your raw image at 7.945 arcsec/pixel.

I'm a bit out of touch with Windows these days, but I think you can right-click the .png image file you uploaded and read the file properties, which will show the pixel count.

Ah, that might explain it. Yes, the original XISF file was very large, 50" wide, so I resampled it for ease; I reduced it to 25%. :) So reducing it puts more sky on each pixel?

Thanks, Oddsocks.


13 minutes ago, Scooot said:

So reducing it puts more sky on each pixel? 

That's right, Richard. Astrometry's result is consistent with the arcsec/pixel you calculated from your sensor's pixel size and your focal length; you just need to scale Astrometry's reported arcsec/pixel according to the amount of reduction you applied to the image before uploading it.
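
For example, a minimal sketch of that rescaling, assuming the 25% reduction mentioned above:

```python
reported_scale = 33.1   # arcsec/pixel from Astrometry.net on the downsized upload
resize_ratio = 0.25     # the upload was resampled to 25% of its original dimensions

estimated_native_scale = reported_scale * resize_ratio
print(f"{estimated_native_scale:.2f} arcsec/pixel")  # -> 8.28, near the calculated 7.945
```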

William.


So out of curiosity I resampled the image again. The default resolution is 72 x 72 pixels per inch and the original size was about 50 x 31". When I originally reduced the size to 25% it became 12.6 x 7.75" and the file size was 775KB. Astrometry.net showed the arcseconds per pixel as 33.1.

If I resample it by increasing the resolution to 288 x 288 pixels per inch, the document size still shrinks to 12.6 x 7.75" but the file size increases to 8335KB. Astrometry.net then shows the arcseconds per pixel as 8.26, much nearer to the camera's calculated scale of 7.945 (I think I cropped the edges, so presumably that accounts for the small difference). I guess this image would look better because of the increased resolution at the smaller size?

Edit.

In fact I've just proved it to myself by zooming in on them :) 


Hi Richard,

I think you're getting a bit confused between the image pixel resolution and the document resolution. The document resolution, DPI (dots per inch, also called pixels per inch), only applies to a printed image, and the default resolution of 72 is a holdover from when early monitors had a resolution of around 72 DPI: an image 72 pixels wide and 72 pixels high would appear about 1 inch wide and high on the screen. Modern screens average around 96 DPI. If you're printing your image, the DPI setting should match your printer: laser printers are generally 600 DPI, and inkjets are generally multiples of 360, such as 360, 720, or 1440 DPI.

Your image printed at 72 DPI would be 50" wide but would be very blocky to look at. The printer reads the DPI setting saved with the image and prints to that scale. If you wanted to print it on an inkjet you would set the DPI to 360, for example, and the printed image would be 10" wide. Or so you might have thought. :huh2:

However... there's a checkbox labeled 'Resample Image' in the 'Image Size' settings window. If 'Resample Image' is checked and you change the DPI from 72 to 288, the document size stays the same, so the pixel dimensions of the image are increased 4 times in each direction, giving 16x as many pixels, hence the much larger file size. If 'Resample Image' is not checked, changing the DPI from 72 to 288 only alters the printed document size, which becomes 4 times narrower, hence the 12.6". The image's pixel dimensions do not change, as it has not been resampled.
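
To put those relationships in numbers, here's a rough sketch; the pixel width is only illustrative, chosen to roughly match the ~50" document in this thread:

```python
# Printed size in inches = pixel dimensions / DPI.
pixels_wide = 3629          # illustrative width only, roughly a 50.4" document at 72 DPI

print(pixels_wide / 72)     # ~50.4" wide when printed at 72 DPI
print(pixels_wide / 288)    # ~12.6" wide at 288 DPI ('Resample Image' off: pixels unchanged)

# With 'Resample Image' checked the document size is held fixed instead, so going
# from 72 to 288 DPI multiplies each pixel dimension by 4 (16x as many pixels).
print(pixels_wide * (288 // 72))   # -> 14516 pixels wide after resampling
```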

In summary, ignore the DPI resolution setting and the document size unless you're actually printing your image. The 'Pixel Dimensions' boxes are the only ones you need be concerned with, and 'Resample Image' needs to be checked in order to change the image's pixel dimensions.

Reducing your image size by a factor of 4 by changing the 'Pixel Dimensions' before sending it to Astrometry.net would increase your reported image scale by a factor of 4. Astrometry.net has no interest in the 'Document Resolution', just the pixel dimensions.

I assumed you were using Photoshop. The actual wording of the image options may be slightly different for other programs. 

I hope that helps clear up any confusion.

Alan


6 hours ago, symmetal said:

Hi Richard,

I think you're getting a bit confused between the image pixel resolution and the document resolution. The document resolution, DPI (dots per inch, also called pixels per inch), only applies to a printed image, and the default resolution of 72 is a holdover from when early monitors had a resolution of around 72 DPI: an image 72 pixels wide and 72 pixels high would appear about 1 inch wide and high on the screen. Modern screens average around 96 DPI. If you're printing your image, the DPI setting should match your printer: laser printers are generally 600 DPI, and inkjets are generally multiples of 360, such as 360, 720, or 1440 DPI.

Your image printed at 72 DPI would be 50" wide but would be very blocky to look at. The printer reads the DPI setting saved with the image and prints to that scale. If you wanted to print it on an inkjet you would set the DPI to 360, for example, and the printed image would be 10" wide. Or so you might have thought. :huh2:

However... there's a checkbox labeled 'Resample Image' in the 'Image Size' settings window. If 'Resample Image' is checked and you change the DPI from 72 to 288, the document size stays the same, so the pixel dimensions of the image are increased 4 times in each direction, giving 16x as many pixels, hence the much larger file size. If 'Resample Image' is not checked, changing the DPI from 72 to 288 only alters the printed document size, which becomes 4 times narrower, hence the 12.6". The image's pixel dimensions do not change, as it has not been resampled.

In summary, ignore the DPI resolution setting and the document size unless you're actually printing your image. The 'Pixel Dimensions' boxes are the only ones you need be concerned with, and 'Resample Image' needs to be checked in order to change the image's pixel dimensions.

Reducing your image size by a factor of 4 by changing the 'Pixel Dimensions' before sending it to Astrometry.net would increase your reported image scale by a factor of 4. Astrometry.net has no interest in the 'Document Resolution', just the pixel dimensions.

I assumed you were using Photoshop. The actual wording of the image options may be slightly different for other programs.

I hope that helps clear up any confusion.

Alan

Many thanks Alan, that all makes perfect sense.

I'm actually using PixInsight, but I expect the principle is the same. The box in their Resample process, which I've changed from 72 x 72 to 288 x 288, is named Resolution. It does exactly as you say in terms of changing the size in inches, and I'm sure what you say would be the case if I printed it.

However, it does seem to change the on-screen resolution as well. Here are the two images (Melotte 20, by the way); the second is the higher resolution. The first is very blocky if you zoom in, whereas the second is fine (on my PC anyway). So you're correct, I am still confused :)

[Attachment: first image of Melotte 20 (lower resolution)]

[Attachment: second image of Melotte 20 (higher resolution)]


I think I understand what's happened now.

My original XISF file was 94,956KB.

When I saved it as a 25%-reduced PNG file it became 776KB, which was what I originally uploaded to Astrometry.net (1st image above). Arcseconds per pixel: 33.1.

I then increased the resolution on this file to 288 x 288 and it increased its size to 8335KB. This file (2nd image above) showed the arcseconds per pixel on Astrometry.net as 8.26.

However, had I saved the original XISF file as a PNG file without reducing its size, it would have been an 8335KB PNG file with an arcseconds per pixel on Astrometry.net of 8.26.

So all I've done by raising the resolution back to 288 x 288 is recreate the originally sized image in PNG format.

Thanks everyone for all the help. :) 


46 minutes ago, Scooot said:

So all I've done by raising the resolution back to 288 x 288 is recreate the originally sized image in PNG format.

Unfortunately it's a little more complicated than that.

When you downsized the original image to 25% of its native size you actually threw away resolution, since each block of 4x4 pixels was averaged into a single pixel in the downsized image. In the original native image's 4x4 pixel block, adjacent pixels may have contained different colours or different levels of grey that represented structure transitions from black to white and so on. When you downsized the original image those differences were averaged into a single pixel and the data was lost. When you upscale the image again to match the full native size you can't bring that original data back, and the upscaling algorithm introduces further image degradation by trying to smooth transitions between adjacent pixels of the upscaled image.

Although the final image after downsizing and upscaling appears to have the same number of pixels as the original, it will not have the same level of detail; by downsizing and then upscaling again, data was irretrievably lost.

Of course, there is an element of oversimplification here: the actual resolution per pixel of the native image depends on the optics, seeing, mount tracking, etc., so in some circumstances you can downsize the image quite comfortably and not lose anything, since the detail did not exist in the first place. In general, though, it is best to work in post-processing with full-size native images, and downsize or upscale only for printing or displaying on a web page.
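
As a toy illustration of the point (a sketch using simple block averaging and nearest-neighbour upscaling, not PixInsight's actual resampling kernels):

```python
import numpy as np

# A 4x4 patch containing a sharp black/white edge.
patch = np.array([[0, 0, 255, 255],
                  [0, 0, 255, 255],
                  [0, 0, 255, 255],
                  [0, 0, 255, 255]], dtype=float)

# Downsizing 4x averages the whole block into one pixel: the edge is gone.
downsized = patch.mean()              # -> 127.5

# Upscaling back to 4x4 just repeats that grey value; the original edge
# cannot be recovered from the averaged data.
upscaled = np.full((4, 4), downsized)
print(upscaled)
```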

William.


2 minutes ago, Oddsocks said:

Unfortunately it's a little more complicated than that.

When you downsized the original image to 25% of its native size you actually threw away resolution, since each block of 4x4 pixels was averaged into a single pixel in the downsized image. In the original native image's 4x4 pixel block, adjacent pixels may have contained different colours or different levels of grey that represented structure transitions from black to white and so on. When you downsized the original image those differences were averaged into a single pixel and the data was lost. When you upscale the image again to match the full native size you can't bring that original data back, and the upscaling algorithm introduces further image degradation by trying to smooth transitions between adjacent pixels of the upscaled image.

Although the final image after downsizing and upscaling appears to have the same number of pixels as the original, it will not have the same level of detail; by downsizing and then upscaling again, data was irretrievably lost.

Of course, there is an element of oversimplification here: the actual resolution per pixel of the native image depends on the optics, seeing, mount tracking, etc., so in some circumstances you can downsize the image quite comfortably and not lose anything, since the detail did not exist in the first place. In general, though, it is best to work in post-processing with full-size native images, and downsize or upscale only for printing or displaying on a web page.

William.

Thanks, William,

That's very interesting and helpful, and improves my understanding. Of course I'd never go to the bother of reducing and then increasing again; I started off not understanding the difference in the arcseconds-per-pixel scale as reported on Astrometry.net. One thing leads to another :)

Anyway all clear now.

I can reduce the image as I had done; this will show a different pixel scale on Astrometry.net, which I can easily recalculate if necessary, but the reduced image will lack the resolution of the original. All good stuff :)


Glad you've got it sorted out now, Richard. :smile: I've never used PixInsight, but I'm surprised that it uses DPI as the method for changing resolution, unless it's for sending to a printer. You could argue that the screen can be treated as a printer with a fixed resolution, but the default assumption of 72 DPI for screens isn't true any more for modern displays, so it all becomes needlessly confusing. For changing resolution it's more logical to me to just specify the x, y pixel size of the image you want, or to specify the change as a percentage in x and y. This makes it much clearer how the image is being resampled and whether you lose any actual image detail.

Alan


Just now, symmetal said:

Glad you've got it sorted out now, Richard. :smile: I've never used PixInsight, but I'm surprised that it uses DPI as the method for changing resolution, unless it's for sending to a printer. You could argue that the screen can be treated as a printer with a fixed resolution, but the default assumption of 72 DPI for screens isn't true any more for modern displays, so it all becomes needlessly confusing. For changing resolution it's more logical to me to just specify the x, y pixel size of the image you want, or to specify the change as a percentage in x and y. This makes it much clearer how the image is being resampled and whether you lose any actual image detail.

Alan

Yes, I'd just use the percentage to change the resolution; there's no need to use the resolution box. I only started using the resolution/DPI setting in my search for answers. :)

Thanks for your help.

