
Pixel scale


StuartT


5 minutes ago, vlaiv said:

Once you measure FWHM, you have an idea of what the Goldilocks pixels are in the above sense: take the FWHM, divide that value by 1.6, and that gives you the sampling rate you should be aiming for (take into account your focal length and/or any focal reducers to get the wanted pixel size, then bin accordingly, or replace the camera if that makes more sense, or, as a last option, don't bother :D if you can live with the SNR loss due to oversampling).

 

Ok thanks, I think I'm starting to get it -- slooooowly...

I've taken a friend's data to analyse. Their sampling rate is 1.29"/px and the measured FWHM is 3.255". Divide by 1.6 and we get 2.03"/px, so they're a bit oversampled. But if I bin the data, the sampling rate becomes 2.59"/px, so they're a bit undersampled. In this case: to bin or not to bin?
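For anyone following along, the arithmetic above is easy to script. This is just a sketch of the rule of thumb from this thread; the x2 bin factor and the idea of comparing each scale against the target are my own framing:

```python
# Rule of thumb from this thread: target sampling rate = FWHM / 1.6
fwhm_arcsec = 3.255          # measured star FWHM, in arcseconds
target = fwhm_arcsec / 1.6   # ~2.03 "/px, the "Goldilocks" rate

native = 1.29                # "/px, unbinned
binned = native * 2          # "/px after 2x2 binning (~2.58; the thread's
                             # 2.59 suggests the native scale is 1.295)

for label, scale in [("native", native), ("binned x2", binned)]:
    verdict = "oversampled" if scale < target else "undersampled"
    print(f'{label}: {scale:.2f}"/px -> {verdict} (target {target:.2f}"/px)')
```

On these particular numbers, the binned scale actually sits slightly closer to the 2.03"/px target than the native scale does.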


Just now, Lee_P said:

But if I bin the data, the sampling rate becomes 2.59"/px, so they're a bit undersampled. In this case: to bin or not to bin?

I think I would rather go a little undersampled than a little oversampled.

Usually the difference from x2 in sampling rate is really not that big in level of detail: it is noticeable, but barely so (provided that one image is oversampled and the other undersampled; it is much more obvious if both are undersampled). To me, the SNR gain is simply a better deal than having a larger image.

Captured detail does not automatically translate into a sharp image; it is only the potential for a sharp image. Even a properly sampled (and even an undersampled) image must be sharpened a bit (to be closer to the true image, undistorted by the optical elements), and how much you can sharpen also depends on how good your SNR is.

I'd rather have a slightly undersampled image with higher SNR that I can sharpen a bit more, than the potential for all the detail without being able to sharpen as much as I want because of noise issues.
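The SNR side of that trade-off is easy to demonstrate: averaging 2x2 blocks of uncorrelated noise halves the noise, i.e. doubles the SNR of a flat signal. A small synthetic sketch (the signal level, noise level, and image size here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0
noise_sigma = 10.0
img = signal + rng.normal(0.0, noise_sigma, size=(1000, 1000))

# Software 2x2 binning: average each 2x2 block of pixels
binned = img.reshape(500, 2, 500, 2).mean(axis=(1, 3))

snr_before = signal / img.std()
snr_after = signal / binned.std()
print(f"SNR gain from 2x2 binning: {snr_after / snr_before:.2f}x")  # ~2x
```

The factor-of-two gain only holds when the noise is uncorrelated between pixels; binning already-interpolated (e.g. debayered) data gains less.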


@vlaiv I'd be interested to get your thoughts on this video that just popped up in my feed. The author seems to be comparing pixel scales and suggesting that the finer pixel scale is better, but there's no mention of matching working resolution to pixel scale 🥴

 

 


23 minutes ago, Lee_P said:

@vlaiv I'd be interested to get your thoughts on this video that just popped up in my feed. The author seems to be comparing pixel scales and suggesting that the finer pixel scale is better, but there's no mention of matching working resolution to pixel scale 🥴

Both are oversampled, and that can easily be seen.

The difference in sharpness between the two images does not come from pixel scale; it comes from the sharpness of the optics.

The RASA is simply not a diffraction-limited system. The Quattro might also be of lower quality than diffraction-limited, but that depends on the coma corrector used. As is, the Quattro is diffraction-limited in the centre of the field (there is a coma-free zone in the centre, which is rather small for such a fast system). When you add a coma corrector, things regarding coma improve (obviously), but a CC can introduce other aberrations (often spherical) and lower the sharpness of the optics.

In any case, the difference between the two systems is not down to pixel scale; it is down to the sharpness of the optics.

Here is an assessment of the RASA 11 system and its sharpness:

image.png.dc688fb7c8095337664c5f970c59ccb2.png

(This is taken from the RASA white paper: https://celestron-site-support-files.s3.amazonaws.com/support_files/RASA_White_Paper_2020_Web.pdf , appendix B.)

For 280mm of aperture, the size of the Airy disk is ~1", or ~3µm (at 550nm), while the RMS of that pattern is about 2/3 of that:

image.png.269949d065ddda563f23e22f8d946a28.png

(source: https://en.wikipedia.org/wiki/Airy_disk)

In other words, the RMS spot of a diffraction-limited 11" aperture should be about 2µm.

So the RASA 11 produces a star image twice as large as a diffraction-limited scope would, even before any influence of the mount or the atmosphere.

(And the above is given for a perfect telescope with zero manufacturing aberrations, not for production units, which are not quite as good as the model.)
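Those Airy-disk numbers can be checked from first principles. The sketch below assumes a 620mm focal length for the RASA 11 (its nominal f/2.2 figure) and uses the standard 1.22·λ/D formula for the angular radius of the first Airy minimum:

```python
import math

wavelength = 550e-9    # m, green light
aperture = 0.280       # m, 11" aperture
focal_length = 0.620   # m, assumed nominal RASA 11 focal length (f/2.2)

# Angular radius of the first Airy minimum: theta = 1.22 * lambda / D
theta = 1.22 * wavelength / aperture                  # radians

diameter_arcsec = 2 * math.degrees(theta) * 3600      # ~1 arcsecond
diameter_um = 2 * theta * focal_length * 1e6          # ~3 micrometres at the focal plane
rms_diameter_um = (2 / 3) * diameter_um               # ~2 um, using the 2/3 ratio above

print(f'Airy disk: {diameter_arcsec:.2f}" = {diameter_um:.1f} um, RMS ~{rms_diameter_um:.1f} um')
```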

By the way, there is a simple way to see what a properly sampled image looks like: just take an HST image of the same object at the scale you are looking at. Such an image will contain all the information that can be contained at the given sampling rate; that is the upper limit. If your image looks anywhere close to that, then you sampled properly and did not oversample.

Look at this (although it is not an HST image, it is still not oversampled at the given resolution, so you can see the difference):

image.png.5fc06679f1e58b4314cd8bfc2b5c7ada.png

Left is the sharper image of NGC7331 from the video, and right is an example of how sharp the image would be if properly sampled (not oversampled at this scale). I think the difference is obvious.


Here is another interesting bit: look what happens if I reduce the sampling rate of both images equally by 50%:

image.png.01a0678245fe280f144836160a7710f6.png

Now they start to look more alike, right? This means their information content is converging (the reference image lost some information due to the lower sampling, while the image from the video did not have it to begin with, so they are now closer in information content).

This shows that the image in the video was at least x2 oversampled.
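The same test can be run numerically: if an image is genuinely oversampled (band-limited well below Nyquist), binning it down and scaling it back up loses very little, whereas a critically sampled image is visibly degraded by the round trip. A synthetic sketch of that idea; the cutoff frequency, image size, and box-bin/nearest-neighbour resampling choices are all my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
critically_sampled = rng.normal(size=(n, n))  # detail at every spatial frequency

# "Oversampled" version: low-pass away everything near the Nyquist frequency
freqs = np.abs(np.fft.fftfreq(n))
keep = (freqs[:, None] < 0.15) & (freqs[None, :] < 0.15)
oversampled = np.fft.ifft2(np.fft.fft2(critically_sampled) * keep).real

def bin2(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up2(img):
    return np.kron(img, np.ones((2, 2)))  # nearest-neighbour upscale

def loss(img):
    # relative error after binning x2 and scaling back up
    return np.std(up2(bin2(img)) - img) / np.std(img)

print(f"oversampled image loss:        {loss(oversampled):.2f}")
print(f"critically sampled image loss: {loss(critically_sampled):.2f}")
```

The oversampled image survives the round trip with much smaller relative error, which is exactly why the two downsampled images start to look alike.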


@vlaiv I have had this one question for a while and this thread seems to be the place for it. How does the FWHM / 1.6 rule take into account different monitor resolutions and observer preferences? Surely there is a subjective portion to this theory as well, since there is no such thing as a typical monitor or typical observer, and both of those things will greatly affect how the image is seen and appreciated.

I ask because there is a huge difference between looking at my images on my main monitor, which is a 27'' 1440p monitor at roughly 1.5 arm lengths' distance, and on my mobile smart devices (phone and tablet, both with very high PPI displays). Personally I am finding it very difficult to process an image at the /1.6 scale, and furthermore find it difficult to actually see the detail in an image processed this way, especially on the mobile devices. I find something between FWHM/1.8 and FWHM/2.2, or even up to FWHM/2.4, acceptable.

Example images of what I'm talking about below. The image below is represented at roughly FWHM/2.4, if I recall correctly. The data was really not good, but still made an appreciable image. Personally I have trouble seeing all the detail if the image is made any smaller than this size, regardless of whether it could be downsampled a bit and then brought back up to full resolution with no loss in detail.

NGC4725-closeup.jpg.a4a96f7bf2269574a010911fd7eae065.jpg

Here is a more apples-to-apples comparison: an image of M51 where I did fresh processes of the same dataset at different resolutions. The stars have something weird going on, not at all sure what. The night was a dew nightmare, so I am guessing some purely cosmetic issue arose from that (the data was between 2.0-2.4'' FWHM despite the hideous stars).

First try, close to the 1.6 rule:

M51-firsttry-crop.jpg.9a743513cbdaacfc0e6035de47c6f36f.jpg

Another version at a higher resolution, probably closer to FWHM/2:

M51-thirdtry-crop.thumb.jpg.0ac896ed3d62847bc937f8653d96a862.jpg

To my eyes the second one looks much sharper. For whatever reason I had an easier time working on the higher-resolution image and could sharpen it much further. Why do you think that is? Is it that with properly sampled data one has very little leeway in how exactly to apply the sharpening, and so can easily do it wrong?


6 hours ago, ONIKKINEN said:

@vlaiv I have had this one question for a while and this thread seems to be the place for it. How does the FWHM / 1.6 rule take into account different monitor resolutions and observer preferences? Surely there is a subjective portion to this theory as well, since there is no such thing as a typical monitor or typical observer, and both of those things will greatly affect how the image is seen and appreciated.

FWHM / 1.6 does not address the display side of things at all, because it does not need to.

It is concerned only with image acquisition, and it answers the question: what should the sampling rate be at acquisition in order to capture all the data there is?

The viewing side has been made difficult partly by ever-increasing screen resolutions, which are really a marketing trick more than anything else.

Let's do some math to understand the effective limit of display resolution versus what is actually manufactured and sold.

Most people have visual acuity of 1 minute of arc. https://en.wikipedia.org/wiki/Visual_acuity

(see table and MAR column - "minimum angle of resolution")

Very few people have sharper vision than that. That should really be the size of a pixel when viewed from a given distance, as it represents this gap:

image.png.cbd88328edf8a4dcad27d94e4cad5702.png

(You can see that the letters used to determine visual acuity are made out of square bits of equal size, and you need to be able to resolve a black or white bit of that size in order to recognize the letter, so the minimum size of such a bit is one pixel, either white or black.)

Now, let's put that into perspective for viewing a computer screen and viewing a mobile screen.

Let's assume that we use a computer screen from a distance of at least 50cm. At 50cm, 1 arc minute corresponds to 0.14544mm.

If we turn that into a DPI/PPI value, it comes out as 25.4 / 0.14544 ≈ 174.

You don't need a computer screen with more than ~174 DPI, as most humans can't resolve pixels that small. In fact, most computer screens are about 96 DPI, with pixels not even that small, and we still don't easily see pixelation.

Phone screens are a different matter. Their resolutions keep increasing, but we have no need for them. If we apply the same logic as above and say that we use smartphones 25cm from our eyes, we arrive at an upper limit of about 300-350 DPI.

If you do a Google search for smartphones with the highest PPI, you will find that the top 100 all exceed that, ranging from 400-600 PPI, which is just nonsense: the human eye can't resolve pixels that small. Or it could, but you would need to keep the phone 10cm away from your eyes, and even newborns might have issues focusing that close (in fact, I think babies can perhaps focus at 10cm, but that ability goes away quite soon after birth).

Ok, so computer screens are fine as far as resolution goes, but mobile phones are not: they have smaller pixels than needed.
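The arithmetic behind those two DPI limits is a one-liner. A sketch, assuming the 1-arc-minute acuity figure and the 50cm / 25cm viewing distances used above (the small difference from the 174 quoted above is just rounding):

```python
import math

def max_useful_ppi(viewing_distance_m, acuity_arcmin=1.0):
    """Finest pixel density a viewer with the given acuity can resolve."""
    # size of the smallest resolvable pixel at this distance, in mm
    pixel_mm = viewing_distance_m * math.radians(acuity_arcmin / 60) * 1000
    return 25.4 / pixel_mm  # 25.4 mm per inch

print(f"monitor at 50cm: {max_useful_ppi(0.50):.0f} PPI")  # ~175
print(f"phone at 25cm:   {max_useful_ppi(0.25):.0f} PPI")  # ~349
```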

Further, to answer your question about viewing: you need to say what display style your app uses. Does it scale the photo to the whole screen of the device, to part of the screen, or does it use some sort of zoom-and-pan feature?

These are all different scenarios, and the size of the image presented to you will vary, depending on the actual pixel count of the image versus the pixel count of the display device.

I always tend to look at images at 100% zoom level, which means that one pixel of the image is mapped to one pixel of the display device. Most people don't really think about that and view the image as presented by the software. But in either case, you as a viewer have control over how you view the image, and you can select the way that best suits you and your viewing device.

You don't have control over how the image was captured, so it is best to do that in the optimal way as far as amount of detail is concerned (or to optimize some other property, like sensor real estate in the case of wide-field images).

6 hours ago, ONIKKINEN said:

To my eyes the second one looks much sharper. For whatever reason I had an easier time working on the higher-resolution image and could sharpen it much further. Why do you think that is? Is it that with properly sampled data one has very little leeway in how exactly to apply the sharpening, and so can easily do it wrong?

Don't really know why, but here, look at this:

image.png.60a088f75f1b913f1f007361e1cf754c.png

Left is your larger version and right is your smaller version (already stretched, in 8 bit), which I took and gave a simple touch-up in Gimp: resized to the same size and applied some sharpening.

Now the difference is not so great, right? Yes, the left image still looks a bit better, but I was working on already-stretched data saved as 8-bit JPEG.

I don't mind people oversampling if they choose to do so, but you can't beat the laws of physics. If you want to oversample because that somehow makes it easier to apply processing algorithms and produce a better image, then do that; just be aware of what you are giving up (SNR) and what you won't be able to achieve (showing detail that is simply not there).

