
Astronomy tools CCD suitability



3 minutes ago, alex_stars said:

Just to give an example of a possible experiment, I found a very interesting post over at the other forum:

https://www.cloudynights.com/topic/526385-planetary-camera-rant-11-micron-pixels/?p=7067915

and I shamelessly copy the images here as a summary (😀 see the reference above)

So the setup of "Vanguard" is as such:

Their premise is wrong:

Quote

You're really saying: Minimum Pixel Size = F#/3... remarkably close to what I found by experiment (read on).

Quote

Rick's Comment:  A general way of stating this is that pixel size = f#/2.

There is a well-known relationship between aperture size and spatial cut-off frequency:

https://en.wikipedia.org/wiki/Spatial_cutoff_frequency

[Image: the spatial cutoff frequency formula, ν_cutoff = 1 / (λ · N), where N is the focal ratio and λ the wavelength]

Critical (Nyquist) sampling needs two pixels per cycle at that cutoff, i.e. a pixel pitch of λ · N / 2. If you rearrange this for a 3.75 µm pixel size, you get that you need F/15 optics for critical sampling at 500 nm (not F/7 nor F/10 as proposed).
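For anyone who wants to check the arithmetic, here is a quick sketch in Python - it just restates the formula above with the numbers from this post, nothing from the tool itself:

```python
# Cutoff frequency of a perfect aperture: nu_c = 1 / (lambda * N)  [cycles per unit length]
# Nyquist: pixel pitch must be <= 1 / (2 * nu_c) = lambda * N / 2

wavelength_um = 0.5   # 500 nm, expressed in micrometres
pixel_um = 3.75       # pixel pitch of the sensor in question

# F-ratio needed so that this pixel critically samples the diffraction cutoff
required_f_ratio = 2 * pixel_um / wavelength_um
print(f"Required f-ratio for {pixel_um} um pixels at {wavelength_um} um: F/{required_f_ratio:.0f}")
# -> F/15, matching the figure above (and not F/7 or F/10)
```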


2 minutes ago, vlaiv said:

Their premise is wrong

Again, the purpose of my post was not to demonstrate anybody being right or wrong. I wanted to point out that quantifiable results, such as the ones with resolution charts, are the way to discuss these matters. "Be scientific" is the motto.

4 minutes ago, vlaiv said:

There is a well-known relationship between aperture size and spatial cut-off frequency

I think most of us do know that.

So page "5", we are getting there.


1 minute ago, alex_stars said:

I was asking for experiments with quantifiable results so that we can discuss what strategy would be best to work with.

You can simply do experiments with ImageJ and bilinear resampling acting as fractional binning.

Bilinear resampling introduces noticeable blurring.

[Image: patch of Gaussian noise (left) and the same patch shifted by 0.5 px in x and y with bilinear interpolation (right)]

Here is some noise in the left image; the right image is the left one shifted by 0.5 px in both x and y. The blurring is more than evident.
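For reference, something along these lines reproduces the experiment without ImageJ - just a sketch using numpy/scipy, with bilinear interpolation selected via order=1:

```python
import numpy as np
from scipy.ndimage import shift

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(256, 256))

# Shift by half a pixel in x and y using bilinear interpolation (order=1)
shifted = shift(noise, (0.5, 0.5), order=1, mode='wrap')

print("std before shift:", round(noise.std(), 3))    # ~1.0
print("std after shift :", round(shifted.std(), 3))  # roughly halved: high frequencies lost
```

For a half-pixel bilinear shift, each output pixel is the average of four neighbours, so for independent noise the standard deviation drops by roughly half - exactly the blurring visible above.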

An interesting feature of the FFT is that the magnitude spectrum does not change when we translate the image - this lets us calculate the spectrum before and after interpolation and see what sort of filter the interpolation represents:

[Image: frequency spectra of the original and shifted images, their ratio, and a surface plot / profile of the resulting filter response]

Top row: frequency spectra of the original and shifted images, and the image obtained by dividing the first two. Bottom row: surface plot and profile of the resulting filter response.
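Continuing the sketch above, the same filter response can be estimated numerically - again just an illustration, not the exact procedure used for the figure:

```python
import numpy as np
from scipy.ndimage import shift

rng = np.random.default_rng(1)
noise = rng.normal(size=(256, 256))
shifted = shift(noise, (0.5, 0.5), order=1, mode='wrap')

# Translation only changes the phase of the spectrum, so the ratio of the
# magnitude spectra (shifted / original) reveals the frequency response
# of the interpolation filter itself.
spec_orig = np.abs(np.fft.fftshift(np.fft.fft2(noise)))
spec_shifted = np.abs(np.fft.fftshift(np.fft.fft2(shifted)))
response = spec_shifted / spec_orig

centre = response.shape[0] // 2
print("response near DC     :", round(response[centre, centre], 3))  # close to 1
print("response near Nyquist:", round(response[centre, 1], 3))       # close to 0
```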


4 minutes ago, alex_stars said:

So page "5", we are getting there.

Not sure what this means?

One thing is the MTF of the optics and the spatial cut-off frequency without the influence of atmosphere and mount; the PSF in a long exposure is something completely different (there we have seeing and mount performance in addition to the optics).


37 minutes ago, vlaiv said:

You can simply do experiments with ImageJ and bilinear resampling acting as fractional binning

That clearly depends on your fractional binning approach, wouldn't you say? We did not discuss what "weights" the neighbouring cells should get, that is, what kind of fractional binning we attempt.

37 minutes ago, vlaiv said:

Here is some noise in the left image; the right image is the left one shifted by 0.5 px in both x and y

Why do you shift the image by 0.5 px? Were we not trying to fractionally bin the image?


38 minutes ago, vlaiv said:

Not sure what this means?

One thing is the MTF of the optics and the spatial cut-off frequency without the influence of atmosphere and mount; the PSF in a long exposure is something completely different (there we have seeing and mount performance in addition to the optics).

the page "5" was just a stupid joke from me on how we progress in the discussion.


20 minutes ago, alex_stars said:

That clearly depends on your fractional binning approach, wouldn't you say? We did not discuss what "weights" the neighbouring cells should get, that is, what kind of fractional binning we attempt.

I showed two ways of doing it above.

One is to imagine the pixels as squares (given that they sample over a surface and are spaced on a rectangular / square grid, although, as we have seen, pixels on a sensor are not really squares - they can be oval or have rounded corners) and then assign each one a weight according to the surface covered by the large, binned square.

The alternative is to treat pixels as sampling points, locate the binned pixel's position, and assign weights based on the inverse distance to the surrounding pixels.

Either scheme maps exactly to the proper binning expression when doing integer binning. The binned pixel, as a square, "covers" all 4 squares of the binned group, so each has a weight of 1, or 1/4 if we are averaging.

Also, if we treat pixels as sample points, we put the binned pixel smack in the middle of those 4 pixels - the distances are the same, so again we have equal weights of 1 or 1/4 depending on whether we sum or average.
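To make the first (area-overlap) scheme concrete, here is a minimal Python sketch - my own illustration of the idea, not the only way to implement it. It also checks that an integer factor collapses to plain average binning:

```python
import numpy as np

def overlap_weights(n_in, factor):
    """1-D area-overlap weights: row j holds the overlap of output cell
    [j*factor, (j+1)*factor) with each input cell [i, i+1), normalised to average."""
    n_out = int(n_in // factor)
    w = np.zeros((n_out, n_in))
    for j in range(n_out):
        lo, hi = j * factor, (j + 1) * factor
        for i in range(n_in):
            w[j, i] = max(0.0, min(hi, i + 1) - max(lo, i))
    return w / factor

def fractional_bin(img, factor):
    # Pixels treated as little squares; the weights are separable, so apply per axis.
    wy = overlap_weights(img.shape[0], factor)
    wx = overlap_weights(img.shape[1], factor)
    return wy @ img @ wx.T

img = np.arange(16.0).reshape(4, 4)

# Integer factor: the area weights collapse to equal 1/4 weights,
# so the result matches ordinary 2x2 average binning exactly.
plain_2x2 = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(np.allclose(fractional_bin(img, 2), plain_2x2))  # True

# Fractional factor: neighbouring pixels get weights proportional to covered area.
print(fractional_bin(img, 1.5))
```

Note the trailing partial bin is simply dropped here; a real implementation would have to decide how to handle the image edges.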

28 minutes ago, alex_stars said:

Why do you shift the image by 0.5 px? Were we not trying to fractionally bin the image?

I just wanted to show the properties of bilinear interpolation.

Shifting an image should not change its content - it should remain the same, just translated.

All practical interpolation algorithms blur the image a bit when it is shifted - the question is by how much.

The only interpolation that does not alter the image / function, provided the function is properly sampled, is sinc interpolation. The problem is that sinc interpolation is "infinite" in spatial extent, so we would need an infinitely large image to apply it.

This is due to the fact that we actually have a band-limited image, and if an image is band limited, it is infinite in spatial extent. Luckily for us, the values far away from, say, a star in the image rapidly tend toward zero and are swamped by noise. That is why we can use sensors that are finite in size and still get a normal-looking image.

It also means that we can use approximations to the sinc function that are finite in spatial extent and still get good results.
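As a small illustration of that point (my own sketch, assuming a periodic image so an FFT-based shift acts as exact sinc interpolation):

```python
import numpy as np
from scipy.ndimage import shift, fourier_shift

rng = np.random.default_rng(2)
noise = rng.normal(size=(256, 256))

# Half-pixel shift with bilinear interpolation: noticeable loss of high frequencies
bilinear = shift(noise, (0.5, 0.5), order=1, mode='wrap')

# Half-pixel shift done in the Fourier domain: exact sinc interpolation
# for a periodic image, so the content is preserved
sinc_shift = np.fft.ifft2(fourier_shift(np.fft.fft2(noise), (0.5, 0.5))).real

print("original std:", round(noise.std(), 3))
print("bilinear std:", round(bilinear.std(), 3))    # clearly reduced
print("sinc/FFT std:", round(sinc_shift.std(), 3))  # essentially unchanged
```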

You can see a bit more on that here:

Here is the important bit:

[Image: comparison of the low-pass filter responses of different resampling kernels]

This is a comparison of the different low-pass filters that resampling algorithms represent. These are just polynomials: linear, cubic, quintic and so on.

The point is to approach the "box" filter - one that does not change any frequency within the sampled range.

All of this shows that bilinear interpolation has the least desirable properties for image manipulation, and since it is mathematically equivalent to fractional binning, fractional binning shares those undesirable properties.

What is the point of doing fractional binning if you are going to additionally blur the image and again lower its optimum sampling rate - why not go to that lower sampling rate with regular binning straight away?

38 minutes ago, alex_stars said:

the page "5" was just a stupid joke from me on how we progress in the discussion.

I'm not sure we are progressing on the original topic. Yes, the discussion is developing, but we are not really discussing the tool that describes scope / camera suitability for long-exposure astrophotography (or its EEVA extension, maybe even planetary).

I proposed a model that can be used for all three.

In long-exposure astrophotography, all three are important: seeing, mount performance and aperture size.

In EEVA, we might not need to model the mount if exposures are short enough that it does not play a part - that leaves seeing + aperture size.

In planetary, we use short exposures to beat the seeing - that leaves only aperture size for determining the proper pixel size if we want optimum sampling.

The only drawback for planetary in the proposed model is that we approximate the Airy pattern with a Gaussian curve. That lets us easily calculate the total blur - otherwise we would need to calculate it in the frequency domain (MTF × FT of the seeing Gaussian × FT of the mount-precision Gaussian).

For planetary, we should use just the Airy pattern / spatial cutoff frequency to get the most accurate results.
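To make the long-exposure part of that model concrete, here is a rough sketch of the quadrature combination - the input FWHM values and the Gaussian approximation of the Airy disk are illustrative assumptions, not numbers from the tool:

```python
import numpy as np

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

# Illustrative inputs, not numbers from the thread:
seeing_fwhm = 2.0        # arcsec, long-exposure seeing FWHM
mount_rms = 0.5          # arcsec, guiding error treated as a Gaussian sigma
aperture_mm = 200.0
wavelength_nm = 500.0

# Gaussian approximation of the Airy pattern: FWHM ~ 1.02 * lambda / D
airy_fwhm = 1.02 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265.0  # arcsec

# Independent Gaussian blurs combine by adding sigmas in quadrature
sigma_total = np.sqrt((seeing_fwhm * FWHM_TO_SIGMA) ** 2
                      + mount_rms ** 2
                      + (airy_fwhm * FWHM_TO_SIGMA) ** 2)
total_fwhm = sigma_total / FWHM_TO_SIGMA

print(f'aperture blur FWHM: {airy_fwhm:.2f}"')
print(f'total blur FWHM   : {total_fwhm:.2f}"')
# The recommended pixel scale would then be total_fwhm divided by some factor;
# picking that factor is exactly what the proposed model needs to settle.
```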


14 hours ago, vlaiv said:

Sometimes even x4.

I've posted an M51 taken with that setup that is best sampled at bin x4. The same goes for this image, for example, taken on a night of rather poor seeing:

[Image: Pacman Nebula capture]

Just look at the size of those stars - very bloated. I binned this x3 (it is something like 1500 x 1100 px), while in reality it is better suited to bin x4.

Hi Vlaiv

If you bin that high (x4) then you would end up with an image of only approximately 1100 x 800 pixels. Most imagers wouldn't be happy with such a small number of pixels, as the image won't fill the screen of a 1080p monitor, let alone allow for any level of zoom.

If the tool is going to suggest a level of binning, I think it would be good if it clearly explained the number of pixels the camera is effectively reduced to, as most beginners probably won't know what binning is. And maybe a warning of some kind once the number of pixels drops below ~2 MP, the amount needed to display an image full screen at 1080p resolution.
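Something like this little sketch is the kind of feedback I mean - the sensor dimensions are example values only, chosen so that bin x4 lands near the 1100 x 800 figure above:

```python
def binned_resolution(width_px, height_px, bin_factor, full_hd_px=1920 * 1080):
    # Effective resolution after binning, flagged when it falls below 1080p
    w, h = width_px // bin_factor, height_px // bin_factor
    warn = "" if w * h >= full_hd_px else "  <- below full-screen 1080p"
    return f"bin x{bin_factor}: {w} x {h} ({w * h / 1e6:.1f} MP){warn}"

for factor in (1, 2, 3, 4):
    print(binned_resolution(4656, 3520, factor))
```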


1 minute ago, Xiga said:

Hi Vlaiv

If you bin that high (x4) then you would end up with an image of only approximately 1100 x 800 pixels. Most imagers wouldn't be happy with such a small number of pixels, as the image won't fill the screen of a 1080p monitor, let alone allow for any level of zoom.

If the tool is going to suggest a level of binning, I think it would be good if it clearly explained the number of pixels the camera is effectively reduced to, as most beginners probably won't know what binning is. And maybe a warning of some kind once the number of pixels drops below ~2 MP, the amount needed to display an image full screen at 1080p resolution.

How things have changed over the years.

Just a decade ago, cameras like the Atik 314 were perfectly suitable beginner CCD cameras, and those have only ~1400 x 1000 resolution.

Now we are not happy if we can't make a Full HD image :D

Mind you, most DSOs are in fact small enough to fit in a 1000 x 1000 image. Even if we somehow manage to sample at 1"/px, 1000 px spans almost 17 arc minutes - roughly a quarter of a degree, or about half the diameter of the full Moon.


2 minutes ago, alex_stars said:

Thanks for the additional input @vlaiv. I will continue to think about fractional binning and how best to implement it. Now I don't want to hijack this thread further, so I'll let you guys focus on astronomy.tools.

Here is one more quick point about that. I found it very useful to simply build a test image with two elements: a Gaussian PSF and a block of random (Gaussian) noise.

Whatever you do to that image will be easily measurable: a change in the FWHM of the Gaussian profile indicates any loss of resolution, and the change in standard deviation over the noise block represents the SNR gain.
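A minimal version of that test image, as a sketch in Python - the measurements here are deliberately crude, just to show the idea:

```python
import numpy as np

# Test image pieces: a Gaussian "star" and a block of Gaussian noise
size, sigma_psf = 128, 2.0
y, x = np.mgrid[0:size, 0:size]
star = np.exp(-((x - size / 2) ** 2 + (y - size / 2) ** 2) / (2 * sigma_psf ** 2))

rng = np.random.default_rng(3)
noise_block = rng.normal(0.0, 1.0, size=(size, size))

def crude_fwhm(profile):
    """Very rough FWHM estimate: count samples at or above half the peak."""
    return np.count_nonzero(profile >= profile.max() / 2.0)

print("star FWHM (px):", crude_fwhm(star[size // 2]))  # ~2.355 * sigma_psf, about 5 px
print("noise std     :", round(noise_block.std(), 3))  # ~1.0

# Run any binning / resampling scheme on both pieces and re-measure:
# a larger FWHM means lost resolution, a smaller std means SNR gained.
```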

