
Help my brain cells: Planetary camera considerations


Altocumulus


Och - a side effect of getting old :D Brain's not what it used to be....

May I check with the planetary imagers on cameras?

Does the 5 * pixel size ~ telescope f-ratio rule of thumb hold? Or is there a better formulation? I keep reading all the bits on Dawes limits and Nyquist, but nothing sticks on the brain cells, so if someone has a simple, easy-to-use guide I'd appreciate it.

For instance I'm guessing - under the above rule of thumb - that the 178 chip at 2.4µm (5 x 2.4 = f/12, a bit low) and the 462 at 2.9µm (f/14.5, close, but the one I have is colour) are within the native resolution of the SkyMax 180 Pro at f/15.

Similarly the ZWO 174MM at 5.86µm probably needs at least a 1.5x Barlow added to the imaging train? Not too sure I can get focus with a 2x.

I'm likely looking at f/17-20 on the SkyMax with a Crayford and ADC in the train.


Many thanks.

 

I've come across some accounts suggesting 7x for OSC (colour) cameras because of the Bayer matrix, so you're good!

Where some confusion comes from is the CCD suitability calculator in Astronomy Tools, which suggests a Barlow makes things worse. The 174 on the 180 is already to the left of the green band, and adding a Barlow shifts it further to the left... mind you, that's oversampling rather than undersampling.


Hi Geoff,

under ideal circumstances and based on the spatial cut-off frequency, the optimal f-ratio is 3.7 times the pixel size [in microns, for a monochrome camera]. It has been discussed here a few times by myself and @vlaiv, and recently I wrote a second article on a Dutch forum about the subject (with many thanks to Vladimir for the discussions we had on- and off-forum). The nice thing is that oversampling can be verified using ImageJ (Fiji), which is also explained in that article (again, kudos to Vladimir). If you open the article in Chrome it will be translated to English (it has a link to the first article, which explains the topic using the Rayleigh, Dawes and Sparrow criteria and Nyquist sampling).

Mind you, when using a colour camera the effective pixel size is twice the actual pixel size (the rest is interpolated), so for a colour camera the f-ratio equals 2 x 3.7 = 7.4 times the pixel size (theoretically that is for the R and B channels, whilst it is SQRT(2) x 3.7 x pixel size for G).

In order to test the matter further I have recently bought myself a ZWO ASI290MM, which I will use in combination with my C11 EdgeHD at f/20. The camera requires 3.7 x 2.9 = f/10.73, so f/20 is almost double oversampling. At the same time I can now compare it to a ZWO ASI174MM, which requires f/21.83, so that should be slightly undersampled at f/20 (I have only one C11, so the comparison may be affected by changes in seeing). So although I do not expect any wonders from the ASI290MM, the outcome may be of interest to planetary imagers. The next step would be testing it against the ASI290MC, which I have.
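For anyone who wants to plug in their own numbers, here is a minimal Python sketch of the arithmetic above. The 3.7x / 2 x 3.7 / SQRT(2) x 3.7 factors and the 2.9 µm pixel size are the ones from this post; the 5.9 µm figure for the ASI174MM is only implied by the f/21.83 quoted, and f/20 is just the example scope discussed here.

```python
# Sketch of the f-ratio arithmetic above (3.7 x pixel size for mono,
# 2 x 3.7 for R/B and sqrt(2) x 3.7 for G on a colour sensor).
import math

def required_f_ratio(pixel_um, factor=3.7):
    """Required f-ratio for a given pixel size in microns."""
    return factor * pixel_um

scope_f_ratio = 20.0                          # C11 EdgeHD at f/20
for name, pixel in {"ASI290MM": 2.9, "ASI174MM": 5.9}.items():
    needed = required_f_ratio(pixel)          # mono criterion
    print(f"{name}: needs ~f/{needed:.2f}, "
          f"oversampling at f/20 = {scope_f_ratio / needed:.2f}x")

# Colour (Bayer) variants of a 2.9 um pixel:
print(f"2.9 um OSC, R/B: f/{required_f_ratio(2.9, 2 * 3.7):.1f}")
print(f"2.9 um OSC, G  : f/{required_f_ratio(2.9, math.sqrt(2) * 3.7):.1f}")
```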

The article: https://www.starry-night.nl/vergroting-onder-de-loep-deel-2-het-optimale-f-getal-nader-beschouwd/

Nicolàs

 


57 minutes ago, inFINNity Deck said:

under ideal circumstances and based on the spatial cut-off frequency, the optimal f-ratio is 3.7 times the pixel size [in microns, for a monochrome camera] [...]

Thanks Nicolàs, yes, I am aware of the lengthy discussions about the optimum f-ratio for planetary imaging; indeed I joined in with @vlaiv and others at some length just a couple of months ago. However, many of the 'experts' do not follow those recommendations, and I personally have better results with both colour and mono cameras when significantly oversampled. I do not know why this is, but I used to get my best results with the 290MM at around F24, and currently F20/F21 is yielding my best results with the 462MC - both cameras have 2.9 µm pixels. It is very difficult to get side-by-side comparisons in the UK because the conditions, especially seeing, can change significantly even after only a few minutes.

It is particularly interesting that you make the distinction between mono and colour cameras and that the x7+ that I'm currently using with the 462MC is more in line with the x7.4 that you mention for the R and B pixels.

Thanks also for the link to the article, which I will read with interest. I also look forward to your findings with the 290MM.


1 hour ago, Altocumulus said:

Where some confusion comes from is the CCD suitability calculator in Astronomy Tools, which suggests a Barlow makes things worse. The 174 on the 180 is already to the left of the green band, and adding a Barlow shifts it further to the left... mind you, that's oversampling rather than undersampling.

IMHO Astronomy Tools is really only relevant for deep sky (DSO) imaging, not at all relevant for lunar and planetary imaging.


1 hour ago, geoflewis said:

It is particularly interesting that you make the distinction between mono and colour cameras and that the x7+ that I'm currently using with the 462MC is more in line with the x7.4 that you mention for the R and B pixels.

This is one thing I am still making my mind up about. For a single image this idea is valid, but what happens when we stack? When we stack images we may assume that the planet will not be at a stationary position on the chip, due to seeing and tracking errors. Stacking software like AutoStakkert!, which I use, determines the centre of the planet and thereby (I presume) displaces the data by an integer number of pixels. So imagine that frame 2 has a planetary centre that is 1 pixel (or any odd number of pixels) to the left or right and 0 pixels in the up/down direction: the stack will cause two adjacent pixels to be filled with data. Likewise frame 3 could have an odd number of pixels offset up/down as well, and thus will fill adjacent pixels in that direction. If this is how stacking colour images works, then it would mean that colour cameras follow the same f-ratio rule as mono cameras. The only difference would then be that one would need to stack 4 times as much data for R/B (2 times as much for G) to get the same signal-to-noise ratio.
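Purely to illustrate that line of thought - and definitely not how AutoStakkert! is actually implemented - here is a small NumPy sketch that tracks which scene pixels receive a red Bayer sample when frames arrive with random integer offsets and are re-aligned before stacking. The RGGB layout, the 1-D geometry and the offset range are just assumptions for the toy model.

```python
# Toy model: with integer frame-to-frame offsets, the red photosites of an RGGB
# sensor land on different scene pixels in each frame, so an aligned stack
# gradually fills every pixel position with genuine red data.
import numpy as np

rng = np.random.default_rng(0)
width, n_frames = 64, 50
red_hits = np.zeros(width, dtype=int)      # red samples received per scene pixel

for _ in range(n_frames):
    shift = int(rng.integers(-3, 4))       # seeing/tracking displacement in pixels
    sensor_red_cols = np.arange(0, width, 2)          # red sites on even columns
    scene_cols = (sensor_red_cols + shift) % width    # where they land on the scene
    red_hits[scene_cols] += 1

print("scene pixels with at least one red sample:",
      np.count_nonzero(red_hits), "of", width)
print("mean red samples per scene pixel:", red_hits.mean())
```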

Hopefully Vladimir can chime in on this...

Nicolàs

 


For some, the 5x rule seems to be set in stone, or written in the Bible as law.

Same for Nyquist....

Real-world scenarios prove the point.... The fact is, where the sensor is placed also dictates the focal length... So even if in your mind's eye you should be bang on optimal sampling, you may not be... So it's best to test your system.

Chris Go released a couple of comparison images from his new mono 462 and the QHY200 the other day; I presume the 462 still has 2.9µm pixels and the 200 has 4µm... He also has a variable Barlow... Take a look and see.


23 minutes ago, newbie alert said:

The fact is, where the sensor is placed also dictates the focal length...

That of course depends on whether a true Barlow or a Telecentric System (Like a TeleVue PowerMate) is used. The former indeed changes the focal length when the distance between Barlow and camera is changed. In the latter the camera distance hardly affects the magnification.

See: https://www.baader-planetarium.com/en/blog/the-benefits-of-telecentric-systems/

Also see: https://www.televue.com/engine/TV3b_page.asp?id=53&Tab=_app
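As a rough illustration of that difference, here is a short sketch using the commonly quoted thin-lens approximation for a classic Barlow, M = 1 + d/f_B, where f_B is the Barlow element's focal length and d the element-to-sensor distance. The 100 mm focal length and the spacings below are made-up values, not the specs of any particular Barlow or PowerMate.

```python
# Thin-lens approximation for a classic (non-telecentric) Barlow: M = 1 + d / f_B.
# A telecentric amplifier, by contrast, keeps M nearly constant as d changes.
def barlow_magnification(d_mm, f_barlow_mm):
    return 1.0 + d_mm / f_barlow_mm

f_barlow = 100.0                     # hypothetical "2x" Barlow: reaches 2x at d = 100 mm
for d in (80, 100, 130, 160):        # e.g. extra spacing from an ADC or filter wheel
    print(f"d = {d:3d} mm  ->  M = {barlow_magnification(d, f_barlow):.2f}x")
```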

Testing a system is always recommended. Even better is to finish that test with a check in ImageJ as described in my article to see whether or not the image is oversampled. If it is, it is better to reduce the f-ratio as that will result in shorter exposure times...

Nicolàs


I'm only dabbling with planetary imaging, compared to you guys. 😮

I've worked out that my Sky-Watcher Evostar 120 achromatic refractor, using a 2x Barlow and ZWO ASI224MC camera, seems to follow this formula (I think). It gives me an f-ratio of 16.66.

From what I've read, it would seem that the Barlow that would suit me best, based on the pixel size of the camera, is 2.24x magnification. I've got the BST StarGuider 2x Barlow, so that'll suffice.
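A tiny sketch of that calculation, for anyone who wants to repeat it with their own kit: it assumes the 5x pixel-size rule of thumb, the Evostar 120's nominal 1000 mm / 120 mm figures and the ASI224's 3.75 µm pixels, so it lands at ~2.25x rather than the 2.24x quoted, depending on rounding.

```python
# Solve for the Barlow factor needed to reach the rule-of-thumb f-ratio.
pixel_um = 3.75           # ZWO ASI224MC pixel size
native_f = 1000 / 120     # Evostar 120: 1000 mm focal length, 120 mm aperture (~f/8.3)

target_f = 5 * pixel_um   # 5 x pixel-size rule of thumb -> f/18.75
print(f"target f-ratio : f/{target_f:.2f}")
print(f"Barlow needed  : {target_f / native_f:.2f}x")    # ~2.25x
print(f"with 2x Barlow : f/{2 * native_f:.2f}")           # ~f/16.7
```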


2 minutes ago, Ian McCallum said:

I've worked out that my Sky-Watcher Evostar 120 achromatic refractor, using a 2x Barlow and ZWO ASI224MC camera, seems to follow this formula (I think). It gives me an f-ratio of 16.66. [...]

I think you’ll be fine with what you’ve got for starters. Give it a try and good luck.


I've written this many times before, and I think I'm probably getting boring with it by now :D, but here it is one more time:

- The nature of light is such that a circular aperture of limited size produces an image that is band-limited in the frequency domain. There is a maximum level of detail that such an aperture can record. It is a well known fact and a law of physics and there is no way around it (no simple way anyway - there have been some attempts at "super resolution", namely speckle interferometry, but that is simply not applicable to planetary imaging).

This cut-off frequency is given by a simple formula:

https://en.wikipedia.org/wiki/Spatial_cutoff_frequency

cut-off frequency = 1 / (lambda * f_ratio)   [the formula shown at the link above]

- The Nyquist sampling theorem is a mathematical theorem and it is proven correct - there is no arguing about it. It states that if we have a band-limited signal (and with a telescope aperture we do - see the point above) we need to sample it at twice the maximum frequency component in order for our samples to completely and faithfully restore the signal.

Using these two, we can fairly simply derive the rule for the F/ratio of a telescope system.

Pixel size is equivalent to the "wavelength" associated with the needed frequency, and we need to sample twice per wavelength (or every half wavelength - because we need to sample at twice the cut-off frequency), so in the above formula we replace f with 1/(2*pixel_size) and we get:

1/(2*pixel_size) = 1/(lambda * f_ratio)

which we can rearrange to

2*pixel_size = lambda * f_ratio, or f_ratio = 2*pixel_size / lambda

where lambda is the wavelength of the light that we want to record faithfully.

The visual spectrum runs from 400nm to 700nm and we can use 400nm in the above formula, as that is the shortest wavelength that we want to record, and it turns into

f_ratio = 2 * pixel_size / 0.4 (if we use micrometers for pixel size then we must use micrometers for wavelength of light and 400nm is 0.4um).

which further turns into

f_ratio = 5 * pixel_size
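Or, as a couple of lines of Python for anyone who wants to try other pixel sizes or wavelengths - just a sketch of the formula above; the pixel sizes in the loop are the ones mentioned in this thread.

```python
# f_ratio = 2 * pixel_size / lambda, with pixel size and wavelength in the same units.
def critical_f_ratio(pixel_um, wavelength_nm=400):
    """F-ratio at which a given pixel size samples the aperture cut-off at Nyquist."""
    return 2 * pixel_um / (wavelength_nm / 1000.0)   # nm -> micrometres

for pixel in (2.4, 2.9, 5.86):
    print(f"{pixel} um pixels: f/{critical_f_ratio(pixel):.1f} at 400 nm, "
          f"f/{critical_f_ratio(pixel, 500):.1f} at 500 nm")
# At 400 nm the factor is 2 / 0.4 = 5 (the familiar f_ratio = 5 x pixel_size);
# at 500 nm it relaxes to 4 x pixel_size.
```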

Now, why people feel they need to sample at higher sampling rates (slower F/ratios or longer focal lengths) is something that I personally can't explain. All there is to be recorded can be recorded at the sampling rate described above. There is no need to use a higher F/ratio as far as recording the signal goes.

I know that some people feel that they get better results, and that can be due to a number of reasons - like the processing workflow, or simply a feeling that the image is better (we often have a problem comparing two images that are sampled at different sampling rates, and conditions can never be completely the same to exclude other influences).

What I do know is that whenever someone oversamples, the image can be:

a) shown to be oversampled in the frequency domain (a quick sketch of this check follows below)

b) downsampled without loss of information and then upsampled back to the original size while still looking the same
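Here is a minimal NumPy/Pillow sketch of check (a), for anyone who prefers Python to the ImageJ route mentioned earlier; "jupiter_stack.png" is just a placeholder file name and the 99% threshold is an arbitrary illustrative choice.

```python
# Rough frequency-domain oversampling check: if essentially all of the image power
# sits well inside the Nyquist radius, the image is oversampled.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("jupiter_stack.png").convert("L"), dtype=float)  # placeholder
img = img - img.mean()                          # remove DC so it doesn't dominate

power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
cy, cx = np.array(power.shape) // 2
y, x = np.indices(power.shape)
radius = np.hypot(y - cy, x - cx)               # distance from zero frequency
nyquist = min(power.shape) / 2.0

order = np.argsort(radius.ravel())
cum = np.cumsum(power.ravel()[order])
r99 = radius.ravel()[order][np.searchsorted(cum, 0.99 * cum[-1])]
print(f"99% of power within {r99 / nyquist:.0%} of Nyquist "
      "(well under 100% suggests oversampling)")
```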

Not only that, you can even sample at lower sampling rates with virtually no loss of information, for several reasons. I often say that the wavelength of light to be used in the above equation should be 500nm instead:

- we are much more sensitive to luminance than to chrominance, and the peak of our luminance sensitivity is above 500nm. The detail that we see comes mostly from that light

- the atmosphere bends shorter wavelengths much more than longer ones (think rainbow) and seeing affects shorter wavelengths more than longer ones. 400nm is going to be affected much more by the atmosphere than, say, 700nm (this is often exploited by lunar imagers who use Ha narrowband filters to tame the seeing further), so there is a greater chance that we will actually lose information at 400nm

- refractors are optimized for 500-550nm wavelengths (same reason as point 1 of this list) and will produce their best results at those wavelengths.

Furthermore, a difference of 20% in sampling rate is not as big as people often think. We can see this if we take a very detailed image, reduce its size to say 80% using some sophisticated resampling method and then scale it back to 100%. There will be some very small degradation, but not as much as one might think without testing it and seeing it live.
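That round trip is easy to try yourself; a short Pillow sketch, with "moon_closeup.png" as a placeholder file name and Lanczos standing in for "some sophisticated resampling method":

```python
# Resample an image to 80% and back to 100%, then measure what was lost.
import numpy as np
from PIL import Image

original = Image.open("moon_closeup.png").convert("L")   # placeholder file name
w, h = original.size

small = original.resize((int(w * 0.8), int(h * 0.8)), Image.LANCZOS)
restored = small.resize((w, h), Image.LANCZOS)

a = np.asarray(original, dtype=float)
b = np.asarray(restored, dtype=float)
print(f"RMS difference after the 80% round trip: "
      f"{np.sqrt(np.mean((a - b) ** 2)):.2f} on a 0-255 scale")
```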

In the end - OSC vs mono.

Yes, in principle OSC captures at a lower sampling rate than the pixel size suggests because of the Bayer matrix. Red and blue are captured at half the frequency, while green is captured at ~0.7071 (1 over the square root of 2) of what the pixel size suggests, but this only matters for a single image or when the video is debayered prior to stacking.

If one uses AS!3 (and one should), then an algorithm called Bayer drizzle is employed to restore the colour, and it makes up for that lower sampling rate of the OSC camera (in fact, that is one of the rare occasions where the drizzle algorithm works in amateur setups).

For all intents and purposes OSC and Mono+filters can be seen as equal for planetary imaging (except when doing specific stuff like NB or maybe methane or UV).

 

 


  Hi Vladimir, thanks for chiming in.

8 hours ago, vlaiv said:

I've written this many times before, and I think I'm probably getting boring with it by now :D, but here it is one more time:

That is why I poured it into an article... 🙂 but maybe I should have done that in English.... 🤔 I will do so one day on my own website.

 

8 hours ago, vlaiv said:

If one uses AS!3 (and one should), then an algorithm called Bayer drizzle is employed to restore the colour, and it makes up for that lower sampling rate of the OSC camera (in fact, that is one of the rare occasions where the drizzle algorithm works in amateur setups).

That confirms my reasoning, so I will add that to my article.

Nicolàs


12 hours ago, inFINNity Deck said:

That of course depends on whether a true Barlow or a Telecentric System (like a TeleVue PowerMate) is used. The former indeed changes the focal length when the distance between Barlow and camera is changed. [...]

A true telecentric will keep the light beam parallel.

So does that mean my PowerMate is faulty, as my focal length is 380mm shorter than it should be (according to FireCapture)?


10 hours ago, vlaiv said:

I've written this many times before, and I think I'm probably getting boring with it by now :D, but here it is one more time: [...]

Thanks Vlaiv, I know that you've said it many times, but my personal experience (and I think that of many others) says that oversampling gets better results. I have tried both correct sampling and oversampling and my oversampled images are almost always the better ones. I don't understand why - maybe it is easier to obtain accurate focus with a larger oversampled image on screen, or something else - but operating at F21 or F24 gives me better results despite 'correct' sampling being ~F14. I cannot deny the maths/physics that you describe, but equally I cannot deny the evidence of my own eyes.


9 minutes ago, geoflewis said:

but equally I cannot deny the evidence of my own eyes.

I'm not sure I have actually seen an example of evidence to support this.

Could you post an example that shows your findings, or at least explain what you find better in oversampled images (like additional detail revealed, or anything else)?


12 minutes ago, vlaiv said:

I'm not sure I have actually seen an example of evidence to support this.

Could you post an example that shows your findings, or at least explain what you find better in oversampled images (like additional detail revealed, or anything else)?

This is difficult, as I don't have side-by-side captures processed. It may just be a feeling, but the sessions I ran at ~F21 just appear to give better results than the sessions I ran at ~F12. My observing conditions in the UK are very inconsistent, so maybe I was just lucky at F21, but after many sessions trying one or the other (not in the same session) it seems to me that F21 is better.

