

ASI 178MC v ASI 224MC v ZWO ASI 462MC for planetary on a C8 Edge


iwols

Recommended Posts

 

Hi, just having a switch of cameras and wondered which of these three you would recommend for planetary imaging on a C8 Edge. I'm using a 178MC at the minute but this needs to go back in my all-sky camera. Or any other camera you'd recommend, thanks.


All are superb cameras to be honest.  

Ranking them by minimum read noise (i.e. max gain) you've got:

1. 462c = 0.5 e-rms

2. 224c = 0.85 e-rms

3. 178c = 1.38 e-rms

None of them have excessive read noise and so all three are suitable, but for me personally the lower the read noise the better.  What it means practically is that it takes fewer frames stacked to get those nice smooth images.

The 178 does have the largest sensor, so that's a plus for it. It also has the smallest pixels, but you would be slightly oversampling at the native FL of the C8.

The 224 has the largest pixels and you'd need a 1.3x barlow to get to the sweet spot.  The 462c sits in the middle of the pixel sizes and you'd be spot on for sampling with the native FL of the C8...  No barlow needed, so that eliminates a piece of glass and some weight from the optical train. Also, finding things and tracking will be easier without a barlow.
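Rough numbers, if it helps - a quick sketch of the "two pixels per cycle at the diffraction cutoff" rule. The pixel sizes are the published specs for these sensors; the ~600 nm wavelength is just an assumption, and choosing a different wavelength shifts the figures a little:

```python
# Critical-sampling focal ratio: F = 2 * pixel_size / wavelength
# (two pixels per cycle at the diffraction cutoff frequency 1 / (lambda * F)).

wavelength_um = 0.6  # assumed ~600 nm; green light would push the F-ratios up a bit

pixel_sizes_um = {
    "ASI178MC": 2.4,
    "ASI224MC": 3.75,
    "ASI462MC": 2.9,
}

native_f_ratio = 10.0  # C8 Edge at native focal length

for name, pixel_um in pixel_sizes_um.items():
    f_required = 2 * pixel_um / wavelength_um
    barlow = f_required / native_f_ratio
    print(f"{name}: needs ~f/{f_required:.1f} ({barlow:.2f}x of native f/10)")
```

That gives roughly f/8 for the 178 (so native f/10 oversamples a touch), f/12.5 for the 224 (hence a ~1.3x barlow) and f/9.7 for the 462 (near enough spot on at native).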

So I guess my vote goes for the 462c.... I have one (QHY version) and I think it's superb.

Oh, the other thing is that the 462's colour filters are effectively transparent on all pixels past ~800nm, so when using an IR-pass filter the camera behaves like a mono camera.


14 hours ago, iwols said:

Thanks Craig. What is the QHY version please?

It is this one... https://www.modernastronomy.com/shop/cameras/lunar-planetary/qhy-lunar-planetary/qhy5iii462c-planetary-and-nir-imaging-camera/

I use the QHY version because its shape allows a bit more inward focus travel, which is useful with Newtonians. The body of the camera can be slid right down into the focuser to get the sensor further in to reach focus.

With an SCT though you won't have any inwards focus issues as you've got loads of focus travel. 


4 hours ago, CraigT82 said:

Nope... amazing images can be produced with any sized pixels, as long as they are appropriate to the focal ratio used! 

How do I work out that the 462 would be appropriate for my C8 Edge, Craig, as I'm just about to push the button 👍


On 11/08/2021 at 16:39, CraigT82 said:

The 224 has the largest pixels and you'd need a 1.3x barlow to get to the sweet spot.  The 462c sits in the middle of the pixel sizes and you'd be spot on for sampling with the native FL of the C8...

Please Craig, could you explain why the 1.3 times Barlow would hit the sweet spot for the asi224mc and C8 Edge?

(Apologies OP for asking this)

 


46 minutes ago, Peter_D said:

Please Craig, could you explain why the 1.3 times Barlow would hit the sweet spot for the asi224mc and C8 Edge?

(Apologies OP for asking this)

 

That's a good question.  I got that figure using a method for calculating optimum sampling which is based on having 2 pixels per cycle at the spatial cutoff frequency (https://en.wikipedia.org/wiki/Spatial_cutoff_frequency). I recently made a spreadsheet which gives the F-ratios required for a given pixel size and wavelength of light, based on this method (attached).  BTW this method of calculation is something that I learned from @vlaiv.

As you can see from the link it assumes perfect optics and perfect seeing, so even these focal ratios may be a bit overkill for our slightly imperfect scopes and really imperfect atmosphere!

Others will advocate for longer focal ratios and that's absolutely fine, there are a few ways and methods for calculating the ideal sampling for any given scope or camera and everyone is free to decide on which way they think is best.
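Spelled out, that method boils down to the following (taking 0.6 µm as an example wavelength - the spreadsheet lets you pick the value):

```latex
% Diffraction cutoff frequency in the focal plane, Nyquist sampling, and the required F-ratio:
\nu_c = \frac{1}{\lambda F},
\qquad
p \le \frac{1}{2\,\nu_c} = \frac{\lambda F}{2}
\quad\Longrightarrow\quad
F \ge \frac{2p}{\lambda}
% Example: ASI224MC with p = 3.75 um and lambda = 0.6 um needs
% F >= 2 x 3.75 / 0.6 = 12.5, i.e. roughly a 1.25-1.3x barlow on the native f/10 C8.
```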

 

Planetary Imaging Sampling Calculator (2).xlsx


18 hours ago, CraigT82 said:

Others will advocate for longer focal ratios and that's absolutely fine, there are a few ways and methods for calculating the ideal sampling for any given scope or camera and everyone is free to decide on which way they think is best.

@Peter_D

I'd like to point out that using a different method and getting a different result is a bit like saying: I'm using a different method to solve a quadratic equation and I'm getting different results - but I think that way is the best :D

The above approach is directly related to the fact that the aperture produces an Airy pattern at the focal plane (physics of light), and that the Airy pattern acts as a low-pass filter with the above-mentioned cutoff frequency (the Fourier transform of the Airy pattern shows this). As such, the image is a band-limited signal, and according to the Nyquist theorem you need to sample it at twice the highest frequency component of that band-limited signal.


58 minutes ago, vlaiv said:

according to the Nyquist theorem

Doesn't the Nyquist theorem only apply to continuous signals?

It seems to me that imaging produces a discrete result: all the photons gathered in a given time.


21 minutes ago, pete_l said:

Doesn't the Nyquist theorem only apply to continuous signals?

It seems to me that imaging produces a discrete result: all the photons gathered in a given time.

There is no difference, as the photon rate is a continuous signal. The photon rate, and thus the sampling of it, does not depend on light intensity or exposure duration - it is only the measurement noise that depends on these, and noise is not signal - it does not act as signal as far as the optical configuration is concerned and is not band limited - it has a rather uniform frequency spectrum.

Imaging produces a photon rate, if one chooses to measure it - which is not a whole number but a fraction. One just needs to divide the number of photons captured by the exposure duration. In the limiting case, as exposure -> infinity, the measured photon rate -> true photon rate (the noise goes to 0).
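As a toy illustration of that limit (a hypothetical example, nothing to do with any particular camera): simulate Poisson photon arrivals at a fixed true rate and watch the measured rate converge as the exposure grows.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 123.4  # "true" photon rate in photons per second (arbitrary example value)

for exposure_s in (0.01, 0.1, 1.0, 10.0, 100.0, 1000.0):
    # Photon counts are Poisson distributed around true_rate * exposure...
    counts = rng.poisson(true_rate * exposure_s)
    # ...so the estimated rate (counts / exposure) is noisy for short exposures
    # and approaches the true rate as the exposure duration grows.
    print(f"{exposure_s:8.2f} s  ->  measured rate {counts / exposure_s:8.2f} photons/s")
```

The noise (the scatter around 123.4) shrinks as the exposure grows, but the quantity being estimated is the same continuous rate throughout.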


6 minutes ago, vlaiv said:

Imaging produces photon rate

But the camera does not process each photon individually. The other problem is that Nyquist assumes the signal is correlated. Photons are not. At best they are a series of impulses with random arrival times and amplitudes.


1 minute ago, pete_l said:

But the camera does not process each photon individually. The other problem is that Nyquist assumes the signal is correlated. Photons are not.

But yes they are correlated.

If you want to go into the realm of quantum mechanics and discuss individual photons - then look at, for example, the double slit experiment. Although individual photon landing positions are very random - they are actually governed by the wave function interference pattern. Which means that there is a strong correlation between photon detection and wave shape - where there is complete destructive interference, no photons can be detected.

Similar to this - the Airy pattern is the round aperture equivalent of the interference pattern which forms from the double slit - it is a wave function interference pattern. It also has complete destructive interference.

[Image: cross-section of the Airy pattern intensity, showing the zeros of complete destructive interference]

At certain points - there is actually 0 probability of detecting a photon.

You can see the above curve as a probability distribution for photon detection, or you can see it as what the sensor detects given enough photons, and indeed:

[Image: a recorded Airy pattern - smooth photon counts in the bright centre, individual photon detections visible in the faint outer rings]

In the centre you can see that it is an almost continuous function of photon count, while at the periphery of the image you can spot the discreteness of the photon counts because the signal is much lower.

This correlation is precisely what creates the band-limited signal - if you examine this pattern by means of the Fourier transform - you'll get the low-pass filter response that we know as the MTF of the optical system, and it looks like this:

[Image: MTF curve of the optical system, falling to zero at the cutoff frequency]

The point where this line falls to 0 (and it is a line because it is a cross-section of a rotationally symmetric 2D function) is the cutoff frequency. For circular apertures it depends only on the aperture size (at a given wavelength).
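For anyone who wants to reproduce that curve, here is a minimal sketch of the standard diffraction-limited MTF of a circular aperture (incoherent light); the wavelength and focal ratio are just example values:

```python
import numpy as np

wavelength_um = 0.6   # example wavelength (~600 nm)
f_ratio = 10.0        # example focal ratio (e.g. a C8 at native focal length)

cutoff = 1.0 / (wavelength_um * f_ratio)  # cutoff frequency in cycles per micron at the focal plane

def mtf_circular(freq):
    """Diffraction-limited MTF of a circular aperture (incoherent illumination)."""
    x = np.clip(freq / cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

freqs = np.linspace(0.0, cutoff, 6)
for f, m in zip(freqs, mtf_circular(freqs)):
    print(f"{f:.4f} cyc/um  ->  MTF {m:.3f}")
# The response falls to exactly zero at the cutoff frequency 1/(lambda * F):
# nothing finer than that is passed by the aperture, which is the band limit
# that the sampling argument above relies on.
```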


1 hour ago, vlaiv said:

Airy pattern is round aperture

The Airy disk is an artifact of diffraction of the incoming photons through an aperture.  That has nothing to do with the size of the sensor as the pixels do not cause diffraction. The only time the sensor size becomes an issue is not to do with sampling theory, it is only a factor when compared to the size of the point of focused light (or as we amateur astronomers call it: the "star" ;) ) - what optical designers call the spot size.

However, even a 1/8 wave optic blurs a star to 4-5 Airy disks [ reference: Schmidt Cassegrain telescope ] - and that is the on-axis spot size! It also takes no account of atmospheric "seeing" that spreads the light out, further. So neither sampling nor diffraction makes any significant difference when applied to real world operations.

That is why we still get pretty much rounded-looking stars even with quite large pixels. It is also why star images are spread across many adjacent pixels.


5 hours ago, pete_l said:

The Airy disk is an artifact of diffraction of the incoming photons through an aperture.

Not sure why you are quoting me on that, as that is exactly what I said in the sentence you quoted (although you quoted only part of it).

The whole sentence that I wrote goes like this:

7 hours ago, vlaiv said:

Similar to this - the Airy pattern is the round aperture equivalent of the interference pattern which forms from the double slit - it is a wave function interference pattern.

That is exactly the same thing, except I wanted to point out the relation to the interference pattern of the double slit, as both show correlation of photon landing positions (even in the single-photon interference case).

5 hours ago, pete_l said:

That has nothing to do with the size of the sensor as the pixels do not cause diffraction. The only time the sensor size becomes an issue is not to do with sampling theory,

I don't follow - no one was talking about sensor size, were they?

We were discussing the critical sampling frequency and the F-ratio needed for pixels of a certain size to capture everything the aperture can render.

5 hours ago, pete_l said:

What optical designers call the spot size.

Can you provide a reference for "spot size"? I have never heard of that term being used in the context of telescopes (it is related to lasers and Gaussian beams). I have heard of a spot diagram - which is something that optical designers use - but that is a tool for understanding the optical performance of a system using geometrical optics - which is not what we are discussing here - although there is some relation to the Airy disk.

5 hours ago, pete_l said:

However, even a 1/8 wave optic blurs a star to 4-5 Airy disks [ reference: Schmidt Cassegrain telescope ]

That is complete nonsense.

I looked at the reference provided, but I was not able to find the exact statement (that a "1/8 wave optic blurs a star to 4-5 Airy disks"). Would you be so kind as to point me to the exact place where this is quoted from?

5 hours ago, pete_l said:

It also takes no account of atmospheric "seeing" that spreads the light out, further. So neither sampling nor diffraction makes any significant difference when applied to real world operations.

We are not discussing the effects of long exposure imaging here, but rather lucky-type planetary imaging, where we try to capture moments of good seeing and reduce the impact of seeing to a minimum. As such, it is very important to understand the limits of the optics without the impact of seeing. We hope that in a few rare moments we will be able to utilize the full potential of the aperture - and that is what we aim for.

We can also argue the case when we do long exposure imaging and have the full impact of the atmosphere - the same rules apply, but in that case we won't be looking at the Airy disk as our blur and thus the source of the cutoff frequency, but rather the achieved star FWHM, which depends on aperture, seeing and mount performance. In that case the Nyquist theorem still applies - but we determine the cutoff frequency in a different way because of all the variables involved.

