
Choosing the right type of monochrome camera - CCD, CMOS and pixel size



There are a couple of questions in this post, but they will ultimately lead me towards choosing the right criteria for a monochrome camera.

  1. Should I look for cameras with larger pixels than my current camera or is my resolution acceptable?
  2. Per the title, are CCDs still considered good value for money given the choice of CMOS cameras available today (2022)?

Pixel Size/Resolution

I'm currently considering moving over to mono and also considering the pixel size to accommodate my WO 120mm f/6.5 refractor. The resolution with my current ASI533MC-Pro with 3.76µm pixels is 0.99"/pixel at the native focal length of 780mm when I use my 1.0x flattener. I used this camera with my Redcat and understood it would be slightly oversampled with my new FLT120. If I pony up and buy the 0.8x reducer, this would increase to 1.24"/pixel. From what I've read, I should really bump my resolution to ~1.2"/pixel or more for UK seeing. I've provided an example image from the above setup (Melotte 15) at the bottom of this post - is there anything that doesn't look right due to oversampling, or does the image look OK, meaning this resolution isn't a big deal?

 

CCD or CMOS

Here are some cameras I've considered in my search. They can all use 1.25" filters, which means I don't have to buy a new filter wheel and keeps the cost of filters down. I've included their pixel size and calculated resolution at the native Focal Length (FL), and with the 0.8x reducer where applicable:

  • ZWO ASI294MM | 4.63µm pixels (standard 2x2 binned mode) | 1.22"/pixel native FL
  • QHY294M Pro | 4.63µm pixels (standard 2x2 binned mode) | 1.22"/pixel native FL
  • Starlight Xpress Trius Pro 814 | 3.69µm pixels | 0.99"/pixel native FL -> 1.22"/pixel with 0.8x reducer
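For reference, the scales above all come from the standard formula (scale in "/px = 206.3 × pixel size in µm / focal length in mm); a quick sketch reproducing them - the function name is mine, not from any camera SDK:

```python
# Pixel scale ("/px) from pixel size (um) and focal length (mm).
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

# Candidates from the list above at the FLT120's native 780mm focal length,
# and with the 0.8x reducer (780 * 0.8 = 624mm).
for name, pix in [("ZWO ASI294MM (bin 2x2)", 4.63),
                  ("QHY294M Pro (bin 2x2)", 4.63),
                  ("Trius Pro 814", 3.69)]:
    print(f'{name}: {pixel_scale(pix, 780):.2f}"/px native, '
          f'{pixel_scale(pix, 624):.2f}"/px reduced')
```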

The ASI294MM is a good first choice, as I currently use an ASI Air Pro and a 1.25" filter wheel, so I think I just need to buy the filters and I'm good to go. Unlocking the binning mode and running at 1x1, the resolution would be a good match for my Redcat51. Total cost would be £1,431 plus the cost of filters.

For the QHY, it's exactly the same sensor as the ASI294MM, but I would need to buy a mini-PC to use the camera, and I would need to rethink my cabling since there are no USB ports on the back of this camera. I'm assuming it would work well with my mini-EFW. I've found a good price online, and the saving could pay for the mini-PC. Total cost would be £1,012 plus the cost of the mini-PC and filters.

Finally, for the Starlight Xpress I would need to buy the reducer to get a similar pixel scale (unless my current resolution appears OK), plus a mini-PC to use the camera. This is the most expensive camera, and it also requires the ~£500 reducer to compete with the cameras above. Total cost would be £2,046 (camera) plus £546 (reducer) plus the cost of the mini-PC and filters.

All the images I've seen from these cameras look great, and they all review pretty well. Unfortunately, there doesn't seem to be much recent discussion around the tried and tested Starlight Xpress cameras and how they compare to newer CMOS cameras. Are they still a good buy in 2022?

I would greatly appreciate any comments, or even alternative suggestions which are similar in price to the above!

 

Melotte 15 - Captured with the ASI533MC-Pro and my 120mm refractor at 780mm FL (resolution of 0.99"/pixel)

[Attached image: Melotte 15, RGB, 8 hrs]


41 minutes ago, Richard_ said:

Melotte 15 - Captured with the ASI533MC-Pro and my 120mm refractor at 780mm FL (resolution of 0.99"/pixel)

Now the interesting thing is that you drizzled that image for some reason, right?

The actual resolution of the image is not just compatible with double the sampling interval - it doesn't even reach half of the imaging resolution.

[Attached image: side-by-side comparison of the two versions]

One of these two images has been scaled down to 25% of its original size and then scaled back up to 100% - can you tell which one?
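The same scale-down/scale-up test can be tried on your own data with a few lines of numpy - a minimal sketch, assuming a single-channel float image; block-averaging down and nearest-neighbour back up is cruder than a Lanczos resample, but it demonstrates the same effect:

```python
import numpy as np

def down_up(img, factor=4):
    """Block-average `img` down by `factor`, then blow it back up with
    nearest-neighbour. An oversampled image survives this round trip
    almost unchanged; a well-sampled one visibly loses detail."""
    h = img.shape[0] - img.shape[0] % factor
    w = img.shape[1] - img.shape[1] % factor
    small = img[:h, :w].reshape(h // factor, factor,
                                w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

# A flat field survives exactly; pixel-scale detail is destroyed.
print(np.abs(down_up(np.ones((8, 8))) - 1.0).max())        # 0.0
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
print(np.abs(down_up(checker) - checker).max())            # 0.5
```

On a genuinely oversampled stack the round-tripped array is visually indistinguishable from the original.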

 


4 minutes ago, vlaiv said:

Now the interesting thing is that you drizzled that image for some reason, right?

The actual resolution of the image is not just compatible with double the sampling interval - it doesn't even reach half of the imaging resolution.

[Attached image: side-by-side comparison of the two versions]

One of these two images has been scaled down to 25% of its original size and then scaled back up to 100% - can you tell which one?

 

Hah, well spotted vlaiv! Did you notice from the image dimensions that I drizzled? It was force of habit from my Redcat routine, done without really thinking.

From the side-by-side image you provided, I actually can't tell any difference, so I'm guessing the answer is "drizzling offers no benefit when you aren't undersampled?"


2 minutes ago, Richard_ said:

so I'm guessing the answer is "drizzling offers no benefit when you aren't undersampled?" 

That is the correct answer, but besides that - you were also oversampled, since I could reduce the original size to 25% - or 2"/px - without loss of detail (probably even a bit more).

Check the FWHM values in your subs so far (or even stacks - but make sure they are linear). From those you can see what sort of resolution you should be aiming for - FWHM / 1.6 should be your guideline.

Judging by this image - anything below 2"/px is oversampling.


Sorry to jump in here @vlaiv and @Richard_, but my GT81/0.8x/294MC gives 2.5"/pixel, so I just looked at the stats of a recent image that looks OK to me, and here it is:

[Attached screenshot: subframe statistics]

It shows approx. 1.9 for FWHM, correct? What does this tell me - that if I had a camera/scope resolution of less than 1.9 I'd be wasting my time, as it were?

thanks 🙂


24 minutes ago, vlaiv said:

That is the correct answer, but besides that - you were also oversampled, since I could reduce the original size to 25% - or 2"/px - without loss of detail (probably even a bit more).

Check the FWHM values in your subs so far (or even stacks - but make sure they are linear). From those you can see what sort of resolution you should be aiming for - FWHM / 1.6 should be your guideline.

Judging by this image - anything below 2"/px is oversampling.

Using "SubframeSelector" in PixInsight on a selection of my raw subs (prior to any calibration etc.), my FWHM ranges between ~3 and 6.5, which averages at ~4.5. Using your calculation, 4.5/1.6 gives me 2.8"/pixel.

Using the CCD suitability tool on astronomy.tools and selecting "poor seeing (4-5")" based on my calculated FWHM of 4.5, I can see that the ASI294MM at native focal length shows as oversampled @ 1.22"/pixel, which becomes "OK" @ 1.53"/pixel when I add the 0.8x reducer.

Have I got the right end of the stick? If so, that answers Question 1 and tells me I either need further reduction or need to look at cameras with even larger pixels.

[Attached chart: FWHM values across subs]

[Attached screenshot: astronomy.tools CCD suitability, ASI294MM without reducer]

[Attached screenshot: astronomy.tools CCD suitability, ASI294MM with 0.8x reducer]

 


1 minute ago, scotty38 said:

What does this tell me - that if I had a camera/scope resolution of less than 1.9 I'd be wasting my time, as it were?

It tells you that you are very slightly oversampling in the case of that image.

If your working resolution is 2.5"/px and you have a FWHM of 1.936px - then your FWHM in arc seconds is 1.936px * 2.5"/px = 4.84"

Once you know that number - you can calculate the "optimum" sampling rate as that number / 1.6. In your case the sampling rate should be 4.84 / 1.6 = 3.025"/px

This is for this image alone. 3"/px is quite high and can be a consequence of particularly poor seeing on the night of capture. Check a couple more images to get an idea of what your working resolution should be.

Btw, I think that at 2.5"/px - in most cases - you should not worry about oversampling. That resolution is good.

Another thing to add - if on a particular night the seeing is very good or very poor - you will be under- or over-sampling. That is something we can't control and it should not worry you much. You should aim to match your working resolution to the "average" FWHM.
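The arithmetic above can be wrapped in a tiny helper for plugging in your own numbers (the function name is just for illustration):

```python
def optimum_scale(fwhm_px, current_scale):
    """FWHM measured in pixels at the current pixel scale ("/px)
    -> suggested sampling rate via the FWHM / 1.6 guideline."""
    fwhm_arcsec = fwhm_px * current_scale      # e.g. 1.936 * 2.5 = 4.84"
    return fwhm_arcsec / 1.6

print(optimum_scale(1.936, 2.5))   # the image above: ~3.0 "/px
```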


1 minute ago, Richard_ said:

Using "SubframeSelector" in PixInsight on a selection of subs, my FWHM ranges between ~3 and 6.5, which averages at ~4.5. Using your calculation, 4.5/1.6 gives me 2.8"/pixel.

Is that in pixels or arc seconds? If it is in pixels - convert to arc seconds.

Be careful - if you use a stack and you've drizzled - then you've changed the pixel scale of the image.

In order to calculate the optimum sampling rate / pixel scale - you need to divide the FWHM in arc seconds by 1.6

If you have FWHM in pixels - then it needs to be 1.6px if you are sampling at the optimum rate. If your FWHM in pixels is between 3 and 6 - then this means you are oversampling by a factor of x2 - x4 (or rather, if it were between 3.2 and 6.4, this would hold exactly).

4 minutes ago, Richard_ said:

Using the CCD suitability tool on astronomy.tools and selecting "poor seeing (4-5")" based on my calculated FWHM of 4.5, I can see that the ASI294MM at native focal length shows as oversampled @ 1.22"/pixel, which becomes "OK" @ 1.53"/pixel when I add the 0.8x reducer.

I would not use that tool for judging optimum sampling rate as I believe it is flawed.


11 minutes ago, vlaiv said:

It tells you that you are very slightly oversampling in the case of that image.

If your working resolution is 2.5"/px and you have a FWHM of 1.936px - then your FWHM in arc seconds is 1.936px * 2.5"/px = 4.84"

Once you know that number - you can calculate the "optimum" sampling rate as that number / 1.6. In your case the sampling rate should be 4.84 / 1.6 = 3.025"/px

This is for this image alone. 3"/px is quite high and can be a consequence of particularly poor seeing on the night of capture. Check a couple more images to get an idea of what your working resolution should be.

Btw, I think that at 2.5"/px - in most cases - you should not worry about oversampling. That resolution is good.

Another thing to add - if on a particular night the seeing is very good or very poor - you will be under- or over-sampling. That is something we can't control and it should not worry you much. You should aim to match your working resolution to the "average" FWHM.

Apologies again for adding my input, but this is very interesting, and hopefully my butting in is relevant to the general understanding.

I just did another sample using SubframeSelector as Richard did, and my FWHM was 1.6. Putting that into your formulas gets me to 2.5"/pixel. As it happens, that is what the astronomy tool tells me is the resolution of my setup. Is that number therefore to be expected, or is it a spooky coincidence? Actually, is it good, bad or indifferent?

Like Richard, if I was looking for a new camera and went out and bought a 2600MM, then according to the tool (the one you don't like 🙂) my resolution would go from 2.5 to 2.0 - so would that be good/bad/advisable or not?


1 minute ago, vlaiv said:

Is that in pixels or arc seconds? If it is in pixels - convert to arc seconds.

Be careful - if you use a stack and you've drizzled - then you've changed the pixel scale of the image.

In order to calculate the optimum sampling rate / pixel scale - you need to divide the FWHM in arc seconds by 1.6

If you have FWHM in pixels - then it needs to be 1.6px if you are sampling at the optimum rate. If your FWHM in pixels is between 3 and 6 - then this means you are oversampling by a factor of x2 - x4 (or rather, if it were between 3.2 and 6.4, this would hold exactly).

I would not use that tool for judging optimum sampling rate as I believe it is flawed.

Thanks vlaiv. Below is a screenshot of the SubframeSelector table in PixInsight. I believe the FWHM values in the chart are expressed in arcseconds, so the calculation I provided should be correct. The average of 4.5" was from the raw subs and not the stack, so it is unaffected by drizzling.

With regards to the tool I referenced, what part do you think is flawed? And I don't doubt you in any shape or form, so please don't take this the wrong way, but do you have a source or literature reference for where that 1.6 came from and the theory behind it, or is the value based on your experience?

@scotty38 no problem with the additional posts - they're related to my question, so carry on 🙂

[Attached screenshot: SubframeSelector measurements table]


1 hour ago, vlaiv said:

 

I would not use that tool for judging optimum sampling rate as I believe it is flawed.

Yes, that tool is indeed flawed on a number of levels, and I wouldn't use it either.

 


11 hours ago, scotty38 said:

I just did another sample using SubframeSelector as Richard did, and my FWHM was 1.6. Putting that into your formulas gets me to 2.5"/pixel. As it happens, that is what the astronomy tool tells me is the resolution of my setup. Is that number therefore to be expected, or is it a spooky coincidence? Actually, is it good, bad or indifferent?

In my view it's a good thing and is to be expected. When you sample at 2.5"/px - there will be nights of poor seeing when you are slightly oversampled - like with the first image - but there will be nights of average seeing as well (and there should be more of these) - where you'll be spot on with your sampling rate. There will be nights of better seeing as well (again, not so often) - where you'll be a bit undersampled.

I think you have an OK sampling rate at 2.5"/px.

11 hours ago, scotty38 said:

Like Richard, if I was looking for a new camera and went out and bought a 2600MM, then according to the tool (the one you don't like 🙂) my resolution would go from 2.5 to 2.0 - so would that be good/bad/advisable or not?

That part of the tool is OK - the one that tells you the sampling rate based on pixel size.

The formula is simple and goes like this:

sampling rate = 206.3 * pixel_size / focal_length

where pixel size is in µm and focal length is in mm.

You can derive that formula with a bit of trigonometry.
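As a sketch of that derivation: one pixel subtends an angle of atan(pixel size / focal length), and since the angle is tiny, atan(x) ≈ x; the 206.265 is just the ~206265 arcseconds per radian with a factor of 1000 absorbed for the µm-to-mm conversion. A quick numeric check (the ~382mm figure assumes scotty38's GT81 with its 0.8x reducer):

```python
import math

def pixel_scale(pixel_um, focal_mm):
    # sampling rate = 206.3 * pixel_size / focal_length
    return 206.265 * pixel_um / focal_mm

def pixel_scale_exact(pixel_um, focal_mm):
    # Same quantity from trigonometry: angle subtended by one pixel.
    return math.degrees(math.atan(pixel_um * 1e-3 / focal_mm)) * 3600

# 4.63um pixels at ~382mm -> ~2.5 "/px, matching scotty38's setup.
print(pixel_scale(4.63, 382), pixel_scale_exact(4.63, 382))
```

The two functions agree to well below a thousandth of an arcsecond at these focal lengths, which is why the linear approximation is always used.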

In the end, you are wondering if 2"/px will be a good sampling rate? I believe it will be, as long as you have decent guiding (RMS below 1") and a scope with an aperture of 80mm or more.

11 hours ago, Richard_ said:

Thanks vlaiv. Below is a screenshot of the SubframeSelector table in PixInsight. I believe the FWHM values in the chart are expressed in arcseconds, so the calculation I provided should be correct. The average of 4.5" was from the raw subs and not the stack, so it is unaffected by drizzling.

Ok, so I was right then - it seemed oversampled by a larger factor than x2 - and indeed it is. If your average FWHM is 4.5", then the optimum sampling rate is ~2.8"/px. That shows it was a night of poor seeing (as does the fact that your FWHM is in the 3"-6" range - very variable - the seeing must have changed quite a bit. I suspect local thermals: it is winter, and as you track an object across the sky it moves above different houses - heating & chimneys can cause issues).

11 hours ago, Richard_ said:

With regards to the tool I referenced, what part do you think is flawed? And I don't doubt you in any shape or form, so please don't take this the wrong way, but do you have a source or literature reference for where that 1.6 came from and the theory behind it, or is the value based on your experience?

Ok, so I'll go over that briefly. I've written on this topic several times already - you can also search SGL for more info.

Here are the statements from Astronomy.Tools explaining why it does the calculations it does.

Quote

In the 1920s Harold Nyquist developed a theorem for digital sampling of analog signals. Nyquist’s formula suggests the sampling rate should be double the frequency of the analog signal. So, if OK seeing is between 2-4” FWHM then the sampling rate, according to Nyquist, should be 1-2”.

The Nyquist sampling theorem (or Shannon-Nyquist sampling theorem) states that, for a band-limited signal, perfect reconstruction can be achieved if one samples at double the maximum frequency of that band-limited signal. That part is sort of OK. The problematic part is equating seeing FWHM with sampling rate. That is simply not what Nyquist's theorem says. Maximum frequency is a feature of the frequency domain, while FWHM is defined in the spatial domain and differs between curves. If we assume some type of curve - like a Gaussian - then we must calculate/explain the relationship of FWHM to the frequency domain.

Further:

Quote

There is some debate around using this for modern CCD sensors because they use square pixels, and we want to image round stars. Using typical seeing at 4” FWHM, Nyquist’s formula would suggest each pixel has 2” resolution which would mean a star could fall on just one pixel, or it might illuminate a 2x2 array, so be captured as a square.

Ok, again this is very wrong. Stars won't be squares because of the way we sample them. Even if we grossly oversample - they still won't be squares. That is a misunderstanding of the sampling process. The results of sampling are not little squares (nor other shapes, if the pixels have a different shape) - the results of sampling are points - dimensionless points (like true mathematical points).

What "shape" objects have depends on the restoration procedure. We often "see" square pixels in our images - not because they are really square, but because a certain restoration algorithm is used - namely nearest-neighbour resampling / interpolation. If we don't use that algorithm - we get different results.

Nyquist states that Sinc is the perfect restoration kernel (sin(x) / x). That is a function that goes off to infinity, and it must, because a band-limited signal is cyclic in nature (and images are not). For this reason we use an approximation to the Sinc kernel, often the Lanczos kernel (a windowed Sinc).

In any case - square stars are not an artifact of sampling and square pixels - they are an artifact of the restoration process: the way we restore the image from the sample points. We have complete control over that in the choice of algorithm used - it is separate from the imaging part.

Quote

The solution

It is better then to image with a resolution 1/3 of the analog signal, doing this will ensure a star will always fall on multiple pixels so remain circular.

And of course - if the premise is false - it's no wonder the conclusion / solution is too.

There, in a nutshell, is what I object to. First, equating seeing with star FWHM: seeing is just part of the story - aperture and guiding performance also impact the final FWHM, among other things. Second, FWHM is not a measure of frequency content (although the two are related for a Gaussian profile), so we can't just take 1/2 or 1/3 of the FWHM as our pixel size.

Nyquist clearly states that we need x2 the maximum frequency, and no, we don't need x3 to make stars "rounder" - that is just a misunderstanding of the sampling process.

In the end, a bit of theory (this will be very brief).

I arrived at the 1.6 figure using several relationships. First, I approximated the star profile with a Gaussian. There is a known relationship between FWHM and the sigma of a Gaussian. Then I used the fact that the Fourier transform of a Gaussian is a Gaussian (the Fourier transform "moves" the function into the frequency domain). There I selected the cutoff frequency as the point where it falls off to less than 10% - because the Gaussian approximation is just that, an approximation: it continues to infinity and hence has no sharp cutoff frequency, so we choose one that makes sense in terms of SNR and level of detail in the image. For this reason I often put quotes around "optimum" sampling rate - there really is no hard cutoff point, and changing that percentage will change the result, so there is really a range of optimum values depending on your criteria.

If you do the math - it turns out that sampling rate should be FWHM / ~1.6.

That is later confirmed by simulations, and also by the technique I demonstrated above - take any oversampled image and calculate its "optimum" sampling rate. Then downsize the image to that rate and upsize it back again - there will be no visual difference; but if you downsize it more - you will start to notice a difference in the image: detail will start to be lost.

You can find articles online that explain every step of what I said above - look up the Nyquist sampling theorem, and look up the Gaussian function and its properties - the relationship between sigma and FWHM, and the Fourier transform of a Gaussian - and you'll be able to derive the same result from those.
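The derivation can also be checked numerically in a few lines - a sketch, where the 10% cutoff is the criterion stated above and the constants are standard Gaussian identities:

```python
import math

# FWHM of a Gaussian in terms of its sigma: FWHM = 2*sqrt(2*ln2)*sigma
fwhm = 2 * math.sqrt(2 * math.log(2))        # ~2.355 (taking sigma = 1)

# Fourier transform of exp(-x^2 / (2 sigma^2)) is exp(-2 pi^2 sigma^2 f^2).
# Cutoff where it falls to 10%:  exp(-2 pi^2 fc^2) = 0.1  (sigma = 1)
fc = math.sqrt(math.log(10) / 2) / math.pi

# Nyquist: sample at twice the cutoff frequency -> step of 1 / (2 fc).
step = 1 / (2 * fc)

print(fwhm / step)   # ~1.61, i.e. sample at FWHM / ~1.6
```

Changing the 10% criterion shifts the result slightly, which is exactly why the "optimum" rate is a range rather than a hard number.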


Thanks very much for the very informative reply @vlaiv, and for taking the time to type it out! While I don't fully understand the maths right now (it's been 10 years since I last dabbled with Fourier transforms...), I understand the logic in your response and what you're trying to get across.

Out of interest, I looked back at my master light and drizzled master light of Melotte 15 to see if I could spot any difference. While I was doing this, I used "IntegerResample" to bin my original image to see if I could notice any pixelation etc. I selected four regions of interest, and each picture shows the following:

  • Left - original image but downsampled by 2 (average)
  • Middle - original image
  • Right - drizzle stacked

PS: I have no idea whether this approach would produce a different result to binning "as imaged", so take this with a pinch of salt.

I included FIT files of the above (just the stack; no other processing has been performed). I scanned across the image to a few different regions of interest in Melotte 15. I then set the scale to 1:1 for the original image and scaled the drizzled and resampled images to suit, so that you have the same field of view at the same scale. The images should be at exactly the same location. There is no discernible difference in star shape at this scale; however, you will notice a slight drop in detail from the "original" to the "resample" image, best observed in the second picture, and there is also a slight improvement in background noise if you look carefully (I read that binning reduces noise, thus improving SNR, right?).

So in summary, I fully agree that the drizzling was pointless with this telescope (it was more force of habit on my part than a conscious decision), and based on my crude testing I could explore binning my ASI533 2x2 (i.e. ~2.0"/pixel) to see if there is any discernible difference between my downsampled image and one binned at the point of capture. I guess this experimentation could help me choose an appropriate combination of reducer/pixel size/resolution for a monochrome camera, right?
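For reference, software 2x2 binning of the kind described is just block-averaging, and the SNR claim can be demonstrated with a few lines of numpy - a sketch on synthetic data, not PixInsight's actual IntegerResample implementation:

```python
import numpy as np

def bin2x2(img):
    """Average each 2x2 block (software binning): halves the linear
    resolution and, for uncorrelated noise, doubles the SNR."""
    h = img.shape[0] - img.shape[0] % 2
    w = img.shape[1] - img.shape[1] % 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
flat = 100.0 + rng.normal(0.0, 10.0, size=(512, 512))  # flat signal + noise
print(flat.std(), bin2x2(flat).std())                  # noise ~10 -> ~5
```

Averaging four uncorrelated pixels cuts the noise standard deviation by a factor of 2 while the signal level is unchanged, which is the background improvement visible in the resampled crops.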

[Attached images: four region-of-interest comparisons (downsampled / original / drizzled)]

 

Melotte_15_Drizzle.fit Melotte_15_Original.fit Melotte_15_Resample.fit

