Jump to content


Cooled OSC CMOS vs Cooled OSC CCD


Adam J


Just been looking at some of the APS-C and full-frame cooled CMOS sensors that are coming out (QHY165c, QHY168c, QHY247c and QHY367c). I note that they are all OSC, and so I got to wondering whether these represent a better option than the current OSC CCD cameras. I have seen some staggering images from the QHY367c in particular.

The new cooled CMOS cameras all have read noise that their CCD competitors can only dream of and unlike the current QHY163c they also have no amp glow and very low dark current in long exposures. 

I can't currently think of any reason why you would want to go with an OSC CCD at this point. I don't think that the QE is any higher, and given that shorter exposures are workable with the CMOS sensors, I don't see dynamic range as an issue.

Please feel free to correct me if I am wrong. 

Also, please, please, please can we have this conversation without the whole thing spiralling into a discussion on mono vs OSC? There should be no reason to discuss mono cameras in the context of my question.

 


Hello Adam,

I agree with you, OSC CCD is dead in the water, the CMOS cameras offer better specs and extra features at a much cheaper price.

Compare the QHY CCD flagship for OSC, the QHY12: it costs approx US$2,700 and is APS-C format, 4610x3080 pixels, 5.1 micron pixels, 32,000e- well depth, 10e- read noise and USB2.

The closest CMOS equivalent is the QHY168c at approx US$1,500: APS-C format, 4952x3288 pixels, 4.8 micron pixels, 46,000e- well depth, 3.2e- read noise and USB3.
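As a rough cross-check of those specs, single-exposure dynamic range is often estimated as log2(full well / read noise). A minimal sketch using the figures quoted above (this formula is a standard back-of-envelope estimate, not a manufacturer spec):

```python
import math

def dynamic_range_stops(full_well_e: float, read_noise_e: float) -> float:
    """Estimate single-exposure dynamic range in stops (bits)."""
    return math.log2(full_well_e / read_noise_e)

# Figures quoted above for the two cameras
qhy12 = dynamic_range_stops(32_000, 10.0)    # CCD: 32,000e- well, 10e- read noise
qhy168c = dynamic_range_stops(46_000, 3.2)   # CMOS: 46,000e- well, 3.2e- read noise

print(f"QHY12:   {qhy12:.1f} stops")    # ~11.6 stops
print(f"QHY168c: {qhy168c:.1f} stops")  # ~13.8 stops
```

By this estimate the lower read noise more than makes up for the similar well depth.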

With the CMOS camera you also get a 128 MB image buffer for slow USB ports, a video capture mode at 10 frames per second at full resolution, and a window heater. The downside is that you need to use an L filter up front, as the camera window is anti-glare only (the QHY12 has a UV/IR cut-off window).

Anyone looking to buy a new OSC would be foolish to buy a CCD version now.

Cheers
Bill


14 hours ago, billdan said:

I agree with you, OSC CCD is dead in the water, the CMOS cameras offer better specs and extra features at a much cheaper price.

Well, as there are no cries of disagreement, that may actually be it, lol.


CMOS are generally 12 or 14 bit, while CCD are 16 bit. That limits dynamic range. But it would be nice to have a side-by-side comparison of the practical implications of this difference. My guess is that any difference would be difficult to spot.


If you stack enough frames there is no difference between 12/14 and 16 bit. Even 8-bit planetary images from big sets produce a 16-bit stack without problems. As for CCD vs CMOS: CMOS is the current and future technology. CCDs are slowly going out of the market, as they can't go as far as CMOS technology.
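A quick way to see the bit-depth point is a toy simulation (assumed noise level and brightness, not real camera data): quantise a noisy signal to 8 bits many times and average, and the mean recovers a value between the 8-bit levels.

```python
import random

random.seed(42)

true_level = 100.4       # "real" brightness, between two 8-bit levels
n_frames = 5000
noise_sigma = 2.0        # per-frame noise larger than 1 ADU, so it dithers the quantisation

# Each frame: add noise, then quantise to an integer 0-255 level
frames = [max(0, min(255, round(true_level + random.gauss(0, noise_sigma))))
          for _ in range(n_frames)]

stacked = sum(frames) / n_frames
print(f"stacked mean = {stacked:.2f}")   # close to 100.4, i.e. sub-LSB precision
```

No single frame can store 100.4, but the stack can, because the noise spreads the readings across neighbouring levels.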


18 hours ago, wimvb said:

CMOS are generally 12 or 14 bit, while CCD are 16 bit. That limits dynamic range.

No, that is a common misconception: the resolution of the analogue-to-digital converter does not limit dynamic range at all, it just controls how granular the sampling is. It's the well depth of a single pixel that controls dynamic range, not the A/D resolution. It's all down to the size of the capacitor and hence how quickly it becomes saturated.

For example, if the well is half full, a 12-bit A/D will read 2048 out of 4096, while a 16-bit A/D will read 32768 out of 65536, but in terms of brightness both mean the same thing.
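In code, that half-full-well example is just the arithmetic from the paragraph above:

```python
def adu_fraction(adu: int, bit_depth: int) -> float:
    """Fraction of full scale represented by an A/D output value."""
    return adu / (2 ** bit_depth)

# Half-full well read out by a 12-bit and a 16-bit A/D
print(adu_fraction(2048, 12))    # 0.5
print(adu_fraction(32768, 16))   # 0.5  (same brightness, just finer steps)
```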

In reality a CCD is less likely than a CMOS to take advantage of the additional resolution, as CCDs generally have higher read noise, and that read noise will blur the distinction between brightness levels.

17 hours ago, riklaunim said:

If you stack enough frames there is no difference between 12/14 and 16 bit. Even 8-bit planetary images from big sets produce a 16-bit stack without problems.

That's not quite right either: you gain resolution by stacking, but not dynamic range. If you stack 100 saturated images, the final image will still be saturated.
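The saturation point is easy to illustrate with toy numbers (a 12-bit full-well level and a star brighter than it, both assumed for the sketch):

```python
full_well = 4095                      # 12-bit saturation level (toy value)
true_signal = 6000                    # star brighter than the well can hold

# Every sub clips at full well, so every frame reads the same value
subs = [min(true_signal, full_well) for _ in range(100)]
stacked = sum(subs) / len(subs)

print(stacked)   # still the saturation level; stacking added no dynamic range
```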

 


3 hours ago, Adam J said:

That's not quite right either you are gaining resolution by stacking but not dynamic range.

Adam, can you clarify what you mean by resolution in this context? Stacking clearly reduces noise compared to a single exposure of the same total exposure time, but I don't see how it affects either dynamic range or the granularity of the sampling.

Regards Andrew


1 hour ago, andrew s said:

Adam, can you clarify what you mean by resolution in this context? Stacking clearly reduces noise compared to a single exposure of the same total exposure time, but I don't see how it affects either dynamic range or the granularity of the sampling.

Regards Andrew

OK, it's not easy to explain, but I'll give it a go.

The improvement in resolution relies, in effect, on the noise in the sensor being higher than the granularity of the data.

Granularity here means the change in input voltage needed to change the A/D output value by 1. So if the range of the A/D is 0 to 5 volts (it's not actually 5 volts), then a 12-bit A/D has a resolution (granularity) of about 0.0012 volts per level, while a 16-bit A/D has a resolution of about 0.00008 volts per level. The point is that the total range is still 0 to 5 volts, and the voltage being measured is a function of the number of electrons liberated by photons striking the sensor.

As such, a sensor with a high dynamic range takes many more photons to increase the measured voltage than a sensor with a low dynamic range, as it takes more electrons to fill the capacitor. So in effect, a sensor with a high dynamic range delivers a smaller voltage for the same light (all other factors such as sensitivity and pixel size being equal) than a sensor with a low dynamic range.

Now the problem comes when the high dynamic range sensor delivers a voltage of 0.0006 V and the low dynamic range sensor delivers 0.0012 V. A 12-bit A/D gives both sensors the same output value, whereas a 16-bit A/D is able to distinguish the two voltage levels. In broad terms this means that when you are working right at the left-hand side of the histogram, as in short-exposure astrophotography, you are only using a fraction of the full well depth most of the time, and so the lower A/D resolution has a large impact on image quality compared to the higher-resolution A/D, which still has plenty of resolution to spare.
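Putting numbers on this, assuming the illustrative 0-5 V range from above: the 12-bit converter's step size is larger than either signal, so both fall in the same bin, while the 16-bit converter separates them.

```python
V_RANGE = 5.0   # illustrative full-scale voltage from the text

def adu(voltage: float, bit_depth: int) -> int:
    """Quantise a voltage to an integer A/D output value."""
    lsb = V_RANGE / (2 ** bit_depth)   # volts per output level
    return int(voltage / lsb)

for bits in (12, 16):
    print(f"{bits}-bit: 0.0006 V -> {adu(0.0006, bits)}, "
          f"0.0012 V -> {adu(0.0012, bits)}")
# 12-bit: both voltages quantise to 0 (indistinguishable)
# 16-bit: the two voltages land on different levels
```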

That brings me to the second part of the argument: resolution in the data can be increased by stacking. This is because, when operating right at the left-hand side of the histogram, you experience a high level of shot noise and read noise (even with a low read noise camera). This causes variation from image to image despite the low resolution of the A/D converter.

To explain further, let's take the case of an A/D with only 1-bit depth: a pixel can be either white or black. Let's also assume the signal-to-noise ratio is very bad, so bad in fact that sometimes when we take an image the pixel is white and other times it is black. If we take only a single image, we end up with either a white or a black pixel. However, if we take 100 images, we may end up with 20 white pixels and 80 black pixels. So we can conclude that the pixel is neither white nor black; it is in fact closer to the black end of the scale than the white end.

If we can also estimate the actual noise, we can use a binomial probability distribution to estimate how much more black the pixel is than white. Hence, despite the fact that all our individual subs have only two possible values, we have gained more information/resolution by stacking many subs. In reality it's not quite like that, as we have many more than two levels and the noise causes variation over multiple levels, but hopefully you get the idea. Eventually this is all converted by a program like DSS into a 32-bit TIFF, and the additional information becomes available when processing the image.
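The 1-bit thought experiment is easy to simulate (a toy model: the brightness, threshold and noise level are all assumed values, and Gaussian noise stands in for the real noise sources):

```python
import random
from statistics import NormalDist

random.seed(1)

true_brightness = 0.2     # pixel is 20% of the way from black to white
threshold = 0.5           # 1-bit A/D: output white if signal + noise > 0.5
noise_sigma = 0.3         # noise comparable to the quantisation step

n_subs = 10_000
whites = sum(1 for _ in range(n_subs)
             if true_brightness + random.gauss(0, noise_sigma) > threshold)

# The white fraction encodes the brightness through the noise distribution:
# P(white) = P(noise > threshold - true_brightness), so we can invert it
estimate = threshold - noise_sigma * NormalDist().inv_cdf(1 - whites / n_subs)
print(f"white fraction       = {whites / n_subs:.3f}")
print(f"estimated brightness = {estimate:.3f}")   # close to 0.2
```

Each sub carries only one bit, yet the stack of subs pins the brightness down to a few parts in a thousand.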

This is a simplification in some ways, but I'm sorry to say that's as good an explanation as I can manage in the time available.

Oh, and as I said, dynamic range is not affected.

 


Archived

This topic is now archived and is closed to further replies.
