
Types of Cameras Explained


Herzy


Over the years I’ve been collecting my knowledge of astrophotography. Here is a section I wrote. I hope it is helpful, though I don’t claim it to be perfect; any corrections are welcome. I also created all of the illustrations myself.

[Seven attached illustration pages comparing camera types]


I did not go through all of the text, but I have noticed a few things.

First - all this data on DSLRs (and more) is available online from several sources.

Look here:

https://www.photonstophotos.net/

Next - you listed both "cost" and "costly" as disadvantages of CCDs for some reason.

Then - you listed "low quality 12bit ADC", "Non linear pixel sensitivity" and "Amp glow" as disadvantages of CMOS.

Can you explain what you mean by "low quality 12bit ADC"? In what way is it low quality? Does it produce digital values that are somehow lower quality than standard? Does it introduce some sort of error into the ADC process (different from read noise)?

Which CMOS sensors have you found to have pixel nonlinearity?

Amp glow is a term we use for CMOS sensors, but it originated with CCD sensors (there was a single amplifier unit on the CCD that would thermally "glow" and cause electron build-up on one side of the sensor) - which means amp glow is not an exclusive feature of CMOS sensors - and it can be calibrated out.
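To show what "calibrated out" means in practice, here is a minimal Python sketch with entirely hypothetical numbers: a synthetic amp-glow gradient recorded in both a light frame and a matching dark frame subtracts away cleanly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64x64 sensor (values in electrons): uniform sky signal plus
# an "amp glow" ramp building up on one side of the frame.
h, w = 64, 64
glow = np.tile(np.linspace(0, 50, w), (h, 1))   # glow gradient, 0e -> 50e
sky = 100.0

light = sky + glow + rng.normal(0, 2, (h, w))   # light frame: sky + glow + read noise
dark = glow + rng.normal(0, 2, (h, w))          # dark frame: same exposure/temp, shutter closed

calibrated = light - dark                        # the glow pattern cancels

# The left and right edges now sit at the same mean level: the gradient is gone.
print(round(calibrated[:, :8].mean(), 1), round(calibrated[:, -8:].mean(), 1))
```

The same subtraction handles any fixed thermal pattern, which is why dark-frame calibration works regardless of whether the glow came from a CCD's single amplifier or a CMOS design.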


Seems outdated by a number of years, and today things are the opposite of what you wrote about CCD and CMOS.

Quote

CCD sensors offer superior image quality, dynamic range, and noise control than CMOS sensors

This is the opposite of the truth in today's camera market, with low-read-noise, practically-zero-thermal-noise CMOS cameras becoming the norm, whereas CCD cameras have an abundance of both.

Quote

Most deep-sky objects are rich in Hydrogen and Oxygen, which shine in red and blue frequencies respectively, meaning all green photosites will capture little data of value.

Here you have ignored all broadband targets, which in my opinion are more common targets than the emission nebulae you meant with that sentence. Broadband targets are the opposite of that, and in fact shine brightest in the green channel. Still, there is truth in it: not every pixel receives useful light from a target, and resolution is lost. I just have a gripe with how you have ignored broadband completely.

20 minutes ago, Herzy said:

[quoted attachment from the post above]

This is also the opposite of truth and frankly not useful at all today.

CMOS sensors today have native 16-bit or 14-bit ADCs; 12-bit is rare and today only sold in planetary-camera format (where it does not matter). Amp glow is also a thing of the past, as is nonlinear pixel sensitivity, with the newest DSO cameras reaching 99% linearity.

CCD sensors have 3-5x the read noise of CMOS, so that is also just plain wrong.

Not sure I would recommend any of this to beginner astrophotographers.


1 minute ago, vlaiv said:


Pixel non-linearity in a CMOS sensor refers to internal ADCs performing separate conversions at each pixel site, whereas the CCD performs all operations through a single external ADC. CMOS sensors will have more uncertainty/noise due to this, but they will have lower readout times because not every charge has to pass through the same ADC.
Because the CMOS sensor has built-in ADCs at every pixel, they are usually only 12-bit, whereas CCD sensors can get away with higher-quality 16-bit ADCs.

 

Amp glow isn’t exclusive to CMOS, you’re correct. But it is more prevalent there, due to the more compact and complex circuitry at every pixel site.

 

I hope that clears some of it up


5 minutes ago, ONIKKINEN said:


You’re correct about broadband targets showing up in the green channel, and I should change the wording of that section, but my point was simply that the standard Bayer matrix has 50% green pixels, 25% red pixels, and 25% blue pixels. Later I compared that to a filter wheel over a monochrome sensor, showing how every pixel operates at maximum efficiency rather than being constricted by colour filters.
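The 50/25/25 split follows directly from tiling the standard RGGB unit cell; a quick sketch (the patch size is arbitrary):

```python
import numpy as np

# The RGGB Bayer unit cell: one red, two green, one blue photosite,
# repeated across the whole sensor.
cell = np.array([["R", "G"],
                 ["G", "B"]])
mosaic = np.tile(cell, (4, 4))  # an 8x8 patch of sensor

for colour in "RGB":
    fraction = (mosaic == colour).mean()
    print(colour, fraction)
# green photosites cover half the sensor, red and blue a quarter each
```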

I don’t claim this to be perfect, and I appreciate the corrections. It’s a very complicated topic. 


1 minute ago, Herzy said:

CMOS sensors will have more uncertainty/noise due to this, but they will have lower readout times because not every charge has to pass through the same ADC.

No.

CMOS sensors have much lower read noise than CCD.

1 minute ago, Herzy said:

Because the CMOS sensor has built-in ADCs at every pixel, they are usually only 12-bit, whereas CCD sensors can get away with higher-quality 16-bit ADCs.

Again - this does not explain how a 12-bit ADC is low quality and a 16-bit ADC high quality.

From an astrophotography perspective there is absolutely no difference. Let me show you with a rather simple example.

Say you compare a 12-bit CMOS sensor that has 1.7e of read noise with a 16-bit CCD sensor that has 7e of read noise. For the sake of argument, let's suppose that both sensors have a FWC that matches their bit count and operate at a system gain of 1e/ADU (that is not the case in practice, as CCD sensors often have less than 65K FWC while CMOS sensors more often have more than 4K FWC and thus use variable gain).

Since there is a 4-bit difference between the CMOS and the CCD, it stands to reason that the CCD can expose 16 times longer than the CMOS, right?

But what happens if we take 16 subs with the CMOS and just add them up? Signal adds up - and yes, it becomes the same as one sub taken with the CCD. All time-dependent signals/noises simply add up. The only one that is not time-dependent is read noise.

There will be 16 "doses" of read noise in the CMOS stack, so let's add those up to see what noise level we get. Noise adds as the square root of the sum of squares, so we have sqrt(16 x 1.7e^2) = 4 x 1.7e = 6.8e.

Hm, same thing as CCD, or still slightly better (6.8e vs 7e).

How is 12bit ADC inferior?
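The arithmetic above can be checked numerically; a small sketch using the same hypothetical figures (1.7e CMOS, 7e CCD, 16 subs):

```python
import math

# Numeric check of the example: 16 short CMOS subs (1.7e read noise each)
# added together, versus one long CCD sub (7e read noise) of the same
# total exposure.
cmos_read_noise = 1.7  # electrons per sub
ccd_read_noise = 7.0   # electrons per sub
n_subs = 16

# Read noise adds in quadrature across subs.
stacked_cmos_noise = math.sqrt(n_subs * cmos_read_noise ** 2)
print(stacked_cmos_noise, "vs", ccd_read_noise)  # 6.8e vs 7.0e
```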

 


8 minutes ago, vlaiv said:


I wasn’t referring to noise by calling a higher bit depth ADC ‘superior’, I was referring to the color range. Higher bit ADCs have less color banding and less banding between brightness values, giving a smoother overall image. Does it not?


Just now, Herzy said:

I wasn’t referring to noise by calling a higher bit depth ADC ‘superior’, I was referring to the color range. Higher bit ADCs have less color banding and less banding between brightness values, giving a smoother overall image. Does it not?

No

What color banding are you talking about?

Those are often perpetuated myths.

First, I've shown you how you can easily get 16-bit data by stacking 16 subs from a 12-bit ADC - to the same level of performance as data acquired with a true 16-bit ADC.

Second, light comes in photons - very few photons :D. Most targets we image need far fewer bits per exposure than that. A signal of, say, a few hundred ADUs per exposure is deemed very strong, and you only need 8-9 bits to properly record it.

Third - in astrophotography we mostly deal with stacking, regardless of which type of ADC we use. This means we end up with a bit count bigger than either 12 or 16 bits in our final stacks. Each doubling of the number of subs stacked adds one bit of bit depth.

Stack 32 subs and we increase bit depth by 5. Stack 128 subs and we add 7 bits. That is why stacking 16 subs produces the same result with 12-bit as we get with 16-bit: 16 is 2^4, so it is like adding 4 bits, and 12 + 4 = 16. The general formula is: number of bits added = log2(number of stacked subs).
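The log2 rule above, as a quick sketch:

```python
import math

# Stacking N subs adds log2(N) bits of effective depth.
added_bits = {n: math.log2(n) for n in (16, 32, 128)}
for n, bits in added_bits.items():
    print(f"{n} subs -> +{bits:g} bits")
# 16 subs take 12-bit data to an effective 16 bits
```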

The only difference between a few long subs and many short subs in stacks that add up to the same total time is read noise. It is the only thing that adds per sub rather than per unit of time. CMOS sensors are optimized for shorter exposures as they have lower read noise. They don't need 16-bit ADCs to perform the same job - they can use 12-bit ones.

Back to the banding - I'm writing this on an 8-bit display. In principle, I have a very tough time seeing banding in any sort of gradient. 10-bit displays are now considered the ones that guarantee humans can't notice banding.

Even a single 12-bit sub gives x4 finer "granulation" of intensity levels than that, let alone stacking, which produces a much higher bit range.

What you should mention, if you want to talk about colour banding, is the bit count used when processing images. I've seen most people use a 16-bit fixed-point format for image processing, and that can be a problem for modern CMOS sensors. Data should be in 32-bit floating point from the moment we start calibrating until we are finished with our processing and save it for display (when it will be saved in 8-bit format).

Because of the way CMOS sensors work - we use shorter exposures and more subs in our stacks. Shorter exposures mean that signal per sub will be lower - and that ADU values will be lower in general.

If we use a fixed-point format and bunch our signal up at one side of this range, we will lose "resolution", or the number of levels available.

Here is an example to show you what I mean. Say you image a target that produces a 100e/minute signal. You do that with a CCD exposing for 10 minutes, and with a CMOS exposing for only a minute. You get the same total imaging time (the CMOS will produce x10 more subs in the stack, so the SNR will be the same).

You use average stacking method for both.

CCD will have 1000e per sub (100e/minute for 10 minutes)

CMOS will have 100e per sub.

average of ~1000e values is ~1000e - no matter how many subs you average

average of ~100e values is ~100e - again, it does not matter that you average more samples.

If you record those two resulting stacks in 16-bit fixed point, those average values will be rounded and you introduce a rounding error. This rounding error is larger relative to the absolute signal value for the CMOS, because of the shorter exposures (although SNR is the same for both images).

Now we have reduced the signal to 100 distinct levels - less than 7 bits - and that is something our eye can notice, especially after stretching.

But this happens only if we process our data in a 16-bit fixed-point format. If we keep our data in a 32-bit floating-point format, we won't introduce rounding error when averaging and we won't have these distinct levels. Stretching will work normally.
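The fixed-point versus floating-point difference above can be demonstrated with a small sketch (signal levels are hypothetical): averaging many ~100e subs and then rounding the stack to integers collapses it to a handful of levels, while float keeps the fractional detail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack: 100 short CMOS subs, ~100e signal per sub (Poisson).
subs = rng.poisson(100, size=(100, 10_000)).astype(np.float64)
stack = subs.mean(axis=0)  # average stack: still ~100e, but with fractional detail

as_float32 = stack.astype(np.float32)          # 32-bit float keeps the fractions
as_uint16 = np.round(stack).astype(np.uint16)  # 16-bit fixed point rounds them away

print("distinct levels, float32:", np.unique(as_float32).size)
print("distinct levels, uint16: ", np.unique(as_uint16).size)
# the fixed-point stack collapses to roughly a dozen coarse levels around 100e
```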


On 25/11/2022 at 11:55, Herzy said:

Because the CMOS sensor has built-in ADCs at every pixel, they are usually only 12-bit

CMOS sensors don't have an ADC at each pixel site.  There's no room to do so!  The ADCs are sited away from the light sensitive area of the sensor and typically there will be one ADC per row or column or one ADC per group of rows or columns.

Quote

 Higher bit ADCs have less color banding and less banding between brightness values, giving a smoother overall image. Does it not?

It's a misconception that the discrete values coming from the ADC somehow cause image banding.  The discrete values (quantisation) can only be seen in the background noise (read noise and/or shot noise), and this noise prevents any banding in the image.  The only way that banding (posterisation) can appear in the data is through faulty post-processing, e.g. reducing the bit depth too early.
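This point can be demonstrated numerically: with read noise present, averaging many quantised samples recovers values between ADC levels, so no banding survives. A minimal sketch (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# A true signal of 10.3 ADU sits between the discrete ADC levels 10 and 11.
# Read noise dithers each sample, so the rounded (quantised) reads average
# back to the fractional value: quantisation hides in the noise.
true_signal = 10.3
read_noise = 1.5  # ADU, hypothetical
reads = np.round(true_signal + rng.normal(0, read_noise, 100_000))

print(round(reads.mean(), 2))  # recovers ~10.3 despite integer quantisation
```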

Mark

