
12bit vs 14bit DSLRs


beka


Hi All,

I was wondering if anyone has experience comparing DSLRs with 12-bit pixel depth, like the Canon 700D, against 14-bit models like the Canon 800D. Might it be worth upgrading an existing 12-bit model to a 14-bit one? And what about a cooled 12-bit camera like the ZWO ASI1600 compared with a 14-bit DSLR?

Cheers!

 


  • Replies 34

When I went from a Canon 10D to a 450D, which is 12-bit to 14-bit, the difference was huge. As the 450D was also less noisy than the 10D, it was basically like getting four times the data in the same time.

It's a bit different with cooled CCDs, as they are very low noise and the camera may be mono, making a direct comparison difficult.


I have used a Canon 1000D and a modded 550D, both 12-bit. Currently I'm using a Canon 6D and an 80D, both 14-bit. It is pretty difficult to make a comparison based on bit depth alone, because CMOS chip technology is constantly evolving, with increasing megapixels, better sensitivity and lower noise, before you even bring bit depth into the equation.

When I say I've used these cameras, I mean widefield landscape night-time photography. In that situation the images aren't majorly underexposed. I've found that my 12-bit DSLR images cope pretty well when stretched from around 3 stops underexposed. However, with deep sky, much more image stretching is often needed, and you feel that 14-bit should offer a significant advantage. The latest CMOS cameras seem to be breaking the rules though: they have very low read noise, which means you don't need such long exposures to overcome it. Large numbers of short sub-exposures appear to be the order of the day, with gain (equivalent to high ISO) meaning the subs don't need as much stretching. In this situation bit depth may not be too critical.

There are lots of great images on SGL from ASI1600 users, and those will be far more eloquent than anything I can say!


4 hours ago, MartinB said:

I have used a Canon 1000D and a modded 550D, both 12-bit. Currently I'm using a Canon 6D and an 80D, both 14-bit. It is pretty difficult to make a comparison based on bit depth alone, because CMOS chip technology is constantly evolving, with increasing megapixels, better sensitivity and lower noise, before you even bring bit depth into the equation.

When I say I've used these cameras, I mean widefield landscape night-time photography. In that situation the images aren't majorly underexposed. I've found that my 12-bit DSLR images cope pretty well when stretched from around 3 stops underexposed. However, with deep sky, much more image stretching is often needed, and you feel that 14-bit should offer a significant advantage. The latest CMOS cameras seem to be breaking the rules though: they have very low read noise, which means you don't need such long exposures to overcome it. Large numbers of short sub-exposures appear to be the order of the day, with gain (equivalent to high ISO) meaning the subs don't need as much stretching. In this situation bit depth may not be too critical.

There are lots of great images on SGL from ASI1600 users, and those will be far more eloquent than anything I can say!

I am 100% certain that the 550D is 14-bit.


5 hours ago, beka said:

Hi All,

I was wondering if anyone has experience comparing DSLRs with 12-bit pixel depth, like the Canon 700D, against 14-bit models like the Canon 800D. Might it be worth upgrading an existing 12-bit model to a 14-bit one? And what about a cooled 12-bit camera like the ZWO ASI1600 compared with a 14-bit DSLR?

Cheers!

 

For that matter, the 700D is also 14-bit. You will notice a difference in processing: you can stretch the image further before the histogram starts to fragment, if that is a good way of describing it. But that is mainly apparent when you only have low numbers of subs. With the ASI1600 you gain tonal resolution back in stacking (to an extent) due to the large number of subs used. That's not usually the case with a DSLR, but it's not a problem, as all Canon DSLRs sold since the 1000D have been 14-bit.


8 minutes ago, tooth_dr said:

Just out of curiosity, as you are quite knowledgeable: am I better using my modded 1000D or the 40D I bought a few months ago?

I would not expect them to be very different; they both use the exact same 10.1 MP sensor with a 12-bit DIGIC III processor. I would personally be tempted to use the 40D though, as its larger body should run slightly cooler during longer imaging sessions. The reason to use the 1000D would be to save weight. It's also much easier to self-IR-modify the 1000D.

I don't know about knowledgeable, but I do tend to retain lots of useless information, lol... but thanks.


2 minutes ago, Adam J said:

I would not expect them to be very different; they both use the exact same 10.1 MP sensor with a 12-bit DIGIC III processor. I would personally be tempted to use the 40D though, as its larger body should run slightly cooler during longer imaging sessions. The reason to use the 1000D would be to save weight. It's also much easier to self-IR-modify the 1000D.

I don't know about knowledgeable, but I do tend to retain lots of useless information, lol... but thanks.

I self-modded the 1000D when I bought it many moons ago; the 40D is stock.


9 hours ago, beka said:

Hi All,

I was wondering if anyone has experience comparing DSLRs with 12-bit pixel depth, like the Canon 700D, against 14-bit models like the Canon 800D. Might it be worth upgrading an existing 12-bit model to a 14-bit one? And what about a cooled 12-bit camera like the ZWO ASI1600 compared with a 14-bit DSLR?

Cheers!

 

The Canon 700D is also 14-bit: https://www.dxomark.com/Cameras/Canon/EOS-700D---Specifications

Mark


Hi everyone,

To those of you who pointed out that the Canon 700D is 14-bit: sorry, I stand corrected. But the issue may still be relevant to those who have 12-bit cameras. Thanks for all the input.

Best


What is the real-life implication of having extra 'bits'? I know that each extra bit doubles the number of tonal levels, but do you end up benefiting from this in astrophotography? I can see how this would help in landscape and portrait photography, where the tonal range needed is far greater, but what about here? The ASI1600M, which seems to be a popular CMOS camera at the moment, only has 12 bits.


2 hours ago, Rico said:

What is the real-life implication of having extra 'bits'? I know that each extra bit doubles the number of tonal levels, but do you end up benefiting from this in astrophotography? I can see how this would help in landscape and portrait photography, where the tonal range needed is far greater, but what about here? The ASI1600M, which seems to be a popular CMOS camera at the moment, only has 12 bits.

There's greater benefit in astrophotography, where dynamic range is really important and you may wish to stretch a very small part of the overall tone curve much more dramatically than you would in 'normal' photography.

Although the difference is mitigated by stacking: four 12-bit images = one 14-bit image, and the low noise of the ASI1600 allows you to exploit faint signals more.

 


23 hours ago, tooth_dr said:

I thought the 40D was 14-bit though?

That is actually quite interesting, as it's listed in a couple of places as 12-bit and in others as 14-bit. I suspect it is probably 14-bit... but unless it's modified, the 1000D is still the way to go.


2 minutes ago, Adam J said:

That is actually quite interesting, as it's listed in a couple of places as 12-bit and in others as 14-bit. I suspect it is probably 14-bit... but unless it's modified, the 1000D is still the way to go.

Thanks again Adam. :hello2: So the big question: should I modify it, or just sell it and put the money towards a cooled CMOS?


14 hours ago, Adam J said:

That is actually quite interesting, as it's listed in a couple of places as 12-bit and in others as 14-bit. I suspect it is probably 14-bit... but unless it's modified, the 1000D is still the way to go.

Why waste time speculating when the info is only one quick search away?
http://web.canon.jp/imaging/eosd/eos40d/specifications.html

I think much of the online confusion stems from the 400D having a 12-bit raw format.


21 hours ago, Stub Mandrel said:

Although the difference is mitigated by stacking: four 12-bit images = one 14-bit...

 

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

At the end of the day, a 14-bit image has got to be better than a 12-bit image, all other things being equal of course. Whether or not this is noticeable to the eye is another matter.


2 hours ago, fireballxl5 said:

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

At the end of the day, a 14-bit image has got to be better than a 12-bit image, all other things being equal of course. Whether or not this is noticeable to the eye is another matter.

Not necessarily. With proper astro cameras, whose RAW formats are actually raw, the pixel value is simply a count of the electrons in the pixel's electron well (one electron per photon detected). If the electron well cannot hold more than 4096 (2^12) electrons, then there is no difference between 12- and 14-bit data. With DSLRs, the camera sensor specs, like the size of the electron well, are not public, and the camera does a lot of processing even on RAW images, depending on the camera settings (ISO, etc.). In that case it is tricky, without a good test setup, to tell whether a specific camera gains any advantage from a higher-bit-depth analog-to-digital converter (ADC) and RAW format. The smart money is probably that most of them do benefit, based on the data available from CCDs with similar pixel sizes, but it's hard to know for sure.

Fun fact: many astro cameras that output 16-bit FITS or similar don't use the entire 16-bit range. For instance, the popular KAF-8300 CCD has a well depth of 25,500 electrons, which can be stored using just 15 bits, and the small-pixel Sony chips are in the 9,000-22,000 electron range (14-15 bits).
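The "bits actually needed" arithmetic above is easy to check: the minimum ADC depth for a given full-well capacity is just the base-2 logarithm of the electron count, rounded up. A quick sketch (well depths are the figures quoted above):

```python
import math

def bits_needed(full_well_electrons: int) -> int:
    """Smallest whole number of ADC bits that can represent every
    possible electron count from 0 up to the full well."""
    return math.ceil(math.log2(full_well_electrons + 1))

# KAF-8300: 25,500 e- full well fits in 15 bits (2^15 = 32,768)
print(bits_needed(25_500))   # 15
# A ~9,000 e- small-pixel Sony well needs 14 bits (2^13 = 8,192 is too few)
print(bits_needed(9_000))    # 14
```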

 


Here's another way to look at the problem. I use 12 bits all the time for astro-imaging on my Sony A7S. For my long exposures, the sky background is the main source of noise in the image. Since this noise is also sufficient to dither the step size of the 12-bit quantisation, I would gain absolutely nothing by using 14 bits instead.

Mark
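Mark's dithering point can be illustrated numerically: when the per-pixel noise is larger than one quantisation step, averaging many coarsely quantised samples recovers the underlying level to well below one step, so finer steps would add nothing. A small simulation (the level, noise and step values are made up for illustration, not taken from any real camera):

```python
import random

random.seed(42)

TRUE_LEVEL = 1000.3   # hypothetical sky background, in electrons
NOISE_SIGMA = 10.0    # per-pixel noise, in electrons (>> one ADU step)
STEP = 9.4            # electrons per step of a coarse, 12-bit-style ADC

# Quantise many noisy samples with the coarse step, then average them.
samples = [random.gauss(TRUE_LEVEL, NOISE_SIGMA) for _ in range(20_000)]
quantised = [round(x / STEP) * STEP for x in samples]
estimate = sum(quantised) / len(quantised)

# The average lands within a small fraction of one step of the true
# level, so halving the step size (14 bits) would not have helped here.
print(abs(estimate - TRUE_LEVEL))
```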


6 hours ago, fireballxl5 said:

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

Sorry, I just noticed that I missed answering your main question. The "four 12-bit equal one 14-bit image" claim is not technically true, although it is true-ish in many circumstances. It's one of those common but potentially dangerous mental models people use for stacking. The right way to do this is to look at the actual signal and signal-to-noise ratio (SNR), and it is not as scary as it sounds.

In technical terms, the signal is just the number of detected photons. The SNR is, as the name implies, the ratio between the signal and the noise, and it is the value that is too low when you look at an image and think it looks noisy. The noise, in turn, is equal to the square root of the signal (the actual nature of noise, and why the noise of a signal that behaves like a Poisson-distributed statistical process equals the square root of the signal, is a bit overkill for this post). This turns out to be pretty convenient for calculations, since it also means that the SNR is equal to the square root of the signal (SNR = signal/noise = signal/sqrt(signal) = sqrt(signal)).

Here is one case where the 4 x 12-bit = 14-bit model would be close to true. Take two cameras identical in every way (pixel size, quantum efficiency, etc.) except that one (Camera A) has pixel wells that can hold 36,000 electrons and the other (Camera B) has pixel wells of 9,000 electrons, and test them in the same telescope. If we take a 1-minute exposure with Camera A in which one specific pixel is just saturated (i.e. has converted 36,000 photons to electrons, filling the well to capacity), the resulting value will fit into 14 bits with a little room to spare (assuming a gain of around 2.5 electrons per ADU for both cameras, so that the values fit the stated bit depths). Since Camera B has smaller electron wells, the same pixel would saturate after detecting 9,000 photons, which would take only 15 seconds. Camera B's image will fit into 12 bits.

If we process these images the same way, but compensate for the lower values in the image from Camera B (i.e. multiply its pixels by 4), we will get very similar-looking images, except that the image from Camera A will look less noisy, since it has a higher signal (36,000 photons vs 9,000 for the nearly saturated pixel) and thus a better SNR (sqrt(36,000) vs sqrt(9,000), or approximately 190 vs 95). However, if we grab four 15-second exposures with Camera B and add them, the nearly saturated pixel in the stacked image will have a photon signal of 9,000 + 9,000 + 9,000 + 9,000 = 36,000, the same as the 1-minute exposure from Camera A. If we ignore the fact that reading four images from the sensor instead of one introduces more of the unwanted readout and offset signals (pretty small contributions for longer exposures), the two images would be identical.
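The arithmetic in the two-camera example can be checked directly: assuming pure photon (shot) noise, SNR = sqrt(signal), so four stacked 9,000-photon subs carry the same signal and SNR as one 36,000-photon exposure:

```python
import math

def shot_noise_snr(photons: float) -> float:
    # For a Poisson process, noise = sqrt(signal), so SNR = sqrt(signal)
    return photons / math.sqrt(photons)

single_deep = shot_noise_snr(36_000)       # Camera A: one 1-minute sub
stacked = shot_noise_snr(4 * 9_000)        # Camera B: four 15-second subs, added
print(round(single_deep), round(stacked))  # ~190 for both
print(round(shot_noise_snr(9_000)))        # a lone 15-second sub: ~95
```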


Out of curiosity, I've looked up figures for my 10D and 450D.

Figures with a dash are from http://www.sensorgen.info/

Figures with an equals sign are my calculations. Dynamic range is calculated on saturation capacity (well depth), NOT on bit count.

10D

QE - 22%

Min. read noise - 8.6 electrons

Saturation capacity - 38,537 electrons

Bit depth - 12 (4096 levels)

Electrons per bit = 9.40

Photons per bit @ 22% QE = 43

Photons to saturate = 175,168

 

450D

QE - 33%

Min. read noise - 3.6 electrons

Saturation capacity - 26,614 electrons (rather fewer!)

Bit depth - 14 (16,384 levels)

Electrons per bit = 1.62

Photons per bit @ 33% QE = 5

Photons to saturate = 80,648

 

So although the 450D has a smaller well depth, it makes more efficient use of its electrons and vastly more efficient use of its photons. This is partly offset by the read noise, but the difference is less than it appears because of the 450D having 50% better QE.

So if the 450D gets 1000 photons,  the signal will be:

1000/5 = 200 bits +  2 bits read noise.

For the 10D getting 1000 photons the signal will be

1000/43 = 23 bits + 1 bit read noise
 

Clearly the higher read noise of the 14-bit camera is totally irrelevant in the context of the far greater resolution.

Also, although the 10D has lower QE and greater well depth (saturation capacity) it can only take twice as many photons.

Taking the ratio between  1 photon count and a full well

The 450D has a dynamic range of 52dB

The 10D has a dynamic range of 35dB

 

Even allowing for the better QE of the 14 bit camera, I think this explains why my 14-bit 450D knocks the 10D into a cocked hat.

It's worth remembering that cameras don't actually use the full bit count available, so the difference is probably even more marked.
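For anyone who wants to replay the arithmetic, the per-camera figures follow from three divisions. A sketch using the sensorgen-style inputs quoted above, treating "photons to saturate" as full well divided by QE:

```python
def derived_figures(full_well_e: float, qe: float, levels: int):
    """Derive electrons/ADU, photons/ADU and photons-to-saturate
    from full-well capacity, quantum efficiency and ADC level count."""
    electrons_per_adu = full_well_e / levels
    photons_per_adu = electrons_per_adu / qe
    photons_to_saturate = full_well_e / qe
    return electrons_per_adu, photons_per_adu, photons_to_saturate

# Canon 10D: 38,537 e- full well, 22% QE, 12-bit ADC (4096 levels)
print(derived_figures(38_537, 0.22, 4096))
# Canon 450D: 26,614 e- full well, 33% QE, 14-bit ADC (16,384 levels)
print(derived_figures(26_614, 0.33, 16_384))
```

Rounded, this reproduces the 9.4 e-/level and ~43 photons/level for the 10D against 1.62 e-/level and ~5 photons/level for the 450D.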


23 minutes ago, Stub Mandrel said:

 

Taking the ratio between  1 photon count and a full well

The 450D has a dynamic range of 52dB

The 10D has a dynamic range of 35dB

 

Sensorgen actually gives the 14bit 450D a lower DR than the 12bit 10D:

  • 10D:  DR = 11.0 stops
  • 450D:  DR = 10.4 stops

Do they use a different definition of DR?

Mark


1 hour ago, Stub Mandrel said:

So although the 450D has a smaller well depth, it makes more efficient use of its electrons and vastly more efficient use of its photons. This is partly offset by the read noise, but the difference is less than it appears because of the 450D having 50% better QE.

So if the 450D gets 1000 photons,  the signal will be:

1000/5 = 200 bits +  2 bits read noise.

For the 10D getting 1000 photons the signal will be

1000/43 = 23 bits + 1 bit read noise

Firstly, when you say bits, do you mean ADU ("Analog-to-Digital converter Unit", AKA pixel value)? Secondly, you are not calculating the signal; you're calculating the pixel values, which is a thing we generally already know.

Signal is simply the number of detected photons. So either 1000 for both cameras (if you mean that the photons were all detected), or 330 vs 220 (if the photons hit the pixel and we then take QE into account). Note that the lower resolution of the 10D sensor means its pixels are significantly larger, so they will be hit by almost exactly twice as many photons, everything else being equal; the 10D would therefore come out ahead per pixel even with its lower QE. The larger electron well is also due to the larger pixel size. These factors explain why older cameras like the 10D can do surprisingly well in practice.

The main problem for the 10D is that the electron well is so much larger than what the 12-bit RAW format can handle. This can lead to issues like banding when you stretch the data enough, and other quantisation problems.

What you really want to do is get the signal (the property that can be compared across different equipment) from the pixel value. If you know the processing done in the camera, this can be very easy. For astro cameras it is often trivial: there is no hidden processing and the gain is usually 1 (unity gain, 1 electron = 1 ADU), so in most cases the pixel value is the same as the number of photons. In other cases you just look up your sensor's gain (if the documentation or camera driver doesn't provide this information, you should complain) and multiply or divide by it. Serious imaging software will read the needed data from the camera driver or let you input it.

With DSLRs it gets tricky: the gain (electrons per ADU, the thing you assumed was 9.4 for the 10D) will vary depending on the ISO setting. Sensorgen says the saturation level of the 10D is 2,308 electrons at ISO 1600 vs 38,537 at ISO 100, a clear sign that the gain is very variable. No DSLR maker makes this information public, and there will also be other processing done, some of it non-linear; for example, there is undeniably some sort of dark current and/or bad pixel map processing going on. In these cases it's probably best to give up if you want an accurate value usable for comparisons.

Pixel values can be used as a proxy for signal for a specific setup; they're just not comparable between different equipment and settings.
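The ADU-to-electrons conversion described above is a one-liner once the gain is known; a sketch (the gain values here are illustrative, not taken from any camera's documentation):

```python
def adu_to_electrons(adu: float, gain_e_per_adu: float) -> float:
    """Convert a raw pixel value (ADU) to detected electrons/photons,
    assuming a plain linear sensor with no hidden processing."""
    return adu * gain_e_per_adu

# Unity-gain astro camera: the pixel value IS the electron count
print(adu_to_electrons(1200, 1.0))   # 1200.0
# Hypothetical camera running at a gain of 2.5 e-/ADU
print(adu_to_electrons(1200, 2.5))   # 3000.0
```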


Archived

This topic is now archived and is closed to further replies.
