
beka

12-bit vs 14-bit DSLRs


Hi All,

I was wondering if anyone has any experience comparing DSLRs with 12-bit pixel depth, like the Canon 700D, vs 14-bit, like the Canon 800D. Might it be worth upgrading an existing 12-bit model to a 14-bit model? Also, what about the question of a cooled 12-bit camera like the ZWO ASI1600 compared to a 14-bit DSLR?

Cheers!

 


When I went from a Canon 10D to a 450D, which is 12-bit to 14-bit, the difference was huge. As the 450D was less noisy than the 10D, it was basically like getting four times the data in the same time.

It's a bit different with cooled CCDs, as they are very low noise, and the camera may be mono, making a direct comparison difficult.


I have used a Canon 1000D and a modded 550D, both 12-bit. Currently I'm using a Canon 6D and 80D, both 14-bit. It is pretty difficult to make a comparison based on bit depth, because CMOS chip technology is constantly evolving, with increasing megapixels, better sensitivity and lower noise, before you even bring bit depth into the equation.

When I say I've used these cameras, I mean widefield landscape night-time photography. In this situation the images aren't majorly underexposed. I've found that my 12-bit DSLR images would cope pretty well when stretched from around 3 stops underexposed. However, with deep sky, much more image stretching is often needed, and you have to feel that 14-bit would offer a significant advantage. The latest CMOS cameras seem to be breaking the rules though: they have very low read noise, which means you don't need such long exposures to overcome it. Large numbers of short sub-exposures appear to be the order of the day, with the use of gain (equivalent to high ISO) meaning the subs don't need as much stretching. In this situation bit depth may not be too critical.
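The short-subs trade-off above can be sketched numerically. This is a toy model, not data from any specific camera: the electron counts and read-noise figures below are assumed. Shot noise depends only on the total signal collected, while read noise is paid once per sub.

```python
import math

def stack_noise(total_signal_e, read_noise_e, n_subs):
    """Total noise (electrons) when the same total signal is split across
    n_subs frames: shot noise plus one read-noise contribution per sub."""
    shot = math.sqrt(total_signal_e)          # shot noise from the total signal
    read = read_noise_e * math.sqrt(n_subs)   # read noise accumulates per sub
    return math.sqrt(shot**2 + read**2)

total_signal = 10_000  # electrons over the whole session (assumed)

# Higher read noise (~8 e-, older DSLR): many short subs cost real SNR.
# Low read noise (~1.5 e-, modern CMOS): the penalty is almost negligible.
for rn in (8.0, 1.5):
    one_long = stack_noise(total_signal, rn, 1)
    many_short = stack_noise(total_signal, rn, 60)
    print(f"read noise {rn} e-: 1 sub -> {one_long:.1f} e-, 60 subs -> {many_short:.1f} e-")
```

This is why low-read-noise sensors make large numbers of short subs practical: the 60-sub penalty shrinks from noticeable to almost nothing.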

There are lots of great images on SGL from ASI1600 users and these will be far more eloquent than anything I can say!!

4 hours ago, MartinB said:

I have used a 12bit Canon 1000d, a modded 550d, both 12 bit. […]

I am 100% certain that the 550D is 14bit.

5 hours ago, beka said:

I was wondering if anyone has any experience comparing DSLRs with 12bit pixel depth like canon 700D vs 14bit like Canon 800D. […]

For that matter, the 700D is also 14-bit. You will notice a difference in processing, as you can stretch the image further before the histogram starts to fragment, if that is a good way of describing it. But that is mainly apparent when you only have low numbers of subs. With the ASI1600 you gain resolution back in stacking (to an extent) due to the large number of subs used. That's not usually the case with a DSLR. But it's not a problem, as all Canon DSLR cameras sold since the 1000D have been 14-bit.
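The histogram fragmentation can be simulated with toy numbers (the signal level and noise here are assumed, not from any real camera): a faint exposure only occupies the bottom of the range, so the number of distinct ADC codes it lands on is what limits how smooth a hard stretch can look, and a 14-bit capture simply has more codes to land on.

```python
import numpy as np

rng = np.random.default_rng(0)
# A faint, underexposed signal occupying only the bottom few percent of
# the sensor's range (toy numbers).
analog = rng.normal(loc=0.02, scale=0.005, size=100_000).clip(0.0, 1.0)

for bits in (12, 14):
    codes = np.round(analog * (2**bits - 1)).astype(int)  # quantise to ADC codes
    distinct = len(np.unique(codes))
    print(f"{bits}-bit capture: {distinct} distinct levels before stretching")
```

A stretch is just a multiplication, which cannot create new levels, so the gaps between those codes become the visible "comb" in the stretched histogram.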

4 minutes ago, Adam J said:

For that matter so is the 700D

Just out of curiosity, as you are quite knowledgeable, am I better using my modded 1000D or the 40D I bought a few months ago?

8 minutes ago, tooth_dr said:

Just out of curiosity as you are quite knowledgable, am I better using my modded 1000d or the 40d I bought a few months ago?

I would not expect them to be very different: they both use the exact same 10.1 MP 12-bit sensor and DIGIC III processor. I would personally be tempted to use the 40D though, as it should run slightly cooler (larger body) during longer imaging sessions. The reason to use the 1000D would be if you wanted to save weight. It's also much easier to self IR-modify the 1000D.

I don't know about knowledgeable, but I do tend to retain lots of useless information lol... but thanks.

2 minutes ago, Adam J said:

I would not expect them to be very different they both use the exact same sensor […] Its also much easier to self IR modify the 1000D.

I self-modded the 1000d when I bought it many moons ago, the 40d is stock.   

1 minute ago, tooth_dr said:

I self-modded the 1000d when I bought it many moons ago, the 40d is stock.   

Then use the modded camera every time. :)

9 hours ago, beka said:

I was wondering if anyone has any experience comparing DSLRs with 12bit pixel depth like canon 700D vs 14bit like Canon 800D. […]

The Canon 700D is also 14-bit: https://www.dxomark.com/Cameras/Canon/EOS-700D---Specifications

Mark


Hi everyone,

To those of you who pointed out that the Canon 700D is 14-bit: sorry, I stand corrected. But the issue might still be relevant to those who have 12-bit cameras. Thanks for all the input.

Best


What is the real-life implication of having extra 'bits'? I know that for every extra bit you get twice as many tonal levels, but do you end up benefiting from this in astrophotography? I can see how this would help in landscape and portrait photography, where the tonal range needed is far greater, but what about here? The ASI1600M, which seems to be a popular CMOS camera at the moment, only has 12 bits.

2 hours ago, Rico said:

What is the real life implication of having extra 'bits'? […]

There's a greater benefit in astrophotography, where dynamic range is really important and you may wish to stretch a very short part of the overall tone curve much more dramatically than you would in 'normal' photography.

Although the difference is mitigated by stacking: four 12-bit images ≈ one 14-bit image, and the low noise of the ASI1600 allows you to exploit the faint signals more.
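The arithmetic behind "four 12-bit frames ≈ one 14-bit frame" is just that summing four samples of up to 4095 each needs 14 bits to store. A quick sketch of the numeric-range side of the claim (it says nothing about any particular camera's noise behaviour):

```python
import math

max_12bit = 2**12 - 1          # 4095, the largest 12-bit sample
stack_max = 4 * max_12bit      # 16380 after summing four frames
bits_for_stack = math.ceil(math.log2(stack_max + 1))
print(bits_for_stack)          # 14
```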

 

23 hours ago, tooth_dr said:

I thought the 40d was 14bit though?

That is actually quite interesting, as it's listed in a couple of places as 12-bit and in other places as 14-bit. I suspect it is probably 14-bit... but unless it's modified, the 1000D is still the way to go.

2 minutes ago, Adam J said:

That is actually quite interesting as its listed in a couple of places as 12-bit and in other places as 14-bit. I suspect it is probably 14bit...but unless its modified then the 1000D is still the way to go.

Thanks again Adam. :hello2: So the big Q - should I modify it or just sell it and put money towards a cooled CMOS?

14 hours ago, Adam J said:

That is actually quite interesting as its listed in a couple of places as 12-bit and in other places as 14-bit. […]

Why waste time speculating when the info is only one quick search away?
http://web.canon.jp/imaging/eosd/eos40d/specifications.html

I think much of the online confusion stems from the 400D having a 12-bit raw format.

17 hours ago, tooth_dr said:

Thanks again Adam. :hello2: So the big Q - should I modify it or just sell it and put money towards a cooled CMOS?

Cooled CMOS for me.

21 hours ago, Stub Mandrel said:

Although the difference is mitigated by stacking. 4 12-bit images = one 14bit...

 

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

At the end of the day, a 14-bit image has got to be better than a 12-bit image, all other things being equal of course. Whether or not this is noticeable to the eye is another matter.

2 hours ago, fireballxl5 said:

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

At the end of the day, a 14-bit image has got to be better than a 12- bit image, all other things being equal of course. Wether or not this is noticeable to the eye is another matter ?

Not necessarily. When it comes to proper astro cameras, whose RAW formats are actually raw, the pixel value is simply a count of the electrons in the pixel's electron well (1 electron per photon detected). If the electron well cannot hold more than 4096 (2^12) electrons, then there is no difference between 12- and 14-bit data. When it comes to DSLRs, the camera sensor specs, like the size of the electron well, are not public, and the camera does a lot of processing even on RAW images that depends on the camera settings (ISO, etc.). In that case it is tricky, without a good test setup, to tell whether a specific camera gets any advantage from a higher-bit-depth analogue-to-digital converter (ADC) and RAW format. The smart money is probably that most of them do benefit, based on the data available from CCDs with similar pixel sizes, but it's hard to know for sure.

Fun fact: many astro cameras that output 16-bit FITS or similar don't use the entire 16-bit range. For instance, the popular KAF-8300 CCD has a well depth of 25500 electrons, which can be stored using just 15 bits, and the small-pixel Sony chips are in the 9-22 thousand range (13-15 bits).
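The well-depth-to-bits relationship is easy to check: at a gain of 1 e-/ADU, a full well of N electrons needs ceil(log2(N+1)) bits. The well depths below are the ones quoted above.

```python
import math

def bits_needed(full_well_electrons):
    """Smallest bit depth that can represent 0..full_well at 1 e-/ADU."""
    return math.ceil(math.log2(full_well_electrons + 1))

print(bits_needed(25500))   # KAF-8300 full well -> 15 bits
print(bits_needed(9000))    # low end of the small-pixel Sony range -> 14 bits
print(bits_needed(22000))   # high end of that range -> 15 bits
```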

 


Here's another way to look at the problem. I use 12 bits all the time for astro-imaging on my Sony A7S. For my long exposures, the sky background is the main source of noise in my image. Since this noise is also sufficient to dither the step size of the 12-bit quantisation, I would gain absolutely nothing by using 14 bits instead.

Mark
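The dithering point above can be demonstrated with a toy simulation (the signal level and noise figure are assumed, not measured from an A7S): when the noise is larger than one quantisation step, combining many noisy, coarsely-quantised samples recovers the sub-step signal level, so finer steps would add nothing.

```python
import numpy as np

rng = np.random.default_rng(42)
true_level = 100.37      # a level that falls between integer quantisation steps
noise_sigma = 2.0        # noise larger than 1 step, like sky background noise

samples = true_level + rng.normal(0.0, noise_sigma, size=1_000_000)
quantised = np.round(samples)    # coarse 1-step quantisation
print(round(float(quantised.mean()), 2))   # recovers ~100.37 despite the steps
```

Without the noise, every sample would quantise to 100 and the .37 would be lost; the noise dithers the signal across the steps, just as described.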

6 hours ago, fireballxl5 said:

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

Sorry, I just noticed that I missed answering your main question. The "four 12-bit images equal one 14-bit image" claim is not technically true, although it is true-ish in many circumstances. It's one of those common but potentially dangerous mental models many people use for stacking. The right way to do this is to look at the actual signal and signal-to-noise ratio (SNR), and it is not as scary as it sounds.

In technical terms, the signal is just the number of detected photons. The SNR is, as the name implies, the ratio between the signal and the noise, and it is the value that isn't good enough when you look at an image and think it looks noisy. The noise in turn is equal to the square root of the signal (why the noise of a Poisson-distributed statistical process equals the square root of the signal is a bit beyond the scope of this post). This turns out to be pretty convenient for calculations, since it means the SNR is also equal to the square root of the signal (SNR = signal/noise = signal/sqrt(signal) = sqrt(signal)).
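Those relationships in code form (pure arithmetic, no camera assumptions):

```python
import math

def shot_noise(signal):
    """Noise of a Poisson (photon-counting) signal."""
    return math.sqrt(signal)

def snr(signal):
    """SNR = signal / noise = signal / sqrt(signal) = sqrt(signal)."""
    return signal / shot_noise(signal)

print(round(snr(9000), 1))    # ~94.9
print(round(snr(36000), 1))   # ~189.7 -- four times the signal, twice the SNR
```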

Here is one case where the "4x 12-bit = 14-bit" model would be close to true. Take two cameras identical in every way (pixel size, quantum efficiency, etc.) except that one (Camera A) has pixel wells that can hold 36000 electrons and the other (Camera B) has pixel wells of 9000 electrons, and test them in the same telescope. If we take a 1 min exposure using Camera A where one specific pixel is just saturated (i.e. has converted 36000 photons to electrons and filled the well to capacity), the resulting value will fit into 14 bits with a little room to spare. Since Camera B has smaller electron wells, the same pixel would become saturated after detecting 9000 photons, which would take only 15 seconds. Camera B's image will fit into 12 bits.

If we process these images the same way but compensate for the lower values in the image from Camera B (i.e. multiply its pixels by 4), we will get very similar looking images, except the image from Camera A will look less noisy, since it has a higher signal (36000 photons vs 9000 photons for the nearly saturated pixel) and thus a better SNR (sqrt(36000) vs sqrt(9000), or approx. 190 vs 95). However, if we grab four 15 second exposures with Camera B and add them, the nearly saturated pixel in the stacked image will have a photon signal of 9000+9000+9000+9000 = 36000 (fits into 14 bits), just like the 1 min exposure using Camera A. If we ignore that reading four images from the sensor instead of one introduces more of the unwanted readout and offset signals (pretty small sources for longer exposures), the two images would be identical.
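The Camera A vs Camera B example works out like this (same numbers as the post; read and offset noise ignored, as stated):

```python
import math

single_A = 36000                 # one 60 s frame from Camera A, pixel just saturated
stack_B = [9000] * 4             # four 15 s frames from the smaller-well Camera B

signal_B = sum(stack_B)          # 36000 -- same total signal as Camera A
snr_A = math.sqrt(single_A)
snr_B = math.sqrt(signal_B)      # shot-noise-limited SNR of the stack

print(signal_B, round(snr_A, 1), round(snr_B, 1))  # identical signal and SNR
```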


Out of curiosity, I've looked up figures for my 10D and 450D.

Figures with a dash are from http://www.sensorgen.info/

Figures with an equals are my calculations. Dynamic range is calculated on saturation capacity (well depth) NOT on bit count.

10D

QE - 22%

Min. read noise - 8.6 electrons

Saturation capacity - 38,537 electrons

Bit depth 12 - 4096

Electrons per bit = 9.40

Photons per bit @ 22% QE = 43

Photons to saturate = 1,657,091

 

450D

QE - 33%

Min. read noise - 3.6 electrons

Saturation capacity - 26,614 electrons (rather fewer!)

Bit depth 14 - 16,384

Electrons per bit = 1.62

Photons per bit @ 33% QE = 5

Photons to saturate = 878,262

 

So although the 450D has a smaller well depth, it makes more efficient use of its electrons and vastly more efficient use of its photons. This is partly offset by the read noise, but the difference is less than it appears because of the 450D having 50% better QE.

So if the 450D gets 1000 photons,  the signal will be:

1000/5 = 200 bits +  2 bits read noise.

For the 10D getting 1000 photons the signal will be

1000/43 = 23 bits + 1 bit read noise
 

Clearly the higher read noise of the 14-bit camera is totally irrelevant in the context of the far greater resolution.

Also, although the 10D has lower QE and greater well depth (saturation capacity) it can only take twice as many photons.

Taking the ratio between  1 photon count and a full well

The 450D has a dynamic range of 52dB

The 10D has a dynamic range of 35dB

 

Even allowing for the better QE of the 14 bit camera, I think this explains why my 14-bit 450D knocks the 10D into a cocked hat.

It's worth remembering that cameras don't actually use the full bit count available, so the difference is probably even more marked.
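For comparison, the more common engineering definition of dynamic range is full-well capacity divided by read noise, which gives different figures from the dB values above. A sketch using the sensorgen numbers already quoted (note that published DR figures are usually measured per-ISO, which may be why different sources disagree):

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Dynamic range as full well / read noise, in dB and in stops."""
    ratio = full_well_e / read_noise_e
    return 20 * math.log10(ratio), math.log2(ratio)

for name, well, rn in (("10D", 38537, 8.6), ("450D", 26614, 3.6)):
    db, stops = dynamic_range(well, rn)
    print(f"{name}: {db:.1f} dB, {stops:.1f} stops")
```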

23 minutes ago, Stub Mandrel said:

 

Taking the ratio between  1 photon count and a full well

The 450D has a dynamic range of 52dB

The 10D has a dynamic range of 35dB

 

Sensorgen actually gives the 14-bit 450D a lower DR than the 12-bit 10D:

  • 10D:  DR = 11.0 stops
  • 450D:  DR = 10.4 stops

Do they use a different definition of DR?

Mark

1 hour ago, Stub Mandrel said:

So although the 450D has a smaller well depth, it makes more efficient use of its electrons and vastly more efficient use of its photons. This is partly offset by the read noise, but the difference is less than it appears because of the 450D having 50% better QE.

So if the 450D gets 1000 photons,  the signal will be:

1000/5 = 200 bits +  2 bits read noise.

For the 10D getting 1000 photons the signal will be

1000/43 = 23 bits + 1 bit read noise

Firstly, when you say bits, do you mean ADU ("Analog-to-Digital converter Unit", AKA pixel value)? Secondly, you are not calculating the signal, you're calculating the pixel values, which is a thing we generally already know.

Signal is simply the number of detected photons. So either 1000 for both cameras (if you mean that the photons were all detected) or 330 vs 220 (if the photons hit the pixel and we then take QE into account). Note that the lower resolution of the 10D sensor means that its pixels are significantly larger, so they will be hit by almost exactly twice as many photons, everything else being equal, so the 10D would come out ahead per pixel even with its lower QE. The larger electron well is also due to the larger pixel size. These factors explain why older cameras like the 10D can do surprisingly well in practice.

The main problem for the 10D is that the electron well is so much larger than what the 12-bit RAW format can handle. This could lead to issues like banding if you stretch the data enough, and other quantisation problems.

What you really want to do is get the signal (the property that can be compared across different equipment) from the pixel value. If you know the processing done in the camera, this can be very easy to do. For astro cameras it is often trivial: there is no hidden processing and the gain is usually 1 (unity gain, 1 electron = 1 ADU), so in most cases the pixel value is the same as the number of photons. In the other cases you just look up your sensor's gain (if the documentation or camera driver doesn't provide this information, you should complain) and multiply or divide by it. Serious imaging software will read the needed data from the camera driver or let you input it.
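The conversion described above as a one-liner (the 2.4 e-/ADU DSLR-style gain below is purely illustrative, not a real spec):

```python
def adu_to_electrons(adu, gain_e_per_adu):
    """Recover the signal (detected electrons) from a pixel value, assuming a
    linear sensor with known gain and no hidden in-camera processing."""
    return adu * gain_e_per_adu

print(adu_to_electrons(1200, 1.0))   # unity gain: pixel value == electron count
print(adu_to_electrons(1200, 2.4))   # hypothetical DSLR-style gain
```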

With DSLRs it gets tricky: the gain (electrons per ADU, the thing you assumed was 9.4 for the 10D) will vary depending on the ISO setting. Sensorgen says the saturation level of the 10D is 2308 electrons at ISO 1600 vs 38537 at ISO 100, a clear sign that the gain is very variable. No DSLR maker makes this information public, and there will also be other processing done, some of it non-linear. For example, there is almost certainly some sort of dark current and/or bad pixel map processing going on. In these cases it's probably best to give up if you want an accurate value usable for comparisons.

Pixel values can be used as a proxy for signal for a specific setup, it's just not comparable between different equipment and settings.

