14-bit vs 16-bit chips


Jane C


No, it doesn't matter all that much. Plenty of folks are still imaging with the ASI1600MM, which is a 12-bit camera.

The only difference I could think of is a higher full-well depth behind the 16-bit ADC vs the 14-bit, but I can't think of a target where that really matters.


35 minutes ago, Jane C said:

Hi:

If all other things were equal, is native 16-bit a significant advantage over 14-bit images converted to 16-bit images?

Thanks,

Jane.

 

There are 2^n reasons but that bit ain't one...


Very little, if any, advantage in a higher bit count.

With stacking, we certainly improve the bit depth of the image (stacking 2 subs adds 1 bit, 4 subs 2 bits, 8 subs 3 bits, and so on).

The resulting image often has many more bits of depth than the camera itself.
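The rule of thumb above (N stacked subs add log2(N) bits) can be sketched in a few lines of Python; the function name is mine, not from any imaging package:

```python
import math

# Averaging N subs from an n-bit ADC puts the mean on a 1/N-ADU grid,
# i.e. it adds log2(N) bits of effective depth.
def stacked_bits(adc_bits, n_subs):
    """Effective bit depth of a stack of n_subs frames from an adc_bits ADC."""
    return adc_bits + math.log2(n_subs)

for n in (2, 4, 8, 64):
    print(f"{n} subs from a 12-bit camera: {stacked_bits(12, n):.0f} bits")
```

So 64 subs from a 12-bit camera already behave like an 18-bit single frame, as far as quantisation goes.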

 


Just now, Martin Meredith said:

Isn't one advantage that the higher the number of bits, the less one has to fret about gain settings?

I love the fixed gain on my 16-bit CCD compared to the user-settable gain on my 12-bit CMOS. Anything that removes an entire dimension of decision making is good in my book!

Actually, no.

With CMOS sensors and adjustable gain, you get a choice of read noise and the possibility of avoiding as much quantization error as possible.

Say you have a few sensors: one 12-bit, one 14-bit and one 16-bit.

With adjustable gain you can make all of them work at unity gain: one electron = one ADU (i.e. the pixel value is the number of electrons).

In that case, higher bit count cameras have the advantage that they can record a wider range of signal values in a single exposure.

12-bit will be limited to 4095 electrons, 14-bit to 16383 electrons and 16-bit to 65535 electrons (hypothetically, as we don't consider offset).

This might seem like a serious advantage, but it is not. There will always be stars with a higher brightness ratio than any of these numbers, so there will always be "blown" cores on some stars, and that is dealt with in a different manner: by using short exposures for the overexposed areas.
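Those saturation limits follow directly from the ADC width; a one-liner to check them (ignoring offset, as above):

```python
# At unity gain (1 e- = 1 ADU), an n-bit ADC clips at 2**n - 1 electrons,
# which gives the limits quoted above.
limits = {bits: 2**bits - 1 for bits in (12, 14, 16)}
for bits, limit in limits.items():
    print(f"{bits}-bit ADC clips at {limit} e-")
```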

On the other side, a camera with lower read noise has an advantage over one with high read noise as far as bit depth goes, so that is the metric to look at rather than bit count.

With lower read noise one can use shorter exposures, and with shorter exposures one can have more of them for the same total integration time; and we have seen that stacking raises the total bit count: the more subs you stack, the more total bits you end up with.

Say you compare the ASI1600 with its 12-bit ADC against some CCD camera with a 16-bit ADC. Say that this CCD has high read noise, maybe 13.6e (I've chosen this number for ease of calculation).

The ASI1600 has 1.7e of read noise.

In order to swamp read noise with sky glow, you need to expose much longer with the CCD: the read noise ratio is 13.6/1.7 = 8, and since the sky signal needed to swamp read noise scales with the square of the read noise, your sub needs to be 8^2 = x64 longer!

This means that you'll have x64 fewer subs in the final stack if you expose for the same total time.

64 is 2^6, so x64 more subs is an additional 6 bits of depth, and that turns the 12-bit camera into an 18-bit camera versus the 16-bit camera.

Due to read noise, the 12-bit camera produces the higher bit depth image in the end (all other things being equal).
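The arithmetic in the last few paragraphs can be checked with a short Python sketch (variable names are mine, numbers are from the post):

```python
import math

# Exposure needed to swamp read noise with sky glow scales with
# read_noise squared, so the sub-length ratio is (13.6 / 1.7)**2 = 64;
# 64x more CMOS subs in the same total time add log2(64) = 6 bits.
rn_ccd = 13.6    # e- RMS, the hypothetical CCD
rn_cmos = 1.7    # e- RMS, ASI1600

exposure_ratio = (rn_ccd / rn_cmos) ** 2
extra_bits = math.log2(exposure_ratio)

print(f"CCD subs must be {exposure_ratio:.0f}x longer")
print(f"stacking the extra CMOS subs adds {extra_bits:.0f} bits")
```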


32 minutes ago, Martin Meredith said:

You missed my point entirely and in fact your post demonstrates perfectly what I mean about 'fretting about gain settings'. 😉

You are probably quite right about me missing the point of your post.

I don't really see what all the fuss is about selecting a suitable gain: one just needs to take a quick look at how the sensor behaves at different gain settings and pick a sensible value. Nothing to fret about, really.

In fact, selecting the lowest gain setting is pretty much like using a CCD sensor, so that is always an option.


23 minutes ago, Vulisha said:

Just to jump in: for planetary imaging, what is the difference between 10-bit and 12-bit?

With my camera I can image 10-bit at 120 fps, and 12-bit at 50 fps.

Which is the better option?

I would hazard a guess that you'd want the higher frame rate, although I'm happy to be corrected if there are other factors at play.


35 minutes ago, Vulisha said:

With my camera I can image 10-bit at 120 fps, and 12-bit at 50 fps. Which is the better option?

That really depends on the target and conditions.

In most cases 8-bit capture is sufficient.

Some targets, like lunar (or solar), don't change at great speed, so you can capture for a longer time. Jupiter rotates, and depending on aperture size and resolved detail, you will be limited in capture duration (unless you derotate your video, but then it is a question of how derotation affects final quality).

If you have a limited window to shoot in, faster FPS is better, as you want to capture as many frames as possible.

On a bright target, depending on the gain setting you choose (and we often choose high gain because read noise is lower), you run a risk of overexposing. Then you have two options: lower the exposure further, or use a higher bit count. The former won't bring any benefit if you are limited by the FPS you can achieve (there is not much point in shooting 2 ms exposures if you can't record 500 fps and 5 ms is the seeing limit), but the latter, using 12-bit mode, will be beneficial.

Bottom line: on most targets go with 8-bit, and use a higher bit mode only if there is a real reason for it. By the way, using ROI can increase the fps you can achieve, which is good for small targets like planets.
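To put rough numbers on the frame-rate tradeoff for Vulisha's two modes: the 180 s capture window below is an assumed figure for Jupiter before rotation blurs detail, not from the post; adjust for your aperture and target.

```python
# Frame budget for each capture mode within a fixed-length window.
window_s = 180  # assumed usable window, in seconds

frames = {mode: fps * window_s
          for mode, fps in (("10-bit", 120), ("12-bit", 50))}
for mode, n in frames.items():
    print(f"{mode}: {n} frames in {window_s} s")
```

More than double the frames to feed the stacker, which is usually what matters under average seeing.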

 


Thanks vlaiv, so basically higher fps is the way to go!

 

Unfortunately my camera (the RPi HQ camera) does not support ROI as such.

 

It has a few modes: 3040px @ 10 fps, 1520px @ 40 fps and 1080px @ 50 fps, all 12-bit, plus 990px @ 120 fps. ROI only helps save bandwidth when the disk is not fast enough; the sensor is limited for some reason and cannot go above those hard-coded values.


On 10/08/2022 at 17:07, Jane C said:

If all other things were equal, is native 16-bit a significant advantage over 14-bit images converted to 16-bit images?

Hi Jane, as I understand it, 16-bit has a higher range than 14-bit:

65,535 levels for 16-bit

16,383 levels for 14-bit


On 10/08/2022 at 17:07, Jane C said:

If all other things were equal, is native 16-bit a significant advantage over 14-bit images converted to 16-bit images?

Basically no advantage, because we stack images, so the final image still reaches the same gradation even with a 12-bit camera; you will reach the same 16-bit depth (or in theory more) in the final stacked image. The advantage in single-frame work is real, though, and should provide higher dynamic range assuming your read noise is sufficiently low.

Edited by Adam J

On 10/08/2022 at 20:48, vlaiv said:

I don't really see what is all the fuss about selecting suitable gain - one just needs to take a quick look at gain vs different settings and just select sensible value. Nothing to fret about really.

Well, maybe you don't read all the posts on various forums about what the best gain is for this or that. The fact that many people now rely on tools to work out the best gain/exposure settings speaks for itself. Add to that the fact that gain adds a whole extra dimension to a dark library. User-settable gain is a necessary evil due to insufficient bit depth and ought to be recognised for the sticking-plaster solution it is.

The analogy isn't perfect, but 20-30 years ago there was a similar situation in audio recording: fretting over how to set the recording level to avoid clipping while also avoiding quantisation noise from too low a bit depth. Guess what: nobody worries much about that nowadays, thanks to the widespread availability of higher bit depth audio codecs. That's where we'll be in 10 years' time with CMOS, I predict. Nobody will be worrying about gain.

Just my opinion.

 

 

 


17 minutes ago, Martin Meredith said:

User-settable gain is a necessary evil due to insufficient bit depth and ought to be recognised for the sticking plaster solution it is.

 

To be honest, I have never seen the issue with gain either; it all seems simple to me, but we should recognise that not everyone wants to get that involved in the process.


2 hours ago, Martin Meredith said:

User-settable gain is a necessary evil due to insufficient bit depth and ought to be recognised for the sticking plaster solution it is.

I don't see it like that.

In fact, there are several examples to the contrary.

1) We now have 16-bit CMOS cameras that still have adjustable gain. If the only reason for adjustable gain were to overcome bit depth, there would be no need for it on a 16-bit camera.

2) Some CCD sensors don't need 16 bits at all; they have a FWC that can be fully covered by a lower bit count.

For example, the ICX825 has 24K FWC, so 15 bits is more than enough to record the electron count. The Sony ICX814/5 has only 9000, so even 13 bits would nearly be enough (with a bit of loss), and 14 bits is more than enough.

Yet they all have 16-bit ADCs.

3) There are CCD models with more than 64K full well capacity, like the KAF-09000 or KAF-16803, both with 100K+ FWC; yet they don't have a 17-bit ADC to fully exploit that, but a 16-bit ADC.
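The bit counts in these examples come from a simple rule: you need the smallest n with 2^n - 1 >= FWC. A quick sketch (the function name is mine, FWC figures are from the post):

```python
import math

# Bits needed to record every electron of a given full-well capacity:
# smallest n such that 2**n - 1 >= fwc.
def bits_needed(fwc):
    return math.ceil(math.log2(fwc + 1))

for sensor, fwc in (("ICX825", 24_000), ("ICX814/5", 9_000),
                    ("KAF-16803", 100_000)):
    print(f"{sensor}: {fwc} e- FWC -> {bits_needed(fwc)} bits")
```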

I think that adjustable gain has more to do with the electronics and how the ADC is implemented.

With a CMOS sensor, since each pixel has an ADC unit associated with it, and because of the advancement of the semiconductor manufacturing process (think CPUs and such), it is easy to make the circuitry for adjustable gain.

 

 

