
12-bit vs 14-bit DSLRs



21 hours ago, fireballxl5 said:

I've read this and similar claims a number of times now, but what is the technical justification? Has someone explained this elsewhere?

At the end of the day, a 14-bit image has got to be better than a 12-bit image, all other things being equal of course. Whether or not this is noticeable to the eye is another matter.

Maybe this article can explain, by Craig Stark of PHD fame:

http://www.stark-labs.com/craig/resources/Articles-&-Reviews/BitDepthStacking.pdf

(Btw, it only works with average stacking, not median stacking.)


6 hours ago, wimvb said:

Maybe this article can explain, by Craig Stark of PHD fame:

http://www.stark-labs.com/craig/resources/Articles-&-Reviews/BitDepthStacking.pdf

(Btw, it only works with average stacking, not median stacking.)

That's a good analysis.  I pretty much agree with what he says.

I should probably clarify a point I made earlier.  For long exposure imaging, I choose the length of my exposure so the noise from the skyglow drowns out the read noise.  If that skyglow noise is sufficient to dither the quantisation at 12 bits (which is almost certainly the case) then moving to 14 bits won't offer me any improvement.

It's definitely worth moving to a camera with lower read noise because (all other things being equal) that allows the skyglow floor to be lowered and takes advantage of the dynamic range available.
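To make the dithering point concrete, here is a minimal numpy sketch (all of the numbers are made up for illustration, not measured from any real camera): a smooth ramp spanning only a few ADU is quantised by a 12-bit ADC, with and without roughly 1 ADU rms of skyglow noise, and then average-stacked.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth "sky" ramp that spans only a few ADU, so the 12-bit ADC on its own
# would posterise it.  All values here are made up for illustration.
signal = np.linspace(100.0, 104.0, 1000)          # true signal in ADU

def average_stack(noise_sigma, n_subs, bits=12):
    """Average n_subs frames that are quantised to `bits` after adding Gaussian noise."""
    levels = 2 ** bits
    acc = np.zeros_like(signal)
    for _ in range(n_subs):
        frame = signal + rng.normal(0.0, noise_sigma, signal.shape)
        acc += np.clip(np.round(frame), 0, levels - 1)   # the ADC step: round and clip
    return acc / n_subs

for sigma in (0.0, 1.0):                           # no noise vs ~1 ADU rms of skyglow noise
    est = average_stack(sigma, n_subs=100)
    rms = np.sqrt(np.mean((est - signal) ** 2))
    print(f"noise sigma = {sigma} ADU: rms error after stacking = {rms:.3f} ADU")
```

Without noise the rms error stays stuck at the quantisation limit (around 0.3 ADU) no matter how many subs you stack; with the noise present it falls roughly as 1/sqrt(N).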

Mark


Btw, I put Stark's article to the test and did a sequence in PixInsight, creating and stacking images. But I only simulated a 3 bit camera, stacking 32 images.

Here's the result:

http://wimvberlo.blogspot.se/2017/08/that-other-reason-to-stack-images.html

When "all else is equal", the number of bits being 12 or 14, really isn't an issue. As long as we stack enough images, there is no noticable difference.

But, when is "all else equal"? Certainly not when you compare two camera models, even of the same brand.
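For anyone who wants to play with the idea without PixInsight, here is a rough numpy approximation of the same kind of experiment (a smooth gradient, a hypothetical 3-bit camera, 32 average-stacked subs). The noise level is an assumption for the sketch, not the value used on the blog.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rough approximation of the 3-bit experiment (not the actual PixInsight workflow):
# a smooth 0..1 gradient, 3-bit quantisation, 32 average-stacked noisy subs.
truth = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
bits, n_subs, noise_sigma = 3, 32, 0.1            # noise in full-scale units (assumed)
levels = 2 ** bits - 1

def quantise(img):
    """Simulate the 3-bit ADC: clip to range, round to the nearest level."""
    return np.round(np.clip(img, 0.0, 1.0) * levels) / levels

single = quantise(truth)                          # one noiseless 3-bit sub: posterised
stacked = np.mean([quantise(truth + rng.normal(0.0, noise_sigma, truth.shape))
                   for _ in range(n_subs)], axis=0)

print("rms error, noiseless 3-bit sub:", np.sqrt(np.mean((single - truth) ** 2)))
print("rms error, 32-sub average     :", np.sqrt(np.mean((stacked - truth) ** 2)))
```

A single posterised sub is stuck with its quantisation error; the 32-sub average lands much closer to the true gradient.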


9 hours ago, wimvb said:

Btw, I put Stark's article to the test and did a sequence in PixInsight, creating and stacking images. But I only simulated a 3 bit camera, stacking 32 images.

Here's the result:

http://wimvberlo.blogspot.se/2017/08/that-other-reason-to-stack-images.html

When "all else is equal", the number of bits being 12 or 14, really isn't an issue. As long as we stack enough images, there is no noticable difference.

But, when is "all else equal"? Certainly not when you compare two camera models, even of the same brand.

That's an excellent blog article of yours.  It explains very clearly how noise dithers the quantisation and allows stacking to do a good job.

You also asked when is "all else equal". Unfortunately it never happens in practice :)

Mark


On 10/24/2017 at 00:11, sharkmelley said:

That's an excellent blog article of yours.  It explains very clearly how noise dithers the quantisation and allows stacking to do a good job.

You also asked when is "all else equal". Unfortunately it never happens in practice :)

Mark

I find the 14-bit data much easier to process, but then I do long exposures as I cool my DSLR, so I would not see that effect on a stack of only 12 or so 20-minute exposures. Also, the reduction in noise from cooling means that you don't get anything like the same dithering effect in brightness. Depending on the processing, I have often thought that ASI1600 images can look a little flat in comparison to CCD, and I have in the past attributed it to the lower number of brightness levels. But then again, sometimes I see them and they look great; it's probably a function of gain and number of subs.

I.e. with very short subs (30-60 seconds), the quantization of the data is coarse, and so it can take vast numbers of subs. So while what you say is true, I think you are better off with a 14-bit A/D just because you don't need to rely on capturing large numbers of subs to compensate.

 


The read noise should be adequate for dithering. Plus, the number of photons from a target will always vary randomly between subs.

If you do your initial stretch on a 16-bit TIFF then you only need 8 14-bit images before the lowest bit becomes irrelevant (allowing for rounding the last bit of a stack), but if you do an initial stretch on a 32-bit image (e.g. in DSS) you need to stack about half a million subs before the low bits make no contribution. Obviously this is OTT, but it does show that the benefits of camera bit-depth depend on how you process the data.
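A back-of-envelope version of that arithmetic, just to show where the two figures come from: the factor of 2 for "rounding the last bit" is my own assumption, and the rest is simply 2^(output bits − camera bits).

```python
def subs_to_saturate_output(out_bits, cam_bits=14, rounding_margin=2):
    """Rough number of average-stacked subs before the output format can no longer
    represent any extra precision (back-of-envelope, not an exact result)."""
    return rounding_margin * 2 ** (out_bits - cam_bits)

for out_bits in (16, 32):
    print(f"{out_bits}-bit output: ~{subs_to_saturate_output(out_bits):,} averaged 14-bit subs")
```

That reproduces the ~8 subs for a 16-bit output and roughly half a million (2^19 = 524,288) for a 32-bit output.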

 


On 23/10/2017 at 20:23, wimvb said:

Btw, I put Stark's article to the test and did a sequence in PixInsight, creating and stacking images. But I only simulated a 3 bit camera, stacking 32 images.

Here's the result:

http://wimvberlo.blogspot.se/2017/08/that-other-reason-to-stack-images.html

When "all else is equal", the number of bits being 12 or 14, really isn't an issue. As long as we stack enough images, there is no noticable difference.

But, when is "all else equal"? Certainly not when you compare two camera models, even of the same brand.

I did a similar test a while ago using 8-bit greyscale data, and what I noticed was that you had to do the stacking in, say, the 32-bit domain or floating point (obviously), but you needed to scale the image after stacking because the noise was additive... so instead of getting a range of 0-255 you would end up with something like 23-255 depending on the number of images stacked.

It really does show the power of stacking though and you have presented it really well...


Thank you, Stuart.

If you do the stacking at the same bit depth as that of the subs, it's theoretically impossible to gain more depth. So, yes, you need to stack at a higher depth. But there should be a gain going from 8-bit subs to a 16-bit stack. And any stacking software should work internally with floating point accuracy, of course, even if the output is presented in integer format.

1 hour ago, StuartJPP said:

but you needed to scale the image after stacking because the noise was additive...so instead of getting a range of 0-255 you would end up with something like 23-255 depending on the number of images stacked.

I didn't understand this part. If the input is 8-bit with values from 0 to 255, then after averaging, the output should still be from 0 to 255, but now with fractional values representing the gained bit depth. The only scaling required would be from the old highest value (2^8 - 1 = 255 for 8-bit) to the new highest value (2^32 - 1, about 4 billion for 32-bit) to fit those fractional values into an integer format. I don't understand how the smallest value could be as high as 23 (using averaging, not just summing up; in the latter case, the highest value would also exceed 255).


I'll try to explain what I did but I am not very good at it. I think I know what my issue was as listed below.

I had an original graduated grey-scale image, similar to yours on your blog (the original scene, not the 3-bit version). I then made 64, 128 and 256 copies of the original image and added random noise to each copy. The key here is that the noise I added was purely additive; I didn't add "negative" noise to the stack. The other thing was that the noise I added was random over the whole 0-255 range.

When you then do a mean over all the noisy images in the stack, the averaged black point can only increase. The more images I added to the stack, the more the black point moved away from 0.

So when I scaled (truncated) the image back to an 8-bit image, the black point was no longer black and the top end was clipped to white. Hope that explains the issue I had.
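For what it's worth, here is a rough numpy sketch of the two cases (the noise amplitudes are arbitrary and this is not the original processing): zero-mean noise leaves the averaged range at roughly 0-255, while purely additive noise lifts the black point by about its mean and pushes the top end past 255, which then clips to white when the result is truncated back to 8 bits.

```python
import numpy as np

rng = np.random.default_rng(2)

# An 8-bit gradient, average-stacked in floating point, under two noise models.
# Amplitudes are arbitrary; this is only a sketch of the effect, not the original test.
original = np.tile(np.linspace(0.0, 255.0, 256), (64, 1))
n_subs = 64

def average_stack(noise_fn):
    # Stack in floating point; no intermediate 8-bit truncation of the subs.
    return np.mean([original + noise_fn(original.shape) for _ in range(n_subs)], axis=0)

zero_mean = average_stack(lambda shape: rng.normal(0.0, 10.0, shape))   # noise centred on 0
additive  = average_stack(lambda shape: rng.uniform(0.0, 50.0, shape))  # positive-only, mean 25

for name, img in (("zero-mean noise   ", zero_mean), ("additive-only noise", additive)):
    print(f"{name}: black point {img.min():.1f}, white point {img.max():.1f}")
```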

 

My original intention was just to see how well an average function could "recover" the original image from a stack of, say, 2, 4, 8, 16, etc. noisy images, and as expected it works very well with enough subs. I don't have the images here with me, but I am sure you can get an idea of what I mean from the above explanation.

 

 


OK, now I understand. In your case the noise was really added to the signal. And since the added part had a mean larger than 0, you really moved the pixels to higher values. You basically added the equivalent of "dark signal" or light pollution.

