
16-bit vs 32-bit processing



Hi,

I have been playing with Affinity Photo for image processing, and I find that I really like the application and its more visual way of processing images, compared to, for instance, PI (I know, I am probably crazy...).

One problem I have, though, is that in order to use either StarNet++ or StarXTerminator for separating stars and nebulosity, I have to go from 32-bit to 16-bit.

But I am honestly struggling a little to wrap my head around how much of a difference it makes in the end.

I appreciate that 32-bit images contain a lot more information than 16-bit images. But then again, I am using a camera with a 12-bit sensor, so does it really matter?

 

Any insights or thoughts are welcome!

And if you have ways of separating stars from the background that give results as good as these machine-learning plugins do, those are appreciated too.


I really like the idea of using 32-bit floating point precision.

It is simply enough precision for all amateur astronomical uses.

Although the camera is, for example, 12-bit or 14-bit, we end up stacking a lot of subs, and each new sub we stack "adds" to the bit depth.

If you stack, for example, 128 subs, you'll add log2(128) = 7 bits of depth to the image. Even if you took those with a 12-bit camera, you are already at 19 bits of fixed-point precision (I'm emphasizing fixed point here).

That means stacking in 16-bit is simply silly and should not be done; you should convert to 32-bit floating point as soon as you start your processing workflow, even before calibration.
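A minimal sketch of that arithmetic, assuming a plain average stack (the function name is just mine for illustration):

import math

def effective_bit_depth(camera_bits, n_subs):
    # Averaging N frames multiplies the number of distinguishable
    # levels by N, i.e. adds log2(N) bits of fixed-point depth.
    return camera_bits + math.log2(n_subs)

print(effective_bit_depth(12, 128))   # 12 + 7  = 19 bits
print(effective_bit_depth(14, 4096))  # 14 + 12 = 26 bits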

Although 32-bit floating point has only 23 bits of precision in the mantissa:

[Screenshot: layout of the 32-bit floating point format, showing the 23-bit mantissa]

It is much more precise than the 23-bit fixed-point equivalent.

This is because of the "floating point" part. With a fixed-point format, you have a fixed ratio between the brightest and darkest pixel in the image. For example, in 16-bit the brightest value is 65535 while the darkest (non-zero) value is 1, so the ratio is 65535:1.

With floating point, every pixel has its own 23 bits of precision, regardless of how bright or dark it is.

That means huge dynamic range without loss of precision.
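You can see this numerically with NumPy (a quick sketch; np.spacing gives the distance to the next representable float):

import numpy as np

# float32: the relative step stays around 2^-23 (~1.2e-7)
# no matter how big or small the value is.
for value in (0.001, 0.5, 1000.0):
    step = np.spacing(np.float32(value))
    print(f"float32 near {value}: relative step {step / value:.2e}")

# 16-bit fixed point over a 0..1 range: the absolute step is constant,
# so the relative error blows up for faint pixels.
step16 = 1 / 65535
for value in (0.001, 0.5, 1.0):
    print(f"uint16 near {value}: relative step {step16 / value:.2e}")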


For small values, say around 100 (in electron count), you'll get enough precision even if you stack an enormous number of subs; say you stack 4096 subs with a 14-bit camera.

With fixed-point numbers, that would require 14 bits (camera) + log2(4096) = 12 bits (stacking) = 26 bits of precision.

With floating-point numbers you have 23 bits of precision, but that is more than you need. In a single sub the SNR is 10 (shot noise on 100 electrons is sqrt(100) = 10). In a stack of 4096 subs the SNR improvement is sqrt(4096) = x64, so the overall SNR when you finish will be 640.

Noise is then 640 times smaller than the signal. The signal is 100, so the noise is 100/640 ≈ 0.16. You only need about 10 bits of mantissa to write 100 +/- 0.16 within noise limits.

You can't do that with fixed-point numbers, because fixed-point numbers need bits to maintain dynamic range; they describe both the value of a pixel and how large it is compared to all the other pixels in the image.

With floating point, the exponent does that job, and the mantissa is just for the precision of that single pixel.
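Here is a quick simulation of that stack, if you want to check the numbers (it uses the fact that a sum of 4096 Poisson draws equals one Poisson draw with 4096x the mean):

import numpy as np

rng = np.random.default_rng(0)

signal = 100.0     # electrons per sub
n_subs = 4096
n_pixels = 100_000

# Shot noise on 100 e- is sqrt(100) = 10 e-, so one sub has SNR 10.
stack = rng.poisson(signal * n_subs, size=n_pixels) / n_subs

noise = stack.std()
print(f"noise after stacking: {noise:.3f} e-")   # ~ 10/64 = 0.156
print(f"final SNR: {signal / noise:.0f}")        # ~ 640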

Anyway, back to your question.

Doing processing in 16-bit is a real concern and I would not do it, at least not directly like that. I discourage people from saving stacks in 16-bit format and then using PS or other image manipulation software to stretch that image. That does not make sense.

However, with StarNet++ you can use the 16-bit format, because StarNet++ expects stretched data. If you take the 32-bit data and stretch it, you take all those faint, low-SNR areas and make the numbers big enough to be comparable to the other numbers in the image. You compress the dynamic range, and by compressing the dynamic range you are back in the zone that the 16-bit format handles fine.

The only issue with this approach is if you need a linear starless image for whatever reason. You could do a mathematical stretch, like gamma with a known number, then perform star removal, and then undo the stretch. However, your precision will suffer somewhat, and the resulting linear data will have some quantization noise injected into it (because you rounded numbers when you used the fixed-point 16-bit format).
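A sketch of that roundtrip, with gamma as the "known number" (the values here are just for illustration):

import numpy as np

rng = np.random.default_rng(0)
gamma = 10.0

linear = rng.uniform(1e-4, 1.0, size=100_000)

# Stretch, quantize to 16-bit (what the star removal tool sees),
# remove stars at this point in a real workflow, then undo the stretch.
stretched = linear ** (1 / gamma)
quantized = np.round(stretched * 65535) / 65535
back_to_linear = quantized ** gamma

rel_error = np.abs(back_to_linear - linear) / linear
print(f"median relative quantization error: {np.median(rel_error):.1e}")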

Makes sense?


17 minutes ago, vlaiv said:

....


Thank you for that explanation; I need to re-read the top part again to understand all of it. :)

But in regards to the quoted part:

If I do my stretching of the FITS files in Affinity Photo (which supports 32-bit) and then convert from 32-bit to 16-bit afterwards, then it is "OK", if I understand you correctly.

Because by stretching the data first, I will have done dynamic range compression? 🤔


2 minutes ago, jjosefsen said:

If I do my stretching of the FITS files in Affinity Photo (which supports 32-bit) and then convert from 32-bit to 16-bit afterwards, then it is "OK", if I understand you correctly.

Because by stretching the data first, I will have done dynamic range compression?

Yes.

Here is a rather crude example.

Say you have a 0-1 range with "fixed precision" of 100 units in that range (two decimal places). I'll be working with decimal numbers to make it easy to understand and calculate.

Your image consists of three pixels: one bright, one medium, and one very faint.

Bright will be 0.851

Medium will be 0.5

Very faint will be 0.001

That is in "floating point representation" - or you don't need to worry about precision.

If you write that with our fixed precision, look what happens:

0.851 will be written as 0.85 (only two decimal places allowed). You lose very little of the value here; the error is (0.851 - 0.85)/0.851 = ~0.12%.

0.5 will be written as 0.5. Here, due to the number itself, you don't lose anything by rounding.

0.001 will become 0. Here you lost all the information. That is the problem with the fixed-point format: you lose information in very faint areas.

But what will happen if we apply a stretch first? The stretch we apply can be represented by a power law, i.e. raising to a certain power. Here we will use a power of 0.1 (equivalent to a gamma of 10).

0.851 ^ 0.1 = ~0.984 = 0.98 when we round it to two digits

0.5 ^ 0.1 = ~0.933033 = 0.93 when we round it to two digits

0.001 ^ 0.1 = ~0.5012 = 0.5 when we round it to two digits.

Now we did not lose any pixel value completely. Nothing was rounded to 0, and we still have values that are very close to those at full precision. In fact, if we "invert" those fixed-point numbers, we get very close to the original values:

0.98 to the power of 10 is ~0.817073

0.93 to the power of 10 is ~0.484

0.5 to the power of 10 is 0.0009765625

As you can see, the error is larger where we had a large signal to begin with, and very small where the signal was very small.

You can view all of that in another way:

Linear data in a fixed-point format uses equal precision for both high and low values in the image; it introduces the same level of rounding error everywhere.

Stretching and then using a fixed-point format uses high precision for the faint parts and low precision for the bright parts, so the rounding error is small on faint stuff and large on bright stuff. That is better from an SNR perspective, as the faint parts already suffer from poor SNR and don't need more (rounding) noise added.
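The whole example fits in a few lines of Python, if you want to try other values:

# Round to two decimals ("fixed precision of 100 units"), directly
# on the linear value and via a gamma-10 stretch.
gamma = 10.0

for p in (0.851, 0.5, 0.001):
    direct = round(p, 2)                    # quantize the linear value
    stretched = round(p ** (1 / gamma), 2)  # stretch first, then quantize
    recovered = stretched ** gamma          # undo the stretch
    print(f"{p}: direct {direct}, via stretch {recovered:.6g}")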

 

 


19 minutes ago, vlaiv said:

....

Honestly, that makes perfect sense. Thank you for that explanation.

Any processing on linear data should preferably be done in 32-bit, and then I can go to 16-bit once the data is non-linear.

I already have a workflow in mind then. :)


I believe the Affinity Photo 32-bit mode works similarly to Photoshop's 32-bit mode. Photoshop assumes a 32-bit file is linear. It will therefore display the 32-bit image using a linear (i.e. gamma = 1.0) ICC profile. This means the brightness displayed on the screen is proportional to the pixel values in the image. As soon as you change the mode to 16-bit, the relevant colour space gamma is applied (e.g. gamma = 2.2 for AdobeRGB). In other words, when you change the mode to 16-bit, the pixel values are non-linearly transformed. You can easily see this by comparing the pixel values before and after. However, the image displayed on the screen looks identical, because the non-linear ICC profile knows how to transform the non-linear pixel values into the correct screen brightness.
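In code terms, the mode change amounts to something like this (an illustration only, not Photoshop's actual pipeline; gamma 2.2 stands in for the colour space curve):

import numpy as np

linear = np.array([0.001, 0.18, 0.5, 1.0], dtype=np.float32)

# Converting 32-bit linear to 16-bit applies the gamma encoding,
# so the stored pixel values change...
encoded16 = np.round((linear ** (1 / 2.2)) * 65535).astype(np.uint16)
print(encoded16)

# ...but the display decodes with the same gamma, so the on-screen
# brightness matches the original linear values.
print((encoded16 / 65535.0) ** 2.2)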

Mark


Thanks for the very clear explanation, Vlaiv.

To the OP: I use Affinity for my final "polishing", but since 2014 I have done my main processing in AstroArt, now at V8. I too have looked at PI but found it :BangHead:. I know it's supposed to be the bee's knees for astro imaging, but I found it totally counterintuitive. Possibly because I'm not a mathematician.

