Check Your Astrobin

14 minutes ago, gorann said:

Yes, PNG may be OK since it stores 16 bit and not 8 bit like JPEG, so you could have all your dynamic range there. I bet someone like Vlaiv @vlaiv could, as usual, help us out here. I took one of my 89.9 MB TIFF files into PS and saved it as PNG. The file size fell a little to 72.4 MB, but when I opened it in PS it was still 16 bit. When I save the same file as JPEG in PS (at maximum quality of 12) it becomes 10.4 MB, and when I open it it is only 8 bit - so crap for further processing.

I can try, but have no idea what is being discussed here (sorry, I did not read thread posts). I can see that it has something to do with Astrobin having issues and you are mentioning file formats and file sizes?


24 minutes ago, geoflewis said:

As Salvatore states that Amazon advise that all missing images are lost, I have no choice other than to reload all my 360+ images to AB one by one - so far I have not found one that survived the loss. I made a start this afternoon and so far have restored about 50 images in something like 5 hours, so perhaps 10 per hour. Most of those already done were easy, as they were still available in working folders on my laptop. I have now started working through archive HDs, and for sure my planetary images are the most challenging for identifying the final version, as they are mostly composites of many TIFF images de-rotated in WinJupos, so checking to be sure that I have the correct final version is taking a lot more time than the DSO images. Sadly I have also found that some images initially appear missing from my primary astro image backup of planetary video, so maybe my periodic backup routine missed some. I have multiple backup versions on different HDs, so hopefully I will find them somewhere as I work through everything over the coming days.

Tomorrow I go into hospital for a prostate biopsy under GA, hoping that it does NOT detect any cancer, so right now I definitely have an acute appreciation of what is serious and what is just annoying. The only important result I want right now is no cancer, the loss of images on AB is just noise in comparison....!! I will need to rest for a few days after the procedure, so no doubt I will continue to amuse myself restoring images to AB, but I think I will be fully active post op way before I have restored all 360 missing images......

You know the person that intentionally hurts themselves to feel pain because that means they are alive? This Astrobin issue reminds me of that a little bit for you, in a way. No, it's not nice to have these kinds of worries..... then again, it is so very sweet to be able to have these sorts of worries. I hope your biopsy comes back negative.

Rodd


28 minutes ago, gorann said:

Yes, PNG may be OK since it stores 16 bit and not 8 bit like JPEG, so you could have all your dynamic range there. I bet someone like Vlaiv @vlaiv could, as usual, help us out here. I took one of my 89.9 MB TIFF files into PS and saved it as PNG. The file size fell a little to 72.4 MB, but when I opened it in PS it was still 16 bit. When I save the same file as JPEG in PS (at maximum quality of 12) it becomes 10.4 MB, and when I open it it is only 8 bit - so crap for further processing.

I process JPEGS all the time.  I think people get hung up on this stuff.


1 minute ago, Rodd said:

I process JPEGS all the time.  I think people get hung up on this stuff.

:D

I'm probably one of those people. For me, this is the breakdown of bit formats and their usage:

16 bit - good only for raw subs out of the camera, and its usage should ideally stop there (I know that some people use 16 bit format because of older versions of PS, but no excuse really :D )

8 bit - good only for display after all processing has been finished

32 bit floating point - all the rest.
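That last conversion step - quantizing the floating point working image down to 8 bit only for the final display copy - can be sketched in a few lines of Python. The helper name and the 0..1 working range are my assumptions for illustration, not anything from the post:

```python
# Hypothetical sketch: keep the working image in floating point and only
# quantize to 8 bit as the very last, display-only step.
def to_8bit(pixels):
    # clip out-of-range values, then round onto the 256 available levels
    return [min(255, max(0, round(p * 255))) for p in pixels]

processed = [0.0, 0.25, 0.5, 1.0, 1.2]   # 32 bit float working values
display = to_8bit(processed)
print(display)                            # [0, 64, 128, 255, 255]
```

Anything done after this point (further stretching, denoising) would be working inside those 256 levels, which is the point being made above.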


3 minutes ago, Rodd said:

I process JPEGS all the time.  I think people get hung up on this stuff.

But the fact is that 8 bit is 256 levels of brightness and 16 bit is 65536 levels. So after you stretched your 16 bit image and saved it as an 8 bit JPEG, you have thrown away everything outside those tiny 256 levels and severely constricted what you can do with it later.


3 minutes ago, vlaiv said:

:D

I'm probably one of those people. For me, this is the breakdown of bit formats and their usage:

16 bit - good only for raw subs out of the camera, and its usage should ideally stop there (I know that some people use 16 bit format because of older versions of PS, but no excuse really :D )

8 bit - good only for display after all processing has been finished

32 bit floating point - all the rest.

Yeah, but if you open a JPEG and tweak the histogram or make some other tweak - no worries. Life goes on. The image is still good. It's better, actually, if the tweak worked.


1 minute ago, gorann said:

But the fact is that 8 bit is 256 levels of brightness and 16 bit is 65536 levels. So after you stretched your 16 bit image and saved it as an 8 bit JPEG, you have thrown away everything outside those tiny 256 levels and severely constricted what you can do with it later.

Not really, since monitors only show the 256.  I tweak JPEGS all the time and the world continues to turn.  Some even got thumbs up from you as being improved!


Just now, Rodd said:

Yeah, but if you open a JPEG and tweak the histogram or make some other tweak - no worries. Life goes on. The image is still good. It's better, actually, if the tweak worked.

Could feel like that until you compare it to 16 bit, or 32 bit as Vlaiv pointed out. However, I have a question for @vlaiv: why go over to 32 bit if there never was 32 bit information in the image to start with? Yes, I know you will probably tell me that stacking many 16 bit images creates more than 16 bit, but it is still quite a big dynamic range. In any case I am stuck on my old 16 bit PS, and the new pay-per-view PS has apparently lost quite a few of the nice "bits" of the old one, I have been told.


5 minutes ago, Rodd said:

Not really, since monitors only show the 256.  I tweak JPEGS all the time and the world continues to turn.  Some even got thumbs up from you as being improved!

What I am saying is not that an 8 bit image does not look good printed or on a monitor, but that it is kind of dead when it comes to stretching, since the faint stuff and bright stuff are not there anymore. You would just get posterization stretching it outside its small dynamic range. So it is only good for the final product that we post and print.


12 minutes ago, Rodd said:

Not really, since monitors only show the 256.  I tweak JPEGS all the time and the world continues to turn.  Some even got thumbs up from you as being improved!

Well, for starters, JPEG is a lossy format - which means that it alters your image (loses information). You can check that it does so by taking a regular 8 bit image saved as PNG, then the same image saved as JPEG (even at the highest quality setting), and subtracting the two - you will not get a "blank" image.

Here is an example:

image.png.7be5184135b53dd04bb15842838fdb35.png

This is the famous Lena image (often used as a test image for algorithms) - left is the unaltered PNG, and right is the same PNG image saved as a 100% quality JPEG (chroma sampling 1:1 and such). And here is what you get if you subtract the two:

image.png.f178cdccc51f81dc3d2cf4003762696f.png

There is clearly something done to the JPEG image that makes it different from the original image.

Now, let's do another experiment to see how a higher bit count fares versus the 8 bit format.

This is a single frame (binned to oblivion to pull the data out and make it small and easy to copy/paste) of a 1 minute exposure in 32 bit format - prior to stretch:

image.png.1ef3c7b2dd8a9c9bf9af919a71fca43c.png

This is exactly the same image, except converted to 8bit format:

image.png.4a93c83ae5f3888ad9bc57c262cc4f8b.png

So far, so good - not much difference, but let's stretch that data a bit and see what happens:

Here is the 32 bit version with a very basic stretch; btw, the stretch is saved as a preset:

image.png.badf7c09a1f6308fffc4f5b40d9dee87.png

Here is the same stuff in 8bit format:

image.png.8c6ed5cbff8ce8f6a5bd7f7e5da00a61.png

Look at that grain and noise - that stuff was not in the above image. Clearly the 8 bit image can't take the same level of manipulation as the 32 bit image.
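The subtraction test described above can be reproduced without Photoshop. Here is a rough sketch in Python - assuming the Pillow imaging library is installed - that round-trips a noisy 8 bit image through an in-memory JPEG at a high quality setting and counts how many pixels come back altered:

```python
from io import BytesIO
import random

from PIL import Image  # assumption: Pillow is installed

random.seed(0)
w = h = 64
# a noisy 8 bit grayscale image - noise is the worst case for JPEG's
# block-DCT compression, so any loss shows up immediately
original = Image.frombytes(
    "L", (w, h), bytes(random.randrange(256) for _ in range(w * h))
)

# round-trip through an in-memory JPEG at a high quality setting
buf = BytesIO()
original.save(buf, format="JPEG", quality=95)
roundtripped = Image.open(BytesIO(buf.getvalue()))

# "subtract the two": count pixels that came back altered
changed = sum(
    1 for a, b in zip(original.tobytes(), roundtripped.tobytes()) if a != b
)
print(f"{changed} of {w * h} pixels differ after one JPEG save")
```

The count of changed pixels is non-zero even at high quality, which is exactly the non-blank difference image shown above.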


Vlaiv @vlaiv, that is a very striking demonstration that images saved as 8 bit should not be used for further processing. Would you be equally worried about 16 bit, since that is probably what the majority here are processing? 8 bit has only about 0.4% of the theoretical dynamic range of 16 bit.

By the way, yes, this thread started because Astrobin crashed and a lot of people have lost their posted images there, and are waiting to find out what can be done. Then it turned out that Rodd has been using Astrobin to save his data rather than trusting a hard drive or some other storage device or cloud. But since his data is saved as 8 bit JPEG on Astrobin, I have now tried to tell him not to use Astrobin as a way to save data, but only for posting his images, and that he would be better off saving it as something uncompressed, like TIFF or FITS. This may be especially true since he works with PI, which I think normally saves everything as 32 or even 64 bit, which you appear to prefer.


24 minutes ago, gorann said:

Could feel like that until you compare it to 16 bit, or 32 bit as Vlaiv pointed out. However, I have a question for @vlaiv: why go over to 32 bit if there never was 32 bit information in the image to start with? Yes, I know you will probably tell me that stacking many 16 bit images creates more than 16 bit, but it is still quite a big dynamic range. In any case I am stuck on my old 16 bit PS, and the new pay-per-view PS has apparently lost quite a few of the nice "bits" of the old one, I have been told.

Simply put - a 16 bit image does not hold enough of the information that you obtain by stacking and calibration, but more importantly - it is a fixed point format. That means you not only limit the data per pixel to 16 bits, you also limit the total dynamics of the image to 16 bits.

32 bit floating point does not have many more bits per pixel to hold information - it is only 24 bits, so just 8 bits more than the 16 bit format - but it is floating point precision, which means it has a huge dynamic range

from image.png.0c7265722b3fe22b4d2b153254e54fa8.png to image.png.b9058e8e9a86daf3b580d9e29c6450f4.png (source: https://en.wikipedia.org/wiki/Single-precision_floating-point_format)

What does this mean? Well, let's do a simple example - we stack by adding 4 subs from a 14 bit camera. We have a very bright star and we have background with no light pollution. The first is almost saturating the 14 bit range at 16384 and the latter is sitting around 0 (we have read noise, so it is in some +/- read noise range, shifted by the offset, but let's ignore the details for now).

The star will add up to 16 bits (4 x 16384 = 65536 = 16 bit), while noise around 0 will add up to noise around 0 again. Stacking increases the dynamic range of the whole image, besides individual pixels needing more precision. This creates a problem with fixed point representation because there is a fixed ratio between the strongest pixel and the weakest pixel - it is always only 16 bits in 16 bit format, or x65536. If you will, we can convert that into magnitudes and it is about 12 mags. You simply put a firm limit on the dynamic range of your image at 12 mags. If you record a signal that has some intensity, a signal 12 mags fainter will be a single number - a constant value - so there won't be any detail (no variation in that single value).

In comparison, 32 bit floating point has 24 bits of precision per pixel (which means you can stack 256 subs of 16 bits each before you start to need more "space"), or in other words the error due to precision will be 1 in 16777216. But more importantly you can record much higher dynamics in your image - about 10^83, or in magnitudes - over 200 magnitudes of difference in intensity.
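Those magnitude figures can be checked with a quick calculation. This sketch uses the standard float32 limits (largest finite value ~3.4e38, smallest subnormal ~1.4e-45) and the astronomical convention of 2.5 * log10 of an intensity ratio:

```python
import math

# dynamic range expressed in astronomical magnitudes:
# 2.5 * log10 of the ratio between brightest and faintest representable values
mags_16bit = 2.5 * math.log10(65536)                # ~12.0 mag

# float32: largest finite ~3.4e38, smallest subnormal ~1.4e-45,
# a ratio of roughly 10^83
mags_float32 = 2.5 * math.log10(3.4e38 / 1.4e-45)   # ~208 mag

print(round(mags_16bit, 1), round(mags_float32))
```

The 16 bit fixed point range works out to about 12 magnitudes and the float32 range to over 200, matching the figures in the post.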


2 minutes ago, vlaiv said:

Simply put - a 16 bit image does not hold enough of the information that you obtain by stacking and calibration, but more importantly - it is a fixed point format. That means you not only limit the data per pixel to 16 bits, you also limit the total dynamics of the image to 16 bits.

32 bit floating point does not have many more bits per pixel to hold information - it is only 24 bits, so just 8 bits more than the 16 bit format - but it is floating point precision, which means it has a huge dynamic range

from image.png.0c7265722b3fe22b4d2b153254e54fa8.png to image.png.b9058e8e9a86daf3b580d9e29c6450f4.png (source: https://en.wikipedia.org/wiki/Single-precision_floating-point_format)

What does this mean? Well, let's do a simple example - we stack by adding 4 subs from a 14 bit camera. We have a very bright star and we have background with no light pollution. The first is almost saturating the 14 bit range at 16384 and the latter is sitting around 0 (we have read noise, so it is in some +/- read noise range, shifted by the offset, but let's ignore the details for now).

The star will add up to 16 bits (4 x 16384 = 65536 = 16 bit), while noise around 0 will add up to noise around 0 again. Stacking increases the dynamic range of the whole image, besides individual pixels needing more precision. This creates a problem with fixed point representation because there is a fixed ratio between the strongest pixel and the weakest pixel - it is always only 16 bits in 16 bit format, or x65536. If you will, we can convert that into magnitudes and it is about 12 mags. You simply put a firm limit on the dynamic range of your image at 12 mags. If you record a signal that has some intensity, a signal 12 mags fainter will be a single number - a constant value - so there won't be any detail (no variation in that single value).

In comparison, 32 bit floating point has 24 bits of precision per pixel (which means you can stack 256 subs of 16 bits each before you start to need more "space"), or in other words the error due to precision will be 1 in 16777216. But more importantly you can record much higher dynamics in your image - about 10^83, or in magnitudes - over 200 magnitudes of difference in intensity.

Thanks Vlaiv, very enlightening. I am with you, and maybe slightly worried, but would we notice a significant difference processing in 16 bit vs 32 bit floating point, and should we put out a warning not to use old 16 bit versions of PS, like me and Olly @ollypenrice?


2 minutes ago, gorann said:

Vlaiv @vlaiv, that is a very striking demonstration that images saved as 8 bit should not be used for further processing. Would you be equally worried about 16 bit, since that is probably what the majority here are processing? 8 bit has only about 0.4% of the theoretical dynamic range of 16 bit.

By the way, yes, this thread started because Astrobin crashed and a lot of people have lost their posted images there, and are waiting to find out what can be done. Then it turned out that Rodd has been using Astrobin to save his data rather than trusting a hard drive or some other storage device or cloud. But since his data is saved as 8 bit JPEG on Astrobin, I have now tried to tell him not to use Astrobin as a way to save data, but only for posting his images, and that he would be better off saving it as something uncompressed, like TIFF or FITS. This may be especially true since he works with PI, which I think normally saves everything as 32 or even 64 bit, which you appear to prefer.

For 16 bit - I recommend against it on principle - it is limited in dynamic range. It will not be much of a problem if you, for example, have a high dynamic range image and you stretch it to an extent while in 32 bit format and then save it as 16 bit. Stretching "compresses" dynamic range, and you don't lose much.

That is "a trick" I used when working with StarNet++. It requires both stretched data and 16 bit format to remove stars, and when processing NB data I first do a stretch per channel - but only something like 1/4 of what I would normally stretch - since I want to stretch more later and denoise the data after removing stars, and also want to do channel mixing.

The problem with 16 bit data comes when you use it on your linear data to start working on stretching. We have seen above how limited 8 bit data really is. Short exposures are common with modern sensors (CMOS in particular), and more and more people image in LP where they will not benefit from long exposures - this makes stacked images very "compressed" in the left part of the histogram - low values.

Imagine that all the truly interesting signal is in the lower 2-3% of the histogram (left part). That means that this signal occupies only 2-3% of the 16 bit range. In values this would mean that this signal only has 65535/40 = ~1600 levels. Now we are down to 10.5 bits - very close to 8 bits - and you will soon start losing detail in the faint parts of the image. An average galaxy has something like 7-8 mags of dynamic range or even more, and guess what? 8 mags is about x1600 between the brightest and faintest parts - or the said 10.5 bits.
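That arithmetic is easy to verify directly; a quick Python check of the post's numbers:

```python
import math

# if the interesting signal occupies only the bottom ~2.5% of a 16 bit range,
# count the integer levels actually available to it
fraction = 1 / 40                        # ~2.5% of the histogram
levels = 65535 * fraction                # ~1638 levels
bits = math.log2(levels)                 # ~10.7 bits of precision
mags = 2.5 * math.log10(levels)          # ~8.0 mag of range

print(round(levels), round(bits, 1), round(mags, 1))
```

So roughly 1600 levels, a bit over 10.5 bits, and about 8 magnitudes - the same figure quoted for a typical galaxy's brightness range.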


4 minutes ago, vlaiv said:

For 16 bit - I recommend against it on principle - it is limited in dynamic range. It will not be much of a problem if you, for example, have a high dynamic range image and you stretch it to an extent while in 32 bit format and then save it as 16 bit. Stretching "compresses" dynamic range, and you don't lose much.

That is "a trick" I used when working with StarNet++. It requires both stretched data and 16 bit format to remove stars, and when processing NB data I first do a stretch per channel - but only something like 1/4 of what I would normally stretch - since I want to stretch more later and denoise the data after removing stars, and also want to do channel mixing.

The problem with 16 bit data comes when you use it on your linear data to start working on stretching. We have seen above how limited 8 bit data really is. Short exposures are common with modern sensors (CMOS in particular), and more and more people image in LP where they will not benefit from long exposures - this makes stacked images very "compressed" in the left part of the histogram - low values.

Imagine that all the truly interesting signal is in the lower 2-3% of the histogram (left part). That means that this signal occupies only 2-3% of the 16 bit range. In values this would mean that this signal only has 65535/40 = ~1600 levels. Now we are down to 10.5 bits - very close to 8 bits - and you will soon start losing detail in the faint parts of the image. An average galaxy has something like 7-8 mags of dynamic range or even more, and guess what? 8 mags is about x1600 between the brightest and faintest parts - or the said 10.5 bits.

So could I conclude that, as long as I am stuck with 16 bit and image at a dark site, I should go for long rather than short exposures with my CMOS cameras? Yes, I know we are getting a bit away from the topic of the thread, but not much seems to happen with the Astrobin situation right now, so we could see it as an intermission....


7 minutes ago, gorann said:

Thanks Vlaiv, very enlightening. I am with you, and maybe slightly worried, but would we notice a significant difference processing in 16 bit vs 32 bit floating point, and should we put out a warning not to use old 16 bit versions of PS, like me and Olly @ollypenrice?

I believe there should be a difference, and just how much depends on your imaging workflow. Using very long exposures and fewer of them will not put all the signal in the low range, and consequently it will be less posterized by the use of 16 bit.

Here is an example of that happening - I used my H alpha stack (4 minute subs, 4h total, binned x2 for 1"/px sampling rate) in 32 bit, and the same image first converted to 16 bit. I used just one round of levels - the same on each:

image.png.0d38a9239e45fa9dc83d5775b1890d41.png

and here is same done on 16 bit version:

image.png.69e6e5e01641d71f2da4a499bd3f6ed0.png

See how posterized the faint regions become?

This image is made out of - let's say 4 x 16 x 4 = 64 x 4 = 256 stacked subs (4 minute subs at 16 per hour make 64 subs total, but I did bin x2 with the average method, so that is another x4 in the number of samples per pixel). That is enough data with small signal to keep things in low values and show posterization.

Maybe posterization won't be as bad with 30-40 ten minute subs.
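The effect is easy to reproduce synthetically. A rough Python sketch (made-up ramp values, not the actual H alpha data) quantizes a faint gradient onto 16 bit integer levels and applies the same hard stretch to the float and quantized copies:

```python
# a faint linear ramp occupying a tiny slice of the 0..1 range
ramp = [i / 100000 for i in range(100)]            # values 0 .. 0.00099

# stretch the float version directly: every sample stays distinct
as_float = [v * 500 for v in ramp]

# quantize onto 16 bit integer levels first, then apply the same stretch
as_16bit = [round(v * 65535) for v in ramp]        # only ~66 integer levels
stretched = [q / 65535 * 500 for q in as_16bit]

# the float stretch keeps all 100 distinct values; the 16 bit copy has
# collapsed many of them onto shared levels - that is the posterization
print(len(set(as_float)), len(set(stretched)))
```

A real stack has noise on top of the ramp, but the mechanism is the same: once the faint signal spans only a handful of integer levels, stretching just magnifies the steps.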


23 minutes ago, gorann said:

Vlaiv @vlaiv, that is a very striking demonstration that images saved as 8 bit should not be used for further processing. Would you be equally worried about 16 bit, since that is probably what the majority here are processing? 8 bit has only about 0.4% of the theoretical dynamic range of 16 bit.

By the way, yes, this thread started because Astrobin crashed and a lot of people have lost their posted images there, and are waiting to find out what can be done. Then it turned out that Rodd has been using Astrobin to save his data rather than trusting a hard drive or some other storage device or cloud. But since his data is saved as 8 bit JPEG on Astrobin, I have now tried to tell him not to use Astrobin as a way to save data, but only for posting his images, and that he would be better off saving it as something uncompressed, like TIFF or FITS. This may be especially true since he works with PI, which I think normally saves everything as 32 or even 64 bit, which you appear to prefer.

Yes but you can’t see that unless you subtract it. 


3 minutes ago, gorann said:

So could I conclude that, as long as I am stuck with 16 bit and image at a dark site, I should go for long rather than short exposures with my CMOS cameras? Yes, I know we are getting a bit away from the topic of the thread, but not much seems to happen with the Astrobin situation right now, so we could see it as an intermission....

It could help, if you can sacrifice the high part of the range.

Let's put it in simple numbers to explain what will happen. Imagine you do 1 minute vs 10 minute subs.

The signal level in a 1 minute sub is at 2% of full well capacity. Stacking a bunch of such subs with the average method will leave the signal level at 2% (take a bunch of 0.02s and average them - you will get 0.02).

Similarly, in a 10 minute sub the signal will reach 20% of full well capacity; again, stacking with average will leave that at 20%.

Signal at 2% will have 10.3 bits of dynamic range, while signal at 20% will have 13.7 bits of dynamic range - clearly better, and there will be less posterization of faint stuff. However, by using 10 minute subs you will blow out more star cores - in fact you will saturate with a signal x10 weaker than in the 1 minute case. This is what it means to lose the high part of the range.

If you try to mix in that high range at the linear stage, you will suppress the lower range again - the only way you can mix in burnt-out features is via layers in PS - the way Olly does it and often suggests it should be done (because it works for him with this approach).
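The 10.3 and 13.7 bit figures above follow from log2 of the occupied fraction of the range; a quick check (assuming a 16 bit output range):

```python
import math

# bits of a 16 bit range available to a signal peaking at a given
# fraction of full well capacity (the 1 minute vs 10 minute comparison)
def usable_bits(fraction_of_full_well):
    return math.log2(65536 * fraction_of_full_well)

print(round(usable_bits(0.02), 1))   # 1 minute subs, signal at 2%: ~10.4 bits
print(round(usable_bits(0.20), 1))   # 10 minute subs, signal at 20%: ~13.7 bits
```

The 2% case lands at about 10.3-10.4 bits depending on rounding, and the 20% case at about 13.7 bits, matching the post. The x10 longer exposure buys log2(10) = ~3.3 extra bits for the faint end, at the cost of saturating x10 sooner at the bright end.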


1 hour ago, vlaiv said:

Sure you can - just use a lower quality setting in JPEG :D

lena-png-6-crap.jpg.52493fd29bdf7f2e7a171d477e8e0d34.jpg

Obviously - but I don't use a low quality JPEG setting...... If a tree falls in the forest and the fox is deaf, does it make a sound?


To be perfectly honest, I was profoundly underwhelmed when I went from my early 8 bit PS 7 to my present 16 bit PS CS3. I was expecting to find a very obvious difference but simply didn't. I never did a careful comparison, but nothing jumped out at me from the screen.

I'm not convinced that this is a big deal in practice, and I will continue to use my bought-and-paid-for CS3 until some Windows update prevents it from working. Then I'll go off in a sulk and not come back!

😁lly


14 hours ago, vlaiv said:

and here is same done on 16 bit version:

See how posterized faint regions become?

I would not expect that to happen. After converting the 32-bit image to a 16-bit image check the noise level (i.e. the standard deviation) in a small area of background of the 16-bit image.  You should find it adequately dithers the quantisation.  If so, then it means that the reduction to 16-bit is not the cause of your posterization issue.  No amount of stretching will introduce posterization in an image where the quantisation is adequately dithered by noise.

Mark

Edited by sharkmelley

1 minute ago, sharkmelley said:

That should not happen. After converting the 32-bit image to a 16-bit image check the noise level (i.e. the standard deviation) in a small area of background of the 16-bit image.  You should find it adequately dithers the quantisation.  If so, then it means that the reduction to 16-bit is not the cause of the posterization issue.

Mark

Random noise is going to dither quantization provided it is of the proper magnitude with respect to the quantization step. That is one of the reasons sensor designers leave a certain amount of read noise present, or rather "tune" read noise levels - to dither things.

Here we are talking about a stacked image. Noise will drop as a function of the number of stacked subs, and at some point the noise will be too small to dither the quantization.

We are also talking about signal, and the fact that signal needs enough bits of precision to be properly recorded. If the signal, for example, has a dynamic of 5-6 bits, then it should really have at least 5-6 bits of storage to be recorded. If you give it 2-3 bits, it will be posterized due to rounding.

In any case, here is what you've proposed:

image.png.31cff57587b3c4aada32aaaaad3eef2e.png

The first measurement is that of the 32 bit image - a small selection of the background. The standard deviation of that patch is ~0.142745.

The next measurement is that of the whole image. An important thing to note is that here the values go up to 12 bits or a bit less (<4096 because of the 12 bit camera, with offset removed and flat calibration performed) - but the format is 32 bit float. This will be needed later to do a "conversion" of the noise.

The third measurement is the small selection after converting the image to 16 bit format, and the last one is the full image at 16 bit.

We can now compare noise levels in both images. The converted noise level from the 32 bit image would be 0.142744802 * 65535 / (3401.327392578 + 1.790456772) = ~2.748885291,

while the measured noise level is 2.76030548 - a difference of about 0.4%. Many will say that an increase in noise of less than one percent is not significant, but we don't know how the distribution of the noise changed - and this was due to the bit number conversion alone.

Now let's examine something else that is important - here is a screenshot of the section that I'm examining now:

image.png.2f94fb4411de5ef87e7e097765c42934.png

I tried to select a part of the background where there are no stars but there is variation of brightness in the nebulosity.

image.png.ac469e1e94f888583a07e0049ef49919.png

That part of the image has something like 6-7 in terms of dynamic range. It has a max value of 1.16 and noise of 0.167, so the dynamic range is ~6.95. This is of course in floating point numbers, so there is plenty of precision to record all the information, but what will happen if we convert it to 16 bit?

We have seen that the conversion takes multiplying by about x19 (65535/~3402), and here we have a total range of about 1.5 (from -0.37 to 1.16), so the converted range will be ~28.9. That is less than 5 bits total (not dynamic range) - we have lost at least 2 bits for that data, or a x4 reduction in the number of levels (128 vs 32 or fewer levels).

This is why my example above shows posterization in faint areas - because it is really there.
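Both sides of this exchange - that noise dithers quantization when it is large enough, and that a well-stacked image may have too little noise left to do so - can be illustrated numerically. A rough Python sketch with made-up numbers, not the actual stack data:

```python
import random
import statistics

random.seed(1)

# a faint, smooth ramp spanning only a tenth of one quantization step
signal = [10 + 0.1 * i / 1000 for i in range(1000)]   # 10.000 .. 10.0999

posterized = [round(v) for v in signal]                        # no noise
dithered = [round(v + random.gauss(0, 0.5)) for v in signal]   # "read noise"

# without noise, every sample collapses onto the same level and the ramp's
# true mean (~10.05) is unrecoverable; with dithering, averaging the noisy
# quantized values recovers it
print(len(set(posterized)), statistics.mean(posterized))
print(len(set(dithered)), round(statistics.mean(dithered), 2))
```

With noise comparable to the quantization step, the sub-level information survives in the average; shrink the noise far below one step (as heavy stacking does) and the collapse onto a single level returns.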


This was my reply on another astro forum relating to this issue....

 

Looks like all my images are gone...oh well it wasn't like I used the site anymore.

None of the subscriptions on offer suited me as a very casual user, so I just abandoned it. The 30 or so images I had "ate" into the free allowance, which meant that I couldn't upload any new images for over 3 years after the subscription model was introduced.

Sorry to hear others have lost data. 

