
PixInsight newbie needs help understanding image noise



Hi SGLers,

I’m hoping a PixInsight guru can help me. I’m a PI beginner, but am having fun learning. My question is about the level of noise in my images. After integrating and performing an STF stretch, the resulting image always looks quite smooth. But it doesn’t take long at all – just a DBE really, maybe then a gentle stretch – for the image to become really noisy. And then a lot of my editing is centred on battling that noise. My camera is an ASI2600MC-Pro, which I cool to -10. For a recent experiment, I gathered 20 hours of data from 120s subs. With that much integration time, and the low-noise camera, I was hoping for lower noise than I actually got. (I am shooting from Bortle 8, however).

So my question is: are my expectations wrong, and actually the amount of noise I have is what’s to be expected? Or, have I messed something up in pre-processing or integration?

In case it’s useful, I ran Script -> Image Analysis -> NoiseEvaluation on the image straight out of integration and got the following:

Ch | noise     | count (%) | layers
---+-----------+-----------+-------
 0 | 2.626e-01 |     18.39 |      4
 1 | 1.037e-01 |     12.01 |      4
 2 | 1.636e-01 |     11.10 |      4
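For reference, as I understand the script's output: "noise" is the estimated per-channel noise standard deviation on PixInsight's normalized [0,1] scale (MRS method by default), "count" is the percentage of pixels used for the estimate, and "layers" is the number of wavelet layers. Purely for intuition, here is a rough MAD-based stand-in in Python; the real MRS estimator works on wavelet layers and is considerably more sophisticated:

```python
import numpy as np

def mad_noise_sigma(channel: np.ndarray) -> float:
    """Rough noise estimate via the median absolute deviation (MAD).

    Scaled by 1.4826 so it equals sigma for pure Gaussian noise. Only a
    crude stand-in for PixInsight's MRS estimator, which isolates noise
    in wavelet layers and excludes significant structures.
    """
    med = np.median(channel)
    return 1.4826 * np.median(np.abs(channel - med))

# Synthetic check on PI's normalized [0, 1] scale:
rng = np.random.default_rng(42)
flat_frame = 0.10 + rng.normal(0.0, 0.01, size=(512, 512))
print(mad_noise_sigma(flat_frame))  # ~0.01
```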

 

I’ve also uploaded the file (1.16 GB) for anyone kind enough to help investigate further:

https://drive.google.com/drive/folders/1wB3May69oEWniF8hikueUkSS-TJvjMKC?usp=sharing

 

Thanks!

-Lee

Link to post
Share on other sites

No idea what the numbers mean, but that looks fine to me; good, I'd say, from Bortle 8 and 400mm. Here it is. All I've done is cropped the edges back, re-sampled by 50% (I expect the drizzling is just adding noise), removed the light pollution, and given it a histogram stretch. There's plenty more to give with further processing. Beware of PixInsight's ScreenTransferFunction stretch: it is highly aggressive. I'd suggest you re-integrate without drizzle and go from there.

[attached image: the data after crop, 50% resample, light-pollution removal and histogram stretch]

35 minutes ago, Laurin Dave said: [quoted in full above]

Thanks, this is a great help! Can I ask exactly how you removed the light pollution? Did you use DBE, and if so, what were your settings?


I'd pretty much agree with Dave's sentiment, you've got decent data here.

However, it is worth bearing in mind that shooting broadband from Bortle 8 is never going to get you incredible results in a reasonable timescale (compared to a dark site). For reference, 20 hours in Bortle 8 is approximately equivalent to only 1.25 hours in Bortle 1! This is typically why people tend to favour imaging in narrowband from light-polluted areas; the gap between Bortle 8 and 1 is significantly closed when using narrow bandpass filters.
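To unpack that figure: for sky-limited broadband imaging, SNR goes as the square root of (integration time / sky flux), so equal SNR means scaling your time by the sky-brightness ratio. A minimal sketch, assuming the roughly 16x Bortle 8 to Bortle 1 sky-brightness ratio implied by the numbers above:

```python
def equal_snr_hours(hours_bright_site: float, sky_brightness_ratio: float) -> float:
    """Dark-site hours giving the same background-limited SNR.

    With SNR ~ signal * sqrt(t / sky_flux), matching SNR across sites
    means scaling integration time by the ratio of sky fluxes.
    """
    return hours_bright_site / sky_brightness_ratio

# Assumed ~16x brighter sky in Bortle 8 than Bortle 1 (the ratio implied
# by the 20 h -> 1.25 h comparison above):
print(equal_snr_hours(20.0, 16.0))  # 1.25
```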

It makes sense that after performing DBE you'll be able to see the noise profile of your image much more clearly, as you've subtracted all that unwanted signal from the light pollution.

On undersampled data such as this, DrizzleIntegration can help tease out fine details, especially if you have a large number of subs that are well dithered, but it will invariably add noise to the image. Whether or not you are willing to make this tradeoff is up to you and depends on the image. All things considered, you've got a good base to make a nice image from here, despite the challenging conditions!

[attached image]

If you're interested in some further reading, this is a great article on SNR: https://jonrista.com/the-astrophotographers-guide/astrophotography-basics/snr/
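The core scaling to remember from that sort of article: signal adds linearly with the number of subs while random noise adds in quadrature, so stack SNR only grows as the square root of total integration time. A quick illustration (the single-sub SNR of 1 is an arbitrary assumption):

```python
import math

def stack_snr(single_sub_snr: float, n_subs: int) -> float:
    """SNR of an average stack of n statistically identical subs.

    Signal averages to the same level while random noise drops by
    sqrt(n), so SNR improves by sqrt(n).
    """
    return single_sub_snr * math.sqrt(n_subs)

# 120 s subs: 2 h, 10 h, 20 h, 40 h of total integration
for n in (60, 300, 600, 1200):
    print(f"{n:4d} subs -> SNR x{stack_snr(1.0, n):.1f}")
```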

4 minutes ago, Spongey said: [quoted in full above]

Thanks, that's all very helpful! I think I'm falling down with the DBE step; I can't get it to look quite as good as you or Dave have. I'd be very grateful if you could maybe post a screenshot of how you did DBE?

23 minutes ago, Lee_P said: [quoted in full above]

Sure :) 

I find that two passes of DBE can be useful on heavily light-polluted data like this: the first with a higher tolerance, and the second at the default tolerance of 0.5.

As the aim of DBE is to remove large-scale gradients, I use 7 samples per row. More than this is generally unnecessary and can create some strange artificial gradient subtraction depending on the placement of your samples.

Generally you want a sample size that is big enough to cover a decent amount of background (I wouldn't go below 10 px samples), but you don't want to be struggling to find areas of background with no stars in them. As the data is drizzled, and there aren't lots of stars in the frame, I used a sample size of 50px.

I also reduce the minimum sample weight to allow the algorithm to pick and choose among the samples a little more than with the default settings.

The first pass of DBE was run at a tolerance of 1.16 as shown below. This was the lowest I could get the tolerance while still accepting all of my samples.

Finally, the correction mode was set to subtraction.

For this image, the other default settings are appropriate and don't need changing.
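For intuition about what DBE does with those samples: it fits a smooth surface through the background sample boxes and, in subtraction mode, subtracts that model from the image. A minimal conceptual sketch, using a low-order polynomial surface rather than the surface splines DBE actually fits:

```python
import numpy as np

def background_model(samples_xy, samples_val, shape, degree=2):
    """Fit a low-order 2-D polynomial through background samples.

    Conceptual stand-in for DBE: least-squares fit of terms x^i * y^j
    (i + j <= degree) through the sample values, evaluated over the
    whole frame. DBE proper uses surface splines plus per-sample
    rejection controlled by the tolerance and weight parameters.
    """
    x, y = np.asarray(samples_xy, dtype=float).T
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(samples_val), rcond=None)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    return sum(c * xx**i * yy**j for c, (i, j) in zip(coeffs, terms))

# Subtraction mode, as set above:  corrected = image - background_model(...)
```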

[screenshot: DBE first-pass settings]

As seen in the background map produced, the process did a pretty good job of modelling the gradient in the image.

[screenshot: background map from the first DBE pass]

This gives us a pretty good result, but there are still some traces of light pollution, particularly in the bottom right corner where it is strongest. For that reason, we run the same DBE process again on the new image. To ensure you have the same samples and settings as the first pass, you can drag the little triangle out from the DBE process window onto the workspace, close DBE, and then double click on the saved process icon with the new image open.

HOWEVER, for the second pass, we change the tolerance value back down to the default of 0.5. This gives us better rejection between the samples and provides a better estimation of the background. If there are other gradients present in the image, then you can now move some samples to better model these regions specifically (note that if we are only dealing with light pollution, there shouldn't be any other major gradients). In this case I didn't move any, but you could add some more samples to the bottom right corner to help model the remaining gradient here if desired.

[screenshot: DBE second-pass setup]

Running this second pass only removes the trace of light pollution left in the bottom right corner, where it was worst in the original image, as shown by the background map here:

[screenshot: background map from the second DBE pass]

The final image looks pretty flat. There is some large-scale green/purple blotchiness that is common with OSC images, but that can be dealt with later in processing by darkening and desaturating the background.

[screenshot: the image after both DBE passes]

From here I would proceed with colour correction, noise reduction, stretching etc.

Note that the image does not look very pretty here, but that is normal! STF is designed to show you everything the image has to offer by stretching the hell out of it. When you manually stretch, it will look much better.

Hope this helps!

8 minutes ago, Spongey said: [quoted in full above]

That's absolute gold, thank you so much!

 


My take on your image. Also in PixInsight.

  • Resample 50% to reduce the image size. Stars are still quite round after this, and I'm not convinced your image was undersampled to begin with. Undersampling depends very much on atmospheric conditions, and as long as stars don't appear too pixelated, you're probably good on the sampling scale. Just test by integrating this image without drizzle.
  • Dynamic crop to get rid of stacking edges
  • DBE approximately as per @Spongey's excellent write up. 
  • Background neutralization and Photometric Color Calibration
  • Stretch 1 using arcsinh stretch (see the sketch below)
  • Stretch 2 using curves
  • SCNR green cast removal
  • Local colour saturation and dynamic range extension using MMT with a mask to isolate the galaxy
  • Background desaturation
  • Saved as JPEG

I didn't use any noise reduction except SCNR-green. Btw, you caught quite a few distant galaxies in this image.
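On the arcsinh stretch, since it may be new to some: the transfer curve is y = asinh(s·x)/asinh(s), which boosts faint signal roughly linearly while rolling off highlights logarithmically. A bare-bones sketch; as I understand it, PixInsight's ArcsinhStretch process additionally applies one common scale factor to all three channels, which is what preserves star colour:

```python
import numpy as np

def arcsinh_stretch(img: np.ndarray, stretch: float = 100.0) -> np.ndarray:
    """Basic arcsinh stretch on data normalized to [0, 1].

    y = asinh(s * x) / asinh(s): roughly a linear boost for faint
    pixels, rolling off logarithmically in the highlights. PixInsight's
    ArcsinhStretch applies the same scale factor to R, G and B (derived
    from a luminance estimate), which keeps star colour intact.
    """
    return np.arcsinh(stretch * img) / np.arcsinh(stretch)
```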

[attached image: wimvb's processed result]

14 hours ago, wimvb said: [quoted in full above]

Thanks, this is really useful. Arcsinh stretch is a new one to me; I'll research it further 😃

 

5 hours ago, scitmon said:

@wimvb, may I ask why you resampled the image? I'm wondering what the benefits are beyond the obvious memory saving.

Time saving. Especially when MMT needs to calculate the wavelet layers, this can be time-consuming for a large image. And as I wrote before, I'm not convinced the image needed drizzle to start with. In fact, I would like to see a non-drizzled version of this image. My guess is that the level of detail will be very much the same.

19 hours ago, wimvb said: [quoted in full above]

Sure, I've uploaded a non-drizzled version: https://drive.google.com/drive/folders/1wB3May69oEWniF8hikueUkSS-TJvjMKC

I'd be interested to hear your thoughts on the difference. My no doubt overly-simplistic reasoning was that a drizzled version would give me the option of a closer crop on the galaxy.

2 hours ago, Lee_P said: [quoted in full above]

There is a real difference in star profile in your image. The 2x drizzle image is in principle oversampled, and the non-drizzled one is undersampled, imo. Applying drizzle doesn't necessarily mean that you gain detail; it means that the detail is distributed over more pixels. But there is a difference in post-processing: the oversampled image allows you to do a more aggressive deconvolution, reclaiming some of the detail that is lost to seeing.

So there is an advantage to processing the drizzled image. But that advantage comes at a cost: your image file is 1.2 GB, four times larger than the non-drizzled one. This not only eats hard-drive space but also slows down processing. What I would do with this image is first crop the drizzled version to maybe 1/4 size, and then start processing it. In terms of image geometry, this is equivalent to taking the image with a smaller-sensor camera that also has smaller pixels.

Here are the star profiles for the drizzled and non-drizzled images:

[attached image: star profiles (PSF) of the drizzled and non-drizzled images]

Btw, if you drizzle an image, you don't need integer drizzle values. You could drizzle 1.5x or even slightly less.
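A quick way to sanity-check the sampling is to compare the image scale with the seeing. A back-of-envelope sketch using figures from this thread (3.76 µm pixels on the ASI2600MC-Pro, the ~400 mm focal length mentioned above); the seeing value is my assumption:

```python
def pixel_scale_arcsec(pixel_um: float, focal_length_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_length_mm

scale = pixel_scale_arcsec(3.76, 400.0)  # ~1.94 "/px
seeing_fwhm = 3.0                        # assumed seeing, arcsec
print(f"{scale:.2f} arcsec/px, {seeing_fwhm / scale:.1f} px per FWHM")
# Nyquist-style sampling wants roughly 2 px per seeing FWHM; at ~1.5 px
# per FWHM this is mildly undersampled, and 2x drizzle (~0.97 "/px)
# overshoots; a ~1.5x factor would have split the difference.
```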

1 hour ago, wimvb said: [quoted in full above]

Thanks, that's really interesting. For future images I'll try 1.5x drizzle; maybe that's a sweet spot.


Could be. But above all, collect as much data as you can. Drizzle will inevitably decrease the signal-to-noise ratio, and if you want to apply deconvolution, you need a low-noise image to start with. The image data you posted here has maximum pixel values less than 1 (on a 0...1 scale), so you can safely increase the sub exposure time.
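If you want to check that headroom yourself on a single sub, something like the following works (the filename is hypothetical; assumes astropy is installed and 16-bit ADU data):

```python
import numpy as np
from astropy.io import fits

# Saturation-headroom check on one calibrated sub ("light_120s.fits" is
# a hypothetical filename; 65535 assumes 16-bit ADU data).
data = fits.getdata("light_120s.fits").astype(np.float64) / 65535.0
print(f"max pixel: {data.max():.3f} of full scale")
print(f"pixels above 90% full scale: {(data > 0.9).sum()}")
# If even the brightest star cores sit well below 1.0, longer subs
# (or more gain) will not clip.
```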

On 24/03/2021 at 13:54, Lee_P said: [quoted from the opening post above]

At 120s exposure and without an NB filter, I would suspect that the photon noise of a Bortle 8 sky would overwhelm any camera noise (from my Bortle 6 sky it certainly overwhelms my DSLR noise at 180s), so I guess you're just fighting the noise created by the light pollution.
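That intuition is easy to put numbers on: a sub is effectively sky-limited once the shot noise of the accumulated sky background swamps the read noise. A sketch with assumed figures (roughly 1.5 e- read noise for the ASI2600MC-Pro at typical gain, and a guessed Bortle 8 sky flux):

```python
import math

def sky_limited(sky_e_per_px_s: float, exposure_s: float,
                read_noise_e: float) -> bool:
    """True when sky shot noise dominates read noise for one sub.

    Shot noise is sqrt(accumulated sky electrons); a common rule of
    thumb calls the sub sky-limited once that exceeds ~3x the read
    noise, so read noise adds under ~5% in quadrature.
    """
    return math.sqrt(sky_e_per_px_s * exposure_s) > 3.0 * read_noise_e

# Assumed: ~5 e-/px/s Bortle 8 sky, 120 s sub, ~1.5 e- read noise
print(sky_limited(5.0, 120.0, 1.5))  # True: sqrt(600) ~ 24.5 e- >> 4.5 e-
```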

On 27/03/2021 at 12:03, wimvb said: [quoted above]

Sorry for what's probably a dumb question, but I tried setting DrizzleIntegration to 1.5x today and couldn't. The Scale option only allows integers. What am I doing wrong?

 

[screenshot: DrizzleIntegration dialog, integer-only Scale parameter]


Sorry, my bad. I really thought non-integer values were possible. I never drizzle my data, so I'm not that familiar with the process. Drizzle 2x is the best you can do. Follow up with deconvolution to get detail back because, as I wrote earlier, drizzle by itself is no guarantee that you get more detail.

