
Swamp read noise



5 hours ago, vlaiv said:

You are swamping it by factor of 5.07 / 1.61 = ~ x3.152

That matches very closely the practical stddev approach (to have a background stddev > 3x bias_stddev), among other calculations I'm playing with.

I was aware I was using ADUs, and I also noticed that converting to e- led to more correct results, but I thought it should work with ADUs too. Clearly it does not :)

Thanks a lot for your help and the clarification, Vlaiv. Very useful.


  • 1 year later...

I had to rescue this thread from my bookmarks. Yesterday a doubt came to mind.

@vlaiv, when we measure (shot?) noise from the background, we are also picking up its read noise, not only shot noise, right? In other words, if my background patch has a stddev of 20 (for example), part of that "20" also contains read noise (even if the sub is calibrated), no? So how can we be sure we are accounting for only the amount of noise we want to swamp?


3 minutes ago, mgutierrez said:

I had to rescue this thread from my bookmarks. Yesterday a doubt came to mind.

@vlaiv, when we measure (shot?) noise from the background, we are also picking up its read noise, not only shot noise, right? In other words, if my background patch has a stddev of 20 (for example), part of that "20" also contains read noise (even if the sub is calibrated), no? So how can we be sure we are accounting for only the amount of noise we want to swamp?

Depends on how you measure.

If we want to calculate the exposure length needed to swamp the read noise - we do it by measuring the average background value, not the noise. We measure the sky signal level. The sky signal level does not depend on read noise. It might depend on the bias level, and that is why we need to calibrate the sub - to remove that bias level / offset.

Once we measure the sky background level - we then calculate the sky background noise. If we were to measure the standard deviation of the image - we would find that it is a bit higher than the calculated value - precisely for the reason you mention: there are other noise sources that add to the total noise level in the image (both thermal and read noise).

However - those don't affect the sky background value, and the sky background value is directly related to the amount of noise coming from it - because it follows a Poisson distribution, for which the noise is equal to the square root of the signal value.
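The calculation above can be sketched in Python (the function name, gain, and example numbers are my own illustration, not from this thread): convert the measured background level from ADU to electrons, take its square root to get the shot noise, and compare that to the read noise.

```python
import math

def swamp_factor(sky_adu, gain_e_per_adu, read_noise_e):
    """Ratio of sky shot noise to read noise for a calibrated sub.

    sky_adu        : median background level (bias/offset already removed)
    gain_e_per_adu : camera gain in electrons per ADU
    read_noise_e   : read noise in electrons
    """
    sky_e = sky_adu * gain_e_per_adu    # signal in electrons
    shot_noise_e = math.sqrt(sky_e)     # Poisson: noise = sqrt(signal)
    return shot_noise_e / read_noise_e

# e.g. a 200 ADU background at 0.8 e/ADU with 2e read noise:
print(round(swamp_factor(200, 0.8, 2.0), 2))  # 160e -> sqrt = ~12.65 -> 6.32
```

A result above ~3 would indicate the read noise is already well swamped, per the rule of thumb earlier in the thread.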


50 minutes ago, vlaiv said:

Sky signal level will not depend on read noise

Thanks for the quick reply.

I see your point, but honestly not completely. Isn't the signal level (our measurement) affected by the read noise? With a very low signal (due to too short an exposure) and quite high read noise, wouldn't our measurement be even more affected by the read noise? Even if the LP follows a Poisson distribution, why would that "mean value" not be affected by such read noise (especially if the read noise is quite high compared to the signal)?


2 minutes ago, mgutierrez said:

Thanks for the quick reply.

I see your point, but honestly not completely. Isn't the signal level (our measurement) affected by the read noise? With a very low signal (due to too short an exposure) and quite high read noise, wouldn't our measurement be even more affected by the read noise? Even if the LP follows a Poisson distribution, why would that "mean value" not be affected by such read noise (especially if the read noise is quite high compared to the signal)?

Noise, by definition, is random deviation from some mean value. If you take a number of noisy samples and average them - you get a value that is close to that average/mean value. The more samples you take - the closer your average will be to the expected value.

This is how stacking works - we take a bunch of noisy images, average them, and get a much cleaner image - one that is much closer to the true / average value of the signal.

When you measure the sky background value in an image - you average a bunch of pixels that have the same mean value. Yes, each will deviate slightly from the true value due to noise - but since they all have the same mean value - which is the level of the sky background - you will get a value that is very close to what you want to measure. It's like stacking thousands of subs to get a correct image, except we "stack" thousands of pixels instead (thousands of single values).

Here is an example. Say that your mean background value is 900 electrons. Just from Poisson noise - we will have a noise level of 30e. Add some read noise to it - let's say the read noise is 2e, so the total noise is sqrt(30*30 + 2*2) = ~30.066 (see how a small read noise barely affects the total noise level when the background noise is large). Thus we have an SNR of 900 / 30.066 = ~29.93.

This really means that we can expect to see 900 +/- 30.066 as the background value about 68% of the time (values between ~870 and ~930).

But if we take square that is only 100x100 pixels - that is 10000 pixels, and average those - we get SNR improvement that is equal to x100 (square root of number of averaged samples). This means that our SNR is no longer ~29.93 but rather ~2993 (increased by factor of x100).

so we have 900 / some_uncertainty = 2993

from there

some_uncertainty = 900 / 2993 = ~0.3

Number that we get by taking average is actually 900 +/- 0.3e

Yes, there will still be some residual measurement error - but it is so small that we can take our value to be the correct value. We can easily treat 900.3e as 900e - it won't change our results by any significant amount.

The larger the patch of sky you take - the better (more precise) your estimate of the background signal value. The only problem is that with larger patches of sky we also take in some stars. This is why it is better to use the median here rather than the true mean - as the median is far less sensitive to outliers (anomalous pixel values due to starlight falling on those pixels).
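A quick simulation sketch of this example (the seed and patch size are my own arbitrary choices, matching the 900e / 100x100 numbers above):

```python
import numpy as np

rng = np.random.default_rng(42)
mean_sky_e = 900.0    # true background level in electrons
read_noise_e = 2.0

# Simulate a 100x100 background patch: Poisson shot noise plus Gaussian read noise
patch = rng.poisson(mean_sky_e, (100, 100)) + rng.normal(0.0, read_noise_e, (100, 100))

print(patch.std())       # ~30.07, the per-pixel total noise
print(np.median(patch))  # ~900 +/- ~0.4, robust to any star-contaminated pixels
```

The measured median lands within a fraction of an electron of the true 900e, exactly as the sqrt(N) argument predicts.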

 

 


43 minutes ago, vlaiv said:

the total noise is sqrt(30*30 + 2*2) = ~30.066 (see how a small read noise barely affects the total noise level when the background noise is large).

That's actually my point. So I guess we should start with a sufficiently exposed light frame, right?


16 minutes ago, mgutierrez said:

That's actually my point. So I guess we should start with a sufficiently exposed light frame, right?

You start with the exposure you have in order to determine the exposure you need to swamp the read noise.

You don't need much light at all, since you'll be averaging a large number of pixels in the background.

Say that you have a significantly lower average background signal - say only something like 50e. That is an SNR of ~7 (sqrt(50) = ~7e of noise, and 50 / ~7 = ~7 SNR).

Again if you average 10000 samples (100x100 pixels) - you get boost of x100 in SNR or SNR of 700.

In this case, your average signal will be 50 +/- 0.07e - again, plenty of accuracy for what you need.

Even if you have something very low - lower than read noise, say your signal is 1e on average.

SNR will be 1 because sqrt(1) = 1 and 1/1 = 1

If you average 10000 such pixels - you will get SNR of 100 or your final value will be 1 +/- 0.01

Even if we account for read noise - make it 2e again and see what happens:

Signal is 1e, shot noise is 1e, read noise is 2e. Total noise is sqrt(1*1+ 2*2) = sqrt(5) = ~2.236, so SNR is 1/2.236 = ~0.447. After averaging 10000 pixels SNR will be 44.7, so noise in final result will be 1/44.7 = ~0.022. We still have only 2% of error in our measurement of average background level.

You don't need much light to get a good estimate of the average signal if you use that many samples - 10000 pixels. It's like stacking 10000 images. We manage to get good images by stacking only hundreds of subs, or even fewer.
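The three cases above can be reproduced with a small helper (a sketch under the same assumptions: shot and read noise add in quadrature, and averaging N pixels boosts SNR by sqrt(N)):

```python
import math

def averaged_snr(signal_e, read_noise_e, n_pixels):
    """SNR of the averaged background estimate over n_pixels pixels."""
    per_pixel_noise = math.sqrt(signal_e + read_noise_e ** 2)  # quadrature sum
    return (signal_e / per_pixel_noise) * math.sqrt(n_pixels)  # sqrt(N) boost

for signal_e in (900.0, 50.0, 1.0):
    snr = averaged_snr(signal_e, 2.0, 10_000)
    print(f"{signal_e:5.0f} e -> SNR {snr:7.1f}, uncertainty {signal_e / snr:.3f} e")
# 900e -> ~2993, 50e -> ~680, 1e -> ~44.7 (uncertainty ~0.022e, i.e. ~2%)
```

Even the 1e background, well below the 2e read noise, is pinned down to a couple of percent.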


Thanks @vlaiv for your useful input.

Some concepts are clear now. But I'm running into trouble understanding something that should be basic. From your previous post:

Quote

Here is an example. Say that your mean background value is 900 electrons. Just from Poisson noise - we will have a noise level of 30e. Add some read noise to it - let's say the read noise is 2e, so the total noise is sqrt(30*30 + 2*2) = ~30.066 (see how a small read noise barely affects the total noise level when the background noise is large). Thus we have an SNR of 900 / 30.066 = ~29.93.

This really means that we can expect to see 900 +/- 30.066 as the background value about 68% of the time (values between ~870 and ~930).

But if we take square that is only 100x100 pixels - that is 10000 pixels, and average those - we get SNR improvement that is equal to x100 (square root of number of averaged samples). This means that our SNR is no longer ~29.93 but rather ~2993 (increased by factor of x100)

Ok, we have the whole background with mean=900. SNR is ~29.93. Ok.

I don't fully get the point of the SNR boost. If we measure the square patch in the same way as we did the whole background, its mean value should be similar to 900, and hence the noise should remain similar. I'm missing something. Thanks for your patience.


1 hour ago, mgutierrez said:

Ok, we have the whole background with mean=900. SNR is ~29.93. Ok.

Think about what this means.

This means that for any given pixel you choose in the background of the image - you can expect it to fall within a certain range of values with a certain probability.

I'll illustrate the idea with a Gaussian distribution, because the Poisson distribution for large numbers behaves pretty much like a Gaussian - and the Gaussian is more familiar:

[Figure: Gaussian (normal) distribution curve, showing the mean and the +/- 1, 2, 3 sigma ranges]

There is some mean value - central value. When we say that background is 900 - that is what we mean - average value of background pixels is 900. Then there is some noise. If we have SNR of roughly 30 - this means that noise is 30e.

Noise is sigma in above distribution graph. Standard deviation.

About 68% of all pixels will have a value within the +/- one sigma range - which means roughly 68% of pixels will have values between 870 and 930.

About 95% of all pixels will have values within the +/- two sigma range - which means roughly 95% of all background pixels will fall between 840 and 960.

If we take these pixel values they will look like this:

891, 914, 902, 907, 899, ....

They won't be exactly 900 - but will scatter according to the graph above.

When we take some number of such samples (an unbiased selection) and average them - that average value will fall closer to the true mean value - closer to 900 than the individual samples.

Noise can be thought of as scatter around the mean value. When we average samples to get new values - we improve the SNR. What does that mean? Well - the average value remains the same (averaging won't change the actual average value) - but the scatter will be reduced.

If you have 10000 such random values and you divide them into groups of, say, 100 numbers each, and you average every group - each group then becomes a single value - and you will have 100 values. Those 100 values will be more closely grouped around the overall average. This is what SNR improvement is - you have reduced the scatter of the averaged values.

If you average the whole 10000 - you get a single value, so we can't really talk about scatter - but we can talk about uncertainty - the range around the true mean where this value falls.

If you have 10000 numbers with average of 900 and standard deviation of 30 and you average those numbers - you will get result that is in 900 +/- 0.3 with 68% certainty, or in 900 +/- 0.6 with 95% certainty.
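This grouping experiment is easy to try numerically (a sketch using a Gaussian stand-in for the Poisson distribution, as discussed above; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10000 samples with true mean 900 and per-sample scatter (sigma) of 30
samples = rng.normal(900.0, 30.0, 10_000)

# Average in groups of 100: scatter of the group means drops by sqrt(100) = 10
group_means = samples.reshape(100, 100).mean(axis=1)

print(samples.std())      # ~30  (scatter of individual samples)
print(group_means.std())  # ~3   (scatter after averaging groups of 100)
print(samples.mean())     # ~900 +/- 0.3 (average of all 10000)
```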

Makes sense now?

 


@mgutierrez When you stack, say, 100 images to obtain a final lower-noise image, you improve the SNR for each pixel by stacking the same pixel from each image, to get an average value for that single pixel closer to the actual value - the value it would have if there were no noise present.

For a more accurate sky background value you don't need to stack multiple images; you can stack multiple sky background pixels from the same image - the 100 x 100 pixel block vlaiv mentions - and so again get an average value much closer to the actual sky background value. You've therefore also improved the SNR of the sky background value.

Alan


  • 1 month later...
On 19/04/2022 at 10:21, vlaiv said:

Just be careful that you need to properly calibrate your sub. Use one of the channels for the measurement - green is probably the best, as it carries most of the luminance information (humans are most sensitive to noise in luminance data).

Sorry, again, for resurrecting this thread.

@vlaiv, I agree with the quoted statement. But, just to be sure we are swamping the faintest signal (channel), wouldn't it be better to choose the channel with the lowest background signal?


59 minutes ago, mgutierrez said:

But, just to be sure we are swamping the faintest signal (channel), wouldn't it be better to choose the channel with the lowest background signal?

If you want to be very specific about it - then yes, choose channel that has lowest background signal.

But you really don't have to be with OSC data because of how OSC sensors work and how human vision works.

Here, look at this example (which you can do yourself in any image processing software):

[Figures: baseline image; the same image with noise added to the blue channel only; the same level of noise added to the green channel only]

The first image is the baseline. I've chosen an image that does not have much green in it - rather like astrophotography, where blue and red (or orange) are the base colors.

The second image has some level of noise added to the blue channel alone. The third image has the same level of noise, this time added to the green channel only.

The thing is - we are much more sensitive to variation in brightness than in color / hue. Green carries the bulk of the brightness information. Even OSC sensors are built that way - look at a raw image from, say, a 533MC - you will see that it is very green and needs "color balance" to produce a proper image. The Bayer matrix of an OSC sensor has 2 green pixels for every 1 red and 1 blue - again because green carries the most information about the image.

If you swamp the read noise in the green channel - you've done 99% of the work as far as the visual quality of the image goes. Want to be very accurate? Sure, go for the lowest signal - but the fact is, it won't make much of a difference.
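The point about green dominating perceived brightness can be made quantitative with standard luma weights (these are the Rec.709 coefficients, my own choice of illustration, not from this thread):

```python
# Rec.709 luma weights: relative contribution of each channel to perceived brightness
LUMA = {"R": 0.2126, "G": 0.7152, "B": 0.0722}

# Injecting the same noise sigma into a single channel scales the resulting
# luminance noise by that channel's weight - roughly 10x worse for green than blue
noise_sigma = 10.0
for channel, weight in LUMA.items():
    print(f"noise in {channel} only -> luminance noise ~ {weight * noise_sigma:.2f}")
```

Green's weight alone exceeds red and blue combined, which is why noise in the green channel is the most visible in the comparison images above.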

