Swamp read noise



3 minutes ago, Pitch Black Skies said:

They are a stack of 20 single files each.

Do you think I should increase to get a more averaged-out master? I'm going by the recommendation of 20 in the book 'Every Photon Counts'.

I usually use more (significantly more), but that really depends. Every additional sub lowers the noise you inject back into the image.

I just measured your masters and found something interesting:

[attached screenshot: measurements of the master frames]

The dark has one (or more) hot pixels that are not showing in the flats.

Its max pixel value is almost double that of the flat.

First thing to check is whether you used exactly the same parameters for flats and flat darks (temperature, offset, gain, exposure length).

Second is the type of stacking used for both. What type of stacking did you use? I would recommend sigma reject stacking for both, to remove any cosmic ray hits or anomalous hot pixels (see the sketch below).
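For anyone who wants to see what that does, here is a minimal numpy sketch of kappa-sigma (sigma reject) stacking - the file list, kappa and iteration count are placeholders, and DSS or any other stacking tool has its own implementation:

```python
import numpy as np
from astropy.io import fits

def sigma_reject_stack(paths, kappa=3.0, iterations=2):
    """Mean-stack frames, iteratively rejecting per-pixel outliers
    more than kappa standard deviations from the pixel mean."""
    frames = np.stack([fits.getdata(p).astype(np.float64) for p in paths])
    mask = np.zeros(frames.shape, dtype=bool)            # True = pixel rejected
    for _ in range(iterations):
        data = np.ma.masked_array(frames, mask)
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        outliers = np.abs(frames - mean) > kappa * std   # hot pixels, cosmic ray hits
        mask |= np.ma.filled(outliers, False)
    return np.ma.masked_array(frames, mask).mean(axis=0).filled(0)

# Example: master_dark = sigma_reject_stack(dark_file_list)
```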


5 minutes ago, newbie alert said:

What does read noise look like in a sub? I know I'm going to get jumped on for this, but CMOS looks so much noisier than my CCD, and considering I don't use calibration frames, and my CCD has a read noise far higher than a CMOS, I'm interested in what it looks like in a sub or in a stack.

I know what dark current and bias frames look like, so what does read noise look like?

[attached screenshot: stretched stack of bias subs]

Here you go - the above is a stack of bias subs - very small standard deviation and a rather "normal" looking stretched image.

(You managed to get your old account back? Do you have two accounts now?)


27 minutes ago, vlaiv said:

Dark has one (or more) hot pixels that are not showing in flats.

That's interesting, I'll be able to check the DSS settings this evening.

The parameters were all the same. I remember I had to use 10s for the flats to get the ADU around the 20k mark. Not sure if that was a bit too long.

How many would you usually shoot for, 100?


33 minutes ago, vlaiv said:

[attached screenshot: stretched stack of bias subs]

Here you go - the above is a stack of bias subs - very small standard deviation and a rather "normal" looking stretched image.

(You managed to get your old account back? Do you have two accounts now?)

Cheers Vlaiv, yeah, managed to change my password and saved it so I remember it!


43 minutes ago, Pitch Black Skies said:

How many would you usually shoot for, 100?

Depends on exposure length - I aim for at least 64 darks.

I go for 256 flats and flat darks, but my flat panel is very strong and I don't have issues with very short flat exposures - my flat exposures are a few milliseconds each, so it does not take long to shoot 256 of them (it takes more time to download the subs with the ASCOM driver in SGP than anything else - my flats usually take about 10 minutes).


13 minutes ago, Pitch Black Skies said:

Any reason for the specific numbers?

I've noticed they are 2^6 and 2^8 respectively.

:D

Yes, there is a specific reason for those numbers, but it is probably completely unimportant :D

I'm a computer programmer by trade, and division by powers of two is exact in fixed point maths - no surprises. Much like it is very easy to divide something by, say, 1000 in the decimal system - you just "move" the decimal point three places (note that 1000 is 10^3) - the same goes for binary operations.

Numbers written in binary form "don't lose precision" (this really depends on how they are recorded) when divided by a power of two.

Take for example division by 3 in the decimal system. Every third number divides cleanly - but the other two take on an "infinite" form - say 34.33333333... or 12.666666... (depending on the remainder when dividing by 3). These numbers can't be written precisely with a finite number of digits - you need to round them, and rounding introduces error.

There you go - if you use a power-of-2 number of subs, you are guaranteed not to introduce rounding error when dividing the result to take the average.
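A toy illustration of the point in Python (assuming IEEE floating point, which is what most stacking software uses internally):

```python
# Dividing by a power of two only shifts the binary point, so no precision
# is lost; dividing by most other counts needs rounding.
total = 100.0
print(total / 4)    # 25.0    - exact, 4 is 2^2
print(total / 64)   # 1.5625  - exact, 64 is 2^6
print(total / 3)    # 33.333333333333336 - rounded to the nearest binary fraction
```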

In reality this hardly makes any difference - especially if you do sigma clipping to avoid hot pixels or cosmic rays, since pixels with outliers will be divided by a different number anyway - but due to the way my brain is used to working, I instinctively go for these numbers of subs.

I often joke that I have "binary OCD" :D

 


23 hours ago, vlaiv said:

In this particular case we measured background value to be ~77.3 (Mean value is ~78, median is ~77.3)

I've followed all of the steps and selected a few different patches in the final image and measured them. They all have a median of ~131 or ~132. Does that sound right to you? I was expecting to get ~77 or ~78 like yours.


6 minutes ago, Pitch Black Skies said:

I've followed all of the steps and selected a few different patches in the final image and measured them. They all have a median of ~131 or ~132. Does that sound right to you? I was expecting to get ~77 or ~78 like yours.

If you followed all the same steps and got different values to mine, then one of us has it wrong (or both of us - but one thing is certain, we can't both be right :D ).

Did you do the procedure on the same data?

While LP changes during the night, I doubt that LP levels can double.


1 hour ago, vlaiv said:

but one thing is certain, we can't both be right :D

True 😄 and I'm gonna put my hands up on this one. There's a 99.99% chance that it's me who isn't.

I've repeated the steps from the beginning with the same files. This time I'm getting ~105, ~106.

 

When I select the centre of the green flat, I get a mean of 13266.104.

I remove the selection and then divide that green flat by 13266.104.

Then I divide the green light by that green flat, and then divide the resulting image by 4.
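For anyone following along, the same steps as a rough numpy sketch - the file names and the size of the central patch are placeholders, and the final divide-by-4 is simply the step described above:

```python
import numpy as np
from astropy.io import fits

# Placeholder file names - substitute the extracted green-channel images
light_g = fits.getdata("light_green.fits").astype(np.float64)
flat_g = fits.getdata("flat_green.fits").astype(np.float64)

# Mean of a central patch of the flat (quoted above as 13266.104)
h, w = flat_g.shape
centre = flat_g[h // 2 - 50:h // 2 + 50, w // 2 - 50:w // 2 + 50]
flat_norm = flat_g / centre.mean()

# Flat-correct the light, then apply the final division by 4 described above
calibrated = light_g / flat_norm / 4.0
print(np.median(calibrated))
```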


On 20/04/2022 at 09:02, Fegato said:

So with my ASI2400MC Pro at Gain 140 (1.2 e) and Offset 15 (480 ADU),

In the CN thread you shared, Jon Rista uses the ASI1600 default offset of 50 in his equation. He doesn't use the median ADU of his bias sub. Am I missing something here?


31 minutes ago, Pitch Black Skies said:

In the CN thread you shared, Jon Rista uses the ASI1600 default offset of 50 in his equation. He doesn't use the median ADU of his bias sub. Am I missing something here?

In principle, the offset should correspond to the bias level.

It should be the offset in electrons before any gain is applied, but I've found that it is not reliable. It depends on the driver implementation and it is not always correct.

I've just pulled one of my 60 second darks. I set my offset to 64, yet the average ADU of the dark sub is:

[attached screenshot: ImageJ statistics of the dark sub]

62.81 ADU

This is with dark current included (which is not much - less than 1e total at 60s at -20C - but still, the value should be higher than 64, not lower).

You can always check how it corresponds to bias levels for your camera.

Set the offset to some value, take a bias sub, load it in ImageJ, divide by 16, then multiply by the e/ADU value if you are using a gain other than unity, and see what average pixel value you get. In theory it should be the same as the offset you set - but it might not be.
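If you prefer Python to ImageJ for that check, a minimal sketch - the divide-by-16 assumes a 12-bit sensor such as the ASI1600 saved in 16-bit files, and the file name and e/ADU value are placeholders to adjust for your camera:

```python
import numpy as np
from astropy.io import fits

bias = fits.getdata("bias_001.fits").astype(np.float64)  # placeholder file name

bit_scale = 16   # 2**(16 - 12) for a 12-bit sensor saved as 16-bit; adjust for your camera
e_per_adu = 1.0  # e/ADU for your gain setting; 1.0 at unity gain

offset_e = bias.mean() / bit_scale * e_per_adu
print(f"Measured offset ~ {offset_e:.1f} e - compare with the offset set in the driver")
```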


1 hour ago, vlaiv said:

62.81 ADU

This is quite low though compared to mine.

My camera's default offset is 70, yet my bias ADU is nowhere near that figure, it's 2796 median ADU. Even 2796/16 = 174.75. Still nowhere near 70.

Why is Jon using 50 in his equation but I'm not using 70?

I'm missing something very basic here 🧐


3 minutes ago, Pitch Black Skies said:

This is quite low though compared to mine.

My camera's default offset is 70, yet my bias ADU is nowhere near that figure, it's 2796 median ADU. Even 2796/16 = 174.75. Still nowhere near 70.

Why is Jon using 50 in his equation but I'm not using 70?

I'm missing something very basic here 🧐

I would not count on the offset number corresponding to the offset in electrons.

You should set the offset high enough that you don't get clipping (in fact, I advocate that no pixel hits the lowest value in a dozen or so consecutive subs - but I'm probably overdoing it :D ).

After that, basically forget about it - use the bias average ADU value as a baseline, or simply use calibrated subs.


I've found the answer.

With this camera the offset number gets multiplied by 10 to get the value in ADUs:

70 × 10 = 700

As you've shown me Vlaiv, this figure is 14-bit, so to get the 16-bit figure we multiply by 2^(16-14) = 2^2 = 4:

700 × 4 = 2800

My bias frame measured 2796. Quite close.

Things are finally starting to make sense 😬


13 hours ago, Pitch Black Skies said:

I've found the answer.

With this camera the offset number gets multiplied by 10 to get the value in ADUs:

70 × 10 = 700

As you've shown me Vlaiv, this figure is 14-bit, so to get the 16-bit figure we multiply by 2^(16-14) = 2^2 = 4:

700 × 4 = 2800

My bias frame measured 2796. Quite close.

Things are finally starting to make sense 😬

Interesting. I guess mine must be multiplied by 8...

Offset 15: × 8 = 120, × 2^(16-14) = 480. Which is what my bias frames give me (and all my darks up to 600s - which just shows how little dark current there is, I suppose?).

Quite why they would use 10 for the ASI533 and 8 for the ASI2400, I really don't know.

 


25 minutes ago, Fegato said:

Quite why they would use 10 for the ASI533 and 8 for the ASI2400, I really don't know

Yes, it seems to be different for different models: 'ZWO has this thing for obscure offset settings that seem to be different in every camera. The offset you select in the driver needs to be multiplied by some number to get your value in ADU. I believe in the 183 the offset multiplier is 5 and the 294 it is 16 (please correct me if I'm wrong)' (ERHAD).

What would it be for an ASI1600?

If someone has a bias frame from an ASI1600, we could measure it to find out.

What's still confusing me is that Jon uses 50 as the offset figure in his formula on the CN thread you linked. This gave me the impression he was using the default offset setting within the driver, which is why I was using 70 in my equation - but using that figure is incorrect, it needs to be converted to ADU. Is it possible he is using the default offset setting, or has he already converted it to ADU? It seems unlikely.

Anyone got an ASI1600 bias sub handy?


1 hour ago, Pitch Black Skies said:

What would it be for an ASI1600?

I made the above measurement on an ASI1600 - got 62 from a dark with offset 64, so I think the multiplier is 1 for that camera (if there is indeed an integer multiplier for each camera).


24 minutes ago, vlaiv said:

I made the above measurement on an ASI1600 - got 62 from a dark with offset 64, so I think the multiplier is 1 for that camera (if there is indeed an integer multiplier for each camera).

Great stuff, yes it seems this camera's multiplier is 1, making things easy.

I can't understand why mine is multiplied by 10 and Fegato's by 8.

I don't think a lot of people know about this - the equation mentioned on CN would never have worked for them.
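Summarising the conversion that has emerged in this thread as a tiny helper - the per-model multipliers are only the values measured here, so treat them as assumptions to verify against your own bias frames:

```python
def expected_bias_adu(offset, multiplier, adc_bits=14):
    """Predict the 16-bit bias level from the driver offset setting.

    multiplier is the camera-specific factor discussed in this thread
    (10 for the ASI533, 8 for the ASI2400, 1 for the ASI1600) - treat
    these as assumptions and check against your own bias frames.
    """
    return offset * multiplier * 2 ** (16 - adc_bits)

print(expected_bias_adu(70, 10))              # ASI533:  2800 (bias measured ~2796)
print(expected_bias_adu(15, 8))               # ASI2400: 480
print(expected_bias_adu(64, 1, adc_bits=12))  # ASI1600: 1024, i.e. ~64 after dividing by 16
```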


  • 3 weeks later...

Hi @vlaiv,

I'm trying to do the same exercise with one of my lights, just out of curiosity. Let me explain, because I cannot figure out what's going on.

I have a 60sec light (M33): https://www.dropbox.com/s/j9bglelhgmkh4v1/M33_Light_L_60_secs_035.fits?dl=0

And the corresponding bias: https://www.dropbox.com/s/qh0ah4ix1g5shnn/M1_G56_OFF30_Bias_300.fits?dl=0

Note the following parameters:

Gain: 0.333 e-/ADU

Bias mean value: 486.9 ADU

Bias stddev: ~4.9

Read noise: 1.61 e-   ---> 1.61 / 0.333 = ~4.83 ADU

ADC depth: 16-bit -> no need to rescale from the native bit depth to 16-bit ADU

 

I've read multiple times that a more practical approach to making sure I'm swamping the read noise by 3x (the read noise, not the LP signal, as you correctly clarified) is to take a clean region of my light, with no stars or nebulosity (only background), and measure its stddev. I want this stddev to be at least 3x higher than my bias stddev. This makes complete sense to me, but I cannot reconcile this approach with yours. I'm doing something wrong.


Using ImageJ and defining a clean region, I get a stddev of ~17, which is > 3x the bias stddev (4.9 x 3). That means it is sufficiently swamped. Furthermore, using Rista's formula, I get a DN very close to the current background DN of my image using a swamp factor of 10 over the background signal (I don't want to extend the explanation with those calculations). I assume it is right.

Using the same region, I get a mean of ~565 ADU. The background is hence 565 - 486.9 = ~78 ADU. Read noise is 4.83 ADU (I guess I need to use ADUs here...). Then, I need a signal of (4.83*3)^2 = ~209 ADU.

78 / 209 indicates I'm clearly underexposed (~0.37 times).

What am I doing wrong here? Why are the results so different? I'm clearly missing something quite trivial here...

Thanks


26 minutes ago, mgutierrez said:

What am I doing wrong here? Why are the results so different? I'm clearly missing something quite trivial here...

I'm having a bit of difficulty getting stats for your camera. Can you help out?

You say that the gain is 0.333 e/ADU, yet I see the gain set at 56 in the FITS header:

[attached screenshot: FITS header excerpt showing gain 56]

I'm unable to identify the readout mode or the actual gain / e/ADU used for the images. The FITS header says it is mode 1 - so it should be the blue line, right? Then e/ADU is about 4.4 at gain 56

(this is from the FITS header, so hopefully the software wrote down the exact values used).

The next issue is that the QHY graph shows, for read mode 1 and gain 56, a read noise of about ~3.4e.

The standard deviation of the bias sub is indeed 4.89 ADU - and if we multiply that by 0.44 e/ADU we get at most 2.15e of read noise. The problem is that the stddev of a bias is not the same as read noise.

In order to measure read noise from a bias sub, you must first remove the bias signal. Any non-constant signal that is not removed will increase the stddev measurement, and hence you will get a higher value than the read noise.

If you want to measure read noise with bias subs, the simplest method is to take two bias subs, subtract one from the other, measure the stddev and divide that value by sqrt(2) (~1.414).

Whatever the case, if we take 2.15e or less, we run into yet another problem: for a numeric gain of 56, only read mode 0 has a read noise of 2.15e or less (although not much less).

As you can see, I'm rather confused about what the actual numbers are in this case and don't even know where to begin.

43 minutes ago, mgutierrez said:

Using ImageJ and defining a clean region, I get a stddev of ~17, which is > 3x the bias stddev (4.9 x 3). That means it is sufficiently swamped.

The problem is that you did not remove the bias signal and the dark current signal, or their respective noises.

If you want to do that sort of calculation, here is how you should perform it (a code sketch follows these steps):

1. Measure the read noise by taking two bias subs, subtracting them, measuring the stddev and dividing it by sqrt(2)

2. Take your 60 second master dark, subtract the master bias and measure the average value. Take the square root of that - this is your dark current noise.

3. Take a calibrated sub, select an empty patch of background, measure its stddev and then calculate

sqrt(sub_noise^2 - dark_noise^2 - read_noise^2)

to get an approximate value of the LP noise that you can compare with the read noise.
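A rough Python sketch of those three steps - the file names are placeholders, everything is assumed to be at the same gain/offset, and (as noted below) the noise added back by calibration is ignored:

```python
import numpy as np
from astropy.io import fits

# 1. Read noise from two bias subs: subtract, take stddev, divide by sqrt(2)
bias1 = fits.getdata("bias_1.fits").astype(np.float64)
bias2 = fits.getdata("bias_2.fits").astype(np.float64)
read_noise = np.std(bias1 - bias2) / np.sqrt(2)

# 2. Dark current noise: mean of (master dark - master bias), then square root
#    (strictly, convert to electrons first for the square root to be true shot noise)
master_dark = fits.getdata("master_dark_60s.fits").astype(np.float64)
master_bias = fits.getdata("master_bias.fits").astype(np.float64)
dark_noise = np.sqrt(np.mean(master_dark - master_bias))

# 3. Stddev of an empty background patch in a calibrated light
calibrated = fits.getdata("light_calibrated.fits").astype(np.float64)
patch = calibrated[100:200, 100:200]   # pick a star-free region
sub_noise = np.std(patch)

# Approximate LP noise, compared against read noise
lp_noise = np.sqrt(sub_noise**2 - dark_noise**2 - read_noise**2)
print(f"LP noise is {lp_noise / read_noise:.2f} x read noise")
```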

I say approximate - because we did not account for noise added back by calibration.

Using noise to measure the swamp factor is very inconvenient because of all the noise addition that happens (a sub contains one dose of read noise, one dose of dark current noise and one dose of LP noise, all mixed together before you start calibration - and then you add more noise from the calibration subs).

When measuring signal, the mean takes care of the noise (it's like stacking), so you only need to know:

a) the read noise

b) the mean signal level of an empty patch of a calibrated sub (and if there are some stars, you can simply take the median and it will ignore them)


Thanks for the prompt reply @vlaiv

I need some time to digest your info 😅 and run some tests.

Just wanted to clarify your doubts. The read mode is indeed 1. At gain 56 the camera enters high gain mode and the read noise decreases drastically, as you can see in the graphs.

Furthermore, the read noise, gain and other values have been calculated with the basic CCD parameters script in PixInsight, and also compared with other colleagues who have characterised the QHY268M. I would trust the results.

Regarding the dark noise: yes, I thought about it, but since this camera has very low dark noise and the sub is not too long, I ignored it for this example. Maybe I shouldn't have.

I will come back to you with the proposed tests.

Thanks a lot for your time


11 hours ago, mgutierrez said:

I would trust the results.

In that case, I'll just go over the results and both methods without really worrying whether the starting assumptions are true or not.

Gain: 0.333 e-/ADU

Bias mean value: 486.9 ADU

Bias stddev: ~4.9

Read noise: 1.61 e-   ---> 1.61 / 0.333 = ~4.83 ADU

The background level of the light sub is measured at 565 ADU.

You want to swamp the read noise by a factor of 3 with LP noise.

486.9 ADU × 0.33 e/ADU = ~160.7e

565 ADU × 0.33 e/ADU = ~186.45e

The background LP value is 186.45 - 160.7 = 25.75e, and the LP noise is the square root of that: ~5.07e

You are swamping it by a factor of 5.07 / 1.61 = ~3.15
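The same arithmetic as a few lines of Python, using 0.33 e-/ADU and the measured values quoted above:

```python
gain = 0.33            # e-/ADU, as used in the working above
read_noise_e = 1.61    # e-

bias_adu = 486.9
background_adu = 565.0

lp_signal_e = (background_adu - bias_adu) * gain  # ~25.8 e
lp_noise_e = lp_signal_e ** 0.5                   # ~5.1 e
print(lp_noise_e / read_noise_e)                  # ~3.15 - the swamp factor
```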

13 hours ago, mgutierrez said:

Using the same region, I get a mean of ~565 ADU. The background is hence 565 - 486.9 = ~78 ADU. Read noise is 4.83 ADU (I guess I need to use ADUs here...). Then, I need a signal of (4.83*3)^2 = ~209 ADU.

78 / 209 indicates I'm clearly underexposed (~0.37 times).

The error in your calculation comes from working in ADU units rather than electrons.

Those relationships hold in electron units, not ADU units. The problem is that there is a square in the calculation, so the ADU/electron conversion factor gets squared as well and ends up applied once more than it should be: (4.83 × 3)^2 = (1.61 × 3)^2 / 0.333^2 ≈ 210 ADU, whereas the correct target expressed in ADU is (1.61 × 3)^2 / 0.333 ≈ 70 ADU - which your measured background of ~78 ADU does exceed.

If you first convert to electrons, everything checks out as I showed above.

 

