
Swamp read noise


Recommended Posts

11 minutes ago, Pitch Black Skies said:

I think I am confusing Jon Rista's recommendation of a x10 swamp factor for optimal exposure.

Ah ok, so I'll explain that formula you wrote above in detail so you can understand it.

DN  = (read_noise^2 * swamp / gain + offset) * (2^16 / 2^camera_bits)

This is a very unfortunate type of equation :D as the meanings of the terms are not well defined here, but I will explain what everything means and produce the same equation - or rather a similar one - from the procedure I presented above.

First - the read noise bit is fine, and you can take it from the graph on the ZWO website. It is about 1.5e for your camera.

Swamp here is "background" swamp - not noise "swamp" - as it is acting on squared value of noise not directly on noise (remember that square relationship between noise and associated signal).

If you put swamp = 10 in the above equation, it is the same as putting 3.16 (≈ √10) in the procedure I showed you above.

gain is e/ADU for your gain setting.

offset is very unfortunate naming. It is not the offset that you use in your drivers, and it is not the offset that we measured in electrons from the bias - it is the ADU value from the bias, already divided by 4. This is the problematic part of the equation. As written, it is very ambiguous what type of value you should use, and in fact it is probably the worst type of value to use, as it is "in between steps".

The last bit is the calculation of that multiplicative constant for ADU values: 12-bit camera = 16, 14-bit camera = 4, 16-bit camera = 1.

It is bit manipulation and there are several ways to write it down - all equal.

You can write it down as 2^(16-camera_bits), for example. That is the notation I would use:

2^(16-16) = 2^0 = 1 (for 16bit camera)

2^(16-14) = 2^2 = 4 (for 14bit camera)

2^(16-12) = 2^4 = 16 (for 12bit camera)

In fact - 2^16 / 2^camera_bits = 2^16 * 2^-camera_bits = 2^(16-camera_bits) - it is same expression if you rearrange it a bit.

In any case, I would write above equation like this:

ADU = ((read_noise * swamp)^2 / gain) * (2^(16-camera_bits)) + offset

It is same type of equation but swamp and offset have slightly different meaning.

Swamp here is 3.16 or 5 (as I explained above, since it is "inside the square root"). Gain is of course the e/ADU value for your camera gain, and offset is simply the measured mean ADU value of a bias sub (without any division or anything - just the straight mean of pixel values as produced by the capture app).

Let's run that equation on above data to see what we will get:

ADU = ((1.5 * 5)^2 / 1 e/ADU) * 4 + 2800 = 56.25 * 4 + 2800 = 225 + 2800 = 3025

You need to measure 3025 on your sub - the median value of a patch of background.
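For clarity, here is that rearranged equation as a small Python sketch (function and parameter names are mine; the values are the ones from this thread):

```python
def target_background_adu(read_noise_e, swamp, gain_e_per_adu, camera_bits, offset_adu):
    """Target median background ADU as reported by a 16-bit capture app.

    swamp is the "inside the square root" factor (3.16 or 5);
    offset_adu is the straight mean ADU of a bias sub.
    """
    signal_e = (read_noise_e * swamp) ** 2   # required background signal in electrons
    pad = 2 ** (16 - camera_bits)            # 16-bit padding factor (4 for a 14-bit camera)
    return signal_e / gain_e_per_adu * pad + offset_adu

# The worked example: 1.5e read noise, swamp 5, unity gain, 14-bit camera, 2800 ADU bias
print(target_background_adu(1.5, 5, 1.0, 14, 2800))  # -> 3025.0
```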

I don't like this approach, as it does not allow you to quickly calculate how much longer your sub needs to be to hit the target. If you measure, say, 2000 and you calculate 3000, it does not mean that you should extend your sub by 3000/2000. Most of both values is offset, which does not scale with time - it needs to be removed first.

What you want to do is calculate the expected background value in electrons, and the measured background value in electrons (after subtracting the offset) - the ratio of the two will be the ratio of your exposure times.

Makes sense?
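That offset-subtracted ratio can be sketched like this (made-up numbers, names mine; the padding factor and e/ADU conversion cancel in the ratio, so they are omitted):

```python
def exposure_ratio(measured_adu, target_adu, offset_adu):
    """How much longer the sub must be: ratio of background *signals*,
    i.e. both values with the (time-independent) offset subtracted first."""
    return (target_adu - offset_adu) / (measured_adu - offset_adu)

# Hypothetical numbers: 2800 ADU offset, measured background 2900, target 3025
print(exposure_ratio(2900, 3025, 2800))  # -> 2.25, i.e. the sub should be 2.25x longer
# The naive ratio 3025/2900 would wrongly suggest only ~4% longer
```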


Your camera is 14-bit, so that bit of the equation is 2^16 / 2^14 = 4. Jon Rista refers to 16 because his example is based on a camera with a 12-bit ADC, so 2^16 / 2^12 = 16 - which for your camera you can ignore.

This can all get confusing because what you see in ASIAir is 16-bit numbers, but what ZWO describe as gain and read noise are based on the camera bit depth, so 14-bit numbers. So if you leave the bias till the end (which you are seeing as a 16-bit number), you can work it out as:

Optimal DN (14 bit) = ((1.5^2 * 10) / 1)  = 22.5

Optimal DN (16 bit) = 22.5 * 4 = 90

But then add your bias back in. The 2800 is already a 16-bit value, so that gives 2800 + 90 = 2890 (so close to your 120s exposure).

If you went with Vlaiv's x5 factor it would be

Optimal DN = ((1.5^2 * 5) / 1) * 4 = 45. Then add your 16-bit bias: 45 + 2800 = 2845 (so close to your 60s exposure)

NB Vlaiv can correct me if I'm wrong, but I think the basic read noise calculation is 1.5^2 = 2.25, then multiply by 5 or 10 (and by 4) - so 45 or 90. Whereas he did 1.5 * 5 = 7.5 and then ^2 = 56.25, which is why we have slightly different numbers.

With regard to x5 or x10 different people will have different views. But it also depends on the situation, so you can try out and see what works for you. With my RASA 11 and no filter, even 10s exposures will normally give me greater than x10 swamp values, and I don't really want to shoot shorter than that until I get a new PC with larger disk and more processing power (it's on order!).  Target may also affect this - I found another post on CN that quotes Chris Woodhouse's book (which I don't have). "He considers 5 X RN^2 and 10X RN^2 as bookends, with the best exposure to be somewhere in the range.  If the target is something with large dynamic range 5 X RN^2 gives better dynamic range.   If the goal is to maximize capturing dim detail, 10 X RN^2."
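The two readings of the formula really do give different numbers, which is easy to check (Python sketch, names mine; 14-bit camera at unity gain, so the 16-bit padding factor is 4):

```python
def dn_16bit(read_noise_e, factor, pad=4, square_the_product=False):
    """DN above bias in 16-bit ADU.

    square_the_product=False: RN^2 * N reading
    square_the_product=True:  (RN * N)^2 reading (vlaiv's worked example)
    """
    base = (read_noise_e * factor) ** 2 if square_the_product else read_noise_e ** 2 * factor
    return base * pad

print(dn_16bit(1.5, 10))                          # -> 90.0,  2800 + 90 = 2890
print(dn_16bit(1.5, 5))                           # -> 45.0,  2800 + 45 = 2845
print(dn_16bit(1.5, 5, square_the_product=True))  # -> 225.0, 2800 + 225 = 3025
```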


4 minutes ago, Pitch Black Skies said:

This is clever, more simple.

Well, either way. Sorry - I realise your last calculation posted is correct too, as you used Vlaiv's calculation that took the bias value back to 14-bit and then multiplied it out again. As long as you're getting to a repeatable value and it gives you confidence that your exposures are in the right area, that's the main thing.

 


12 hours ago, vlaiv said:

You can write it down like this 2^(16-camera_bits) for example. That is the notion I would use

2^(16-16) = 2^0 = 1 (for 16bit camera)

2^(16-14) = 2^2 = 4 (for 14bit camera)

2^(16-12) = 2^4 = 16 (for 12bit camera)

Also a much simpler way to calculate.


29 minutes ago, Pitch Black Skies said:

Is that not for the light pollution swamp?

Yes, using the example of a x5 or x10 swamp factor (you can go in between, of course). I was just highlighting that (1.5^2) * N gives a different result to (1.5 * N)^2.


On 19/04/2022 at 09:21, vlaiv said:

Select a piece of background without much "stuff" in it (few stars is ok, but try not to have nebulosity or galaxies or such) in one of your subs and measure median ADU value (median is used because of any odd star that ends up in selection - it will "ignore" it).

Multiply that value with e/ADU for your gain - in case you are using unity gain - then it is easy - value is 1e/ADU and you don't have to multiply anything - just use median ADU.

Take square root of that number - that is your LP noise.

For this method my LP noise = 57.58

Mean ADU from your analysis: 829 * 4 = 3316

√3316 = 57.58 LP noise

 

In the other method you got LP noise = 130

829 - 699 = 130 LP noise

What am I doing wrong here?

 

Edit: I think I could be mixing LP noise up with LP signal in the second calculation.

Edited by Pitch Black Skies

29 minutes ago, Pitch Black Skies said:

For this method my LP noise = 57.58

Mean ADU from your analysis: 829 * 4 = 3316

√3316 = 57.58 LP noise

In order to use the analysis you quoted, you need a calibrated sub and to measure its background.

The 829 value is the ADU value from an uncalibrated sub, divided by 4 - so the actual measured value is ~3316, but this value contains the offset as well.

If you want to utilize the method you quoted me on, you need to do the following:

Take one of your calibrated subs (calibrated = (sub - dark) / flat, where the flat is normalized to 1). If you are still working with padded numbers, divide it by 2^(16-camera_bits), multiply by e/ADU and then measure.

That way you will get the pure background signal in electrons - and the square root of that will be the LP noise.
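That measurement step could be sketched like this (numpy, names and the example numbers are mine):

```python
import numpy as np

def lp_noise_from_calibrated(patch_adu, gain_e_per_adu=1.0, camera_bits=14):
    """LP noise from a background patch of a *calibrated* sub.

    patch_adu: padded 16-bit ADU values of a star-free background region.
    Median is used so the odd star in the selection is ignored.
    """
    pad = 2 ** (16 - camera_bits)
    background_e = np.median(patch_adu) / pad * gain_e_per_adu  # pure background signal, electrons
    return float(np.sqrt(background_e))                         # noise = sqrt(signal)

# e.g. a patch with a median of 900 padded ADU -> 225 e background -> 15 e LP noise
```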


10 hours ago, vlaiv said:

where flat is normalized to 1

Does it mean just to use 1 flat?

 

For the M101 sub, I have the master dark, master flat and master flat dark.

Shall I calibrate it, upload it and see what figures you can pull from it?

 

What program can I use to measure the background like you did?

The software I use is listed in my signature.

I'm trying to load the file into NINA but having no luck.

Edited by Pitch Black Skies

2 hours ago, Pitch Black Skies said:

Does it mean just to use 1 flat?

No, it just means that max value of the flat is set to 1.

You take your master flat, find the max value and divide the whole flat by that max value (that is the simple approach; a more precise approach would be to select a central region of the flat without vignetting, find the mean or median value and divide by that).
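A numpy sketch of the more precise variant (the centre-crop fraction is an arbitrary choice of mine):

```python
import numpy as np

def normalize_flat(master_flat, centre_fraction=0.25):
    """Normalize a master flat so its unvignetted centre sits at 1.

    Takes the median of a central crop (centre_fraction of each axis)
    and divides the whole flat by it.
    """
    h, w = master_flat.shape
    dh, dw = max(1, int(h * centre_fraction)), max(1, int(w * centre_fraction))
    centre = master_flat[(h - dh) // 2:(h + dh) // 2, (w - dw) // 2:(w + dw) // 2]
    return master_flat / np.median(centre)
```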

2 hours ago, Pitch Black Skies said:

For the M101 sub, I have the master dark, master flat and master flat dark.

Shall I calibrate it, upload it and see what figures you can pull from it?

Could you actually put up those files and I'll walk you thru how to do it?

2 hours ago, Pitch Black Skies said:

What program can I use to measure the background like you did?

I use ImageJ - it is open source, cross platform software for scientific imaging. Works great with fits and has loads of plugins.

The sub that you attached is saved as 32-bit integer and contains negative values. You really want to use 32-bit floating point, and you don't want to scale any pixel values (except for the flat).


3 hours ago, vlaiv said:

Could you actually put up those files and I'll walk you thru how to do it?

Yes, really appreciate that. I will do it this evening.

I don't recall seeing an option to save as 32bit floating point. I think it is just 32bit integer/32bit rational, fit/tif.


1 minute ago, Pitch Black Skies said:

I don't recall seeing an option to save as 32bit floating point. I think it is just 32bit integer/32bit rational, fit/tif.

Hopefully 32bit rational should be the same as 32bit floating point (although strictly speaking, rational numbers can be stored as both float and fixed point).


300s raw sub.FTS

MasterDark_Gain100_300s.tif

MasterDarkFlat_Gain100_10s.tif

MasterOffset_Gain100.tif

MasterFlat_Gain100.tif

I hope these are okay. I never calibrate with the Master Offset, as I've read that the Flat Dark/Master Dark contains that signal.

Edited by Pitch Black Skies

9 minutes ago, Pitch Black Skies said:

I hope these are okay. I never calibrate with the Master Offset, as I've read that the Flat Dark/Master Dark contains that signal.

Almost ok :D

The master bias is not needed here - as you pointed out, the master dark and master flat dark are all that's needed. Those seem to be good, but the light is not.

Can you upload just single raw sub as it came from the camera? This one has been debayered, converted to 32bit floating point format and scaled.


15 minutes ago, vlaiv said:

Can you upload just single raw sub as it came from the camera? This one has been debayered, converted to 32bit floating point format and scaled.

This is straight from the camera to my ASIair Pro.

I think it is the same file that I attached yesterday.

Light_M101_300.0s_Bin1_533MC_gain100_20220402-231650_-10.0C_0002.fit


1 hour ago, Pitch Black Skies said:

This is straight from the camera to my ASIair Pro.

I think it is the same file that I attached yesterday.

Ok, so here is procedure with ImageJ.

First - you will need debayer / demosaic algorithm. I'll include one simple plugin at the end that you can use.

1. Open each of:

- light

- flat

- dark

- flat dark

and on each of them perform: Image / Type / 32-bit


This will convert each to 32-bit floating point so that we don't lose any precision.

2. Subtract master dark from light by using Process / Image calculator


Select the light as image1, set the operation to Subtract and select the master dark as image2. Make sure you have "32-bit result" ticked and create a new window (otherwise image2 is subtracted from image1 "in place", changing image1 - and you can't undo that).

3. Do the same with master flat and master flat dark - subtract master flat dark from master flat

4. Rename the new images to light and flat (right click / Rename, or F2)

5. Close the others

6. You need to debayer both flat and light and extract only green channel. For this you can use supplied plugin.


Select bayer pattern, select replication and green component. Do this to both images. This will create new images

7. Select central part of flat image where there is no vignetting and do Analyze / Measure:


Note the mean value - in the above case 13257.092. Now remove the selection and apply Process / Math / Divide to the flat, dividing by that value.

8. In the end, produce the final image with Process / Image calculator - divide the light green channel by the flat green channel

9. Divide the resulting image by 4 to get true values rather than values scaled to 16-bit (Process / Math / Divide by 4)

10. Make a selection and measure

In this particular case we measured background value to be ~77.3 (Mean value is ~78, median is ~77.3)

Here are plugins:

Debayer_Image.class

This is a .class file that you can save to your Fiji/ImageJ plugins folder (open the Astro sub-folder and place it there). Be warned - the above is an "executable" and you should not trust executables posted online unless they come from a trusted source. I no longer have the source code, otherwise I would post it as well so you could compile it yourself if you wanted to.

You can also install one of other published "official" ImageJ/Fiji plugins for debayering and use that.
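For reference, the whole procedure above can be mirrored in a few lines of numpy (a sketch only: the debayer/green-extraction step is omitted, so assume mono or already-extracted green data, and the flat is normalized by its whole-frame mean rather than a centre crop):

```python
import numpy as np

def calibrate_and_measure(light, master_dark, master_flat, master_flat_dark, camera_bits=14):
    """Steps 1-10 above, minus debayering: returns the median background in true ADU."""
    light = light.astype(np.float32) - master_dark.astype(np.float32)            # steps 1-2
    flat = master_flat.astype(np.float32) - master_flat_dark.astype(np.float32)  # step 3
    flat /= np.mean(flat)                                                        # step 7: normalize (whole-frame mean here)
    calibrated = light / flat                                                    # step 8: flat-field
    calibrated /= 2 ** (16 - camera_bits)                                        # step 9: undo 16-bit padding
    return float(np.median(calibrated))                                          # step 10: measure
```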

 

 

Edited by vlaiv

9 hours ago, vlaiv said:

Do the same with master flat and master flat dark - subtract master flat dark from master flat

Something's not right here. The master flat doesn't look right. Also, when I subtract the master flat dark from it I end up with a completely white uniform image with no vignetting 🤔


Edited by Pitch Black Skies

9 hours ago, Pitch Black Skies said:

Something's not right here. The master flat doesn't look right. Also, when I subtract the master flat dark from it I end up with a completely white uniform image with no vignetting

ImageJ automatically sets display range after operation on the image.

Flat dark has some level of noise, some hot pixels and so does flat.

These are single files, not masters, so the noise is not averaged out. After the operation the image will contain a large range of values, and if the display "stretch" is set from min to max it will look mostly grey and flat - but if you adjust contrast/brightness (these don't change pixel values in ImageJ; they are not real stretching but rather "screen transfer functions") you will still get a proper image.

Note the distinct three peaks in the histogram - this is normal for a raw colour image that contains all three colour pixels and has not been debayered (each peak is one colour).


2 hours ago, vlaiv said:

Flat dark has some level of noise and some hot pixels, and so does the flat.

These are single files, not masters, so it is not averaged out.

They are a stack of 20 single files each.

Do you think I should increase that to get a more averaged-out master? I'm going by the recommendation of 20 in the book 'Making Every Photon Count'.


What does read noise look like in a sub? I know I'm going to get jumped on for this, but CMOS looks so much noisier than my CCD - and considering I don't use calibration frames, and my CCD has a read noise far higher than a CMOS, I'm interested in what it looks like in a sub or in a stack.

I know what dark current and bias frames look like, so what does read noise look like?

