Tip of the day: Dither and dither often



I thought I'd share some findings on the benefits of dithering.

It is a well known thing that hot pixels and fixed pattern noise (FPN) benefit from dithering, but while thinking about calibration procedures I realized that there is a significant overall benefit to dithering regardless of FPN.

This might also be a well known thing, but I have not come across it so far and decided to write it up.

First I'm going to talk about the "standard" calibration procedure and why bias frames are unnecessary, and then explain the general benefits of dithering.

Standard calibration goes like this:

- light subs, dark subs, flat subs, bias subs, flat dark subs

master bias = avg(bias subs)
master flat dark bias removed = avg(flat dark sub - master bias)
master flat = avg(flat sub - master flat dark bias removed - master bias)
master dark bias removed = avg(dark subs - master bias)
calibrated light = (light sub - master bias - master dark bias removed) / master flat
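The pipeline above can be sketched in NumPy. All numbers here are hypothetical stand-ins for real subs - a tiny 4x4 "sensor" with made-up signal levels and noise:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)  # tiny "sensor" for illustration

# Hypothetical stand-ins for real subs: rng.normal(mean level, noise stddev, shape)
bias_subs      = [rng.normal(500, 8, shape) for _ in range(16)]
flat_dark_subs = [rng.normal(502, 8, shape) for _ in range(16)]
dark_subs      = [rng.normal(520, 9, shape) for _ in range(16)]
flat_subs      = [rng.normal(30000, 60, shape) for _ in range(16)]
light_sub      =  rng.normal(1520, 12, shape)

master_bias = np.mean(bias_subs, axis=0)
master_flat_dark_bias_removed = np.mean([fd - master_bias for fd in flat_dark_subs], axis=0)
master_flat = np.mean([f - master_flat_dark_bias_removed - master_bias for f in flat_subs], axis=0)
master_flat /= np.mean(master_flat)   # normalise so dividing preserves flux
master_dark_bias_removed = np.mean([d - master_bias for d in dark_subs], axis=0)

calibrated_light = (light_sub - master_bias - master_dark_bias_removed) / master_flat
print(calibrated_light.mean())  # ≈ 1520 - 520 = 1000, i.e. the light signal only
```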

 

Ok, now I would like to show that using bias adds nothing of value and actually cancels out.

First let's look at "ingredients" of bias sub and dark sub:

bias sub = bias signal + bias noise
dark sub = bias signal + bias noise + dark signal + dark noise

Let's do the same for master bias and master dark:

master bias = bias signal + "reduced" bias noise

(this reduced noise is due to averaging when making master file - average of signal is signal, average of noise is "lesser" noise)

master dark bias removed = avg((bias signal + bias noise + dark signal + dark noise) - master bias) = bias signal + dark signal + "reduced" bias noise + "reduced" dark noise - master bias

bias signal, dark signal and master bias are "constants" and average to themselves. Let's do a bit more substitution:

master dark bias removed = bias signal + dark signal + dark "reduced" bias noise + "reduced" dark noise - (bias signal + bias "reduced" bias noise) = dark signal + dark "reduced" bias noise + "reduced" dark noise - bias "reduced" bias noise

I've added the prefixes "bias" and "dark" to the respective reduced bias noise terms because they are different in value and can't cancel out! (Both are random and independent of each other - the first is generated when shooting the bias files, the second when shooting the darks - no relation.)

For the time being let's put aside flat correction, but same logic applies there as well, and look at light sub composition:

light = light signal + light signal noise + bias signal + bias noise + dark signal + dark noise

Let's calibrate this light with "standard" method (omitting flats):

calibrated light = light signal + light signal noise + bias signal + light bias noise + dark signal + light dark noise - master bias - master dark bias removed
calibrated light = light signal + light signal noise + bias signal + light bias noise + dark signal + light dark noise - (bias signal + bias "reduced" bias noise) - (dark signal + dark "reduced" bias noise + "reduced" dark noise - bias "reduced" bias noise)
calibrated light = light signal + light signal noise + light bias noise + light dark noise - dark "reduced" bias noise - "reduced" dark noise

Looking at the calibrated light frame, the only signal left inside is the light signal; all other terms are noise terms - so this is good calibration. The important thing to note is that none of the noise terms has a "bias" prefix: no term originating in the bias files is present in the calibrated light. No need for bias at all!

Let's look at alternative calibration method:

master flat dark = avg(flat dark sub)
master flat = avg(flat sub - master flat dark)
master dark = avg(dark subs)
calibrated light = (light sub - master dark) / master flat

And quickly go through the components and equations (again omitting the flat part, but in essence it is the same thing):

dark sub = bias signal + bias noise + dark signal + dark noise
master dark = avg(bias signal + bias noise + dark signal + dark noise) = bias signal + dark signal + dark "reduced" bias noise + "reduced" dark noise
light = light signal + light signal noise + bias signal + light bias noise + dark signal + light dark noise
calibrated light = light signal + light signal noise + bias signal + light bias noise + dark signal + light dark noise - (bias signal + dark signal + dark "reduced" bias noise + "reduced" dark noise)
calibrated light = light signal + light signal noise + light bias noise + light dark noise - dark "reduced" bias noise - "reduced" dark noise

We've got exactly the same components in calibrated light with second method of calibration without using bias subs at all!
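A quick numerical check of this claim: simulate the same raw subs (hypothetical signal levels) and calibrate both ways. The master bias cancels out of method 1 algebraically, so the two results agree to floating-point precision:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (8, 8)
bias_subs = [rng.normal(500, 8, shape) for _ in range(16)]   # bias signal + read noise
dark_subs = [rng.normal(520, 9, shape) for _ in range(16)]   # bias + dark signal + noise
light     =  rng.normal(1520, 12, shape)                     # uncalibrated light sub

# Method 1: "standard" calibration with a master bias
master_bias = np.mean(bias_subs, axis=0)
master_dark_bias_removed = np.mean([d - master_bias for d in dark_subs], axis=0)
cal1 = light - master_bias - master_dark_bias_removed

# Method 2: plain master dark, no bias frames at all
master_dark = np.mean(dark_subs, axis=0)
cal2 = light - master_dark

print(np.allclose(cal1, cal2))  # True - the master bias cancels out exactly
```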

Now why is this important and what does it have to do with dithering? Well, the important fact to realize is that the bias noise in the final image is not coming from the bias subs at all - it is coming from the dark subs!

So the first conclusion is: take as many dark frames as possible and don't use bias files - they don't matter if you do standard calibration (they have their purposes, but those are not important for this discussion).

Second thing is related to dithering, and we will now examine that.

Let us first examine the ideal case without dithering - subs are exactly aligned (pixel perfect), with no need for additional alignment. We will use simple mean (average) stacking in the calculations.

image = avg(calibrated light)
image = avg(light signal + light signal noise + light bias noise + light dark noise - dark "reduced" bias noise - "reduced" dark noise)

Now it is important to notice that dark "reduced" bias noise term and "reduced" dark noise term are same for each calibrated frame and are "constant" in relation to averaging. So we can move them outside avg brackets. Thus we have:

image = avg(light signal + light signal noise + light bias noise + light dark noise) - (dark "reduced" bias noise + "reduced" dark noise).

This is very important conclusion. It turns out that by calibrating with master dark we are in effect "polluting" final image with certain level of "reduced" bias noise and "reduced" dark noise. In modern cooled cameras dark noise is very small, thus "reduced" dark noise is going to be even smaller and won't impact image much, but "reduced" bias noise is still important term.

This is particularly important with CCD cameras that have high read noise - in the 6-9e range, and some models even more. Take for example a case where you have 8e read noise and you take only 16 dark frames. You will end up adding 2e (8e / sqrt(16)) of "reduced" read noise back into the image when calibrating. And it does not even matter how many light subs you stack - you will add those same 2e worth of noise back into the image! This is of course provided that the frames are perfectly aligned.
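The 2e figure can be verified with a small Monte Carlo, using the assumed values from the example (8e read noise, 16 dark frames):

```python
import numpy as np

read_noise, n_darks = 8.0, 16
rng = np.random.default_rng(2)

# Simulate the same master-dark pixel 100,000 times: mean of 16 darks, 8e read noise each
master_pixel = rng.normal(0.0, read_noise, (100_000, n_darks)).mean(axis=1)

print(master_pixel.std())  # ≈ 2.0e = 8e / sqrt(16), injected into every calibrated sub
```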

Let's examine other edge case - full dithering, meaning that each light sub is offset from all others in such way that no pixel aligns to exact same pixel in any of the subs. We still have this equation:

image = avg(light signal + light signal noise + light bias noise + light dark noise - dark "reduced" bias noise - "reduced" dark noise)

But the important thing to notice now is that you can no longer pull the dark "reduced" bias noise and "reduced" dark noise terms out of the brackets. They are only "pixel wise" constant - meaning the noise terms are the same for a given pixel of the master dark, but if you compare two different pixels in the master dark you will get two different values for the mentioned terms. Since noise is random in nature, it will also behave as random noise when you compare different pixels.

When you average noise - and in the above formula you are averaging master dark noise not across different frames but across different pixels of the master dark frame - you reduce it by a factor of the square root of the number of averaged samples.

So let's get back to the previous example. Suppose you are doing an NB (narrowband) image and you've taken 16 ten-minute subs, but you dithered between each two subs. As in the previous example, you have a CCD with 8e read noise and you took 16 dark frames. In this case you end up "polluting" the final image not with 2e worth of read noise, but with 0.5e (2e / sqrt(16)) worth of read noise! Quite an improvement, and the only thing responsible for it is the fact that each sub was dithered.
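Both edge cases can be simulated, assuming the master dark carries 2e of residual noise per pixel as in the example; here np.roll stands in for an ideal dither that shifts each sub by a whole pixel:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_subs = 100_000, 16
master_dark_noise = rng.normal(0.0, 2.0, n_pix)  # 2e residual noise in the master dark

# No dither: every sub subtracts the SAME master-dark pixel, so it survives the average
no_dither = master_dark_noise

# Full dither: after alignment each stacked pixel met 16 DIFFERENT master-dark pixels
dither = np.mean([np.roll(master_dark_noise, k) for k in range(n_subs)], axis=0)

print(no_dither.std(), dither.std())  # ≈ 2.0 vs ≈ 0.5 = 2 / sqrt(16)
```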

Bottom line is that one can improve SNR (in some cases considerably) just by dithering, even if no FPN is present in the image. Dither, dither often! :D

 

 

 

 


Yes, that is one usage of a bias frame, but you don't need a bias frame to scale a dark frame if you have two sets of dark frames with different exposures :D

Take for example that you have 2 minute master dark and 1 minute master dark and you need to create 3 minute master dark.

3_minute_master_dark = 1_minute_master_dark + 2 * (2_minute_master_dark - 1_minute_master_dark)

Bias can be considered 0_minute_master_dark, so it can also be used to do similar calculation ...
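With made-up, noise-free numbers, one self-consistent way to synthesise a 3-minute master dark from 1- and 2-minute masters (the bias cancels in the difference, leaving pure dark current to rescale):

```python
# Made-up, noise-free numbers: a master dark is bias + exposure * dark_rate
bias, dark_rate = 500.0, 10.0            # e- offset, e- per minute (hypothetical)
d1 = bias + 1 * dark_rate                # 1-minute master dark
d2 = bias + 2 * dark_rate                # 2-minute master dark

dark_per_minute = d2 - d1                # bias cancels, leaving pure dark current
d3 = d1 + 2 * dark_per_minute            # synthesised 3-minute master dark

print(d3 == bias + 3 * dark_rate)        # True
```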

But like I've mentioned, bias does have some uses (bad pixel map, dark frame scaling, it can replace the master flat dark if the sensor has particularly low dark current and the flat exposure is very short...) but as I've shown, it is not needed in the "standard" calibration scheme.

 


I am going to be watching this with great interest as I know there will be a long discussion here (which I will understand 0.01% of )

Over past couple of years I have determined that the taking and use of flats/darks/bias leads to interesting discussion.


4 minutes ago, iapa said:

I am going to be watching this with great interest as I know there will be a long discussion here (which I will understand 0.01% of )

Over past couple of years I have determined that the taking and use of flats/darks/bias leads to interesting discussion.

I know that most of the post was about calibration, but that was just to emphasize the source of noise that is injected during calibration.

The main "message" of the post is that one should dither, and that dithering helps with SNR even when no hot pixels / fixed pattern noise are present in the camera.

Most of the math is there to help one understand and calculate the impact of no dither vs dither in any given setup (target brightness, LP, read noise, dark noise ... calibration-injected noise is often overlooked in SNR calculations, and now it even depends on the number of dither moves vs the total number of frames :D ).


1 hour ago, michael8554 said:

This is the reverse of another method promoted here on SGL - dither, flats, and bias as darks (no long duration darks) ?

I'd  be interested in your maths for that method and your conclusions.

Michael 

That is simply wrong calibration. It can work for imaging provided that:

- your sensor has even and very small dark current across the surface (no amp glow) and no hot pixels, or hot pixels get removed by either cosmetic correction or dithering + sigma clip stacking.

- and your flats are "flat" - no significant dust and vignetting

Dark current has the same impact on the image as LP (light pollution). LP can be removed via histogram manipulation (levels), provided there are no gradients in the image - it will be a simple DC offset in the signal. The same goes for dark current if it is DC in nature - it is easy to remove.

If your exposure length is long and the dark current is at a significant level, you will however run into trouble when applying flats. Applying flats assumes that all signal in the image came from light as a source and thus is subject to vignetting and any shadows in the optical train, like dust or the OAG or whatever. But if you calibrate with bias only, the actual signal in the image will be light signal + dark signal. In this case your flats will over-correct.

Here is an example:

Let's take 2 pixels, one that receives 60% of the light and one that receives 100%. And let's assume that each corresponds to the same light level from the target (10e in this example).

Case 1 (no dark current in image, regular calibration so dark signal is removed):

Pixel 1 (should receive 10e) but receives 6e due to vignetting (60%).

Pixel 2 receives 10e (full 100%).

Flat would look like: 0.6, 1.0

So we divide each pixel with flat value and end up with: 10e, 10e - this is what one would expect for two pixels receiving same amount of light.

Ok, let's now look at case where there is additional 1e of dark signal left in image because of bias only calibration.

Pixel 1 is now at 7e (10e * 60% + 1e from dark current)

Pixel 2 is now at 11e (10e + 1e from dark current)

We again apply flats (same flats because flats tell us relative amount of light falling on each pixel, it does not know anything about dark signal):

Pixel 1 will be 7 / 0.6 = 11.66666

Pixel 2 will be 11 / 1.0 = 11

So flats over-corrected Pixel 1 (one that was darker) to higher value than it needs to be.
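The worked example above in a few lines of Python, with the same hypothetical numbers (60%/100% vignetting, 10e target signal, 1e residual dark signal):

```python
vignette = [0.6, 1.0]      # fraction of light reaching each pixel
signal = 10.0              # e- from the target, same for both pixels
dark_leftover = 1.0        # e- of dark signal left in by bias-only calibration

flat = vignette            # a normalised flat records exactly these fractions

proper    = [(signal * v) / f for v, f in zip(vignette, flat)]
bias_only = [(signal * v + dark_leftover) / f for v, f in zip(vignette, flat)]

print(proper)     # both pixels ≈ 10.0 - correct
print(bias_only)  # ≈ [11.67, 11.0] - the vignetted pixel is over-corrected
```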


Thanks Vlaiv

So if I  have understood your explanation (by no means a given......), there will be an approx 6% error in the correction of vignetting and bunnies ?

How noticeable is that, I haven't  particularly noticed overcorrection in my results ?

And not having to take hours of darks each night is a big advantage to me.

Michael 


1 hour ago, michael8554 said:

Thanks Vlaiv

So if I  have understood your explanation (by no means a given......), there will be an approx 6% error in the correction of vignetting and bunnies ?

How noticeable is that, I haven't  particularly noticed overcorrection in my results ?

And not having to take hours of darks each night is a big advantage to me.

Michael 

Hm, that was just an example, so it can be less than that or more than that. It depends on the level of vignetting and the strength of dark current vs the strength of signal in your sub.

Like I've already mentioned, one can get away with such calibration if dark frame is even (no amp glow), dark current is low and vignetting is not so bad (flat frame is "flat").

But do remember that post-processing step involves non linear transform of pixel values and small differences in linear signal can end up as large differences when image is stretched.

One more thing to note, all above discussion is intended for set point cooling cameras where one can control dark frames. If you have uncooled sensor, then you probably want to either:

- do calibration with bias instead of darks

- take both darks and bias files and do dark frame scaling (this is the option I would choose; in this case you can have a dark library composed of master darks at different temperatures and choose the one that is closest to the "working" temperature for the night, so you don't have to shoot them every session - because even if you take them each time, odds are they will be at the wrong temp).

In both cases you will end up with less than perfect calibration, but given the fact that you are using uncooled sensor - "perfect" calibration does not make much sense since you can't control things.

But the main point of the original post stands regardless of the chosen calibration method. Dithering will help reduce the bias and dark current noise one injects back into the image regardless of calibration method (whether you are using bias files or dark files to calibrate, you are still adding them to the light subs, or subtracting - but noise doesn't care if it has a -1 multiplier, because of its random nature).

To reiterate the point. If you guide well and your resolution is such that pixels don't move in the alignment process, you will be stacking pixels that are each calibrated with the same dark frame pixel value (whether it is bias or true dark does not matter). Because this value is "constant" in relation to the average, it is the same as if you took the average of uncalibrated pixels and removed the dark pixel afterwards.

If you dither, each pixel in the stack will be calibrated with a different pixel from the dark (dark, bias or whatever you choose), and you can't move them outside the average - but the averaging process itself will have a beneficial impact on the noise in the "dark" calibration value: it will lower it.

 


Hi Vlaiv,

Great work and understanding of electronic noise and signal to understand the benefits of dithering. Unfortunately running a "double rig" dithering is out of the question. Until someone far cleverer than me compiles a piece of software capable of dithering a pair of imaging scopes.

Steve


On 05/02/2018 at 00:32, vlaiv said:

To reiterate the point. If you guide well and your resolution is such that pixels don't move in the alignment process, you will be stacking pixels that are each calibrated with the same dark frame pixel value (whether it is bias or true dark does not matter). Because this value is "constant" in relation to the average, it is the same as if you took the average of uncalibrated pixels and removed the dark pixel afterwards.

I am never sure I completely believe this argument (there was a long thread on Cloudy Nights about it some years ago) as you appear to be getting something for nothing. My suspicion is that to work the noise would have to be truly random and if it is not you end up with correlated pixels in the final image (but I have never done the maths to prove it!).  But if the noise was truly random you could just replace it by a single value over the whole frame anyway!

NigelM


On 2/6/2018 at 23:30, sloz1664 said:

Hi Vlaiv,

Great work and understanding of electronic noise and signal to understand the benefits of dithering. Unfortunately running a "double rig" dithering is out of the question. Until someone far cleverer than me compiles a piece of software capable of dithering a pair of imaging scopes.

Steve

Yes, for the time being, when using a pair of scopes the only option is to sync frames "by hand" and dither manually, but that would mean babysitting the rig the whole night.

It is indeed a pity no software is capable of doing multiple exposures with multiple cameras in sync. How do you handle the dual scope? Two instances of imaging software, or two computers?

5 hours ago, dph1nm said:

I am never sure I completely believe this argument (there was a long thread on Cloudy Nights about it some years ago) as you appear to be getting something for nothing. My suspicion is that to work the noise would have to be truly random and if it is not you end up with correlated pixels in the final image (but I have never done the maths to prove it!).  But if the noise was truly random you could just replace it by a single value over the whole frame anyway!

NigelM

You don't get something for "nothing". You are indeed "correlating" dark current noise and read noise (not the signals - those get removed by calibration), and if the sensor is "well behaved", that sort of noise is truly random, so you are effectively binning the dark frame, in a sense.

Maybe some sort of diagram would help to visualize what is going on.

Let's take following scenario

no_signal, no_signal, star, no_signal - this is the uncalibrated light consisting of four pixels: three contain no signal at all, one has some signal in it - a star.

Now let's look at master dark:

dark_1, dark_2, dark_3, dark_4 - four distinct values of dark. In this simple case dark_1 is dark_signal1 + dark_noise1 + bias_signal1 + bias_noise1, but let's call it just dark_signal1 + noise1.

If we calibrate light frame, and do simple notation we will end up with:

0 + noise1, 0 + noise2, star + noise3, 0 + noise4 sort of pattern for the light frame. This is simplified notation: we are not concerned with the dark noise, bias noise, shot noise and LP noise of the light frame itself, just some signal and the injected noise from dark calibration - hence the 0 and star notation for "light signal" and the noiseN notation for the noise in the nth pixel of the dark frame.

case 1: no dither ideal case, we stack some calibrated subs and it looks like this:

0 + noise1, 0 + noise2, star_sub1 + noise3, 0 + noise4
0 + noise1, 0 + noise2, star_sub2 + noise3, 0 + noise4
0 + noise1, 0 + noise2, star_sub3 + noise3, 0 + noise4
.....

 

Let's examine the third pixel and its average. It will be like this: (star_sub1 + noise3 + star_sub2 + noise3 + star_sub3 + noise3) / 3 - if we take and stack 3 subs. Important to note: each sub has its own light signal in it (hence star_sub1, star_sub2, ...) but the noise for each pixel is the exact same noise for that pixel injected by calibration. So

(star_sub1 + noise3 + star_sub2 + noise3 + star_sub3 + noise3) / 3 = (star_sub1 + star_sub2 + star_sub3 + 3 * noise3) / 3 = (star_sub1 + star_sub2 + star_sub3) / 3 + noise3. That is what I mean by a term being constant with regard to the average - you can pull it out of the avg brackets.

Case 2: ideal dither, in each sub star is in different position:

0 + noise1, star_sub1 + noise2, 0 + noise3, 0 + noise4
0 + noise1, 0 + noise2, star_sub2 + noise3, 0 + noise4
0 + noise1, 0 + noise2, 0 + noise3, star_sub3 + noise4

But when we align subs, for star pixel we will have one column like this:

                                  0 + noise1, star_sub1 + noise2, 0 + noise3, 0 + noise4
                 0 + noise1, 0 + noise2, star_sub2 + noise3, 0 + noise4
0 + noise1, 0 + noise2, 0 + noise3, star_sub3 + noise4

Now let's look at the term for star:

(star_sub1 + noise2 + star_sub2 + noise3 + star_sub3 + noise4) / 3

And here it is! We can no longer pull the noise term out of the brackets because it is no longer the same value, so we need to add the terms the way noise terms are added (independent noise terms - and we assume random residual noise; it most certainly is random for dark current shot noise, and bias / read noise in most sensors is also very random).

(star_sub1 + noise2 + star_sub2 + noise3 + star_sub3 + noise4) / 3 = (star_sub1 + star_sub2 +  star_sub3 + sqrt(noise2^2 + noise3^2 + noise4^2)) / 3 = (star_sub1 + star_sub2 + star_sub3) / 3 + sqrt(noise2^2 + noise3^2 + noise4^2) / 3

So if you don't dither, each pixel in every sub will be injected with the same constant noise term, and that is just adding the dark frame noise term to the average for that pixel. But if you dither, each aligned pixel will not get the exact same value in each sub - each sub will have a "randomly selected" noise value from the dark - and those add like normal noise terms.
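The two cases above can be checked numerically: averaging the same noise value three times vs averaging three independent values (sigma = 1 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
trials, n_subs, sigma = 100_000, 3, 1.0

# No dither: the SAME master-dark noise value (noise3) hits the star pixel in every sub,
# so (3 * noise3) / 3 is just noise3 again - no reduction
no_dither = rng.normal(0.0, sigma, trials)

# Dither: three INDEPENDENT master-dark pixels (noise2, noise3, noise4) after alignment
dither = rng.normal(0.0, sigma, (trials, n_subs)).mean(axis=1)

print(no_dither.std(), dither.std())  # ≈ 1.0 vs ≈ 1/sqrt(3) ≈ 0.577
```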

On the topic of correlation: with this you do indeed correlate the noise terms from the darks, but since these are random, it is like regular stacking, or binning by an integer factor (well, not really, unless you force the software to actually shift-and-add instead of doing sub pixel alignment).

With modern imaging software this already happens somewhat even when not dithering. There is sub pixel alignment accuracy, and every image except the reference gets correlation between adjacent pixels because of the subpixel shift. This correlates both signal and noise. On the other hand, the signal is already quite correlated because it was blurred by a close-to-gaussian PSF.

 

 


You lost me at "(this reduced noise is due to averaging when making master file - average of signal is signal, average of noise is "lesser" noise)".

That's absolutely not how it works. Noise is noise, you can't 'average' it away. What happens when you stack any type of frame is that the noise increases as the square root of signal. So by stacking frames, signal increases faster that noise, but they both increase.  All explained here:

https://www.blackwaterskies.co.uk/2013/09/pixinsight-dslr-workflow-part-1a-bias-frames/


2 hours ago, IanL said:

You lost me at "(this reduced noise is due to averaging when making master file - average of signal is signal, average of noise is "lesser" noise)".

That's absolutely not how it works. Noise is noise, you can't 'average' it away. What happens when you stack any type of frame is that the noise increases as the square root of signal. So by stacking frames, signal increases faster that noise, but they both increase.  All explained here:

https://www.blackwaterskies.co.uk/2013/09/pixinsight-dslr-workflow-part-1a-bias-frames/

The difference is between average and sum - it means the same thing for SNR.

The sum of noise terms is like the sum of linearly independent vectors, each having a magnitude equal to the stddev of the noise.

So the sum of noise terms would be sqrt(noise1^2 + noise2^2 + noise3^2 + ....) - the square root of the sum of squares.

The sum of signal is just n * signal, or signal1 + signal2 + signal3 + ..... (we assume that the signal is always the same - target brightness does not vary with time or between frames).

The average is the above terms divided by n.

So in case of signal, it will be n * signal / n - or just signal (n cancels out), this is what I mean by average of signal is signal.

In case of noise we will have sqrt(noise1^2 + noise2^2 + noise3^2 + ....) / n.

If noise1, noise2, noise3, ... all represent random noise of the same intensity (stddev of samples), then when calculating the magnitude of the resultant noise we can substitute the same magnitude for each term (the direction of each vector is perpendicular to all the others in n-dimensional space).

This leads to following expression:

sqrt(noise^2 + noise^2 + noise^2 ....)/n = sqrt(noise^2 * n) / n = noise * sqrt(n)/n = noise / sqrt(n)

so the resulting noise magnitude is smaller by a factor of 1/sqrt(n) than the magnitude of each of the components. This is the same as saying that SNR is increased by a factor of sqrt(n) when stacking n samples.

That is the second part of the sentence - average of noise is "lesser" noise. Or to be precise: the average of noise of a certain magnitude is noise of magnitude original magnitude / sqrt(number of averaged samples), provided that each sample has the same noise magnitude.
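Both statements can be verified with a quick simulation (arbitrary choice of n = 25 samples with stddev 3):

```python
import numpy as np

rng = np.random.default_rng(5)
n, sigma, trials = 25, 3.0, 200_000
samples = rng.normal(0.0, sigma, (trials, n))

summed   = samples.sum(axis=1)   # noise of a sum grows as sqrt(n)
averaged = samples.mean(axis=1)  # noise of an average shrinks as 1/sqrt(n)

print(summed.std())    # ≈ 3 * sqrt(25) = 15
print(averaged.std())  # ≈ 3 / sqrt(25) = 0.6
```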

 


Hi Vlaiv

It is indeed a pity no software is capable of doing multiple exposures with multiple cameras in sync. How do you handle the dual scope? Two instances of imaging software, or two computers?

I run and OAG-guide the main scope using SGPro. With the second scope I just capture images through MaximDL. All on one PC.

Steve


2 hours ago, Stub Mandrel said:

Observation...

DSS used division for flats and subtraction for darks.

Subtracting bias from flats before division means not using bias gives a different result?

Flats are prepared in following way:

master flat dark = avg(flat dark)

master flat = avg(flat sub - master flat dark)

Each flat sub consists of the following signals: light signal, dark current signal (however small that may be - some use 0.3s flats, some use 1.5s flats, ...) and bias signal

A flat dark consists of: dark current signal (again, however small) and bias signal

Subtract those two and you are left with the light signal only - and that is the point of calibration: you want your lights to contain light signal only, and your flats to contain "light" signal only (the one coming from the flat screen/box), because all "flatness" artifacts come from a certain percentage of light being lost, so we need only the light components to correct for that. Additionally, you only want light signal in your final stack - we are capturing light from stars and we don't want anything else in the image.

When you create a flat with bias subtraction only, there is a bit of residual dark signal left in it (however small that may be) that is going to throw off the results of applying flats.

So to answer your question about different results when using bias on flats:

If you use bias, flat darks and flats like this: master flat dark = avg(flat dark - master bias), master flat = avg(flat - master bias - master flat dark), it gives exactly the same result as the following: master flat dark = avg(flat dark), master flat = avg(flat - master flat dark) - and both produce a correct result. But those two differ from a third way: master flat = avg(flat - master bias), which is wrong calibration, because it removes the bias signal from the master flat but leaves in the flat dark signal (however small it may be).
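A numerical check of all three variants, with hypothetical levels (500e bias, ~10e of dark current in the flat darks):

```python
import numpy as np

rng = np.random.default_rng(6)
shape = (8, 8)
bias_subs      = [rng.normal(500, 5, shape) for _ in range(16)]    # bias only
flat_dark_subs = [rng.normal(510, 5, shape) for _ in range(16)]    # bias + ~10e dark
flat_subs      = [rng.normal(20510, 40, shape) for _ in range(16)]

master_bias      = np.mean(bias_subs, axis=0)
master_flat_dark = np.mean(flat_dark_subs, axis=0)

# Variant 1: flat darks with bias removed, bias subtracted from flats too
mfd_bias_removed = np.mean([fd - master_bias for fd in flat_dark_subs], axis=0)
flat_a = np.mean([f - master_bias - mfd_bias_removed for f in flat_subs], axis=0)

# Variant 2: plain flat darks, no bias anywhere
flat_b = np.mean([f - master_flat_dark for f in flat_subs], axis=0)

# Variant 3: bias only - leaves the flat-dark current in (wrong)
flat_c = np.mean([f - master_bias for f in flat_subs], axis=0)

print(np.allclose(flat_a, flat_b))  # True  - identical results
print(np.allclose(flat_b, flat_c))  # False - ~10e of dark current remains
```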


On 2/7/2018 at 00:30, sloz1664 said:

Hi Vlaiv,

Great work and understanding of electronic noise and signal to understand the benefits of dithering. Unfortunately running a "double rig" dithering is out of the question. Until someone far cleverer than me compiles a piece of software capable of dithering a pair of imaging scopes.

Steve

I need to get up early so I will only pop in for this. Multiple instances of APT can be synchronized together (each with its own camera), and one can be configured as master to control the dither; the other instances will act as slaves. Unfortunately you need to dither after each frame, which I don't want when I shoot short exposures. However, when shooting short exposures I can run 2 unsynchronized instances, dither less often and lose one short frame on one camera. When shooting long exposures, it probably makes sense to dither after each one.

