
Calibration frames with ASI1600MM Cooled



3 minutes ago, ollypenrice said:

(Actually I hope to have a CMOS camera coming soon....)

 

What? Change? At your age??? (I don't know your age, actually - I'm just chiming in.)



@andrew s

Here are my results:

Settings: offset 48, unity gain (139), USB speed 64, -19°C. (It's becoming too hot to reach -20°C; I need to switch to summer darks at -15°C. I did actually try -20°C, but the cooling could not keep up, so the 10s darks ended up at -19.5 to -19.1°C and all the others at -19°C - this had no influence on the results, as you will see.)

Capture software: SGP lite

Analysis software: ImageJ

Data: I took 16 each of bias, 10s, 20s, 30s, 40s, 60s, 90s and 120s darks. I also collected one additional set of bias files (also x16) with a difference: instead of selecting Bias frame capture in SGP, I selected Dark frame capture and set the duration to 0.005s (and got an interesting result).

The results are different from my previous measurements, but I have changed drivers in the meantime (which again puts me in doubt about this being a manufacturer issue and rather suggests it might be a driver issue - I know ZWO did fiddle with driver settings in the past, trying to minimize/remove amp glow).

Methodology: for each sub I measured the mean pixel value, and for each set of subs (grouped by exposure time) I calculated the mean and standard deviation. I include a table of results as well as two graphs showing that bias is unusable (at least with my camera sample, the above settings and the latest drivers).
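This methodology can be sketched in a few lines of Python. The per-group values below are made up for illustration (they are not the measurements from the post); the fit recovers an intercept (bias level) and slope (dark current rate) from the group means:

```python
from statistics import mean, stdev

# Hypothetical per-sub mean ADU values, grouped by exposure time in seconds
# (made-up numbers for illustration, not the actual measurements)
groups = {
    10: [46.9, 47.0, 46.8, 47.1],
    60: [49.5, 49.6, 49.4, 49.5],
    120: [52.7, 52.8, 52.6, 52.7],
}

# mean and standard deviation for each exposure group
stats = {t: (mean(v), stdev(v)) for t, v in groups.items()}

# least-squares linear fit: mean ADU = slope * exposure + intercept;
# the intercept estimates the bias level, the slope the dark current rate
xs = list(groups)
ys = [stats[t][0] for t in xs]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(round(intercept, 2))  # fitted bias level for these toy numbers
```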

[Image: table of mean ADU and standard deviation per exposure group]

The last column is a "special" bias taken at 0.005 seconds (it really should be no different from a regular bias) - but look at the difference in mean ADU (or electrons, since this is unity gain): the first one is 45.86 while the second one is 45.26 (the error is much smaller than the difference, so this is a genuine difference).

Here is the graph with bias included (the first, regular one; the second is even lower):

[Image: graph of mean dark level vs exposure time, bias point included]

(Those little thingies on the circles are error bars :D - so the error here is way too small to cause a wrong linear fit.)

Here is the second graph - no bias, just darks with a linear fit:

[Image: graph of darks only, with linear fit]

Ok, now this indeed looks like a good linear fit (R² value very close to 1) - but note something: the Y intercept is given as 46.36. This is the level the bias should be at, yet if we look at the two bias values I've measured, both are less than that by a fair amount (45.86 and 45.26).

Bottom line: whenever I measured bias and its usability, I got results showing that bias is not usable. Whether this is due to the sensor itself, the drivers, or something specific to my camera sample - that I don't know.


Thanks for the results. I think we should not include the bias point in our plots, e.g. your first plot. The normal way to use bias is to subtract it from the other frames. If you do this, you will get a linear plot similar to your second one, just shifted down.

The intercept should then be close to zero, and we could then use scaled darks if we wanted to.

I will look at short exposure with my camera for bias estimate and see if I get a similar difference.

Estimating the true offset from bias frames is always going to be an approximation, but we are in the 1 to 2 e error range, not dissimilar to the read noise.

We are plotting different errors. You are plotting the deviation in the mean values while I was plotting the deviation in the population that made up the mean.

Do your bias histograms look like mine?

We are getting similar results but perhaps drawing different conclusions. 

Regards Andrew 


12 hours ago, andrew s said:

Do your bias histograms look like mine?

Actually my bias histogram looks rather different from yours (this is a single bias sub):

[Image: histogram of a single bias sub]

Here is the log histogram, just to make it clearer what it looks like:

[Image: same bias histogram, log scale]

Your histogram looks like your offset is set too low - most of the values "left" of the central peak seem to be cut/compressed.

Mine looks much more like a normal Gaussian distribution.

I fail to see how a bias like mine serves its purpose. If you take darks matching the lights in length (and other parameters), using bias is redundant (it actually cancels out: you can use literally any bias frame and calibration will still work). This can be seen from the equation:

calibrated light = light - master bias - master dark = light - master bias - avg(dark - master bias) = light - master bias - avg(dark) + master bias = light - avg(dark)

since the master bias can be pulled out of the average - it is constant with respect to the average (there is no shifting of frames when you build the master dark, so each pixel in the stack is calibrated with the same pixel from the master bias; that makes it a constant offset which can be pulled out of the average).
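The cancellation in the equation above can be checked numerically with a single-pixel sketch (the values below are made up for illustration):

```python
# Single-pixel check of the equation above, with made-up values:
#   light - master_bias - avg(dark - master_bias) == light - avg(dark)
bias = 45.0                          # master bias value for this pixel
darks = [50.2, 49.8, 50.1, 49.9]     # dark-frame values for the same pixel
light = 160.0                        # light-frame value

def avg(xs):
    return sum(xs) / len(xs)

master_dark = avg([d - bias for d in darks])   # bias-subtracted master dark

with_bias = light - bias - master_dark         # calibration using bias
without_bias = light - avg(darks)              # calibration skipping bias

print(abs(with_bias - without_bias) < 1e-9)    # True: the bias cancels out
```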

The other use of a bias frame is to remove the bias from darks for the purpose of scaling/optimizing them, and it does a poor job there, as it does not remove the whole bias - there is 1 or 2 e left, and if you multiply that by some constant, let's say two, you end up with a dark that differs by 2-4 e from the intended one.

This in turn creates a problem with flat calibration - flat calibration must operate on the light signal alone; any offset will cause wrong flat calibration (over-correction or under-correction).
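To put numbers on this (purely illustrative, reusing the bias values measured earlier): a ~1.1 e residual doubled by dark scaling survives the flat division as a spurious offset in the calibrated result.

```python
# Illustrative numbers: residual bias left in a "bias-subtracted" dark
# gets multiplied when the dark is scaled, and the leftover offset then
# skews the flat-field division.
true_offset = 46.36        # intercept of the linear fit (the real offset)
measured_bias = 45.26      # what the bias frames actually reported
residual = true_offset - measured_bias   # ~1.1 e not removed by bias

scale = 2.0                              # scaling a dark to a 2x exposure
offset_error = residual * scale          # error injected into the scaled dark

signal = 1000.0            # true light signal in a pixel, in electrons
flat = 0.8                 # flat response for that pixel (vignetted area)

correct = signal / flat
wrong = (signal + offset_error) / flat   # the offset is divided, not removed
print(round(wrong - correct, 2))         # spurious electrons in the result
```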

Therefore bias should not be used for calibration (it is either redundant or can cause wrong calibration).

 

 


Yes, I will see if I can adjust my bias. You certainly don't need bias frames. I mainly used them when I was doing an automated search for possible Be stars, where each exposure was calculated from the target's magnitude, so I would have needed many different darks.

Good discussion

Regards Andrew

PS You're right, my bias was too low. I could not change it in the native TheSkyX driver but could via ASCOM. It now looks Gaussian, as yours does.

I had not got round to measuring the camera before this discussion. It emphasises how important it is to check your camera, and how easy it is.

Thanks Andrew


  • 4 months later...
On 01/06/2018 at 19:40, vlaiv said:

Calibration would be: master dark - stack darks; flats - stack flats; flat darks - stack flat darks; master flat - subtract the stack of flat darks from the stack of flats. Calibrated frame - subtract the master dark from the sub and then divide by the master flat.

If you include these files, DSS will do this for you automatically. I suspect other software will be able to properly handle this calibration scheme as well.


I have tried using bias and dark flats and... got exactly the same result... My flats are around 4 sec, so I kind of think of staying with bias (0.3s each).

However, I do think I'm doing something wrong...
@vlaiv

How do you prepare the master dark/flat/dark flat in PI - with ImageIntegration and PixelMath?

If yes, what format and bit depth do you save for DSS?

Or if you finish it in PI, how do you calibrate each file? In a script?


2 hours ago, RolandKol said:

How do you prepare the master dark/flat/dark flat in PI - with ImageIntegration and PixelMath?

If yes, what format and bit depth do you save for DSS?

Or if you finish it in PI, how do you calibrate each file? In a script?

The proper method is:

1. Image integration without pixel rejection of the dark subs. This creates the master dark.
2. Image integration without pixel rejection of the dark flat subs. This creates the master dark flat.
3. Image calibration of the flat subs with the master dark flat (the master dark flat is subtracted from the flat subs).
4. Image integration of the calibrated flat subs to create a master flat.
5. Image calibration of the light subs with the master dark and master flat.
6. Star alignment of the calibrated lights.
7. Image integration with appropriate pixel rejection of the aligned light frames.

No PixelMath involved, only standard PixInsight processes.
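The arithmetic behind these steps can be sketched on tiny synthetic frames (NumPy as a stand-in for the PixInsight processes; all levels and sizes below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stacks: 16 subs of 8x8 pixels each, with made-up signal levels
dark_subs = rng.normal(50, 1, (16, 8, 8))
dark_flat_subs = rng.normal(48, 1, (16, 8, 8))
flat_subs = rng.normal(30000, 100, (16, 8, 8))
light = rng.normal(1200, 30, (8, 8))

master_dark = dark_subs.mean(axis=0)              # integrate darks (no rejection)
master_dark_flat = dark_flat_subs.mean(axis=0)    # integrate dark flats
calibrated_flats = flat_subs - master_dark_flat   # calibrate the flat subs
master_flat = calibrated_flats.mean(axis=0)       # integrate calibrated flats
master_flat /= master_flat.mean()                 # normalise to unity mean

# calibrate one light sub; alignment and final integration follow this step
calibrated_light = (light - master_dark) / master_flat
```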


3 hours ago, RolandKol said:

How do you prepare the master dark/flat/dark flat in PI - with ImageIntegration and PixelMath?

Or if you finish it in PI, how do you calibrate each file? In a script?

I don't use PI; I use ImageJ to calibrate my frames. But look at @wimvb's answer - it sums up the math behind it nicely.

The only thing I do in addition to the described process is divide each sub (everything: lights, darks, flats, flat darks) by 16 (because I use unity gain to get electron counts - scaling from 16 bit back to the native 12 bit), and sometimes I use sigma rejection on darks - but that depends on whether there were cosmic rays or a slight radiation source near the camera. Since I do my darks in the basement these events are rare, but I have seen evidence of them.

During the whole calibration process I use 32-bit math, so the first step is to convert each sub to 32 bit (and then stack and subtract/divide as explained above). I also don't do my normalization, alignment and stacking in DSS anymore; I use ImageJ for pretty much everything except processing (Gimp).
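For reference, the division by 16 mentioned above comes from the ASI1600 being a 12-bit sensor whose output is left-shifted into a 16-bit range. A sketch (the pixel values are made up):

```python
# The ASI1600 is a 12-bit sensor; its driver left-shifts values into
# 16 bits (multiplies by 16), so dividing the 16-bit ADU by 16 recovers
# native 12-bit counts, which at unity gain (1 e/ADU) read as electrons.
raw_16bit = [736, 752, 1600, 65520]    # made-up 16-bit pixel values

# convert to float first (stand-in for the 32-bit conversion step),
# then scale back to the 12-bit range
electrons = [float(v) / 16.0 for v in raw_16bit]
print(electrons)  # [46.0, 47.0, 100.0, 4095.0]
```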


Thanks @wimvb and @vlaiv

I am sure I have done it this way as well (I ran about 6 procedures in total: with different types of bias (0.0003s and 0.3s), with dark flats, with bias, without bias at all, etc.) :)

But the result... was always without any visual difference, with some glowing corners... I guess I simply had a dark library with way too few darks.

I left my cam in the basement for the whole night, and this evening I will try again with 100 darks. Will post the results.

Fingers crossed.


On 13/10/2018 at 19:02, wimvb said:

[...]

Image integration of the calibrated flat subs to create a master flat.

[...]

 

Hi again,

What about pixel rejection while integrating flats?

Or should I go as in the LightVortex tutorial: "Percentile Clipping here and then Equalize fluxes in Normalization"?


39 minutes ago, RolandKol said:

Hi again,

What about pixel rejection while integrating flats?

Or should I go as in the LightVortex tutorial: "Percentile Clipping here and then Equalize fluxes in Normalization"?

No need for that; flats are usually too short to collect hot pixels (and since flats are not dithered, rejection would not help with those anyway), and they have very high SNR to begin with. No passing airplanes, and a very low probability of cosmic rays or radiation striking the sensor.

Plain old averaging works well with flats, but you can do sigma clipping if you want - no harm in that if you select your sigma properly. It depends on the number of subs: remember, sigma is related to the probability that a value falls within a given range of the normal distribution. For example, 3 sigma covers 99.73% of samples, so on average, out of 1000 samples (or subs) of a particular pixel, only about 3 will fall outside +/- 3 sigma. For a typical number of subs, 3 sigma is a good rejection criterion.
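The 3-sigma figure quoted above can be checked directly against the normal distribution with the standard library:

```python
import math

# fraction of normally distributed samples expected outside +/- k sigma
def outside_fraction(k):
    return math.erfc(k / math.sqrt(2))

# at 3 sigma: ~0.27% of samples, i.e. roughly 3 per 1000 subs
print(round(outside_fraction(3) * 1000, 1))  # expected rejections per 1000
```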


And...

the last experiment:

Left: PI script, classic way - bias (0.3 sec x 150), 100 darks, flats

Right: new way, with dark flats (100 subs) and 100 darks

Amp glow is removed even with the 0.3 sec bias (150 subs), but the background is not equal:

[Image: side-by-side comparison of the two stacked results]

However, after a simple Automatic Background Extraction (without cropping), the results are almost the same.

It looks like the number of darks is essential - initially I used only 20 dark subs.

100 darks did quite a good job, even with the 0.3 sec bias...

The original stacks are here, if someone wants to inspect them:

[Image: same comparison after Automatic Background Extraction]

