
Question about flats and LPR


MishMich


Body cap in place, supposedly. No need to be connected to scope.

A bias frame is supposed to be necessary when the exposure time of the darks does not correspond to the exposure time of the lights. So, if a single exposure length is used for the lights, and a set of darks of the same exposure is taken, why would a bias frame be necessary? The bias frame is supposed to be for calculating the dark that would apply to an image when the darks are not of the same exposure. If that is the case, the time it would be of benefit to take bias frames is when taking lights of different exposure lengths; a set of darks of one exposure length should then be sufficient for all the lights, regardless of exposure. No?
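To put some numbers on that, here is a rough numpy sketch with purely synthetic data (all the values are made up, just to illustrate when a bias frame actually earns its keep): with a matched-length dark the bias cancels on its own, while scaling a dark to a different exposure needs the bias separated out first.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)

bias = 500 + rng.normal(0, 5, shape)      # fixed readout offset pattern
dark_rate = rng.exponential(2.0, shape)   # thermal signal per pixel, per second
sky = 300.0                               # target + sky signal collected in 60 s

def dark_signal(seconds):
    """Idealised dark exposure: bias plus thermal signal (no per-frame noise)."""
    return bias + dark_rate * seconds

light_60 = dark_signal(60) + sky   # a 60 s light
dark_60 = dark_signal(60)          # a dark of the same length
dark_30 = dark_signal(30)          # a dark of a different length

# Matched dark: the bias is already inside it, so no separate bias frame is needed.
print(np.mean(light_60 - dark_60))        # ~300, the target signal

# Mismatched dark: the thermal part scales with time but the bias does not, so
# the bias has to be stripped out, the remainder scaled, and the bias put back.
scaled_dark = (dark_30 - bias) * (60 / 30) + bias
print(np.mean(light_60 - scaled_dark))    # ~300 again
```

That bias-stripping step is exactly what a master bias provides when calibration software scales darks to a different exposure length.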

M.

I don't get all the nitty-gritty of bias frames, but you don't need to worry about when you shoot them. I shot mine with the lens fitted and capped. I did read somewhere that the 450D automatically applies a bias subtraction to every frame, which might explain the odd green effect and the strange results shown by the analysis (info on how to do it provided by narrowbandpaul). I do not know if this is true, mind you.

Agree in principle....

A LIGHT is an exposure made up of a BIAS + target object exposure

A DARK is a compensating exposure (no light) = BIAS + dark exposure

What happens when you add 20 subs to make up a stack?

You have 20 BIAS + 20 Target, so to get rid of any residual BIAS noise/effects you have to remove 20 BIAS somehow.....

So either 20 equal Darks (BIAS+dark) or 10 Darks + 10 BIAS.....

Yes...No??????

Ken
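To make the bookkeeping concrete, here is a minimal numpy sketch with synthetic frames (all the numbers are invented): stacking software builds one master dark, which already contains the bias, and subtracts it from every sub, so the bias in all 20 lights is dealt with in one go.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (50, 50)

def dark_frame():
    # bias offset + thermal signal + a little random read noise
    return 500.0 + 30.0 + rng.normal(0, 5, shape)

def light_frame():
    # same contents as a dark, plus the target signal we actually want
    return dark_frame() + 300.0

lights = [light_frame() for _ in range(20)]
darks = [dark_frame() for _ in range(20)]

# One master dark (which already contains the bias) is subtracted from every
# light, so the bias and dark signal are removed from all 20 subs in one go.
master_dark = np.mean(darks, axis=0)
stack = np.mean([sub - master_dark for sub in lights], axis=0)
print(np.mean(stack))   # ~300, the target signal
```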

I got it now.

BIAS is the CMOS-to-processor-to-USB-to-laptop transfer rubbish you don't want.

You need to remove it, as it is still in the lights, darks and dark flats.

The easiest way is a 1/4000th-second shot (or the quickest you can do), which statistically is the best way of capturing only the bias.

You need to average out the bias, hence the multiple bias shots.

Easy really..

Dark flats are for those who have problems taking flats and have to extend them past, let's say, a few seconds, where noise becomes a problem... If this is you, then you still need to remove that noise, hence the term dark flats. But they still contain bias. In fact all shots contain bias + data, except the bias frames, which hopefully contain only bias. MOST OF US DON'T NEED DARK FLATS.

Would you notice bias in your shots? Well, if you are being super (substitute another word for bum here) or inspecting pixel by pixel, then maybe... Either way they are simple to do, so why not...
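On the averaging point, a tiny numpy sketch with synthetic frames (the noise levels are made up) shows why lots of bias shots help: the random read noise averages away and the fixed offset pattern is left behind.

```python
import numpy as np

rng = np.random.default_rng(2)
# Fixed readout offset pattern, plus fresh random read noise in every shot -
# these stand in for the shortest-exposure (e.g. 1/4000 s) frames.
offset = 500 + rng.normal(0, 3, (100, 100))
bias_frames = [offset + rng.normal(0, 8, offset.shape) for _ in range(50)]

master_bias = np.mean(bias_frames, axis=0)

# Averaging 50 frames knocks the random part down by roughly sqrt(50),
# leaving mostly the fixed offset you actually want to subtract.
print(np.std(bias_frames[0] - offset))   # ~8
print(np.std(master_bias - offset))      # ~1.1
```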

And if you think you understand it now, wait until I mention "dark stability"!

I shot a sequence of 30 one-minute darks at ISO 400 with my EOS 450D and did a statistical analysis on the frames. I measured the "standard deviation" over the whole frame, which is a measure of how much variation there is in the signal. Over the course of shooting those 30 frames, the standard deviation went from 14 to 17.5.

What does that mean?

That the camera doesn't reach thermal equilibrium within 30 minutes of operation, so my first batch of lights (or darks) will be different from my last batch. Bottom line: I shouldn't stack them as if they were the same. Perhaps I need to have the camera snapping away while I set up the rest of the equipment, so it has a chance to reach equilibrium by the time I need to take exposures...

dark stability.pdf
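If you want to repeat that check on your own darks, a rough sketch is below. It assumes the raw darks have already been converted to FITS, and the folder and filenames are only placeholders.

```python
import glob
import numpy as np
from astropy.io import fits

# Print the standard deviation of each dark in shooting order (assumes the
# hypothetical filenames sort chronologically); a steady rise from the first
# frame to the last suggests the sensor has not settled yet.
for path in sorted(glob.glob("darks/dark_*.fits")):
    data = fits.getdata(path).astype(np.float64)
    print(path, round(float(np.std(data)), 2))
```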

Themos,

That is why I thought you needed to take darks before and after the lights, and, with a lot of lights, darks in between (say after every tenth light). That way you would average out the fluctuation that arises from the increasing noise as the sensor heats up, and the darks would mirror the lights.

Merlin, yes, you do - but how are 10 darks in conjunction with 10 bias frames going to be any less random than 20 darks? That would cover the sensor noise from 10 darks, and the noise from the sensor onwards for 10 bias frames, but not 20 darks. Unless there is some arithmetic that allows the missing dark frames to be derived from the 10 + 10. If that is the case, how would the dark component of the calculated noise differ in randomness between the original 10 dark frames and the resulting 20?

Presumably there is some calculation encoded in the software; I'd be interested in what that is, as I don't understand how on-sensor noise can be derived this way.

M.

Themos,

That is why I thought you needed to take darks before and after the lights, and, with a lot of lights, darks in between (say after every tenth light). That way you would average out the fluctuation that arises from the increasing noise as the sensor heats up, and the darks would mirror the lights.

M.

That would work, I guess. But it makes a nightmare of processing as you'd have to make a new DSS group for every light/dark sequence to make sure you use the right darks for the right lights. Aaargh.

Hi Themos,

Not according to Grant Privett in "Creating and Enhancing Digital Astro Images". He suggests that for cooled cameras, darks be taken before and after the lights and used to derive a master dark. But for DSLRs, in a sequence of 100 lights you would take 10 darks at the start, 10 after the 33rd light, 10 after the 66th, and 10 at the end. That gives three batches: lights 1-33 are processed using the first 10 + second 10 darks, 34-66 using the second 10 + third 10, and 67-100 using the third 10 + last 10. So you get three batches, with 40 darks taken but used as 60 in processing. In terms of time, for 3-minute exposures, that would mean 300 minutes of lights and 120 minutes of darks - 30 minutes at the beginning, 30 minutes at the end, and 60 minutes during the imaging sequence itself.

He also suggests that darks cannot be taken at a different time from the imaging sequence.
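A small sketch of that grouping, using index numbers only (the set labels are just placeholders):

```python
# 100 lights and four sets of 10 darks (taken at the start, after the 33rd
# light, after the 66th, and at the end) - indices only, to show the grouping.
lights = list(range(1, 101))
dark_sets = ["set1 (start)", "set2 (after 33)", "set3 (after 66)", "set4 (end)"]

batches = [
    (lights[0:33], dark_sets[0:2]),    # lights 1-33:   20 darks from sets 1 + 2
    (lights[33:66], dark_sets[1:3]),   # lights 34-66:  20 darks from sets 2 + 3
    (lights[66:], dark_sets[2:4]),     # lights 67-100: 20 darks from sets 3 + 4
]
for subs, sets in batches:
    print(f"lights {subs[0]}-{subs[-1]} calibrated with {' + '.join(sets)}")
```

Each middle set of darks gets used twice, which is how 40 darks end up doing the work of 60.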

M.

The clue was in the title, and the name, but I'll put the ISBN as well.

Privett, Grant (Ledbury, Herefordshire, UK). "Creating and Enhancing Digital Astro Images." Patrick Moore's Practical Astronomy Series. Springer, London, 2007. ISBN 9781846285806.

Hi Merlin, I understand you now. No, you only need enough to derive a mean or median. So, where does the bias come in? The way I read you, the bias frames were used in some way to compensate for only having 10 darks. I agree. So, I would think that they are used to deduct the sensor-onward noise, on the one hand from the darks and on the other from the lights, leaving darks-minus-bias and lights-minus-bias; the lights-minus-bias are averaged (or a median taken), and the darks-minus-bias are averaged (or a median taken). Presumably the same method of combining needs to be applied to both. Then the result for the darks is subtracted from the lights. I guess it could work differently - the darks averaged and subtracted from the averaged lights, and then the averaged bias subtracted from the result. Either way, I can't see how there would be any point in subtracting the bias from the darks and not the lights.
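As a sketch only - this is one possible ordering, written in numpy, not a claim about what any particular stacking program actually does internally:

```python
import numpy as np

def combine(frames, method="median"):
    """Median- or mean-combine a list of frames into one master frame."""
    stack = np.stack(frames)
    return np.median(stack, axis=0) if method == "median" else np.mean(stack, axis=0)

def calibrate(lights, darks, biases):
    master_bias = combine(biases)
    # Remove the bias from the darks, leaving only the thermal (dark) signal.
    master_dark = combine([d - master_bias for d in darks])
    # Remove bias and dark signal from every light, then stack what is left.
    return combine([l - master_bias - master_dark for l in lights], method="mean")
```

With darks of the same length as the lights, the bias ends up removed either way, which is the point made earlier in the thread.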

The reason for selecting an uneven number is that it is better to have a 'middle' value when calculating a median, apparently. With an even number you have no single 'middle' value, only the two values either side.

So, getting back to my original question, I think I have the idea now.

As many darks as possible - up to half as many as the lights, depending on their length and number - interspersed within the lights and grouped, to accurately match the heating of the sensor.

As many bias frames as possible, at least as many as the darks - these can be taken at any time.

As many lights as possible, using the same camera-scope configuration and focus, at optimum exposure, preferably as many as one group of darks.

M

The reason for selecting an uneven number is that it is better to have a 'middle' value when calculating a median, apparently. With an even number you have no single 'middle' value, only the two values either side.

This might be true if you were stacking 4- or 8-bit images and using 4- or 8-bit maths. But you are stacking 12-, 14- or 16-bit images, and the stacking process converts them to 16- or 32-bit levels (DSS does it in 32-bit) whilst it does the maths. So there is absolutely no reason to prefer an odd number of frames over an even one.
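A quick illustration with numpy, which likewise does its median arithmetic in floating point: an even count simply gives the mean of the two middle values, which is a perfectly good pixel value.

```python
import numpy as np

# Odd count: the true middle value. Even count: the mean of the two middle
# values - in floating point, neither is a problem for stacking.
print(np.median([10, 12, 14]))       # 12.0
print(np.median([10, 12, 14, 16]))   # 13.0
```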
