
Is it worth taking bias frames?



I have always struggled to understand the thinking behind the commonly given advice that it is necessary to take fifty or more bias frames before they become useful.

In the past I have taken loads of bias frames with a Nikon D90 and, sure enough, once more than fifty or so are stacked you can see regular structure in the bias master; below forty, give or take, it is harder to pick out a solid pattern in the random noise.

The bit I don't understand is this: I might only have taken twenty lights, in which case the regular pattern contributed by the bias noise in the stacked lights would be much weaker than the correction I am applying by adding in fifty bias frames...?

I use PI or MaximDL for stacking and have read as much as I can find published to try to discover how the internal stacking process works, thinking perhaps that some kind of scaling goes on when the bias frames are applied. For example, if twenty lights were stacked with fifty bias frames, is the bias master scaled by a factor of 0.4 so that it does not over-correct the lights? There is very little published on how the stacking process works internally in these programs. It would make sense to use a high ratio of bias to lights if automatic scaling were used.
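For what it's worth, the usual implementation (a guess on my part, not taken from the PI or MaximDL documentation) needs no ratio-based scaling at all: the bias subs are averaged into one master at the original signal level, and that single frame is subtracted from every light. A toy numpy sketch, with an invented sensor pattern and noise figure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor: a fixed bias pattern plus fresh read noise per frame.
H, W = 100, 100
bias_pattern = 100 + 5 * np.sin(np.arange(W) / 3)   # fixed column structure
read_noise = 8.0                                     # ADU, invented figure

def shoot_bias(n):
    """Simulate n bias frames: the same pattern, new read noise each time."""
    return bias_pattern + rng.normal(0, read_noise, size=(n, H, W))

# A master bias is an AVERAGE of its subs, so it sits at the same signal
# level no matter how many frames go in -- no 0.4-style scaling is needed.
master_10 = shoot_bias(10).mean(axis=0)
master_60 = shoot_bias(60).mean(axis=0)

# Both masters estimate the same pattern; the 60-frame one is just smoother.
err_10 = np.std(master_10 - bias_pattern)
err_60 = np.std(master_60 - bias_pattern)
print(err_10, err_60)   # residual noise shrinks roughly as 1/sqrt(N)
```

On this model, more bias subs only make the master smoother; the subtraction itself is one-for-one, which is consistent with the frame counts never needing to match.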

Luckily I use the Nikon only rarely, but on those occasions I found that adding a stack of sixty bias frames to only ten lights actually made regular-pattern noise structure more visible in the calibrated image. Dropping the bias frames down to match the lights, just ten, and the regular bias pattern disappeared from the final stack. That was in MaximDL 5; I have not tried it in PI yet.

So it is still a mystery to me how this high ratio of bias to lights can reduce fixed-pattern noise in the image rather than add to it when the numbers of lights and bias frames do not match...?

For normal cooled-CCD work with the QSI I tend to take matching numbers of lights, darks, bias and flats, usually thirty of each in a session for a single target. Once my new observatory is running, though, I intend to try Ollie's suggestion of using bias frames as darks, sigma-reject stacking and a bad pixel map, plus reusing the flats, so that I can fit more targets into a session.
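For reference, sigma-reject stacking of the kind mentioned above can be sketched in a few lines of numpy. The kappa threshold, frame sizes and pixel values here are invented for illustration, not what MaximDL or PI actually use internally:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack of 30 registered subs with a couple of outlier pixels
# (simulated cosmic-ray hits) scattered through it.
subs = rng.normal(1000, 20, size=(30, 64, 64))
subs[3, 10, 10] = 60000
subs[17, 40, 5] = 60000

def sigma_reject_mean(stack, kappa=3.0):
    """Mean-combine a stack, ignoring pixels more than kappa sigma from
    the per-pixel median -- a simple take on sigma-reject stacking."""
    med = np.median(stack, axis=0)
    sig = np.std(stack, axis=0)
    mask = np.abs(stack - med) <= kappa * sig
    return np.sum(stack * mask, axis=0) / np.maximum(mask.sum(axis=0), 1)

result = sigma_reject_mean(subs)
print(result[10, 10])   # outlier rejected; value near the true 1000
```

The rejection keys off the per-pixel median rather than the mean, since a single cosmic-ray hit can drag the mean a long way but barely moves the median.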

14 minutes ago, Oddsocks said:


If this were the case, and a stack of say 100 bias frames would degrade rather than improve a set of say 20 lights, then surely the same must apply to the other calibration frames too?

So, for example, a stack of 50 flats would not correctly calibrate a stack of 20 lights, yet they do without fail. I see no need to match the exact numbers; in fact you are working to the same rules as when shooting lights: the more subs the better, as you average out the random noise exactly as you do when stacking lights (taking into account, of course, the diminishing returns after 30-40 subs).
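The diminishing-returns point can be put in numbers: random noise in an averaged stack falls as 1/sqrt(N), so each extra sub buys less than the one before. A quick illustration (the frame counts are just examples):

```python
import numpy as np

# Relative noise left after averaging N subs, versus a single sub.
# The step from 30 to 40 frames barely moves the needle, which is the
# usual justification for stopping around there.
for n in (1, 4, 10, 20, 30, 40, 100):
    print(n, round(1 / np.sqrt(n), 3))
```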


1 hour ago, auspom said:

 

Dark flat file...? I don't use these... should I?

 

[attached screenshot: dss frames.PNG]

I think you should. If you don't, there is a likelihood that your flats will over-correct. In any event they will introduce noise into your image. Strictly speaking, a 'dark flat file' is a dark taken at the same settings (but without light) as your flats. It corrects your flats in the same way that a dark corrects your lights. You can, if you wish, do it by the book and take darks for your flats 'correctly': if your flats were 1.25 seconds in bin 1, you take a set of 1.25-second darks at the same temperature.

The good news (just occasionally there IS good news in AP!!) is that you don't need to bother. A master bias will make a perfectly good dark for your flats, or dark flat file. (This name sounds illogical to me. I'd call it a dark for flats because... that's what it is.) So use a master bias as a dark for your flats and be done. Way back up the thread MartinB said exactly the same thing.
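For anyone who wants the arithmetic spelled out, here is a minimal numpy sketch of the calibration described above, with invented pixel values, using a master bias as the dark for the flats:

```python
import numpy as np

# All pixel values below are made up for illustration.
light       = np.array([[1500., 1400.], [1300., 1200.]])  # raw light frame
master_dark = np.array([[ 120.,  118.], [ 119.,  121.]])  # dark for lights
flat        = np.array([[2100., 2000.], [1900., 1800.]])  # raw flat frame
master_bias = np.full((2, 2), 100.)    # master bias, doubling as dark-for-flats

# Subtract the bias (the "dark flat") from the flat before normalising it.
flat_cal = flat - master_bias
flat_norm = flat_cal / flat_cal.mean()   # mean of 1.0, so only shape matters

# Calibrated light: dark-subtracted, then divided by the normalised flat.
calibrated = (light - master_dark) / flat_norm
```

The normalisation step is why the flat's exposure level drops out: only the relative illumination pattern survives the division.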

Olly

 


2 hours ago, johnrt said:

 


1 hour ago, Jokehoba said:

@Oddsocks You may find this Google Hangout useful - https://www.youtube.com/watch?v=BNWGYZ7MH7g - by Craig Stark (Nebulosity, PHD, etc).

Thanks for the replies, JohnRT and Jokehoba, and for the link to Craig Stark's lecture.

Much of the lecture covers similar topics and theory to Richard Berry and James Burnell's The Handbook of Astronomical Image Processing.

Craig suggests in the lecture that with a cooled camera there is no reason why you cannot use an arbitrarily large number of darks, flats and bias calibration frames to create the masters, but in practice the law of diminishing returns sets in at around thirty frames.

The situation for non-cooled DSLRs is different, though. I suspect the reason ten bias frames from a Nikon D90 make a good job of noise removal when used with ten lights, ten darks and ten flats, while sixty bias frames added to the same ten lights, darks and flats introduce regular pattern-noise artefacts, is that the temperature of the sensor slowly rises during the sixty-frame bias acquisition, so the resulting bias master no longer matches the conditions under which the lights, flats and darks were taken. The make and type of sensor will obviously have some influence, and Craig states that he had problems characterising Nikon sensors so did not include Nikon support in Nebulosity.

I guess a lot will come down to how the particular image-stacking program processes and applies the acquired data. In this respect Nebulosity seems to offer quite a broad range of possibilities that are not present in MaximDL, and that I have not yet seen in PI, but as mentioned in the earlier post, the lack of a detailed written description of how these two packages handle the data internally makes it hard to draw conclusions.

The answer is in the maths somewhere, not a subject close to my heart....


9 hours ago, ollypenrice said:


There is good news for anyone just using a DSLR and camera lenses.

If the user has access to Camera Raw they don't need flats, darks or bias. ACR corrects chromatic aberrations and vignetting and has very good noise reduction. ACR should take care of all bad pixels as it reads the bad-pixel map, which I believe is updated after each sensor clean, and according to Roger Clark modern DSLRs have on-sensor dark-current suppression, so darks aren't needed.

If dithering is used during capture, colour mottle will be eliminated, and according to Tony Hallas you can in theory get close to what a CCD can do - not quite, but close.

I am in the process of trying these methods, following Tony's video, and it's an eye-opener. I haven't perfected it yet and haven't got the dark skies, but the image quality is a big step up.


2 hours ago, wxsatuser said:


Not so sure. I can see that software can correct quite well for vignetting (as PixInsight's DBE can demonstrate) but, if the camera is used on a telescope, where does the ACR software obtain its information regarding field illumination? In the context of daytime photography I can see that software could be devised to make a decent guess, but AP is different in two ways. Firstly, the target may contain no clear clues because it might be partially nebulous and partially dark in any distribution at all. Secondly, our astrophotos are highly stretched in processing, and stretching places very high demands on field flatness, since any non-flatness will be 'enhanced' by the stretch.

Processes like DBE allow the imager to input information regarding the field illumination. I can't see how this could be circumvented. I also feel that accurate flat-fielding is the single most important component in calibration, particularly if the imager is intent on pulling out every last scrap of faint nebulosity. I'll take some convincing that a computerized guess will be able to do that.
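The point about stretching is easy to demonstrate numerically. A sketch with made-up numbers: a 2% illumination gradient that is invisible in the linear data can occupy much of the display range once the black and white points are pulled in tight around the sky background:

```python
import numpy as np

# A faint sky background with a 2% gradient across the field.
# All values are illustrative only.
background = np.linspace(0.100, 0.102, 5)

# A hard screen stretch: black point just below the sky, white point
# just above it, which is typical when digging out faint nebulosity.
lo, hi = 0.099, 0.104
stretched = (background - lo) / (hi - lo)

print(stretched)   # the 2% gradient now spans a large slice of the range
```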

And then there's the filter question. If filters introduce dust bunnies then, again, only a flatfield can provide the information needed to correct for that. DSLR imagers often use filters, most obviously LP and Ha.

Like you, I think Tony Hallas has put his finger on the essentials of DSLR imaging, most notably in regard to large scale dithering. The big handicap with DSLR is inefficiency in Ha, even when modded. It's simply the Bayer matrix that gets in the way. In very fast F ratios DSLRs can produce CCD-like images on some targets, I think. They do need those super fast F ratios to do their best, though.

Olly


4 hours ago, wxsatuser said:


I agree with Olly that I don't think this is really applicable to astrophotography. With lenses, I would guess that the camera needs a general illumination guide, so a shot of a moon crater might be OK, but beyond that the amount of dark in the frame would mess up whatever maths they're doing.

Then there's the other part of the problem: DSO imaging means you pretty much need to attach the camera to a telescope, and from that point of view there's no information available about the lens. Again, it won't work.

 

I know that I'm trying to create a setup and processing workflow that will give good results whatever target I go for. My results so far have shown me that having lights, darks and flats is worthwhile.


I just double-checked what ACR stands for. Looking at that, I think they're talking about a software algorithm that performs a correction based on the lens type that's attached. I'm going to make a leap here and assume that it's like the lens correction I can do in Lightroom. With that assumption, I would guess that the adjustment is based on the average profile of a sample lens, not your lens.

With that in mind, I'd be dubious about skipping flat fields, darks or bias frames. The profile fix might not be correct for your exact lens, but the flat field will be, for sure.


Just to be clear, the main purpose of a bias frame is to remove the constant signal level artificially added to the readout to avoid negative numbers. It is not to remove noise (although it may help with some hot pixels). In the days when CCDs first appeared we just used to subtract a single constant number from the whole frame to correct for the bias! You could still do this if you wanted, but it is probably better to use a master bias frame in case this number varies a little across the sensor. Having this constant signal messes up the maths of flat-field division, which is why it must be removed from both the flats and the lights.
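The pedestal wrecking the flat-field maths can be seen in a toy example (all numbers invented): with the bias left in, two pixels that received the same true signal come out different after flat division; with it subtracted, the vignetting corrects exactly.

```python
# Two pixels that received the same true sky signal; the edge pixel is
# vignetted to 50% illumination. All numbers are invented.
bias = 100.0                  # constant pedestal added by the camera
true_signal = 1000.0
vignette = [1.0, 0.5]

light = [true_signal * v + bias for v in vignette]
flat  = [4000.0 * v + bias for v in vignette]

# Wrong: divide with the pedestal still in -- the two pixels disagree.
wrong = [l / f for l, f in zip(light, flat)]

# Right: subtract the bias from both frames first -- the field is flat.
right = [(l - bias) / (f - bias) for l, f in zip(light, flat)]
print(wrong, right)
```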

NigelM


23 hours ago, wxsatuser said:

I did comment that this is for camera with lenses, not scopes.

Both Tony and Roger Clark do say that the lens profiles in ACR correct for flatfield.

Are they wrong?

 

Sorry, I missed your caveat. My mistake. With known lens profiles the software solution should be perfectly sound.

Olly

Link to comment
Share on other sites

@ollypenrice your comment just got me thinking.   Can I do away with flat fields altogether if I can create a lens profile for my telescope?

And it turns out the answer is yes.

http://blogs.adobe.com/jkost/tag/lens-correction-profiles

https://www.adobe.com/support/downloads/detail.jsp?ftpID=5489

 

Looking at it, I see there is one thing it won't correct for: dust motes. But even so, being able to create a lens profile would simplify the imaging process a bit.

You would have to create a profile for each "lens", i.e. LX-90 prime focus, LX-90 FR6.3 prime focus, LX-90 25mm afocal.

It's an interesting idea.

This then leads on to the question of whether we could share these lens profiles, so that once someone makes one it works for all telescopes of the same make and model?


2 hours ago, cjdawson said:


It also won't correct for the fact that your sensor might not be centred on the optical axis or, for that matter, asymmetries in your field as you rotate the camera for framing.

Flats: Love 'em or hate 'em, you just gotta take 'em. ;)

 


14 minutes ago, Pompey Monkey said:


I'd not be worried about being off-centre of the optical axis, as I set up my camera the same way every time; it'll be off-axis in a repeatable fashion. Asymmetries in the field due to rotation, though, are pretty much a show-stopper for the idea. Lenses can only fit one way, but telescopes tend to be adjustable.


On 27/01/2016 at 15:08, wxsatuser said:


The profiles are only generic, though, and are designed to correct for optical distortions; I'm not at all sure about variable illumination (I have them available within DxO software). One combination of an Atik 490EX plus NP127is might be similar to another, but even so, using a generic flat field for that combination of optics would not be perfect.

ChrisH


Maybe I am being stupid (probably), but if you are using the same camera to image in every session, then you only ever need one set of bias frames. Keep these in a folder on the PC and use them for every image you process. Simples :icon_biggrin: I stand ready to be corrected if I am wrong.


On 1/28/2016 at 19:31, ChrisLX200 said:


I have tried the Hallas/Clark method and the profile for the Canon 70-200mm looked OK in ACR. These are early days for me with this way of processing; I have only had two clear sessions to try it.

Here is 25 x 300 s with the 60Da, EF 70-200mm @ 135mm, ISO 800, f/2.8, all dithered with the Lacerta, no flats, darks or bias.

Processing was: decode in ACR (lens profile, chromatic aberrations, noise control etc.), save as 16-bit TIFFs, then register and combine in RegiStar and a quick run through PI.

[attached image: calitest1.jpg]
