
Over correcting flats, out of ideas.



I use three cameras here and have used many others in the past, but with just one of them I cannot get consistent flats. It's an SXVH36 capturing in Nebulosity on a Mac. Unfortunately the camera, the Mac and Nebulosity are all common denominators, which is far from ideal.

The same problem has affected flats in both the ODK14 and the FSQ106, so at least we can eliminate the big reflector, which I had always suspected (wrongly).

The flats over-correct, causing inverse vignetting and bright dust bunnies. If I weight a flat-fielded and a non-flat-fielded stack at 50-50 in an averaging algorithm I get about the right result, but not perfect, with some parts of the resulting image better flattened than others. (So this bodge is not a solution!)

I have used several panels, undimmed or dimmed down, to vary exposure times from sub-second to 3 or 4 seconds. (The camera has no mechanical shutter.) I have calibrated the flats both with dedicated darks-for-flats and with my usual method of subtracting a master bias. I have exposed to values from around 13,000 ADU up to about 50,000. None of these experiments produces any improvement whatever. On the parallel scope, with the same panel and the same technique but a different camera, cables, PC and capture software, the flats work fine.

Equally bafflingly, I will eventually get a set of flats which work perfectly. Of course I hang on to them as long as I can but eventually some new contaminant arrives and they have to go.

Gentlefolks, I am right out of ideas. What haven't I tried?

Olly

Edit. I should add that the exceptional sets of good flats were taken at the same settings and processed in the same way as the many that don't work! Or at least, if there's a variable I don't know what it is... Maybe I can only take them on a Thursday. Or something!!!!

Edited by ollypenrice

I too would be very interested to hear ideas on this.

I got so fed up with flats that didn't always work as they should that I started taking 3 or 4 sets of flats at differing exposures, producing a final image with each of them, and using whichever result was best.

 

I never found out what I was doing wrong; as far as I was aware I was doing the same thing each time, in the same order, etc. (just like you). The well depth of my old camera was 65,000 (give or take). Sometimes an ADU of 20K would work great, but I used to take 18K, 20K, 22K and 25K just to be safe... it never seemed consistent.

Cheers


I suspect that the over-correction is the result of improper calibration of the frames.

For example, the software used for capture might add an arbitrary constant signal that is not removed from the flats. Consider the following: full illumination is 1000 units, and a dust spot produces 900 units (90%, so for correction we should divide by 0.9 in that area). But if the capture software adds a 100-unit threshold signal, the flat is composed of 1100 and 1000 values for the full and dust parts. The ratio is no longer 90% but 90.9090...%, so when you divide by it the correction factor is wrong and the field does not flatten properly - a residual constant like this, left in the flats or in the lights, is a classic cause of the kind of mis-correction you reported.
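The arithmetic can be checked in a couple of lines (the `corrected` helper and the numbers are mine, purely illustrative). Note that the sign of the residual sets the direction of the error: a constant left in the flats weakens the correction, while a constant over-subtracted from the flats (or left in the lights) strengthens it, which is what produces bright dust bunnies and inverse vignetting.

```python
# Toy model: a flat has a fully-illuminated pixel and a dust-spot pixel.
# `flat_offset` is any constant residual signal left in (or wrongly
# subtracted from) the flat before it is normalized and divided into a light.
def corrected(light_dust, flat_dust, flat_full, flat_offset=0.0):
    ratio = (flat_dust + flat_offset) / (flat_full + flat_offset)
    return light_dust / ratio

# Perfect flat: the 90% dust pixel is restored to full illumination (1000).
corrected(900, 900, 1000)         # 1000.0
# +100 units left in the flat: ratio becomes 1000/1100, correction too weak,
# so the dust spot stays slightly dark (under-correction).
corrected(900, 900, 1000, 100)    # 990.0
# -100 units (over-subtracted flat): ratio 800/900, correction too strong,
# so the dust spot comes out bright (over-correction).
corrected(900, 900, 1000, -100)   # 1012.5
```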

As I understand it, the proper procedure for calibrating flats is:

Take multiple bias frames (the more the better, to avoid introducing extra noise), multiple dark frames (again, the more the better) and multiple flat frames. Stack the bias frames (stacking method of your choice, though a simple average is fine if you work in floating point from the start; I personally don't like that some software uses 16 bits for this) to create the master bias. Stack the (dark - master bias) frames to create the master dark. Stack the (flat - master bias - master dark) frames to create the master flat. Find the maximum value of the master flat and divide the whole frame by it (to normalize it). This frame should then be used to divide the calibrated light frames.
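The steps above can be sketched in a few lines of numpy (function names and frame shapes are mine; this assumes the darks match the flat/light exposure so one master dark serves both, and a real pipeline would add sigma-clipping or median stacking):

```python
import numpy as np

def make_masters(biases, darks, flats):
    # Average each list of 2-D frames in float64 from the start,
    # avoiding the 16-bit truncation mentioned above.
    stack = lambda frames: np.mean(np.stack(frames).astype(np.float64), axis=0)
    master_bias = stack(biases)
    master_dark = stack([d - master_bias for d in darks])   # dark current only
    master_flat = stack([f - master_bias - master_dark for f in flats])
    master_flat /= master_flat.max()    # normalize: full illumination -> 1.0
    return master_bias, master_dark, master_flat

def calibrate_light(light, master_bias, master_dark, master_flat):
    # Remove bias and dark signal from the light, then divide by the flat.
    return (light.astype(np.float64) - master_bias - master_dark) / master_flat
```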

Edit:

Just to add, as I did not mention it:

Bias, dark and flat frames should be taken at exactly the same settings apart from exposure time - the bias at the minimum exposure time available, but the darks and flats at the same exposure length. Also, flats should be taken with the same configuration as used for the light frames - same camera orientation and even the same focus position.

Edited by vlaiv
Additional info ...

Might be a problem with RBI, since the camera has no shutter; I have been unable to find out whether the KAI-16000 chip uses an IR pre-flash.

Once set up for taking the flats, try dropping the first two or three frames from the final stack, since the early frames in the series would be the ones affected by RBI if it were present.

If RBI were present, it should show up as a measurable difference in average pixel values between the first flats in the series and the last.
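That check is easy to script; a rough sketch (function name is mine; `flats` is the series, in capture order, as 2-D numpy arrays):

```python
import numpy as np

def rbi_drift(flats, n=3):
    # Compare the average ADU of the first n frames against the last n.
    # A markedly non-zero result means the early frames differ from the
    # rest of the series, which is consistent with RBI.
    first = np.mean([f.mean() for f in flats[:n]])
    last = np.mean([f.mean() for f in flats[-n:]])
    return first - last
```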

One technique is to flush the sensor first by taking a few images at maximum saturation followed by a few darks. These are not kept; they are just to flush the sensor. Then begin the flats sequence, taking care not to allow light into the camera before the sequence begins, otherwise the flushing process will be ineffective.

I had a much older SX camera that used to suffer from RBI - I think it was an MX5 or MX7, I can't remember the model for sure. It used to produce a bad image, looking as though it was underexposed and noisy, at the beginning of every acquisition series, whether lights or flats. So I just got used to shining a torch into the OTA for the first few frames to saturate the sensor and dumping the first few frames in each series; after that, each image in the series was consistent.


To add to my previous post, two points:

1. Calibrating with a master bias only can leave an unwanted offset component in the flat frames, depending on the camera and the exposure duration (how much dark current accumulates over the flat exposure) - dark signal acts as a constant offset if it is not removed.

2. You can easily test whether a constant offset in the flat frames is causing the over-correction. Compare the results from two sets of flats with different mean ADU values. If the set with the lower values (histogram to the left) produces greater over-correction, it is almost certainly some kind of constant signal in the flats.
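A toy model shows why that test works (names and numbers are mine). With a constant residual `offset` in the flats, the measured corner/centre ratio at exposure level `level` is `(0.9*level + offset)/(level + offset)`, so the error grows as the flats get dimmer; a negative residual (something over-subtracted) pushes the ratio below the true 0.9 and over-corrects:

```python
def flat_ratio(level, vignette=0.9, offset=0.0):
    # Measured corner/centre ratio of a flat with a constant residual signal.
    return (vignette * level + offset) / (level + offset)

flat_ratio(40000)                 # 0.9 exactly: no residual, correct ratio
flat_ratio(40000, offset=-200)    # ~0.8995: slight over-correction
flat_ratio(13000, offset=-200)    # ~0.8984: dimmer flat, worse over-correction
```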

1 hour ago, ollypenrice said:

Gentlefolks, I am right out of ideas. What haven't I tried?

I can only imagine that either, despite what it says on the tin, some astro-cameras perform some image processing before saving the RAW files, or there's a non-linearity in the A/D converter's curve, so a brighter flat is somewhat more stretched.

8 minutes ago, Dinsdale Piranha said:

That's very interesting. On a side note what does RBI stand for?

Honestly, fancy not knowing that! (I've known about it ever since I ditched the Google response 'Royal Bank of India' minutes ago and added 'CCD' to the search. :icon_biggrin:) At this point the ever-helpful Richard Crisp popped into view: http://www.narrowbandimaging.com/residual_bulk_image_ccd_orig_page.htm (Residual Bulk Image). Now this may be on the money, because when I asked Harry (Page) about this a while back he wondered if it might be a flushing issue. And indeed it might.

So thanks, Oddsocks. Now here's a thing: suppose the good ones happened after an accidental 'saturation event' caused by an over-long test exposure on the panel?

When you say, 'If RBI was present this should show up as a measurable difference in average pixel values between the first flats and the last in the series,' it rings a loud bell. In Nebulosity I would use the Preview mode to determine the flat exposure length and then switch to 'Capture Series', but I would often get a very different ADU reading when I did so, even though Preview is supposed to replicate the series. In fact the camera often behaves erratically with regard to exposure time and resulting ADU during test exposures for flats, so I think the finger is, indeed, pointing to RBI.

I've been through that flushing procedure before but can't remember why. I don't think it related to this issue but to something else.

I now feel a bit more hopeful since the presence, or not, of RBI would be the kind of variable that would explain the random successes.

Thanks,

Olly

4 minutes ago, Stub Mandrel said:

I can only imagine that either, despite what it says on the tin, some astro-cameras perform some image processing before saving the RAW files, or there's a non-linearity in the A/D converter's curve, so a brighter flat is somewhat more stretched.

One can easily check for non-linearity of response with a couple of sets of flat frames at different exposures (mean signal values): just calibrate each set and compare them. If they are the same at the maximum value but differ where the light is reduced, then you have a non-linear response.
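A sketch of that comparison (function name is mine): normalize each calibrated flat to its own maximum and look at the largest pixel-wise difference. A linear sensor gives the same normalized shape regardless of exposure; divergence in the dimmer, vignetted regions points to non-linearity.

```python
import numpy as np

def flat_shape_difference(flat_a, flat_b):
    # Normalize each calibrated flat so full illumination = 1.0, then
    # return the worst disagreement; ~0 means a linear response.
    a = flat_a / flat_a.max()
    b = flat_b / flat_b.max()
    return float(np.abs(a - b).max())
```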

 

16 minutes ago, Dinsdale Piranha said:

That's very interesting. On a side note what does RBI stand for?

RBI = Residual Bulk Image

It occurs when the sensor has been flooded with electrons prior to the main exposure. All the sensor pixel wells should be emptied before each exposure begins, but if a sensor is exposed to very bright light for a long period of time, static charges build up in the pixel wells that allow electrons to "cling" to the well walls. In each subsequent exposure these stuck electrons remain in the well, and so a "ghost" residual image appears overlaid on your new image.

"Flushing" is used to flood the sensor with light and dislodge the stuck electrons. Many CCD detectors incorporate a built-in IR source behind the detector that shines through the thinned back of the chip; at the beginning of each exposure the IR source is flashed on, the pixel wells are flooded and then cleared, and the proper exposure begins.

CCDs that are not protected by a shutter may be prone to RBI, and the effect is most clearly seen in parts of the image that were saturated - such as when the flats panel has been sitting over the front of the telescope for a long period before the flats acquisition series begins, or around bright saturated stars in a normal "light" frame.

RBI is pretty rare these days, as modern detectors have built-in ways of dealing with the problem, but older designs from ten years ago or more used to show RBI effects fairly frequently in astronomical imaging.


RBI does sound like a possible cause.  I was going to suggest making two master flats.  Suppose you have 20 flats.  Make a master flat from 1-10 and a second master using only 11-20.  Now try flat-calibrating the *flats* in two sets: calibrate flats 1-10 using the master made from 11-20, and flat-calibrate 11-20 using the master made from 1-10.  That should highlight any significant variation across the full set.  But having said all that, Oddsocks's flushing routine would be a more direct way of confirming or eliminating the RBI hypothesis.
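The two-half cross-check can be scripted in a few lines (function name is mine; `flats` are raw 2-D numpy arrays in capture order). Each raw flat is divided by the other half's normalized master; with a consistent series every residual frame is flat, so a large spread flags drift between early and late frames.

```python
import numpy as np

def cross_check(flats):
    half = len(flats) // 2
    m1 = np.mean(np.stack(flats[:half]).astype(float), axis=0)
    m2 = np.mean(np.stack(flats[half:]).astype(float), axis=0)
    # Flat-calibrate each half with the OTHER half's normalized master.
    resid = ([f / (m2 / m2.mean()) for f in flats[:half]]
             + [f / (m1 / m1.mean()) for f in flats[half:]])
    # Relative peak-to-peak spread of each residual; ~0 means consistent flats.
    spread = lambda r: (r.max() - r.min()) / r.mean()
    return max(spread(r) for r in resid)
```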

I did not understand the earlier suggestion about bias calibration possibly being a factor.  As I understand it, if you dark-subtract your flats then the bias signal (also present in the flat-dark) is automatically subtracted in the process, leaving no unwanted residual pedestal value.

Adrian

5 hours ago, opticalpath said:

RBI does sound like a possible cause.  I was going to suggest making two master flats.  Suppose you have 20 flats.  Make a master flat from 1-10 and a second master using only 11-20.  Now try flat-calibrating the *flats* in two sets: calibrate flats 1-10 using the master made from 11-20, and flat-calibrate 11-20 using the master made from 1-10.  That should highlight any significant variation across the full set.  But having said all that, Oddsocks's flushing routine would be a more direct way of confirming or eliminating the RBI hypothesis.

I did not understand the earlier suggestion about bias calibration possibly being a factor.  As I understand it, if you dark-subtract your flats then the bias signal (also present in the flat-dark) is automatically subtracted in the process, leaving no unwanted residual pedestal value.

Adrian

If you are referring to my suggestion about bias-only calibration, I meant just that: if you calibrate using only a master bias and no darks, the dark signal remains.

Maybe I misunderstood the following:

8 hours ago, ollypenrice said:

I have calibrated the flats with both dedicated darks for flats and by using my usual method of subtracting a master bias.

For some reason I concluded that two different methods of calibration were employed, one of them being the "usual method of subtracting a master bias". Sorry if I caused confusion.

Vladimir


^^ Correct, there should be no RBI in a shutterless camera.

Olly, your ADU values... when you said you tried a lower value, was that value the maximum or the average? Just an idea, but try for a maximum value of only around 18-20k (ish). Also, where are you getting your values from? If you have Maxim, use its Information window (set to "Area") to get a more accurate reading. I had roughly the same problem recently until I backed off on the brightness/exposure. Worth a shot!

I usually chuck away the first few flats too, if that helps.

42 minutes ago, Uranium235 said:

^^ Correct, there should be no RBI in a shutterless camera.

Olly, your ADU values... when you said you tried a lower value, was that value the maximum or the average? Just an idea, but try for a maximum value of only around 18-20k (ish). Also, where are you getting your values from? If you have Maxim, use its Information window (set to "Area") to get a more accurate reading. I had roughly the same problem recently until I backed off on the brightness/exposure. Worth a shot!

I usually chuck away the first few flats too, if that helps.

These were maximum values. Nebulosity gives a max and a min. (In Artemis I read off the White Point, which I take to be the same.) I default to about 23K but have tried going lower, to no avail. I think what I'll do is take a flat, put a mild Gaussian blur on it and measure the ADU, to see how the value of the output image compares with the max and min values at capture.

As a 'desperation bodge' I may also try dropping the gamma point of a master flat in a series of iterations to see if I can find a level at which it works. This is based on the supposition that there is too much contrast in the present flats.

Olly


The fact that you sometimes get good flats with the same setup is perplexing.  It suggests either that some component, or some part of the process, is not behaving consistently due to a fault, or that some subtle difference in method or setup is affecting the result.  Is Nebulosity plus the Mac working perfectly well with any other camera?

The only systematic way I could think of tackling this would be to swap out each component in turn to see if one element is the cause.  You have already eliminated the light panel, the scope and the cabling as possible culprits.  What does that leave (however unlikely):
- the software used to control the camera, create the master flat and apply the flat-field calibration
- the settings used in all of the above: camera gain, pedestal value added when combining, combination method used, normalization method etc.
- the dark (or bias) masters being subtracted from the flats
- the processing hardware
- anything else? 

*  Could you try combining the flats and flat-darks, and then calibrating lights, using AstroArt or something else apart from Nebulosity, in case there is just something subtle in the Nebulosity settings or workflow?

*  Something I would try is to pick just one raw flat from the stack, blur it slightly, then try calibrating using that. Do the same, but after first 'manually' subtracting one raw flat-dark or bias image. The results would be very noisy, but might show whether there's some kind of issue in the production of the master flat or in the flat-dark subtraction.
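That single-flat experiment is easy to script (everything here is illustrative; a simple 3x3 box blur stands in for the mild blur, and real data would of course be far noisier):

```python
import numpy as np

def box_blur(img):
    # 3x3 mean filter with edge padding - a crude stand-in for a mild
    # Gaussian blur, using only numpy.
    padded = np.pad(img, 1, mode='edge')
    return sum(np.roll(np.roll(padded, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))[1:-1, 1:-1] / 9.0

def single_flat_calibrate(light, raw_flat, bias=None):
    # One raw flat, optionally bias-subtracted, blurred, normalized to its
    # maximum, then divided into the light.
    flat = raw_flat.astype(float)
    if bias is not None:
        flat = flat - bias.astype(float)
    flat = box_blur(flat)
    return light.astype(float) / (flat / flat.max())
```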

 

If you want to try it, put some flats and darks in Dropbox and I'll create a master flat in Maxim for comparison.

 

Adrian

 

Edited by opticalpath

^ Good idea - put some raw flats up so we can measure them (we may get different readings), as what Artemis says is a max value isn't always the case when you drop it into another piece of software.

The only other times I've had flats failing to correct were when the filters were too small for the chip size, or when there were light leaks - but I think we can rule those out.

Guys, I'll gladly post some flats. I'll have to dig out the raws. However, on my other screen I'm looking at the L flat from the SX/Nebulosity scope and the L flat from the parallel Atik 11000/Artemis Capture scope.

I made them both in AstroArt* using a master bias as a dark. In the middle of each flat the pixel values shown are almost the same, around 24,300 for the SX and 24,700 for the Atik. In the corners the Atik is less vignetted because it uses unmounted rather than mounted filters. The Atik corners are around 19,500 and the SX corners are around 18,500.

In a nutshell these flats are more or less identical and show exactly what you'd expect them to show. The captured max brightness is just as it appeared at the capture stage and the corners show the variation in vignetting that you'd expect. Why do the SX flats over-correct???

(Just for the hell of it I tried flattening the flats with copies of themselves, a thing I do on courses to show what flats do, and both 'flattened flats' were perfectly flat.)

So I think that either the calibration is wrong because there is something wrong with the SX bias or there is some non-linearity arising which is not the same in the SX flats as in the SX lights. But then why do we sometimes get flats which do work? It's a good'un, is this... It deserves a prize, and so do you for helping out!

Olly

*Edit: on the same computer.

Edited by ollypenrice
38 minutes ago, ollypenrice said:

Guys, I'll gladly post some flats. I'll have to dig out the raws.

...............

 

Olly, if you can throw in some flat-darks/ bias images and a couple of lights as well, we could try a complete calibration using different software and workflows.  If we see the same over-correction, that would eliminate anything that might have happened after capture of the raw darks/bias/flats.

Adrian


I have also had this issue, when I used my 200mm lens with a front aperture mask... it was nearly impossible to remove the inverse doughnut-type gradient after applying flats. What I ended up doing was scrapping the flats and using PS to remove it with the old Gaussian-blurring trick, subtracting it that way... it seemed to work quite well. Luckily I didn't have too many dust bunnies. I'm not sure how much signal I removed as well, but it was getting to me, so I didn't care too much at that point.

When flats work, they work well, but when they don't they can be a real pain.
