
Which is the better of these two?



Hi Everyone.

I've been playing around with the Batch Preprocessing script in PI with Lights obtained from my ASI1600. The Master Dark is 120s at gain 139, offset 50, -20°C, to match the Lights. One of the images below is calibrated with 0.3s Bias frames and the other with 1.7s Dark-Flats. I've zoomed in by equal amounts in the top right corner of each image, then performed an STF AutoStretch and captured the screen.

Is one of these better than the other?

Is there no significant difference between the two images?

Should there be a difference or will it only become really apparent if I reduce the Bias exposure to the minimum possible value?

[Screenshot: the two calibrated and auto-stretched subs side by side]

Any advice / help would be appreciated.

Adrian

 


34 minutes ago, Adreneline said:

Should there be a difference or will it only become really apparent if I reduce the Bias exposure to the minimum possible value?

Presuming the only difference between these two subs is the way the master flat was prepared (it's not quite clear from your post) - in one case you used proper flat-darks with the same exposure (and other parameters) as the flats, and in the other you used bias frames instead of flat-darks to create the master flat - then the only difference that can show up will be in the flat application. You won't find any significant difference in noise levels, star shapes or anything else.

The one that uses bias can end up either over- or under-corrected by the flats (I'm not sure which, off the top of my head), but it's not necessarily the case. It will depend on a few factors: how much light blockage there is in the first place (either from dust particles or vignetting), and how big the difference is between the bias mean value and the dark-flat mean value (the larger the difference, the more likely there will be an issue with flat calibration) - see the sketch at the end of this post.

The issue might not even be visible (even if it is there) unless you stack and stretch very hard, so it may not show at a normal level of stretch and you can end up with a good-looking image anyway.
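
To make that dependence concrete, here is a minimal Python sketch (all ADU values are made up for illustration, not taken from any real camera):

# Sketch: how the flat-correction error depends on (a) the amount of
# light blockage and (b) the residual offset left in the master flat
# when bias is subtracted instead of dark-flats. Values are illustrative.

flat_signal = 47000.0  # light signal in the fully illuminated flat (ADU)

for blockage in (0.05, 0.20, 0.50):       # 5%, 20%, 50% light block
    for offset_diff in (1.0, 6.0, 50.0):  # dark-flat mean minus bias mean (ADU)
        proper = 1.0 - blockage           # correct scaled flat value
        # Bias-only calibration leaves offset_diff behind in both flat pixels:
        biased = (flat_signal * proper + offset_diff) / (flat_signal + offset_diff)
        error_pct = (proper / biased - 1.0) * 100.0
        print(f"block {blockage:.0%}, offset diff {offset_diff:4.0f} ADU "
              f"-> correction error {error_pct:+.4f}%")

In this simple model the error grows with both factors and is always an under-correction (the shadowed area stays slightly too dark), which matches the worked example further down the thread.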


14 minutes ago, vlaiv said:

The issue might not even be visible (even if it is there) unless you stack and stretch very hard, so it may not show at a normal level of stretch and you can end up with a good-looking image anyway.

Thanks vlaiv for the response. The image on the left uses Bias; the one on the right uses Dark-Flats. I don't believe there is any great difference, although on my MBP the Dark-Flat image appears to have a slightly darker background - but it is marginal. I cannot see any great difference in noise.

Adrian

 


6 minutes ago, Adreneline said:

on my MBP the Dark-Flat image appears to have a slightly darker background

Can you translate that into non-PI speak? :D

If you take the master bias and the master dark-flat and run statistics on each, what do you get as the mean pixel value?
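
If you want to check outside PI as well, a quick sketch with astropy/numpy will do it (the filenames are placeholders for your own masters):

import numpy as np
from astropy.io import fits

# Report the mean pixel value of each calibration master.
# Filenames are placeholders - substitute your own files.
for name in ("master_bias.fits", "master_dark_flat.fits"):
    data = fits.getdata(name).astype(np.float64)
    print(f"{name}: mean = {data.mean():.2f} ADU")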


Ok, yes - as predicted, the values are different, and in this particular case your calibrated sub will be less noisy if you use bias, because you stacked 200 of them vs only 20 dark-flats :D

The best option would be to use 200 dark-flats.
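
As a rough guide, the random noise in a stacked master falls as the square root of the frame count, so 200 frames beat 20 by a factor of about 3 - a quick sketch, with a made-up per-frame noise figure:

import math

read_noise = 10.0  # per-frame noise in ADU - a made-up value for illustration

# Averaging N frames reduces the random noise in the master by sqrt(N):
print(read_noise / math.sqrt(200))  # 200 bias frames -> ~0.71 ADU
print(read_noise / math.sqrt(20))   # 20 dark-flats   -> ~2.24 ADU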

Just to explain what happens when you calibrate using bias only.

Let's suppose you have some pixel in the image that received blocked light. It was 20% in shadow: its true value should be 100ADU, but since it was in shadow the sensor actually recorded only 80ADU (a 20% light block). You want to correct this using the flat. Let's further suppose that your flat peaked at around 47872ADU (roughly 73% of the 16-bit histogram). That value includes bias and dark current. In the shaded area the flat records only 80% of the light signal, plus the same bias and dark current.

If you use bias alone to calibrate your flats, you will leave a ~6ADU residual in your master flat (because the mean of the dark-flat is about 6ADU larger than the mean of the bias). First the proper case: the flat value at 100% illumination is 47872; subtract the master dark-flat (872) and you have 47000. The shaded pixel recorded 80% of the light plus the same 872 offset, i.e. 38472; subtract 872 and you get 37600. When you "scale" your flat you get 37600 / 47000 = 0.8.

What happens if you use bias instead of the flat-dark? You subtract 866 rather than 872, so you end up with 47006 and 37606, and the "scaled" value of the flat will be 37606 / 47006 = ~0.800026 (a very small change in this case).

If you correct the 80ADU in your light sub with a factor of 0.8, you get the proper value: 80 / 0.8 = 100ADU. But if you correct with the bias-only flat you get a slightly different value: 80 / 0.800026 = ~99.997ADU. Your signal is very slightly darker than it should be. You ended up with under-correction because you used bias instead of flat-darks. In this particular example the difference is so tiny that it would not show at all, but it is there, because the master bias and the master flat-dark differ in mean value.

Depending on the difference between bias and dark-flat, and on how much your signal needs correcting (we used 20%, but sometimes it can be even larger), the under-correction will show once you stretch your data hard enough.
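
The whole calculation, reproduced as a tiny Python check:

# Reproduce the worked example above.
raw_full   = 47872.0  # fully illuminated flat pixel: 47000 light + 872 offset (ADU)
raw_shaded = 38472.0  # 20%-shaded flat pixel: 0.8 * 47000 + 872 (ADU)
dark_flat  = 872.0    # master dark-flat mean (ADU)
bias       = 866.0    # master bias mean, 6 ADU lower (ADU)

proper = (raw_shaded - dark_flat) / (raw_full - dark_flat)  # 37600 / 47000 = 0.8
biased = (raw_shaded - bias) / (raw_full - bias)            # 37606 / 47006 ~ 0.800026

print(80.0 / proper)  # 100.0 ADU - correct
print(80.0 / biased)  # ~99.997 ADU - slight under-correction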


Thanks @vlaiv for your very complete response. I need to try to absorb the detail and significance of this. It seems to me, though, that provided I don't overstretch the image, any differences will be negligible. I think I might up the number of Dark-Flats and use them.

Many thanks for the time you have taken to help me with this.

Adrian


2 weeks later...

If the difference is in the flats, it makes more sense to look at the background model as extracted by DBE. As Vlaiv noted in an earlier reply, the difference is in the flat application. If a master flat doesn't correct properly, you will see the difference in the background model. Just apply DBE with identical parameters to each image and compare the results. Set "none" as the image correction method. Then stretch the two background models exactly the same. Or you could use PixelMath to compare the background models:

BG_model1 / BG_model2 and the other way around: BG_model2 / BG_model1.
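
If you export the two background models as FITS files, the same comparison can also be done numerically - a minimal sketch (filenames are placeholders):

import numpy as np
from astropy.io import fits

# Compare two DBE background models. Filenames are placeholders.
bg1 = fits.getdata("bg_model_bias.fits").astype(np.float64)
bg2 = fits.getdata("bg_model_darkflat.fits").astype(np.float64)

ratio = bg1 / bg2
print(f"ratio min/max: {ratio.min():.6f} / {ratio.max():.6f}")
print(f"max deviation from 1: {np.abs(ratio - 1.0).max():.6f}")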


On 18/11/2019 at 19:24, wimvb said:

If the difference is in the flats, it makes more sense to look at the background model as extracted by DBE. [...]

Thanks Wim and sorry for the slow response.

I don't understand why I get notifications for some of my posts and nothing at all for others; I am afraid I did not notice your comment until I stumbled across it just now.

I will have a go at your suggestion and let you know what I find. I've been trying to do identical post-processing with masters produced by PI and APP to see if I can deduce anything. Along with taking new calibration frames, it gives me something to do on the endless cloudy nights! I'm halfway through doing new darks right now at unity gain and offset 50 (my new ASIair - which I love - does not allow any adjustment of offset; 50 is the default, so I thought I would redo my old offset-56 darks, etc.).

Thanks again for your help and advice.

Adrian

