
How do you know when a background removal tool is working properly?



I have some Ha data which I am processing, the final stage being in Affinity Photo. I am experimenting with the Astrophotography Remove Background tool and wondered if there is any definite way to decide what is legitimate background to remove?

The first image is after processing in PI with DBE, EZ Denoise and star reduction, then onward adjustments in AP but no background removal; the second has had the tool applied. There is a slider to bring the background back up, but if you bring it up too far you arrive back at the original image. The second one looks clipped to me, yet there still seems to be some LP background in the first, even though I prefer it. Perhaps I shouldn't be using the AP tool at all, given the processing already done in PI? What gets me is that there are clearly some regions of dark space in the image, which is my reference point, but elsewhere how much is faint nebulosity and how much is LP?

This is why I prefer galaxy imaging, it's a lot more straightforward.😉

No Affinity Photo Background Removal:

[attached image: HandS_SY135_Ha_14x10mins_Hydrogen_alpha_session_1_mod_DBENoAPBkgrnd.jpg]

AP tool applied:

[attached image: HandS_SY135_Ha_14x10mins_Hydrogen_alpha_session_1_mod_DBEAP.jpg]


There is simply no way to tell what is background and what is legit signal.

The best you can do is model the background somehow and remove it based on that model.

The two simplest background models, and when they apply, are:

- constant background

- linear gradient

For the first, you don't need any special tool - setting the black level will remove it. This is the best kind of background, but it only occurs under pristine skies with a small field of view.
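For illustration, a minimal numpy sketch of that idea (all data here is synthetic; "dark_patch" stands for any region you trust to be empty sky):

```python
import numpy as np

# Synthetic linear image: a constant sky pedestal of ~100 ADU plus a
# square of fake "nebulosity" (purely illustrative numbers).
rng = np.random.default_rng(0)
img = rng.normal(loc=100.0, scale=5.0, size=(512, 512))
img[200:300, 200:300] += 50.0

# A constant background is just a pedestal: estimate it from a region
# known to be empty sky and subtract it - equivalent to setting the
# black level.
dark_patch = img[:64, :64]
pedestal = np.median(dark_patch)
img_flat = np.clip(img - pedestal, 0.0, None)
```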

The second kind - a linear gradient - is best removed at the linear stage. You simply model the background as a linear gradient, choosing the slope and direction manually or with the help of some reference points. This kind of gradient removal works well in moderate to strong LP - provided your FOV is small and the LP gradient is close to linear.
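A minimal sketch of what such a fit could look like, assuming numpy and a hypothetical list of user-chosen background points (this is not Affinity Photo's or PixInsight's actual algorithm):

```python
import numpy as np

def fit_plane(img, points):
    """Least-squares fit of a plane a*x + b*y + c through the pixel values
    at user-chosen background points [(x0, y0), (x1, y1), ...] (integer
    pixel coordinates)."""
    xs = np.array([x for x, y in points], dtype=float)
    ys = np.array([y for x, y in points], dtype=float)
    zs = np.array([img[y, x] for x, y in points], dtype=float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return a * xx + b * yy + c   # modelled gradient over the whole frame

# corrected = img - fit_plane(img, background_points)
```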

I'm not sure how the choice of stacking algorithm affects the linearity of the gradient in the final image, though. I developed a normalization method specifically to equalize the linear part of the gradient between subs - so the stack ends up with a single linear gradient that can be removed easily.
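Vlaiv's actual normalization method isn't spelled out in this thread; purely as a hedged sketch of the general idea, one could fit a plane to each sub (reusing fit_plane from above) and swap it for the reference sub's plane:

```python
def normalize_gradient(sub, ref, points):
    """Sketch only, not vlaiv's published method: give 'sub' the same
    linear gradient as 'ref' by removing its own fitted plane and adding
    the reference's plane, so all subs stack to one linear gradient."""
    return sub - fit_plane(sub, points) + fit_plane(ref, points)
```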

In any case, if you want to remove the background successfully, it's best to do it at the linear stage. If you apply a non-linear stretch, even a linear gradient becomes a non-linear one.
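In symbols, with S the signal and G the gradient in the linear data, a non-linear stretch f mixes the two terms, so the gradient can no longer be isolated and subtracted:

```latex
f\big(S(x,y) + G(x,y)\big) \;\neq\; f\big(S(x,y)\big) + f\big(G(x,y)\big)
\quad \text{unless } f \text{ is linear}
```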

As soon as you have non-linear gradients, things get trickier. Most features that are not background are non-linear, some form of function - and every function can be approximated ever more precisely by a higher-order polynomial. If you model the background with a higher and higher order polynomial, there is a growing chance that you'll flatten some legitimate signal as well.
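A sketch of such a polynomial background fit, assuming numpy and a boolean mask of presumed-background pixels (a hypothetical helper, not any specific tool's implementation):

```python
import numpy as np

def poly_background(img, mask, order):
    """Least-squares fit of a 2-D polynomial of given total order to the
    pixels where mask is True. The higher the order, the more closely the
    model follows the data - eventually flattening real nebulosity too."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    x = xx[mask] / img.shape[1]          # normalize coords for stability
    y = yy[mask] / img.shape[0]
    z = img[mask]
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    X, Y = xx / img.shape[1], yy / img.shape[0]
    return sum(c * X**i * Y**j for c, (i, j) in zip(coef, terms))
```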

Even the linear background approximation has specific cases where it does not work - or requires special handling.

Say you have nebulosity in the left part of the frame but not in the right. Using linear background removal with reference points across the whole image will "tilt" the image - making the nebulosity darker and the other side, which should be dark, lighter.

For that reason, background removal needs help determining which parts of the image are in fact background. In the case above, we would select only one side of the image and mark it as background.
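Continuing the sketch above, marking only the nebulosity-free right half as background might look like this (reusing poly_background and the synthetic img from the earlier sketches):

```python
# Only the right half is trusted as background; the fitted plane is
# extrapolated across the nebulous left half rather than fitted to it.
mask = np.zeros(img.shape, dtype=bool)
mask[:, img.shape[1] // 2:] = True                    # right half only
corrected = img - poly_background(img, mask, order=1)  # plane fit
```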

I've created an automatic process that does this - not sure how accurate it is, but it gives me good results most of the time.

If you want to see what it produces from your image, just attach a linear FITS of it and I'll run my background removal routine on it.


1 hour ago, scotty38 said:

I realise this is not a foolproof or technical solution but what about looking at other folks' images of the same region to gauge if what you have is nebulosity vs LP?

Yes, I always look at a variety of existing images to get a consensus on how it might look. I guess on wide field views with a lot of tenuous nebulosity present, there isn’t much else you can do. 
From a quantitative perspective I think @vlaiv has got it as close as it's possible to get.


You're asking the million dollar question, of course. As Vlaiv says, how do we identify terrestrial signal?  Although I've never done it, one idea comes to mind: image the target at the lowest possible elevation and at the highest and compare the two. Whatever trend you find in the difference between low and high will also be the trend of the light pollution and might be extrapolated further. Just a thought.
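Purely as a hedged sketch of that thought (names hypothetical; "low" and "high" would be registered, linear stacks of the same field):

```python
import numpy as np

def lp_trend(low, high):
    """Fit a plane to the difference between a low-elevation and a
    high-elevation stack of the same field. The difference is dominated
    by the extra light pollution picked up low down, so its fitted trend
    approximates the LP gradient and could be extrapolated from there."""
    diff = low - high
    yy, xx = np.mgrid[0:diff.shape[0], 0:diff.shape[1]]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(diff.size)])
    coef, *_ = np.linalg.lstsq(A, diff.ravel(), rcond=None)
    return (A @ coef).reshape(diff.shape)
```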

Olly


Steve, I'd endorse @daz's suggestion of using the NSG script to normalise the gradient in each sub to the 'best' one. You have to choose that sub by examining them in Blink and SubframeSelector; it usually turns out to be the one taken highest in the sky on the clearest, moon-free night. The script also weights the subs post-normalisation and feeds them into the ImageIntegration process. I've found gradient removal on the resulting master easier to do, as the remaining gradient is much reduced and generally straight (i.e. increasing linearly from side to side, top to bottom or diagonally), and features are more visible. A sample point in each corner, kept away from any feature (check against an image from a dark site), with DBE or APP's light pollution removal (which I find a bit better) then removes it.

As Vlaiv said in his first sentence, there is no way to correctly separate background from useful/wanted signal in an experiment where the sensor responds essentially identically to both phenomena you are trying to separate. Although one can do an excellent job of guessing by applying experimental or mathematical "tricks", it is still just guessing. If one were able to experimentally identify or even measure just one of the two, the accuracy of separation would improve. In that respect, I think, combining images from different positions is a step in the right direction. I was also wondering whether some clever combination of images with and without filters could add value. Also, if one had an ideally cloudy night, with perfectly homogeneous clouds, one could possibly measure the background alone, or at least the LP component of it.

But unfortunately there are no perfect clouds 😀, so the background remains guesswork, and observer-dependent. ☹️


4 minutes ago, Stefek said:

If one were able to experimentally identify or even measure just one of the two, the accuracy of separation would improve. In that respect, I think, combining images from different positions is a step in the right direction.

I don't think that will help much.

You always have one more unknown than you have equations.

With two images you have three unknowns - the background signal, the first image's LP gradient and the second image's LP gradient. Whenever you add another image, you add another unknown - the LP gradient in that image.
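In symbols (the notation here is mine, not vlaiv's): with N images of the same field, each pixel obeys

```latex
I_k(x,y) = S(x,y) + G_k(x,y), \qquad k = 1, \dots, N
```

That is N equations against N+1 unknowns (the sky signal S plus the N gradients G_k), so each extra image adds one equation and one unknown, and the system never closes.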

5 minutes ago, Stefek said:

Also, if one had an ideally cloudy night, with perfectly homogeneous clouds, one could possibly measure the background alone, or at least the LP component of it.

That won't help either, I'm afraid. The LP strength and distribution will change - in one case the whole atmosphere scatters the light, in the other you have a cloud at some distance and some angle. You'll simply get a different gradient pattern of different intensity.

However, your ideas have merit, and I've identified one case where such reasoning can actually be exploited.

Imagine that you are imaging an object of unknown extent - extended nebulosity - and you don't know whether it will fit the FOV (or you know it won't, because you are imaging an interesting part of a larger nebula). Whenever you have signal across the whole image, it is very hard to do background removal.

It is much easier when you have, say, a galaxy or cluster in the center of the FOV and the rest is just a sparse star field. Herein lies the trick for doing "accurate" gradient removal.

Say you have this case:

[diagram: target (galaxy/cluster) centered in the FOV, surrounded by a sparse star field]

At some point in the session where you have the least LP - ideally around the meridian (around the time of the meridian flip) - you take one frame that you mark as the "reference" for this purpose. At the beginning of that frame, you record the Alt and Az of the object (not RA and Dec, but Alt/Az).

After you've done that, you wait a bit - long enough for the target to drift out of view - while keeping the scope pointed at the same Alt/Az coordinates.

Then you take another image - like this:

[diagram: the same Alt/Az pointing after the target has drifted out of the FOV - sparse star field only]

Then you continue imaging as normal.

The premise is that:

a) in the same part of the atmosphere (not the celestial sphere) the gradient will be the same - hence the Alt/Az coordinates

b) the LP intensity does not change significantly in the course of one exposure. LP changes over the course of the night - people turn lights on and off - but here we hope it doesn't change much within a few minutes, which is reasonable.

You now have one frame on which you can do standard background extraction (note: extraction rather than removal), and the extracted background can then be subtracted from the target in the reference frame. All other subs can then be normalized against that background-subtracted reference frame.
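A minimal sketch of that step (all names hypothetical; extract_background stands for whatever sparse-field background extraction you trust, e.g. a plane or low-order polynomial fit as sketched earlier in the thread):

```python
def subtract_altaz_background(reference, background_frame, extract_background):
    """'reference' contains the target; 'background_frame' was taken at the
    same Alt/Az a few minutes later, after the target drifted out of view.

    Since both frames look through the same column of atmosphere, the LP
    gradient extracted from the sparse star field should match the one
    sitting under the target in the reference frame."""
    lp_model = extract_background(background_frame)
    return reference - lp_model
```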

The method is, of course, only as accurate as your background removal on the sparse star field, and it depends on whether there is a sparse star field near the target.


Of course, Vlaiv - as I said, there are no ideal clouds, and if there were, one would need them to appear and disappear exactly when we want.

I just wanted to say that in an experiment like ours there is hardly any way to measure the background alone. Although your idea with the same Alt/Az is intriguing (as always 🙂)

