

Removing Sky brightness



Greetings!

I'm not a complete newb, but I still learn a lot every day and have so many questions that I forget them on a regular basis, because every moment another one pops up.

I know something about the different noise sources and how to remove them, either by calibration or by stacking. But I don't understand sky brightness, or light pollution noise. To me this is real light, just like the light we are collecting from a DSO, and I don't understand why it is treated like noise. I know it is a nuisance and an unwanted signal. If we remove it in post-processing, doesn't that also remove some of the light from our DSO? If we attenuate the brightness of the light pollution, we also affect the brightness of the DSO. This particular aspect of noise removal is giving me a headache. Could someone please explain this to me?

Br,

Andrej


As you say, it's unwanted signal, so it's not really treated as noise - procedures like stacking, calibration and noise reduction don't have any effect on it. Think about a pixel in your image: some of the signal in that pixel comes from light pollution, some from the target. By "removing" the light pollution component with whichever tool you use, you're left with the signal from the target. So in theory you're not removing signal from the DSO, because the sky brightness adds to that signal rather than replacing it.

The real problem is the limiting magnitude - when part of your target is so faint that the difference between sky and target is so small it falls within the read noise of your camera, there's not much you can do to recover it. With a really dark sky, you can just expose for longer. I must admit I don't know much about the theory of all this, so I'm sure someone wiser will be along soon...


Calibration does not remove noise.

This is a common misconception, because the things that calibration removes we perceive as noise (unwanted signal) - but they are actually signal that we want to remove. In the end we want only the signal from the target, and all the other signals - dark current signal, read signal, LP signal - we want removed.

What makes noise, noise? Everything we work with is signal, but we distinguish signal from noise by its nature: noise is the completely random component of the signal (it may follow different statistical distributions, but it is random in nature).

Calibration removes signal because it is predictable - noise is random and thus not really predictable (unless we observe it as a collection of values - a distribution).

We take a number of darks and a number of bias subs because we want the signal part out of them, and again we do that by using a collection of values - we are not interested in the noise part, which cannot be removed by calibration.

Back to LP - LP is signal, and by its nature it consists of two "components": a predictable part and a random part (as does the light from the target). When we remove the LP signal, we remove the predictable component (since we can predict its value), but the random component remains and "pollutes" the signal from the target. This is why LP is harmful for imaging: it adds more noise (true random noise) that can't be removed other than by stacking (taking a collection of values and determining the distribution of the noise - the average value).

As for how we can remove the LP signal - it's simply an additive process, like pouring half a pint of water into a glass and then adding another half pint: we can spill half, and what is left is the water we want (there is no difference in the water molecules, just as photons can't be told apart - you can't distinguish which ones are from LP and which ones are from the target). This is done by subtracting a certain value from all pixels in the image (or by doing gradient removal - again, there is a mathematical description of the signal we want to remove).
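If it helps to see that concretely, here is a minimal numpy sketch of the constant-offset case. The numbers and the corner patch used as "empty sky" are made up purely for illustration - real tools fit a full background model rather than a single value:

```python
import numpy as np

def subtract_lp(img, sky_region):
    """Estimate the LP level from a star-free patch of sky and subtract it everywhere.
    The median is robust against the odd faint star landing in the patch."""
    return img - np.median(img[sky_region])

# toy frame: 500 e of LP everywhere, plus an extra 100 e "target" in the middle
rng = np.random.default_rng(0)
frame = rng.poisson(500, (200, 200)).astype(float)
frame[90:110, 90:110] += rng.poisson(100, (20, 20))

cleaned = subtract_lp(frame, (slice(0, 50), slice(0, 50)))   # corner patch used as empty sky
print(cleaned[90:110, 90:110].mean())   # ~100: the target signal survives, only the offset is gone
```

The subtraction removes the offset, not the target - but the extra shot noise from those 500 electrons of sky stays in the frame.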


Living at a dark site means I have very little LP signal to remove. However, sometimes folks bring along data shot at home so we can try to process it here. In the worst cases the image consists of a sheet of bright orange with a few stars. Now you are welcome to call this 'genuine' or whatever word you choose, but a good processing algorithm like PixInsight's Dynamic Background Extraction will turn this into an image of the Iris Nebula (to give a real example which I remember) which looks very like the same target imaged here. I know which I prefer!

Olly


Let's see if I understand this correctly. The image contains light from the DSO and its photon noise, which is reduced during stacking. At the same time there is light from sky pollution and its photon noise, which is also reduced by stacking. Then software analyzes the amount of additional light, which is relatively uniformly spread across the image, and subtracts it. This leaves the noise. The higher the LP brightness, the higher the noise after removal.

To picture this better, I like to play with some numbers.

Let's say the light from the DSO contributes 100 electrons plus noise, and the LP contributes 500 electrons plus some noise. The whole image then has 600 electrons plus noise. When the LP is removed, that leaves 100 electrons plus the noise, including the noise from the LP. So the image has light from the DSO with its shot noise, plus the additional shot noise from the subtracted sky brightness. Is that right? At least in theory? Then how do we reduce that residual noise which is left after removing the LP?

Andrej

 


Pretty much, yep. (There's read noise too).

19 minutes ago, astrosatch said:

Then how do we reduce that residual noise which is left after removing the LP?

Your choices are:

  • Light Pollution filters, which are becoming less effective with new LED streetlights
  • Shooting with narrow-band filters.
  • More sub-exposures. This follows the law of diminishing returns.
  • Software noise reduction, although you'll always lose at least some detail. Generally, we'll apply it only to the fainter areas, where the noise has more effect.

58 minutes ago, Shibby said:

Pretty much, yep. (There's read noise too).

Your choices are:

  • Light Pollution filters, which are becoming less effective with new LED streetlights
  • Shooting with narrow-band filters.
  • More sub-exposures. This follows the law of diminishing returns.
  • Software noise reduction, although you'll always lose at least some detail. Generally, we'll apply it only to the fainter areas, where the noise has more effect.

Thanks for the reply. Yes, LED lights are really bad for us night crawlers.

Andrej


2 hours ago, astrosatch said:

Let's see if I understand this correctly. The image contains light from the DSO and its photon noise, which is reduced during stacking. At the same time there is light from sky pollution and its photon noise, which is also reduced by stacking. Then software analyzes the amount of additional light, which is relatively uniformly spread across the image, and subtracts it. This leaves the noise. The higher the LP brightness, the higher the noise after removal.

To picture this better, I like to play with some numbers.

Let's say the light from the DSO contributes 100 electrons plus noise, and the LP contributes 500 electrons plus some noise. The whole image then has 600 electrons plus noise. When the LP is removed, that leaves 100 electrons plus the noise, including the noise from the LP. So the image has light from the DSO with its shot noise, plus the additional shot noise from the subtracted sky brightness. Is that right? At least in theory? Then how do we reduce that residual noise which is left after removing the LP?

Andrej

 

You are absolutely correct about that, and in addition, since you like to play with numbers, here are some other things that can help you with that:

1. Thermal and light signal follow a Poisson distribution - the magnitude of the noise is equal to the square root of the expected value of that distribution.

In your example 100 electrons are from the target and 500 electrons are from LP - 600 electrons in total. The noise part will be ~24.5e (the square root of 600), and the total signal part will be 600e. We can now subtract 500 from the total signal to get the real target signal - that leaves us with the original 100 electrons from the target, but we don't remove the noise, so ~24.5e of noise remains, and we end up with an SNR (signal to noise ratio - just divide the magnitude of the signal by the magnitude of the noise) of ~4.08.

2. Signal behaves like a scalar value, while noise behaves like linearly independent vectors.

This means that you can add/subtract signal like regular numbers, but when adding/subtracting noise (truly random noise without correlation - hence linearly independent) you need to do it like vectors. The simplest way to think about it is a rectangle, adding the sides to produce the diagonal (simple vector addition where the vectors are at 90 degrees).

Let's run the numbers again. In the above example we added the signals and then derived the noise; let's look at what happens if we take each source separately. We have a target signal of 100e with associated noise of 10e, and an LP signal of 500e with associated noise of ~22.36e.

We add the signals like scalar values: 100 + 500 = 600. We add the noise like vectors (Pythagorean theorem): noise = sqrt(10^2 + 22.36^2) = sqrt(100 + 500) = sqrt(600) = ~24.5. As you can see, it works out the same.

3. Stacking improves SNR by a factor of the square root of the number of subs.

Let's say that we have K subs, each with signal S and noise N (the above is strictly true when each sub has the same SNR - the same signal, and noise that is random but of the same magnitude in each sub).

Original SNR is S/N

The average of the signal will be K*S/K = S (we add S K times and divide by K).

The noise of the average will be the quadrature sum divided by K: sqrt(N_1^2 + N_2^2 + N_3^2 + ... + N_K^2)/K = sqrt(K*N^2)/K = sqrt(K)*N/K = sqrt(K)*N/(sqrt(K)*sqrt(K)) = N/sqrt(K), since all the N are the same in magnitude (when we deal with noise we deal with its magnitude, hence the whole vector thing).

Stack SNR = S / (N/sqrt(K)) = S*sqrt(K)/N = Original SNR * sqrt(K)

4. Read noise follows a Gaussian distribution (ideally; with CCDs this is generally so, while with CMOS sensors you can also have telegraph-type noise, but that is generally small enough to be left out of the calculations). It adds like any other noise (in quadrature), and its magnitude is equal to the standard deviation of the read values.

I think the above is pretty much all you need for simple SNR calculations.
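As a quick numerical check of points 1-3, here is a short Python sketch using the numbers from this thread (the 5e of read noise is an assumed value, included only to show where it enters):

```python
import numpy as np

# shot noise: magnitude = sqrt(signal); independent noises add in quadrature (point 2)
target_e, lp_e, read_e = 100.0, 500.0, 5.0    # electrons per sub; the 5 e read noise is assumed
noise_single = np.sqrt(target_e + lp_e + read_e**2)   # sqrt(625) = 25 e here
snr_single = target_e / noise_single                  # only the target counts as wanted signal
print(f"single sub: SNR ~ {snr_single:.2f}")          # 4.00 (4.08 if read noise is ignored)

# point 3: stacking K subs improves SNR by sqrt(K)
for k in (4, 16, 64):
    print(f"{k:3d} subs: SNR ~ {snr_single * np.sqrt(k):.2f}")
```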


52 minutes ago, astrosatch said:

Thanks vlaiv, that helped me a lot. Now I know how to calculate noise values - provided that I know the initial values to begin with. Is there any way to read those values from an image?

Yes, it is quite easy to do, but you need some information to get an electron count from an image.

First, the image needs to be still linear and calibrated (a stack is OK, or you can use a single sub). Second, you will need the e/ADU value for your camera. CCD based cameras have a fixed e/ADU value, and you can look it up in the specification. CMOS based cameras have variable gain, and each gain setting has a specific e/ADU value - it should also be given in the camera specification.

e/ADU can also be measured - it involves taking flats and a bit of the above math: you calculate the noise in the flats from the standard deviation and take the signal from the mean. Since the noise (in electrons) is the square root of the signal, and both are scaled by the same e/ADU factor, you can work out the e/ADU value from that.
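One common way to do this measurement is with a pair of equally exposed, bias-subtracted flats - differencing them cancels the fixed vignetting/dust pattern, so only shot noise is left. A sketch of the idea, with read noise neglected:

```python
import numpy as np

def gain_from_flat_pair(flat1, flat2):
    """Estimate e/ADU from two bias-subtracted flats taken at the same illumination.
    Differencing the flats cancels the fixed pattern (vignetting, dust), leaving shot
    noise only; read noise is neglected, which is fine for well-exposed flats."""
    mean_adu = 0.5 * (flat1.mean() + flat2.mean())
    shot_var_adu = np.var(flat1 - flat2) / 2.0   # variance of the difference = 2x per-flat shot variance
    return mean_adu / shot_var_adu               # e/ADU

# self-check with synthetic flats: true gain of 2 e/ADU, ~20000 e per pixel
rng = np.random.default_rng(0)
f1 = rng.poisson(20000, (512, 512)) / 2.0
f2 = rng.poisson(20000, (512, 512)) / 2.0
print(gain_from_flat_pair(f1, f2))   # ~2.0
```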

In the end, you need to take care of the bit count used by the ADC. Most CMOS based cameras have a 12-bit ADC, some have 14-bit, and most CCDs have a 16-bit ADC. Subs are saved in 16-bit format, and if the camera ADC has fewer than 16 bits, the values are "padded" to the right - that is, the measured values are stored in the most significant bits. This means you should divide the ADU values by 16 in the case of a 12-bit ADC (16 - 12 = 4, and 2 to the power of 4 is 16), or by 4 in the case of a 14-bit ADC (16 - 14 = 2, and 2 to the power of 2 is 4).

Take any software that will open the FITS format and examine the pixel values - make a selection somewhere, run statistics on it and look at the average value. Multiply that average value by e/ADU (after you divide it by 16 or by 4 in the case of a 12-bit or 14-bit ADC; for a 16-bit ADC you don't need to divide by anything).

Does this make sense? :D (I might have over complicated the explanation).

BTW, you can use ImageJ / Fiji / AstroImageJ - free, open-source, Java based software for scientific image manipulation (the different names are different distributions of the same program, just with different plugins installed by default).
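The same measurement is only a few lines in Python as well; a sketch assuming astropy is available - the filename, gain and bit depth below are placeholders for your own values:

```python
import numpy as np
from astropy.io import fits   # any FITS reader will do

E_PER_ADU = 1.0    # placeholder - look this up for your camera / gain setting
ADC_BITS = 12      # placeholder - 12, 14 or 16 depending on the camera
PAD = 2 ** (16 - ADC_BITS)   # 16 for a 12-bit ADC, 4 for 14-bit, 1 for 16-bit

data = fits.getdata("calibrated_sub.fits").astype(float)   # placeholder filename; data must still be linear
patch = data[100:150, 100:150]                             # any region of interest

mean_e = patch.mean() / PAD * E_PER_ADU
std_e = patch.std() / PAD * E_PER_ADU
print(f"mean signal ~ {mean_e:.1f} e, measured noise ~ {std_e:.1f} e")
```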


12 hours ago, astrosatch said:

Let's see if I understand this correctly. The image contains light from the DSO and its photon noise, which is reduced during stacking. At the same time there is light from sky pollution and its photon noise, which is also reduced by stacking. Then software analyzes the amount of additional light, which is relatively uniformly spread across the image, and subtracts it. This leaves the noise. The higher the LP brightness, the higher the noise after removal.

To picture this better, I like to play with some numbers.

Let's say the light from the DSO contributes 100 electrons plus noise, and the LP contributes 500 electrons plus some noise. The whole image then has 600 electrons plus noise. When the LP is removed, that leaves 100 electrons plus the noise, including the noise from the LP. So the image has light from the DSO with its shot noise, plus the additional shot noise from the subtracted sky brightness. Is that right? At least in theory? Then how do we reduce that residual noise which is left after removing the LP?

Andrej

 

Jerry Lodriguss wrote an article about how noise from light pollution affects the final image, and how much you need to increase the total integration time if you live in a light polluted area. His argument is that for every magnitude you lose to light pollution, you need to increase your integration time by a factor of 2.5 to compensate for the extra noise you suffer.

For example, if someone images a target under a sky quality of 21.5 and I try to do the same at my location with a sky quality of 20.5 (with similar equipment), I would need to collect 2.5 hours of data for every hour that the other person collects in order to reach the same level of noise in the stacked image. If my sky quality is only 19.5, I need more than 6 times as much time.
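The 2.5x figure is just the magnitude scale at work: one magnitude is a flux ratio of about 2.512, and in the sky-limited regime the required integration time scales with the sky flux. A quick check:

```python
# one magnitude is a flux ratio of 100**(1/5) ~ 2.512; in the sky-limited regime
# the integration time needed scales with the sky flux
MAG_FLUX_RATIO = 100 ** (1 / 5)

for delta_mag in (1, 2, 3):
    print(f"sky brighter by {delta_mag} mag -> ~{MAG_FLUX_RATIO ** delta_mag:.1f}x the integration time")
# 1 mag -> ~2.5x, 2 mag -> ~6.3x, 3 mag -> ~15.8x
```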

All this is just theoretical, since imo, it will be impossible to extract extremely faint detail from light polluted data, no matter how much longer you'd collect that data.

The theory assumes that the camera's sensor is working in its linear range. Only then is the signal from LP simply added to the signal from the target. If the sensor starts to saturate, this linear relationship no longer holds, and I can't just subtract the LP.

Since you can't use long single exposures in a light polluted area, very faint detail may fall in between the lowest bit values (ADUs) in a short exposure from a light polluted site, yet be significant in a longer exposure from a dark site. So the argument made by Lodriguss holds for bright and moderately bright targets, but may break down for very bright (saturated) and very faint targets.
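The sub-ADU point is easy to demonstrate: as long as the per-sub noise is larger than one ADU step, averaging many quantised subs recovers a mean well below one ADU. A toy sketch (the 0.3 ADU signal and 1.5 ADU noise are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(0)
faint_signal_adu = 0.3   # fainter than one ADU step
noise_adu = 1.5          # per-sub noise larger than the quantisation step

subs = np.round(rng.normal(faint_signal_adu, noise_adu, 50_000))  # each sub quantised to whole ADUs
print(subs.mean())   # ~0.3: the noise dithers the signal across the step, and averaging recovers it
```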

The noise from LP is reduced by increasing the integration time. But since I'm lazy, I will also use noise reduction in post processing. However, this degrades the quality of the image.

Btw, here's a link to the article:

https://www.skyandtelescope.com/astronomy-blogs/astrophotography-benefits-dark-skies/


8 minutes ago, wimvb said:

All this is just theoretical, since imo, it will be impossible to extract extremely faint detail from light polluted data, no matter how much longer you'd collect that data. 

Yes you can. And the same goes for very, very faint signal under dark skies. People think that if the signal is so faint that less than one photon arrives per sub, they won't be able to capture it. Not true.

As long as you operate within the linear regime (and sometimes LP actually helps there - if the sensor is not linear for very small values, LP raises the total signal into the linear region), it does not matter how faint the signal is or how strong the LP is - you can capture it given enough total exposure time.

Case one: very, very faint signal - on average 0.1e per exposure. This means that slightly more than 9 out of 10 subs will have no signal at all, and the "remaining" subs will have 1, 2, 3 or more electrons captured (with decreasing probability, according to the Poisson distribution). Let's take the naive approach: 9 out of 10 subs have 0 and the last one has 1e. Although you did not capture anything in 9 subs, the average is still 0.1: (0*9 + 1*1)/10 = 0.1.

Case two: very faint signal in strong LP. Let's assume we have 1e of signal and 10000e of LP per exposure (full well is 20k+ and you are using a 16-bit camera - it works with a smaller full well and fewer bits too, just reduce the exposure so that LP + signal does not saturate). A single exposure will have an SNR of 1/sqrt(10001) = ~0.0099995, but stack enough of them - roughly 250,000 such exposures - and you get an SNR of ~5, enough for a fair detection of the target.

In case two, as in case one, it may seem that the signal is simply "lost" in the noise with no way to recover it. But stack enough exposures and it will show, since on average the signal raises the centre of the distribution by 1e in each sub. If you can determine the centre of the distribution precisely enough (the expectation value - and you do this by stacking), you can see that it is "shifted" by one from where it is supposed to be (judged against a part of the image where there is no signal, which you use to subtract the background LP level).

This works even if you capture on average 1e every 10 subs, with the rest of them having on average 0e of target signal. Again, the average of the values, given enough subs, will show that the signal is 0.1e.
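Both cases are easy to simulate for a single pixel; a quick Monte Carlo sketch, with the simplifying assumption that the LP level is known exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

def recovered_signal(signal_e, lp_e, n_subs):
    """Stack n_subs Poisson draws of (target + LP) for one pixel, then subtract the known LP level."""
    subs = rng.poisson(signal_e + lp_e, n_subs)
    return subs.mean() - lp_e

# case one: 0.1 e per sub, no LP - most subs record nothing, yet the average still converges
print(recovered_signal(0.1, 0.0, 100_000))       # ~0.1

# case two: 1 e of target buried under 10000 e of LP per sub
print(recovered_signal(1.0, 10_000.0, 250_000))  # ~1.0 +/- 0.2, i.e. an SNR of about 5
```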


@vlaiv: in theory, there's no limit to what can be imaged under light pollution. And more subexposures increase the bit depth. I did a write up and simulations of this on my site: http://wimvberlo.blogspot.com/2017/08/that-other-reason-to-stack-images.html (see the last two images for a clear example.)

However, I would like to see a good image of, say, IFN or tidal streams, taken under light polluted skies. I think that for very faint signal, theory and practice differ. But I'm always very willing to be proven wrong.


12 hours ago, wimvb said:

@vlaiv: in theory, there's no limit to what can be imaged under light pollution. And more subexposures increase the bit depth. I did a write up and simulations of this on my site: http://wimvberlo.blogspot.com/2017/08/that-other-reason-to-stack-images.html (see the last two images for a clear example.)

However, I would like to see a good image of, say, IFN or tidal streams, taken under light polluted skies. I think that for very faint signal, theory and practice differ. But I'm always very willing to be proven wrong.

I can't prove it either way by practical example (I don't have any images of mine with more than 16h total, but I do live under quite a bit of LP - around mag 18.5), but I wonder what makes you think that theory and practice differ.

Or to be more precise, why do you think theory fails to predict what will happen in practice - because the theory is flawed, or perhaps because we are applying it outside its domain of validity (and if so, what didn't we account for)?


2 hours ago, vlaiv said:

However, I would like to see a good image of, say, IFN or tidal streams, taken under light polluted skies. I think that for very faint signal, theory and practice differ. But I'm always very willing to be proven wrong.

I don't think there is anything wrong with the theory. However, I suspect there are at least two practical problems: (1) it would take an unfeasible length of exposure time to reach the same S/N, and (2) it would require the LP to be spatially flat to high precision and your flat fielding to be correct to high precision (at least at the scale of the IFN).

NigelM

 


9 minutes ago, dph1nm said:

I don't think there is anything wrong with the theory. However, I suspect there are at least two practical problems: (1) it would take an unfeasible length of exposure time to reach the same S/N, and (2) it would require the LP to be spatially flat to high precision and your flat fielding to be correct to high precision (at least at the scale of the IFN).

NigelM

 

I agree, but both are achievable for a relatively small FOV.

Flat fielding precision comes down to the number of flats that you use - again, an increase in time will get you the precision needed (and of course one should use enough bit precision in the calculations to support this). I'm an advocate of using a very large number of calibration frames anyway - my flat set is usually 256 subs (properly exposed).

LP flatness is a more difficult matter. I do have a solution for handling it if it is flat (or approximated to a good degree by a bilinear surface) over a single sub. This brings FOV size into consideration: a large FOV covers more sky, and therefore the natural distribution of LP can cause nonlinear gradients. If the gradient is linear enough over a single sub, it does not matter if the direction of that linear gradient changes over the course of the imaging run - over the set of exposures.

There is an algorithm that will normalize the frames (one should do this anyway because of changing altitude/air mass and possibly changing transparency over the course of the evening), including a first-order tilt in the background illumination. This produces subs with matching linear gradients, so stacking will not introduce a nonlinear gradient (as it might if the linear gradients were not all aligned).

After that it is a simple matter of fitting a linear gradient and removing it from the final stack.
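A minimal numpy sketch of that idea - fit a plane to each sub, match every sub's background tilt to a reference, stack, then remove the remaining common plane. This is only an illustration of the approach described above, not the actual algorithm (in practice stars and the target would be masked before fitting):

```python
import numpy as np

def fit_plane(img):
    """Least-squares fit of a + b*x + c*y to a frame (stars would be masked first in practice)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    basis = np.column_stack([np.ones(img.size), x.ravel(), y.ravel()])
    coeffs, *_ = np.linalg.lstsq(basis, img.ravel(), rcond=None)
    return (basis @ coeffs).reshape(h, w)

def normalize_and_stack(subs):
    """Match each sub's linear background tilt to the first sub, average the subs,
    then remove the remaining common plane from the stack."""
    ref_plane = fit_plane(subs[0])
    matched = [sub - fit_plane(sub) + ref_plane for sub in subs]
    stack = np.mean(matched, axis=0)
    return stack - fit_plane(stack)
```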


56 minutes ago, vlaiv said:

perhaps because we are applying it outside its domain of validity (and if so, what didn't we account for)?

I have a feeling that this is the case. If we knew this for certain, and knew the cause, we could build a new theory.


2 minutes ago, wimvb said:

I have a feeling that this is the case. If we knew this for certain, and knew the cause, we could build a new theory.

I agree, but we can't dismiss the theory either, based on a feeling that it might not work in practice or on a lack of examples (maybe simply no one has bothered to spend enough time to try?).


On 07/11/2018 at 15:18, vlaiv said:

Flat fielding precision comes down to the number of flats that you use - again, an increase in time will get you the precision needed

Hmm - I think scattered light getting into the flats might limit the ultimate precision. Also the spectral difference between the astrophysical source and the light source used for the flat.

NigelM


38 minutes ago, dph1nm said:

Hmm - I think scattered light getting into the flats might limit the ultimate precision. Also the spectral difference between the astrophysical source and the light source used for the flat.

NigelM

I agree about scattered light, but why would spectral difference be an issue for flat correction? Unless you are referring to light interacting with obstacles in a wavelength-dependent way - but those differences are tiny for the distances and angles involved.


On 12/11/2018 at 18:48, vlaiv said:

but why would spectral difference be an issue for flat correction?

It is what you might call a second-order effect, but the response of a CCD is wavelength dependent, and this dependence can vary slightly between pixels and across the chip (for thinned chips you can get fringing, of course). As a result, the flat correction required for, let us say, an emission line source would be different from that for a blackbody source. But I admit it is a tiny effect (although maybe not so tiny for fringing).

NigelM


15 minutes ago, dph1nm said:

It is what you might call a second-order effect, but the response of a CCD is wavelength dependent, and this dependence can vary slightly between pixels and across the chip (for thinned chips you can get fringing, of course). As a result, the flat correction required for, let us say, an emission line source would be different from that for a blackbody source. But I admit it is a tiny effect (although maybe not so tiny for fringing).

NigelM

With flats you are not correcting for the CCD's wavelength response, but rather for differences in the "QE" of the optical path, and that is independent of wavelength (or should be - any dimming due to vignetting should be neutral-density type dimming, simply shadows; I'm not sure how grease stains, icing or fogging respond to different wavelengths).

So, for example, say you have 30% QE in Ha and 60% QE at the wavelengths of your flat field source (let's for argument's sake assume that the dust distribution and vignetting are exactly the same in both cases).

A pixel illuminated at 50% versus a pixel illuminated at 100% gives the same ratio at 30% QE as at 60% QE.

Let's take 100 photons as an example. The first pixel in Ha will have 15e and the second will have 30e (30% QE, the first being illuminated at only 50%, the second at 100%). You want to correct that with a source seen at 60% QE, so the flat will give a value of 30e for the first pixel (60% * 50%) and 60e for the second (60% * 100%) - but you don't care about the absolute values of the flat, you care about the ratios between pixels, so your flat is in effect:

1/2 : 1 - that is, (50% illumination, 100% illumination) - a scaled version representing only the ratios, and any QE dependence on the source is lost.

Unless by second-order effects you mean what was already stated - the umbra/penumbra dependence on wavelength (an extremely small effect, I believe - diffraction around an edge produces wavelength-dependent angles so small that over the distances in the optical train they can't project a significant component onto the imaging plane; it will be much less than a single pixel).
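That cancellation is easy to see numerically - a two-pixel toy example with the figures from above (it only breaks down if the QE varies from pixel to pixel rather than by a common factor, which is the point raised below):

```python
import numpy as np

illum = np.array([0.5, 1.0])   # relative illumination of two pixels (vignetting / dust shadow)
photons = 100.0

for qe in (0.30, 0.60):        # the Ha response vs the flat-panel response from the example
    flat = photons * illum * qe       # electrons recorded by the two pixels
    print(flat / flat.max())          # normalised flat: [0.5 1. ] either way - the overall QE factor cancels
```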


On 16/11/2018 at 14:29, vlaiv said:

With flats you are not correcting for the CCD's wavelength response, but rather for differences in the "QE" of the optical path, and that is independent of wavelength

Flats do not just correct for optical effects; they also correct for variation in the QE of each pixel on the chip, and that obviously depends on the spectral distribution of the incoming light. If all pixels had an identical spectral response that wouldn't matter, but at some level they don't. For instance, CCDs are often thinned to improve the response to blue light, but if that thinning is not uniform there will be a different response in different parts of the chip. If you illuminate with, e.g., an H-alpha source you will not see these variations.

NigelM


57 minutes ago, dph1nm said:

Flats do not just correct for optical effects; they also correct for variation in the QE of each pixel on the chip, and that obviously depends on the spectral distribution of the incoming light. If all pixels had an identical spectral response that wouldn't matter, but at some level they don't. For instance, CCDs are often thinned to improve the response to blue light, but if that thinning is not uniform there will be a different response in different parts of the chip. If you illuminate with, e.g., an H-alpha source you will not see these variations.

NigelM

Indeed - I had not considered variation in the shape of the QE response curve; I was just considering a linear shift in the QE as a whole.

