Why does Ha imaging work when the Moon is bright?



It's regularly repeated that imaging in Ha is possible when the Moon is bright, but I don't ever recall seeing an explanation of why and I'm struggling to work out anything plausible myself.

Obviously the Sun gives out plenty of Ha, otherwise the PST wouldn't exist.  The Moon definitely reflects plenty of Ha too, because I've seen images of it taken through an Ha filter.  Clearly the atmosphere isn't going to scatter or absorb "Lunar Ha" any more than it will from any other source, because it's all the same frequency of light, so that doesn't look to be a valid explanation.

So, why does Ha imaging work when the Moon is bright?

James


It works because all other imaging "works" when the full moon is out :D

Let me explain. The issue with the full moon is light pollution. For visual it reduces the contrast, making the target almost invisible (and in some cases invisible) to the naked eye. Not so for photography. The sensor will gather both photons from the target and photons from the LP.

The issues we have when imaging under LP can be summarized as:

- annoying gradients (LP is not equal in all parts of the image and produces a gradient)

- lower SNR. This is the important part. We don't need the LP signal and we can remove it from the image with levels (leaving only the target), or with fancier methods that deal with gradients as well. What remains is the shot noise associated with that signal - you can't remove that. This additional noise lowers our SNR because it combines with the other noise sources (read noise, dark current noise and shot noise from the target).

The whole issue with LP can be overcome by using longer total exposure - you can get very good images in heavy LP provided that you image for enough time. A rough numeric sketch of this follows.
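Here is a minimal sketch of that idea, using made-up example numbers (electrons per pixel per sub), just to show how the LP shot noise combines with the other noise sources and how stacking more subs keeps improving SNR:

```python
import math

# Made-up example numbers, in electrons per pixel per sub.
target = 1.0        # signal from the target
lp = 16.0           # signal from light pollution
dark = 0.2          # dark current
read_noise = 1.6    # read noise

def snr(n_subs):
    signal = n_subs * target
    # Shot noise is the square root of each signal; noise terms add in quadrature.
    noise = math.sqrt(n_subs * (target + lp + dark) + n_subs * read_noise**2)
    return signal / noise

for n in (1, 16, 64, 256):
    print(f"{n:4d} subs -> SNR ~ {snr(n):.2f}")
# SNR grows with the square root of the number of subs, so LP can always be
# "out-integrated" - it just costs time.
```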

Back to the full moon and narrow band. If you are using narrow band filters with a band pass of about 7nm then you are in fact doing the following:

- passing all the light from the target at a particular wavelength (Ha for example)

- cutting down all other types of light. This includes the broadband LP signal. The 400-700nm range used for imaging is 300nm wide. With a narrow band filter we remove everything except 7nm - in other words we reduce the LP by 300/7 ≈ x43. In reality we reduce the LP even more, because LP is strongest at other wavelengths (even moonlight).

x43 is about 4 magnitudes of difference, so you are making your sky about 4 magnitudes less bright.

So if the full moon puts you in Bortle 9 skies (mag 18 skies), using a narrow band filter returns you to Bortle 1 (mag 22 skies).
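For anyone who wants to check the arithmetic, here is the same bandwidth argument in a few lines of Python, using the same round numbers as above:

```python
import math

broadband = 700 - 400          # imaging passband in nm
narrowband = 7                 # Ha filter band pass in nm

reduction = broadband / narrowband            # ~43x less LP gets through
mags = 2.5 * math.log10(reduction)            # convert the ratio to magnitudes

print(f"LP reduced by ~x{reduction:.0f}, i.e. about {mags:.1f} magnitudes")
print(f"mag 18 moonlit sky -> effectively a mag {18 + mags:.1f} sky")
```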


Thank you, Vlaiv :)

Your post suggests to me that I should try to think of camera sensors more as photon counters: the "Ha LP" will not "drown out" the "target Ha", though it may appear that way to the human eye, because the camera will still count the same number of photons from the target in the same time as it would if the sky were free of all LP.  It just counts all the LP photons too.  So perhaps it's effectively the same result as imaging when there is less LP, but in terms of the image histogram, the "black point" has just been pushed more to the right.

James


1 minute ago, JamesF said:

Thank you, Vlaiv :)

Your post suggests to me that I should try to think of camera sensors more as photon counters: the "Ha LP" will not "drown out" the "target Ha", though it may appear that way to the human eye, because the camera will still count the same number of photons from the target in the same time as it would if the sky were free of all LP.  It just counts all the LP photons too.  So perhaps it's effectively the same result as imaging when there is less LP, but in terms of the image histogram, the "black point" has just been pushed more to the right.

James

That is quite correct.

The issue with our vision is that it is not linear - that is why we have magnitudes for stars (a logarithmic dependence) and also why there is gamma on our displays. When we see something "twice" as bright, that does not mean it is twice the intensity of the light. This hurts contrast even more than the simple signal addition that the sensor experiences (LP + target photons give the signal level).

With sensors, that LP level is just the black point on the histogram, provided there is no gradient. In the gradient case it's a bit more complicated but in principle the same - a level of signal that will be subtracted / removed. The biggest issue is the associated noise. Since that signal also comes in the form of photons, and those arrive randomly (a Poisson process), there is noise that one can't remove because it's random, and it is equal to the square root of the signal level. The more LP signal there is, the more associated noise there will be.

What filters do is rather simple - remove unwanted signal and keep wanted signal. Removing the unwanted signal brings back the "black point" (and also makes gradients less of a hassle) - but it also minimizes the noise from that signal (the square root of 0 is 0 - no signal, no associated noise).
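A quick simulated illustration of that last point, with example numbers only - two patches of empty sky, one under heavy LP and one behind a filter that removed most of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two simulated patches of empty sky, in electrons per pixel (example numbers):
heavy_lp = rng.poisson(100.0, size=100_000).astype(float)   # ~100 e of LP
filtered = rng.poisson(2.0, size=100_000).astype(float)     # ~2 e of residual LP

# Subtracting the mean level restores the black point in both cases...
heavy_sub = heavy_lp - heavy_lp.mean()
filt_sub = filtered - filtered.mean()

# ...but the leftover shot noise is ~sqrt(signal), so the filtered sky is far quieter.
print(f"noise with heavy LP: {heavy_sub.std():.1f} e")   # ~10 e
print(f"noise behind filter: {filt_sub.std():.1f} e")    # ~1.4 e
```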


17 hours ago, vlaiv said:

So if the full moon puts you in Bortle 9 skies (mag 18 skies), using a narrow band filter returns you to Bortle 1 (mag 22 skies).

Surely if you have Bortle 9 (18 mag) skies caused by moonlight, which is broad spectrum, you can't image anything darker than 18 mag. :icon_scratch: You can reduce the noise caused by moonlight by greatly increasing the number of subs, but the moon's light pollution 'signal' will still swamp the areas of the target which are fainter than mag 18.

Only Ha imaging is successful in moonlight; the other narrowband wavelengths seem to be significantly affected, and it seems to be accepted that SII and OIII are best carried out on moonless nights. Admittedly, Ha filters are generally a bit narrower than the others, which helps.

The spectral response of moonlight, which pretty much matches sunlight, shows a dip at Ha, which I assume is part of the reason Ha imaging works.

[Graph: spectral response of moonlight / sunlight]

The Sun's spectrum has a dark Fraunhofer line at 656 nm (the C line), which corresponds to Ha. Cooler gas in the Sun's atmosphere produces these lines by absorbing energy at those wavelengths. Ha images of the Sun would be even brighter if this absorption didn't occur, I would imagine.

[Image: Fraunhofer lines in the solar spectrum]

SII at 673nm and OIII at 501nm have no corresponding Fraunhofer absorption lines, so imaging them is more affected by moonlight than Ha.

That's my take on it but am willing to be proved wrong. :D

Alan


Something to consider is that the HII line at 656 nm is nearly always the strongest in emission nebulae (PNs may differ), while the [OIII] and [SII] lines are often weak, or almost non-existent. You won't see the [SII] and [OIII] lines in the solar spectrum as they are "forbidden" and only seen in very rarefied gases. [NII] might occasionally be strong enough to be moon-proof.

3 nm filters have the edge here.


1 hour ago, symmetal said:

That's my take on it but am willing to be proved wrong. 

Not trying to prove you wrong, just wanting to expand on my answer above. I did hint that 300/7 is only an approximation and that for Ha this factor is even greater.

The graph that you showed is direct Sun/Moon light - not something that will end up in your image if you are doing Ha imaging on the night of the full moon. At least it should not - nobody images with part of the moon in their frame.

What constitutes LP from moonlight is not the actual light of the moon (the spectrum on your graph) - but rather the part of that spectrum scattered in the atmosphere. It's the same thing that gives us daylight and blue skies when the Sun is up. I must put emphasis on the "blue skies" part of that sentence in case it gets missed :D

The thing with our atmosphere is that it scatters shorter wavelengths much more than longer ones. Hence the sky is blue - blue light, being of shorter wavelength, is scattered more than the green and red parts of the spectrum (unless you have a desert storm and larger dust particles in the air - then it turns brown, because larger particles scatter red light better).
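To put a very rough number on that, here is a quick comparison assuming pure Rayleigh scattering (efficiency ~ 1/wavelength^4) - only an approximation, since aerosols change the picture, but it shows the trend:

```python
# Wavelengths in nm; treating the scattering as purely Rayleigh is an approximation.
h_alpha = 656.3
oiii = 500.7

ratio = (h_alpha / oiii) ** 4
print(f"Moonlight at the OIII wavelength is scattered ~{ratio:.1f}x more than at Ha")
# ~2.9x - one reason Ha copes better with a bright Moon than OIII does.
```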

In any case, that is the reason Ha is shot more often than OIII. SII is usually fainter, so it suffers more from the small amount of LP that does get through the filter. Filter "width" also plays a part, and yes, Ha filters tend to be narrower than the rest.

But you can image under LP and under moonlight - no swamping happens - the signals just add. What does happen is that you get poorer SNR per total imaging time - sometimes considerably so. For example - not counting the moonlight, just plain LP - I calculated that a difference of about 2 magnitudes for me means x6 less imaging time for the same SNR. I'm currently under mag 18.5 skies and plan to move to mag 20.8 skies. One hour there will give the same SNR as 6 hours over here.
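As a back-of-the-envelope check (sky-limited approximation only - it ignores read noise and dark current), the time needed for a given SNR scales roughly with the sky flux:

```python
import math

# Sky-limited rule of thumb: time for a given SNR scales roughly with sky brightness,
# i.e. by 10^(0.4 * delta_mag) for a sky-brightness difference of delta_mag magnitudes.
def time_factor(delta_mag: float) -> float:
    return 10 ** (0.4 * delta_mag)

print(f"2 magnitudes darker sky -> ~x{time_factor(2.0):.1f} less imaging time for the same SNR")
```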

The same is true for moonlight - you can image faint targets under the full moon - but nobody does it, because they would spend 5-6 hours on the target and get something like half an hour's worth of data. Combining subs with different SNR is not something that is readily available in stacking software, and gradients are a pain to remove - so most people opt not to do it for such a small contribution.


4 minutes ago, vlaiv said:

What constitutes LP from moonlight is not the actual light of the moon (the spectrum on your graph) - but rather the part of that spectrum scattered in the atmosphere. It's the same thing that gives us daylight and blue skies when the Sun is up. I must put emphasis on the "blue skies" part of that sentence in case it gets missed :D

That's a very interesting point and one that I'd not considered before last night, when I went outside while the (full) Moon was showing and to my eyes the sky did actually appear somewhat blue.

James


59 minutes ago, vlaiv said:

What constitutes LP from moonlight is not the actual light of the moon (the spectrum on your graph) - but rather the part of that spectrum scattered in the atmosphere. It's the same thing that gives us daylight and blue skies when the Sun is up. I must put emphasis on the "blue skies" part of that sentence in case it gets missed :D

Good point. :smile: Forgot about the scattering. The OIII wavelength would be scattered more, so it would add more to the sky background pollution.

1 hour ago, vlaiv said:

But you can image under LP and under moonlight - no swamping happens - the signals just add. What does happen is that you get poorer SNR per total imaging time - sometimes considerably so. For example - not counting the moonlight, just plain LP - I calculated that a difference of about 2 magnitudes for me means x6 less imaging time for the same SNR. I'm currently under mag 18.5 skies and plan to move to mag 20.8 skies. One hour there will give the same SNR as 6 hours over here.

I appreciate the improvement in SNR where longer imaging in LP skies will produce the same result as dark skies, but surely only for areas of the image which are brighter than the skyglow. 

Are you saying that you can get an image of, say, a mag 21 object with a skyglow of mag 18 if you image for long enough? For every 16 photons from the skyglow you'll get 1 extra photon from your target, once the noise has been reduced to an insignificant level by enough imaging time.

The problem I see is that the skyglow isn't a constant value which can be subtracted out once the noise has been made insignificant by sufficient exposure duration. It varies continuously for many reasons, some predictable and some not. Different areas of every sub will likely have a different 'average' skyglow value, so determining whether a 1 ADU variation in a pixel came from the target and not from someone a mile away turning a light on is difficult. Standard calibration and stacking would be insufficient, and each sub would need to be calibrated with its own unique skyglow to achieve that, I'd have thought. :icon_eek:

Alan


10 minutes ago, symmetal said:

Are you saying that you can get an image of, say, a mag 21 object with a skyglow of mag 18 if you image for long enough? For every 16 photons from the skyglow you'll get 1 extra photon from your target, once the noise has been reduced to an insignificant level by enough imaging time.

Absolutely.

Here is an example:

M51 surface brightness profile:

[Graph: M51 surface brightness profile]

As you can see, it falls off very quickly below mag 20 or so as you move away from the center. In fact, I'm almost certain the tail of that galaxy is somewhere around mag 26-28.

Here is a 2h exposure of M51 under mag 18.5 light pollution:

[Image: 2h exposure of M51 under mag 18.5 light pollution]

It is down to processing. There is no "drowning" of the signal - the signals just add up. If you have 16e/px/exposure from LP and 1e/px/exposure from the target, you will end up with an average background signal of 16e where there is no target and an average of 17e on the target. You subtract 16e (the black level) and you are left with 0e for the background and 1e for the target - then you stretch your image and you end up with a black background (0e) and, for example, 50% luminance for the target (depending on how much you stretch).
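As a toy numerical version of that (purely synthetic numbers: 16 e/px of LP everywhere plus 1 e/px of target in the middle of the frame, averaged over many subs):

```python
import numpy as np

rng = np.random.default_rng(1)

subs = 400
frame = rng.poisson(16.0 * subs, size=(100, 100)).astype(float)     # LP everywhere
frame[40:60, 40:60] += rng.poisson(1.0 * subs, size=(20, 20))       # faint target

frame /= subs                                   # average back to per-sub electrons

# Set the black point at the background level, then stretch what is left.
background = np.median(frame)                   # ~16 e
stretched = np.clip(frame - background, 0, None)
stretched /= stretched.max()                    # target region ends up clearly visible

print(f"background ~{background:.2f} e, target region ~{frame[40:60, 40:60].mean():.2f} e")
```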

The key here is setting the black point. If you have uneven LP - and you certainly will; how much gradient there is depends on your FOV (small FOV - small gradient, larger FOV - larger gradient, because more sky is captured and the difference in LP will be larger at the edges) - you might need to remove that gradient as well.

You are right that each exposure does not have the same gradient direction; however, there are algorithms to deal with this - in the process of sub normalization you can extract the planar component of each sub (fit the background with a flat plane - the assumption here is that the FOV is small enough that the LP changes linearly) and equalize those as well. That way you end up with a final linear gradient that is the same across the subs. When you stack, it will just average out to - again - a linear gradient, which you can remove because there is a mathematical dependence for that background level (a fairly easy one to model and fit).
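For the curious, here is a minimal sketch of fitting and removing such a planar component from a single sub with a least-squares plane fit. It is a simplification of the normalization described above (real tools mask out stars and the target before fitting, and equalize the planes across subs rather than simply subtracting them):

```python
import numpy as np

def remove_planar_gradient(sub: np.ndarray) -> np.ndarray:
    """Fit a plane a*x + b*y + c to the sub by least squares and subtract it.
    Assumes the FOV is small enough that the LP background really is ~linear."""
    h, w = sub.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeffs, *_ = np.linalg.lstsq(A, sub.ravel(), rcond=None)
    return sub - (A @ coeffs).reshape(h, w)

# Example: a synthetic tilted background plus noise comes back essentially flat.
rng = np.random.default_rng(2)
y, x = np.mgrid[0:100, 0:100]
sub = 50 + 0.1 * x + 0.05 * y + rng.normal(0, 1, (100, 100))
flat = remove_planar_gradient(sub)
print(f"span before: {sub.max() - sub.min():.1f} e, residual after: {flat.std():.2f} e rms")
```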

In fact, let me see if I can find the linear stack of the above image in my archive and I'll present you with measurements of the background signal and the faintest parts - just to see what sort of signal we are talking about.

I've managed to find only a scaled (0-1 range) linear image rather than an electron count one, but that will do, as we can still see how much brighter the background is than the tail.

The measured median pixel value for the background is:

0.018510705

The median pixel value just below NGC 5195 (the companion dwarf galaxy) is 0.019229632

The difference of the two is ~0.00072, or about x25.75 less signal (the signal of that faint region is around 3.88% of the LP signal). That is about 3.5 mag fainter than the LP level.
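Reproducing that arithmetic from the two quoted median pixel values:

```python
import math

background = 0.018510705
faint_region = 0.019229632

signal = faint_region - background               # ~0.00072
ratio = background / signal                      # ~25.75x below the LP level
mags = 2.5 * math.log10(ratio)                   # ~3.5 magnitudes

print(f"signal {signal:.5f}, x{ratio:.2f} below LP, {mags:.2f} mag fainter")
```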

 


1 hour ago, vlaiv said:

Absolutely.

Well, that told me, didn't it. :D Thanks vlaiv.  So LP puts no restriction on how much faint detail you can see. It just means you have to allocate much more imaging time to reduce the significantly greater noise contributed by the light pollution in order to see that detail.  And deal with the gradients, of course. I stand corrected. :smile:

So hypothetically, if you had a camera with sufficient well depth that it didn't saturate, you could do DSO imaging in the daytime if there were no clouds (and you weren't pointing at the Sun).

Alan


1 minute ago, symmetal said:

Well, that told me, didn't it. :D Thanks vlaiv.  So LP puts no restriction on how much faint detail you can see. It just means you have to allocate much more imaging time to reduce the significantly greater noise contributed by the light pollution in order to see that detail.  And deal with the gradients, of course. I stand corrected. :smile:

So hypothetically, if you had a camera with sufficient well depth that it didn't saturate, you could do DSO imaging in the daytime if there were no clouds (and you weren't pointing at the Sun).

Alan

Hypothetically, you don't even need a camera with very deep wells to do that. As long as a single exposure does not saturate the sensor, in principle you could image during the daytime.

Of course, it would probably take decades to do something comparable to a single one-minute exposure at night :D


  • 4 months later...

On @Vlaiv's second point it's a bit like this:

TESTING TESTING ONE TWO THREE

The text is only marginally brighter than the background and appears almost invisible, but if you backed the black point downwards it would become visible and could be stretched to be normal black and white.

 


So what emerges as the key factor is that the Ha light from the moon is scattered far less than shorter wavelengths are. We might also want to add that there are plenty of objects which are strong in Ha emission - strong enough to produce a workable S/N ratio in a reasonable time. I guess we wouldn't, in reality, be shooting in Ha if our objects were no stronger in Ha than they are in, say, SII. So it's a happy marriage between the behaviour of the atmosphere and the behaviour of the objects.

Olly


5 hours ago, Stub Mandrel said:

I just wish he'd put his face mask on properly...

He doesn't want his eyebrows infected with Covid19. Do you? I have great faith in Vlaiv and have modified my face mask according to his model.

Olly

