
Lucky imaging nebulae with night vision?


powerlord


I just watched this excellent Veritasium video:

It got me thinking when they were looking at the night sky, the Milky Way, etc.

Seems to me that if you stuck one of those onto a telescope, you could then attach a camera to it and take very short (<10 ms) images, as in planetary imaging. And with enough of them, remove the emissive noise.

Then apply the same lucky imaging techniques we use for planets to nebulae and galaxies.

Couldn't this enable unprecedented resolution for ground-based astrophotography?

 


Night vision devices can't remove the quantum nature of light, and they can't amplify the signal without amplifying the noise.

One can certainly image the night sky with 10 ms exposures using regular gear (no need for night vision), but you run into two difficulties:

1. Read noise

Read noise must be sufficiently small compared to other noise sources not to cause too many issues. This works for planetary imaging because planets are bright, so target shot noise per exposure is often greater than read noise. Even then, we select very low read noise cameras for planetary imaging.

I would personally wait for an affordable, close-to-zero read noise sensor before attempting anything similar. There are sCMOS sensors with very low read noise, but I'm not sure we have sufficiently low read noise for sub-10 ms DSO imaging.
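To put rough numbers on this, here is a minimal sketch comparing shot noise with read noise per sub at different exposure lengths. The signal rates below are assumed, illustrative values, not measurements:

```python
import math

# Illustrative assumptions: faint DSO at 1 e-/pixel/s, sky background at
# 5 e-/pixel/s, camera read noise of 1.5 e- RMS per exposure.
target_rate = 1.0      # e-/pixel/s (assumed)
sky_rate = 5.0         # e-/pixel/s (assumed)
read_noise = 1.5       # e- RMS per sub (typical low read noise CMOS)

for exposure in (0.01, 0.5, 60.0):   # 10 ms, 0.5 s, 60 s subs
    signal = target_rate * exposure
    shot_noise = math.sqrt((target_rate + sky_rate) * exposure)
    total_noise = math.sqrt(shot_noise ** 2 + read_noise ** 2)
    print(f"{exposure:6.2f} s sub: shot noise {shot_noise:5.2f} e-, "
          f"read noise {read_noise} e-, per-sub SNR {signal / total_noise:.4f}")
```

At 10 ms the read noise completely swamps the tiny shot noise, which is exactly the problem described above; at 60 s it is negligible.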

2. Stacking

In order to stack images, you need something to align them to. These are usually stars, but with such short exposures you don't have enough SNR even in stars (except the brightest ones) to tell where they are, and besides that, seeing ensures that the star image is not in its exact position. The same thing happens with planetary imaging - we use alignment points, and the average of these disturbances is used to determine the actual part of the planet under each alignment point - but for stars it is more difficult, as we also have mount motion to contend with.

It takes about 2 seconds, or even more, for the star position to stabilize (that is why we use at least 2 s guide exposures and why seeing is expressed as the FWHM of a 2-second star image).

There have been attempts at DSO lucky imaging involving 16"+ apertures and roughly 500 ms exposures, but it is not really lucky imaging in the planetary sense - it is just a clever way to reject the poorest seeing, and it still does not move the resolution boundary very much after stacking.

Large apertures are needed so that half-second subs contain enough detectable stars to register for stacking.
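As a very rough order-of-magnitude sketch of why aperture matters here - the mag-0 photon flux below is an assumed round number, not a measured calibration:

```python
import math

# Assumed order-of-magnitude zero point: a mag 0 star delivers roughly
# 1e6 photons/s/cm^2 across the visual band.
PHOTONS_MAG0 = 1.0e6

def star_photons(mag, aperture_mm, exposure_s):
    area_cm2 = math.pi * (aperture_mm / 20.0) ** 2   # aperture radius in cm
    return PHOTONS_MAG0 * 10 ** (-0.4 * mag) * area_cm2 * exposure_s

for aperture_mm in (80, 400):   # small refractor vs roughly 16" of aperture
    n = star_photons(10.0, aperture_mm, 0.5)
    print(f"{aperture_mm} mm aperture, mag 10 star, 0.5 s: ~{n:,.0f} photons "
          "(before QE, losses and spreading over the seeing disc)")
```

The larger aperture collects about 25 times more photons per star, so many more field stars clear the detection threshold in a half-second sub.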


But the SNR is clearly better, or else they'd be useless for their purpose, surely? Have a look at the video - the Milky Way is clearly visible in real time, with images bright to the human eye. It still seems similar to the planetary case to me?


I don't really see the need to stack with NV; you can make videos of nebulae as well.

Here are some NV pictures taken with a native F/2 Newt-ish scope.

http://www.loptics.com/articles/nightvision/nightvision2.html

You would need to be able to fabricate your own custom scope, though.

He is also using a supergain tube, which has double the gain of an L3 tube. The SNR is around the same for supergain and L3 technology.

Note this scope has a novel corrector to flatten the image; no Paracorr works below F/2.8.

 


4 minutes ago, Deadlake said:

I don't really see the need to stack with NV; you can make videos of nebulae as well.

Here are some NV pictures taken with a native F/2 Newt-ish scope.

http://www.loptics.com/articles/nightvision/nightvision2.html

You would need to be able to fabricate your own custom scope, though.

He is also using a supergain tube, which has double the gain of an L3 tube.

Note this scope has a novel corrector to flatten the image; no Paracorr works below F/2.8.

 

Wow, yeah, that's what I'm talking about. That truly is amazing. I wonder how long it'll be until we get the Seestar S500 that does that for all of us.

 


3 minutes ago, powerlord said:

But the SNR is clearly better, or else they'd be useless for their purpose, surely? Have a look at the video - the Milky Way is clearly visible in real time, with images bright to the human eye. It still seems similar to the planetary case to me?

You can't improve SNR that way. The SNR is the same or worse as for visual.

With visual observation, several things happen:

The eye/brain combination filters out low-level light. Even if we are able to detect light sources only a few photons strong, we never see the associated shot noise; we never see a noisy image. This is because our brain actively filters things out. If we look for prolonged periods of time, we can then "detect" the object, because enough signal is accumulated for the brain to let us know something is there.

When you amplify the light enough, you will see it, but it will start to show something you've never seen in real life - shot noise. Look at all the amplified footage: you can actually see the shot noise and individual light flashes.

You can't accumulate more light than those 30-ish milliseconds that the brain integrates for.

You can't make extended light sources appear brighter, even when using optics. We always see equal (or lower) surface brightness of objects, no matter how big or fast our telescope is.

When imaging, our sensors are already working very close to the limit of what is possible. We have very high quantum efficiencies of close to 90% (no room for improvement beyond 100%, I'm afraid). We also have rather low read noise cameras at ~1 e- per exposure (this can be improved further, but even with 0 e- read noise we would still have shot noise, which is visible in images, and we can't do anything about that).
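A minimal sketch of that shot noise floor, assuming a pixel collects only a handful of photons per ~30 ms "frame" (the photon rates are made-up illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mean photon counts per pixel per ~30 ms frame
for mean_photons in (2, 20, 200):
    frames = rng.poisson(mean_photons, size=100_000)
    snr = frames.mean() / frames.std()
    print(f"mean {mean_photons:3d} photons/frame -> SNR ~ {snr:4.1f} "
          f"(theory sqrt(N) = {np.sqrt(mean_photons):4.1f})")
```

No downstream gain, quantum efficiency or read noise improvement can raise the SNR above this sqrt(N) limit set by the photons themselves.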

 


4 minutes ago, powerlord said:

Wow, yeah, that's what I'm talking about. That truly is amazing. I wonder how long it'll be until we get the Seestar S500 that does that for all of us.

 

EEVA is already doing that for us - no need for "fancy" equipment or night vision.

The issue is that people simply don't understand resolution or SNR when imaging, or the relationship between them.

Want to see an example taken with regular equipment that rivals, and bests, those images?

[attached image: single 60 s sub of M51]

This is a single sub of M51 - 60 s of integration - and it shows signs of the tidal tail and no background noise.

How is that possible?

Well, for starters, it is the same size as the objects at that link when imaged through an eyepiece. So we have traded resolution for SNR - not something people are usually willing to do (in fact, they often over-sample).

In any case, try the afocal method of imaging, produce very small images, and you will be surprised at what can be achieved in close to real time - no night vision devices needed.


53 minutes ago, vlaiv said:

You can't make extended light sources appear brighter, even when using optics. We always see equal (or lower) surface brightness of objects, no matter how big or fast our telescope is.

Could you explain this? The gain on a tube is the factor by which photons are multiplied. This is controlled (via the gain control) to keep the background noise under control.

If you didn't have photon multiplication, then it wouldn't be possible to use H-alpha filters to see nebulae???

Apart from the eye's adjustment for brightness, I'm not sure about the relationship you claim for an integration time of 30 seconds; NVD is real time, as you can see from the videos, which are close to looking through the NV eyepiece.


1 hour ago, Deadlake said:

Could you explain this? The gain on a tube is the factor by which photons are multiplied. This is controlled (via the gain control) to keep the background noise under control.

OK, so first let's explain this.

There are several noise sources in the image, regardless of whether the image is observed with the eye or captured with a sensor (in fact, a sensor just adds a few more noise sources, like dark noise - negligible for a cooled sensor - and read noise).

The two main sources of noise present in the image are target shot noise and LP (light pollution) background shot noise. Both are the same in nature and in fact you can't tell them apart: there is no way of knowing whether a photon landing on your sensor or your retina came from the target or from the sky background. You can only do some filtering if the target signal is very specific compared to the background sky signal (as with Ha nebulae, or emission nebulae in general), but shot noise remains nonetheless, for every source whose light arrives in discrete packets.

Noise is "embedded" in the signal when it reaches the aperture - it is there before amplification of any kind. Gain, or any other type of amplification, amplifies whatever number of photons happens to arrive, and this number of photons already contains the noise, because of the way light works.

Say that on average, in some small period of time - like 30 ms, which is a movie-type exposure (so we get ~30 fps), or the "exposure" time of our eye/brain combination (we don't differentiate images displayed at 30 fps; our brain blends them into smooth motion) - we get 50 photons. This is on average, which means that in one integration period we will get 42 photons, in the next one 57, and so on, with the average over time getting closer to 50 photons.

No matter what you do to those 42 photons, you can't conclude that it is 50 − 8, or that 57 is actually 50 + 7. Only with enough measurements (integration time) can you start to get an idea of what the real signal is - and that only happens once you have reduced the noise enough.

In any case, amplify those 42 photons by 1000 times and you amplify both signal and noise by the same amount: 42,000 equals 50,000 − 8,000, so the signal has been amplified by 1000 and the noise has been amplified by 1000, but their ratio remains the same - 50 / 8 is the same as 50,000 / 8,000. No change there.

So amplification, whether by gain in a camera or by a night vision device, won't change the SNR of the photon signal, so you don't get any sort of "background noise under control" from applied gain.
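A minimal sketch of that point - simulated photon counts multiplied by a large gain, with the SNR coming out identical (the numbers are the illustrative ones from the paragraph above):

```python
import numpy as np

rng = np.random.default_rng(42)

mean_photons = 50                                   # "true" signal per 30 ms frame
photons = rng.poisson(mean_photons, size=100_000)   # shot noise is already baked in

for gain in (1, 1000):                              # unamplified vs 1000x amplified
    amplified = photons * gain
    snr = amplified.mean() / amplified.std()
    print(f"gain {gain:4d}: mean {amplified.mean():8.1f}, "
          f"std {amplified.std():7.1f}, SNR {snr:.2f}")
# SNR prints the same (~sqrt(50) ~ 7.1) in both cases - gain scales signal and noise alike.
```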

The only way to reduce the background noise - the LP shot noise - is to block the LP signal itself, and this is what a filter does. It does so regardless of whether you use a night vision device. It does the same when observing (with or without night vision) and when imaging; the filter knows nothing of the device that sits behind it and filters the light all the same.

Now, you say: but look at what you can see with a night vision device - and you also point out images that were taken by phone or camera at the eyepiece.

And I'm explaining that in the following way:

1. For visual, the difference is only in the amplification, i.e. strength, of the signal - not noise reduction or noise removal. When we observe regular light without amplification it is dim; we can see it, but it takes effort. This is because our brain kicks in without our knowledge and filters out the noisy part of the image - the signal that is too weak not to show noise. In fact, some of the signal is not noticed at all (although cells are triggered by photons) because of this filtering, and some of the signal is denoised by the brain. We never see the noise in dark images, but we do have several sensations that are the effect of what the brain is doing behind the scenes - for example, we might see an object "pop in and out of view". The longer we observe, the more the object will be present (we learn to see it - or our brain has this need to keep our belief true; it is a psychological thing and happens in other areas too, like making up events we can't remember exactly and being totally convinced they happened that way, and so on).

There is a way to trick our brain into showing photon shot noise - I once managed to do it by accident. Take an Ha filter and look through it at a bright source while in a dim room. I was looking at a bright daytime scene from a darkened room through an Ha filter when I noticed that the view through the filter looked like there was "snow" (the effect on old TVs when reception is too weak). This is because there was enough light in the scene for the brain to turn off its noise reduction, but not enough light coming through the Ha filter for the SNR to be high enough for the noise to go unseen - so I saw it.

2. How can video record the amplified image, and could a normal camera without night vision do the same - take single exposures and record a video?

This part has to do with the SNR of the image and has nothing to do with night vision. It has to do with the "speed" of the system - which we often think of in terms of F/ratio, but which is actually aperture at a given resolution.

The eyepiece and camera lens together act as a powerful focal reducer, and the resulting resolution, or sampling rate, is enormous. Add to that the fact that a 16" telescope is being used - which is a massive aperture - and you get enough signal to show the object in a short exposure, at poor SNR.

To explain a bit more, let's take one of those images and analyze it:

 

[attached image: the object observed through 16" of aperture, from the linked page]

This is an object observed through 16" of aperture. It is observed in real time or near real time - but unfortunately, we don't know what size of sensor was used. For the sake of argument, let's say it is a 4/3 sensor.

To get that sort of FOV from a 4/3 sensor, we must be operating at approximately:

[attached image: FOV simulation of an ASI1600 at 600 mm focal length]

This is a simulation of an ASI1600 at 600 mm focal length.

Since the telescope has a 400 mm aperture, if we have, say, 600 mm of focal length we are effectively at F/1.5. Furthermore, the ASI1600 has a pixel size of 3.8 µm but about 4600 px across, while the image above has maybe only 460 px across - that is like having ×10 the pixel size.
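A rough back-of-the-envelope version of that calculation (the focal length, sensor size and image width are the estimates used above, not known values for that setup):

```python
aperture_mm = 400        # ~16" telescope
eff_focal_mm = 600       # estimated effective focal length after afocal reduction
pixel_um = 3.8           # ASI1600 pixel size
sensor_px = 4600         # approx. pixels across the sensor
image_px = 460           # estimated pixels across the published image

f_ratio = eff_focal_mm / aperture_mm
bin_factor = sensor_px / image_px                      # ~10x coarser sampling
eff_pixel_um = pixel_um * bin_factor
pixel_scale = 206.265 * eff_pixel_um / eff_focal_mm    # arcsec per effective pixel

print(f"effective F-ratio    : F/{f_ratio:.1f}")
print(f"effective pixel size : {eff_pixel_um:.0f} um")
print(f"effective pixel scale: {pixel_scale:.1f} arcsec per pixel")
```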

This produces enormous "speed" and could allow very short exposures to show the nebula - but that nebula:

1. Has very low resolution compared to normal astronomical images

2. Itself has low SNR compared to normal astronomical images.

However, you can achieve the same yourself with a regular astronomy camera if you do the following:

Use an eyepiece / camera lens combination to get a very large effective focal reduction. Use a very large aperture scope and bin the data in real time by some crazy factor. I've shown above that a single exposure can have very high SNR by losing resolution. Here is another example, which we can compare to the above:

[attached image: single uncalibrated 8" telescope sub of the Pacman Nebula]

If I take just one uncalibrated sub of the Pacman Nebula that I made with an 8" telescope - only one sub; sure, it is a long-exposure sub and can't be compared to a fraction-of-a-second exposure, but then neither can the level of detail and SNR - I get the image above.

So I have an aperture with ×4 less light-gathering area and a much longer exposure, but the SNR is much, much higher and the level of detail is much, much greater.

This is the closest I can get, without actually doing an EEVA live stream on a large telescope with the afocal method, to showing that you can view nebulae in real time - provided that you use very aggressive focal reduction in the form of the afocal method (eyepiece + camera lens) and bin the data on the fly to reduce the sampling rate and improve SNR.
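Here is a minimal sketch of what that on-the-fly binning does to SNR, using a made-up flat patch of faint nebulosity (2 photons per pixel on average in a short exposure):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated flat patch of faint nebulosity: mean 2 photons/pixel per short exposure
frame = rng.poisson(2.0, size=(1024, 1024)).astype(float)

def bin_image(img, n):
    """Sum n x n pixel blocks (software binning)."""
    h, w = img.shape
    return img[: h // n * n, : w // n * n].reshape(h // n, n, w // n, n).sum(axis=(1, 3))

for n in (1, 4, 10):
    binned = bin_image(frame, n)
    print(f"bin {n:2d}x{n:<2d}: per-pixel SNR ~ {binned.mean() / binned.std():.1f}")
# SNR grows roughly by the bin factor n, at the cost of n times coarser sampling.
```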

 


Short exposures using an intensifier will also just look like snow… You need to integrate for a little while to see the image; using fast optics maximises the brightness. If you use too much magnification you can also starve the intensifier and get a similar result. This is why people take longer exposures with NV - to smooth the noise and get a smooth image. The reason we can't normally see hydrogen nebulae isn't that they're faint; it's that our eyes are rubbish at seeing deep red light.

If you want more detail on nebulae, then why not use the tried and trusted deconvolution route: measure the spread and remove it. I am sure it's only a matter of time before some amateur makes a laser guide star system, though sodium orange is not an easy colour to create. Then you could detect and correct the blurring in real time.

Peter


2 hours ago, PeterW said:

Then you could detect and correct the blurring in real time.

Can't happen.

At least not for amateur setups and the way we observe.

There is something called the isoplanatic angle:

https://en.wikipedia.org/wiki/Isoplanatic_patch

It is very small - 20-ish arc seconds in diameter (though it depends on conditions and equipment used).

In the conditions we observe under, different parts of the sky distort differently. Every isoplanatic patch has a different deformation, and you would need a laser for each one to measure its deformation - that would be many, many lasers. The second issue is that with a physical correction like a bending mirror, again, you can correct for only one of these patches; for the others you will increase the error by correcting for a different patch.

There is simply no way to correct for atmosphere over larger distances.

For planetary imaging we can do this because we apply the corrections after we have gathered all the data and examined its statistics. We also have strong signal at all the points of interest (there is no way of determining distortion in an empty patch of sky, or where SNR is low). In any case, it can't be done in real time, as you need to gather the data first - and a lot of data (thousands of frames for a good planetary image).
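To illustrate the scale of the problem, here is a quick estimate of how many isoplanatic patches a wide field contains (patch size and field of view are the assumed round numbers from above):

```python
# Assumed round numbers: ~20 arcsec isoplanatic patch, 1 x 1 degree field of view
patch_arcsec = 20.0
fov_deg = 1.0

fov_arcsec = fov_deg * 3600.0
patches_across = fov_arcsec / patch_arcsec
total_patches = patches_across ** 2        # treat the patches as a simple grid

print(f"~{patches_across:.0f} patches across the field, "
      f"~{total_patches:,.0f} patches over the whole {fov_deg:.0f} degree field")
# ~180 across, ~32,400 patches - each would need its own measurement and correction.
```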

