
Passive Optics (guider-based PSF deconvolution)


NickK

Recommended Posts

A clearer image is what every Earth-based astronomer wants, and in this thread I'm going to focus on ways of achieving that - including introducing an idea that's been on my mind for over a year.

Don't worry if you're a beginner - the thread will start off gently and include some of the threads I have written on the subject.

What is "The Point Spread Function"?

There is no single definition for this - it's literally everything that messes up your image: everything from the bending of the light caused by the turbulence of the Earth's atmosphere to the bending/scattering caused by the telescope itself.

[attached image]
[attached images: blurred input === deconvolution ===> sharpened result]

However I wrote a graphic introduction to this and the concept of "deconvolution" here, comparing Hubble's M57 and my own M57: http://stargazerslounge.com/topic/158573-what-did-the-point-spread-function-psf-ever-do-for-us

So as you can see, it causes a lot of problems for us Earth-bound astronomers!

The new technique focuses on correcting the blur caused by the atmosphere. This is the same technique I mentioned in that thread back in 2012! So let me take you on a journey.. hopefully at each step you'll understand, and by the time we get to Passive Optics you'll be able to see what I'm on about..

Existing Software: Blind Deconvolution

Image sharpening tools already exist. Virtually every astrophotography tool has a form of sharpening and some even have deconvolution. However, those tools often use a mathematical model of the PSF and repeatedly apply that model to sharpen the image (called blind deconvolution), or they use the stars from the long exposure (this has its own issues) - and in the case of PixInsight it attempts to combine them both…

This thread shows blind deconvolution using a mathematical PSF fitted to stars on a long exposure - called "Dynamic PSF" in PixInsight. I've shown the M57 steps in easy-to-follow detail with pictures: http://stargazerslounge.com/topic/157001-using-dynamic-psf-to-add-some-clarity (it shows how I got the image in the first section)

Here's the effect of IIR Lucy-Richardson deconvolution (sharpening) on the left, the original in the middle, and Lucy-Richardson convolution (blurring) on the right:

[attached image: deconvolved / original / convolved comparison]

Not bad eh? :D However.. blind deconvolution isn't the same as really undoing the atmospheric blur.. it's a vague approximation (although a Gaussian or Poisson distribution is going to give some decent results) - the fundamental problem is that the light has already hit the imaging sensor and we've lost data about the PSF..

Examples of blind deconvolution algorithms are Lucy-Richardson, Van Cittert and wavelet sharpening.
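For the curious, the core of Lucy-Richardson is only a few lines. This is a minimal Python sketch (assuming numpy and scipy are available) of the textbook iteration, not the implementation from any particular package:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=20):
    # Classic Richardson-Lucy: repeatedly re-estimate the scene so that
    # (estimate convolved with psf) converges towards the observed image.
    estimate = np.full_like(image, image.mean())
    psf_flipped = psf[::-1, ::-1]  # correlation uses the mirrored PSF
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # guard divide-by-zero
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```

Feeding it a synthetic star blurred by a known Gaussian PSF re-concentrates the flux back towards a point, which is exactly the sharpening effect shown above.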

Active Optics

Professional astronomers use a couple of tools for making their images clearer. They focus on solving the problem before the light hits the imaging sensor.

Active optics means adjusting the optics in real time to correct the path of the light before it hits the imaging sensor. The result is an exceptionally clear image - rivalling Hubble in the newest telescopes.

Here's a link about active optics (with a nice description): http://www.eso.org/public/teles-instr/technology/active_optics/

Professional systems work by having a flexible secondary mirror and a special wavefront sensor - the sensor uses the fact that all the light from a star should arrive at the same time, so if the light arrives slightly before/after the pixel next to it, then something is wrong. The processing then instructs the actuators behind the mirror to adopt a specific shape that corrects the light. This happens hundreds of times per second. (Strictly, this fast atmospheric correction is usually called adaptive optics; active optics refers to the slower correction of the mirror's own shape.)

Modern systems also use a powerful laser to excite atoms high in the atmosphere, creating an artificial star. Looking at the scattering of light from the artificial star provides a really good measurement of the atmosphere - this is then used to instruct the mirror to correct.

Unfortunately, here in the UK, shining lasers into the sky would cause lots of issues with air traffic and other astronomers! Also, deformable mirrors and the systems in use here are hideously expensive - out of the reach of the amateur astronomer.

However there is a cheaper, less sophisticated approach - use the guide star and a motorised tilting lens to bend the light back, thus reducing blur. Starlight Xpress have an "Active Optics" unit that does just this. By measuring the centre of the star 5-10 times a second, the computer instructs the lens to tilt to minimise the wobble of the guide star. As the whole image is captured through the lens, the net effect is to help sharpen the image. It works well for blur caused by mount movements and some atmospheric distortion. However it cannot undo complex distortion as completely as the professional deformable-mirror systems can.
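The tip-tilt loop just described is conceptually very simple. Here is an illustrative Python sketch of one control step - the function names, the sub-frame layout and the proportional gain are my assumptions, not the vendor's actual code:

```python
import numpy as np

def centroid(frame):
    # Intensity-weighted centroid of a guide-star sub-frame, in pixels.
    total = frame.sum()
    ys, xs = np.indices(frame.shape)
    return (ys * frame).sum() / total, (xs * frame).sum() / total

def tip_tilt_correction(frame, target, gain=0.6):
    # Proportional controller: each guider cycle, nudge the tilting
    # element by a fraction of the measured centroid error.
    cy, cx = centroid(frame)
    return gain * (target[0] - cy), gain * (target[1] - cx)
```

Running this 5-10 times a second against a fixed target position is, in essence, what the unit does: the lens tilt chases the centroid error back towards zero.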


Passive Optics

What if:

1.  you measured a star that provided a reference of the earth’s atmosphere

2.  you took a long exposure image during that time

3.  you have a computer

If you use a guide camera then the guide star is effectively the active-optics reference star, sampled every 2-3 seconds. The guider images are then the best description of the atmospheric PSF that is affecting the long exposure being taken at that instant.

[attached image]

This differs from blind deconvolution because the PSF is not a mathematical guess - it is the real PSF that occurred during the long exposure and affected it.

So in theory - by taking the guider images, processing them and then processing the long exposure, it should be possible to undo some of the atmospheric blurring. Perhaps not as good as active optics - but without any additional hardware.
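To make the scheme concrete, here is a rough Python sketch of both halves - building an empirical PSF from the guider frames, then deconvolving the long exposure with it. All names are illustrative assumptions; real data would need dark/flat calibration and background handling first:

```python
import numpy as np
from scipy.signal import fftconvolve

def empirical_psf(guider_frames, size=15):
    # Stack the guider frames WITHOUT re-registration: the accumulated
    # star image then records the same atmospheric smearing that acted
    # on the simultaneous long exposure. Assumes the guide star is the
    # brightest feature in the stack.
    stack = np.mean(guider_frames, axis=0)
    y, x = np.unravel_index(np.argmax(stack), stack.shape)
    h = size // 2
    psf = stack[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    psf -= psf.min()          # crude background removal
    return psf / psf.sum()    # a PSF must integrate to 1

def deconvolve(long_exposure, psf, iterations=20):
    # Minimal Richardson-Lucy using the measured (not modelled) PSF.
    est = np.full_like(long_exposure, long_exposure.mean(), dtype=float)
    for _ in range(iterations):
        blur = fftconvolve(est, psf, mode="same")
        est *= fftconvolve(long_exposure / np.maximum(blur, 1e-12),
                           psf[::-1, ::-1], mode="same")
    return est
```

The key design point is step one: deliberately not registering the guider frames, so the stacked star shape is the atmospheric PSF rather than a tidied-up star.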

This is the idea that has been bouncing around for a while. Although not the perfect solution, it has some promise. I also have an idea of how this could be made better. A lot better.

Here’s an experimental output, as demonstrated in the thread yesterday.

Left is the output of the processing (single 60s sub) - right is Aladin showing the large-scope deep sky image

[attached image]

The right is the processed output (single 60s sub - not stacked) and the left is the original input long exposure (auto-stretched)

[attached image]

Over the next few weeks I'll dive into the algorithm, the aspects of noise, and the possible improvement from full-frame guiding as well as a secondary full-frame Shack–Hartmann sensor (although at the cost of some transmitted light).


Actually, aren't the stars in your actual long exposure going to give you the PSF anyway - no need to look at the guider images at all, which could be misshapen due to being far off-centre as Chris points out.

I'm sure I've read somewhere about this sort of processing - i.e. using the star images to estimate the PSF.

cheers,

Robin


Your 'off-axis PSF' is not going to represent an undistorted star image though? Off-axis stars are often problematic due to distortion, but PHD2 handles the poor star shapes quite well.

ChrisH

Yes - hence there is some OTA PSF.. but it's not as bad as I would expect.

Using a sensitive camera and a full-frame 80:20 prism works; however, coupling that with a wavefront detector really reduces the light reaching the main imaging camera.. the impact then depends on the sensitivity (QE) of the main camera and the additional noise.


Actually, aren't the stars in your actual long exposure going to give you the PSF anyway - no need to look at the guider images at all, which could be misshapen due to being far off-centre as Chris points out.

I'm sure I've read somewhere about this sort of processing - i.e. using the star images to estimate the PSF.

Robin

Yup - however the PSF in the long exposure is subject to noise, while the guider (although noisier per frame) gives many samples, each with its own noise. So for processing, the guider images (especially if you're averaging) are better, because the noise shifts between guider exposures and averages out. The result is better than taking the PSF from a single long exposure.

The technique isn't new as such, as you've pointed out.


Yes - hence there is some OTA PSF.. but it's not as bad as I would expect.

Using a sensitive camera and a full-frame 80:20 prism works; however, coupling that with a wavefront detector really reduces the light reaching the main imaging camera.. the impact then depends on the sensitivity (QE) of the main camera and the additional noise.

You could use a hot/cold mirror arrangement and then use the IR component to calculate the PSF.

ChrisH


Nice write up. I have a minor correction, however. Blind deconvolution is a term technically reserved for those methods where the PSF is unknown and is estimated from the image itself. LR deconvolution requires the PSF to be known; it is therefore not a blind deconvolution method.


Nice write up. I have a minor correction, however. Blind deconvolution is a term technically reserved for those methods where the PSF is unknown and is estimated from the image itself. LR deconvolution requires the PSF to be known; it is therefore not a blind deconvolution method.

Agreed - most of the time the LR in a package is implemented using a Gaussian or equivalent model (often transforming the Gaussian mathematically), but it is possible to apply it with a bespoke kernel. In the IIR LR, I made a single-pass LR using a separated filter (i.e. vertical and horizontal applied) with the start (the PSF input) and the desired output (a single pixel), rather than iterating. However this doesn't do quite such a good job on the PSF compared to the mass adjust. PixInsight's DynamicPSF provides both.. but a bespoke PSF kernel could be used to implement the result of the guider processing.

The main thing here is that the noise in a single long exposure makes it a bad source for the PSF (until stacked with other long exposures), whereas having larger per-frame noise but multiple samples (i.e. guider frames) gives a decent, noise-averaged source.
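A quick numpy illustration of why the multiple-sample route wins. The noise levels here are made up for the demonstration - the point is simply that independent noise between frames averages down as 1/√N:

```python
import numpy as np

rng = np.random.default_rng(42)
truth = np.zeros(64); truth[32] = 100.0   # idealised star profile

# One long exposure: a single noise realisation baked into the estimate.
single = truth + rng.normal(0, 5, truth.shape)

# Many short guider frames: noisier individually (sigma 15 vs 5), but
# the noise is independent between frames, so averaging 25 of them
# leaves residual noise of roughly 15/sqrt(25) = 3.
frames = truth + rng.normal(0, 15, (25,) + truth.shape)
averaged = frames.mean(axis=0)

err_single = (single - truth).std()
err_avg = (averaged - truth).std()
```

Despite each guider frame being three times noisier than the long exposure, the averaged stack ends up the cleaner PSF estimate.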

You could use a hot/cold mirror arrangement then use the IR component to calculate the PSF.

True - there's an IR guider out there at the moment but it needs a star with a specific spectrum to lock on to.

Doing full-frame IR would require stars with specific spectra (the assumption being that the PSF is the same across the spectrum, though), however it would/could provide a brighter image without taking some of the imaging light.

The idea - create an interpolatable PSF definition for each area of the image (call it a PSF field). The deconvolution at the end can then simply use that PSF field to adapt across the full image (i.e. supporting local turbulence). A wavefront sensor would allow you to take this one step further and adjust the sampling based on undoing the wavefront (akin to implementing the deformable mirror in software).
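A minimal sketch of the PSF-field idea - bilinear interpolation between PSFs measured at the four image corners. The grid layout and function name are assumptions for illustration only:

```python
import numpy as np

def interpolate_psf(psf_grid, y_frac, x_frac):
    # psf_grid holds measured PSFs at the four image corners, as
    # ((top-left, top-right), (bottom-left, bottom-right)).
    # (y_frac, x_frac) in [0, 1] is the position in the image where
    # the local PSF is wanted; the blend is plain bilinear weighting.
    (p00, p01), (p10, p11) = psf_grid
    psf = ((1 - y_frac) * (1 - x_frac) * p00
           + (1 - y_frac) * x_frac * p01
           + y_frac * (1 - x_frac) * p10
           + y_frac * x_frac * p11)
    return psf / psf.sum()   # renormalise so the PSF integrates to 1
```

Deconvolving each tile of the image with its interpolated local PSF is then a straightforward loop over tiles.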


Actually another way would be to use phase correlation of the PSF against the image. The corresponding result would show where each of the matches is - effectively giving a processed image.. how that is then processed further I'm looking into too.. It may have the advantage of not being impacted by noise as much (the noise in the guider frame being constant across the result).
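Phase correlation itself is only a few lines of FFT work. A self-contained sketch (the small constant is just a guard against division by zero, not anything standard):

```python
import numpy as np

def phase_correlation(image, template):
    # Cross-power spectrum with its magnitude normalised away: sharp
    # peaks mark where the template (e.g. the guider PSF) matches the
    # image, largely independently of overall brightness.
    F = np.fft.fft2(image)
    G = np.fft.fft2(template, s=image.shape)  # zero-pad to image size
    cross = F * np.conj(G)
    cross /= np.maximum(np.abs(cross), 1e-12)
    return np.fft.ifft2(cross).real
```

Because only the phase survives the normalisation, a copy of the template buried in the image produces a near-delta peak at its offset.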


One option for doing something similar to this in PixInsight:

1. Take raw FITS guider images

2. subtract the guider CCD noise and the long-exposure CCD noise (i.e. read noise)

3. sort the guider frames according to time for each long exposure

then for each long exposure:

4. use a non-aligned registration for the guider frames (i.e. static stacking - as the guider should be pointing at the centre..)

5. put the stack into the DynamicPSF function using the option on the far right tab..

6. deconvolve the long exposure using the dynamic PSF.

7. use flats (debatable whether this is worth doing here or at step 2, due to the likelihood of removing signal).

Then..

8. register as normal

9. stack as normal

10. post process as normal.. 
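Steps 2 and 4 above are the easiest to sketch in code. Here is an illustrative numpy version (outside PixInsight - the helper name is made up), showing dark subtraction followed by a deliberately unregistered stack:

```python
import numpy as np

def calibrate_and_stack(guider_frames, master_dark):
    # Steps 2 and 4 of the workflow above: remove the guider's dark/read
    # signature from each raw frame, then stack WITHOUT registration so
    # the stack preserves the atmospheric smearing that acted during the
    # simultaneous long exposure.
    calibrated = [np.clip(frame - master_dark, 0, None)
                  for frame in guider_frames]
    return np.mean(calibrated, axis=0)
```

The stack this returns is what would then be handed to DynamicPSF (step 5) as the measured PSF.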


Regarding LR deconvolution: it is never a blind deconvolution because it needs a PSF defined as part of its input. LR deconvolution can be implemented with any input PSF. This could be the theoretical Airy disk of the optics, or the Airy disk convolved with a Gaussian to add the effect of seeing, or the empirical Moffat distribution.

Anything that uses an externally estimated PSF without modification is not a blind deconvolution method, because blind deconvolution simultaneously estimates both the PSF and the deconvolved image. It may well need an initial estimate, and for that your method could work. There is one complication, however. Seeing cells are not very big, meaning that a guide scope offset from the optical axis of the main imaging scope by several inches may not be recording the correct information. I am not sure what offset would be OK, but my intuition would be that it is on the order of a few inches, no more.


On the LR - I use an Airy disk for the Lena image (the generated disk covers a band of wavelengths for imaging); the main issue I have is that it's an ideal distribution - in the same way that Gaussian, Moffat etc. are. To me that input is still partly "guesswork". Perhaps not the strict definition (I know you have far more advanced academic knowledge on the subject).

Regarding LR deconvolution: it is never a blind deconvolution because it needs a PSF defined as part of its input. LR deconvolution can be implemented with any input PSF. This could be the theoretical Airy disk of the optics, or the Airy disk convolved with a Gaussian to add the effect of seeing, or the empirical Moffat distribution.

Anything that uses an externally estimated PSF without modification is not a blind deconvolution method, because blind deconvolution simultaneously estimates both the PSF and the deconvolved image. It may well need an initial estimate, and for that your method could work. There is one complication, however. Seeing cells are not very big, meaning that a guide scope offset from the optical axis of the main imaging scope by several inches may not be recording the correct information. I am not sure what offset would be OK, but my intuition would be that it is on the order of a few inches, no more.

Just going from experience with the Pentax and the 383L. The prism sits just at a sweet spot over the top of the sensor for that scope. I know if you push further out, the off-axis guide star shape is both subject to severe aberration and represents the PSF only at that point.

Using the whole field would be better, giving the PSF delta over the entire imaging field. Another intermediate is using an IR band close enough to visible light that atmospheric IR absorption is not a problem - the final issue is getting the spectra right for each star in that band. However, when possible, this very simplified method works, and each guider frame gives a better understanding of the PSF (as a constant over the field).


One issue with this technique is overload diffraction - stars by themselves are likely to be one pixel, however the scope causes diffraction of the signal, which blooms the star size. However, I've been thinking about this and may have a solution. It won't be able to recreate the missing data caused by the overload, but it may give an interesting result.

The essence of this solution is to create an estimator by using a stretching phase correlation. The resulting field gives an estimation of the overload - in fact it gives a single-point estimation field across the entire image, allowing (in theory) for a better reconstruction. The resulting field is a 3D focus field; the narrowest/steepest peaks represent the best fits, so looking at the Z-axis location in the estimation field gives estimates across the image. It also estimates the range of values in the field.

I'm going to code this up and see what pops out. The downside is it will be very slow on a Core2Duo.. 


Archived

This topic is now archived and is closed to further replies.
