
First Mosaics from Home Brew LR Deconvolution data


michael.h.f.wilkinson

Recommended Posts

Woo - going to have to pull my finger out :p

Now what I really want to see is the output of your PSF estimation :D

LR deconvolution relies on an external estimate of the PSF. Blind deconvolution estimates the PSF for you. In my case I found that a Gaussian PSF with sigma=1.4 worked best, purely experimentally.
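As a rough illustration of that approach (a minimal sketch, not the exact pipeline used here), the snippet below runs Lucy-Richardson deconvolution on a stacked frame with an externally supplied Gaussian PSF of sigma = 1.4 pixels, using scikit-image; the file name and iteration count are placeholders.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma=1.4, size=15):
    """Normalised 2-D Gaussian kernel used as the external PSF estimate."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

stack = img_as_float(io.imread("solar_stack.tif"))   # stacked, registered image (placeholder file name)
# 'num_iter' is the keyword in recent scikit-image versions; the count here is arbitrary
deconvolved = richardson_lucy(stack, gaussian_psf(sigma=1.4), num_iter=30)
```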


LR deconvolution relies on an external estimate of the PSF. Blind deconvolution estimates the PSF for you. In my case I found that a Gaussian PSF with sigma=1.4 worked best, purely experimentally.

Yup - some interesting GPU approaches to PSF estimation cropped up whilst researching the subject. One used a line-sharpening process whose output was then fed into the PSF estimation (IIRC that was Microsoft Research). Another used a brute-force GPU particle swarm to estimate it.

At the moment I'm just applying a generated PSF based on the Airy disk for the more constant, scope-based deconvolution, as a simple modifier in real time. I have some longer-standing ideas around using high-frame-rate captures of bright objects to provide references to feed into the main (DSO) image, and using dense optical flow at the higher frame rates. The idea is that the PSFs generated from those analyses are then used as input.

edit - found it:

"PSF Estimation using Sharp Edge Prediction" - uses super-resolution of the PSF/image, which is great for higher frame rates :D


Many of these approaches fare poorly on solar images (lunar might be different). The edges in solar are rarely sharp enough. Real-time applications might also suffer from poor S/N at high frame rates (even in solar).

But for planetary/lunar it may yield some nice results.

There was a paper on super-granulation for solar and PSF estimation too .. lemme check..

Ahh actually the swarm GPU estimation was the same as the solar surface deconv:

"POINT SPREAD FUNCTION ESTIMATION OF SOLAR SURFACE IMAGES WITH A COOPERATIVE PARTICLE SWARM OPTIMIZATION ON GPUS"


Interesting stuff (all in white light), but intended for much higher resolutions and much higher photon counts than we obtain with amateur scopes in H-alpha (I'm not too impressed by the results on Lena either, but I would need to run tests to see whether other methods do better). What is lacking from the results posted on the web is a thorough comparison with other deconvolution methods. I will have a look at Perroni's dissertation later to see if he gives more comparisons there. As I tell all my students: it is not sufficient to show that your method works, you must show that it works better than what we already had. The nice thing about Perroni's work is that the code has been made available.

One thing to bear in mind is that to gain enough S/N to run most deconvolution methods reasonably well, we have to use stacked images. These have an effective PSF that is quite close to Gaussian, although you might consider better Airy-disk approximations as well.
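A toy numerical illustration of that point (an assumption-laden sketch, not anything from this thread): average many copies of a diffraction-limited PSF displaced by random residual seeing/alignment offsets, and the result is already much closer to a Gaussian than to the original Airy pattern.

```python
import numpy as np
from scipy.special import j1
from scipy.ndimage import shift

def airy(size=31, first_ring_px=3.0):
    """Toy diffraction PSF; 'first_ring_px' sets the first-minimum radius in pixels."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r = np.hypot(xx, yy)
    x = np.where(r == 0, 1e-12, 3.8317 * r / first_ring_px)  # 3.8317 = first zero of J1
    psf = (2.0 * j1(x) / x) ** 2
    return psf / psf.sum()

rng = np.random.default_rng(0)
psf0 = airy()
effective = np.zeros_like(psf0)
for _ in range(2000):                                        # 2000 stacked frames (arbitrary)
    effective += shift(psf0, rng.normal(0.0, 1.0, 2), order=1)  # 1 px residual jitter (assumed)
effective /= effective.sum()
# 'effective' now resembles a Gaussian; fitting one gives a sigma to feed into LR.
```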


One thing to bear in mind is that to gain enough S/N to run most deconvolution methods reasonably well, we have to use stacked images. These have an effective PSF that is quite close to Gaussian, although you might consider better Airy-disk approximations as well.

My thinking here is that, in this case, it's better to mask out areas of bad data quality in high-frame-rate situations rather than correct them. Then simply align the areas of high quality and stack. You have a large stream of input, so rejecting a high S/N or outlying alignment area is preferable to spending hours attempting to reclaim it (especially only to find that refraction has effectively left holes, or made the stacked pixels on top inseparable in the data anyway). At high magnification the thermals end up being more of a problem. The main issue is that the sun's surface is never static :/
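As a hedged sketch of that "reject rather than repair" idea (the sharpness metric, the 25% cut and the helper names are my assumptions, and it works per frame rather than per tile for simplicity): score each frame, keep only the best fraction, then register to the sharpest frame by phase cross-correlation and average.

```python
import numpy as np
from scipy.ndimage import laplace, shift
from skimage.registration import phase_cross_correlation

def sharpness(frame):
    """Variance of the Laplacian: a crude proxy for fine-detail quality."""
    return laplace(frame.astype(np.float64)).var()

def select_and_stack(frames, keep_fraction=0.25):
    """Keep the sharpest fraction of frames, register them and average."""
    scores = np.array([sharpness(f) for f in frames])
    keep = np.argsort(scores)[-max(1, int(len(frames) * keep_fraction)):]
    reference = frames[keep[-1]].astype(np.float64)      # sharpest frame as the reference
    stack = np.zeros_like(reference)
    for idx in keep:
        # documented as the shift required to register the frame with the reference
        offset, _, _ = phase_cross_correlation(reference, frames[idx], upsample_factor=10)
        stack += shift(frames[idx].astype(np.float64), offset)
    return stack / len(keep)
```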

For the Pentax, the Airy disk (I provide a wavelength range) works well, as it has a very flat field and reasonable focal alignment of the colours, whereas non-flattened refractors would need an adaptive, field-dependent solution - perhaps using a known reference image while focused at infinity to provide the solution across the field.


My thinking here is that, in this case, it's better to mask out areas of bad data quality in high-frame-rate situations rather than correct them. Then simply align the areas of high quality and stack. You have a large stream of input, so rejecting a high S/N or outlying alignment area is preferable to spending hours attempting to reclaim it (especially only to find that refraction has effectively left holes, or made the stacked pixels on top inseparable in the data anyway). At high magnification the thermals end up being more of a problem. The main issue is that the sun's surface is never static :/

For the Pentax, the Airy disk (I provide a wavelength range) works well, as it has a very flat field and reasonable focal alignment of the colours, whereas non-flattened refractors would need an adaptive, field-dependent solution - perhaps using a known reference image while focused at infinity to provide the solution across the field.

This is precisely the set-up (except that you reject the LOW S/N frames) when you use programs like AS!2. Deconvolution is applied after stacking, rather than to the individual frames.


