Registax wavelets vs Deconvolution for solar system images


After many hours of fiddling around with Registax wavelet settings to process my own solar system images, I've always been curious about how it actually works. To find out, I've put together my own image sharpening program which does something similar to Registax wavelets. For comparison, I've also added some general-purpose deconvolution techniques which you'll probably be familiar with from other image processing software (Wiener inverse filtering, Richardson-Lucy, etc.). In choosing a point spread function to deconvolve with, one surprising result was that the typical stack outputs from Autostakkert work best with a Lorentz point spread function (with a minor modification). Deconvolving with a Gaussian point spread function doesn't really work. Deep-sky images seem to deconvolve best with a Moffat point spread function, which is to be expected: it's already well established that star profiles in long exposures are best approximated by a Moffat function.
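For anyone who wants to experiment, the three radial profiles mentioned above can be sketched in a few lines of numpy. This is illustrative only, using the usual FWHM parameterisations - it is not the actual code from my program:

```python
import numpy as np

def gaussian_psf(r, fwhm):
    # Gaussian radial profile, parameterised by its FWHM
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-r**2 / (2.0 * sigma**2))

def moffat_psf(r, fwhm, beta):
    # Moffat radial profile; alpha chosen so the half-maximum
    # falls at r = fwhm / 2
    alpha = fwhm / (2.0 * np.sqrt(2.0**(1.0 / beta) - 1.0))
    return (1.0 + (r / alpha)**2) ** (-beta)

def lorentz_psf(r, fwhm):
    # Lorentz profile is just a Moffat with beta = 1
    return moffat_psf(r, fwhm, beta=1.0)
```

With beta = 1 the Moffat reduces exactly to the Lorentz profile, which is why the two behave so similarly; larger beta values make the tails fall off more like a Gaussian.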

On the whole, it's unlikely that you can sharpen solar system images much more in this program than you already can in Registax. You can see results from Registax wavelets (sharpening layers), inverse filtering (e.g. Wiener), and iterative deconvolution (e.g. Landweber) below. They all give very similar results. In all the techniques there's a similar trade-off between less noise with less detail and more noise with more detail.

There are some quick start notes on the first page of the Readme here:

There are some examples of deconvolved images here (move mouse over image to see before/after):
Image credits are on the hyperlinks

The Windows download is here:

Example solar system tifs to experiment with are here:

And the project page is here (with source code in the src folder):

If anyone finds it useful, do post here how it compares to other tools you use for solar system image sharpening.

The download and the source code are free; you can use them unrestricted for any purpose. The OpenCV and OpenCvSharp components which my program uses have licence information at the end of the Readme.pdf.





Could you explain the image to the right of each row - the "FT of filter"?

Is that the separation filter you used to extract a particular layer for amplification, and if so, why did you use only a single layer and not multiple layers?


For the Wiener filter row, the "FT of filter" displayed is just the Wiener filter used for the repair, i.e. it shows Wiener Filter as used in the equation:

FT(Restored Image) = FT(Input image) x Wiener Filter
where the Wiener filter is a function (in the Fourier domain) of the PSF.

The radius of the white circle changes as you change the FWHM of the point spread function you deconvolve with. High frequencies towards the corners are reduced (black), but mid frequencies (the white circle) are enhanced, which sharpens at the required scale. The centre, although dark, is approximately equal to 1, since we don't want to change low frequencies.
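In code, the textbook construction looks roughly like this (an illustrative numpy sketch of the standard Wiener formula, not the code my program actually runs):

```python
import numpy as np

def _pad_psf(psf, shape):
    # Embed the normalised PSF in a full-size array, centred on the origin
    h = np.zeros(shape)
    ph, pw = psf.shape
    h[:ph, :pw] = psf / psf.sum()
    return np.roll(h, (-(ph // 2), -(pw // 2)), axis=(0, 1))

def fft_blur(image, psf):
    # Circular convolution with the PSF, via the FFT
    H = np.fft.fft2(_pad_psf(psf, image.shape))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

def wiener_restore(image, psf, nsr):
    # FT(restored) = FT(input) x Wiener filter, with
    # Wiener filter = conj(H) / (|H|^2 + NSR)
    H = np.fft.fft2(_pad_psf(psf, image.shape))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))
```

The NSR term is what rolls the filter off again at high frequencies: where |H| is large the filter is close to 1/H, but where |H| is small the NSR dominates and the filter stays small instead of blowing up.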

For the Landweber row, deconvolution is performed by iterations using a point spread function (i.e. no Fourier Transforms are performed). So in this case, I display the Fourier Transform that would have achieved the repaired image. It looks fairly similar to the circle of the Wiener filter.

It's the same idea with the top row showing sharpening layers (like Registax). Again, no Fourier Transform is used in this technique, just a convolution with a Laplacian sharpening kernel (at different scales). So, again, I display the FT that would have achieved the repaired image. In this case, choosing the 1px layer will result in a circle of a different radius to choosing the 2px layer. A blend of, say, the 1px and 2px layers gives a circle with a radius in between (this is the fiddling-around bit with different layer strengths in Registax). Hence, when you choose a layer scale (or combination of layers) in Registax, you are in effect performing a deconvolution with a particular point spread function whose FWHM matches the FWHM of your blurred input image.
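A minimal sketch of the sharpening-layers idea (illustrative numpy only, with Gaussian blurs standing in for the exact filters Registax uses; the scales and weights are arbitrary examples):

```python
import numpy as np

def gaussian_blur(image, sigma):
    # Gaussian blur via the FFT (circular boundaries);
    # the FT of a Gaussian is a Gaussian
    n0, n1 = image.shape
    f0 = np.fft.fftfreq(n0)[:, None]
    f1 = np.fft.fftfreq(n1)[None, :]
    H = np.exp(-2.0 * np.pi**2 * sigma**2 * (f0**2 + f1**2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

def sharpen_layers(image, scales=(1, 2, 4, 8), weights=(0.5, 0.3, 0.1, 0.0)):
    # Each layer is the detail lying between two successive blur scales;
    # the repaired image adds back a weighted fraction of each layer
    result = image.copy()
    previous = image
    for scale, weight in zip(scales, weights):
        blurred = gaussian_blur(image, scale)
        layer = previous - blurred
        result = result + weight * layer
        previous = blurred
    return result
```

Moving a Registax slider corresponds to changing one of the weights; setting all the weights to zero leaves the image untouched.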


Ok, I get it now - it's a boost vs frequency image.

It would be much clearer if you plotted a radial cross-section as a graph (similar to an MTF diagram), as the image does not clearly show the values (I had no idea that the central part is 1.0 - it looks close to zero - hence the question above).

What puzzles me are high frequency values.

In order to restore the image - in the simple case - you want to divide by the FT of the PSF. That follows from the convolution theorem, which says that the FT of a convolution of two functions is the product of their respective FTs.

Say the image was blurred with a simple Gaussian kernel (although we don't know the exact kernel, and that is the biggest problem). The FT of a Gaussian is a Gaussian, so the FT of the original image was multiplied by that Gaussian.

In order to restore the image, we need to divide the FT by that Gaussian again, or multiply by the inverse of the Gaussian (in the general case, multiply in the frequency domain by the inverse of the FT of the blur kernel).

For a Gaussian kernel, I would expect the image to look like this:


Or as a plot of the cross-section:


(Ignore the values on the Y axis - I did not normalize the kernel.)
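That division step is easy to verify numerically - a quick numpy sketch of the noise-free 1D case:

```python
import numpy as np

# Noise-free demonstration of the convolution theorem in 1D:
# blurring multiplies the spectrum by FT(kernel), so dividing the
# spectrum by FT(kernel) restores the original signal exactly.
rng = np.random.default_rng(0)
signal = rng.random(64)

x = np.arange(64)
# Gaussian kernel centred on the origin (wrapped for circular convolution)
kernel = np.exp(-0.5 * ((x + 32) % 64 - 32.0) ** 2)
kernel /= kernel.sum()

K = np.fft.fft(kernel)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))
restored = np.real(np.fft.ifft(np.fft.fft(blurred) / K))
```

The restoration is exact here only because there is no noise: add even a little noise to `blurred` and the division blows up wherever FT(kernel) is small - which is exactly the problem the NSR term in the Wiener filter addresses.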

Do you know why high frequencies are not touched?



Link to comment
Share on other sites

A Wiener filter with a very small noise-to-signal ratio looks exactly like your first diagram. In that case it's not really a Wiener filter any more, just a simple inverse filter. This would restore a blurred image perfectly if there were no noise, but is pretty useless for real images.


It's the introduction of the extra NSR factor which changes the shape of the useful repairing Wiener filter. The second plot shows a radial version of my original white-circle diagrams, i.e. a typical repairing Wiener filter. Hopefully you can see the y value of exactly one at the left for the lowest frequency, going up to a peak, then down to less than one for the highest frequencies.


There's also an option in my program to show the MTF of the PSF (like your second graph). Here's a plot (bottom right) of the MTF for a Lorentz PSF (albeit with the x-axis reversed):



Glad it makes a bit more sense now. I found this PDF quite useful in understanding the progression from a simple inverse filter to the Wiener filter:




What do you think about frequency decomposition approach?

I think it is very similar to wavelets, but I'm not 100% sure how wavelets work. There is an excellent paper on wavelets that I need to revisit in order to figure out a possible approach.

I'm currently thinking of doing frequency decomposition by scaling. Here is a simple explanation of the approach:

You take the original image and scale it down by some factor - say to 80% of the original size. The scaling method is key, as it provides a certain frequency response.

Then you take that scaled image and scale it back up to 100% again using a certain resampling method. Such an image will effectively have had a low-pass filter applied to it, and you can subtract it from the original image to get the high-frequency component.

Doing that many times will slice the frequency domain into small bands, each of which you can simply multiply by some value. Here is a graphic representation in the frequency domain:


Here is what decomposition to 50% looks like:


I just generated some Gaussian noise and created a 50% resampled version using bilinear resampling. Then I scaled that back up again and got a low-detail image.

The difference of the two will be the high-frequency image:


We can repeat this process and get multiple scale divisions (in fact it needs to scale as 1/X to have a linear character in the frequency domain, so 1/2, 1/3, 1/4 and so on of the size).

Then it is a simple matter of multiplying each scale by a "boost" factor - which is the inverse of the attenuation factor.

The catch is to apply a resampling filter that produces nice frequency separation.
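A rough numpy sketch of the idea (using simple bilinear resampling as the stand-in filter - as noted, the choice of resampling filter is the catch):

```python
import numpy as np

def resize(image, shape):
    # Separable bilinear resampling (a simple stand-in for a
    # better-behaved resampling filter)
    def resample_axis(a, n, axis):
        old = np.linspace(0.0, 1.0, a.shape[axis])
        new = np.linspace(0.0, 1.0, n)
        return np.apply_along_axis(lambda v: np.interp(new, old, v), axis, a)
    out = resample_axis(image, shape[0], 0)
    return resample_axis(out, shape[1], 1)

def scale_decompose(image, factors=(2, 3, 4)):
    # Split into bands by repeated down/up-scaling: each band is the
    # detail lost between successive low-pass versions of the image
    bands = []
    previous = image
    for f in factors:
        small = resize(image, (image.shape[0] // f, image.shape[1] // f))
        lowpass = resize(small, image.shape)
        bands.append(previous - lowpass)
        previous = lowpass
    bands.append(previous)      # remaining low frequencies
    return bands

def recombine(bands, boosts):
    # Boost each band, then sum to rebuild the image
    return sum(b * w for b, w in zip(bands, boosts))
```

Because the bands telescope, recombining with all boosts equal to 1 returns the original image exactly, whatever resampling filter is used; the filter choice only governs how cleanly the frequencies are separated between bands.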


The idea of scaling down an image, applying a transform, scaling it back up again to 100%, subtracting from the original image, and adding weighted versions of each layer is exactly how Registax wavelets work. The Jupiters below show the difference images at scales of 0.5px, 1px, 2px, 4px, 8px and 16px. You just combine some fraction (or none) of each of the six layers to get the repaired image.

I know this isn't quite what you are suggesting. It's hard to say what the result would be with your technique without trying it!

What I have found so far is that the standard textbook deconvolution techniques - inverse filtering like Wiener, iterative methods like Richardson-Lucy, or the Registax-style sharpening layers - all seem to do a very good job of repairing typical planetary imaging stacks. The fact that all of these quite different techniques converge on similar results gives a strong indication that they are all restoring an image back towards its original unblurred version.



27 minutes ago, SamK said:

The Jupiters below show the difference images at scales of 0.5px, 1px, 2px, 4px, 8px and 16px. You just combine some fraction (or none) of each of the six layers to get the repaired image.

These are standard / dyadic filters, not Gaussian, right? The Gaussian filter uses the same scheme as unsharp masking (which uses a single decomposition).

I think the wavelet / multi-frequency approach is a very good idea, and it just depends on the filter used to separate the frequencies.

The problem with the deconvolution approach is that you need to know or guess the kernel. Wavelets are a sort of guided kernel guessing - we use our visual feedback as a guide to how good the current kernel is. Regular deconvolution is missing that, and it also models the kernel with a single model (say Moffat, Gaussian, Lorentz or similar), while the actual blur kernel can be something in between or totally different.

In either case, all these methods have a strong mathematical background and do produce proper results - actual detail being sharpened. Sometimes there are artifacts, of course, if the parameters are wrong.


11 minutes ago, vlaiv said:

These are standard / dyadic filters, not gaussian, right?

Yes, not the Gaussian. It's like the Wavelet filter set to Default and the Wavelet scheme set to Linear (in the Registax Wavelet tab).

17 minutes ago, vlaiv said:

Problem with deconvolution approach is the fact that you need to know or guess kernel.

Yes, of course it's a problem, but having tried quite a range of planetary stack images - from Pic du Midi images down to my own (underwhelming) images, and some more in between - a PSF fairly close to Lorentz (i.e. a Moffat PSF with beta of 1.0) seems to be a very good starting point for all of them. It's only the FWHM pixel width of the PSF that then needs to be guessed.

I've tried to incorporate immediate visual feedback in my program when "tuning" the PSF, in the same way that you get feedback in Registax when you turn layers on/off and move the sliders. I totally agree that without this visual feedback loop, it's too cumbersome to try and guess a PSF.

When you tweak the settings for the PSF, you can see there's quite a bit of leeway in what actually gives a good result. It would be interesting to see what PSF settings work best with a wider variety of images, though. For the Lorentz PSF, it's also quite important to taper the tails of the PSF to zero at, say, 5-10 times the FWHM, since its tails fall to zero much more slowly than a Gaussian PSF's.
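A simplified sketch of one way to do the tapering (a cosine roll-off applied to the Lorentz profile; illustrative only, not necessarily the exact taper used in the program):

```python
import numpy as np

def tapered_lorentz_psf(r, fwhm, taper_radius=None):
    # Lorentz (Moffat beta = 1) profile with a cosine taper that
    # forces the slowly decaying tails to zero at taper_radius
    if taper_radius is None:
        taper_radius = 8.0 * fwhm      # within the 5-10 x FWHM range above
    alpha = fwhm / 2.0                 # for beta = 1, half-maximum at fwhm/2
    psf = 1.0 / (1.0 + (r / alpha) ** 2)
    taper = 0.5 * (1.0 + np.cos(np.pi * np.clip(r / taper_radius, 0.0, 1.0)))
    return psf * taper
```

Inside a few FWHM the taper barely changes the profile, but it guarantees the PSF is exactly zero beyond the taper radius, so the kernel can be safely truncated there.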


7 hours ago, SamK said:

After many hours of fiddling around with Registax wavelet settings to process my own solar system images, I've always been curious about how it actually works. To find out, I've put together my own image sharpening program which does something similar to Registax wavelets.


Really nice. Kudos!

It took some fiddling to get it to work under Linux, but I got there in the end. The stock WINE installation doesn't cut it because the .NET framework is not included. To assist others: I followed the instructions at https://www.dedoimedo.com/computers/wine-dotnet-mono.html

TL;DR: install winetricks and mono-complete, then run "winetricks dotnet45". Wait for ages, accept the licensing details and ignore the error messages.



Another thing that comes to mind - planetary images are often oversampled, and there is no point in trying to restore frequencies that are not there.

To account for that, you should be using standard kernels that emulate the seeing / stacking-induced blur (which is really Gaussian, because of the central limit theorem) and the optics PSF - which is an Airy pattern with a clear cutoff in the frequency domain.
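To put a number on the oversampling point: the aperture passes no detail beyond the diffraction cutoff of D/λ cycles per radian, so Nyquist sampling needs a pixel scale no finer than λ/2D. A quick numpy sketch (the figures here are my own illustrative numbers):

```python
import numpy as np

def nyquist_pixel_scale_arcsec(aperture_m, wavelength_m):
    # The optics pass nothing beyond the diffraction cutoff D / lambda
    # (cycles per radian), so Nyquist sampling requires a pixel scale
    # of at most lambda / (2 D) radians
    radians = wavelength_m / (2.0 * aperture_m)
    return np.degrees(radians) * 3600.0
```

For example, a 200 mm aperture at 550 nm gives roughly 0.28 arcsec per pixel; sampling much finer than that just records frequencies the optics never delivered.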


10 minutes ago, Xilman said:

Yay, there's a blast from the past! 30 years ago I used to work at robots.ox.ac.uk alongside Andrew Zisserman.

Talking of 30 years ago, my only claim to (deconvolution) fame is feeling unworthy in undergraduate supervisions with Steve Gull when he wasn't too busy fixing Hubble's blurred images in the early 90s!

