Jump to content


When to apply Deconvolution to an image



I've only recently started in imaging and have made my first attempts at deconvolution, but I'm a bit confused about whether I should even be attempting this. To explain:

My current understanding (from reading "Lessons from the Masters", p. 133) is that you should not apply deconvolution to undersampled images. I also read (somewhere) that you are not supposed to apply any form of non-linear stretch before deconvolution, because it would make the Point Spread Function that you are trying to deconvolve vary throughout the image.

Assuming these are correct, how do I know if my image is undersampled or oversampled, and hence whether I should apply deconvolution?

Currently, I'm imaging deep sky objects (according to PinPoint) at 2.39 arc seconds/pixel. Whilst I understand this should be OK from a deep sky imaging perspective, I don't understand how this relates to undersampling/oversampling under various seeing conditions. To explain: say on a really good night the seeing is 2 arc seconds, I presume that a point source (e.g. a star) would be spread over 2 arc seconds and hence always be contained within less than one pixel, so in my case I would be undersampling. Conversely, on a poor night of seeing at (say) 6 arc seconds, given my relative pixel size, I would be oversampling the image. Is this correct?

Alan



Hi Alan,

Indeed, deconvolution must be applied while the image is still in its linear state, and this is generally done very early in the post-processing workflow. In fact, you could apply it before anything else whatsoever. The linear-state limitation is due to the Point Spread Function model generated by DynamicPSF needing to be built from linear (true) data.

Whether you are oversampling or undersampling depends on your equipment's resolution of the night sky as well as your average seeing. If your seeing is, say, 1.5 arcseconds, your night sky is allowing you to distinguish fine details 1.5 arcseconds apart. The Nyquist theorem dictates that to record detail of a given angular size, you need to sample at no more than half that size per pixel; that is, at least two pixels should span the smallest resolvable detail. This ensures the equipment reproduces the night sky at its best possible resolution of fine detail. With 1.5 arcsecond seeing, the ideal sampling rate is then 0.75 arcseconds/pixel, half your seeing. Therefore, if your equipment is sampling the night sky at 2.39 arcseconds/pixel, you are by definition undersampling (in this example). You would be oversampling if you were sampling the night sky at, say, 0.5 arcseconds/pixel.
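To make the arithmetic above concrete, here is a small Python sketch. The function names are my own for illustration; the pixel-scale formula (206.265 × pixel size in microns / focal length in mm) and the half-the-seeing rule of thumb are the ones discussed in this thread.

```python
import math

def pixel_scale(pixel_size_um, focal_length_mm):
    # Standard formula: image scale in arcseconds per pixel.
    return 206.265 * pixel_size_um / focal_length_mm

def ideal_scale(seeing_arcsec):
    # Nyquist-style rule of thumb: sample at half the seeing-limited detail.
    return seeing_arcsec / 2.0

def sampling_state(scale_arcsec_per_px, seeing_arcsec):
    ideal = ideal_scale(seeing_arcsec)
    if scale_arcsec_per_px > ideal:
        return "undersampled"
    if scale_arcsec_per_px < ideal:
        return "oversampled"
    return "critically sampled"

# The figures used in this thread, under 1.5" seeing:
print(sampling_state(2.39, 1.5))  # -> undersampled
print(sampling_state(0.5, 1.5))   # -> oversampled
```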

I've read that the UK's average seeing is 2 arcseconds, so 2.39 arcseconds/pixel is also undersampling there, as 1 arcsecond/pixel would be ideal. Now, don't take this to heart. Seeing varies not only from night to night but also with the elevation of your target above the horizon. It can also vary during a night in the same spot of sky. Moreover, my Borg 45EDII refractor with my QSI 660wsg-8 CCD camera gives me 3.89 arcseconds/pixel. Gibraltar's average seeing is around 1.5 arcseconds, so I'm undersampling my night sky tremendously. But so what? The telescope and camera combination produces stunning widefield images. I just accept that there are very, very fine details I won't see, but that's normal because my FOV is so incredibly large.

You will notice benefits from deconvolution on many of your images, if not all, since by virtue of its mathematics it effectively reverses some of the blurring introduced by the atmosphere while you were imaging. It is therefore indeed worth applying. However, if you have undersampled the night sky quite a bit, as I do with my Borg 45EDII telescope, you won't notice much benefit from applying deconvolution. So really it's not about whether you should apply it to this image or that image, but whether it is worth bothering with for a given image because of your night sky resolution. I would say that 2.39 arcseconds/pixel may give you some benefit from deconvolution, so give it a go! :) I have a tutorial specifically on this, here:

http://lightvortexastronomy.blogspot.com/2015/08/tutorial-pixinsight-sharpening-fine.html

I hope this helps answer your questions. 

Best Regards,

Kayron


Hi Kayron

Thanks for the explanation - that is much clearer.

I've tried applying deconvolution to my images using Maxim DL (which I primarily use for image acquisition). As you say, it seems to help a little with image sharpness (even though I'm probably mainly undersampling) and it does reduce the average FWHM of the stars, so all in all a net benefit, although nothing dramatic. In the context of deconvolution, I hadn't really thought through the implications of seeing versus elevation above the horizon. I presume the more atmosphere you are looking through, the worse the average seeing and hence the greater the potential benefit of applying deconvolution.

I've had a look at your tutorial -  although I don't have PixInsight, it does explain the concept of deconvolution well  :smiley:.   

I decided to go down the Maxim DL (mainly for image acquisition) and Photoshop CC (for post-processing) route. I'm using Maxim DL for deconvolution, which provides Lucy-Richardson and Maximum Entropy options. I'm not sure what the differences are between the two methods, but I have found that the results are extremely sensitive to the input value of the PSF radius (for reasons that I don't understand, the suggested PSF radius value for a particular image also seems to be twice the optimum value).
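For what it's worth, the core Lucy-Richardson iteration is simple enough to sketch in a few lines of Python with numpy/scipy. This is a toy illustration of the algorithm, not Maxim DL's implementation, and the Gaussian PSF helper is my own assumption for the demo:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, fwhm):
    # FWHM -> sigma for a Gaussian: sigma = FWHM / 2.355
    sigma = fwhm / 2.355
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, iterations=30):
    # Start from a flat estimate and iteratively correct it by the
    # ratio of the observed data to the reblurred estimate.
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(reblurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demo: blur a single "star" with a known PSF, then recover it.
true_sky = np.zeros((65, 65))
true_sky[32, 32] = 1.0
psf = gaussian_psf(21, fwhm=5.0)
blurred = fftconvolve(true_sky, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(restored.max() > blurred.max())  # the star's peak sharpens
```

The demo also hints at why the method is so sensitive to the PSF width: the iteration reblurs the estimate with whatever PSF you hand it, so a radius that is too large or too small mis-models the blur at every step.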

Anyway, thanks for your help.

Alan


Deconvolution really comes into its own in lunar and solar imaging, I feel, where I typically sample at the Nyquist rate, or a touch beyond. Note that the PSF is not just the seeing disk, it is the seeing disk convolved (blurred) with the Airy disk of the optics, and any motion blur due to tracking errors.
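If each of those blur components is roughly Gaussian, their widths combine approximately in quadrature, which gives a back-of-the-envelope estimate of the total PSF width. A sketch, using the standard ~1.02 λ/D approximation for the Airy disk FWHM (the function names are mine):

```python
import math

def airy_fwhm_arcsec(aperture_mm, wavelength_nm=550.0):
    # FWHM of the Airy pattern is roughly 1.02 * lambda / D (in radians).
    radians = 1.02 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return radians * 206265.0

def total_fwhm(seeing_arcsec, airy_arcsec, tracking_arcsec=0.0):
    # Gaussian approximation: independent blurs add in quadrature.
    return math.sqrt(seeing_arcsec**2 + airy_arcsec**2 + tracking_arcsec**2)

airy = airy_fwhm_arcsec(200)            # ~0.58" for a 200 mm aperture
print(round(total_fwhm(2.0, airy), 2))  # seeing-dominated total: ~2.08"
```

Under typical deep-sky seeing the atmospheric term dominates, which is why the seeing disk is often treated as the whole PSF; at lunar/solar sampling rates the optical and tracking terms matter much more.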


  • 1 month later...

I searched deconvolution and found this thread.

Where in the image-processing workflow should it be applied (based on the workflow I'm learning / improving at the moment)?

->Register

->Calibrate

->Stack

->Liberate Channels from debayered stack

->Stretch Channels

->Colorise Channels

->Adjust Channels

->Merge Channels

->Noise reduction (of sorts)

->Final (ish) image


Reading this thread, I tried applying LR deconvolution to a stacked but unaltered image of M31, then tried using it as a luminance layer.

Although when the deconvolved image was zoomed in it did sharpen the detail, particularly the stars, the amount by which it increased noise in 'flatter' parts of the image was considerable.

When combined with an RGB layer, the sharpening impact was minimal but the noise was very visible. I went back and used much gentler maximum entropy deconvolution, and again, once the luminance layer was adequately stretched the noise was excessive but little or no useful extra detail was appearing.

Maybe with more knowledge and practice I could do better?


Jonk - I'm not an expert at this..... :smiley:  but my understanding is that you apply deconvolution after stacking of subframes and prior to any non-linear stretching (e.g. Digital Development Processing). To reduce the overall noise in an image you should apply all sharpening (including deconvolution) only to the luminance channel. If you have a one-shot colour camera and you wish to follow LRGB processing techniques, then you'd need to create a synthetic luminance (L) channel. If you don't wish to do this, you could apply deconvolution to the colour image, in which case it would come after your "Liberate Channels from debayered stack" step. The disadvantage of applying deconvolution (or other forms of sharpening) to the entire colour image is that you will increase the chromatic noise compared to LRGB-based processing.
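Creating a synthetic luminance from a debayered colour image amounts to a weighted sum of the channels. A minimal numpy sketch; the Rec. 709 weights here are one common choice, not the only valid one (a plain average of the three channels also works):

```python
import numpy as np

def synthetic_luminance(r, g, b, weights=(0.2126, 0.7152, 0.0722)):
    # Weighted sum of the colour channels; Rec. 709 weights assumed.
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

# Tiny 1x1 example with arbitrary linear channel values:
r = np.array([[0.2]])
g = np.array([[0.5]])
b = np.array([[0.1]])
lum = synthetic_luminance(r, g, b)
```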

Stub - deconvolution works by exchanging apparent SNR for apparent resolution. So the fundamental challenge is to apply deconvolution only to those image areas that can tolerate an increase in noise. Although I haven't tried this yet, my thinking is to apply deconvolution to an entire image and then use Photoshop mask techniques to apply the deconvolution selectively. The advantage of this method is that it can be extended to a multi-layer deconvolution strategy: take the original image and apply a low-strength deconvolution, save the result; take the original image and apply a high-strength deconvolution, save the result. Then use Photoshop masks to apply the two deconvolution results selectively to the image - high strength to the high-SNR areas and low strength to the lower-SNR areas. This technique is demonstrated by Ken Crawford in one of his videos: https://vimeo.com/53216842
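That two-strength masking idea can be sketched with numpy. The crude luminance-threshold mask below is my own illustration; in practice you would build the mask in Photoshop as described above:

```python
import numpy as np

def snr_mask(image, noise_floor):
    # Crude mask: 0 in the background, ramping to 1 in bright (high-SNR) areas.
    span = max(image.max() - noise_floor, 1e-12)
    return np.clip((image - noise_floor) / span, 0.0, 1.0)

def blend_deconvolutions(strong, weak, mask):
    # High-SNR pixels (mask -> 1) take the strong deconvolution result,
    # low-SNR pixels (mask -> 0) take the gentle one.
    return mask * strong + (1.0 - mask) * weak

# Toy 2x2 image: one background pixel, one bright star pixel, two mid-tones.
image = np.array([[0.05, 0.9],
                  [0.10, 0.5]])
mask = snr_mask(image, noise_floor=0.1)
# Stand-ins for the two saved deconvolution results:
result = blend_deconvolutions(strong=image * 1.2, weak=image, mask=mask)
```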

On a practical note, I've found that CCDstack from CCDware implements an excellent deconvolution algorithm, so I'm now performing all my deconvolution in CCDstack rather than Maxim DL.

Alan

