SamK

Members
  • Posts: 12
  • Joined
  • Last visited

Reputation

29 Excellent

Profile Information

  • Location: Co Down


  1. Talking about 30 years ago, my only claim to (deconvolution) fame is feeling unworthy in undergraduate supervisions with Steve Gull when he wasn't too busy fixing Hubble's blurred images in the early 90s!
  2. Yes, not the Gaussian. It's like the Wavelets filter set to Default and the Wavelet scheme set to Linear (in the Registax Wavelet tab). Yes, of course, it is a problem, but having tried quite a range of planetary stack images - right from Pic du Midi images down to my own (underwhelming) images, and some more in between - a PSF fairly close to a Lorentz (i.e. a Moffat PSF with beta of 1.0) seems to be a very good starting point for all of them. It's only the FWHM pixel width of the PSF that then needs to be guessed. I've tried to incorporate immediate visual feedback in my program for "tuning" the PSF, in the same way that you get feedback in Registax when you turn layers on/off and move the sliders. Totally agree that without this visual feedback loop, it is too cumbersome to try and guess a PSF. When you tweak the settings for the PSF, you can see there's quite a bit of leeway in what actually gives a good result. It would be interesting to see what PSF settings work best with a wider variety of images, though. For the Lorentz PSF, it's also quite important to taper the tails of the PSF to zero at, say, 5-10 times the FWHM, since its tails (falling off roughly as 1/x^2) don't go to zero as quickly as a Gaussian PSF (a minimal sketch of this tapered PSF is included after these posts).
  3. The idea of scaling down an image, applying a transform, then scaling back up again to 100%, subtracting from the original image, and adding weighted versions of each layer is exactly how Registax wavelets works. The Jupiter images below show the difference images at scales of 0.5px, 1px, 2px, 4px, 8px and 16px. You just combine some fraction (or none) of each of the six layers to get the repaired image (a rough code sketch of this layer scheme follows after these posts). I know this isn't quite what you are suggesting, and it's hard to say what the result would be with your technique without trying it! What I have found so far is that the standard textbook deconvolution techniques - inverse filtering like Wiener, iterative methods like Richardson-Lucy, or the Registax-style sharpening layers - all seem to do a very good job of repairing typical planetary imaging stacks. The fact that all of these quite different techniques seem to converge is a strong indication that they are all restoring an image back towards its original unblurred version.
  4. A Wiener filter with a very small noise-to-signal ratio looks exactly like your first diagram. In this case it's not really a Wiener filter any more, it's just a simple inverse filter. This would work perfectly to restore a blurred image if there were no noise, but is pretty useless for real images. It's the introduction of the extra NSR factor which changes the shape into the useful, repairing Wiener filter (a short code sketch of this follows after these posts). The second plot shows a radial version of my original white circle diagrams, showing a typical repairing Wiener filter. Hopefully you can see the y value of exactly one at the left for the lowest frequency, going up to a peak, then down to less than one for the highest frequencies. There's also an option in my program to show the MTF of the PSF (like your second graph). Here's a plot, bottom right, of the MTF for a Lorentz PSF (albeit with the x-axis reversed). Glad it makes a bit more sense now. I found this pdf quite useful in understanding the progression from a simple inverse filter to the Wiener filter: https://www.robots.ox.ac.uk/~az/lectures/ia/lect3.pdf
  5. For the Wiener filter row, the "FT of filter" displayed is just the Wiener filter used for the repair, i.e. it shows the Wiener filter as used in the equation FT(Restored image) = FT(Input image) x Wiener filter, where the Wiener filter is a (Fourier-transform) function of the PSF. The radius of the white circle changes as you change the FWHM of the point spread function that you use to deconvolve. High frequencies towards the corners are reduced (black), but mid frequencies (white circle) are enhanced, which sharpens at the required scale. The centre, although dark, will be approximately equal to 1, since we don't want to change low frequencies. For the Landweber row, deconvolution is performed by iterations using a point spread function (i.e. no Fourier transforms are performed), so in this case I display the Fourier transform that would have achieved the repaired image (a minimal Landweber sketch follows after these posts). It looks fairly similar to the circle of the Wiener filter. It's the same idea with the top row showing sharpening layers (like Registax). Again, no Fourier transform is used in this technique, just a convolution with a Laplacian sharpening kernel (at different scales), so again I display the FT that would have achieved the repaired image. In this case, choosing the 1px layer will result in a circle of different radius to choosing the 2px layer; a blend of, say, the 1px and 2px layers gives a circle with a radius in between (this is the fiddling around with different layer strengths in Registax). Hence, when you choose a layer scale (or combination of layers) in Registax, you are in effect performing a deconvolution with a particular point spread function with a precise FWHM to match the FWHM of your blurred input image.
  6. After many hours of fiddling around with Registax wavelet settings to process my own solar system images, I've always been curious as to how it actually works. In doing so I've put together my own image sharpening program which does something similar to Registax wavelets. For comparison, I've also added some general-purpose deconvolution techniques which you'll probably be familiar with from other image processing software (like Wiener inverse filtering, Richardson-Lucy, etc.; a bare-bones Richardson-Lucy sketch is included after these posts). In choosing a point spread function to deconvolve with, one surprising result was that the typical stack outputs from AutoStakkert work best with a Lorentz point spread function (with a minor modification); deconvolving with a Gaussian point spread function doesn't really work. Deep-sky images seem to deconvolve best with a Moffat point spread function, which is to be expected - it's already well established that star profiles in long exposures are best approximated by a Moffat function. On the whole, it's unlikely that you can sharpen solar system images much more in this program than you already can in Registax. You can see results from Registax wavelets (sharpening layers), inverse filtering (e.g. Wiener), and iterative deconvolution (e.g. Landweber) below. They all give very similar results, and in all the techniques there's a similar trade-off between less noise but less detail vs more noise but more detail.
     There are some quick-start notes on the first page of the Readme here: https://github.com/50000Quaoar/Deconvolvulator/blob/main/Readme.pdf
     There are some examples of deconvolved images here (move the mouse over an image to see before/after; image credits are on the hyperlinks): https://50000quaoar.github.io/Deconvolvulator/
     The Windows download is here: https://github.com/50000Quaoar/Deconvolvulator/raw/main/Deconvolvulator32.zip
     Example solar system tifs to experiment with are here: https://github.com/50000Quaoar/Deconvolvulator/tree/main/image%20examples
     And the project page is here (with source code in the src folder): https://github.com/50000Quaoar/Deconvolvulator
     If anyone finds it useful, do post here how it compares to other tools you use for solar system image sharpening. The download and the source code are free; you can use them unrestricted for any purpose. The OpenCV and OpenCvSharp components which my program uses have licence information at the end of the Readme.pdf. Sam
  7. It is a bonus galaxy, PGC 147737, at about mag 18. Awesome Horsehead, by the way, Richard. The framing with the Horsehead dead centre works really well. I like the star spikes and the 3D effect of the dust in the lower half of the frame. The red/blue wispy nebula (IC 432) to the left (and slightly up) of the Flame Nebula has come out really well (reminiscent of the Running Man). I'm guessing the massive amount of data from 8hrs at f/3.3 has allowed some deconvolution on the brighter details like the Flame Nebula and the blue reflection nebulae.
  8. Hi Brendan. If you want to get back to basics, Dave Coffin's dcraw has options to get the underlying ADU values from a RAW image. Using the Windows command line (with dcraw.exe in C:\temp\dcraw), e.g.
     dcraw -D -4 -T "C:\temp\dcraw\file.CR2"
     The options are:
     -D  Document mode without scaling (totally raw)
     -4  Linear 16-bit, same as "-6 -W -g 1 1"
     -T  Write TIFF instead of PPM
     The resulting 16-bit TIFF should have a maximum ADU value on saturated stars based on the number of bits of your camera, so around 16,000 for a 14-bit 450D and around 4,000 for a 12-bit 1000D. You'll also see the Bayer matrix pattern if you open the TIFF (a quick Python check of the output is sketched after these posts). There's a Windows build of dcraw.exe here: https://github.com/ncruces/dcraw/releases/tag/v9.28.2-win Sam
  9. Hi Ciarán. With a 60-image stack, drizzle definitely improved the stars and brought out a bit more detail in the nebula (below is drizzle at 5''/px vs standard at 10''/px enlarged in Photoshop). I found about 20 images would normally be required to make drizzle work well. I'd guess the main drawback with, say, 10 images for drizzle is that you'd get a lot more noise than a standard stack, and that you'd want to be more careful to get dither working between frames (in PHD). I think you'll be fine with differential flexure (my setup was substantially more botchy than yours). If you do get a problem, flexure is a lot less of an issue with objects nearer the zenith (when the camera is hanging vertically down)!
  10. Nice image and setup - Sh2-119 on the left gives a nice balance to the North America and Pelican Nebulae. I've also found vintage (and budget) M42 prime lenses work pretty well for wide-angle narrowband and give a lot of versatility to an existing telescopic setup. My theory was that with only one wavelength to contend with in narrowband, there's no need for expensive glass to counter chromatic aberration. A few notes which might help in case anyone else tries this type of setup:
      + Focussing - With patience, I found I could get good enough focus by hand (I measured the FWHM in short test subs and adjusted the lens barrel).
      + Guiding - Much easier than normal when imaging at 10''/px (works fine even if windy!).
      + Drizzle - Since you'll almost certainly be oversampling compared to the seeing, drizzle works well if you have lots of subs.
      + Aperture - With an Ha filter, wide open on a Carl Zeiss 135mm f/3.5 was pretty much the same as stopped down to f/4 and higher; same thing on an Asahi Pentax Takumar 200mm f/4.
      + Mount attachment / differential flexure - Since the Atik 314L+ doesn't have a screw mount on the bottom of the camera, I used a lens collar ring around the lens (see picture below). Even with a tight fit and extra felt tape around the lens barrel I sometimes got a bit of differential flexure (you can compensate for this to some extent using PHD2 comet tracking mode - not recommended, but it worked!). It would be much better to use a side-by-side setup like Ciarán's.
      As an example of what's possible, here's 60 x 300 secs of Ha (Carl Zeiss 135 / Atik 314L+), drizzled to 5''/px, of an infrequently-imaged faint area in Vela (the small and brighter RCW 40 is at top, slightly right). Sam
  11. Here's a close-up of the chain of dark clouds in the Rosette Nebula, using public domain data. I've used IPHAS survey H-alpha as luminance with a colour layer derived from the Digitized Sky Survey red/blue channels (transformed to an SHO-style turquoise/gold colour layer). My own 8'' GSO RC version (last image below) was the inspiration (shown for low-res comparison). Some people see "animals" formed from the dark clouds... I see a leaping jaguar top right (or a spanner!) and an ostrich bottom left. The tiny dark circles to the left of the jaguar's head are referred to as globulettes in Gahm et al. (2007). The smaller circles are estimated to have masses of about 1-2 times that of Jupiter and radii of about 2,000 AU, and are speculated to result in the formation of free-floating interstellar planetary-mass objects.
      Processing:
      - Photoshop (with the Google Nik Collection for output sharpening and noise reduction)
      - Defect cleaning of the IPHAS frames, stacking and mosaic re-mapping using my own software (which occasionally seems to work)
      - Again, my own processing software to do the red/blue to SHO-style colour mapping and also to make a local contrast layer
      Data acknowledgements:
      - H-alpha survey from the 2.5m Isaac Newton Telescope (www.iphas.org/data.shtml), 120-second frames
      - Red and blue from the ESO Digitized Sky Survey archive DSS2 (archive.eso.org/dss/dss)
      - Star colours from the Gaia DR2 archive (gea.esac.esa.int/archive)
      Unprocessed data here
      Cheers, Sam
  12. Thanks again Richard, a good selection of targets requiring different processing techniques. My processing was mainly in Photoshop (and by the looks of the Taurus Molecular Cloud, primarily on the saturation slider) ... Cheers, Sam
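
Code sketches (referenced in the posts above)

A minimal NumPy sketch of the kind of PSF described in post 2: a Lorentz profile (a Moffat with beta = 1) with its tails tapered smoothly to zero a few FWHM out. The function name, the cosine roll-off and the default taper radii are illustrative choices, not taken from the Deconvolvulator source.

import numpy as np

def lorentz_psf(size, fwhm, taper_start=5.0, taper_end=10.0):
    """Radially symmetric Lorentz (Moffat beta = 1) PSF with tapered tails.
    size        -- kernel width/height in pixels (ideally odd)
    fwhm        -- full width at half maximum in pixels
    taper_start -- radius (in multiples of FWHM) where the taper begins
    taper_end   -- radius (in multiples of FWHM) beyond which the PSF is zero
    """
    gamma = fwhm / 2.0                            # half width at half maximum
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    r = np.hypot(x, y)
    psf = 1.0 / (1.0 + (r / gamma) ** 2)          # Lorentz profile
    # Cosine taper: 1 inside taper_start*FWHM, falling to 0 at taper_end*FWHM.
    t = np.clip((r - taper_start * fwhm) / ((taper_end - taper_start) * fwhm), 0.0, 1.0)
    psf *= 0.5 * (1.0 + np.cos(np.pi * t))
    return psf / psf.sum()                        # normalise to unit volume

psf = lorentz_psf(size=127, fwhm=3.0)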
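
A rough Python/OpenCV sketch of the downscale / upscale / subtract / weighted-recombine idea from post 3. The mapping from a layer's pixel scale to a resize factor, and the example weights, are placeholders; Registax's actual wavelet implementation differs in detail.

import cv2
import numpy as np

def detail_layers(img, scales=(0.5, 1, 2, 4, 8, 16)):
    """One difference ('detail') image per scale: smooth the image at that
    scale by downscaling and scaling back up to 100%, then subtract the
    smoothed version from the original."""
    img = img.astype(np.float32)
    h, w = img.shape[:2]
    layers = []
    for s in scales:
        factor = max(1.0, 2.0 * s)                # illustrative scale-to-factor mapping
        small = cv2.resize(img, (max(1, int(w / factor)), max(1, int(h / factor))),
                           interpolation=cv2.INTER_AREA)
        smoothed = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
        layers.append(img - smoothed)             # detail lost at this scale
    return layers

def recombine(img, layers, weights=(0.0, 0.5, 0.3, 0.1, 0.0, 0.0)):
    """Add back a weighted fraction (or none) of each detail layer."""
    out = img.astype(np.float32)
    for layer, wgt in zip(layers, weights):
        out += wgt * layer
    return out   # clip/rescale to the image's original range before saving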
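
A frequency-domain sketch of the inverse filter versus the Wiener filter discussed in post 4. It assumes the PSF array is the same size as the image and centred; the NSR value is a constant you would tune.

import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Wiener deconvolution: W = conj(H) / (|H|^2 + NSR).
    With nsr = 0 this collapses to a plain inverse filter 1/H, which would
    restore a noise-free image perfectly but amplifies noise without limit
    at frequencies where |H| is small."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # psf: same shape as image, centred, sums to 1
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    restored = np.fft.ifft2(np.fft.fft2(image) * W)
    return np.real(restored)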
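
Post 5 contrasts the Wiener filter with Landweber iteration, which works purely in the spatial domain. A minimal sketch of the basic Landweber update is below; the step size and iteration count are illustrative.

import numpy as np
from scipy.signal import fftconvolve

def landweber(blurred, psf, alpha=1.0, iterations=50):
    """Landweber iteration:
    estimate <- estimate + alpha * PSF_flipped * (blurred - PSF * estimate),
    where * denotes convolution. No division in the Fourier domain is involved."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1, ::-1]                 # correlation = convolution with flipped PSF
    estimate = blurred.astype(np.float64).copy()
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        residual = blurred - reblurred
        estimate += alpha * fftconvolve(residual, psf_flipped, mode='same')
    return estimate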
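
Post 6 lists Richardson-Lucy among the techniques compared in the program. For reference, a bare-bones textbook Richardson-Lucy loop looks like the sketch below; it is a Python illustration, not code from Deconvolvulator (which is C# with OpenCV/OpenCvSharp).

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Richardson-Lucy deconvolution:
    estimate <- estimate * (PSF_flipped * (blurred / (PSF * estimate))),
    where * denotes convolution."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1, ::-1]
    estimate = np.full_like(blurred, blurred.mean(), dtype=np.float64)
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / np.maximum(reblurred, eps)
        estimate *= fftconvolve(ratio, psf_flipped, mode='same')
    return estimate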
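
Finally, assuming the dcraw command in post 8 has written file.tiff alongside the CR2, a quick way to inspect the raw ADU values in Python. The tifffile package is an assumption here; any 16-bit-capable TIFF reader would do.

import tifffile   # third-party package, assumed installed (pip install tifffile)

raw = tifffile.imread(r"C:\temp\dcraw\file.tiff")
print("dtype:", raw.dtype, "shape:", raw.shape)
print("max ADU:", raw.max())   # roughly 16,383 on a 14-bit 450D, 4,095 on a 12-bit 1000D
# The Bayer matrix shows up as a repeating 2x2 pattern of raw values:
print(raw[:2, :2])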