

Mathematica for Image Processing - Pretty Powerful


Hugechris

Recommended Posts

I recently upgraded my Mathematica license to the latest version 8, which adds some very powerful image processing methods.

Shown here:

http://www.wolfram.com/mathematica/new-in-8/comprehensive-image-processing-environment/index.html

(Links to the functions are given here: http://reference.wolfram.com/mathematica/guide/ImageProcessing.html)

  • Methods include the expected suite: image scaling, masking, addition, subtraction, histogram manipulation and matching, gamma and contrast adjustment, and mask construction.
  • Plus there are some very smart deconvolution algorithms that let you supply your own point spread function, either as an image or in functional form. This looks very good from what I have played with so far.
  • There are denoising and sharpening algorithms, plus registration algorithms that allow for custom alignment. There is a separate Wavelet module and more functions than you will ever need for image enhancement.
  • In principle you could design your own custom stacking / processing application once you have built your own Mathematica notebook.
  • There is also a live capture mode.

As Mathematica is a maths program rather than an image processing package, you need to code your own processing algorithms, which will appeal to some. However, the ability to fine-tune and customise your processing without the constraints of black-box software is great.
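To give a flavour of how little code a basic operation needs once an image is just an array of numbers, here is a minimal gamma-adjustment sketch in Python/NumPy (a free alternative mentioned later in this thread); the function name and the [0, 1] normalisation are my own assumptions, not anything from the Mathematica notebook:

```python
import numpy as np

def adjust_gamma(img, gamma):
    """Apply a gamma curve to an image normalised to [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

# A flat mid-grey test frame: gamma < 1 brightens, gamma > 1 darkens.
frame = np.full((4, 4), 0.25)
brighter = adjust_gamma(frame, 0.5)   # 0.25 ** 0.5 = 0.5
darker = adjust_gamma(frame, 2.0)     # 0.25 ** 2.0 = 0.0625
```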

Here is an example of some image sharpening that I could not achieve using just the Canon Professional Tool.

Mathematica Image Processing Example



You must have some pretty good coding skills. I recall using programs like Mathematica at uni, but I wouldn't even attempt to code my own image processing tools. I look forward to seeing some of the results of your efforts on SGL.

Clear skies,

Matthew


You do need some coding skills for this, but Mathematica is not a full-on coding environment; if you are used to scripting languages, the step up to Mathematica is not that great.

I am sure there are plenty of astronomers out there keen to squeeze the best out of their images.

Registax is good, but I feel there is a lot more I could get out of it if I could get into the engine and tweak it a bit, especially the wavelets filter, which tends to saturate a lot of images when ramped up. When I was playing with the sharpening algorithm in Mathematica I could adjust the histogram on the fly to avoid saturation and apply custom noise adjustment.
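The "adjust the histogram on the fly to avoid saturation" idea can be sketched as a percentile-based linear stretch; this NumPy version is a hypothetical stand-in for the Mathematica code, and the clip percentiles are assumed parameters:

```python
import numpy as np

def stretch(img, lo_pct=0.5, hi_pct=99.5):
    """Linear stretch between two histogram percentiles: only the extreme
    tails are clipped, so a sharpening boost does not saturate the highlights."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)

rng = np.random.default_rng(0)
frame = rng.uniform(0.2, 0.6, size=(64, 64))   # dull, low-contrast frame
stretched = stretch(frame)                      # now spans the full [0, 1] range
```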

The noise reduction algorithms alone are a huge step up from the tools I have (GIMP, Canon Professional). The deconvolution algorithm is very interesting, especially as you can provide your own point spread function.

So a long-exposure measurement of a star through your scope can be used to deconvolve your optics and imaging setup, without the approximations used by black-box packages.
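For readers who want to see the principle, here is a hedged Richardson-Lucy sketch in NumPy, one classic deconvolution scheme that accepts a measured PSF. I am not claiming this is the algorithm Mathematica uses; the FFT-based circular convolution and the fixed iteration count are simplifying assumptions:

```python
import numpy as np

def richardson_lucy(image, psf, iterations=30):
    """Richardson-Lucy deconvolution with a measured PSF.
    psf must be the same shape as the image, sum to 1, and be centred
    at pixel (0, 0) for FFT (circular) convolution."""
    image = np.maximum(image, 0.0)
    psf_f = np.fft.rfft2(psf)
    psf_mirror_f = np.conj(psf_f)    # correlation = convolution with flipped PSF
    estimate = np.full_like(image, image.mean())
    for _ in range(iterations):
        blurred = np.fft.irfft2(np.fft.rfft2(estimate) * psf_f, s=image.shape)
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * np.fft.irfft2(np.fft.rfft2(ratio) * psf_mirror_f,
                                            s=image.shape)
    return estimate

# Blur a synthetic star with a Gaussian "measured" PSF, then recover it.
n = 64
y, x = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf = np.fft.ifftshift(psf / psf.sum())       # centre the PSF at (0, 0)
star = np.zeros((n, n)); star[n // 2, n // 2] = 1.0
blurred = np.fft.irfft2(np.fft.rfft2(star) * np.fft.rfft2(psf), s=star.shape)
restored = richardson_lucy(blurred, psf)
```

The peak of the restored star is much sharper than the blurred input, which is the whole point of feeding in your own PSF.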

I have quite a bit of spare time on my hands (weather!!!) to build a processing framework, and will share the results.


Yup... Mathematica is good, but there's a step up for most. It's possible to write camera sources for it.

£195 excl. VAT for hobbyists.

PixInsight also allows the use of external PSFs.


It's certainly not cheap, but it is cheaper than Photoshop and will handle 16-bit (up to 32-bit) images.

Once the process has been set up, it's quite simple to process an image.

It has all the functionality of Registax and more. I am in the process of building a registration and alignment algorithm for large images, combined with a multiscale sharpening tool with a bit more functionality than Registax.


We have Mathematica and MATLAB at work. Both are powerful but not cheap. I know MATLAB handles floating-point (32- and 64-bit) and integer images (up to 64-bit). It also handles VERY big images well (3.9 Gpixel was the last one I tried; it took a while, but it worked). Mathematica can generally do the same things.


A 3.9 Gpixel image, that's huge; pretty interesting job if you need to process images of that size. It will be a little while before affordable DSLRs reach the 64-bit level, though I am sure Nokia will build one into their new smartphone after the next iPhone release. MATLAB is a cleaner coding environment, but is up in Celestron EdgeHD 14" territory in terms of price for full functionality.

So far I have reproduced the Registax wavelet sharpening at different scales; it's looking pretty good, and I have a much cleaner handle on the noise and intensity saturation than Registax.


Here is an example of the wavelet sharpening in Mathematica. The sharpening was applied to the 3rd detail layer of the wavelet transform and the image reconstructed.

Mathematica custom wavelet enhancement
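For anyone curious what "sharpen the 3rd detail layer and reconstruct" means mechanically, here is a small à-trous (stationary) wavelet sketch in NumPy, the same family of transform Registax uses. The B3-spline kernel, the wrap-around edge handling, and the function names are my assumptions rather than the actual notebook code:

```python
import numpy as np

def _smooth(img, step):
    """Separable smoothing with the B3-spline kernel [1,4,6,4,1]/16,
    taps spaced `step` pixels apart (wrap-around edges for simplicity)."""
    k = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]
    offsets = [-2 * step, -step, 0, step, 2 * step]
    for axis in (0, 1):
        img = sum(w * np.roll(img, o, axis=axis) for w, o in zip(k, offsets))
    return img

def wavelet_sharpen(img, boosts):
    """A-trous wavelet decomposition: detail layer i is multiplied by
    boosts[i] before reconstruction (1.0 leaves a layer untouched)."""
    current, out = img, 0.0
    for level, boost in enumerate(boosts):
        smoothed = _smooth(current, 2 ** level)
        out = out + boost * (current - smoothed)   # boosted detail layer
        current = smoothed
    return out + current                           # add back the low-pass residual

# Boost only the 3rd detail layer, leaving the others alone.
rng = np.random.default_rng(1)
frame = rng.random((32, 32))
sharpened = wavelet_sharpen(frame, [1.0, 1.0, 2.0, 1.0])
```

With all boosts set to 1.0 the reconstruction is exact, which is a handy sanity check on the transform.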

My next topic is to build some custom stacking algorithms that will also reduce noise and avoid blooming issues and the need for image masking.


These are hi-res satellite images of a city and its environs. We are now exploring algorithms to handle a 1.5 Tpixel image. If you image a country at high resolution with aerial photography or satellites, this is what you might get.


1.5 Tpixel is even more impressive. Handling and transmitting that amount of data, never mind the processing, requires some serious computing power. Quite a challenge for memory management, and a great project that would have plenty of uses.


Thanks, it really is a huge computing project. My challenge is to achieve image processing with my ageing 2009 4GB quad-core PC ... not really a comparison with what you have at your disposal!

Quantity is not the same as quality. It is often more difficult to find out what the best image processing method is than to get it running in parallel on big iron.


I can imagine; it's always a challenge to get the most out of a process with limited resources, and that has driven the optimisation of many algorithms. Image processing is very amenable to being broken up into smaller problems (images).

I guess the other aspect is the quantification of many subjective criteria (e.g. focus, sharpness, colour) when processing an image. Where the goal is fuzzy, defining an optimal criterion is not so straightforward.

This is the one area I have realised is not so simple, having recently designed an algorithm that scores an image on its focus quality.
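One common heuristic for quantifying focus, offered as a generic illustration rather than the scoring algorithm described above, is the variance of a discrete Laplacian: better-focused frames have stronger second derivatives and so score higher.

```python
import numpy as np

def focus_score(img):
    """Variance of a discrete Laplacian (wrap-around edges): sharper,
    better-focused frames score higher. One heuristic among many."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

# A crisp edge vs the same edge blurred: the crisp frame should score higher.
sharp = np.zeros((32, 32)); sharp[:, 16:] = 1.0
blurred = sharp
for _ in range(5):   # repeated 3-tap averaging along the edge direction
    blurred = (blurred + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 3.0
```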

Link to comment
Share on other sites

My thought on quality is: as quality varies across an image, it should be possible to grade a set of images, alpha out the low-quality regions, and then merge them into a finer-quality product.
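That grade-and-merge idea can be sketched as a per-region weighted blend, where each frame's weight is a local sharpness map. Everything here (the tile size, the gradient-energy metric, the names) is a hypothetical illustration of the approach, not an existing tool:

```python
import numpy as np

def local_sharpness(img, box=8):
    """Per-tile quality map: mean gradient energy in box x box tiles,
    expanded back to full size (image dimensions must divide by box)."""
    gy, gx = np.gradient(img)
    energy = gy ** 2 + gx ** 2
    h, w = img.shape
    tiles = energy.reshape(h // box, box, w // box, box).mean(axis=(1, 3))
    return np.kron(tiles, np.ones((box, box)))

def quality_merge(frames, box=8):
    """Blend frames weighted by local sharpness, so the soft regions of one
    frame are effectively alpha-ed out in favour of crisper regions of another."""
    weights = np.stack([local_sharpness(f, box) for f in frames]) + 1e-12
    stack = np.stack(frames)
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

rng = np.random.default_rng(2)
detailed = rng.random((32, 32))            # frame with structure everywhere
flat = np.full((32, 32), 0.5)              # featureless (zero-gradient) frame
merged = quality_merge([detailed, flat])   # the detailed frame dominates
```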


I have used Mathematica in the past, but back then it was only available as the pro version and v. expensive. At £195 for the hobby version I may raid the piggy bank. It really is a cool tool. Is the hobby version the same as the pro one, just that you are not supposed to use it for anything but personal use?

old_eyes


  • 2 weeks later...


The home version has all the same algos as the full version. The pro version gives you some reductions on extra packages.


  • 3 weeks later...

Test of the Mathematica image alignment and stacking routines. Some images of Jupiter taken on 11/8/2012, under less than ideal conditions.

The idea here is to be able to process the large TIFF images by automatically finding the planet in each image, then cropping and saving just the main planet information, discarding the rest of the sky. This is done for all the images of the planet, and the results are saved to disk.

Then each image is aligned and stacked. The final processing involves a deconvolution.
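A minimal version of the "find the planet and crop it" step might look like this in NumPy; the brightness threshold and the padding margin are assumed details, not necessarily how the Mathematica notebook does it:

```python
import numpy as np

def crop_planet(frame, threshold=0.1, pad=4):
    """Find the planet as the bounding box of all above-threshold pixels,
    then crop with `pad` pixels of margin, discarding the empty sky."""
    ys, xs = np.nonzero(frame > threshold)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, frame.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, frame.shape[1])
    return frame[y0:y1, x0:x1]

# Synthetic frame: a bright disc of radius 5 on dark sky, centred at (60, 40).
yy, xx = np.mgrid[:100, :100]
frame = ((yy - 60) ** 2 + (xx - 40) ** 2 <= 25).astype(float)
planet = crop_planet(frame)   # a small cut-out instead of the whole frame
```

Running the same crop over every frame keeps the stacking stage's memory footprint down to the planet itself rather than the full sky.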

One of the single frames is shown here:

Planetary Image Jupiter Single Frame 20120811

The result of stacking 5 images is shown here:

Planetary Image Stacking Mathematica Jupiter 20120811

Though the base image is nothing special, using Mathematica has allowed me to get around the memory issue I hit with Registax, and means I do not have to manually crop each image before stacking, as I would if forced down the Registax path.


Not quite the same, but has anyone here used IDL or ENVI for the same purposes?

I used to use IDL quite a lot in my job, and there were quite a lot of imaging and astronomy support libraries from US/NASA and EU/ESA mission support teams, who used it heavily in their modelling.

I've slightly lost touch, but I can dig out the links if anyone is interested.

Mike


Never used IDL or ENVI. IDL does look to be in the same vein as Mathematica as a coding environment, while ENVI looks like a package designed for planetary-data image processing.

From their web site it looks useful.

The thing I like about Mathematica is that you can do all the things you can do in Photoshop, albeit without drag-and-drop and with extra mouse clicks. Plus you can add a lot of extra image processing functionality with its maths library. If you go down the route of writing your own source code, the need to write other maths functions yourself takes up more of your time.

Sent from my iPad using Tapatalk HD


I tried IDL for my computer vision classes. That was years ago, and a couple of crashing demos made me opt for MATLAB. The current version is very different, and I have heard some very good reports about it. I should get an ENVI license at work for remote-sensing image analysis. I gather it is very expensive.


  • 1 month later...

An update on some of the algorithms I have built for imaging. I have moved on to registering and stacking deep-sky images. I have built an alignment algorithm based on the one used by DSS and NASA (http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1995PASP..107.1119V&db_key=AST&high=39463d35aa24090).

This alignment algorithm is a lot more memory efficient than the native Mathematica ImageAlign function.

I then built some custom quality filters on top of the NASA algorithm to ensure good matches to the stars (or clusters, in the mathematical sense that is).

Once a set of points and a region of interest are defined in the master image, matching points are found in each image to be stacked. A transformation between the master image and the image to be stacked is determined and applied to the stacked image. The transformation translates and rotates each image if required (I am using an Alt-Az mount :sad: ).

The images are then stacked (and the histogram stretched) to build the image.
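The "determine a transformation that translates and rotates" step has a standard closed-form solution once matched points are in hand: a least-squares rigid fit (2-D Kabsch/Procrustes). This NumPy sketch shows the generic method, not the code from the notebook:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t mapping matched star
    positions src -> dst, both (N, 2) arrays (2-D Kabsch algorithm)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)    # cross-covariance of the centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:         # guard against a mirror-image solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

# Recover a known rotation + translation from ten matched "stars".
rng = np.random.default_rng(3)
src = rng.random((10, 2)) * 100
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
dst = src @ R_true.T + t_true
R_fit, t_fit = fit_rigid(src, dst)
```

With clean matches the fit recovers the field rotation exactly, which is what lets an Alt-Az image set be de-rotated before stacking.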

The image shown below is the raw stacked image after a bit of stretching and adjustment of the red levels to diminish the light pollution. One can see the effect of the rotation in the image, as stars have rotated out of view.

M31 14 X 15s (!!!)

Andromeda Galaxy M31

There was a small amount of trailing in some of the base images that has come through onto the stacked image, but overall the alignment and stacking have performed well.

To do it justice, I would love to have access to a set of subs (and darks / flats) taken using a decent mount.


I used IDL for my PhD work and am not a fan of it.

People might want to check out Python (with SciPy/NumPy) or R for doing this kind of stuff. Both are free and there are probably image processing libraries for both.


Archived

This topic is now archived and is closed to further replies.
