
Image resolution enhancement



[Image: 383L+ data at native resolution on the left; the same data reprocessed with this technique on the right (shown at native resolution) - click the image for better resolution.]

I thought I’d write a nice little illustrated summary of how to get additional detail from images using a technique that many know as drizzling. I’m hoping that this will help people understand how to get the best from drizzle or even how to implement it using their favourite image processing tools. I'll be adding to this as I go along (I'm hoping to get all the illustrations in the post rather than a large thread!).

Drizzling takes a set of low-resolution images and exploits the fact that each one is slightly different (each shows slightly different detail). If you can recover those differences into a final image, you can pull out more detail - more resolution - than any single frame contains.

Some understanding..

Images are made up of pixels (we'll use square ones to make it easier). So an image isn't really a picture of the object; it's more like a Picasso painting where someone has taken a transparent grid and filled each square with the average colour or luminosity they see through that square.

[Illustration: an image as a grid of squares, each filled with the average colour it covers]
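If you want to play with the idea, here's a minimal sketch (my own toy code, nothing from the original post) that "paints" a scene through a grid by block-averaging it with NumPy:

```python
import numpy as np

def paint_through_grid(scene, cell):
    """Average the scene over non-overlapping cell x cell squares."""
    h, w = scene.shape
    trimmed = scene[:h - h % cell, :w - w % cell]
    return trimmed.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))

# e.g. a 512x512 'scene' sampled onto a 64x64 grid of 8x8 cells
scene = np.random.rand(512, 512)
low_res = paint_through_grid(scene, 8)   # shape (64, 64)
```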

Now, if the painter took the grid and shifted it half a grid cell (to the right and down) without moving their eyes, then repainted the picture, each cell would end up with a slightly different colour.

[Illustration: the same scene painted again with the grid shifted half a cell to the right and down]

If I then took the two grid paintings of the light bulb and overlaid them so that the target object aligned perfectly, I'd have two squares for every one of the original squares, each with a slightly different colour in it.

Now it would be rather dumb if we didn't use those sub-pixels - so we make our output picture twice the resolution in X and Y.

[Illustration: the two shifted grids overlaid on an output grid at twice the resolution]
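As a rough illustration (again my own toy code, and it assumes the sub-pixel offsets are already known), this drops each low-resolution frame's samples onto a canvas of twice the resolution and normalises by how many samples land in each cell:

```python
import numpy as np

def shift_and_add_2x(frames, offsets):
    """frames: list of HxW arrays; offsets: list of (dy, dx) shifts in input pixels."""
    h, w = frames[0].shape
    canvas = np.zeros((2 * h, 2 * w))
    hits = np.zeros_like(canvas)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        # where each input sample lands on the 2x grid, rounded to a cell
        oy = np.clip(np.round(2 * (ys + dy)).astype(int), 0, 2 * h - 1)
        ox = np.clip(np.round(2 * (xs + dx)).astype(int), 0, 2 * w - 1)
        np.add.at(canvas, (oy, ox), frame)
        np.add.at(hits, (oy, ox), 1)
    return canvas / np.maximum(hits, 1)   # cells nothing landed in stay zero

# four frames offset by (0,0), (0,0.5), (0.5,0) and (0.5,0.5) fill the whole 2x grid
```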

In fact we don't need to know in advance how much each image has shifted to make use of this - in astrophotography we regularly align our images using stars. What happens when we star-align (register) is that the star centres are measured to a far, far higher accuracy than the pixels of the image - often hundreds of times higher.

So if we know the positions to high accuracy and have a large number of images... we can overlay them and recover a lot of detail to make very high resolution pictures :D
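To see where that sub-pixel accuracy comes from, here's a small hedged sketch of an intensity-weighted centroid (the background handling is deliberately crude and the names are my own):

```python
import numpy as np

def star_centroid(patch):
    """Intensity-weighted centroid of a small patch containing one star."""
    patch = np.clip(patch - np.median(patch), 0, None)   # crude background removal
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    return (ys * patch).sum() / total, (xs * patch).sum() / total

# comparing the centroids of the same stars in two subs gives the shift between
# them to a small fraction of a pixel - exactly what the overlaying step needs
```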

An additional technique, not used by some drizzle implementations, is a rotation. Rotating stops the resulting image from looking blocky by aligning the images on a rotated grid. We can achieve this simply by rotating our input subs and letting the alignment treat them as normal images (it's oblivious to the rotation). With expertise it's also possible to use the rotation to bring out additional detail in particular regions - the ultimate solution is to use circular pixels in the stacking, but let's leave that for now!
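Assuming SciPy is to hand, the rotation itself is a one-liner; this is just an illustrative sketch, not the tool used for the images in this thread:

```python
from scipy import ndimage

def rotate_sub(sub, angle_deg=45.0):
    # reshape=True grows the canvas so nothing is clipped at the corners;
    # cubic interpolation (order=3) keeps the stars smooth
    return ndimage.rotate(sub, angle_deg, reshape=True, order=3, mode="constant", cval=0.0)
```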


The process

For each sub we:

1. upscale - in this case I have upscaled by 4 in X and 4 in Y,

2. rotate by 45 degrees.

Then we take all the upscaled and rotated subs and:

3. register (align),

4. stack,

and then, if you so desire, de-rotate and adjust the resolution as you wish (a rough code sketch of these steps follows below).
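Here's that pipeline sketched in Python under some loud assumptions: SciPy and scikit-image stand in for whatever stacking software you actually use, phase cross-correlation stands in for proper star alignment, and load_fits is a hypothetical loader - treat it as an illustration of the steps, not the tool used for the images here.

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def process_sub(sub, scale=4, angle=45.0):
    up = ndimage.zoom(sub, scale, order=3)                    # 1. upscale 4x in X and Y
    return ndimage.rotate(up, angle, reshape=True, order=3)   # 2. rotate 45 degrees

def register_and_stack(subs):
    ref = subs[0]
    aligned = [ref]
    for sub in subs[1:]:
        # 3. register: sub-pixel shift of each sub relative to the reference
        shift, _, _ = phase_cross_correlation(ref, sub, upsample_factor=100)
        aligned.append(ndimage.shift(sub, shift, order=3))
    return np.mean(aligned, axis=0)                           # 4. stack (simple average)

# subs = [process_sub(load_fits(p)) for p in paths]   # load_fits is hypothetical
# stacked = register_and_stack(subs)
# de-rotate afterwards if you wish: ndimage.rotate(stacked, -45.0, reshape=True, order=3)
```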

In the example image at the top of the first post, the images start off as mono frames from the 383L (3362x2537, 16-bit mono). There are 27 subs (3x Red, 3x Green, 1x Blue, 20x Neodymium); unfortunately the blue (which is the most important for the bear neb) suffered from trails.

So the initial images start at 17MB per sub. After upscaling (both the resolution, to 13448x10148, and the colour depth, to 64-bit) they're 279MB per sub. Rotation/registration then takes this to 556MB per sub, and the final LRGB combine is a whopping 6.68GB image. The area on display is approximately 28MB.

It's entirely reasonable to stay at 16-bit colour (and even just crop to a sub-area) to control image bloat. However, I need to work out whether detail really equates to pixel depth, to see whether I'm wasting my time running at 64 bits per colour channel during processing before dropping down to the 12 bits of colour depth (for R+G+B) that most LCD screens support. I never actually see the 64-bit data - I just process it.
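For anyone budgeting disk or memory, here's a back-of-the-envelope calculator for the raw pixel data only (my own arithmetic; real file sizes will differ a little because of format headers, metadata and any compression):

```python
def raw_size_mb(width, height, channels=1, bits_per_sample=16):
    """Size of the raw pixel data in megabytes, ignoring file-format overhead."""
    return width * height * channels * (bits_per_sample / 8) / 1e6

print(raw_size_mb(3362, 2537))                          # a native 16-bit mono sub
print(raw_size_mb(13448, 10148))                        # after the 4x4 upscale, still 16-bit
print(raw_size_mb(13448, 10148, bits_per_sample=64))    # the same frame at 64 bits per channel
```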


Lastly - perhaps you ask why I like doing this...

[Image - click for higher resolution]

Also nearby is a nice side-on galaxy...

[Image - click for higher resolution]

You'll note my lovely light pollution... hence my photos will never win beauty contests ;) (I'd probably suggest some 'stars' in mine are noise, but... you never know :D)

Here are a few nicer images:

2.5m scope: http://kudzu.astr.ua...__________.html

125mm: http://www.astrophot...IES/IC 2233.htm

And SDSS: http://www.sdss.org/iotw/NGC2537.jpeg



Ok, so here's the final word on this - a 4x4 upscale, rotate, LRGB. The final image was 16884x16884 at 64 bits per colour channel from the initial upscale, followed by background extraction (division) and then D-PSF deconvolution.
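For completeness, here's a hedged sketch of those finishing steps. Richardson-Lucy is used as a stand-in for the D-PSF deconvolution mentioned above (it is not the same algorithm), and the Gaussian PSF and sigma values are assumptions of mine:

```python
import numpy as np
from scipy import ndimage
from skimage.restoration import richardson_lucy

def finish(stacked, background_sigma=100, psf_sigma=2.0, iterations=20):
    # background extraction by division: model the background as a heavily
    # blurred copy of the image, then divide it out
    background = ndimage.gaussian_filter(stacked, background_sigma)
    flat = stacked / np.maximum(background, 1e-9)

    # simple Gaussian kernel standing in for the measured PSF
    y, x = np.mgrid[-10:11, -10:11]
    psf = np.exp(-(x**2 + y**2) / (2 * psf_sigma**2))
    psf /= psf.sum()

    flat = flat / flat.max()            # richardson_lucy expects data roughly in [0, 1]
    return richardson_lucy(flat, psf, num_iter=iterations)
```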

I don't think I'm going to get more out of a 4" refractor.

[Final image]

