
Pixinsight Drizzle


cardconvict


I have a trial version of PixInsight, which I have fallen in love with and will be buying at or near the end of the trial. I also have Warren Keller's book on processing and have been following his guide on the mono workflow, but I have been practicing with another workflow I found on the internet. In that free guide he mentions drizzling the frames once stacked. My question is: what is the advantage of drizzle, and do people who use PixInsight for stacking actually use it?

Ian


I wondered that myself :)  I've been using the paid-for version of PixInsight for a while and also have the book, but I haven't tried drizzle yet - maybe we get too much atmosphere here :D


I have tried drizzle in DSS.

Basically, it takes advantage of drift and 'shares out' the signal from each camera pixel across several pixels in the finished image. This extracts a greater level of detail - I have seen an image of a boulder on Mars where NASA used extreme drizzle to increase the resolution several times!

In practice, I haven't found it improves most images. You need lots of data for a start, and it needs to be good data. Here's one that worked (the processing isn't identical, but it is the same subs):

M27 without drizzle:

[image]

M27 with drizzle:

[image]


Difficult to tell - the magnification would have to be the same.  The non-drizzled version looks better to me too but that may be just the image size.


41 minutes ago, Gina said:

Difficult to tell - the magnification would have to be the same.  The non-drizzled version looks better to me too but that may be just the image size.

Agreed, it would have to be at the same image scale to even notice, I think - but I'm no expert on these things.


The documentation for the BayerDrizzlePrep script in PixInsight has a really good explanation of what Drizzle is and how it works.

To shamelessly borrow one of its images:

[image: drizzle diagram from the BayerDrizzlePrep documentation]

The first image on the left is one of your source images (a 2x2 pixel part of it anyway). The orange squares inside the grey pixels show the "Drop Shrink" sized pixels that you are going to drizzle onto the final image.

The middle image shows the target image with a single frame drizzled onto it. In this example they are using x2 drizzle, so there are twice as many target pixels, and you can see how the source orange pixels are drizzled onto the target image. The third image shows the target after 3 different frames have been drizzled onto it.

The main difference between drizzle integration and normal integration is that in drizzle integration each source pixel is applied to multiple target pixels using a geometric transformation, whereas in normal integration the registered pixels are simply stacked one on top of the other.


29 minutes ago, Gina said:

I see...  No I don't - I tell a lie :D  I don't see how that makes things better.

With a normal integration you first register the images against a reference frame. Say frame 1 is the reference (800x600 pixels) and frame 2 is to be registered against it. If frame 2 is out by 2.3 pixels on the X axis, 1.7 pixels on the Y axis and 1.2 degrees of rotation, the software will take all of the pixels from frame 2, move them along by 2.3 pixels, down by 1.7 pixels, rotate them by 1.2 degrees, and then resample the result back into an 800x600 image. Obviously you cannot move a pixel across by 2.3 or 1.7, or rotate it by 1.2 degrees, and have it fit perfectly into a target pixel, so the software does the best fit it can, but it loses some fidelity as the source data of frame 2 is interpolated into its new position. Once you have done this to your dozen or so source frames, you integrate them by going along pixel by pixel, asking "what is the average value of pixel 0,0 across the frames?", and that is your final value.
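The fidelity loss from resampling is easy to see in a toy sketch (illustrative only - a hand-rolled bilinear shift stands in for whatever interpolation the registration software actually uses, and the offsets are made up): a single-pixel "star" shifted by a fractional offset and then registered back never quite recovers its original value.

```python
import numpy as np

def bilinear_shift(img, dy, dx):
    """Shift an image by a fractional amount using bilinear interpolation,
    a stand-in for the resampling that registration software performs."""
    out = np.zeros_like(img)
    iy, ix = int(np.floor(dy)), int(np.floor(dx))
    fy, fx = dy - iy, dx - ix
    for oy, ox, w in [(iy, ix, (1 - fy) * (1 - fx)), (iy, ix + 1, (1 - fy) * fx),
                      (iy + 1, ix, fy * (1 - fx)), (iy + 1, ix + 1, fy * fx)]:
        out += w * np.roll(np.roll(img, oy, axis=0), ox, axis=1)
    return out

scene = np.zeros((8, 8))
scene[3, 3] = 1.0                                # a single bright "star"

frame2 = bilinear_shift(scene, 1.7, 2.3)         # simulated pointing offset
registered = bilinear_shift(frame2, -1.7, -2.3)  # register back onto frame 1

stack = (scene + registered) / 2                 # pixel-by-pixel average
print(stack[3, 3])  # well below 1.0: resampling has smeared the star's flux
```

Drizzle sidesteps the second resampling by carrying the raw frame, plus its transformation matrix, straight into the integration step.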

With drizzle, the transform and the stacking are not separate processes. Instead it uses the original source frame from before registration, together with the mathematical transformation matrix that would have been used to register it. It then iterates over the source pixels in each frame and works out which pixels in the target frame each one overlaps. It adds a weight to each target pixel based on how much of the source pixel overlaps it. So pixel (3,4) in the source frame might overlap 60% with (5,6) in the target frame, 20% with (6,6), 15% with (6,5) and 5% with (5,5).
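That overlap-weighting step can be sketched in a few lines of Python (a toy under made-up numbers - the x2 scale, 0.9 drop shrink and sub-pixel offset are all assumptions for illustration, not PixInsight's implementation, and real drizzle also handles rotation):

```python
import numpy as np

SCALE = 2.0   # x2 drizzle: target grid is twice as fine as the source
DROP = 0.9    # "drop shrink": each drop is 0.9 source pixels across

def drizzle_pixel(sy, sx, value, target, offset=(0.35, 0.15)):
    """Deposit one source pixel's flux onto the target grid, weighting
    each target pixel by its area of overlap with the shrunken drop."""
    cy = (sy + 0.5 + offset[0]) * SCALE   # drop centre in target coords
    cx = (sx + 0.5 + offset[1]) * SCALE
    half = DROP * SCALE / 2
    y0, y1, x0, x1 = cy - half, cy + half, cx - half, cx + half
    area = (y1 - y0) * (x1 - x0)
    for ty in range(int(np.floor(y0)), int(np.ceil(y1))):
        for tx in range(int(np.floor(x0)), int(np.ceil(x1))):
            overlap = ((min(y1, ty + 1) - max(y0, ty))
                       * (min(x1, tx + 1) - max(x0, tx)))
            if overlap > 0:
                target[ty, tx] += value * overlap / area

target = np.zeros((4, 4))
drizzle_pixel(0, 0, 1.0, target)   # drizzle one source pixel of flux 1.0
print(target.round(3))             # flux shared across the overlapped pixels
```

Flux is conserved - the weights sum to 1 - so stacking many dithered frames this way builds up a statistically fair superresolution image.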


I don't know if this is going to make things more or less confusing, but here goes:

As a practical example I took my avatar image that illustrates the actual target we are trying to reproduce:

[image: the original avatar image]

and I made 3 copies of it. Each one was shifted and rotated just a little bit, and then mosaiced to 20 pixel squares. These represent the 3 images caught by the camera in my drizzle stack:

[images: the three dithered, pixelated frames]

I then stacked them on top of each other into a new image, carefully aligning them. I then mosaiced this image at 10 pixels to show a x2 drizzle:

[image: the x2 drizzle result]

Not as much detail as the original, but more than the individual frames.

As a comparison, I took the same 3 pixelated images, aligned them, remosaiced them to 20-pixel squares to reflect the standard integration technique, and then stacked the 3 images:

[image: the standard integration result]
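A one-dimensional version of the same experiment is easy to play with (a hypothetical sine signal, not the actual images above): two undersampled "subs" dithered by half a big pixel carry different samples of the underlying signal, and interleaving them - geometrically what an x2 drizzle does - recovers structure that neither sub has on its own.

```python
import numpy as np

truth = np.sin(np.linspace(0, 4 * np.pi, 80))   # fine-scale "truth" signal

def sample(signal, offset, block=8):
    """One 'sub': dither by `offset` fine pixels, then average big pixels."""
    s = np.roll(signal, -offset)
    return s.reshape(-1, block).mean(axis=1)

sub_a = sample(truth, 0)    # 10 big pixels
sub_b = sample(truth, 4)    # same scene dithered by half a big pixel

# x2 "drizzle": the two subs' samples land on alternating fine target pixels.
drizzled = np.empty(20)
drizzled[0::2] = sub_a
drizzled[1::2] = sub_b

fine_truth = truth[3::4]    # truth at the drizzled sampling points
print(np.corrcoef(drizzled, fine_truth)[0, 1])   # close to 1
```

With real images you need many randomly dithered subs rather than two neat half-pixel ones, which is why the drizzle algorithm weights by overlap instead of simply interleaving.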


I used drizzle the other day with PI and was impressed by the improvement it made. It is very useful if your imaging setup is undersampling, as mine is. My stars are quite blocky (undersampled) when I do not drizzle, but after drizzling they are much tighter and rounder. You also need to be dithering your images.

It does, however, make most of the subsequent post-processing noticeably longer because of the larger dataset. But I am in no rush when post-processing, and if it improves your images then so much the better, I say.


5 months later...
On 15/04/2017 at 03:02, frugal said:

[quoted post: the worked drizzle example above]

I have been experimenting with drizzle processing and trying to understand what is going on - your old post has been a big help, thanks!


Must have missed this thread first time round.

I wrote the BayerDrizzlePrep script and its documentation (though I borrowed a lot from the developer forum posts). The script is now redundant, as current versions of PI can do Bayer drizzle without all the channel splitting and recombination shenanigans originally needed.

The basic premise of Drizzle is that you can recover additional resolution provided:

- You are undersampling, i.e. your camera pixels are larger than the resolving power of the optics. Otherwise there is no benefit.

- You are dithering by fractional pixel amounts between frames. If you don't dither, or dither so that pixel footprints overlap exactly between frames, drizzle cannot work, as it relies on the original pixels being split randomly across the superresolution pixels.

The trade-off is that each superresolution pixel has a lower SNR than the original pixels, so you need a lot more total exposure time to reach an acceptable SNR. If you downsample (software bin) an image it always looks less noisy, and drizzling is the inverse of that. You also need a lot of dithered subs to 'wet' each superresolution pixel.
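The binning half of that statement is quick to verify numerically (a toy with pure Gaussian noise standing in for an image): averaging 2x2 blocks of independent noise halves its standard deviation, and drizzling to a x2 finer grid pays roughly that penalty in reverse.

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.normal(0.0, 1.0, (400, 400))   # pure "read noise", sigma = 1

# 2x2 software binning: average each 2x2 block of pixels.
binned = noise.reshape(200, 2, 200, 2).mean(axis=(1, 3))

print(noise.std().round(3), binned.std().round(3))   # ~1.0 vs ~0.5
```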

Bayer drizzle is a way of debayering OSC images without interpolating colours from adjacent pixels, so you get fewer artefacts and full sensor resolution in each colour channel.

Even if you have insufficient data to drizzle the whole image, the stars usually have enough SNR to cope. So try combining the background of a normal integration that you have upsampled x2 with the stars from an x2 drizzled version, using a star mask. No more blocky small stars.
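The blend itself is just a masked mix - in PixInsight you would do it with PixelMath and a star mask, but the arithmetic looks like this (hypothetical arrays standing in for the real stacks):

```python
import numpy as np

rng = np.random.default_rng(7)
upsampled = rng.random((100, 100))   # normal integration, upsampled x2
drizzled = rng.random((100, 100))    # x2 drizzle of the same subs

star_mask = np.zeros((100, 100))
star_mask[40:44, 40:44] = 1.0        # white where the stars are

# Background from the low-noise upsampled stack, stars from the drizzle.
blend = upsampled * (1 - star_mask) + drizzled * star_mask
```

A soft-edged (slightly blurred) mask avoids visible seams where the two stacks meet.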

