
Drizzle integration


Ken82

Recommended Posts

So why should we generally avoid drizzle integration - added noise etc.?

I'm currently using a 106mm scope with a 3.75µm-pixel camera, so I'm already pushing the resolution at 1.5 arcsec/pixel. But comparing my previous 1x1-binned image of M101 with a drizzle integration, I can see the stars in the drizzle image are round even when zoomed in at 400%.

Although the stars are good/better, what does this mean for the SNR?

Thanks 


8 hours ago, Ken82 said:

So why should we generally avoid drizzle integration - added noise etc.?

I'm currently using a 106mm scope with a 3.75µm-pixel camera, so I'm already pushing the resolution at 1.5 arcsec/pixel. But comparing my previous 1x1-binned image of M101 with a drizzle integration, I can see the stars in the drizzle image are round even when zoomed in at 400%.

Although the stars are good/better, what does this mean for the SNR?

Thanks 

If you have both 'before and after' drizzle images can't you just compare the noise directly? Give both images exactly the same stretch and compare them.

Olly


No, it's the other way round. Drizzle requires a very precise dither.

@vlaiv has quite thoroughly rubbished the idea of drizzle with amateur-level imaging, but the technicalities are best explained by him rather than me - i.e. I don't fully understand them, but I take his word for it.


11 hours ago, Ken82 said:

So why should we generally avoid drizzle integration - added noise etc.?

I'm currently using a 106mm scope with a 3.75µm-pixel camera, so I'm already pushing the resolution at 1.5 arcsec/pixel. But comparing my previous 1x1-binned image of M101 with a drizzle integration, I can see the stars in the drizzle image are round even when zoomed in at 400%.

Although the stars are good/better, what does this mean for the SNR?

Thanks 

If you are already at 1.5"/px, why do you want to drizzle?

It is very likely that you are already at a proper sampling rate. Measure the FWHM of the stars in your subs (or after integration, while still linear). If they are 2.4" FWHM or larger, you are good where you are and should not go to a higher resolution.

Drizzle works under very controlled circumstances. You need to dither, and dither very precisely. If you intend to drizzle x2, you should dither so that your offsets are always aligned at half a pixel for best results. Similarly, if you plan to drizzle x3, you should align your dithers to every 1/3 of a pixel (a dither of 2.666px is fine, and so are 3 and 4.333, but you should not dither by 3.5px, for example).
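To make the alignment requirement concrete, here is a quick sketch (my own illustration, not a feature of any stacking package) that checks whether a planned dither offset lands on the 1/N-pixel grid that drizzle xN needs:

```python
# Hypothetical helper: an offset is useful for drizzle xN only when it is
# (close to) a whole multiple of 1/N of a pixel.
def dither_ok(offset_px: float, drizzle_factor: int, tol: float = 1e-6) -> bool:
    steps = offset_px * drizzle_factor      # offset measured in 1/N-pixel steps
    remainder = steps % 1.0
    return min(remainder, 1.0 - remainder) < tol

# For x3 drizzle: 2.666..px and 4.333..px sit on the 1/3px grid, 3.5px does not.
print(dither_ok(8 / 3, 3))   # True
print(dither_ok(3.0, 3))     # True
print(dither_ok(4.5, 2))     # True - half-pixel grid for x2
print(dither_ok(3.5, 3))     # False
```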

Another important thing - you need to be undersampled for drizzle to make sense.

With amateur setups you simply can't control your guiding and dithers precisely enough for it to make sense.

Drizzle will lower your SNR. Because it uses only 1/4 of the samples for each pixel of the output image (in the case of x2 drizzle; 1/9 in the case of x3 drizzle, etc.), it is like stacking only a quarter of your subs - so instead of imaging for, say, 8h, you'll get only 2h worth of imaging.

I think people in amateur conditions are just better off resampling their subs to a larger size prior to stacking than using drizzle.
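The arithmetic behind that claim, written out (my own back-of-envelope sketch, assuming the SNR of an average grows with the square root of the number of samples):

```python
import math

# Drizzle xN maps each sub onto an N*N-larger grid, so on average each output
# pixel collects only 1/N^2 of the available samples.
def effective_samples(n_subs: int, drizzle_factor: int) -> float:
    return n_subs / drizzle_factor ** 2

def snr_vs_plain_stack(drizzle_factor: int) -> float:
    # SNR ratio relative to a plain stack of the same subs.
    return math.sqrt(1.0 / drizzle_factor ** 2)

print(effective_samples(40, 2))    # 10.0 -> like stacking only 10 of 40 subs
print(snr_vs_plain_stack(2))       # 0.5 -> 8h of data behaves like 2h
```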

  • Like 1

OK, thanks for the feedback. I was dithering between images, although the dither movements were small and likely sporadic.

I ran the noise evaluation tool on each image with the following results, although I'm not sure what these numbers are suggesting. I'm assuming σK is the noise, although this obviously includes the light pollution as noise? So from 5e-05 to 10e-05 the noise has doubled - is that correct?

Luminance image noise statistics -

σK = 9.983e-05, N = 43616131 (71.30%), J = 4

Drizzle image noise statistics -

σK = 5.386e-05, N = 71571940 (29.25%), J = 4

Thanks Ken 


No idea what those numbers mean - but here is a rather simple way to see which image has higher SNR.

Take two regions in both images (try to use the same regions, at least roughly): the first an empty background without stars - just a patch of sky with nothing in it - and the second a part of the target that is more or less uniform.

Try to select the same regions in both images (this might not be straightforward, because the drizzled image will be larger, so you can't simply align them and use the same selection).

Measure stats in both regions: in the empty region measure the standard deviation; in the target region measure the average value.

Make sure both images have been stacked the same way - same reference frame, same normalization of subs, everything apart from drizzle / no drizzle. Alternatively, you can wipe the background in both images.

If you treated both images the same, the target patch should produce roughly the same average value. Compare the standard deviation between the empty patches - it represents the noise floor: the higher the number, the noisier the image.
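A sketch of that procedure in Python/numpy (my own illustration with synthetic data standing in for the two stacks; the region coordinates and noise levels are made up for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
stack_plain = 100 + rng.normal(0, 5, (200, 200))     # stand-in "no drizzle" stack
stack_drizzle = 100 + rng.normal(0, 10, (400, 400))  # stand-in "x2 drizzle" stack

def region_snr(img, bg_box, target_box):
    """Boxes are (y0, y1, x0, x1) in that image's own pixel coordinates."""
    y0, y1, x0, x1 = bg_box
    noise = img[y0:y1, x0:x1].std()       # empty-sky patch -> noise floor
    y0, y1, x0, x1 = target_box
    signal = img[y0:y1, x0:x1].mean()     # uniform target patch -> signal level
    return signal / noise

# The drizzled stack is 2x larger, so the box coordinates must be scaled too.
print(region_snr(stack_plain, (10, 40, 10, 40), (100, 130, 100, 130)))
print(region_snr(stack_drizzle, (20, 80, 20, 80), (200, 260, 200, 260)))
```

With real data you would load the two linear stacks instead of generating arrays, and pick the two boxes by eye.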

  • Like 1

On 01/05/2020 at 12:21, vlaiv said:

Another important thing - you need to be under sampled for drizzle to make sense.

This is actually the only reason why you would drizzle. In other words, if your stars aren't square blocks, don't drizzle.

https://pixinsight.com/forum/index.php?threads/drizzleintegration-example.6901/

https://pixinsight.com/forum/index.php?threads/new-drizzleintegration-tool-released.6911/


6 minutes ago, wimvb said:

This is actually the only reason why you would drizzle. In other words, if your stars aren't square blocks, don't drizzle.

The only problem with that is - there are no square stars.

  • Haha 1

10 minutes ago, wimvb said:

In the image, I mean. 

So do I :D

Pixels are not little squares. Well, they are, but they are not - I know this is confusing. Pixels are (almost) square on the sensor, but once the image is read out, pixels are no longer squares. They are values with coordinates - just dimensionless points.

Once you have an image, even if it is very undersampled, stars will be dots. It then depends on how you render those pixels whether they look square or not.

Here is a very undersampled image:

[image attachment]

I would not say any of the stars is square?

Here is a 600% enlarged part of the above image:

[image attachment]

Now, this is enlarged using the nearest-neighbour algorithm.

[image attachment]

The same image segment enlarged using a different algorithm - no more "square" stars.
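The rendering point can be reproduced with a few lines of numpy (my own sketch - the tiny "star" array is made up):

```python
import numpy as np

star = np.array([[0., 1., 0.],
                 [1., 4., 1.],
                 [0., 1., 0.]])          # a heavily undersampled star

def enlarge_nearest(img, k):
    # Nearest neighbour: repeat each value k times per axis -> blocky "squares".
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

def enlarge_bilinear(img, k):
    # Bilinear: interpolate between pixel centres -> smooth, no squares.
    h, w = img.shape
    xs = np.linspace(0, w - 1, w * k)
    ys = np.linspace(0, h - 1, h * k)
    rows = np.array([np.interp(xs, np.arange(w), r) for r in img])
    return np.array([np.interp(ys, np.arange(h), c) for c in rows.T]).T

big_nn = enlarge_nearest(star, 6)     # same data, rendered as square blocks
big_bl = enlarge_bilinear(star, 6)    # same data, rendered as a smooth dot
print(big_nn.shape, big_bl.shape)     # (18, 18) (18, 18)
```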

  • Like 2

Would drizzle make sense if you were looking to print out your pictures to put on your wall? It seems an easy way of making the image twice (or three times) the size without any loss of pixels per inch.

I know it's not ideal, but any increase in noise shouldn't be too noticeable when you stand around a metre away.


44 minutes ago, CloudMagnet said:

Would drizzle make sense if you were looking to print out your pictures to put on your wall? It seems an easy way of making the image twice (or three times) the size without any loss of pixels per inch.

I know it's not ideal, but any increase in noise shouldn't be too noticeable when you stand around a metre away.

I have called multiple times for people to do an experiment. I did it once, and simple resizing won - in part because DSS threw some artifacts into the drizzled image, and in general because the SNR loss was obvious and there was no gain in resolution in favour of drizzle.

Want larger megapixel count in your image?

Why don't you simply do one of the two:

- Take all your calibrated subs prior to stacking and just scale them up x2, x3, x4, or even x3.5 - whatever number you choose - and stack them like that. Use a good resampling algorithm such as Lanczos3 or splines.

- Do the same, but after you are done stacking.

In fact, I think the first approach might do as much as drizzle in terms of improving resolution (real or not), and it certainly won't lose SNR, since it provides a sample for each pixel from every sub instead of spreading pixels around and not covering all the output pixels with every sub.

Back to the original idea: I'll propose an experiment and someone else can do it with their own data (partly because I have not imaged for ages, and partly because I want to exclude any sort of bias on my part - too many parts in there? :D ).

It does not matter whether your data is properly sampled or even oversampled - it is rather easy to create undersampled data: just bin it in software (after calibration). For our purposes this is just like using larger pixels. Go to 3"/px or 4"/px - that way you'll be certain you are undersampling.

Experimental protocol:

1. Produce undersampled data by taking your calibrated subs and binning them. PI has this as IntegerResample - use the average method.

2. Take one sub as the reference frame for the other subs to be aligned to.

3. Do drizzle stacking x2 without any fancy stuff - just a plain average (no weighting of the subs, no sub normalization).

4. Take all the subs, scale them up x2 using Lanczos3 resampling, and then do regular stacking - again just a plain average, no fancy stuff.

5. Compare by selecting two regions on both stacks. Try to select the same regions on both stacks (this is why I said to use the same reference frame - the stacks should be roughly aligned this way). One region should be pure background - nothing but dark sky; try not to include stars or anything else. Measure the standard deviation in this region on both stacks.

The second region should be a uniform part of the target (some nebulosity without too many features - the end of a spiral arm, a galaxy halo, or whatever). Measure the average pixel value here.

Divide the respective measures and compare. This will show how the SNR fares between the two approaches.

Also measure the FWHM of the resulting stacks - this will show what happens with resolution.
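Step 1 (software binning by averaging) can be sketched in pure numpy - the same idea as PI's IntegerResample with the average method (my own illustration, not PI code):

```python
import numpy as np

def bin_average(img: np.ndarray, k: int) -> np.ndarray:
    """Average k x k blocks, cropping edges so the dimensions divide by k."""
    h, w = img.shape
    h, w = h - h % k, w - w % k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

sub = np.arange(16, dtype=float).reshape(4, 4)
print(bin_average(sub, 2))
# [[ 2.5  4.5]
#  [10.5 12.5]]
```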

  • Thanks 1

OK, so the PixInsight developers have said the noise evaluation tool in PixInsight is now obsolete and will be removed in the next update. They gave me a new script to run, and I got the following results:

Integrated image

Ch | noise | count(%) | layers |
---+-----------+-----------+--------+
0 | 8.589e-01 | 71.30 | 4 |

Drizzle image
Ch | noise | count(%) | layers |
---+-----------+-----------+--------+
0 | 6.265e-01 | 29.25 | 4 |

They also kindly explained what each value represents, for anyone who is interested: Ch represents each channel, 0 being luminance or grayscale (with an RGB image, 0 = red, 1 = green, 2 = blue). The noise column is the estimated standard deviation of noise pixels. The % column is the percentage of pixels classified as noise, with respect to the total number of pixels in the image.

 


I also asked about drizzle potentially adding noise, to which I got another good response:

Drizzle cannot 'add noise'. There are two main reasons for which a drizzle-integrated image should always provide higher noise estimates than a 'normal' integrated image:

- Drizzle does not interpolate pixel data. Pixel interpolation acts like a variable low-pass convolution filter, which correlates source pixels in the final integrated image. This is known as aliasing. Drizzle generates no aliasing, which is the main reason why drizzle is always, when applicable, better than normal integration.

- If the input data is not well dithered, and/or if the amount of input data is insufficient for the selected drizzle scale and drop shrink parameters, drizzle can generate dry pixels, which are pixels insufficiently covered with drizzle drops. This leads to 'holes' that can be identified as noise because they exist at the one-pixel scale. However, this should never happen under reasonably normal working conditions.

So in this case (comparisons between normal integrated and drizzle integrated results) higher noise estimates do not mean a worse result. On the contrary, absence of aliasing is an extremely nice and highly desirable property of drizzle integration.

 


43 minutes ago, Ken82 said:

I also asked about drizzle potentially adding noise to which i got another good response -


- If the input data is not well dithered, and/or if the amount of input data is insufficient for the selected drizzle scale and drop shrink parameters, drizzle can generate dry pixels, which are pixels insufficiently covered with drizzle drops. This leads to 'holes' that can be identified as noise because they exist at the one-pixel scale. However, this should never happen under reasonably normal working conditions.

 

 

So I guess we need to find a definition of "well dithered" and of sufficient input data amounts before coming to a conclusion. Done correctly, it does improve the image at the end of the day, but I have only really noticed it when zooming in close to stars - at the cost of much longer processing time, given the increased size of the images.


21 minutes ago, Ken82 said:

I also asked about drizzle potentially adding noise to which i got another good response ...

Much of what has been said simply makes no sense.

With regard to SNR estimation, the first thing to understand is that there is no single SNR value per image. The signal in the image is not the same everywhere (otherwise it would be an all-grey image without any detail), and part of the noise depends on that signal (target shot noise) - so the noise is definitely not the same either, and hence the SNR can't be the same.

Every pixel has its own SNR, which we can estimate with different methods.

27 minutes ago, Ken82 said:

The noise column is the estimated standard deviation of noise pixels.

Noise is the standard deviation of a set of values sampled from a population of measurements of a certain value.

There are no "noise pixels" in an image - every pixel has a certain value associated with it. That value can be close to the real value of the signal (small noise) or far from it (large noise). Even if the noise is large, it might not matter if the signal is also large - we are interested in SNR. In any case, calling some pixels noise pixels makes no sense.

26 minutes ago, Ken82 said:

Drizzle does not interpolate pixel data. Pixel interpolation acts like a variable low-pass convolution filter, which correlates source pixels in the final integrated image. This is known as aliasing. Drizzle generates no aliasing, which is the main reason why drizzle is always, when applicable, better than normal integration.

Drizzle is in fact a sort of interpolation. That might not be immediately obvious, but it is. In fact, mathematically it is close to the worst kind of interpolation.

[image attachment]

The first thing to understand is that each pixel of the output image gets only a subset of the values from the input stack. If you are stacking 40 subs and you drizzle x2, on average each output pixel will get only 10 samples. This is because the input pixels are "shrunk down" and empty space is created between them. Some of those pixels are mapped to output pixels, but the empty space gets mapped to other output pixels - in "this round", some output pixels receive no value. Stacking 10 subs can't produce the same SNR as stacking 40 subs - simple as that.

Now, the second point, about interpolation - here is another simple graph that makes obvious what happens:

[image attachment]

This is a very simple example of drizzle mapping in action - a simple shift, no rotation, for simplicity. The black pixels are output pixels and the red pixel is the one being drizzled. Each output pixel gets a proportional piece of the red pixel: 60% of the red pixel's value is stacked onto the first output pixel and 40% onto the second, because 60% of the red pixel's surface "falls" onto the first output pixel and 40% onto the second.

So far so good - it just involves surfaces and there is no "interpolation".

But let me ask you - where does the centre of the red pixel fall?

Let the pixel side be 1 unit long. The centre of the first output pixel is 0 and the centre of the second output pixel is 1 (since the pixel side is 1 unit long). If 60% of the surface of the red pixel falls onto the first output pixel, where does its centre lie?

It actually has a coordinate of 0.4 (all the Y coordinates are the same in this case, so we don't write them down). Since the distance between output pixel centres is 1, the red pixel's centre is 0.6 away from the second output pixel.

Guess what linear interpolation does? It "divides" values linearly based on distance to the centres - we have 0.4:0.6, so one pixel gets 40% and the other 60%, the closer one getting the higher value - the first output pixel gets 60% and the second gets 40%.

Hold on - this is exactly the same as the drizzle integration above. But that can't be right, because "it does not use interpolation at all!"
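That worked example can be checked in a few lines of plain Python (a 1-D sketch of my own): computing the drop's area overlap gives exactly the linear-interpolation weights.

```python
def overlap(a0, a1, b0, b1):
    # Length of the intersection of intervals [a0, a1] and [b0, b1].
    return max(0.0, min(a1, b1) - max(a0, b0))

def drizzle_weights(shift):
    # Unit-wide input pixel, centre at `shift`, falling across output pixels
    # with centres 0 and 1 (i.e. areas [-0.5, 0.5] and [0.5, 1.5]).
    drop = (shift - 0.5, shift + 0.5)
    return (overlap(*drop, -0.5, 0.5), overlap(*drop, 0.5, 1.5))

def linear_weights(shift):
    # Classic linear interpolation: weight = 1 - distance to each pixel centre.
    return (1.0 - abs(shift - 0.0), 1.0 - abs(shift - 1.0))

print(drizzle_weights(0.4))   # (0.6, 0.4) - the 60% / 40% split by surface
print(linear_weights(0.4))    # (0.6, 0.4) - identical weights
```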

Finally - let's examine two more graphs:

First, the Fourier transform of a triangle (the kernel for linear interpolation):

[image attachment]

The right-hand graph shows the attenuation in the frequency domain - the low-pass filter shape.

[image attachment]

The yellow graph here shows the same for the Lanczos3 resampling method.

The first one falls off faster, making it a stronger low-pass filter.

 


  • 1 month later...
On 02/05/2020 at 21:57, vlaiv said:

I called multiple times for people to do an experiment. I did it once and simple resizing won. [...]

Also measure FWHM of resulting stacks - this will show you what happens with resolution.

I like experiments and I have never really dabbled with drizzle, so here goes! 

I started by using some data captured at 0.86"/px. The calibrated subs have a median FWHM of 2.34". To get this undersampled, I used IntegerResample with a downsize factor of 4, which yielded 3.448"/px and visibly undersampled stars:

[image attachment]

The binned subs were then registered and integrated, before being drizzle integrated (a PI workflow thing) with no weighting, no normalisation and linear fit clipping (a dataset of 60 subs). The integration-only image looks badly undersampled:

[image attachment]

The drizzle (x2) integrated subs look better, yielding 1.725"/px:

[image attachment]

Next up is resampling, so I resampled the registered bin-4 subs before stacking them. I tested the Lanczos3 algorithm but seemed to get better results with bicubic spline, so this was chosen (the screenshot shows Auto; PI picked Bicubic Spline):

[image attachment]

These were then integrated, yielding this - it doesn't look great:

[image attachment]

The resulting integrations are linked below for people to interrogate as they wish. 

drizzle_integration.fit integration_only.fit integration_resampled.fit

  • Like 1

55 minutes ago, jimjam11 said:

The resulting integrations are linked below for people to interrogate as they wish. 

This is very interesting, and it shows that I could have been wrong either in dismissing drizzle or in the proposed comparison method.

Will need to look into it a bit more.

Star shapes are indeed poorer on the upscaled subs, but that is a consequence of ringing, which can happen when we upscale very undersampled data. I guess a different upscaling algorithm would deal with it - maybe B-spline interpolation could be used instead of Lanczos?

Could you do another test with Bicubic B-spline for the upsample?

  • Like 1

I reran the experiment with some natively undersampled data, captured with an EF200/ASI1600 yielding 4.04"/px. The results are somewhat different:

[image attachment]

 

Top left is 4.04"/px - in theory undersampled. In reality the stars don't look anything like as bad as the 3.48"/px binned data above, but they are slightly blocky. FWHM is 5.802".

Top right is the data resampled prior to integration, yielding 2.02"/px. FWHM is 6.145".

Bottom right is drizzled. FWHM is 6.190".

Bottom left is integrated as per the top left, but the integrated stack was then resampled. FWHM is 5.578". This is the lazy option!

 

Some observations:

Resampling a bunch of subs and then integrating them is time- and space-consuming compared to drizzle integration (in the land of PI, at least). PI is very fast at drizzle integration despite it being a two-step process.

 

Whilst my data suggests drizzle has some benefit over resampling prior to stacking (acknowledging I might, probably, have done something wrong), I am struggling to see the benefit of drizzle at all unless you have a very strange equipment configuration or image somewhere exotic (like Mauna Kea) which can yield very small star FWHM. Based on these results it is really hard to get undersampled data which properly benefits from drizzle, because you need the combination of large aperture, short focal length and large pixels - something like a RASA paired with an Atik 11000? I guess the HST is one such example :).

 

 


I use CFA Drizzle (a.k.a. Bayer Drizzle) as an important part of my DSLR workflow. It means that Bayer interpolation no longer takes place, and it results in noise that is more pleasant-looking and more finely grained. Such noise is easier to remove from the image and gives a better-looking final result.

Mark

  • Like 2

56 minutes ago, vlaiv said:

This is very interesting, and it shows that I could have been wrong either in dismissing drizzle or with proposed comparison method.

Will need to look into it a bit more.

Star shapes are indeed poorer on upscaled subs, but that is consequence of ringing that can happen when we upscale very undersampled data. I guess that different upscaling algorithm would deal with those. Maybe B-spline interpolation could be used instead of Lanczos?

Could you do another test with Bicubic B-spline for upsample?

The previous data was bicubic spline; Lanczos3 was even worse. This is with bicubic B-spline and is definitely better, but still not as good as the drizzled version:

[image attachment]

integration_resampled_bicubic_b_spline.xisf


26 minutes ago, sharkmelley said:

I use CFA Drizzle (a.k.a. Bayer Drizzle) as an important part of my DSLR workflow.  It means that Bayer interpolation no longer takes place and it results in noise that is more pleasant-looking  and more finely grained.  Such noise is easier to remove from the image and gives a better looking final result. 

Mark

Bayer drizzle will actually work if implemented properly, unlike regular drizzle.

That is because in Bayer drizzle one does not shrink down pixels, but rather exploits the fact that the pixels are already "shrunk down" compared to the sampling rate. It is this shrinking step in regular drizzle that is questionable (in my view), as it needs quite precise dither offsets to be effective.

50 minutes ago, jimjam11 said:

Top left is 4.04"/px - in theory undersampled. In reality the stars don't look anything like as bad as the 3.48"/px binned data above, but they are slightly blocky. FWHM is 5.802".

Top right is the data resampled prior to integration, yielding 2.02"/px. FWHM is 6.145".

Bottom right is drizzled. FWHM is 6.190".

Bottom left is integrated as per the top left, but the integrated stack was then resampled. FWHM is 5.578". This is the lazy option!

This is something that I would expect from a comparison of the two.

I don't think the methodology is wrong, however, and synthetic data is also representative of what will actually happen. I think whether differences show up comes down to the level of undersampling.

For example, 4.04"/px is very close to the theoretical "ideal" sampling rate for a FWHM of 5.8", which is 5.8 / 1.6 = 3.625"/px. We could say we are undersampled by (4 - 3.625) / 3.625 ≈ 0.1, or 10%, in this case.

In the first case we had a FWHM of 2.34", which corresponds to 1.4625"/px, while the image was undersampled at 3.448"/px - roughly 136%.

I would say it is the first case that should matter more - and it shows that drizzle is not bad. I have not done SNR measurements, and I wonder: how well dithered are the subs?

In the drizzle-integrated image I also measured FWHM at around 3.1-3.2px, which corresponds to 0.86"/px * 3.2px = 2.752" FWHM - very close to the 2.34" FWHM of the original image. The Lanczos-upscaled images fare very similarly - a slightly higher FWHM at around 3.4px, i.e. 2.924" FWHM - both values far below the FWHM that corresponds to 3.448"/px (~5.52").
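The sampling arithmetic used here can be written out explicitly (my own sketch, using the FWHM / 1.6 rule of thumb from earlier in the thread):

```python
# Ideal sampling rate for a given star FWHM, per the FWHM / 1.6 rule of thumb.
def ideal_scale(fwhm_arcsec: float) -> float:
    return fwhm_arcsec / 1.6

# Fractional amount by which the actual pixel scale is coarser than ideal.
def scale_excess(actual_scale: float, fwhm_arcsec: float) -> float:
    ideal = ideal_scale(fwhm_arcsec)
    return (actual_scale - ideal) / ideal

print(ideal_scale(5.8))              # ~3.625 "/px
print(scale_excess(4.0, 5.8))        # ~0.10 -> about 10% coarser than ideal
print(scale_excess(3.448, 2.34))     # ~1.36 -> about 136% coarser than ideal
```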
