
To Drizzle? or not to Drizzle?



Hi all.

Not been into AP long, and I have been working my way through the ever-increasing pile of information on the subject, trying to make sense of all the terminology and techniques.

The first time I used DSS I noticed the 2x drizzle option, but not knowing what it was or did, I left well alone. But having heard its name mentioned on here many, many times, I thought I'd better find out what this drizzle was all about.

I found this site, which seems to explain the concept quite well, and I feel I have a basic grasp of its workings.

http://www.stark-labs.com/craig/resources/Articles-&-Reviews/Drizzle_API.pdf

My question is, would using drizzle be a good option for me, using images gained from my limited setup (see signature)?

From what I can understand, it seems to work better if there is some movement between subs, which I would imagine I would get from time to time.

Thanks for any advice

Jez


It would, but it only works in conjunction with dithered guiding. Once you have a good number of sub exposures which are not precisely aligned with each other (because you are unguided, have bad polar alignment, or have used dithered guiding) it is a good system. In my case I find it is hard to get the dithered guiding to work though!

Olly


I think you may also need to start with an undersampled image, otherwise there's no additional information for the drizzle process to work with and it just becomes a glorified rescaling.

James


With a small target on a DSLR sensor you can use drizzle well enough. It is better than resizing because you are doing it before stacking (or while stacking, I don't know how the maths works) so you get a smoother upscale than simply resizing the final image. You probably should select a smaller 'custom rectangle' covering just your target rather than the whole DSLR image unless you have a monster PC to work with.

Olly, I find dithering works wonders but I occasionally have to increase the guider settle time, otherwise I get the jitters at the start of an exposure. The joys of EQ6 machining tolerance :rolleyes:



Nebulosity doesn't restart till it has settled at the right shift and sometimes this simply never happens!

Olly


Thanks for the advice, guys. It does seem it would be worth a try next time I get some usable data. I'll stack a 'with' and 'without' version and compare.

Forgive my ignorance, but a few more questions. Like I said, I'm new :) and these are more to clarify my understanding of it all.

Is dithering basically where you force slight guiding errors, which would shift the image a few pixels on the sensor?

I know the name gives it away, but what exactly is an undersampled image, and how would one go about obtaining one?

Thanks for your help and comments. Appreciate it.

ATB

Jez


An undersampled image is where the potential resolution of the telescope is not matched by the resolution of the camera. For example, if your optics have a potential resolution of two arcseconds, but the pixel size of the camera means that each pixel covers five arcseconds of the sky then you have an undersampled image because you're not capturing all the data that's available to you.
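As a rough illustration, here's a minimal Python sketch of that check (the numbers are the hypothetical ones from the example above, not any real setup):

    # Rough under/oversampling check (illustrative numbers only)
    ARCSEC_PER_RADIAN = 206265

    def pixel_scale(pixel_um, focal_mm):
        """Arcseconds of sky covered by one camera pixel."""
        return ARCSEC_PER_RADIAN * (pixel_um / 1000.0) / focal_mm

    optics_resolution = 2.0  # arcsec the optics can resolve (assumed)
    scale = pixel_scale(pixel_um=10.0, focal_mm=400.0)  # ~5 arcsec/pixel (assumed)
    print(f"{scale:.2f} arcsec/pixel vs {optics_resolution} arcsec optics")
    print("undersampled" if scale > optics_resolution else "not undersampled")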

I've been thinking about it this morning whilst servicing the egg incubator and I'm tempted to revise my view. I'm now thinking that drizzling becomes less useful the more you oversample. There may perhaps be some sort of inverse relation between how much you oversample and how much drizzling can pull more information out of the image.

That said, my suspicion is that in the general case DSO imagers will be undersampling and perhaps quite significantly. Planetary imagers are more likely to be able to heavily oversample because the targets are bright enough to allow very large focal lengths and slow focal ratios.

If no-one else beats me to it I'll post some calculations for examples of checking the sampling rate, but first I'm being hassled to help sort out lunch :)

James


Is dithering basically where you force slight guiding errors, which would shift the image a few pixels on the sensor?

Sort of, yes. While the exposure is running, the guiding stays on target. In between exposures, the guider moves the mount a tiny bit. Mine is set so it moves up to about 8 pixels each time. Then it waits until the mount/guiding is stable again in the new place before it starts the next exposure. This gives me better average sampling of the target, and the hot pixels on my chip get lost in the stacking. I have a Sony-chip camera and always dither guide, so I don't have to use darks.
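The sequence amounts to a loop like this. A minimal Python sketch, where guider.move(), guider.error() and camera.expose() are hypothetical placeholders rather than any real guiding API:

    import random
    import time

    DITHER_MAX = 8  # maximum dither per frame, in guide-camera pixels

    def dither_and_settle(guider, tolerance=0.5, timeout=60):
        """Nudge the mount a random amount up to DITHER_MAX pixels, then
        wait until the guiding error settles below tolerance again."""
        guider.move(random.uniform(-DITHER_MAX, DITHER_MAX),
                    random.uniform(-DITHER_MAX, DITHER_MAX))
        start = time.time()
        while guider.error() > tolerance:  # still drifting into place
            if time.time() - start > timeout:
                raise TimeoutError("guiding never settled")
            time.sleep(1)

    # per sub: dither, wait for settling, then start the next exposure
    # dither_and_settle(guider); camera.expose(seconds=300)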



Cheers James. I'd be interested in the calculations. Thanks for taking the time to do this :)

Cheers Rik. The author of the article I read on this subject had a whole row of pixels that were dead, and this is also how he got rid of it. I don't guide ATM; I want to learn all the bits that come before that first, using the kit I have. But from the posts on this thread, it would seem that, because I am unguided, it will kind of be a (bodged) dither, because I won't be able to get anything perfect. Is this the right assumption to make?

Really appreciate all the help to get my head wrapped around this :D


From the posts on this thread, it would seem that, because I am unguided, it will kind of be a (bodged) dither, because I won't be able to get anything perfect. Is this the right assumption to make?

Yes, I guess so. Before I started guiding, I found that I got much better results if I imaged on both sides of the meridian. The dreaded flip is a pain, but I think having two sets of essentially the same data, not quite framed exactly the same, averaged things out a bit. You could achieve the same thing as automated dithering by blipping a handset button and then waiting for the mount to settle again in between each exposure. If you are pausing between exposures to let the sensor cool, this won't add any time to your run.


It works well for me, and I oversample the sky and don't dither.

If you read Craig Stark's paper (linked in the OP's first posting), it suggests that drizzle requires an undersampled image and that misalignment of the images (whether dithered or otherwise) is also necessary. I wouldn't argue with you being happy with whatever drizzling achieves for you, but is he wrong, am I misreading his paper, or is it actually not doing what you think it's doing for your images?

James


Historical note: "Drizzle", as now defined, has been around for decades - witness the 8mm etc. home movies from the 1930s onward. Each individual frame is relatively coarse and grainy, but project the movie at the usual running speed and the image becomes sharp and clear despite the minute size of each projected frame. The individual photo-sensitive grains in each frame are randomly located [unlike the CCD sensor!] and hence mimic the astro application of "drizzle".


I'd have said that drizzle cannot work if the same pixels sample exactly the same bit of sky between images. There is no new information in a second or a third image. However, I'd also say that no amateur will ever find him or herself in this situation, which is why Tim finds it works. We are bound to have at least sub pixel errors arising from imperfect PA and guiding. Drizzle doesn't need whole pixel movement as I understand it. Indeed Nebulosity only seems to offer sub pixel dither increments.* The new information, as I tend to think of it, comes from a given pixel taking a peek either side of its notional 'original position.'

I'd expect Sigma Clip to benefit from whole pixel movements though. We shouldn't mix up the Sigma clipping of outlying pixel values with drizzle stacking. They are distinct processes though both involve moving between subs.

Olly

*I think. I'd need to check that and I'm too whacked!


Ok, regarding calculating resolution I'm really not sure how this works with a colour camera because of how the Bayer mask and subsequent recombination of the colour data is done. Perhaps we should treat the pixels as twice their normal size as a first approximation, which would make them 10.4um for the 1100D.

The 200P has a focal length of 1000mm I believe, so that would give you 206265/1000 arcseconds of sky per mm of image sensor, or 206265 / 1000 * 0.0104 = 2.15 arcseconds per pixel.

According to Rayleigh, the resolution of the telescope is wavelength-dependent, but if for the sake of this example we assume blue light with a wavelength of 400nm, the resolution is 206265 * 4x10^-4 / 200 = 0.41 arcseconds (strictly, the Rayleigh criterion includes a factor of 1.22, which would give about 0.5 arcseconds). Red light would almost halve the resolution, giving about 0.77 arcseconds.
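Those two calculations as a quick Python sketch (same assumed numbers: 1000mm focal length, 10.4um effective pixels, 200mm aperture; note it uses plain lambda/D rather than the full Rayleigh 1.22*lambda/D):

    ARCSEC_PER_RADIAN = 206265

    focal_mm = 1000.0    # 200P focal length (assumed)
    pixel_mm = 0.0104    # 1100D pixel, doubled for the Bayer mask
    aperture_mm = 200.0

    print(f"camera: {ARCSEC_PER_RADIAN / focal_mm * pixel_mm:.2f} arcsec/pixel")  # ~2.15

    for colour, wavelength_mm in [("blue", 4e-4), ("red", 7.5e-4)]:
        print(f"{colour}: {ARCSEC_PER_RADIAN * wavelength_mm / aperture_mm:.2f} arcsec")
    # blue: ~0.41, red: ~0.77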

So, assuming I've done the calculations correctly in theory at least the camera is significantly undersampling the potential resolution of the telescope and therefore drizzle may well be of benefit. My recollection is that Craig Stark's paper also says that you can't use drizzle to overcome the limitations of the seeing however, so if the camera can sample at about 2 arcseconds per pixel I'd be starting to wonder if there's anything to be gained other than on nights of quite exceptional seeing.

I don't recall seeing it mentioned anywhere but I also can't help wondering if we shouldn't let Nyquist stick his oar in at some point and consider the image "practically undersampled" if you can't oversample by a factor of two. I think that requires the input of someone who understands the theory rather better than me though.

James



This is exactly how I understood the info, Olly. It was the fact that drizzle won't work with 100% aligned subs that initially caught my eye, because clearly any subs I take ATM won't be 100%.

This drizzle does seem to have a bit of a grey area though, judging by some of the comments. It seems quite a confusing topic even for the more experienced amongst us, so I feel like stepping back from the thread and seeing what info crops up on here.

Jez


I'd have said that drizzle cannot work if the same pixels sample exactly the same bit of sky between images. There is no new information in a second or a third image. ...

That's my understanding of the Stark paper. Moving the image by an exact number of pixels doesn't give you what you need for the drizzle to work. Crop that same number of pixels off the sides of the image and you've effectively got exactly the same image twice. Drizzle relies on sub-pixel movement to be able to "see" what the camera would have picked up in every sub if the camera resolution were better in the first place.
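As a toy illustration of that shift-and-add idea (a 1D Python/numpy sketch, nothing like the full Drizzle algorithm with its drop footprints): each coarse pixel value is "dropped" onto a 2x finer output grid at its known sub-pixel offset, and overlapping drops are averaged.

    import numpy as np

    def drizzle_1d(subs, offsets, scale=2):
        """Toy drizzle: drop each coarse sample onto a grid `scale` times
        finer, shifted by that sub's known offset, and average the drops."""
        out = np.zeros(len(subs[0]) * scale)
        weight = np.zeros_like(out)
        for sub, offset in zip(subs, offsets):
            for i, value in enumerate(sub):
                j = int(round((i + offset) * scale))  # position on the fine grid
                if 0 <= j < len(out):
                    out[j] += value
                    weight[j] += 1
        return out / np.maximum(weight, 1)

    # two subs of the same scene, misaligned by half a coarse pixel:
    # the half-pixel shift fills in the fine-grid samples the first sub missed
    subs = [np.array([0.0, 1.0, 0.0, 0.0]), np.array([0.0, 0.6, 0.4, 0.0])]
    print(drizzle_1d(subs, offsets=[0.0, 0.5]))

Shift the second sub by a whole pixel instead (offset 1.0) and its drops land on exactly the same fine-grid positions as the first sub's, which is the point made above.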

Interesting idea about what might be happening in Tim's situation. I'm struggling to get my head around that, I have to admit. If guiding is insufficiently accurate and as a result subs do in fact have slightly different alignment, then presumably there's some "blurring" in each sub as a result of the same problem. How that interacts with the drizzle calculations I'm really not at all sure.

James



:eek: My brain has just fallen out!

Cheers for the info, James. Very informative, but it has made me think that perhaps I am trying to understand things a little out of my depth ATM. I am still not quite sure how much sky an arcsecond actually is, but I am working on it :)

But from your calculations and post it would seem I am undersampling, so drizzle should work and be of benefit.

I know I will never get amazing images with my kit compared to someone like Olly and most of the other people on here, but I would like to learn to get the best out of my kit, and it sounds like drizzle will aid me in this task.

thanks

Jez


I am still not quite sure how much sky an arcsecond actually is, but I am working on it :)

It's probably easiest to visualise in terms of things you can see. An arcminute is one sixtieth of a degree. An arcsecond is one sixtieth of an arcminute. The Moon (and the Sun) are about half a degree or thirty arcminutes across (both vary by a few arcminutes). Even one arcminute is very small in terms of the area of sky covered. Jupiter is about fifty arcseconds (so not far off one arcminute) at opposition, yet with the naked eye it's not really distinguishable from a point. One arcsecond is therefore clearly quite tiny in terms of our naked eye view of the sky.
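Or in numbers, a trivial Python sketch of the conversions:

    # 1 degree = 60 arcminutes = 3600 arcseconds
    moon_deg = 0.5
    print(moon_deg * 60, "arcmin")       # 30.0 arcmin across
    print(moon_deg * 3600, "arcsec")     # 1800.0 arcsec across
    jupiter_arcsec = 50
    print(jupiter_arcsec / 3600, "deg")  # ~0.014 degrees at opposition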

James


I'd expect Sigma Clip to benefit from whole pixel movements though. We shouldn't mix up the Sigma clipping of outlying pixel values with drizzle stacking. They are distinct processes though both involve moving between subs.

I dither guide so I can use sigma clipping to get rid of hot pixels, not for drizzle. I dither each frame by 2.5 guider pixels which is about 7.5 pixels on my imaging camera.
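That conversion is just the ratio of the two image scales. A quick Python sketch with illustrative focal lengths and pixel sizes (assumed values, not the actual kit):

    ARCSEC_PER_RADIAN = 206265

    def image_scale(pixel_um, focal_mm):
        """Arcseconds per pixel for a given camera/scope pairing."""
        return ARCSEC_PER_RADIAN * pixel_um / 1000.0 / focal_mm

    guider_scale = image_scale(pixel_um=7.4, focal_mm=400.0)  # assumed guide setup
    imager_scale = image_scale(pixel_um=5.4, focal_mm=800.0)  # assumed imaging setup

    dither_guider_px = 2.5
    print(dither_guider_px * guider_scale / imager_scale, "imaging pixels")  # ~7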

Moving the image by an exact number of pixels doesn't give you what you need for the drizzle to work. Crop that same number of pixels off the sides of the image and you've effectively got exactly the same image twice. Drizzle relies on sub-pixel movement to be able to "see" what the camera would have picked up in every sub if the camera resolution were better in the first place.

I think that even if you move the mount by a whole number of pixels, the subs won't line up exactly, so you will get the required sub-pixel misalignment for drizzle to work by default.


I agree that moving by a whole number of pixels is no different from not moving at all from a dither-drizzle point of view. No new information comes in. However, you'd have a hell of a job to move exactly one pixel up and one left!!! Or whatever.

On the other hand a whole pixel shift should be good for sigma, no?

Please, let's not get involved either with the Bayer matrix or with that man Nyquist!!!!!

Olly


OK, I really hate to interrupt the flow, but (I have googled and tried to find my own answer) what exactly is sigma clipping (SC median etc.)? Again, I am aware of their presence, but I really don't know what they do.

Also, although I am a bit ashamed to ask because it is mentioned so much, so I should know: what is the Bayer matrix? Is this to do with the layout of the sensor?

Sorry to be the dumbass of the thread, but you guys are genuinely the SGL encyclopaedia of all things worth knowing.


I think that even if you move the mount by a whole number of pixels, the subs won't line up exactly, so you will get the required sub-pixel misalignment for drizzle to work by default.

I agree, but I think in any discussion we need to be clear about whether we're talking about the theoretical results or the empirical ones. In theory moving by a whole number of pixels won't help for the reasons already stated. Empirically it might, because it may well be impossible to guide to the level of accuracy required though different mounts may give completely different levels of accuracy. Without qualification, a statement such as "I dither guide to an exact number of pixels" might give completely the wrong impression about what is actually happening. I'm reminded that "in theory practice gives the same results as theory, but in practice it doesn't" :)

James


OK, I really hate to interrupt the flow, but (I have googled and tried to find my own answer) what exactly is sigma clipping (SC median etc.)? Again, I am aware of their presence, but I really don't know what they do.

Sigma clipping is a way of stacking, where pixels falling outside the average range of brightness for the stack of exposures are rejected.

That means that with dither, the target image falls on different pixels on each subframe but the hot pixels stay in the same place since they are in the camera. Then when you align the frames prior to stacking, this misaligns the hot pixels. The stars and DSO target are in the same place on each frame but the hot pixels only show on one frame each. On that frame, they are much brighter than the average brightness of that same pixel on all the other frames so are rejected from the stack (and don't show on the final image). Satellite trails and cosmic ray hits get rejected in the same way, because they only show on one frame.

Sigma clipping, kappa-sigma median, whatever (it's called SD mask in MaxIm), all give much the same result, but the maths behind them may be different.
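In essence it's something like this minimal numpy sketch (one common variant; the exact maths differs between packages):

    import numpy as np

    def sigma_clip_stack(subs, kappa=3.0):
        """Average a stack of aligned subs, rejecting any pixel value more
        than kappa standard deviations from that pixel's mean across the
        stack (hot pixels, satellite trails, cosmic ray hits)."""
        stack = np.stack(subs)                      # shape (n_subs, height, width)
        mean = stack.mean(axis=0)
        std = stack.std(axis=0)
        keep = np.abs(stack - mean) <= kappa * std  # per-pixel accept/reject mask
        return (np.where(keep, stack, 0).sum(axis=0)
                / np.maximum(keep.sum(axis=0), 1))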

Also, although I am a bit ashamed to ask because it is mentioned so much, so I should know: what is the Bayer matrix? Is this to do with the layout of the sensor?

The Bayer matrix is the set of coloured filters stuck on top of the pixels in a one-shot colour or DSLR camera. Usually they are grouped in sets of four with one red, one blue and two greens, as that mimics the sensitivity of the human eye. These filter blocks can be arranged in a different order on the chip, so to interpret the data correctly, software has to know what order the colours are in: RGGB, RGBG, etc.
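For example, an RGGB pattern tiles across the sensor like this (a tiny illustrative Python sketch):

    import numpy as np

    # one 2x2 RGGB block; the same block repeats across the whole sensor
    BAYER = np.array([["R", "G"],
                      ["G", "B"]])

    def filter_at(row, col):
        """Colour filter sitting over the pixel at (row, col)."""
        return BAYER[row % 2, col % 2]

    print([filter_at(r, c) for r in (0, 1) for c in (0, 1)])  # R G G B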

... you guys are genuinely the SGL encyclopaedia of all things worth knowing .

I know very little, but I did read a good book once: http://www.firstligh...e-richards.html

