
Does Binning help with smaller targets on a wide field scope?


oymd


Could someone explain how binning would affect the end-result image?

Say I want to image IC 63, the Ghost Nebula, with my WO Star 71 and 294MC Pro.

I had a look on Telescopius, and the target would be very small on my image.

I've only imaged at binning 1x1 in the past.

If I image at 2x2 or 3x3, would that help when processing the final image, as each pixel would be 4 or 9 aggregated pixels, and I can zoom in on the target when processing and crop accordingly?

Here's what my FOV would look like if I imaged IC 63:

Lastly, I suppose the above is HARDWARE binning? What would SOFTWARE binning mean?

Many thanks

[attachment: Binning.jpg - FOV simulation]


Hardware binning helps dramatically with signal to noise ratio, but any binning severely reduces the resolution of your sensor. For example, if the image started out at 500x200 pixels on the sensor, then after 2x2 binning you will end up with an image which is only 250x100 pixels. So you will decrease the fine detail visible, but on the plus side the image will be much cleaner, as each new super pixel collects 4 times more light.

The only way to get finer detail in the image is either to get a longer focal length scope or a barlow, or a sensor with smaller pixels. But there is a price to pay: usually you end up with less light per pixel and need to expose for longer.

Actually there is another very popular method: drizzle. This works when your images are shifted by a few pixels between frames (dithering). But others can advise on that as I'm not an expert.


No, binning will not do what you want.

Like @Nik271 explained - binning reduces the number of pixels the image is composed of and in the process increases the SNR of that image at the expense of detail. In some cases no detail will be lost - if your image is oversampled.

What you want in your case is to crop the image.

First, let's address hardware vs software binning. Hardware binning is only available on CCD sensors that support it. There is very little difference between the two methods, and it concerns read noise. Everything else is the same - except for the fact that software binning can be performed after you gather data, in processing, which is good as you can decide whether to bin based on the actual level of detail achieved.

You are imaging with an ASI294MC, so your only option is software binning. You can choose to do it in the camera firmware (at capture time) - but I would advise against it, as there is no real benefit in doing so (except for smaller image files). You can decide to bin your data in software if you want to improve SNR, or if you conclude that there is no detail present that would justify bin x1 (which is often the case with today's cameras with small pixels).

In any case, I would advise you to capture at normal resolution, decide after stacking whether to bin, and then create two versions of the image:

1. Full FOV. This will show the target at its proper size when viewed at 100% zoom.

2. Cropped version - that will show the target properly without the need to zoom in to 100%.

 


Use dither on your guiding to move the image by a few pixels for each frame.

This means you can then use Drizzle in PI to double or triple the size of the image using approximations.

Using 2x or 3x binning will not increase your image size; it will reduce it, combining each 2x2 or 3x3 block of pixels into a single pixel, so you will lose resolution but gain signal.

 


I would recommend against using drizzle.

You'll be using a 71mm aperture scope. Such a scope under regular imaging conditions (decent mount, average seeing, etc.) will produce images with ~3.2" FWHM stars (assuming 0.8" RMS guiding and 2" seeing). The optimum sampling rate for that level of sharpness is about 2"/px.

With 2.7"/px you are already quite close to 2"/px and you don't need to drizzle - it will not provide you with additional detail.
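For anyone who wants to play with those numbers, here's a rough sketch of the arithmetic. My assumptions to reproduce the figures above: the three blur sources add in quadrature, RMS guiding converts to FWHM via the Gaussian factor 2.355, the aperture contributes an Airy-disk FWHM at an assumed 550nm, and the optimum sampling rate is taken as FWHM/1.6:

```python
import math

def total_fwhm_arcsec(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=550):
    # Approximate star FWHM by combining the three blur sources in quadrature:
    # seeing FWHM, guiding (RMS -> FWHM via the Gaussian factor 2.355),
    # and the aperture's Airy-disk FWHM (~1.02 * lambda / D).
    guide_fwhm = 2.355 * guide_rms
    airy_fwhm = math.degrees(1.02 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)) * 3600
    return math.sqrt(seeing_fwhm**2 + guide_fwhm**2 + airy_fwhm**2)

fwhm = total_fwhm_arcsec(seeing_fwhm=2.0, guide_rms=0.8, aperture_mm=71)
print(f"expected FWHM : {fwhm:.2f}\"")           # ~3.2"
print(f"optimum scale : {fwhm / 1.6:.2f}\"/px")  # ~2"/px (FWHM/1.6 rule)
```

With the Star 71 figures above this lands on ~3.2" FWHM and ~2"/px, matching the estimate.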

(In fact, I don't think that drizzle works at all in amateur setups, but that is another topic).

In any case - neither drizzle nor binning will change the size of the target with respect to the FOV. Only cropping / mosaicing or the use of reducers / focal extenders will change the FOV, and hence the ratio of target size to FOV (the target size itself remains the same).

If you want to make your target larger with respect to FOV - you have two options:

1. Shoot at current resolution and then crop your FOV

2. Use focal extender (like barlow or telecentric lens).

The second option also changes your pixel scale, and that is something you might or might not like - so do take it into consideration.

 


Very informative replies, thanks...

Ok, let's say that theoretically I want to Drizzle.

@vlaiv You mentioned that I can choose to BIN AFTER stacking? How can I do that? Never saw that option? Either in DSS or in PI's WBPP? I never saw an option to change the images from the way I took the subs at 1x1 BIN to say 2x2 BIN?

In PI, in WBPP under IMAGE REGISTRATION, I always click on the box that says GENERATE DRIZZLE DATA, but I never see actual Drizzle files generated? I always DITHER after EVERY 300s sub anyways, but never saw the drizzle files? Do they get automatically added to the registered files during image INTEGRATION in WBPP in PI?

In my REGISTERED folder, I can see all the registered frames, and under each one is a 1MB file that looks like this:

2021-09-21_02-10-04__-4.90_600.00s_0020_c_cc_d_r.xdrz


10 hours ago, oymd said:

You mentioned that I can choose to BIN AFTER stacking? How can I do that? Never saw that option? Either in DSS or in PI's WBPP? I never saw an option to change the images from the way I took the subs at 1x1 BIN to say 2x2 BIN?

The simplest method is to do it after stacking. Take your linear image, prior to any processing steps, and use the PI tool "IntegerResample".

https://pixinsight.com/doc/tools/IntegerResample/IntegerResample.html

Mode should be Downsample; the resampling factor is the bin factor - set it to 2 if you want to bin 2x2, 3 for 3x3, and so on.

The downsample mode should be set to Average (or Sum, if you actually want to sum pixel values to measure things like photometry - not really important for image processing).
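For anyone curious what this down-sample amounts to numerically, here is a minimal numpy sketch of the same operation (my approximation of what such a tool does, not PixInsight's actual implementation):

```python
import numpy as np

def integer_resample(img, factor, mode="average"):
    """Rough numpy equivalent of down-sampling with integer binning:
    combine each factor x factor block of pixels into one output pixel."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor        # crop to a multiple of factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    summed = blocks.sum(axis=(1, 3))             # "sum" mode (photometry)
    return summed / factor**2 if mode == "average" else summed

img = np.random.rand(1000, 600)
binned = integer_resample(img, 2)
print(binned.shape)  # (500, 300) - half the size in each dimension
```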

58 minutes ago, gorann said:

To do 2x2 binning with your camera you choose the Superpixel option when debayering in PI

Super pixel mode is slightly different from bin x2 after stacking. Very slight difference in result, and a general difference in method.

As far as image size and sampling rate go, they will produce the same result. As far as SNR, pixel-to-pixel correlation, channel alignment and a few other things go, they will differ.

10 hours ago, oymd said:

Ok, let's say that theoretically I want to Drizzle.

Do you know why you would want to do that? Do you have any particular reason to do so?



So which should I use, Superpixel or binning after stacking?


4 hours ago, gorann said:

So which should I use, Superpixel or binning after stacking?

Depends on what you want to achieve.

I would avoid super pixel altogether if possible. There is a better way to do that - one can split the image into four subs: one red sub, two green subs and one blue sub.

Then you take each "colour" and stack them as mono subs. That is the simplest and best approach to avoid the slight channel offset that super pixel mode introduces. Unfortunately, I'm not sure this approach is readily available in software, but it is the "cleanest" way to treat OSC data - it gives you the exact sampling rate for each channel, and if you align all channels when stacking it will fix any offset issues.

On the other hand, binning after stacking will work "properly" only if the subs are dithered. If they are merely aligned, you won't get the expected SNR improvement: after debayering, 3/4 of the red and blue pixels (and 1/2 of the green pixels) are linearly dependent on other pixels - debayering fills in the missing pixels by interpolating existing ones, so the noise in those pixels is no longer random; it is a product of adjacent pixels.

The main problem is that people think an OSC camera has the same resolution / pixel count as the equivalent mono camera, and the same pixel count is often quoted.

For example, the ASI2600MC is 6248 x 4176 px, but in reality you only have 3124 x 2088 red and blue pixels in the colour version.

I think it is best to view colour sensors for what they really are and skip debayering altogether - they are "interspersed" red, blue and two green sensors, each having 1/4 of the original pixel count and the corresponding resolution.
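The channel splitting described above is only a few lines of numpy. This sketch assumes an RGGB Bayer layout - check your camera's actual CFA pattern before using it:

```python
import numpy as np

def split_cfa(raw):
    """Split a raw Bayer mosaic into four colour planes (no debayering).
    An RGGB layout is assumed - verify your sensor's actual CFA pattern."""
    return (raw[0::2, 0::2],   # R
            raw[0::2, 1::2],   # G1
            raw[1::2, 0::2],   # G2
            raw[1::2, 1::2])   # B

raw = np.arange(36).reshape(6, 6)   # stand-in for a raw frame
r, g1, g2, b = split_cfa(raw)
print(r.shape)  # (3, 3) - each plane has 1/4 of the pixels
```

Each plane can then be calibrated and stacked as if it were a mono sub, with channel alignment handled by the stacking registration.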

 


@vlaiv

With regards to your question about why I would want to drizzle - it's just to know the process of how to do it.
 

I’m currently imaging with my WO Star 71 only, which is 350mm FL, and have not yet got the right bracket for the ZWO EAF for my Esprit 100ED, but that is also 550mm FL. 

I realise that I will not need to bin or drizzle with my 294MC Pro camera on those 2 scopes, but I also have an Edge HD 11, and plan to give it a go at imaging in the near future, and according to that formula on over or under sampling, I will need to Bin and drizzle on that scope with my camera. 
 

So I just wanted to be prepared and have the knowledge to do it when I’m ready to cross that bridge. 



Drizzle and bin are sort of opposite operations.

Bin reduces sampling rate and regains SNR in the process.

Drizzle is supposed to increase sampling rate but loses some SNR. The original drizzle algorithm required very precise dithers. I'm not sure how efficient it is in amateur setups, and nowadays pixels are small enough that you don't need to drizzle at all. Drizzle was developed for the Hubble Space Telescope, which had a massive aperture and long focal length, and its scientific CCD sensors had huge pixels - which led to undersampling because of the lack of atmosphere.

We don't have anything like that. We have an atmosphere that blurs our images, we have small pixels and small aperture scopes (and even large aperture scopes are limited by the atmosphere), and in the end our dithers are anything but precise enough (dithers for drizzle need to move the scope with an accuracy of a fraction of a pixel - something HST was capable of but most amateur setups are not).

On the Edge HD 11 you'll need to bin your data, not drizzle.

While drizzle is an interesting concept to know about, it is really not feasible in most if not all amateur setups.

Link to comment
Share on other sites

Just to note that binning is only one of many downsampling methods, and is far from the optimal approach. Yes, binning is the only practical method when it is done on the sensor. But once the data is off the sensor it would be better to choose a different method. Something called image rescaling -- if implemented properly -- is  most likely the one to choose. (The main difference is that rescaling ought to include a low-pass filter that excludes high spatial frequencies that cannot be represented in the downsampled image; failure to exclude them -- which is what binning does -- results in a little additional noise that comes from the energy in high spatial frequencies that has nowhere else to go in the downsampled image).

cheers

Martin

 



There is a big difference between binning and other forms of downsampling.

Binning does not introduce pixel-to-pixel correlation (which happens with a low-pass filter) - and it has a precisely defined SNR improvement. That SNR improvement is higher than with most other downsampling methods - except those that smooth the data too much and hence lower the resolution of the image (say, bicubic interpolation).

A good way to compare different downsampling techniques would be threefold:

1. Generate random noise of known magnitude, then downsample by a certain factor (usually x2) and measure the resulting noise magnitude - this shows how much SNR improvement there will be (the assumption is that proper downsampling won't change the average signal value, so we can examine just the noise component)

2. Take a PSF of a certain FWHM that is oversampled by a factor of x2, and measure the resulting FWHM after downsampling, to see how much blurring the downsampling introduced (binning actually introduces a bit of blurring, known as pixel blur - other methods introduce more or less blurring, depending on the method)

3. Take a number of random noise images, downsample them, then stack them and measure the resulting noise levels. This measures the pixel-to-pixel correlation introduced by the downsampling. In the ideal case (binning is an example of this), stacking produces the expected result for random samples - noise decreases by the square root of the number of stacked images.
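Test 1 can be sketched in a few lines of numpy. Here I compare equal-weight binning with the same x2 down-sample done using unequal weights (an arbitrary illustrative kernel of my choosing), to show that only equal weights give the full sqrt(4) = 2 noise reduction:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (512, 512))
blocks = noise.reshape(256, 2, 256, 2)

# binning: every pixel in the 2x2 block gets the same weight (1/4)
binned = blocks.mean(axis=(1, 3))

# the same downsample with unequal weights (as interpolation kernels use)
w = np.array([[0.4, 0.1], [0.1, 0.4]])          # weights still sum to 1
weighted = (blocks * w[None, :, None, :]).sum(axis=(1, 3))

print(f"binned std   : {binned.std():.3f}")    # ~0.50 = sqrt(4 * 0.25^2)
print(f"weighted std : {weighted.std():.3f}")  # ~0.58 = sqrt(sum of w^2)
```

The residual noise is the root-sum-square of the weights, which is minimised when all weights are equal.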

 



Forgot to say - in stacking we can get the best of both worlds.

Since we need to align the images anyway, we should use the interpolation method that applies the least low-pass filtering. This preserves sharpness and the noise distribution (good for stacking).

Binning introduces pixel blur, but if we do it in a special way we can avoid it. We can do a "split bin".

We take each sub and create 4 (or 9, 16, etc.) images out of it by putting every odd / even pixel into its own sub-image. This creates more subs to stack, but we have done absolutely nothing to the individual pixels. Stacking more subs means an SNR improvement (the same as we get from binning). We just need to make sure we use a good resampling method for the alignment procedure - namely Lanczos, if we have optimally sampled data after this "binning".


22 minutes ago, oymd said:

Sorry, but this seems so so complicated. 

Just to add a bit of consolation… this is not a problem with your understanding, it's just that digital signal processing IS complicated. Mathematics is the right language to describe it and make decisions, as @vlaiv and @Martin Meredith both know well. So when we resort to more qualitative descriptions it becomes confusing.

Rest assured, though, that the differences between the methods being described are generally small, and there are undoubtedly more significant impacts to your final image from whatever you choose to do further down the processing chain.

Tony


10 hours ago, vlaiv said:

Binning does not introduce pixel to pixel correlation (which happens with low pass filter) - and has precisely defined SNR improvement. SNR improvement is higher than most other down sampling methods - except those that smooth data too much and hence lower resolution of the image (say bicubic interpolation).

If you can point me to an independent source for the highlighted part of this statement - that the SNR improvement is higher than with most other downsampling methods - I'd be very interested to read it.

To the OP: sorry for this little diversion

Cheers

Martin



I don't really have a particular source, but it is a fact that can be easily verified. Take ImageJ and conduct a couple of tests.

Here, I'll do a couple of examples for you, and then you can experiment further to see what sort of results you get.

Just generate an image with pure Gaussian (or Poisson, or a mix of the two) noise, reduce its size with different methods and measure the standard deviation:

[screenshot: noise test image in ImageJ]

Results:

[screenshot: measured standard deviations per resampling method]

As you can see, only bin x2 and linear interpolation (which in this particular case is the same mathematical operation as binning: an average of 4 adjacent pixels with equal weights) have a standard deviation that is half the original value. This is the same thing as stacking - the SNR improvement is equal to the square root of the number of stacked subs, in this case the square root of the number of binned pixels (2x2 = 4 adjacent pixels, sqrt(4) = 2, so noise is reduced by a factor of 2).

If you want an intuitive explanation, here is one: out of all rectangles with a given perimeter, the square has the shortest diagonal. Only binning uses two samples (I'm now talking about the 1D case, for ease of understanding) with equal weights. All other interpolation algorithms use two or more pixels with weights that are no longer equal. The benefit of that approach is that the filter more and more resembles a box filter - the ideal case.

In fact, you can analyse the shape of the filter that an interpolation applies in the following way - create an image with a random section in it:

[screenshot: test image with a random-noise section]

Translate it by (0.5, 0.5) with the selected interpolation algorithm (by the way, this method can also be used to see how different translation offsets impact the low-pass filter) and take the FFT of both images.

[screenshot: FFTs of the original and translated images]

Left is FFT of original image, and right is FFT of translated image.

Divide the resulting FFTs and you'll get the filter response:

[screenshot: resulting filter response]

This works due to the properties of the Fourier transform (namely, shifting a function in the spatial domain does not alter its frequency magnitude spectrum).

In the end, you can plot a profile (an X-axis cross-section, for example) for different interpolation algorithms to compare the filters:

[screenshot: filter response profiles]

Here is a comparison of cubic convolution and cubic B-spline interpolation. It is obvious that cubic convolution blurs the image more (for example when aligning frames for stacking).

I've done this sort of comparisons before and posted results here on SGL. Let me see if I can find that thread.

Here it is (one of them):

 


Thanks Vlaiv, but I was rather hoping you could provide a reference to an external source that contains a mathematical proof of the (near?) optimality of binning for SNR. I realise you are making a relative statement, but one thing we do know for certain about binning as a means of downsampling is that it is *not* itself optimal: it produces aliasing noise (those high frequency components have to go somewhere). So I was genuinely interested to see whether your statement was based purely on intuition backed up by empirical examples, or whether you had come across any theoretical support for it. Since the former appears to be the case, you can't really say "it is a fact that can be easily verified".

cheers

Martin 



I'm not sure what sort of mathematical proof you are looking for. Given the usual set of downsampling methods available in software, we can both test and derive the SNR improvement in the downsample x2 case (we simply need to play around with the addition of noise, which adds in quadrature). Sure, it is not conclusive proof that binning is the best in terms of SNR in general - but then again, no such claim is made.

By the way, there is something called pixel blur that effectively deals with the aliasing effects of downsampling by binning.

Sensor pixels perform surface sampling rather than point sampling. It can be shown that this is equivalent to the point-sampled signal being convolved with the pixel surface function - or filtered by its Fourier transform in the frequency domain. Binning is effectively the same as "joining" surfaces - so it again performs filtering in the frequency domain.
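To illustrate that pixel-blur filter numerically: a pixel of width p box-averages the signal, so the sampled spectrum is multiplied by the box's Fourier transform, a sinc. A quick 1D look (my simplified sketch, in normalised units):

```python
import numpy as np

# A pixel of width p performs surface (box) sampling, so the sampled
# spectrum is multiplied by the box's Fourier transform: sinc(p * f).
# Binning 2x2 doubles the effective pixel width, deepening this low-pass.
f = np.linspace(0.0, 0.5, 6)               # spatial frequency, cycles/px
for p in (1, 2):                           # native pixel vs 2x-binned pixel
    response = np.abs(np.sinc(p * f))      # np.sinc(x) = sin(pi*x)/(pi*x)
    print(f"p={p}:", np.round(response, 3))
```

Note that the binned pixel (p=2) drives the response all the way to zero at the original Nyquist frequency of 0.5 cycles/px, which is the implicit anti-aliasing effect being described.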

The above statement can be proven, and I did so already in one discussion here:

 


1 hour ago, vlaiv said:

Sure it is not conclusive proof that binning is the best in terms of SNR in general - but then again, no such claim is made

Well, you did say "SNR improvement is higher than most other down sampling methods" which is the claim I'm looking for hard evidence for. I must say I'm skeptical -- if binning is really so good one does wonder why most/all image processing software uses lowpass (spatial) filtering prior to downsizing. Or is aliasing not a problem for astronomical images?

Anyway, further apologies to oymd for going off-topic - maybe this can be pursued elsewhere if it is felt to be worth pursuing at all. I rather doubt it, as I've come across the "astronomical images are (magically) free of all Nyquist constraints" claim several times and the matter never seems to be resolved!

cheers

Martin

 



I'd say that most software implements other downsampling methods because it needs general resampling. Binning is very constrained in how it works - it is applicable only to downsampling, and only by integer factors: you can bin 2x2, 3x3 and so on, which reduces the image size only to a half, a third, a quarter and so on ...

Not really handy for general resampling operations. It can't be used for image alignment (translation / rotation), for example, or for reducing an image to 75% of its original size.

As far as hard evidence goes - well, I just gave you an example above that you can easily replicate. I also gave you the rationale for why it works. I can put it in slightly more mathematical terms:

Say you want to "average" two noise values with arbitrary weights. Let the weight of the first be w and of the second 1-w (we want the average of the signal to remain constant, and we assume both samples have the same associated signal component). We are thus adding two noise values a and b with associated weights w and 1-w.

The resulting noise value will be the square root of the sum of the squares of the two weighted components (noise adds like linearly independent vectors):

result = sqrt(a^2 * w^2 + b^2 * (1-2*w+w^2))

As with stacking, we assume that a and b are equal in magnitude - or nearly so. This is a valid assumption if the signal over the two samples is the same (noise magnitude being the square root of the signal, plus read noise). Setting a = b = 1:

result = (2*w^2 - 2*w +1)^(1/2)

We now take the derivative of the above expression and equate it with zero (to find the minimum):

0 = (1/2) * (2*w^2 - 2*w + 1)^(-1/2) * (4*w - 2), i.e. 0 = (4*w - 2) / (2 * sqrt(2*w^2 - 2*w + 1))

From this, 4*w - 2 = 0 (the denominator is never zero, so the numerator must vanish) => 4*w = 2 => w = 1/2.

The resulting noise is lowest when we take both samples with the same weight. This logic extends to multiple samples. Only binning uses equal weights for all samples (and linear interpolation does too, in this special case).
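The minimum can also be checked numerically as a sanity check of the derivation:

```python
import numpy as np

# Sweep the weight w and evaluate the residual noise sqrt(w^2 + (1-w)^2)
# for two equal-magnitude noise samples (a = b = 1).
w = np.linspace(0.0, 1.0, 10001)
resulting_noise = np.sqrt(w**2 + (1 - w)**2)
best = w[resulting_noise.argmin()]
print(f"optimal weight : {best:.3f}")                   # 0.500
print(f"minimum noise  : {resulting_noise.min():.4f}")  # 1/sqrt(2) ~ 0.7071
```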

Take cubic interpolation - the coefficients are given by the expression:

f(x) = (-1/2·p0 + 3/2·p1 - 3/2·p2 + 1/2·p3)·x³ + (p0 - 5/2·p1 + 2·p2 - 1/2·p3)·x² + (-1/2·p0 + 1/2·p2)·x + p1

https://www.paulinternet.nl/?page=bicubic

They are not all equal weights, so the resulting noise will not be the lowest possible (unlike in the equal-weight 1/4, 1/4, 1/4, 1/4 case).

In the end, I'd like to address the issue of binning astronomical images. I recommend binning for oversampled images. Such images won't have the higher frequencies that cause aliasing issues in the first place, but regardless of that, I pointed out that there is something called pixel blur that effectively acts as a low-pass filter for binning.

Here is example of what sort of low pass filter I'm talking about.

[screenshot: noise image (left) and its 4x4 block-averaged version (right)]

I created a Gaussian noise image and then averaged each group of 4x4 pixels (right image). This is what happens when we bin - we average a certain group of pixels - except here I kept the original sampling rate so we can compare the frequency content of both images.

[screenshot: FFTs of both images]

Here are the respective FFTs - you can clearly see that a low-pass filter has acted on the bin x4 version.

 


Apart from all the maths, there's a simple rule of thumb you can use: if the (fainter) stars in your image appear square and "blocky", you can try drizzle integration. Drizzle integration only gives a benefit when your images are undersampled, and as vlaiv wrote, that is seldom the case with amateur equipment. Undersampling occurs at shorter focal lengths and/or with larger pixels. Furthermore, "there ain't no such thing as a free lunch": data that is used to retrieve information in the drizzle process cannot also be used to reduce noise (increase SNR). In other words, drizzled images are noisier than undrizzled images from the same number of subs. That's what mathematicians call degrees of freedom.
