
Binning? Wot?


AndyThilo

Recommended Posts

Hi

One thing I've never understood is binning and what effect it has on the imaging. For my Edge HD 8", 0.7x reducer and ASI294MC Pro, my pixel resolution is 0.67"/pixel, and the recommended range I believe is between 0.67 and 2"/pixel. So I'm right on the edge (pardon the pun ;)). If I go full 2032mm then I'm at 0.47. Binning to 2x2 results in 1.34 with the 0.7x and 0.94 without.

In both cases, it would appear that binning 2x2 would be preferable, but what difference am I going to see? How does it affect image resolution, exposure times etc.? It utterly confuses me, please dumb it down for me :)

Cheers

Andy


11 minutes ago, AndyThilo said:

So I'm right on the edge

There isn't really an "edge", in the sense that at one resolution your images will be crystal clear and at 0.00001 arc-sec more or less they will be unusable.
There is a gradual - veeeeeeery gradual - change in image quality as resolution goes from under-sampled through "ideal" to over-sampled.
Once processed, the difference between an image taken at 1 arc-sec per px and another taken at 2 will be so slight that it comes down to personal preference which one is better. And most of that will be due to the computer monitor the viewers are using :):):)
I wouldn't sweat it.

Edited by pete_l

I think there are two key issues:

1) Your seeing on the night and your guiding will limit the real resolution of detail you can hope to achieve. Of one thing we can be pretty sure: you will not be able to resolve detail at 0.47 arcseconds per pixel. For one thing, your guide error is unlikely to be less than half that figure, i.e. 0.24 arcseconds. Even premium mounts run somewhere in the 0.3 to 0.4" region. To find out, just give PHD your guide focal length and your guide camera pixel size and it will trot out the RMS automatically. Whatever it is, double it to see the minimum guidable resolution at which you can usefully image. If your RMS is 0.5 arcseconds you shouldn't image below 1 arcsec per pixel.
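The pixel scales quoted in the thread follow from the standard formula: scale ("/px) = 206.265 × pixel size (µm) / focal length (mm). A quick sketch, assuming the ASI294MC Pro's 4.63 µm pixels (taken from the sensor spec, not stated in the thread):

```python
def pixel_scale(pixel_um, focal_mm, binning=1):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) * bin factor / focal length (mm)."""
    return 206.265 * pixel_um * binning / focal_mm

# Edge HD 8" is 2032 mm native; the 0.7x reducer gives 1422.4 mm
print(round(pixel_scale(4.63, 2032 * 0.7), 2))     # 0.67 "/px with reducer
print(round(pixel_scale(4.63, 2032), 2))           # 0.47 "/px native
print(round(pixel_scale(4.63, 2032 * 0.7, 2), 2))  # 1.34 "/px binned 2x2, with reducer
print(round(pixel_scale(4.63, 2032, 2), 2))        # 0.94 "/px binned 2x2, native
```

These reproduce the four figures Andy quotes in the opening post.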

2) You can lower your resolution in two ways: by reducing your focal length, which will also increase your field of view, or (on some cameras) by binning the pixels 2x2 or 3x3 to make bigger 'effective pixels'. But beware: much of the discussion on the net concerns 'hardware binning' on CCDs, where 4 pixels can be read as one at capture, making them genuinely bigger pixels. You cannot do this with your CMOS camera. Any binning will be done in software afterwards. I haven't used a CMOS camera so I'll defer to someone who has their head around this. Still, it will achieve a similar result, I think. In any event, don't bang your head on a wall when processing by trying to get a perfect finish on an over-sampled image. It isn't possible because the detail simply isn't there. Bin it or resample it downwards and you'll lose noise without losing real detail. You'll still have your object at the screen size allowed, ultimately, by your seeing and guiding.
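For the curious, software 2x2 binning is just averaging each 2x2 block of pixels into one output pixel. A minimal numpy sketch (hypothetical helper, not from any particular capture software):

```python
import numpy as np

def bin2x2(img):
    """Software 2x2 bin: average each 2x2 block into one output pixel.
    Assumes even dimensions; crop a row/column first if needed."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
binned = bin2x2(img)
print(binned.shape)   # (2, 2)
print(binned[0, 0])   # 2.5 = mean of 0, 1, 4, 5
```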

Personally I think I'd use the reducer by default.

Olly


There still seems to be a lack of understanding of what binning does and how it behaves - but in reality it is pretty straightforward.

We stack images in order to improve SNR. We average our pixels in the "stack" direction (each pixel that we average comes from a different sub). This reduces "resolution" in the "stack" direction - instead of a number of subs we end up with a single image.

Binning is the same process - but in a different "direction". We take a 2x2 (or N x N) group of pixels in a single sub and "stack" those together to produce a single output pixel. The result is the same - an improvement in SNR, exactly as with regular stacking - and a reduction in the size of the image (though we don't bin down to a 1x1 image the way regular stacking averages all subs into one).

Stack 64 subs? Improve SNR by 8 (the square root of 64).

Bin 2x2 (which is the same as stacking 4 pixels)? Improve SNR by 2 (again, the square root of the number of pixels stacked: sqrt(4) = 2).

Same thing.
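The sqrt(N) improvement is easy to demonstrate on synthetic noise - a quick sketch (averaging a 2x2 bin on a pure-noise frame should halve the standard deviation):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(1000, 1000))  # pure noise, sigma = 1

# "Stack" each 2x2 group of pixels by averaging: noise should drop by sqrt(4) = 2
binned = noise.reshape(500, 2, 500, 2).mean(axis=(1, 3))
print(round(noise.std(), 2), round(binned.std(), 2))  # ~1.0 and ~0.5
```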

There also seems to be a lack of understanding of the difference between hardware and software binning. They are the same - except that the "average" step is performed at a different stage.

It is like having a couple of buckets of apples - you can either count the apples in each bucket and then sum the resulting numbers, or you can empty the buckets into one really big bucket and then count the apples.

Imagine you can make an error while counting. Counting 4 times is more likely to produce an error than counting once - so it is wiser to transfer all the apples into a single large bucket before counting. If you can't do that - simply count the apples in each bucket and sum your results.

This is basically the difference between hardware binning and software binning - both produce the same SNR improvement except for read noise, where hardware binning produces one "dose" of read noise while software binning produces two "doses" (for a 2x2 bin). There are actually 4 read noise doses involved in software binning - but because of the way noise adds (in quadrature), it works out to x2 the read noise level in the image. If you have 5e of read noise per pixel - binning in hardware will still give you 5e, while binning in software will be the same as having 10e.
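The quadrature arithmetic behind the "two doses" claim, as a sketch (5e read noise is just the example value from the post):

```python
import math

read_noise = 5.0  # e- per pixel (example value from the post)

# Hardware bin: charge from 4 pixels is combined before readout, read once -> one dose
hw = read_noise

# Software bin: 4 pixels each read separately; independent noise adds in quadrature
sw = math.sqrt(4 * read_noise**2)  # = 2 * read_noise

print(hw, sw)  # 5.0 10.0
```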

Luckily, CCD sensors, which usually have higher read noise, have a hardware bin option, while CMOS sensors that can't bin in hardware have low read noise anyway, so increasing their read noise is not that big a deal.

Also - read noise matters mainly when choosing sub duration, hence you can overcome the slightly higher read noise of software binning by using longer subs.


14 minutes ago, AndyThilo said:

Thanks for the detailed answers. So is it worth binning in my case? I'm still not sure what binning means with regards to a final image compared to not binning. 

I'm for binning :D

I always prefer better SNR to empty resolution. It is probably worth binning, and you should bin your linear subs after calibration and prior to stacking.

The difference when the image is shown at screen size (fit to screen) is probably going to be rather small. The difference will be most visible when the image is shown at 100% zoom (1:1 - one screen pixel per image pixel).

Oversampled images just look blurry at that setting and stars look large. The noise looks different too - there is less of it when binned and it is much more fine grained.

Overall I prefer images that are properly sampled. You can see what sort of sampling you need if you take the FWHM of stars in your image (beware - different software will report different FWHM values - I trust AstroImageJ) and divide that value by 1.6 - that gives you the sampling rate in arcseconds per pixel that you should use.
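That rule of thumb as a one-liner, with an example FWHM (the 3.2" figure is just an illustration, not from the thread):

```python
def recommended_sampling(fwhm_arcsec):
    """Suggested sampling rate (arcsec/px) from measured star FWHM, per vlaiv's FWHM/1.6 rule."""
    return fwhm_arcsec / 1.6

print(round(recommended_sampling(3.2), 2))  # 2.0 "/px for 3.2" FWHM stars
```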

17 minutes ago, andrew s said:

@vlaiv is it not possible in software to do not just an NxN average but a more nuanced reduction using a better algorithm?

Regards Andrew 

Depends on what you value in your algorithm.

Binning - defined as an N x N sum or average - has a couple of advantages and one disadvantage:

- predictable SNR improvement - it is exactly N times

- no pixel-to-pixel correlation

- the disadvantage is increased pixel blur

The second point is rather moot since in amateur setups we can't control our dithers with great precision and we often need to align our frames with sub-pixel precision. This means aligned subs are already subject to resampling.

For this reason, I believe a slightly different approach should give the best results in both SNR and resolution - that is, split binning plus Lanczos resampling for alignment.

Instead of doing a "local stack" of pixels - summing/averaging them - one should split each sub into a number of smaller-resolution subs: the respective pixels that would have ended up in the sum/average are put into different subs.

This really shows that stacking and binning are the same thing - and since the subs need to be aligned anyway, Lanczos3 will do a good job of preserving both signal and noise statistics with respect to the sampling rate.
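A sketch of what "split binning" means in practice (the helper name is hypothetical): each half-resolution output sub takes one pixel from every 2x2 block, and averaging the four splits afterwards recovers exactly the ordinary 2x2 bin.

```python
import numpy as np

def split_bin2x2(sub):
    """Split one sub into four half-resolution subs instead of averaging:
    each output sub takes one pixel position from every 2x2 block."""
    return [sub[0::2, 0::2], sub[0::2, 1::2], sub[1::2, 0::2], sub[1::2, 1::2]]

sub = np.arange(16, dtype=float).reshape(4, 4)
splits = split_bin2x2(sub)
print(len(splits), splits[0].shape)  # 4 (2, 2)

# Stacking (averaging) the four splits equals a plain 2x2 binned average
plain_bin = sub.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(np.allclose(sum(splits) / 4, plain_bin))  # True
```

The point is that the four splits can instead be aligned individually (e.g. with Lanczos3) and stacked with everything else, avoiding the pixel blur of the local average.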

In any case - you can do some experiments in ImageJ to measure the impact of different resolution-reduction methods on resolution and SNR:

- Make a 512x512 Gaussian noise image with a single star, simulated by Gaussian blur of one pixel to a certain FWHM (compatible with the sampling rates used)

- Resize using different algorithms - plain 2x2 bin, bilinear resampling, bicubic resampling, and more advanced methods like cubic spline, quintic kernels and such

- Measure the stddev on a flat patch of each, and measure the FWHM of the resulting star profile

Lower FWHM and lower stddev mean better resolution and better SNR improvement, respectively.
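The same experiment can be set up in a few lines of numpy instead of ImageJ - a sketch of the test frame and the noise measurement (the FWHM of 4 px and signal level of 100 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 512x512 sub: one Gaussian "star" (FWHM 4 px) on flat unit-sigma noise
size, fwhm = 512, 4.0
sigma = fwhm / 2.3548  # FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian
y, x = np.mgrid[0:size, 0:size]
star = np.exp(-((x - 256) ** 2 + (y - 256) ** 2) / (2 * sigma**2))
sub = 100.0 * star + rng.normal(0.0, 1.0, (size, size))

# Plain 2x2 bin (average)
binned = sub.reshape(size // 2, 2, size // 2, 2).mean(axis=(1, 3))

# Noise measured on a star-free corner patch: binning should halve the stddev
print(round(sub[:64, :64].std(), 2), round(binned[:32, :32].std(), 2))  # ~1.0 ~0.5
```

Swapping the bin step for bilinear/bicubic/Lanczos resampling (e.g. via scipy or PIL) and comparing the measured stddev and star FWHM reproduces the comparison described above.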


2 minutes ago, vlaiv said:

I'm for binning :D

I always prefer better SNR than empty resolution. It is probably worth binning and you should bin your linear subs after calibration and prior to stacking.


You can bin afterwards? Not when you take the lights? I generally use WBPP script in Pixinsight.


22 minutes ago, AndyThilo said:

You can bin afterwards? Not when you take the lights? I generally use WBPP script in Pixinsight.

I prefer to bin in software, later. There is really no difference with CMOS sensors. In fact, software binning afterwards is better as it gives you more flexibility. Sometimes you don't get the full benefit from binning in camera (not hardware binning - still the "software" type, but done in the camera firmware), since it can round things in ways you might not want.

Don't know what the WBPP script does - I don't use PixInsight.

