
Binning - at capture, or after?


iapa


I’ve been looking around for a while, but haven’t found anything definitive.

Binning - is it best to bin when capturing, or afterwards?

This query is specific to the ZWO ASI1600MM-C.

Are there any benefits to either 2x2 binning via the capture software, or to capturing at 1x1 and then using post-processing software to bin to 2x2?

This is another area where I cannot get my head around even the benefits of binning, let alone when to do it.


I'm no expert, but I think you should do the binning when you capture, so it is done in the camera. It works by combining e.g. 4 pixels into one bigger pixel, which collects more signal than a single pixel alone.


With a CCD sensor you want to bin at capture - in hardware. With a CMOS sensor like the one in the ASI1600 I would go for software binning. It should not really matter whether it is done by the driver or by software afterwards, except for two differences:

1. I'm not sure if there is a loss of precision when it is done in the driver (there should not be, since the ADC is 12-bit, so there are a couple of bits to spare that can be used by binning). This part can easily be "measured". Take one sub without binning and examine the values - they should all be divisible by 16 (because only the top 12 bits are used by the ASI1600, so the lower 4 bits are set to 0). Now take one sub binned 2x2 by the driver and examine the file. If you still get all pixel values divisible by 16, that is not good - do it in software instead. But if you get values divisible only by 4, then it is fine to bin at capture time.
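The divisibility check above can be sketched in a few lines of plain Python. `pixels` here is a stand-in for the flattened pixel values of a sub (reading the actual FITS file is left out; a library such as astropy can do that part):

```python
def smallest_granularity(pixels):
    """Largest power of two (up to 16) dividing every pixel value.

    16 -> only the top 12 bits carry data (driver discarded the
          extra bits when binning, or the sub is unbinned);
    4  -> the two extra bits produced by a 2x2 sum were kept.
    """
    g = 1
    while g < 16 and all(v % (g * 2) == 0 for v in pixels):
        g *= 2
    return g

# Unbinned ASI1600-style sub: 12-bit values left-shifted into 16 bits.
unbinned = [100 * 16, 203 * 16, 57 * 16, 999 * 16]
print(smallest_granularity(unbinned))   # 16

# A driver that sums four shifted values but keeps the extra precision
# would yield values divisible by 4 but not all divisible by 16.
binned = [4 * 16 + 4, 9 * 16 + 8, 2 * 16 + 12]
print(smallest_granularity(binned))     # 4
```

If the binned sub still reports 16, the driver is throwing those bits away and software binning is the safer choice.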

2. By binning in driver (at capture time) you end up with less disk space usage - each sub is 4 times smaller than unbinned (this is true for both CCD and CMOS).

The binning benefit is really simple. A CCD sensor uses just one read per 4 pixels, so you get an SNR increase per pixel (or in this case per bin). This is down to the way CCD readout works: all electrons are "marshalled" off the chip column by column (in standard mode, or two columns at a time when binning) to an off-chip amplifier and ADC. The amp and ADC therefore operate on the electrons from 4 pixels (in a 2x2 bin) together and inject only one "dose" of read noise, while the signal is x4 (4 pixels added), so SNR is increased. With CMOS sensors this cannot happen, because the technology is such that each pixel has its own amplifier and readout electronics - read noise is injected before any binning.

So CCD benefits more from binning than CMOS (in terms of lowering read noise).

Now binning in itself also increases SNR with respect to other noise sources (such as dark current, LP and shot noise), but at the expense of resolution / image scale.

If you are still confused about how binning increases SNR, just think of it this way: when you stack 4 subs you increase SNR - a similar thing happens when you "stack" 4 adjacent pixels to form a single one (binning is a form of stacking - adding values).
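The additive ("software") binning described above is just block summation. A minimal sketch using plain Python lists for clarity (a real pipeline would use numpy):

```python
def bin2x2(img):
    """Sum each 2x2 block of a row-major image (list of equal rows).

    Summing four adjacent pixels "stacks" them: signal adds 4x while
    uncorrelated noise adds only ~2x (sqrt(4)), so per-pixel SNR doubles.
    """
    h, w = len(img), len(img[0])
    return [[img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]
             for x in range(0, w - 1, 2)]
            for y in range(0, h - 1, 2)]

sub = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(bin2x2(sub))  # [[14, 22], [46, 54]]
```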

I personally bin the stack at the end. For some reason (purely my own notion) I believe that aligning unbinned subs yields better sub-pixel precision for star alignment than measuring star positions on binned subs (the justification being that centroid algorithms can find a position to a fraction of a pixel - but if you "increase the pixel size", that fraction also gets bigger, hence a loss of precision). This might have a slight effect on the final SNR (I'm really not sure, since alignment smooths the noise in a sub a bit and introduces correlation between pixels, so it depends on whether "correlating binned" pixels is the same as "binning correlated" pixels - but such maths is beyond me).


You should only bin to get a reasonable pixel scale - something between one and three arc seconds per pixel.

Pixel scale ("/pixel) = ( pixel size in µm / telescope focal length in mm ) × 206.265

This is what I use to get a reasonable pixel scale. When you bin 2x2, multiply the result by 2; when you bin 4x4, multiply by 4.
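The formula above as a small helper. The 3.8 µm pixel size is the ASI1600's; the 800 mm focal length is just an example value:

```python
def pixel_scale(pixel_size_um, focal_length_mm, binning=1):
    """Image scale in arcseconds per (binned) pixel.

    206.265 converts radians to arcseconds, pre-scaled for
    micron pixel sizes and millimetre focal lengths.
    """
    return (pixel_size_um / focal_length_mm) * 206.265 * binning

print(round(pixel_scale(3.8, 800), 2))      # 0.98 "/px unbinned
print(round(pixel_scale(3.8, 800, 2), 2))   # 1.96 "/px binned 2x2
```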

Anyway, to answer your question, I would bin at capture, because each binned pixel collects four times the signal and you get a better signal-to-noise ratio.


I would do it in processing, as I really doubt the camera can do it better than Photoshop (if anything it may be worse at it / have fewer options), and that way you can experiment and see what works. If you do it in the camera you are stuck with it. To be honest, I think of it as something of a marketing ploy targeted at current CCD users who expect binning in an astro camera.


  • 1 year later...

It’s better to do binning after acquisition, in software. I have demonstrated this to my own satisfaction recently. However, I needed to write my own software to do this as the astronomical image processing software that I have doesn't seem to include binning (sigh!). (I had other motivations for writing this software as well.) 

A digression: The motivation to try software binning came from my professional experience, over four decades, as an oil and gas exploration geophysicist running seismic surveys to image geology deep underground. This involves surveys costing millions of dollars. It is the universal experience that the more data that is acquired, the better the final image becomes, resulting in better decisions about where to drill expensive wells. Modern techniques in this industry involve spatial sampling which is orders of magnitude denser than required by the simple considerations of Nyquist sampling theory. As well as deploying vastly more sensors, in the last couple of decades there has been a major switch from combining signals from multiple sensors by analogue summing (electrical connection) to individual digital recording of sensors and summing afterwards in software. This relates directly to this discussion.  Also there has been a switch from 16 bit to 24 bit recording and this also has demonstrable benefits. In the seismic surveying industry, the improvements this extra effort and cost make are well understood and documented. The improvements come mainly from much better handling of noise but with huge knock-on benefits to many other aspects of data processing and further analysis. (I believe the audio recording industry has similar experience.)

Back to astronomy. If we didn’t have hot and cold pixels, cosmic ray hits and limited camera dynamic range to deal with, then there would be little or no difference between in-camera binning and software binning for image quality. (There is an efficiency gain with camera binning in terms of download time and smaller file size, but I don’t think this is a major factor for most people.) With camera binning, a single defective native pixel renders useless the bin into which it falls: for 2x2 binning, three other native pixels are rendered defective along with it, so the proportion of defective pixels is quadrupled. And then there is dynamic range: 2x2 software binning - provided higher-precision arithmetic is used - adds another two bits of dynamic range. I have done tests with darks to check that software binning does not adversely affect the signal-to-noise ratio, and it doesn’t. And I have seen the benefits in hot-pixel handling and dynamic range when comparing camera and software 2x2 binning of images.
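One way the hot-pixel advantage plays out can be sketched as follows. The saturation threshold and the mean-of-neighbours repair are illustrative choices for the sketch, not the poster's actual method:

```python
def bin2x2_with_hot_pixel_repair(img, hot_value=60000):
    """Replace hot/saturated pixels with the mean of their bin partners,
    then sum 2x2 blocks. In-camera binning cannot do this: the hot pixel
    is summed in and the whole binned pixel is ruined."""
    out = []
    for y in range(0, len(img) - 1, 2):
        row = []
        for x in range(0, len(img[0]) - 1, 2):
            block = [img[y][x], img[y][x + 1],
                     img[y + 1][x], img[y + 1][x + 1]]
            good = [v for v in block if v < hot_value]
            if good and len(good) < 4:
                fill = sum(good) / len(good)
                block = [v if v < hot_value else fill for v in block]
            row.append(sum(block))
        out.append(row)
    return out

frame = [[10, 12, 11, 13],
         [11, 65535, 12, 10],   # one hot pixel
         [ 9, 10, 12, 11],
         [10, 11, 13, 12]]
print(bin2x2_with_hot_pixel_repair(frame))  # [[44.0, 46], [40, 48]]
```

The hot pixel is patched with a plausible value before summing, so only that one native pixel is lost rather than the whole bin.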

Yes, I should write this up properly. I hope to do so one day. I’d suggest people try it for themselves, but -  as mentioned above - I am not sure if there is publicly available software which does the binning job properly. I note the use of Photoshop, but haven’t tried this myself. If others know of such software  - and are confident that it works OK - it would be good to make the knowledge known.  I am very reluctant to release my own software publicly as it really isn’t yet developed to a satisfactory standard for use by others. (Maybe in time.)


24 minutes ago, bigbend said:


There are two types of binning, and they are different.

What we call software binning is a simple summation of adjacent pixel values. It can be done in software, or in hardware / drivers - CMOS cameras usually implement this kind, due to technical constraints in how CMOS pixels work at the physics / electronics level.

Then there is true hardware binning, available only in CCD cameras, again due to the physics / electronics involved. It differs in how the summation is performed: it is a "hardware" type of summation rather than a simple summation of values. The electrons in the wells of multiple pixels (usually 4 adjacent pixels for x2 binning, though some cameras implement x3 and x4 hardware binning as well) are brought together prior to ADC conversion, and the total electron count is measured and converted to a digital value. This leads to one important fact. With software binning you are summing 4 measured and digitized values - a simple summation. With hardware binning you are "summing" prior to measurement, so the measurement error (read noise) applies only to the total value rather than to four separate values. True hardware binning therefore results in less noise than software binning, because a single "dose" of read noise is injected after the "summing". With software binning you operate on 4 values, each carrying its own read noise "dose". So true hardware x2 binning adds a single dose of read noise and acts as a large pixel (four times as large by surface area, x2 as large along each axis), while software binning also acts as a large pixel - but one whose read noise is twice as large. There is a clear advantage to true hardware binning (as implemented in CCDs).
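The read-noise difference above in numbers. The 1.7 e- figure is an illustrative value, not a measured spec:

```python
import math

read_noise = 1.7  # e- per read, illustrative value

# Hardware (CCD) 2x2 bin: charge is combined first,
# then read once, so one dose of read noise on the total.
hw_bin_noise = read_noise

# Software/driver 2x2 bin: four independent reads summed afterwards;
# uncorrelated noise adds in quadrature -> sqrt(4) = 2x the noise.
sw_bin_noise = math.sqrt(4) * read_noise

print(hw_bin_noise, round(sw_bin_noise, 2))  # 1.7 3.4
```

Either way the signal is summed 4x; only the read-noise penalty differs between the two schemes.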

On the topic of Nyquist and sampling rate in DSO imaging, things are not quite so easy or straightforward.

First we need to see that a camera sensor is not a point-sampling device but an area-sampling device - this produces additional blur, changing the frequency response of the sampled data. Any binning, even if the underlying sampling frequency is properly chosen, will result in a loss of information in the high-frequency domain. Say you sample a band-limited signal at x4 its highest frequency component with an area-sampling device and then bin x2, expecting that you will still maintain the x2 sampling rate required by the Nyquist criterion (x4 binned by x2 is still the needed x2) and so preserve all the information - you will be wrong; you will lose some information.

If you want to do the above and retain all the information, you need to reconstruct the original signal using the sinc function and then sample it at the x2 lower frequency - that will preserve all signal components. Binning is equivalent to using simple bilinear interpolation to reconstruct the original signal (the average of four pixels is the same as calculating the point value at their centre by the bilinear method - either way you end up with a kernel consisting of 1/4 weights for each pixel).
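The kernel equivalence claimed above (1/4 weight per pixel) is quick to check numerically:

```python
def bilinear(p00, p01, p10, p11, fx, fy):
    """Standard bilinear interpolation at fractional position (fx, fy)
    within the cell formed by four neighbouring pixel values."""
    top = p00 * (1 - fx) + p01 * fx
    bot = p10 * (1 - fx) + p11 * fx
    return top * (1 - fy) + bot * fy

block = (3.0, 7.0, 1.0, 9.0)
center = bilinear(*block, 0.5, 0.5)  # bilinear value at the cell centre
mean = sum(block) / 4                # a normalized 2x2 bin of the block
print(center, mean)  # 5.0 5.0 - identical
```

At the centre all four bilinear weights collapse to 1/4, which is exactly the (normalized) 2x2 bin.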

You can easily verify this with any sub that you have: measure a star's FWHM on the image and convert it to arc seconds (or any unit independent of pixel scale). Then bin the image x2 in software and measure the star's FWHM again - it will increase quite a bit. It does not matter whether you do this on an oversampled or an undersampled sub - the result will be the same.

I'm currently doing some research on the impact of the interpolation function on noise reduction. This is important in high-resolution work, when you want to preserve sharpness irrespective of the SNR gains that could be had from simple binning. There are still some gains when using a different interpolation technique, but they are usually less than the x2 expected from binning.

 

