What is binning *really*?


JamesF

It's been my understanding for some time that "binning" effectively combines a group of pixels into one, thereby increasing the sensitivity of the camera (at the cost of reduced resolution) by increasing the "photon collecting area" for each pixel.

I discovered last night that the ASI120MM implements "binning" in software on the host computer, not in-camera.  It takes the full-resolution image from the camera and post-processes it, combining the data from adjoining pixels to form an image at half the resolution.

To my way of thinking that's not really binning.  By doing it that way, surely some of the potential resolution in pixel values is lost?  For example, if you have four adjoining pixels, each of which receives almost, but not quite, enough photons to reach an output value of 10 during the integration period, then each will read 9 in the ASI image and be combined to give 36 in the final "binned" image.  If the data is combined much closer to the photosites, however, it may be that the combined result for the "superpixel" could give a final pixel value of 39 (and if slightly fewer photons were to arrive, perhaps you could have values of 36, 37, 38 or 39, all of which might read 36 in the equivalent ASI image).
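The quantisation effect described above can be sketched numerically. This is only an illustration with hypothetical charge values, assuming a simple floor-to-integer digitisation at unity gain:

```python
import math

def digitize(charge):
    """Quantise an analogue charge to an integer ADU (floor, unity gain)."""
    return math.floor(charge)

# Four adjoining pixels, each just short of 10 units of charge.
pixels = [9.9, 9.9, 9.9, 9.9]

# Software binning: digitize each pixel first, then sum the results.
software = sum(digitize(p) for p in pixels)

# Hardware binning: sum the charge first, then digitize once.
hardware = digitize(sum(pixels))

print(software, hardware)  # 36 39
```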

Is this correct, or am I misunderstanding/expecting too much of the technology?

James


The only time I have come across binning linked to light sensitivity, James, is the three-step MacAdam ellipse :smiley:

Thus if you keep your application parameters within three MacAdam steps, no perceptible difference in light will be detected by the human eye.

This may be a term that has been tagged onto many other applications linked to light input and output.


The only real difference between software and hardware binning is that when you bin in hardware you only get one dose of read-noise per binned-pixel, whereas with software binning you get four. If your noise per pixel is well above the read-noise then either method gives much the same end product for faint objects. For brighter objects where saturation may be an issue, hardware binning might not be optimal, as I believe you do not get four times the well depth when you bin (i.e. you are more likely to saturate). Software binning doesn't have this problem of course.

One should be aware that, apart from the read-noise improvements, binning does not really alter the depth of your image. Although the s/n per pixel increases, objects now cover fewer pixels, so the s/n per object remains the same.

NigelM
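The "one dose of read noise versus four" point can be put in numbers. These are illustrative values only, not from any particular camera:

```python
import math

read_noise = 5.0    # e- RMS per read (hypothetical value)
shot_noise = 10.0   # e- RMS photon shot noise per pixel (hypothetical)

# 2x2 binning combines four pixels; independent noise sources add in quadrature.
# Hardware binning: four pixels' worth of shot noise, but only ONE read.
hw = math.sqrt(4 * shot_noise**2 + 1 * read_noise**2)

# Software binning: four pixels' worth of shot noise AND four reads.
sw = math.sqrt(4 * shot_noise**2 + 4 * read_noise**2)

print(round(hw, 2), round(sw, 2))  # 20.62 22.36
```

With the shot noise well above the read noise, the two results are close, as described above; increase `read_noise` relative to `shot_noise` and the gap widens.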


That doesn't make binning look like a very compelling option at all, especially if you have a camera with low read noise.

James


I can see how the software binning could potentially lead to quantisation steps of up to 4 levels in your example, although in practice the 4 individual pixels will likely have different values.  The main value of binning isn't to increase brightness/signal.  That on its own doesn't do much, because shot noise increases along with it.  However, the binned pixels are read once, so read noise relative to the summed signal is reduced by up to a theoretical factor of 4.  This means that you can use shorter exposures without becoming read-noise limited, and dynamic range is improved.  I can't see how these benefits can apply to software binning.


This makes me wonder whether, for a given final image size (in pixels), one wouldn't be better off halving the effective focal length of the telescope rather than binning.  And if the benefit of binning is greatest when the per-pixel signal-to-noise ratio is poor to start with, is there any point in using it for solar system imaging, where the targets are normally quite bright?  If not, why do high frame rate cameras such as those used for solar system imaging bother supporting it at all?

James


Yes, binning is just taking, say, a square 2x2 block of pixels and adding them together (and scaling back down if need be) to give you a reduced-resolution image with a slightly better noise figure.
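That summing of 2x2 blocks can be sketched in plain Python. No camera SDK is assumed; the frame below is just a toy example with even dimensions:

```python
def bin2x2(image):
    """Sum each 2x2 block of a 2D list into one 'superpixel'.

    Assumes even dimensions; halves the resolution in each axis.
    """
    rows, cols = len(image), len(image[0])
    return [
        [
            image[r][c] + image[r][c + 1] + image[r + 1][c] + image[r + 1][c + 1]
            for c in range(0, cols, 2)
        ]
        for r in range(0, rows, 2)
    ]

frame = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]
print(bin2x2(frame))  # [[14, 22], [46, 54]]
```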

I guess if it's done in the camera itself it would allow a faster frame rate to be transferred over a limited (speed-wise) USB 2 connection.

I see binning as really useful only if seeing conditions are in some way limiting image resolution, and perhaps also for a screen display with limited resolution (while still saving to disk at full resolution).


Probably not, unless you are oversampling the seeing (which probably means long FL) in which case you can drastically shorten the time taken to process your images without any loss in quality (and they take up less hard disk space!).

However, if you are imaging DSOs in very dark skies (or are doing narrow band imaging) read-noise may be more of an issue than you think.

NigelM


James, your understanding of binning is fine.  I think that imagers, and camera manufacturers, understand that hardware binning can be very useful in certain scenarios.

If the readout electronics of the camera are not sophisticated or versatile enough to allow hardware binning, it is relatively simple to do it in software.  However, this does not offer the same advantages as hardware binning, yet it has most of the drawbacks.

Therefore, feel free to call me a cynic, but I think that having 'software binning' in the spec of a camera is mainly marketing spiel.

JMHO.

Jack


Explanations so far are all very good, but I thought I'd add a bit more detail on how hardware binning works. I'm not a detector expert though, so I apologise if I've got some of my understanding wrong!

In a CCD detector (and the camera mentioned is NOT a CCD detector, it's a CMOS detector), all the pixels are read out by electronically 'clocking' the measured charge sequentially from one pixel in the array to the next, until you get to a special pixel (often called the 'summing well') which is used to measure the value of the charge via an amplifier. It is the process of measuring this charge in the summing well which introduces read noise. Every single pixel in the array passes through the same summing well and amplifier. To implement hardware binning on a CCD, you electronically change how many pixels are 'clocked' into the summing well before measuring the charge.

The process of clocking the charge from one pixel to the next is very quick (typically nanoseconds), but to get good read noise you need to spend a lot longer measuring the value of the charge in the summing well (typically microseconds). So the read speed of the detector is basically determined by how many times you measure the summing well, rather than the total number of physical pixels in the detector. That is why binning gives you faster read-out and a better signal-to-read-noise ratio.  The summing well often has a higher capacity than the individual pixels in the array, to help avoid saturating when binning -- but then saturation is instead caused by the 16-bit limit of the analogue-to-digital converter in the amplifier used to convert the measured charge into a digital number.
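The interplay between summing-well capacity and the ADC limit can be illustrated with some arithmetic. All the numbers below are hypothetical, not taken from any real camera:

```python
full_well_px = 20_000           # e- per pixel full well (hypothetical)
summing_well = 60_000           # e- capacity of the summing well (hypothetical)
gain = 0.5                      # e- per ADU (hypothetical)
adc_limit = (2**16 - 1) * gain  # e- equivalent of a 16-bit ADC at this gain

# 2x2 binned charge could in principle reach four times the pixel full well...
max_binned_charge = 4 * full_well_px

# ...but clipping occurs at whichever limit is reached first.
saturation = min(max_binned_charge, summing_well, adc_limit)
print(saturation)  # 32767.5 -> the 16-bit ADC clips first in this example
```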

In a CMOS detector, each pixel is individually addressed, rather than sequentially read through a common amplifier. There is no (as far as I'm aware!) way of electronically coupling the pixels together. So there is no way of doing hardware binning on a CMOS detector. It is probably possible, but it would need some fancy electronic design on the CMOS array itself...


If I'm reading that correctly Fraser, I think it suggests that 2x2 binning on a CCD would only potentially halve the read noise, because you still get two reads for the 2x2 superpixel.  Is that correct?  If so, it makes binning look quite a poor choice compared with halving the focal length without binning, for instance.  I think :)

James


There is only one read.  As you know, CCDs are composed of columns and rows.  To read out a line you clock the bottom row into a hidden row, and then clock each pixel into the "summing well" and digitize it.  To bin two horizontal pixels you clock two pixels into the summing well and then digitize.  To bin two vertical pixels you clock TWO lines into the hidden row, then digitize as before.  To do 2x2 binning you do both: clock two lines, then digitize every two pixels.  In theory you can do n x m binning.
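The clocking scheme just described can be sketched as a toy readout model. This is pure Python for illustration; real CCD timing and charge transfer are of course far more involved:

```python
def ccd_read_binned(sensor, n, m):
    """Toy model of CCD readout with n x m hardware binning.

    Vertical: clock n rows into the hidden (serial) row, summing charge.
    Horizontal: clock m serial-row pixels into the summing well, then
    digitize the well once -- i.e. ONE read per binned pixel.
    """
    out = []
    for top in range(0, len(sensor), n):
        # Clock n rows into the serial register; charge sums per column.
        serial = [sum(sensor[top + i][c] for i in range(n))
                  for c in range(len(sensor[0]))]
        row_out = []
        for left in range(0, len(serial), m):
            well = sum(serial[left:left + m])  # m clocks into the summing well
            row_out.append(int(well))          # single digitization
        out.append(row_out)
    return out

frame = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]
print(ccd_read_binned(frame, 2, 2))  # [[14, 22], [46, 54]]
```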

There are some benefits in that the clocking is a lot faster than the digitizing, so only having to digitize half a line rather than a whole one saves a lot of time.

Coupled with being able to dump lines so that you only have to digitize an ROI, it is possible to get fairly high frame rates with low noise for applications like satellite autoguiding.

Another trick is drift scanning, where you track the scope over the sky at the same rate and direction as you read out the CCD.  You sort of "scan" a strip of the sky like a flatbed scanner.  I do have an image of the Crab Nebula from when I was at the RGO using this technique.  I'll see if I can find it.

I do have an analogy of CCDs using buckets of water but it takes time to write out.


Perhaps I'm looking at this all wrong, but if you lose resolution then why on Earth would you bin in the first place? The only way I can see it being useful is when the atmosphere limits the resolution, so binning doesn't actually lose anything, just increases the SNR.


There are all sorts of reasons. For example, targeting: you may want a fast readout whilst you get the object in the FOV. Once you have achieved that, you then switch off binning to take the "real" images.

Or for auto-guiding you bin a whole image to identify a bright enough star and then switch to non-binned ROI readout on the selected object to auto-guide.


In these scenarios there isn't any real imaging, in the sense that you're not using the data in the final image. Is this what binning is for: images that were never meant to be in the final product? At least it has its uses :).


For imaging that is certainly true, because you want the resolution.  But if you're just interested in photometric data then, so long as the image isn't saturated, a binned image would consume less storage space and be quicker to process.


I use 8x8 binning for finding and framing my image.  This vastly increases the sensitivity and readout rate, so a fairly faint DSO in NB can be seen with an exposure of around 10s.  This also makes plate solving in AstroTortilla very much faster, and makes placing an image near enough exactly in the same place as a previous night's imaging easy and quick.

I only use binning for the real image when the seeing or transparency is poor and I'm using a long FL, such as the MN190 with the Atik 460EX, and the best FWHM is 3 or 4 unbinned.  In this case I can reduce the exposure times on my subs to something like a third of what I'd want unbinned.  So say 5m Ha and 10m OIII and SII, rather than 15m Ha and 30m on the others.  The resolution is still usable with the 460EX for posting here and displaying on a monitor.

