
Adaptive Digital Pixel Binning



I came across this paper the other day... "Low-Light Image Enhancement Using Adaptive Digital Pixel Binning", 2015.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4541814/

 

I've been playing with JamesF's oacapture, adding the facility to boost the brightness of the preview image to help with focusing or a bit of EAA-type stuff. I initially added pixel value multiplication (e.g. x2, x8) and pixel binning (e.g. 2x2). The former amplifies noise as well as signal and the latter reduces spatial resolution. I then reinvented digital pixel binning, which preserves spatial resolution but smooths detail by low-pass filtering. The linked paper has this image to illustrate the approaches:

[Figure from the paper: sensors-15-14917-g001]
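To make the difference concrete, here's a rough sketch (illustrative only, not the actual oacapture code) of the three approaches on an 8-bit greyscale buffer - straight multiplication, classic 2x2 binning and full-resolution digital binning:

#include <algorithm> // std::min
#include <cstdint>   // uint8_t

// (a) straight multiplication: boosts signal and noise equally
inline uint8_t multiplyPixel(uint8_t g, int gain)
{
  return (uint8_t)std::min(255, g * gain);
}

// (b) classic 2x2 binning: sum each 2x2 block into one output pixel,
//     so brightness goes up ~4x but resolution is halved in each direction
void bin2x2(const uint8_t* in, uint8_t* out, int W, int H)
{
  for (int y = 0; y < H / 2; ++y)
    for (int x = 0; x < W / 2; ++x) {
      const int sum = in[(2*y  )*W + 2*x] + in[(2*y  )*W + 2*x+1] +
                      in[(2*y+1)*W + 2*x] + in[(2*y+1)*W + 2*x+1];
      out[y*(W/2) + x] = (uint8_t)std::min(255, sum);
    }
}

// (c) digital pixel binning: sum a 2x2 neighbourhood for *every* pixel, so the
//     output stays full resolution but is effectively low-pass filtered
void digitalBin2x2(const uint8_t* in, uint8_t* out, int W, int H)
{
  for (int y = 0; y < H; ++y)
    for (int x = 0; x < W; ++x) {
      const int R = (x+1) % W, D = (y+1) % H;  // wrap at edges, as in the code further down
      const int sum = in[y*W + x] + in[y*W + R] + in[D*W + x] + in[D*W + R];
      out[y*W + x] = (uint8_t)std::min(255, sum);
    }
}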

 

Not one to hang about, I gave implementing their algorithm a go - it goes beyond the simple digital pixel binning (b) shown in the diagram above. It's a couple of hundred lines of C++, mostly comments. I've only tested it using the PS3 Eye webcam, which is currently sitting in the dark pointing at a hank of paracord...

[screenshot: original preview frame]

This was with the camera set to 3fps, low gain, low-ish exposure, just enough so that I could test 4x/8x routines within the available brightness range.

Here's what it looks like multiplying all pixel values by 8:

[screenshot: all pixel values multiplied by 8]

The 2x2 digital pixel binning does a decent job of removing some of the noise but loses a bit of sharpness.

[screenshot: 2x2 digital pixel binning]

The ADPB algorithm does a better job still:

[screenshot: ADPB result]

 

The algorithm consists of four steps:

  • calculate the optimum amplification ratio for each pixel based on a 3x3 average kernel (brightness adaptive)
  • calculate a binning pattern based on neighbouring pixel values (context adaptive)
  • blend a uniform binning pattern to reduce noise (noise adaptive)
  • blend in the original to remove saturation

It does this with a single pass over the image, inspecting neighbouring pixels and doing its thing... actually no: convolve the 3x3 averaging kernel, find the maximum average, then a single pass for everything else - that's three.
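For the curious, the brightness-adaptive ratio in the first step boils down to something like this per pixel (just a sketch of my reading of it - the real thing is in the code further down):

// Sketch of the brightness-adaptive amplification ratio for one pixel:
//   hg    = 3x3 average around the pixel
//   maxHG = largest 3x3 average in the frame
//   Rb    = maximum binning ratio
// Dark regions get a ratio close to Rb, bright regions close to 1.
double binningRatio(double hg, double maxHG, int Rb)
{
  const double t = hg / maxHG;          // fractional brightness, 0..1
  return 1.0 + (1.0 - t) * (Rb - 1);    // e.g. Rb=4: t=0.1 -> r=3.7, t=0.9 -> r=1.3
}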

 

Another comparison, this time starting from a better source image:

[screenshot: comparison with a better starting image]

 

And a closeup of the earlier comparison:

 

[screenshot: close-up of the earlier comparison]

 

I don't know what's in other software, or whether there are simpler despeckling algorithms you could run instead - I've not really done any image processing other than AS!2-ing the Moon, and I've only just learned about darks in SharpCap - but this looked pretty neat and wasn't that hard to implement once I got my head around their mathsy notation.

 

Is this interesting to anyone?

Edited by furrysocks2

/**
 * Adaptive Digital Pixel Binning
 * https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4541814/
 *
 * G       8bit input image
 * F       8bit output image
 * W       image width (eg 640)
 * H       image height (eg 480)
 * Rb      maximum binning ratio (eg 4)
 * lambda  noise suppression sensitivity (eg 16.0, 1.0)
 * mu      pixel depth?? (eg 255)
 *
 */
#include <algorithm>  // std::min, std::max, std::sort
#include <cmath>      // std::abs (double overload)
#include <cstdint>    // uint8_t, int16_t
#include <cstdlib>    // std::abs (int overload)

struct AbsCompare {
  bool operator()(int16_t a,int16_t b){return std::abs(a)<std::abs(b);}  // compare by absolute value
};

void adpb(const uint8_t* G, uint8_t* F, int W, int H, int Rb, double lambda, int mu)
{
  double* const HG=new double[W*H];                            // allocate 3x3 average buffer
  double max=0;                                                // find max average value
  for(int y=0;y<H;++y) for(int x=0;x<W;++x){                   // iterate all pixels
    const int L=(x-1+W)%W,R=(x+1+W)%W,U=(y-1+H)%H,D=(y+1+H)%H; // wrap at edges
    const double avg=(G[U*W+L]+G[U*W+x]+G[U*W+R]+              // convolve 3x3 average kernel
                      G[y*W+L]+G[y*W+x]+G[y*W+R]+              // ...
                      G[D*W+L]+G[D*W+x]+G[D*W+R])/9.0;         // ...
    if(avg>max)max=avg; HG[y*W+x]=avg;}                        // find max and store average
  for(int y=0;y<H;++y) for(int x=0;x<W;++x){                   // iterate all pixels
    const double hg=HG[y*W+x],t=hg/max,                        // average/fractional pixel values
                 r=1+(1-t)*(Rb-1);                             // optimal binning ratio
    const uint8_t g=G[y*W+x];                                  // center pixel value
    const int L=(x-1+W)%W,R=(x+1+W)%W,U=(y-1+H)%H,D=(y+1+H)%H; // wrap at edges
    int16_t d[9]={g-G[U*W+L],g-G[U*W+x],g-G[U*W+R],            // calculate differences
                  g-G[y*W+L],g-G[y*W+x],g-G[y*W+R],            // ...
                  g-G[D*W+L],g-G[D*W+x],g-G[D*W+R]};           // ...
    std::sort(d,d+9,AbsCompare());                             // sort differences, abs(a)<abs(b)
    double bc=0,bu=0;                                          // accumulators for convolution
    for(int q=1;q<=9;++q){                                     // iterate sorted differences
      const uint8_t s=g-d[q-1];                                // recover neighbouring pixel value
      const double contrib = r-(q-1);                          // calculate contribution
      bc+=contrib>1?s:contrib>0?(s*contrib):0;                 // convolve context kernel
      bu+=(q==1)?s:(((r-1.0)/8.0)*s);}                         // convolve uniform kernel
    const double gamma=std::abs(hg-g)/lambda,                  // combination coefficient
                 b=((1.0-gamma)*bc)+(gamma*bu),                // denoised pixel value
                 w=(1.0/mu)*((b/(Rb-1.0))+(g/2.0)),            // blending coefficient
                 f=(1.0-w)*(b)+w*(g);                          // blend for anti-saturation
    F[y*W+x]=std::max(0.0,std::min(255.0,f));}                 // final pixel value
  delete[](HG);                                                // clean up
}

Edit: DON'T call it with Rb==1, lambda==0.0 or mu==0.0, and don't feed it a frame where every pixel value is 0 - each of those ends in a division by zero.
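If anyone wants to try it on a still image rather than a live feed, a minimal harness along these lines should do - it assumes a headerless 8-bit greyscale raw file of known size (the filenames and dimensions are just placeholders) and writes the result out as a PGM:

#include <cstdint>
#include <cstdio>
#include <vector>

// minimal test harness for adpb() above: read a raw 8-bit greyscale frame,
// run the algorithm, write the result as a binary PGM for easy viewing
int main()
{
  const int W = 640, H = 480;                        // frame size - adjust to suit
  std::vector<uint8_t> in(W * H), out(W * H);

  FILE* fi = std::fopen("frame.raw", "rb");          // placeholder input filename
  if (!fi || std::fread(in.data(), 1, in.size(), fi) != in.size()) return 1;
  std::fclose(fi);

  adpb(in.data(), out.data(), W, H, 4, 16.0, 255);   // Rb=4, lambda=16.0, mu=255

  FILE* fo = std::fopen("frame_adpb.pgm", "wb");     // placeholder output filename
  if (!fo) return 1;
  std::fprintf(fo, "P5\n%d %d\n255\n", W, H);
  std::fwrite(out.data(), 1, out.size(), fo);
  std::fclose(fo);
  return 0;
}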

Edited by furrysocks2


hi furrysocks2, 

This algorithm looks interesting and I'd like to try it out. How can I run it in Visual Studio on an 8-bit still image? Just a simple test bench will do.

Thanks in advance.
Best regards,

William



@furrysocks

I came across this paper independently recently.

Thought I would prototype it in Matlab (my tool of choice for playing with signal processing).

Any further experience with this? I'm interested in it as a "black box" step in EAA processing.

Or has anyone else given this a go?

Tony

