
Next obstacle to understand


alan potts


12 hours ago, vlaiv said:

Color cameras can be binned of course.

Depending on the type of binning applied, you can either lose color information or retain it.

I've been meaning to write one extensive post about different aspects of binning, and what I consider to be the best way to bin the data when building images from multiple exposures (for other uses, other methods of binning might be more suitable).

Anyway, I'll briefly outline some things related to color imaging that people might not be aware of (there isn't much talk about it elsewhere) and explain how binning can both lose and preserve color data.

The first thing to understand is that the resolution of a color sensor is not the same as that of a mono sensor. It is a bit different, and we can argue that it is twice as coarse.

I've included an image of the Bayer pattern for easier understanding:

[Image: RGGB Bayer pattern]

Now with a mono sensor, we think of the resolution of the sensor as "how much sky is covered by a single pixel", expressed in "/px and calculated from the pixel size. This works fine for mono sensors, but there is an alternative way of thinking that is better when we want to discuss both mono and color sensors. Instead of thinking about the "width" of a pixel and the length of sky covered by that width, let's think of pixels as points (without dimension) and of resolution as the "distance" between those points. If we place one such point at the center of each pixel of a mono camera, the distance between two points is the same as the pixel width, so the effective resolution is unchanged (the two views are compatible in this case).

However, when we apply the "pixel" approach to the Bayer pattern of a color sensor, we have a slight problem. Look at the red pixels in the image above: there is a pixel, then a gap, then a pixel, then a gap (going horizontally, but the same holds vertically). How do we "incorporate" these gaps into our thinking about resolution if we accept the "pixel per sky area/length" approach?

It is much easier to think about sampling points (without dimensions, with only the distance between them). In this case we can see that red is sampled every two "pixel lengths" rather than one: the distance between sampling points is twice the width (and, vertically, twice the height) of a single pixel.

If the pixel/resolution calculation for this sensor gives us 1"/px, red will actually be sampled at 2"/px. The same is true for blue. In principle the same is true for green, although it is not quite as obvious (the X pattern of the green pixels complicates things a bit), but because both red and blue are sampled at twice the calculated resolution, we should treat green the same way. In fact, let's "split" green into green1 and green2, where green1 and green2 are the two respective pixels in a 2x2 element of the Bayer matrix, like this:

[Image: 2x2 Bayer matrix element with the two greens labeled Gr and Gb]

Here green1 is denoted Gb and green2 is denoted Gr. If we treat each green component separately, we can see that each of the two components of green (although they sit behind the same filter) is indeed sampled at twice the "pixel size" resolution.

Now that we understand why the resolution of a color sensor is half that of the mono version of the same sensor, we can look at how to debayer such a sub and how to bin it in different ways.

First, let's do the Bayer split. Most people think of debayering as creating a color image out of a mono one. It is perhaps better to think of it as a "channel extraction" process. Looked at that way, you turn a single color sub into corresponding R, G and B subs, which are in principle the same as if taken with a mono camera and filters (apart from the fact that regular filters have different response curves and QE).

Regular debayering employs interpolation to fill in the missing pixels and produces a color sub of the same dimensions (pixel count in height and width) as the mono sub. The problem with this approach is that you are effectively "making up" the missing values. It is done cleverly so that the image remains smooth (for example, averaging two adjacent red pixels to create the missing value), but the problem remains: you cannot recover missing detail. An image created this way will still have only the detail of an image sampled at half the rate (which it in fact is); you get the same thing if you take an image sampled at half the rate and upsample it to a larger size (that is what debayering in fact is).
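To make the "making up values" part concrete, here is a minimal numpy sketch of bilinear interpolation of the red channel, assuming an RGGB pattern and the raw frame as a 2-D array (my own illustration, not any particular program's algorithm):

```python
import numpy as np
from scipy.ndimage import convolve

def interpolate_red(raw):
    """Bilinear fill of the red channel of an RGGB Bayer frame."""
    mask = np.zeros(raw.shape)
    mask[0::2, 0::2] = 1.0                  # where real red samples live
    red = raw * mask                        # zero everywhere except red
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    # Weighted sum of nearby real samples, divided by the total weight of
    # the real samples that contributed at each position - every second
    # value in each row and column is computed, not measured.
    num = convolve(red, kernel, mode="mirror")
    den = convolve(mask, kernel, mode="mirror")
    return num / den
```

The result looks smooth, but it carries no more detail than the half-rate sampling that produced it.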

If you want a sharp image (or rather a sharper one) from a Bayer matrix, you are better off splitting the image into colors without resampling, i.e. without debayering the regular way.

The easiest way to describe this process is with the following image:

[Image: Bayer sub split into sparse R, G1, G2 and B grids, then condensed into four half-size subs]

The first part of the process is to split the colors into separate "sparse" grids; the next step (denoted here as interpolation because the diagram explains the regular debayering workflow) is to "condense" those sparse grids instead of interpolating the missing pixels.

This way the Bayer matrix sub is split into 4 smaller subs: one containing only the red pixels, one the blue pixels, and two the green pixels ("green1" into one smaller sub and "green2" into another). We now have color subs as if we had taken them with a mono camera and the appropriate filter, with two small differences: the sampling rate is twice as low (because of the color sensor and Bayer matrix), and you end up with twice as many green subs as red and blue. (The Bayer matrix was developed primarily for daytime photography; the eye is most sensitive in green, and the green response curve closely matches perceived luminance in human vision, so having more green gives better SNR and definition in the part of the spectrum that matters most to the eye.)

The resulting subs have half the pixels in height and width, giving a x4 smaller file.
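A minimal numpy sketch of this split, assuming an RGGB pattern (other patterns just change the row/column offsets):

```python
import numpy as np

def bayer_split(raw):
    """Split a raw RGGB frame (2-D numpy array) into four
    half-size single-color subs."""
    r  = raw[0::2, 0::2]   # red:    even rows, even columns
    g1 = raw[0::2, 1::2]   # green1: even rows, odd columns
    g2 = raw[1::2, 0::2]   # green2: odd rows, even columns
    b  = raw[1::2, 1::2]   # blue:   odd rows, odd columns
    return r, g1, g2, b
```

Each returned sub is exactly the "condensed" sparse grid from the diagram; no pixel value is invented or changed.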

After we do this, we can bin each of those color subs further in software to increase SNR and decrease the sampling rate. This way we preserve color information and increase SNR. Note, however, that a x2 binned color sub will in effect have x4 fewer pixels in height and width than the original sub. This might look odd, but only because we are used to thinking in terms of the full pixel count for color subs from an OSC camera, when in fact, as shown above, you have "half" the pixel count in both height and width for each color.
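Binning one of the split subs 2x2 in software might then look like this (a sketch using average binning, assuming even dimensions):

```python
import numpy as np

def bin2x2(sub):
    """Average 2x2 software binning of a single-color sub."""
    h, w = sub.shape
    # Group rows and columns into pairs, then average each 2x2 block.
    return sub.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# e.g. bin2x2(r) has a quarter of the original OSC frame's pixels in
# each dimension: half from the bayer split, half again from binning.
```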

Of course, when you bin a raw image the regular way, you add one red pixel, one blue pixel and two green pixels and get a single value, which turns "mono" simply because there is no way to reconstruct the numbers that were added together to produce it (4 can be 2+2, or 1+3, or 4+0; in principle an infinite number of different numbers add up to 4, like -8+12, ... so when you see 4 you can't know which original numbers formed it).
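In code, that color-destroying binning is just the per-cell sum, and the sum is a one-way operation:

```python
def bin_raw_mono(raw):
    """Regular 2x2 binning of a raw RGGB frame (raw: 2-D numpy array).
    Each R + G1 + G2 + B cell collapses to one mono number; the
    individual channel values cannot be recovered from the sum
    (54 == 10+20+20+4 == 4+25+15+10)."""
    return (raw[0::2, 0::2] + raw[0::2, 1::2]
            + raw[1::2, 0::2] + raw[1::2, 1::2])
```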

 

Superb piece of work; sorry I missed it last night, I'm not sure how, as I always read your posts. I don't always understand them, but I try to. Thank you for taking the time to write it.

Alan


16 hours ago, vlaiv said:

If you want a sharp image (or rather a sharper one) from a Bayer matrix, you are better off splitting the image into colors without resampling, i.e. without debayering the regular way.

The easiest way to describe this process is with the following image:

[Image: Bayer sub split into sparse R, G1, G2 and B grids, then condensed into four half-size subs]

The first part of the process is to split the colors into separate "sparse" grids; the next step (denoted here as interpolation because the diagram explains the regular debayering workflow) is to "condense" those sparse grids instead of interpolating the missing pixels.

This way the Bayer matrix sub is split into 4 smaller subs: one containing only the red pixels, one the blue pixels, and two the green pixels ("green1" into one smaller sub and "green2" into another). We now have color subs as if we had taken them with a mono camera and the appropriate filter, with two small differences: the sampling rate is twice as low (because of the color sensor and Bayer matrix), and you end up with twice as many green subs as red and blue. (The Bayer matrix was developed primarily for daytime photography; the eye is most sensitive in green, and the green response curve closely matches perceived luminance in human vision, so having more green gives better SNR and definition in the part of the spectrum that matters most to the eye.)

The resulting subs have half the pixels in height and width, giving a x4 smaller file.

 

This is the super-pixel debayering I described earlier. It is an option during image calibration in PixInsight. The only advantage of this process is that it doubles the imaging scale and decreases the file size. Only in the green channel do you get a slight increase in SNR. There is no improvement in detail, other than perceived.


1 hour ago, wimvb said:

This is the super-pixel debayering I described earlier. It is an option during image calibration in PixInsight. The only advantage of this process is that it doubles the imaging scale and decreases the file size. Only in the green channel do you get a slight increase in SNR. There is no improvement in detail, other than perceived.

It is a bit different than super-pixel mode. In principle much of the result is the same, but splitting the channels and stacking them separately has an advantage over super-pixel mode.

In super-pixel mode, you replace each Bayer group (2x2 RGGB, or whatever the pattern is) with a single pixel. If you look at the positions of the sampling points, you will notice:

- the red grid is "offset" by 0.5 px, and so is blue but in the opposite direction (as no interpolation is done, it is in fact a translation of the grid and the pixel values remain the same). With this you slightly misalign the color channels in the resulting color sub, creating a slight blur.

- green is handled by taking one green grid (green1) and translating it in one direction, while green2 is translated in the opposite direction (again without interpolation), and then taking the average; this creates a small blur in the green channel.

With the split approach you avoid this. By registering all subs to the same position you avoid misaligning the color information, and green is no longer blurred (you do a proper "shift" instead of a "warped" translation of the two subs). You get the additional benefit of slight noise reduction from the interpolation process: if you use, for example, a Lanczos kernel to interpolate the images while registering them, you remove some of the high-frequency components of the noise and end up with a smoother result.
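For comparison, here is a hypothetical sketch of super-pixel debayering (my own illustration, not PixInsight's exact implementation), with the channel offsets noted in the comments:

```python
import numpy as np

def superpixel_debayer(raw):
    """Super-pixel mode: each 2x2 RGGB cell becomes one RGB pixel."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    # Within each 2x2 cell the sampling points sit at different spots
    # (row, col in original-pixel units): R at (0.5, 0.5), G1 at
    # (0.5, 1.5), G2 at (1.5, 0.5), B at (1.5, 1.5). Stacking them as
    # one output pixel keeps those half-pixel offsets between the
    # channels, and averaging the two mutually shifted green grids adds
    # the small green blur described above.
    g = 0.5 * (g1 + g2)
    return np.dstack([r, g, b])
```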

