
Debayerization



I suddenly realized that I didn't fully understand what debayerization is.

I start with CR2 files from my Canon camera and process these in Deep Sky Stacker to make FITS files. One book I have says that debayerization calculates the missing colours in each pixel by averaging the colours in the surrounding pixels. So that would mean that each pixel in the image has a value for all three colours.

But what happens if the colour layers are kept separate? In my FITS files there's a layer for each colour, and each layer has a size corresponding to the full size of the sensor. That would mean that the red layer, say, has exact red values in some pixels, but what values are in the pixels which are not red? Are they just zero? Or does the debayerization interpolate the intermediate values?

 


On 17/12/2020 at 08:12, woodblock said:

debayerization interpolate the intermediate values

Hi

I'm no expert but no one has replied so...

AFAIK, you choose at the time of debayering. Or at least with Siril you do. The averaging debayer (quite often called 'superpixel') makes one blob by taking a block of 4 pixels, say RGGB, and taking a mean (median?) value. You can also debayer normally and then split the colour channels, but bear in mind there is always 'bleed' of one colour into the next; the Bayer filter wavelengths overlap. As do RGB filters when using a monochrome camera.

Call out to @vlaiv (sorry) to correct me!

HTH


Yes, to expand on the answer already given - the image prior to debayering is like a monochromatic image.

It is in fact a monochromatic image in which different pixels have different intensities because they recorded different wavelengths of light. Enlarged, it looks like this:

[image: enlarged crop of an undebayered frame showing the Bayer checkerboard pattern]

Debayering is the process of turning this information into three separate color channels - three monochromatic images that will not have the checkerboard pattern and will contain correct color.

There are three principal ways of doing this:

- interpolation

- separation

- superpixel mode

In interpolation mode, the software "makes up" missing pixels by interpolating the surrounding pixels of the same color. The simplest way of interpolating is linear interpolation - in 1D that would mean: take the pixel to the left of the missing one and the pixel to the right, and average their values to replace the missing value.

This mode makes up values, and for this reason it is not my preferred way of doing things for astronomical applications - but it is widespread in regular use, probably because sensors are sold as 24MP or 6000x4000 and people expect to get 6000 x 4000 px images from them. Also, critical sharpness/accuracy is not needed in daytime photography.
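To make the idea concrete, here is a minimal sketch of that 1D linear interpolation over missing same-color samples (not any particular program's implementation - the sample values are made up):

```python
import numpy as np

# Made-up 1D row of red samples: every other position was covered by a
# green filter, so its red value is missing (NaN here).
red_row = np.array([10.0, np.nan, 14.0, np.nan, 18.0, np.nan, 22.0])

filled = red_row.copy()
for i in range(1, len(filled) - 1):
    if np.isnan(filled[i]):
        # Average the same-color neighbours to the left and right.
        filled[i] = 0.5 * (red_row[i - 1] + red_row[i + 1])

print(filled)  # [10. 12. 14. 16. 18. 20. 22.]
```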

Separation mode works by simply extracting 4 sub-fields from the image, each containing only the pixels at the same position in the Bayer matrix. The red pixels are extracted to one image, the blue to another, the greens that share a row with red (sometimes denoted GR) to one green image and the greens that share a row with blue (denoted GB) to another - so you end up with 1 red, 2 green and 1 blue mono image.

Each of these images will have half the resolution of the original Bayer matrix image (simply because that is the count of corresponding pixels). Maybe best explained by a diagram:

[diagram: the Bayer mosaic separated into R, G and B sample grids, with interpolation filling the blanks]

The diagram shows interpolation - making up the blanks - but instead of that, imagine you "squish" the R and B samples together - simply delete the missing spaces. This can be done because samples are not "squares" but just points, and the sampling rate is the distance between points - in that sense, we take only the points that we have and accept a longer distance between them - a lower sampling rate / lower resolution.
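As a rough sketch of what separation does (assuming an RGGB pattern - the offsets differ for other patterns, and this is not any specific software's code), the four sub-fields are just strided slices of the raw frame:

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a raw RGGB mosaic into four half-resolution mono images."""
    r  = raw[0::2, 0::2]   # red samples
    g1 = raw[0::2, 1::2]   # green samples sharing a row with red (GR)
    g2 = raw[1::2, 0::2]   # green samples sharing a row with blue (GB)
    b  = raw[1::2, 1::2]   # blue samples
    return r, g1, g2, b

# Example with a dummy 4x4 raw frame:
raw = np.arange(16, dtype=float).reshape(4, 4)
r, g1, g2, b = split_bayer_rggb(raw)
print(r.shape)  # (2, 2) - half the width and half the height of the raw frame
```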

Finally, superpixel mode is a sort of combination of these two - it takes each group of 2x2 pixels and makes a single pixel out of them: R from R, B from B, and G from (G1+G2)/2 - the average of the two G pixels.
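A minimal sketch of the superpixel idea, under the same assumed RGGB layout - each 2x2 cell collapses into one RGB pixel:

```python
import numpy as np

def superpixel_rggb(raw):
    """Collapse each 2x2 RGGB cell into one RGB pixel (half resolution)."""
    r = raw[0::2, 0::2]
    g = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])  # average of the two greens
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])   # shape: (H/2, W/2, 3)
```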

I personally prefer and advocate the separation method for astronomy.

 

 


41 minutes ago, vlaiv said:

Yes, to expand on the answer already given - the image prior to debayering is like a monochromatic image. ...

Hi vlaiv, I know nothing about debayering apart from having had it explained to me, by you I think, that I needed to do it in FireCapture in order to see some colour - I was originally only seeing Mars as a white circle. However, it only meant moving some sliders around until Mars looked the right colour.

My question is: do I need to readjust this for other objects, such as Jupiter or the Moon? Or should I leave things as they are because those are the right settings for my camera, and by selecting one of the other objects in the FireCapture menu, say Saturn, the adjustment is done automatically from my settings? I don't understand why it's even necessary in the first place, to be honest, because with every other camera the colour balance is automatic. I suppose there must be a good reason for it.


22 minutes ago, Moonshed said:

My question is: do I need to readjust this for other objects, such as Jupiter or the Moon? ...

You are asking about a slightly different process - once you have completed debayering and have the color information, how do you make that color information look right to the human eye: color balancing.

The reason astronomical cameras don't have this feature integrated is simple - they are not consumer-grade products. Most astronomical cameras use either scientific sensors (where you don't want to mess with the data) or sensors meant for industrial / surveillance applications, where the application will dictate the color balance.

On the other hand, DSLR cameras, which produce images for people to look at, have that and a bunch of other related features (vivid color, white balance, different mood settings and so on).

The answer to your question is unfortunately not straightforward - or rather, you can do either: adjust each time, or find the best settings and leave it at that, depending on what effect you want to achieve.

On one hand, you should find the best settings and leave them like that. This is because color balance is used to offset different lighting conditions (sunshine, artificial light, cloudy day, etc.). In outer space there are no different lighting conditions for the planets of our solar system - they are always lit by exactly the same Sun in the same way. For this reason, you can find the best color balance settings and leave them there.

On the other hand, we are not viewing the planets from outer space but from under our atmosphere. Our atmosphere distorts color, as can easily be seen in these comparison images:

Sun color when high in the sky:

[image: the Sun high in the sky, appearing nearly white]

Sun color at sunset:

[image: the Sun at sunset, appearing deep orange/red]

The Sun is actually white-yellowish, but when that light passes through our atmosphere, blue wavelengths get scattered away (thus making the sky blue) and the Sun turns more yellow. The more atmosphere the light travels through, the less blue it has and the stronger the yellow/red cast.

The same thing happens with planets - if you want the planet to have the same color high in the sky and lower down towards the horizon, then you need to slightly adjust your color balance between the two.
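In software terms, color balancing usually boils down to multiplying each channel by a gain before display. A minimal sketch of that idea (the gain values are invented for illustration - in FireCapture you would move the sliders instead):

```python
import numpy as np

def apply_color_balance(rgb, r_gain=1.0, g_gain=1.0, b_gain=1.0):
    """Scale each channel of a float RGB image (values 0..1) by a gain."""
    gains = np.array([r_gain, g_gain, b_gain])
    return np.clip(rgb * gains, 0.0, 1.0)

# e.g. boost blue slightly when the planet is low, to offset the blue light
# scattered away by the thicker slice of atmosphere:
# balanced = apply_color_balance(rgb, b_gain=1.15)
```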

Hope this helps.


28 minutes ago, vlaiv said:

You are asking about a slightly different process - once you have completed debayering and have the color information, how do you make it look right to the human eye: color balancing. ...

Thank you for the information, much appreciated. I now understand it much better than I did before.

