
which debayer algorithm


chrisg18


I've got a ZWO 174MC camera. In FireCapture, which is the correct choice in the debayer algorithm box: 1. nearest neighbour, 2. bilinear, 3. HQ linear, 4. edge, 5. smooth hue, 6. adaptive, 7. VNG? I've been using bilinear on Jupiter and it seems OK. Any ideas? Thanks, chrisg


I don't think there is a single "best" debayer algorithm; it depends on the usage scenario. For example, I would go for bilinear when sampling at critical resolution or above, and raw / Bayer drizzle up to critical resolution. Critical resolution is the point beyond which you won't get any more detail because of the diffraction effects of the aperture. These effects produce something similar to a Gaussian blur (the PSF is not actually a Gaussian curve but an Airy pattern), so past a certain point neighbouring samples are smooth enough to be approximated by bilinear interpolation to a good degree. Below critical resolution it is an SNR trade-off: do you want your noise to come from guessed missing samples, or from stacking fewer exposures? The latter is what Bayer drizzle does, since it only uses 1/4 of the samples for R and B and 1/2 for G in each frame.

On the technical side, maybe the best approach is to just shoot in raw and deal with the debayering later. That way you are not stressing the CPU (it only has to record, not debayer in real time), and you can try different debayer techniques on the recorded data afterwards and compare the results.
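
To make the "guessed missing samples" part concrete, here is a rough bilinear debayer sketch in Python (NumPy + SciPy), assuming an RGGB mosaic. The function name, the kernels and the use of SciPy are just my illustration of the general technique, not how FireCapture actually implements it:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Toy bilinear demosaic of an RGGB mosaic (illustration only)."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool)
    g_mask = np.zeros((h, w), bool)
    b_mask = np.zeros((h, w), bool)
    r_mask[0::2, 0::2] = True            # RGGB: R on even rows / even columns
    g_mask[0::2, 1::2] = True            # G on the other two positions
    g_mask[1::2, 0::2] = True
    b_mask[1::2, 1::2] = True            # B on odd rows / odd columns

    # Standard bilinear kernels: average the nearest same-coloured neighbours
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

    raw = raw.astype(float)
    r = convolve(raw * r_mask, k_rb, mode="mirror")
    g = convolve(raw * g_mask, k_g,  mode="mirror")
    b = convolve(raw * b_mask, k_rb, mode="mirror")
    return np.dstack([r, g, b])
```

Each kernel simply averages the nearest same-coloured neighbours, which is why bilinear output is smooth but can look slightly blocky around sharp edges.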


Are you sure that you are not mixing terms up? Debayering relates to telling the software where the RGB filters are in the Bayer matrix on the chip, to ensure that the colour is correct. Get this wrong and you will end up with weird colours in the final image.

Bilinear, Nearest Neighbour and so on are interpolation algorithms. These software routines fill in the gaps that are caused by the Bayer matrix. What FireCapture is attempting to do is apply the correct Bayer matrix and then use an interpolation algorithm to "fill in" the missing red, green and blue data. Bilinear or HQ Linear is superior to Nearest Neighbour (which is why old stacking programs such as Registax, which uses Nearest Neighbour, give poorer results than more modern software).

A better option is to use AutoStakkert!2 to do the de-Bayering on-the-fly during stacking. AS!2 uses a Bayer drizzle algorithm that does away with interpolation. Instead of using clever software to guess what the missing data should be, it uses the slight drift in the image to fill in the missing data with real data. It's also why OSC cameras are very close in performance to mono for planetary imaging. In the best hands and in good conditions mono still has the edge, but for the majority of users in the majority of conditions the difference is very slight.
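
For what it's worth, the Bayer drizzle idea itself can be sketched in a few lines of Python. This is only an illustration of the principle, assuming an RGGB mosaic and that integer-pixel alignment offsets are already known; it is not AutoStakkert's actual code:

```python
import numpy as np

def bayer_drizzle_stack(raw_frames, offsets):
    """Toy Bayer-drizzle-style stack (illustration only).
    raw_frames: list of 2-D RGGB mosaiced frames (NumPy arrays).
    offsets: per-frame (dy, dx) integer alignment shifts relative to the first frame."""
    h, w = raw_frames[0].shape
    acc = np.zeros((h, w, 3))            # running per-channel sums
    cnt = np.zeros((h, w, 3))            # how many real samples landed in each cell

    # colour index of each sensor pixel in the RGGB mosaic: 0 = R, 1 = G, 2 = B
    chan = np.ones((h, w), int)
    chan[0::2, 0::2] = 0
    chan[1::2, 1::2] = 2

    for frame, (dy, dx) in zip(raw_frames, offsets):
        ys, xs = np.indices((h, w))
        ys, xs = ys - dy, xs - dx                      # where each sample lands after alignment
        ok = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
        acc[ys[ok], xs[ok], chan[ok]] += frame[ok]     # each sample keeps its own colour
        cnt[ys[ok], xs[ok], chan[ok]] += 1

    return acc / np.maximum(cnt, 1)                    # cells that never received a sample stay 0
```

Because each raw sample keeps the colour of the filter it was recorded through, the drift between frames gradually fills every cell of every channel with real measurements rather than interpolated guesses.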

The added advantage of doing the de-Bayering after capture is that if you get the wrong Bayer matrix in FireCapture then the data is useless, because it de-Bayers on-the-fly. If you select the incorrect Bayer matrix during stacking, you can always go back and select the correct one; the underlying data isn't touched, only the output.

In summary, unless there is a particular need to use FireCapture's de-Bayer facility, you are probably better off (and quicker) just using AutoStakkert for de-Bayering, stacking and cropping.

 

See this article for more information:

http://www.skyandtelescope.com/astronomy-resources/astrophotography-tips/redeeming-color-planetary-cameras/


Nearest neighbour will be much too simplistic, and bilinear only a little better (a simple interpolation between the four closest pixels); both will produce blocky edges, as though not anti-aliased.

VNG (variable number of gradients, apparently) is good, and is what is used in PixInsight. Not sure what the others are.

Experiment and see which you like best.


Thanks for all your replies. I think I'll use HQ linear in FireCapture. The article in Zakalwe's link is very informative. I'll also experiment with raw capture and debayering in AutoStakkert!2.


  • 9 months later...

I have a demosaicing problem. Looking at the RAW pictures of the Orion Nebula, they show purplish colours for the nebula. I now understand the demosaicing operation but, with the default coefficients, I get all greens. Messing about with random coefficient values produces random colours, but nothing like the RAW original. The camera is a Pentax k2s. Those 3x3 matrix coefficients seem to be the only tweakable values. It also confuses me that the Bayer matrix is basically 4x4(???), so I can't even guess what those 3x3 values are doing.

The Nebulosity manual doesn't help me at all, I'm afraid.

Help


1 hour ago, sophiecentaur said:


The Bayer matrix is 2x2 (one red, one blue and two green pixels), so I'm guessing the matrix you are talking about is the colour transform matrix, i.e. the mapping from camera RGB values to image RGB values. These are two vectors of size 3, so a 3x3 matrix is used to transform between them. If that is the case, the matrix should be doing the following: R in image = M00 x CameraR + M01 x CameraG + M02 x CameraB, G in image = M10 x CameraR + M11 x CameraG + M12 x CameraB, and the same for B in image. If you want an exact mapping between the two colour vectors, use the unity matrix: one with 1 on the diagonal (top left to bottom right) and 0 everywhere else.
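
In NumPy terms that transform looks like the sketch below; this is just an illustration of the arithmetic, not Nebulosity's actual code, and camera_rgb is stand-in data:

```python
import numpy as np

def apply_colour_matrix(rgb, M):
    """Apply a 3x3 colour transform so that image_RGB = M x camera_RGB at every pixel.
    rgb is an (H, W, 3) array of already-debayered camera RGB values."""
    return np.einsum("ij,hwj->hwi", M, rgb)

M_unity = np.eye(3)                                   # 1 on the diagonal, 0 elsewhere
camera_rgb = np.random.rand(4, 4, 3)                  # stand-in for a debayered frame
image_rgb = apply_colour_matrix(camera_rgb, M_unity)  # unity matrix: colours pass through unchanged
```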

A green cast to the image is "normal" when doing astrophotography with an OSC camera or DSLR. Chips are made this way, more sensitive in green (a wider band), because daytime photography has most of its light in that band. You can leave the matrix at unity and correct the green in post-processing by reducing the green channel value (curves and such).


23 hours ago, vlaiv said:


I am confused by your "green cast" remark. The nebula in the picture on the camera viewer, and when previewed or opened in Photoshop, is definitely pale purple. I am not getting just a green cast (a familiar effect on some normal photos in some lighting conditions); the nebula looks emerald green and nothing like the other versions. The debayering in the camera and PS agree with each other; the .fit picture is very different.

I was using the default unity matrix up until the time I started playing about with the coefficients.

I will try fiddling with the colour curves on the .tif file.

Cheers


The green cast I was referring to is typical of a light-polluted sky (if much of the sky illumination is full spectrum, not from sodium lights) when the image is not white balanced. What colour is the background, if any (it will be dark if you have no LP)? When debayering you need to specify the correct Bayer matrix order; failing to do so can cause strange colours / colour casts in the image. Also, a note about the FITS format: some software treats FITS data as top to bottom, while other software treats it as bottom to top. This can lead to wrong debayering because the Bayer matrix is vertically mirrored. Try different Bayer matrix orders and see if that helps.


I am assuming that my DNG picture is 'what I would see' if my eyes were good enough. That shows me a pale but definitely purple nebula. Why should the .fits picture show anything different? There are two other controls in Nebulosity, an X and Y 'offset'. Could that be a way to solve the problem? I will suck it and see.

The dark background is pretty much neutral but the camera noise, when ISO is cranked up, has a touch of orange about it.


If you have a raw Bayer matrix picture which you debayer assuming, for example, RGGB (meaning top left is red, top right and bottom left are green, and bottom right is blue), and the image is flipped vertically - which would happen if the software that created the FITS assumes one direction, for example bottom up, while the software that displays the FITS assumes the other direction, top to bottom - then the "interpretation" of the Bayer matrix will be wrong. The software assumes it is RGGB, but since the picture is flipped the pixels swap places in the vertical direction, so in effect you have a GBRG Bayer matrix configuration: what is red and blue in the picture will actually be green, and what is green will be a mix of red and blue.

A mix of red and blue is purple-ish, and if the sensor has its biggest sensitivity in green (giving a green cast to the image), a wrong Bayer matrix interpretation can lead to the picture having a purple-ish cast instead of a green one.

You can simply check whether there is vertical mirroring between the FITS and the DNG: just look at the image orientation. If the two images are different (what is at the bottom of one is at the top of the other), then you have a vertical image flip, and if both are raw and you debayer them with the same Bayer matrix setting, one of them will be wrong (either the FITS or the DNG, depending on the actual Bayer matrix versus the applied transformation). They might both be wrong, because there are four options for the Bayer matrix configuration: RGGB, BGGR, GRBG and GBRG.
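
If you want to test that quickly, a small sketch like the one below debayers the same raw frame with all four orders, and again after a vertical flip, so you can see which combination gives sensible colours. It reuses the bilinear_demosaic sketch from earlier in the thread (a hypothetical helper), and raw_frame is stand-in data:

```python
import numpy as np

# Dropping one leading row and/or column lines each order up with RGGB,
# so a single demosaic routine covers all four configurations.
OFFSETS = {"RGGB": (0, 0), "GRBG": (0, 1), "GBRG": (1, 0), "BGGR": (1, 1)}

def demosaic_any(raw, order):
    dy, dx = OFFSETS[order]
    return bilinear_demosaic(raw[dy:, dx:])   # hypothetical helper from the earlier post

raw_frame = np.random.randint(0, 65535, (100, 100)).astype(float)  # stand-in mosaic
for order in OFFSETS:
    rgb = demosaic_any(raw_frame, order)             # as-stored orientation
    rgb_flip = demosaic_any(raw_frame[::-1], order)  # same frame, vertically flipped
    # inspect each result: mean channel values, or just display it and eyeball the colours
    print(order, rgb.mean(axis=(0, 1)), rgb_flip.mean(axis=(0, 1)))
```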


On 06/01/2017 at 11:33, vlaiv said:


I have solved the problem by doing things another way. My DNG files go into Aperture (OS X), where I can fool around with levels etc. and then save them as TIFF, which Nebulosity recognises and produces the right (same) colours. The final results are getting more encouraging.

I was disappointed to find that PS Elements (which I bought recently) doesn't handle 16 bits. Any upgrade from Adobe involves actually renting the software, and I am certainly not up for that. I need something that will let me construct 16-bit images from multiple images (AP enthusiasts are such liars with their pictures!!), and some batch handling would also be useful. Aperture will do that for simple operations (levels, curves, colour balance etc.) but it really isn't an editor. I may try GIMP, which is free, but it's hard to get all the necessary info about the alternatives before making a choice. I used to have a hooky version of PS and that was brilliant, but I decided on honesty and look where it has got me!!

PS my pictures are all the right way up! 


Glad you solved it. Give GIMP a go, but make sure you use the 2.9 version (still in development), rather than 2.8.

GIMP 2.8 can only handle 8 bits per channel; 2.9 introduced support for 16, 32 and 64 bits per channel.


