
software debayering vs real mono


Recommended Posts

I was curious to know how a real mono CCD performs versus using software to turn a colour CCD or camera into mono. Aside from the resolution loss, would it be the same thing or completely different?


Link to comment
Share on other sites


In terms of the sensor, each pixel will receive less light as it has a colour filter in front of it. Typically 1/4 of the pixels are red, 1/4 blue and 1/2 green, with each filter only passing some fraction of the incoming light. Simplistically, the sensor is about 1/3rd as sensitive as the same sensor with no Bayer filter on it, so you'd have to expose for three times as long to get the same result.
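As a back-of-envelope sketch of that exposure penalty (the 1/3 figure is an assumption taken from the post above, not a measured value; real filter transmission curves differ):

```python
# Rough sketch: if a Bayer-filtered sensor is ~1/3 as sensitive as the
# bare mono version, it needs ~3x the exposure for the same total signal.
# The 1/3 efficiency is an assumed round number, not a measurement.
def equivalent_osc_exposure(mono_exposure_s, bayer_efficiency=1 / 3):
    """Exposure an OSC sensor needs to match a mono sensor's signal."""
    return mono_exposure_s / bayer_efficiency

print(equivalent_osc_exposure(60.0))  # 60 s mono -> ~180 s through the matrix
```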

In practice there is a lot more to consider. An uncooled DSLR sensor will not produce as good an image as a cooled mono CCD sensor due to higher dark current noise and readout noise. On the plus side you get a much bigger imaging area for far less money. You can get larger format cooled colour CCD cameras for about 3x the price of an entry level DSLR, and half the price of a reasonable mono CCD with wheel and filters.

Link to comment
Share on other sites

Software Debayering produces a colour image, not a mono image.

Cheers,

Chris

Mono gives full resolution and the full QE spectrum, obviously leaving you to image with whatever filter you want at full resolution, or in full broadband mono.

Bayer matrices give each colour a resolution that depends on the configuration; the rest of the image resolution is then interpolated from the available colour pixels (i.e. that may be every other one).

If you wanted to create a mono image, you would normally attempt this by taking each of the interpolated colour images and adding them (according to some ratio) to give a pseudo mono image.

Strictly speaking, the OSC 'mono' is made up of two sets of guesswork, as the stars over the image have different spectra.

What happens with an OSC camera is that the raw image appears as a mono image. The software then has to do the interpolation to guess the missing colours: that's debayering.
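A minimal sketch of that interpolation step, assuming an RGGB pattern and simple neighbour averaging (real debayering algorithms are considerably smarter than this):

```python
import numpy as np

def conv3(a, k):
    """3x3 weighted neighbourhood sum with edge padding."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(k[i][j] * p[i:i + h, j:j + w]
               for i in range(3) for j in range(3))

def debayer_bilinear(raw, pattern="RGGB"):
    """Toy bilinear demosaic: each pixel's missing colours are filled in
    as a weighted average of nearby pixels that did record that colour."""
    h, w = raw.shape
    # Boolean masks saying which pixel recorded which colour.
    masks = {c: np.zeros((h, w), bool) for c in "RGB"}
    for idx, c in enumerate(pattern):       # lay the 2x2 pattern over the chip
        masks[c][idx // 2::2, idx % 2::2] = True
    kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    rgb = np.zeros((h, w, 3))
    for ch, c in enumerate("RGB"):
        vals = conv3(raw * masks[c], kernel)
        wts = conv3(masks[c].astype(float), kernel)
        rgb[..., ch] = vals / wts           # weighted average of known samples
    return rgb
```

On a flat, evenly lit frame this returns a flat colour image, which is a handy sanity check; on real data the filled-in values are only estimates.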

Link to comment
Share on other sites

Whatever colour you ask the software to deliver at the end, the filters were still in place at capture, each one blocking two thirds of the light (or more than two thirds, perhaps.)

I often extract a synthetic luminance from RGB data to add to the 'real' luminance captured in an L filter passing red and green and blue simultaneously. Experiment shows that I need about four times as much synthetic lum as real lum for equivalent signal. This is why mono imaging in LRGB is faster, not slower, than OSC.
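For illustration, a synthetic luminance is just a weighted sum of the debayered colour channels. A sketch, with placeholder equal weights (in practice you would tune them to your camera's response):

```python
import numpy as np

def synthetic_luminance(r, g, b, weights=(1.0, 1.0, 1.0)):
    """Combine R, G and B frames into a pseudo-luminance frame.
    Equal weights reduce this to the mean of the three channels."""
    wr, wg, wb = weights
    return (wr * r + wg * g + wb * b) / (wr + wg + wb)
```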

Olly

Link to comment
Share on other sites

You have to debayer at the right time though, no? First you dark and flat calibrate and then debayer and stack, though AA knows this and does it in the right order. It's a while since I did any OSC.

My Artemis capture software gave a mono preview. My first sight of colour was in the stack.

There may be something to be said for taking a luminance from an OSC image and processing it in greyscale as you would a normal L layer. You can concentrate on contrast, detail and sharpness. Then you can process the colour layer with other priorities (low colour noise, high colour intensity) and finally combine the two.

By the way, I'm relieved to see the AstroArt site back. It seemed to have vanished a while ago.

Olly

Link to comment
Share on other sites

I bought an ASI174MC to kind of cruise around with, but I'll probably go mono and narrowband when I get serious about everything and get it all figured out.

But if I use it in SharpCap, it previews in color and saves FITS in color with the native drivers. If I use ASCOM, with APT, AstroLive, SharpCap's ASCOM mode, or Nebulosity, the preview is in mono and it saves the FITS in mono, and I have to convert to color, which doesn't seem very accurate and looks way off unless I take the time to dial in the values to look right.

That seemed strange to me, and I assumed I'd done something wrong, but from what I've been reading, I think that is just how the process works.

Link to comment
Share on other sites

Oh, I see: in APT, the Bayer filter in settings creates the color, which doesn't look to me how it should. Once I turn off the Bayer filter, it is mono again.

But RGGB just has a yellowish tint


Is this what RGGB should look like? I would think it would look like what the room actually looks like.

Link to comment
Share on other sites

You have to be sure that your debayering software is set to the particular Bayer pattern over your chip. For instance, the top left hand pixel could be any colour. There is only a finite number of possibilities so (using Astro Art in my case) I tried the various permutations on a shot of my kitchen until I got the right colour. Once you know the pattern you save it as default and forget about it.
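Olly's trial-and-error can be sketched like this: each of the four common patterns assigns a different colour to each pixel position, and only one assignment makes a known scene come out the right colour (function name here is illustrative):

```python
# The four 2x2 Bayer layouts most debayering software offers.
PATTERNS = ("RGGB", "BGGR", "GRBG", "GBRG")

def colour_at(pattern, row, col):
    """Which colour filter sits over pixel (row, col) for a given pattern."""
    return pattern[(row % 2) * 2 + (col % 2)]

# The top-left pixel is red under RGGB but blue under BGGR, which is
# why picking the wrong pattern swaps the colours of the whole image.
for p in PATTERNS:
    print(p, colour_at(p, 0, 0))
```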

Olly

Link to comment
Share on other sites

But what I don't get is this: if I have a color camera, why is it being shot in mono and converted to simulated color by software? Doesn't that defeat the purpose of a color camera?

Or is that because the ascom drivers don't support color?

Link to comment
Share on other sites

Hi Vertigo262,

You may have the idea that an electronic "color" camera records all three primary colors equally, at the same time.

This was the case many years ago, before the advent of the compact silicon detectors used in the DSLRs and CCD cameras of today.

Old video cameras of yesteryear used three individual color tubes, or detectors, each with a separate primary color filter in front of its lens. At transmission, each color channel was sent to one of three electron guns in your color TV, each gun being aligned with its dedicated red, green or blue phosphor, which glowed when the electrons from that gun struck it, creating an exact copy of what the camera saw with no extra processing needed. After all, there were no computers around to do this when color video cameras first appeared.

Some specialist applications today do still use three separate single channel cameras to detect all three color channels simultaneously but not too many amateur astronomers could afford the expense!

Color cameras in general use today do not take a "color" picture as such, they take a mono picture through a Bayer mosaic of color filters, each individual pixel below each color filter of the mosaic only records a grey scale pixel value depending how many photons make it through the filter. This applies equally to OSC astro cameras as well as HandyCam movie cameras, webcams, DSLR's, color security cameras, TV studio cameras etc etc.

Pretty much any color camera in general use today uses the same principle.

An image recorded in this way has no actual color content; it is just a monochrome picture made up of a range of pixel values running from black, through grey, to white.

The software used to display the image has a "map" of the Bayer pattern mosaic used in the camera and assigns each pixel from the camera its real color channel, in accordance with the color filter of the mosaic pattern that it knows was above the pixel.

To do this the software needs to know what layout the camera mosaic of color filters had, and as different manufacturers have different ideas about this there are many different mosaic patterns, called debayer patterns, to choose from.

The software then recreates the color picture using this "map".

You could simply display the non-deBayered image as monochrome, but it would look odd, as adjacent pixels would not have the same intrinsic brightness: the Bayer mask prevents an equal distribution of photons reaching all adjacent pixels.

Software such as Photoshop can convert a deBayered color image into true monochrome by stripping out the color information and balancing the individual adjacent pixel values to produce a smooth monochrome picture, but you have gained nothing in doing this that couldn't be seen in the original color version; in fact, the conversion process will have thrown some pixel data away in the smoothing.

ASCOM drivers are not used in the deBayer process; drivers are for hardware, not image reconstruction. The ASCOM camera driver does need to know what type and make of camera is connected, so that when it downloads the image from the camera it knows a Bayer mosaic was used on the source image, and it records this information in the image header of the file. Any suitable image processing software capable of reading the FIT file format can then, on opening the image, apply a deBayer mask and display the image in its real computed colors.

So what is the purpose of an OSC camera?

It simplifies the process of recording a color image by reducing the number of individual shots required to make a picture.

If you use a full monochrome camera, without a Bayer mask, to record a color scene, how do you record the color?

You have to take each picture three times, each picture is taken with a different primary colour filter in front of the camera.

The three "color" pictures are still only "monochrome" but the individual pixels making up each color filtered picture will vary in grey intensity depending on the "color" of the photon reaching the detector from the object photographed. Blue photons will make it thru the blue filter but will be stopped by the red and green filters, the same applies to red photons, passing thru the red filter, stopped by green and blue filters and green photons passing thru the green filter but stopped by red and blue.

The photo reconstruction software then assigns each of the three individual monochrome images to its correct color channel, either by reading the image header to find out which filter was in the camera when that shot was made, or by you telling it manually which filter was used. The reconstruction software then rebuilds the image, pixel by pixel, assigning the correct color to the final image in accordance with the intensity recorded by each corresponding pixel in each of the three images.
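That reconstruction step amounts to stacking the three filtered mono frames into one colour array. A sketch (frame names are hypothetical):

```python
import numpy as np

def combine_rgb(red_frame, green_frame, blue_frame):
    """Stack three filtered monochrome frames into an RGB colour image."""
    return np.dstack([red_frame, green_frame, blue_frame])

# Three 2x2 mono frames become one 2x2x3 colour image.
rgb = combine_rgb(np.zeros((2, 2)), np.ones((2, 2)), np.zeros((2, 2)))
```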

The advantage of an OSC (one-shot color) camera is that you just need to take one shot to make a color picture, with no expensive filters and filter wheels to buy, and you can take three full color images in the time it takes to shoot three separate filtered single-color-channel shots through a mono camera.

The disadvantage of the OSC is a reduced overall sensitivity to light as each pixel is permanently covered by its Bayer matrix mosaic filter, and if adding a narrow band filter in front of the camera you effectively shut down any pixel that does not record the pass thru color of the narrow band filter used, reducing resolution and sensitivity even more.

If using calibration frames with your OSC you still only need to grab a few extra flats, darks and bias in order to have a complete calibrated image at the end of the night.

The advantage of a monochrome camera is that all pixels are able to record whatever light reaches them, depending on whether a white-light luminance filter, color filter or narrowband filter is used in front of the camera; resolution and sensitivity are therefore higher than for an equivalent OSC camera using the same sized pixels.

The disadvantage of a monochrome camera is that three shots have to be taken to make an RGB image, or four if you are shooting LRGB to gain the maximum sensitivity and resolution possible.

Add in flat frames for each color filter, that's another three or four shots at least, plus darks and bias frames too, and you will see that a huge investment of time has to be made just to produce one good color shot with a mono camera.

It is not unusual for me to take over a hundred shots with a mono camera, thru different filters, of just a single object. And what do you do when you only manage the red and green filters before the cloud comes in for the next six weeks, leaving no blue shots to complete the image?

As far as the software "simulating" a color image goes, this statement is a bit misleading. Some image reconstruction and processing software, whether ASCOM compatible or not, will show you a "quick" preview of the deBayered color image before allowing you either to work on the images in un-deBayered monochrome, so that you can "calibrate" them, removing image artefacts by the use of flat frames, dark frames and bias frames, or to go right ahead and carry out image processing on the deBayered color version.

Some processing software, mostly that supplied as dedicated software with the camera, already knows what the camera and its Bayer pattern are, so it will just go ahead with the deBayer and display the full color image automatically as it downloads from the camera. But here's the thing: it is only doing this for the on-screen display. The source image data is not altered; the original file is still a monochrome, Bayered image and can be opened in any other software package capable of reading the file format, where you can calibrate the shot before deBayering it to display the color version, or carry out extra processing tweaks.

When you have finished processing the image to your satisfaction, you can save the file as a full color JPG or TIF image; only now does it become a normal "color" image that you can email to friends or post on a web site. Or you can save it as a deBayered FIT file so that the next time you open it in astro processing software it will automatically open in full color, or just close it without saving and the file will remain monochrome and non-deBayered until the next time you want to process it.

Link to comment
Share on other sites

thank you oddsocks,

That was very detailed and explained a lot. Things make more sense now.

Interestingly enough, if I use the camera with the non-ASCOM driver in SharpCap, it saves the color FIT accurately. So I guess they modified the software or native drivers for their pattern and save a color FIT. But now I understand why all the raw data is mono and not universal.

very helpful

Link to comment
Share on other sites

Just an additional comment:

the colour you see in the reconstituted, de-Bayered image bears no direct (see later) relationship to the RGGB matrix....

If you zoom in to pixel level on a colour image you will NOT see a grid of red/green/green/blue pixels!!

You'll probably see one of the millions of colour options available in the palette.

The de-bayering algorithm options are many and each can give a subtle different outcome.

Each algorithm takes each pixel of the RGGB group and, by using the signal(s) from the surrounding pixels, comes up with what it thinks should be a suitable colour!!!

I've never done it, but it would be very interesting to compare an RGB mono image to the algorithm's choice of colour at the same (X-Y) point.....

Craig Stark has a nice page which explains this a bit more fully....

http://www.stark-labs.com/craig/resources/Articles-&-Reviews/Debayering_API.pdf

Link to comment
Share on other sites

That is simple: just take the OSC image and debayer it (you have to do that to remove the grid mask), then in the color calibration module of the image processing package, or afterwards in Photoshop, adjust the color saturation to zero.

You will be left with a monochrome image that is correctly calibrated for "luminance"; it will render the image with exactly the same grey scale values that the original had, but all the color information will be missing.

Then save the image as FIT; it will be saved as a mono, debayered image and can no longer be converted back to color.

There is nothing you can do manually to a Bayered OSC FIT image to recover only the luminance data that is any different from the method detailed above.

Link to comment
Share on other sites

If you open the RGB image in Ps and go to Mode, Lab Colour, then go to Channels, Split Channels, you can save the L (lightness) channel as a monochrome image.

I'm not totally in agreement with Oddsocks on LRGB imaging. The key filter in LRGB is the L filter. This shoots R and G and B simultaneously and is, therefore, at least three times as fast as a colour  filter, whether in an OSC camera or a mono with filters. If you shoot 4 hours in an OSC you get 4 x 1/3 efficiency. But if you spend an hour per filter in LRGB you get 3 x 1/3 efficiency plus 1 x full efficiency. The advantage in signal is in the ratio of 6 to 4 in favour of LRGB. LRGB is faster. That is assuming that you don't bin the colour but, if you do, the LRGB speed advantage is further extended. And finally, when I compare three hours of synthetic luminance from an RGB image with one hour's true luminance I find that they are not equivalent. The signal from the L filter is considerably stronger. I'd estimate, on this basis, that colour filters don't get up to 1/3 efficiency. Not by some margin.
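Olly's signal arithmetic, spelled out under his assumption that a single colour filter (or Bayer element) runs at roughly 1/3 efficiency:

```python
# 4 hours of OSC: every hour collects at roughly 1/3 efficiency.
osc_signal = 4 * (1 / 3)
# 4 hours of LRGB: one hour each of R, G and B (1/3 each) plus one hour
# of L, which passes all three bands at once (full efficiency).
lrgb_signal = 3 * (1 / 3) + 1 * 1.0
print(lrgb_signal / osc_signal)  # ~1.5, i.e. the 6:4 ratio in favour of LRGB
```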

Multiple LRGB flats? A pain, yes, but I stopped bothering with them some time ago. I found when I experimented a little that using a luminance flat gave the same result as using the correct flat - so that was a relief! If it doesn't work for you then it doesn't, but it works for me on the three mono rigs I use here.

Olly

Link to comment
Share on other sites

There's a common misconception that when you shoot in OSC you're getting 1/3 of the light, since each pixel has a colour filter over it that only lets that colour pass. That's not quite accurate. All else being equal, the single pixel in your OSC camera that is covered with a red filter is effectively the same as a single pixel on a mono CCD with a red filter over it; both pixels are only capturing light within the range passed by the filter; neither pixel is inherently more efficient in this context. It's not like each pixel is capturing all colours simultaneously and can thus only dedicate 1/3 of its efficiency to each colour.

What you're actually trading in is resolution. Because of the Bayer filter in your OSC, for every 4 physical pixels you're dedicating 1 to the capture of red light, 1 to blue light and 2 to green light. With your mono CCD you're capturing all 4 in red, all 4 in blue, or all 4 in green, depending on which filter you have in front of the camera. Of course, you can capture all at once by not using a colour filter at all, but if you do that you have no way of generating a colour image because you simply do not have the information.

As Nick mentioned earlier, when dealing with colour cameras, software takes this grid of raw values produced by OSC cameras, and fills in the gaps. So it takes your 1 red pixel, 2 green and 1 blue and using some smarts it converts them into 4 pixels of each, but of course, you can't add information that's not there, so even its best guess isn't what you would actually have if you really had 4 pixels of each colour.

All else being equal, I can't see any reason why taking a bayer OSC image and summing the grid wouldn't result in a value similar to what you would get if you shot mono with separate filters and summed the values.

Of course, all else isn't equal and there's a lot of other factors to consider...

Link to comment
Share on other sites

Here's a comparison from SBIG that shows one of the "other factors" I referred to, and probably one that contributed to the results in the link you posted. The dashed lines represent a mono with external filters, and the solid lines represent the OSC with bayer matrix on the sensor. As can be seen, the QE is much better in the former. Why is this? I'm not entirely sure, could be as simple as the efficiency of the bayer matrix vs the filters used in this test, which is one of the variables I was talking about when I said that "all else isn't equal".

[Attached chart: 2001M vs. 2001CM quantum efficiency comparison]

If you look at response curves for other cameras (e.g. the Atik 314L+ variants) you'll see something similar; the spectral response on the OSC is quite different.

Another big real world factor is that resolution I was talking about. Remember that if you have 1000 pixels on a OSC, only 1/4 of those are capturing red, and software guesses what the other 750 were. It might be a good guess, but it's still a guess. For a OSC to have the same resolution in each colour channel, it would either need to have a bigger sensor, or smaller pixels. If the latter, then you've spread out the light over more pixels, essentially getting you closer to the point of oversampling, which means you're getting less light in each pixel but the read noise remains constant so your SNR goes down.
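That read-noise trade-off can be put in a toy model (a sketch; the 5 e- read noise and photon counts are made-up illustrative numbers, not any particular camera's):

```python
import math

def pixel_snr(photons, read_noise_e=5.0):
    """Per-pixel SNR with Poisson shot noise plus fixed read noise."""
    return photons / math.sqrt(photons + read_noise_e ** 2)

# Halving the pixel pitch spreads the same light over 4x the pixels,
# but each pixel still pays the full read noise.
big = pixel_snr(400.0)        # all the light on one large pixel
small = pixel_snr(400.0 / 4)  # one of four smaller pixels
print(big, small)             # per-pixel SNR drops for the smaller pixels
```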

I'm sure there's a lot of other factors that people more knowledgable than I am could point out as well :-)

Link to comment
Share on other sites

Archived

This topic is now archived and is closed to further replies.
