
CCD/CMOS sensitivity


jambouk


Two questions:

1. Is it possible to do binning with a colour chip? Would this increase the sensitivity of the chip?

2. When taking a black and white image with a device which has a colour chip, at what stage in the process are colours "converted" into monochrome?

Thanks for any replies.

James


Usually it is not possible to bin colour chips because the binning would cause the colour information to become confused. The 4x4 virtual pixel would cover a selection of coloured filters on the Bayer matrix. Some recent camera chips do, I've read, get round this. Generally the answer is no.

I think what follows is correct but it isn't gospel so get further opinions:

A Bayer Matrix-covered chip operates natively in greyscale. Only when subsequent software is told which pixels lie under which colours can it be converted into colour. However, there is another operation to perform; the grid itself must be smoothed out of the picture. Reading of the colour information and removal of the matrix grid artefact is called Debayering. Sometimes it is done in camera, sometimes not. The only time I've ever removed the grid pattern from an image without releasing the colour information has been when making flats for an OSC CCD.
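A toy numpy sketch of why naive binning confuses the colours, as described above (the numbers are invented for illustration, not from any real camera):

```python
import numpy as np

# Hypothetical illustration: a 4x4 sensor with an RGGB Bayer pattern,
# imaging a pure-red scene. Each photosite records brightness only;
# R sites see full signal, G and B sites see (almost) nothing.
red_scene = np.array([
    [100, 0, 100, 0],   # R G R G
    [0,   0,   0, 0],   # G B G B
    [100, 0, 100, 0],   # R G R G
    [0,   0,   0, 0],   # G B G B
], dtype=float)

# Naive 2x2 binning sums each 2x2 block regardless of filter colour:
binned = red_scene.reshape(2, 2, 2, 2).sum(axis=(1, 3))

# Every superpixel now mixes one R, two G and one B photosite, so the
# fact that the scene was red is no longer recoverable from `binned`.
print(binned)  # each value is 100: R(100) + G(0) + G(0) + B(0)
```

Once the filter colours have been summed together like this, no amount of later processing can separate them again.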

Olly


As stated above, the key to this is that you have to 'debayer' OSC images to make the final image. There are two broad ways in which this can be done:

- Retaining the original resolution of the chip. For each pixel, the colour of its own filter is used (e.g. red), plus the missing colour information interpolated from surrounding pixels (green, blue). There are different algorithms for combining the colours; the most common are bilinear interpolation (which looks at all the adjacent pixels in a 3 x 3 grid) and VNG (Variable Number of Gradients, which looks at a 5 x 5 grid of surrounding pixels, calculates the gradient of each colour in 8 directions and rejects those gradients that are too steep: a basic form of edge detection to avoid including pixels from unrelated objects).

- Reducing the resolution to a quarter (i.e. binning). This is the 'superpixel' method, which takes each block of four pixels containing a red pixel, a blue pixel and two green pixels and makes one large pixel. (Typically the two green pixels are averaged to get the green component, but some software just picks one of the two greens and ignores the other.)

Overall, superpixel debayering should give you a better SNR, exactly as binning a mono chip would. The downside for OSC (whether binning or not) is that typically the red pixels are the least sensitive and the green the most, but you can't expose the three colours for different durations to equalise the SNR as you can with a mono camera.
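A minimal numpy sketch of the superpixel method just described, assuming an RGGB layout with even dimensions (real software also handles the other Bayer layouts):

```python
import numpy as np

# Superpixel debayer: each RGGB quad becomes one RGB pixel,
# averaging the two green photosites.
def superpixel_debayer(cfa):
    r = cfa[0::2, 0::2]                             # red photosites
    g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0   # average the two greens
    b = cfa[1::2, 1::2]                             # blue photosites
    return np.stack([r, g, b], axis=-1)             # H/2 x W/2 x 3 image

cfa = np.array([[10, 20, 10, 20],
                [30, 40, 30, 40],
                [10, 20, 10, 20],
                [30, 40, 30, 40]], dtype=float)
rgb = superpixel_debayer(cfa)
print(rgb.shape)  # (2, 2, 3): a quarter of the pixels, full colour
```

Because the two greens are averaged, the green channel of each superpixel gets a small SNR boost compared to either green photosite alone.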

When making a mono image there are three different ways it can be done:

- Shooting a luminance sub on a mono chip (i.e. IR/UV block filter only).

- Shooting a single channel on a mono chip (i.e. one of red, green or blue filters).

- Converting a colour image to monochrome in post processing (either a LRGB combined image or a debayered OSC image).

The third option would typically average the red, green and blue channels, using a weighting for each, to produce a single brightness value for the mono image. There are lots of ways to do that weighting.
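As one sketch of such a weighting, here is the common Rec. 601 luma weighting in numpy. This is just one conventional choice of weights; astro software may use equal weights or camera-specific values instead:

```python
import numpy as np

# Collapse an RGB image to monochrome with a weighted channel average.
# The Rec. 601 coefficients below are one common choice, not the only one.
def rgb_to_mono(rgb, weights=(0.299, 0.587, 0.114)):
    w = np.asarray(weights, dtype=float)
    return rgb @ w  # weighted sum over the colour axis

rgb = np.array([[[1.0, 1.0, 1.0],
                 [1.0, 0.0, 0.0]]])  # one white pixel, one pure red pixel
mono = rgb_to_mono(rgb)
print(mono)  # white -> 1.0, pure red -> 0.299
```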

There is no simple definition of 'convert to monochrome', you see; it depends on what you are trying to achieve. Is the mono image supposed to be an accurate representation of the luminous intensity of the target (and if so, across what range of wavelengths)? An accurate map of how the scope/sensor responds to that target (e.g. a flat)? Or something else, such as a narrowband image for science purposes?


Usually it is not possible to bin colour chips because the binning would cause the colour information to become confused. The 4x4 virtual pixel would cover a selection of coloured filters on the Bayer matrix. Some recent camera chips do, I've read, get round this. Generally the answer is no.

Olly

So I take it this applies to DSLRs as well? Not worth trying to bin my subs? I was thinking of trying but haven't yet.


So I take it this applies to DSLRs as well? Not worth trying to bin my subs? I was thinking of trying but haven't yet.

Ian is far more knowledgeable than I on this subject. I would say that most DSLRs cannot bin data without losing the colour information.

Olly


Ian is far more knowledgeable than I on this subject. I would say that most DSLRs cannot bin data without losing the colour information.

Olly

I wouldn't be surprised if you were correct. Could it work by taking the color out and turning it into a type of synthetic Lum. layer to add on top of the color data, or is that not how it works?


There were (are?) some Kodak OSC chips which could do fancy electronic clocking on read-out which really hardware-binned the red/blue/green pixels separately.

You can, of course, bin raw DSLR images (before debayering) in software to produce a new, binned bayer matrix (you might have to write the code yourself though!) .
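A sketch of what that code might look like, assuming an RGGB layout whose dimensions are a multiple of 4 (a real implementation would handle other layouts and edge cases):

```python
import numpy as np

# Software-bin a raw Bayer frame *before* debayering, binning
# like-coloured photosites together so the result is still a valid
# (half-resolution) RGGB mosaic rather than a colour-scrambled image.
def bin_bayer_2x2(cfa):
    h, w = cfa.shape
    out = np.empty((h // 2, w // 2), dtype=float)
    # For each colour offset (dy, dx) in the RGGB quad, pull out that
    # colour plane, 2x2-bin it, and write it back at the same offset.
    for dy in (0, 1):
        for dx in (0, 1):
            plane = cfa[dy::2, dx::2]                       # one colour plane
            binned = (plane[0::2, 0::2] + plane[0::2, 1::2] +
                      plane[1::2, 0::2] + plane[1::2, 1::2])
            out[dy::2, dx::2] = binned
    return out

demo = bin_bayer_2x2(np.ones((8, 8)))
print(demo.shape)  # (4, 4): still an RGGB mosaic, half the size each way
```

The output can then be calibrated and debayered exactly like a smaller raw frame.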

NigelM


A few more thoughts:

- The superpixel method is most often used when you use an OSC camera with a narrowband filter. If the RGB filters had no overlap you would only end up with data in one of the three pixel colours for a given NB filter (e.g. Ha would only illuminate red pixels). In practice there is a fair bit of overlap between the filters; usually the green pixels have some sensitivity in both the red and blue spectrum.

You can't use the 'overlapping filter' pixels in isolation, as they will be much less sensitive to the NB wavelength than the main colour for that filter, and so much noisier. So you average all three colours into a single monochrome pixel. You can either do this by weighting the channels in fixed proportions corresponding to their sensitivity to the particular NB wavelength, or you could use a flat frame taken through the narrowband filter.

By averaging like this you obtain extra signal from these pixels at the cost of more noise, but that noise is averaged into the superpixel rather than varying wildly from pixel to pixel in the Bayer matrix image. Of course the resulting image is at a lower resolution than the sensor, since you average 4 pixels into one, and it is also monochrome, as any narrowband image would be (until you combine and colour it with other filters later).

- You can also use superpixel to generate a colour image, you simply take four pixels and use the R, B and average of the two G to make values for each channel. The advantage of this is that it is very fast at debayering to create a reduced resolution colour image.

- Bilinear and VNG debayering retain the full resolution of the sensor. Whilst you end up with colour information apparently at the full resolution, it isn't really so. There is colour information for each pixel, but you have interpolated two thirds of the pixels in the R & B channels and half in the G channel. VNG is pretty successful, but even so you will end up with reduced fidelity compared to shooting the same scene with a mono camera and separate colour filters. Because the eye is less sensitive to changes in colour than it is to brightness, it is not a major handicap. (Mono imagers also take advantage of this and often shoot luminance at full resolution to get as much detail in brightness variation as possible. They will then bin R, G and B filters, as they can achieve a satisfactory SNR in much less time by so doing, but with no significant reduction in the quality of the final image.)

- I am sure it would be possible to write software to bin a larger number of pixels of each colour from a OSC separately and then combine into a lower resolution colour image. I can't see how you would end up with a better/different result than if you debayered using VNG and then down-sampled the image to a lower resolution, or indeed just used the colour superpixel method.
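The weighted narrowband superpixel idea in the first bullet above might be sketched like this. The sensitivity figures are invented for illustration only; a real workflow would measure them, e.g. from a flat taken through the narrowband filter:

```python
import numpy as np

# Assumed (made-up) relative sensitivities of R/G/B pixels at the Ha
# wavelength; real values depend on the sensor and filter.
HA_SENSITIVITY = {"r": 1.0, "g": 0.15, "b": 0.05}

def nb_superpixel(cfa, sens=HA_SENSITIVITY):
    # Collapse each RGGB quad to one mono value, weighting each colour
    # in proportion to its sensitivity at the narrowband wavelength so
    # the weak channels add signal without dominating the noise.
    r = cfa[0::2, 0::2]
    g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0
    b = cfa[1::2, 1::2]
    total = sens["r"] + sens["g"] + sens["b"]
    return (sens["r"] * r + sens["g"] * g + sens["b"] * b) / total

mono_nb = nb_superpixel(np.full((4, 4), 10.0))
print(mono_nb.shape)  # (2, 2), monochrome, quarter resolution
```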

However you do it, with OSC it is a choice between greater resolution in brightness at the cost of reduced (genuine) resolution in colour, or reduced brightness resolution and reduced but accurate colour resolution. Ultimately you can't create information that isn't in the single image in the first place. **

** You can, however, tease out more data from multiple images by super-sampling. Dither your images when capturing so that each image is slightly offset from the next, then use a drizzle algorithm to stack them at a higher resolution than the native camera resolution. Since the individual sensor pixels capture different parts of the sky in each exposure, drizzling is a means of 'un-averaging' data out of multiple pixels that have covered the same part of the sky. I used DSS in the past to do this; if I recall correctly it offers drizzling but doesn't have decent control over pixel rejection (satellites, planes, etc.) when drizzling. I currently use PixInsight, which has really good pixel rejection choices but doesn't (yet) offer drizzling as an option, though it was discussed late last year that it might appear eventually. Computationally it is quite a hard problem to do both drizzling and pixel rejection at once (which you have to do): the size of the data set grows rapidly for a reasonable number of images, and on the normal hardware us mortals might use you have to be quite cute at managing memory and files on disk to make it work, unless you're using NASA-grade kit. I don't know if anyone else has experience of other stacking software that might offer the best of both worlds?


Could it work by taking the color out and turning it into a type of synthetic Lum. layer to add on top of the color data, or is that not how it works?

I have seen DSLR processing methods that use a synthetic Luminance for parts of the processing. This is really more about being able to control noise reduction and enhancement algorithms though. As explained above you can't really create information that doesn't exist in the first place and the luminance channel extracted from a OSC RGB image is inferior to a true luminance taken from a mono camera (in terms of noise if nothing else). I believe the synthetic luminance is a useful technique for many of the tools in PhotoShop. Personally I haven't tried it, as most of the NR and enhancement tools in PixInsight where it would be useful to process luminance and chrominance separately offer that as an option without the need to split the image in to components first (though it is entirely possible to do so if required same as it is in PhotoShop).

Not a cue for another ecumenical discussion by the way :) , it's just a different way of skinning the same cat. I've been doing exactly that over the past week using the new PI TGVDenoise tool, as a matter of fact. (If anyone who has PI hasn't tried TGVDenoise yet, you really ought to give it a go. It took a couple of hours to figure it out on linear images, apparently easier on stretched ones, but it is probably going to be my weapon of choice from now on for killing off background noise in DSLR images.)


I thought the "in-camera" Canon de-Bayering algorithm couldn't be by-passed?

Even in RAW, if you zoom in on the image there's always the "artificial" 64 million colours, never the core "RGGB" pixel colours?


I thought the "in-camera" Canon de-Bayering algorithm couldn't be by-passed?

Even in RAW, if you zoom in on the image there's always the "artificial" 64 million colours, never the core "RGGB" pixel colours?

The camera will only debayer the image if you are shooting JPEG (or RAW and JPEG, in which case it debayers the latter). If you shoot RAW, the file is in a proprietary format but basically contains the monochrome pixel values and no colour information (other than the fact that we know they have been shot through a grid of RGGB filters). It also contains other metadata, including the camera's white balance settings, which will be used by default when you decode the RAW. These can be overridden by terrestrial post-processing software, and are usually ignored by astro-processing software. (Bear in mind that, unlike a CCD camera, Canon DSLRs do perform on-camera processing of the pixels before they are recorded in the file. It appears that they artificially reduce the amount of dark current on long exposures, which makes life harder than it needs to be, but we cannot control that at all.)

Not sure what you mean by "artificial 64 million colours"?

- An undecoded Canon RAW contains a single brightness level per pixel. That brightness level is represented as a 16 bit number. In theory that allows for 65,536 levels of brightness, but in practice the xxxD and xxxxD cameras only record 12 or 14 bits depending on the model making for 4,096 or 16,384 discrete brightness levels. (Don't know about the high end Canons, can't afford one and if I could I'd buy a CCD anyway).

- How that gets displayed on screen depends on the software that you use to view the RAW. If you use the standard Canon tools (e.g. Digital Photo Professional) or most normal image processing packages they will automatically debayer the raw. Non-astro photographers generally aren't interested in the bayer-matrix image since they can't do anything with it, thus many packages gloss over this whole step or encapsulate it in other tools that make more sense to the terrestrial photographer; for example the white balance process is designed for terrestrial use and generally doesn't work for astronomical targets.

- On the other hand, we are interested in the bayer matrix image since we can apply calibration before debayering, i.e. Bias, Darks and Flats. None of these processes work as effectively if you debayer the calibration and light frames before calibration (especially the flats). So astro-processing software makes debayering an explicit part of the process. It can be done in three ways:

1. Some basic packages skim over the whole process (e.g. if I recall, you don't have to particularly care about debayering in DSS, it just does it for you at the right point after calibration).

2. More advanced packages allow you to decode a RAW to a monochrome image. This just puts all the pixels in a single monochrome image but leaves the bayer pattern in place. You may find them referred to as monochrome CFA images (Colour Filter Array). If you zoom in on the image you can see the checkerboard pattern of the bayer matrix.

3. The alternative approach is to decode the RAW in to a three-channel (but still bayered) image. So the red bayer pixels go in the red channel, green in the green channel, etc. Again this may be called a colour CFA image, and it still shows the checkerboard pattern if you view RGB together, or a 'grid' of pixels surrounded by black borders if you view one channel. Note that this is not the same as debayering, since for a red pixel the corresponding green and blue values will be zero (black), and so on. The big disadvantage of this approach is that the decoded image is three times as big: for each pixel you are storing three 16 bit numbers for red, green and blue, and two thirds of them contain no information. This can eat a lot of disk space, so option 2 is more usual.

The one advantage I am aware of for a colour CFA image is that you can use channel extraction to (say) take the red channel from one image and combine it with the blue and green channels of another before you debayer the combined set. I am planning to experiment with this as I find that for one image, a given stacking process produces a slightly less noisy red channel and a different stacking process produces a better green and blue channel.
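A sketch of that third decoding option, assuming an RGGB layout: each pixel keeps its own value in the channel of its filter while the other two channels stay zero, which is why most of the stored values carry no information.

```python
import numpy as np

# Spread a mono CFA image into a three-channel colour CFA image
# without interpolating anything.
def mono_cfa_to_colour_cfa(cfa):
    h, w = cfa.shape
    out = np.zeros((h, w, 3), dtype=float)   # R, G, B planes, all zero
    out[0::2, 0::2, 0] = cfa[0::2, 0::2]     # red photosites -> R channel
    out[0::2, 1::2, 1] = cfa[0::2, 1::2]     # green photosites -> G channel
    out[1::2, 0::2, 1] = cfa[1::2, 0::2]
    out[1::2, 1::2, 2] = cfa[1::2, 1::2]     # blue photosites -> B channel
    return out

colour_cfa = mono_cfa_to_colour_cfa(np.ones((4, 4)))
print(colour_cfa.shape)  # (4, 4, 3): three times the data, same information
```

Channel extraction then becomes a simple slice, e.g. `colour_cfa[..., 0]` for the red channel.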

Once you have decoded the RAW in to a CFA image you then have to store it in a file. This might be a FITS file format or a TIFF format. It doesn't affect the content of the image (though as you will realise you shouldn't convert to a PNG, JPG or similar at this stage as typically you will lose data and quality rapidly if you do).

Bear in mind that both the colour and mono CFA images still have to be debayered to produce a useable colour image, and if you opened a CFA TIFF in PhotoShop it would look weird. The usual processing steps I follow in PixInsight are:

- Stack bias frames in to a mono CFA master bias.

- Stack dark frames in to a mono CFA master dark.

- Calibrate individual mono CFA flat frames with master bias and master dark (calibrating master dark with master bias as part of the process).

- Stack calibrated flat frames in to a mono CFA master flat (one master flat for each imaging session).

- Calibrate individual mono CFA light frames with master bias and master dark (calibrating with master bias again) and applicable master flat.

- Debayer calibrated light frames (usually VNG debayer).

- Register (align) debayered light frames.

- Stack registered light frames in to final image (using pixel rejection to eliminate planes, satellites, etc., noise weighting of individual frames to manage their contribution to stack).

- ... Rest of processing routines.

I tend to do all of that step by step as it gives me maximum quality of the stacked image before I start the rest of the processing routine. In something like DSS all of the above processes happen pretty much automatically once you have loaded your darks, flats, lights, in to the correct image lists. PixInsight has a BatchPreProcessing script which does exactly the same job as DSS and is no harder to use for beginners, but it does hide some of the controls that I use and doesn't allow you to check quality at each step of the process. Typically I'd use BPP to do a quick stack of the images hot off the press and then go back through the manual process once I am sure I haven't got any major defects in the input images.
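The calibration arithmetic behind those steps can be sketched in numpy. This uses median-stacked masters and hypothetical uniform arrays in place of real FITS data; real tools (PixInsight, DSS) add pixel rejection, normalisation and dark scaling on top of this:

```python
import numpy as np

# Median-combine a list of frames into a master frame.
def make_master(frames):
    return np.median(np.stack(frames), axis=0)

# Calibrate a light frame: subtract bias and (bias-subtracted) dark,
# then divide by the normalised (bias-subtracted) flat.
def calibrate_light(light, master_bias, master_dark, master_flat):
    dark = master_dark - master_bias            # thermal signal only
    flat = master_flat - master_bias
    flat = flat / np.mean(flat)                 # normalise flat to mean 1
    return (light - master_bias - dark) / flat  # calibrated CFA frame

# Toy example: bias level 1, dark level 3 (includes bias), flat level 5.
master_bias = make_master([np.full((2, 2), 1.0)] * 3)
master_dark = make_master([np.full((2, 2), 3.0)] * 3)
master_flat = make_master([np.full((2, 2), 5.0)] * 3)
cal = calibrate_light(np.full((2, 2), 10.0), master_bias, master_dark, master_flat)
print(cal)  # 10 - 1 (bias) - 2 (dark) = 7 everywhere
```

All of this happens on the mono CFA data, before debayering, which is exactly why the CFA decoding options above matter.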


Ian,

I hear you, but....maybe not 64 million colours...14 bit is fine ;-)

But if I use say AstroArtV5, to open a Canon RAW file I still get a "multi-coloured" image, which I assume has been de-Bayered.

Which astro imaging packages automatically offer the "CFA" options.....


Ian,

I hear you, but....maybe not 64 million colours...14 bit is fine ;-)

But if I use say AstroArtV5, to open a Canon RAW file I still get a "multi-coloured" image, which I assume has been de-Bayered.

Which astro imaging packages automatically offer the "CFA" options.....

I played with trial versions of various packages last year when I wanted to move on from DSS, but I can't really remember how AA did the process. Many packages use the dcraw library to decode RAWs (there is a list on the author's page, though I don't see AA there: http://www.cybercom.net/~dcoffin/dcraw/). It is free, open source, and regularly updated by the author to support the ever-evolving RAW file formats.

There are also links to stand-alone versions of dcraw for linux, Windows and Mac. If you want to explore RAW files and like command line options, you can convert pretty much any RAW file in to any of the formats we have discussed to do experiments.

I am guessing AA is automatically debayering the RAW files for you when you open them, but I'd be surprised if there were not options tucked away somewhere to get at the CFA files. I guess it depends on how the authors approach OSC camera processing. The simplest approach is to debayer everything first and then deal with the images as normal three-channel colour images, but assuming you are expending effort on getting good bias, darks and flats you should ensure you are making the most of them by using calibration/stacking software that follows a similar process to above.

There is nothing to stop you calibrating in one package and then carrying on the process in another for example, lots of people do that kind of thing. Perhaps some people familiar with other packages could share their experiences of how they perform in this regard?


Archived

This topic is now archived and is closed to further replies.
