Debayering a DSLR's Bayer matrix.


RAC


For those interested who haven't read up on it before: this is a nice image showing the effect of the microlens array on a sensor and why it has such an impact on the QE / sensitivity of the sensor:

[Image: microlensarray.jpg]

The microlenses basically enlarge the light-gathering surface of each pixel, significantly increasing the amount of light reaching each photodiode (pixel) and so increasing the QE / sensitivity of the sensor.

This is reflected in the sensor QE graphs we see.

Removing the microlens array during the debayer process significantly decreases the light-gathering capacity of each photodiode (pixel) and so decreases the QE / sensitivity.

So while, after debayering, you utilize all pixels when using for example a Ha filter and will get a higher resolution, due to the decreased QE / sensitivity your imaging time (total integration time) will NOT be shorter than before debayering. It will be the same or even longer.

Edited by GuillermoBarrancos

So while, after debayering, you utilize all pixels when using for example a Ha filter and will get a higher resolution, due to the decreased QE / sensitivity your imaging time (total integration time) will NOT be shorter than before debayering. It will be the same or even longer.

My results don't agree with your conclusion. Removing the microlenses halved the sensitivity of the photosite, but debayering quadrupled the number of photosites sensitive to Ha and SII. The net effect is double the sensitivity of the sensor which would halve your imaging time.
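The arithmetic behind this conclusion can be sketched as a toy calculation (the 50% microlens loss and the 1-in-4 vs 4-in-4 photosite counts are the figures quoted in this thread; everything else is illustrative):

```python
# Toy arithmetic only; figures are the ones quoted in this thread.
photosite_fraction_before = 1 / 4   # stock Bayer sensor: 1 in 4 pixels sees Ha
photosite_fraction_after = 4 / 4    # debayered: every pixel sees Ha
microlens_loss = 0.5                # per-pixel sensitivity after losing microlenses

signal_before = photosite_fraction_before * 1.0
signal_after = photosite_fraction_after * microlens_loss

net_gain = signal_after / signal_before
print(net_gain)  # 2.0, i.e. twice the sensor-level Ha sensitivity
```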


My results don't agree with your conclusion. Removing the microlenses halved the sensitivity of the photosite, but debayering quadrupled the number of photosites sensitive to Ha and SII. The net effect is double the sensitivity of the sensor which would halve your imaging time.

But not all photosites will collect, for example, Ha during a real exposure of a target, or you would get a completely uniform red block as the image. No?

So I don't think it's that absolute. At least that is how I interpret all this.

Otherwise all sensor QE graphs would show a 75% higher QE in red band for Mono compared to OSC.

Some sensors' QE graphs don't show any difference in the red band at all! Like the KAI 11002.

I will stand fully corrected if I'm wrong in the end, when real imaging tests show a Ha region being imaged 4 times as fast with a debayered DSLR as with a normal, fully astro-modified DSLR.

Proof will be in the pudding as they say. :D

Edited by GuillermoBarrancos

Indeed :D

Here are the results from my tests using Ha, OIII and SII filters.

An excellent test and good analysis!

I would summarise those results as follows:

1)  Losing the microlenses has reduced the sensitivity of the sensor by a factor of approx 2.

2)  Removing the colour filter array has increased the sensitivity of the sensor by allowing every pixel to receive all wavelengths.

3)  For red (and blue) wavelengths, debayering allows the sensor to capture approximately twice as many photons as previously (4x as many pixels are each receiving half the previous signal).  This must be a good thing, especially for Ha and SII narrowband imaging.  Also, since every pixel is receiving signal, the resolution of the image is increased (because interpolation between coloured pixels is no longer required).

4)  For green wavelengths, debayering does not make any difference to the number of photons captured (twice as many pixels are each receiving half the previous signal) but it does improve the resolution of detail.

Overall, debayering does allow more photons to be captured (depending on wavelength) and definitely improves resolution.  The results would be even better if the microlenses had not been lost.
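Points 3) and 4) reduce to a simple per-band capture factor; a toy calculation, assuming the ~50% microlens loss measured earlier in the thread (the names and numbers here are illustrative, not measurements):

```python
# Capture factor per band, relative to the stock sensor, assuming the ~50%
# microlens loss discussed in this thread. Illustrative numbers only.
MICROLENS_LOSS = 0.5
bayer_fraction = {"red": 1 / 4, "green": 2 / 4, "blue": 1 / 4}  # RGGB pattern

capture_factor = {
    band: (1.0 / fraction) * MICROLENS_LOSS  # more pixels, each at half signal
    for band, fraction in bayer_fraction.items()
}
print(capture_factor)  # {'red': 2.0, 'green': 1.0, 'blue': 2.0}
```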

Mark


Well, it's certainly the case that the microlenses are only part of the picture (sic). Removing the CFA helps to make up for the microlens losses, but might this result in a loss of contrast? It would be good if sensors had a flat response across the visible spectrum, but they don't. Maybe something can be gained/compensated in processing? Maybe a custom software debayering might be useful? I'm just thinking aloud :)

Cheers

Louise


My results don't agree with your conclusion. Removing the microlenses halved the sensitivity of the photosite, but debayering quadrupled the number of photosites sensitive to Ha and SII. The net effect is double the sensitivity of the sensor which would halve your imaging time.

Thinking about it, it's only the sensitivity of the individual pixels that counts for the debayered sensor - unless you 'bin' 2 x 2 (by some means), which would defeat the object of gaining resolution. So, on that basis, a debayered sensor could need up to twice the integration time... I suppose much will depend on the arc secs/pixel that one is imaging at, as well as the particular target.
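The 'binning by some means' mentioned above can be done in software after capture; a minimal sketch, assuming raw frames loaded as NumPy arrays (the `bin2x2` helper and the synthetic flat below are hypothetical, for illustration only):

```python
import numpy as np

def bin2x2(frame):
    """Sum each 2x2 block of a mono frame into one value (software binning).

    Halves the resolution on each axis but quadruples the per-pixel signal.
    """
    h, w = frame.shape
    trimmed = frame[:h - h % 2, :w - w % 2]  # drop odd edge rows/columns
    return trimmed.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A debayered flat where every pixel collects half the stock signal.
debayered_flat = np.full((4, 6), 0.5)
binned = bin2x2(debayered_flat)
print(binned.shape)   # (2, 3)
print(binned[0, 0])   # 2.0: twice the signal of one stock red pixel
```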

Oh my, it's getting late and my brain has had enough!

Louise


We should be able to measure the actual losses due to removal of the microlenses if someone can take a test image indoors, remove just the microlens layer but not the CFA (which is easy enough), and take an image of the same object. Then compare the red and blue values.

Light conditions would need to be identical though.

This test would remove the cfa from the equation.
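The comparison of red and blue values could be scripted once before/after raw frames are available; a sketch using synthetic stand-in data (the `channel_means` helper and the exact 50% figure are assumptions for illustration):

```python
import numpy as np

def channel_means(mosaic):
    """Mean level of each Bayer channel in a raw RGGB mosaic (2D array)."""
    return {
        "R": mosaic[0::2, 0::2].mean(),
        "G1": mosaic[0::2, 1::2].mean(),
        "G2": mosaic[1::2, 0::2].mean(),
        "B": mosaic[1::2, 1::2].mean(),
    }

# Synthetic stand-ins for frames taken before and after microlens removal;
# real data would come from the camera's raw files (e.g. dumped via RawDigger).
before = np.full((4, 4), 1000.0)
after = np.full((4, 4), 500.0)   # pretend the microlens loss is exactly 50%

b, a = channel_means(before), channel_means(after)
loss = {k: a[k] / b[k] for k in b}
print(loss)  # every channel at 0.5 of its previous level
```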


We should be able to measure the actual losses due to removal of the microlenses if someone can take a test image indoors, remove just the microlens layer but not the CFA (which is easy enough), and take an image of the same object. Then compare the red and blue values.

Light conditions would need to be identical though.

This test would remove the cfa from the equation.

That's a good idea. I would say even better: remove the microlenses only on half of the sensor, just in case the microlens removal physically damages the CFA somehow. Then proceed to completely debayer that side (with the other side still intact) and see if the results hold true. I might actually do that.

Edited by pixueto

Gina, have you noticed a layer below the bayer matrix? When I've been debayering the 1100D, once I've gotten the green layer off with a scraper, there's a dull gold layer underneath. If I then use a polishing compound, the dull layer is removed and there's a second gold layer.

Surely that's the bottom antireflection coating under the colour filter, right?


We should be able to measure the actual losses due to removal of the microlenses if someone can take a test image indoors, remove just the microlens layer but not the CFA (which is easy enough), and take an image of the same object. Then compare the red and blue values.

Light conditions would need to be identical though.

This test would remove the cfa from the equation.

I believe the losses were worked out to be 50%, based on the tests I did here:

http://stargazerslounge.com/topic/166334-debayering-a-dslrs-bayer-matrix/page-81#entry2308747


But not all photosites will collect, for example, Ha during a real exposure of a target, or you would get a completely uniform red block as the image. No?

So I don't think it's that absolute. At least that is how I interpret all this.

Otherwise all sensor QE graphs would show a 75% higher QE in red band for Mono compared to OSC.

Some sensors' QE graphs don't show any difference in the red band at all! Like the KAI 11002.

I will stand fully corrected if I'm wrong in the end, when real imaging tests show a Ha region being imaged 4 times as fast with a debayered DSLR as with a normal, fully astro-modified DSLR.

Proof will be in the pudding as they say. :D

Because there are two things going on when we debayer, we should look at them individually.

Firstly, the reduction in QE from removing the microlenses. This, as stated, was measured to be 50%.

Next, the increase in resolution, for Ha and SII, from 1 in 4 pixels to 4 in 4. Now we are gathering light at 4 times the number of photosites, but it's spread over 4 times the area. Therefore, if we take the resolution gain, we have to accept the reduction in sensitivity and increased exposure durations. But if we combine the signal from the 4 pixels into one, we're back at the previous resolution but with twice the signal (remembering that removing the microlenses costs us 50%).

I'm not sure how the QE charts are prepared, but looking at them I assume they're based on a single pixel and not the sensor as a whole; otherwise, why would the green QE be the same as the red and blue when green has double the number of photosites and so should gather more light? If my assumption is correct, then you wouldn't see a 400% (not sure why you said 75%?) gain for mono in the red band.

So, imaging in Ha with my debayered sensor should be twice as fast (remembering that removing the microlenses costs us 50%). But as you say, the proof will be in the pudding :D (Though I'm not sure how you would quantitatively evaluate those results?)
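The exposure-time claim can be made explicit with a toy model in which exposure time scales inversely with sensor-level sensitivity (the 50% figure is the microlens loss measured in this thread; the function and scenarios are illustrative assumptions):

```python
# Relative exposure time for equal total signal, versus the stock camera.
# Toy model: time scales as 1 / (contributing-pixel factor x per-pixel
# sensitivity). The 50% microlens loss is the figure measured in this thread.
MICROLENS_LOSS = 0.5

def relative_exposure(pixel_count_factor, per_pixel_sensitivity):
    """Exposure time relative to the stock camera for equal captured signal."""
    return 1.0 / (pixel_count_factor * per_pixel_sensitivity)

# Debayered and binned 2x2 back to stock resolution: 4x pixels at 50% each.
print(relative_exposure(4, MICROLENS_LOSS))  # 0.5 -> half the exposure time
# Debayered at full resolution, judged per pixel: 1x pixel at 50% sensitivity.
print(relative_exposure(1, MICROLENS_LOSS))  # 2.0 -> up to twice the time
```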


An excellent test and good analysis!

I would summarise those results as follows:

1)  Losing the microlenses has reduced the sensitivity of the sensor by a factor of approx 2.

2)  Removing the colour filter array has increased the sensitivity of the sensor by allowing every pixel to receive all wavelengths.

3)  For red (and blue) wavelengths, debayering allows the sensor to capture approximately twice as many photons as previously (4x as many pixels are each receiving half the previous signal).  This must be a good thing, especially for Ha and SII narrowband imaging.  Also, since every pixel is receiving signal, the resolution of the image is increased (because interpolation between coloured pixels is no longer required).

4)  For green wavelengths, debayering does not make any difference to the number of photons captured (twice as many pixels are each receiving half the previous signal) but it does improve the resolution of detail.

Overall, debayering does allow more photons to be captured (depending on wavelength) and definitely improves resolution.  The results would be even better if the microlenses had not been lost.

Mark

I think we also need to acknowledge that the increase in resolution comes at a cost of reducing sensitivity by 50%.


I intend to debayer half the sensor area too - with an 1100D sensor.  I will also provide an area where just the microlenses are removed.  Then for testing I will use white light for one test with a diffuser over the lens to provide a flat.  I propose to repeat this test with an Ha filter in the light path and again with SII and OIII filters.  I will then see if I can take the various image areas, select a few pixels from each, and make a composite image with these adjacent to each other.  The comparison of the untouched sensor with the CFA-removed and microlens-removed areas will be the most interesting.  It would be nice if I could find a way to actually measure the image brightness for these areas.

Oh, and flats taken before any debayering to confirm that the sensor area that will be debayered is actually the same sensitivity as the other side.


I intend to debayer half the sensor area too - with an 1100D sensor.  I will also provide an area where just the microlenses are removed.  Then for testing I will use white light for one test with a diffuser over the lens to provide a flat.  I propose to repeat this test with an Ha filter in the light path and again with SII and OIII filters.  I will then see if I can take the various image areas, select a few pixels from each, and make a composite image with these adjacent to each other.  The comparison of the untouched sensor with the CFA-removed and microlens-removed areas will be the most interesting.  It would be nice if I could find a way to actually measure the image brightness for these areas.

Oh, and flats taken before any debayering to confirm that the sensor area that will be debayered is actually the same sensitivity as the other side.

Sounds good Gina. I used a program called RawDigger to measure the brightness of the pixels.

http://www.rawdigger.com/

Do you think you could also have an area that has the antireflective coating removed as well? So you would have:

1. Original

2. Microlenses removed

3. Microlenses and CFA removed

4. Microlenses, CFA and antireflective layer removed

Or maybe that's too much of a stretch?


I shall be setting up a proper test rig with cooling to test the sensor at each stage of debayering and will post the results.  I will post the details of the cooling system in my own 1100D debayering, cooling, FW and OAG thread and keep this thread on topic with debayering info only.

Edited by Gina

Sounds good Gina. I used a program called RawDigger to measure the brightness of the pixels.

http://www.rawdigger.com/

Do you think you could also have an area that has the antireflective coating removed as well? So you would have:

1. Original

2. Microlenses removed

3. Microlenses and CFA removed

4. Microlenses, CFA and antireflective layer removed

Or maybe that's too much of a stretch?

Thank you - I'll take a look at RawDigger - looks like the very thing :) 

I'm not sure about the antireflective layer - depends on how things go.  It may turn out that it gets removed in places anyway.  But I'll certainly keep it in mind and make a decision later.


Hi

Interesting piece of software. Was wondering which version you have? Can it be used like ccdinspector?

Cheers

Louise

I just downloaded the free trial and once it expired, it was still useful to some degree. I'm not familiar with ccdinspector, so couldn't say.


I've tested the white light flats and have a working setup with a standard Canon zoom lens and tissue held on with an elastic band.  Using APT to control exposure and capture data.  I need to clean the sensor cover glass before using the results as there are several bits of dust on it.  Probably the best time to take the calibration data would be after removing the cover glass or after adding the epoxy resin, i.e. just before attacking the sensor surface.


Now have cooling added to the test rig, though without any thermal insulation, so the cold finger is gathering a thick layer of ice.  I am obtaining -8C on the CF with an ambient temperature of 23C, so that's a delta T of 31C.  I'm running 60s exposures using APT and the 1100D sensor is fine.  Without the camera being in a box there is considerable light leakage and the image shows a medium brightness at that exposure.

I'll add some plastic foam as thermal insulation and see if I can get down to -10C on the CF.  I may also add a DS18B20 digital thermometer to the heatsink to see just how well the cooler is working.

I'm using the MCPE-127-10-25 Peltier cooler (19.6W) with a 60x60x40mm finned CPU cooler and fan, as shown in my 450D cooling thread.  I'm running the TEC at 15V and 1.75A - just below its maximum rating.

Edited by Gina

Hi Gina, bit off topic but I'm just messing around with a TEC1-12706 Peltier and wondered how you control it. I've seen it rated at all sorts of wattages up to 90-odd, so what governs this rating? Is it just the amount of heat you're trying to dissipate, or is there some way of regulating it?

Dave


You can control the power input to the TEC using a power MOSFET driven from one of the PWM outputs of an Arduino - see circuit diagram in my 1100D debayered with added cooling, FW and OAG thread.

With such a high power and inefficient TEC you will need a very good cooler to get rid of the heat from the hot side.

Edited by Gina

With nighttime approaching I can get 2m subs with little illumination showing up, so I'm now running 5 min subs.  A few hot pixels are beginning to show up :D  With no imaging I'm getting -8.5C, but with continuous imaging the CF temperature rises to -7.5C.  Once I get proper thermal insulation I'm sure I can get down to -10C sensor temperature, which is quite sufficient :)

Edit - 5m @ ISO800 is fine so I've upped the ISO to 3200.

Edited by Gina
