
sharkmelley (Members, 1,323 posts)

Everything posted by sharkmelley

  1. The noise looks like pretty typical thermal noise to me. I can't see anything I would describe as interference of any kind. Mark
  2. Wow, that is one stunning image! You've put an awful lot of hard work into that - well done. Mark
3. An excellent test and good analysis! I would summarise those results as follows:
   1) Losing the microlenses has reduced the sensitivity of the sensor by a factor of approximately 2.
   2) Removing the colour filter array has increased the sensitivity of the sensor by allowing every pixel to receive all wavelengths.
   3) For red (and blue) wavelengths, debayering allows the sensor to capture approximately twice as many photons as previously (4x as many pixels are each receiving half the previous signal). This must be a good thing, especially for Ha and SII narrowband imaging. Also, since every pixel is receiving signal, the resolution of the image is increased (because interpolation between coloured pixels is no longer required).
   4) For green wavelengths, debayering does not make any difference to the number of photons captured (twice as many pixels are each receiving half the previous signal) but it does improve the resolution of detail.
   Overall, debayering does allow more photons to be captured (depending on wavelength) and definitely improves resolution. The results would be even better if the microlenses had not been lost. Mark
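The per-wavelength photon accounting in points (3) and (4) can be sketched with a toy model of a single 2x2 RGGB Bayer cell. The half-signal fractions are idealised assumptions taken from the post, not measurements, and microlens losses are ignored here:

```python
# Toy model: relative photons captured per 2x2 RGGB Bayer cell,
# before vs after debayering (microlens losses ignored).
def photons_per_cell(n_sensitive_pixels, signal_per_pixel=1.0):
    """Total signal collected across the 2x2 cell."""
    return n_sensitive_pixels * signal_per_pixel

# Red (or blue): 1 of 4 pixels sensitive before debayering;
# all 4 sensitive afterwards, each at roughly half the signal.
red_before = photons_per_cell(1, 1.0)
red_after = photons_per_cell(4, 0.5)
print(red_after / red_before)    # 2.0 -> ~2x more photons for R/B

# Green: 2 of 4 pixels sensitive before; all 4 afterwards at half signal.
green_before = photons_per_cell(2, 1.0)
green_after = photons_per_cell(4, 0.5)
print(green_after / green_before)  # 1.0 -> no change for G
```

The ratios match the post: roughly a 2x photon gain for red/blue light and no change for green, with the resolution gain coming for free in both cases.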
4. This seems to confirm that losing the microlenses has a very serious impact. Using their figures, the R, G and B pixels have values 14008, 7966 and 9381 respectively. The sum of these is 31355, which is roughly the value we would hope to see in the debayered pixels. But instead we see 13363, which is a factor of 2.3x lower than expected. So removing the microlenses (and maybe some additional "unknown" effect) is reducing the sensitivity of the debayered sensor by a factor of 2.3. That would be a very useful test - try it with each NB filter you can! Mark
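The arithmetic in that post is simple enough to check directly. This is pure illustration using the pixel values quoted above (raw values, not yet bias-subtracted):

```python
# Rough check of the microlens-loss factor, using the values quoted above.
r, g, b = 14008, 7966, 9381   # CFA pixel values (non-debayered side)
debayered = 13363             # measured value on the debayered side

expected = r + g + b          # what a filterless pixel might hope to see
factor = expected / debayered # apparent sensitivity-loss factor

print(expected)               # 31355
print(round(factor, 1))       # 2.3
```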
  5. Photons must reach the photodiodes in order to be registered. But the photodiode itself is only part of the area of the sensor pixel - other necessary components also take up space on the sensor and photons hitting these will not be registered. Microlenses direct almost all the incoming light onto the photodiode so the "wastage" is reduced. There's a useful diagram on this page: http://www.microscopyu.com/articles/digitalimaging/ccdintro.html Maybe the microlenses do not account for the whole of that factor of 3 but I guess that losing them explains a large proportion of the effect. Mark
  6. That is what I fear may be the case - the loss of the microlenses may have a crucial impact. On the plus side, there will certainly be an increase in image resolution - especially for H-alpha. I continue to watch with interest.
7. So my original analysis stands, then. Removing the microlenses (and maybe some additional effect) is reducing the pixel sensitivity by a factor of very roughly 3. This has always been my understanding also. All the pixels will now be sensitive to H-alpha instead of one quarter of them, but if the price to pay is a reduction in sensitivity by a factor of 3 then not much has been gained. I'd love to be proved wrong, so if someone has some evidence of the improvement, please post it. I would also happily accept that results with the 1100D may not be typical of what can be achieved by debayering other DSLR sensors. Mark
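The H-alpha trade-off described above reduces to a back-of-envelope calculation. The sensitivity-loss factor of 3 is the rough figure assumed in the post, not a measured constant:

```python
# H-alpha trade-off: 4x more pixels see Ha after debayering (all pixels
# instead of one in four in an RGGB pattern), but each pixel is assumed
# ~3x less sensitive after losing its microlens.
coverage_gain = 4         # every pixel vs one quarter of them
sensitivity_loss = 3.0    # assumed microlens-loss factor (from the post)

net_gain = coverage_gain / sensitivity_loss
print(round(net_gain, 2))  # 1.33 -> only a modest net signal gain
```

Which is why the post concludes that not much has been gained: a net factor of ~1.33 is a small reward for such invasive surgery on the sensor.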
8. The flaw in my argument is that the flat frame needs to be bias subtracted. I'm assuming the CR2 file you posted was recorded using white light (since the R, G & B pixels all have values). The debayered pixels are now able to collect the R, G & B elements of the light because the filter has been removed. So once the bias is subtracted, the (bias-subtracted) pixels on the left hand side (the debayered side) ought to show roughly the sum of the (bias-subtracted) R, G & B on the right hand side (the non-debayered side), unless losing the microlenses is having a very significant effect. Does anyone know the bias level for the 1100D? For other Canon sensors it is generally a power of 2, e.g. 512, 1024, 2048. Armed with the bias level, a very rough approximation can be made of the effect of losing the microlenses. Mark
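The bias-subtracted comparison described above looks like this in outline. The bias level here (2048) is purely a placeholder assumption, chosen because it is one of the powers of 2 mentioned in the post; substitute the real 1100D bias once it is known:

```python
# Sketch of the bias-subtracted comparison. BIAS is an assumed
# placeholder, NOT the confirmed 1100D bias level.
BIAS = 2048

r, g, b = 14008, 7966, 9381   # CFA pixel values from the earlier post
debayered = 13363             # debayered-side pixel value

expected = (r - BIAS) + (g - BIAS) + (b - BIAS)  # bias-subtracted R+G+B
measured = debayered - BIAS                      # bias-subtracted debayered

print(expected, measured)
print(round(expected / measured, 2))  # loss factor after bias correction
```

Note that because the bias is subtracted three times from the sum but only once from the debayered value, the apparent loss factor changes once the bias is accounted for; that is exactly the flaw the post is correcting.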
9. That's a very interesting result. Intuitively (as someone who has never tried debayering) I would expect each debayered pixel to have a value equal to the sum of the R, G & B values from the CFA pixels, then reduced by a factor due to the loss of the microlenses. So are we saying that the loss of the microlenses reduces the sensitivity by a factor of around 3 or so? Or is something else happening as well? If this result is typical then it indicates to me that debayering an 1100D will result in very little overall advantage from an imaging point of view, though it's a very interesting technical exercise. Is this result also typical of those obtained from other sensor types? Thanks for posting the CR2 file, it was most informative. Mark