
The mysteries of green.


ollypenrice

Recommended Posts


I was comparing mono RGB with mono luminance so there's no size difference. I hadn't noticed the dimension changes with OSC but I always used Registar for combining OSC with mono so it would do it without my noticing. Does the matrix crop the pixels slightly?

Olly


Macavity, in another thread, linked to this article. http://nogreenstars.blogspot.co.uk/2013/03/why-are-there-no-green-stars.html It's excellent.

It explains why there are no green stars. (Quick explanation: the Sun's energy output peaks in the green part of the spectrum, but the peak in terms of photon radiance, which is what our eyes perceive, is shifted away from green towards the red. Also, the eye's colour sensitivity derives from three types of receptor, and the response of the green receptor overlaps heavily with that of the red.)
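That energy-peak versus photon-peak distinction can be checked numerically. A minimal sketch, assuming a Sun-like blackbody at roughly 5778 K (an assumed effective temperature, not a figure from the article): the energy spectrum peaks around 501 nm (green) while the photon-count spectrum peaks around 635 nm (red).

```python
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K
T = 5778.0           # assumed effective temperature of the Sun, K

# Wavelength grid from 200 nm to 1200 nm, 0.005 nm spacing
lam = np.linspace(200e-9, 1200e-9, 200001)

# Planck spectral radiance per unit wavelength (energy)
B_energy = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

# Photon radiance: divide by the energy per photon, h*c/lambda
B_photon = B_energy * lam / (h * c)

peak_energy_nm = lam[np.argmax(B_energy)] * 1e9   # ~501 nm (green)
peak_photon_nm = lam[np.argmax(B_photon)] * 1e9   # ~635 nm (red)
print(peak_energy_nm, peak_photon_nm)
```

The ~130 nm shift between the two peaks is the whole point: counting photons rather than integrating energy moves the maximum well away from green.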

So that got me thinking about the one shot colour Bayer matrix and the question, 'Why does it have twice as many green pixels as red or blue?' When we shoot using a monochrome camera and RGB filters we don't shoot twice as much green as we do blue or red. This article http://en.wikipedia.org/wiki/Bayer_filter gives explanations. It seems the purpose of the double set of green filters is to allow them to serve as luminance. Bayer (whose first name is Bryce - for completeness!) described the green pixels as 'luminance sensitive elements' and the red and blue as 'chrominance sensitive elements.' But...

This is predicated on the fact that, in daylight, the human eye is most sensitive to green.
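For reference, the 2x2 RGGB tile the Wikipedia article describes can be sketched directly; tiling it across a (hypothetical) sensor shows that exactly half the photosites are green:

```python
import numpy as np

# A minimal sketch of an RGGB Bayer colour filter array:
# each 2x2 tile holds one red, two green and one blue filter.
tile = np.array([['R', 'G'],
                 ['G', 'B']])

# Tile it across a hypothetical 8x8 sensor
h, w = 8, 8
cfa = np.tile(tile, (h // 2, w // 2))

green_fraction = np.mean(cfa == 'G')   # 0.5 -> twice as many greens as R or B
red_fraction = np.mean(cfa == 'R')     # 0.25
print(green_fraction, red_fraction)
```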

Now when we are taking astrophotos we are not working in daylight and we are not collecting light from the sun which is the source of our daylight spectrum. So do we want one shot colour cameras to have twice as many green pixels? I would have thought not. I would have thought that the 'luminance' function of green pixels would collapse on most astronomical targets. Many imagers use Ha (which is red) as partial luminance on emission nebulae, so a better matrix would be RRGB in these cases. For reflection nebulae I'd have thought RGB ideal.

So is this why tests show monochrome RGB-filtered cameras to be faster than OSC cameras? The OSCs have the wrong filter distribution.

And it also shows the clear advantage of using a luminance filter. The luminance filter doesn't have to guess which colour is the best to serve as luminance on this particular target, it gets the lot and the peak will be somewhere in that data. It doesn't matter where.

Olly

Been there, seen it, done it.....

http://en.wikipedia.org/wiki/Super_CCD

Fuji Super CCD = RGB+LUM


Sadly Fuji stopped making Super CCD variants in 2010. It did make some sense to have separate luminance photosites, but maybe it was commercially unviable in the end, or just not needed for most 'run of the mill' photography? Such is the way of many good ideas.


What you are looking for is a specialist sensor for a very narrow market. I guess it's unlikely to be found.

Canon might oblige one day.  They brought out a couple of DSLRs that were targeted at the astro market although I think with those it was just the filter that was modified.  I would imagine a radically new sensor would be a different matter altogether.


Quote:
"Canon might oblige one day. They brought out a couple of DSLRs that were targeted at the astro market although I think with those it was just the filter that was modified. I would imagine a radically new sensor would be a different matter altogether."

Yeah, they are in a very select club of DSLR makers that recognise the astro market. Their astro cams were just standard sensors fitted with filters that allow H-a light through (AFAIK).


As a humble unmodded DSLR user, I find I get decent results by splitting out a synthetic luminance and RGB before processing; that way I'm not throwing away the green luminance data, even if I do kill the green in the RGB later on with SCNR.

For the synth luminance, I inspect each of the RGB channels separately - I usually find the green is the richest in SNR (unsurprising, since it has twice the data), followed by the red, which has good signal but tends to be quite noisy for me, and usually find my blue is quite poor.  I set myself a custom RGB working space, usually with lots of green, a fair amount of red and not much blue, to try to optimise the quality of the Lum channel I then extract from it. 

RGB I process in the usual way - DBE, colour calibration etc, maybe some SCNR but I'm not really a fan of too much, I find it throws off the rest of the colour balance. Then I recombine L+RGB late in the process.

Probably not for the purists since I'm using green luminance to support R and B data, but given the constraints of my equipment, I think I'm allowed  :-)
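The weighted synthetic-luminance step described above might look like this in code. This is a hedged sketch, not the poster's actual workflow: the function name and the weights (heavy green, moderate red, little blue) are illustrative assumptions.

```python
import numpy as np

# Sketch of a synthetic luminance built from separate R, G and B channels,
# weighted as the post describes: lots of green, a fair amount of red,
# not much blue. The weights are illustrative only.
def synthetic_luminance(r, g, b, weights=(0.3, 0.6, 0.1)):
    wr, wg, wb = weights
    total = wr + wg + wb
    # Normalised weighted mean of the three channels
    return (wr * r + wg * g + wb * b) / total

# Toy example on random "channel" data standing in for calibrated frames
rng = np.random.default_rng(0)
r, g, b = (rng.random((4, 4)) for _ in range(3))
L = synthetic_luminance(r, g, b)
```

Normalising by the weight sum keeps the synthetic luminance on the same intensity scale as the input channels, so it can be recombined with the RGB later without rescaling.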

