Good way to measure / calculate color gamut of sensor?



I'm under the impression that mono + RGB filters actually have a smaller color gamut than OSC sensors (it strictly depends on the filters used - but let's go with "regular" filters, ones that split the 400-700nm range into three distinct bands).

In order to calculate color gamut, we need to first find an appropriate raw -> CIE XYZ transform (the color gamut will depend on it) and then see what range of the xy chromaticity diagram it covers.

A brute force approach is completely infeasible: splitting the 400-700nm range into very coarse 10nm steps yields 30 divisions, and using only 10% intensity increments already leads to something like 10^30 combinations to examine.

I've tried searching the internet for information on sensor color gamut estimation / calculation - but have found numerous articles that in fact state that sensors "don't have a gamut" - a statement I disagree with. Maybe it is because I don't have a proper understanding of the term gamut? I can't be sure. In any case, here is what I mean by the color gamut of a sensor + filters, and what a brute force way of calculating it would be. Maybe someone will have an idea how to simplify things and make the calculation feasible.

First let's define what I believe to be the color gamut of a sensor. Examine the following graph of the quantum efficiency of the ASI1600:

[Image: ASI1600 quantum efficiency curve]

Note the section of the graph between 600 and 630nm - it is mostly a flat line. Let's assume it is flat for the purpose of argument. Now let's look at the following graph of Baader LRGB filter transmission:

[Image: Baader LRGB filter transmission curves]

Again observe the same 600-630nm range. The red filter covers it, and there is a tiny variation of transmission within this range, but we can select two points that have the same transmission - say 605nm and 615nm.

If we shine light of the same intensity from two sources - one at 605nm and one at 615nm - the camera/filter combination will record exactly the same two values - there will be no distinction. We will get a value of, let's say, 53e through the red filter and nothing through the blue or green filters.

But here is the important thing - given these two images (of the two light sources) - we cannot, even in principle, tell which one was 605nm and which was 615nm.

Now let's examine human eye response:

[Image: human cone cell (L, M, S) spectral response curves]

On the above graph there will be a difference in the response of both L (long wavelength) and M (medium wavelength) cone cells between those two wavelengths. Our eye will be able to distinguish these two light sources as having different colors.

This clearly shows that the camera + filter combination we are examining has a smaller color gamut than the human eye (although articles insist that neither the human eye nor a camera has a color gamut - that only display devices do - a statement I strongly disagree with. In fact the color gamut of a display device is only defined in terms of human eye sensitivity - it could be larger or smaller if we took some other measurement device as reference / defined the color space differently).

Now that I've explained what I mean by the color gamut of a sensor - let's see how to calculate/measure it. I'll explain the brute force approach and why it is not feasible.

Let's first look at the well defined CIE XYZ color space and the xy chromaticity diagram.

Here are the color matching functions for CIE XYZ:

[Image: CIE XYZ color matching functions]

Now imagine any arbitrary function that spans 380nm-780nm and represents the spectrum of light from a particular source. Take that function, multiply it with each of the x, y and z matching functions and integrate (sum the area under the resulting curves) - that produces the X, Y and Z color space values. The important thing to note is that different spectra will in general produce different X, Y and Z - but there will be a very large number of spectra that produce the same X, Y and Z values. There are many different spectral representations of the same color - much like the ASI1600 + RGB example above, where I showed you two different single wavelengths that produce the same response on the sensor. With CIE XYZ any single wavelength will have different XYZ values, but there will be combinations of wavelengths at particular intensities that produce the same XYZ values. For CIE XYZ that does not mean a smaller color gamut - because CIE XYZ is modeled to represent human vision: if the XYZ values are the same, so is the color that we see. If we can't distinguish a difference - the same is true for CIE XYZ.
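To make that concrete, here is a minimal numerical sketch of that integration step, assuming the CIE 1931 2° color matching functions are available as arrays on a common wavelength grid (the load_cie_1931_cmfs helper is a hypothetical placeholder for whatever tabulated data you load):

```python
import numpy as np

# Wavelength grid: 380-780nm in 1nm steps (the range covered by the CIE 1931 tables).
wavelengths = np.arange(380, 781, 1)

# Hypothetical loader - x_bar, y_bar, z_bar are the CIE 1931 2-degree color
# matching functions sampled on the same grid.
x_bar, y_bar, z_bar = load_cie_1931_cmfs(wavelengths)

def spectrum_to_xyz(spectrum, step=1.0):
    """Integrate an arbitrary spectral power distribution against the matching functions."""
    X = np.sum(spectrum * x_bar) * step
    Y = np.sum(spectrum * y_bar) * step
    Z = np.sum(spectrum * z_bar) * step
    return np.array([X, Y, Z])
```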

After we have XYZ values - we do a bit of a transform to get xyY color (which means that apparent intensity is factored out into Y, and the xy coordinates represent chromaticity - hue and saturation). If we take every possible light source spectrum, calculate XYZ, derive xyY and plot x,y, we will get this shape:

[Image: CIE xy chromaticity diagram with spectral locus]

Everything inside that curve is what we can see. Points on the curve itself represent pure (single wavelength) light.
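Continuing the sketch above, the XYZ -> xy step is just a normalization; applying it to the matching functions themselves (i.e. to pure single wavelengths) traces out exactly that horseshoe-shaped curve:

```python
def xyz_to_xy(xyz):
    """Project an XYZ triplet to xy chromaticity (the luminance part is discarded)."""
    return xyz[:2] / xyz.sum()

# Chromaticities of pure single wavelengths - the spectral locus on the diagram above.
locus = np.array([xyz_to_xy(np.array([x_bar[i], y_bar[i], z_bar[i]]))
                  for i in range(len(wavelengths))])
```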

Determining the color gamut of the sensor can then be viewed as:

- take all possible spectra in the 380-700nm range

- calculate raw RGB values (multiply each spectrum by the sensor/filter response curves and sum/integrate the area under the resulting curves)

- transform each of those raw RGB triplets into XYZ values with an appropriate transform matrix

- transform XYZ to xyY, take x,y and plot a point on the above diagram

- take all the xy points produced and see what sort of surface they cover - the larger the surface, the larger the gamut of the sensor/filter combination

Compare that with the sRGB gamut, or between different sensors or sensor/filter combinations, to see how much proper color information each combination can record.

The problem is of course that we cannot generate all possible spectra in the 380-700nm range - there are infinitely many of them. Even if we put crude restrictions on what a spectrum may look like, we still get an enormous number of combinations. If we for example say that a spectrum looks like a bar graph with bars 10nm wide and possible values between 0 and 100% in 10% increments - we still end up with something like 10^32 combinations to examine.
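Just to illustrate the brute force idea in code (as a crude random sample rather than the impossible full enumeration), something like the following could be used. The sensor_r/g/b curves (sensor QE multiplied by filter transmission, sampled on the same wavelength grid as the matching functions) and the raw_to_xyz matrix are assumed to exist already - deriving that matrix is a separate problem:

```python
def spectrum_to_raw(spectrum, step=1.0):
    """Raw R, G, B values the sensor/filter combination records for a given spectrum."""
    return np.array([np.sum(spectrum * sensor_r),
                     np.sum(spectrum * sensor_g),
                     np.sum(spectrum * sensor_b)]) * step

rng = np.random.default_rng(0)
xy_points = []
for _ in range(100_000):                     # random sample instead of all ~10^32 spectra
    spectrum = rng.random(len(wavelengths))  # one arbitrary non-negative spectrum
    xyz = raw_to_xyz @ spectrum_to_raw(spectrum)
    xy_points.append(xyz_to_xy(xyz))
xy_points = np.array(xy_points)              # cloud of chromaticities the combination reports
```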

But we have seen that some spectra end up giving the same result - so we don't need to examine all possible spectra to get a rough idea of what the sensor gamut looks like.

Does anyone have any idea how to proceed with this?


For anyone interested in this topic, it turned out that the above is fairly easy to do (at least I think so at this point - I don't yet have a full mathematical proof).

If we observe any mix of two fixed wavelengths λ1 and λ2 with arbitrary intensities I1 and I2, it turns out that in XYZ space this forms a plane:

X = a * I1 + b * I2

Y = c * I1 + d * I2

Z = e * I1 + f * I2

where I1 and I2 are the parameters of this parametric form of a plane (the intensities of the respective wavelengths), and a,b; c,d; e,f are the values of the X, Y and Z color matching functions at wavelengths λ1 and λ2.

After we transform this plane with the XYZ -> xyY transform it turns into a line. I still don't have a mathematical proof of that, but I did check it numerically (by plotting), and it is in line with this section of the wiki article on the CIE XYZ space and xy chromaticity diagram:

Quote

If one chooses any two points of color on the chromaticity diagram, then all the colors that lie in a straight line between the two points can be formed by mixing these two colors. It follows that the gamut of colors must be convex in shape. All colors that can be formed by mixing three sources are found inside the triangle formed by the source points on the chromaticity diagram (and so on for multiple sources).

source: https://en.wikipedia.org/wiki/CIE_1931_color_space

The projection of that line onto the xy plane will also be a line.
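Here is a rough sketch of why that should hold (it is just the standard convexity argument, so not a rigorous proof on my part). Tristimulus values add linearly, so a mixture with intensities a and b of the two wavelengths has XYZ = a·(X1, Y1, Z1) + b·(X2, Y2, Z2). Writing S1 = X1 + Y1 + Z1 and S2 = X2 + Y2 + Z2:

\[
x = \frac{a X_1 + b X_2}{a S_1 + b S_2} = t\,x_1 + (1 - t)\,x_2,
\qquad t = \frac{a S_1}{a S_1 + b S_2} \in [0, 1]
\]

and the same form holds for y, so the mixture's chromaticity is a convex combination of (x1, y1) and (x2, y2) - it lies on the straight line segment between them.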

In the last step, if we derive a matrix transform (3x3 matrix) between camera raw space and XYZ - it will preserve planes (matrices represent linear transforms), so any mix of two wavelengths recorded by our sensor will also form a line once transformed into the xy chromaticity diagram.

From this it is easy to see that all one needs to do is take each wavelength (for example 400-700nm with a step of 1nm), calculate the raw triplet from the sensor + filter curves, transform by the transform matrix to get the matching XYZ, and then derive xy via the known XYZ -> xyY transform.

After plotting those on the xy chromaticity diagram, the gamut of the sensor/filter combination will be the convex hull of said points.

(mind you, this shape will depend on the chosen transform matrix, so choosing that matrix is another interesting topic, and so is the transform error).
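Here is a minimal sketch of that per-wavelength procedure, reusing the hypothetical sensor_r/g/b curves and raw_to_xyz matrix from the earlier sketch (and assuming everything is sampled on a common 1nm grid):

```python
from scipy.spatial import ConvexHull

# Raw response of the sensor/filter combination to each pure wavelength.
mono_raw = np.stack([sensor_r, sensor_g, sensor_b], axis=0)   # shape 3 x N

mono_xyz = raw_to_xyz @ mono_raw           # XYZ triplet for every single wavelength
s = mono_xyz.sum(axis=0)
keep = s > 0                               # skip wavelengths the combination cannot record at all
xy = (mono_xyz[:2, keep] / s[keep]).T      # N x 2 chromaticity points

hull = ConvexHull(xy)                      # gamut of the sensor/filter combination
gamut_area = hull.volume                   # for a 2D hull, .volume is the enclosed area
```

The resulting area can then be compared against the sRGB triangle or against another sensor/filter combination.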


I always enjoy reading your posts, but rarely feel like I have anything useful to contribute to the discussion.  This time, I think I have a little knowledge from a previous career in chemical manufacturing of dyes that allows me to comment on some aspects of your post.

My understanding of the term gamut (which could be wrong) is that it is always a subset of the full range of possible chromaticity and lightness combinations from an infinite variation of spectral curves. In this respect it is true to say that both human vision and a camera sensor do not have a gamut as such. We can see every possible chromaticity and lightness combination (although the ability to perceive differences between 2 combinations is not uniform across the whole colour space). Every display medium (screen or print) is defined as having a gamut because there are some points in the whole colour space that cannot be accurately represented.

The ability of human vision to differentiate between 'colours' comes from the fact that every wavelength within the visible spectrum triggers a non-zero response from at least 2 out of the 3 types of cone cells.  Our perception of colour comes from the relative strength of the response from each type of cone cell.  When it comes to a mono camera and RGB filters, the measurement doesn't follow the same pattern.  The filters sample the incident light within 3 distinct bands with very little crossover.  You correctly state that this means the camera with a red filter cannot differentiate between a source at 605nm and one at 615nm.  Perhaps human vision could be more accurately simulated if it were possible to make pseudo-RGB filters with transmission curves like the primary colour functions of the CIE standard observer?

I look forward to reading where your investigation goes from here.

Graeme


3 minutes ago, GraemeH said:

My understanding of the term gamut (which could be wrong) is that it is always a subset of the full range of possible chromaticity and lightness combinations from an infinite variation of spectral curves. In this respect it is true to say that both human vision and a camera sensor do not have a gamut as such.

I had a sneaking suspicion that I don't fully understand the term gamut. In some sense I have a similar understanding of it to yours - it is a subset of all possible chromaticities (lightness is not important here as it is a function of intensity, at least I think so).

Maybe it is useful to think in relative terms, so we can say that human vision is the full gamut and any subset of that is a narrower/smaller gamut than the full gamut?

In any case, I fail to see how a sensor does not have the property of being capable of recording the whole or part of the gamut, or in fact of distinguishing what the human eye is capable of distinguishing.

Not all sensor/filter combinations are like this - some have "full gamut" while others do not, and we can pose the question: how large is the gamut of a certain sensor/filter combination?

Let me give you an example:

You have the regular tri-band filter/sensor combination graphs above - look at the sensor QE curve and each filter transmission curve to get an idea of the combined curves. But one does not need to use such filters; one can use the following filters instead:

[Image: Astronomik RGB filter transmission curves]

These are Astronomik filters - they have some overlap and are more like the human eye response functions - my guess is that such a filter/sensor combination has a larger gamut.

The same is true for OSC sensors that have response curves like this:

[Image: OSC sensor Bayer (RGB) spectral response curves]

In the above diagram each wavelength can be identified by a unique ratio of raw "RGB" components - I suspect that it covers the whole gamut.

Another example, of an even smaller gamut, would be this:

Take regular RGB filters with any sensor and add a light pollution filter - something like a CLS or IDAS LPS P2:

[Image: CLS filter transmission curve]

[Image: IDAS LPS P2 filter transmission curve]

Those filters block light completely in some ranges - and the filter/sensor combination won't even be able to record those wavelengths, let alone distinguish between them.

I think it would be a good idea to find a way to characterize sensor/filter combinations - ones that can record all the colors a computer screen is capable of showing (the sRGB gamut) and those that can't.
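As a rough sketch of such a test (reusing the xy points and hull from the convex hull sketch above), one could simply check whether the sRGB primaries fall inside the sensor/filter gamut polygon:

```python
from matplotlib.path import Path

# sRGB primary chromaticities, as given in the sRGB specification.
srgb_primaries = np.array([[0.64, 0.33],   # red
                           [0.30, 0.60],   # green
                           [0.15, 0.06]])  # blue

gamut_polygon = Path(xy[hull.vertices])    # hull and xy from the earlier gamut sketch
covers_srgb = gamut_polygon.contains_points(srgb_primaries).all()
```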

 

 


28 minutes ago, vlaiv said:

I think it would be a good idea to find a way to characterize sensor/filter combinations - ones that can record all the colors a computer screen is capable of showing (the sRGB gamut) and those that can't.

I think this is exactly right - the sensor doesn't have a gamut by any definition I understand, but the sensor/filter combination might do.  


32 minutes ago, Captain Magenta said:

Roger Clark has recently added the below to his extensive site to try to help illuminate this topic, which I found interesting...

https://clarkvision.com/new_articles.html

I have a couple of issues with that article.

It states, for example, that the CIE XYZ color space is not a good color space for a couple of reasons:

- because it is not perceptually uniform

- because simple linear transforms between that color space and some other color spaces lead to errors

I got the impression that for the above reasons (both well known things) the author deems the CIE XYZ color space somehow wrong or inferior.

There is another issue that I have - for example, using the RGB matching functions as an example without giving an explanation of how the experiment was conducted (using three pure wavelength primaries and a reflective color arrangement):

Here is a quote from the wiki article on CIE XYZ describing that:

[Image: xy chromaticity diagram showing the triangle formed by three test primaries]

Three primaries used in the test, shown on the xy chromaticity diagram - any color that can be produced by additive mixture of the primaries lies within the triangle.

Quote

The experiments were conducted by using a circular split screen (a bipartite field) 2 degrees in diameter, which is the angular size of the human fovea. On one side a test color was projected while on the other an observer-adjustable color was projected. The adjustable color was a mixture of three primary colors, each with fixed chromaticity, but with adjustable brightness.

The observer would alter the brightness of each of the three primary beams until a match to the test color was observed. Not all test colors could be matched using this technique. When this was the case, a variable amount of one of the primaries could be added to the test color, and a match with the remaining two primaries was carried out with the variable color spot. For these cases, the amount of the primary added to the test color was considered to be a negative value. In this way, the entire range of human color perception could be covered. When the test colors were monochromatic, a plot could be made of the amount of each primary used as a function of the wavelength of the test color. These three functions are called the color matching functions for that particular experiment.

Although Wright and Guild's experiments were carried out using various primaries at various intensities, and although they used a number of different observers, all of their results were summarized by the standardized CIE RGB color matching functions r̄(λ), ḡ(λ), and b̄(λ), obtained using three monochromatic primaries at standardized wavelengths of 700 nm (red), 546.1 nm (green) and 435.8 nm (blue).

In any case, the issues that exist with color reproduction can't be attributed to any inferiority of the CIE XYZ color space.


39 minutes ago, GraemeH said:

I think this is exactly right - the sensor doesn't have a gamut by any definition I understand, but the sensor/filter combination might do.  

I'm still having issues with the definition of the word gamut :D

Here it is from the wiki article:

Quote

In color reproduction, including computer graphics and photography, the gamut, or color gamut /ˈɡæmət/, is a certain complete subset of colors. The most common usage refers to the subset of colors which can be accurately represented in a given circumstance, such as within a given color space or by a certain output device.

Another sense, less frequently used but still correct, refers to the complete set of colors found within an image at a given time. In this context, digitizing a photograph, converting a digitized image to a different color space, or outputting it to a given medium using a certain output device generally alters its gamut, in the sense that some of the colors in the original are lost in the process.

I see no reason why we could not extend the above definition to include a device capable of recording - in the same sense as the set of colors found within an image.


I know exactly what you are trying to do here.  Funnily enough, earlier this week I had a disagreement with someone on another astro-forum who called the Bayer filters "sloppy" because their transmission bands overlap!

It's obvious to most people (but not to the contributor to that forum) that the sharp cut-off RGB filters typically used for astro-imaging are inferior for colour reproduction. Trying to image a rainbow is a great illustration of this: the sharp cut-off RGB filters cannot reproduce the continuous change of colour within the rainbow. But this is not a problem of gamut. Gamut applies to display devices. For instance an LED display can reproduce all the colours within the colour triangle formed by its Red, Green and Blue LEDs. This is its gamut.

However, a camera with RGB filters can be considered to be full gamut because it is able to record all those colours i.e. there is no colour it is unable to record unless there are gaps between the filter transmission bands.  The problem it has is the inability to distinguish between a wide range of colours  i.e. many different colours give exactly the same RGB pixel output values from the sensor.  The concept you need is "metameric failure".  This is the inability of the camera to distinguish between colours that the human eye sees as being different.  Those who test consumer cameras will report a "sensitivity metamerism index" (SMI) for the camera which is a standard way to measure its colour accuracy.

Mark


2 hours ago, sharkmelley said:

The problem it has is the inability to distinguish between a wide range of colours  i.e. many different colours give exactly the same RGB pixel output values from the sensor.

Again, something similar happens with an sRGB display if you try to view an image that has been recorded in a wider gamut - colors that the monitor can't display will be "clipped" to colors that it can display. In that sense it is the same, isn't it?

In any case - I'll accept that gamut is related to reproduction rather than sensors and will look up metameric failure, so thanks for that.


6 hours ago, sharkmelley said:

I just accidentally found a document that defines gamut of a sensor.  See section 2.1:

https://corp.dxomark.com/wp-content/uploads/2017/11/EI-2008-Color-Sensitivity-6817-28.pdf

So contrary to what I thought, the concept does exist!

Mark

That paper pretty much describes what I intended to do. There are some differences though. They deal with daytime photography, so the illuminant needs to be taken into account.

With AP we don't need to worry about that aspect - we can treat all light as coming from a light source with a precisely defined spectrum (to some extent even reflection nebulae, although they don't generate their own light - there is simply no way to illuminate them with a different type of light).

We also have a way of standardizing the transform matrix between sensors. There are a couple of ways to do it, but the most obvious is to derive the transform matrix from a range of black body spectra in a certain temperature range (star colors). We can also include some "standard" narrow band lines like Ha/Hb, OIII, SII and such in the set used to compute the transform matrix. That should give us a "standard color card" for color calibration.
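A minimal sketch of that idea, reusing the hypothetical spectrum_to_raw / spectrum_to_xyz helpers and sensor curves from earlier in the thread, and fitting the 3x3 matrix by least squares over a set of black body spectra (narrow band lines could simply be appended to the same training set):

```python
def planck(wl_nm, T):
    """Black body spectral radiance (arbitrary scale) at temperature T, wavelength in nm."""
    wl = wl_nm * 1e-9
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wl**5) / (np.exp(h * c / (wl * k * T)) - 1)

raw_samples, xyz_samples = [], []
for T in range(3000, 10001, 500):            # star-like temperature range
    spec = planck(wavelengths, T)
    spec /= spec.max()                       # normalize each training spectrum
    raw_samples.append(spectrum_to_raw(spec))
    xyz_samples.append(spectrum_to_xyz(spec))

raw_samples = np.array(raw_samples)          # N x 3 raw triplets
xyz_samples = np.array(xyz_samples)          # N x 3 matching XYZ triplets

# Least squares fit: raw_samples @ M ~= xyz_samples, so that XYZ = M.T @ raw.
M, *_ = np.linalg.lstsq(raw_samples, xyz_samples, rcond=None)
raw_to_xyz = M.T
```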

