Keep close to the authentic look of the object, or create a stunning, award-winning, over-processed photo?



2 hours ago, vlaiv said:

You have written the following step above:

CameraRGB->XYZ(D50)->sRGB

While that is correct, and the same as what I have written above, it does not help in understanding what is going on. For the sake of correctness, the same step can and should be written like this:

CameraRAW->D50 adaptation->XYZ->(D50 adapted)sRGB

I don't understand what you've written above.

Chromatic adaptation has a precise meaning.  Typically, when we transform from one colour space to another, we need to perform a white balancing operation.  Simplistically, this is a scaling of the XYZ values in the profile connection space.  But in addition we need to perform chromatic adaptation, which "twists" the XYZ colour space so that non-white colours will be perceived "correctly" when the white balance changes.  Chromatic adaptation leaves whites and greys unchanged.  Unfortunately it is not possible to apply a matrix that performs pure adaptation (e.g. Bradford, Von Kries) on its own; it must be combined into a matrix that also performs the white balancing.

You can see this on Lindbloom's page: Chromatic Adaptation (brucelindbloom.com)

So the matrix CameraRGB->XYZ(D50) will combine white balancing and chromatic adaptation into a single matrix.  Those are not the only things the matrix does because it also includes the transformation of the colour primaries, of course.

[Edit:  Rewritten for accuracy ]

Mark


1 hour ago, sharkmelley said:

I don't understand what you've written above.

Chromatic adaptation has a precise meaning.  Typically when we transform from one colour space to another we need to perform a white balancing operation.  This is a simple scaling of the RGB values.  In addition we need to perform chromatic adaptation which "twists" the colour space so that non-white colours are perceived "correctly".  Chromatic adaptation leaves whites and greys unchanged.  A matrix that performs pure adaptation (e.g. Bradford, Von Kries) will have values that sum to 1.0 in each row.  Multiplying a unit vector by the chromatic adaptation matrix will leave the unit vector unchanged but all other colours in the colour space will be "twisted".

You can see this on Lindbloom's page: Chromatic Adaptation (brucelindbloom.com)

So the matrix CameraRGB->XYZ(D50) will combine white balancing and chromatic adaptation into a single matrix.  Those are not the only things the matrix does because it also includes the transformation of the colour primaries, of course.

 

[Edit:  Correction - the rows summing to 1 is true for Bradford but apparently not for Von Kries.  In fact, mathematically you can't separate out a pure chromatic adaptation matrix at all - that was a big oversimplification from my faulty memory! ]

Mark

From the Wikipedia page on chromatic adaptation:

Quote

Chromatic adaptation is the human visual system’s ability to adjust to changes in illumination in order to preserve the appearance of object colors. It is responsible for the stable appearance of object colors despite the wide variation of light which might be reflected from an object and observed by our eyes. A chromatic adaptation transform (CAT) function emulates this important aspect of color perception in color appearance models.

An object may be viewed under various conditions. For example, it may be illuminated by sunlight, the light of a fire, or a harsh electric light. In all of these situations, human vision perceives that the object has the same color: a red apple always appears red, whether viewed at night or during the day. On the other hand, a camera with no adjustment for light may register the apple as having varying color. This feature of the visual system is called chromatic adaptation, or color constancy; when the correction occurs in a camera it is referred to as white balance.

(The text is transferred as-is, without any emphasis on my part.)

Note the last sentence: when humans see paper as white under a 3000 K illuminant, that is chromatic adaptation. When a camera does it, it is called white balance, or a chromatic adaptation transform (CAT).

There are three different types of chromatic adaptation transform.

The first is a simple scaling of the primaries - this tries to make an object that we perceived as white be white in the colour space of choice (or rather, at the white point of that space). It involves three scalar values.

The second is a matrix - a somewhat more complex case.

The third is a non-linear transform, or a matrix in a non-linear colour space - often modelled with perceptual uniformity in mind.

All of these deal with the effect of chromatic adaptation - not with regular colour space transformations.

Colour space transformations preserve colour; chromatic adaptation changes colours so that they look as much as possible like what a person would have seen under the original circumstances - to match the different viewing circumstances.
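To illustrate the second type, here is a minimal numpy sketch of building a Bradford adaptation matrix between two white points, following the construction shown on Lindbloom's Chromatic Adaptation page (the illuminant chromaticities are the standard 2° observer values; this is an illustration, not anyone's production code):

```python
import numpy as np

# Bradford cone response matrix, as tabulated on Lindbloom's
# "Chromatic Adaptation" page
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def xy_to_xyz(x, y, Y=1.0):
    """xy chromaticity -> XYZ tristimulus with luminance Y."""
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])

def bradford_cat(src_white_xy, dst_white_xy):
    """3x3 matrix adapting XYZ colours from one white point to another."""
    src = M_BRADFORD @ xy_to_xyz(*src_white_xy)   # source white in cone space
    dst = M_BRADFORD @ xy_to_xyz(*dst_white_xy)   # destination white in cone space
    # von Kries-style diagonal scaling of the cone responses,
    # sandwiched between the Bradford matrix and its inverse
    return np.linalg.inv(M_BRADFORD) @ np.diag(dst / src) @ M_BRADFORD

# Example: adapt from illuminant A to D50 (2-degree observer chromaticities)
A, D50 = (0.44757, 0.40745), (0.34567, 0.35850)
M = bradford_cat(A, D50)

print(M @ xy_to_xyz(*A))    # lands exactly on the XYZ of D50 white
```

By construction the source white lands exactly on the destination white, while every other colour is "twisted" by the same matrix - which is exactly the distinction drawn above between preserving colour and adapting it.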

 


Fair enough - it's pointless arguing about terminology.

But I still don't understand what you meant when you wrote:

CameraRAW->D50 adaptation->XYZ->(D50 adapted)sRGB

[Edit:  Don't worry - I've worked it out by re-reading the whole post. ]

Mark


I'm about to do some experiments, and perhaps it's best to explain by describing what I'm planning to do.

I have a couple of astronomy cameras: ASI1600 mono + filters, ASI178 colour and ASI185 colour. I want to create "camera raw" to XYZ transformations for each of them.

The easiest approach, when one has the QE response of the camera for the R, G and B filters, is to use mathematics. This is what I plan to do with the published QE graphs of these cameras (although I'm not 100% certain they are correct - I'll still do it as part of the experiment).

You start by generating a number of random spectra in the relevant range of 360-830 nm (the XYZ matching functions I've found are tabulated over that range), and calculate the X, Y and Z values for each of those spectra, given by this expression:

$$X = \int_{360}^{830} S(\lambda)\,\bar{x}(\lambda)\,d\lambda, \quad Y = \int_{360}^{830} S(\lambda)\,\bar{y}(\lambda)\,d\lambda, \quad Z = \int_{360}^{830} S(\lambda)\,\bar{z}(\lambda)\,d\lambda$$

I also calculate the rawR, rawG and rawB values in the same way, except using the camera's QE curves as the matching functions for rawR, rawG and rawB.

In the end I simply do a least squares fit to find the matrix that maps between the corresponding (rawR, rawG, rawB) and (X, Y, Z) vectors.

I have now obtained a matrix that transforms camera space to XYZ space (note that there is no white point / illuminant involved in this process, as none is needed).
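Here is a sketch of that procedure in numpy. The matching functions and camera QE curves are assumed to be tabulated on a common 1 nm grid over 360-830 nm, and the .npy file names are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed available, both sampled on the same 1 nm grid over 360-830 nm:
#   cmfs:   (471, 3) CIE x-bar, y-bar, z-bar matching functions
#   cam_qe: (471, 3) camera rawR, rawG, rawB QE curves
cmfs = np.load("cie_xyz_1931_2deg.npy")    # hypothetical file name
cam_qe = np.load("asi1600_qe.npy")         # hypothetical file name

# Random smooth spectra as test lights (smoothed uniform noise)
n = 5000
spectra = rng.random((n, cmfs.shape[0]))
kernel = np.ones(25) / 25.0
spectra = np.apply_along_axis(np.convolve, 1, spectra, kernel, mode="same")

# Integrate each spectrum against the response curves (Riemann sum, 1 nm step)
XYZ = spectra @ cmfs       # one (X, Y, Z) row per spectrum
RAW = spectra @ cam_qe     # one (rawR, rawG, rawB) row per spectrum

# Least squares fit: find M minimising ||RAW @ M.T - XYZ||^2
M = np.linalg.lstsq(RAW, XYZ, rcond=None)[0].T    # so that xyz ~= M @ raw
print(M)
```

The same lstsq call works unchanged for the colour-checker variants described below; the only difference is where the (rawR, rawG, rawB) and (X, Y, Z) pairs come from.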

Now imagine that I don't have the camera curves - only the camera - and I don't have the XYZ matching functions - only an ideal sensor with XYZ component responses. How would I go about creating the matrix?

That part is easy as well - I either take a colour checker chart and illuminate it with any broadband illuminant, or take any display (regardless of calibration) and show a colour checker pattern; then I take one image with my camera and another with the perfect XYZ camera.

Then I measure the rawR, rawG and rawB values for each of the colours in the chart, and I measure X, Y and Z for them as well - I again have a set of vectors in one space and the corresponding vectors in the other. I can use the least squares method to derive the camera raw to XYZ transform matrix.

So far so good: both of the above examples are independent of illumination, and both use XYZ as the reference for calibration - either the matching functions in mathematical form, or an actual sensor with that response.

But what if I have neither the sensor response curves nor an XYZ sensor - which is the reality for most people? (Sensor response curves can be obtained with a bit of spectroscopy, but that is not trivial either - one needs expensive calibrated equipment and a lot of knowledge.)

What if I have something that is relatively inexpensive and readily available? Something like this:

[Image: X-Rite ColorChecker Classic chart]

And I also get this chart:

[Image: table of published sRGB and CIE L*a*b* values for the ColorChecker patches]

and somewhere it says that this chart was produced with D50 as the illumination source.

I then say: great, I'm now going to use these sRGB values to derive standard XYZ values from them; and since they say D50 is the illuminant, I'm going to use D50 to illuminate my chart and record it with the camera - and I have my set of vectors.

But there is a catch that I'm not understanding: sRGB values are D65 white point values. If I take colour number 20 with RGB 200, 200, 200 and calculate XYZ, I'll get D65 XYZ coordinates. But when I take a D50 illuminant, illuminate this chart and record the data, patch number 20 will no longer have D65 coordinates but D50 coordinates.

On one side I have camera raw data recorded under D50 illumination, and on the other side I have XYZ produced from sRGB under D65 illumination.
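To put numbers on that catch, here is the standard decoding of patch 20's published value (8-bit 200, 200, 200) into XYZ, using the standard sRGB transfer function and the standard linear sRGB->XYZ matrix (D65 white point):

```python
import numpy as np

def srgb_decode(c8):
    """8-bit sRGB -> linear RGB in [0, 1] (standard sRGB transfer function)."""
    c = np.asarray(c8) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# Standard linear sRGB -> XYZ matrix (D65 white point)
M_SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1192152, 0.9503041],
])

xyz = M_SRGB_TO_XYZ @ srgb_decode([200, 200, 200])
x, y = xyz[0] / xyz.sum(), xyz[1] / xyz.sum()
print(xyz)     # ~ (0.549, 0.578, 0.629)
print(x, y)    # ~ (0.3127, 0.3290): D65 white point, not D50's (0.3457, 0.3585)
```

So the published equal-RGB grey decodes to the D65 chromaticity, while the photographed patch under D50 light will sit at the D50 chromaticity - exactly the mismatch described above.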

When I do the least squares fit, I am actually saying: I want the grey square illuminated with D50 to match the same grey square illuminated with D65. I'm not only deriving camera raw to XYZ - I'm deriving a combined transformation consisting of two parts: a D50 to D65 white balance plus the conversion to XYZ.

I was under the impression that this is what was going on, according to your interpretation: take the colour chart, illuminate it with D50, and the result will end up as 1,1,1 in sRGB. This is what you insisted would happen.

What I am saying - when I say that CameraRaw to XYZ(D50) is just a moniker, and that XYZ does not have D50 as a white point - is the same process, but it goes like this:

Take the colour chart, illuminate it with Illuminant X, take the sRGB values above, do a chromatic adaptation from D65 to Illuminant X, then convert to the XYZ colour space, and then do the least squares fit.

This produces the same result regardless of what Illuminant X is. We can say XYZ(Something) and XYZ(Other), but in reality XYZ(Something) == XYZ(Other), so what is the point of specifying the illuminant used to derive the matrix?

The only difference shows up in the resulting colour error. We are trying to derive a fit-all matrix from only 20 samples. That is not going to work equally well in all cases (nor is there a perfect matrix for any given camera), and the two matrices derived - one using D50 as Illuminant X, the other using Illuminant A - will have slightly different values, but they perform the same task, only with different error margins for different sets of spectra.

Does that make sense?
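The claim that the two matrices perform the same task is testable with a small simulation: fit the matrix twice against two very different illuminants and compare. Below is a sketch under stated assumptions - normalised blackbody spectra as crude stand-ins for D50 and Illuminant A, and the same hypothetical cmfs / cam_qe arrays as in the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.arange(360, 831) * 1e-9                  # wavelengths in metres

def planck(T):
    """Normalised blackbody spectrum at temperature T (stand-in illuminant)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    B = wl**-5.0 / (np.exp(h * c / (wl * k * T)) - 1.0)
    return B / B.max()

cmfs = np.load("cie_xyz_1931_2deg.npy")          # hypothetical, as before
cam_qe = np.load("asi1600_qe.npy")               # hypothetical, as before

refl = rng.random((1000, wl.size))               # random patch reflectances

def fit_matrix(illum):
    spectra = refl * illum                       # light leaving each patch
    XYZ = spectra @ cmfs
    RAW = spectra @ cam_qe
    return np.linalg.lstsq(RAW, XYZ, rcond=None)[0].T

M_daylight = fit_matrix(planck(5000))            # crude D50 stand-in
M_tungsten = fit_matrix(planck(2856))            # crude illuminant A stand-in
print(np.max(np.abs(M_daylight - M_tungsten)))   # similar, but not identical
```

Whether the two matrices come out close depends on how well the camera's curves can be mapped onto the matching functions by a single 3x3 matrix - which is the error margin referred to above.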

 


I think what you are ultimately trying to do is to generate a matrix that goes from CameraRGB to absolute XYZ, in the emissive case.

I'll need to think about the details when I have some time - especially the fact that the Correction Matrices are different for different illuminants - it's an interesting puzzle which I've never thought carefully about before.

Mark


21 minutes ago, sharkmelley said:

I think what you are ultimately trying to do is to generate a matrix that goes from CameraRGB to absolute XYZ, in the emissive case.

In principle, the emissive case is no different from the case with the colour checker chart - as long as you don't try to adjust the white balance, and you leave the colours as seen by the sensor rather than as expected by the adapted eye.

In the above case, one can use a D50 illuminant and still use the colour checker chart - but we can't use the sRGB values given. I think part of the problem is that people believe they can use any white point with the sRGB space - that is not true.

The sRGB space is specifically designed so that D65 is 1,1,1 - it is so by definition. If you have any other illuminant and you convert to sRGB and get 1,1,1, then you have performed a chromatic adaptation transform (or, in different language, you have white balanced to that illuminant).

That is why the "instructions" I posted above are misleading. They say: use a D50 illuminant on our colour checker card and expect the grey square to come out 1:1:1 in sRGB. But that can't happen unless you do a D50->D65 white balance, or you decide to ignore the sRGB standard and use a different XYZ->sRGB matrix (a "colour corrected" one - which is basically the same thing as doing the white balance first and then doing the regular XYZ->sRGB transform).
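To make "basically the same thing" concrete: composing the D50->D65 Bradford adaptation with the standard XYZ->sRGB matrix reproduces the "Bradford-adapted" XYZ(D50)->sRGB matrix tabulated on Lindbloom's site. A short numpy sketch:

```python
import numpy as np

# Standard XYZ -> linear sRGB matrix (D65 white point, per the sRGB spec)
M_XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

# Bradford adaptation D50 -> D65, built as in the earlier sketch
B = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])
XYZ_D50 = np.array([0.96422, 1.0, 0.82521])
XYZ_D65 = np.array([0.95047, 1.0, 1.08883])
CAT = np.linalg.inv(B) @ np.diag((B @ XYZ_D65) / (B @ XYZ_D50)) @ B

# The "colour corrected" matrix: white balance D50->D65, then standard sRGB
M_ADAPTED = M_XYZ_TO_SRGB @ CAT

print(M_ADAPTED @ XYZ_D50)       # ~= (1, 1, 1): D50 white becomes sRGB white
print(M_XYZ_TO_SRGB @ XYZ_D50)   # != (1, 1, 1): without adaptation, not white
```

The first print shows that D50 white becomes sRGB (1,1,1) only because of the adaptation step; with the plain standard matrix it comes out visibly reddish, around (1.18, 0.98, 0.72).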

Hopefully we have concluded that when one shoots with a DSLR, selects a custom white balance (set to 0, 0) and uses dcraw to convert the image to XYZ, one gets proper "emission case" XYZ values. That is enough for astrophotography - those values are already "properly balanced", or authentic.

We can later correct for atmospheric reddening or whatever - but that is accurate colour in terms of wavelengths.

I'm planning to derive colour transform matrices for a few of my cameras in different ways and see how similar they end up being (not to each other, but across methods). If all the matrices end up fairly close, then any of the methods can be used.


I'm suspicious of the chart you showed giving the sRGB and CIE L*a*b* coordinates for the ColorChecker patches.  The CIE L*a*b* figures look good and agree well with the CIE L*a*b* (D50) figures in the BabelColor spreadsheet:  https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip  But the sRGB figures are not correct - for instance the Cyan patch should be out of gamut in sRGB.

In any case, I think a practical approach to calibrating a colour matrix would be as follows:

  • Illuminate the ColorChecker with daylight and take a photo
  • Perform RGB scaling on the raw data so that the grey ColorChecker patches appear neutral i.e. Rvalue=Gvalue=Bvalue
  • Solve for the matrix that does a best fit transform from the scaled CameraRGB values to the known ColorChecker values in XYZ(D50).  It's best to perform the least squares fit in L*a*b* (D50)  -  the well known deltaE error approach

The problem is that you won't know exactly what your daylight illuminant was so we need to do an additional calibration against a known illuminant.  So take an image of a starfield and find the RGB scaling on the raw data that makes a G2V star neutral - PixInsight PhotometricColorCalibration might help here.  Using this scaling and the matrix just calibrated we create a new matrix CameraRaw->XYZ(D50) that maps a G2V star to the D50 chromaticity in XYZ.  Now apply chromatic adaptation from D50 to the chromaticity of G2V.  The result is a matrix that will map the colour of G2V from CameraRaw to the G2V chromaticity in XYZ.  You can then apply your XYZ->sRGB matrix - the one with no chromatic adaptation.

Personally I'm happy omitting the final step, so I'm happy for the G2V star to map to the D50 chromaticity in XYZ and then to become (1,1,1) in sRGB using the XYZD50->sRGB matrix.  This would then have the appearance of white to me which is what I'm trying to achieve but I accept it differs from your goal.  In fact I would use XYZD50->AdobeRGB matrix because the variable gamma of the sRGB colour space makes subsequent colour preserving operations very difficult.

The main weakness of the whole procedure is that the resulting matrix will be subtly different depending on the exact illuminant used to take the original image of the ColorChecker.  I don't know what the answer to that is.

Mark


11 hours ago, sharkmelley said:

XYZ(D50)

Can you give me a definition of XYZ(D50)?

11 hours ago, sharkmelley said:

Perform RGB scaling on the raw data so that the grey ColorChecker patches appear neutral i.e. Rvalue=Gvalue=Bvalue

No need to do that - it will be incorporated into the transform matrix.

11 hours ago, sharkmelley said:

It's best to perform the least squares fit in L*a*b* (D50)  -  the well known deltaE error approach

I will have to look that up. I'm not sure how that is going to be done, as one needs to scale all triplets to unit length to avoid intensity scaling, and the Lab colour space is non-linear if I remember correctly. Yes, it involves a cube root.

Maybe the optimisation for the matrix, then, is not to minimise the sum of squared differences in XYZ space, but to minimise the sum of squared differences in Lab space while still applying the matrix in XYZ?
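Something along those lines can be done with a general-purpose optimiser: keep the 3x3 matrix acting in linear XYZ, but compute the residuals in L*a*b* (D50), so the summed squared deltaE is what gets minimised. A sketch assuming scipy is available; raw and xyz_ref stand for the measured patch values and their references (hypothetical file names):

```python
import numpy as np
from scipy.optimize import least_squares

XYZ_D50_WHITE = np.array([0.96422, 1.0, 0.82521])

def xyz_to_lab(xyz, white=XYZ_D50_WHITE):
    """CIE XYZ -> L*a*b* relative to the given white point."""
    t = xyz / white
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# Hypothetical measurements: camera raw triplets and reference XYZ triplets
raw = np.load("patches_raw.npy")        # (24, 3) scaled camera values
xyz_ref = np.load("patches_xyz.npy")    # (24, 3) reference XYZ (D50)
lab_ref = xyz_to_lab(xyz_ref)

def residuals(m):
    M = m.reshape(3, 3)
    lab_fit = xyz_to_lab(raw @ M.T)     # matrix acts in linear XYZ...
    return (lab_fit - lab_ref).ravel()  # ...error is measured in L*a*b* (deltaE)

m0 = np.linalg.lstsq(raw, xyz_ref, rcond=None)[0].T.ravel()  # XYZ fit as start
M = least_squares(residuals, m0).x.reshape(3, 3)
```

Starting from the plain XYZ least squares solution keeps the optimiser in a sensible neighbourhood; the deltaE refinement then trades XYZ accuracy for perceptual accuracy.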

11 hours ago, sharkmelley said:

Using this scaling and the matrix just calibrated we create a new matrix CameraRaw->XYZ(D50) that maps a G2V star to the D50 chromaticity in XYZ.

Why would we want to do that? D50 and G2V have known, and different, XYZ coordinates.

Mapping them to the same coordinate will distort colors in the image.


16 minutes ago, vlaiv said:

Can you give me definition of XYZ(D50)?

XYZ(D50) is the Profile Connection Space (PCS).  It is based on XYZ, but it's a perceptual colour space where the illuminant of any photographed scene is mapped to the D50 chromaticity.  In other words, when you open an image in your raw converter, the temperature/tint combination that you actively choose (or is chosen by default) will be mapped to D50 in the PCS.

The PCS is at the heart of all colour management because ICC profiles are based on the PCS.  It is the standard used by displays, printers etc.

I honestly cannot see how a ColorChecker can be used to generate a Colour Correction Matrix without reference to the PCS.

By the way it is really worth looking at the BabelColor spreadsheet I linked earlier: https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip

It's a fascinating resource which contains ColorChecker coordinates in many different colour spaces,  deltaE stats and even spectral power distributions of each colour patch of the ColorChecker.  The ColorChecker pages on BabelColor are also very informative: The ColorChecker Pages (Page 1 of 3) (babelcolor.com)

Mark

 


7 minutes ago, sharkmelley said:

XYZ(D50) is the Profile Connection Space (PCS).  It is based on XYZ, but it's a perceptual colour space where the illuminant of any photographed scene is mapped to the D50 chromaticity.  In other words, when you open an image in your raw converter, the temperature/tint combination that you actively choose (or is chosen by default) will be mapped to D50 in the PCS.

The PCS is at the heart of all colour management because ICC profiles are based on the PCS.  It is the standard used by displays, printers etc.

How are the coordinates of light sources different in XYZ(D50) than in the XYZ colour space?

Say I have these:

[Image: a table of light sources and their XYZ / xy coordinates]

What coordinates will those have in XYZ(D50)?

9 minutes ago, sharkmelley said:

I honestly cannot see how a ColorChecker can be used to generate a Colour Correction Matrix without reference to the PCS.

Very simple - you take your colour chart, illuminate it with any illuminant you like, and measure the resulting spectrum of each patch with a spectrometer.

Take the resulting spectrum of each patch and calculate the XYZ coordinates of that light.

Now take the same illuminant, illuminate the colour checker chart, and record it with the DSLR. Measure the CameraRaw values and perform a least squares fit of those vectors to the vectors above.

Note that two different parties can be involved, as long as they share the reference illuminant and the same colour checker.
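The "calculate the XYZ coordinates of that light" step is a straightforward numerical integration of the measured spectrum against the CIE matching functions - the same integrals given earlier in the thread. A sketch, assuming the spectrum and the matching functions have been resampled onto a common wavelength grid (file names hypothetical):

```python
import numpy as np

# Measured patch spectrum: two columns, wavelength (nm) and spectral power
wl, S = np.loadtxt("patch_spectrum.csv", delimiter=",", unpack=True)

# CIE 1931 matching functions on the same grid: x-bar, y-bar, z-bar columns
cmfs = np.loadtxt("cie_1931_2deg_resampled.csv", delimiter=",")  # (N, 3)

XYZ = np.trapz(S[:, None] * cmfs, wl, axis=0)   # the three CIE integrals
xy = XYZ[:2] / XYZ.sum()                        # chromaticity, if wanted
print(XYZ, xy)
```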


4 minutes ago, vlaiv said:

How are the coordinates of light sources different in XYZ(D50) than in the XYZ colour space?

Say I have these:

[Image: a table of light sources and their XYZ / xy coordinates]

What coordinates will those have in XYZ(D50)?

I'll answer with an example. 

Illuminate a ColorChecker with illuminant A i.e. incandescent light.  The grey patches appear neutral to the human observer because of eye/brain adaptation.  Take a photo of it - a raw file.  Open the raw file in the raw converter using the "Incandescent" setting.  The processing engine will generate a XYZ(D50) intermediate colour space (i.e. the PCS) where the grey patches of the ColorChecker will have xy chromaticity of (0.34567, 0.35850) i.e. D50.  When XYZ(D50) is transformed  to sRGB, AdobeRGB etc. the XYZ(D50) chromaticity of (0.34567, 0.35850) will be mapped to (1,1,1),  (0.7, 0.7, 0.7) etc. depending on intensity which will appear white/grey.  So to the human observer, the ColorChecker patches that appeared white or grey in the original scene will appear white or grey in the final image.

I didn't realise you had access to a spectrometer but it's certainly a good alternative for calibrating your CCM from a ColorChecker.

Mark


3 minutes ago, sharkmelley said:

Illuminate a ColorChecker with illuminant A i.e. incandescent light.  The grey patches appear neutral to the human observer because of eye/brain adaptation.  Take a photo of it - a raw file.  Open the raw file in the raw converter using the "Incandescent" setting.  The processing engine will generate a XYZ(D50) intermediate colour space (i.e. the PCS) where the grey patches of the ColorChecker will have xy chromaticity of (0.34567, 0.35850) i.e. D50.  When XYZ(D50) is transformed  to sRGB, AdobeRGB etc. the XYZ(D50) chromaticity of (0.34567, 0.35850) will be mapped to (1,1,1),  (0.7, 0.7, 0.7) etc. depending on intensity which will appear white/grey.  So to the human observer, the ColorChecker patches that appeared white or grey in the original scene will appear white or grey in the final image.

I don't really follow this.

That means that in XYZ(D50) space, Illuminant A has D50 coordinates? How about Illuminant D65 - does it also have D50 coordinates? From your description it follows that it does.

We can easily see that XYZ(D50) is not a very useful space, as all colours map to D50 (we can use an illuminant of any colour and, by analogy with the above, it will end up being D50).

5 minutes ago, sharkmelley said:

I didn't realise you had access to a spectrometer but it's certainly a good alternative for calibrating your CCM from a ColorChecker.

I personally don't have a spectrometer, but I do have a good enough spectroscope for that application (the Star Analyser 200 is more than capable of it) - and people making colour checker cards can certainly invest in a spectrometer.

From a quick look online, the latest small models are quite affordable for such a business - around $1000 - and have quite good resolution: something like 1024 or 2048 pixel linear CMOS sensors, with sensitivity in the 300-1000+ nm range (there are also IR models sensitive in the 1-10 µm range).

Of course, they come properly calibrated, so you don't have to do anything special like I would need to with the SA200.


3 minutes ago, vlaiv said:

I don't really follow this.

That means that in XYZ(D50) space, Illuminant A has D50 coordinates? How about Illuminant D65 - does it also have D50 coordinates? From your description it follows that it does.

We can easily see that XYZ(D50) is not a very useful space, as all colours map to D50 (we can use an illuminant of any colour and, by analogy with the above, it will end up being D50).

We are transforming the camera raw data so that the chosen white (in this case illuminant A) will end up with coordinates (0.34567, 0.35850) in XYZ(D50).  Everything else will end up at different points in XYZ(D50) space.   If "D65" was chosen instead then the camera raw data would receive a different transformation that makes "D65" end up with coordinates (0.34567, 0.35850) in XYZ(D50).  

XYZ(D50) is a perceptual colour space where everything is relative to the chosen white. The data are transformed so that the chosen white lands on (0.34567, 0.35850) in this space and all other colours will land relative to that reference white.

Think about what happens when you apply a chromatic adaptation matrix to the XYZ space.  The XYZ coordinates are scaled which would, for instance, move illuminant A to the D50 point.  XYZ(D50) is effectively an XYZ space to which a chromatic adaptation has been applied.

Mark

 

 


3 minutes ago, sharkmelley said:

We are transforming the camera raw data so that the chosen white (in this case illuminant A) will end up with coordinates (0.34567, 0.35850) in XYZ(D50).  Everything else will end up at different points in XYZ(D50) space.   If "D65" was chosen instead then the camera raw data would receive a different transformation that makes "D65" end up with coordinates (0.34567, 0.35850) in XYZ(D50).  

XYZ(D50) is a perceptual colour space where everything is relative to the chosen white. The data are transformed so that the chosen white lands on (0.34567, 0.35850) in this space and all other colours will land relative to that reference white.

It is much more than that - it is relative both to the selected illuminant and to the colour checker square that is "neutral grey".

We don't have much information on what that neutral grey is, so in principle it can be anything.

I still have not found an exact definition of the XYZ(D50) space - what does a coordinate in it mean? In fact, if I search Google for XYZ(D50), I don't find anything related to it - just text mentioning the regular XYZ colour space and the D50 illuminant.

12 minutes ago, sharkmelley said:

Think about what happens when you apply a chromatic adaptation matrix to the XYZ space.  The XYZ coordinates are scaled which would, for instance, move illuminant A to the D50 point.  XYZ(D50) is effectively an XYZ space to which a chromatic adaptation has been applied.

When I apply chromatic adaptation to the XYZ space, I'm effectively swapping the colours of the light in a certain way.

You can't adapt or change the XYZ space, or the coordinates in it - you adapt the image, so that its pixels represent a different colour in XYZ space than in the original image. The XYZ space remains the same. I just say: OK, pixel 1 no longer represents object W illuminated with illuminant V, but rather object W illuminated with illuminant Q, where illuminant Q will trigger the psychological response in our brain, under the new conditions, that is most similar to what illuminant V and object W triggered under the original conditions.

However, the spectra themselves don't change, and the associated coordinates in XYZ space don't change - you are effectively substituting lights with different lights in the hope that they will trigger the same psychological response.


34 minutes ago, vlaiv said:

I still have not found an exact definition of the XYZ(D50) space - what does a coordinate in it mean? In fact, if I search Google for XYZ(D50), I don't find anything related to it - just text mentioning the regular XYZ colour space and the D50 illuminant.

Unfortunately it's very difficult to find anything that explains this and I certainly haven't done a very good job!

I found this series of articles by Jack Hogan quite useful but I'm not sure they'll tell you what you want to know:

Mark


22 minutes ago, sharkmelley said:

I found this series of articles by Jack Hogan quite useful but I'm not sure they'll tell you what you want to know:

The thing is, I'm pretty confident that I understand what is going on; however, you keep contradicting me, so I'm trying to find out why that is. I'm happy to be proved wrong, as I'll benefit from a correct understanding of the topic (if I'm indeed wrong).

Here is one example. I made a list of claims I think are true; here is one of them:

- If you have white paper that you shot, and it was illuminated with some illuminant - say D50 or D55 - and you want to show it as white paper in sRGB, you need to do a white balance D50->D65 or D55->D65, and that will show the paper colour as 1,1,1 (white) in sRGB. You do the white balancing by using the particular Bradford-adapted XYZ->sRGB transform matrix (either the D50 or the D55 one).

You said that you don't agree with that, but you have not shown me why - or what the alternative true statement would be.


11 minutes ago, vlaiv said:

Here is one example. I made a list of claims I think are true; here is one of them:

- If you have white paper that you shot, and it was illuminated with some illuminant - say D50 or D55 - and you want to show it as white paper in sRGB, you need to do a white balance D50->D65 or D55->D65, and that will show the paper colour as 1,1,1 (white) in sRGB. You do the white balancing by using the particular Bradford-adapted XYZ->sRGB transform matrix (either the D50 or the D55 one).

Your statement is true but it doesn't exactly represent the sequence of operations in the Adobe Camera Raw processing engine.

The D65 raw camera data would be transformed to XYZ(D50) by the D65 version of the CameraRaw -> XYZ(D50) matrix, which implicitly includes white balancing from D65 to D50

The D50 raw camera data would be transformed to  XYZ(D50) by the D50 version of the CameraRaw -> XYZ(D50) matrix, with no white balancing required because they have the same white reference.

The transformation from XYZ(D50) is done using the Bradford adapted XYZ(D50)->sRGB  so the D50 point becomes (1,1,1) in sRGB.

Mark


45 minutes ago, sharkmelley said:

The D50 raw camera data would be transformed to  XYZ(D50) by the D50 version of the CameraRaw -> XYZ(D50) matrix, with no white balancing required because they have the same white reference.

The transformation from XYZ(D50) is done using the Bradford adapted XYZ(D50)->sRGB  so the D50 point becomes (1,1,1) in sRGB.

Ok, for the benefit of others reading this thread, I must comment on this statement:

- XYZ(D50) is a non-existent colour space; it has no definition. Using that term as the name of a colour space is simply wrong.

- XYZ is an absolute colour space without a reference white point. A spectrum of light that represents a certain colour will have a unique set of XYZ coordinates. There are no different coordinates in XYZ for a given light spectrum / particular light colour depending on a (fictional) white reference.

Standard illuminants D50, D55 and D65 will always have these coordinates in xy chromaticity:

D50: x = 0.34567, y = 0.35850
D55: x = 0.33242, y = 0.34743
D65: x = 0.31271, y = 0.32902

They are constants and don't change with any (fictional) white point.

- Using D50 as the white point in sRGB space is simply wrong, as the sRGB standard clearly defines D65 as the white point, and there are no multiple white points in this colour space. Only D65 light can be represented by the 1,1,1 vector in sRGB space.

- One can properly display any light colour on a calibrated sRGB monitor (any colour within the gamut of that monitor) by using the following "transform": CameraRaw -> (raw_to_xyz_matrix) -> XYZ -> (standard_xyz_to_srgb_matrix) -> sRGB. There is no need to involve any Bradford transforms representing white balance, nor is it required to involve illuminants.

(Clarification: the term "light colour" or "colour of the light" is used here to mean a "compressed" light spectrum - a tuple of 3 numbers representing the light spectrum in a form such that no distinction can be made by our visual system.)
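For completeness, here is a sketch of that "transform" chain end to end, where raw_to_xyz is a parameter standing for a matrix calibrated by any of the methods discussed in this thread, and the XYZ->sRGB matrix and transfer function are the standard D65 ones:

```python
import numpy as np

# Standard XYZ -> linear sRGB matrix (D65 white point, per the sRGB spec)
M_XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

def srgb_encode(c):
    """Linear [0, 1] -> gamma-encoded sRGB (the standard transfer function)."""
    c = np.clip(c, 0.0, 1.0)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def raw_to_srgb(raw_rgb, raw_to_xyz):
    """CameraRaw -> XYZ -> sRGB with no white balancing / adaptation step."""
    xyz = raw_to_xyz @ raw_rgb       # calibrated camera matrix
    lin = M_XYZ_TO_SRGB @ xyz        # standard matrix, D65 white point as-is
    return srgb_encode(lin)          # out-of-gamut values are clipped
```

Only light whose XYZ already sits on the D65 chromaticity comes out with R = G = B here - there is no white balancing anywhere in the chain, which is the point being made.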


53 minutes ago, vlaiv said:

Ok, for the benefit of others reading this thread, I must comment on this statement:

- XYZ(D50) is a non-existent colour space; it has no definition. Using that term as the name of a colour space is simply wrong.

I don't have any more to contribute so this will be my last post in this thread.

In parting, I will mention the following:

  • Adobe's DNG specification has a whole chapter devoted to "mapping between the camera color space coordinates (linear reference values) and CIE XYZ (with a D50 white point)."
  • The principle of colour management across different devices (the reason you can open a JPG encoded in sRGB, AdobeRGB, AppleRGB etc. with correct looking colours on your screen) is defined in the ICC Specification and is built upon what it calls the profile connection space (PCS) which can be either CIE XYZ or CIE LAB where "the chromaticity of the D50 illuminant shall define the chromatic adaptation state associated with the PCS".  The image file embeds an ICC profile and the screen has its own ICC profile.  They communicate via the PCS.

Mark

