
Keep close to the authentic look of the object, or create a stunning, award-winning, over-processed photo?



1 minute ago, sharkmelley said:

My guess is that 0/0 corresponds to the default "Daylight" setting for the camera.  Unfortunately the colour temperature corresponding to the camera's "Daylight" setting varies from camera to camera.  For instance "Daylight" on my Sony A7S corresponds to 4800K when opening the image "As Shot" in Adobe Camera Raw and on my Canon 600D it corresponds to 5200K.  In both cases a matrix interpolated between illuminant A and illuminant D65 will be used.

Mark

That is interesting - and gives us experimental grounds. You have two cameras that have different "default" settings.

How about shooting a scene with both and comparing the XYZ results? If they have different default white balance settings, what do you think should happen when you compare XYZ files of the same scene?

Should it be different or the same?


45 minutes ago, vlaiv said:

That is interesting - and gives us experimental grounds. You have two cameras that have different "default" settings.

How about shooting a scene with both and comparing the XYZ results? If they have different default white balance settings, what do you think should happen when you compare XYZ files of the same scene?

Should it be different or the same?

It is interesting but it's simply a result of how Adobe generates the matrices for each camera model.  Adobe generate their own matrices just as DXOMARK generate their own.  Both colour engines will be different to the camera manufacturer's proprietary colour engine.

In any case it is something I tested by taking an image of a ColorChecker in daylight using both cameras.  The resulting colours were pretty much identical in AdobeRGB (and hence in XYZ(D50)) when processing both images "As Shot" in Adobe Camera Raw even though the associated colour temperatures were quite different.  Within reason, whatever the lighting conditions, if the correct white balance is used during processing then the grey patches of the ColorChecker will land on the D50 point of the CIE xy chromaticity chart.  

Mark


11 hours ago, sharkmelley said:

It is interesting but it's simply a result of how Adobe generates the matrices for each camera model.  Adobe generate their own matrices just as DXOMARK generate their own.  Both colour engines will be different to the camera manufacturer's proprietary colour engine.

In any case it is something I tested by taking an image of a ColorChecker in daylight using both cameras.  The resulting colours were pretty much identical in AdobeRGB (and hence in XYZ(D50)) when processing both images "As Shot" in Adobe Camera Raw even though the associated colour temperatures were quite different.  Within reason, whatever the lighting conditions, if the correct white balance is used during processing then the grey patches of the ColorChecker will land on the D50 point of the CIE xy chromaticity chart.  

Mark

I think I finally understand it all.

If you try the same with your two cameras indoors under warm ambient light, you'll find that both cameras in "as is / original" mode register that same grey patch closer to the color of the ambient light on the xy chart. Similarly, with very cold ambient light, say over 6000K, the grey will again land in that part of the xy chart for both cameras.

This is because no color balancing is taking place - as expected.

The D50 marker in the CameraToXYZ matrix stands for something other than a white point (XYZ does not have a white point).

The CameraToXYZ matrix will not always be 100% correct for all spectra, because the camera sensor's response differs from the XYZ color matching functions. Cameras often can't record all colors accurately, so there will be some error, arising from the sensor's QE curves and the calculated CameraToXYZ matrix.

Multiple matrices are used, but they all represent the same thing: a camera raw to XYZ transform. They are made by illuminating calibration charts with either the D50 or the A illuminant (a third or fourth illuminant can be chosen as well).

The XYZ(D50) matrix gives less color error in a scene lit by something close to a D50 source, and XYZ(A) gives less color error in a scene lit by something close to illuminant A.

This is why a different conversion matrix is selected when you choose to white balance the scene: if you judge your illumination to be closer to A, it probably is, and the XYZ(A) matrix will give less error. Similarly, if the white balance is closer to D50, XYZ(D50) is chosen, as it minimizes the color error.

In theory you can have more such matrices.

If you don't white balance and shoot as is, then XYZ(D50) will be chosen over XYZ(A) for your camera, as it probably gives less total error than the other matrix.
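For concreteness, here is a minimal sketch of how a raw converter might blend two per-illuminant calibration matrices, loosely following the DNG approach of interpolating in reciprocal colour temperature between illuminant A (~2856K) and D65 (~6504K) that was mentioned earlier. The matrix values below are placeholders, not real camera data, and the function name is illustrative:

/* Sketch: blending two CameraRGB->XYZ calibration matrices by the
   scene's correlated color temperature.  Placeholder matrices - real
   values come from the camera's profile. */
static const double cam_to_xyz_A[3][3] = {   /* calibrated under illuminant A */
    { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } }; /* placeholder values */
static const double cam_to_xyz_D65[3][3] = { /* calibrated under D65 */
    { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } }; /* placeholder values */

/* Interpolate in reciprocal temperature (mired), as the DNG approach
   does, clamping outside the calibrated range. */
void camera_matrix_for_cct(double cct, double out[3][3])
{
    const double t_a = 2856.0, t_d65 = 6504.0;
    double w = (1.0e6/cct - 1.0e6/t_d65) / (1.0e6/t_a - 1.0e6/t_d65);
    if (w < 0.0) w = 0.0;
    if (w > 1.0) w = 1.0;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            out[i][j] = w*cam_to_xyz_A[i][j] + (1.0 - w)*cam_to_xyz_D65[i][j];
}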


29 minutes ago, vlaiv said:

I think I finally understand it all.

Yes, I think you do!

The only thing to clarify is that you don't have the option not to colour balance. Whenever you open a raw file in Adobe Camera Raw, the displayed temperature/tint is the one used to determine the colour balance.

There is a single combination of temperature/tint (somewhere near D50, but it depends on the camera's calibration) that will map a D50 star to the D50 point of the xy chromaticity diagram. Using this temperature/tint combination, all other stars will also be mapped to their correct chromaticities. This mapping of emissive light sources to their correct chromaticities in XYZ is one of the things you originally wanted to achieve. However, when XYZ is then transformed to sRGB (or AdobeRGB) using the standard XYZ->sRGB (or XYZ->AdobeRGB) matrix, that D50 star will be assigned equal RGB values, e.g. (200,200,200), and will therefore be displayed as D65, which may not be what you want.

Of course all of this is subject to the ability of the camera and its derived matrices to accurately reproduce colour.  A perfect camera doesn't exist and the whole processing chain is a compromise.

Mark


2 minutes ago, sharkmelley said:

Yes, I think you do!

The only thing to clarify is that you don't have the option not to colour balance. Whenever you open a raw file in Adobe Camera Raw, the displayed temperature/tint is the one used to determine the colour balance.

There is a single combination of temperature/tint (somewhere near D50, but it depends on the camera's calibration) that will map a D50 star to the D50 point of the xy chromaticity diagram. Using this temperature/tint combination, all other stars will also be mapped to their correct chromaticities. This mapping of emissive light sources to their correct chromaticities in XYZ is one of the things you originally wanted to achieve. However, when XYZ is then transformed to sRGB (or AdobeRGB) using the standard XYZ->sRGB (or XYZ->AdobeRGB) matrix, that D50 star will be assigned equal RGB values, e.g. (200,200,200), and will therefore be displayed as D65, which may not be what you want.

Of course all of this is subject to the ability of the camera and its derived matrices to accurately reproduce colour.  A perfect camera doesn't exist and the whole processing chain is a compromise.

Mark

Our understanding of this still differs - I maintain that D50 is not the white point of XYZ and that XYZ does not have a white point.

The D50 moniker in XYZ(D50) just notes that the rawToXYZ transform was derived using the D50 illuminant. That is all.

You don't have to assign any particular color temperature in order to convert a star's spectrum to its exact chromaticity in XYZ color space. You just take the spectrum of that star, multiply it by the x̄, ȳ and z̄ color matching functions and integrate over the visible range (roughly 380-730 nm), and that will give you the X, Y and Z values of that star.
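As a concrete sketch of that integration - a plain Riemann sum over the sampled spectrum, assuming the caller supplies the CIE 1931 color matching function tables on the same wavelength grid as the spectrum (e.g. 380-730 nm in 5 nm steps); the function name and signature are illustrative:

/* Sketch: XYZ from a spectral power distribution by integrating against
   the CIE 1931 color matching functions.  spd, xbar, ybar, zbar are
   sampled on the same wavelength grid (n samples, dlambda nm apart);
   real CMF data must come from the published CIE tables. */
void spectrum_to_xyz(const double *spd,
                     const double *xbar, const double *ybar, const double *zbar,
                     int n, double dlambda, double xyz[3])
{
    xyz[0] = xyz[1] = xyz[2] = 0.0;
    for (int i = 0; i < n; i++) {           /* Riemann sum approximation */
        xyz[0] += spd[i] * xbar[i] * dlambda;
        xyz[1] += spd[i] * ybar[i] * dlambda;
        xyz[2] += spd[i] * zbar[i] * dlambda;
    }
}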


14 minutes ago, vlaiv said:

Our understanding of this still differs - I maintain that D50 is not the white point of XYZ and that XYZ does not have a white point.

The D50 moniker in XYZ(D50) just notes that the rawToXYZ transform was derived using the D50 illuminant. That is all.

You don't have to assign any particular color temperature in order to convert a star's spectrum to its exact chromaticity in XYZ color space. You just take the spectrum of that star, multiply it by the x̄, ȳ and z̄ color matching functions and integrate over the visible range (roughly 380-730 nm), and that will give you the X, Y and Z values of that star.

We all agree that we can take a star spectrum and generate its XYZ value by integration without reference to colour temperature.  But when processing the data from a camera, colour temperature is a crucial part of the standard Adobe, RawTherapee, CaptureOne raw conversion.

It's true that XYZ has no absolute white point but when transforming XYZ to sRGB or AdobeRGB the D50 chromaticity will end up having equal RGB values e.g. (250,250,250) or (100,100,100).  In other words D50 is the reference white for those transformations to colour spaces.

D50 is also the reference white for the previous operations.  If I take a photo of a ColorChecker under incandescent light with the camera set to "Incandescent" then the processing sequence will use the Illuminant A matrix to transform the raw data to XYZ so that the grey patches have a chromaticity of D50 in XYZ.  If I photograph the ColorChecker on a cloudy day with the camera set to "Cloudy" then the processing sequence will interpolate a matrix to the required cloudy illuminant so again the grey patches have a chromaticity of D50 in XYZ.  The temperature/tint setting is what determines which of the available CameraRGB->XYZ matrices to use or how to interpolate between the ones available.

So although XYZ does not have a white point, D50 is a crucial reference white for most operations in the processing sequence.

Mark


1 minute ago, sharkmelley said:

It's true that XYZ has no absolute white point but when transforming XYZ to sRGB or AdobeRGB the D50 chromaticity will end up having equal RGB values e.g. (250,250,250) or (100,100,100).  In other words D50 is the reference white for those transformations to colour spaces.

This is not correct.

D50 has the following XYZ coordinates:

[0.9642, 1.0000, 0.8251]

according to: https://www.mathworks.com/help/images/ref/whitepoint.html

Or, according to this wiki article:

https://en.wikipedia.org/wiki/Standard_illuminant

xy coordinates of 0.34567, 0.35850.

If I enter those coordinates into a color conversion calculator, I get the following:

[screenshot: CIE color calculator, D50 xy input, sRGB output]

Here I used the xy coordinates; the XYZ values are the same as those from the MathWorks website (truncated to 4 decimal places). I selected standard sRGB parameters - sRGB gamma, D65 as white point and sRGB as the RGB model - and the calculated sRGB values are in fact not 1,1,1.

D50 does not have 1,1,1 coordinates in sRGB.

If I select the AdobeRGB color space with the appropriate parameters (gamma 2.2, white point D65):

[screenshot: calculator result for AdobeRGB]

again, D50 will not have 1,1,1 coordinates.

Only if I select a color space that has D50 as its white point - like the Adobe Wide Gamut RGB color space:

[screenshot: calculator result for Adobe Wide Gamut RGB]

do we get 1,1,1 for D50.

In a relative color space, the white point of that color space will have coordinates 1,1,1, but other colors will not.

In an absolute color space like XYZ, illuminants are no different from any other color - they simply have some coordinates. In fact there is one illuminant that could be considered the white point of the XYZ color space, in the sense that it produces a 1:1:1 ratio of XYZ (but not 1,1,1 as actual numbers, since those depend on how strong the illuminant is and there is no upper bound): illuminant E.

Illuminant E is the so-called "equi-energy" illuminant and has xy chromaticity 1/3, 1/3 - but that is not an actual white color, it is quite yellowish:

[illuminant E swatch, taken from https://en.wikipedia.org/wiki/Standard_illuminant]
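The calculator result is easy to reproduce numerically: push D50's XYZ coordinates through the standard D65-referenced XYZ->sRGB matrix and the linear components come out unequal. A minimal check, using the published matrix values:

#include <stdio.h>

/* Standard XYZ -> linear sRGB matrix (D65 reference white). */
static const double xyz_to_srgb[3][3] = {
    {  3.2404542, -1.5371385, -0.4985314 },
    { -0.9692660,  1.8760108,  0.0415560 },
    {  0.0556434, -0.2040259,  1.0572252 } };

int main(void)
{
    const double d50[3] = { 0.9642, 1.0000, 0.8251 };  /* D50 white in XYZ */
    for (int i = 0; i < 3; i++) {
        double v = 0.0;
        for (int j = 0; j < 3; j++)
            v += xyz_to_srgb[i][j] * d50[j];
        /* prints roughly 1.18, 0.98, 0.72 (linear, before gamma):
           R > G > B, i.e. warmer than white, not 1,1,1 */
        printf("%.4f\n", v);
    }
    return 0;
}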

 


16 minutes ago, vlaiv said:

This is not correct.

D50 has the following XYZ coordinates:

[0.9642, 1.0000, 0.8251]

according to: https://www.mathworks.com/help/images/ref/whitepoint.html

Or, according to this wiki article:

https://en.wikipedia.org/wiki/Standard_illuminant

xy coordinates of 0.34567, 0.35850.

If I enter those coordinates into a color conversion calculator, I get the following:

[screenshot: CIE color calculator, D50 xy input, sRGB output]

 

 

You seem to be using Bruce Lindbloom's calculator but please look at the instructions for it:

How the CIE Color Calculator Works (brucelindbloom.com)

In particular note the purpose of Ref. White:

The Ref. White pop-up menu is used to change the reference white interpretation of the CIE color system.

In other words, it is the white reference of CIE XYZ and not of sRGB or AdobeRGB, which we already know have a reference white of D65. Also note that Ref. White defaults to D50, and for good reason :)

Mark


4 minutes ago, sharkmelley said:

You seem to be using Bruce Lindbloom's calculator but please look at the instructions for it:

How the CIE Color Calculator Works (brucelindbloom.com)

In particular note the purpose of Ref. White:

The Ref. White pop-up menu is used to change the reference white interpretation of the CIE color system.

In other words, it is the white reference of CIE XYZ and not of sRGB or AdobeRGB, which we already know have a reference white of D65. Also note that Ref. White defaults to D50, and for good reason :)

Mark

On the info page of the same website, you'll see this text as well:

Quote

Why is a reference illuminant needed? If you consider the emissive case, where the color sample is viewed in a dark room, you can see its color because it is generating the light. However, when viewing a reflective (or transmissive) sample in that same environment you see nothing at all because such a sample does not generate light. The only way it can be seen is by first illuminating it. It is obvious that the appearance of the sample color is influenced by the type of illumination it receives. In order to make the XYZ value of a color sample unambiguous, the illuminant must be somehow identified and associated with it. This poses a problem since there are infinitely many possible illuminants. So instead of using an actual illuminant, a reference illuminant is used instead. Common reference illuminants are C, D50, D65 and others. The reference illuminant of an XYZ is simply made by reference. For example, "D50" refers to an entire standard spectral power distribution.

I added bold to emphasize the important part. You use a reference illuminant only when you want to get the XYZ of the "color of an object" and not the color of the light. There is no single color of an object - it depends on the illuminant.

If you want an accurate representation of the color of the light, then you must choose the same illuminant as the one defined in the color space itself. If you are converting to the sRGB color space, you must use the illuminant of that color space.

In any case, it's a good thing you pointed this out to me, as using the D50 matrix for a DSLR will not produce the proper color for light sources when converted to sRGB. It still needs to be converted to D65 - because people working with DSLRs simply can't stop thinking in terms of the "color of the object" instead of the "color of the light" (I guess that is because daytime photography mostly deals with objects and not sources of light).

This also means that I calibrated my monitor wrong - it was properly calibrated to begin with, as it gave a reading of D50 for D65 white (I was expecting a ~6500K value but was getting a ~5000K value). I'll have to reset it :D

 


And here is the solution to the whole color problem with DSLR cameras:

[screenshot: two XYZ->sRGB transform matrices side by side]

These are Bradford-adapted XYZ->sRGB color transform matrices, used instead of the regular D65 matrices when the XYZ was "messed up" by the use of D50 :D

The right-hand one is the correct forward XYZ->sRGB matrix.

 


14 minutes ago, vlaiv said:

And here is the solution to the whole color problem with DSLR cameras:

[screenshot: two XYZ->sRGB transform matrices side by side]

These are Bradford-adapted XYZ->sRGB color transform matrices, used instead of the regular D65 matrices when the XYZ was "messed up" by the use of D50 :D

The right-hand one is the correct forward XYZ->sRGB matrix.

 

That's right.

As you previously said, D50 has the following XYZ coordinates: [0.9642, 1.0000, 0.8251]

The right hand matrix then gives you [1.0, 1.0, 1.0] in the sRGB colour space and will be displayed as D65 chromaticity on your calibrated monitor.

It may or may not be the solution to the whole colour problem - it depends on how you generated your XYZ values from the Camera RGB values.

Mark

 

 


10 minutes ago, sharkmelley said:

That's right.

As you previously said, D50 has the following XYZ coordinates: [0.9642, 1.0000, 0.8251]

The right hand matrix then gives you [1.0, 1.0, 1.0] in the sRGB colour space and will be displayed as D65 chromaticity on your calibrated monitor.

It may or may not be the solution to the whole colour problem - it depends on how you generated your XYZ values from the Camera RGB values.

Mark

 

 

I'm going insane now :D

That is not what we need. D50 is not supposed to have 1,1,1 in the sRGB color space. D65 is the white of the sRGB color space.


5 minutes ago, vlaiv said:

I'm going insane now :D

That is not what we need. D50 is not supposed to have 1,1,1 in the sRGB color space. D65 is the white of the sRGB color space.

Let's carefully consider the reflective case to begin with because the emissive case has additional complications. 

When a photographer takes a photo of a ColorChecker under incandescent light, the photographer generally wants the white and grey patches to end up with the coordinates of white and grey  e.g. [1,1,1], [0.5, 0.5, 0.5] in the destination sRGB colour space.  The photographer will select "Incandescent" in the raw converter to achieve this.  The raw converter then picks up (or interpolates) the matrix that maps the CameraRGB values of the grey patches to coordinates with the D50 chromaticity in XYZ.  Those patches with D50 chromaticity in XYZ are then mapped to white and grey  e.g. [1,1,1], [0.5, 0.5, 0.5] in the destination sRGB colour space, exactly as the photographer expects.

Mark

 


22 minutes ago, sharkmelley said:

That's right.

As you previously said, D50 has the following XYZ coordinates: [0.9642, 1.0000, 0.8251]

The right hand matrix then gives you [1.0, 1.0, 1.0] in the sRGB colour space and will be displayed as D65 chromaticity on your calibrated monitor.

It may or may not be the solution to the whole colour problem - it depends on how you generated your XYZ values from the Camera RGB values.

Mark

 

 

Ok, whenever I try to incorporate your view into the picture, I end up returning to how I originally understood it. I'm going to make a list of points, and I'll ask you to name which ones you disagree with and why.

- XYZ is an absolute color space and as such has no white point

- every color of light has unique XYZ coordinates

- when converting to a particular color space, you need to use the white point of that color space

- the white point of a particular color space will have 1,1,1 coordinates in that color space, and all other colors will have different coordinates

- if you want to do white balance, you need to do the following: convert to XYZ, then convert to LMS, do the Bradford transform, return to XYZ and then return to the original color space (see the sketch after this list). For a successful color correction you need the assumed and the target illuminant, or rather their XYZ coordinates

- Adobe's XYZ(D50) and XYZ(A) are just camera raw to XYZ conversions that don't use a white point (as there is no sense in doing so) but are the same transform, chosen to minimize color error depending on what (if any) white balance you plan to perform later on

- if you shot white paper illuminated by some illuminant - say D50 or D55 - and you want to show it as white paper in sRGB, you need to do a white balance D50->D65 or D55->D65, and that will show the paper color as 1,1,1, i.e. white, in sRGB. You do the white balancing by using the corresponding Bradford-adapted XYZ->sRGB transform matrix (either D50 or D55)

- if you want to record the color of the light, whatever that color is, you don't have to do any color balancing at all - just convert from XYZ to sRGB with the standard rather than the Bradford-adapted matrix
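Here is a minimal sketch of that Bradford step, using the published Bradford matrices and white point values (e.g. from Lindbloom's pages); the function and variable names are illustrative:

#include <stdio.h>

/* Bradford chromatic adaptation: XYZ -> cone response (LMS), von Kries
   scaling by the ratio of destination and source white cone responses,
   then back to XYZ.  Matrix values are the standard published ones. */
static const double BRADFORD[3][3] = {
    {  0.8951,  0.2664, -0.1614 },
    { -0.7502,  1.7135,  0.0367 },
    {  0.0389, -0.0685,  1.0296 } };

static const double BRADFORD_INV[3][3] = {
    {  0.9869929, -0.1470543,  0.1599627 },
    {  0.4323053,  0.5183603,  0.0492912 },
    { -0.0085287,  0.0400428,  0.9684867 } };

static void mul(const double m[3][3], const double v[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0]*v[0] + m[i][1]*v[1] + m[i][2]*v[2];
}

/* Adapt an XYZ color from a source illuminant to a destination one. */
void bradford_adapt(const double xyz[3],
                    const double src_white[3], const double dst_white[3],
                    double out[3])
{
    double lms[3], lms_src[3], lms_dst[3], adapted[3];
    mul(BRADFORD, xyz, lms);
    mul(BRADFORD, src_white, lms_src);
    mul(BRADFORD, dst_white, lms_dst);
    for (int i = 0; i < 3; i++)
        adapted[i] = lms[i] * lms_dst[i] / lms_src[i];  /* von Kries scaling */
    mul(BRADFORD_INV, adapted, out);
}

int main(void)
{
    const double d50[3] = { 0.96422, 1.0, 0.82521 };
    const double d65[3] = { 0.95047, 1.0, 1.08883 };
    double out[3];
    bradford_adapt(d50, d50, d65, out);  /* D50 white should map to D65 white */
    printf("%.5f %.5f %.5f\n", out[0], out[1], out[2]);
    return 0;
}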


29 minutes ago, vlaiv said:

- if you want to do white balance, you need to do the following: convert to XYZ, then convert to LMS, do the Bradford transform, return to XYZ and then return to the original color space. For a successful color correction you need the assumed and the target illuminant, or rather their XYZ coordinates

- Adobe's XYZ(D50) and XYZ(A) are just camera raw to XYZ conversions that don't use a white point (as there is no sense in doing so) but are the same transform, chosen to minimize color error depending on what (if any) white balance you plan to perform later on

- if you shot white paper illuminated by some illuminant - say D50 or D55 - and you want to show it as white paper in sRGB, you need to do a white balance D50->D65 or D55->D65, and that will show the paper color as 1,1,1, i.e. white, in sRGB. You do the white balancing by using the corresponding Bradford-adapted XYZ->sRGB transform matrix (either D50 or D55)

The bits I've quoted above are where I disagree.  The key point is that the white balance takes place as we transform from CameraRGB space to XYZ.

Let's take your white paper for example.  Illuminated with incandescent light its spectrum will be redder than when illuminated with "cloudy day" light.  The CameraRGB values will therefore be different in the two cases - one much redder than the other.  But we compensate for this as we generate the XYZ data.  In the incandescent case we use the Camera->XYZ matrix designed for the incandescent illuminant.  The white paper will end up with a chromaticity of D50 in XYZ space because that's what the "incandescent" matrix was designed to do.  In the "cloudy day" case we use the Camera->XYZ matrix designed for the "cloudy day" illuminant.  Again the white paper will end up with a chromaticity of D50 in XYZ space because that's how the matrix was designed.  Finally the XYZ->sRGB matrix will then map anything in XYZ with a D50 chromaticity to sRGB values such as [1,1,1] or [0.9, 0.9, 0.9] or [0.4, 0.4, 0.4] and they will appear as "white" or neutral grey when displayed on a calibrated screen because the eye/brain adapts to consider D65 as white.  So the two photos appear identical in sRGB even though they were taken under very different lighting conditions.

Mark


3 hours ago, sharkmelley said:

But we compensate for this as we generate the XYZ data.

I've linked twice to how XYZ data is generated with the xyz matching functions. Imagine a device with an "XYZ sensor" that records directly in XYZ, in the same way sensors work and in the same way XYZ is defined to work.

We have white paper and we illuminate it with D50 and then with D65.

Will the XYZ coordinates of the paper's color be different or the same? Will they have different xy chromaticities or the same?


1 hour ago, vlaiv said:

I've linked twice to how XYZ data is generated with the xyz matching functions. Imagine a device with an "XYZ sensor" that records directly in XYZ, in the same way sensors work and in the same way XYZ is defined to work.

We have white paper and we illuminate it with D50 and then with D65.

Will the XYZ coordinates of the paper's color be different or the same? Will they have different xy chromaticities or the same?

In your example of a theoretical XYZ sensor, the recorded coordinates would be different.  In one case it would be the coordinates of D50 and in the other case it would be D65.

Mark


1 minute ago, sharkmelley said:

In your example of a theoretical XYZ sensor, the recorded coordinates would be different.  In one case it would be the coordinates of D50 and in the other case it would be D65.

Mark

Ok, now take the same setup with a regular DSLR: paper illuminated with D50 for one shot and D65 for the other.

What XYZ coordinates will we measure from the two XYZ images produced by dcraw from the raw files if a custom white balance of 0, 0 is used?


1 hour ago, vlaiv said:

Ok, now take the same setup with a regular DSLR: paper illuminated with D50 for one shot and D65 for the other.

What XYZ coordinates will we measure from the two XYZ images produced by dcraw from the raw files if a custom white balance of 0, 0 is used?

If we assume for simplicity that the 0,0 setting corresponds to D50 then the coordinates of the white paper colour in the two XYZ images would correspond to D50 and D65.

Mark


2 minutes ago, sharkmelley said:

If we assume for simplicity that the 0,0 setting corresponds to D50 then the coordinates of the white paper colour in the two XYZ images would correspond to D50 and D65.

Mark

If we then convert those two XYZ images to the sRGB color space, what should I expect to get for the RGB values in each?


28 minutes ago, vlaiv said:

If we then convert those two XYZ images to the sRGB color space, what should I expect to get for the RGB values in each?

You would get [1,1,1] (or a less bright version) for the D50 case and something a bit bluer for the D65 case.

Mark


7 minutes ago, sharkmelley said:

You would get [1,1,1] (or a less bright version) for the D50 case and something a bit bluer for the D65 case.

Mark

Ok, then explain the following.

In the dcraw code at line 135, there is the following:

const double xyz_rgb[3][3] = {			/* XYZ from RGB */
  { 0.412453, 0.357580, 0.180423 },
  { 0.212671, 0.715160, 0.072169 },
  { 0.019334, 0.119193, 0.950227 } };

which calculates XYZ from RGB.

If you multiply 1,1,1 by that matrix you get 0.950456, 1, 1.088754 in XYZ space, which is xy chromaticity 0.312731, 0.329033 - or, guess what, the D65 illuminant.
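A quick check of that arithmetic - sRGB white (1,1,1) lands on the row sums of the matrix, and its xy chromaticity follows from x = X/(X+Y+Z), y = Y/(X+Y+Z):

#include <stdio.h>

int main(void)
{
    /* dcraw's sRGB -> XYZ matrix (D65 referenced), copied from above */
    const double m[3][3] = {
        { 0.412453, 0.357580, 0.180423 },
        { 0.212671, 0.715160, 0.072169 },
        { 0.019334, 0.119193, 0.950227 } };
    double xyz[3] = { 0.0, 0.0, 0.0 };

    /* multiplying by (1,1,1) just sums each row */
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            xyz[i] += m[i][j];

    double sum = xyz[0] + xyz[1] + xyz[2];
    printf("XYZ = %f %f %f\n", xyz[0], xyz[1], xyz[2]);
    printf("xy  = %f %f\n", xyz[0]/sum, xyz[1]/sum);  /* ~0.3127, 0.3290 = D65 */
    return 0;
}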

Further, if I set my monitor to 5000K, display a test image that looks like this:

[test pattern: red, green, blue and white patches]

(that is, RGB 1,0,0; 0,1,0; 0,0,1; 1,1,1), shoot it with my Canon DSLR in raw mode and use dcraw to extract XYZ, I get this:

0.941821632539927, 1, 0.759163712244686

which is xy chromaticity 0.348696, 0.370235 - very close to D50, which has 0.34567, 0.35850.

When I convert that image to sRGB with dcraw, it looks like this:

[resulting sRGB image from dcraw]

But when I set my computer screen to 6500K, take an image of the same test pattern and use dcraw to extract XYZ, I get this:

0.946408248141336, 1, 1.08111955117533

which is xy chromaticity 0.312601, 0.330302 (D65 again, being 0.3127, 0.3290), and when I ask dcraw to generate the sRGB image, it looks like this:

[resulting sRGB image from dcraw]

All that I've just shown you is consistent with what I've been saying and inconsistent with what you've written above.


1 hour ago, vlaiv said:

Ok, then explain the following.

In the dcraw code at line 135, there is the following:

const double xyz_rgb[3][3] = {			/* XYZ from RGB */
  { 0.412453, 0.357580, 0.180423 },
  { 0.212671, 0.715160, 0.072169 },
  { 0.019334, 0.119193, 0.950227 } };

which calculates XYZ from RGB.

If you multiply 1,1,1 by that matrix you get 0.950456, 1, 1.088754 in XYZ space, which is xy chromaticity 0.312731, 0.329033 - or, guess what, the D65 illuminant.

<<LONG DISCUSSION REMOVED>>

All that I've just shown you is consistent with what I've been saying and inconsistent with what you've written above.

You pose a very good question!

The XYZ result was unexpected (by me), so it required me to take a more detailed look at the DCRAW code to work out what was going on. 

First of all DCRAW is generating the output I would expect in sRGB given that it assumes a D65 illuminant for its white balance - remember DCRAW only has the D65 version of the Adobe DNG XYZ(D50)->CameraRGB matrix and so it is forced to assume a D65 illuminant by default.

The important DCRAW function is convert_to_rgb( ) and this uses the matrix xyzd50_srgb defined as follows:

  static const double xyzd50_srgb[3][3] =
  { { 0.436083, 0.385083, 0.143055 },
    { 0.222507, 0.716888, 0.060608 },
    { 0.013930, 0.097097, 0.714022 } };

This means that DCRAW will use the process sequence I have already described i.e.  CameraRGB->XYZ(D50)->sRGB  where XYZ(D50) is XYZ with D50 as a white reference.  Everything works exactly as expected for sRGB output.

However when XYZ output is requested, DCRAW takes the sRGB result and uses the embedded xyz_rgb matrix you referred to above (not xyzd50_srgb) to go back from sRGB to XYZ.  In other words the XYZ output that DCRAW generates is not the intermediate XYZ(D50) colour space a.k.a. the profile connection space.

Mark
 

 


10 hours ago, sharkmelley said:

This means that DCRAW will use the process sequence I have already described i.e.  CameraRGB->XYZ(D50)->sRGB  where XYZ(D50) is XYZ with D50 as a white reference.  Everything works exactly as expected for sRGB output.

However when XYZ output is requested, DCRAW takes the sRGB result and uses the embedded xyz_rgb matrix you referred to above (not xyzd50_srgb) to go back from sRGB to XYZ.  In other words the XYZ output that DCRAW generates is not the intermediate XYZ(D50) colour space a.k.a. the profile connection space.

Mark

Excellent - we have finally settled what is going on, and I think we agree on the mechanics.

I would also like to add that this is, in principle, the wrong way to do things :D - a shortcut utilized for some reason by Adobe and others. In fact, it is recommended in their specification:

[excerpt from the Adobe DNG specification]

CameraToXYZ(D50) takes absolute camera coordinates, changes the color, and translates that changed color to XYZ; you then need to use an adapted (i.e. wrong) matrix to convert that to sRGB coordinates.

You have written the step above as:

CameraRGB->XYZ(D50)->sRGB

While that is correct, and the same as what I have written, it does not help in understanding what is going on. For the purposes of correctness, the same step can and should be written like this:

CameraRAW->D50 adaptation->XYZ->(D50 adapted)sRGB

Since these are all matrices, you can choose to multiply several of them into a single step - a single matrix - but the above is the proper way to describe what is going on.

Here is why.

Take any XYZ coordinates as recorded light - say illuminant E, with xy coordinates 1/3, 1/3. Forward transform it with:

static const double xyzd50_srgb[3][3] =
  { { 0.436083, 0.385083, 0.143055 },
    { 0.222507, 0.716888, 0.060608 },
    { 0.013930, 0.097097, 0.714022 } }; 

then backward transform it with:

const double xyz_rgb[3][3] = {			/* XYZ from RGB */
  { 0.412453, 0.357580, 0.180423 },
  { 0.212671, 0.715160, 0.072169 },
  { 0.019334, 0.119193, 0.950227 } };

and you will get different coordinates than you started with.
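This is easy to verify numerically. A minimal sketch: the rows of xyzd50_srgb sum to the D50 white point, so as written it maps sRGB to D50-referenced XYZ; inverting it gives the forward XYZ->sRGB direction. Going forward with that inverse and backward with the D65-referenced xyz_rgb does not return illuminant E (XYZ = 1,1,1) to its own chromaticity:

#include <stdio.h>

static const double xyzd50_srgb[3][3] = {   /* sRGB -> XYZ, D50 referenced */
    { 0.436083, 0.385083, 0.143055 },
    { 0.222507, 0.716888, 0.060608 },
    { 0.013930, 0.097097, 0.714022 } };
static const double xyz_rgb[3][3] = {       /* sRGB -> XYZ, D65 referenced */
    { 0.412453, 0.357580, 0.180423 },
    { 0.212671, 0.715160, 0.072169 },
    { 0.019334, 0.119193, 0.950227 } };

/* 3x3 inverse via the adjugate; fine for well-conditioned color matrices. */
static void invert3(const double m[3][3], double inv[3][3])
{
    double det = m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
               - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
               + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            inv[j][i] = (m[(i+1)%3][(j+1)%3]*m[(i+2)%3][(j+2)%3]
                       - m[(i+1)%3][(j+2)%3]*m[(i+2)%3][(j+1)%3]) / det;
}

static void mul(const double m[3][3], const double v[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0]*v[0] + m[i][1]*v[1] + m[i][2]*v[2];
}

int main(void)
{
    const double e[3] = { 1.0, 1.0, 1.0 };  /* illuminant E: x = y = 1/3 */
    double d50_to_srgb[3][3], rgb[3], back[3];

    invert3(xyzd50_srgb, d50_to_srgb);  /* forward: XYZ (D50 ref) -> sRGB */
    mul(d50_to_srgb, e, rgb);
    mul(xyz_rgb, rgb, back);            /* backward with the D65 matrix */

    double s = back[0] + back[1] + back[2];
    printf("xy after round trip: %f %f (started at 1/3, 1/3)\n",
           back[0]/s, back[1]/s);
    return 0;
}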

There are no alternative coordinates for a color in XYZ. Every color always has the same coordinates in XYZ. It is an absolute space; you can't change the coordinates of a color just by selecting a white point, because XYZ has no white point.

If you disagree with that for some reason, try to:

- find two different published XYZ / xy values for any illuminant

- explain how shining that illuminant onto the xyz color matching functions could produce a different integral depending on conditions

The main issue arises when people think in terms of object color - and that simply does not exist. The only thing that physically exists is the color of the light. You can only speak of the color of an object under a certain illumination - and even then we are talking about the color of the reflected light and not the color of the object.

If you want to do a proper white balance, to show what the color of the object would look like under a different illuminant, then you have to do the following:

Take the color of the light that you recorded in XYZ and convert it to LMS; knowing the original and destination illuminants, derive a color adaptation matrix using a chromatic adaptation transform like Bradford and convert the values; then return those values from LMS space to XYZ. That will give you the (approximate) color of the light you would have recorded if the original object had been illuminated with the destination illuminant.

https://en.wikipedia.org/wiki/LMS_color_space

By the way - and this might be the source of some confusion - when I speak of color, I tend to mean the spectrum of the light and its measurement, not the subjective feel / perception.
