
Keep close to authentic look of object or create stunning, award winning and over processed photo?


Recommended Posts

17 minutes ago, vlaiv said:

It has not been color balanced - it has been converted to XYZ color space.

Are you sure it hasn't been colour balanced?   Look at the raw data and measure the RGB values for one of the white patches on the colour chart.  I'm guessing the RGB value is predominately green. Now look at the XYZ values that emerge from DCRAW for the same white patch.  Do they correspond to the green area of the CIE XYZ colour space or are they approximately in the white area of the CIE XYZ colour space?  I'm guessing they sit in the white area and it's because colour balancing has happened.

I don't know what DCRAW is doing but I do know that the whole of chapter 6 of the DNG Specification is devoted to how the Camera RGB space is mapped to CIE XYZ and it definitely involves colour balancing.

Mark


15 minutes ago, vlaiv said:

There is such a chart, as a matter of fact. I used an image of it (or rather an image with very similar colors)........

In order for you to actually see the same color on your computer screen when an image of a pigment is taken, one needs to control the illumination (this is the illuminant that we are talking about). What sort of light should one use to represent the "natural" color of a pigment? But what is the natural color of a pigment? A pigment will have different hues depending on the light used to illuminate it.

If the image was properly color managed and your computer screen is properly calibrated, the screen shows the color the pigment would have if it were illuminated with the D65 illuminant (daylight in northern / central Europe is the best match for this illuminant).

In astronomy we don't have this problem. We don't have pigments - we have light. The only thing we need to agree on is the same viewing conditions - something that is natural - and luckily we don't need to worry about it, because computer standards do that for us. The sRGB color space is well defined, and when we want to represent the color of light in it we know the math, so there are just two things left to do:

- calibrate the camera

- calibrate the computer screen

The above, with CIE XYZ and the color correction matrix, is the "calibrate the camera" part. We want to take two cameras, shoot the same light source and get the same numbers. Well defined numbers........

Thank you for your interesting and informative reply. I had a feeling there would be more to it, but in a general sense perhaps I was not quite so crazy after all.

The products you mentioned obviously do a good job of colour calibrating the monitor screen, and although the price is not unreasonable it is perhaps more suited to the professional photographer/printer.

For the moment I am happy with the information you gave me on how to debayer, it put the colour back into my life. Alright alright, I know it’s corny! 😂


44 minutes ago, sharkmelley said:

Are you sure it hasn't been colour balanced?   Look at the raw data and measure the RGB values for one of the white patches on the colour chart.  I'm guessing the RGB value is predominately green. Now look at the XYZ values that emerge from DCRAW for the same white patch.  Do they correspond to the green area of the CIE XYZ colour space or are they approximately in the white area of the CIE XYZ colour space?  I'm guessing they sit in the white area and it's because colour balancing has happened.

I don't know what DCRAW is doing but I do know that the whole of chapter 6 of the DNG Specification is devoted to how the Camera RGB space is mapped to CIE XYZ and it definitely involves colour balancing.

Mark

The X, Y and Z components are not bounded - they don't go from 0 to 1. They go from 0 to infinity. In that sense there is no white area of CIE XYZ (no max values - no white - everything is sort of "grey"). It represents photons collected from a light source, separated into 3 components. In that sense you can always double the light intensity and it will double the numbers.

We have a white area in the xyY color space - or rather a white area in the xy chromaticity diagram. xy chromaticity is a 2D space, as light intensity has been normalized - or rather the Y component of xyY has been normalized to 1.

In fact - it is just color without intensity associated. Both of these xy diagrams are true:

[two xy chromaticity diagrams shown side by side]

Although they don't display the same colors - the left one does not even contain white. This is because these diagrams are free from the intensity component, and you can assign any intensity to any light source represented by a color in this diagram.

If you look at the diagram above, greys are located around (0.33, 0.33).

image.png.0f1bf635874f596eda5d8890f9014981.png

Regardless of Y (intensity), x and y are going to be around 0.33 if all three X, Y and Z are close in value. In that sense you could say that the XYZ color space is "balanced" - when all three components are similar in size, the color they represent is similar to grey.
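To make the numbers concrete, here is a tiny Python sketch (names are mine, purely for illustration) of how xy chromaticity is obtained from XYZ - and why equal X, Y and Z land near (0.33, 0.33) regardless of intensity:

def xy_chromaticity(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s

print(xy_chromaticity(1.0, 1.0, 1.0))   # ~(0.333, 0.333) - the grey region
print(xy_chromaticity(2.0, 2.0, 2.0))   # same (x, y): doubling intensity doesn't move chromaticity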

It just happens that when you take the white color of an sRGB image (my computer screen - and if it is properly calibrated, that should correspond to D65), record what the screen is producing and put it in XYZ coordinates, you get numbers that are close in ratio to 1:1:1 - but not equal to it.

Here it is:

image.png.3e78d0312adf6f8200715ee0e47b6f37.png

I measured mean value of XYZ image and got X to be ~15580, Y to be ~16588 and Z to be 12934

If I scale that to Y being 1, I'll get values:

XYZ = 0.93923, 1, 0.7797

image.png.11aae810cd13567429ee3de9b212ad5d.png

That actually corresponds to a color temperature of about 5000K rather than 6504K.

My computer screen is actually much warmer than it should be - or maybe it is partly due to ambient lighting, since I did not take the image in the dark.
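For anyone who wants to reproduce that correlated colour temperature figure, here is a small sketch using McCamy's approximation (my choice of formula - other CCT approximations exist); plugging in the measured XYZ above gives roughly 5035 K, in line with the ~5000 K quoted:

def cct_mccamy(X, Y, Z):
    # xy chromaticity first, then McCamy's cubic approximation to CCT
    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

print(cct_mccamy(0.93923, 1.0, 0.7797))   # ~5035 K for the screen measurement above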


13 hours ago, vlaiv said:

It just happens that when you take White color on sRGB image (my computer screen, and if it is properly calibrated - that should correspond to D65) and you record what my computer screen is producing and put it in XYZ coordinates - you get numbers that are close in ratio to 1:1:1 - but not equal to it.

Here it is:

image.png.3e78d0312adf6f8200715ee0e47b6f37.png

I measured mean value of XYZ image and got X to be ~15580, Y to be ~16588 and Z to be 12934

If I scale that to Y being 1, I'll get values:

XYZ = 0.93923, 1, 0.7797

image.png.11aae810cd13567429ee3de9b212ad5d.png

That actually corresponds to color temperature of 5000K rather than to that of 6504K

My computer screen is actually much warmer than it should be - or maybe it is a bit due to environment lighting since I did not take image in the dark.

Excellent, we can now see that DCRAW is applying white balance to the raw data as it transforms into CIE XYZ.

I took a look at the DCRAW C code last night but it's pretty obscure - I would need to build it and step through it in debugging mode to see how it does it but I don't have time for it right now.  As I said earlier, you can download the DNG Specification to read how Adobe does it - it's all in chapter 6 "Mapping Camera Color Space to CIE XYZ Space".  It's an interesting read and I'm guessing DCRAW implements a simplified version.  I downloaded the DNG SDK a couple of years ago and debugged it step by step to compare what I was seeing in the code against the DNG Specification.  I can see that the DCRAW C code does contain a hardcoded matrix for each camera but it's only a single one (i.e. it doesn't vary by colour temperature) and it corresponds to ColorMatrix in the DNG Spec (not ForwardMatrix).  I'm pretty sure DCRAW copies its matrices from Adobe.  The remaining mystery for me is which white point DCRAW is using for its mapping to XYZ - it may help to explain why you are seeing a resulting CCT of 5038 rather than something nearer 6500.

Mark


3 hours ago, sharkmelley said:

Excellent, we can now see that DCRAW is applying white balance to the raw data as it transforms into CIE XYZ.

Again - it does not. It just applies single matrix to convert from Camera color space to CIE XYZ.

3 hours ago, sharkmelley said:

I can see that the DCRAW C code does contain a hardcoded matrix for each camera but it's only a single one (i.e. it doesn't vary by colour temperature) and it corresponds to ColorMatrix in the DNG Spec (not ForwardMatrix).  I'm pretty sure DCRAW copies its matrices from Adobe.

Indeed - each camera will have single matrix that converts from camera color space to CIE XYZ.

3 hours ago, sharkmelley said:

The remaining mystery for me is which white point DCRAW is using for its mapping to XYZ - it may help to explain why you are seeing a resulting CCT of 5038 rather than something nearer 6500.

There is no white point in XYZ - XYZ is an absolute color space (it can have arbitrarily large values). You use a white point only when converting from XYZ to a relative color space, where each component scales from 0 to 1 and the white point is the (1,1,1) value.

Imagine you want to map 2d space to a square that is 1x1 in size (Each side goes from 0 to 1)

image.png.f9ad3905c32849b2c5dfbb21dafabc38.png

This is the square. White is (1,1), in the top left corner. This is our "sRGB" in the 2D example (it is much easier to show and understand the 2D version, but the 3D version works the same way). All the color values that we can display are in this range - in this square.

On the other hand, XYZ space is like this:

image.png.159c5284cc8de2413d727a691a043807.png

There is no upper limit on how strong a light source can be. Take a torch and assign a point to its light. Take another torch and place it next to the first one - every camera will record double the values in the same amount of time. Place a third torch - again the same. You don't even have to use other torches - just increase the exposure time and you will collect more signal, a larger raw number. That is equivalent to using a stronger light.

We now want to map this space to the well defined 1x1 square above. We know the following things:

1. Black is black - the absence of light, always the same in all spaces: (0,0) -> (0,0) (although some color spaces even define the black point to be different from (0,0))

2. We need the primary colors of our color space, the ones we will use in combination to produce any particular color

3. We need a white point

In the end - we have this:

image.png.7785b544ccf3cc8e1a556cb6d540e659.png

Now we also have a "distorted" quad in this space - but we can easily find the matrix transform of that quad into the 0-1 square space above.

This unbounded diagram contains all the colors we can possibly see, and even some colors that we can't see (for example, any coordinate with a negative value - you can't have negative light). It turns out that when you define a color space in terms of the quad above, with primary colors and a white point, all the colors inside that quad can be generated using only those primary colors, and the white point defines their absolute ratio in terms of intensity.

People mistakenly think that you need equal amounts of blue, green and red to get white. What green? Not all greens are equal, and similarly not all whites are perceived as white. But regardless of that, each of these spectra has a definite XYZ coordinate and will be perceived as a different color (but we can't say which of two "whites" will be labeled as white - that depends on viewing conditions).

Choose particular green, red, blue as primaries and white point - and you have new color space.

Here are coordinates of sRGB linear cube (in xyY which is just simple transform of XYZ color space):

image.png.091a6553bb8d9c13d881bcc64d9a6edc.png

By contrast, the wide gamut Adobe RGB color space is defined like this:

image.png.7fb128558f0ae2a6d20afd649911edaf.png

It actually uses single wavelength colors as primaries (which again have certain xy coordinates) and different white point.

Here is an image that roughly shows the above "squares/cubes" of different relative color spaces, projected onto a 2D plane in the xyY absolute space:

image.png.84258dd690c52b6b7292e8eb6b8e77b4.png

What you are here seeing is this:

image.png.0c80980cb4d7d3b7339db3ad011bd1ff.png

When we think of AdobeRGB or sRGB we always think of this regular 1:1:1 cube - but in reality it is a distorted "diamond" shape in XYZ color space.

As you can see, there is no white balance in XYZ and no white point. There are just absolute numbers that one gets from measuring signal strength - much like a sensor does.

The white point is just a color/point in absolute XYZ that defines one vertex of our "color cube" - the one with (1,1,1) coordinates. If we want to define our cube inside XYZ space, we need to specify the value of the white point along with a few other vertices (the black point - usually 0 - and the three primaries R, G and B).
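As a small illustration of that idea - the primaries plus the white point pinning the cube inside XYZ - here is a Python sketch that builds the linear RGB -> XYZ matrix from the standard sRGB primary chromaticities and the D65 white point (values assumed from the sRGB specification):

import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white):
    # xyY -> XYZ with Y = 1
    def to_XYZ(x, y):
        return np.array([x / y, 1.0, (1 - x - y) / y])
    P = np.column_stack([to_XYZ(*xy_r), to_XYZ(*xy_g), to_XYZ(*xy_b)])  # primaries as columns
    W = to_XYZ(*xy_white)        # white point scaled to Y = 1
    S = np.linalg.solve(P, W)    # how much of each primary is needed to make white
    return P * S                 # scale each column by that amount

M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.3290))
print(M)               # close to the published sRGB -> XYZ matrix
print(M @ [1, 1, 1])   # RGB white (1,1,1) maps onto the D65 white point in XYZ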

White balance is the process of deforming our cube in XYZ space - or changing the transform and selecting another white point. It can be summarized with this image (back to our simple 2D representation):

image.png.6a57c21df86883d52a275e68bbd1c52a.png

 


2 hours ago, vlaiv said:

Again - it does not. It just applies single matrix to convert from Camera color space to CIE XYZ.

Indeed - each camera will have single matrix that converts from camera color space to CIE XYZ.

There is no white point in XYZ - XYZ is absolute color space. You use white point only when converting from XYZ which is absolute color space (it can have arbitrarily large values) - to relative color space - where each component scales from 0 to 1, and white point is (1,1,1) value.

 

I'm beginning to see where our disconnect is. 

You are describing CIE XYZ as an absolute colour space and strictly speaking that's true.  With that assumption, the RGB values recorded by the camera can be directly transformed into absolute coordinates in CIE XYZ using a single matrix multiplication without reference to any scene illuminant.  Most of what you have said so far is correct and makes complete sense in the context of an absolute colour space.

However, that is not how the standard colour managed processing chain works. The whole architecture of colour management and ICC profiles is built upon the Profile Connection Space (PCS). The PCS is based upon CIE XYZ but it is not an absolute colour space. Instead the illuminant D50 is defined as its white point. Therefore the transformation from camera RGB to the PCS involves a white balance correction from the scene illuminant (e.g. Tungsten, Daylight, Cloudy, Fluorescent) to D50. This is what is being described in chapter 6 of the Adobe DNG Specification: "Mapping Camera Color Space to CIE XYZ Space". The same steps are performed by Photoshop's Camera Raw and a slightly simplified version is performed by DCRAW.

Mark


20 minutes ago, sharkmelley said:

I'm beginning to see where our disconnect is. 

You are describing CIE XYZ as an absolute colour space and strictly speaking that's true.  With that assumption, the RGB values recorded by the camera can be directly transformed into absolute coordinates in CIE XYZ using a single matrix multiplication without reference to any scene illuminant.  Most of what you have said so far is correct and makes complete sense in the context of an absolute colour space.

However, that is not how the standard colour managed processing chain works.  The whole architecture of colour management and ICC profiles is built upon the Profile Connection Space (PCS).  The PCS is based upon CIE XYZ but it is not an absolute colour space.  Instead the illuminant D50 is defined as its white point.  Therefore the transformation from camera RGB to the PCS  involves a white balance correction from the scene illuminant (e.g. Tungsten, Daylight, Cloudy, Fluorescent) to D50.  This is what is being described in chapter 6 of the Adobe DNG Specification: "Mapping Camera Color Space to CIE XYZ Space".  The same steps are performed by the Photoshop's Camera Raw and a slightly simplified version is performed by DCRAW.

Mark

 

I think that we differ in what we understand as the color managed processing chain - or rather when it starts.

Camera color space - although we call it RGB - is not a managed color space, not a relative color space. It is again an absolute color space.

Here is what the proper chain looks like when you want to color balance for a particular illuminant:

Camera color space -> XYZ -> LMS (white balance) -> XYZ -> target color space

In the case of astrophotography - since we are recording the color of light and not the color of an object under certain lighting that we need to correct for (we have an illuminant when shooting a regular scene) - the above simplifies to:

Camera color space -> XYZ -> sRGB (or rather the target color space - which implies D65 as the white point).

Camera color space -> XYZ does not require color management, as both are absolute color spaces - values are not specified relative to three primaries, a black point and a white point. You don't record a value of 0.7232 for blue with your camera - you record some number of photons / electrons.
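Just to illustrate the optional white-balance step in that chain (XYZ -> LMS -> XYZ), here is a rough Python sketch using the Bradford matrix - my choice of adaptation transform, purely as an example; for the astrophotography case above this step is simply skipped (identity adaptation):

import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adapt(xyz, white_src, white_dst):
    # von Kries style scaling performed in LMS space
    lms_src = BRADFORD @ white_src
    lms_dst = BRADFORD @ white_dst
    M = np.linalg.inv(BRADFORD) @ np.diag(lms_dst / lms_src) @ BRADFORD
    return M @ xyz

A   = np.array([1.0985, 1.0, 0.3558])   # illuminant A (tungsten), XYZ with Y = 1
D65 = np.array([0.9504, 1.0, 1.0888])   # D65, XYZ with Y = 1
print(adapt(A, A, D65))                 # the source white lands exactly on D65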


17 minutes ago, sn2006gy said:

Is there sample data of what fully calibrated images we all love would/should look like?  

Here's a natural colour image of NGC7000 taken with an unmodified Nikon Z6 and processed to keep the colour as accurate as possible:

ngc7000_20190825_srgb_small.jpg.28007e857cec4f8bffe75a642fbf17b0.jpg

Ignore the blue at the top right which was an internal reflection from a star.

Full size version can be found on Astrobin: NGC7000 ( sharkmelley ) - AstroBin

Mark


25 minutes ago, sn2006gy said:

Is there sample data of what fully calibrated images we all love would/should look like?  

I can describe the process that, the way I see it, produces accurate color from start to end result, for both DSLR and astro cameras (they are slightly different):

DSLR workflow:

Take all your subs (bias, dark, flat, flat dark - or reuse bias if you have a strong enough flat panel that flat exposures are short - and lights) and run them through dcraw to extract 16-bit linear XYZ images.

Calibrate and stack as you normally would.

Take the Y component and save it as a separate mono luminance.

Take the standard XYZ -> sRGB transform matrix and convert to linear sRGB.

Divide each of R, G and B by max(R, G, B) to get scaled linear RGB components.

Stretch and process the Y/luminance to your liking (denoising, sharpening, non linear stretch). In case you want to emulate the visual impression - just set the black and white points and apply sRGB gamma.

And finally we need to do the color transfer, which is simply (a small sketch follows these steps):

Take the stretched luminance -> apply inverse sRGB gamma, multiply with scaled R, scaled G and scaled B to get the R, G and B of the final image.

Apply forward sRGB gamma to each component and just compose them into an image.
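Here is a minimal sketch of that final colour-transfer step, the way I read the description above (function and variable names are mine; with numpy broadcasting the same code works on whole images):

import numpy as np

def srgb_gamma(c):
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * np.power(c, 1 / 2.4) - 0.055)

def srgb_gamma_inverse(c):
    return np.where(c <= 0.04045, c / 12.92, np.power((c + 0.055) / 1.055, 2.4))

def compose(stretched_Y, r_s, g_s, b_s):
    # stretched_Y: processed luminance, already in sRGB gamma
    # r_s, g_s, b_s: linear components divided by max(R, G, B)
    lin = srgb_gamma_inverse(stretched_Y)            # back to linear luminance
    return tuple(srgb_gamma(lin * c) for c in (r_s, g_s, b_s))

print(compose(0.8, 1.0, 0.62, 0.35))                 # example pixel (made-up values)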

Astro camera workflow:

All the same, except you need to do the dcraw step yourself and convert camera color space to XYZ color space yourself by applying a conversion matrix.

Deriving this matrix is tricky, but it needs to be done only once (and just applied every time after). You'll either need precise QE curves of your sensor and filters (preferably measured by a trusted party rather than those published by the camera vendor, as vendors seem to just copy/paste without actually checking things) or perform a relative measurement against a DSLR type camera that has been calibrated at the factory (in the hope that the camera manufacturer did a good job).

A third option would be to have an accurately calibrated sRGB computer monitor.

With the QE curves of the sensor and filters it is just a mathematical process, and one needs to write software to derive the matrix (not hard - it can be done in an afternoon in Python, for example, by a semi-skilled programmer).
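Sketched below is roughly what that "afternoon in Python" might look like (all arrays are placeholders - real QE curves, CIE colour matching functions and test spectra sampled on a common wavelength grid have to be supplied):

import numpy as np

wavelengths = np.linspace(400, 700, 61)        # nm grid (assumed)
qe = np.random.rand(3, wavelengths.size)       # stand-in for measured R, G, B QE curves (sensor x filters)
cmf = np.random.rand(3, wavelengths.size)      # stand-in for the CIE xbar, ybar, zbar matching functions
spectra = np.random.rand(24, wavelengths.size) # e.g. colour checker reflectances under some illuminant

cam = spectra @ qe.T    # what the camera records for each test spectrum, shape (24, 3)
xyz = spectra @ cmf.T   # the XYZ values those spectra should map to, shape (24, 3)

# least squares solve of cam @ M ~= xyz; M.T is then the camera -> XYZ matrix
M, *_ = np.linalg.lstsq(cam, xyz, rcond=None)
print(M.T)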

With the relative method against a reference, it can be done by anyone who can use software to measure average pixel brightness and a spreadsheet to enter values and do some least squares fitting.

Relative method against a DSLR - shoot the same color checker pattern with both the DSLR and the astro camera under the same conditions. Use dcraw to read the linear XYZ image.

For each color of the color checker pattern, read off the average X, Y and Z values in the DSLR image.

Do the same with the astro camera, this time noting raw R, G and B.

Now you have a set of XYZ -> RGB vectors that you need to normalize (make unit length) and use the least squares method to find the matrix that transforms raw RGB into XYZ (see the sketch after these steps).

If you want to do it against a calibrated computer screen - find an image of a color checker chart somewhere and display it on screen (in an otherwise totally dark room - the computer screen should be the only light). Record that with the astro camera.

Take the RGB image, apply inverse sRGB gamma and do the sRGB -> XYZ conversion. This will now be your XYZ image instead of the one recorded by the DSLR, and the rest is the same (normalize vectors, solve least squares to get the matrix).
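A hedged sketch of that relative method (against either the DSLR or the calibrated screen) - the arrays below are placeholders for the per-patch averages described above:

import numpy as np

raw_rgb = np.random.rand(24, 3)   # mean raw R, G, B per colour checker patch (astro camera)
xyz_ref = np.random.rand(24, 3)   # reference X, Y, Z per patch (from the DSLR or calibrated screen)

# normalize each measurement to unit length so only colour, not intensity, is fitted
raw_n = raw_rgb / np.linalg.norm(raw_rgb, axis=1, keepdims=True)
xyz_n = xyz_ref / np.linalg.norm(xyz_ref, axis=1, keepdims=True)

# least squares solve of raw_n @ M ~= xyz_n; M.T then maps raw RGB -> XYZ
M, *_ = np.linalg.lstsq(raw_n, xyz_n, rcond=None)
print(M.T)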


Out of interest, I decided to repeat that DSLR / screen measurement test - this time on my phone screen and in a dark room.

This is XYZ simply mapped to RGB (of course, colors are totally wrong this way) and measured values:

image.png.4653acc02fc79c505dabc87e8b0ff7f9.png

image.png.3845bfd8e8c58d80053c0986c13547e9.png

Or

XYZ = 1.03, 1, 1.308

Resulting correlated color temperature is 7093K

image.png.3c45c2f76e526e2ac20efc261f2e7f90.png

Again, the display is not properly calibrated - it has a somewhat "cooler" white point, towards the blue part of the spectrum.


A truly fascinating discussion.  Not that I have fully understood it but I at least have a feel for the issues. Funny how blue is "cooler" and red, I assume, "hotter" as it's the exact reverse with stellar temperature! Blue stars are hot and red cool.

Regards Andrew 


Talking about M31 and Vlaiv's pictures. Before restarting astronomy I'd seen lots of pictures of M31 so when I first turned my scope to it I was initially very disappointed in what I saw. All I could see was a fuzzy blob. Where was the disc with its dark lanes I thought?  I thought it must be because my scope was no good. It took me a while to realise that what I was seeing was actually more "correct" than the photos I'd seen. It can be very discouraging for a beginner.

 

  


1 minute ago, andrew s said:

A truly fascinating discussion.  Not that I have fully understood it but I at least have a feel for the issues. Funny how blue is "cooler" and red, I assume, "hotter" as it's the exact reverse with stellar temperature! Blue stars are hot and red cool.

Regards Andrew 

I was just thinking the same few moments ago when I wrote it and wondered if I used proper term :D

Then I realized that ice is cold and fire is warm and in color terms ice is bluish while fire is yellow/orange/reddish thing


4 minutes ago, vlaiv said:

I was just thinking the same few moments ago when I wrote it and wondered if I used proper term :D

Then I realized that ice is cold and fire is warm and in color terms ice is bluish while fire is yellow/orange/reddish thing

Which brings us full circle, as reflection nebulae can be blueish and transmission reddish.

The "ish" is to imply uncertainty and to avoid restarting the debate.👹

Regards Andrew 


1 minute ago, woodblock said:

Talking about M31 and Vlaiv's pictures. Before restarting astronomy I'd seen lots of pictures of M31 so when I first turned my scope to it I was initially very disappointed in what I saw. All I could see was a fuzzy blob. Where was the disc with its dark lanes I thought?  I thought it must be because my scope was no good. It took me a while to realise that what I was seeing was actually more "correct" than the photos I'd seen. It can be very discouraging for a beginner.

 

  

I only ever managed the dust lanes once with my 8" dob from my light-polluted location, and as far as I remember it did look like the image that I posted - except I only saw the "beginning" of the lane, and the end of it faded into blackness (or rather the murkiness of LP skies) together with the rest of the outer structure.

What I don't remember from visual is a concentrated core. It always looks more like a plain blob than a blob with a bright core.


I just realized that I could calibrate my computer screen by using my DSLR - if I trust it to be accurate.

All I need to do is place red, green and blue rectangles on the screen together with a white one and take a raw image with the DSLR.

Compare the XYZ values with the reference values for the sRGB color space and adjust the R, G and B sliders so that I get good values.

That is similar to what calibration tools do - except they use more than 3 filters to form the XYZ values - they use spectral analysis.

Btw, does anyone know a good "standard" light source for the full spectrum? Like, can I use an incandescent light bulb and make it shine at say 2700K or 3300K by some clever circuitry, or maybe a thermometer or whatever?

Can one cheaply purchase a reference light source - like a standard illuminant or similar?

I have a Star Analyser and I'm thinking that it can be used to measure the spectral response of a camera if I use a known light source. That would help me create the XYZ transform mathematically rather than trust the manufacturer. I would also be able to calculate the gamut and color accuracy of different cameras.

Maybe a question for a different part of the forum, like spectroscopy?


For what it's worth I managed to grab some time last night to build DCRAW and debug into it using some Nikon D5300 raw files. 

If you don't use the "-w" command line option then DCRAW attempts to create its white balance multipliers from the hardcoded ColorMatrix which the code has copied from DNG.  There is the following interesting comment in the DCRAW code: "All matrices are from Adobe DNG Converter unless otherwise noted."  Comparing the matrix to the matrices in a Nikon D5300 DNG file it is clear this is the matrix for the D65 illuminant.  At run time it therefore derives the white balance multipliers from this D65 matrix.  It does this whether the output file is XYZ, sRGB or whatever.

If you do use the "-w" option DCRAW obtains the white balance multipliers directly from the NEF EXIF header and these change depending on whether you set the camera's white balance to Daylight, Tungsten, Fluorescent etc.  However, since it has only the D65 matrix available for the colour correction it has to use this one.

[Edit:  If you use the "-v" verbose option DCRAW reports which multipliers it uses] 

Mark


5 hours ago, sharkmelley said:

For what it's worth I managed to grab some time last night to build DCRAW and debug into it using some Nikon D5300 raw files. 

If you don't use the "-w" command line option then DCRAW attempts to create its white balance multipliers from the hardcoded ColorMatrix which the code has copied from DNG.  There is the following interesting comment in the DCRAW code: "All matrices are from Adobe DNG Converter unless otherwise noted."  Comparing the matrix to the matrices in a Nikon D5300 DNG file it is clear this is the matrix for the D65 illuminant.  At run time it therefore derives the white balance multipliers from this D65 matrix.  It does this whether the output file is XYZ, sRGB or whatever.

If you do use the "-w" option DCRAW obtains the white balance multipliers directly from the NEF EXIF header and these change depending on whether you set the camera's white balance to Daylight, Tungsten, Fluorescent etc.  However, since it has only the D65 matrix available for the colour correction it has to use this one.

[Edit:  If you use the "-v" verbose option DCRAW reports which multipliers it uses] 

Mark

You are right. I had a look at the dcraw code.

The Adobe DNG matrices are in fact matrices for converting to linear sRGB and not directly to XYZ. That is why it is using D65 - as per the sRGB standard.

It then uses the reverse sRGB to XYZ transform matrix defined on line 135 (comment: XYZ from RGB).

This is probably because manufacturers did not see much need for a direct RAW to XYZ conversion, as end users operate in the sRGB color space.

It does not, however, mean that there is a need for white balancing when going directly from camera raw to XYZ.

One can easily derive a direct matrix by taking the Adobe DNG raw to linear sRGB matrix and multiplying it with the sRGB to XYZ matrix, to get a direct raw to XYZ matrix (no illuminant needed).


18 minutes ago, vlaiv said:

You are right. I had a look at dcraw code.

Adobe DNG matrices are in fact matrices to convert to sRGB linear and not directly to XYZ.  That is why it is using D65 - as per sRGB standard.

It then uses reverse sRGB to XYZ transform matrix defined on line 135 (commend: XYZ from RGB).

This is probably because manufacturers did not see much need for direct RAW to XYZ conversion as end users operate in sRGB color space.

It however does not mean that there is need for white balancing when going directly from camera raw to XYZ.

One can easily derive direct matrix by using Adobe DNG raw to linear sRGB and multiply that with sRGB to XYZ to get direct raw to XYZ matrix (no illuminant needed).

I don't agree with your interpretation because chapter 6 of the Adobe DNG Specification makes it quite clear that ColorMatrix (which is the matrix found in DCRAW code) is "The XYZ to camera space matrix".

Mark


1 minute ago, sharkmelley said:

I don't agree with your interpretation because chapter 6 of the Adobe DNG Specification makes it quite clear that ColorMatrix (which is the matrix found in DCRAW code) is "The XYZ to camera space matrix".

Mark

It can't be an XYZ to camera space matrix - maybe it is the other way around? A camera space to XYZ matrix?


Here is an example that I just made to show that XYZ is not color balanced:

Montage.jpg.fd9bd8eb40f292e2e5d49bf1e3610f2c.jpg

I took two images, one after another.

For the first, I selected daylight white balance. For the second I selected some other white balance (fluorescent or whatever it was - I can't remember).

Then I used dcraw and told it to:

1. export with camera white balance (-w option)

2. export without applying white balance (without -w, -a, -A or -r options)

3. export as XYZ

You can see that the first two images are white balanced according to the selected balance.

The second two images are not white balanced - they are the same, and they represent colors as the camera sees them (not raw values, but actual colors as the camera sees them, without white point adaptation).

The third two images represent XYZ displayed in RGB space.

Although the images were taken with specific white balance settings - if you don't apply those WB settings you get the same images, images that don't have a "default" white balance setting (or at least daylight and fluorescent are not the default WB).

White balance is just a "correction of reality" - to better represent colors as we know them - but indeed, in the above images my wall reflects light, and the spectrum of that reflected light has this color:

image.png.2e649f9d383f9f6a1e5ef219d7be21bf.png

under the given settings. The wall might not have that color under different lighting - but the light at the moment of capture had that color. That is the point - we don't need the color of the wall, we need the color of the light.

 


12 minutes ago, vlaiv said:

Second two images are not white balanced - they are the same and they represent colors as camera sees them (not raw values but actual colors as camera sees them without white point adaptation)

On the contrary, the images in the second row are white balanced.  DCRAW is performing a white balance using the default assumption that the scene is lit with the D65 illuminant because that is the only information the code has in the absence of reading the camera's white balance setting from the EXIF.

Mark


17 minutes ago, sharkmelley said:

No, it's not the other way round - it's XYZ to camera space, exactly as the specification describes it:

Digital Negative Specification (DNG) (adobe.com)

Mark

Once again, you are right - for some strange reason they chose to do things very differently, on multiple levels. I'll quote a few things:

image.png.817a49338435507a155d4f3bd23fe929.png

From this you can see that if you select D50 as the illuminant, it will be the same as the calibration illuminant and there will be no chromatic adaptation. It also shows that they decided to do chromatic adaptation in camera raw space instead of the regular way of doing it in XYZ space, because it may work better in extreme cases (poor sensor gamut or similar).

image.png.edc30970766fb9a572bd44a679d01c0d.png

Well, the first step is to invert XYZtoCamera in order to use it properly and get CameraToXYZ :D. They probably opted to store XYZtoCamera for other reasons, but in reality - whichever one you store - you have both versions, as this matrix should be invertible.

The second point is to apply chromatic adaptation if you decide to color balance your image in a particular way. If not, and you want your recorded data, leave CA as the identity matrix and perform no color balance.

This gives you pure CameraToXYZ - like I described above.

They introduce CameraToXYZ_D50 only to let you do chromatic adaptation in camera space - but it is not needed and can be left unchanged, like I showed above.
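In code form, the way I read it (the ColorMatrix below is a made-up placeholder, not a real camera's matrix): invert the DNG XYZtoCamera matrix, leave the chromatic adaptation as identity, and you have a plain camera -> XYZ transform with no illuminant involved:

import numpy as np

xyz_to_camera = np.array([[ 0.9, -0.2, -0.1],    # placeholder ColorMatrix (XYZ -> camera space)
                          [-0.3,  1.1,  0.2],
                          [ 0.0, -0.2,  1.2]])

camera_to_xyz = np.linalg.inv(xyz_to_camera)     # chromatic adaptation left as identity
raw_rgb = np.array([0.42, 0.55, 0.31])           # a hypothetical debayered camera pixel
print(camera_to_xyz @ raw_rgb)                   # its XYZ coordinates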

