Everything posted by sharkmelley

  1. I think what you are ultimately trying to do is to generate a matrix that goes from CameraRGB to absolute XYZ, in the emissive case. I'll need to think about the details when I have some time - especially the fact that the Correction Matrices are different for different illuminants - it's an interesting puzzle which I've never thought carefully about before. Mark
  2. Fair enough - it's pointless arguing about terminology. But I still don't understand what you meant when you wrote: CameraRAW->D50 adaptation->XYZ->(D50 adapted)sRGB [Edit: Don't worry - I've worked it out by re-reading the whole post. ] Mark
  3. I don't understand what you've written above. Chromatic adaptation has a precise meaning. Typically when we transform from one colour space to another we need to perform a white balancing operation. Simplistically this is a scaling of the XYZ values in the profile connection space. But in addition we need to perform chromatic adaptation, which "twists" the XYZ colour space so that non-white colours are still perceived "correctly" when the white balance changes. Chromatic adaptation leaves whites and greys unchanged. Unfortunately it is not possible to create a matrix that performs pure adaptation on its own (e.g. Bradford, Von Kries); the adaptation has to be combined into a matrix that also performs the white balancing. You can see this on Lindbloom's page: Chromatic Adaptation (brucelindbloom.com) So the matrix CameraRGB->XYZ(D50) will combine white balancing and chromatic adaptation into a single matrix. Those are not the only things the matrix does, because it also includes the transformation of the colour primaries, of course. [Edit: Rewritten for accuracy ] Mark
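     For anyone who wants to see how such a combined matrix is built, here is a minimal numpy sketch of Lindbloom's Bradford method (the cone matrix and white point values are the ones published on his page; this is just an illustration, not code from any raw converter):

         import numpy as np

         # Bradford cone response matrix (from Lindbloom's chromatic adaptation page)
         BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                              [-0.7502,  1.7135,  0.0367],
                              [ 0.0389, -0.0685,  1.0296]])

         # Published XYZ white points (Y normalised to 1)
         XYZ_D65 = np.array([0.95047, 1.00000, 1.08883])
         XYZ_D50 = np.array([0.96422, 1.00000, 0.82521])

         def bradford_adaptation(src_white, dst_white):
             """Combined white-balancing + chromatic adaptation matrix (Bradford)."""
             src_cone = BRADFORD @ src_white       # source white in cone space
             dst_cone = BRADFORD @ dst_white       # destination white in cone space
             scale = np.diag(dst_cone / src_cone)  # the white-balancing part
             return np.linalg.inv(BRADFORD) @ scale @ BRADFORD

         M = bradford_adaptation(XYZ_D65, XYZ_D50)
         print(M)              # matches the D65->D50 matrix on Lindbloom's site
         print(M @ XYZ_D65)    # the D65 white lands exactly on the D50 white
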
  4. You pose a very good question! The XYZ result was unexpected (by me), so it required me to take a more detailed look at the DCRAW code to work out what was going on. First of all DCRAW is generating the output I would expect in sRGB, given that it assumes a D65 illuminant for its white balance - remember DCRAW only has the D65 version of the Adobe DNG XYZ(D50)->CameraRGB matrix and so it is forced to assume a D65 illuminant by default. The important DCRAW function is convert_to_rgb() and this uses the matrix xyzd50_srgb defined as follows: static const double xyzd50_srgb[3][3] = { { 0.436083, 0.385083, 0.143055 }, { 0.222507, 0.716888, 0.060608 }, { 0.013930, 0.097097, 0.714022 } }; This means that DCRAW will use the processing sequence I have already described, i.e. CameraRGB->XYZ(D50)->sRGB, where XYZ(D50) is XYZ with D50 as its white reference. Everything works exactly as expected for sRGB output. However when XYZ output is requested, DCRAW takes the sRGB result and uses the embedded xyz_rgb matrix you referred to above (not xyzd50_srgb) to go back from sRGB to XYZ. In other words the XYZ output that DCRAW generates is not the intermediate XYZ(D50) colour space, a.k.a. the profile connection space. Mark
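     To make that last point concrete, here is a small numpy check (ignoring gamma encoding) using the two matrices involved - xyzd50_srgb quoted above and the standard sRGB->XYZ(D65) matrix, which is what I take dcraw's xyz_rgb to hold. A neutral patch sitting at the D50 white of the profile connection space comes out of the "XYZ" output at D65 instead:

         import numpy as np

         # sRGB -> XYZ(D50), i.e. dcraw's xyzd50_srgb (Bradford-adapted)
         xyzd50_srgb = np.array([[0.436083, 0.385083, 0.143055],
                                 [0.222507, 0.716888, 0.060608],
                                 [0.013930, 0.097097, 0.714022]])

         # sRGB -> XYZ(D65), the standard matrix (what I believe xyz_rgb contains)
         xyz_rgb = np.array([[0.412453, 0.357580, 0.180423],
                             [0.212671, 0.715160, 0.072169],
                             [0.019334, 0.119193, 0.950227]])

         # A neutral patch at the D50 white of the profile connection space
         xyz_d50_white = np.array([0.9642, 1.0000, 0.8251])

         # Step 1: XYZ(D50) -> linear sRGB, as convert_to_rgb() effectively does
         srgb = np.linalg.inv(xyzd50_srgb) @ xyz_d50_white
         print(srgb)        # ~[1, 1, 1]

         # Step 2: the "XYZ output" goes back through xyz_rgb, not xyzd50_srgb
         xyz_out = xyz_rgb @ srgb
         print(xyz_out)     # ~[0.9505, 1.0, 1.0888] = D65, not the PCS white D50
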
  5. You would get [1,1,1] (or a less bright version) for the D50 case and something a bit bluer for the D65 case. Mark
  6. If we assume for simplicity that the 0,0 setting corresponds to D50 then the coordinates of the white paper colour in the two XYZ images would correspond to D50 and D65. Mark
  7. In your example of a theoretical XYZ sensor, the recorded coordinates would be different. In one case it would be the coordinates of D50 and in the other case it would be D65. Mark
  8. The bits I've quoted above are where I disagree. The key point is that the white balance takes place as we transform from CameraRGB space to XYZ. Let's take your white paper for example. Illuminated with incandescent light its spectrum will be redder than when illuminated with "cloudy day" light. The CameraRGB values will therefore be different in the two cases - one much redder than the other. But we compensate for this as we generate the XYZ data. In the incandescent case we use the Camera->XYZ matrix designed for the incandescent illuminant. The white paper will end up with a chromaticity of D50 in XYZ space because that's what the "incandescent" matrix was designed to do. In the "cloudy day" case we use the Camera->XYZ matrix designed for the "cloudy day" illuminant. Again the white paper will end up with a chromaticity of D50 in XYZ space because that's how the matrix was designed. Finally the XYZ->sRGB matrix will then map anything in XYZ with a D50 chromaticity to sRGB values such as [1,1,1] or [0.9, 0.9, 0.9] or [0.4, 0.4, 0.4] and they will appear as "white" or neutral grey when displayed on a calibrated screen because the eye/brain adapts to consider D65 as white. So the two photos appear identical in sRGB even though they were taken under very different lighting conditions. Mark
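     If anyone wants to verify this behaviour on their own data, the check is simply to convert the XYZ value of a grey patch to xy chromaticity and compare it with D50 (x ≈ 0.3457, y ≈ 0.3585). A tiny numpy sketch, with a made-up XYZ triple standing in for a real measurement:

         import numpy as np

         # Published D50 chromaticity, for comparison
         D50_xy = (0.3457, 0.3585)

         def xy_chromaticity(xyz):
             # Project an XYZ triple onto the CIE xy chromaticity plane
             x, y, z = xyz
             s = x + y + z
             return x / s, y / s

         # Made-up XYZ value standing in for a measured grey patch: a dimmed D50
         # white, i.e. what a correctly balanced neutral should look like
         paper_xyz = 0.85 * np.array([0.9642, 1.0000, 0.8251])

         print(xy_chromaticity(paper_xyz))   # ~ (0.3457, 0.3585), i.e. D50,
         print(D50_xy)                       # regardless of how bright the patch is
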
  9. Let's carefully consider the reflective case to begin with because the emissive case has additional complications. When a photographer takes a photo of a ColorChecker under incandescent light, the photographer generally wants the white and grey patches to end up with the coordinates of white and grey e.g. [1,1,1], [0.5, 0.5, 0.5] in the destination sRGB colour space. The photographer will select "Incandescent" in the raw converter to achieve this. The raw converter then picks up (or interpolates) the matrix that maps the CameraRGB values of the grey patches to coordinates with the D50 chromaticity in XYZ. Those patches with D50 chromaticity in XYZ are then mapped to white and grey e.g. [1,1,1], [0.5, 0.5, 0.5] in the destination sRGB colour space, exactly as the photographer expects. Mark
  10. That's right. As you previously said, D50 has the following XYZ coordinates: [0.9642, 1.0000, 0.8251] The right hand matrix then gives you [1.0, 1.0, 1.0] in the sRGB colour space and will be displayed with D65 chromaticity on your calibrated monitor. It may or may not be the solution to the whole colour problem - it depends on how you generated your XYZ values from the CameraRGB values. Mark
  11. You seem to be using Bruce Lindbloom's calculator but please look at the instructions for it: How the CIE Color Calculator Works (brucelindbloom.com) In particular note the purpose of Ref. White: The Ref. White pop-up menu is used to change the reference white interpretation of the CIE color system. In other words it is the white reference of CIE XYZ and not of sRGB or AdobeRGB, which we already know have a reference white of D65. Also note that Ref. White defaults to D50, and for good reason. Mark
  12. We all agree that we can take a star spectrum and generate its XYZ value by integration without reference to colour temperature. But when processing the data from a camera, colour temperature is a crucial part of the standard Adobe, RawTherapee, CaptureOne raw conversion. It's true that XYZ has no absolute white point but when transforming XYZ to sRGB or AdobeRGB the D50 chromaticity will end up having equal RGB values e.g. (250,250,250) or (100,100,100). In other words D50 is the reference white for those transformations to colour spaces. D50 is also the reference white for the previous operations. If I take a photo of a ColorChecker under incandescent light with the camera set to "Incandescent" then the processing sequence will use the Illuminant A matrix to transform the raw data to XYZ so that the grey patches have a chromaticity of D50 in XYZ. If I photograph the ColorChecker on a cloudy day with the camera set to "Cloudy" then the processing sequence will interpolate a matrix to the required cloudy illuminant so again the grey patches have a chromaticity of D50 in XYZ. The temperature/tint setting is what determines which of the available CameraRGB->XYZ matrices to use or how to interpolate between the ones available. So although XYZ does not have a white point, D50 is a crucial reference white for most operations in the processing sequence. Mark
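     For anyone curious what "interpolate a matrix" means in practice: as I understand the DNG specification, the interpolation between the two tabulated matrices is linear in inverse colour temperature (i.e. in mireds). A rough numpy sketch, with placeholder matrices standing in for a real camera's Illuminant A and D65 calibration matrices:

         import numpy as np

         # Placeholder matrices - a real DNG supplies the camera's actual
         # XYZ->CameraRGB calibration matrices for Illuminant A and D65
         M_A   = np.eye(3) * 1.00   # hypothetical matrix calibrated for Illuminant A (~2856K)
         M_D65 = np.eye(3) * 0.80   # hypothetical matrix calibrated for D65 (~6504K)

         def interpolate_matrix(temp_K, t_A=2856.0, t_D65=6504.0):
             # Linear interpolation in inverse colour temperature (mireds),
             # clamped to the range covered by the two calibration illuminants
             inv_t = 1.0 / np.clip(temp_K, t_A, t_D65)
             w = (inv_t - 1.0 / t_D65) / (1.0 / t_A - 1.0 / t_D65)   # 1 at A, 0 at D65
             return w * M_A + (1.0 - w) * M_D65

         print(interpolate_matrix(2856))   # -> the Illuminant A matrix
         print(interpolate_matrix(6504))   # -> the D65 matrix
         print(interpolate_matrix(4800))   # -> a blend, e.g. for a "Daylight" white balance
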
  13. Yes, I think you do! The only thing to clarify is that you don't have the option not to colour balance. Whenever you open a raw file in Adobe Camera Raw the displayed temperature/tint is the one used to determine the colour balance. There is a single combination of temperature/tint (somewhere near D50 but it depends on the camera's calibration) that will map a D50 star to the D50 point of the xy chromaticity diagram. Using this temperature/tint combination, all other stars will also be mapped to their correct chromaticities. This mapping of emissive light sources to their correct chromaticities in XYZ is one of the things you originally wanted to achieve. However when XYZ is then transformed to sRGB (or AdobeRGB) using the standard XYZ->sRGB (or XYZ->AdobeRGB) matrix, that D50 star will be assigned equal RGB values, e.g. (200,200,200), and will therefore be displayed as D65, which may not be what you want. Of course all of this is subject to the ability of the camera and its derived matrices to accurately reproduce colour. A perfect camera doesn't exist and the whole processing chain is a compromise. Mark
  14. It is interesting but it's simply a result of how Adobe generates the matrices for each camera model. Adobe generate their own matrices just as DXOMARK generate their own. Both colour engines will be different to the camera manufacturer's proprietary colour engine. In any case it is something I tested by taking an image of a ColorChecker in daylight using both cameras. The resulting colours were pretty much identical in AdobeRGB (and hence in XYZ(D50)) when processing both images "As Shot" in Adobe Camera Raw even though the associated colour temperatures were quite different. Within reason, whatever the lighting conditions, if the correct white balance is used during processing then the grey patches of the ColorChecker will land on the D50 point of the CIE xy chromaticity chart. Mark
  15. My guess is that 0/0 corresponds to the default "Daylight" setting for the camera. Unfortunately the colour temperature corresponding to the camera's "Daylight" setting varies from camera to camera. For instance "Daylight" on my Sony A7S corresponds to 4800K when opening the image "As Shot" in Adobe Camera Raw and on my Canon 600D it corresponds to 5200K. In both cases a matrix interpolated between illuminant A and illuminant D65 will be used. Mark
  16. I'm not quite sure what you are asking so I hope the following helps. If you look in the DNG header you will typically find a couple of XYZ(D50) to CameraRGB matrices - one for illuminant A and one for illuminant D65. If your custom white balance is near illuminant A it will use the first matrix and if your custom white balance is near D65 it will use the second. So the same raw camera data will produce very different results in XYZ(D50) space depending on which custom white balance you set. Mark
  17. Standard raw converters such as Photoshop/CameraRaw, RawTherapee, Capture One etc. require a colour temperature (and tint) to be chosen (implicitly or explicitly) as the white point. Using those tools you can't "opt out" of white balancing and the artist is therefore forced to choose the effect they desire. If you want to faithfully reproduce emissive colours then you must choose a colour temperature appropriate to your output colour space and display e.g. use D65 for sRGB and AdobeRGB on a properly calibrated display. On the other hand if you want to build your own processing engine from scratch, you can design it any way you wish. You don't even need to use a standard colour space - for instance I once designed my own "MarkRGB" colour space with extreme wide gamut primaries, linear gamma and an appropriate ICC profile. But if you want other people to take your images and see them the same way you saw them then you will need to hope that the ICC profile is correctly interpreted by the application that opens the image and that colours beyond the gamut of the display are treated sympathetically. Mark
  18. For scenes with purely emissive light sources (e.g. night-time city-scapes, fireworks, astrophotography) photographers typically use the daylight setting. Experienced astrophotographers are sometimes a bit more picky and will choose a certain star class or average galaxy as their white point. However, if you want your monitor with a D65 white point to faithfully display your emissive colours (rather than relying on the eye/brain to adapt to D65) then you would choose D65 as your white point. Mark
  19. On the contrary, I believe it does mean that the data has been white balanced to D50 and I'll explain why by means of an example. Take two photos of a colour chart - one with tungsten light and one with daylight. If you process your data with the correct settings then the resulting two images will look virtually identical. A high level description of the Adobe Camera Raw and DCRAW processing sequence is the following: CameraRGB -> XYZ(D50) -> sRGB, where XYZ(D50) is the all-important profile connection space. We already know that the XYZ(D50)->sRGB matrix is a standard matrix, so if the two images look identical in sRGB they must also look identical in XYZ(D50). So it's clear that the two images received different white balances during the CameraRGB -> XYZ(D50) operation. Mark
  20. On the contrary, the images in the second row are white balanced. DCRAW is performing a white balance using the default assumption that the scene is lit with the D65 illuminant because that is the only information the code has in the absence of reading the camera's white balance setting from the EXIF. Mark
  21. No, it's not the other way round - it's XYZ to camera space, exactly as the specification describes it: Digital Negative Specification (DNG) (adobe.com) Mark
  22. I don't agree with your interpretation because chapter 6 of the Adobe DNG Specification makes it quite clear that ColorMatrix (which is the matrix found in DCRAW code) is "The XYZ to camera space matrix". Mark
  23. For what it's worth I managed to grab some time last night to build DCRAW and debug into it using some Nikon D5300 raw files. If you don't use the "-w" command line option then DCRAW attempts to create its white balance multipliers from the hardcoded ColorMatrix which the code has copied from DNG. There is the following interesting comment in the DCRAW code: "All matrices are from Adobe DNG Converter unless otherwise noted." Comparing the matrix to the matrices in a Nikon D5300 DNG file it is clear this is the matrix for the D65 illuminant. At run time it therefore derives the white balance multipliers from this D65 matrix. It does this whether the output file is XYZ, sRGB or whatever. If you do use the "-w" option DCRAW obtains the white balance multipliers directly from the NEF EXIF header and these change depending on whether you set the camera's white balance to Daylight, Tungsten, Fluorescent etc. However, since it has only the D65 matrix available for the colour correction it has to use this one. [Edit: If you use the "-v" verbose option DCRAW reports which multipliers it uses] Mark
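     To illustrate what I believe cam_xyz_coeff() is doing when "-w" is not given - this is my reading of the code, sketched in numpy with a hypothetical ColorMatrix rather than the real D5300 values - it builds an sRGB->Camera matrix and takes the reciprocal of each row sum as a white balance multiplier. Those row sums are simply the camera's response to a D65 white, which is why the default balance is effectively D65:

         import numpy as np

         # Hypothetical stand-in for the hardcoded ColorMatrix (XYZ -> CameraRGB,
         # D65 calibration); dcraw holds the real per-camera values from Adobe
         cam_xyz = np.array([[ 0.9, -0.3, -0.1],
                             [-0.5,  1.4,  0.1],
                             [-0.1,  0.2,  0.7]])

         # The standard sRGB -> XYZ(D65) matrix (dcraw's xyz_rgb)
         xyz_rgb = np.array([[0.412453, 0.357580, 0.180423],
                             [0.212671, 0.715160, 0.072169],
                             [0.019334, 0.119193, 0.950227]])

         # My reading of cam_xyz_coeff(): form sRGB->Camera, then take reciprocal
         # row sums; each row sum is that camera channel's response to a D65 white
         cam_rgb = cam_xyz @ xyz_rgb
         pre_mul = 1.0 / cam_rgb.sum(axis=1)
         print(pre_mul / pre_mul.min())   # normalised roughly the way "-v" reports them
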
  24. Here's a natural colour image of NGC7000 taken with an unmodified Nikon Z6 and processed to keep the colour as accurate as possible. Ignore the blue at the top right which was an internal reflection from a star. Full size version can be found on Astrobin: NGC7000 ( sharkmelley ) - AstroBin Mark
  25. I'm beginning to see where our disconnect is. You are describing CIE XYZ as an absolute colour space and strictly speaking that's true. With that assumption, the RGB values recorded by the camera can be directly transformed into absolute coordinates in CIE XYZ using a single matrix multiplication without reference to any scene illuminant. Most of what you have said so far is correct and makes complete sense in the context of an absolute colour space. However, that is not how the standard colour managed processing chain works. The whole architecture of colour management and ICC profiles is built upon the Profile Connection Space (PCS). The PCS is based upon CIE XYZ but it is not an absolute colour space. Instead the illuminant D50 is defined as its white point. Therefore the transformation from camera RGB to the PCS involves a white balance correction from the scene illuminant (e.g. Tungsten, Daylight, Cloudy, Fluorescent) to D50. This is what is being described in chapter 6 of the Adobe DNG Specification: "Mapping Camera Color Space to CIE XYZ Space". The same steps are performed by Photoshop's Camera Raw and a slightly simplified version is performed by DCRAW. Mark
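     To summarise the whole chain in code form, here is a minimal numpy sketch. The CameraRGB->XYZ matrix is purely hypothetical (a real converter derives or interpolates it from the camera's calibration data), and I've assumed a D65 scene white so I can reuse Lindbloom's published D65->D50 Bradford matrix; the point is only to show where the white balance to D50 sits relative to the standard XYZ(D50)->sRGB step:

         import numpy as np

         # Hypothetical CameraRGB -> XYZ matrix for the scene illuminant (a real raw
         # converter derives/interpolates this from the camera's calibration matrices)
         cam_to_xyz_scene = np.array([[0.60, 0.25, 0.10],
                                      [0.30, 0.65, 0.05],
                                      [0.05, 0.10, 0.90]])

         # Adaptation/white balance from the scene white (assumed D65 here) into the
         # D50-based profile connection space - Lindbloom's published Bradford matrix
         adapt_to_d50 = np.array([[ 1.0478112,  0.0228866, -0.0501270],
                                  [ 0.0295424,  0.9904844, -0.0170491],
                                  [-0.0092345,  0.0150436,  0.7521316]])

         # Standard XYZ(D50) -> linear sRGB matrix (Bradford-adapted, from Lindbloom)
         xyz_d50_to_srgb = np.array([[ 3.1338561, -1.6168667, -0.4906146],
                                     [-0.9787684,  1.9161415,  0.0334540],
                                     [ 0.0719453, -0.2289914,  1.4052427]])

         # The chain: CameraRGB -> XYZ(scene) -> XYZ(D50) (the PCS) -> sRGB
         camera_rgb = np.array([0.42, 0.55, 0.38])       # some demosaiced camera pixel
         xyz_scene  = cam_to_xyz_scene @ camera_rgb      # into XYZ for the scene illuminant
         xyz_pcs    = adapt_to_d50 @ xyz_scene           # white balanced/adapted into the PCS
         srgb_lin   = xyz_d50_to_srgb @ xyz_pcs          # the standard PCS -> sRGB matrix
         srgb = np.clip(srgb_lin, 0, 1) ** (1 / 2.2)     # rough stand-in for the sRGB gamma curve
         print(srgb)
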