About sharkmelley

Profile Information

  • Location
    Tenterden, Kent
  1. I like your idea of plotting deltaE on the xy chromaticity diagram. Mark
  2. Going back to the original question, the RGB filter sets with a gap at the sodium wavelength would have caused an interesting problem for imaging comet Neowise in 2020. If you remember, Neowise had orange in its tail caused by sodium emission. A "gapped" RGB filter set would actually record nothing for this part of Neowise's tail. In that sense the gamut of a sensor is a meaningful idea, i.e. the gamut is the range of colours that can be recorded, and some colours, such as the sodium wavelength in the above example, cannot be recorded at all. For RGB filter sets (or OSC cameras) with overlapping transmission bands, all colours can be recorded, so the filter set is full gamut in that sense. However, the problem is that many different colours will be recorded with the same RGB ratios - a good example is imaging a rainbow with sharp-cutoff RGB filters. The result will be three distinct bands of red, green and blue, as @vlaiv mentions earlier. The problem here is the inability to distinguish between distinct colours, i.e. many different colours give exactly the same RGB values. This is the concept of "metameric failure": the inability of the RGB filters to distinguish between colours that the human eye sees as being different. Mark
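The metameric failure described above can be shown numerically. The sketch below uses idealised sharp-cutoff filters (the 100 nm band edges are an assumption for illustration, not real filter data): a narrow spectral line and a broad band produce identical RGB triples even though they are very different colours to the eye.

```python
import numpy as np

# Wavelength grid, 400-700 nm in 10 nm steps
wl = np.arange(400, 701, 10)

# Idealised sharp-cutoff filters (band edges chosen for illustration)
blue  = ((wl >= 400) & (wl < 500)).astype(float)
green = ((wl >= 500) & (wl < 600)).astype(float)
red   = ((wl >= 600) & (wl <= 700)).astype(float)

def record(spectrum):
    """Integrate a spectrum through the three filters -> (R, G, B)."""
    return np.array([spectrum @ red, spectrum @ green, spectrum @ blue])

# Two very different spectra: a narrow line at 520 nm ...
line = np.where(wl == 520, 10.0, 0.0)
# ... and a flat band across 500-600 nm with the same total energy
flat = green * (line.sum() / green.sum())

# Both integrate to identical RGB triples, so the filter set
# cannot distinguish colours the eye sees as different
print(record(line), record(flat))
```

With overlapping filter curves (as in a real OSC camera) the two spectra would give different RGB ratios, which is exactly the point made above.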
  3. I don't have any more to contribute so this will be my last post in this thread. In parting, I will mention the following: Adobe's DNG specification has a whole chapter devoted to "mapping between the camera color space coordinates (linear reference values) and CIE XYZ (with a D50 white point)." The principle of colour management across different devices (the reason you can open a JPG encoded in sRGB, AdobeRGB, AppleRGB etc. with correct looking colours on your screen) is defined in the ICC Specification and is built upon what it calls the profile connection space (PCS) which can be either CIE XYZ or CIE LAB where "the chromaticity of the D50 illuminant shall define the chromatic adaptation state associated with the PCS". The image file embeds an ICC profile and the screen has its own ICC profile. They communicate via the PCS. Mark
  4. Your statement is true but it doesn't exactly represent the sequence of operations in the Adobe Camera Raw processing engine:
     - The D65 raw camera data would be transformed to XYZ(D50) by the D65 version of the CameraRaw -> XYZ(D50) matrix, which implicitly includes white balancing from D65 to D50.
     - The D50 raw camera data would be transformed to XYZ(D50) by the D50 version of the CameraRaw -> XYZ(D50) matrix, with no white balancing required because they share the same white reference.
     - The transformation from XYZ(D50) is done using the Bradford-adapted XYZ(D50)->sRGB matrix, so the D50 point becomes (1,1,1) in sRGB.
     Mark
  5. Unfortunately it's very difficult to find anything that explains this and I certainly haven't done a very good job! I found this series of articles by Jack Hogan quite useful but I'm not sure they'll tell you what you want to know: Color: From Capture to Eye | Strolls with my Dog Color: Determining a Forward Matrix for Your Camera | Strolls with my Dog Linear Color: Applying the Forward Matrix | Strolls with my Dog Mark
  6. We are transforming the camera raw data so that the chosen white (in this case illuminant A) will end up with coordinates (0.34567, 0.35850) in XYZ(D50); everything else will end up at different points in XYZ(D50) space, relative to that reference white. If "D65" were chosen instead, then the camera raw data would receive a different transformation, one that makes "D65" end up with coordinates (0.34567, 0.35850) in XYZ(D50). XYZ(D50) is a perceptual colour space where everything is relative to the chosen white. Think about what happens when you apply a chromatic adaptation matrix to the XYZ space: the XYZ coordinates are scaled in a way that would, for instance, move illuminant A to the D50 point. XYZ(D50) is effectively an XYZ space to which a chromatic adaptation has been applied. Mark
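The (0.34567, 0.35850) figure quoted above is just the D50 white point projected onto the xy chromaticity plane. A minimal check, using the standard D50 tristimulus values:

```python
def xyz_to_xy(X, Y, Z):
    """Project tristimulus XYZ onto the xy chromaticity plane."""
    s = X + Y + Z
    return X / s, Y / s

# CIE D50 white point (2-degree observer), normalised to Y = 1
x, y = xyz_to_xy(0.96422, 1.00000, 0.82521)
print(round(x, 5), round(y, 5))  # 0.34567 0.3585
```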
  7. I'll answer with an example. Illuminate a ColorChecker with illuminant A i.e. incandescent light. The grey patches appear neutral to the human observer because of eye/brain adaptation. Take a photo of it - a raw file. Open the raw file in the raw converter using the "Incandescent" setting. The processing engine will generate a XYZ(D50) intermediate colour space (i.e. the PCS) where the grey patches of the ColorChecker will have xy chromaticity of (0.34567, 0.35850) i.e. D50. When XYZ(D50) is transformed to sRGB, AdobeRGB etc. the XYZ(D50) chromaticity of (0.34567, 0.35850) will be mapped to (1,1,1), (0.7, 0.7, 0.7) etc. depending on intensity which will appear white/grey. So to the human observer, the ColorChecker patches that appeared white or grey in the original scene will appear white or grey in the final image. I didn't realise you had access to a spectrometer but it's certainly a good alternative for calibrating your CCM from a ColorChecker. Mark
  8. XYZ(D50) is the Profile Connection Space (PCS). It is based on XYZ but it's a perceptual colour space where the illuminant of any photographed scene is mapped to the D50 chromaticity. In other words, when you open an image in your raw converter, the temperature/tint combination that you actively choose (or is chosen by default) will be mapped to D50 in the PCS. The PCS is at the heart of all colour management because ICC profiles are based on the PCS. It is the standard used by displays, printers etc. I honestly cannot see how a ColorChecker can be used to generate a Colour Correction Matrix without reference to the PCS. By the way it is really worth looking at the BabelColor spreadsheet I linked earlier: https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip It's a fascinating resource which contains ColorChecker coordinates in many different colour spaces, deltaE stats and even spectral power distributions of each colour patch of the ColorChecker. The ColorChecker pages on BabelColor are also very informative: The ColorChecker Pages (Page 1 of 3) (babelcolor.com) Mark
  9. I'm suspicious of the chart you showed giving the sRGB and CIE L*a*b* coordinates for the ColorChecker patches. The CIE L*a*b* figures look good and agree well with the CIE L*a*b* (D50) figures in the BabelColor spreadsheet: https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip But the sRGB figures are not correct - for instance the Cyan patch should be out of gamut in sRGB.
     In any case, I think a practical approach to calibrating a colour matrix would be as follows:
     - Illuminate the ColorChecker with daylight and take a photo.
     - Perform RGB scaling on the raw data so that the grey ColorChecker patches appear neutral, i.e. Rvalue=Gvalue=Bvalue.
     - Solve for the matrix that does a best-fit transform from the scaled CameraRGB values to the known ColorChecker values in XYZ(D50). It's best to perform the least squares fit in L*a*b* (D50) - the well-known deltaE error approach.
     The problem is that you won't know exactly what your daylight illuminant was, so we need an additional calibration against a known illuminant. So take an image of a starfield and find the RGB scaling on the raw data that makes a G2V star neutral - PixInsight PhotometricColorCalibration might help here. Using this scaling and the matrix just calibrated, we create a new matrix CameraRaw->XYZ(D50) that maps a G2V star to the D50 chromaticity in XYZ. Now apply chromatic adaptation from D50 to the chromaticity of G2V. The result is a matrix that will map the colour of G2V from CameraRaw to the G2V chromaticity in XYZ. You can then apply your XYZ->sRGB matrix - the one with no chromatic adaptation.
     Personally I'm happy omitting the final step, so I'm happy for the G2V star to map to the D50 chromaticity in XYZ and then to become (1,1,1) in sRGB using the XYZD50->sRGB matrix. This would then have the appearance of white to me, which is what I'm trying to achieve, but I accept it differs from your goal. In fact I would use the XYZD50->AdobeRGB matrix because the variable gamma of the sRGB colour space makes subsequent colour-preserving operations very difficult. The main weakness of the whole procedure is that the resulting matrix will be subtly different depending on the exact illuminant used to take the original image of the ColorChecker. I don't know what the answer to that is. Mark
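The matrix-solving step of the procedure above can be sketched as a linear least-squares fit. This is a simplification: it fits in XYZ directly rather than the deltaE fit in L*a*b* recommended in the post, and the patch values here are synthetic stand-ins, not real ColorChecker data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 24 patches of white-balanced CameraRGB and
# their "known" XYZ(D50) targets, linked by a made-up true matrix
true_M = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])
cam_rgb = rng.uniform(0.05, 1.0, size=(24, 3))
xyz_d50 = cam_rgb @ true_M.T

# Solve cam_rgb @ M.T ~= xyz_d50 in the least-squares sense
M_T, *_ = np.linalg.lstsq(cam_rgb, xyz_d50, rcond=None)
ccm = M_T.T
print(np.round(ccm, 3))
```

With real, noisy patch measurements the fit would not recover the matrix exactly, which is why a perceptually weighted (deltaE) fit is preferable in practice.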
  10. I think what you are ultimately trying to do is to generate a matrix that goes from CameraRGB to absolute XYZ, in the emissive case. I'll need to think about the details when I have some time - especially the fact that the Correction Matrices are different for different illuminants - it's an interesting puzzle which I've never thought carefully about before. Mark
  11. Fair enough - it's pointless arguing about terminology. But I still don't understand what you meant when you wrote: CameraRAW->D50 adaptation->XYZ->(D50 adapted)sRGB [Edit: Don't worry - I've worked it out by re-reading the whole post. ] Mark
  12. I don't understand what you've written above. Chromatic adaptation has a precise meaning. Typically, when we transform from one colour space to another, we need to perform a white balancing operation; simplistically, this is a scaling of the XYZ values in the profile connection space. But in addition we need to perform chromatic adaptation, which "twists" the XYZ colour space so that non-white colours will be perceived "correctly" when the white balance changes. Chromatic adaptation leaves whites and greys unchanged. Unfortunately it is not possible to create a matrix that performs the pure adaptation alone (e.g. Bradford, Von Kries); it must be combined into a matrix that also performs the white balancing. You can see this on Lindbloom's page: Chromatic Adaptation (brucelindbloom.com) So the matrix CameraRGB->XYZ(D50) will combine white balancing and chromatic adaptation into a single matrix. Those are not the only things the matrix does, because it also includes the transformation of the colour primaries, of course. [Edit: Rewritten for accuracy ] Mark
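The combined matrix described above (the construction on Lindbloom's chromatic adaptation page) can be sketched directly: the white-balance part is a diagonal scaling in Bradford cone space, sandwiched between the Bradford matrix and its inverse. The white points used here are the standard illuminant A and D50 values.

```python
import numpy as np

# Bradford cone response matrix
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adaptation_matrix(src_white, dst_white):
    """White balancing (a diagonal scaling in cone space) combined
    with Bradford adaptation into a single XYZ->XYZ matrix."""
    src = BRADFORD @ src_white
    dst = BRADFORD @ dst_white
    return np.linalg.inv(BRADFORD) @ np.diag(dst / src) @ BRADFORD

A   = np.array([1.09850, 1.00000, 0.35585])  # illuminant A white point
D50 = np.array([0.96422, 1.00000, 0.82521])  # D50 white point

M = adaptation_matrix(A, D50)
# By construction the source white maps exactly onto the target white,
# while non-neutral colours are "twisted" rather than merely scaled
print(M @ A)
```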
  13. You pose a very good question! The XYZ result was unexpected (by me), so it required me to take a more detailed look at the DCRAW code to work out what was going on. First of all, DCRAW is generating the output I would expect in sRGB, given that it assumes a D65 illuminant for its white balance - remember DCRAW only has the D65 version of the Adobe DNG XYZ(D50)->CameraRGB matrix and so it is forced to assume a D65 illuminant by default. The important DCRAW function is convert_to_rgb() and this uses the matrix xyzd50_srgb defined as follows:

      static const double xyzd50_srgb[3][3] =
      { { 0.436083, 0.385083, 0.143055 },
        { 0.222507, 0.716888, 0.060608 },
        { 0.013930, 0.097097, 0.714022 } };

      This means that DCRAW will use the process sequence I have already described, i.e. CameraRGB->XYZ(D50)->sRGB, where XYZ(D50) is XYZ with D50 as a white reference. Everything works exactly as expected for sRGB output. However, when XYZ output is requested, DCRAW takes the sRGB result and uses the embedded xyz_rgb matrix you referred to above (not xyzd50_srgb) to go back from sRGB to XYZ. In other words the XYZ output that DCRAW generates is not the intermediate XYZ(D50) colour space a.k.a. the profile connection space. Mark
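A quick numerical check on the xyzd50_srgb matrix quoted above: its rows sum to the D50 white point, so it maps linear sRGB (1,1,1) to D50 white, and its inverse (the XYZ(D50)->sRGB direction) maps D50 white back to (1,1,1).

```python
import numpy as np

# The matrix embedded in DCRAW's convert_to_rgb(), quoted above
xyzd50_srgb = np.array([[0.436083, 0.385083, 0.143055],
                        [0.222507, 0.716888, 0.060608],
                        [0.013930, 0.097097, 0.714022]])

d50_white = np.array([0.96422, 1.00000, 0.82521])

# sRGB (1,1,1) -> XYZ lands on the D50 white point ...
print(xyzd50_srgb @ np.ones(3))
# ... so the inverse (XYZ(D50) -> linear sRGB) maps D50 white to (1,1,1)
print(np.linalg.inv(xyzd50_srgb) @ d50_white)
```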
  14. You would get [1,1,1] (or a less bright version) for the D50 case and something a bit bluer for the D65 case. Mark
  15. If we assume for simplicity that the 0,0 setting corresponds to D50 then the coordinates of the white paper colour in the two XYZ images would correspond to D50 and D65. Mark