Posts posted by sharkmelley

  1. On 04/05/2021 at 11:47, MarsG76 said:

    The only "processing" carried out was to convert the DSLR image to grayscale and match the histogram of the QHY raw to resemble the unstretched DSLR image out of the camera... reason for the stretch is that the DSLR does an in-camera stretch before exporting the CR2 RAW.

    I'm not sure what you mean.  The values in the CR2 RAW are linear unstretched values, just like the values in the QHY raw file are linear unstretched values.
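
    If you want to check this for yourself, here's a quick sketch using the Python rawpy package (the file name is just a placeholder) that reads the untouched linear sensor counts straight out of a CR2:

      import rawpy
      import numpy as np

      # Open the CR2 and access the raw Bayer data before any demosaicing or tone curve
      with rawpy.imread("IMG_0001.CR2") as raw:   # hypothetical file name
          data = raw.raw_image_visible.astype(np.float64)
          print("black level(s):", raw.black_level_per_channel)
          print("min/max ADU:", data.min(), data.max())
          # These are linear sensor counts; any "stretch" you see in a normal
          # raw converter is applied later, during rendering.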

    Mark

  2. 14 hours ago, rsarwar said:

    Was wondering if anyone knows how ISO values from DSLRs are to be mapped onto the Gain scale. Without the right mapping, this comparison will not be possible.

    This site gives you the ISO for unity gain (i.e. 1e-/ADU): DxOMark Derived Sensor Characteristics (photonstophotos.net)

    The QHY268 gain setting for unity gain (1e-/ADU) will probably be in the documentation.  You can then adjust upwards and downwards from there.  You will want to choose a gain where both the DSLR and the QHY268 are operating in the same mode, i.e. high gain or low gain.
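
    As a rough guide, gain in e-/ADU scales inversely with ISO, so once you know the unity-gain ISO you can estimate the rest.  A sketch with a made-up unity-gain ISO (read the real value off the chart for your camera):

      # Rough sketch: gain (e-/ADU) scales inversely with ISO, so once you know
      # the unity-gain ISO from photonstophotos.net you can estimate other ISOs.
      def gain_at_iso(iso, unity_gain_iso):
          """Approximate gain in e-/ADU at a given ISO (ignores dual-gain steps)."""
          return unity_gain_iso / iso

      unity_gain_iso = 400   # hypothetical value - read it off the chart for your camera
      for iso in (100, 200, 400, 800, 1600):
          print(iso, round(gain_at_iso(iso, unity_gain_iso), 2), "e-/ADU")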

    Mark

    • Thanks 1
  3. 14 hours ago, MarsG76 said:

    I see your point but this cannot be right... if it were, it would all but make it pointless to spend the premium money on astro cams, and sticking to modded DSLRs with big pixels would deliver the same or similar results...

    The main difference is the noise reduction from the cooling.  Otherwise a modded DSLR/mirrorless can produce very similar results because in many cases dedicated one-shot-colour astrocams use the same sensor as a modded DSLR/mirrorless camera.
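
    To put a rough number on the cooling advantage: dark current roughly halves for every 5-6°C drop in sensor temperature, so the dark-signal noise in each sub falls accordingly.  A sketch with illustrative (not measured) numbers:

      import math

      def dark_noise(dark_current_20C, temp_C, exposure_s, doubling_C=5.8):
          """Thermal (dark-signal shot) noise in electrons for one exposure.

          dark_current_20C : dark current in e-/pixel/s at 20C (assumed figure)
          doubling_C       : temperature change that doubles the dark current (rule of thumb)
          """
          dark_current = dark_current_20C * 2 ** ((temp_C - 20.0) / doubling_C)
          return math.sqrt(dark_current * exposure_s)

      print("uncooled DSLR at 25C :", round(dark_noise(0.2, 25, 300), 1), "e-")
      print("cooled camera at -10C:", round(dark_noise(0.2, -10, 300), 1), "e-")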

    Mark

  4. The 40D is approx 14 years old!  There have been huge advances since then.  Look at this chart comparing the 40D to a more recent camera:

    https://www.photonstophotos.net/Charts/Sensor_Characteristics.htm#Canon EOS 40D_14,Canon EOS 7D Mark II_14

    The 7DII has nearly twice the QE of the 40D.  That means the sensor as a whole will capture nearly twice as many photons from the same scene in the same exposure time.
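
    As a back-of-envelope illustration, for a shot-noise-limited target doubling the detected photons improves SNR by about 1.4x, i.e. you reach the same SNR in roughly half the exposure time.  The QE and flux numbers below are only illustrative:

      import math

      photon_flux = 1000.0   # photons/pixel/s arriving from the scene (illustrative)
      exposure = 60.0        # seconds

      for name, qe in (("40D-like sensor", 0.33), ("7DII-like sensor", 0.60)):  # assumed QE values
          detected = photon_flux * exposure * qe
          snr = detected / math.sqrt(detected)   # shot-noise-limited SNR = sqrt(N)
          print(f"{name}: {detected:.0f} e- detected, SNR ~ {snr:.0f}")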

    Mark

    • Thanks 1
    The Canon 40D uses an old sensor with very low QE (quantum efficiency).  A more recent camera will have vastly improved QE.  You can either choose a modern consumer camera and have it modified, or choose a dedicated OSC astro-camera; in many cases they use exactly the same sensor.  Both will be a huge improvement over the modified 40D.  Obviously the dedicated astro-camera will have much lower thermal noise because of cooling.

    Mark

    • Thanks 1
  6. 8 hours ago, barrie greenwood said:

    Tried this out but unfortunately the Nikon D90 isn't supported by it 😕

    You're right - sorry about that, I need to fix it!  The Nikon D90 uses exactly the same compression scheme as the D5300, so in the meantime the earlier version (version 2) will work fine because it applies the Nikon D5300 correction, which is identical to the required D90 correction.

    https://www.cloudynights.com/topic/746131-nikon-coloured-concentric-rings/?p=10860569

    Mark

    There's been a lot of discussion in this thread about the walking noise but none about the coloured rings in the background.  The coloured rings are almost certainly caused by Nikon's lossy data compression.  It's a common problem when imaging with Nikons and it was discussed in a recent thread over on Cloudy Nights:

    Nikon Coloured Concentric Rings - DSLR, Mirrorless & General-Purpose Digital Camera DSO Imaging - Cloudy Nights

    Mark

    • Like 1
  8. 1 hour ago, ajh499 said:

    On a 183mm?

    I didn't think that happened with that sensor

    It's the first example I've seen from the ASI183MM but the evidence here is quite conclusive.  It's very obvious in the blue channel.

    The OAG won't be the cause.  The only way you can reproduce this on a cloudy night is to use an incandescent (i.e. full spectrum) artificial star at a sufficient distance from the scope, so you can focus it.

    Mark

  9. I haven't used Affinity Photo but I'm happy to share the x,y values of the Photoshop curves, if that helps.  If you need them in the range 0-1, you can scale them.

    x Arcsinh10 Arcsinh30 Arcsinh100 Arcsinh300 Arcsinh1000
    0 0 0 0 0 0
    4 12 22 34 51 65
    8 25 39 52 70 84
    14 41 57 71 89 103
    22 59 76 90 107 120
    32 78 95 108 124 136
    44 97 113 125 139 151
    62 120 134 145 158 168
    82 142 154 163 174 183
    105 163 173 181 190 197
    130 182 191 197 204 210
    155 200 206 211 217 221
    180 215 220 224 228 231
    205 230 233 235 238 240
    230 243 244 245 247 248
    255 255 255 255 255 255
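
    If you'd rather apply the stretch directly to linear data than enter curve points, the general shape of a colour-preserving arcsinh stretch looks something like this.  It's just a sketch in Python - the exact function behind the tabulated Photoshop points may be parameterised differently, and a real implementation may use a weighted luminance rather than a plain mean:

      import numpy as np

      def arcsinh_stretch(rgb, stretch=100.0):
          """Colour-preserving arcsinh stretch of a linear image in the range 0-1.

          rgb     : float array of shape (H, W, 3), black point already subtracted
          stretch : the stretch factor (10, 30, 100, ... as in the table above)

          Every channel of a pixel is multiplied by the same factor, so the
          R:G:B ratios (and hence star colour) are preserved.
          """
          # Use a plain mean as the reference intensity - an assumption; a real
          # implementation may use a weighted luminance instead.
          ref = np.mean(rgb, axis=2, keepdims=True)
          ref = np.clip(ref, 1e-6, None)   # avoid divide-by-zero
          factor = np.arcsinh(stretch * ref) / (ref * np.arcsinh(stretch))
          return np.clip(rgb * factor, 0.0, 1.0)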


    Mark.

    Going back to the original question, the RGB filter sets with a gap at the sodium wavelength would have caused an interesting problem for imaging comet Neowise in 2020.  If you remember, Neowise had orange in its tail caused by sodium emission.  A "gapped" RGB filter set would record nothing at all for this part of Neowise's tail.  In that sense the idea of a sensor's gamut is meaningful, i.e. the gamut is the range of colours that can be recorded.  Some colours, such as the sodium wavelength in the above example, cannot be recorded at all.

    For RGB filter sets (or OSC cameras) with overlapping transmission bands, all colours can be recorded, so the filter set is full gamut in that sense.  However, the problem is that many different colours will be recorded with the same RGB ratios - a good example is imaging a rainbow with sharp-cutoff RGB filters.  The result will be three distinct bands of red, green and blue, as @vlaiv mentions earlier.  The problem here is the inability to distinguish between distinct colours, i.e. many different colours give exactly the same RGB values.  This is the concept of "metameric failure", i.e. the inability of the RGB filters to distinguish between colours that the human eye sees as being different.

    Mark

    • Like 3
  11. 53 minutes ago, vlaiv said:

    Ok, for the benefit of others reading this thread, I must comment on this statement:

    - XYZ(D50) is a non-existent color space; it has no definition. Using that term as the name for a color space is simply wrong

    I don't have any more to contribute so this will be my last post in this thread.

    In parting, I will mention the following:

    • Adobe's DNG specification has a whole chapter devoted to "mapping between the camera color space coordinates (linear reference values) and CIE XYZ (with a D50 white point)."
    • The principle of colour management across different devices (the reason you can open a JPG encoded in sRGB, AdobeRGB, AppleRGB etc. with correct looking colours on your screen) is defined in the ICC Specification and is built upon what it calls the profile connection space (PCS) which can be either CIE XYZ or CIE LAB where "the chromaticity of the D50 illuminant shall define the chromatic adaptation state associated with the PCS".  The image file embeds an ICC profile and the screen has its own ICC profile.  They communicate via the PCS.

    Mark

  12. 11 minutes ago, vlaiv said:

    Here is one example, I made a list of claims I think are true. Here is one of them:

    - if you have white paper that you shot and it was illuminated with some illuminant - say D50 or D55 and you want to show it as white color paper in sRGB, you need to do white balancing D50->D65 or D55->D65 and that will show paper color as 1,1,1 or white in sRGB. You do white balancing by using particular Bradford adapted XYZ->sRGB transform matrix (either D50 or D55)

    Your statement is true but it doesn't exactly represent the sequence of operations in the Adobe Camera Raw processing engine.

    The D65 raw camera data would be transformed to XYZ(D50) by the D65 version of the CameraRaw -> XYZ(D50) matrix, which implicitly includes white balancing from D65 to D50.

    The D50 raw camera data would be transformed to XYZ(D50) by the D50 version of the CameraRaw -> XYZ(D50) matrix, with no white balancing required because they have the same white reference.

    The transformation from XYZ(D50) to sRGB is done using the Bradford-adapted XYZ(D50)->sRGB matrix, so the D50 white point becomes (1,1,1) in sRGB.
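
    As a rough illustration of that order of operations (the CameraRaw -> XYZ(D50) matrix below is a made-up placeholder - in practice it comes from the camera profile - while the XYZ(D50) -> sRGB matrix is the commonly quoted Bradford-adapted one):

      import numpy as np

      # Placeholder CameraRaw -> XYZ(D50) forward matrix for the chosen white balance.
      # In a real workflow this comes from the camera profile (e.g. the DNG ForwardMatrix).
      CAM_TO_XYZ_D50 = np.array([[0.7, 0.2, 0.1],
                                 [0.3, 0.6, 0.1],
                                 [0.0, 0.1, 0.7]])   # hypothetical numbers

      # Bradford-adapted XYZ(D50) -> linear sRGB matrix, as tabulated by Bruce Lindbloom.
      XYZ_D50_TO_SRGB = np.array([[ 3.1338561, -1.6168667, -0.4906146],
                                  [-0.9787684,  1.9161415,  0.0334540],
                                  [ 0.0719453, -0.2289914,  1.4052427]])

      def render(camera_rgb):
          """camera_rgb: linear camera values after the raw white-balance scaling, shape (..., 3)."""
          xyz_d50 = camera_rgb @ CAM_TO_XYZ_D50.T   # camera space -> PCS
          return xyz_d50 @ XYZ_D50_TO_SRGB.T        # PCS -> linear sRGB (gamma comes later)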

    Mark

  13. 34 minutes ago, vlaiv said:

    I still have not found exact definition of XYZ(D50) space - what does coordinate in it mean? In fact, if I do search on google for XYZ(D50) - I don't get anything related to it - just text mentioning regular XYZ color space and D50 illuminant.

    Unfortunately it's very difficult to find anything that explains this and I certainly haven't done a very good job!

    I found this series of articles by Jack Hogan quite useful but I'm not sure they'll tell you what you want to know:

    Mark

  14. 3 minutes ago, vlaiv said:

    I don't really follow this.

    That means that in XYZ(D50) space - Illuminant A has D50 coordinates? How about Illuminant D65, does it also have D50 coordinates? From your description it follows that it does.

    We can easily see that XYZ(D50) space is not a very useful space, as all colors map to D50 (we can use an illuminant of any color and, by analogy with the above, it will end up being D50).

    We are transforming the camera raw data so that the chosen white (in this case illuminant A) will end up with coordinates (0.34567, 0.35850) in XYZ(D50).  Everything else will end up at different points in XYZ(D50) space.  If "D65" were chosen instead, then the camera raw data would receive a different transformation that makes "D65" end up with coordinates (0.34567, 0.35850) in XYZ(D50).

    XYZ(D50) is a perceptual colour space where everything is relative to the chosen white. The data are transformed so that the chosen white lands on (0.34567, 0.35850) in this space and all other colours will land relative to that reference white.

    Think about what happens when you apply a chromatic adaptation matrix to the XYZ space.  The XYZ coordinates are scaled which would, for instance, move illuminant A to the D50 point.  XYZ(D50) is effectively an XYZ space to which a chromatic adaptation has been applied.
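
    To make that concrete, here's roughly how a Bradford chromatic adaptation matrix taking illuminant A to D50 is constructed.  This is only a sketch, using the standard Bradford matrix and published white-point values (rounded):

      import numpy as np

      # Bradford cone response matrix (the standard published values)
      M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                             [-0.7502,  1.7135,  0.0367],
                             [ 0.0389, -0.0685,  1.0296]])

      # White points as XYZ with Y = 1
      WP_A   = np.array([1.09850, 1.00000, 0.35585])   # illuminant A
      WP_D50 = np.array([0.96422, 1.00000, 0.82521])   # D50

      def bradford_adaptation(src_wp, dst_wp):
          """3x3 matrix that maps XYZ under src_wp to XYZ under dst_wp."""
          cone_src = M_BRADFORD @ src_wp
          cone_dst = M_BRADFORD @ dst_wp
          scale = np.diag(cone_dst / cone_src)
          return np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD

      adapt = bradford_adaptation(WP_A, WP_D50)
      print(adapt @ WP_A)   # ~ [0.96422, 1.0, 0.82521]: illuminant A now lands on D50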

    Mark


  15. 4 minutes ago, vlaiv said:

    How are the coordinates of light sources different in XYZ(D50) than in XYZ color space?

    Say I have these:

    [attached image: a list of light sources and their coordinates]

    What coordinates will those have in XYZ(D50)?

    I'll answer with an example. 

    Illuminate a ColorChecker with illuminant A, i.e. incandescent light.  The grey patches appear neutral to the human observer because of eye/brain adaptation.  Take a photo of it - a raw file.  Open the raw file in the raw converter using the "Incandescent" setting.  The processing engine will generate an XYZ(D50) intermediate colour space (i.e. the PCS) where the grey patches of the ColorChecker have an xy chromaticity of (0.34567, 0.35850), i.e. D50.  When XYZ(D50) is transformed to sRGB, AdobeRGB etc., that chromaticity will be mapped to (1,1,1), (0.7, 0.7, 0.7) etc., depending on intensity, which will appear white/grey.  So to the human observer, the ColorChecker patches that appeared white or grey in the original scene will appear white or grey in the final image.

    I didn't realise you had access to a spectrometer, but it's certainly a good alternative to calibrating your CCM from a ColorChecker.

    Mark

  16. 16 minutes ago, vlaiv said:

    Can you give me definition of XYZ(D50)?

    XYZ(D50) is the Profile Connection Space (PCS).  It is based on XYZ but it's a perceptual colour space where the illuminant of any photographed scene is mapped to the D50 chromaticity.  In other words, when you open an image in your raw converter, the temperature/tint combination that you actively choose (or is chosen by default) will be mapped to D50 in the PCS.

    The PCS is at the heart of all colour management because ICC profiles are based on the PCS.  It is the standard used by displays, printers etc.

    I honestly cannot see how a ColorChecker can be used to generate a Colour Correction Matrix without reference to the PCS.

    By the way it is really worth looking at the BabelColor spreadsheet I linked earlier: https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip

    It's a fascinating resource which contains ColorChecker coordinates in many different colour spaces, deltaE stats and even the spectral power distribution of each colour patch of the ColorChecker.  The ColorChecker pages on BabelColor are also very informative: The ColorChecker Pages (Page 1 of 3) (babelcolor.com)

    Mark


    I'm suspicious of the chart you showed giving the sRGB and CIE L*a*b* coordinates for the ColorChecker patches.  The CIE L*a*b* figures look good and agree well with the CIE L*a*b* (D50) figures in the BabelColor spreadsheet: https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip  However, the sRGB figures are not correct - for instance the Cyan patch should be out of gamut in sRGB.

    In any case, I think a practical approach to calibrating a colour matrix would be as follows:

    • Illuminate the ColorChecker with daylight and take a photo
    • Perform RGB scaling on the raw data so that the grey ColorChecker patches appear neutral, i.e. Rvalue = Gvalue = Bvalue
    • Solve for the matrix that does a best-fit transform from the scaled CameraRGB values to the known ColorChecker values in XYZ(D50).  It's best to perform the least-squares fit in L*a*b* (D50) - the well-known deltaE error approach.  A sketch of the fitting step is shown below.
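
    A minimal sketch of that fitting step, using an ordinary least-squares solve in XYZ rather than the full deltaE minimisation in L*a*b* (which needs an iterative optimiser).  The CSV file names are hypothetical placeholders for your measured patch values and the published references:

      import numpy as np

      # Placeholder data: one row per ColorChecker patch.
      # camera_rgb : white-balance-scaled linear camera values for each patch
      # ref_xyz    : the published XYZ(D50) reference values for the same patches
      camera_rgb = np.loadtxt("patches_camera_rgb.csv", delimiter=",")    # hypothetical file
      ref_xyz    = np.loadtxt("patches_ref_xyz_d50.csv", delimiter=",")   # hypothetical file

      # Solve ref_xyz ~ camera_rgb @ M for the 3x3 matrix M (ordinary least squares).
      # A deltaE fit in L*a*b*(D50) would instead minimise colour differences iteratively.
      M, residuals, rank, _ = np.linalg.lstsq(camera_rgb, ref_xyz, rcond=None)
      ccm = M.T   # CameraRGB -> XYZ(D50) colour correction matrix

      print("CameraRGB -> XYZ(D50):\n", ccm)
      print("fit residuals:", residuals)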

    The problem is that you won't know exactly what your daylight illuminant was, so you need to do an additional calibration against a known illuminant.  So take an image of a starfield and find the RGB scaling on the raw data that makes a G2V star neutral - PixInsight's PhotometricColorCalibration might help here.  Using this scaling and the matrix just calibrated, we create a new matrix CameraRaw->XYZ(D50) that maps a G2V star to the D50 chromaticity in XYZ.  Now apply chromatic adaptation from D50 to the chromaticity of G2V.  The result is a matrix that will map the colour of G2V from CameraRaw to the G2V chromaticity in XYZ.  You can then apply your XYZ->sRGB matrix - the one with no chromatic adaptation.

    Personally I'm happy omitting the final step, so the G2V star maps to the D50 chromaticity in XYZ and then becomes (1,1,1) in sRGB using the XYZ(D50)->sRGB matrix.  This then has the appearance of white to me, which is what I'm trying to achieve, but I accept it differs from your goal.  In fact I would use the XYZ(D50)->AdobeRGB matrix, because the variable gamma of the sRGB colour space makes subsequent colour-preserving operations very difficult.

    The main weakness of the whole procedure is that the resulting matrix will be subtly different depending on the exact illuminant used to take the original image of the ColorChecker.  I don't know what the answer to that is.

    Mark
