Everything posted by sharkmelley

  1. I'm not sure what you mean. The values in the CR2 RAW are linear unstretched values, just like the values in the QHY raw file are linear unstretched values. Mark
  2. This site gives you the ISO for unity gain (i.e. 1e-/ADU): DxOMark Derived Sensor Characteristics (photonstophotos.net) The QHY268 gain setting for unity gain (1e-/ADU) will probably be given in the documentation. You can then adjust upwards and downwards from there. You will want to choose a gain where both the DSLR and the QHY268 are operating in the same mode, i.e. high gain or low gain. Mark
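As a rough rule of thumb (an assumption about typical CMOS behaviour, not something stated in the post above): within a single readout mode the conversion gain in e-/ADU scales roughly inversely with the ISO/gain setting, so once you know the unity-gain ISO you can estimate the gain elsewhere. A minimal Python sketch, using a hypothetical unity-gain ISO:

    # Rough rule of thumb: gain (e-/ADU) ~ unity_gain_iso / iso within one readout mode.
    # The unity-gain ISO of 400 here is a hypothetical example value, not a measured figure.
    def gain_e_per_adu(iso, unity_gain_iso=400):
        return unity_gain_iso / iso

    print(gain_e_per_adu(200))   # ~2 e-/ADU
    print(gain_e_per_adu(400))   # 1 e-/ADU (unity gain)
    print(gain_e_per_adu(800))   # ~0.5 e-/ADU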
  3. There will be a big advantage because the modern sensor has at least twice the QE of your 40D sensor. You also get the advantages that mono gives you over one-shot-colour, i.e. you can shoot luminance, and narrowband is not impeded by the Bayer matrix. Mark
  4. I purposely said modded DSLR/mirrorless i.e. the bare sensor without the filter stack which is removed during modification.
  5. The main difference is the noise reduction from the cooling. Otherwise a modded DSLR/mirrorless can produce very similar results because in many cases dedicated one-shot-colour astrocams use the same sensor as a modded DSLR/mirrorless camera. Mark
  6. The 40D is approx 14 years old! There have been huge advances since then. Look at this chart comparing the 40D to a more recent camera: https://www.photonstophotos.net/Charts/Sensor_Characteristics.htm#Canon EOS 40D_14,Canon EOS 7D Mark II_14 The 7DII has nearly twice the QE of the 40D. That means the sensor as a whole will detect nearly twice as many photons from the same scene in the same exposure time. Mark
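A toy Python comparison of what the QE difference means for the detected signal (the QE numbers below are illustrative placeholders, not measured values for either camera):

    # Detected photo-electrons = incident photons x quantum efficiency.
    photons = 1000              # photons falling on a pixel during the exposure
    qe_old, qe_new = 0.3, 0.6   # hypothetical QE values, for illustration only
    print(photons * qe_old)     # 300 e- detected
    print(photons * qe_new)     # 600 e- detected: twice the signal for the same exposure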
  7. The Canon 40D uses an old sensor with very low QE (quantum efficiency). A more recent camera will have vastly improved QE. You can choose either a modern consumer camera and have it modified or a dedicated OSC astro-camera because in many cases they use exactly the same sensor. Both will be a huge improvement over the modified 40D. Obviously the dedicated astro-camera will have much lower thermal noise because of cooling. Mark
  8. The Panasonic sensor in the ASI1600MM behaves differently for exposures shorter than around 0.1 sec. So if the flats are longer than 0.1 sec (which they usually will be) then flat darks should be used. However this isn't a general rule for CMOS cameras. Mark
  9. You're right - sorry about that, I need to fix it! The Nikon D90 uses exactly the same compression scheme as the D5300, so in the meantime the earlier version (version 2) will work fine, because it applies the Nikon D5300 correction, which is identical to the required D90 correction. https://www.cloudynights.com/topic/746131-nikon-coloured-concentric-rings/?p=10860569 Mark
  10. There's been a lot of discussion in this thread about the walking noise but none about the coloured rings in the background. The coloured rings are almost certainly caused by Nikon's lossy data compression. It's a common problem when imaging with Nikons and it was discussed in a recent thread over on Cloudy Nights: Nikon Coloured Concentric Rings - DSLR, Mirrorless & General-Purpose Digital Camera DSO Imaging - Cloudy Nights Mark
  11. It certainly does have the microlens diffraction! In fact the ASI178 was one of the first cameras that we properly analysed: https://www.cloudynights.com/topic/635937-microlens-diffraction-effect-explained/?p=8967693
  12. It's the first example I've seen from the ASI183MM but the evidence here is quite conclusive. It's very obvious in the blue channel. The OAG won't be the cause, and the only way you can reproduce this on a cloudy night is to use an incandescent (i.e. full spectrum) artificial star at a sufficient distance from the scope so you can focus it. Mark
  13. It looks suspiciously like microlens diffraction. Mark
  14. I haven't used Affinity Photo but I'm happy to share the x,y values of the Photoshop curves, if that helps. If you need them in the range 0-1, you can scale them.

       x  Arcsinh10  Arcsinh30  Arcsinh100  Arcsinh300  Arcsinh1000
       0          0          0           0           0            0
       4         12         22          34          51           65
       8         25         39          52          70           84
      14         41         57          71          89          103
      22         59         76          90         107          120
      32         78         95         108         124          136
      44         97        113         125         139          151
      62        120        134         145         158          168
      82        142        154         163         174          183
     105        163        173         181         190          197
     130        182        191         197         204          210
     155        200        206         211         217          221
     180        215        220         224         228          231
     205        230        233         235         238          240
     230        243        244         245         247          248
     255        255        255         255         255          255

Mark.
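If you want to apply those curve points outside Photoshop, one option is to interpolate them into a lookup table. A minimal Python sketch (linear interpolation only; Photoshop fits a smooth spline through the control points, so this is just an approximation of the in-app result):

    import numpy as np

    # Control points copied from the table above (8-bit, 0-255 range), Arcsinh30 column.
    x = np.array([0, 4, 8, 14, 22, 32, 44, 62, 82, 105, 130, 155, 180, 205, 230, 255])
    arcsinh30 = np.array([0, 22, 39, 57, 76, 95, 113, 134, 154, 173, 191, 206, 220, 233, 244, 255])

    def apply_curve(img8, xs, ys):
        """Apply a tone curve defined by control points to an 8-bit integer image array."""
        lut = np.rint(np.interp(np.arange(256), xs, ys)).astype(np.uint8)  # 256-entry LUT
        return lut[img8]

    # Usage: stretched = apply_curve(img, x, arcsinh30)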
  15. It's a microlens diffraction pattern. There's a similar example from the same ASI178MM camera here: Strange diffraction pattern - Imaging - Discussion - Stargazers Lounge It's sometimes confusingly referred to as microlensing, which of course is an astronomical term meaning something completely different. Mark
  16. I like your idea of plotting deltaE on the xy chromaticity diagram. Mark
  17. Going back to the original question, the RGB filter sets with a gap at the sodium wavelength would have caused an interesting problem for imaging comet Neowise in 2020. If you remember, Neowise had orange in its tail caused by sodium emission. A "gapped" RGB filter set would record nothing at all for this part of Neowise's tail. In that sense the gamut of a sensor is a meaningful concept, i.e. the gamut is the range of colours that can be recorded, and some colours, such as the sodium wavelength in the above example, cannot be recorded at all.

For RGB filter sets (or OSC cameras) with overlapping transmission bands, all colours can be recorded, so the filter set is full gamut in that sense. However, the problem is that many different colours will be recorded with the same RGB ratios - a good example is imaging a rainbow with sharp-cutoff RGB filters. The result will be three distinct bands of red, green and blue as @vlaiv mentions earlier. The problem here is the inability to distinguish between distinct colours, i.e. many different colours give exactly the same RGB values. This is the concept of "metameric failure", i.e. the inability of the RGB filters to distinguish between colours that the human eye sees as being different. Mark
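To make the metameric-failure point in the post above concrete, here is a small Python sketch. The box-shaped filter bands and the two example spectra are idealised, made-up inputs: spectral lines at 520 nm and 580 nm look clearly different to the eye (green vs yellow) yet produce identical RGB triples through sharp-cutoff filters.

    import numpy as np

    wl = np.arange(400, 701)  # wavelengths in nm

    # Idealised sharp-cutoff filters: B 400-500, G 500-600, R 600-700 (illustrative only).
    B = (wl < 500).astype(float)
    G = ((wl >= 500) & (wl < 600)).astype(float)
    R = (wl >= 600).astype(float)

    # Two clearly different spectra: a narrow line at 520 nm and a narrow line at 580 nm.
    spec_520 = np.where(np.abs(wl - 520) <= 5, 1.0, 0.0)
    spec_580 = np.where(np.abs(wl - 580) <= 5, 1.0, 0.0)

    def rgb(spec):
        """Integrate a spectrum through the three filter bands."""
        return np.array([np.sum(spec * R), np.sum(spec * G), np.sum(spec * B)])

    print(rgb(spec_520), rgb(spec_580))  # identical RGB values for visibly different colours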
  18. I don't have any more to contribute so this will be my last post in this thread. In parting, I will mention the following:

Adobe's DNG specification has a whole chapter devoted to "mapping between the camera color space coordinates (linear reference values) and CIE XYZ (with a D50 white point)."

The principle of colour management across different devices (the reason you can open a JPG encoded in sRGB, AdobeRGB, AppleRGB etc. with correct looking colours on your screen) is defined in the ICC Specification and is built upon what it calls the profile connection space (PCS), which can be either CIE XYZ or CIE LAB, where "the chromaticity of the D50 illuminant shall define the chromatic adaptation state associated with the PCS". The image file embeds an ICC profile and the screen has its own ICC profile. They communicate via the PCS. Mark
  19. Your statement is true but it doesn't exactly represent the sequence of operations in the Adobe Camera Raw processing engine.

The D65 raw camera data would be transformed to XYZ(D50) by the D65 version of the CameraRaw -> XYZ(D50) matrix, which implicitly includes white balancing from D65 to D50.

The D50 raw camera data would be transformed to XYZ(D50) by the D50 version of the CameraRaw -> XYZ(D50) matrix, with no white balancing required because they have the same white reference.

The transformation from XYZ(D50) is done using the Bradford-adapted XYZ(D50) -> sRGB matrix, so the D50 point becomes (1,1,1) in sRGB. Mark
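A minimal Python sketch of the matrix chain described in the post above. The CameraRaw -> XYZ(D50) matrix below is a made-up placeholder (a real one is camera-specific, e.g. from a camera profile), and the XYZ(D50) -> linear sRGB matrix uses the widely published Bradford-adapted values (e.g. Bruce Lindbloom's tables):

    import numpy as np

    # Placeholder CameraRaw -> XYZ(D50) matrix for the chosen white balance
    # (illustrative numbers only - use your camera's profile in practice).
    cam_to_xyz_d50 = np.array([[0.66, 0.20, 0.10],
                               [0.25, 0.70, 0.05],
                               [0.03, 0.10, 0.69]])

    # Bradford-adapted XYZ(D50) -> linear sRGB (commonly quoted values).
    xyz_d50_to_srgb = np.array([[ 3.1338561, -1.6168667, -0.4906146],
                                [-0.9787684,  1.9161415,  0.0334540],
                                [ 0.0719453, -0.2289914,  1.4052427]])

    def camera_rgb_to_linear_srgb(rgb_cam):
        """White-balanced camera raw RGB -> XYZ(D50) -> linear sRGB, as one matrix chain."""
        return xyz_d50_to_srgb @ (cam_to_xyz_d50 @ np.asarray(rgb_cam, dtype=float))

    # The chosen white (camera RGB = 1,1,1 here) should land close to (1,1,1) in sRGB,
    # because the camera matrix already maps it near the D50 white point.
    print(camera_rgb_to_linear_srgb([1.0, 1.0, 1.0]))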
  20. Unfortunately it's very difficult to find anything that explains this and I certainly haven't done a very good job! I found this series of articles by Jack Hogan quite useful but I'm not sure they'll tell you what you want to know:

Color: From Capture to Eye | Strolls with my Dog
Color: Determining a Forward Matrix for Your Camera | Strolls with my Dog
Linear Color: Applying the Forward Matrix | Strolls with my Dog

Mark
  21. We are transforming the camera raw data so that the chosen white (in this case illuminant A) will end up with coordinates (0.34567, 0.35850) in XYZ(D50). Everything else will end up at different points in XYZ(D50) space. If "D65" was chosen instead then the camera raw data would receive a different transformation that makes "D65" end up with coordinates (0.34567, 0.35850) in XYZ(D50).

XYZ(D50) is a perceptual colour space where everything is relative to the chosen white. The data are transformed so that the chosen white lands on (0.34567, 0.35850) in this space and all other colours will land relative to that reference white.

Think about what happens when you apply a chromatic adaptation matrix to the XYZ space. The XYZ coordinates are scaled which would, for instance, move illuminant A to the D50 point. XYZ(D50) is effectively an XYZ space to which a chromatic adaptation has been applied. Mark
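Here is a small Python sketch of the chromatic adaptation step mentioned above, building the Bradford matrix that moves illuminant A onto the D50 point (the white-point XYZ values and the Bradford cone-response matrix are the standard published ones):

    import numpy as np

    # Standard 2-degree white points in XYZ (Y normalised to 1).
    XYZ_A   = np.array([1.09850, 1.00000, 0.35585])   # illuminant A (incandescent)
    XYZ_D50 = np.array([0.96422, 1.00000, 0.82521])   # D50

    # Bradford cone-response matrix.
    M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                      [-0.7502,  1.7135,  0.0367],
                      [ 0.0389, -0.0685,  1.0296]])

    def bradford_adaptation(src_white, dst_white):
        """Return the 3x3 XYZ->XYZ matrix that maps src_white onto dst_white
        (von Kries scaling applied in the Bradford cone space)."""
        scale = np.diag((M_BFD @ dst_white) / (M_BFD @ src_white))
        return np.linalg.inv(M_BFD) @ scale @ M_BFD

    M_A_to_D50 = bradford_adaptation(XYZ_A, XYZ_D50)
    print(M_A_to_D50 @ XYZ_A)   # ~[0.96422, 1.0, 0.82521]: illuminant A now sits at D50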
  22. I'll answer with an example. Illuminate a ColorChecker with illuminant A, i.e. incandescent light. The grey patches appear neutral to the human observer because of eye/brain adaptation. Take a photo of it - a raw file.

Open the raw file in the raw converter using the "Incandescent" setting. The processing engine will generate an XYZ(D50) intermediate colour space (i.e. the PCS) where the grey patches of the ColorChecker will have xy chromaticity of (0.34567, 0.35850), i.e. D50. When XYZ(D50) is transformed to sRGB, AdobeRGB etc. the XYZ(D50) chromaticity of (0.34567, 0.35850) will be mapped to (1,1,1), (0.7, 0.7, 0.7) etc. depending on intensity, which will appear white/grey. So to the human observer, the ColorChecker patches that appeared white or grey in the original scene will appear white or grey in the final image.

I didn't realise you had access to a spectrometer but it's certainly a good alternative for calibrating your CCM from a ColorChecker. Mark
  23. XYZ(D50) is the Profile Connection Space (PCS). It is based on XYZ but it's a perceptual colour space where the illuminant of any photographed scene is mapped to the D50 chromaticity. In other words, when you open an image in your raw converter, the temperature/tint combination that you actively choose (or is chosen by default) will be mapped to D50 in the PCS. The PCS is at the heart of all colour management because ICC profiles are based on the PCS. It is the standard used by displays, printers etc. I honestly cannot see how a ColorChecker can be used to generate a Colour Correction Matrix without reference to the PCS.

By the way, it is really worth looking at the BabelColor spreadsheet I linked earlier: https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip It's a fascinating resource which contains ColorChecker coordinates in many different colour spaces, deltaE stats and even spectral power distributions of each colour patch of the ColorChecker. The ColorChecker pages on BabelColor are also very informative: The ColorChecker Pages (Page 1 of 3) (babelcolor.com) Mark
  24. I'm suspicious of the chart you showed giving the sRGB and CIE L*a*b* coordinates for the ColorChecker patches. The CIE L*a*b* figures look good and agree well with the CIE L*a*b* (D50) figures in the BabelColor spreadsheet: https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip But the sRGB figures are not correct - for instance the Cyan patch should be out of gamut in sRGB.

In any case, I think a practical approach to calibrating a colour matrix would be as follows:

- Illuminate the ColorChecker with daylight and take a photo.
- Perform RGB scaling on the raw data so that the grey ColorChecker patches appear neutral, i.e. Rvalue=Gvalue=Bvalue.
- Solve for the matrix that does a best-fit transform from the scaled CameraRGB values to the known ColorChecker values in XYZ(D50). It's best to perform the least squares fit in L*a*b* (D50) - the well known deltaE error approach.

The problem is that you won't know exactly what your daylight illuminant was, so we need to do an additional calibration against a known illuminant. So take an image of a starfield and find the RGB scaling on the raw data that makes a G2V star neutral - PixInsight PhotometricColorCalibration might help here. Using this scaling and the matrix just calibrated, we create a new matrix CameraRaw->XYZ(D50) that maps a G2V star to the D50 chromaticity in XYZ. Now apply chromatic adaptation from D50 to the chromaticity of G2V. The result is a matrix that will map the colour of G2V from CameraRaw to the G2V chromaticity in XYZ. You can then apply your XYZ->sRGB matrix - the one with no chromatic adaptation.

Personally I'm happy omitting the final step, so I'm happy for the G2V star to map to the D50 chromaticity in XYZ and then to become (1,1,1) in sRGB using the XYZD50->sRGB matrix. This would then have the appearance of white to me, which is what I'm trying to achieve, but I accept it differs from your goal. In fact I would use the XYZD50->AdobeRGB matrix because the variable gamma of the sRGB colour space makes subsequent colour-preserving operations very difficult.

The main weakness of the whole procedure is that the resulting matrix will be subtly different depending on the exact illuminant used to take the original image of the ColorChecker. I don't know what the answer to that is. Mark
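A rough Python sketch of the matrix-fitting step in the list above. It uses a plain linear least-squares fit in XYZ rather than the deltaE minimisation in L*a*b*(D50) recommended in the post, and the two input files are hypothetical placeholders (the reference XYZ(D50) values could come from, e.g., the BabelColor spreadsheet):

    import numpy as np

    # Nx3 white-balanced camera raw RGB measured from the ColorChecker patches,
    # and Nx3 published XYZ(D50) reference values for the same patches.
    # Both file names are hypothetical placeholders.
    cam_rgb = np.loadtxt("colorchecker_camera_rgb.csv", delimiter=",")
    xyz_ref = np.loadtxt("colorchecker_xyz_d50.csv", delimiter=",")

    # Solve for M (3x3) such that cam_rgb @ M ~ xyz_ref in the least-squares sense.
    # A better fit would minimise deltaE in L*a*b*(D50) using a non-linear optimiser.
    M, *_ = np.linalg.lstsq(cam_rgb, xyz_ref, rcond=None)
    cam_to_xyz_d50 = M.T        # so that xyz = cam_to_xyz_d50 @ rgb for a single pixel

    residuals = cam_rgb @ M - xyz_ref
    print("RMS XYZ residual:", np.sqrt((residuals ** 2).mean()))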