Everything posted by sharkmelley

  1. Excellent, we can now see that DCRAW is applying white balance to the raw data as it transforms into CIE XYZ. I took a look at the DCRAW C code last night but it's pretty obscure - I would need to build it and step through it in debugging mode to see how it does it but I don't have time for it right now. As I said earlier, you can download the DNG Specification to read how Adobe does it - it's all in chapter 6 "Mapping Camera Color Space to CIE XYZ Space". It's an interesting read and I'm guessing DCRAW implements a simplified version. I downloaded the DNG SDK a couple of years ago and debugged it step by step to compare what I was seeing in the code against the DNG Specification. I can see that the DCRAW C code does contain a hardcoded matrix for each camera but it's only a single one (i.e. it doesn't vary by colour temperature) and it corresponds to ColorMatrix in the DNG Spec (not ForwardMatrix). I'm pretty sure DCRAW copies its matrices from Adobe. The remaining mystery for me is which white point DCRAW is using for its mapping to XYZ - it may help to explain why you are seeing a resulting CCT of 5038 rather than something nearer 6500. Mark
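To illustrate the point about the white balance being bound up in the mapping to XYZ, here is a simplified Python sketch of the DNG-style relationship: the spec's ColorMatrix maps CIE XYZ *to* camera-native RGB, so going the other way uses its inverse, and the camera-neutral value for a given white point falls out of the same matrix. The matrix values below are invented for illustration (they are not any real camera's), and this omits the spec's interpolation between two calibration illuminants.

```python
import numpy as np

# Hypothetical ColorMatrix in the DNG sense: it maps CIE XYZ values
# *to* camera-native RGB. These numbers are made up for illustration.
color_matrix = np.array([
    [ 0.9, -0.3, -0.1],
    [-0.4,  1.2,  0.2],
    [-0.1,  0.2,  0.7],
])

white_xyz = np.array([0.9642, 1.0000, 0.8249])  # CIE XYZ of the D50 white point

# The camera's "neutral" raw value for that white is ColorMatrix @ white_xyz,
# which is how the white balance ends up baked into the transform:
camera_neutral = color_matrix @ white_xyz

# Mapping the camera neutral back through the inverse recovers the adopted
# white, i.e. a grey patch in the raw data lands on the white point in XYZ.
recovered = np.linalg.inv(color_matrix) @ camera_neutral
print(np.allclose(recovered, white_xyz))  # True
```

This is why the choice of white point matters for the XYZ values that come out: a different assumed white changes the camera neutral, and with it where grey patches land.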
  2. Are you sure it hasn't been colour balanced? Look at the raw data and measure the RGB values for one of the white patches on the colour chart. I'm guessing the RGB value is predominantly green. Now look at the XYZ values that emerge from DCRAW for the same white patch. Do they correspond to the green area of the CIE XYZ colour space or are they approximately in the white area? I'm guessing they sit in the white area, and that's because colour balancing has happened. I don't know exactly what DCRAW is doing, but I do know that the whole of chapter 6 of the DNG Specification is devoted to how the camera RGB space is mapped to CIE XYZ, and it definitely involves colour balancing. Mark
  3. You've still missed my question. The raw data has a very strong green channel. In the TIFF that DCRaw produces, do the white and grey squares of the colour chart look green? If not, has it been white balanced and where did the white balance come from? I'm guessing it might be a D50 white balance since XYZ-D50 is typically used as the Profile Connection Space. Mark
  4. I may be missing something, so let me ask the question in a different way. If you look at your raw data you will almost certainly find that the green channel has much higher pixel values than red or blue for any colour that is approximately white or grey. Why doesn't the bottom row of your colour chart appear green? So where does the white balance take place and what white balance is used? Mark
  5. I have a question. What colour temperature (i.e. white balance) is DCRaw using when you execute that command? The white balance (e.g. Daylight, Cloudy, Tungsten etc.) is required to transform from camera RGB to CIE XYZ. Mark
  6. Yes this should be possible. Essentially you need to take your image data and apply a white balance consistent with D65 plus a CCM consistent with D65. But don't attempt to generate a CCM from a photo taken of a screen. Instead you need something like a ColorChecker illuminated with a proper D65 broadband light source. BTW you're right that we won't find a D65 star for our PixInsight PhotometricColourCalibration! Mark
  7. If you want the colours in the image that appears on your monitor to be identical to the emitted light from an astronomical object then you are right - you need to apply a D65 white balance and use a D65 CCM. If you are using PixInsight's PhotometricColourCalibration to perform the white balance then you would choose an appropriate white reference - maybe an F-type star. The only problem with that approach is that G2V stars like our sun would not appear white in the displayed image, because of the way the eye adapts to the D65 reference white of the monitor. Mark
  8. I'll try to explain this using a thought experiment. Let's assume you take a photo of a scene (including a grey step wedge) illuminated with a D50 light source. When the data are correctly colour balanced for D50, the white step might have an RGB value of (250,250,250). If we now apply the CCM for sRGB, the RGB value of this white will remain unchanged as (250,250,250) and it will appear white to you when displayed on the computer monitor. However, if you take a photo of the monitor displaying this "white" it will have a slightly blue tint (which you can measure in the raw data) because sRGB displays white with a D65 illuminant. Nevertheless, the image of the scene on your monitor appears quite normal because your eye immediately adapts to seeing D65 as white, just as it immediately adapts when you walk into a room lit by incandescent lights. So although the original scene was lit with a D50 illuminant, the image of that scene appearing on a D65 monitor looks quite normal because of the eye's adaptation. It is only when you put the monitor next to the original scene that you realise they actually look noticeably different and that a visual "trick" is being played on you. The CCM we need to apply to the image data is the CCM relevant to the original light source and not the reference white of the destination colour space. Mark
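The reason the white patch survives the CCM unchanged can be shown directly: a colour correction matrix is normalised so that each row sums to 1, and any such matrix leaves a neutral (R=G=B) pixel alone. A small Python sketch, using a made-up matrix rather than any real camera's CCM:

```python
# Hypothetical CCM (not a real camera's). Note that each row sums to 1.0,
# which is exactly why a neutral (R=G=B) pixel passes through unchanged.
ccm = [[ 1.8, -0.7, -0.1],
       [-0.2,  1.6, -0.4],
       [ 0.0, -0.6,  1.6]]

def apply_ccm(matrix, rgb):
    """Multiply a 3x3 colour matrix into a single RGB pixel."""
    return tuple(sum(m * v for m, v in zip(row, rgb)) for row in matrix)

white = (250, 250, 250)        # already balanced for the scene illuminant
print(apply_ccm(ccm, white))   # ~ (250.0, 250.0, 250.0): still neutral
```

Saturated colours, by contrast, are pushed around considerably by the off-diagonal terms - that is the whole job of the matrix.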
  9. PCC does not work well on the Nikon D5300 because the Nikon raw data filtering causes stars to turn green. This makes it impossible to correctly calibrate star colours. Yes, your sequence for light pollution subtraction is good. Here's the PixelMath for applying the D5300 CCM (using the CIE-D50 version): The matrix still works well for a modified camera as long as you use an IR/UV blocking filter - I've tested this with a ColorChecker. But if you are using light pollution filters then the colours can go seriously wrong. The big issue with a modified camera is that it's no longer possible to produce a "true colour" image. If you calibrate the white balance for correct star colours then hydrogen emission regions will appear reddish instead of the correct pink colour. Mark
  10. The basic steps I use to process a DSLR image as "true colour", starting with stacked linear data, are the following:
      • Apply white balance (I generally use the WB multipliers for "daylight", or use PixInsight's PhotometricColourCalibration or similar)
      • Apply the camera-specific Colour Correction Matrix (CCM)
      • Apply the gamma for the working colour space (typically sRGB or AdobeRGB)
      If you do this correctly, the result will be a colorimetric image with colour tones limited only by the camera's ability to accurately reproduce colour and the gamut of the chosen colour space. Typically the final appearance is fairly bland and you may wish to saturate the colour slightly to make it appear "right" - for some reason the eye seems to need this. Some additional quick points:
      • Subtraction of light pollution must take place before the gamma is applied.
      • I'm assuming the CCM is the DXOMARK CCM, which goes straight from the camera's raw colour space to sRGB, avoiding the unnecessary complication of the intermediate CIE XYZ colour space. If you are going to use AdobeRGB then you need a different CCM, or you can apply the sRGB CCM followed by the sRGB-to-AdobeRGB matrix.
      • Colour-preserving stretches such as ArcsinhStretch (which preserve the R:G:B ratios of each pixel) need to be applied before the gamma, or you must apply them in a colour space such as AdobeRGB, which uses a constant gamma instead of the variable gamma used by sRGB. I therefore definitely recommend working in AdobeRGB, because the gamma is very easy to apply and because ArcsinhStretch still works correctly after the gamma has been applied.
      Obviously I have just given the main steps here because a detailed tutorial would be pages long. The sRGB-to-AdobeRGB matrix referred to earlier is this one:
      0.7152 0.2848 0.0000
      0.0000 1.0000 0.0000
      0.0000 0.0412 0.9588
      Mark
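The three steps above can be sketched in Python. The WB multipliers and CCM below are hypothetical placeholders, not real D5300 numbers; the sRGB-to-AdobeRGB matrix is the one quoted in the post, and the AdobeRGB gamma is its standard constant value of 2.19921875.

```python
import numpy as np

def srgb_gamma(x):
    # Piecewise sRGB transfer function, applied to linear data in [0, 1]
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

# sRGB -> AdobeRGB matrix quoted in the post
srgb_to_adobe = np.array([
    [0.7152, 0.2848, 0.0000],
    [0.0000, 1.0000, 0.0000],
    [0.0000, 0.0412, 0.9588],
])

# Hypothetical daylight WB multipliers and DxO-style CCM (illustration only)
wb  = np.array([2.0, 1.0, 1.5])
ccm = np.array([[ 1.8, -0.7, -0.1],
                [-0.2,  1.6, -0.4],
                [ 0.0, -0.6,  1.6]])

def process(linear_rgb, to_adobe=False):
    balanced  = linear_rgb * wb        # step 1: white balance
    corrected = balanced @ ccm.T       # step 2: CCM, camera -> linear sRGB
    if to_adobe:
        corrected = corrected @ srgb_to_adobe.T
        # step 3 (AdobeRGB): single constant gamma, easy to apply
        return np.clip(corrected, 0.0, 1.0) ** (1 / 2.19921875)
    return srgb_gamma(corrected)       # step 3 (sRGB): piecewise gamma

# A pixel that is neutral after white balance stays neutral throughout:
print(process(np.array([0.25, 0.5, 1 / 3])))
```

Note that any light-pollution subtraction or colour-preserving stretch would slot in before the final gamma step, as per the points above.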
  11. DXOMARK publishes CCMs for many consumer cameras. For instance the CCM for the Nikon D5300 can be found at the following link and clicking on the "Color Response" tab. Nikon D5300 - DxOMark It assumes that white balance has already been applied. Mark
  12. It's very odd but as you say, it can remain a mystery 😉 Mark
  13. It certainly looks like a typo to me. The iif statement does absolutely nothing because the result of the iif statement is not applied to any image. In fact that's a good thing because the effect of the statement iif((starsonly<=0.01),starsonly,0.001) would be to remove all the stars from the starsonly image by capping their value to 0.001! Mark
  14. What you've written doesn't make much sense to me. The result of the "iif" statement is never used. The part of the expression that does all the work is ~(~Starless*~starsonly) which can be re-written as: 1 - (1-Starless)*(1-starsonly) The effect is to create a resulting image where the bright parts of each image are combined to produce a result that is brighter than either of the originals. But they are not combined in an additive manner. It probably works well for stretched data, but if your data are linear then Starless+starsonly would work better. Mark
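The combination described here is the classic "screen" blend. A quick Python sketch (the function name is mine) shows why bright areas combine without ever clipping past 1.0, while faint linear signal is nearly additive:

```python
def screen(a, b):
    # PixelMath ~(~Starless * ~starsonly), i.e. 1 - (1-a)*(1-b):
    # the "screen" blend, for pixel values in the range [0, 1]
    return 1 - (1 - a) * (1 - b)

print(screen(0.8, 0.5))    # ~0.9: brighter than either input, below 1.0
print(screen(1.0, 0.7))    # 1.0: a fully saturated pixel can't overshoot
print(screen(0.01, 0.02))  # ~0.0298: for faint pixels it approximates a + b
```

This is why it behaves well on stretched data, where simple addition of two bright images would clip, whereas for linear data straight addition is the physically correct combination.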
  15. It does sound like there is something wrong with the focuser - I don't have the problem you are describing. Good luck with replacing it! Mark
  16. I find there is some backlash in the standard focuser so I always make adjustments in the same direction - against gravity. Always apply the focus lock after any adjustment. I improved the accuracy of the adjustments by adding a makeshift pointer to the focusing knob with a focusing scale. Combining these strategies allows me to make repeatable and very accurate focusing adjustments. Mark
  17. It's certainly true that a full frame sensor is more demanding on the collimation than a smaller sensor. Mark
  18. Yes, that guide is correct and is easier to understand than the garbled instructions in the Tak Epsilon manual. Mark
  19. Yes and no. It simply admits the possibility of additional errors creeping in. For instance, one of your wife's hairs might go slightly slack without you realising it. Using the "Additional Collimation" technique would prevent this causing a collimation error. Mark
  20. Clarifying my earlier post, those adjusters that shouldn't be touched are the "vis d'appui" in this photo from a French website:
  21. The manual's description of the "Additional Collimation" process is badly written, just like the rest of the manual. What it means is that when you rotate the focuser, the crosshairs might describe a circle. If so, then the centre of this circle (and not the actual crosshair position) is the reference point you need to use to line up the dots when you are collimating. As for the mirror, you shouldn't overtighten the 3 grub screws that adjust the lateral position of the mirror cell. Also there is the possibility the mirror is distorted as a result of overtightening something that bears directly on the mirror glass. On the rear of the mirror cell there are 3 adjusters that are in contact (or should be in contact) with the rear surface of the mirror glass. These should never be touched and the manual doesn't even mention them. But maybe someone, somewhere at some time did make the mistake of touching them, so either they are too slack and don't support the weight of the glass or they are too tight and distort it. However I wouldn't want to touch these without the benefit of an optical test bench. Mark
  22. That's looking pretty bad. We can be pretty sure the spacing of the flattener to the sensor is correct because you are using the correct adaptor. But even the stars in the centre of the image are misshapen - they don't have circular symmetry but show some kind of coma. Even if the spacing were wrong this would not happen. The collimation must be a long way off to cause this, but I'm at a loss to understand why. In the manual there is a section titled "Additional Collimation (Perfecting Collimation)" which describes how to rotate the focuser, checking that the crosshairs remain central. What happens if you try this? If your crosshairs are off-centre then your collimation will be off as well. Mark
  23. I've never had stars as bad as those on the right-hand-side. I can only guess it's the collimation. Mark
  24. Well done changing the focuser. Yes, you appear to have the right adaptor - same box as for my Canon. The manual shows a couple of grub screws under the focus locking knob. Unscrew the knob to access them. Mark
  25. Oh, wow! You've changed the focuser! Have the stars ever been perfect since the focuser was changed? If not, it's probably because you have a larger number of degrees of freedom to deal with. A Tak Epsilon is difficult enough to get right in its normal configuration but the additional complications with a focuser replacement put it well beyond my level of experience. Good luck! [Edit: Is that the standard Tak wide mount Nikon adaptor you are using?] Mark