
Posts posted by sharkmelley

  1. 13 hours ago, vlaiv said:

It just happens that when you take the white color of an sRGB image (my computer screen, which if properly calibrated should correspond to D65), record what my computer screen is producing and put it in XYZ coordinates - you get numbers that are close in ratio to 1:1:1, but not equal to it.

    Here it is:


I measured the mean value of the XYZ image and got X ≈ 15580, Y ≈ 16588 and Z ≈ 12934.

If I scale that so that Y is 1, I get:

    XYZ = 0.93923, 1, 0.7797


That actually corresponds to a color temperature of 5000K rather than 6504K.

My computer screen is actually much warmer than it should be - or maybe it is partly due to ambient lighting, since I did not take the image in the dark.

Excellent, we can now see that DCRAW is applying a white balance to the raw data as it transforms it into CIE XYZ.

I took a look at the DCRAW C code last night but it's pretty obscure - I would need to build it and step through it in debugging mode to see how it does it but I don't have time for it right now.

As I said earlier, you can download the DNG Specification to read how Adobe does it - it's all in chapter 6 "Mapping Camera Color Space to CIE XYZ Space".  It's an interesting read and I'm guessing DCRAW implements a simplified version.  I downloaded the DNG SDK a couple of years ago and debugged it step by step to compare what I was seeing in the code against the DNG Specification.

I can see that the DCRAW C code does contain a hardcoded matrix for each camera but it's only a single one (i.e. it doesn't vary by colour temperature) and it corresponds to ColorMatrix in the DNG Spec (not ForwardMatrix).  I'm pretty sure DCRAW copies its matrices from Adobe.  The remaining mystery for me is which white point DCRAW is using for its mapping to XYZ - it may help to explain why you are seeing a resulting CCT of 5038 rather than something nearer 6500.
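For anyone who wants to experiment, here is a minimal sketch of the simplified ColorMatrix path from chapter 6 of the DNG Specification, as I'm guessing DCRAW implements it. The matrix coefficients below are made up purely for illustration - the real ones are hardcoded per camera:

```python
import numpy as np

# Sketch of the DNG-style ColorMatrix path (DNG Spec chapter 6), under
# the simplifying assumption of a single ColorMatrix with no
# interpolation between calibration illuminants and no ForwardMatrix.
# ColorMatrix maps CIE XYZ -> camera-native RGB, so the camera-to-XYZ
# direction is its inverse. These coefficients are ILLUSTRATIVE ONLY.
color_matrix = np.array([
    [ 0.9, -0.3, -0.1],
    [-0.5,  1.4,  0.1],
    [-0.1,  0.2,  0.7],
])

def camera_to_xyz(rgb_cam):
    """Map white-balanced camera RGB to CIE XYZ."""
    return np.linalg.inv(color_matrix) @ rgb_cam

# A neutral patch (R=G=B after white balance) lands on the white point
# implied by the matrix - which is exactly the mystery discussed above:
print(camera_to_xyz(np.array([1.0, 1.0, 1.0])))
```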

    Mark

  2. 17 minutes ago, vlaiv said:

    It has not been color balanced - it has been converted to XYZ color space.

Are you sure it hasn't been colour balanced?  Look at the raw data and measure the RGB values for one of the white patches on the colour chart.  I'm guessing the RGB value is predominantly green.  Now look at the XYZ values that emerge from DCRAW for the same white patch.  Do they correspond to the green area of the CIE XYZ colour space or are they approximately in the white area?  I'm guessing they sit in the white area, and that's because colour balancing has happened.

    I don't know what DCRAW is doing but I do know that the whole of chapter 6 of the DNG Specification is devoted to how the Camera RGB space is mapped to CIE XYZ and it definitely involves colour balancing.

    Mark

  3. You've still missed my question.  The raw data has a very strong green channel.  In the TIFF that DCRaw produces, do the white and grey squares of the colour chart look green?  If not, has it been white balanced and where did the white balance come from?

    I'm guessing it might be a D50 white balance since XYZ-D50 is typically used as the Profile Connection Space.

    Mark

I may be missing something, so let me ask the question in a different way.  If you look at your raw data you will almost certainly find that the green channel has much higher pixel values than red or blue for any colour that is approximately white or grey.  So why doesn't the bottom row of your colour chart appear green?  Where does the white balance take place and what white balance is used?

    Mark

  5. 9 minutes ago, vlaiv said:

    And success!

    Here is recipe that I used for this:

Get dcraw and convert your raw image to a 16-bit XYZ image, without applying any special processing such as white balance. I used the following command line: dcraw -o 5 -4 -T IMG_0942.CR2, where the XYZ color space is listed as 5 in the list of supported color spaces: -o [0-6]  Output colorspace (raw, sRGB, Adobe, Wide, ProPhoto, XYZ, ACES)

     

I have a question.  What colour temperature (i.e. white balance) is DCRaw using when you execute that command?  A white balance (e.g. Daylight, Cloudy, Tungsten etc.) is required to transform from camera RGB to CIE XYZ.
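One way to answer that empirically is to sample a neutral patch from the TIFF that dcraw produces and see which standard white point its chromaticity sits closest to. A quick sketch - the patch coordinates are placeholders for wherever a white patch falls in your frame, and I'm assuming the tifffile package for reading:

```python
import numpy as np
import tifffile  # any 16-bit-capable TIFF reader would do

# Sample a known-neutral patch from dcraw's XYZ output and compute its
# xy chromaticity; comparing against standard illuminants hints at the
# white point dcraw assumed. Patch coordinates are placeholders.
img = tifffile.imread("IMG_0942.tiff").astype(np.float64)
patch = img[1000:1050, 1200:1250]               # placeholder location
X, Y, Z = patch.reshape(-1, 3).mean(axis=0)

x, y = X / (X + Y + Z), Y / (X + Y + Z)
print(f"x = {x:.4f}, y = {y:.4f}")
print("D50 is (0.3457, 0.3585); D65 is (0.3127, 0.3290)")
```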

    Mark

  6. 46 minutes ago, vlaiv said:

My point being - we want a CCM that will do the following:

When you take a picture of a light source and display it on screen, two things happen:

you see the color of the light source and the image on screen as the same color (regardless of the actual color), and when you image the screen next to the light source with any camera and apply its CCM, you get the same linear RGB values for both monitor and light source.

Yes, this should be possible.  Essentially you need to take your image data and apply a white balance consistent with D65 plus a CCM consistent with D65.  But don't attempt to generate a CCM from a photo taken of a screen.  Instead you need something like a ColorChecker illuminated with a proper D65 broadband light source.
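Deriving the CCM from such a shot is essentially a least-squares fit. Here is a sketch under stated assumptions - the measured and reference patch arrays below are placeholders for your own ColorChecker measurements and the published reference values:

```python
import numpy as np

# Fit a 3x3 CCM mapping white-balanced linear camera RGB to reference
# linear sRGB (D65) by least squares over the 24 ColorChecker patches.
# Both arrays are placeholders - fill them with real data.
cam_rgb = np.random.rand(24, 3)   # measured patches (placeholder)
ref_rgb = np.random.rand(24, 3)   # reference values (placeholder)

# Solve cam_rgb @ M ≈ ref_rgb, then apply as rgb_out = ccm @ rgb_in:
M, *_ = np.linalg.lstsq(cam_rgb, ref_rgb, rcond=None)
ccm = M.T
print(ccm)
```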

    BTW you're right that we won't find a D65 star for our PixInsight PhotometricColourCalibration!

    Mark

  7. 26 minutes ago, vlaiv said:

This part I'm having an issue with - astronomical objects don't have an original light source shining on them that you need to create a CCM for. You need to create a CCM for D65 in order for the light emitted by the object to have the same XYZ coordinates as the light the computer screen emits when displaying the image of that object. Right?

You want the light from the object and the light from the computer screen, when viewed "next to each other", to appear as the same color - regardless of what our brain sees that color to be - pure white, a yellowish tint or whatever.

If you want the colours in the image that appears on your monitor to be identical to the emitted light from an astronomical object then you are right - you need to apply a D65 white balance and use a D65 CCM.  If you are using PixInsight's PhotometricColourCalibration to perform the white balance then you would choose an appropriate white reference - maybe an F-type star.

    The only problem with that approach is that G2V stars like our sun would not appear to be white in the displayed image because of the way the eye will adapt to the D65 reference white of the monitor.

    Mark

  8. 1 hour ago, vlaiv said:

    DxOMark gives D50 and A illuminant.

    sRGB requires D65 illuminant as white point.

    I'll try to explain this using a thought experiment. 

Let's assume you take a photo of a scene (including a grey step wedge) illuminated with a D50 light source.  When the data are correctly colour balanced for D50, the white step might have an RGB value of (250,250,250).  If we now apply the CCM for sRGB, the RGB value of this white will remain unchanged at (250,250,250) and it will appear white to you when displayed on the computer monitor.  However, if you take a photo of the monitor displaying this "white" it will have a slightly blue tint (which you can measure in the raw data) because sRGB displays white with a D65 illuminant.  Yet the image of the scene on your monitor appears quite normal, because your eye immediately adapts to seeing D65 as white, just as it adapts when you walk into a room lit by incandescent lights.

So although the original scene was lit with a D50 illuminant, the image of that scene appearing on a D65 monitor looks quite normal because of the eye's adaptation.  It is only when you put the monitor next to the original scene that you notice they look noticeably different and realise a visual "trick" is being played on you.

    The CCM we need to apply to the image data is the CCM relevant to the original light source and not the reference white of the destination colour space.
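The eye's adaptation can also be modelled explicitly with a chromatic adaptation transform. As a sketch, here is a standard Bradford transform taking XYZ values relative to D50 to XYZ relative to D65 (the matrix and white points are the usual published values):

```python
import numpy as np

# Bradford chromatic adaptation from a D50 white to a D65 white -
# roughly what the eye does for free when it adapts to the monitor.
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])
D50 = np.array([0.96422, 1.0, 0.82521])   # XYZ of D50 white
D65 = np.array([0.95047, 1.0, 1.08883])   # XYZ of D65 white

def adapt_d50_to_d65(xyz):
    scale = np.diag((BRADFORD @ D65) / (BRADFORD @ D50))
    M = np.linalg.inv(BRADFORD) @ scale @ BRADFORD
    return M @ xyz

print(adapt_d50_to_d65(D50))   # lands on D65, by construction
```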

    Mark

  9. 27 minutes ago, endless-sky said:

    Thank you both! Lots to read and digest.

    As for your workflow, Mark, I am lost after point 1: PhotometricColorCalibration, which I already do - and that's about my stopping point... 😅

You also mentioned light pollution subtraction. I am assuming DBE (or ABE) - I prefer the former. I have been doing this as step 1 (well, maybe not exactly 1, but after master integration and dynamic crop to get rid of the edges). Then I do PCC and also check Background Neutralization in the same process (I used to do BN separately before, but then I saw it is already built in and started doing it along with PCC). Does this sequence sound "correct" so far?

    Would absolutely love a detailed tutorial on the following steps, if you can spare the time, as I have no clue on how to adjust for CCM.

    Assuming D5300, would I pick the values given by Color Response --> CIE-D50 in the link you posted?

Also, another point: I astromodified my camera and put a UV/IR cut filter in front of the sensor. I assume the same matrix doesn't apply anymore? Even more so if there's a light pollution filter (L-Pro) in the imaging train. Correct?

    PCC does not work well on the Nikon D5300 because the Nikon raw data filtering causes stars to turn green.  This makes it impossible to correctly calibrate star colours.

    Yes your sequence for light pollution subtraction is good.

    Here's the PixelMath for applying the D5300 CCM (using the CIE-D50 version):

[Image: PixelMath expressions applying the D5300 CCM]
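For readers without the screenshot, the operation it performs is just a per-pixel 3x3 matrix multiply. A sketch of the equivalent - the coefficients below are placeholders, so substitute the D5300 CIE-D50 values from DXOMARK's "Color Response" tab:

```python
import numpy as np

# Equivalent of the PixelMath: each output channel is a weighted sum of
# the three input channels. Identity placeholders stand in for the real
# D5300 CIE-D50 coefficients from DXOMARK.
ccm = np.array([
    [1.0, 0.0, 0.0],   # output R = r00*R + r01*G + r02*B
    [0.0, 1.0, 0.0],   # output G
    [0.0, 0.0, 1.0],   # output B
])

def apply_ccm(img, ccm):
    """img: linear, white-balanced H x W x 3 array."""
    return np.einsum('ij,hwj->hwi', ccm, img)
```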

    The matrix still works well for a modified camera as long as you use an IR/UV blocking filter - I've tested this with a ColorChecker.  But if you are using light pollution filters then the colours can go seriously wrong.

The big issue with a modified camera is that it's no longer possible to produce a "true colour" image.  If you calibrate the white balance for correct star colours then regions of hydrogen emission will appear reddish instead of the correct pink colour.

    Mark

  10. The basic steps I use to process a DSLR image as "true colour", starting with stacked linear data, are the following:

    • Apply white balance (I generally use the WB multipliers for "daylight" or use PixInsight's PhotometricColourCalibration or similar)
    • Apply the camera specific Colour Correction Matrix (CCM)
    • Apply the gamma for the working colour space (typically sRGB or AdobeRGB)

If you do this correctly, the result will be a colorimetric image with colour tones limited only by the camera's ability to accurately reproduce colour and the gamut of the chosen colour space.  Typically the final appearance is fairly bland and you may wish to saturate the colour slightly to make it appear "right".  For some reason the eye seems to need this.
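As a rough sketch of those three steps, assuming stacked linear data in the range 0..1 - the white balance multipliers and CCM below are placeholders for your camera's daylight multipliers and its DXOMARK matrix:

```python
import numpy as np

wb = np.array([2.0, 1.0, 1.5])   # placeholder daylight WB multipliers
ccm = np.eye(3)                  # placeholder camera -> linear sRGB CCM

def srgb_gamma(linear):
    """The piecewise sRGB transfer function."""
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.clip(linear, 0, None) ** (1 / 2.4) - 0.055)

def process(img):
    img = img * wb                            # 1. white balance
    img = np.einsum('ij,hwj->hwi', ccm, img)  # 2. colour correction matrix
    return srgb_gamma(np.clip(img, 0, 1))     # 3. gamma for the colour space
```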

    Some additional quick points:

    • Subtraction of light pollution must take place before gamma is applied
    • I'm assuming the CCM is the DXOMARK CCM which goes straight from the camera's raw colour space to sRGB, avoiding the unnecessary complication of the intermediate CIE XYZ colour space.  If you are going to use AdobeRGB then you need a different CCM or apply the sRGB CCM followed by the sRGB to AdobeRGB matrix.
• Colour-preserving stretches such as ArcsinhStretch (which preserve the R:G:B ratios of each pixel) need to be applied before the gamma, or you must apply them in a colour space such as AdobeRGB which uses a constant gamma instead of the variable gamma used by sRGB.  I therefore definitely recommend working in AdobeRGB because the gamma is very easy to apply and because ArcsinhStretch still works correctly after the gamma has been applied.

    Obviously I have just given the main steps here because a detailed tutorial would be pages long.

     

    The sRGB to AdobeRGB matrix referred to earlier is this one:

0.7152  0.2848  0.0000
0.0000  1.0000  0.0000
0.0000  0.0412  0.9588
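A quick sanity check on that matrix - each row sums to 1, so equal-RGB white is preserved:

```python
import numpy as np

M = np.array([[0.7152, 0.2848, 0.0000],
              [0.0000, 1.0000, 0.0000],
              [0.0000, 0.0412, 0.9588]])
print(M @ np.ones(3))   # [1. 1. 1.] - white maps to white
```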

     

    Mark

  11. 32 minutes ago, vlaiv said:

     

In the end - I did not help you much since I don't know PI that much, but the first step would be to find out the CCM for the D5300 (for the D65 illuminant) or to derive it yourself for the CIE XYZ color space.

     

DXOMARK publishes CCMs for many consumer cameras.  For instance the CCM for the Nikon D5300 can be found at the following link by clicking on the "Color Response" tab.

    Nikon D5300 - DxOMark

    It assumes that white balance has already been applied.

    Mark

  12. 9 hours ago, steppenwolf said:

    Not a typo!!

    It certainly looks like a typo to me. 

The iif statement does absolutely nothing because its result is never applied to any image.  In fact that's a good thing, because the effect of iif((starsonly<=0.01),starsonly,0.001) would be to remove all the stars from the starsonly image by capping their values at 0.001!

    Mark

  13. What you've written doesn't make much sense to me.  The result of the "iif" statement is never used.

    The part of the expression that does all the work is ~(~Starless*~starsonly)

It can be re-written as:  1 - (1 - Starless)*(1 - starsonly)

The effect is to create a resulting image where the bright parts of each image are combined to produce a result that is brighter than either of the originals.  But they are not combined in an additive manner.

It probably works well for stretched data, but if your data are linear then Starless + starsonly would work better.
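A quick numerical illustration of the difference, using two sample pixel values in the 0..1 range:

```python
a, b = 0.6, 0.5                   # sample pixel values from each image

screen = 1 - (1 - a) * (1 - b)    # the ~(~Starless*~starsonly) blend
additive = a + b                  # straight addition

print(screen)     # 0.8 - brighter than either input, never exceeds 1
print(additive)   # 1.1 - correct for linear data, but can clip
```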

    Mark

I find there is some backlash in the standard focuser so I always make adjustments in the same direction - against gravity.  Always apply the focus lock after any adjustment.  I improved the accuracy of the adjustments by adding a makeshift pointer and focusing scale to the focusing knob.  Combining these strategies allows me to make repeatable and very accurate focusing adjustments.

    Mark

     

  15. 13 hours ago, tooth_dr said:

    I decided to go back to the smaller 8mp CCD sensor as I wanted to image around the full moon in Ha.  The scope is performing well at that size of sensor with current collimation, so I might stick with it for the time being. 

    It's certainly true that a full frame sensor is more demanding on the collimation than a smaller sensor.

    Mark

  16. 14 hours ago, tooth_dr said:

    Hi Ciarán. There aren’t adjustment screws on the focuser like a polar scope.

    I had a look at it again last night and I followed a different guide, and this is different to how I was reading the Tak manual so I’m hopeful  

     

    Yes, that guide is correct and is easier to understand than the garbled instructions in the Tak Epsilon manual.

    Mark

  17. 7 minutes ago, tooth_dr said:

Doesn't that make a bit of a mockery of the whole process of being super precise and lining up the secondary dot below the cross hairs, if in fact I have to line them up with my interpretation of the centre of a circle?

Yes and no.  It simply admits the possibility of additional errors creeping in.  For instance, one of your wife's hairs might go slightly slack without you realising it.  Using the "Additional Collimation" technique would prevent this causing a collimation error.

    Mark

  18. The manual's description of the "Additional Collimation" process is badly written, just like the rest of the manual.  What it means is that when you rotate the focuser, the crosshairs might describe a circle.  If so, then the centre of this circle (and not the actual crosshair position) is the reference point you need to use to line up the dots when you are collimating.

    As for the mirror, you shouldn't overtighten the 3 grub screws that adjust the lateral position of the mirror cell.  Also there is the possibility the mirror is distorted as a result of overtightening something that bears directly on the mirror glass.  On the rear of the mirror cell there are 3 adjusters that are in contact (or should be in contact) with the rear surface of the mirror glass.  These should never be touched and the manual doesn't even mention them.  But maybe someone, somewhere at some time did make the mistake of touching them, so either they are too slack and don't support the weight of the glass or they are too tight and distort it.  However I wouldn't want to touch these without the benefit of an optical test bench.

    Mark

  19. 3 hours ago, tooth_dr said:

    Last night i took a few images. There is a lot of coma despite the scope looking fairly well collimated 🤷

    L_0037.NEF 73.08 MB · 1 download

     

That's looking pretty bad.  We can be fairly sure the spacing of the flattener to the sensor is correct because you are using the correct adaptor.  But even the stars in the centre of the image are mis-shapen - they don't have circular symmetry but show some kind of coma.  Even if the spacing were wrong this would not happen.  The collimation must be a long way off to cause this, but I'm at a loss to understand why.

    In the manual there is a section titled "Additional Collimation (Perfecting Collimation)" and it describes how to rotate the focuser, checking the crosshairs remain central.  What happens if you try this?  If your crosshairs are off-centre then your collimation will be off as well.

    Mark

     

  20. Oh, wow! You've changed the focuser! 

    Have the stars ever been perfect since the focuser was changed?  If not, it's probably because you have a larger number of degrees of freedom to deal with.  A Tak Epsilon is difficult enough to get right in its normal configuration but the additional complications with a focuser replacement put it well beyond my level of experience.

    Good luck!

    [Edit:  Is that the standard Tak wide mount Nikon adaptor you are using?]

    Mark
