Everything posted by vlaiv

  1. If you are really worried about color reproduction - neither of the two. You want something like this:
     Although people praise filters with clear cut-off points, such filters are not good for faithful color reproduction (true color). Take for example an OIII nebula and look at the Baader filters above. They overlap in the 500nm region - just barely. Imagine for a moment that they don't overlap but are perfectly contiguous. OIII light would fall either in the blue filter or the green filter, and you would never be able to get the teal color of an OIII nebula - it would always render as blue or green.
     How much of all possible colors a system can distinguish is called its gamut. Filters that don't overlap have a rather small gamut. They are good for astronomical work in the sense that sharp cut-offs let you easily split wavelengths when you want to know, for example, how many photons fall in the 500-600nm range - but they are not good for color reproduction, even with color balance. For that same reason, OSC cameras have overlapping filter responses:
     That helps to better reproduce actual color. There are other things to consider with filters as well - like reflections, being parfocal and similar. Many people use Baader filters and produce nice images with nice looking colors, regardless of the fact that they might be a bit off in comparison to the real thing.
  2. I think that most of the confusion comes from the fact that cameras always think in terms of the illuminant/reflectance model, while here we are discussing the emission model. In Adobe DNG they specify XYZ with a D50 illuminant. Does that mean that the data was somehow white balanced to D50? No, it does not. Please look carefully at this section on wiki: https://en.wikipedia.org/wiki/CIE_1931_color_space#Computing_XYZ_from_spectral_data
     In the emissive case there is no mention of an illuminant - there is no need for one. We are only dealing with the color of the light and not the color of objects. In the reflective case, a particular illuminant is used when we want to record the color of an object. This is because objects don't have a color of their own - the color is not in the object - the color is in the light spectrum. We need to illuminate the object with a particular illuminant and only then do we get a light spectrum.
     Camera profiles are not made with emission light sources (for some reason) but rather by shooting calibration charts. The standard is that one should use D50 to illuminate those charts. I guess it is easier for the camera industry to do it like that rather than use reference light sources and spectroscopy to define the transform matrices.
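To make the emissive case concrete, here is a minimal Python/numpy sketch that integrates a sampled spectrum against the CIE 1931 color matching functions, with no illuminant anywhere. It assumes you supply the source spectrum and the CMF tables on a common wavelength grid; the function name and grid are mine, not from the wiki page.

```python
import numpy as np

def spectrum_to_xyz(wavelengths_nm, spectrum, xbar, ybar, zbar):
    """Emissive case: XYZ of a light source is its spectrum weighted by the
    CIE color matching functions and integrated over the visible range."""
    X = np.trapz(spectrum * xbar, wavelengths_nm)
    Y = np.trapz(spectrum * ybar, wavelengths_nm)
    Z = np.trapz(spectrum * zbar, wavelengths_nm)
    return np.array([X, Y, Z])

# Usage: load the tabulated CIE 1931 2-degree CMFs (xbar, ybar, zbar) and the
# measured source spectrum on, say, a 380-780 nm grid, then call the function.
```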
  3. It is not assuming they are D65 white balanced - it just does not perform white balance and uses D65 when converting from XYZ to sRGB. It leaves the CA above as an identity matrix and, when converting the XYZ coordinates, D65 is included in the XYZ -> sRGB matrix.
  4. Once again, you are right - for some strange reason they chose to do things very differently on multiple levels. I'll quote a few things:
     From this you can see that if you select D50 as the illuminant - it will be the same as the calibration illuminant and there will be no chromatic adaptation. It also shows that they decided to do chromatic adaptation in camera raw space instead of the regular way of doing it in XYZ space, because it may work better in extreme cases (poor gamut of the sensor or similar).
     Well, the first step is to invert XYZtoCamera in order to use it properly and get CameraToXYZ. They probably opted to store XYZtoCamera for other reasons, but in reality - whichever one you store - you have both versions, as this matrix should be invertible. The second point is to apply chromatic adaptation if you decide to color balance your image in a particular way. If not, and you want your recorded data - leave CA as an identity matrix and perform no color balance. This gives you pure CameraToXYZ - like I described above. They introduce CameraToXYZ_D50 only to let you do chromatic adaptation in camera space - but it is not needed and can be left unchanged - like I showed above.
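A small numpy sketch of that point: invert the stored XYZtoCamera matrix and leave the chromatic adaptation as identity, so the data reaches XYZ without any balancing. The XYZtoCamera values below are made up for illustration; a real one comes from the DNG tags, and the composition is schematic.

```python
import numpy as np

# Hypothetical XYZtoCamera matrix (placeholder values, not from a real camera)
xyz_to_camera = np.array([[ 0.70,  0.15,  0.10],
                          [ 0.25,  0.85, -0.05],
                          [ 0.05, -0.10,  0.90]])

# Step 1: invert it to get the matrix we actually want, CameraToXYZ
camera_to_xyz = np.linalg.inv(xyz_to_camera)

# Step 2: chromatic adaptation left as identity = no color balance applied,
# so CameraToXYZ_D50 collapses to plain CameraToXYZ
CA = np.eye(3)
camera_to_xyz_d50 = CA @ camera_to_xyz   # identical to camera_to_xyz when CA is identity

raw_pixel = np.array([0.42, 0.35, 0.18])  # example raw triplet
print(camera_to_xyz_d50 @ raw_pixel)      # XYZ of the recorded light, unbalanced
```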
  5. Here is an example that I just made to show that XYZ is not color balanced: I took two images one after another. For the first, I selected daylight white balance. For the second I selected some other white balance (fluorescent or whatever it was - can't remember). Then I used dcraw and told it to:
     1. export with camera white balance (-w option)
     2. export without applying white balance (without -w, -a, -A or -r options)
     3. export as XYZ
     You can see that the first two images are white balanced according to the selected balance. The second two images are not white balanced - they are the same, and they represent colors as the camera sees them (not raw values but actual colors as the camera sees them, without white point adaptation). The third two images represent XYZ displayed in RGB space.
     Although the images were taken with specific white balance settings - if you don't apply those WB settings - you get the same images, and images that don't have a "default" white balance setting (or at least daylight and fluorescent are not the default WB). White balance is just a "correction of reality" - to better represent colors as we know them - but indeed, in the above images my wall reflects light and the spectrum of that reflected light has this color: under the given settings. The wall might not have that color under different lighting - but the light at the moment of capture had that color. That is the point - we don't need the color of the wall, we need the color of the light.
  6. You are right. I had a look at the dcraw code. The Adobe DNG matrices are in fact matrices that convert to linear sRGB and not directly to XYZ. That is why it is using D65 - as per the sRGB standard. It then uses the sRGB to XYZ transform matrix defined on line 135 (comment: XYZ from RGB) to go the rest of the way. This is probably because manufacturers did not see much need for a direct raw to XYZ conversion, as end users operate in the sRGB color space. It however does not mean that there is any need for white balancing when going directly from camera raw to XYZ. One can easily derive a direct matrix by taking the Adobe DNG raw to linear sRGB matrix and multiplying it with the sRGB to XYZ matrix to get a direct raw to XYZ matrix (no illuminant needed).
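In code, deriving that direct matrix is a single multiplication. The raw-to-linear-sRGB matrix below is a made-up placeholder standing in for whatever the Adobe DNG data works out to for a given camera; the sRGB-to-XYZ matrix is the standard D65 one.

```python
import numpy as np

# Placeholder camera raw -> linear sRGB matrix (illustrative values only)
raw_to_linear_srgb = np.array([[ 1.8, -0.6, -0.2],
                               [-0.3,  1.6, -0.3],
                               [ 0.0, -0.5,  1.5]])

# Standard linear sRGB -> XYZ matrix (D65 reference white)
srgb_to_xyz = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

# Direct camera raw -> XYZ: compose the two, no illuminant choice enters anywhere
raw_to_xyz = srgb_to_xyz @ raw_to_linear_srgb
print(raw_to_xyz)
```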
  7. I've seen some of your beautiful focusers in one thread when I was searching to see what people have done with 3D printing. I realized that I needed a 3D printer when my list of ideas of things that could be 3D printed reached about 10 items. I should have made a list back then, as I've forgotten some of them now, but new ideas often spring to mind.
     The latest one, and the one I'm concerned about in terms of strength and durability, is an M90 to ID96 adapter. That is quite a big part (tolerance issues due to shrinking when cooling), it will be load bearing, and it is easiest to print in the orientation least suitable for load bearing. It is supposed to carry the focuser plus everything attached to the focuser on my Skywatcher Evostar 102 F/10 achromat, to replace the stock focuser.
     Another thing that I would love to print, but have concerns about whether it will work, is a small helical focuser. I have this idea of building small telescopes out of "scrap optics" - like used up / broken binoculars or similar. I don't see issues with 3D printing a lens cell which will then be mounted in an aluminum tube. I want to print the rest of the scope - including the EP holder and focuser. Here I'm worried about smoothness and tolerances. Smoothness can be achieved by sanding things down, but I'm not sure how to properly sand down very fine detail - like the straight and helical slots and the pin that rides in them.
  8. I just realized that I could calibrate my computer screen by using a DSLR - if I trust it to be accurate. All I need to do is place red, green and blue rectangles on the screen together with a white one and take a raw image with the DSLR. Compare the XYZ values with the reference values for the sRGB color space and adjust the R, G and B sliders so that I get good values. That is similar to what calibration tools do - except they use more than 3 filters to form XYZ values - they use spectral analysis.
     Btw, does anyone know a good "standard" light source for the full spectrum? Like, can I use an incandescent light bulb and make it shine at say 2700K or 3300K by some clever circuitry, or maybe a thermometer or whatever? Can one cheaply purchase a reference light source - like a standard illuminant or similar? I have a Star Analyser and I'm thinking that it can be used to measure the spectral response of a camera if I use a known light source. That would help me create the XYZ transform mathematically rather than trusting the manufacturer. I would also be able to calculate the gamut of different cameras and their color accuracy. Maybe a question for a different part of the forum, like spectroscopy?
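A sketch of the comparison step, assuming the DSLR shot has already been converted to XYZ with dcraw. The reference values are just the XYZ of pure sRGB red, green, blue and white; the "measured" numbers below are placeholders for whatever you would read off the photo.

```python
import numpy as np

# Linear sRGB -> XYZ (D65). Columns are the XYZ of pure red, green, blue;
# their sum (RGB = 1,1,1) is the XYZ of the D65 white point.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

patches = {"red":   np.array([1.0, 0.0, 0.0]),
           "green": np.array([0.0, 1.0, 0.0]),
           "blue":  np.array([0.0, 0.0, 1.0]),
           "white": np.array([1.0, 1.0, 1.0])}

for name, rgb in patches.items():
    reference = SRGB_TO_XYZ @ rgb     # what a perfect sRGB screen should emit
    measured = reference * 0.97       # placeholder: mean XYZ over that patch in the DSLR shot
    # Compare chromaticities (overall intensity cancels out) and
    # adjust the screen's R/G/B sliders until they match
    ref_xy = reference[:2] / reference.sum()
    meas_xy = measured[:2] / measured.sum()
    print(name, ref_xy, meas_xy)
```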
  9. I only ever managed dust lanes once with my 8" dob under my light pollution, and as far as I remember it did look like the image that I posted - except I only saw the "beginning" of the lane, and the end of it faded into blackness (or rather the murkiness of LP skies) together with the rest of the outer structure. What I don't remember from visual is a concentrated core. It always looks more like a plain blob than a blob with a bright core.
  10. I was just thinking the same thing a few moments ago when I wrote it, and wondered if I had used the proper term. Then I realized that ice is cold and fire is warm, and in color terms ice is a bluish thing while fire is a yellow/orange/reddish thing.
  11. Out of interest, I decided to repeat that DSLR / screen measurement test - this time on my phone screen and in a dark room. This is XYZ simply mapped to RGB (of course, the colors are totally wrong this way) and the measured values: XYZ = 1.03, 1, 1.308. The resulting correlated color temperature is 7093K. Again, the display is not properly calibrated and it has a somewhat "cooler" white point, shifted towards the blue part of the spectrum.
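For anyone wanting to reproduce the number, correlated color temperature can be estimated from an XYZ triplet with McCamy's approximation; it is only an approximation, but it lands close to the figure quoted above.

```python
def cct_mccamy(X, Y, Z):
    """Correlated color temperature from XYZ via McCamy's polynomial approximation."""
    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

print(cct_mccamy(1.03, 1.0, 1.308))  # ~7079 K, close to the 7093 K quoted above
```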
  12. I can describe the process the way I see it, to produce accurate color from start to end result, for both DSLR and astro cameras (they are slightly different).
     DSLR workflow:
     - Take all your subs (bias, dark, flat, flat dark - or reuse bias as flat dark if you have a flat panel strong enough that flat exposures are short - and lights) and run them through dcraw to extract 16-bit linear XYZ images.
     - Calibrate and stack as you normally would.
     - Take the Y component and save it as a separate mono luminance.
     - Take the standard XYZ -> sRGB transform matrix and convert to linear sRGB. Divide each of R, G and B by max(R, G, B) to get scaled linear RGB components.
     - Stretch and process Y/luminance to your liking (denoising, sharpening, non-linear stretch). In case you want to emulate the visual impression - just set the black and white point and apply sRGB gamma.
     - Finally, do the color transfer, which is simply: take the stretched luminance, apply inverse sRGB gamma, multiply with scaled R, scaled G and scaled B to get the R, G and B of the final image. Apply forward sRGB gamma to each component and compose them into an image.
     Astro camera workflow: all the same, except you need to do the dcraw step yourself and convert the camera color space to the XYZ color space yourself by applying a conversion matrix. Deriving this matrix is tricky, but it needs to be done only once (and just applied every time after). You'll either need precise QE curves of your sensor and filters (preferably measured by a trusted party rather than those published by the camera vendor, as they seem to just copy/paste without actually checking things), or you perform a relative measurement against a DSLR-type camera that has been calibrated at the factory (in the hope that the camera manufacturer did a good job). A third option would be to have an accurately calibrated sRGB computer monitor.
     With QE curves of the sensor and filters, it is just a mathematical process and one needs to write software to derive the matrix (not hard, can be done in an afternoon in Python, for example, by a semi-skilled programmer). With the relative method against a reference, it can be done by anyone who can use software to measure average pixel brightness and a spreadsheet to enter values and do some least squares fitting.
     Relative method against a DSLR: shoot the same color checker pattern with both the DSLR and the astro camera in the same conditions. Use dcraw to read the linear XYZ image. For each color of the color checker pattern, read off the average X, Y and Z values in the DSLR image. Do the same with the astro camera, this time noting raw R, G and B. Now you have a set of XYZ / RGB vector pairs that you need to normalize (make unit length), and then use the least squares method to find the matrix that transforms raw RGB into XYZ.
     If you want to do it against a calibrated computer screen: find an image of a color checker chart somewhere and display it on the screen (in an otherwise totally dark room - the computer screen should be the only light). Record that with the astro camera. Take the RGB image, apply inverse sRGB gamma and do the sRGB -> XYZ conversion. This will now be your XYZ image instead of one recorded by a DSLR, and the rest is the same (normalize vectors, solve least squares to get the matrix).
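As a rough Python/numpy illustration of the color-transfer step at the end of the DSLR workflow (the function names and the plain 2.2 gamma are simplifications of mine, not part of any particular tool):

```python
import numpy as np

# Standard linear XYZ -> sRGB matrix (D65 white point)
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def srgb_gamma(linear):
    """Forward sRGB transfer function, approximated here with a pure 2.2 gamma."""
    return np.clip(linear, 0, 1) ** (1 / 2.2)

def inverse_srgb_gamma(encoded):
    """Inverse of the approximation above."""
    return np.clip(encoded, 0, 1) ** 2.2

def color_transfer(xyz_image, stretched_luminance):
    """xyz_image: HxWx3 linear XYZ stack; stretched_luminance: HxW, already gamma encoded."""
    # XYZ -> linear sRGB
    rgb = xyz_image @ XYZ_TO_SRGB.T
    # Scale so the largest of R, G, B is 1 at every pixel (keeps only the chromaticity)
    scale = np.maximum(rgb.max(axis=-1, keepdims=True), 1e-12)
    rgb_scaled = np.clip(rgb / scale, 0, None)
    # Undo gamma on luminance, multiply with scaled RGB, re-apply gamma
    lin_lum = inverse_srgb_gamma(stretched_luminance)[..., None]
    return srgb_gamma(rgb_scaled * lin_lum)
```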
  13. I think that we differ in what we understand as a color managed processing chain - or rather when it starts. Camera color space - although we call it RGB - is not a managed color space, not a relative color space. It is again an absolute color space. Here is what the proper chain looks like when you want to color balance for a particular illumination:
     Camera color space -> XYZ -> LMS (white balance) -> XYZ -> target color space
     In the case of astrophotography - since we are recording the color of the light and not the color of an object under certain lighting that we need to correct for (we have an illuminant in the case of shooting a regular scene) - the above simplifies to:
     Camera color space -> XYZ -> sRGB (or rather the target color space - which implies D65 as the white point).
     Camera color space -> XYZ does not require color management, as both are absolute color spaces - values are not specified relative to 3 primaries, a black point and a white point. You don't record a 0.7232 value for blue with your camera - you record some number of photons / electrons.
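For completeness, here is a minimal numpy sketch of the XYZ -> LMS -> XYZ white-balance step in the first chain, using the standard Bradford cone-response matrix; the D50 -> D65 example at the end is just to show the mechanics.

```python
import numpy as np

# Bradford cone-response matrix (standard published values)
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adapt_white_point(xyz, src_white_xyz, dst_white_xyz):
    """Chromatically adapt XYZ values from one white point to another (Bradford)."""
    lms_src = BRADFORD @ src_white_xyz
    lms_dst = BRADFORD @ dst_white_xyz
    # Scale each cone response by the ratio of destination to source white
    scale = np.diag(lms_dst / lms_src)
    adaptation = np.linalg.inv(BRADFORD) @ scale @ BRADFORD
    return xyz @ adaptation.T

# Example: adapt from D50 to D65 (XYZ of the illuminants, Y normalized to 1)
d50 = np.array([0.9642, 1.0, 0.8251])
d65 = np.array([0.9505, 1.0, 1.0890])
print(adapt_white_point(np.array([0.5, 0.4, 0.3]), d50, d65))
```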
  14. It might be that some subs were taken at -10C, some at -11C and some at -13C. I would personally inspect the FITS headers to see what the actual temperatures are and then shoot a set of darks for each of them, but as a first step I would use the regular -10C master dark to see how bad the result is (it might turn out quite fine). If your camera is a CCD - you don't have to worry about that: you take a set of bias subs and use dark optimization/scaling. That should provide you with good dark calibration, and you can just use the regular -10C darks that you would otherwise use.
  15. Again - it does not. It just applies a single matrix to convert from camera color space to CIE XYZ. Indeed - each camera will have a single matrix that converts from camera color space to CIE XYZ. There is no white point in XYZ - XYZ is an absolute color space. You use a white point only when converting from XYZ, which is an absolute color space (it can have arbitrarily large values), to a relative color space - where each component scales from 0 to 1 and the white point is the (1,1,1) value.
     Imagine you want to map a 2d space to a square that is 1x1 in size (each side goes from 0 to 1). This is the square. White is (1,1) in the top left corner. This is our "sRGB" in the 2d example (it is much easier to show and understand the 2d diagram, but the 3d version is all the same). All the values of color that we can display are in this range - in this square. On the other hand, XYZ space is like this: there is no upper limit on how strong a light source can be. Take a torch and assign a point to its light. Place another torch next to it - every camera will record double the values in the same amount of time. Place a third torch - again the same. You don't even have to use other torches - just increase the exposure time - you will collect more signal and a larger raw number. That is equivalent to using a stronger light.
     We now want to map this space to the above well defined 1x1 square. We know the following things:
     1. Black is black - the absence of light, always the same in all spaces, (0,0) -> (0,0) (although some color spaces even define a black point different from 0,0)
     2. We need the primary colors of our color space - the ones that we will use in combination to produce a particular color
     3. We need a white point
     In the end we have a "distorted" quad in this space - but we can easily find the matrix transform of that quad into the above 0-1 square space. The unbounded diagram contains all the colors we can possibly see, and then some colors that we can't see (for example - any coordinate with a negative value - you can't have negative light). It turns out that when you define a color space in terms of such a quad with primary colors and a white point, all the colors inside that quad can be generated using only those primary colors, and the white point defines their absolute ratio in terms of intensity.
     People mistakenly think that you need equal amounts of blue and green and red to get white. What green? Not all greens are equal, and similarly not all whites are perceived as white. But regardless of that - each of these spectra has a definite XYZ coordinate and will be perceived as a different color (though we can't say which of two "whites" will be labeled as white - that depends on conditions). Choose a particular green, red and blue as primaries, and a white point - and you have a new color space. Here are the coordinates of the sRGB linear cube (in xyY, which is just a simple transform of the XYZ color space). As a contrast, the Adobe Wide Gamut RGB color space is defined differently: it actually uses single wavelength colors as primaries (which again have certain xy coordinates) and a different white point. Here is an image that sort of shows the above "squares/cubes" of different relative color spaces - but projected onto a 2d plane in the xyY absolute space.
     When we think of AdobeRGB or sRGB we always think of a regular 1:1:1 cube - but in reality it is a distorted "diamond" shape in the XYZ color space. As you can see, there is no white balance in XYZ and no white point. There are just absolute numbers that one gets from measuring signal strength - much like a sensor. The white point is just a color/point in absolute XYZ that defines one vertex of our "color cube" - the one with (1,1,1) coordinates - and if we want to define our cube inside XYZ space, we need to specify the value of the white point along with a few other vertices (a black point, usually 0,0, and the three primaries R, G and B). White balance is the process of deforming our cube in XYZ space - or changing the transform and selecting another white point. It can be summarized with this image (back to our simple 2d representation):
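As a concrete illustration of "primaries plus white point define the cube inside XYZ", this short numpy sketch derives the linear RGB -> XYZ matrix from the xy chromaticities of the primaries and the white point, by solving for the scale of each primary so that RGB = (1,1,1) lands exactly on the white point. The sRGB numbers used are the standard published ones.

```python
import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white):
    """Build a linear RGB -> XYZ matrix from primary and white point chromaticities."""
    def xy_to_xyz(xy):
        x, y = xy
        return np.array([x / y, 1.0, (1 - x - y) / y])  # xyY with Y = 1

    primaries = np.column_stack([xy_to_xyz(xy_r), xy_to_xyz(xy_g), xy_to_xyz(xy_b)])
    white = xy_to_xyz(xy_white)
    # Solve for per-primary scales so that R = G = B = 1 maps to the white point
    scales = np.linalg.solve(primaries, white)
    return primaries * scales

# sRGB primaries and D65 white point (standard chromaticities)
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.3290))
print(M)              # close to the familiar 0.4124 / 0.3576 / 0.1805 ... matrix
print(M @ [1, 1, 1])  # XYZ of D65: ~[0.9505, 1.0, 1.089]
```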
  16. The X, Y and Z components are not bounded - they don't go from 0 to 1. They go from 0 to infinity. In that sense there is no white area of CIE XYZ (no max values - no white - everything is sort of "grey"). It represents photons collected from a light source, separated into 3 components. In that sense you can always double the light intensity and it will double the numbers.
     We have a white area of the xyY color space - or rather we have a white area in the xy chromaticity diagram. xy chromaticity is a 2d space, as the light intensity has been normalized to 1 - or rather the Y component of xyY has been normalized to 1. In fact, it is just color without an associated intensity. Both of these xy diagrams are true, although they don't display the same colors - the left one does not even contain white. This is because these diagrams are free of the intensity component and you can assign any intensity to any light source representing a color in this diagram.
     If you look at the above diagram, greys are located around 0.33, 0.33. Regardless of Y (intensity), x and y are going to be around 0.33 if all three X, Y and Z are close in value. In that sense you could say that the XYZ color space is "balanced" - when all three components are similar in size, the color they represent is similar to grey. It just happens that when you take white on an sRGB image (my computer screen, and if it is properly calibrated that should correspond to D65), record what my computer screen is producing and put it in XYZ coordinates, you get numbers that are close in ratio to 1:1:1 - but not equal to it. Here it is: I measured the mean value of the XYZ image and got X to be ~15580, Y to be ~16588 and Z to be ~12934. If I scale that so Y is 1, I get XYZ = 0.93923, 1, 0.7797. That actually corresponds to a color temperature of 5000K rather than 6504K. My computer screen is actually much warmer than it should be - or maybe it is partly due to environment lighting, since I did not take the image in the dark.
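Just to show the arithmetic behind those numbers, the chromaticity follows directly from the measured means:

$$x = \frac{X}{X+Y+Z} = \frac{15580}{45102} \approx 0.345, \qquad y = \frac{Y}{X+Y+Z} = \frac{16588}{45102} \approx 0.368$$

which sits near the daylight locus around 5000K rather than at the D65 point (x, y ≈ 0.3127, 0.3290).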
  17. I was looking at that line of printers prior to the Ender 3 series and it was my favorite. Then I realized that there is not much difference in print quality between the series - only print volume size and price. It really depends on how large the things you want to print are, and I can't think of anything I would want to print that couldn't be printed as multiple parts within the Ender 3 print volume.
  18. There is such a chart as a matter of fact. I used an image of it (or rather an image with very similar colors). It's not cheap. There are "copies" of such charts - but all those charts are pigments. Pigments work differently than light (I'll explain a bit more later). In any case, these are used for exactly what you encountered - look at an image on screen, purchase paint and get what you asked for. That is called color matching.
     There are many different things that go into that process. Paint is pigment, and the color of a pigment depends on two things - the pigment itself and the light used to illuminate it. These two act together to produce the spectrum of light that you see as color. In order for you to actually see the same color on your computer screen when an image of the pigment is taken, one needs to control the illumination (this is the illuminant that we are talking about). What sort of light should one use to represent the "natural" color of a pigment? And what is the natural color of a pigment? A pigment will have different hues depending on the light used to illuminate it. If the image was properly color managed and your computer screen is properly calibrated, the screen shows the color the pigment would have if it were illuminated with the D65 illuminant (daylight in northern / central Europe is the best match for this illuminant).
     In astronomy we don't have this problem. We don't have pigments - we have light. The only thing we need to agree on is the same viewing conditions - something that is natural - and luckily we don't need to worry about it, as computer standards do that for us. The sRGB color space is well defined, and when we want to represent the color of light in it - we know the math and there are just two things left to do:
     - calibrate the camera
     - calibrate the computer screen
     The above with CIE XYZ and the color correction matrix - that is the "calibrate camera" part. We want to take two cameras, shoot the same light source and get the same numbers. Well defined numbers. Calibrating the computer screen is usually done at the factory - and not really well, mind you, as most computer screens have brightness / contrast and other controls and people like to "adjust" things as they see fit. However, if one wants to calibrate a monitor, there are solutions out there, and I think anyone wanting to produce proper color (especially professionals working with art / print / photography) needs to calibrate their monitor. Check out X-Rite and SpyderX tools, among others. These are little USB devices that measure the spectra of the colors of your computer screen (or even a phone screen) and let you set it up properly and produce color profiles.
  19. It has not been color balanced - it has been converted to the XYZ color space. I specified that on the command line. I said I wanted the XYZ color space as opposed to the raw color space. This however is not white balance. White balance means something else.
     This simply means that if you take one DSLR - like a Canon 750d - and another like a Nikon D5300, and you shoot a light source with each and measure the raw values - you will get different triplets, each according to the camera's QE in each channel. However, if you take those raw images and ask dcraw to export XYZ images - you will get the same triplets of numbers for each camera. If we had a camera with QE curves exactly the same as the XYZ color matching functions - you could take such a camera, record raw and get the same numbers without any special conversion. Its raw would be the same as the XYZ color space. Think of the XYZ color space as a "standard / universal" sensor with a specific QE per channel, where the channels are called X, Y and Z.
     It just happens that the D65 illuminant is close to 1, 1, 1 in the XYZ color space, and if I don't do the conversion to sRGB but rather just assign XYZ to the RGB of the sRGB color space - what is white will stay relatively white (there is actually a bit of difference).
  20. The actual XYZ data, when directly mapped to sRGB, looks like this (notice the darker tone due to the missing gamma): the colors are messed up but white is still "white", or close to white. That is because most illuminants are in fact very close to what we perceive as white in sRGB space. Here is RGB of 1,1,1 with D65 - XYZ is not mostly "green" - it is very close, and it is a somewhat bluish white if we assign XYZ to RGB directly. Here we are using the D65 white point implicitly in the conversion matrix.
     XYZ is an absolute color space - meaning it actually measures the number of photons in absolute terms, not relative to some white point or triplet of colors. sRGB represents colors as fractions of three colors that have specific xy coordinates. The only thing that is missing is the "max" value - or what color sits at the corner of the RGB cube - again on the xy chromaticity diagram. That is the white point of the color space, and for sRGB it is D65.
     To answer your question, think of the following: you have the same computer screen and it shows RGB = 1,1,1 as some color. Most of the time, that color will look like pure white. But if you are in a well lit room that uses mostly bluish lights, then RGB(1,1,1) on your computer screen will look yellowish - it will no longer be pure white. Similarly, in a well lit room illuminated by warm light, the computer screen will start looking too cold - bluish white. Nothing changed in the light coming from the computer screen - it has the same spectrum as it had previously. The white point is used to compensate for that.
     The white point is also used to compensate for something else - our perception. Many people mistakenly think that just because RGB(1,1,1) is white, you need to mix equal parts of red, green and blue - but that is not correct. Here we can see that green takes 80%, red takes 54% and blue only 44% of "brightness". The fact that we have a 0-1 range for R, G and B is just a convention that white is the brightest thing that should be displayed, and how much blue, how much red and how much green go into a particular white dictates the actual physical ranges of brightness of each component. You need 4 sets of numbers to define the transform matrix - the primary R, G and B in the XYZ color space and also the coordinates of the white point. In that sense, "white balance" is encoded in the XYZ -> sRGB transform matrix.
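The white point really is baked into the matrix: the rows of the standard linear sRGB -> XYZ matrix sum exactly to the XYZ of D65, so RGB = (1,1,1) maps straight onto the D65 white. A quick numpy check:

```python
import numpy as np

# Standard linear sRGB -> XYZ matrix (D65 reference white)
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

# RGB = (1, 1, 1) should land exactly on the D65 white point in XYZ
print(SRGB_TO_XYZ @ np.array([1.0, 1.0, 1.0]))  # ~[0.9505, 1.0, 1.0890]
```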
  21. None - camera raw RGB (not to be confused with RGB models) is very similar to XYZ. Both represent absolute QE-type curves over the visible spectrum. Both have the same way of producing a value. Here is how XYZ values are produced: the X value is simply x(dash) times the light spectrum, integrated over the visible range. x(dash) is the "QE of X" (where QE is a percentage and the actual curve used here is a number without a unit). The same thing happens when we want to find out what our camera sensor will produce for the red channel - it integrates, over the visible spectrum, the spectrum of our source times the QE of the red filter.
     Camera raw space (although we call it RGB) is an absolute space, not a relative one. It also does not have a white point. For this reason you don't need a white point when converting between raw RGB and XYZ. In fact, if one produced a sensor with QE curves identical to x(dash), y(dash), z(dash), there would be no need to transform between the two - or rather the transformation would be the identity matrix. It is also the reason why you can't directly use raw RGB values as color space RGB values and you need to do color correction. When people do this, their images turn all green or red and they need to "color balance", when in reality they need to do the matrix transform raw -> XYZ -> sRGB (linear). Since these are just two matrix multiplications, you can combine them into one by pre-multiplying the matrices and go straight from raw to sRGB, but since we don't have an actual raw -> XYZ matrix for DSLR cameras, we do the two step thing (actually such a matrix must exist in the raw file, as dcraw is using it to produce XYZ values).
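Written out, the emissive computation is the standard one: each component is the source spectrum weighted by the corresponding color matching function and integrated over the visible range,

$$X = \int_{380}^{780} S(\lambda)\,\bar{x}(\lambda)\,d\lambda, \qquad Y = \int_{380}^{780} S(\lambda)\,\bar{y}(\lambda)\,d\lambda, \qquad Z = \int_{380}^{780} S(\lambda)\,\bar{z}(\lambda)\,d\lambda$$

and a camera channel does exactly the same with its own QE curve in place of the color matching function.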
  22. And success! Here is the recipe that I used for this: get dcraw and convert your raw image to a 16-bit XYZ image (not doing any special processing like applying white balance or anything). I used the following command line:
     dcraw -o 5 -4 -T IMG_0942.CR2
     where the XYZ color space is listed as 5 in the list of supported color spaces: -o [0-6] Output colorspace (raw,sRGB,Adobe,Wide,ProPhoto,XYZ,ACES).
     This makes a tiff image that is actually completely wrong in color - because it is in the XYZ and not the sRGB color space. Then it is easy - you just apply the transform from XYZ to sRGB given by this:
     I used an approximation gamma of 2.2 rather than the above 2.4 with the 0.0031308 cut-off, but that is close enough for this test. I also made the image brighter by not paying much attention to the black/white point - I just used 0 and 16384. That is of course of no consequence for astrophotography - as we set those in processing anyway.
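For reference, the transform in question is the standard linear XYZ -> sRGB matrix for the D65 white point,

$$\begin{pmatrix} R \\ G \\ B \end{pmatrix}_{\text{linear}} = \begin{pmatrix} 3.2406 & -1.5372 & -0.4986 \\ -0.9689 & 1.8758 & 0.0415 \\ 0.0557 & -0.2040 & 1.0570 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}$$

followed by the sRGB transfer function (exponent 1/2.4 with the linear segment below 0.0031308, approximated above with a plain 2.2 gamma).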
  23. I just realized that we don't need to do any of that. Apparently, dcraw is capable of producing XYZ output, and then we can simply use the XYZ to sRGB transform matrix. I'm trying to do it now, but having some trouble ...
  24. I don't think you can actually take any star to be the reference white, as the Planckian locus does not pass through the D65 white point. None of the standard daylight illuminants lies exactly on the Planckian locus.