Everything posted by vlaiv

  1. That was more of a rhetorical question, to give an example that steep filter sides don't necessarily mean better SNR. For narrowband filters that is certainly the case - the narrower the band and the steeper the sides, the better the filter - but it is not true for filters used for broadband imaging. Steep cutoffs have their uses when you want to capture just a certain band at the best SNR possible. Again - that is not the case when you want the best color reproduction.
  2. This is not correct. D50 has the following XYZ coordinates: [0.9642, 1.0000, 0.8251] according to https://www.mathworks.com/help/images/ref/whitepoint.html - or xy coordinates 0.34567, 0.35850 according to this wiki article: https://en.wikipedia.org/wiki/Standard_illuminant

     If I enter those coordinates into a color conversion calculator (using the xy coordinates; the XYZ values come out the same as those from the mathworks website, truncated to 4 decimal places) and select standard sRGB parameters - sRGB gamma, D65 white point and sRGB as the RGB model - the calculated sRGB values are not in fact 1,1,1. D50 does not have 1,1,1 coordinates in sRGB. If I select the AdobeRGB color space with appropriate parameters (gamma 2.2, white point D65) - again, D50 will not have 1,1,1 coordinates. Only if I select a color space that has D50 as its white point - like the Adobe Wide Gamut color space - do we get 1,1,1 for D50 (a quick numeric check of this is sketched below).

     In relative color spaces, the white point of that color space will have coordinates 1,1,1 - but other illuminants will not. In an absolute color space like XYZ, illuminants are no different to any other color - they simply have some coordinates. In fact, there is one type of illuminant that could be considered the white point of the XYZ color space - in the sense that it produces a 1:1:1 ratio of XYZ (but not 1,1,1 as actual numbers, since those depend on how strong the illuminant is and there is no upper bound) - and that is the E illuminant. The E illuminant is the so-called "equienergy" illuminant and it has 1/3, 1/3 xy chromaticity - but that is not an actual white color - it is quite yellowish (see https://en.wikipedia.org/wiki/Standard_illuminant).
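     A minimal sketch of that check in Python, assuming numpy is available: multiply the D50 XYZ values by the standard XYZ-to-linear-sRGB matrix (which is defined relative to D65, here applied without chromatic adaptation) and see that the result is not 1,1,1:

     ```python
     import numpy as np

     # Standard XYZ (D65) -> linear sRGB matrix (IEC 61966-2-1)
     XYZ_TO_SRGB = np.array([
         [ 3.2406, -1.5372, -0.4986],
         [-0.9689,  1.8758,  0.0415],
         [ 0.0557, -0.2040,  1.0570],
     ])

     d50_xyz = np.array([0.9642, 1.0000, 0.8251])  # D50 white point in XYZ

     rgb_linear = XYZ_TO_SRGB @ d50_xyz
     print(rgb_linear)  # roughly [1.176, 0.976, 0.722] - reddish, not 1,1,1
     ```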
  3. Our understanding of this still differs - I maintain that D50 is not the white point of XYZ and that XYZ does not have a white point. The D50 moniker in XYZ is used to note that the rawToXYZ transform was derived using the D50 illuminant. That is all. You don't have to assign any particular color temperature in order to convert a star's spectrum to its exact chromaticity in the XYZ color space. You just take the spectrum of that star, multiply it with the X, Y and Z color matching functions and integrate over the 380 to 700-and-something nm range, and that will give you the X, Y and Z values of that star (sketched in code below).
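     A sketch of that integration in Python, assuming you have the star's spectrum and the CIE 1931 color matching functions sampled on a common wavelength grid (the argument names here are placeholders):

     ```python
     import numpy as np

     def spectrum_to_xyz(wavelengths_nm, spectrum, xbar, ybar, zbar):
         """Integrate a spectrum against the CIE 1931 color matching
         functions to get (unnormalized) XYZ tristimulus values.

         All inputs are 1D arrays on the same wavelength grid,
         e.g. 380-780nm in 5nm steps.
         """
         X = np.trapz(spectrum * xbar, wavelengths_nm)
         Y = np.trapz(spectrum * ybar, wavelengths_nm)
         Z = np.trapz(spectrum * zbar, wavelengths_nm)
         return X, Y, Z
     ```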
  4. How about this then: this is a well defined color space - in fact, it is used as the absolute colorimetric space. Most if not all of our color matching theory is based on that color space. We don't need to invent anything special for it.
  5. Why not extend their coverage and create overlap - like the Astronomik ones? The right shape has more "volume" compared to the left - which means more photons captured - higher signal, better SNR.
  6. Why is this: Poor? You can use normalized response curves and then scale with a coefficient ...
  7. I think I finally understand it all.

     If you try the same with your two cameras indoors under warm ambient light, you'll find that both cameras in "as is / original" mode register that same grey patch as being closer to the color of your ambient light on the xy chart. Similarly, if you have very cold ambient light, say over 6000K, you'll again find that grey sits in that part of the xy chart for both cameras. This is because no color balancing is taking place - as expected.

     The D50 marker in the CameraToXYZ matrix stands for something other than a white point (as XYZ does not have a white point). The CameraToXYZ matrix will not always be 100% correct for all spectra, since the gamut of the camera sensor differs from that of the XYZ color space. Cameras often can't record all colors 100% accurately and there will be some error. This error comes from the QE curves of the sensor and the calculated CameraToXYZ matrix.

     Multiple matrices are used - but they all represent the same thing: the camera raw to XYZ transform. They are made by illuminating calibration charts with either the D50 or the A illuminant (or a third or fourth illuminant as well). The XYZ(D50) matrix will give less color error in a scene whose illumination is similar to a D50 light source, and XYZ(A) will give less color error in a scene whose illumination is similar to the A illuminant. This is why a different conversion matrix is selected when you choose to white balance the scene - if you set the white balance closer to A, then the illumination probably is closer to A and the XYZ(A) matrix will give less error. Similarly, if the white balance is closer to D50, then XYZ(D50) is chosen, as it minimizes the color error. In theory you could have more of these matrices. If you don't white balance and shoot "as is", then XYZ(D50) will be chosen over XYZ(A) for your camera, as it probably gives less total error than the other matrix (see the sketch below).
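     A sketch of how that matrix selection might look in code - note the matrices here are made-up placeholders, not real calibration data for any camera:

     ```python
     import numpy as np

     # Hypothetical camera-raw -> XYZ matrices, one per calibration
     # illuminant. Real values come from profiling the camera on a chart.
     CAM_TO_XYZ = {
         "D50": np.array([[0.70, 0.20, 0.10],
                          [0.30, 0.60, 0.10],
                          [0.00, 0.10, 0.90]]),
         "A":   np.array([[0.80, 0.15, 0.05],
                          [0.35, 0.55, 0.10],
                          [0.02, 0.08, 0.90]]),
     }

     def raw_to_xyz(raw_rgb, illuminant="D50"):
         """Convert a linear camera-raw RGB triple to XYZ, using the
         matrix derived under the calibration illuminant closest to
         the actual scene illumination."""
         return CAM_TO_XYZ[illuminant] @ np.asarray(raw_rgb)
     ```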
  8. The thing is that the human visual system behaves in the same way - any color we see can be generated by an infinite number of spectra. If our eyes can do it, why not a camera? We also have three different "filters" - why would it be a problem to match our filters to the telescope ones?

     I agree. For those that enjoy that side of astronomy, more information or specific information is better than producing visually equal color, but to quote myself from the start of this discussion: all of this was with the above goal in mind - color reproduction, when one wants to reproduce the colors we would see when looking at that particular light. Again, as above - that is not a bad thing in itself - some enjoy creating realistic images of the objects out there, and realism comes in many forms - one of which is visual color matching.
  9. That is interesting - and it gives us experimental grounds. You have two cameras with different "default" settings. How about shooting a scene with both and comparing the XYZ results? What do you think should happen if they have different default white balance settings and you compare XYZ files of the same scene? Should they be different or the same?
  10. According to this: https://www.simplify3d.com/support/materials-guide/properties-table/ HIPS, PC and PP are the only ones with service temperatures above 100°C. I would add ABS/ASA and Nylon as high-temperature resistant materials as well. I don't really think it would be easy to encounter 90°C+ in everyday use (unless submerged in boiling water or something like that). On that list, PC and PETG look like very decent materials with good properties.

      I'm not sure how applicable the coefficient of thermal expansion is. I watched a video on the dimensional accuracy of 3D prints where the person measured a 0.5% size reduction on a material declared as 0.4%. Supposedly it was some sort of enhanced material - not sure which one, ABS or PLA - in any case, I thought such a small dimensional change is not consistent with the table linked above. For example, PLA has 68µm/m/°C. It is printed at 190-220°C, and if we assume room temperature is 20°C, that is roughly a 200°C difference: 68µm/m/°C × 200°C = 13,600µm/m = 13.6mm per meter, or about 1.36% - roughly three times more than the quoted value, and PLA has one of the smallest coefficients of thermal expansion. How come there is such a mismatch? (A quick check of that arithmetic is sketched below.)
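      A back-of-the-envelope version of that calculation, with the CTE value taken from the linked table (treat it as approximate):

      ```python
      # Linear shrinkage from print temperature down to room temperature,
      # given coefficient of thermal expansion (CTE) in µm/m/°C.
      def shrinkage_percent(cte_um_per_m_degc, print_temp_c, room_temp_c=20.0):
          delta_t = print_temp_c - room_temp_c
          return cte_um_per_m_degc * delta_t / 1e6 * 100  # µm/m -> percent

      print(shrinkage_percent(68, 220))  # PLA: ~1.36%, not the ~0.4-0.5% measured
      ```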
  11. Indeed it is. One could do the sensor raw to XYZ transform solely on stars, for example, but that is only a very narrow sample from which to derive an accurate transform, as most stars are concentrated around the Planckian locus - only a very small section of the chromaticity diagram. For that reason, I would like to have a way to derive the transform independently and then just use stars to remove atmospheric reddening from the image.

      If I had more confidence in the XYZ produced with a DSLR, that would be a simple way to calibrate things. Maybe I could use a computer screen as a reference once I calibrate it properly for sRGB with a calibration tool? I did try to calibrate my screen with a DSLR - the white point part was not hard; I managed to set a proper white point using the RGB controls (at least according to readings from my DSLR, if it is to be believed). However, I have issues with gamma. There is no simple way to set it, and when I measure gamma, it is not the standard sRGB one. I create say 5-10 grey squares and assign what should be linear light intensities of 0, 20, 40, 60, 80 and 100%. When I take an image of my screen and measure and normalize the values, I get deviations - 20% reads as, for example, 18%, and 60% is closer to 66% - so it is not just a wrong power law - the gamma curve is "wavy" rather than a wrong power, sometimes being over and sometimes under the expected value. (A way to quantify this is sketched below.)
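      One way to quantify that - a sketch, assuming you already have normalized measured intensities per patch (the measurement values here are made up for illustration): fit a single power-law exponent in log-log space and look at the residuals. A "wavy" curve shows up as structured residuals rather than just a bad exponent:

      ```python
      import numpy as np

      # Nominal linear patch levels and (made-up) normalized measurements.
      nominal  = np.array([0.20, 0.40, 0.60, 0.80, 1.00])
      measured = np.array([0.18, 0.41, 0.66, 0.79, 1.00])

      # Fit measured = nominal**g  =>  log(measured) = g * log(nominal)
      g = np.sum(np.log(measured) * np.log(nominal)) / np.sum(np.log(nominal) ** 2)
      residuals = measured - nominal ** g

      print(f"best-fit exponent: {g:.3f}")
      print("residuals:", np.round(residuals, 3))  # +/- swings = "wavy" curve
      ```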
  12. That helps quite a bit. We just need this library: https://www.eso.org/sci/facilities/paranal/decommissioned/isaac/tools/lib.html and some code to calculate the XYZ / xy chromaticity of different stellar classes (the xy part is sketched below). Now, on to the final piece of the puzzle - where to find accurate sensor QE curves, or at least an easy method to measure them (like using a Star Analyzer and an affordable light source of known spectrum - is there such a thing?).
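      The chromaticity step is simple once XYZ values are in hand (e.g. from the spectrum integration sketched earlier in the thread):

      ```python
      def xyz_to_xy(X, Y, Z):
          """Project XYZ tristimulus values onto the xy chromaticity plane."""
          total = X + Y + Z
          return X / total, Y / total
      ```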
  13. Yes, but there will be areas where such filters can distinguish more than straight-edged interference filters. If you look at this diagram - the left pair of wavelengths is indistinguishable. So is the far right pair of lines - but the lines in the middle will have different ratios of R to G, and of G to B, and we will be able to tell them apart as separate colors. In fact, in regions where the filters overlap, we can distinguish wavelengths as separate colors. If you look at CIE XYZ, the standard absolute colorimetric space, the "filters" (here color matching functions, but really the same thing) overlap all over the place - sometimes even all three. CIE XYZ covers the whole gamut of human vision, and therefore even single wavelengths of light have different XYZ coordinates and are distinguishable, as they are to the human eye.

      There are filters better suited to color reproduction than both Baader and Astronomik or any other interference filters out there - look at absorption filters. The problem is that they often have low QE, or rather transmission - sometimes even below 50% - and for this reason people don't want to use them, but they are the best filters in terms of color accuracy. You can combine, for example, the above filters to get a good mix / coverage. You just need to perform color calibration - but as you can see, blue and green are only at 67-68% peak transmission. For all intents and purposes, it does not matter much if transmission is off by less than 1-2% - that won't have enough "resolution" to make a difference.

      Indeed, you have to perform color calibration for any combination of camera and filters. However, it is a 3x3 matrix, since linearity must be preserved (addition of light, and variation of intensity with exposure length), so we have a 3-component vector to 3-component vector linear transform - that is a 3x3 matrix (a least-squares fit for such a matrix is sketched below). Two components of a vector being 0 means that all possible values lie on a straight line - and that will still be the case when you project it. You can't get better variance of color in the projected space - it will still be shades of a single color, as it is in the primary space.
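      Deriving that 3x3 matrix is a standard least-squares problem - a sketch, where raw_samples and xyz_targets stand for hypothetical measured patch values and their known XYZ references:

      ```python
      import numpy as np

      def fit_color_matrix(raw_samples, xyz_targets):
          """Least-squares fit of a 3x3 matrix M such that M @ raw ~ xyz.

          raw_samples : (N, 3) linear camera RGB measurements of N patches
          xyz_targets : (N, 3) known XYZ values of the same patches
          """
          # Solve raw_samples @ M.T = xyz_targets in the least-squares sense.
          M_T, *_ = np.linalg.lstsq(raw_samples, xyz_targets, rcond=None)
          return M_T.T  # (3, 3)
      ```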
  14. I think you are not far off with that effective temperature thing; here is a comparison between the Sun and Vega: the Sun is pretty close but Vega is quite off ... Then I guess we have to figure out how similar spectra of the same stellar class are (and how they vary across stellar classes).
  15. Btw, are you sure you are right with the above graph? Source: wiki. Then there is this, on the page for Planck's law.
  16. What is that sudden drop in intensity below about 380nm? Is that the spectrum of an A0V star as seen from Earth, with atmospheric attenuation of the UV, or is it the actual spectrum of the star as would be recorded outside the atmosphere - and why does it differ significantly from a black body?
  17. You don't really have to do any special experiment - you can just wonder what a rainbow would look like when recorded in the Baader RGB model. Here, look at this image: I have marked 4 different emission lines in the B part of the spectrum. Let's suppose they are of equal intensity. What will the RGB filters record for each of them? G and R will simply be 0, as each of those lines lies outside the G and R parts of the spectrum, right? Only B will have some value, and the B values will be fairly close in intensity - they will be the QE of the sensor multiplied by the transmission of the Baader B filter, which is mostly 100% with small variations. So the first line will be 0, 0, 0.55, the second 0, 0, 0.58, the third 0, 0, 0.62 and the fourth 0, 0, 0.62 (for example). These are not rainbow colors - these are all blues of different intensity. Instead of recording this: you'll end up with something like this: I'd say that's a few missing colors in that image, right? (A toy simulation below makes the same point.)
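      A toy version of that thought experiment, with idealized boxcar filters and flat sensor QE - purely illustrative numbers, not real Baader curves:

      ```python
      # Idealized "boxcar" RGB filter passbands in nm (illustrative only).
      BANDS = {"R": (600, 700), "G": (500, 580), "B": (400, 500)}

      def record(wavelength_nm, intensity=1.0, qe=0.6):
          """RGB response of a monochromatic line through hard-edged filters."""
          return tuple(
              intensity * qe if lo <= wavelength_nm <= hi else 0.0
              for lo, hi in BANDS.values()
          )

      for wl in (420, 440, 460, 480):  # four lines inside the B passband
          print(wl, record(wl))
      # All four come out as (R=0, G=0, B=...) - a real QE curve only changes
      # the B intensity, not the hue, so the distinct colors are lost.
      ```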
  18. Out of interest, how similar are the spectra within a particular stellar class? Say you take a couple of A0V stars (or some other stellar class) and compare their spectra. How much difference will there be - especially if you integrate the signal over some range, for example every 10nm (so you have ~30 samples over the visible range of 400-700nm; see the binning sketch below)? We don't have to use effective temperature - we can take a reference spectrum for each class and calculate XYZ coordinates or xy chromaticity per class (provided that different stars of the same stellar class give the same integral values).
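      The binning part of that comparison could look something like this (a sketch; spectra are assumed to be sampled on a fine common wavelength grid):

      ```python
      import numpy as np

      def bin_spectrum(wavelengths_nm, flux, lo=400, hi=700, width=10):
          """Integrate a spectrum into fixed-width bins (e.g. 10nm) and
          normalize, so two spectra can be compared shape-to-shape."""
          edges = np.arange(lo, hi + width, width)
          idx = np.digitize(wavelengths_nm, edges) - 1
          bins = np.array([flux[idx == i].sum() for i in range(len(edges) - 1)])
          return bins / bins.sum()
      ```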
  19. The main issue, as far as I see it, is that people don't really understand color correction all that well, and the tools available don't go deep enough to be useful in astrophotography. I'll probably do an example at some point, as I have Baader filters and an ASI1600 - which I bought before I bothered to learn all this color related stuff. I'll take a DSLR-type image of a color checker chart and one with the ASI1600, and I'll try to calculate the color correction matrix. That way it will be easy to see how accurate the colors produced with such RGB filters are. I already did this for my OSC camera, an ASI178mc, so the process is not that difficult.

      There is also another way to do it - if there were a reliable QE graph of the ASI1600 (or any other sensor) available. I have managed to find two QE graphs for the ASI1600, one published by ZWO and another measured by Christian Buil (a very experienced amateur spectroscopist). They are different, and not by a small margin. If I had such accurate data, it would be easy to write software to calculate both the conversion matrix and the resulting gamut. In any case, there is quite a bit of debate going on about the actual color of astronomical objects, and if you are interested, please join in:
  20. That is a step in the right direction, but again, if we want to be correct, we must not think in terms of white and yellow, but rather in terms of spectra. If you think that a G2V star should have 1:1:1 coordinates in the image - that again depends on what color space you select. With sRGB, neither direct G2V nor G2V viewed through the atmosphere will be exactly 1:1:1. This is because the spectrum of D65 - which has 1:1:1 in sRGB - looks like this: while the Sun has the following spectrum: (I cut roughly at 400-700nm; red is at sea level while yellow is at the top of the atmosphere). Here is what wiki says about D65: it is averaged from spectrometric data.

      In a color space with the D50 illuminant as white point, it will have different coordinates. That does not mean the color suddenly becomes different - it is still the same spectral distribution - it just means that the color space has 1:1:1 placed at a different point / different spectral distribution, by convention.

      In any case, taking a star and trying to white balance on it will not give you proper colors - only relatively close colors. Taking a bunch of stars and their spectra, calculating their XYZ values from those spectra, deriving a transform matrix from your raw measured values to those XYZ values by the method of least squares, transforming your image using that matrix, applying the XYZ to sRGB conversion matrix and finally applying sRGB gamma - that will produce proper colors (the last two steps are sketched below). The problem is that we don't have exact spectra for the stars in our image; the best we have is spectral class, or effective temperature from Gaia DR2 data.
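      The end of that pipeline in code form - the least-squares raw-to-XYZ fit was sketched earlier; here is the XYZ-to-sRGB matrix plus the standard sRGB transfer function:

      ```python
      import numpy as np

      # Standard XYZ (D65) -> linear sRGB matrix (IEC 61966-2-1)
      XYZ_TO_SRGB = np.array([
          [ 3.2406, -1.5372, -0.4986],
          [-0.9689,  1.8758,  0.0415],
          [ 0.0557, -0.2040,  1.0570],
      ])

      def srgb_gamma(linear):
          """Piecewise sRGB transfer function (IEC 61966-2-1)."""
          linear = np.clip(linear, 0.0, 1.0)
          return np.where(linear <= 0.0031308,
                          12.92 * linear,
                          1.055 * linear ** (1 / 2.4) - 0.055)

      def xyz_to_srgb(xyz):
          """XYZ -> gamma-encoded sRGB for a (..., 3) array of pixels."""
          return srgb_gamma(xyz @ XYZ_TO_SRGB.T)
      ```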
  21. Let me ask you the following. If I take my camera, select custom white balance, leave everything at 0 and export to the XYZ color space:
      - Will there be any white point applied?
      - Will there be any white balancing done?
  22. Except there is no white balancing involved when you want to faithfully show an emission type image. White balancing is ambiguous. I have an image of a white piece of paper. Which white balanced image is correct? Or is it the original unbalanced image? Yes, it is a white piece of paper - illuminated with reddish light on one side and bluish light on the other. What is it that we want to show in the image? That the piece of paper is white? How do we choose which illuminant is the correct one for this scene? If we want to show what really happened - we don't balance the image.
  23. Want to record a piece of paper as being white - you need to white balance. Want to record a flashlight giving off yellow light - no need to white balance.
  24. Do the same with stars, or with torches that produce lights of different colors. What illuminant should you use for those? Will the color of the light change if you change the scene illumination? Look at two colored lights and change the lighting in your room - will those colors change?