Everything posted by vlaiv

  1. Here are some results. I'll have to do multiple posts as there is a lot of data ... The results are quite unexpected - or maybe it is just a matter of interpretation. I think I need to find a better way to visualize the differences between cameras.

     Let me first do some intro: I found curves for a bunch of things - ASI1600, ASI185, ASI178, Baader filters, ZWO glass covers (both UV/IR and AR) - and combined them into meaningful profiles. For the ASI1600 I used Buil's measurement rather than the published data (although I created both profiles), since I've seen his measurement of Baader filters and they match Baader's published curves - so I don't have reason to suspect that his ASI1600 curve is wrong. However, I can't be sure that the published sensor curves are correct - there is no way of verifying that information. For that reason - this should be considered a theoretical rather than an actual result for each of these cameras. If I manage to confirm the results by different means - then we will be in a position to treat them as actual.

     Let me start with the QE graphs of the "contestants":
     - ASI178mc + ZWO UV/IR coated window glass
     - ASI185 + ZWO AR coated window + Baader IR/UV cut filter (that is how I'm going to measure the real camera later with different methods for comparison)
     - ASI1600 (Buil) + ZWO AR coated window + Baader R, G and B interference CCD filters

     I then created a python script that generates 1,000,000 random spectra in the 360-830nm range and calculates both XYZ (using the standard color matching curves for the 2° observer) and the raw response of each camera. Then I used the least squares method to solve for the transform matrix. Just to be sure that the solution is stable enough (we are using a random set of spectra each time and, since we are minimizing error, there will be variance in the resulting matrices), I ran it 3 times on the ASI178 model. Here are the results:

     First:
        0.710320557517913, 0.38596304011600513, -0.18196468723815176
        0.1691364171282709, 1.0066983090786776, -0.5077560059357316
        0.05762047461274392, -0.16837359062685783, 1.4747700892512243
     Second:
        0.7106530004897218, 0.3857756561317365, -0.18198339828143478
        0.16918327451701587, 1.006894913610129, -0.5081409335605773
        0.05718726281193481, -0.1686434711209549, 1.4756586916969725
     Third:
        0.7103502553581972, 0.38568029007752436, -0.1815377132161584
        0.16911938483565908, 1.0066373572193015, -0.5076413483013693
        0.057729996373306496, -0.1681957338241362, 1.47435009985627

     These differ in the third decimal place, so we could say they are stable enough.

     Resulting Raw to XYZ transform matrices:

     ASI178:
        0.710320557517913, 0.38596304011600513, -0.18196468723815176
        0.1691364171282709, 1.0066983090786776, -0.5077560059357316
        0.05762047461274392, -0.16837359062685783, 1.4747700892512243
     ASI185:
        0.6687070598566506, 0.43457328602775613, -0.34725651432413374
        0.1774728749070924, 1.055721622412892, -0.7360695789108003
        0.1847243138023235, -0.26623470123526377, 1.7405880294984606
     ASI1600:
        0.6135793545429793, 0.3021158351418402, 0.186178032627276
        0.2678263945071415, 0.7665371985113655, 0.075145287086023
        -0.0616657224686888, -0.16815021009453934, 0.9726970661590604

     (sorry about the number of digits - it does not make much sense to include that many since the precision is lower, but I just went copy/paste).

     Next we will examine the gamut of each camera.
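     For anyone who wants to reproduce the fitting step, here is a minimal python sketch of the approach (assumptions on my part: the CIE 2° matching functions and the camera QE curves are tabulated on a common wavelength grid - random placeholders stand in for them below, and the spectra count is reduced for speed):

        import numpy as np

        wavelengths = np.arange(360, 831)        # 360-830 nm in 1 nm steps
        N = wavelengths.size

        # placeholders - in the real script these come from tabulated curves
        cie_xyz = np.random.rand(N, 3)           # 2-degree observer matching functions
        cam_qe = np.random.rand(N, 3)            # camera R/G/B response incl. filters/window

        n_spectra = 100_000                      # 1,000,000 in the run above
        spectra = np.random.rand(n_spectra, N)   # random spectral power distributions

        xyz = spectra @ cie_xyz                  # per-spectrum XYZ response
        raw = spectra @ cam_qe                   # per-spectrum camera raw response

        # least squares: find 3x3 matrix M such that raw @ M ~ xyz
        M, _, _, _ = np.linalg.lstsq(raw, xyz, rcond=None)
        print(M.T)                               # rows give X, Y, Z as combinations of rawR, rawG, rawB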
  2. M42 is much brighter than the other two objects. The fact that you are imaging at 1500mm FL and using short exposures is the reason you are having issues with fainter objects. You are working at 0.59"/px - which is far too fine a sampling even for a premium AP mount and most sky conditions. Without going into too much detail - with short exposures read noise becomes an important factor, and when you spread the light out so thinly over your pixels it really becomes an issue. You need a very strong signal to offset the read noise. M42 has it - the others don't, so a total of 1h or less is simply not going to work on those targets. You can do a few things to improve the situation:
     1. pay attention to focus
     2. bin your subs after calibration and debayering - at least 4x4
     In fact - if you still have your original stacked image (still linear, without any processing) - bin it 4x4 and try processing it again (see the sketch below). This will improve things somewhat, but don't expect miracles.
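     If it helps, here is a minimal numpy sketch of 4x4 software binning (my assumption of the simplest form: average of each 4x4 block of a single-channel, already calibrated and debayered image):

        import numpy as np

        def bin_image(img, factor=4):
            """Average-bin a 2D image by the given factor."""
            h, w = img.shape
            h, w = h - h % factor, w - w % factor              # crop so dimensions divide evenly
            blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
            return blocks.mean(axis=(1, 3))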
  3. Actually, there is reason to believe that it was somewhat brighter then than it is now. It has to do with surface brightness and the fact that planetary nebulae expand over time. I don't know how much it has expanded since then, but it is fair to assume that it is a measurable amount (if we had an image from back then and one shot now).
  4. Ok, for the benefit of others reading this thread, I must comment on this statement:
     - XYZ(D50) is a non existent color space; it has no definition. Using that term as the name of a color space is simply wrong.
     - XYZ is an absolute color space without a reference white point. A spectrum of light that represents a certain color will have a unique set of XYZ coordinates. There are no different coordinates for a light spectrum / particular light color in XYZ depending on a (fictional) white reference. Standard illuminants D50, D55 and D65 always have fixed coordinates in xy chromaticity: they are constants and don't change with any (fictional) white point.
     - Using D50 as the white point of the sRGB space is simply wrong, as the sRGB standard clearly defines D65 as the white point and there are no multiple white points in this color space. Only D65 light can be represented as a 1,1,1 vector in sRGB space.
     - One can properly display any light color that is within the color gamut of a calibrated sRGB monitor by using the following "transform": CameraRaw -> (raw_to_xyz_matrix) -> XYZ -> (standard_xyz_to_srgb_matrix) -> sRGB. No need to involve any Bradford transforms that represent white balance, nor is it required to involve illuminants.
     (clarification - the term light color or color of the light is used here for the "compressed" light spectrum - a tuple of 3 numbers representing the light spectrum in a form such that no distinction can be made by our visual system).
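     As a sketch of that "transform" chain (raw_to_xyz below is a placeholder for whatever matrix was derived for the particular camera; the XYZ->sRGB matrix and gamma encoding are the standard D65 sRGB definitions):

        import numpy as np

        xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],    # standard linear XYZ -> sRGB (D65)
                                [-0.9689,  1.8758,  0.0415],
                                [ 0.0557, -0.2040,  1.0570]])

        raw_to_xyz = np.eye(3)            # placeholder - substitute the camera's derived matrix

        def srgb_encode(c):
            """Standard sRGB gamma encoding of linear values in [0, 1]."""
            c = np.clip(c, 0.0, 1.0)
            return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

        def raw_to_srgb(raw_rgb):
            xyz = raw_to_xyz @ np.asarray(raw_rgb, dtype=float)  # CameraRaw -> XYZ
            return srgb_encode(xyz_to_srgb @ xyz)                # XYZ -> sRGB, no white balancing step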
  5. Thing is - I'm pretty confident that I understand what is going on; however, you keep contradicting me, so I'm trying to find out why that is. I'm happy to be proved wrong - as I'll benefit from a correct understanding of the topic (if I'm indeed wrong). Here is one example: I made a list of claims I think are true, and here is one of them:
     - if you have white paper that you shot and it was illuminated with some illuminant - say D50 or D55 - and you want to show it as white paper in sRGB, you need to do white balancing D50->D65 or D55->D65 and that will show the paper color as 1,1,1 or white in sRGB. You do that white balancing by using a particular Bradford adapted XYZ->sRGB transform matrix (either D50 or D55).
     You listed that you don't agree with that, but you have not shown me why you don't agree - or what the alternative true statement would be.
  6. It is much more than that - it is relative not only to the selected illuminant but also to the color checker square that has "neutral grey". We don't have much information on what that neutral grey is, so in principle it can be anything. I still have not found an exact definition of XYZ(D50) space - what does a coordinate in it mean? In fact, if I do a search on Google for XYZ(D50) - I don't get anything related to it - just text mentioning the regular XYZ color space and the D50 illuminant. When I apply chromatic adaptation to XYZ space - I'm effectively swapping colors of the light in a certain way. You can't adapt/change XYZ space, or coordinates in it - you adapt the image so that pixels in the image represent a different color in XYZ space than the original image. XYZ space remains the same - I just say: ok, pixel 1 no longer represents object W illuminated with illuminant V but rather object W illuminated with illuminant Q, and that illuminant Q will trigger a psychological response of our brain in these conditions that is most similar to what illuminant V and object W triggered in the original conditions. However - the spectra themselves don't change and the associated coordinates in XYZ space don't change - you are effectively substituting lights with different lights in the hope that it will trigger the same psychological response.
  7. I don't really follow this. Does that mean that in XYZ(D50) space Illuminant A has D50 coordinates? How about Illuminant D65 - does it also have D50 coordinates? From your description it follows that it does. We can easily see that XYZ(D50) is not a very useful space if all illuminants map to D50 (we can use an illuminant of any color and - by analogy with the above - it will end up being D50). I personally don't have a spectrometer, but I have a good enough spectroscope for that application (a StarAnalyzer 200 is more than capable of doing that), and people making color checker cards can certainly invest in a spectrometer. From a quick lookup online - the latest small models are quite affordable for such a business - they are about $1000 and have quite a bit of resolution - something like 1024 or 2048 pixel linear CMOS sensors and sensitivity in the 300-1000+nm range (there are IR models that are sensitive in 1-10µm ranges). Of course - they come properly calibrated, so you don't have to do anything special like I would need to with the SA200.
  8. How are the coordinates of light sources different in XYZ(D50) than in XYZ color space? Say I have these: what coordinates will those have in XYZ(D50)? Very simple - you take your color chart, illuminate it with any illuminant that you like and measure the resulting spectrum with a spectrometer for each patch. Take the resulting spectrum for each patch and calculate the XYZ coordinates of that light. Now take the same illuminant, illuminate the color checker chart and record it with the DSLR. Measure the CameraRaw values and perform least squares fitting of those vectors to the above vectors. Note that two different parties can be involved as long as they share the reference illuminant and the same color checker.
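     A minimal sketch of that fitting step (hypothetical file names: patch_xyz.csv holds one XYZ triplet per patch computed from the measured spectra, patch_raw.csv holds the corresponding CameraRaw triplets):

        import numpy as np

        measured_xyz = np.loadtxt("patch_xyz.csv", delimiter=",")   # (n_patches, 3)
        camera_raw = np.loadtxt("patch_raw.csv", delimiter=",")     # (n_patches, 3)

        # solve camera_raw @ M ~ measured_xyz in the least squares sense
        M, _, _, _ = np.linalg.lstsq(camera_raw, measured_xyz, rcond=None)
        raw_to_xyz = M.T                                            # 3x3 CameraRaw -> XYZ matrix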
  9. Can you give me a definition of XYZ(D50)? No need to do that - as it will be incorporated in the transform matrix. I will have to look that up. I'm not sure how that is going to be done, as one needs to scale all triplets to unit length to avoid intensity scaling, but Lab color space is non linear if I remember correctly. Yes, it involves a cube root. Maybe the optimization for the matrix is then not to minimize the sum of squared differences in XYZ space, but to minimize the sum of squared differences in Lab space, while still calculating the matrix in XYZ? Why would we want to do that? D50 and G2V both have known and different XYZ coordinates. Mapping them to the same coordinate will distort colors in the image.
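     For reference, a sketch of the XYZ -> Lab non-linearity being talked about (standard formula; note that Lab needs a reference white - D65 is assumed here purely for illustration):

        def xyz_to_lab(xyz, white=(0.95047, 1.0, 1.08883)):     # D65 reference white assumed
            x, y, z = (v / w for v, w in zip(xyz, white))
            d = 6 / 29
            f = lambda t: t ** (1 / 3) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29
            fx, fy, fz = f(x), f(y), f(z)
            return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)   # L, a, b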
  10. I like the SHO better as well, but to my eye, the technique where the starless version is processed separately and then the stars are put back in seems too artificial. I guess that is because some of the stars are lost. I think it is down to my expectations - the brain is trained to see many more stars at that level of stretch than can be seen in the image. I don't mind stars being tight - it's just that there are so few of them. I don't mind a starless version without stars - just showing nebulosity. It is this "middle ground" that seems confusing.
  11. In principle - the emissive case is no different from the case with the color checker chart - as long as you don't try to do any white balance adjustment and leave the colors as seen by the sensor, rather than as expected by the adapted eye. In the above case - one can use a D50 illuminant and still use the color checker chart - but we can't use the sRGB values given. I think part of the problem is that people think they can use any white point with the sRGB space - that is not true. The sRGB space is specifically designed so that D65 is 1,1,1 - it is so by definition. If you have any other illuminant and you convert to sRGB and get 1,1,1 - you have performed a chromatic adaptation transformation (or in different language - you have white balanced to that illuminant). That is why the "instructions" that I've posted above are misleading - they say: use a D50 illuminant on our color checker card and expect the gray square to come out 1:1:1 in sRGB - but that can't happen unless you do white balance D50->D65, or you decide to ignore the sRGB standard and use a different XYZ->sRGB matrix (a color corrected one - which is basically the same thing as doing white balance first and then doing the regular XYZ->sRGB transform).

     Hopefully, we have concluded that when one shoots with their DSLR, selects custom white balance (set to 0, 0) and uses dcraw to convert that image to XYZ - one gets proper "emission" case XYZ values. That is enough for astrophotography - these values are already "properly balanced", or authentic. We can later do correction for atmospheric reddening or whatever - but that is accurate color in terms of wavelengths.

     I'm planning to derive color transform matrices for a few of my cameras in different ways and see how similar they end up being (not to each other, but across methods). If all matrices end up being fairly close - then any of the methods can be used.
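     (For reference, the dcraw invocation I have in mind for that "emission" case would be something along the lines of the one below - flags quoted from memory, so check against the man page: -4 for linear 16-bit output, -T for TIFF, -o 5 for XYZ output space, -r 1 1 1 1 for unit white balance multipliers and -W to disable auto-brightening; IMG_0001.CR2 is just a placeholder file name.)

        dcraw -v -4 -T -W -o 5 -r 1 1 1 1 IMG_0001.CR2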
  12. I don't think you made an error, and prompted by this discussion I have set myself the goal of actually calculating the potential color gamut for this, among other combinations. We are yet to see how inferior, if at all, this combination is compared to OSC cameras. Hopefully, I'll finish that soon and post results ...
  13. I'm about to do some experiments and maybe it is best if I explain by describing what I'm planning to do. I have a couple of astronomy cameras: ASI1600mono + filters, ASI178 color and ASI185 color. I want to create "Camera raw" to XYZ transformations for each of them.

     The easiest approach, when one has the QE response of the camera for the R, G and B filters, is to use mathematics. This is what I plan to do for the published QE graphs of these cameras (although I'm not 100% certain they are correct - I'll still do it as part of the experiment). You start by generating a number of random spectra in the relevant range - 360-830nm (the XYZ matching functions that I've found have values in that range) - and calculate the X, Y and Z values for each of those spectra, given by the usual expressions X = ∫ S(λ)·x̄(λ) dλ, and likewise for Y and Z with ȳ and z̄. I also calculate rawR, rawG and rawB values in the same way - except using the QE graph of the camera as the matching functions for rawR, rawG and rawB. In the end I simply do least squares fitting to find the matrix that maps between corresponding vectors (X, Y, Z) and (rawR, rawG, rawB). I have now obtained a matrix that transforms camera space to XYZ space (note that there is no white point / illuminant involved in this process, as it is not needed).

     Now imagine that I don't have camera curves - only the camera, and I don't have XYZ matching functions - only an ideal sensor with XYZ component responses. How would I go about creating the matrix? That part is easy as well - I either take a color checker chart and illuminate it with any broad band illuminant, or take any display (regardless of calibration) and show a color checker pattern, and I take one image with my camera and another with the perfect XYZ camera. Then I measure rawR, rawG, rawB values for each of the colors in the color passport and I measure X, Y and Z for them as well - I again have a set of vectors in one space and vectors in the other space. I can use the least squares method to derive the Camera Raw to XYZ transform matrix.

     So far so good; both of the above examples are independent of illumination and both use XYZ as the reference for calibration - either the matching functions in math form or an actual sensor that has such a response. But what if I have neither the sensor response curves nor an XYZ sensor - which is the reality for most people (although sensor response curves can be obtained with a bit of spectroscopy - but that is not trivial either - one needs expensive calibrated equipment and lots of knowledge)? What if I have something that is relatively inexpensive and readily available? Something like this: And I also get this chart: and somewhere it says that this chart has been produced with D50 as the illumination source. I then say: great, I'm now going to use these sRGB values and derive standard XYZ values from them, and since they say D50 as illuminant, I'm going to use that to illuminate my chart and record it with the camera - and I have my set of vectors.

     But there is a catch that I'm not understanding - the sRGB values are D65 white point values. If I take color number 20 with RGB 200, 200, 200 and calculate XYZ - I'll get D65 XYZ coordinates. Then I take a D50 illuminant, illuminate this chart and record the data - number 20 will no longer have D65 coordinates but D50 coordinates. On one side I have Camera raw data that was recorded under D50 illumination and on the other side I have XYZ that was produced from sRGB with D65 illumination. When I do the least squares method, I am actually saying: I want the grey square that has been illuminated with D50 illumination to match the same grey square illuminated with D65 illumination.

     I'm not only deriving camera raw to XYZ, I'm deriving a combined transformation consisting of two parts - D50 to D65 white balance + conversion to XYZ. I was under the impression that this is what was going on - according to your interpretation: take the color chart, illuminate it with D50 and the result will end up as 1,1,1 in sRGB. This is what you insisted will happen. What I say, when I say that CameraRaw to XYZ(D50) is just a moniker and XYZ does not have D50 as a white point, is the same process but it goes like this: take the color chart, illuminate it with Illuminant X, take the sRGB values above, do chromatic adaptation from D65 to Illuminant X, then convert to XYZ color space and then do the least squares method. This produces the same results regardless of what Illuminant X is. We can say XYZ(Something) and XYZ(Other), but the reality is that XYZ(Something) == XYZ(Other), so what is the point in specifying the illuminant used to derive the matrix? There is a point only in the resulting color error. We are trying to derive a fit-all matrix with only 20 samples. This is not going to work in all cases (nor is there a perfect matrix for each camera), and two different matrices are derived - one using D50 as Illuminant X and the other Illuminant A as Illuminant X - and these have slightly different values, but perform the same task, only with different error margins for different sets of spectra. Makes sense?
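     To make the sRGB -> XYZ side of that concrete, a minimal sketch (chart values such as 200,200,200 are scaled to 0-1, linearized with the sRGB curve, then multiplied by the standard linear sRGB -> XYZ D65 matrix - the same one dcraw uses, quoted a couple of posts below):

        import numpy as np

        srgb_to_xyz = np.array([[0.412453, 0.357580, 0.180423],    # linear sRGB -> XYZ (D65)
                                [0.212671, 0.715160, 0.072169],
                                [0.019334, 0.119193, 0.950227]])

        def srgb_decode(c):
            """Undo the sRGB gamma; input scaled to 0-1."""
            c = np.asarray(c, dtype=float)
            return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

        def chart_srgb_to_xyz(rgb_8bit):
            return srgb_to_xyz @ srgb_decode(np.array(rgb_8bit) / 255.0)

        print(chart_srgb_to_xyz((200, 200, 200)))   # grey patch no. 20 - lands on the D65 chromaticity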
  14. From the wiki page on chromatic adaptation: (text is transferred as-is, without any emphasis on my part). Note the last sentence - when humans see the paper as white under a 3000K illuminant - that is chromatic adaptation. When a camera does that - it is called white balance, or chromatic adaptation transformation, or CAT. There are three different types of chromatic adaptation:
     - the first is simple scaling of the primaries - this tries to make an object that we perceived as white be white in the color space of choice (or rather the white point of that space) - this involves three scalar values
     - the second is a matrix - a bit more complex case
     - the third is a non linear transform, or a matrix in a non linear color space - often modeled with perceptual uniformity in mind
     All of these deal with the effect of chromatic adaptation - and not with regular color space transformations. Color space transformations preserve color - chromatic adaptation changes the color to look more like what a person would have seen under the given circumstances - to match different circumstances.
  15. Excellent, finally we have settled on what is going on and I think we agree on the mechanics. I would also like to add that this is the wrong way to do things. It is, in principle, the wrong way to do things and is a shortcut utilized for some reason by Adobe and others. In fact, it is recommended in their specification. CameraToXYZ(D50) takes absolute camera coordinates, then changes the color and translates that changed color to XYZ, and then you need to use an adapted (i.e. wrong) matrix to convert that to sRGB coordinates.

     You have written the following step above: CameraRGB->XYZ(D50)->sRGB. While that is correct, and the same as I have written above, it does not help to understand what is going on. The same step can and should be written, for the purposes of correctness, like this: CameraRAW->D50 adaptation->XYZ->(D50 adapted)sRGB. Since these are all matrices - you can choose to multiply different matrices into a single step - into a single matrix - but the above is the proper way to describe what is going on.

     Here is why. Take any XYZ coordinates as recorded light. Say we take the E illuminant with 1/3, 1/3 coordinates. Forward transform it with:

        static const double xyzd50_srgb[3][3] = {
            { 0.436083, 0.385083, 0.143055 },
            { 0.222507, 0.716888, 0.060608 },
            { 0.013930, 0.097097, 0.714022 } };

     then backward transform it with:

        const double xyz_rgb[3][3] = {   /* XYZ from RGB */
            { 0.412453, 0.357580, 0.180423 },
            { 0.212671, 0.715160, 0.072169 },
            { 0.019334, 0.119193, 0.950227 } };

     and you will get different coordinates than you started with. There are no different coordinates in XYZ for a color. Every color always has the same coordinates in XYZ. It is an absolute space; you can't change the coordinates of a color just by selecting a white point, as XYZ has no white point. If you disagree with that for some reason, try to:
     - find online two different values for any illuminant in XYZ / xy
     - explain how shining that illuminant on the xyz color matching functions can produce different integrals depending on whatever conditions

     The main issue arises when people think in terms of object color - and that simply does not exist. The only thing that physically exists is the color of the light. You can only think of the color of an object under certain illumination - and then again we are talking about the color of the reflected light and not the color of the object. If you want to do proper white balance to show what the color of the object would look like under a different illuminant, then you have to do the following: take the color of the light that you recorded in XYZ, convert that to LMS, and, knowing the original illuminant and the destination illuminant, derive the chromatic adaptation matrix using a CAT like Bradford and convert the values - then return those values from LMS space into XYZ and that will give you the (approximate) color of the light that you would record if the original object were illuminated with the destination illuminant. https://en.wikipedia.org/wiki/LMS_color_space

     By the way, when I speak of color, and here might be the source of some confusion, I tend to speak of the spectrum of light and measurement and not subjective feel / perception.
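     A sketch of that XYZ -> LMS -> XYZ step using the Bradford transform (matrix values and illuminant white points taken from the usual references; this maps a recorded XYZ color to the XYZ of the light that would appear similar under the destination illuminant):

        import numpy as np

        bradford = np.array([[ 0.8951,  0.2664, -0.1614],    # XYZ -> Bradford "LMS"
                             [-0.7502,  1.7135,  0.0367],
                             [ 0.0389, -0.0685,  1.0296]])

        D50 = np.array([0.96422, 1.0, 0.82521])              # illuminant white points in XYZ
        D65 = np.array([0.95047, 1.0, 1.08883])

        def bradford_adapt(xyz, src_white=D50, dst_white=D65):
            lms = bradford @ np.asarray(xyz, dtype=float)
            scale = (bradford @ dst_white) / (bradford @ src_white)   # von Kries style scaling in LMS
            return np.linalg.inv(bradford) @ (scale * lms)            # back to XYZ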
  16. Ok, explain the following then. In the dcraw code, at line 135, there is the following:

        const double xyz_rgb[3][3] = {   /* XYZ from RGB */
            { 0.412453, 0.357580, 0.180423 },
            { 0.212671, 0.715160, 0.072169 },
            { 0.019334, 0.119193, 0.950227 } };

     which calculates XYZ from RGB. If you multiply 1,1,1 with that, you get 0.950456, 1, 1.088754 in XYZ space, and that is 0.312731, 0.329033 in xy chromaticity or - guess what - the D65 illuminant.

     Further, if I set my monitor to 5000K, display a test image that looks like this: (that is rgb 1,0,0; 0,1,0; 0,0,1; 1,1,1) and shoot it with my Canon DSLR in raw mode and use dcraw to extract XYZ, I get this: 0.941821632539927, 1, 0.759163712244686, which in xy chromaticity is 0.348696, 0.370235 - or very close to D50, which has 0.34567, 0.35850. When I convert that image to sRGB with dcraw, it looks like this:

     But when I set my computer screen to 6500K, take an image of the same test pattern and use dcraw to extract XYZ, I get this: 0.946408248141336, 1, 1.08111955117533, which is again xy chromaticity 0.312601, 0.330302 (again, D65 being 0.3127, 0.3290), and when I ask dcraw to generate the sRGB image, it looks like this:

     All that I've just shown you is consistent with what I've been saying and inconsistent with what you've written above.
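     The xy chromaticity figures quoted above follow directly from the XYZ triplets; a one-liner check:

        def xy_chromaticity(X, Y, Z):
            s = X + Y + Z
            return X / s, Y / s

        print(xy_chromaticity(0.950456, 1.0, 1.088754))   # ~(0.3127, 0.3290), i.e. D65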
  17. If we then convert those two XYZ images to the sRGB color space, what should we expect to get for the RGB values in each?
  18. Ok, we now have the same setup - a regular DSLR, paper illuminated with D50 for one shot and D65 for the other. What XYZ coordinates will we measure from the two XYZ images produced by dcraw from the raw files if custom white balance is used with 0, 0 settings?
  19. I've linked twice to how XYZ data is generated with the xyz matching functions. Imagine you have a device that has an "XYZ sensor" and records directly in XYZ, in the same way sensors work and in the same way XYZ is defined to work. We have white paper and we illuminate it with D50 and D65. Will the XYZ coordinates of the paper color be different or the same? Will they have different xy chromaticity or the same?
  20. Ok, whenever I try to incorporate your view into the picture - I end up returning to how I originally understood it. I'm going to make a list of points, and I'll ask you to name which ones you disagree with and why.
     - XYZ is an absolute color space and as such has no white point - every color or light has unique XYZ coordinates
     - when converting to a particular color space - you need to use the white point of that color space
     - the white point of a particular color space will have 1,1,1 coordinates in that color space and all other colors will have different coordinates in that color space
     - if you want to do white balance - you need to do the following: convert to XYZ, then convert to LMS, do the Bradford transform, return to XYZ and then return to the original color space. For successful color correction you need the assumed and target illuminants, or rather their XYZ coordinates
     - Adobe XYZ(D50) and XYZ(A) are just camera raw to XYZ conversions that don't use a white point (as there is no sense in doing that) but are the same thing, minimizing color error depending on what (if any) white balance you plan to perform later on
     - if you have white paper that you shot and it was illuminated with some illuminant - say D50 or D55 - and you want to show it as white paper in sRGB, you need to do white balancing D50->D65 or D55->D65 and that will show the paper color as 1,1,1 or white in sRGB. You do that white balancing by using a particular Bradford adapted XYZ->sRGB transform matrix (either D50 or D55)
     - if you want to record the color of the light - whatever that color is - you don't have to do any color balancing at all - just convert from XYZ to sRGB with the standard, not Bradford adapted, matrix
  21. I'm going insane now. That is not what we need. D50 is not supposed to have 1,1,1 in the sRGB color space. D65 is white in the sRGB color space.
  22. I think it is rather simple - use it if you can. How to tell if you can? That is also easy to check. Create a few darks with, say, 30s exposures and a few darks with 90s exposures. Take a set of bias frames as well.
     - stack each set
     - take the 30s master, subtract the bias and multiply by 3
     - take the 90s master, subtract the bias
     Compare the resulting two - if they are the same, you can comfortably scale your darks (see the sketch below). What does it mean to be the same? Well - subtract the two and examine the resulting sub. It should have a mean value of 0 and be pure noise without any visible patterns in the image. Do a Fourier transform on it - that should also be pure noise without any distinctive features. Btw - it is better to scale long darks down than it is to scale short darks "up" - "noise wise".
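     If you prefer to check that numerically rather than by eye, here is a rough numpy sketch (hypothetical .npy files holding stacks of frames; masters are plain averages):

        import numpy as np

        darks_30s = np.load("darks_30s.npy")     # shape (n_frames, height, width)
        darks_90s = np.load("darks_90s.npy")
        biases = np.load("biases.npy")

        master_bias = biases.mean(axis=0)
        scaled_30 = (darks_30s.mean(axis=0) - master_bias) * 3     # 30s dark current scaled to 90s
        master_90 = darks_90s.mean(axis=0) - master_bias

        diff = master_90 - scaled_30
        print("mean of difference:", diff.mean())                  # should be close to 0

        # power spectrum of the difference - should look like featureless noise
        power = np.abs(np.fft.fftshift(np.fft.fft2(diff))) ** 2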
  23. You do know how that list was made? Messier actually tried to find comets and kept "bumping" into these "smudges" that were comet-like, and decided to make a list so that the next time he bumped into one - he knew it was not a comet. I guess he started with the one he bumped into most often. That says something about how easy it is to find that target?
  24. And here is the solution to the whole color problem with DSLR cameras: use these instead of the regular D65 matrices. These are Bradford adapted color transform matrices from XYZ->sRGB, where XYZ was messed up by the use of D50. The right one is the correct forward XYZ->sRGB matrix.
  25. On the info page of the same website, you'll see this text as well: I added bold to emphasize the important part. You use a reference illuminant only when you want to get the XYZ of the "color of an object" and not the color of the light. There is no single color of an object - it depends on the illuminant. If you want an accurate representation of the color of the light - then you must choose the same illuminant as defined in the color space itself. If you are converting to the sRGB color space - you must use the illuminant defined for that color space. In any case - it is a good thing that you pointed this out to me, as using the D50 matrix for a DSLR will not produce proper color for light sources when converted to sRGB. It still needs to be converted to D65 - because people working with DSLRs simply can't stop thinking in terms of "color of object" instead of "color of light" (I guess that is because daytime photography mostly deals with objects and not sources of light). This also means that I calibrated my monitor wrongly - it was properly calibrated to begin with, as it gave a reading of D50 for D65 white (I was expecting a ~6500K value but I was getting a ~5000K value). I'll have to reset it.