Everything posted by vlaiv

  1. I would not know that. Doesn't that depend on the competition itself? The rules of the competition and the judges can best tell you what is valued in that particular competition.
  2. I can create a sort of tutorial once the transform matrix is known, but creating the transform matrix for a particular setup is probably the problematic part. I've written about it before in this thread - one needs a reference calibration device and a bit of recording, math and spreadsheet work to derive the transform matrix. Alternatively, maybe software exists that does that for us? I haven't searched, but creating color correction matrices is a common operation - photographers use it all the time when using color checker passports.
  3. Depends what you are trying to achieve. A white reference is needed when you are performing a transform from XYZ, which is an absolute color space, into a color space that deals with perceptual qualities. It is one of the quantities needed to describe the viewing environment. Let me give you two distinct scenarios to explain that a bit better.

Scenario 1. You want to show the color of a star on your computer screen in such a way that, if you had the actual light of that star next to your screen and looked at the two side by side, you would say: these are the same color.

Scenario 2. You are in a space suit floating in outer space, you view the star with your own eyes and see its color. You remember what that light looked like. You later see an image of the star on your screen and want to be able to say: yes, that is the exact color of the star as I remember seeing it when I was in outer space.

The first scenario is color matching - we want to match the physical quality of the light so that, seen side by side, you can say: yes, they are the same color. The second scenario is trying to match your perception. The thing is, human perception changes with the environment, and in order to match perception we need to alter the physical side of the light so that it produces the same perceptual response in the new environment (in the first instance you were in the dark looking at faint light from a star/galaxy, in the second you are in a lit room with certain illumination looking at a much brighter image on your computer screen - the same light will produce two different "color sensations" in those two environments).

All this time I'm advocating for Scenario 1, or possibly Scenario 2 with clearly defined viewing conditions for both environments. In fact, we have a "destination" environment in the first scenario as well - we will all be viewing in some environment. That viewing environment is defined by the sRGB standard. If we create a jpeg or png image and don't include color space information, sRGB is implied. sRGB has a clearly defined viewing environment: https://en.wikipedia.org/wiki/SRGB Here you can see that the white point of the color space is D65. Encoding and ambient white points have to do with materials being photographed - not applicable in our case, as we don't illuminate anything in outer space.

That is the second thing I'm advocating - we should treat all the light we capture in astrophotography as the emissive case, not the reflective case. The rationale behind this is:

- Most sources are in fact emissive - stars, the open / globular clusters formed out of those stars, the galaxies formed out of those same stars; emission nebulae also emit their own light. The only reflective objects are gas/dust in the form of reflection nebulae or IFN.

- We handle the reflective case because we are used to looking at objects under different lighting conditions. A white point is used because you see a sheet of white paper as white under sunlight, under fluorescent light and under candle light. In all of these cases the light reflected off the paper is different, yet we perceive it as white. One of the reasons we use a white point is so we can see the paper as white both in the given viewing conditions and later when looking at the image we took on our computer screen. We expect the paper to be white in both cases because we are used to it. Reflection nebulae and IFN never fall into this category - we never get to see them under different light. We can never change the illumination that falls on them; their spectrum is "fixed" as far as we are concerned, unlike light reflected off paper, which changes depending on which light we turn on. Since the spectrum is fixed, it behaves as the emissive case (we can't change the spectrum of a star either) - it is fixed for all observers on Earth, not only for us.

Given all of the above, we can either:

- Use the XYZ color space that our calibration produced, do the stretch on the Y component, which is perceptually very close to luminance (although XYZ is not a perceptually uniform color space, neither is our stretch linear, so we need not worry about it as long as we are happy with how the stretch looks in mono), scale all pixels (or rather their X and Z components) by the ratio of stretched_Y to starting_Y, and in the end simply do the XYZ -> sRGB conversion and we are done (a small sketch of this option follows below), or

- Use a perceptual color space like CIECAM02, or its newer version CAM16 (still not a CIE standard), to produce perceptual correlates - as if we were floating in outer space looking at light with amplified intensity (the XYZ data stretched via Y) - and then reproduce the same perceived color in our viewing environment, which again needs to be sRGB since that is what a jpeg / png implies.

After you have done that - well, tweak colors as you please, or just leave them as they are.
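Here is a minimal numpy sketch of that first option (my own illustration, not from any particular package). It assumes you already have a linear XYZ image from calibration, uses the standard XYZ (D65) to linear sRGB matrix, and applies the piecewise sRGB transfer curve at the end:

import numpy as np

# Standard XYZ (D65) -> linear sRGB matrix
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def srgb_encode(linear):
    # Piecewise sRGB transfer function (the "gamma")
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

def scale_by_stretched_y(xyz, stretched_y):
    # Replace each pixel with the same light, only amplified so its Y matches
    # the stretched luminance (the same ratio is applied to X, Y and Z)
    y = xyz[..., 1]
    safe_y = np.where(y > 0, y, 1.0)
    ratio = np.where(y > 0, stretched_y / safe_y, 0.0)
    return xyz * ratio[..., None]

def xyz_to_srgb(xyz):
    # xyz: H x W x 3 linear XYZ image, scaled so that Y <= 1
    return srgb_encode(xyz @ XYZ_TO_SRGB.T)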
  4. It is not true color, of course, but there is strong reasoning behind it as it was originally devised: - it is very easy to map components directly to channels, and Ha, which is usually the strongest signal (best SNR) and also tends to be present in most parts of the target, is mapped to green, which carries the most luminance information; - there is no point in mapping Ha and SII to true color as they are virtually indistinguishable by color - 656nm and 672nm are so close that we see them both as deep red. SHO was originally used as a scientific tool - a way to visualize gas distribution in a single image rather than looking at three different images. I guess scientists loved the way it looks, and although it is not an accurate / natural rendition of the target they decided to release it to the general public (probably under the influence of a PR team - there is often pressure to justify funds, and scientists often need to make their work likeable as well).
  5. I think that it is possible to do both, and in fact I would argue that color calibration is necessary for quality artistic work. If color is a completely arbitrary thing, why do we take RGB data at all? Why not spend all that time doing luminance and then assign colors completely arbitrarily to the image? That would create both a deeper and a more "artistic" image. I think that most people prefer to have "bounds" or a "guideline" in their artistic expression - they want to start with some color and then shape it to their liking. If so, why not start with "correct / calibrated" colors rather than arbitrary raw colors? Starting with correct colors means you have a common starting point whether you are processing data from your own camera or someone else's - or, if you have multiple cameras, you'll always have a common starting point. This avoids dependence on your particular camera and the situation of "I've learned to process the data from my own camera - yours is so much harder for me to get the colors I want".
  6. White reference is related to perceptual color, not to physical color. We can see a certain spectrum of light as white in certain conditions, but we will see the same spectrum as slightly bluish, or maybe slightly yellowish or even reddish, in different conditions - depending on our environment. Similarly, we can see one spectrum of light as white and a different spectrum as white under different conditions. D50 and D65 will both be seen as white in certain circumstances, but when you observe them next to each other you'll clearly see the difference, and it can happen that neither of them is perceived as white at that moment. The point of calibration is not to deal with perceptual phenomena of color; the point of calibration as the first step in processing data is to "align" one's instrument with an agreed-upon standard so that everyone can measure the same thing. We color calibrate our camera so it can reliably produce XYZ values (within error margin) that will match both the CIE XYZ standard observer and other properly calibrated cameras. Using any sort of "reference white" in this context is like calibrating our ruler to what we believe one meter is rather than calibrating against the agreed-upon standard. Reference white only comes into the picture once we start dealing with the perceptual side of things - depending on what we want to achieve (modeling viewing conditions) - and should be used in the last step of the processing workflow.
  7. This is where I have a problem with that process from PI - color calibration of astronomical images does not require a white reference.
  8. I've found that the amp glow / starburst thing is properly calibrated out on all my CMOS sensors, as long as they are thermally stable, and I don't see it being an issue at all. In fact, here is a stretched master dark for the above image of M51: While not quite the starburst kind of glow, it is still present in the stretched dark (this one binned x8 for size/display). Here is a master dark from my other camera: This one is a real starburst (ASI178mcc - cooled color model, now discontinued), yet it also calibrates out. The important things to note with these CMOS sensors are to skip bias (it is not needed) and to calibrate with darks of the exact same exposure length (no dark scaling, unfortunately, if that is important to you).

As far as the pixel size / sensor size thing goes, here is my advice and the reasoning behind it. Get the largest affordable sensor that has set point cooling. Sensor size is speed. I see a lot of people looking for "fast" scopes in terms of F/ratio, but F/ratio is not equal to the speed of a telescope (here speed refers to the time needed to reach a target SNR, or alternatively the SNR achieved in a set amount of time). An F/8 telescope can be faster than an F/5 telescope if paired with the proper camera / pixel size. Take for example this: the FOV is comparable in size, one is an F/9 scope and the other an F/7 scope. If one bins pixels to get to the same sampling rate (arc seconds per pixel), it is the aperture that rules. 6" of aperture will gather more light than 4.5" - simple as that (and of course, made possible by the larger sensor). For that reason I advocate the larger sensor - the 294, even with amp glow / starburst, can simply image more of the sky in a single go (want to shoot a larger target with a smaller sensor? you need to do mosaics - with a larger sensor it might fit in the FOV at once; the only restriction is the fully corrected field diameter of the telescope).

In fact, between the 533 and the 183 I'd choose the 183. They are of approximately the same size, and while the 533 is a newer sensor and free from amp glow, I advocate full / proper calibration anyway, so amp glow is therefore a non-issue really. The ASI183 has smaller pixels, which means more flexibility in the way you bin / debayer your data. With a 2.4µm pixel size, 805mm of focal length gives 0.61"/px - that is a good "baseline" to be binned, as bin x2 will give you 1.22"/px, bin x3 1.83"/px and bin x4 2.44"/px. I believe the last two would be the most used resolutions with a 115mm scope: 1.83"/px and 2.44"/px (a small sampling-rate sketch follows below).
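To put numbers on the pixel size / focal length part, here is a small sketch of the usual sampling-rate formula (the constant 206.265 converts radians to arc seconds for pixel size in µm and focal length in mm):

def sampling_rate(pixel_um, focal_length_mm, bin_factor=1):
    # Arc seconds per pixel = 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um * bin_factor / focal_length_mm

# ASI183 (2.4um pixels) on 805mm of focal length, as above:
for b in (1, 2, 3, 4):
    print('bin x%d: %.2f arcsec/px' % (b, sampling_rate(2.4, 805, b)))
# bin x1: 0.61, x2: 1.23, x3: 1.84, x4: 2.46 (tiny rounding differences from the figures quoted above)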
  9. I think that it is a better way to stop down the lens, as you say - there is no diffraction from the blades. Either that or a 3D printed aperture mask that screws into the filter thread.
  10. It's also worth noting that CMOS cameras come with very small pixels. I think that has to do with all the megapixel craze that caught up with the smartphone and compact camera market (a higher megapixel count is always better, right?). Not long ago, when CCDs were the main astro imaging sensors, pixel sizes were in the 5µm - 9µm range; some models even had very large pixels like 24µm. The latest CMOS sensors are at 3.75µm or below, going down to 2.4µm in astro applications and easily down to 1.2µm in smartphones. This can easily lead to oversampling.

CMOS sensors can't bin in hardware, but they can bin in software, and the only difference is the level of read noise. With CCDs, binning results in a larger effective pixel size but the read noise remains the same. With software binning the same thing happens to the effective pixel size, but the read noise grows with the bin factor (x2 bin - x2 increase in read noise, x3 bin - x3 increase, and so on). The good thing is that CMOS sensors have very low read noise compared to CCDs.

When I was choosing my main imaging camera, two models came up that were interesting and within my budget - a camera with the KAF-8300 CCD and the ASI1600. The first has 5.4µm pixels and about 9e of read noise (depending on camera model), while the ASI1600 has 3.8µm pixels and 1.7e of read noise. I then realized that I can bin the ASI1600's 3.8µm pixels x2 or x3 depending on needs, to get a 7.6µm or 11.4µm pixel size (both pixel sizes matched by other CCD cameras) with read noise of 3.4e or 5.1e - both still well below the Kodak CCD offering (the latter being comparable to some low read noise Sony CCD sensors). Another good thing about software binning is that you can decide on the bin factor after you take the image. Was the night particularly good and the seeing fine? Go with higher resolution; if not, bin more to recover even more SNR. (See the small sketch below.)
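A small sketch of the software binning arithmetic (sum binning assumed: each super-pixel sums bin_factor squared independent reads, so read noise grows by the bin factor):

def software_bin(pixel_um, read_noise_e, bin_factor):
    # Effective pixel size scales with the bin factor; read noise of the
    # summed super-pixel grows as sqrt(bin_factor^2) = bin_factor
    return pixel_um * bin_factor, read_noise_e * bin_factor

# ASI1600 example from above: 3.8um pixels, 1.7e read noise
print(software_bin(3.8, 1.7, 2))   # approximately (7.6, 3.4)
print(software_bin(3.8, 1.7, 3))   # approximately (11.4, 5.1)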
  11. An image can look good and the mount can handle it well even at crazy oversampling - that is not really the point. This image was captured at 0.5"/px. It looks fine enough, right? The point about oversampling is the spreading of light over more pixels than necessary. If the above image is captured at 0.5"/px, then light coming from a 1"x1" patch is spread over 4 pixels - each pixel gets only 25% of that light - so the signal per pixel is much smaller and the SNR is much worse than it could be. Again, that image has an actual resolution of about 2"/px. I binned it to 1"/px because it made no sense to keep it at 0.5"/px - in fact I consciously decided, when I was putting my gear together, that I'll be using binning with that particular scope, as it is an 8" RC with 1600mm of FL. Here it is at 1"/px viewed at 100% zoom: You can clearly see that it is no longer as sharp as it could be - the resolution simply is not there for 1"/px (and that is with an 8" scope).

In general, for DSO imaging, the resolution of your image will depend on three factors: seeing, mount performance (guiding RMS) and scope aperture. You can very easily measure the proper resolution for any given image simply by looking at the FWHM of stars in the image. The sampling rate should be the FWHM in arc seconds divided by 1.6 (see the small sketch below). The problem with smaller scopes is that their Airy disk alone is getting bigger, and when you add seeing and guiding on top of that, you won't be able to achieve very fine resolution. In fact, having an image that actually has a resolution of 1"/px is very, very hard and only really achievable with 8"+ apertures in the best conditions. Just for comparison, the diameter of the Airy disk for a 115mm telescope is 2.28". In order to sample at 1"/px with such a scope you need 1.2" FWHM seeing and no mount guiding errors. The HEQ5 that you plan to use is a very decent mount - I have it as well - however, it will only guide down to about 1" RMS out of the box. You need to seriously mod and tune it to get it to guide at 0.5" RMS. With 1" guide RMS you won't be able to achieve 1"/px with a 115mm scope even if the sky is completely still and the seeing is perfect (no seeing error) - the combined blur from the scope aperture and mount tracking errors will give stars larger than 1.6" FWHM.

If you are set on the 533 sensor, by all means go for it. And if you don't want to get a reducer/flattener as well (the 533 won't need it as it is a 1" sensor), that is fine too. In that case, do seriously consider how you are going to process your data. You'll need to bin x2 at least (and binning OSC data is not as straightforward as with mono - if you debayer with interpolation and then bin, you don't get the same SNR improvement, as most of your pixels are correlated).
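A rough sketch of the sampling rule above. The FWHM / 1.6 rule is from the post; combining the blur sources in quadrature is my own simplification for illustration:

import math

def max_sampling_rate(star_fwhm_arcsec):
    # Rule of thumb: sample at star FWHM / 1.6 arc seconds per pixel
    return star_fwhm_arcsec / 1.6

def combined_fwhm(*fwhm_components):
    # Assumption: independent blur sources (seeing, guiding, aperture)
    # combine roughly in quadrature to give the final star FWHM
    return math.sqrt(sum(f * f for f in fwhm_components))

print(max_sampling_rate(1.6))   # 1.0 "/px needs stars of 1.6" FWHM or better
print(max_sampling_rate(3.2))   # ~2.0 "/px for 3.2" FWHM stars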
  12. Very nice. I find those images an exact match, visually, to my F/10 4" achromat when the seeing is good. I wondered why there is such a distinct pinkish/red cast that other scopes don't give, and the answer is surprisingly simple. It is due to chromatic aberration, although it is not seen as fringing in the above image (it is there visually, but just a hint of purple halo in my scope). I managed to reproduce it easily: take any "regular" Jupiter image, leave the green channel alone, do a mean blur on the red channel (maybe 2px) and a just slightly larger mean blur on the blue channel (maybe 3px), and the resulting image will look like it was taken through an achromatic refractor. This simulates defocus in the blue and red parts of the spectrum - and is really interesting.
  13. Nice image. I guess people were referring to the converging beam configuration rather than putting the filter at the front. If you place the filter between the sensor and the lens, then it sits in a converging beam, and with an F/2 beam the angles at which light hits the filter can be very large - like >15°. This reduces the efficiency of narrowband filters, as it shifts light away from the filter band. It acts as an aperture stop for all the rays hitting the sensor at larger angles. In the configuration you were using, you again did not use the lens at F/2. A 135mm F/2 lens has an aperture of 67.5mm (135 / 67.5 = F/2). When you place a 2" filter in front, you are effectively creating an aperture mask. A 2" filter has something like 46-47mm of clear aperture - let's go with 47mm. 135 / 47 = ~F/2.9, so your lens was effectively stopped down to F/2.9. Not sure if this is better or worse than having the filter in the converging beam, though. It's probably better. In any case, those are just technical details.
  14. It is actually closer to 4/3, being a 23mm diagonal sensor. I would strongly suggest you consider adding a FF/FR to the setup - not so much for the field flattening effect as for the focal length reduction (get a field flattener that is also a reducer).
  15. How about the ASI294MC? You'll be oversampling at 1"/px as well. In reality, with that setup you want to target 1.6-1.8"/px resolution. Add a Riccardi x0.75 FF/FR and you'll be "right on target" with the ASI294MC at 1.6"/px.
  16. https://www.slrlounge.com/purpose-behind-rubber-thing-canon-camera-strap/
  17. What do you hope to gain from NB stars? They usually don't look nice, and most people try to correct their color in some way during processing. If you really want to have proper star profiles, there is another option: take a set of short exposures at the end of the imaging session for that filter and use those short exposures to replace the clipped parts of the image. This is a nice way to handle clipped parts of the image in general, as in RGB imaging where you want to preserve color / definition in both star cores and clipped regions of bright targets. As for NB, I'd say get RGB color for the stars, then create a starless version of the NB image / nebula (see StarNet++ for example) and put the colorful stars from the RGB data back in.
  18. @Astro Noodles No need to apologize. I do share your opinion - everyone should choose for themselves the level of science and/or art present in their work. My concern is only that the science part is not that readily available / understood, and I want people to familiarize themselves with that part so they can properly choose the balance between the two.
  19. Just as an example, here is a (wrong) calibration that I did some time ago when experimenting. I say wrong because the math used was slightly off (and yet it gave very good results). The problem with the math is that I normalized both the raw and the XYZ values, which should not be done (I wanted to produce a matrix that does not scale values unnecessarily, but I did it the wrong way). This was the template used; the calibrated camera was an ASI178 and I used a Xiaomi Mi A1 phone as the calibration source (in the hope that the display is properly calibrated to sRGB at the factory). I displayed the above image on my phone and recorded it with the ASI178mcc camera; the raw result was this: (it shows how much raw data differs from RGB - yet we readily use it as RGB and later need to "remove the green cast" and what not - but all of it is properly handled with color management).

This is what happens when we simply "white balance" - we make sure that the bottom row of squares has RGB ratios of 1:1:1. One more step needs to be incorporated, and that is the gamma transform for an sRGB image, so after applying sRGB gamma we get: Now that really looks like the starting set of colors, but some colors are not quite right. They can be made to look better - more like the original colors - by using a color correction matrix, and then the result looks like this: (I pasted the source next to it again for comparison). There are still very small differences between the two: some are due to sensor gamut (the thing we discussed, that there will be some residual errors), some are due to the wrong math (which I later realized) and some are due to the calibration source used (I'm not sure how accurate my phone screen is). Even with all of that, the color matching is very good in my opinion. (The basic operations are sketched below.)
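For reference, the three operations mentioned above (white balance against the gray row, sRGB gamma, color correction matrix) look roughly like this in numpy - a sketch only, with placeholder variable names, applied in the order described in the post:

import numpy as np

def white_balance(rgb, gray_patch_rgb):
    # Scale channels so the measured gray patch ends up with 1:1:1 ratios
    scale = gray_patch_rgb[1] / np.asarray(gray_patch_rgb, dtype=float)
    return rgb * scale

def srgb_gamma(linear):
    # Piecewise sRGB transfer function
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

def apply_ccm(rgb, ccm):
    # Apply a 3x3 color correction matrix to an H x W x 3 image
    return rgb @ ccm.T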
  20. You are quite right; I can't determine what is and what is not relevant to this discussion, and I'm not going to try to force my opinion on people. The idea behind this thread was to point out that people don't pay much attention to color management in astrophotography. The pop art side of things was just a side joke, because the mosaic I made in the first post to demonstrate the vast landscape of colors produced from the same data (and thus reinforce the idea that color management is neglected) resembled pop art by Andy Warhol. I'm not saying that one should use a "standardized" method of processing their images. What I am saying is that if one wants to faithfully reproduce the color of their subject (the faithfulness part is open to debate), there is a standard set of mathematical operations to follow, and I'd like people to be familiar with it. Similarly, the image you posted followed a very standardized set of mathematical operations to produce the colors in the image - except those are not "true / faithful" colors of that object, nor of the elements / wavelengths involved. In that respect we can discuss the above image - it has a very standardized way of composing things, and yes, multiple people following that standard will produce similar colors in that image as well. Where it fails to relate to my original objection is the faithfulness of color.
  21. That image is very distinct, as color in that case has nothing to do with art. Color here is used simply to represent a certain element. It is purely scientific stuff - Ha is mapped to green, OIII is mapped to blue and SII is mapped to red - SHO == RGB. If you split the channels of that image, you'll get SII, Ha and OIII images without any additional processing (they are stretched individually as mono images). As such it really does not belong in this discussion, as the color component here does not relate to color but to "chemistry".
  22. Interesting idea, and yes, that will minimize the overall error over the image. Not sure how much benefit it would bring, though - it needs to be analyzed in the context of deltaE. If you weigh pairs based on the likelihood that a particular RAW value is encountered in the image, then you'll reduce deltaE for the majority of pixels but raise deltaE for the remaining few. If you look at most astro images, the majority of pixels are in fact background pixels that don't contain much color in the first place. It seems you run the risk of increasing deltaE in the few more important pixels - like stars and objects - just so you could lower deltaE for the majority of background pixels. I guess a better approach would be to pick calibration colors in such a way as to represent the majority of colors that will be encountered in actual objects. Usually this is used: But even that one is interesting because of the orange / brown thing - the first two squares (top left) are actually very close in hue although the brightness differs. I would say the best approach would be to take this diagram: and sample points inside the triangle in some regular way, and then add another set of colors from this: which is the range of colors from a black body of a certain temperature.

By the way, there is a way to do all of this purely by calculation - derive an appropriate RAW -> XYZ transform matrix of your choosing by calculation alone, provided you have the exact QE sensitivity of your imaging system (sketched below). Instead of using triplets, your input parameters will be different spectra. In that case you are not limited to the sRGB set of colors for calibration, and you can even do a special type of calibration for emission type targets, as you can take any combination of Ha/Hb/OIII into account. The problem is getting exact QE curves for your setup. From what I've seen, published sensor QE curves and measured QE curves often differ significantly.
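A sketch of that purely computational route, assuming you have the QE curves of your imaging train and the CIE color matching functions sampled on a common wavelength grid (cam_qe and cmf below are placeholders for those measured/tabulated arrays):

import numpy as np

wavelengths = np.arange(400, 701, 5) * 1e-9          # 400-700nm grid, in meters
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def black_body(temp_k):
    # Planck spectrum (arbitrary overall scale is fine for this purpose)
    return 1.0 / (wavelengths ** 5 * (np.exp(h * c / (wavelengths * k * temp_k)) - 1.0))

# M test spectra - black bodies here; emission line mixes could be added the same way
spectra = np.array([black_body(t) for t in range(3000, 12001, 1000)])

# cam_qe : 3 x N measured R/G/B QE of the whole imaging train (the hard part to obtain)
# cmf    : 3 x N CIE 1931 x, y, z color matching functions on the same grid
raw = spectra @ cam_qe.T          # M x 3 simulated camera responses
xyz = spectra @ cmf.T             # M x 3 corresponding XYZ values (common scale factor is irrelevant)

# Least squares RAW -> XYZ matrix, same idea as with measured color pairs
transform, *_ = np.linalg.lstsq(raw, xyz, rcond=None)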
  23. A linear transform is well justified (we can even say required) by the nature of light. We expect our sensor to behave linearly because light behaves linearly - if we amplify light intensity, we expect the photon count to increase linearly (at a given wavelength), and when we mix two lights their photon counts add up. We want to preserve this linearity, and thus the transform has to be linear. Naturally we could create a better fit using higher order transforms, but that will not work well because it introduces nonlinearity.

There are a couple of points to understand here. First, the light from most astronomical objects is rather limited in chromaticity. This is because the light comes from stars, and most stars behave like a black body at a certain temperature. Globular clusters - stars. Open clusters - stars. Reflection nebulae - starlight (reflected, so the same spectra as above). Emission nebulae - a handful of emission lines (Ha, Hb, SII, NII, OIII and so on, with the predominant intensity from Ha/Hb and OIII).

The above is a chromaticity diagram showing all colors without the luminance / saturation part (full luminance and saturation). There is a small line in the middle of the diagram - that is the Planckian locus - and all star colors fall along that line (or in very close proximity). Here is the same diagram with the sRGB triangle: Any image encoded as JPEG or PNG without an explicit color profile is assumed to be an sRGB image, meaning it can show any color within that triangle. Don't be confused by colors outside the triangle - the actual colors outside the triangle are a bit more saturated, but they could not be shown in the above image, so regular colors are used. In any case, only the color of very dim stars can't be shown properly in an sRGB image. On the other hand, no single spectral line can be shown properly in an sRGB image. That is something we must live with; wider gamut displays will bring us closer to that capability when they become standard.

Any light source and its color can be represented as a dot in the above diagram. If you have a mix of such lights, the resulting light lies somewhere inside the convex polygon created by the source light dots. That is why we can create any light inside that triangle with a simple linear combination of the R, G and B primary colors of the sRGB color space (the vertices of the triangle). Similarly, most emission nebula colors will fall into the triangle between the three primary emission lines at 500nm, 656nm and 486nm. If you are calibrating against colors that can be shown on a computer screen, you are covering quite a bit of astronomical spectra, but also - you'll end up using those colors in your final image anyway, as you can't show any color outside the sRGB gamut (you could if you used a wide gamut image format and viewed the image on a wide gamut display).

In the end, there is something called deltaE. That is the distance between colors in a color space. If the color space is a uniform color space, then the numerical distance between coordinates corresponds to the perceived difference between colors. It turns out that there is a deltaE below which we simply can't see the difference between two colors - they look the same (a small sketch follows below). When we use star based color calibration (often offered in astro software), we are in effect using only points along that Planckian locus line to derive the transform matrix. Using colors that are more spread out will lead to a better deltaE over more of the above graph. Camera manufacturers have been using this technique for ages. They manage to do fairly good color reproduction across the vast landscape of colors we encounter in daily life. Check out this for example: https://www.imatest.com/docs/colormatrix/ It shows how to use a color checker chart to create a custom CCM (color correction matrix) for given lighting conditions in daytime photography (you'll find similarity with the above approach). I would not recommend using a color checker passport, as that depends on illumination type and you don't have an XYZ reference (except when calibrating against a DSLR - then you can use anything). For our applications it is better to use emissive sources (a computer screen in a dark room), as thinking about white balance can confuse people (we need not think about white balance at this stage at all).
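For the deltaE part, a minimal sketch using CIELAB and the simple 1976 (Euclidean) deltaE with the D65 white point; a difference of roughly 2.3 is often quoted as a just noticeable difference:

import numpy as np

D65_WHITE = np.array([95.047, 100.0, 108.883])   # D65, 2 degree observer

def xyz_to_lab(xyz, white=D65_WHITE):
    t = np.asarray(xyz, dtype=float) / white
    d = 6.0 / 29.0
    f = np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e_76(xyz1, xyz2):
    # Euclidean distance in CIELAB - only meaningful because Lab is
    # (approximately) perceptually uniform
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2), axis=-1)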
  24. Yes, of course. Yes, except there is no single "the" RAW to XYZ transform. There is a family of RAW to XYZ transforms that you can derive, and each one will be more suitable for a particular case, but in principle you can have just one RAW to XYZ that you'll be using, if you accept a small error in some XYZ values that depends on your camera, the chosen transform and the original spectrum of the light you are recording - in other words, some spectra will be recorded more precisely as XYZ and some less so.

Yes, this part can be clearly defined - except for one step, and that is the luminance processing step. You can think of it as exposure control - how much of the dark stuff you want to make bright enough to be seen. It also incorporates any sharpening, denoising and all the things that we usually do. You can also choose whether or not to perform perceptual adaptation at the end. I think that perceptual adaptation would give more pleasing results, as it would do what people usually instinctively do - try to boost saturation.

Yes you can, and the only thing you need is a calibrated reference. Your calibrated reference can be a DSLR camera, a computer screen that is properly calibrated (they usually are not and you need a calibration device to perform proper calibration), or perhaps a smartphone screen if it is properly calibrated at the factory.

- If you have a DSLR camera, you need to set it to custom white balance and set that to neutral - like this: (color balance shift is set to 0,0). You also need to use Faithful mode on your camera (to avoid any processing by the camera - like vivid colors and such that "enhance" the image). Next, take any display - calibrated or not - in a dark room, have it display a range of colors and shoot it with both the DSLR and the astro camera that you are attempting to calibrate. With the DSLR raws you can use dcraw software and ask it to extract XYZ data for you. You will have the raw data from your astro camera as is. Then measure pixel values (as an average over each color patch) for each color in both the XYZ image and your RAW image; that creates pairs of XYZ and RAW vectors. You can find the transform matrix that maps RAW into XYZ using the least squares method - this can be done in spreadsheet software (see the sketch at the end of this post).

- If you have a calibrated computer screen or calibrated smartphone screen, the procedure is very similar. You take a set of colors and display it on your screen. For each of those colors you calculate the XYZ value from the RGB values - there is a well defined transform from sRGB to XYZ. Just make sure your computer / smartphone screen is calibrated for sRGB with a white point of D65 (sometimes calibration is done for D50, although the expected sRGB color space is in fact encoded with D65). Here you are relying on your display device to properly render XYZ values, instead of on the DSLR to read off XYZ values from the colors displayed. The rest is the same - record raw data with your camera and derive the transform matrix from the XYZ, RAW pairs of values for the list of colors.

Once you have your RAW -> XYZ transform, the imaging workflow would be like this:

1. Record raw data.
2. Apply the transform matrix to get XYZ values.
3. Make a copy of Y and that will be your luminance. If you are doing LRGB, use L here instead of a copy of Y.
4. Stretch that luminance so that you get a pleasing monochromatic image of your target; apply any noise reduction and sharpening.
5. If needed, apply the inverse sRGB gamma function to the luminance. This is needed because software displays luminance as sRGB and you want it to be linear, as we will later apply sRGB gamma again.
6. Take the original XYZ values and replace them as follows: Xnew = X * StretchedLum / Y, Ynew = StretchedLum, Znew = Z * StretchedLum / Y. This step is, in essence, replacing each pixel value with an equivalent pixel where the light is the same, only amplified to match what you saw as luminance. This produces the XYZ of the processed image.

Here you have a choice. You can convert it directly to sRGB to have a finished image without perception transforms - this is done by simply applying the XYZ -> sRGB transform. Or you can use, for example, the CAM16 color appearance model. That would be done as follows. Take the XYZ values for each pixel and set the CAM16 parameters as follows:

- illuminant: equal energy illuminant (consistent with an emissive source)
- adapting luminance: 0 (we are in outer space and there is no surrounding light - alternatively set it to a very low value like 5% or 10%)
- background luminance: same as above, 0 or 5%-10%
- surround: 0 (dark)

This will give you J, a, b values (for CAM16-UCS). Then take the CAM16 parameters again and set them up as follows:

- illuminant: D65
- adapting luminance: 20%
- background luminance: 20%
- surround: 1.5 (dim to average)

Transform J, a, b back to new XYZ. In the end, use the standard XYZ to sRGB transform to finish the image.
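The least squares step from the procedure above, as a numpy sketch instead of a spreadsheet. The file names and the raw_image variable are placeholders; raw_patches and xyz_patches hold the measured patch averages, one row per color, both N x 3:

import numpy as np

# Measured data (replace with your own patch measurements)
raw_patches = np.loadtxt("raw_patches.csv", delimiter=",")   # N x 3 averages from the camera RAW image
xyz_patches = np.loadtxt("xyz_patches.csv", delimiter=",")   # N x 3 reference XYZ for the same patches

# Solve raw_patches @ matrix ~= xyz_patches in the least squares sense
matrix, residuals, rank, _ = np.linalg.lstsq(raw_patches, xyz_patches, rcond=None)

# Step 2 of the workflow: apply the matrix to a dark/flat calibrated RAW image
# (H x W x 3 array) to get linear XYZ
raw_image = np.load("calibrated_raw_image.npy")
xyz_image = raw_image @ matrix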
  25. Not sure what you are asking, but I can explain why we see different color when the physical color is the same - or rather, I can explain the principles of color appearance models, which deal with our perception. When we talk about color in terms of physical quantities we use tristimulus values like RGB or XYZ or LMS and so on - these are coordinates in a particular color space and represent the physical response of our vision system. They do not describe the color we see (our perception, once our brain gets into the mix). That is handled by three different quantities: luminance, hue and saturation. If we want to match perception rather than the physical characteristics of the light, here is the process: we record XYZ, the physical component of the light, but we also record the environmental conditions - the type of illumination, the level of illumination, the general color surrounding our color and things like that. From all of these we derive luminance (or perceived brightness - not to be confused with a luminance filter, although they are somewhat related), hue and saturation of the particular color. When we want to reproduce that perceived color under different circumstances - like when you are sitting in your room at your computer with a different type of illumination, a different background and so on - we do the reverse: we take luminance, hue and saturation, add the new environment factors, and come up with a different set of XYZ values that we need to emit in order to induce the same perceived color. This is how color appearance models work. In the above case, we have the "forward" transform of that CAM model applied twice. We have the same XYZ stimulus but a different environment - the environment here being the relationship to other parts of the image and our brain interpreting some parts of the image as being in shadow and others as being in light. This creates two different sets of luminance, hue and saturation values - in this case two different luminance values, because the difference is in the "shadow / light" part. Although the physics of light says we have exactly the same light/color in both cases, our perception of it differs.