Everything posted by vlaiv

  1. I did not think of that - you might be right - but I never said that the measurement depends solely on the kit used. In fact, you will be much better off throwing time at the problem rather than money.
  2. Again, something similar happens with an sRGB display if you try to view an image that was recorded in a wider gamut - colors that the monitor can't display will be "clipped" to colors that it can display. In that sense it is the same, isn't it? In any case, I'll accept that gamut relates to reproduction rather than to sensors, and I will look up metameric failure, so thanks for that.
  3. Correct. As far as I know, there was a version of the drivers with the offset locked at 50, but there now seems to be a version with advanced settings where you can adjust the offset again. This happens with the PRO version of the cameras; my ASI1600 has the offset setting in the drivers enabled all the time. I might be wrong about the above, though.
The master flat should be the one with the master flat dark applied - I'm not entirely sure, but that is the usual name for the final flat used for calibration. You can check by loading a single flat and measuring its mean ADU value, then doing the same with the master flat. If they are the same, the flat dark was not removed; if they differ (the mean value of the master is a bit lower), the master flat dark was removed. A minimal check is sketched below.
If you are talking about the files you uploaded at the beginning of this thread - no, they do not suffer from such artifacts, and any artifact you are seeing is due to the software used to view them. Some software will use bilinear interpolation when rescaling images for viewing at a smaller size (zoomed out). Try looking at 100%, 1:1 pixel ratio (one screen pixel to one image pixel) - there should be no artifacts present. Bilinear interpolation can create such bands - they are in fact not signal bands but bands of different SNR (slightly lower noise in some regions due to the interpolation) - and they only show up if you stretch until the noise is clearly visible.
Here is what your light looks like to me: that is the left edge of the image stretched to insanity. By the way, I can also recreate the effect you are seeing by resampling to quarter size (roughly fit-to-screen) with bilinear resampling. Here is what the stretched-histogram version looks like (GIMP levels / curves): again, no sign of the artifacts you mention. It's full of stars.
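If you want to do the mean-ADU check programmatically, here is a minimal Python sketch (the file names are placeholders, and it assumes plain FITS frames):

```python
import numpy as np
from astropy.io import fits

# Placeholder file names - substitute your own frames.
single_flat = fits.getdata("flat_001.fits").astype(np.float64)
master_flat = fits.getdata("master_flat.fits").astype(np.float64)

print("mean ADU, single flat:", np.mean(single_flat))
print("mean ADU, master flat:", np.mean(master_flat))
# If the master's mean is a bit lower, the flat darks were subtracted;
# if the two means are essentially equal, they were not.
```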
  4. I'm still having issues with the definition of the word gamut. Here is the relevant passage from the wiki article: I see no reason why we could not extend that definition to include a device capable of recording, in the same sense as the set of colors found within an image.
  5. I have a couple of issues with that article. It goes on to state that, for example, CIE XYZ is not a good color space for a couple of reasons:
- because it is not perceptually uniform
- because simple linear transforms between that color space and some other color spaces lead to errors
I had the impression that, for the above reasons (both well-known things), the author deems the CIE XYZ color space somehow wrong or inferior.
There is another issue I have - for example, using the RGB matching functions as an example without explaining how the experiment was conducted (using three pure-wavelength primaries and a reflective color arrangement). Here is the quote from the wiki article on CIE XYZ describing that: the three primaries used in the test are shown on the xy chromaticity diagram - any color that can be produced by an additive mixture of the primaries lies in the triangle.
In any case, the issues that exist with color reproduction can't be attributed to an inferiority of the CIE XYZ color space.
  6. I had a sneaking suspicion that I don't fully understand the term gamut. In some sense I have a similar understanding of it to yours - it is a subset of all possible chromaticities (lightness is not important here, as it is a function of intensity, at least I think so). It is useful to think in relative terms, so we can say that human vision is the full gamut and any subset of that is a narrower/smaller gamut than the full gamut?
In any case, I fail to see how a sensor does not have the property of being capable of recording the whole or part of that gamut, or in fact of distinguishing what the human eye is capable of distinguishing. Not all sensor/filter combinations are alike - some have "full gamut" while others do not, and we can ask: how large is the gamut of a certain sensor/filter combination?
Let me give you an example. You have a regular tri-band filter/sensor combination in the graphs above - look at the sensor QE curve and each filter's transmission curve to get an idea of the combined curves. But one does not need to use such filters; one can use the following filters instead. These are Astronomik filters - they have some overlap and are more like the human eye response functions - my guess is that such a filter/sensor combination has a larger gamut. The same is true for OSC sensors that have response curves like this: the diagram above can clearly identify each wavelength by a unique ratio of the raw "RGB" components - I suspect that it covers the whole gamut.
Another example, of an even smaller gamut: take regular RGB filters, any sensor, and add a light pollution filter such as a CLS or IDAS LPS P2. Those filters block light completely in some ranges - the filter/sensor combination won't even be able to record those wavelengths, let alone distinguish between them.
I think it would be a good idea to find a way to characterize sensor/filter combinations - ones that can produce all the colors a computer screen is capable of showing (the sRGB gamut) and those that can't.
  7. Dark flats are used to create the master flat. You take a set of flats and stack them (average stacking). You take a set of dark flats and stack them (again, average stacking). Subtract the two and that gives you the master flat (flat stack minus dark flat stack) - see the sketch below.
Again, what seems to be the issue with your flats? I've downloaded the two attached files but can't really see what is wrong with them.
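A minimal numpy sketch of that calibration step, assuming plain FITS frames and placeholder file names (real stacking software also normalises and does outlier rejection):

```python
import numpy as np
from astropy.io import fits

# Placeholder file lists - substitute your own frames.
flat_files      = [f"flat_{i:03d}.fits" for i in range(1, 21)]
dark_flat_files = [f"darkflat_{i:03d}.fits" for i in range(1, 21)]

# Average-stack the flats and the dark flats, then subtract.
flat_stack      = np.mean([fits.getdata(f).astype(np.float64) for f in flat_files], axis=0)
dark_flat_stack = np.mean([fits.getdata(f).astype(np.float64) for f in dark_flat_files], axis=0)

master_flat = flat_stack - dark_flat_stack
```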
  8. For anyone interested in this topic: it turned out that the above is fairly easy to do (at least I think so at this point - I don't yet have a full mathematical proof).
If we observe any two wavelengths w1 and w2 with arbitrary intensities, it turns out that in XYZ space this forms a plane:
X = a * w1 + b * w2
Y = c * w1 + d * w2
Z = e * w1 + f * w2
where w1 and w2 are the parameters of this parametric form of the plane (being the intensities of the respective wavelengths) and a,b; c,d; e,f are the values of the X, Y and Z color matching functions at those two particular wavelengths. After we transform this plane with the XYZ -> xyY transform, it turns into a line. I still don't have a mathematical proof of that, but I did check it numerically (by plotting), and it is in line with this section of the wiki article on CIE XYZ and the xy chromaticity diagram: source: https://en.wikipedia.org/wiki/CIE_1931_color_space The projection of that line onto xy will also be a line.
In the last step, if we derive a matrix transform (3x3 matrix) between the camera raw space and XYZ, it will preserve planes (matrices represent linear transforms), so any two wavelengths recorded by our sensor, at their respective intensities, will also form a line once transformed onto the xy chromaticity diagram.
From this it is easy to see that all one needs to do is take each wavelength (for example 400-700nm with a step of 1nm), calculate the raw triplet from the sensor + filter graphs, transform it by the transform matrix to get the matching XYZ, and then derive xy via the known XYZ -> xyY transform. After plotting those points on the xy chromaticity diagram, the gamut of the sensor/filter combination will be the convex shape defined by them. (Mind you, this shape will depend on the chosen transform matrix, so choosing that matrix is another interesting topic, and so is the transform error.)
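Here is a rough Python sketch of that last step. The response triplets and the raw -> XYZ matrix are placeholders (random values and the identity matrix), just to show the shape of the calculation; real sensor/filter curves and a fitted matrix would go in their place.

```python
import numpy as np
from scipy.spatial import ConvexHull

wl = np.arange(400, 701, 1)                       # 400-700 nm in 1 nm steps

# Placeholder raw response triplets, one (R, G, B) row per wavelength.
# In practice: sensor QE multiplied by each filter's transmission.
raw_rgb = np.abs(np.random.default_rng(0).normal(size=(len(wl), 3)))

M = np.eye(3)                                     # placeholder raw -> XYZ matrix

xyz = raw_rgb @ M.T                               # one XYZ triplet per wavelength
xy = xyz[:, :2] / xyz.sum(axis=1, keepdims=True)  # project to xy chromaticity

hull = ConvexHull(xy)                             # convex shape spanned by the points
print("approximate gamut area on the xy diagram:", hull.volume)  # 2-D "volume" is area
```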
  9. Not sure what this relates to - Bose-Einstein statistics, or the general idea that there is a signal of a certain magnitude and that a measurement yields a numerical value polluted by noise?
If we are talking about signal and measurement, and if we have a mathematical model that describes the process, why do you think it is not useful to think of it as if there were a definite signal with a definite photon rate, and it is the measurement that can't be done with absolute precision?
It's a bit like thinking about probability in the propensity sense. We can say that there is a definite probability that a die will land on a certain number - namely 1/6 - although there is no finite number of measurements that will yield that value with certainty (in the sense that adding more measurements can't pin it down completely / make it perfectly precise). We don't talk about that probability being dependent on the number of measurements - we accept it as a perhaps fictional but, at the same time, for our purposes very real tendency with a definite and precise numerical value associated with it.
  10. I had a feeling that this way of thinking would cause some confusion. The idea was to introduce a mathematical rather than anecdotal way of thinking about what signal is, what a measurement gives, and above all the importance of SNR and how it relates to imaging.
You are right - I stand corrected. Excluding some subs with a measured value of 0 will not change the image - it will only change the SNR of the resulting stack, the same as if we chose to discard any sub regardless of its measured value. It will, however, skew our measurement if we want to determine an absolute value (not important for the image, but important if one tries to measure photon flux). The average of 0,0,0,1,1 = 2/5 is not the same as the average of 1,1 = 2/2 = 1. Discarding measurements just because they are zero introduces a bias error.
I was referring to a statement I've heard a couple of times before: "You can't record something that is fainter than the background noise." That is in fact not true. Suppose we have a galaxy so faint that its signal is less than the read noise of the camera - so faint, in fact, that some subs contain no photons from it at all. Regardless, the signal of the galaxy is recorded on each frame, with a certain error. Stack enough such subs and you will get the noise down far enough that the SNR is sufficient to see the galaxy. It is not out of reach by virtue of being below the noise in a single sub. Let's say the SNR in a single sub is something like 0.1 on average - the galaxy is 10 times fainter than the noise in that sub. You need an SNR of something like 3-5 to detect the galaxy. Stack 10,000 such subs and the resulting SNR for that galaxy will be 10 - you will be able to see it clearly (see the sketch at the end of this post).
No one said that you can take a 1-minute sub, multiply it by 10, and get a 10-minute sub. You can take the measured signal from a 1-minute sub and multiply it by 10, and you will get (roughly) the measured signal of a 10-minute sub - there will be uncertainty in the measured value. The problem is that there is noise associated with each measurement, and when you multiply your 1-minute sub you multiply the measured values - which are not pure but polluted with measurement noise - so you end up with the same measured signal value but a different noise value, both in the sense that it is random and in its numerical value. A 100-minute sub produces a smoother result not because the signal suddenly became stronger just because we chose to measure differently. The noise level is associated with the measurement, and the way we measure changes the noise level - that is why the result is a smoother image.
For image intensity, you can choose to add measured values together or to multiply a single sub by some constant. Both will change the intensity of the image. What we are interested in is SNR. That is why we stack - not to increase image intensity but to boost SNR.
Ok, I'm confused by this one. Most of the light we deal with when imaging astronomical objects is in fact thermal light (starlight), but according to the sources I've seen, the SNR of the Bose-Einstein distribution is always less than 1. Does that mean it stays below 1 regardless of how much exposure time we use?
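To make the "buried in noise" example concrete, here is a small simulation sketch with assumed toy numbers (0.5 photons per sub on a pixel and 5 e- read noise, giving a per-sub SNR of about 0.1):

```python
import numpy as np

rng = np.random.default_rng(0)
signal_rate, read_noise, n_subs = 0.5, 5.0, 10_000   # assumed toy values

# Each sub: Poisson photon count plus Gaussian read noise.
subs = rng.poisson(signal_rate, n_subs) + rng.normal(0.0, read_noise, n_subs)

per_sub_noise = np.sqrt(signal_rate + read_noise**2)
print("per-sub SNR :", signal_rate / per_sub_noise)                      # ~0.1
print("stacked SNR :", signal_rate / (per_sub_noise / np.sqrt(n_subs)))  # ~10
print("measured rate from the stack:", subs.mean(), "(true rate 0.5)")
```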
  11. It looks like the HDR combination tool will do exactly what I described - so you should use that. Here is an article about it: https://www.lightvortexastronomy.com/tutorial-producing-an-hdr-image.html
Mind you - it advocates the use of 64-bit precision, but I don't think it's necessary. In fact I was initially concerned whether 32-bit could give enough precision when stacking a large number of subs - but I neglected the fact that it is floating-point precision, so 32-bit floating point is generally sufficient for all operations on the image.
If you read through that tutorial, you will find that we agree on some points, like this one: it is very much equivalent to mine - replace pixels with 90% or higher value ... (I used a 0.9 value in my example, but 0.8 is perfectly valid as well - it will not make much difference).
  12. Stacking images is in fact HDR imaging, even if a single exposure duration is used. But that observation does not answer your question.
We are satisfied with an image once we have high enough SNR. For bright stars and the bright central core of a galaxy, a single short exposure already gives a rather high base SNR, so you need just a handful of them to reach enough SNR to go into the image.
One approach that works well: take a few short exposures (10-15 seconds is enough) and stack them using the average method (no fancy stacking needed). Scale the pixel values to match those from the long exposure (multiply by the ratio of exposures - if the base exposure is 5 minutes and the short exposure is 10s, multiply by 10s/300s = 1/30, or divide by 30, same thing). Now that you have matching photon flux it is a simple matter of blending the two stacks, and you can do it with pixel math (or similar tools):
- for pixel math it would be something like: the resulting pixel value equals the long-stack value if that value is below a threshold (set it to something like 90% of the brightest pixel in the long stack), otherwise use the pixel value from the short stack.
- for other kinds of blending: make a selection of all pixels in the long stack that have a value higher than 90% of the highest value in the image. Copy the pixels in the same selection from the short stack (the long and short stacks need to be registered / aligned) and paste them onto that selection in the long stack.
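A minimal pixel-math-style sketch of that blend in Python, assuming the two stacks are already registered and saved as numpy arrays (the file names and the 300 s / 10 s exposures are placeholders):

```python
import numpy as np

long_stack  = np.load("long_stack.npy")     # placeholder: stack of 300 s subs
short_stack = np.load("short_stack.npy")    # placeholder: stack of 10 s subs

short_scaled = short_stack * (300.0 / 10.0)  # match photon flux to the long exposure

threshold = 0.9 * long_stack.max()           # ~90% of the brightest pixel
hdr = np.where(long_stack < threshold, long_stack, short_scaled)
```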
  13. Indeed it is - at least in the way I'm advocating people think about signal / image. One might argue that we use the word signal to describe different things, and indeed I'm guilty of that in many previous discussions of signal, signal-to-noise ratios and such (sometimes we take shortcuts and don't express ourselves properly).
The problem with that approach is: if we think of signal as the measured value and not as an intrinsic property of the source, we can arrive at wrong conclusions. The following couple seem to be common:
- You can't record / display an image with low measured values; or rather, an image with an average numerical value of 1 will somehow be better quality than an image with an average numerical value of 0.0001. This can stop one from seeing that it is not the absolute values that are important but rather the ratios of measured values (one pixel being twice the value of another and three times the value of a third and fourth ...). An image is relative pixel values - regardless of the absolute values we assign to each pixel, as long as we maintain their ratios (describe them in compatible units, be that photons per hour or per millisecond). In fact, to make an image presentable we often give up the proper ratios and introduce a non-linear transform (but that is an extension to this topic).
- Some subs contain no signal. Suppose we have a source that, for our purposes, emits one photon per hour, and we take one-minute subs. Since many subs would end up with a measured value of 0, if we associate the word signal with the measured value rather than with an intrinsic property of the source, we might conclude that many of those subs are wasted - they did not capture anything, no "signal" is present in them - but we would be wrong in thinking so. If we followed that line of thought, nothing would prevent us from discarding such subs (as they are meaningless and contain no signal) - but anyone who does that will end up with the wrong image.
- An extension of the previous point (which can sometimes be heard): you can't record a target that is too dim for a single exposure, or you can't record a signal that is below the noise level in a single sub.
In any case, if we associate the term signal with an intrinsic property of the object (or shall I say the object / setup relationship) and understand that each sub contains the same signal, it opens up the possibility of more easily understanding the whole process of image acquisition, and also histogram manipulation. It can help one understand what an image really is (in the sense of data / pixel values and their relative ratios). It also helps in understanding signal-to-noise ratio - and how, ultimately, that is the only important thing for a good image.
  14. This is a very good start for understanding. We have 10 photons per minute and we have 100 photons per 10 minutes. Are those two different?
The reason I said we should leave noise aside for now is that noise is related to measurement. It is not an "intrinsic" property of the signal. The signal is what it is. The act of measurement introduces noise into the resulting numerical value, and the way you measure affects how much noise there will be. If you measure the above signal for one minute, you will conclude that it is in fact 0.1666 photons/second, but the noise associated with the measurement is related to how long you measured for - the SNR will be 3.1622776... (if we imagine a perfect system, no read noise and such). Once you measure for 10 minutes, you will also conclude that the signal is 0.16666 photons/second - but the noise associated with the measurement is less: the SNR will be 10. The same goes for stacking with an average - the measured signal stays the same, but the noise is reduced due to the repeated measurement.
The point being: there is no sub that contains no signal - all subs contain signal, and they contain exactly the same signal; even a 0.0001s exposure sub contains that signal. The difference is only in the noise associated with each measurement. We could argue that the numerical value we got from the measurement is also different - but that is really not important. It's a bit like saying I'm "27 tall" (ok, but in what units - can we compare that to someone else's height?), or that my speed is 27 kilometers (per minute? per hour? per day?).
I think the above way of thinking is the key to understanding that it is all about SNR, and that the measured numerical value can be arbitrarily large or small - it will not matter as long as we are happy with the level of noise in our measurement.
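Written out for the shot-noise-limited case above (a sketch that ignores read noise and other noise sources):

```latex
\mathrm{SNR}_{1\,\mathrm{min}} = \frac{10}{\sqrt{10}} \approx 3.16,
\qquad
\mathrm{SNR}_{10\,\mathrm{min}} = \frac{100}{\sqrt{100}} = 10
```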
  15. Here is a hint: let's leave the noise part aside for a moment, as it is related to measurement. We are not talking about measurement here - we are talking about signal strength regardless of measurement.
  16. This is almost a trick question - but one I feel needs to be answered, as I believe many people doing imaging don't actually know the proper answer to it. In a recent discussion with @Rodd about the ins and outs of stacking and SNR, this came up, and it occurred to me that many people would not be able to answer it properly. Here is the question in a nutshell:
Given the same scope, same camera, same target and sky conditions, with only signal in mind (disregard noise for the moment), how do signal levels compare in the following cases?
1. A single one-minute exposure
2. An average stack of 10 one-minute exposures
3. A sum stack of 10 one-minute exposures
4. One 10-minute exposure
What do you think?
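One way to check your answer empirically is a quick simulation with assumed Poisson photon arrivals (10 photons per minute is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 10.0                                  # assumed: 10 photons per minute

one_min   = rng.poisson(rate)                # 1. single one-minute exposure
ten_subs  = rng.poisson(rate, 10)            # ten one-minute exposures
avg_stack = ten_subs.mean()                  # 2. average stack of the ten
sum_stack = ten_subs.sum()                   # 3. sum stack of the ten
ten_min   = rng.poisson(rate * 10)           # 4. single ten-minute exposure

print(one_min, avg_stack, sum_stack, ten_min)
# Cases 1 and 2 come out around 10 (photons per minute); cases 3 and 4 around 100
# (photons per ten minutes) - the same underlying rate expressed in different units.
```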
  17. Cross "flares", or diffraction spikes, are a feature of reflecting telescopes that have a spider supporting the secondary mirror. They have nothing to do with the eyepiece. Take your 25mm eyepiece and point the scope at a very bright star - you should be able to see them as well. You can also see them in images like this one: just look around the internet for astro images and you will find images with stars that have a cross shape.
The outer field in the Panaview can also suffer from what is called coma - this is again related to the telescope design and has nothing to do with the eyepiece (but wide-field, long focal length eyepieces make it more visible). It looks like this:
The main fault of the eyepiece will be "seagull"-like star shapes at the edges, a bit like this: or, combined with the above coma, more like this (the wings come from astigmatism and the tail from coma):
  18. I hope you don't mind me posting this analysis - I just wondered what it is that makes the stars stand out like that; it is not something I could easily put into words. It turns out that you have a slight "terrace" in your star profiles in some channels, which the eye/brain is not used to (we usually see and expect a smooth profile - a smooth transition from star to background, much like a Gaussian shape). In your image above, this is what happens: Green channel: Blue channel: Red, for the most part, does not suffer from this: The trick when processing the stars separately would be to make the transition smoother and more Gaussian-like, without this terrace effect.
  19. Very nice image. In fact, the only "objection" I have is that the use of StarNet is a bit obvious. I concluded that you had used it before I even read it in your image description. There is something about the star-to-nebula transition that makes it a bit too "hard", but this can only be seen under close scrutiny.
  20. There is no real distinction between video and photograph as far as the sensor goes - video is just a fast sequence of individual photographs put together. For that reason we can talk only about single photographs, or what is sometimes called an exposure. Maybe the best way to explain the actual use of gain is via analogy and example. What I'm about to say relates to a single exposure or video feed; it does not relate to the advanced techniques used for producing an image with a telescope. It is best to take things slowly and not dive into all the complexity at once - most people get overwhelmed by it.
Imagine you have a water tap and a bucket. You can open up the tap and water will flow, and it will take some time to fill the bucket. If the flow is stronger it will take less time to fill the bucket. If the bucket is larger it will take more time to fill it. There is a relationship between the strength of the flow, the size of the bucket and the time it takes to fill it up (or fill it 3/4 or half full). The strength of the flow in this analogy is the amount of light from the target - not all targets have the same brightness; some stars are brighter than others, and the same is true for nebulae, clusters and galaxies. The size of the bucket is the analogy for gain: low gain is a large bucket, high gain is a small bucket. Filling time in this simple analogy is the exposure time (how long it takes to make a single image).
For the image to be bright on screen, you need to "fill the bucket". If the target is not bright in a single exposure, you can do two things - either use a "smaller bucket" (which fills faster), i.e. increased gain, or take more time to fill it up, i.e. a longer exposure. If the target is bright enough you can use a short exposure and a larger bucket. For simplicity, leave gain at something like 30% and try to take an image. If you can't see the target, first try increasing the exposure length - it is not uncommon to have exposures that last for dozens of seconds. If you still can't see the target in a long exposure, then raise the gain. Hope the above makes sense.
Light pollution is a common problem for many people enjoying this hobby. Don't be tempted to put a cover on the scope to stop surrounding light getting in. It will prevent some of the surrounding light from getting in, but it will also prevent the light that you want - that from your target - from getting into the scope, and the net result will be a worse image. Using a camera is a rather nice way to still be able to "observe" (or rather record / look at on your computer screen) astronomy targets. This is because light "adds up", so your camera will record the sum of the light coming from light pollution and the light coming from the target. The way to deal with light pollution in this case is to "subtract" it from the final image. Light pollution is relatively uniform, so it will just be a uniform signal in your image. There is something called the "black point" of the image that you can often adjust - either in your capture application or in an image processing application. By adjusting this black point you are in effect subtracting that uniform light pollution glow.
Light pollution will hurt your image in other ways, so it's best to have as little light pollution as possible, but understanding that is also a bit advanced and best left for later, once you master the basics. Try to see if your capture application has options for what is usually called histogram manipulation, or black point. Adjusting those will let you see the target regardless of any light pollution that gets into the scope.
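For the black point adjustment described above, here is a toy numpy sketch of what the subtraction amounts to (the file name and the 5th-percentile background estimate are assumptions for illustration):

```python
import numpy as np

frame = np.load("single_exposure.npy")              # placeholder captured frame
black_point = np.percentile(frame, 5)               # rough estimate of the uniform LP glow
adjusted = np.clip(frame - black_point, 0, None)    # subtract and clip at zero
```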
  21. Unfortunately, good imaging refractors tend to be a bit more expensive, so there is no cutting corners there. However, not all is lost. The above scope produces less chromatic aberration than one would expect from a fast achromatic doublet of that aperture, and you can reduce it further with a few tricks.
You can try using a Wratten #8 filter in front of the camera. I'm not sure if there is a clip-in version, but you can get a 2" version and screw it into your DSLR adapter (if it is a 2" one). Another trick is to use an aperture stop. This can be a simple piece of cardboard with a circular cutout at the center placed over the telescope aperture (just make sure it is centered and the cutout is clean, without rough edges). This reduces the amount of light entering your scope, so the telescope will be slower - you will need to expose for longer to get a nice image (the same as stopping down a lens).
Here is a comparison of stars in a similar 100mm F/5 scope using the first, second and combined approaches. Columns contain the same image of a star stretched to different levels (strong, medium and light stretch). Rows contain the following: clear aperture, clear aperture + Wratten #8, 80mm aperture mask, 80mm + #8, 66mm aperture, 66mm + #8, .... I think that for that scope I found that a 66mm aperture mask plus #8 produces an image with virtually no chromatic blur / halo. The following image was taken with such a combination: as you can see, there is no visible purple halo around the stars (but note the spikes / rays around the bright stars at the bottom - that is because the aperture mask was not cut smoothly).
For reflections - that depends on the filters you use, and sometimes it is unavoidable. If they are caused by filters or similar, you might try replacing the filter to see if that helps. Sometimes the only thing you can do is fix it in post-processing. Here is an example of filter reflections in my RC scope, due to a UHC filter: here is the doughnut of light around that bright star. I switched to narrowband filters instead of the UHC and did not have such issues any more (although NB brought its own issues - such is AP, a constant struggle).
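As a rough worked example of how much slower the masked scope becomes (assuming the 100mm f/5 above has a 500mm focal length):

```latex
\text{masked focal ratio} = \frac{500\,\mathrm{mm}}{66\,\mathrm{mm}} \approx f/7.6,
\qquad
\text{exposure scaling} \approx \left(\frac{7.6}{5}\right)^{2} \approx 2.3\times
```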
  22. Hi Samantha and welcome to SGL. Gain is a bit like the volume on your computer speakers. When you turn the volume up you hear louder music; when you turn the volume down it becomes quieter. If you turn it down too much you won't be able to hear anything, and if you turn the volume up too much the sound will get distorted. A similar thing happens with gain when you are recording, either video or a still image, with your camera. Gain controls the brightness of your recorded image. Too low a gain and the image will be dark; too much gain and the image will be too bright. Keep the gain somewhere in the middle to get a nice-looking image.
The fact that you are getting a white or yellow image is quite all right. Astronomy cameras come without a lens attached, and what you see is just unfocused light being picked up by the camera. The telescope acts as the lens, and when you attach your camera to the telescope and find proper focus, the image will be nice and sharp.
  23. I'm somewhat confused by what you've written. If the scope is an achromatic doublet, then a purple halo is very much to be expected. In fact, in the bottom image, the level of purple halo is much less than I would expect from an F/5.5 achromatic doublet. You need at least a slow, well-corrected ED doublet to avoid chromatic aberration, if not a proper APO triplet.
This one is confusing because I suppose the first image is of Capella, right? It has diffraction spikes and no chromatic aberration visible at all - that image was in all likelihood taken with a reflector and not a refractor. The thing in the image that could be interpreted as some sort of halo around the star is just unfocused reflected light. The setup used to make that image has some optical element that is not properly coated, and that results in a reflection halo (or maybe everything is properly coated - Capella is such a strong star and produces so much light that there is bound to be a reflection artifact visible in a long exposure).
  24. I'm under the impression that mono + RGB filters actually have a smaller color gamut than OSC sensors (it strictly depends on the filters used - but let's go with "regular" filters, ones that split 400-700nm into three distinct bands). In order to calculate the color gamut, we need to first find an appropriate raw -> CIE XYZ transform (the color gamut will depend on it) and then see what range of the xy chromaticity diagram it covers. A brute-force approach is completely infeasible: splitting the 400-700nm range into very coarse steps of 10nm yields 30 divisions, and using only 10% intensity increments already leads to something like 10^30 combinations to be examined.
I've tried searching the internet for information on sensor color gamut estimation / calculation, but I have found numerous articles that state that sensors "don't have gamut" - a statement I disagree with. Maybe it is because I don't have a proper understanding of the term gamut? Can't be sure. In any case, here is what I mean by the color gamut of sensor + filters, and what the brute-force way to calculate it would be. Maybe someone will have an idea of how to simplify things and make the calculation feasible.
First, let's define what I believe to be the color gamut of the sensor. Examine the following graph of the QE of the ASI1600. Note the section of the graph between 600 and 630nm - it is mostly a flat line; let's assume it is flat for the purpose of argument. Now look at the following graph of the Baader LRGB filters and observe the same 600-630nm range. The red filter covers it, and there is a tiny variation of transmission in this range, but we can select two points that have the same transmission - let that be, for example, 605nm and 615nm.
If we shine light of the same intensity from two sources - one at 605nm and one at 615nm - the camera/sensor combination will record exactly the same two values; there will be no distinction. We will get a value of, let's say, 53e with the red filter and nothing in the blue or green filters. The important thing: given these two images (of the two light sources), we can't even in principle tell which one was 605nm and which was 615nm.
Now let's examine the human eye response: on the above graph there will be a difference in both the L (long wavelength) and M (medium wavelength) cone cells. Our eye will be able to distinguish these two light sources as having different colors. This clearly shows that the camera + filter combination we are examining has a smaller color gamut than the human eye (although articles insist that neither the human eye nor the camera has a color gamut, and that only display devices have one - a statement I strongly disagree with; in fact, the color gamut of a display device is only defined in terms of human eye sensitivity - it could be larger or smaller if we took some other measurement device / defined the color space differently).
Now that I've explained what I mean by the color gamut of a sensor, let's see how to calculate / measure it. I'll explain the brute-force approach and why it is not feasible. Let's first look at the well-defined CIE XYZ color space and the xy chromaticity diagram. Here are the matching functions for CIE XYZ. Now imagine any arbitrary function that spans 380-780nm and represents the spectrum of light from a particular source. Take that function, multiply it with each of the x, y and z matching functions and integrate (sum the area under the resulting curve) - that produces the X, Y and Z color space values.
The important thing to note is that different spectra will, in general, produce different X, Y and Z - but there will be a very large number of spectra that produce the same X, Y and Z values. There are many different spectral representations of the same color, pretty much like in the above example with the ASI1600 + RGB where I showed you two different single wavelengths that produce the same response on the sensor. With CIE XYZ, any single wavelength will have different XYZ values, but there will be combinations of wavelengths at particular intensities that produce the same XYZ values. For CIE XYZ that does not mean a smaller color gamut - because CIE XYZ is modeled to represent human vision: if the XYZ values are the same, so is the color we see; if we can't distinguish a difference, neither can CIE XYZ.
After we have the XYZ values, we do a bit of a transform to get the xyY color (which means that the intensity is separated out and the xy coordinates represent the chromaticity). If we take every possible light source spectrum, calculate XYZ, derive xyY and plot x,y, we get the familiar horseshoe-shaped curve: everything inside that curve is what we can see, and points on the curve represent pure wavelengths of light.
Determining the color gamut of the sensor can then be viewed as:
- take all possible spectra in the 380-700nm range
- calculate rawRGB values (multiply with the sensor/filter response and sum/integrate the surface under the curve)
- transform each of those rawRGB values into XYZ values with an appropriate transform matrix
- transform XYZ to xyY, take x,y and plot a point on the above diagram
- take all the xy points produced and see what sort of surface they cover - the larger the surface, the higher the gamut of the sensor/filter combination
Compare that with the sRGB gamut, or between sensors or sensor/filter combinations, to see which combination can record how much proper color information.
The problem, of course, is that if we try to generate all possible spectra in the 380-700nm range, there are infinitely many of them. Even if we put crude restrictions on what the spectrum can look like, we still get an enormous number of combinations. If we say, for example, that a spectrum looks like a bar graph with bars 10nm wide and possible values between 0 and 100% in 10% increments, we still end up with something like 10^32 combinations to examine. But we have seen that some spectra end up giving the same result - so we don't need to examine all possible spectra to get a rough idea of what the sensor gamut looks like. Does anyone have an idea of how to proceed with this? (A rough random-sampling sketch is given below.)
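For a rough feel of the brute-force idea without enumerating every spectrum, one can randomly sample bar-graph spectra instead. In the sketch below the combined sensor/filter response and the raw -> XYZ matrix are placeholders (random curves and the identity matrix); real curves read off the published graphs and a fitted matrix would replace them.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
wl = np.arange(380, 701, 10)                         # coarse 10 nm bins

# Placeholder combined QE x transmission curves, one row per channel (R, G, B).
response = np.abs(rng.normal(size=(3, len(wl))))

M = np.eye(3)                                        # placeholder raw -> XYZ matrix

spectra = rng.uniform(0.0, 1.0, size=(100_000, len(wl)))   # randomly sampled spectra
raw = spectra @ response.T                           # integrate spectrum x response
xyz = raw @ M.T
xy = xyz[:, :2] / xyz.sum(axis=1, keepdims=True)     # XYZ -> xy chromaticity

print("sampled gamut area on the xy diagram:", ConvexHull(xy).volume)
```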