Everything posted by vlaiv

  1. This reminded me of one more cause of target drift - one that happens exclusively after slewing: backlash. If there is enough backlash in the system, this can happen - you center the target with the controller and then the target moves away from the center - sometimes by a small amount and sometimes it can even leave the FOV (high magnification, large backlash). This happens when you slew to the target in one direction but not when doing it in the opposite direction (usually in one axis only, but it can be in both with AltAz mounts). In this case the mount is tracking properly, or rather the drive system is tracking, but backlash in the gear train means that this motion is used up clearing the backlash instead of tracking. Once the backlash is taken up, the mount continues to track as it should.
  2. I don't think that using a x2 barlow should produce that effect. I regularly use a dob at x200+ magnification and planets don't just zoom out of the view. Even with regular 50 degree eyepieces, if you center your target, at a sidereal rate of 15"/s and at a magnification of x270 (that is two times x135) you'll still get (25 * 60 * 60 / 270) / 15 = 22 seconds for the planet to exit the FOV (quick calculation below this post). I would not call that zooming out of the FOV. That is without any tracking. With my cheap AzGti I have used magnifications of x236 and the image happily stayed in the center of the FOV - and that is a $300 mount head we are talking about. You pretty much can't get cheaper than that. What could possibly be happening is the following:
- Barlow produces an unacceptable level of imbalance that causes some sort of slip in the mount / mount drive?
- Barlow moves the focal point outward and the focusing mechanism causes image shift at that point (this is with an SCT scope, right?)
- Adding the barlow makes the rear end of the SCT too heavy and mirror tilt happens?
- Wrong tracking mode engaged - lunar or solar rate instead of sidereal?
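A minimal sketch of that calculation, assuming a 50 degree AFOV eyepiece at x270 and a sidereal rate of roughly 15"/s:

# Time for a centered target to drift to the edge of the FOV with no tracking.
AFOV_DEG = 50          # apparent field of view of the eyepiece
MAGNIFICATION = 270
SIDEREAL_RATE = 15.0   # arcseconds per second (approximate)

true_fov_deg = AFOV_DEG / MAGNIFICATION        # true field of view in degrees
half_fov_arcsec = (true_fov_deg / 2) * 3600    # center-to-edge distance in arcseconds

drift_time = half_fov_arcsec / SIDEREAL_RATE
print(f"Target reaches the edge of the FOV in ~{drift_time:.0f} seconds")   # ~22 s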
  3. Indeed I did, but that was in relation to a bit more serious equipment if my memory serves me well - something like a 12 inch or larger newtonian and probably a serious mount? Here we are talking about a 5" scope mounted on an HEQ5 mount. Over time I think I have developed a rule of thumb for an appropriate lower sampling limit (or upper if you will - never know what the appropriate term is - lower in number, but higher in "magnification"):
<80mm: 2-3"/px
80-100mm: 1.7-2"/px
100-120mm: 1.5-1.7"/px
120-150mm: 1.3-1.5"/px
and so on. I would not go for 1"/px unless I had at least 8" of aperture and a very good mount. In fact - I do have 8" of aperture and a less than serious mount and I have a system that samples at 1"/px, and I believe that 95% of the time it is over sampling.
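The rule of thumb above as a small lookup - the break points are just the values from this post, nothing official:

# Rough lower sampling limit (in "/px) vs aperture, per the rule of thumb above.
def suggested_sampling(aperture_mm):
    if aperture_mm < 80:
        return (2.0, 3.0)
    elif aperture_mm < 100:
        return (1.7, 2.0)
    elif aperture_mm < 120:
        return (1.5, 1.7)
    elif aperture_mm < 150:
        return (1.3, 1.5)
    else:
        return (1.0, 1.3)   # larger apertures on very good mounts (extrapolated)

print(suggested_sampling(127))   # (1.3, 1.5) "/px for a 5" scope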
  4. I would say don't. Maybe an even better way to put it would be - the Mak 127 has a rather small fully corrected and illuminated field for its aperture?
  5. I think that the ST102 will be undermounted on either of those two mounts. I had an ST102 and used it on AZ3 and AZ4 - it was much better on the AZ4 but I did miss slow motion controls sometimes. For this reason, if you want to get the ST102 bundled with a mount, maybe look at this one: https://www.teleskop-express.de/shop/product_info.php/info/p10003_Skywatcher-Startravel-102-on-AZ5---Rich-Field-Refractor-on-alt-azimuth-Mount.html The ST102 is not the best scope to observe planets with - there will be a lot of chromatic aberration. The included barlow - like the rest of the starter kit - is very basic, and while it will do what it is supposed to do (give you higher magnification), it is not going to be very good. The same holds for the 10mm eyepiece. The 25mm is actually quite a decent eyepiece and you won't need to replace that one for some time (until you acquire a taste for better eyepieces and have the budget for them). The diagonal is also not a quality item - it serves its purpose but there are better items out there.
  6. That depends on how your sensor is oriented. In both cases - south near the meridian, and west or east at DEC zero - you'll use RA motion and look for DEC drift. If you orient the sensor so that the RA direction is in line with the horizontal axis of the sensor, then in both cases the motion of the star will be left/right and the drift will be up/down. In any other case only one thing is sure - RA and DEC motion should be perpendicular to each other (if everything is ok and the mount is orthogonal - and it should be). The simplest way to orient the sensor so that RA is horizontal is again to find a bright star, start an exposure and slew in RA - if the line on the exposure is horizontal, you are done; if not, rotate the camera (by the same angle the line makes to the horizontal in the image - but not necessarily in the same direction, it depends on how the telescope inverts the image) and repeat until the line is horizontal.
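If you prefer to compute that rotation angle instead of eyeballing it, measure the two ends of the RA trail on the image (pixel coordinates assumed, x to the right, y down) - a quick sketch:

import math

# Angle of the RA star trail relative to the horizontal axis of the sensor.
def trail_angle_deg(x1, y1, x2, y2):
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# Example: trail runs from (100, 200) to (400, 230) -> about 5.7 degrees,
# so rotate the camera by roughly that amount and repeat the test exposure.
print(trail_angle_deg(100, 200, 400, 230))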
  7. There are two places where you need to do the above procedure. It has to do with the X-Y coordinate system and projections and all that math stuff. When you point your scope due south at the celestial meridian (or just before or after it), any drift that happens in that part of the sky will be due to polar alignment error in the horizontal - or in azimuth. So the first step is to adjust your mount in azimuth while aiming at a star due south at the meridian. The second step is to do the same procedure aiming at a star either east or west and at DEC 0. Since this is 90 degrees from south, you are now doing alignment in altitude - or the vertical. This is why you need to have your mount level to begin with - only if the mount is level can you "divide" the adjustments like this so they are not dependent on each other. The DARV method says: start your exposure with the telescope tracking and wait for 5 seconds. This little wait time is for the star to "burn" a little reference dot at the beginning of the path. Then press your controller so that the telescope slews at x1 speed in the RA direction. Do that for half of the "exposure", then reverse direction (just press the opposite button) and hold for the second part of the exposure. If you want to do a 5 minute exposure without trailing - do the above for 2.5 minutes in one direction and 2.5 minutes in the other. You can start by doing shorter runs - like 30 seconds or one minute if your polar alignment is poor at the beginning, but as soon as you start improving it, switch to the full duration of the exposure. Makes sense?
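For completeness, the timing sequence of a DARV run as a sketch - the mount object and its slew_ra() / stop() methods here are hypothetical placeholders, to be replaced by whatever your controller or driver actually offers:

import time

def darv_run(mount, total_exposure_s=60, settle_s=5, rate=1.0):
    time.sleep(settle_s)                      # let the star "burn" a reference dot
    half = (total_exposure_s - settle_s) / 2
    mount.slew_ra(+rate)                      # x1 sidereal in one RA direction
    time.sleep(half)
    mount.slew_ra(-rate)                      # reverse for the second half
    time.sleep(half)
    mount.stop()
# If the two halves of the trail overlap exactly, drift is negligible; a "V" shape
# means you still have polar alignment error in the axis you are testing.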
  8. Indeed, look at the post above by @Davey-T and follow that link - DARV is a very simple and straightforward (and clever, I might add) way to do a "manual" drift align. I would throw in a simple modification since you can't do a regular polar align first and then improve it with drift: make your first couple of iterations shorter - about half a minute - as drift is probably going to be significant. Later extend them to make the polar alignment accurate.
  9. Long time ago, I drift aligned with a piece of software called EQAlign or something like that. Let me see if I can find it. Here it is: http://eqalign.net/e_eqalign.html Unfortunately it requires WDM style drivers (like a web camera) and I'm not sure you will find those for a DSLR (you might - I've seen software that lets a DSLR act as a web cam for skype and such). You don't really need software to do drift alignment. You can do it yourself by using long exposures on bright stars and judging the trail that the star makes due to drift (some sort of measurement of trail length). You need to orient the camera so that you know which direction is DEC (vertical, for example). There will be drift in RA as well - but you don't need to address that for polar alignment (it can however mess up your long exposures - check if a PEC feature is available on your mount). There is a limit to how much an autoguider can fix things. Autoguiding works by using short exposures, like a second or two. It issues commands to the mount to get to the proper position - this is called a pulse. The autoguiding software measures the reference star position and determines how long this pulse should be to correct the position. It uses the guide speed, which is usually set to a fraction of the sidereal rate - x1 or lower, like x0.25. The sidereal rate is about 15"/s. This means that in theory autoguiding can fix drift of up to 15"/s if set to operate at sidereal guide speed. In practice you want your drift rate to be much lower than that, as you don't want to correct on each guide cycle - that also creates elongated stars, as the mount moves back and forth all the time. In reality, any decent polar alignment will have a rather small drift rate. You can use this calculator to calculate the drift rate due to polar alignment error: http://celestialwonders.com/tools/driftRateCalc.html
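If you want a ballpark number without the calculator: to a small-angle approximation, the worst-case DEC drift rate is the polar alignment error (as an angle) multiplied by the sidereal angular rate. A sketch of that, plus the fastest correction a guider can apply at a given guide speed:

import math

SIDEREAL_RATE_ARCSEC_S = 15.04
OMEGA = 2 * math.pi / 86164.0                 # sidereal rotation rate in rad/s

def max_dec_drift_arcsec_per_min(pa_error_arcmin):
    # worst case: drift rate ~ PA error (as an angle) times the sidereal rate
    pa_error_arcsec = pa_error_arcmin * 60.0
    return pa_error_arcsec * OMEGA * 60.0     # arcsec/s -> arcsec/min

def max_correction_rate(guide_speed_x=0.25):
    # fastest sustained correction the guider can apply
    return guide_speed_x * SIDEREAL_RATE_ARCSEC_S

print(max_dec_drift_arcsec_per_min(5))   # ~1.3 "/min for a 5 arcmin polar alignment error
print(max_correction_rate(0.25))         # ~3.76 "/s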
  10. The drift alignment method works with both the guide scope / camera and the primary scope camera - it is just a matter of convenience which one you use. For good drift alignment and general goto precision it is important to have your mount tripod level and stable (not moving). Autoguiding will correct for polar alignment errors, but it can't work miracles. The better your polar alignment is from the start, the less work the autoguider needs to perform. There is only one case where you don't want perfect polar alignment - if your mount has DEC backlash. In that case it is better to have a slight polar alignment error, which then causes the DEC error to be to one side only (due to drift). The autoguider will correct this error and the corrections will always be to one side as well. This prevents changes of direction in DEC and circumvents any issues with DEC backlash. PHD2 has an algorithm called "resist switch" - and I think it is the default DEC guiding algorithm. It is used precisely for this reason - a small PA error is going to produce drift to one side only; everything else is either seeing, wind or mount roughness.
  11. Lockie has done several of these entry level scope reviews and I think it's worth having a look at each one of them. You'll find all the scopes mentioned here being reviewed in one way or another - ST102, Mak102, etc ...
  12. The optics of said scopes are most certainly the same and in principle they should show you the same thing. What differs is usability and maybe some other features. Let's examine both pairs to see possible differences. In the first example, the AZ-EQ Avant is the better mount. Although they look the same, the AZ-EQ is made so it can be used in both EQ and AltAz mode. With the StarQuest that is probably not the case. I'm not 100% sure on that one, but the StarQuest looks like a regular EQ mount from SW in design. Regular EQ mounts from SW can still be used in AltAz mode, but that requires modification and you can't really make them dual mode easily. This example of an EQ3 shows why that is so: https://www.cloudynights.com/topic/272037-eq-mount-in-alt-az-mode The StarQuest appears to be the same design. Then there is the matter of the tripod. It looks like the AZ-EQ Avant tripod is better quality / more stable and lighter. As a bonus, you can take the AZ-EQ Avant mount head and attach it to any photo tripod (which means that you can upgrade the tripod easily). In the second case, the scope is the same, but details on the scope will be different. The regular ST102 comes with rings and a dovetail and can be carried by different mounts that accept a vixen dovetail. The one on the StarQuest has some sort of bolt connection to the mount head - less versatility. It could be the case that things on the OTA are of lesser quality on the StarQuest - for example plastic parts. This is done to save weight rather than to save money, as the StarQuest is a really light weight mount. Something similar has been done with the Newtonian scopes and Maks that go on these light weight mounts: redesigned mirror cells - ones that don't support collimation and look like they are made out of plastic. Compare these two images: the second one does not have collimation screws and the back end has been redesigned.
  13. Did you by any chance ever see (or calculate) an atmospheric correction matrix, to save me the trouble of doing it? I suspect that it will be different for different air mass, but just a few should be enough to cover most of the cases - like 90, 60, 45 and 30 degrees above the horizon.
  14. I don't know the technical details of how the CCM should be defined (as per standard or what is usually done). I did not calculate the CCM on white balanced data - it was done on raw data without any other modification, so I suspect it contains the white balancing information "bundled" together. Is it customary to separate the two? First apply white balancing as simple red and blue scaling functions - like I did as an example above to compare it to the matrix approach - and then derive the CCM afterwards? My approach was fairly simple, and I'm sure that is how a CCM is usually derived: measure linear values of recorded and reference RGB, scale those triplet vectors to unity length (to avoid the matrix scaling the values) and then use linear least squares to solve for the transform matrix. I can now see how the CCM can be a separate thing. Similarly I'll now need to derive an atmospheric correction matrix (ACM? ) - depending on air mass and other factors - and it is also going to be a "stand alone" type that can be applied after the CCM while the data is still linear. Here is the base code for calculating attenuation that I've found and plan to examine and adopt: https://gitlab.in2p3.fr/ycopin/pyExtinction/-/blob/master/pyExtinction/AtmosphericExtinction.py I just really need a simple attenuation per wavelength graph so I can again take a set of (reasonable) spectra that represent colors, apply atmospheric extinction to them and see what color they represent then. Same least squares approach as above.
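For reference while I work through that script, the basic relation is simple: extinction is quoted in magnitudes per airmass, k, and transmission at airmass X is 10^(-0.4 * k * X). A minimal sketch - the k values below are only illustrative placeholders, real per-wavelength coefficients are what pyExtinction computes:

import math

def airmass(altitude_deg):
    # plane-parallel approximation, fine above ~30 degrees altitude
    return 1.0 / math.sin(math.radians(altitude_deg))

def transmission(k_mag_per_airmass, X):
    return 10 ** (-0.4 * k_mag_per_airmass * X)

# Illustrative (placeholder) extinction coefficients - real values depend on the site.
k = {"B": 0.25, "V": 0.15, "R": 0.10}
for band, coeff in k.items():
    print(band, transmission(coeff, airmass(45)))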
  15. You'd think that it would be light weight but it is not. Well, it depends on what you compare it to - it is certainly light weight compared to an HEQ5 or EQ6 class mount, but let's be realistic: can you move it assembled in one hand? The head is approx 4.4kg, the tripod is 5.7kg, it has 2 x 3.4kg of counterweights - let's assume you are using only one, so 3.4kg. Let's also assume that your imaging rig is something like 1.5kg max. That totals 15kg. I know that I can lift a compact object weighing 15kg and carry it the needed distance, but I'm not sure how I would feel about a bulky, sensitive object of that weight. The AzGti with tripod weighs about 4kg (3.9kg). Add a counterweight of 1kg, another 0.5kg for the wedge and 1.5kg of imaging kit and it totals something like 7kg? The EQM-35 Pro will certainly have fewer issues with backlash / PEC / tracking and goto precision and it can carry more. It is a higher class mount / a step up from the AzGti, and that shows both in the weight of the rig and the carrying capacity (both almost double).
  16. I think that we need to separate two things: the image that the scope forms at the focal plane, and the image that we see and/or record. Point a telescope at a planet or a star and an image will be formed at the focal plane of that telescope regardless of who is watching or whether there is a camera attached to the telescope. That image behaves as described above in the post by @chiltonstar - it is the original image convolved with the PSF of the telescope and projected at a certain scale. The Rayleigh criterion is equally applicable to point sources or low contrast features in what it describes. You can apply it to two stars (of the same or different intensities) or you can for example apply it to line pairs (again of the same or different intensity). It tells you how close these features need to be in order for the image to show them as separate features. It boils down to whether a graph of light intensity will have a single peak or a double peak. Only after all of this can we talk about the ability of the detector - be that eyepiece + human eye or camera sensor at prime focus (or any other configuration) - to record / detect the above resolved feature. The human eye requires certain conditions to be able to capture the details in the image that are important. It needs a certain magnification and a certain level of contrast - and these two are not in a trivial relationship, as we have seen from the above graphs and the examples with the number 17. Similarly, a camera sensor also needs certain conditions. It needs adequate pixel size compared to the image scale at the focal plane. It also needs enough integration to achieve good SNR for the contrast of the features, and so on. In the above sense, aperture is the natural limit of what can be resolved and yes, if we talk about resolving (even very fine contrast details) - a very specific term, different from detecting - it will resolve in the focal plane according to the Rayleigh criterion. If we on the other hand talk about recording that light intensity function without loss, we can't use the Rayleigh criterion any more - we need to use Nyquist (rough numbers in the sketch below). Similarly, if we talk just about detection, we again need to use other criteria. What we should not do is mix criteria and "cross apply" them, as it might lead us to wrong conclusions.
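To put rough numbers on the Rayleigh vs Nyquist distinction - a sketch assuming green light and the convention of sampling the Rayleigh angle with two pixels (other, MTF-based criteria give somewhat different figures):

import math

def rayleigh_arcsec(aperture_mm, wavelength_nm=550):
    # Rayleigh criterion: theta = 1.22 * lambda / D
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return math.degrees(theta_rad) * 3600

def nyquist_pixel_um(aperture_mm, focal_length_mm, wavelength_nm=550):
    # pixel size that samples the Rayleigh angle with two pixels at this focal length
    scale_rad_per_px = math.radians(rayleigh_arcsec(aperture_mm, wavelength_nm) / 2 / 3600)
    return scale_rad_per_px * focal_length_mm * 1000   # mm -> micrometers

print(rayleigh_arcsec(200))            # ~0.69" for an 8" aperture
print(nyquist_pixel_um(200, 2000))     # ~3.4 um pixels at 2000 mm focal length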
  17. I believe it is quite the opposite - a high number of captured frames means a better image. There is a limited amount of time you can spend on a target since planets rotate (even with derotation). A large number of captured frames means a large number of good frames, given equal probability of a good frame due to seeing at the time of capture. A large number of frames stacked means better SNR, and better SNR means that you can sharpen the image more without bringing the noise to the surface.
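The SNR argument in one line - assuming uncorrelated noise between frames, stacking N frames improves SNR by roughly sqrt(N):

import math

def stacked_snr(single_frame_snr, n_frames):
    # random, uncorrelated noise assumed
    return single_frame_snr * math.sqrt(n_frames)

print(stacked_snr(2.0, 1000))   # SNR ~63 vs 2 for a single frame - that headroom is what lets you sharpen harder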
  18. The problem with this explanation is the understanding of what it means to resolve and what it means to detect. Take any old double star - I'll detect it with even a very small aperture. I'll say - there is a star there. You'll ask "Is it a double star?", and I'll reply - "Dunno, I have not resolved it yet". Take any image of the Encke division and ask the same question - is it a single gap or a dual gap? Or better yet - can you measure its width from any of those images? You can't cheat physics. Telescope aperture is about resolving, not about detecting. Well, that too - you need large aperture to detect faint stuff - you won't be able to see a mag15 star with a 3" scope for sure, but in this context we are talking about resolving. No amount of magnification, nor sharpness/quality of optics, can make a telescope resolve beyond what physics says it is capable of / allows to be done.
  19. Indeed - it can only be used to compare scopes of the same type and aperture. Even if the aperture is the same in size, strehl will not be meaningful between scopes that have a different aperture "shape" - like unobstructed vs obstructed aperture. Both can have the same quality wavefront, but being two different apertures, the MTF will be different between them. I would put it like this: Strehl ratio does not depend on wavelength - but there are scopes that have different optical characteristics with respect to wavelength and hence they will have a different wavefront and strehl at different wavelengths - namely telescopes with refracting elements. You could similarly say that an SCT will have a strehl that depends on the back focus used? Spherical correction depends on mirror separation, and the position of the instrument measuring wavefront accuracy (and hence the mirror separation) will change this.
  20. I don't think you are right here. Strehl is a measure of wavefront accuracy and is a ratio of two values for a single telescope: (how much energy ends up in the airy disk of the actual telescope) / (how much energy would end up in the airy disk of a perfect telescope of the same aperture). As such it does not depend on aperture size, nor on the wavelength used - or rather, it is just a number, and as a number it is comparable across scopes. What it can't tell you are the different aspects of the performance of the telescope. It won't tell you what sort of detail the telescope should be able to resolve - that depends on aperture. It does not tell you the light gathering capability of the telescope, and so on.
  21. If you see something like 1/10th of a wave, that usually stands for PV wavefront quality. Sometimes an RMS figure is used instead, and that is a much smaller error - numerically a "higher" fraction (for example, 1/8 wave PV corresponds to about 1/27 wave RMS for spherical aberration - see the table below). 1/4 wave PV is diffraction limited - and these days mass produced Chinese scopes are often better than that - Strehl of 0.8 or higher. I would say that 1/6th of a wave is good optics, 1/8th is very good optics and 1/10th of a wave and better is exceptional optics. These are all PV values (peak to valley). Two scopes having 1/6th PV won't have the same other measures - like RMS and Strehl. They will be comparable in other measures but there is no direct "conversion" factor between the two. Look at this table: 1/8 wave of spherical aberration equates to 0.95 strehl, but the same error from a turned edge results in 0.993 strehl - which is much better. It also tells you that 1/4 wave PV is 1/13.4 wave RMS and 1/8 wave PV equals 1/27 wave RMS - again for spherical aberration.
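If you want to reproduce numbers like those, the usual shortcut is the Marechal approximation - Strehl ~ exp(-(2*pi*sigma)^2) with sigma being the RMS wavefront error in waves. It matches the quoted table values quite well:

import math

# Marechal approximation: Strehl from RMS wavefront error (in waves).
# Valid for reasonably small aberrations; the PV-to-RMS conversion depends on the
# aberration type, which is exactly why PV alone doesn't pin down Strehl.
def strehl_from_rms(rms_waves):
    return math.exp(-(2 * math.pi * rms_waves) ** 2)

print(strehl_from_rms(1 / 13.4))   # ~0.80 - the classic 1/4 wave PV of spherical aberration
print(strehl_from_rms(1 / 27.0))   # ~0.95 - 1/8 wave PV of spherical aberration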
  22. We can talk about what the proper color in an astro image would be, and no, there is no way to ensure 100% proper color as our eye would see it, simply because we want to show both faint and bright structure - we often compress a dynamic range that is beyond the human capacity to see at one time. Humans can see that much dynamic range, but the eye employs tricks to enable us to do it. In bright daylight our pupils contract. In very faint conditions our pupils dilate and we also undergo dark adaptation. Color perception is different under these conditions - so we can't have it all at the same time.
Anyways - there is what we could call an "as scientifically accurate as possible" rendition of the image. You don't necessarily need to expose for the same period of time in each band, simply because the camera won't have the required quantum efficiency in all bands - this is precisely why we develop these transform matrices. What you can do, however, if you expose for longer - meaning more subs of equal length - is use the average stacking method (instead of addition - but this is probably the default in almost every astro stacking software). If you use different sub durations - just mathematically scale the data while still linear. You can think of it as calculating photon flux per second for each sub and then using that as the resulting data (divide each stack by the corresponding sub duration in seconds). When applying flats, make sure the flats are normalized to 1 for full illumination.
The next step would be to apply the above mentioned transform matrix to transform your raw rgb data into sRGB linear rgb. The following step would be to apply correction for atmospheric attenuation. This is what I'll be doing next for the above data - we want the image as it would look from space, not from here on earth where it depends on the atmosphere and target altitude above the horizon (air mass).
In the end, you have proper sRGB balanced linear data and it is time to compose an rgb image out of it. The first step would be to create (or use, if you took the LRGB approach) a luminance file. You stretch it, and not the colors. If you don't have it, you can calculate it from the sRGB linear data using the formula for the Y component of XYZ. The Y component of XYZ color space is closely related to the luminance perception of the human eye. It is also related to the L part of the Lab color space (you can also convert it to L from Lab if you want perceptual uniformity - but this really does not matter if you are going to stretch your data, as L from Lab is just a power law over linear data). You can see that luminance only depends on the Y component (and the Y of the reference white point) and a third power function (again there is a split here - linear at low values and a power law otherwise, as it mimics human vision, like sRGB).
Ok, so we have our luminance and we stretch it, denoise it, sharpen it - do all the processing until we have a monochromatic image that we are happy with as our final image. The final step would be to apply color information to it. We take our linear r, g and b, and the first step would be to do the inverse gamma function on our processed L - to bring it into the linear regime. Next, for each pixel we normalize the linear rgb like this: r_norm = r / max(r,g,b), g_norm = g / max(r,g,b) and b_norm = b / max(r,g,b). We get our final color components as: r_lin = r_norm * l_linear, g_lin = g_norm * l_linear, b_lin = b_norm * l_linear, and we take r_lin, g_lin and b_lin and do the forward gamma transform on them to get our final R, G and B in sRGB color space that represents our image. Putting it all together is sketched in code below.
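Here is that composition procedure as a compact sketch, assuming you already have stacked, color-calibrated linear r, g, b arrays (0-1 range) and a processed luminance in sRGB gamma space; the Y coefficients are the standard linear sRGB to XYZ ones:

import numpy as np

def srgb_decode(c):                      # display (gamma) space -> linear
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def srgb_encode(c):                      # linear -> display (gamma) space
    c = np.clip(c, 0, 1)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def luminance(r, g, b):                  # Y of XYZ from linear sRGB components
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def compose(r, g, b, L_processed):
    l_linear = srgb_decode(L_processed)              # bring processed L back to linear
    m = np.maximum(np.maximum(r, g), b)
    m = np.where(m == 0, 1, m)                       # avoid division by zero
    r_lin = (r / m) * l_linear                       # keep color ratios, replace brightness
    g_lin = (g / m) * l_linear
    b_lin = (b / m) * l_linear
    return srgb_encode(r_lin), srgb_encode(g_lin), srgb_encode(b_lin)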
BTW, doing narrowband imaging and then trying to compose it into a true color image is rather counterproductive. There is almost no distinction in color between Ha and SII as they are very close in wavelength (both are just red, and if you keep their true color you won't be able to tell the difference). You can true-color compose a Ha + OIII image and for the most part it will have color that can be displayed in the sRGB color system. However, as we mentioned before, pure Ha and pure OIII can't be displayed in sRGB. If you want to know what sort of color you could expect from Ha and OIII data in true color - the chromaticity diagram is your friend: any combination of intensities of the Ha and OIII wavelengths of light is going to produce colors that lie along the black line in the image above - the line that joins the respective wavelength points on the outer path. Since we use only linear transforms in the color spaces involved (sRGB gamma is just an encoding thing - there is still a linear sRGB part and it uses a regular matrix for the transformation) - any linear combination of intensities a * Ha + b * OIII will form a line, a line remains a line under a matrix transform, and all the colors that we can possibly get will be along that line. Btw, this is the reason why sRGB colors are all inside the triangle above - the vertices of that triangle are the sRGB primaries - pure red, green and blue of sRGB space - and every possible color that can be made using a linear combination of them lies on lines joining them, which covers the whole triangle (green + red can create orange, and orange connected with a line to blue can create some pale magenta hue, or whatever other combination you can do). This means that if you image a nebula or galaxy with a DSLR / color camera and do proper color balance, you can actually end up with some whitish / green-teal type of color for Ha+OIII nebulosity. For narrowband images it is still best to keep the false color approach and just explain which color is which gas (HSO, SHO, etc ...)
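To see where on that line a particular Ha/OIII mix lands, you can add the two sources in XYZ and convert back to xy. The endpoint coordinates below are only rough readings off the spectral locus (~656 nm and ~501 nm), good enough for illustration:

# Chromaticity of a mix of two monochromatic sources (a*Ha + b*OIII).
HA_XY   = (0.73, 0.27)   # roughly 656 nm (deep red end of the locus)
OIII_XY = (0.01, 0.54)   # roughly 501 nm (blue-green)

def xy_to_XYZ(x, y, Y=1.0):
    return (x / y * Y, Y, (1 - x - y) / y * Y)

def mix_chromaticity(a, b):
    # a, b - relative luminances of the Ha and OIII components
    X1, Y1, Z1 = xy_to_XYZ(*HA_XY, Y=a)
    X2, Y2, Z2 = xy_to_XYZ(*OIII_XY, Y=b)
    X, Y, Z = X1 + X2, Y1 + Y2, Z1 + Z2
    s = X + Y + Z
    return (X / s, Y / s)      # resulting xy lies on the segment between the endpoints

print(mix_chromaticity(1, 1))  # a point on the Ha-OIII line, a washed out hue well inside the diagram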
  23. For a bit more technical discussion on that topic, see here:
  24. The color balancing transform and my own derived transform serve the same purpose, so there is not much point in combining the two. You can pick either of them. I did not mention what the scaling coefficients are: red is 1.985, blue is 1.309. From the above examples, it looks like my measured and derived matrix will give better results - at least on these test colors (which should be good representatives of the colors out there).
The gamma transform is non linear and therefore can't be encoded in a matrix. It is part of the sRGB standard - and should be used only if the color space of the resulting image is sRGB. Computer standards say that any image without an embedded color profile should be treated as an sRGB image. Gamma is basically a power law. Gamma 2.2 just means that linear RGB colors in the range 0-1 get transformed by raising to the power 1/2.2 (the 2.2th root), and going from gamma corrected to linear is the opposite - raise to the power 2.2. This is an approximate transform. The true transform is a bit more complicated and is split into two parts - for very dark values the gamma transform is linear, and for values larger than about 0.003 (in 0-1 space) it uses an exponent of 1/2.4. The image shows the difference between the actual gamma and the approximation - the red line is the actual gamma function of sRGB and the dashed line is the approximation with 2.2. The blue line is the same as the red, only plotted in log space - so you can clearly see the linear part as being constant in log space.
BTW, the gamma function exists for the same reason the magnitude system exists. Human vision is non linear with respect to intensity. If you take RGB values and compare for example 0.2, 0.4 and 0.6 (each being white, so (0.2, 0.2, 0.2), (0.4, 0.4, 0.4), (0.6, 0.6, 0.6)) - you would expect to visually say something like: the second color is brighter than the first by as much as the third color is brighter than the second. A scale of such colors should form a nice linear gradient. And it does, in a gamma corrected space such as sRGB. The problem is that this does not happen in linear space. If you take light sources and make the first light source give off 2 million photons per second, the second 4 million photons per second and the third 6 million photons per second - and you look at them - you won't see a nice linear brightness distribution, although both the number of photons and the energy in watts (think of it as flux of some sort) given off by the sources is linear. This is why we have the magnitude system for stars - which is also log based (a power law). The camera "sees" in linear space, and raw data recorded with the camera of these three lights will record 2, 4 and 6 ADUs (or whatever other numbers, depending on lens, distance and integration time); if we directly convert that to sRGB values, we will see a nice linear gradient on screen, but our eyes won't see the real lights as linear in real life - the image on the screen will be wrong! This is why it is important to do proper transforms, and we almost never do it in astronomical imaging for some reason (probably lack of understanding or simply not caring about it - there is an old notion that there is no proper color in astrophotography anyway - which I think is wrong).
As for the 3d coordinate space - well, first you need to understand that the RGB space used for displays is actually smaller than the space of all colors. Many colors can't be reproduced on a computer screen, and in fact no pure wavelength color can be reproduced on a computer screen (nor in almost any other color space apart from absolute color spaces).
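Written out as code (since the formula image does not reproduce well here), the sRGB transfer function and the plain 2.2 approximation it is being compared to:

# The actual sRGB transfer function vs the 2.2 power-law approximation
# mentioned above (scalar version; thresholds are the ones from the standard).
def srgb_gamma(c_linear):
    if c_linear <= 0.0031308:
        return 12.92 * c_linear               # linear toe for very dark values
    return 1.055 * c_linear ** (1 / 2.4) - 0.055

def approx_gamma(c_linear):
    return c_linear ** (1 / 2.2)

for v in (0.001, 0.01, 0.1, 0.5):
    print(v, srgb_gamma(v), approx_gamma(v))  # curves agree closely except near black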
This means that a rainbow will always look different in real life compared to a photograph. Some colors in real life end up with negative coordinates in RGB space - and since we can't emit negative light, these colors are simply outside of the RGB color space. Similarly, with the above transformations there could be cases where we get results that are outside the sRGB color space. Examine the transform: new green = -0.62*r + 1.23*g - 0.21*b (this time rounded to two decimal places). It states that we should subtract scaled raw_r and raw_b values from scaled raw_g to get linear green. What happens if raw_g is zero and raw_r and raw_b have some value? Clearly the result will be negative. Such a color is outside of the sRGB linear color space. Can it happen? Yes it can, but not as often as you might think. Here, look at the QE graph for the ASI178: the green matching function never falls to 0. If raw_r has a value greater than 0, then so must raw_g, and similarly with raw_b. There are cases where the new green will be negative, though. Take for example Ha signal and pretend it is at 650nm (it is 656nm in reality, but it is easier to read off the 650nm values from the graph). raw_r will be 0.8, raw_g will be about 0.16 and raw_b about 0.045. Clearly new green = -0.62 * 0.8 + 1.23 * 0.16 - 0.21 * 0.045 = -0.496 + 0.1968 - 0.00945 = -0.30865. So this color is outside of sRGB space with our transform. sRGB can't show a pure Ha signal. Is this true? Yes it is - look at the chromaticity diagram for sRGB. This is something called the xy chromaticity diagram. Each hue, irrespective of lightness, can be mapped to an xy coordinate like this. All colors that can be displayed by the sRGB standard fall into the coloured triangle (it shows a mapping of the actual colors). Along the edge of this horseshoe shape all pure wavelengths are mapped. If you look for 650 or 656 - it will be somewhere to the right, next to the edge of this shape between 620 and 700 (these are nanometers btw) - it is outside the sRGB color triangle.
What do we do with colors that fall outside of the sRGB color space when trying to do the color transform from our camera? We should really map them to the closest color that can be shown. A simple but not the most accurate way to do it would be to simply clip results to the 0-1 range. In our example above the resulting green would simply be set to 0. Why is this not the best way? Because sRGB is not what we call a perceptually uniform color space. It means that finding the closest point to the original point in 3D that falls within our color space is not necessarily what we would call the "closest color" - perceptually. For example - take pure blue, pure red and pure green colors from sRGB space. Which one do you think is most similar to pure black? I would say blue. However, they are geometrically the same distance from (0, 0, 0) - each one is at distance "1". We can then modify our method of finding a replacement color to: convert the color to a perceptually uniform color space and then find the closest color that is in sRGB space.
In the end, to address the actual geometrical transform - I don't know what it looks like. Not sure it makes any sense to know it, as you can't relate it to color in any easy way. It is however a matrix transform, which means it is a linear transform. It will possibly scale space, rotate it, translate it, shear it, perspective distort it, but straight lines will remain straight. It won't do any sort of "warping" of the space. I want to mention just one more thing in the end - the method of deriving the transform matrix. It is really nothing special.
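The same out-of-gamut example in code, using the rounded green row of the matrix from above, together with the simple clipping approach:

import numpy as np

# new_green = -0.62*raw_r + 1.23*raw_g - 0.21*raw_b
green_row = np.array([-0.62, 1.23, -0.21])

raw_ha = np.array([0.8, 0.16, 0.045])        # rough QE readings at ~650 nm from the graph
new_green = green_row @ raw_ha
print(new_green)                             # ~ -0.31 -> outside linear sRGB

# Simplest (but not perceptually ideal) handling: clip to the 0..1 range.
print(np.clip(new_green, 0.0, 1.0))          # 0.0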
Most DSLR cameras have similar matrices defined for different white balancing scenarios (you can sometimes find these online, and I believe they are embedded in the camera raw file as well). You take both the reference image and the recorded image. Make sure you have them in linear space - in this case that means doing the reverse gamma transform on the reference image, since the recorded raw image is already in linear space. You then measure average R, G and B values in each square for both the reference and the recorded image. You scale those rgb vectors to unity length for both reference and recorded image and in the end do least squares solving to find the matrix that will convert the list of measured vectors into the reference vectors. I used ImageJ and LibreOffice Calc to do all of that.
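A minimal numpy version of that fitting step, assuming the per-patch measurements are already collected into two N x 3 arrays of linear values:

import numpy as np

# Least-squares fit of a 3x3 color transform matrix M such that measured @ M ~= reference.
# `measured` and `reference` are N x 3 arrays of per-patch linear RGB values
# (reference values gamma-decoded first, as described above).
def fit_ccm(measured, reference):
    measured = measured / np.linalg.norm(measured, axis=1, keepdims=True)     # unit-length vectors
    reference = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M.T        # so that corrected = M @ [r, g, b] for each pixel

# Usage: M = fit_ccm(measured, reference); then apply M to every linear RGB pixel.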