Everything posted by vlaiv

  1. I'm starting to feel that this discussion is all over the place and we might be losing the sense of what the initial direction was (at least at some point - at the very beginning you offered your data to others to process so you could compare different approaches). At some point we started discussing the real color of the object in question. A given workflow can either provide you with that color or not. The fact that you are using the "standard" workflow in PI does not mean it will provide you with the proper color of the image, nor that it is "the way" images ought to be processed. That holds for the "true color" approach as well. At some point in the discussion we touched on that subject too - how much value there is in rendering the true color of the target, and what people expect out of an image. If you want to get the true color of the object - regardless of what it actually is - there is a "set of rules" to be followed. Same as with any other data that you measure: if you want your measurement to be accurate and to represent the thing that you measured, you need to make sure you know what you are doing with the data and that you don't skew it in any way. While in the linear stage, the ratio of R, G and B represents color. If you change this ratio, you are changing the color - simple as that. This is related to how light works and how our vision works. In other color spaces you don't need to preserve the ratio, but in linear RGB (and, for example, the XYZ color space) you do. Any operation that messes up this ratio of the three components is going to change the color. Non-linear operations on RGB triplets change the ratios. It does not matter if you stretch a single channel or stretch the RGB image (which is just applying the same stretch to each of the R, G and B components of the color image) - you are changing RGB ratios and hence changing color. That is perfectly fine if you don't want to be bothered with accurate color - just use the workflow that you are used to and that produces what you perceive as a nice image. But if you want to get true color, you need to work in a way that will allow you to do it - namely, only linear transforms on the RGB color components. Proper colorimetric calibration of your data is also needed to provide you with true color (btw - PI photometric color calibration will not provide you with accurate color; it also uses data from the image and has no concept of color in the sense that we are using it here - it assumes that color is "astronomical color", or rather a color index: the difference between two photometric filter measurements). I outlined a workflow that will give you proper color if you have properly calibrated color data. The issue is to take a camera and filters (btw, this works with OSC cameras too) and somehow measure and deduce the proper transform matrix needed for color calibration. That was my goal - to demonstrate this workflow, make a usable method of color calibration, and present as a result the proper color of M101 for reference - what it actually looks like.
  2. If you applied curves after color calibration, the colors will be wrong. Non-linear transforms don't preserve color.
  3. You can't increase saturation on a single channel - saturation is a property of color, or rather of the perception of color. You can change it only by changing the color - making it a different color. In my workflow you don't stretch the color information at all. That way, the RGB ratio is preserved while in the linear stage. At the end, a gamma of 2.2 is applied because it is required for the sRGB color space - which we use for our images on the web (the web standard is that an image without color space information is treated as if it were in the sRGB color space). Here is the workflow that I use (see the sketch below):
     1. Luminance is stretched and mapped to the 0-1 range.
     2. Color is calibrated using a color calibration matrix (the point of my work here is to find a suitable color calibration matrix - the same thing that turns the raw images above into a proper image, which a camera does with its factory-embedded transform matrix).
     3. The normalized RGB ratio is computed as r = r/max(r,g,b), g = g/max(r,g,b), b = b/max(r,g,b).
     4. Luminance is transformed by the inverse of gamma 2.2.
     5. Luminance is multiplied by r, g and b to produce linear r, g and b.
     6. The resulting linear r, g and b are then forward transformed with gamma 2.2 to give the sRGB R, G and B channels, which are simply combined into the final image.
     The only stretch that I do is on luminance. This way you won't have stretch artifacts, color is preserved (and is correct if you do a good calibration), and no star cores will be burned out if you have full-range R, G and B subs (they are not saturating; luminance can saturate, but that is not important, as it is the only thing that is stretched and the stretch will compress star profiles anyway).
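As a rough illustration of those six steps, here is a minimal numpy sketch. The function and variable names are my own, the calibration matrix is assumed to be supplied from elsewhere, and the sRGB transfer curve is approximated as a pure 2.2 gamma (the real sRGB curve is piecewise, but 2.2 is the approximation used in the post).

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Forward transform: linear values -> display (sRGB-like) values."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    """Inverse transform: display values -> linear values."""
    return np.clip(encoded, 0.0, 1.0) ** gamma

def compose_color(lum_stretched, rgb_linear, calib_matrix):
    """Combine stretched luminance (0-1) with linear RGB colour data.

    lum_stretched : 2D array, luminance already stretched and mapped to 0-1 (step 1)
    rgb_linear    : H x W x 3 array of linear, uncalibrated R, G, B
    calib_matrix  : 3x3 colour calibration matrix (step 2; deriving it is the point of the thread)
    """
    # Step 2: colour calibration
    rgb = rgb_linear @ calib_matrix.T

    # Step 3: normalised RGB ratio - divide each pixel by its max(r, g, b)
    max_c = np.max(rgb, axis=-1, keepdims=True)
    ratio = np.divide(rgb, max_c, out=np.zeros_like(rgb), where=max_c > 0)

    # Step 4: bring stretched luminance back to linear with inverse gamma
    lum_linear = gamma_decode(lum_stretched)

    # Step 5: multiply linear luminance by the ratios to get linear r, g, b
    linear_rgb = lum_linear[..., np.newaxis] * ratio

    # Step 6: forward gamma 2.2 to get the final sRGB channels
    return gamma_encode(linear_rgb)
```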
  4. A non-linear transform does not preserve RGB ratios - a quick numeric illustration follows below.
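A tiny example with hypothetical values, using a square root as a stand-in for any curves-type stretch, showing that a linear scale keeps the RGB ratio while a non-linear stretch changes it:

```python
import numpy as np

rgb = np.array([0.8, 0.4, 0.2])          # a linear RGB triplet, ratio 4 : 2 : 1

scaled = rgb * 0.5                        # linear operation (e.g. exposure change)
stretched = np.sqrt(rgb)                  # non-linear "stretch" (square root as a stand-in for curves)

print(rgb / rgb.max())                    # [1.    0.5   0.25 ]
print(scaled / scaled.max())              # [1.    0.5   0.25 ]  - same ratio, same colour
print(stretched / stretched.max())        # [1.    0.707 0.5  ]  - ratio changed, different colour
```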
  5. Now for something that I promised: a comparison of different color data from a DSLR. This is a Canon 750d; the image was taken in both RAW and JPEG at the same time (the camera created both). Here is the JPEG from the camera, just resized down: the colors rendered in the image faithfully show what the observer was seeing in those conditions (good color balance and factory color calibration for that sensor). This is the raw image, converted to FITS with FitsWorks, just debayered with the channel composition done in Gimp. It is obvious that this image is still in linear space (no required gamma applied - everything is darker) and that no color calibration has been done. The observer saw a very different scene (see the image above). Here now is the raw image with a histogram adjustment in Gimp (just a simple curves stretch as we would do for astrophotography): this looks more like the image above in terms of gamma (which is to be expected, as a simple curves stretch closely resembles a gamma curve in shape), but regardless - the colors are way off.
  6. No it is not. Or, at least, we might again be using different terminology. Color calibration means adjusting the color information so that it is properly measured - it is adjusting for instrument differences with respect to color capture. The aim of color calibration is:
     - If two people image the same target with different equipment and produce the color of that object, it needs to match between the two of them.
     - If a person images two different objects, and we know that those two objects have the same color, the data after color calibration needs to match - it should indicate that the two objects indeed have the same color.
     Since color is a perceptual thing, there is one more requirement for color calibration:
     - If you take an image of an object (or light source), color calibrate it properly, and show it to people side by side with the same object (or light source), they will all tell you - yes, that is the same color.
     Therefore color calibration is not just adjusting for instrument response; since color is a perceptual thing, it is also color matching - or rather, it needs to satisfy color matching.
  7. The term color balance really has no place here. It is related to daytime photography and the fact that you are creating an image under one set of lighting conditions and want to display it under other "conditions" - or rather, you want to display the colors of the object close to the viewing conditions. That is what color balance really is. Color balance does not deal with raw camera data, as that part is handled for you by the DSLR camera - it transforms the raw data into actual true color data. Here in astrophotography we are using the raw color data from the camera. It has not been transformed into actual color data, but it is used as if it had been. This gives "strange" colors of objects - or rather colors that are not true. The problem is that this is how it has always been done, and people are used to such colors in astrophotography. There is also a "more is better" culture present - people like to see more: more faint stuff, more color, more "bling"... I'm going to try to do a demonstration for you in a minute of what is happening with colors in astrophotography, by means of a comparison to daytime photography. I'm going to take a raw image of a colorful object with my DSLR and then show you the out-of-camera JPEG, and then show you what the raw color without color calibration looks like (I'm going to treat the raw from the camera as if it were an astro image - the same workflow people use).
  8. That is because the histogram is something totally different and has no relevance to what we are discussing here.
  9. Nope - try putting a 1:1:1 color ratio into Photoshop / Gimp and you will see that it is a completely grey color. So is 50/50/50 - or any other combination that has equal amounts of R, G and B.
  10. Significantly. I don't really understand PCC in PI. I tried to understand it and read that document, but there are sentences there that diverge significantly from my understanding of color, color models / spaces and color matching. For example this: In my understanding, the white reference is tied to the display medium, not to the capture phase. Therefore the white point should be chosen so that it corresponds to the color space you are going to present your data in. For most purposes we present our work in web image formats (PNG, JPEG). These use the sRGB color space by default, unless you specify a different color space and provide a transform for it (an ICC profile in PNG). sRGB has the well-known white point of D65. There is no choice there - nor can you arbitrarily set one. PCC deals with photometric filters and defines color as: It therefore does not deal with human-visible color. My approach does not actually use data from the image - it is an attempt to create a "calibration profile" from sensor and filter information. I simulate how a sensor with a certain QE graph would behave when paired with filters with certain response graphs if exposed to light from a black body of a certain temperature, and compare that with the standard color for that temperature. From that I derive the transform matrix that needs to be applied to the data (a rough sketch of this is below). This is very similar to what DSLR manufacturers do when they create white balance presets for their cameras - except that their presets depend on the illumination used (hence you can choose daylight, shade, tungsten, incandescent, ...), while here we don't have an illuminated object but rather a source of light, which as such does not change and has a certain "signature". For that reason we only derive a single "preset". If you want to be precise with this approach, you should also include information such as the atmosphere response and the instrument response. I've found this very nice image from a British Astronomical Association paper here: https://britastro.org/vss/ccd_photometry.htm and I'm going to "apply" it as well to the data from the thread I linked to at the beginning of this one.
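For what it's worth, here is a minimal sketch of that idea in Python/numpy: simulate the raw RGB a given QE curve and filter set would record for black bodies of various temperatures, compare with the expected linear sRGB for those temperatures, and fit a 3x3 correction matrix by least squares. All function names are mine, the curves are assumed to be sampled on a common wavelength grid (e.g. values read off the published graphs), and the expected sRGB values are assumed to come from the CIE XYZ matching functions as described later in the thread.

```python
import numpy as np

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
K_B = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength_nm, temp_k):
    """Black-body spectral radiance (arbitrary units) at a given wavelength and temperature."""
    lam = wavelength_nm * 1e-9
    return 1.0 / (lam**5 * (np.exp(H * C / (lam * K_B * temp_k)) - 1.0))

def simulated_raw_rgb(temp_k, wavelengths_nm, qe, filt_r, filt_g, filt_b):
    """Raw RGB a photon-counting sensor would record for a black body of temp_k.

    qe and filt_* are QE / transmission curves sampled on wavelengths_nm.
    The extra factor of wavelength converts energy flux into relative photon
    counts (the photon-energy correction mentioned later in the thread).
    """
    photons = planck(wavelengths_nm, temp_k) * wavelengths_nm
    return np.array([
        np.trapz(photons * qe * filt_r, wavelengths_nm),
        np.trapz(photons * qe * filt_g, wavelengths_nm),
        np.trapz(photons * qe * filt_b, wavelengths_nm),
    ])

def fit_correction_matrix(raw_samples, expected_samples):
    """Least-squares 3x3 matrix M such that raw @ M.T approximates expected.

    raw_samples      : N x 3 simulated raw RGB triplets (one per black-body temperature)
    expected_samples : N x 3 expected linear sRGB triplets for the same temperatures
    Both should be normalised consistently (e.g. each row divided by its maximum).
    """
    M, *_ = np.linalg.lstsq(raw_samples, expected_samples, rcond=None)
    return M.T
```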
  11. We are clearly using different terminology here. By color balance I mean the ratio of the linear R, G and B components for a single color - not any sort of relationship between colors. Saturation will change that ratio. Here we see the RGB ratio for the color in the rectangle: it is 1 : 0.6 : 0.6 (RGB). Now look at the values as I increase saturation: the color is now more saturated than the one above (I just went Color/Saturation and added some), and the ratios are no longer the same - 1 : 0.45 : 0.51. If you change saturation, it will change the RGB ratios, and vice versa (a small numeric example follows below).
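A small way to see this numerically, using Python's colorsys HSV model as a stand-in for the saturation tool (not the exact algorithm Gimp uses, but the effect on the ratios is the same kind of thing):

```python
import colorsys

r, g, b = 1.0, 0.6, 0.6                        # starting colour, ratio 1 : 0.6 : 0.6
h, s, v = colorsys.rgb_to_hsv(r, g, b)

# bump saturation up by 50% and convert back to RGB
r2, g2, b2 = colorsys.hsv_to_rgb(h, min(s * 1.5, 1.0), v)

print(r, g, b)        # 1.0 0.6 0.6
print(r2, g2, b2)     # ~1.0 0.4 0.4 - more saturated, and the RGB ratio has changed
```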
  12. For this method it does not really matter that a black body only approximates real stars. I just needed some sort of spectrum that:
      a) is well defined, so I can calculate exact color information in the CIE XYZ color space as well as the values that will be obtained by the sensor/filter combination for a source giving off that spectrum;
      b) has a good enough spread in color space (the issue with real stars is that most stars sit in a very narrow range of temperatures, extremes like very hot stars are rare, and the chances that we'll find them in the image are slim);
      c) produces colors that we are likely to encounter in astrophotography - or close to those colors.
      I agree that getting accurate color is not easy - but this is because we have not yet developed the workflow or the tools to do it. Removing background light pollution gradients is not an easy task either, but once tools are available and people figure out how to incorporate them into their processing workflow, we start getting nicer images. The theory of color matching has existed for many years now (the CIE XYZ color space was developed in 1931) - we just need to sensibly apply it to astrophotography and we will get accurate colors (the standard XYZ to linear sRGB matrix is shown below for reference).
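For reference, the standard matrix that takes CIE XYZ (with the D65 white point of sRGB) to linear sRGB looks like this; the sRGB gamma encoding is then applied on top for display:

```python
import numpy as np

# CIE XYZ (D65) -> linear sRGB, per the sRGB specification (IEC 61966-2-1)
XYZ_TO_LINEAR_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_linear_srgb(xyz):
    """Convert CIE XYZ triplet(s) to linear sRGB (may produce out-of-gamut negatives)."""
    return np.asarray(xyz) @ XYZ_TO_LINEAR_SRGB.T
```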
  13. Not sure why that would be - the images display nicely on my computer. Maybe we should ask others if they have the same issue?
  14. Notice that the bar above does not go to gray. Gray is a different mix of the primary colors. What you are calling low amounts of RGB will end up dark, like the left side of the image - it won't go to gray as if the "color is washed out". In order to wash out the color you need a different mix of R, G and B. In any case, the calibration above is not good either. I just realized that I missed not one but two important things - the first is photon energy and the second is atmospheric extinction vs. wavelength. It could be that the images end up more saturated than they are now.
  15. With color matching, "color intensity" as you put it means a different color. It is not light intensity - meaning the intensity of the light making up the color. It is about the proper RGB ratio regardless of intensity. Intensity is dictated by luminance; it is important that the ratio stays the same. Look at what happens when you keep the RGB ratio the same and just change the intensity: it is the same color - darker or lighter, but the same color. Btw - I just realized I messed up above: I did not account for photon energy in my code. Back to the drawing board.
  16. This is essentially the product of some of my research and of my failure to do stellar calibration from stars in the image. More on this particular topic can be found here: Because color calibration failed on a sample of stars from the image in the range of 3000-7000K (actually a bit less), I decided to take an algorithmic approach to determining the correct transform matrix. Here are the results. First a little intro: I used the CIE XYZ matching functions found online to calculate XYZ values of black bodies of different temperatures. A known matrix was then applied to transform these into linear sRGB space. For the data I used the QE graph for the ASI1600 published by ZWO, and the same for the Baader filters. I used the following web-based utility to read off values from these images: https://automeris.io/WebPlotDigitizer/ The results of the read-off are plotted in LibreOffice (sensor QE, and the same for each of the filters). Not all curves were sampled over the whole wavelength range - missing values are assumed to be 0. Now the fun part. This graph represents the error in linear RGB space prior to calibration and after calibration. The error is calculated as the Euclidean distance in 3D space between two RGB unity vectors: the expected linear sRGB unity vector for a black body of a certain temperature (X-axis), calculated from the CIE XYZ matching functions and transformed into linear sRGB space, and (column K) the raw RGB values that would be obtained with the ASI1600 camera and Baader filters on a perfect telescope (no color-impacting optics). Column L is the same thing, but this time the raw RGB values were corrected by the obtained correction matrix (a snippet showing this error metric follows below). We can see a significant drop in the error. However, the error is still quite large in the red / yellow zone. This is because sharp cut-off filters are not well suited to capturing true color - which is why color sensors have smoothly varying R, G and B filters on their pixels. Here are two more graphs that will be of interest - these show the difference in the actual R, G and B values. First the raw data: here we can see that green is pretty much as strong as it should be, that red is weaker than it should be, and that blue is stronger than it should be. The same graph after correction: I just realized that I did not account for the energy of photons. Stay tuned / watch this space - I need to redo the whole thing.
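My reading of that error metric, as a small snippet (the "unity vector" is taken to mean the RGB triplet scaled to unit length, so only its direction - the colour - matters; the names are my own):

```python
import numpy as np

def unity(rgb):
    """Scale an RGB triplet to unit length so only its direction (the colour) remains."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb / np.linalg.norm(rgb)

def colour_error(expected_rgb, measured_rgb):
    """Euclidean distance between the two unit RGB vectors."""
    return np.linalg.norm(unity(expected_rgb) - unity(measured_rgb))

# e.g. error before and after applying a 3x3 correction matrix M to a raw triplet:
# err_before = colour_error(expected, raw)
# err_after  = colour_error(expected, M @ raw)
```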
  17. Finally managed to do a somewhat OK color calibration; the only problem is - I don't know if it is correct. It does look correct(ish). I'm going to explain the details in a separate thread that I'm going to link to. For now, just two images here - one with gamma-corrected color and one with linear RGB (essentially wrong, but closer to what we are used to seeing). Linear RGB (wrong): sRGB gamma applied - correct color rendition: This calibration was performed on stars (or rather ideal black body emitters) with temperatures in the range 2000K-50000K (200K intervals - a total of 240 calibration samples). Some assumptions that might not be true:
      - Rodd's telescope does not add a "tone" of its own - the glass is completely neutral.
      - The ASI1600 QE graph published by ZWO is correct.
      - The filters used were Baader RGB filters and their published transmission graphs are correct.
      - The atmosphere is completely transparent - or, another way to look at it: this is what the image looks like when viewed through our atmosphere without correction.
  18. The intensified blue in spiral arms actually has roots in science. There are two reasons for too much blue in images. The first has to do with science - at some point it was very interesting to scientists to show certain features of galaxies emphasized, like we do with an Ha layer on top of our images. Star-forming regions contain hot young stars, and for galaxy formation and evolution science it was very important to distinguish galaxies that have star-forming regions from those that don't. Boosting blue emphasizes those regions because of the hot young stars that have bluish light. The second reason has to do with quantum mechanics. If you look at the response curves of modern sensors, they all more or less have the following feature: the green part of the spectrum is the most sensitive and the blue and red ends of the spectrum are less so. Sometimes blue is a bit more sensitive than red, but more often it is the red part of the spectrum that is more sensitive than blue. The fact that green is the most sensitive of the three plays into our hands - human vision is such that we perceive brightness from green the most. For the sake of argument we could say that blue is about as sensitive as red (the fact that red is often more sensitive just adds to this later). Because of the way cameras work, we actually need to boost blue to get the camera signal close to what we see. We define color in terms of energy, but cameras are photon-counting devices. Photon energy depends on wavelength, and blue photons have higher energy than red photons, so there will be more red photons than blue photons for the same total energy. If we have an equal-energy light source (one that shines equally across the visible spectrum - illuminant E), our cameras will gather more red than blue photons. In order to compensate for that, we need to boost the blue component (see the quick calculation below).
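A quick back-of-the-envelope check of that last point (the 450 nm and 650 nm wavelengths are just illustrative picks for blue and red):

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of one photon: E = h*c / lambda."""
    return H * C / (wavelength_nm * 1e-9)

def photons_per_joule(wavelength_nm):
    """Number of photons of this wavelength in one joule of light."""
    return 1.0 / photon_energy_joules(wavelength_nm)

# For equal energy, red light delivers ~44% more photons than blue light,
# so a photon-counting sensor needs blue boosted to match perceived colour.
print(photons_per_joule(650) / photons_per_joule(450))   # ~1.44
```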
  19. I'm planning to do a little demonstration as soon as I purchase the kit needed to do it. I guess that this will help people understand what is going on. In my quest for color calibration, one of the options out there is to do relative color calibration. I'm currently developing an algorithmic approach for the above problem of the narrow range of stars - since star calibration failed, as it only uses a very small subset of possible stars. I'm now working on synthetic stars in the range 2000-50000K. That should be plenty of range. In any case, relative color calibration depends on an already calibrated source - like a good DSLR camera, a color passport (which does not need to be a calibrated one for this purpose - any well-printed range of colors will do) and a regular lens. That is the part of the kit I'm missing at the moment - a way to mount a regular lens on my ASI1600. I want to show what the "regular" processing we use in astrophotography does to the normal colors of daytime photography - by providing side-by-side shots: one with a DSLR in common lighting conditions (summer is coming and I was thinking of regular sunlight as the source of light) and one with an astro camera and RGB filters. That way we can compare how accurate the colors are when you shoot them the way we shoot astrophotography. I wonder if anyone has done this and posted the results online? I have not searched to see if I can find it. Maybe someone who already has this kit could give it a go?
  20. I did not read the other responses, but for the method I outlined above it is important to have the tripod level, as any tilt of the tripod base (left/right) translates into an error in the star marker position - same as if the time was not properly set. I say get it reasonably level (the bubble should be centered) but don't obsess about it.
  21. It is seriously interesting - it looks like someone pinned it down in one place to "the sky"
  22. I don't have experience with the polar alignment procedure using the handset, but I can tell you some points that might help you out. I polar align using EQMod and a laptop, but I guess it should be fairly similar. I polar align without the scope on my mount - that makes things easier, as there are no counterweights or scope to get in the way. You say you have your polar scope aligned - that is good, so I don't have to explain that part.
      1. The first thing to do is to get Polaris dead in the center using the alt and az polar controls of your mount.
      2. The next thing to do is to bring Polaris "up" until it hits the "clock dial". You need to have your mount level for this to work - that is the important part - so use a bubble level before putting the mount head on the tripod; it needs to be level.
      3. Now you need to bring 12 o'clock to Polaris. You can either unlock your RA clutch and rotate in RA until you get the 12 o'clock mark to Polaris, or use the hand controller and slew in RA until you do the same.
      4. Tell the controller that you are in the "polar home position" - or that 12 o'clock "is up". Not sure how this is done with the hand controller.
      5. Tell the controller to move the 12 o'clock mark to the correct position. You see, this is the main part of the polar alignment process. First, you and the mount need to agree that a certain mark (here 12 o'clock) "is up" - or at the highest position on the circle. You ensure this by placing Polaris in the circle center, then just moving up via the alt movement of the head until you reach the top of the circle, and then rotating the circle until 12 is at Polaris. The controller can then simply turn the RA axis until the 12 mark gets to where Polaris should be.
      6. Using the alt and az polar alignment controls, put Polaris at the new 12 o'clock position - mind those three circles and the year markers, and try to get it right.
      7. After you are done, and before you do the actual alignment of the goto system, return the scope to the home position (the scope home position, not the polar home position) - that means scope up and pointing towards Polaris. Now you can begin goto alignment.
      Btw, don't worry if the polar scope's 12 o'clock mark is not "up" with the mount - it depends on how it is installed at the factory, and you can end up with the CW bar being to the side or even up. Sometimes it is easier to choose one of the other markers - like 9 o'clock or 6 o'clock - to align on; just make sure that you use the same marker throughout the procedure.
  23. The top part almost gave me a headache. There is something wrong with the panel alignment - everything is "doubled" / looks out of focus (but not telescope out of focus - more like too-much-beer out of focus). The bottom part is nice and sharp.
  24. @Rodd What filters did you use for the above image? Astrodon or Baader? I'm trying another approach for calibration - deriving the transform matrix from the QE of the camera and the filters.
  25. I'm seriously reluctant to post this image because I fear for Rodd's health and his interest in astronomy. Rodd, please don't take this image seriously, as it is an example of a partial / failed calibration. I'll explain why it failed and why it gives the result that it gives. First a bit of background: this calibration was done with 18 stars. The image was plate solved by Astrometry.net, 18 stars were randomly picked (well, many more, but many were not found in Simbad), their effective temperatures were extracted from VizieR / GAIA DR2, and the photometric measurements were done in AstroImageJ. This provided 18 color points from which to calculate the transform matrix. Here is the transform matrix: And here is the "Delta E" (calculated in linear RGB) for different temperatures: As you can see, the error starts to become quite large at a temperature of 6550K (~0.07); you'll also notice the very limited temperature range used for calibration: ~3990K - 6790K. The issue is that, as we have seen previously, A, O and B class stars tend to be rare in the main sequence population - and as luck would have it, I did not manage to get a single one of them in my calibration set (luck, or simple probability?). In any case, I did not have one blue star in my calibration set. For that reason the error starts to increase at the blue end of the scale, and given that I had only a few samples below 5200K, the error starts to increase there as well. In any case, here is the image: You'll notice that it is completely devoid of blue - but that is because of the poor calibration; I did not have a single blue star in my calibration set. Nor a deep orange one, for that matter.