Everything posted by vlaiv

  1. It is a good presentation to help people understand that perception of color is one thing and the physics of color something else. If you take the same color and put it in different contexts, people in one context will see it as brown and in another will see it as orange - it is not the color that changes, it is something else. This is not limited to brown - there are plenty of other cases where this happens, but again, that does not mean that we cannot measure it, match it and reproduce it. Take for example this: You can clearly "see" that A and B are of different color, right? Well, they are actually the same color: There are many things that impact how we perceive a particular stimulus. We get the same light hitting our eye - but it's our brain that decides what color we "see". In astrophotography, I think that we need to make sure that the first part is true - that we take care that the light hitting our eye is the same "type" of light that was emitted by the celestial target. What color our brain is going to make out of it is not up to the astrophotographer, as he/she can't control viewing conditions.
  2. You are of course right, but I wonder how much error there is depending on the selected transform. If you think about it, even scientific photometric measurements are done in one system or another - for example UBVRI or ugriz. These have specific frequency responses (much like different camera sensors). No sensor has exactly the response of these filters, yet photometry is readily done in different photometric systems and conversion between systems is performed. Scientists must correct for their imaging sensor to get compatible results within a system, and cross-system conversion also needs to be performed. This is for scientific data, not color reproduction. If that area can tolerate errors in measurement, I don't see why we in astrophotography can't. Indeed, if you take two sensors and record an XYZ value with each (take raw data and then convert to XYZ using the selected transform matrix), how different will the XYZ values be? We can certainly call it error of measurement rather than error in procedure, and the difference between XYZ values may well be within a small deltaE for most spectra? (A sketch of how such a difference can be quantified follows below.)
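As a rough illustration (not part of the original post), here is a minimal sketch of quantifying such a difference: convert two XYZ readings of the same patch to CIE Lab and compute deltaE (CIE76) between them. The white point and the sample readings are hypothetical.

```python
import numpy as np

# D65 reference white point (scaled so Y = 100)
XYZ_WHITE = np.array([95.047, 100.0, 108.883])

def xyz_to_lab(xyz, white=XYZ_WHITE):
    """Convert an XYZ triple to CIE Lab (CIE 1976)."""
    t = np.asarray(xyz, dtype=float) / white
    delta = 6.0 / 29.0
    f = np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4.0 / 29.0)
    return np.array([116.0 * f[1] - 16.0,       # L*
                     500.0 * (f[0] - f[1]),     # a*
                     200.0 * (f[1] - f[2])])    # b*

def delta_e76(xyz1, xyz2):
    """Euclidean distance in Lab space (deltaE CIE76)."""
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2))

# Hypothetical XYZ readings of the same patch from two different sensors
sensor_a = [41.2, 35.8, 20.1]
sensor_b = [40.7, 36.1, 20.6]
print(delta_e76(sensor_a, sensor_b))  # a deltaE near 1 is barely noticeable
```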
  3. Yes they can - same process as with any other spectrum. We integrate the line spectrum of Ha or OIII emission and we get X, Y and Z. In fact, given that we have only one line, the X, Y and Z values will be the values of the X, Y and Z matching curves at the respective wavelength times the source intensity (sketch below). The issue with monochromatic light sources is that they lie outside the gamut of most display devices. We can't represent them as tristimulus RGB values, but we can as XYZ.
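A minimal sketch of that calculation, using rough CIE 1931 2° matching-function values near the Ha and OIII wavelengths (the numbers here are approximate and only for illustration - use a full CMF table for real work):

```python
# Approximate CIE 1931 2-degree color matching function values at two wavelengths
CMF = {
    656.3: (0.21, 0.08, 0.00),   # near H-alpha (values rounded, illustrative)
    500.7: (0.005, 0.32, 0.27),  # near OIII
}

def line_to_xyz(wavelength_nm, intensity):
    """XYZ of a single emission line: matching-function values times intensity."""
    xbar, ybar, zbar = CMF[wavelength_nm]
    return (intensity * xbar, intensity * ybar, intensity * zbar)

print(line_to_xyz(656.3, 1.0))  # H-alpha: mostly X, little Y, no Z -> deep red
print(line_to_xyz(500.7, 1.0))  # OIII: mostly Y and Z -> teal / blue-green
```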
  4. Just realized that we said the same thing - we only use the word color to mean different things (both valid). You use it to denote the perception of a light stimulus, while I use it to denote a physical quality of the light stimulus. The first can't be measured while the latter can.
  5. Color can be measured as an XYZ tristimulus value. Perceptual response will differ - but if I show you two light spectra, both having the same XYZ tristimulus value, next to each other under any one set of conditions, you'll conclude that those colors match. Under different conditions you'll perceive a different color - and that is fine - we have perception that depends on a lot of things. Sometimes a particular temperature feels cold and sometimes hot, depending on our surroundings. Sometimes the same sound level seems loud and sometimes quiet. Perception depends on conditions - but that does not mean that the physical phenomenon related to our perception can't be measured, reproduced and matched.
  6. Most are APOD images - collected and presented by various people - and again, hence the variance. If I had to choose, based on my (limited) knowledge of astrophysics and color theory, this version:
  7. Indeed, for some reason we give up the ability of repeatable measurement as far as color goes. Take any two images of a galaxy - any galaxy or other object - and if I give you the distance in arc seconds between two stars in the image, you'll be able to provide me with the size of the object in arc seconds (sketch below). No matter what people do in their processing, the measured size of the object will not change between images. I can do the same thing with color - given the color / temperature of those stars in the image, give me the XYZ ratio of a particular part of the object. The answer should be the same across images. We are not talking about perception - we are talking about measurement of a physical quantity. Almost no images are calibrated properly enough that we can do the above color exercise. We simply don't care about color, to the extent that even from the same set of data we end up with vastly different colors.
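A minimal sketch of that size measurement (star positions and the catalogue separation are hypothetical):

```python
import math

# Known angular separation between two reference stars, and their pixel positions
known_sep_arcsec = 120.0                      # hypothetical catalogue value
star_a, star_b = (512, 300), (1024, 684)
pixel_sep = math.dist(star_a, star_b)

plate_scale = known_sep_arcsec / pixel_sep    # arcsec per pixel
object_extent_px = 850                        # measured extent of the object in pixels
print(object_extent_px * plate_scale, "arcsec")   # the same answer from any image
```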
  8. Here is something that really bothers me. Can we do the same for, say, temperature or length? Can we say: let us each measure the length of some object and then someone will decide what the correct length of the reference object is? We have standards, and we all know how long one meter is, and we can each measure the object; given that there is a defined standard, we can compare our measurements and we can reproduce a given length. Now, I can feel that 1 meter is too long and someone else can feel that 1 meter is rather short - but that won't change our measurements or our ability to reproduce a correct 1 meter. The same holds for color. We have a measurement standard for it. We can reproduce it after we measure it. Someone will perceive it as being orange, others as being brown (dark or lit-up room) - but it will be the same color. Perception aside, we have a standard and a way to measure and reproduce things, yet we still feel like there is a need for an arbiter to decide what is right.
  9. I think you are somewhat missing the point here. I'm not seeking a way for all of us to get the same perception - that is an individual thing. I'm just suggesting that we should be able to do color matching, much like in the example I gave above: although we might perceive colors differently as individuals, all of us are quite capable of saying that these two colors are the same or very close, or that these two colors are rather different. One of the aims of photography in general is to match the color. If something is blue, you like it to be blue in the photo as well; you don't feel comfortable if it turns orange. Possibly. In some way, you contributed greatly by offering a resource for anyone interested in color management to follow up on. By raising awareness that color should be managed in the amateur AP workflow, we can bring the field forward?
  10. You raise a very valid point - how can we tell if our color rendition is "good" or "bad"? I think there are two approaches. One, probably foreign to most people, would be to rely completely on the theory of color and light. Luckily there is another approach - the way most people learn the skill of processing - and that would be trial and error. We can all do the following: shoot a daytime scene with a device that gives us good color rendition (DSLR / compact camera / smartphone). We can see the colors with our eyes in real time and compare them to those captured by the device. We can verify that they agree. The next step would be to take a bunch of images with our astrophotography camera. We can use an aperture stop to limit the amount of light and we can use very short exposures - to get exposure levels compatible with AP levels of light. Then we take those raw exposures and stack them (no need for alignment as the scene is stationary and there are no stars), we apply all the calibration frames we apply in our regular AP workflow, and in the end we process that image (a sketch of the stacking step follows below). Will we be able to reproduce the colors captured with the reference device (and seen with our own eyes)? For all those that worry about color balance (actually color adaptation transform) - don't shoot an illuminated scene - shoot your computer monitor in a dark room - that will simulate an emission scenario that is closer to a real astronomy scene. In fact, I can see a competition: if someone shoots relevant data, they can share it for all the others to attempt to process the image so that the colors are a close match to the reference image (the point being not to rely on the reference in the processing stage - only for verification once you are done).
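A minimal sketch of the stacking and calibration step described above, assuming the short exposures and master calibration frames are already loaded as numpy arrays (file handling and processing omitted):

```python
import numpy as np

def calibrate_and_stack(lights, master_dark, master_flat):
    """Average-stack calibrated short exposures of a static scene.

    lights: list of raw frames (2D numpy arrays), identical exposure settings
    master_dark: matching master dark frame
    master_flat: master flat frame (will be normalized to its mean)
    """
    flat = master_flat / master_flat.mean()          # normalize the flat
    calibrated = [(frame - master_dark) / flat for frame in lights]
    return np.mean(calibrated, axis=0)               # no alignment needed: static scene
```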
  11. Depends what your goal is. Again, the atmosphere is a local phenomenon and we can measure it locally and remove its influence if we wish. Someone might want to capture the color as it looks from the surface of the Earth - in that case, they will leave the influence of the atmosphere present. If, on the other hand, we wish to capture the color of the object as seen from within our solar system - just orbiting the Sun - then we will remove the influence of the atmosphere.
  12. Using Andy's work was just for emphasis. I find it quite amusing that you can take a bunch of images of the same object, put those in a mosaic and get something resembling Andy's art, because each of those has a different color cast / mix. First of all, I'd like to be clear about some things. I do apologize to anyone feeling bad because of this; like I said, this is not a direct critique of their particular work - it is a critique of the process we all utilize when processing astrophotography data. However, I will not forfeit my right to criticize what I believe is a wrong approach. Am I saying that at least 8 out of 9 images are wrong? Well, yes, that is what I'm essentially saying - but don't take it in the context of this competition - take it in a broader context. You can always justify this particular case by it being a contest and having artistic freedom to do so - and I respect that. You can't really do the same for an arbitrary celestial object and images found online. Yes, take any galaxy and search for images and you'll find the same or almost the same level of variation in color. I see a lot of comments like (and again, I'm not referring to this thread alone - what I'm talking about has been the predominant note for quite some time): this is only a hobby, color can be chosen in an artistic manner, there is no real color and so on... To me that seems simply like dodging the issue of proper color management. Would you feel the same if it were something else: Here, look at my artistic freedom. I'm not advocating one prescribed / technical way of processing images. What I'm saying is that we need to embrace proper color management in the same way we embrace flats or darks - it is part of the processing workflow - but maybe not part of the post-processing workflow. I'd argue that one needs to master the technique before they start to bend it in accordance with their artistic vision.
  13. I just realized something, and I need to apologize to all whose work I used. I did not mean this as a critique of your work directly, but rather of the process that is predominant in the AP community. I used your renderings because it gave me a chance to easily emphasize what I think is wrong with AP in general - and not only your work - precisely because it is a competition with the same data, hence the same RGB values.
  14. Color calibration does not require the use of astrophysics and we don't need to check our data against known objects. It can easily be done by inspecting the behavior of the measurement device - after all, that is what calibration means: adjusting the measuring device so that it is in line with the standard of measurement. In this case, since the above was not done, I don't really see a problem in using one known property of the object to do the same calibration (like the relationship between the spectrum / temperature and the color of a star - as there is not a single star in the image - there are many, and either we get a surprise that they are all different than regular stars, or they are regular stars). It is a bit like inserting a ruler into the image of an object so we have a comparison for size. You touch here on a very important point regarding color. We use color to represent both a physical quantity and a subjective feel. It would be a good idea to distinguish the two. It is a bit like temperature. We can measure the temperature of an object with a thermometer and record the absolute value of the temperature. We can also feel that object by hand and record what we feel. These two types of measurement can be in disagreement. If you put your hand in a bowl of water that is at 10°C on a hot summer day, you'll feel that it is pleasantly chilling, but the same water on a cold winter day will feel lukewarm. We now have two different sensations for the same physical quantity. The level of light does not change the physical characteristics of the light - it does not change its spectrum. Light will be of the same color regardless of whether there is little or plenty of it. Our perception / feel of it does change. Once we control the physical aspect of it, then we can choose to control the feel aspect as well at a later stage - that is what Color Appearance Models do (CAMs - https://en.wikipedia.org/wiki/Color_appearance_model). In order to measure the physical property of light we don't need to incorporate our feel into it yet, nor do we need to know what the object looks like. It is the same way we can measure the 10°C of water in the bowl. This is where I feel people fail - they skew those measured 10°C even before they get to the stage where they can use that information to impart a particular appearance to the end observer. Color information collected by the camera is a correct reproduction of the light, of course - so why do we remove green? Well, that is because we don't have a full understanding of color and how it works. Most people think that they can take raw "RGB" data from the camera and use it as "the RGB" data for the image - and this is not true. The camera must first be color calibrated to the standard we use in image processing. Cameras have different QE curves for R, G and B, right? To get back to the thermometer analogy, it's like we have thermometers with different number scales on them. No one would expect to get the correct temperature in Celsius from an arbitrary number scale on each thermometer - we would of course first seek a transform rule, and after we do that all thermometers will agree on their reading - they will all show the same temperature, and in the camera case they will all show the same color (a sketch of such a transform follows below). We actually don't need a calibrated display device in order to render the proper color of the image. A calibrated display device is needed only if we are the ones choosing the color and we need our choice to conform to a standard. In astrophotography, if we want to show the real color of the object, then we can't really choose it. Choice of color is for creative artists like designers.
We can argue that there is a place for creative arts in astrophotography, and ok, I'll accept that some people will like to tweak the color of an object for artistic purposes; however, I also think that the proper approach to that is to control the color from the beginning and have complete understanding of what one is doing - otherwise it's just splashing some paint on the canvas. Luckily it is a bit easier than that. It turns out that our eyes work a bit like very low resolution spectroscopes - producing only 3 numbers instead of hundreds of measurements along the 400nm-700nm range. The same is true for cameras - they also produce three values, and it turns out that these three values are enough to reproduce (more or less accurately) the original color. In fact, we have the very well defined XYZ color space that does exactly that - it pretends to be a very low resolution spectroscope that is compatible with our vision system, and if you take any two light sources that produce the same XYZ value, although they might differ on a finer scale of the spectrum, people will see those two as the same color. The trick of proper color management is to use the above XYZ as a first step - we want our cameras to produce the same values as XYZ - this should be the measurement scale that we all agree on. Seeing M81/M82 the way we see them in our images is probably physically impossible. Even if one could fly off to a certain distance from the pair so that they appear as large, they won't appear as they do in our images. That does not mean that we can't observe their color or their shape. It is obvious that we can observe their shape and features from the photo. We never argue whether that is the proper shape or the proper features. Why is that? Because we can easily match those across different images. It is the same measurement, repeated. Color is not - not because there is no proper color - but because we don't measure it properly. We don't produce the same color results because we don't use a proper workflow. We justify that by artistic freedom, but in reality we are robbing ourselves of getting to better understand the objects - much like if we were to lose the proper shape of an object due to our processing. We can never see bacteria with our own eyes - yet we don't think that images capturing them are somehow false. Is it? What color is the "sky" - blue, red, brown, pink? I'm guessing that at most one of them is correct, which means that all the others are slightly off - just a difference between blue and red away
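A minimal sketch of what such a "transform rule" could look like in practice: fit a 3x3 matrix mapping the camera's raw RGB to XYZ by least squares, using patches whose XYZ values are known (for example a color checker shot under known light). The patch values below are placeholders, for illustration only.

```python
import numpy as np

# Hypothetical data: raw camera RGB of a few reference patches (one row per patch)
# and the known XYZ values of the same patches under the same light.
camera_rgb = np.array([
    [0.82, 0.41, 0.10],
    [0.15, 0.55, 0.20],
    [0.10, 0.30, 0.75],
    [0.60, 0.60, 0.58],
    [0.35, 0.20, 0.45],
    [0.90, 0.85, 0.80],
])
known_xyz = np.array([
    [0.40, 0.30, 0.05],
    [0.25, 0.45, 0.18],
    [0.20, 0.20, 0.70],
    [0.55, 0.58, 0.60],
    [0.30, 0.22, 0.42],
    [0.85, 0.88, 0.90],
])

# Least-squares fit of a 3x3 matrix M so that camera_rgb @ M ~ known_xyz
M, *_ = np.linalg.lstsq(camera_rgb, known_xyz, rcond=None)

# Once fitted, any raw pixel can be brought onto the shared XYZ scale
raw_pixel = np.array([0.42, 0.37, 0.21])
print(raw_pixel @ M)
```

More patches and a proper regression (possibly with a chromatic adaptation step) give a better fit; the point is only that the transform is determined by the sensor, not by taste.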
  15. This is the part that worries me. Not some future AI, but rather novice astrophotographers without a clear understanding of the color processing workflow - looking up images of that object online and then trying to replicate the color / results. Just by looking at other people's work we start to form an opinion on what a "good" astro photo should look like, and I'm not sure that I like that. I really do like the idea behind regular daytime photography: you can press the button and out comes an image that really resembles what you are seeing with your eyes. No major color change, except perhaps color adaptation if the lighting was significantly different. We know what sort of workflow produces such an image in daytime photography - why not replicate it in astrophotography? That way we will get consistent color that represents the color of the object imaged.
  16. To be honest, I have no idea where to find it in LR5 - I haven't used that app myself, but it is a very standard operation in image manipulation software like PS and Gimp. In Gimp, for example, you can find it grouped with the blur filters (if you know where Gaussian blur is in LR5, you are likely to find Median there as well).
  17. I think that your best bet is to do it manually. Make a layer copy of your image and run a median filter on it. Set the radius to at least 3px (4-5px preferably). This will kill off most small bright features - hot pixels included. Now add a layer mask and just "paint" in the median version in the places where you spot a hot pixel (a scripted version of the same idea is sketched below).
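For anyone who prefers to do the same thing outside an image editor, here is a minimal sketch in Python, assuming the image is already loaded as a 2D numpy array (the threshold value is arbitrary):

```python
import numpy as np
from scipy.ndimage import median_filter

def repair_hot_pixels(image, radius=4, threshold=5.0):
    """Replace isolated bright outliers with their median-filtered value.

    image: 2D numpy array (single channel)
    radius: median filter radius in pixels (3-5 px works well)
    threshold: how many sigmas above the local median counts as a hot pixel
    """
    smoothed = median_filter(image, size=2 * radius + 1)
    residual = image - smoothed
    hot = residual > threshold * residual.std()   # this plays the role of the layer mask
    return np.where(hot, smoothed, image)         # "paint" in the median version
```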
  18. It is usually better to pick your gear based on the targets you want to image rather than budget or some other consideration. What do you want to image? If you want to concentrate on wide field / low resolution work, then yes, a 70mm scope + portable mount is a good solution. The 130PDS has 650mm of focal length versus about 350mm for a 70mm F/6 scope paired with a field flattener / focal reducer of about x0.8. That is roughly x2 in focal length, and consequently the FOV is about x2 smaller for any given sensor. It also means about x2 finer resolution for a given pixel size (sketch below). With the 130PDS you can go wide field by doing mosaics, but with the smaller scope and a less precise mount you can't get a higher resolution image.
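A minimal sketch of that comparison for a hypothetical sensor (23.5mm wide, 3.76um pixels); the constant 206.265 converts micrometers of pixel size over millimeters of focal length into arcseconds:

```python
import math

def fov_deg(sensor_mm, focal_mm):
    """Field of view along one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def sampling_arcsec(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

for name, focal in [("70mm F/6 + x0.8 reducer", 350), ("130PDS", 650)]:
    print(name,
          round(fov_deg(23.5, focal), 2), "deg wide,",
          round(sampling_arcsec(3.76, focal), 2), "arcsec/px")
```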
  19. Not sure I follow. I just said that this FPL-51 triplet is better corrected for astrophotography than any fast FPL-53 doublet. It is a rather affordable scope as well.
  20. Visually yes, but for imaging I'm not so sure. I've seen images taken with OSC sensors and a 102mm F/7 FPL-53 scope and there is residual blue bloat around hot stars. Yes, it can be removed in post processing or by use of an Astronomik L3 filter, and it won't be there if one uses mono + LRGB, but still - I'd rather use a triplet for imaging if I had the choice.
  21. As far as I can tell, the consensus is that it is a very good piece of kit - for both visual and imaging. It is sold under many different labels - StellarVue, Telescope Service, AstroTech, TechnoSky, just to name a few. You might be interested in a review of that scope by Ed Ting (lately in the form of YouTube videos - so that is new): https://www.youtube.com/watch?v=sE18PBQRzDQ
  22. What about something in between? I often read here on SGL that people starting with astrophotography want to just capture what is there and share it with their friends and family. We can't say that such images are necessarily taken for their scientific value, but when I read the above I can't help but think of the vacation photos people take. Say you were in Greece on your summer holiday and you went to the Acropolis of Athens. You want to share that experience with your friends and family, and you take an image of, say, the Parthenon. What is the image you want to share with your friends and family? This one: Or this one: If we attribute everything to artistic freedom, just because there might be a lack of knowledge of a proper processing workflow, are we doing a disservice to others who come after us and look up to current work as a reference? It is no wonder that myths like "no correct color" are being perpetuated.
  23. A mount is all about two things - stability and precision of tracking. The EQ5 can handle up to 9-10kg in total and the recommended limit for imaging is 6.5kg (according to FLO - usually taken to be 2/3 of total capacity). The HEQ5 can handle up to 15kg for visual and about 11kg for imaging. The HEQ5 is a heavier mount than the EQ5 - I don't have figures but it is certainly at least 4-5kg heavier altogether - which means more stability. The HEQ5 is more precise in tracking and has more powerful stepper motors (0.287642" per step for the EQ5 vs 0.143617" per step for the HEQ5; a quick calculation from those figures is sketched below). The EQ5 can be purchased as a manual mount and retrofitted with RA/DEC drives (even a single tracking drive) - the HEQ5 was a powered version "from the beginning" (it was designed that way). In the end, if you mod / tune an HEQ5 you can have a mount that guides at 0.5" RMS. Out of the box it will do 1" guide RMS most of the time. I don't have stats for the EQ5 in this regard.
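A quick back-of-the-envelope check of those step sizes, using the standard length of the sidereal day (~86164 s); everything else follows from the figures quoted above:

```python
FULL_CIRCLE_ARCSEC = 360 * 3600   # 1,296,000 arcseconds per revolution
SIDEREAL_DAY_S = 86164.1          # seconds per sidereal revolution

for mount, step_arcsec in [("EQ5", 0.287642), ("HEQ5", 0.143617)]:
    steps_per_rev = FULL_CIRCLE_ARCSEC / step_arcsec
    steps_per_sec = steps_per_rev / SIDEREAL_DAY_S
    print(f"{mount}: {steps_per_rev:,.0f} steps/rev, "
          f"{steps_per_sec:.1f} steps/s at sidereal rate")
```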
  24. I think that this is just a perpetuated myth in AP. All cameras have unique raw data, yet they manage to reproduce color without much trouble. Look at this image for example: I took this image to show how easy it is to reproduce color. I took a color wheel and displayed it on my computer screen - then I took my smartphone and made an image of it (this was actually the real-time view of what the camera is seeing), and I took another smartphone and made the above image. Without any special calibration or color balance (you can see that the ambient light is fairly warm), these devices were able to fairly accurately reproduce the colors. There might be some saturation and brightness variation, but these colors match. It's not like blue has been replaced with red or whatever. If smartphones can easily do it, why do we have such a hard time accurately representing color (maybe because smartphones are, well, smart?).