Everything posted by vlaiv

  1. Ok, here is what I mean by "there is no need for white balance in astrophotos", or rather that we need to find a CCM for "no white balance", or whatever you want to call it. I took this image: Displayed it on screen (actually, I had this thread open on my computer screen with this image showing), took my DSLR camera and chose the faithful style and custom color balance - which I did not change, leaving everything at zeros / center / default. I took a raw image of the screen and a piece of white paper. My computer screen shows just a tad oversaturated colors - or is that the camera adding "vibrant" colors even though I specified faithful? But the colors match pretty well. However, the white paper in the left part of the image is not white at all - it is rather a murky brown. That is because I have an artificial light on next to me - 2700K warm yellow light mixing with cloudy daylight coming from the windows. To my eyes that piece of paper looks white - my mind is doing its thing - but the colors on screen look as they should - as in the image. If I try to color balance in any way, I get this: Sure, now the paper looks ok, but the colors on screen no longer look right. My point being - we want a CCM that does the following. When you take a picture of a light source and display it on screen, two things happen: you see the color of the light source and the image on screen as the same color (regardless of the actual color), and when you image the screen next to the light source with any camera and apply its CCM - you get the same linear RGB values for both monitor and light source. Hm, that got me thinking - maybe we can derive a CCM like that. Take an image and display it on a mobile phone. Take an image of it. Display that image on a computer screen and take another image of both computer screen and phone - then refine the CCM iteratively until you get the same values for both screen and phone?
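That "refine the CCM" step is essentially a least-squares fit. Here is a minimal sketch of it in Python/numpy, assuming you have already collected matched patches: raw_rgb holds the camera's linear raw triplets and ref_rgb the corresponding reference linear values - both arrays are made-up placeholders, not data from the experiment above.

    import numpy as np

    # Made-up illustrative patch averages (replace with your own measurements).
    # raw_rgb: camera raw linear RGB, ref_rgb: reference linear RGB (or XYZ).
    raw_rgb = np.array([[0.80, 0.55, 0.30],
                        [0.20, 0.60, 0.25],
                        [0.10, 0.15, 0.70],
                        [0.50, 0.50, 0.50]])
    ref_rgb = np.array([[0.90, 0.40, 0.20],
                        [0.10, 0.70, 0.20],
                        [0.05, 0.10, 0.85],
                        [0.50, 0.50, 0.50]])

    # Solve raw_rgb @ M.T ≈ ref_rgb for the 3x3 matrix M in the least-squares sense.
    M_T, residuals, rank, sv = np.linalg.lstsq(raw_rgb, ref_rgb, rcond=None)
    ccm = M_T.T

    # Apply the fitted CCM to any raw triplet:
    corrected = raw_rgb @ ccm.T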
  2. This is the part I'm having an issue with - an astronomical object doesn't have an original light source shining on it that you need to create a CCM for. You need to create a CCM for D65 so that the light emitted by the object has the same CIE XYZ coordinates as the light emitted by the computer screen showing the image of that object. Right? You want the light from the object and the light from the computer screen, when viewed "next to each other", to appear as the same color - regardless of what our brain perceives that color to be - pure white, yellowish tint or whatever. For us to see them as the same color, regardless of what that color is, we need to match not the spectrum but the CIE XYZ coordinates. The coordinates are matched if the same white point is used in the camera and on the screen - in this case D65. Or am I missing something? I have an interesting experiment, let me process the raw data ...
  3. This is even worse than I thought: https://ninedegreesbelow.com/photography/srgb-color-space-to-profile.html Here is a good website with a bunch of articles on color spaces, color profiles (important when working in color-managed software like Photoshop and Gimp) and conversions between them. In any case, here is an acid test of whether your color correction workflow is good:
- Take a raw image with your camera.
- Use your favorite app to make a regular image out of it. Try not to do white balance on it - let it record what it sees if possible - something like faithful style or similar.
- Take the same raw file and use software like FitsWork to extract the raw fits data.
- Debayer that raw fits data and then use your workflow to compose an image out of it (as you would with an astro image).
- See how different it is from the image produced by the dedicated software above.
  4. I'm lost a bit here. DxOMark gives matrices for the D50 and A illuminants. sRGB requires the D65 illuminant as its white point. How do we convert a matrix from one illuminant to another without knowing the spectral response?
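For what it's worth, the usual workaround (described on Bruce Lindbloom's site linked later in this thread) is a chromatic adaptation transform such as Bradford: it maps XYZ values from one white point to another without needing the spectral response. A rough sketch, assuming we already have a camera-RGB-to-XYZ matrix valid for D50 (the matrix below is just an illustrative placeholder) and want its D65 equivalent:

    import numpy as np

    # Bradford chromatic adaptation matrix and white points (CIE XYZ, Y = 1).
    BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                         [-0.7502,  1.7135,  0.0367],
                         [ 0.0389, -0.0685,  1.0296]])
    D50 = np.array([0.96422, 1.00000, 0.82521])
    D65 = np.array([0.95047, 1.00000, 1.08883])

    def adaptation_matrix(src_white, dst_white):
        """XYZ -> XYZ matrix that adapts colors from src_white to dst_white."""
        src_cone = BRADFORD @ src_white
        dst_cone = BRADFORD @ dst_white
        scale = np.diag(dst_cone / src_cone)
        return np.linalg.inv(BRADFORD) @ scale @ BRADFORD

    # Hypothetical camera RGB -> XYZ matrix valid for a D50 illuminant.
    ccm_d50 = np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.7, 0.1],
                        [0.0, 0.1, 0.9]])

    # Adapted matrix: go to XYZ under D50, then adapt those XYZ values to D65.
    ccm_d65 = adaptation_matrix(D50, D65) @ ccm_d50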
  5. That is one nice looking PI. Mine is a bit slimmer, but leaves some things exposed:
  6. White balance is not the proper term to use in the context of astrophotography. I'll briefly explain why.
Our visual system (eyes + brain) has a very interesting feature - it adapts to the environment, and we perceive color differently depending on our viewing environment. This is particularly pronounced with white. When something looks white to our eyes, that neither means it is "the whitest white" nor that it is in fact white in absolute terms (although there is really no such thing, as white is a product of our perception and it changes depending on conditions). We may see an object as being pure white - and this can instantly change when something "whiter" enters our field of view. A sheet of white paper is good for this as it is generally really white. If you have a white wall and you see it as white - just put a sheet of white paper next to it and then ask - is the wall still white? Another interesting side of this is that we can take that sheet of paper into sunlight, onto a cloudy day, or next to an incandescent or fluorescent light - and we will always see it as white (given that it is our reference white, it will force our brain to see it as white). Read more about it here: https://en.wikipedia.org/wiki/Chromatic_adaptation https://en.wikipedia.org/wiki/Color_constancy
White balance is a technique used to compensate for the fact that our brain is flexible and the camera is not. A camera is very deterministic in what it measures - it will not measure things differently depending on conditions - it will always make the same measurement. You take your camera, you shoot a scene in artificial light, you come back to a different environment like your computer and you look at the image - and it is all wrong - you don't remember seeing those colors back when you were looking at the scene. You want the colors to look as you remember them. This is what white balance does - it matches what your brain saw under one set of lighting conditions (at the scene) to what your brain sees under different lighting conditions (at the computer).
In astrophotography there is simply no need to do that, as there are no changing lighting conditions - you don't have someone turn on a tungsten light to shine on a nebula while you wonder how it looks under a different environment. In astrophotography there is no changing lighting - each object has its unique light, either emitting its own or reflecting light from nearby sources.
Hold on, but we are still viewing images on a computer screen, right? There must be some environment that we view images in, and our brain will be "tuned" to that environment. What's with that, you might ask? There indeed is. It is defined in the sRGB standard. sRGB is used on computer screens for a reason, or rather sRGB was defined to accommodate working at computer screens. It has a precisely defined set of conditions under which sRGB colors are perceived correctly. These are the parameters according to the sRGB standard. You can find more here: https://en.wikipedia.org/wiki/SRGB#Viewing_environment In any case, if you use a D65 white point for your display, are in a dimly lit room that has slightly warmer ambient light (D50), are looking at a not too bright screen (80 cd/m2), etc., your brain will see colors as they are intended to be seen.
Here is the important thing - you don't need to calculate a white point - the sRGB standard defines that for you: the white point of your astro photo should be D65, as long as you intend that image to be viewed on a computer screen or shared on the internet (posted on websites and similar).
You don't need to do white balance - as long as you encode your color information in sRGB format, you are fine. What should you do then? You should do color correction, not white balance.
The sensor that you are using has a certain QE response for its R, G and B channels. These are not the R, G and B of the sRGB color space, nor are they the R, G and B of any particular RGB color space. They are just arbitrary color filters that usually resemble R, G and B in their function, hence the names (they are some red, some green and some blue color - but each differs depending on the particular camera and none matches the primary colors of any particular RGB color space). For example, here is the QE curve of the ASI294 color sensor: And this is the response curve of the "ideal sensor" - the response curves of the CIE XYZ color space (the absolute/reference color space): (note that these are called x, y and z, although they also roughly correspond to r, g and b - precisely to signify that distinction - they are not red, green and blue).
In general, there is a transform matrix (a 3x3 matrix of numbers) that will transform any raw RGB triplet from your camera to the CIE XYZ color space. The CIE XYZ color space has a well defined matrix that transforms it to linear sRGB coordinates: And if you multiply these two matrices, you get a direct transformation from raw RGB triplet to linear sRGB triplet. That is color correction. That is what makes different cameras, with different response curves, give the same result for the same color (well - the first part, raw to CIE XYZ, is responsible for that; the second part is just so that you can show it on a computer screen - if you want to print it, you would transform to CMYK or some other color profile of your printer). A sketch of that pipeline follows after this post.
In principle you can do the above with photometry and perform photometric color calibration properly - by taking raw RGB measurements of known stars, or rather stars of known stellar class and temperature and hence known XYZ values, and then solving for the transform matrix. I was hoping that PI's photometric color calibration does that - but it does not - for some strange reason it tries to "white balance" the image and uses a photometric color index (which is an astronomy term that only loosely relates to actual color) instead of proper color spaces.
To understand the relationship between star temperature, CIE XYZ values and sRGB values, here is a handy conversion tool: http://www.brucelindbloom.com/index.html?ColorCalculator.html So if I want to know the values for a star with a temperature of 7800K - I enter the number and press the button (the CCT one):
In the end, I did not help you much since I don't know PI that well, but the first step would be to find a CCM for the D5300 (for the D65 illuminant) or to derive it yourself against the CIE XYZ color space. I did a quick search for a CCM for the D5300 but could not find much except this discussion (but no actual data): https://www.cloudynights.com/topic/591807-altering-color-correction-matrix-for-d5300a/
You can try to derive it yourself, but you would need either an sRGB calibrated monitor, or a color checker chart and a nice sunny day in western/central Europe (see the D65 illuminant definition). Another approach would be to have the exact QE chart of your sensor - then we could derive a CCM against the CIE XYZ curves. A third one would be to use your eye to judge color calibration (probably the worst approach): take your cell phone and make it display this image (or any similar one): Turn off all lights - let your mobile phone be the only light source and take a RAW image with your camera.
Now load that raw photo into your favorite app that lets you fiddle with illumination settings, and choose the color temperature that makes the colors on your phone match those on your computer screen (this requires that your computer screen is properly calibrated). Once you get it - save the image as png. Now you need to open that png in some application that lets you do pixel math - to reverse the gamma and measure each of the three channels in linear light - and then compare that to the raw values that are not adjusted. As I'm writing this, I'm realizing the work involved and the fact that no one will actually want to do this just to get proper color. Photographers do it with color checker charts and software already made to handle this. Our astro software does not have those features implemented yet, and I don't think people will bother until they have suitable tools.
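Since the two matrices above appear only as images, here is a minimal sketch of the whole chain in Python/numpy. The XYZ-to-linear-sRGB matrix and the sRGB gamma encoding are the standard sRGB (D65) definitions; the raw-to-XYZ matrix is a made-up placeholder standing in for your camera's actual CCM.

    import numpy as np

    # Hypothetical camera raw RGB -> CIE XYZ matrix (replace with your camera's CCM).
    RAW_TO_XYZ = np.array([[0.50, 0.30, 0.15],
                           [0.25, 0.65, 0.10],
                           [0.05, 0.10, 0.95]])

    # Standard CIE XYZ -> linear sRGB matrix (D65 white point).
    XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                            [-0.9689,  1.8758,  0.0415],
                            [ 0.0557, -0.2040,  1.0570]])

    def srgb_encode(linear):
        """Apply the sRGB transfer function (gamma) to linear values in 0..1."""
        linear = np.clip(linear, 0.0, 1.0)
        return np.where(linear <= 0.0031308,
                        12.92 * linear,
                        1.055 * linear ** (1 / 2.4) - 0.055)

    def raw_to_display(raw_rgb):
        """Raw linear RGB triplet -> display-ready sRGB triplet (0..255)."""
        xyz = RAW_TO_XYZ @ raw_rgb          # color correction into the reference space
        srgb_linear = XYZ_TO_SRGB @ xyz     # reference space to linear sRGB
        return np.round(255 * srgb_encode(srgb_linear)).astype(int)

    print(raw_to_display(np.array([0.4, 0.5, 0.2])))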
  7. Sure. I know that you are not pretending to shoot from somewhere else, but sometimes you remove the influence of the atmosphere without giving it much thought. For example - that IFN is at least 5 magnitudes fainter than the natural sky (so the IFN is 1% or less of the sky signal) and you can't show it unless you remove the sky glow from the image.
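For reference, the 1% figure is just the magnitude-to-flux relation worked out:

    # 5 magnitudes of difference expressed as a flux ratio
    delta_m = 5
    ratio = 10 ** (-0.4 * delta_m)
    print(ratio)    # 0.01 -> the IFN is ~1% of the sky signal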
  8. Will it also redden the surrounding light, or will that be blue shifted? In fact, what will be the primary source of light coming from the IFN? Background lighting or surrounding star light?
  9. From my research, most of the plastic used in 3D printing is actually strong enough as it is. Much more important are print/slice orientation, parameters and infill. But even with basic settings things are very strong. Check out this video for a comparison: https://www.youtube.com/watch?v=ycGDR752fT0
  10. It's not hard at all to accept anything that we can verify with measurement. On the other hand, I know how people often process their images, and I know what in that process might lead to a nebula showing brown when in fact it is not. There is a rather simple way of finding out what the actual color of a dark nebula is:
- Take a set of color calibration images and devise a color transform matrix for your setup (one that will produce true color in the sense we discussed above - or, if you will, one such that if you shot the scene I linked above and compared with any other camera, you would get the same colors as all other cameras - within gamut differences)
- Do photometric measurement on the calibrated data
- Derive linear R, G and B values (or X, Y, Z - any color space really) and represent that color in the sRGB color space so it can be shown here on SGL as a color sample (and thus be accessible to most people).
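That last step - turning measured linear values into something you can post as a swatch - is just the sRGB encoding again. A small sketch, with the measured linear triplet as a made-up placeholder:

    import numpy as np

    def srgb_encode(c):
        """sRGB transfer function for linear values in 0..1."""
        c = np.clip(c, 0.0, 1.0)
        return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

    # Hypothetical measured linear sRGB color of the nebula (already color corrected).
    linear_rgb = np.array([0.30, 0.27, 0.22])

    r, g, b = np.round(255 * srgb_encode(linear_rgb)).astype(int)
    print(f"color sample: #{r:02X}{g:02X}{b:02X}")   # hex code you can post as a swatch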
  11. Out of interest, since I still don't own a 3D printer (it is on my future list ...): can I print something like an M90 to 96 ID flange adapter on it that will be of satisfactory quality / rigidity? Another question - how does one overcome the "fear of plastic parts"? I mean, when it comes to telescopes, I guess we have heard, read and experienced many times that plastic parts are low quality parts. This has become somewhat of a norm in the way we think about things - you know, "Plastic focuser? Pass on that one ..." - and here I am, thinking of attaching a focuser (and consequently all that goes on it) to the OTA with a piece of plastic. I don't really need to further describe the uneasy feeling, right?
  12. My point is that the actual color of the object is not the same color that you have in your images. I listed two possible reasons, and I believe that the brown color in most images is due to a combination of the two. One was in the form of a quiz question - and that one is particularly interesting as it shows something many people do not expect - and is very much related to the whole color thing. The other is the fact that our atmosphere shifts the color of an object toward red by scattering shorter wavelengths more than longer ones (for that same reason the sky appears blue and not white). So if you take what is inherently a grey object and observe it through our atmosphere, it can easily happen that it looks brown instead - and you conclude that the object is brown, while it is grey in reality.
There are many things where I don't agree with Clark. In fact, many statements can easily be shown to be incorrect. Take for example this: This is from the page on nebula color. This is an xy chromaticity diagram (incorrectly displayed in the sRGB profile while showing the colors of the whole human vision gamut). Simple math shows that colors generated as linear combinations of sources (points on this diagram) lie on the lines connecting those dots - or, for more than two sources, inside the region they enclose. The marked area above is much smaller than what can be generated by the marked narrow band sources. In fact, a proper diagram explaining that would be: Any color inside the marked outline can be generated by a source consisting of a linear combination of the marked single wavelength sources (hydrogen alpha through gamma, OIII and helium I). If we want to be more accurate, we will actually use a proper xy chromaticity diagram that can be shown in a jpeg/png image, and that image would look like this: This really means that these sources are capable of generating almost all colors our display devices (computer and phone screens that support the sRGB profile) are capable of showing, except for saturated greens which lie outside the marked region.
Btw, notice that the sRGB triangle is actually a triangle. This means that any sRGB color can be produced by a linear mix of green, red and blue components - and the actual green, red and blue primaries have the coordinates of the triangle's vertices - again showing that any linear combination lies on the line / in the region enclosed by the set of points that have the chromaticities of the sources.
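The convexity argument is easy to verify numerically: mix the sources in XYZ and project to xy chromaticity, and you always get a weighted average of the source chromaticities. A small sketch, using rough, approximate chromaticities for H-alpha, H-beta and OIII (illustrative values, not precise CIE figures):

    import numpy as np

    def xy_to_xyz(x, y, Y=1.0):
        """Chromaticity (x, y) plus luminance Y -> CIE XYZ tristimulus values."""
        return np.array([x * Y / y, Y, (1 - x - y) * Y / y])

    # Very approximate chromaticities of narrowband emission lines (illustrative only).
    sources = {
        "H-alpha (656nm)": (0.73, 0.27),
        "H-beta  (486nm)": (0.07, 0.20),
        "OIII    (501nm)": (0.01, 0.54),
    }

    # Mix the three lines with some arbitrary relative intensities.
    weights = {"H-alpha (656nm)": 1.0, "H-beta  (486nm)": 0.4, "OIII    (501nm)": 0.8}
    total = sum(w * xy_to_xyz(*sources[name]) for name, w in weights.items())

    X, Y, Z = total
    mix_xy = (X / (X + Y + Z), Y / (X + Y + Z))
    print("mixture chromaticity:", mix_xy)
    # The result always lies inside the triangle spanned by the three source points.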
  13. It also depends on the resolution at which you want your image to be viewed. Let's take your camera, the 80D, as a starting point. It has a 6000x4000 sensor with 3.7µm pixels. With a 300mm lens you'll be working at 2.54"/px. Leaving aside the fact that I think that sort of resolution is too high for a camera lens - if one looks at 100% zoom (screen pixel == image pixel), they will be looking at your image at 2.54"/px. Say you post your image online, on Facebook or here on SGL. The majority of people will not see it at full resolution, and odds are you'll even scale it down to help the upload (reduce size). They will be viewing it at something like 1500 x 1000 at most. That is no longer 2.54"/px but rather 10.16"/px. Any perceived error will be reduced by a factor of 4.
To answer your question number 3: I've found that I can push the AzGti without guiding at that sort of resolution, ~10"/px, for up to a minute. Longer than that and there is trailing due to periodic error. With guiding you'll probably extend that to multiple minutes. However, due to the mount itself, I would never use it to image at resolutions of about 2.5"/px. You need a resolution at least twice as coarse - something like 4"-5"/px - to be comfortable with this mount, even when guiding. It is in fact a widefield mount.
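The numbers above come from the usual pixel scale formula, where 206.265 is the number of arcseconds per radian scaled so that pixel size in µm and focal length in mm can be used directly. A quick check:

    # Pixel scale in arcseconds per pixel: 206.265 * pixel_size[µm] / focal_length[mm]
    def pixel_scale(pixel_um, focal_mm):
        return 206.265 * pixel_um / focal_mm

    native = pixel_scale(3.7, 300)          # Canon 80D + 300mm lens
    print(round(native, 2))                 # ~2.54 "/px

    # Downscaling 6000px wide to 1500px makes each displayed pixel cover 4x the sky.
    print(round(native * 6000 / 1500, 2))   # ~10.2 "/px (10.16 if you round 2.54 first)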
  14. Indeed, the mount is a fundamental part of an AP setup - the one you should really start with and allocate most of the budget toward. The problem is that visual and AP don't overlap well in that area. Visual setups favor larger aperture Newtonian scopes - these are the most cost effective way to get into serious observing, both planetary and deep sky. AP setups require an EQ style mount. Newtonians on an EQ style mount are ergonomically probably the worst combination. The eyepiece and finder get into the most awkward positions as you point the scope in different directions around the sky. You constantly need to rotate the OTA in its rings to overcome this - and the larger the scope, the quicker it becomes an issue while observing. Having the eyepiece at the front of the scope also means that you get the widest range of eyepiece heights with an EQ mount. With refractors and compound scopes, where the eyepiece is at the back of the scope and you have a diagonal mirror, things are much more comfortable on an EQ mount. You don't need to rotate the whole scope - you just rotate the diagonal. The range of eyepiece heights also varies less. To overcome all of that, people often end up having multiple scopes and multiple mounts. This really depends on your budget. AP is rather expensive, and that is something you should be aware of from the start. It requires quite a bit of planning and thinking ahead about what you want to achieve, in order not to spend too much by getting sidetracked. Getting an 8" F/6 dob for visual only while contemplating your AP journey, and perhaps reading a book like this one: https://www.firstlightoptics.com/books/making-every-photon-count-steve-richards.html (often recommended to beginners; I have not read it myself but have confidence that sane and helpful advice is given), is not the worst idea out there.
  15. Hi and welcome to SGL. I must say that I don't really follow what you are asking. From what I gathered, you do have an idea of what sort of scope you want to use - you just don't know if you want goto now? Right? You also mention AP in the future? To be honest, in my view Dob and AP don't mix. Not because of the dob mount - it's more because of the telescope. There is only one telescope that is suitable to be put on a dob mount and used for AP - and that is an F/5 Newtonian, possibly in 6" or 8" format. Some people use a 10" or even 12" Newtonian for AP, but that is a huge step up and I would not recommend it for a beginner in AP. I have an 8" F/6 Newtonian and I did mount that OTA for astrophotography, but really only used it once or twice for testing purposes. The tube is too large and heavy for the HEQ5 mount and acts as a sail in even a slight breeze. However, I enjoy it very much as a purely visual scope on a manual dob mount. Realistically what you are looking at is a 6" F/5 Newtonian (as there are no F/5 8" dobs as far as I know). Getting one on a dob mount is fairly easy. There is this scope: https://www.firstlightoptics.com/dobsonians/sky-watcher-heritage-150p-flextube-dobsonian-telescope.html However, I'm rather reluctant to recommend that scope to anyone except people on the tightest budget. It is a good scope for visual in everything except the focuser - which is not the best. However, the focuser is very important for astrophotography. If you want a scope that is good for visual and usable in the future for astrophotography, then look at this model: https://www.firstlightoptics.com/reflectors/skywatcher-explorer-150p-ds-ota.html You can either DIY a dob mount for it, or maybe, because it is a short tube, opt for an alt-az mount for visual. This one will be very good: https://www.firstlightoptics.com/alt-azimuth-astronomy-mounts/skywatcher-skytee-2-alt-azimuth-mount.html If you want to "future proof" yourself from the start, you want goto and you want an alt-az mount for observing, you can actually get all of that if you pair the above scope with either of these mounts: https://www.firstlightoptics.com/equatorial-astronomy-mounts/skywatcher-az-eq5-gt-geq-alt-az-mount.html or the heavier version: https://www.firstlightoptics.com/equatorial-astronomy-mounts/skywatcher-az-eq6-mount.html Both of these work in both alt-az and equatorial mode. Alt-az is very practical for observing, and equatorial is much better for astrophotography.
  16. When doing a stack, if you select a reference frame and do a linear fit against it to normalize the other frames, then I think you don't need any special processing for changes over the imaging session - every other sub will be normalized to the conditions at the exact time of the selected reference frame. After stacking it is rather easy to do color correction for the atmosphere - you select a couple of stars and calculate the correction based on their stellar class / color index (similar to differential photometry). The only thing you need to be careful about is interstellar reddening. However, people don't do the above - in part because it is not available as a simple "click here" option in processing software, and doing it by hand requires both knowledge and tools. The most popular processing package, PixInsight, has a tool that I expected to do the above - but it turned out that it does not. It is named Photometric Color Calibration, and that name suggests the above process; however, it does not do actual color calibration in terms of the image - it does some sort of "color index" calibration for photometry. You can read the details here: https://pixinsight.com/tutorials/PCC/
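For anyone curious what "linear fit against a reference frame" amounts to, here is a minimal sketch: fit a gain and offset of each sub against the reference and rescale. The arrays below are placeholders, not real data.

    import numpy as np

    def normalize_to_reference(sub, reference):
        """Fit sub = a * reference + b (least squares) and return the rescaled sub."""
        a, b = np.polyfit(reference.ravel(), sub.ravel(), 1)   # per-sub gain and offset
        return (sub - b) / a                                   # invert the fit so the sub matches the reference

    # Hypothetical calibrated frames, one channel each, same shape.
    reference = np.random.default_rng(0).normal(100, 5, (100, 100))
    sub = 1.3 * reference + 20          # a sub shot under brighter sky / different gain
    normalized = normalize_to_reference(sub, reference)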
  17. Well, for all those interested in a practical answer to this question, here is an interesting website: https://www.dpreview.com/reviews/image-comparison/fullscreen And here is a screenshot of that tool: You can select up to 4 different cameras to be compared in various parts of the above test panel and look at the results.
Just to be further clear about what I'm trying to say: for me, a camera is a measurement device. Expecting two different cameras to produce different results and being fine with that is like expecting two rulers to measure different dimensions of a box and saying - you know, it depends on who is measuring and there is artistic freedom in how you measure. Artistic freedom comes after you measure - in the way you write your numbers down. You can scribble them on a napkin or you can do some calligraphic wonder carved on a piece of frosted glass. As long as you keep the numbers the same - so everyone can tell the dimensions of the box - you preserve the authenticity of the measurement. Here we are discussing something else - what happens when you can no longer read the numbers properly. That is the part I'm concerned with. To further extend this analogy: not doing proper color calibration, and not being careful what color space you are using in the end and doing a proper transform to that color space, is like measuring in some arbitrary unit of length (say 1/3 of an inch is the basic unit), writing the numbers down and then not telling anyone what units you used to measure the box. Take two such results - one written on a napkin and one carved on frosted glass - and there will be a difference in artistic impression, but also, if you compare the numbers, you'll find them different and wonder - how on earth could these two represent the same box?
I'm still not convinced that IFN and dark nebulae have a reddish/brown tone to them, and I'll explain why. First, here is a sort of quiz question for everyone. Say you have a jpeg image and it has the color (240, 48, 31): If you were to take that color on your screen and cover it with an ND filter that removes exactly 1/3 of the light, and then make another square with different RGB values - what RGB values should you use in order to match the color between the filtered square and this one? The answer to this question has to do with the brown color of nebulosity, among other things.
The second thing that causes me not to accept brown as the color of interstellar dust (I was a split second away from making a joke about dark matter here) is this: Yes, these are images of the same object, and yes - one is white-yellow while the other is yellow-orange. Both are shot in the presence of something that obviously alters the color of the object - a sort of filter that seems to remove blue parts of the spectrum when you shoot through it. Yes, the atmosphere. Look at this graph: This is in magnitudes - which means it is even steeper in ratios. So what do we get when we apply that model? If we start with a nice gray: and we reduce green some and blue some more: What, brown? No way!
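Both the quiz and the "gray turns brown" demo hinge on the same thing: light scales in linear values, not in the gamma-encoded 0-255 numbers. A small sketch of both, assuming the standard sRGB transfer function (the per-channel extinction factors at the end are illustrative, not read off the graph above):

    import numpy as np

    def srgb_decode(c8):
        """8-bit sRGB -> linear light (0..1)."""
        c = np.asarray(c8) / 255.0
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def srgb_encode(lin):
        """Linear light (0..1) -> 8-bit sRGB."""
        lin = np.clip(lin, 0.0, 1.0)
        c = np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin ** (1 / 2.4) - 0.055)
        return np.round(255 * c).astype(int)

    # Quiz: the (240, 48, 31) square behind an ND filter that removes 1/3 of the light.
    filtered = srgb_encode(srgb_decode([240, 48, 31]) * (2 / 3))
    print("filtered square:", filtered)   # far from simply multiplying 240, 48, 31 by 2/3

    # Gray seen through atmosphere-like extinction: red kept, green and blue reduced more
    # (factors are illustrative only).
    extinction = np.array([1.00, 0.85, 0.70])
    print("gray after extinction:", srgb_encode(srgb_decode([180, 180, 180]) * extinction))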
  18. I can do that whenever I sit relaxed outside and look at the stars. I look at a bright star with my left eye and I see it as I should - a single point of light. If seeing is good it won't flicker - maybe slightly changing brightness every so often. Then I switch to my right eye, and that is the pattern I see instead of a single pinpoint star. It is quite distinct and I simply remembered it - no need to draw it or anything.
  19. I think that a curved retina could be diagnosed by the way the aberration changes with incident angle. The PSF would change depending on the angle between the star and the eye's axis.
  20. Each surface where two media with different refractive indices meet is a degree of freedom for the optical designer. With an air (or oil) spaced doublet you really have 4 such surfaces: the first lens has an air/glass and then a glass/air (or glass/oil) surface, and the second lens again has an air/glass (or oil/glass) and then a glass/air surface. Each of these 4 surfaces (first lens front and back, second lens front and back) can have a different curve to it, and the combination of these curves (and the distance between them) defines the optical properties of the system. With a cemented doublet you only have 3 surfaces - air/glass1, then glass1/glass2 and finally glass2/air (as the glasses are touching, they must have the same curve where they meet). Fewer surfaces to work with and fewer things you can tweak for optimum optical performance.
  21. A real thing - the type of lens: you can have an air spaced, oil spaced or cemented doublet (probably other kinds too, depending on the material used between the lenses). Air has a refractive index that needs to be matched to the glass when designing the optical characteristics. Air spaced is usually better than cemented (more surfaces that can be independently curved) and is light enough, as the construction does not require additional weight.
  22. Interestingly enough, even people who take processing very seriously and try not to push their data beyond what it can deliver - who like tight stars and detail and want their denoising to be subtle and unnoticeable - can't make repeatable results. This is the reason why we have this discussion in the first place, and the reason why there are long held beliefs such as "there is no actual true color of the object and you can do what you want", or "no image is authentic", or whatever. But that is not reality. Two people should be able to produce the same looking image of a celestial object using different gear if they agree on a basic set of rules - like: use authentic color, stretch luminosity over a certain range (say mag27 is the black point, mag18 is the white point, gamma is set at 4, etc ...). It is only when we define an exact protocol that we can get repeatable results. When you take two different DSLR cameras and take an image of an object, you will get the same image provided you use the same settings - the rest of the protocol is defined at the factory.
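To illustrate what such a protocol could look like in practice, here is a minimal sketch of that example stretch - black point at mag27, white point at mag18 - applied to a surface brightness map. The map is a placeholder array, and I'm taking "gamma is set at 4" to mean output = input^(1/4); the exact convention would itself be part of the agreed protocol.

    import numpy as np

    def protocol_stretch(sb_mag, black=27.0, white=18.0, gamma=4.0):
        """Map surface brightness (mag/arcsec^2) to display values 0..1 per the agreed rules."""
        # Brighter areas have lower magnitude, so invert the scale.
        value = (black - sb_mag) / (black - white)
        value = np.clip(value, 0.0, 1.0)
        return value ** (1.0 / gamma)       # assumed reading of "gamma = 4"

    # Placeholder surface brightness image (mag/arcsec^2), e.g. from calibrated data.
    sb_map = np.linspace(28, 17, 12).reshape(3, 4)
    print(np.round(protocol_stretch(sb_map), 3))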