Everything posted by vlaiv

  1. It really depends on how you process your color image. What do you think about the following approach:

     ratio_r = r / max(r, g, b)
     ratio_g = g / max(r, g, b)
     ratio_b = b / max(r, g, b)
     final_r = gamma(inverse_gamma(stretched_luminance) * ratio_r)
     final_g = gamma(inverse_gamma(stretched_luminance) * ratio_g)
     final_b = gamma(inverse_gamma(stretched_luminance) * ratio_b)

     where r, g, b are color balanced, i.e. (r, g, b) = (raw_r, raw_g, raw_b) * raw_to_xyz_matrix * xyz_to_linear_srgb_matrix. This approach keeps the proper RGB ratio in the linear phase regardless of how much you blow out the luminance during processing, so there is no color bleed. It does sacrifice the intended light intensity distribution, but since we are already applying non-linear transforms to intensity, that won't matter much. A code sketch of this follows below.

     Again - it is not a subjective thing unless you make it one. I agree about perception. Take a photograph of anything printed on paper, view it under yellow light, and complain that color is subjective - that is not a fault in the photograph; it contains the proper color information (within the gamut of the medium used to display it). I also noticed that you mentioned monitor calibration in the first post as something relevant to this topic - it is irrelevant to proper color calibration. The image will contain the proper information, and on a proper display medium it will show the intended color. The image can't be held responsible for your decision to view it on the wrong display device.

     The spectrum of light is a physical thing - it is absolute and not left to interpretation. We are, in a sense, measuring this physical quantity and trying to reproduce it. We are not actually reproducing the spectrum with our displays but rather the tristimulus value, since our visual system gives the same response to different spectra as long as they stimulate the receptors in our eye in equal measure. This is a physical process, and that is what we are "capturing" here. What comes after that, and how our brain interprets things, is outside of this realm.
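     Here is a minimal numpy sketch of that approach, assuming r, g and b are already linear, color-balanced arrays and stretched_lum is the separately stretched luminance scaled to [0, 1] (all names are just illustrative):

         import numpy as np

         def srgb_gamma(x):
             # sRGB forward transfer function (linear -> display)
             return np.where(x <= 0.0031308, 12.92 * x,
                             1.055 * np.power(x, 1 / 2.4) - 0.055)

         def srgb_inverse_gamma(x):
             # sRGB inverse transfer function (display -> linear)
             return np.where(x <= 0.04045, x / 12.92,
                             np.power((x + 0.055) / 1.055, 2.4))

         def compose(stretched_lum, r, g, b, eps=1e-12):
             m = np.maximum(np.maximum(r, g), b) + eps  # per-pixel channel max
             lin = srgb_inverse_gamma(stretched_lum)    # luminance back to linear
             return tuple(srgb_gamma(np.clip(lin * c / m, 0, 1))
                          for c in (r, g, b))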
  2. Somehow I don't see them being priced at the 1000e mark. I also don't like the fact that the largest sensor used is 17mm diagonal. Well, I stand corrected - the cheaper models are indeed around the 1000e mark, but that is for sensors up to 11mm diagonal.
  3. I will have to disagree with you on several points. In fact, you are even in disagreement with yourself on certain points, like this one - the important point being non-linear camera response - and then in the next segment you write something which is the very definition of camera linearity: as long as, for the same source and different exposures, the results have an intensity ratio equal to the ratio of exposure lengths, the camera is linear. In fact, most cameras these days are very linear in their response, or at least linear enough not to impact color much.

     This is a common misconception. Color theory, while having a certain tolerance/error, is well defined. There is a well-defined absolute color space - the XYZ color space - and any source out there has a particular coordinate in XYZ space, or more importantly an xy chromaticity, because, as you pointed out, the magnitude of the tristimulus vector depends on exposure length among other things. Given a transform matrix between the camera's color space and XYZ space, one can transform raw values into the XYZ color space with a certain error, which depends both on the chosen transform matrix and on the characteristics of the camera. The transform that produces the least amount of error is usually chosen. Once we have the xy chromaticity of a source, it is easy to transform that to the color space of the reproduction device. There will again be some errors involved - this time because display devices are not capable of displaying the whole XYZ color space, for example - their gamut is smaller. The choice of the sRGB color space for images that will be shared on the internet and viewed on computer screens is a very good one, as it is the de facto standard and the expected color space when no color profile is included with the image. A sketch of this pipeline follows below.

     All of the above means that if two people using different cameras shoot the same target and process color in the same proper way, they will get the same result, within the limits described above (just the camera transform matrix errors, since both will produce images in the same gamut space - sRGB - so that source of error will be the same). Not only will such images look the same, but if you take all the spectra of all the sources in the image and ask people to rate how much the recorded image differs in color from those spectra (pixel for pixel), you will get the least total error compared to any other type of color processing - in other words, it will be the best possible match in color.
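     A minimal sketch of that pipeline, assuming a camera-to-XYZ matrix has already been fitted (the identity placeholder below stands in for a real, camera-specific matrix found by least-squares fitting; the XYZ-to-linear-sRGB matrix is the standard D65 one):

         import numpy as np

         CAM_TO_XYZ = np.eye(3)  # placeholder - replace with your camera's fitted matrix

         # Standard XYZ -> linear sRGB matrix (D65 white point)
         XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                                 [-0.9689,  1.8758,  0.0415],
                                 [ 0.0557, -0.2040,  1.0570]])

         def raw_to_linear_srgb(raw_rgb):
             # raw_rgb: (..., 3) array of linear raw camera values
             xyz = raw_rgb @ CAM_TO_XYZ.T
             return xyz @ XYZ_TO_SRGB.T

         def xy_chromaticity(xyz):
             # Exposure-independent chromaticity: x = X/(X+Y+Z), y = Y/(X+Y+Z)
             s = xyz.sum(axis=-1, keepdims=True)
             return xyz[..., :2] / s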
  4. Sort of. Not like in a nuclear reactor, where we have a stable chain reaction (unlike in an explosion, where the chain reaction is uncontrolled). Using radioisotopes means using a small amount of radioactive material - radioactive enough to keep its own temperature "lukewarm" - and then using a Seebeck generator, or thermoelectric generator: a solid-state device that works on a temperature differential. On one side is the warm radioisotope, on the other cold empty space (or a conducting element exposed to empty space). The heat flow generates electricity.
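     A back-of-the-envelope illustration of the Seebeck effect (all the numbers below are illustrative assumptions, not the specs of any real generator):

         # Open-circuit voltage of a thermoelectric stack: V = N * S * dT
         seebeck_coeff = 200e-6  # V/K per thermocouple (typical thermoelectric material)
         n_couples = 500         # couples wired in series (assumed)
         delta_t = 200.0         # K, warm isotope side vs. cold radiator side (assumed)

         voltage = n_couples * seebeck_coeff * delta_t
         print(f"Open-circuit voltage: {voltage:.1f} V")  # ~20 V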
  5. I'm not recommending that you do that - I was just showing that it can be done really easily. I don't like the effect that this sort of processing produces.
  6. You are using Gimp, as far as I can see? It is really easy to get the sort of histogram shape that has been discussed above. Here are the steps (on a generic bell-shaped noise image):

     Step 1: use levels to do an initial linear stretch and make the background visible (I made a random noise image to resemble the background of an astro image, and added a one-pixel star to get the contrast range of an astro image).

     Step 2: now that we have a nice bell-shaped histogram after the first linear stretch, we do the curves like this: the leftmost point is raised a bit, so our output is limited at the bottom and does not go all the way to black. The same point on the x axis starts to "eat" into the histogram. The next point is just a "pivot" point so we get a nice smooth rising curve, and the next two points are just a classical histogram stretch.

     This configuration of curves is often seen in tutorials, and it produces a histogram that is flat on the left side and almost bell-shaped on the other side - a bit more "eased out" because we applied a gamma-type stretch in that section. Just one round of levels and one round of curves with more or less "recommended" settings, and we have reproduced that effect. A rough simulation of the two steps follows below.
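     A rough numpy simulation of those two steps (the parameters are arbitrary choices that merely reproduce the histogram shape; this is not Gimp's exact math):

         import numpy as np

         rng = np.random.default_rng(0)
         img = rng.normal(0.05, 0.01, (512, 512))  # bell-shaped "background" noise
         img[256, 256] = 1.0                       # one-pixel "star" for contrast range

         # Step 1: levels - linear stretch to make the background visible
         lo, hi = 0.02, 0.5
         img = np.clip((img - lo) / (hi - lo), 0, 1)

         # Step 2: curves - raised leftmost point plus a gamma-type rise
         black_floor = 0.05                        # raised output black point
         img = black_floor + (1 - black_floor) * np.power(img, 0.6)

         hist, _ = np.histogram(img, bins=64, range=(0, 1))
         print(hist[:16])  # pile-up at the floor: steep left edge, bell to the right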
  7. I've also noticed that the background often looks funny in the images you are producing. I don't think it is necessarily background clipping - in this example it has more to do with the distribution of the background, which gives it that feel. You are right to say that your histogram is not clipping, but it is not bell-shaped either. This is the green channel - a histogram of the 0-32 range binned into 33 bins (one bin per value) - in the first image you posted, and the same thing in the second image you posted, the one with color correction and a better-looking background (the sketch below shows how such a histogram is computed). Although your background is not clipping, its histogram shows a very steep left side in the first image - almost as if the histogram were indeed clipping, not at 0 but somewhere around 3-4. The second image shows a histogram more closely resembling the bell-shaped curve you would expect from a natural-looking background - and this shows in the image: the background does look better. At least to my eye, and as far as I can tell @ollypenrice agrees with me.
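     For reference, this is roughly how those background histograms are obtained - a sketch assuming the posted image is loaded as an 8-bit RGB array with Pillow (the file name is hypothetical):

         import numpy as np
         from PIL import Image

         img = np.asarray(Image.open("posted_image.png"))
         green = img[..., 1]

         # Histogram of the 0-32 range, one bin per value (33 bins)
         hist, _ = np.histogram(green, bins=33, range=(0, 33))
         for value, count in zip(range(33), hist):
             print(value, count)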
  8. Nice capture. To my eye, this image shows the typical color balance that one gets from StarTools processing. If you are aiming for more realistic color in stars, you really need to do proper color calibration of your image. For reference, here are a few good graphs: first, the color range; second, the frequency of stars by type. Mind you, the second graph is too saturated - that is what happens if you don't gamma-correct your colors for the sRGB standard yet use them in an image that implies the sRGB standard (see the sketch below). I think the first scale is more accurate but does not go as deep as the second (which is to be expected, as O-type stars are only about 0.00003% of all stars, so the first scale probably stops at B rather than going all the way to O). Match those two against your image above and you will see that your star colors are too cyan/aqua, and possibly oversaturated.
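     Why skipping the gamma step over-saturates: a small sketch comparing a linear RGB value displayed as-is against the same value properly sRGB-encoded (the color value is just an illustrative warm star color):

         import numpy as np

         def srgb_encode(x):
             # sRGB transfer function, linear -> display
             return np.where(x <= 0.0031308, 12.92 * x,
                             1.055 * np.power(x, 1 / 2.4) - 0.055)

         linear = np.array([0.9, 0.5, 0.2])  # warm star color in linear light (assumed)
         encoded = srgb_encode(linear)       # what should actually be written to file

         sat = lambda c: (c.max() - c.min()) / c.max()  # crude saturation measure
         print(f"shown as-is (wrong): {sat(linear):.2f}, encoded: {sat(encoded):.2f}")
         # ~0.78 vs ~0.49 - the un-encoded version reads as far more saturated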
  9. Do you mind posting the original sub, unless that exact JPEG came out of the camera? The fact that the above image is 1920x1280 means that either you or the camera made the image smaller. If this image came from the camera as-is, then Canon has probably implemented a very nice way of reducing image size - using algorithms similar to binning. An additional thing that can happen is JPEG smoothing. In any case, the combination of those factors could make the image look like a roughly x10 longer exposure - as if it were not 11s but about 110 seconds (a worked estimate follows below). Still an impressive result.
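     Where the x10 comes from - a worked sketch, assuming the original frame was a typical ~6000 x 4000 Canon raw downsized to 1920 x 1280 by averaging (the native resolution is an assumption):

         orig_w, orig_h = 6000, 4000  # assumed native resolution
         out_w, out_h = 1920, 1280

         n = (orig_w / out_w) * (orig_h / out_h)
         print(f"input pixels per output pixel: {n:.1f}")  # ~9.8

         # Averaging n pixels improves SNR by sqrt(n) - the same improvement a
         # shot-noise-limited exposure gets from being n times longer, hence ~x10.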
  10. I honestly can't see the distinction, but I do understand that you have certain requirements. As I don't have experience with either, I can't help much, except to point out that the Canon lens is a bit on the heavy side to just hang off the camera body - you might want to look into some sort of bracing system, like rings and a dovetail, to support the lens.
  11. Two very different focal lengths. Do you have any idea what camera you will be using and what the intended FOV is? Do you already own either of them? My personal choice at such a small focal length would be something like this: https://www.firstlightoptics.com/william-optics/william-optics-2019-zenithstar-73-ii-apo.html + https://www.teleskop-express.de/shop/product_info.php/info/p11122_Riccardi-0-75x-APO-Reducer-and-Flattener-with-M63x1-Thread.html (provided the focuser on the scope has an M63 thread - and it being a 2.5" R&P, I would believe that to be the case; also, I would probably go for the TS Photoline 72mm scope over the WO one).
  12. Actually, now that I've inspected the image further - it's not a shadow, it is a reflection. At first I noticed just the dark spot, but it is the bright doughnut around this dark spot that is the issue - it is a reflection of unfocused light. Maybe this screenshot will help to see it more easily. That sort of thing happens when there is a bright star in the frame, and it can't be corrected with flats - it is a reflection, the same as the faint halo around the star shown here. In your image it looks like the culprit is Alnitak - the reflection sits at the same distance in the opposite direction with respect to the optical axis.
  13. No, light can't reach it, as it is behind the secondary mirror, so you can't be seeing the center-mark sticker. Even if you could, it would be so far out of focus that it would not register. What you are seeing here is a dust shadow - a normal thing, and it is corrected with flats. Given its size, and the fact that you are using a Newtonian scope and hence very likely a coma corrector, it is probably on the surface of the coma corrector. Take flats to correct it (in fact, you should be taking flats anyway). Here is a screenshot of one of my flats from an RC scope: it shows a lot of "center marks".
  14. Can't really tell - I'm not using anything in terms of collimation aids. I do occasional touch-ups on a star: pop in a high-magnification EP, keep the star centered (Polaris does a great job with manually tracked Dob-mounted scopes), defocus slightly, and observe how concentric the rings are.
  15. I think that a really important part of the puzzle here is the precision of focus. Next time, if you feel like experimenting, try not having perfect focus and see how it affects the threshold visibility of stars.
  16. My guess is that it should be 50/50 for no WB, but I can't be 100% sure. If you had trouble with the ASCOM driver download - did you try changing the USB speed parameter in the ASCOM driver (lowering it)? BTW, for the ASCOM driver to work you need to have both drivers installed, native and ASCOM (just in case you only tried ASCOM and it did not work).
  17. I think this solves the problem. You need to use the ASCOM driver to do your captures. Once you install that driver, you will have the option in SGP to choose the ASCOM driver, and there you will be able to set gain and offset.
  18. This part got me worried as well. I could not find any reference to WB in the ASCOM drivers online. In fact, I found one discussion on the ZWO user forum that talks about WB settings, and there Sam stated clearly that the ASCOM drivers don't have this setting. I can think of only one possibility for a changed white balance: have you used SharpCap by any chance at any point? I suspect there might be an interaction between the native and ASCOM drivers for this camera model. In the native drivers you can set WB, and it is set to something like 50/100 and 90/100. I remember that the latter is for blue and the former perhaps for the red channel, but I'm not 100% sure on that. The next thing I read in the same post on the ZWO forum is that once you set those WB values, they will not change. If you set WB values in the native driver, and the ASCOM driver somehow works through the native driver to produce images, it is quite possible that the ASCOM results are affected by the remembered WB settings. This is of course just a theory - maybe even a bit far-fetched - but it is the only explanation I can offer. The image above is taken from the internet - it shows SharpCap with an ASI294 camera, and the relevant controls are marked with an arrow.

     You can't check for negative values, as you'll never get negative values out of a camera. You can only conclude that there is histogram clipping to the left by examining pixel ADU values and looking at the histogram. Here are the values for the subs you posted:

     1) Gain 1, offset 0: it has 0 as its minimum - this is very strange. I think it is again a consequence of that white balance thing (I have no other explanation for it). All other ASI cameras I've seen behave like this: if they have less than a 16-bit ADC, all ADU pixel values that come from the camera are multiplied by either 4 or 16, depending on whether the camera is 14-bit or 12-bit. This is because 12/14-bit values are stored within a 16-bit word aligned to the MSB (most significant bit), with the LSBs set to 0 - which makes the values multiples of 4 (2^2 for a 14-bit camera) or 16 (2^4 for a 12-bit camera). I have never seen 0 as an ADU value: on my ASI178, which is 14-bit, the lowest number I've seen is 4; on my ASI1600, which is 12-bit, the lowest is 16. In any case, the histogram shows clipping to the left, and this combination of offset and gain is not good.

     2) Gain 1, offset 10: this clearly shows that the histogram is separated from the left side and the minimum value is 476. Next to it is the maximum, which is an odd number - 1533. You can't have odd numbers if every value is multiplied by 4. Again, this shows that the values have been altered from the default format (the sketch below shows how to check this). Now the important bit - I can't tell if the histogram is OK or not, as it has been white balanced. I suspect it is OK, but look how far right the blue peak is. It is not that far right because it has that much offset - it is that far right because it has been multiplied by some value, i.e. white balanced. What if it was too far left before the multiplication?

     Offset 64 is way too high for this camera - no need to go that high; offset 10 is quite OK. Use gain 120 and offset 10 and I think you will be fine. The big question remains: have you used SharpCap, and were the WB settings perhaps changed from that software?
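     A quick way to run those checks yourself - a sketch assuming a raw 16-bit FITS sub and astropy installed (the file name is hypothetical):

         import numpy as np
         from astropy.io import fits

         data = fits.getdata("bias_gain1_offset10.fits").astype(np.int64)
         print("min ADU:", data.min(), "max ADU:", data.max())

         # A 14-bit camera storing values MSB-aligned in a 16-bit word yields ADUs
         # divisible by 4; any odd value means the data was altered after readout.
         print("all divisible by 4:", bool(np.all(data % 4 == 0)))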
  19. Here is a synthetic luminance made from the wiped channels in a 1/4, 1/2, 1/4 ratio (see the sketch below), stretched beyond breaking point just to show that there is residual vignetting. Cropped and a bit touched up, it's actually quite decent.
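     For reference, the weighting used - a minimal sketch assuming r, g and b are wiped (background-flattened) linear channel arrays of equal shape:

         def synthetic_luminance(r, g, b):
             # 1/4, 1/2, 1/4 weights: green counts double, roughly matching
             # the sensor's (and eye's) greater green sensitivity
             return 0.25 * r + 0.5 * g + 0.25 * b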
  20. Use 32-bit floating point. I prefer the FITS format, as it is the standard format for astronomy images, but many people prefer TIFF because it is easier to work with - most image processing software will open it, but not FITS. Gimp opens both. Do note that older versions of Photoshop can't work with 32-bit data, so you would need 16-bit data, and Gimp prior to 2.10 is limited to 8-bit data only.

     Wim gave better advice than I could, because I don't work with DSLRs, but in case your flat calibration does not work, it could be due to the darks rather than the flats. I would suggest that you use bias as the master flat dark, use bias and darks, and turn on dark optimization in DSS, then try calibrating like that (the sketch below shows the arithmetic). While still not perfect, I think that would be the best way to calibrate your data.
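     The calibration arithmetic this amounts to, sketched in numpy (assumes master frames are already stacked; k is the scaling factor that DSS's dark optimization fits automatically):

         import numpy as np

         def calibrate(light, master_bias, master_dark, master_flat, k=1.0):
             dark_current = master_dark - master_bias  # thermal signal only
             flat = master_flat - master_bias          # bias doubles as the flat dark
             flat = flat / flat.mean()                 # normalize flat to unity mean
             return (light - master_bias - k * dark_current) / flat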
  21. The thing with cameras is that they can't produce negative values. One would think that because we are counting photons we don't need negative values, and in a perfect world that would be the case. But read noise is present, and read noise is Gaussian-type noise centered on 0 - meaning it can produce negative values, so subs that don't contain light can end up negative due to noise (the actual pixel value can't be negative, but pixel + noise can be). Since the camera can't produce negative values, only positive ones, we need to make sure the numbers we read from the camera contain all the information - and we do that by adding an offset. If pixel values are, for example, in the range -10 to +10 (actually zero because there is no light, but read noise produces noisy values between -10 and +10), we can add 20 to each pixel value so that everything is positive, in the range +10 to +30. We keep all the same information: if we later subtract that 20, we get our negative values back.

     Now imagine the following: a row of numbers -1, 0, 1, 1, -1, -1, 0, 1, ... a random, equal spread of -1, 0 and 1 values, and you take the average. The average will be 0. Now do the same but with "clipping" - throw away all negative values and replace them with 0, giving the sequence 0, 0, 1, 0, 1, 0, 0, 1, 1, ... Take the average of that, and it is no longer 0 - it must be higher than zero. The act of clipping changed things: it changed our noise distribution and it changed our mean value (the sketch below demonstrates this numerically).

     If clipping occurs while one is taking darks, it can happen (and most likely will) that no clipping occurs while taking lights: the sky level (light pollution) creates a real offset, and all those would-be negative values end up positive. Since the darks are clipped but the lights are not, they no longer match, and that leads to issues with calibration (it even affects flat calibration). Also, due to the changed noise statistics, stacking is less effective at removing noise (stacking works well when you have a mix of Poisson and Gaussian distributions, or each one individually, but it does not work as well on clipped data).

     This is why it is important to set the offset properly - and you determine the proper offset by examining bias subs. If all pixel values are above the lowest value, you are safe. There is an exception to this rule: "cold" pixels that don't have much bias signal in them and are always very low in value - but hopefully you will have none or very few of those (recently I've seen an example of an ASI1600 with 12 cold pixels - it created confusion with the above "make all pixels higher" rule). What sort of issues with flat calibration do you have?
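     The clipped-average effect, demonstrated numerically:

         import numpy as np

         rng = np.random.default_rng(1)
         noise = rng.integers(-1, 2, 100_000)   # equal spread of -1, 0 and 1
         print("mean of raw noise:    ", noise.mean())   # ~0

         clipped = np.clip(noise, 0, None)      # camera can't output negatives
         print("mean of clipped noise:", clipped.mean()) # ~1/3, biased upward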
  22. OK, here are my findings:

     1. There is no light leak, so you are fine there. What I suspect is going on with the Bayer pattern in the darks/bias subs is some sort of white balance in the drivers. I'm not familiar with the drivers of this particular camera, but if you turned on automatic white balance in the drivers, you might want to turn it off. I suspect AWB because this camera has a 14-bit ADC, which means all values in a recorded raw sub should be divisible by 4 (16 bits total, 14 bits used, the remaining two bits set to 0 - the same as the number being multiplied by 4). That is not the case with your bias and dark files - there are odd values as well as even ones. This can only happen if the values were changed after they were read out of the camera - an AWB step in the drivers, for example, could explain that. AWB could also explain the Bayer pattern: since there is some offset to the pixel values (pixels in bias are not 0, but a positive value roughly equal for each pixel), multiplying that by a different factor for each color creates different values. The values in the Bayer matrix support this - the strongest is blue, then red, and green is the weakest. If you look at the ASI294MC QE curves, it is the other way around - green is strongest, red second, and blue weakest - which means they must have been multiplied by certain factors to "even out", giving the pixel strengths we see in the bias/dark. In any case, I would recommend turning off auto white balance in the drivers. With it turned on, I can't judge whether an offset of 10 is a good value, because I can't see the actual read-out pixel values, only WB-corrected ones. One more thing: AWB should not mess up your calibration, and everything should work fine.

     2. Your subs do indeed work fine. Here is a comparison of one sub with proper calibration and the raw sub without calibration: it is the same sub - the left has been calibrated as (light - avg_dark) / (avg_flat - avg_flat_dark), while the right is uncalibrated. Both subs were binned x8 to get enough SNR to show the features of the image (a sketch of that binning follows below). As you can see, you get a pretty decent sub with calibration - no vignetting, and it looks clean in comparison to the raw sub. This is another confirmation that the issue above is probably AWB: it does not affect calibration, and indeed the calibration works fine.

     3. For the future, I would just recommend you use gain 120, turn off AWB, check that the offset is a good value - and continue taking nice images. HTH
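     The software bin x8 used for that comparison - a sketch assuming a 2D numpy array (one channel):

         import numpy as np

         def bin_image(img, factor=8):
             h, w = img.shape
             h, w = h - h % factor, w - w % factor  # trim to a multiple of the factor
             view = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
             return view.mean(axis=(1, 3))          # average each factor x factor block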
  23. Nice data. I believe there could be more in there - maybe if you post a 32-bit TIFF? In any case, here is a quick processing in Gimp 2.10. I'm not sure that flat fielding worked as it should - there is still vignetting in the image. But on the bright side, the Running Man is visible as well, and I think it can be pulled out with a bit more effort and a 32-bit TIFF version.
  24. It's rather late over here, so I'll be brief for now, but I'll take a closer look tomorrow morning. If you are seeing a Bayer pattern in both bias and darks, it could be that you have a light leak. It might seem that capping the camera and being in a dark room solves any light leak problem, but you need to understand that your camera has only an AR-coated window and no UV/IR-cut filter on it. The transmission curve of that anti-reflective window shows it passes all wavelengths in the IR part of the spectrum (700nm and above). Another important thing to remember is that plastic is often transparent to IR wavelengths, so just putting the plastic cover on your camera in a dark room might not be enough to prevent an IR leak - especially if there are heat sources near the camera. The best way to be sure you have no light or IR leak is to put the plastic cover on and then place the camera "face down" on a wooden table, or better yet, add a layer of aluminum foil over the plastic cover (wrap the camera in aluminum foil). I use the first approach with the wooden table, but some people use the foil.

     I'll have a look at your subs tomorrow to see if I can figure anything out from them. BTW, for the ASI294 I would use gain 120, as that puts the camera into its low-read-noise domain, and I would raise the offset until I get nice histogram separation from the left side on my bias subs. I'm not sure what that should be for the ASI294, but with my ASI1600 I use an offset of 64. One way of doing it: set the gain you are going to use, set the offset to a particular value - for example the offset 10 you already used - then shoot a bunch of bias subs and check whether any pixels are too low in value (something like 4 ADU or similar) and whether the histogram is well away from the left side (a sketch of this check follows below). If not, raise the offset and repeat. Once you find a value you are happy with, prepare your darks with those settings (temperature, gain, offset and the wanted duration) and use the same settings when you go out and gather the actual data - the lights. Don't use bias in calibration, though - it is not needed, as darks contain both the dark signal and the bias signal, so they alone will calibrate the lights properly (also, some CMOS sensors have issues with bias, and until you've checked your camera for bias issues it's best to avoid using it in calibration).
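     The bias offset check, sketched with astropy (the file pattern is hypothetical):

         import glob
         import numpy as np
         from astropy.io import fits

         stack = np.stack([fits.getdata(f) for f in glob.glob("bias_*.fits")])
         print("global min ADU:", stack.min())

         # For a 14-bit camera the ADUs come multiplied by 4, so a floor of ~4 ADU
         # means pixels are hugging the bottom - raise the offset and reshoot.
         print("pixels at or below 4 ADU:", int((stack <= 4).sum()))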
  25. If they have the same central obstruction - which is not the case; if they have the same transmission - which is not the case; if they have the same F/ratio - which is the case here; if they have the same surface quality and correction (Strehl ratio) - which may or may not be the case - and are affected equally by the atmosphere: then aperture rules. I support the notion that aperture rules, and I'm quite familiar with the idea that Newtonians, and mirrored systems in general, have slightly lower light grasp than their aperture would suggest - as we showed above, that depends on the quality of the coatings and the size of the central obstruction (a worked example follows below). For what it's worth, a 6" F/8 Newtonian is going to blow away a 6" F/8 refractor of the same price class in almost every aspect. It will also be outclassed in almost every aspect by a high-quality 6" F/8 ED/APO scope - but, on the other hand, if you pay that much for the mirror and the general construction of a Newtonian, the gap to the ED/APO closes very fast (it is never going to be equal, but it gets very close).
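     A worked sketch of the light-grasp penalty (the 31% obstruction and 94% per-mirror reflectivity are assumptions typical of a mass-market Newtonian, not measurements of any particular scope):

         import math

         aperture = 150.0               # mm, a 6" Newtonian
         obstruction = 0.31 * aperture  # assumed secondary obstruction ratio
         reflectivity = 0.94 ** 2       # two aluminum-coated mirrors (assumed)

         clear_area = math.pi / 4 * (aperture**2 - obstruction**2)
         effective_area = clear_area * reflectivity
         equivalent = math.sqrt(4 * effective_area / math.pi)
         print(f"gathers light like a ~{equivalent:.0f} mm perfect unobstructed aperture")
         # ~134 mm - the "slightly lower light grasp than aperture suggests"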