Everything posted by vlaiv

  1. I don't think it should. Calibration does not really care if the sensor is mono or OSC - the math and procedure are the same (except maybe flat normalization, but that can be skipped and it will still work). Maybe it has something to do with the issues reported with narrowband / duo-band filters. I personally don't like using those on OSC cameras anyway, but yes, funny things with odd angles could be down to the pixel matrix.
  2. Maybe because authors, as well as users, don't fully understand the processing workflow, or are influenced by the "box" - they make software the way users are accustomed to using it. In any case, adding at least a "let me do this for you" kind of wizard might not be a bad idea (possibly with steps that users can inspect if they want to learn further).
  3. Because humans can be very, very poor at judging things. If I give you two subs - say one of 3 minutes and one of 2 minutes - and ask you the following question: imagine the first one being stacked 6 times and the second one being stacked 9 times - which one will have less total noise? Would you be able to tell just by looking at the subs? The proper way to handle this situation is to use both theory and experiment - but in a correct way. Be sure that you know what you are measuring and why. It is enough to measure a single calibrated sub for background signal level if you know your e/ADU and your read noise. In fact, you can do it before imaging time - on some of your old subs - and adjust the exposure time for future sessions (a sketch of that measurement is below).
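A minimal sketch of that single-sub background measurement, assuming a calibrated, still-linear FITS sub and that e/ADU and read noise are known from the camera spec sheet or FITS header (file name and numbers here are just placeholders):

```python
import numpy as np
from astropy.io import fits

# Assumed camera parameters - substitute your own values
E_PER_ADU = 1.0      # gain in electrons per ADU
READ_NOISE = 1.7     # read noise in electrons
TARGET_RATIO = 5.0   # desired ratio of LP (sky) noise to read noise

# Load a single calibrated, still-linear sub (hypothetical file name)
data = fits.getdata("calibrated_sub.fits").astype(np.float64)

# Median is a robust estimate of the background (sky) level in ADU
background_e = np.median(data) * E_PER_ADU      # background signal in electrons
lp_noise = np.sqrt(background_e)                # Poisson noise of that signal

print(f"Background: {background_e:.1f} e-, LP noise: {lp_noise:.2f} e-")
print(f"LP noise / read noise = {lp_noise / READ_NOISE:.2f}")

# Background needed so LP noise is TARGET_RATIO x read noise; sky signal grows
# roughly linearly with exposure, so this gives a scale factor for sub length
required_e = (TARGET_RATIO * READ_NOISE) ** 2
print(f"Exposure scale factor: {required_e / background_e:.2f}")
```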
  4. I don't think there is as much variety in data as might seem at first. The PSF tells you all you need to know about image sharpness / resolution. Stacking tells you all you need to know about pixel intensity and noise statistics (this is, by the way, very under-utilized information - no one is using the standard deviation of the pixel samples, although it gives you very straightforward data on the noise). Color calibration is done once per equipment profile (so it is even easier than the frame calibration we regularly use). Stretching can be done in such a way as to suppress background noise (again, statistics tells us what is noise and how much of it there is) and highlight data without over exposing bright parts (whatever is bright and is not a star core needs to be kept from over exposing). The gamma curve can be kept standard or adjusted so that there is the most information in the image (by calculating image entropy for different gamma settings - see the sketch below). I think that variety in data is not the issue - it is the variety of "processing workflows" that people use, possibly without fully understanding what they are doing.
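A minimal sketch of the "pick the gamma that maximizes histogram entropy" idea mentioned above, assuming the image is already stretched and normalized to the 0..1 range (the data here is a placeholder):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the pixel histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# img is assumed to be a stretched image normalized to 0..1 (placeholder data here)
img = np.random.rand(512, 512) ** 2

for gamma in (1.0, 1.8, 2.2, 2.4):
    adjusted = img ** (1.0 / gamma)
    print(f"gamma {gamma}: entropy {image_entropy(adjusted):.3f} bits")
```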
  5. There is really not much difference between the two - a "well lit daytime image" and DSOs. Sure, the photon flux is very different, but we use long exposures to compensate for that. I don't think there will be an astro camera with a single button, but the software part is a very real option. The equivalent of making a perfectly calibrated, stacked, color calibrated, sharpened and noise reduced image is already there in the form of the math / algorithms that have been developed.
  6. Well, that is the point exactly - it should not be hard to process either of the two images for a "proper" result. After all, it happens in cameras and in mobile phones - you point them at something, you press the button and you get a very nice looking image that corresponds to what you see.
  7. Just realized that I used Fitswork and that it applies white balance, so the above data is not quite raw. I will use dcraw to extract the actual raw data and post it later.
  8. Forgot to mention - the scene was shot with a Canon 750D, if someone fancies having a go at a proper color processing workflow. https://www.dxomark.com/Cameras/Canon/EOS-750D---Measurements
  9. I also did very basic processing in Gimp - levels, curves, some saturation and denoising. The way I usually process an image when I don't pay much attention to color workflow. It turned out much better than I expected. Even with boosted saturation the colors were rather difficult to reproduce (I can't get the sky as blue as above, for example). Wavelet denoising in Gimp does work wonders though.
  10. For anyone wanting to give it a go, I prepared some files. I took two exposures of the same scene. One is properly exposed and processed "in camera", while the second is very under exposed (to simulate the large dynamic range of astrophotography) and, as a consequence, rather noisy. Here is the reference image (yes, fellow astronomers, that is clear sky - that is how it looks ). Here are the extracted colors (split debayer) of the raw image for the same scene - try your astro processing and see what sort of result you get: red.fits green.fits blue.fits (a sketch of the split-debayer step itself is below, in case you want to repeat it on your own raw files).
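A minimal sketch of split debayering, assuming an RGGB Bayer pattern and a 2D raw mosaic already extracted (e.g. with dcraw); the pattern, file names and the choice to average the two green pixels are assumptions - keeping the greens separate is another option:

```python
import numpy as np
from astropy.io import fits

# raw is the 2D Bayer mosaic; assumed RGGB pattern - check your camera
raw = fits.getdata("raw_mosaic.fits").astype(np.float32)
raw = raw[:raw.shape[0] // 2 * 2, :raw.shape[1] // 2 * 2]   # ensure even dimensions

red   = raw[0::2, 0::2]                          # R pixels
green = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # average of the two G pixels
blue  = raw[1::2, 1::2]                          # B pixels

fits.writeto("red.fits",   red,   overwrite=True)
fits.writeto("green.fits", green, overwrite=True)
fits.writeto("blue.fits",  blue,  overwrite=True)
```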
  11. Here is an interesting question. I see that many people struggle with processing astro photos, and it is certainly considered by many to be some sort of art / magic needed to get a good image. What do you think would happen if you took a regular daytime photograph (raw data out of a DSLR / OSC, or mono + LRGB) and took it through the same paces you usually use to process astrophotography? I think it would be interesting to see - we know how to judge a good daytime photo, but would we get anything resembling one by using our favorite astrophotography workflow?
  12. Very nice processing! To my eye, best so far in this thread.
  13. The problem with doing it in daylight is that you need to do it on a distant target - at least a mile away. The closer your target, the further out your focal plane moves. You want infinity focus - and that is the position closest to the lens - focusing on anything closer moves the focal plane further out. For something really close - like twice the focal length - the focus position shifts one full focal length behind the infinity focus (if you try to find the focus position of a scope with 1 m focal length on an object 2 meters away, you'll miss by a whole meter ). The thin lens numbers behind this are sketched below.
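This follows straight from the thin lens equation 1/f = 1/d_object + 1/d_image; a quick sketch of the focal plane shift for a few object distances (purely illustrative numbers):

```python
# Thin lens equation: 1/f = 1/d_obj + 1/d_img  =>  d_img = f * d_obj / (d_obj - f)
def focal_plane_shift(f, d_obj):
    """Shift of the focal plane behind infinity focus, in the same units as f."""
    d_img = f * d_obj / (d_obj - f)
    return d_img - f

f = 1.0  # focal length in meters
for d_obj in (2.0, 10.0, 100.0, 1600.0):   # 1600 m is roughly a mile
    shift_mm = focal_plane_shift(f, d_obj) * 1000
    print(f"object at {d_obj:7.1f} m -> focus shifts {shift_mm:8.2f} mm")
```

With these numbers, the object at 2 m shifts focus by a full 1000 mm, while the mile-distant object shifts it by well under a millimeter, which is why a distant daytime target works.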
  14. DSS does not implement linear fit for frame normalization - only what they call "background calibration", which is in essence a constant fit rather than a linear one.
  15. You can measure the exact focus position with a bit of tracing paper and the Moon (a good thing to do on a full moon evening, as not much else can be done). Move the tracing paper back and forth until you get a sharp image of the Moon on it - it will be small, but you should have no difficulty judging where it is sharpest.
  16. No, that is not the case. Say that read noise is X. You say you want your LP noise to be at least x10 that of read noise, right? So LP noise is 10*X. The relationship between a signal and its associated Poisson noise is that signal = square of the noise. From this, the LP signal is (10*X)^2 = 100*X^2. Saying that sky background is at least 10 x read noise^2 is equal to saying that LP noise is sqrt(10) times larger than read noise - or about 3.1622... times larger.

     No noise is removed by calibration - it is only signal that is removed by calibration. Noise remains. The calculated 1156 ADU needs to be measured on a calibrated sub. You are right - this demands a bit more processing and isn't a suitable method for quick "in the field" testing, but it is accurate.

     If you want to do "in the field" testing, you should also account for dark current signal, as it can be significant (depending on temperature), and the offset used won't always be correct (I've seen the offset being missed by a few electrons regularly). The problem with "in the field" testing is that you can't simply scale things the way we propose (divide calculated by measured ADU to get an increase/decrease coefficient for sub duration) - because dark current also depends on time, and a different exposure time will skew ADU values in a non-trivial way.
  17. Kappa-sigma stacking is a good method of stacking, but it really needs normalized frames. It looks at pixel statistics to be able to detect anomalous pixel values - like hot pixels or satellite trails. In order to form good statistics of pixel values, all subs must be normalized. As the night progresses and you track the target across the sky, a few things happen:

     - The target moves across the sky and rises and sets (depending on the side of the meridian at any particular moment) - this means that the target is changing altitude - sometimes it is closer to the horizon and sometimes to the zenith. There is something called air mass, which determines how much attenuation there will be by the atmosphere. For the same transparency, air mass is the factor that determines how much effect that transparency has on the target. It boils down to the same target being brighter or less bright depending on where in the sky it is positioned.

     - Another thing that happens is that the amount of light pollution changes. People turn lights on and off depending on the time of night, and as the target moves, it changes direction with respect to you - and there are different levels of LP depending on direction (it is very rarely uniform).

     In any case, subs will have different target brightness and different background levels. The same target pixel will end up being brighter or less bright depending on these. This makes it difficult for the algorithm to properly decide whether a pixel is a hot pixel / satellite trail or simply brighter because of transparency or light pollution. Ideally you want to do a linear fit between subs (transparency is multiplicative while LP is additive, so it turns into a nice linear equation a*x + b; a sketch of what that could look like is below), or at least do what DSS is doing - equalize median pixel values (just additive normalization). The DSS technical info page clearly states this: http://deepskystacker.free.fr/english/technical.htm#Stacking
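A minimal sketch of such a linear-fit normalization - an illustration of the idea, not what any particular stacking software does; the percentile thresholds and helper name are assumptions:

```python
import numpy as np

def normalize_to_reference(sub, reference):
    """Fit sub ~= a * reference + b and return the sub transformed
    to match the reference: (sub - b) / a."""
    x = reference.ravel()
    y = sub.ravel()
    # fit on the central bulk of pixel values to avoid stars / outliers
    lo, hi = np.percentile(x, (5, 95))
    mask = (x > lo) & (x < hi)
    a, b = np.polyfit(x[mask], y[mask], 1)   # least-squares linear fit
    return (sub - b) / a

# usage: pick one sub as reference, normalize the rest before kappa-sigma stacking
# subs = [fits.getdata(name) for name in filenames]
# normalized = [normalize_to_reference(s, subs[0]) for s in subs]
```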
  18. Apart from not using calibration files, if you are going to use the kappa-sigma clip stacking method, you need to perform background calibration for it to work properly.
  19. I'm inclined to say go with the ASI294. 2.2"/px is closer to what you can realistically achieve with 72 mm of aperture. Do be careful with that sensor - many people have complained of difficult flat calibration. It looks like the sensor is not quite linear at some gain settings, and it does not behave well with duo-band filters like the Optolong L-eXtreme / L-eNhance.

     Yes, the advice to measure FWHM is a good one - that will give you a baseline of what you can expect. Divide the FWHM by 1.6 to get close to the optimum sampling rate. We can go into the math of it, but maybe it is simpler to give you an example. Here is a nice image found online that we will use as a test sample. Imagine that this is a high resolution image without the impact of the atmosphere, sampled at 0.1"/px (although it is a large image, it covers only 48" x 48" given that it is sampled at 0.1"/px). The image blurred with 1.6" FWHM looks like this (remember, 1.6" FWHM is actually 16 px at this image scale).

     What sampling rate is needed for an image like this? It turns out that 1.6" FWHM / 1.6 = 1"/px - we need to sample with x10 fewer pixels than the image has. At that image scale the image again looks sharp. But how do we know that this is indeed enough to capture all the detail in the blurred version? Let's restore the small version to the original size by simply enlarging it: we get basically the same image as above. Eyesight is really not the best judge of the difference between two images, so we take the actual difference (subtract one from the other) - and we get nothing, apart from some very faint ripples (a consequence of the resizing algorithm used, mostly at the edges). There is no detail in the difference - it is blank / gray.

     Here is what happens when you under sample: this was sampled at 1.5"/px instead of the above 1"/px - only slight under sampling - but the image starts to degrade, even visually. You can see the difference between the original and the resampled version, and the difference image now shows distinct ripples coming from contours / high contrast edges (you can make out the silhouette of the bird and branch in the difference image). If you over sample - say at 0.67"/px - then obviously there simply won't be any difference between the two images. The point of choosing the x1.6 factor is simply that it gives the lowest sampling rate at which the image does not degrade (or degrades below the noise level). A sketch of this blur / downsample / difference test is below.

     Back to the original question - again, I feel that your images will have 2" FWHM or larger. You can measure that by measuring the FWHM of your raw subs or stacks (still linear, prior to any processing). In most cases, an aperture that small will simply produce a larger FWHM than bigger scopes, as such a small aperture starts to have a significant impact (the Airy disk diameter of a 72 mm aperture is 3.56" without the impact of seeing and guiding, so we can't really expect FWHM to be lower than 2" once seeing and guiding are added to the mix).
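A minimal sketch of that blur / downsample / upsample / difference experiment, assuming scipy is available and using the scales from the example above (the test image here is placeholder data; use any detailed photo normalized to 0..1):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

# test image sampled at 0.1"/px (placeholder data for illustration)
hi_res = np.random.rand(480, 480)

fwhm_arcsec, hi_res_scale = 1.6, 0.1            # seeing FWHM and native scale
sigma_px = fwhm_arcsec / 2.355 / hi_res_scale   # FWHM = 2.355 * sigma for a Gaussian
blurred = gaussian_filter(hi_res, sigma_px)

target_scale = fwhm_arcsec / 1.6                # "FWHM / 1.6" rule -> 1.0"/px here
factor = hi_res_scale / target_scale            # 0.1: keep 1 px in 10 per axis
small = zoom(blurred, factor)                   # downsample to the target scale
restored = zoom(small, 1.0 / factor)            # enlarge back to the original size

diff = blurred[:restored.shape[0], :restored.shape[1]] - restored
print("max abs difference:", np.abs(diff).max())
```

Repeating the same with target_scale set to 1.5"/px should show a clearly larger difference, which is the under-sampling case described above.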
  20. I see now where the confusion comes from, and yes, that is why I said the math must be wrong. Measurements are usually performed on calibrated subs. Using 50 or whatever offset is very unreliable, and you don't have to do that on properly calibrated subs, as the only thing that is left is the light signal. We should not be doing multiplication / division by 16 either - that is a bug in the drivers / capture software (whichever is responsible), as there is a field in the FITS header that gives you the e/ADU value. As it is, it says that e/ADU is 1 (or rather 1.001-something), but the ADU numbers are x16 larger than that. In a perfect world, knowing the e/ADU from the FITS header and the read noise (or measuring it from bias stats), and having a calibrated light, is enough.
  21. If you use luminance in your workflow to achieve better images, then of course use luminance. The human eye is much more sensitive to noise in luminance data than in chrominance. If you extract chrominance data from RGB and combine it with luminance, then it makes sense to shoot luminance data to achieve higher SNR and a better image.

     As far as exposure length goes - it depends on what suits you better. You don't need to go longer, but you can go longer. In either case, over exposed parts of targets (or star cores) are handled differently. No camera has infinite pixel wells, and every camera will saturate on bright stars. Instead of shortening your exposure length, use "filler" subs. Shoot your subs normally and then shoot a handful of very short exposures (really short - like half a second to a second) that you will use to replace over exposed star cores. You don't need many such subs, as you'll only be using their brightest parts anyway, where the signal is strong and the SNR is good even in a single sub. (A sketch of the idea is below.)
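A minimal sketch of the "filler sub" replacement, assuming both stacks are linear, already registered, and that the exposure times are known so the short stack can be scaled to the same flux units (function name and threshold are assumptions):

```python
import numpy as np

def fill_saturated(long_stack, short_stack, long_exp, short_exp, sat_level):
    """Replace (nearly) saturated pixels in the long stack with the short
    stack scaled up to the same effective exposure."""
    scale = long_exp / short_exp
    result = long_stack.copy()
    mask = long_stack >= 0.95 * sat_level    # pixels at or near saturation
    result[mask] = short_stack[mask] * scale
    return result

# usage (hypothetical numbers): 180 s subs, 0.5 s fillers, 16-bit saturation level
# hdr = fill_saturated(long_stack, short_stack, 180.0, 0.5, 65535.0)
```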
  22. No. Luminance achieves a much higher SNR in the same time it takes you to shoot R, G and B each. In order to even attempt to match 1h of luminance with synthetic luminance from R, G and B, you would need 1h of each of them - that is 3h of total imaging time (a rough numeric illustration is below). The fact that your skies tell you that you don't need exposures longer than, say, 30-60s does not mean that you in fact have to take such exposures and can't use longer ones to minimize the number of subs you need to process. You will not lose anything by going longer (if your guiding can support it and you don't get ruined subs due to an odd event like a cable snag, an earthquake or a bird landing on your scope ).
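A rough back-of-the-envelope illustration of that claim, assuming for simplicity that the L filter passes roughly the combined bandwidth of R, G and B, that everything is shot-noise limited, and using made-up photon rates:

```python
# Assume a target delivers r, g, b photons per hour through the R, G, B filters,
# and roughly r + g + b photons per hour through the L filter.
r, g, b = 1000.0, 1200.0, 900.0           # made-up photon rates per hour

# 1 h of luminance:
signal_L = r + g + b
snr_L = signal_L / signal_L ** 0.5        # shot-noise-limited SNR = sqrt(signal)

# synthetic luminance from 20 min each of R, G and B (1 h total imaging time):
signal_syn = (r + g + b) / 3.0
snr_syn = signal_syn / signal_syn ** 0.5

print(f"SNR, 1h of L: {snr_L:.1f}")
print(f"SNR, synthetic L from 1h total of RGB: {snr_syn:.1f}")
# matching snr_L requires 1 h of each of R, G and B, i.e. 3 h in total
```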
  23. Quite true, but here is the bit that I don't really get: how did you arrive at 10 * RN^2 = 1290? This would imply that RN is equal to sqrt(129) = ~11.35 or something? I can't get the numbers to work when I plug in standard RN values for the ASI1600.
  24. I think there are quite a few problems with this calculation. Let's use the x5 rule and see what we come up with. At unity gain, the read noise of the ASI1600 is 1.7e, so x5 the read noise is 8.5e. We want LP levels to be such that they produce 8.5e of shot noise. With shot noise the relationship is simple - the signal is the square of the associated noise - so the LP signal that will produce 8.5e of noise is 72.25e. That is our target. Since the camera is set to unity gain, ADUs should match electrons - but remember, the ASI1600 is a 12-bit camera and ADU values are in fact multiplied by x16 to fill the 16-bit range. The resulting LP level needed (in ADU as they appear in the subs) is 72.25 * 16 = 1156. All subs show a larger value than that, so all exposure lengths are good. If need be, they can be reduced, but if not, I would leave them as they are (the same calculation, in a form you can plug your own numbers into, is sketched below).
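The same calculation as a tiny sketch; read noise, the x5 factor and the x16 ADU scaling are the values from this post - swap in your own camera's numbers:

```python
def required_background_adu(read_noise_e, factor=5.0, e_per_adu=1.0, adu_scale=16):
    """Sky background (in sub ADU) needed so that LP shot noise = factor * read noise."""
    lp_noise_e = factor * read_noise_e           # target LP noise in electrons
    lp_signal_e = lp_noise_e ** 2                # Poisson: signal = noise^2
    return lp_signal_e / e_per_adu * adu_scale   # convert to ADU as stored in the subs

# ASI1600 at unity gain, 12-bit data stretched to 16 bits
print(required_background_adu(1.7))              # -> 1156.0
```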
  25. I don't use StarTools so can't be of help there, but @alacant might be able to give you a walk-through / step-by-step guide?