Everything posted by vlaiv

  1. You can always attach a camera and look at the flat as well. That will tell you where vignetting starts.
  2. Now collimate the scope to see if there will be any difference.
  3. Light pollution is an additive signal (all light adds up), and as such it won't get mangled if we apply a linear transform to our raw data. One of the properties of a linear transform is that it preserves linear combinations. This means that when we apply a color correction matrix (to any color space), LP is still simply added to our scene - both the scene and the LP get transformed to that color space. Regardless of which color space we work in, we always remove light pollution in the same way. It is probably one of the hardest parts to automate, since we don't have a clear distinction between what is background and what is foreground in the image. I've developed an algorithm that works rather well in this role, but it is not flawless. I'll demonstrate it a bit later. There are other algorithms, like DBE, that do the same.
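     A minimal numpy sketch of the point above, assuming a made-up 3x3 color correction matrix and arbitrary scene/LP values: applying the matrix to (scene + LP) gives the same result as transforming each separately and adding, which is why a uniform LP pedestal stays a uniform pedestal after color correction.

         import numpy as np

         M = np.array([[ 1.6, -0.4, -0.2],      # hypothetical color correction matrix
                       [-0.3,  1.5, -0.2],
                       [ 0.0, -0.5,  1.5]])

         scene = np.array([0.20, 0.35, 0.10])   # arbitrary raw RGB of a scene pixel
         lp    = np.array([0.05, 0.04, 0.07])   # arbitrary raw RGB of the LP pedestal

         # Linearity: M @ (scene + lp) equals M @ scene + M @ lp
         print(M @ (scene + lp))
         print(M @ scene + M @ lp)              # same numbers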
  4. I think it is best to measure it. You can calculate it in principle - if you have a 3D model of the telescope, since the imaging circle depends on internal baffling. With a 3D model it is rather easy - you put it in ray-trace software and it will show you the illumination at the focal plane (well - not easy in terms of the calculations, but easy in the sense that it is a solved problem and there is software that does it). For simple designs - like a Newtonian - you could calculate it with a bit of trigonometry if you draw yourself a diagram (size of the mirrors, their spacing, focuser drawtube dimensions and position when focused, and so on) - see the sketch below.
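     For illustration, a rough sketch of that Newtonian trigonometry under simplifying assumptions (the secondary minor axis is treated as the only limiting stop, drawtube vignetting is ignored, and the light cone is a simple linear taper from the aperture to the focal point); all the numbers in the example are made up:

         def fully_illuminated_field(D, F, m, L):
             """Diameter of the fully illuminated circle of a Newtonian (simplified).

             D : aperture diameter (mm)
             F : focal length (mm)
             m : secondary mirror minor axis (mm)
             L : distance from secondary centre to the focal plane (mm)

             The converging cone is D wide at the primary and 0 at focus, so it is
             L*D/F wide at the secondary; whatever the secondary covers beyond that
             feeds the off-axis field:  m = f + L*(D - f)/F  =>  f = (m*F - L*D)/(F - L)
             """
             return (m * F - L * D) / (F - L)

         # Example: 200 mm f/5 Newtonian, 63 mm secondary, secondary ~250 mm from focus
         print(fully_illuminated_field(D=200, F=1000, m=63, L=250))   # ~17 mm fully lit circle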
  5. I'm not sure how good a color result Siril or any other photometric color tool produces. It is not down to the procedure - the procedure is good and sound. It is down to the samples: we are trying to calibrate a vast color space using only a very narrow set of samples.
     I often show this image when talking about color, but here it is again: it is a 2D representation of the 3D color space - a sort of projection. There is a line drawn in it - the Planckian locus - and all black body radiators sit on that line. Star colors sit on that line (there is even effective temperature marked along it). We are trying to calibrate the whole color space using only samples that we want to align onto that line. That would be fine if the projection were rigid (like when aligning two subs against star points - there we know that both subs have the same scale and the same distances between stars). A color space transform is just linear - which means it can have all the linear components: translation, rotation, but also stretching along any axis independently and shearing along any axis. There are many degrees of freedom in the transform. In order to derive a good transform we really need samples that are spread as uniformly as possible. We also need some green samples, some deep blue samples and some pink samples. We don't have those in outer space (at least not all of them and not regularly - we do have some NB sources that lie on the outer contour of the space, like Hb, Ha and OIII emissions).
     The best approach would be to do "in house" color calibration (it needs to be done only once) and then use the color calibration tool in Siril and others to make fine adjustments to color (the atmosphere tends to shift the color of an object because blue light is scattered more than red - the sun, for example, appears red/orange near the horizon because of this effect - but that is not the true color of the sun).
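     A small sketch of the sampling problem, using made-up data: fitting a 3x3 transform by least squares from samples that all sit close to a single line in color space leaves the transform poorly constrained in the directions off that line, which the singular values of the sample matrix make obvious.

         import numpy as np

         rng = np.random.default_rng(0)

         # Hypothetical "measured" star colors - all close to one line in RGB space
         t = rng.uniform(0, 1, 50)
         measured  = np.outer(t, [1.0, 0.8, 0.6]) + 0.01 * rng.normal(size=(50, 3))
         # Hypothetical reference colors they should map to
         reference = np.outer(t, [0.9, 0.9, 0.7]) + 0.01 * rng.normal(size=(50, 3))

         # Least-squares fit of a 3x3 matrix M such that measured @ M ~ reference
         M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

         # One large singular value and two tiny ones: the samples only constrain
         # the transform along the line, everything off it is essentially noise
         print(np.linalg.svd(measured, compute_uv=False))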
  6. Choosing a white reference is there to let us match our perception of the color when shooting to our perception of the color later, when viewing. This is because there are two sets of environmental conditions - one when shooting (which is variable in earth-based conditions) and one when viewing. White balance tries to do that perceptual matching. I advocate that we completely drop the perceptual part (at least in the beginning) until we get the measured part right. Why do I insist that there is no color balance in astrophotography? Simply because there is no lighting in outer space (this is not quite true for solar system objects, but again, there is no variable lighting - you can't see Jupiter on a sunny day, on a cloudy day and under fluorescent lighting). We get the light as it is, and the best we can do is to measure it properly and then "emit the same type of light" from our screens (color matching - if we took the original light and viewed it side by side with the light from the screen, they would look the same).
  7. An interesting fact is that you can get color for a star field - like an open cluster - with only two narrow band filters, OIII + SII for example. Stars (for the most part) simply follow Planck's law, and you can derive the slope of that curve from only two points / two measurements and deduce the temperature of the star - see the sketch below.
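     A minimal sketch of that idea, assuming the flux ratio between the two narrow bands has already been measured and calibrated (ignoring extinction and filter bandwidth); the filter wavelengths and the ratio below are illustrative:

         import numpy as np
         from scipy.optimize import brentq

         h = 6.62607015e-34    # Planck constant, J s
         c = 2.99792458e8      # speed of light, m/s
         k = 1.380649e-23      # Boltzmann constant, J/K

         def planck(lam, T):
             """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
             return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

         lam_oiii = 500.7e-9   # OIII central wavelength
         lam_sii  = 672.4e-9   # SII central wavelength
         measured_ratio = 1.3  # hypothetical OIII/SII flux ratio for some star

         # Solve planck(oiii, T) / planck(sii, T) = measured_ratio for temperature T
         f = lambda T: planck(lam_oiii, T) / planck(lam_sii, T) - measured_ratio
         print(brentq(f, 2000, 50000))   # ~6000 K -> star color follows from the temperature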
  8. I agree with you that we don't need to discuss that part; however, I do have a problem with statements like this, simply because you are perpetuating some myths and leading the people reading this astray. Nyquist does not go out of the window - it is always there; it is not some assumption, it is a mathematically (and experimentally) proven fact.
     The Encke gap is not resolved. It is indeed imaged - and that is absolutely fine - but it is not resolved. There is a very important difference between being resolved and being imaged. We image stars all the time - yet stars are much, much smaller in angular diameter than the Encke gap. Does that mean our telescopes have infinite precision simply because we are able to see stars and image them? No. To resolve means to split, to differentiate. The Encke gap is a singular feature - a dip in brightness, much like a star is a peak in brightness. We can image that as long as there is enough contrast (we can't image a very faint star either - or rather, we need a longer exposure to reach the needed SNR/contrast). Try measuring the Encke gap's width to any precision from ground-based images. I know there are a number of images shot below 1"/px - but show me one, made with amateur equipment, that resolves a double star 1.6" apart.
  9. Do you have access to a 3D printer? Look up the Lowspec spectroscope. It is an open source, mostly 3D-printed spectroscope. There are several optical components you need to source (grating, slit and such) - but the housing is 3D printed and it is fairly cheap compared to commercial offerings.
  10. No, I never tried that - even for a moment. I just wanted to point out that although the SCT image shows more detail, that is not due to oversampling, and the images are in fact oversampled. You won't be able to resolve detail below 1"/px - and even 1"/px is very hard to achieve. However, don't let me stop you from trying. Left - fully resolved image at 1"/px; right - your current image at 1"/px.
      Don't confuse noise and level of processing for detail. Here is another comparison: the original bottom image that you deemed sharper in the first example, next to the resized image from that example with simple sharpening in IrfanView (nothing fancy). Do you still think there is a significant difference? Remember, I'm doing this on 8-bit data that has already been processed (not on linear 32-bit data) - and even so, I can demonstrate that there is no detail down to 1"/px in your 8" SCT image.
      Bottom line - don't let me discourage you in any way - if you feel your images are ok to be sampled at those rates, then fine by me. Sorry for all the confusion.
  11. Well, adjusting the stretch to do this for us is, I guess, the closest we can get today without inventing anything new.
  12. I guess this is where we disagree. I don't see stacking images as anything more than altering the amount of light. Stacking is exactly the same thing as using one long exposure. It is just a measurement process that does not alter the nature of the object we are imaging.
      Stretching the data, if done properly, is also "adapting" the data to suit our vision. Photo sensors in our cameras see light differently than the human vision system does. If we wanted to be strict, we would use a gamma of 2.2 when encoding our sRGB images - that is an exactly defined level of stretch required by the standard. However, our eyes are dynamic in how they perceive light - we have an iris that opens and closes to accommodate the level of light. Our displays are not capable of showing the dynamic range we are capable of seeing - even in "fixed" conditions (when our pupils don't change in size). Stretch is therefore an important part of circumventing this limitation of the display system rather than anything else. Like someone mentioned - if we were floating in outer space, we would have no problem seeing the faint arms of a galaxy and the detail in the core (although not at the same time - we would have to concentrate on each in turn, like seeing things in shade and in broad daylight).
      In my view, saturation boosting is actually bad and is done because we don't handle color properly in astrophotography. We calibrate our subs with flat fields, for example - that is the norm - while color calibration of our data is completely absent. I have a very long list of things that I think are good and things that I would not do, with very detailed reasons for each, but I don't want to push my view on anyone. The fact that we produce a variety of renditions of the same object just shows that we don't really have complete control over the imaging process. It can confuse people - we often hear the question: "but what does that object really look like?". I don't mind people being creative - I would just be happier if there were a clear distinction - that is why I mentioned Astro Art and measurement/repeatability as the differentiator between natural and artistic. It is ok for us to say "this is a narrow band / false color image" - so why don't we want to say "this is an Astro Art image" when we use processing that changes the nature of the object?
  13. I feel the same, but I do feel obliged to act in the general best interest - like mentioning that there is a certain level of responsibility in what people say, write and publish. I don't see much harm in mentioning that something can be classified as Astro Art rather than Astro Photography?
  14. I'm not sure I agree with this part. How can a person, observing an image of an object for the first time, decide whether something is real or not? Imagine someone looking at an image of a platypus for the first time - never having heard of the animal before. A hairy thing with a duck-like beak and interdigital webbing - come on, really? That's a thing and not photoshopped?
  15. I would say that there are several things that will make an image less natural:
      1. Adding features or removing them.
      2. Changing the order of things - if A >= B in brightness, I expect that to remain so in the image.
      3. I personally see color as one of the things that should be kept as close as possible to the real world.
      The best judge of whether an image is "natural" or not should be - much as with any measurement (after all, taking an image is a form of measurement) - repeatability. If many people shoot the same galaxy and they all render a star or a feature in their images, then yes, that feature is there and it is natural for it to be shown in the image. Don't confuse a poor measurement with "natural". If I shoot an image from indoors and get a white frame or something - how is that different from taking an image with too long an exposure that saturates my sensor, or from using a ruler so dirty that I can't read off the correct number?
      I would also like to add that I don't object to people being creative in their "photographs" - but I do object to them being called photographs. Term it differently - term it "astro art" instead. People in general are used to the concept of a photograph as something realistic - depicting things as they are. When you take an astro photo and then let your artistic side take over, you are doing a disservice to the public by posting your work as astrophotography rather than astro art.
  16. Why do you think you'll tweak that advice? Do you think the above image is well sampled at 0.39"/px, when it is presented at 0.5"/px and has a FWHM of 1.9"? One of the two images below has been reduced to 50% in size and then resized back to match the original - which means it was sampled at 1"/px at that point. If there is detail in the image that needs a higher sampling rate than 1"/px, then that detail should suffer. Can you tell which of the two images was resized to 1"/px and then upscaled back to the original size? (Actually, you should be able to tell, as it has less noise - but all the detail is still there.) The sketch below shows how to run this test yourself.
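      A minimal sketch of that test, assuming a mono FITS sub (the file name is hypothetical) and cubic spline interpolation as the resampling method - the original comparison doesn't state which interpolation was used:

          import numpy as np
          from scipy.ndimage import zoom
          from astropy.io import fits

          img = fits.getdata("sub.fits").astype(np.float64)   # hypothetical input file

          # Downsample by factor 2 (e.g. 0.5"/px -> 1"/px), then upsample back
          small = zoom(img, 0.5, order=3)
          restored = zoom(small, 2.0, order=3)

          # Crop to a common size (zoom rounding can differ by a pixel) and compare
          h = min(img.shape[0], restored.shape[0])
          w = min(img.shape[1], restored.shape[1])
          diff = img[:h, :w] - restored[:h, :w]
          print(np.sqrt(np.mean(diff**2)))   # compare this RMS difference to the noise level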
  17. Yes. I usually apply "masked" sharpening (I use the luminance of the image as a mask, so that the sharpened version is blended in only where the image is bright), but this time I did not bother, as the background is not that noisy. A sketch of the idea is below.
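      A minimal numpy sketch of that masked blend, assuming a linear RGB image already loaded as img and using a simple unsharp mask as the "sharpened" version (the actual sharpening filter used in the post was different):

          import numpy as np
          from scipy.ndimage import gaussian_filter

          def masked_sharpen(img, amount=1.5, blur_sigma=2.0):
              """Blend the sharpened image in only where the image is bright.

              img : float RGB array, shape (H, W, 3), values roughly 0..1
              """
              # Simple unsharp mask stands in for the "sharpened" version
              blurred = gaussian_filter(img, sigma=(blur_sigma, blur_sigma, 0))
              sharpened = np.clip(img + amount * (img - blurred), 0, 1)

              # Luminance mask: bright areas take the sharpened data, faint areas keep the original
              lum = img.mean(axis=2, keepdims=True)
              mask = np.clip(lum / lum.max(), 0, 1)

              return mask * sharpened + (1 - mask) * img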
  18. No, I tried it, but I personally don't see much value in it. Some people like it, and I'm sure they find it a great tool; however, that does not mean it is the only tool capable of processing an image (nor that everyone will like it). You mentioned that you have issues with detail when processing in Siril - I just offered an alternative, Gimp, which is good at revealing detail. Again - you might prefer StarTools in that role, but that does not mean everyone will, and it is good to give people an alternative.
  19. Gimp to the rescue? It has a very good tool for sharpening (this was done on an 8-bit image; results are even better on 32-bit data). Look up the G'MIC-Qt add-on, under Detail / Sharpen [Gold-Meinel].
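      For the curious, a very rough sketch of the basic idea behind Gold-style multiplicative deconvolution (this is not the G'MIC implementation; the Gaussian PSF and all parameters here are placeholders for illustration only):

          import numpy as np
          from scipy.ndimage import gaussian_filter

          def gold_meinel_sharpen(img, sigma=1.5, iterations=10, p=1.0, eps=1e-6):
              """Rough Gold-style iterative deconvolution.

              img   : 2D float array with positive values
              sigma : assumed Gaussian PSF sigma in pixels (a guess, not measured)
              p     : acceleration exponent (p = 1 is the plain Gold iteration)
              """
              estimate = img.copy()
              for _ in range(iterations):
                  # Re-blur the current estimate and boost it where it falls short of the data
                  reblurred = gaussian_filter(estimate, sigma)
                  correction = img / np.maximum(reblurred, eps)
                  estimate = estimate * correction**p
              return estimate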
  20. There is no such thing as a Nyquist theorem modified for digital signals. There is only one Nyquist-Shannon sampling theorem and it is very clear in what it says. The 1.6 figure is derived from it.
      The Nyquist-Shannon sampling theorem states that one ought to sample at exactly x2 (not x2.5 or x3) the highest frequency component of a band-limited signal. Not x2 FWHM, not x3 Airy disk - nothing like that. You must understand the Fourier transform and the convolution theorem in order to understand the frequency side of things - how the low-pass filter is formed. In a nutshell: the telescope acts as a complete cutoff filter - no frequencies higher than the critical frequency exist - and this is set by aperture size alone (important bit, remember it for later). In deep sky imaging we have two additional components - seeing and mount precision. All three combine to give you the resulting FWHM (this is an important bit as well). We usually approximate the PSF with a Gaussian (sometimes with a Moffat).
      If we do the following: calculate the Fourier transform of a Gaussian of a certain FWHM, look at its spectrum and set an "artificial" cutoff point (we must do that, as the Gaussian is only an approximation - it simply has no cutoff point, it goes off to infinity getting ever smaller in value but never reaching exactly 0), and we place that cutoff where the frequencies are attenuated so much that any difference is, for the most part, smaller than the noise in our images - we get 1.6. (A sketch of this calculation is after this post.)
      In the simulation above you will see that the restoration is not perfect - it shows two stars but there is some ringing. That ringing is partly due to my simulation using an actual Gaussian function, which does shoot off to infinity, and it is just an aliasing artifact. Two very important points here:
      - You can sample the image at a higher rate and oversample it and the result won't change - you will still get all the detail, so oversampling is not bad in that respect. It is bad because you spread the light out more and lower your SNR.
      - x1.6 is an approximation based on the fact that any difference between the restored and the original signal is going to be smaller than the noise at SNR 5. If you want absolutely no difference between the original and the restored signal, you need to sample at the planetary / critical sampling rate. That is an absolute oversample for DSO imaging - but it is also mathematically 100% correct.
      Ok - that is resampling. Now - why do you get better resolution with an 11" telescope compared to a 5" telescope? That bit is also very straightforward. It has nothing to do with sampling - it has to do with the physics of light. Remember how I said above that a few things were important and to remember them - namely that a larger aperture resolves more, and that the final FWHM depends not only on seeing but on the combination of three factors: seeing, mount performance and aperture size. One of those works in favor of the C11, and one you don't control and can take an arbitrary value on a given night (or even for each sub). No wonder you get a slightly sharper image with the C11 (or anyone for that matter) - but that is not due to oversampling, it is simply due to using a larger aperture and not being able to control the seeing (selection bias).
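      A small sketch of where a figure like 1.6 comes from, assuming a Gaussian PSF and an illustrative cutoff where the MTF falls to about 10% of its peak (picking that threshold is exactly the judgment call described above):

          import numpy as np

          fwhm = 1.6                                   # arcseconds (any value works, the result scales)
          sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))  # Gaussian sigma from FWHM

          # The Fourier transform of a Gaussian PSF is exp(-2 * pi^2 * sigma^2 * f^2).
          # Pick the frequency where it has dropped to ~10% of its peak value:
          attenuation = 0.1
          f_cut = np.sqrt(-np.log(attenuation)) / (np.pi * sigma * np.sqrt(2))

          # Nyquist: sample at twice the cutoff frequency -> pixel size is 1 / (2 * f_cut)
          pixel = 1 / (2 * f_cut)
          print(pixel)          # ~1.0 "/px for a 1.6" FWHM
          print(fwhm / pixel)   # ~1.6 -> the FWHM / 1.6 rule of thumb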
  21. I see. It is not an obsession with background levels. Read noise impacts the whole image, and the fact that the target is not bright is actually working against you in this case. If you want to show signal in an image it must be above the noise level, so all we really care about is SNR - signal to noise ratio. Here is the most important bit - this is what it is all about: "For a given amount of imaging time, the only thing that makes a difference to the final outcome, in terms of SNR, between using short and long subs is the level of read noise. If read noise were zero, there would be absolutely no difference between 0.1s subs and 1000s subs adding up to the same total imaging time."
      Since there is read noise, we are simply trying to select a sub length that won't make much difference compared to one long single exposure. This depends on the other noise sources in the image. By comparing read noise to LP noise (which is a direct consequence of the background level) we can calculate its impact. The background is there - it is simply there and we can't do much about it (except use some sort of filtering, if applicable). We focus on it here only because it lets us calculate the impact of read noise on the overall SNR - it is not about the background level itself. A sketch of the sub length calculation is below.
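      A minimal sketch of turning that comparison into a sub length, assuming the sky background rate in electrons per pixel per second has already been measured from a test sub (the numbers in the example are made up; the x5 factor is the swamping criterion discussed in the later post on read noise):

          def min_sub_length(read_noise_e, sky_rate_e_per_s, swamp_factor=5):
              """Sub length at which LP (shot) noise is swamp_factor times the read noise.

              read_noise_e     : camera read noise in electrons
              sky_rate_e_per_s : measured sky background in electrons / pixel / second
              """
              # LP noise is sqrt(sky_rate * t); require it to equal swamp_factor * read_noise
              target_sky_electrons = (swamp_factor * read_noise_e) ** 2
              return target_sky_electrons / sky_rate_e_per_s

          # Example: 1.7 e read noise, 2 e/px/s sky -> (5 * 1.7)^2 / 2 ~= 36 s subs
          print(min_sub_length(1.7, 2.0))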
  22. The best resource I've found is this: https://www.telescope-optics.net/ Probably not very well suited for beginners - but maybe have a look anyway?
  23. It simply does not work like that. A good sampling value is: measured FWHM / 1.6. If you measured a FWHM of 1.9", then a good sampling rate for such an image is 1.9 / 1.6 = 1.1875"/px.
      Don't think in terms of "how many pixels do I need to record a star". Optimum sampling is chosen so that you have the minimum number of pixels needed to reconstruct the image "function". Here is an example showing what happens with two stars. I made two artificial stars separated by 2.4" and blurred the image to a FWHM of 1.6"; here it is zoomed to 0.05"/px (hugely oversampled, but very "smooth" in terms of sampling): you see two stars "kissing", making a figure 8. If I take a cross section and measure the light intensity, I get a graph with two peaks - we can make out that there are two stars there.
      Ok, so I maintain that we don't actually need 0.05"/px - we only need 1"/px in this case, as the FWHM is 1.6" (remember FWHM / 1.6, and here that is 1.6" / 1.6 = 1"/px). The resulting image might not look like much, but the data is there: it contains almost all the information of the image above. Look what happens when I enlarge this small image back to the size of the original (mind you - a proper resampling method needs to be used when enlarging / reconstructing the original image) and inspect the profile again: it is almost the same as the function we already plotted. There is some small ringing, but that is because I'm using a Gaussian blur here while star profiles are not quite Gaussian in shape (a Gaussian goes off to infinity mathematically, a telescope PSF does not - there is an actual frequency cutoff at some point), and the resampling introduces some ringing as well. (A sketch reproducing this experiment is after this post.)
      Using 1/3 of FWHM will oversample by almost a factor of x2 - which will lower your SNR by x2 for the same imaging time. We often want "fast" systems - we purchase expensive "fast" scopes to minimize imaging time - and then go out, vastly oversample, and make our setups x4 slower than they need to be.
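      A minimal sketch reproducing that two-star experiment, assuming cubic spline interpolation as the "proper resampling method" (the post doesn't name the exact one it used):

          import numpy as np
          from scipy.ndimage import gaussian_filter, zoom

          # Fine grid: 0.05"/px over a 12" x 12" field
          scale_fine = 0.05
          n = int(12 / scale_fine)
          img = np.zeros((n, n))

          # Two point sources separated by 2.4"
          cy = n // 2
          img[cy, int((6 - 1.2) / scale_fine)] = 1.0
          img[cy, int((6 + 1.2) / scale_fine)] = 1.0

          # Blur to a FWHM of 1.6"
          sigma_px = (1.6 / 2.355) / scale_fine
          blurred = gaussian_filter(img, sigma_px)

          # Resample down to 1"/px, then back up to 0.05"/px
          coarse = zoom(blurred, scale_fine / 1.0, order=3)
          restored = zoom(coarse, 1.0 / scale_fine, order=3)

          # Cross sections through the stars - the two peaks survive the round trip
          print(np.round(blurred[cy, 80:161:8], 4))
          print(np.round(restored[cy, 80:161:8], 4))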
  24. Yes, really - that is nothing uncommon. The Atik has smaller pixels and still some read noise (compared to modern CMOS sensors). In CCD days it was not uncommon to use half-hour-long narrow band subs, because of the low LP noise that narrow band filters ensure (they block most of the LP spectrum).
      Why 25? That number is actually rather arbitrary, but the math is rather simple - noise adds like linearly independent vectors, so as the square root of the sum of squares. We can calculate the read noise impact over having just LP noise rather easily and turn it into a formula. Say that the read noise is X and that we pick some number N (above I used 5 as N - so x5 higher LP noise compared to read noise). We need to calculate the "total" noise, and that is straightforward:
      sqrt(X^2 + (N*X)^2) = sqrt(X^2 + N^2*X^2) = sqrt(X^2 * (1 + N^2)) = X * sqrt(1 + N^2)
      Total noise will be X * sqrt(1 + N^2), while the increase over just LP noise will be X * sqrt(1 + N^2) / (X * N) (total divided by just LP), or sqrt(1 + N^2) / N. Plugging in numbers will tell you how much higher the resulting noise will be; let's do it for 1, 2, 3, 4, 5, 6 to see what happens:
      1: sqrt(1 + 1) / 1 = sqrt(2) = 1.414... = 41.4% increase
      2: sqrt(1 + 4) / 2 = sqrt(5) / 2 = 1.118... = 11.8% increase
      3: sqrt(1 + 9) / 3 = sqrt(10) / 3 = 1.0541... = 5.41% increase
      4: sqrt(1 + 16) / 4 = sqrt(17) / 4 = 1.0308... = 3.08% increase
      5: sqrt(1 + 25) / 5 = sqrt(26) / 5 = 1.0198... = 1.98% increase
      6: sqrt(1 + 36) / 6 = sqrt(37) / 6 = 1.0138... = 1.38% increase
      So you see, you can use any of the above numbers if you accept a certain increase in noise over having just LP noise (the increases above are relative to a perfect camera with 0 read noise). Similarly to imaging time, at some point you enter the domain of diminishing returns. There is a very small improvement between 5 and 6 (which is quite a bit longer sub, as sub duration is related to the square of this number: 36/25 = 1.44, or a 44% increase in sub duration) and there is a question of whether it is worth it. Some people are ok with a ~5.4% increase and use the x3 rule instead of the x5 rule. I like to use the x5 rule - but again, if you can't make your subs long enough, don't sweat it; use as long as is sensible and make up the small SNR loss by adding a few more subs to the stack (if you really need to).
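      For convenience, a few lines that reproduce the table above (no assumptions beyond the formula already derived):

          import math

          for N in range(1, 7):
              increase = math.sqrt(1 + N**2) / N
              print(f"LP noise x{N} read noise -> total noise x{increase:.4f} "
                    f"({(increase - 1) * 100:.2f}% over LP noise alone)")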