Everything posted by vlaiv

  1. There is no different interpretation of the Nyquist sampling rate. It is very clear in what it states - you need to sample at x2 the maximum spatial frequency in the image, or in other words - 2 samples per shortest wavelength in the image. I'm not advocating 1"/px sampling - I mentioned that sampling rate as one that is exceptionally hard to attain as far as the signal in one's image goes. Here is a breakdown of what I'm saying, just to be clear:

1. The ideal sampling rate for a long exposure image is around x1.6 smaller than the average FWHM of the image. This is based on correct application of the Nyquist theorem in the frequency domain, approximation of the blur affecting the image with a Gaussian distribution (justified by the fact that both tracking and seeing errors are Gaussian in nature and overwhelm the Airy pattern on all but the smallest telescopes) and the limit imposed by noise. In simple words - sample at a rate x1.6 smaller than your FWHM. If your FWHM is 3.2" - then your optimal sampling rate is 3.2/1.6 = 2"/px. If your FWHM is 1.4" then your optimal sampling rate is 1.4"/1.6 = 0.875"/px, etc ...

2. Most amateur setups are simply not able to attain data with resolution higher than 1"/px - or in other words - most amateur setups produce stars with FWHM higher than 1.6" regardless of the sampling (unless severely under sampled). This is because of a couple of things: 1. aperture size, 2. seeing effects, 3. guiding performance.

Just to give you an illustration - imagine that you are imaging in superb seeing with 1" FWHM seeing influence, on an exceptional mount with 0.3" RMS performance (tracked or guided), with a 6" diffraction limited scope. The total FWHM that you can expect will be ~1.93". This is because the total Gaussian RMS is the square root of the sum of squares of the respective Gaussian RMS values:
- Seeing: 1" FWHM = 1/2.355 = ~0.425" RMS
- Mount: RMS is already given as 0.3"
- Airy disk: RMS is about 2/3 of the Airy disk radius (0.84/1.22 to be precise). For a 6" scope the Airy disk radius is 0.923", so the RMS is 0.635"

Now we just calculate the square root of the sum of squares of those: sqrt(0.635^2 + 0.3^2 + 0.425^2) = ~0.8209" RMS, and to get FWHM we multiply back by 2.355, so it is ~1.93".

In other words - if you have close to perfect conditions - the best you can capture is 1.93" / 1.6 = ~1.2"/px - which matches almost perfectly the resolution we have established for your image (about x2.5 over sampled given 0.47"/px sampling, or about 1.175"/px). This is not something that I'm making up - I can quote a source for every single statement.

And just to address your comment that a 1.4" FWHM star pattern at 1"/px sampling will be on a single pixel with some spillage ... 1.4/2.355 = ~0.6, so this Gaussian profile has a sigma of 0.6". One pixel is +/- 0.5" if we assume that the center of the Gaussian and the center of the pixel coincide, and 0.5/0.6 is 0.8333 sigma. Between -0.8333 sigma and +0.8333 sigma lies only about 60% of the signal in the 1D case - in the 2D case it is only about 36% of the total signal. So the star is definitely spread over many more pixels than a single one.
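If it helps, here is a small Python sketch of that arithmetic (the seeing, mount and aperture numbers are the assumed values from the illustration above):

```python
import math

# Combine independent blur sources as Gaussian RMS values (arcseconds),
# then convert back to FWHM and work out the optimal sampling rate.
seeing_fwhm = 1.0                        # superb seeing, FWHM in arcseconds
seeing_rms = seeing_fwhm / 2.355         # FWHM = 2.355 * sigma for a Gaussian
mount_rms = 0.3                          # guiding / tracking RMS in arcseconds
airy_radius = 0.923                      # Airy disk radius for a 6" scope
airy_rms = airy_radius * 0.84 / 1.22     # Gaussian approximation of the Airy core

total_rms = math.sqrt(seeing_rms**2 + mount_rms**2 + airy_rms**2)
total_fwhm = total_rms * 2.355
print(total_fwhm)                        # ~1.93"
print(total_fwhm / 1.6)                  # optimal sampling rate, ~1.2"/px

# Fraction of a 1.4" FWHM star's light falling on a single 1" pixel,
# star centred on the pixel (2D Gaussian is separable, so square the 1D result)
sigma = 1.4 / 2.355
frac_1d = math.erf(0.5 / (sigma * math.sqrt(2)))
print(frac_1d)        # ~0.60 -> 60% of the signal in 1D
print(frac_1d**2)     # ~0.36 -> 36% of the signal in 2D
```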
  2. Can you explain what "object S/N" is? How do you define it?
  3. I'd be more concerned with the gaussian filter in the workflow in light of the above discussion, to be honest. It is a deliberate reduction in resolution without any real reason to do so. Even if the data is over sampled - using a gaussian filter on it will further reduce detail. Now I wonder if just binning the data x3, instead of using a Gaussian with FWHM of 2-4 pixels (similar to the above median x3 but better behaved), would yield the same noise reduction of x4. Alternatively, I do wonder how a stack of such subs with selective bin x4 would behave (and also in terms of resolution).
  4. We have touched on theories of evolution in this thread, so maybe people knowledgeable in those could contribute more - but I think it is down to evolution for that one. Being interested in science is a wasteful activity in terms of energy expense. There are rare individuals that possess enough curiosity, and again I suspect that is an evolutionary thing - in order to move forward and evolve, curiosity is necessary. Who would try a new fruit if not curious - but it is a double-edged sword. If all possess some dose of curiosity and the fruit is poisonous - not good, but if none are curious - we might miss out on a very good fruit and further development. There is also another highly beneficial trait we have evolved - one that really gets on my nerves - it can be summed up as "monkey sees, monkey does". We have an extremely high tendency to just repeat knowledge / behavior without deep scrutiny. This is highly beneficial at an early age when we learn - it allows us to just adopt established knowledge and behavior - but at a certain age it starts being an impairment if we don't use reason to do deeper analysis of things.
  5. I'm not dismissive of his work related to spectroscopy, quite the opposite. I just fitted his workflow into long exposure imaging. The two are quite different as far as the conditions for algorithm application go. In the low light regime where SNR is below 1, doing a median on a single sub can be detrimental to the final result. Over sampling by a factor of x4 just to be able to do a median filter and later bin again is extremely impractical as far as long exposure imaging goes. I suspect that dithering is much more difficult with spectroscopy and stacking is not utilized often? In any case, before I make a qualitative statement on his method I'd need to assess a few things:
- how does the median filter reshape noise
- how does the median filter impact the resolution of the image (what is its frequency response, and can we even talk about a general frequency response or is it random in nature)
My gut feeling (an educated one, we might say) tells me that on both counts we might expect a surprise, possibly not a nice one (there might be side effects of using the method that we are not aware of). Here is an example: Gaussian noise at the top left, median x3 below that - frequency domain image at the top right, and the corresponding frequency domain image at the bottom right. As you can see - the median filter does a funny thing with the data and attenuates high frequencies in a weird way.
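For anyone who wants to reproduce that quick test, something along these lines will do (a minimal numpy/scipy sketch, not the exact images shown above):

```python
import numpy as np
from scipy.ndimage import median_filter

# Generate pure Gaussian noise, apply a 3x3 median filter and compare spectra.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (512, 512))
filtered = median_filter(noise, size=3)

def amplitude_spectrum(img):
    """Centred amplitude spectrum for visual inspection."""
    return np.abs(np.fft.fftshift(np.fft.fft2(img)))

spec_noise = amplitude_spectrum(noise)      # roughly flat at all frequencies
spec_median = amplitude_spectrum(filtered)  # high frequencies attenuated unevenly
```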
  6. They work in general, and visual will benefit too if the conditions for their use are right.
  7. I think that there are better methods - I do agree on the issue with telegraph noise if the camera is susceptible to that - but selective binning is a better way of dealing with it than using a median filter before binning. The median filter acts as a noise shaper and reshapes the noise in funny ways, and if possible it would be best to avoid it, as algorithms further down the pipeline often assume noise of a certain shape (most notably Gaussian + Poisson). In any case - telegraph noise can be dealt with in a manner similar to hot pixels - by using sigma clipping. If enough samples show that there should not be an anomalous value there - then the anomalous value should be excluded from the bin / stack (rather than all values being replaced with some median value).
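Roughly what I mean by sigma clipping across the stack, as a minimal single-pass sketch (real stacking software iterates and handles small stacks more carefully):

```python
import numpy as np

def sigma_clipped_mean(stack, kappa=3.0):
    """Per-pixel kappa-sigma clipped average of a stack of aligned subs.

    stack : array of shape (n_subs, height, width)
    kappa : rejection threshold in standard deviations
    """
    stack = np.asarray(stack, dtype=float)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    keep = np.abs(stack - mean) <= kappa * std          # reject anomalous samples
    kept_sum = np.where(keep, stack, 0.0).sum(axis=0)
    kept_count = np.maximum(keep.sum(axis=0), 1)
    return kept_sum / kept_count
```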
  8. Well, that is the key. You spread the same amount of photons over more pixels with a higher sampling rate.
- Photon shot noise is related to the number of photons that land on a pixel - it is equal to the square root of the number of photons
- Dark current noise is not related to resolution - it is per pixel and always the same per pixel
- Read noise is not related to resolution - it is again per pixel

Say that you have 100 photons land on either 4 pixels or 25 pixels. Let's say that dark noise and read noise are 0 for this case (they only contribute to SNR drop - they can't improve it as they just add noise). The 4 pixels will have 25 photons each, so the signal is 25 and the shot noise is the square root of the signal (Poisson distribution) - 5. SNR is therefore 25 / 5 = 5. In the second case we have 25 pixels, so each gets 4 photons (100/25 = 4). The signal is 4 and again - the Poisson distribution tells us that the noise is the square root of that - or 2. SNR here is 4/2 = 2.

Look! Just by using 25 pixels instead of 4 pixels - and that is over sampling by x2.5 (2x2 square vs 5x5 square) - we went from SNR 5 to SNR 2 for the same number of photons (same exposure length with the same aperture, light pollution, transparency and the rest). You would need roughly x6.25 more imaging time to compensate!

Wrong again. Software binning works, it works predictably, and it works much like hardware binning as far as SNR goes. The only difference is that it does not reduce per pixel read noise. With hardware binning you read out fewer pixels and each pixel has the read noise that it has. With software binning you read out more pixels and each has its read noise - so you end up with more read noise with software binning - but that is something you can control with your choice of exposure length: as long as you swamp read noise with LP noise, it makes no noticeable difference. Software binning works much like stacking works - averaging of samples reduces noise and keeps the average signal the same, so SNR goes up. No magic in it - that is how hardware binning works after all - there is really no difference between software and hardware binning apart from some technical aspects (the read noise point above vs more flexibility with software binning).

In fact - you don't need to bin your data if that worries you. You can simply split all your sub exposures into 4 separate subs - each containing odd / even pixels. That way you don't change any of the pixel data - each of these 4 new subs has twice lower sampling rate (as binning does) - and you end up with x4 more subs to stack, which improves SNR by x2 - same as binning 2x2. Binning is simply the same thing (or rather a very, very similar thing) as stacking.

Well, not true. Binning and resampling are not the same thing. Binning is a form of resampling, but not all resampling is the same. In order to achieve the wanted SNR you need to bin / resample the data while it is linear, before you start processing it. The difference between binning and resampling is in pixel to pixel correlation. Resampling also improves SNR - but at the expense of pixel to pixel correlation (it introduces a kind of blur into the image) and it does so with a trade off. Most resampling algorithms don't improve SNR as much as binning does - they do it less and unpredictably. Binning always improves SNR by a set factor (and that is the bin size - for bin x2 the SNR improvement is x2, bin x3 = SNR x3 and so on). There are resampling methods that improve SNR even a bit more than binning, but they do that at the expense of resolution - they blur the image in doing so.
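Here is a minimal Python sketch of that splitting trick, with the shot-noise arithmetic from the example spelled out (the sub here is synthetic, just for illustration):

```python
import numpy as np

# Shot-noise arithmetic from the example above:
# 100 photons over 4 pixels  -> 25 per pixel, SNR = 25 / sqrt(25) = 5
# 100 photons over 25 pixels ->  4 per pixel, SNR =  4 / sqrt(4)  = 2

def split_sub(sub):
    """Split one sub into 4 half-sampling-rate subs, one per pixel parity.

    No pixel value is altered; stacking the four results improves SNR by
    roughly x2, the same as 2x2 binning."""
    return [sub[0::2, 0::2], sub[0::2, 1::2],
            sub[1::2, 0::2], sub[1::2, 1::2]]

# Example with a synthetic, Poisson-distributed sub:
sub = np.random.default_rng(0).poisson(100, (1000, 1000)).astype(float)
quarters = split_sub(sub)   # each is 500x500, i.e. twice coarser sampling
```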
Even binning by addition blurs the data very slightly - because of the already mentioned pixel blur. The larger the pixel - the larger the pixel blur, but it is still much lower than the other blur contributions discussed. In fact - the method of splitting subs is the only one that lets you have your cake and eat it too - it improves SNR but does not introduce any additional blur, even in the form of pixel blur. Perks of software binning and of choosing how best to do it.

And once again - you are wrong. Sharpening and deconvolution are really the same thing, only done with different algorithms. If you remember my explanation above with multiplication by a certain filter form in the frequency domain - deconvolution is the inverse operation. Convolution / deconvolution are related to multiplication / division in the frequency domain (see the convolution theorem https://en.wikipedia.org/wiki/Convolution_theorem ).

The fact that you over sampled simply means that you have zeros or noise past one point in the graph: above is the frequency graph of your data and its sharpened form. The blue line is proper sampling with respect to the 0 / noise point and the red line is over sampling. You restore low frequencies properly by dividing the data by the appropriate blur kernel FT, but what do you end up with if there is no data to restore? You have 0, or in the real case just some noise at those scales - the signal is much lower than the noise. You end up with amplified noise at those frequencies - that is all.
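To make that last point concrete, here is a toy 1D model (all curve shapes are assumed, purely for illustration) of what division by the blur response does once the signal has dropped below the flat noise floor:

```python
import numpy as np

# The blur attenuates the signal with frequency while the noise floor stays
# flat, so dividing by the blur's frequency response (deconvolution) mostly
# amplifies noise where the signal has already fallen below it.
freq = np.linspace(0.0, 0.5, 256)              # spatial frequency, cycles per pixel
object_spectrum = np.exp(-(freq / 0.08)**2)    # assumed object spectrum
blur_mtf = np.exp(-(freq / 0.12)**2)           # assumed Gaussian blur response
noise_floor = np.full_like(freq, 1e-3)         # flat (white) noise

recorded = object_spectrum * blur_mtf + noise_floor
sharpened = recorded / blur_mtf                # sharpening = division by the MTF
noise_gain = 1.0 / blur_mtf                    # how much the flat noise gets boosted
```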
  9. Do you mind sharing your raw data for FWHM measurement, like you've been asked before? No need to upload the whole image - just a crop containing a few stars will be enough.
  10. Ok, you say that you have the ability to "resolve" at 0.47"/px or whatever your rate is - and I'll show you that this is not true. To resolve means to distinguish between two or more entities (not to record or detect, as some people mistakenly believe - you don't resolve the Cassini division for example - you detect it). Look at the following example: left is Hubble's M51 reduced to about the same size as your image, and right is your image at 100% zoom. The Hubble image clearly resolves a group of stars at this sampling rate. There is no mistaking what those are. In your image we simply see a blob of something - no way of telling if it is dust or nebulosity or a galaxy (like the galaxy next to it - the shape is 100% the same).

To understand at what scale you have achieved the resolution of your system - you can run a set of comparisons. Do a progressive reduction of the size of your image (resample it - use Lanczos resampling - for example in the free software called IrfanView) and compare it to a Hubble image of the same size. As long as the Hubble image looks sharper - you have not matched the sampling rate to the resolution of the data - or in other words - at that sampling you could have more detail than your image contains (as the Hubble image clearly shows).

Here is an example of this procedure with your image - first at 100% as you have recorded it: left is your image and right is Hubble (which I converted to black and white to be more similar and more comparable). It is clear that your image does not contain data that needs to be recorded at this resolution - the Hubble image is sharper. Now let's look at the images at 50% resampled size: ok, now we are getting somewhere. They look more like each other, but the Hubble image is still sharper by a tiny bit. This means that your image does not have sharp enough data (or detail) to need recording at this resolution either. Let's do another round and this time reduce it to say 40%: I would say that now the detail in the two is about the same. There are a few differences - but mostly due to processing and level of contrast - not in detail.

This simply matches what I've shown you with frequency analysis - you are over sampled by a factor of ~x2.5 in this image. If your original sampling was 0.47"/px then in reality what you've achieved is 0.47"/px * 2.5 = 1.175"/px. Don't get me wrong - I think that is superb resolution for an amateur setup. Like I've mentioned - those who approach 1"/px in actual captured detail are at the top of what 99% of amateurs can hope to achieve. The thing is - you wasted a lot of SNR getting there because you sampled at 0.47"/px. Again, you don't have to image at 1.175"/px - you can just choose a bin factor close to that and simply bin your data in processing. You won't lose detail because there is no detail there - just look at the images above.
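If you prefer to script that comparison instead of using IrfanView, a minimal Python/Pillow sketch of the same procedure would look like this (file names are placeholders):

```python
from PIL import Image

# Progressively shrink the image with Lanczos resampling and compare each
# version against a reference image reduced to the same size.
img = Image.open("your_image.png")
for scale in (1.0, 0.5, 0.4):
    w, h = img.size
    reduced = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    reduced.save(f"your_image_{int(scale * 100)}pct.png")
```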
  11. Over sampling is one of the main adversaries of going deeper - people just waste SNR in the hope of capturing some detail that is simply not there.
  12. What method of downsampling did you use? Resampling can and will introduce additional blur if a simple resampling method is used (namely bilinear or bicubic) - in fact, there is a thread over here where I discuss resampling algorithms and their frequency response curves. Here is a comparison of attenuation curves for different resampling algorithms: bilinear, bicubic, cubic spline, quintic spline. If you want to preserve as much detail as possible with interpolation - use a sophisticated algorithm like Lanczos, Mitchell-Netravali / Catmull-Rom, or a B-Spline of high degree - but in general I think that Lanczos is the best for astronomical images.
  13. Yes, the 1.6 factor is derived by taking a Gaussian with a certain FWHM, doing an FFT of that Gaussian, finding where it drops to 10% in the frequency domain, taking that frequency as the cutoff, converting it back into the spatial domain and comparing to the original FWHM value - and we get a factor of 1.6. The Nyquist sampling theorem always states to sample at twice the maximum frequency component we want to capture; that is included in the above calculation when we return to the spatial domain from the frequency domain.

The only difference between "pure" Nyquist sampling and the above case is the choice of cut off frequency. In the "pure" case - we choose the cut off at the place where it literally cuts off any further signal - frequency components hit zero and remain zero - that is why we call it the highest frequency component, as all others have no value (or have a value of 0 and don't contribute to the sum in the FT). In this case - we choose the cut off frequency not based on that criterion but based on the criterion that above this threshold any signal we might capture will simply be eaten up by noise and there is no point in capturing it.

In theory - you could sample all the way up to the planetary critical frequency, which depends only on aperture - and in lucky imaging we do just that - but there we can do it because: a) the planetary signal is much, much stronger, b) we tend to minimize the impact of atmospheric blur and remove the impact of mount induced blur, so our Gaussian is much, much "sharper" in the spatial domain, which makes it much, much broader in the frequency domain - it attenuates much more slowly, c) we stack thousands of subs to improve SNR. This gives us a chance to sharpen all the way - and even then we often hit the noise wall in how much we can sharpen things up.

In long exposure imaging we have the opposite on all counts:
- the signal is much weaker
- we collect the bulk of the seeing and mount performance influence in our long exposure
- we stack only hundreds of subs to improve SNR
This is why we need to "cut our losses" at some point and accept that we simply can't sharpen the data past some point, as we don't have good enough SNR.
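For completeness, here is that derivation as a short Python sketch - note that the resulting factor is independent of the FWHM you start from:

```python
import numpy as np

# Gaussian star profile -> Fourier transform (also a Gaussian) -> find where it
# drops to 10% -> apply Nyquist (sample at twice that frequency).
fwhm = 3.2                            # example FWHM in arcseconds
sigma = fwhm / 2.355                  # Gaussian sigma
# FT of a unit Gaussian: G(f) = exp(-2 * pi^2 * sigma^2 * f^2)
# Solve G(f_cut) = 0.1 for the cut off frequency:
f_cut = np.sqrt(np.log(10.0) / 2.0) / (np.pi * sigma)
# Nyquist: sample at twice the cut off -> pixel size is 1 / (2 * f_cut)
pixel_size = 1.0 / (2.0 * f_cut)
print(pixel_size)                     # ~2.0 "/px for a 3.2" FWHM
print(fwhm / pixel_size)              # ~1.6, regardless of the FWHM chosen
```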
  14. ImageJ is packed with all sorts of tools that you'll find useful for this. It is free / open source software aimed at scientific microscopy image processing - but it is loaded with plugins (which you can choose to install) and offers many tools that you can use in general image analysis.
  15. Not quite. Here is an explanation of how it works. The optical system and seeing attenuate high frequencies (and the aperture at one point cuts them off completely), so if we examine our image in the frequency spectrum we will have something like this: the first graph is the representation of our image in the frequency domain. This is the image unaltered by the optical system and atmosphere (think a Hubble class image outside of our atmosphere). The second graph is the attenuation function, or filter function, in the frequency domain. It works much like an equalizer for sound - it shows, on a scale of 0-1, how much a given frequency is attenuated. The third function is the result of modulation of the original image by the optical system and atmosphere. In the spatial domain the operation is called convolution (application of blur), which is equal to multiplication in the frequency domain - the third graph is (or at least it should be, disregard my drawing skills) the first two functions multiplied.

My proposal - which translates to using a sampling rate x1.6 less than your FWHM (so your understanding of that part is correct - if FWHM is 3.2" you should sample at 2"/px) - is to take the frequency cutoff point at the place where optics+atmosphere drops below 10%. Now, pure Nyquist says that you need to choose the place where this function drops to 0. That is the proper place to be in order to record all that there is, but I've modified this due to several things:
1. We are assuming a Gaussian distribution of the star profile when we talk about FWHM - and a Gaussian distribution never hits 0 in the frequency domain, so it is illusory to try to take the value where it hits 0, as that never happens.
2. We must consider the fact that our images are polluted by noise, and this noise pollution is not affected by the blur - it happens later.

There is another interesting fact about the noise - it has equal energy density at all scales. If you do an FFT of pure noise - you'll get something like this: it will be random in intensity but the average intensity will be the same regardless of frequency. Now - remember that SNR we so love? In the frequency domain we have a curious thing - the higher the frequency - the more the signal is attenuated, but the noise remains the same - so SNR per frequency component goes down. In fact - SNR above that cut off point is at least x10 lower than where the signal is close to 100% in strength - and you need at least x100 the exposure time to be able to sharpen above those 10%.

By the way - sharpening is doing the inverse operation to that multiplication - we divide by the same frequency response of the blur (which is why sharpening is tricky - most often we don't know the exact shape of the blur). There is a second part to that - when you divide by something smaller than 1 - you actually amplify the value - so we amplify both signal and noise, and since the noise is not falling off - it stays constant - the more we sharpen, the more noise will appear. So that 10% cutoff was chosen as the place where noise will simply take over any sharpening attempts and you would need a crazy number of subs (on the order of a few thousand) just to approach detectable levels of SNR after sharpening (SNR ~= 5). Hope this makes sense.
  16. By the way @CCD Imager if you want to test whether your image is over sampled - there is another handy method - just do a frequency analysis of the image - here is an example: you see that concentration of the signal in the center and the "empty" noisy space around it - well, that shows over sampling. That circle should extend right to the edge for a properly sampled image. Here, look what happens when I resample that image and do the same: the left one is the FFT of the image down sampled x2 and the right one is the FFT of the image down sampled x3. For reference - here is where the signal ends in the frequency domain: so x2 reduced (or 50%) is still not quite properly sampled, but x3 is under sampled, as there is clipping in the frequency domain.
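The frequency analysis itself is trivial to do yourself - a minimal sketch (load your stacked, ideally still linear, image into a 2D array first, e.g. with astropy.io.fits):

```python
import numpy as np

def log_amplitude_spectrum(img):
    """Log-scaled, centred amplitude spectrum of a 2D image for visual inspection."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spectrum))

# If the bright central disc of signal fades into flat noise well before the
# edge of the spectrum, the image is over sampled; in properly sampled data the
# signal extends close to the edge (the Nyquist frequency).
```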
  17. Proper sampling frequency is simply related to the quality of the data - regardless of how good or bad that data is. Over sampling is much, much more detrimental to astronomical data than under sampling is. Here is an example: this is the image that you posted. The left one is the original, the middle one was downsampled to 50% and then upsampled - so effectively it has only half the sampling rate of your original data. Virtually no change there. The third one was downsampled to 33% of the original - and yes, there is some change in the level of detail - the absence of sharpness is obvious - but this is a x3 coarser sampling rate. The first bit shows that your image is over sampled by a factor of x2 - and the second part shows that there is only a slight loss of sharpness when you under sample. In fact, if I present the image at the proper sampling rate - it will be even less obvious: the left image is your image at the proper sampling rate, and the right one has been under sampled by a factor of x3 compared to your original image, or x1.5 compared to proper sampling. Hardly any difference in detail. In any case - over sampling brings a needless loss of SNR and for that reason is a much bigger problem than slight under sampling (if any).
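The round-trip test is easy to reproduce - a minimal Python/Pillow sketch (file name is a placeholder):

```python
from PIL import Image

# Halve the sampling rate, scale back up and compare with the original; if
# nothing visibly changes, the original was over sampled by at least that factor.
img = Image.open("original.png")
w, h = img.size
half = img.resize((w // 2, h // 2), Image.LANCZOS)
roundtrip = half.resize((w, h), Image.LANCZOS)
roundtrip.save("roundtrip_half_rate.png")
```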
  18. Most of what you need to know about the Nyquist sampling theorem can be found here: https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem It's not that hard to read up on a proven mathematical theorem, when it is applicable and how to apply it. I never stated "my theory", so please refrain from such comments. If you wish to know what I base my recommendation to sample at x1.6 of FWHM on - I'm happy to explain the details - it is fairly simple to follow for anyone "knowledgeable" enough. The explanation goes like this:
- A Gaussian form does not qualify for the Nyquist sampling theorem as it is not a band limited signal.
- A star profile is a band limited signal because it was captured with an aperture of limited size and is not a Gaussian form. It is in fact a convolution of the Airy pattern with at least two Gaussian forms (seeing and mount performance). This is for a perfect aperture and perfectly random tracking error (seeing is believed to be random enough over the course of a long exposure to qualify for the central limit theorem). Optical aberrations and less than perfect tracking will only lower the resolution of the image, so we are aiming for the best case scenario. The Airy pattern is a band limited signal and therefore its convolution with other signals (multiplication in the frequency domain) will be a band limited signal.
- As such we must select the sampling limit in such a way as to most closely restore the original signal (https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem#Sampling_below_the_Nyquist_rate_under_additional_restrictions).
- In an ideal world, regardless of other factors - we would need to sample at the rate determined by the aperture of the telescope, as that determines the true cut off frequency. However, in the real world - seeing and mount performance dominate the Airy pattern enough that star profiles are well approximated by a Gaussian profile, and we have the presence of noise.
- In such conditions, a sensible cut off point is one where frequencies are attenuated by more than 90% in the frequency spectrum (sharpening past that point will simply bring too much noise forward).
- From the above conditions it is fairly easy to find the proper sampling frequency - we take a Gaussian shape of a certain FWHM, do the Fourier transform of it (which is again a Gaussian function), find at which point it falls below the 10% threshold (i.e. more than 90% attenuation) and set our sampling rate at twice that frequency.
  19. Yes, but, as we have discussed before - sampling with an area sampling device is the same as sampling a function that has been convolved with the appropriate pixel response - which in general lowers the resolution of the data. In magnitude it is a rather small contribution, and it further blurs the image, thus lowering rather than enhancing the resolution of the image.
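To put a number on that pixel blur, here is a small sketch of the sinc-shaped frequency response of a square pixel (pixel size is an assumed value):

```python
import numpy as np

# A square pixel of pitch p acts as a mild sinc-shaped low-pass filter.
p = 1.0                               # pixel size in arcseconds (assumed)
freq = np.linspace(0.0, 0.5 / p, 101) # spatial frequency up to Nyquist
pixel_mtf = np.abs(np.sinc(freq * p)) # np.sinc(x) = sin(pi*x) / (pi*x)
print(pixel_mtf[-1])                  # ~0.64 at Nyquist - small next to seeing blur
```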
  20. Could you please provide a fits file then, to verify the 1.5" FWHM claim?
  21. Incorrect. The Nyquist sampling theorem clearly states what it applies to - any band limited signal / function (which really means that the Fourier transform of that function has a maximum frequency after which all values of the Fourier transform are zero).
  22. Here is a profile plot of one star in the center. For reference, here is what the linear data looks like (so you can see the core) and what the profile of that star looks like in the linear data - this is from a sub with similar resolution of about 2.6" FWHM.
  23. In either case - a quick measurement (though not an accurate one, as the data does not seem to be linear) gives an average FWHM of 5.6 px, or with sampling of 0.47"/px - a FWHM of ~2.6".
  24. Can you provide a linear, unaltered sub? This sub has been stretched and modified in image manipulation software.