Everything posted by vlaiv

  1. Just for reference - this is what happens to a properly sampled image when you do it:
  2. @Dan_Paris Have a look at this and tell me what you think: the top image is the bin1 image and its Fourier transform. Bottom right is that same Fourier transform with all values above half the sampling frequency (the frequency that corresponds to x2 coarser sampling) set to zero. I effectively removed all the higher frequencies that would not be captured if you sampled at half the current rate. Then I did the inverse FT of that - which is the bottom left image. Can you tell the difference in resolution between the top and bottom images? And mind you - this was done on processed data, not even on 32-bit floating point data with much higher precision, yet the result is evident.
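The experiment described above can be sketched in a few lines of NumPy. The image here is a synthetic Gaussian blob standing in for the bin1 data, so the array size, blob width and noise level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "properly sampled" image: a smooth (band-limited) blob plus a
# little noise - an assumed stand-in for the bin1 data from the post.
x = np.linspace(-1, 1, 256)
xx, yy = np.meshgrid(x, x)
img = np.exp(-(xx**2 + yy**2) / 0.05) + 0.01 * rng.standard_normal((256, 256))

# Forward FT, zero everything above half the sampling frequency
# (i.e. above the Nyquist frequency of a x2 coarser grid), inverse FT.
f = np.fft.fftshift(np.fft.fft2(img))
n = img.shape[0]
freqs = np.fft.fftshift(np.fft.fftfreq(n))          # cycles/pixel, in [-0.5, 0.5)
fx, fy = np.meshgrid(freqs, freqs)
mask = (np.abs(fx) <= 0.25) & (np.abs(fy) <= 0.25)  # keep only <= half of Nyquist
filtered = np.fft.ifft2(np.fft.ifftshift(f * mask)).real

# If the image really was oversampled, removing those frequencies
# changes almost nothing - the residual is just removed noise.
print(np.max(np.abs(img - filtered)))
```

With an oversampled image like this one the maximum per-pixel difference stays at the noise level, far below the signal amplitude.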
  3. @Dan_Paris This is why I asked you to show me a feature or star pair that is resolved in one image and not in the other. The level of processing and handling of the data will make the apparent sharpness differ. Even the presence of noise can make something look sharper - but that does not mean you actually have that distinction in the data.
  4. Those two crops were taken from your comparison images, by the way, so one is bin1 and the other is bin2.
  5. Or perhaps this one: out of these two crops - which one is bin1 and which one is bin2?
  6. Compare the stars in these two images. Do you see the same difference in resolution as in your example?
  7. @Dan_Paris Just in case you don't understand what I meant by proper handling - have a look at this: the same image can be enlarged by a crude algorithm like nearest neighbor - where pixelation artifacts are visible - but it can also be enlarged by a sophisticated algorithm like Lanczos. There is quite a striking difference between the two, right? Your image of the bin2 data shows pixelation artifacts - which means you were not careful about how you enlarged it for comparison with the other image.
  8. Can you point to detail that is present in the bin1 image and not in the bin2 image? A pair of stars that is resolved in one vs the other, or a feature visible in bin1 but not in bin2? The only thing I can see in the bin2 image are signs that you did not handle it properly: it was scaled up to the size of the bin1 image using crude nearest neighbor resampling instead of a sophisticated algorithm like Lanczos.
  9. Everything I say on this matter is backed by relevant sources and by examples. Here is one - you don't think that software binning works to improve SNR, right? Here is a test image - it has a measured signal of ~1.01675 and noise of 1.0171, so SNR is 1.01675 / 1.0171 = ~1. Let's bin that data x2 and x3 - according to mathematical theory, the SNR improvement will be x2 and x3, regardless of the fact that we are binning in software after acquisition. Interestingly enough, we now get 1.0144 / 0.504 = ~2 and 1.1044 / 0.3484 = ~3.17. Pretty much as the math predicted. We don't have to do this on synthetic data - we can use real data. Here is an example: this is part of M51. A single sub. Look what happens when I bin it 8x8. That is a single sub! Suddenly I can show the tidal tails around the galaxy, while on the single sub I was struggling to show the spiral arms.
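The software-binning test above can be reproduced with a few lines of NumPy. The frame below is synthetic (signal 1, noise sigma 1), so the exact SNR figures are assumptions that merely mirror the numbers in the post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic frame: constant signal 1.0 plus Gaussian noise of sigma 1.0,
# so per-pixel SNR is ~1, like the test image in the post.
img = 1.0 + rng.standard_normal((1200, 1200))

def bin_image(data, n):
    """Software bin n x n by averaging: signal unchanged, noise divided by n."""
    h, w = data.shape
    return data[:h - h % n, :w - w % n].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

for n in (1, 2, 3):
    b = bin_image(img, n)
    snr = b.mean() / b.std()
    print(f"bin {n}x{n}: SNR = {snr:.2f}")   # ~1, ~2, ~3, as the math predicts
```

Averaging n x n pixels keeps the mean signal while dividing uncorrelated noise by n, hence the x2 and x3 SNR improvement for bin2 and bin3.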
  10. Rescaling is not the same as binning and is in principle worse: it does not produce the same SNR improvement and it introduces pixel-to-pixel correlation. However, @Dan_Paris claims that there is no benefit in binning CMOS data, and that optimum sampling is x3 FWHM, so I'm not sure there is much point going into a discussion about all of that, as I've experienced similar claims regardless of all the evidence and resources presented:
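One way to see the pixel-to-pixel correlation that rescaling introduces is to upscale pure white noise and measure the correlation between neighboring pixels. This is only a sketch - the hand-rolled x2 bilinear upscale below is an illustrative stand-in for a real resampler:

```python
import numpy as np

rng = np.random.default_rng(2)
noise = rng.standard_normal((200, 200))

def upscale2_bilinear(a):
    """Crude x2 bilinear upscale: insert averaged midpoints along each axis."""
    cols = np.empty((a.shape[0], 2 * a.shape[1] - 1))
    cols[:, 0::2] = a
    cols[:, 1::2] = (a[:, :-1] + a[:, 1:]) / 2
    out = np.empty((2 * a.shape[0] - 1, cols.shape[1]))
    out[0::2] = cols
    out[1::2] = (cols[:-1] + cols[1:]) / 2
    return out

def neighbor_corr(a):
    """Correlation between horizontally adjacent pixels."""
    return np.corrcoef(a[:, :-1].ravel(), a[:, 1:].ravel())[0, 1]

print(neighbor_corr(noise))                      # ~0: white noise is uncorrelated
print(neighbor_corr(upscale2_bilinear(noise)))   # clearly positive: interpolation correlates neighbors
```

Binning averages disjoint pixel groups, so the binned pixels stay independent; interpolation mixes overlapping neighborhoods, which is where the correlation comes from.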
  11. You know that it was a blue moon several days before that? Maybe some residual glow from that
  12. That is very easy to achieve - simply rig your system to sample at, say, 0.6"/px and then use the same data binned x2 for 1.2"/px and examine all aspects of the data - FWHM, SNR after stacking, compare FTs of both images, and of course compare the final products of processing at both resolutions to get a feel for it (enlarge the smaller image to match the size of the larger and vice versa - reduce the larger image to match the size of the smaller).
  13. Ok, so here is a very simple example showing FWHM vs sampling rate. This is the baseline: left is a high resolution image and right is its Fourier transform. Now I'm going to apply blur of different FWHM sizes to the image so we can see the results. FWHM of 2 (2 pixels per FWHM): FWHM of 3 (3 pixels per FWHM): FWHM of 6: As we increase FWHM - we simply shrink the frequency response to a smaller area. But look what happens if I resample the image that was blurred to 6px FWHM to 1/3.75 of its size (6/1.6 = 3.75, so x3.75 smaller): it again has a "full" frequency spectrum and the data goes all the way to the edge. If I instead reduce that image by a factor of 6/3.3 (so having 3.3 samples per FWHM instead of 1.6), we again have empty space - just no data at the high frequencies. Ok, I'm done explaining. The math is out there for anyone interested - whatever I'm saying I can back up with actual sources for further understanding. I simply don't want to spend any more time on this subject.
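The "FWHM shrinks the frequency response" point can be checked numerically. For a Gaussian blur the transfer function is analytic (exp(-2 pi^2 sigma^2 f^2) with sigma = FWHM / 2.355), so this sketch just finds where the response falls below 1% of its peak - the grid size and threshold are assumptions:

```python
import numpy as np

def mtf_cutoff(fwhm, n=256, thresh=0.01):
    """Frequency (cycles/pixel) at which a Gaussian blur's transfer
    function drops below `thresh` of its peak."""
    sigma = fwhm / 2.355                       # FWHM -> Gaussian sigma
    f = np.abs(np.fft.fftfreq(n))              # 1-D is enough: the transform is separable
    mtf = np.exp(-2 * np.pi**2 * sigma**2 * f**2)
    return f[mtf >= thresh].max()

for fwhm in (2, 3, 6):
    print(f"FWHM {fwhm}px: response below 1% beyond ~{mtf_cutoff(fwhm):.3f} cycles/px")
```

The cutoff shrinks steadily as FWHM grows; at 6px FWHM it sits well inside 0.25 cycles/px, which is why the x3.75 downsample in the post loses nothing.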
  14. And he uses bicubic resampling. I've already linked to and explained how important the choice of resampling method is. I can perform exactly the same experiment with different algorithms and get completely different results.
  15. Because 3.8um x 3.8um is the spacing between the point samples that you will get - it is not the actual physical size of the pixel. The physical size of the pixel is smaller than that (as you have seen from the images above - it depends on the sensor), and the difference between the area of that pixel's square and the actual light-gathering area of the pixel is folded into QE - no pixel has 100% QE (even at peak sensitivity) because of this. Anyway - once you get your data, you only have data points; you no longer have dimensions and should not think in terms of square pixels - it will just confuse you. They are samples without any physical size - they only have an x and y location and a measured intensity.
  16. Pixels are not squares! Pixels are point samples. People get the idea that pixels are squares because of nearest neighbor resampling, but use any other resampling method and you'll see that pixels are not squares. Ok - here is a simple exercise you can perform to show whether pixels are in fact little squares or not: take an image with one pixel lit up - enlarge it by 1000% so you can "see the pixel" using nearest neighbor interpolation - you get a "square", as you would expect "because the pixel is a square", right? - rotate that image by 45 degrees - enlarge that image by 1000% - you should see a square rotated by 45 degrees, right? What, still a "regular" square? What gives? Isn't the pixel a square? No - it's a point. It becomes a square when you enlarge it using a "silly" interpolation method. Look what happens when you enlarge it using a more advanced interpolation method: it's now this rounded thing with some ringing around it. What? I thought it was a square. Pixels are not squares, they are point samples - no size, no width. On the camera, yes, there is a physical element - but even there pixels are not squares; they are more round, like small glass windows (depending on technology and the micro lenses applied):
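The "square vs point" contrast can be sketched in 1-D: reconstruct a single lit sample on a finer grid once with nearest neighbor and once with a Lanczos-3 kernel (the standard windowed-sinc form). The grid spacing here is an arbitrary choice:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-3 kernel: a windowed sinc, the 'advanced' resampler in the post."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)   # np.sinc is the normalized sinc sin(pi x)/(pi x)
    out[np.abs(x) > a] = 0.0
    return out

# One lit "pixel" (a point sample at position 0), reconstructed on a 10x finer grid.
fine = np.linspace(-4, 4, 81)

nearest = (np.abs(fine) <= 0.5).astype(float)   # nearest neighbor -> a hard-edged "square"
smooth = lanczos(fine)                          # Lanczos -> rounded peak with ringing

print(smooth.min())   # dips below zero: that is the ringing visible in the post's image
```

The nearest-neighbor reconstruction is a flat-topped box; the Lanczos reconstruction of the very same sample is a rounded lobe with negative side lobes, which is exactly why "the pixel" stops looking square.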
  17. Do look it up. Here is a scientific paper describing how to use that software: https://iopscience.iop.org/article/10.3847/1538-3881/153/2/77/pdf
  18. I'm sorry to say - that is complete nonsense and an utter misunderstanding of the Nyquist sampling theorem. I don't understand why people insist on relating the x2 to something in the spatial domain when the theorem is clear about what it says: it is not twice the FWHM, it is not twice the Rayleigh criterion, it is not twice the Gaussian sigma - it is none of that. You need to perform a Fourier transform of the signal to find out where the cutoff frequency is and then use twice that value to determine the sampling rate. As for the 2D case, here is a simplified proof that the x2 max frequency rule still stands even for a non-optimal rectangular sampling grid (the optimal sampling grid for the 2D case is actually hexagonal, but that is a different matter). For a sine wave that is either vertical or horizontal - we have a reduction to the 1D case. Any wave that is at an angle, so neither horizontal nor vertical, will be sampled at a higher rate in X and Y than its wavelength suggests: the wavelength of any wave at an angle produces sine waves in X and Y with longer wavelengths, so if you sample at twice per green arrow in the X and Y directions - you'll produce more than two samples per blue arrow (or in fact along the X axis).
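The geometric argument about rotated waves reduces to simple trigonometry: a plane wave of wavelength lam at angle theta has a frequency vector of magnitude 1/lam, but its x and y components are cos(theta)/lam and sin(theta)/lam, each at most 1/lam. A quick numerical check (the wavelength value is illustrative):

```python
import numpy as np

lam = 10.0                                   # wavelength in pixels (arbitrary example)
for theta_deg in (0, 30, 45, 60):
    t = np.radians(theta_deg)
    fx = np.cos(t) / lam                     # frequency component seen along the x axis
    lam_x = np.inf if fx == 0 else 1 / fx    # wavelength projected onto the x axis
    print(f"{theta_deg:>2} deg: fx = {fx:.4f} (max {1/lam}) -> wavelength along x = {lam_x:.2f} px")
```

Since every rotated wave projects to a longer wavelength along each axis, a grid that samples x and y at twice the maximum frequency automatically satisfies Nyquist for the rotated waves too.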
  19. @CCD Imager It has been nice discussing this with you, but at this point - I see no further reason to proceed. If you have any questions, then sure, I'll be happy to answer
  20. Here it is: 80mm TS F/6 APO with x0.79 flattener / reducer and ASI1600 with 3.8um pixel size - effective sampling rate around 2"/px. This was just a quick process with slight sharpening, to show you that it can make a difference.
  21. Deconvolution is applied to the whole image, not just the star cores (or at least it should be done that way). My graph represents the data in the frequency domain - it is not a star profile, it is what you get when you apply a Fourier transform to your data (the whole image, not just the stars). Again, not true. I just happen to have such data and I'll show you in a minute - I just need to dig it out.
  22. That is a completely wrong concept if you want to understand imaging. You are not doing photometry of the object - you are imaging it. The more you "magnify" the object - the darker the image becomes, because we are working with surface brightness, not integrated brightness. This also applies to sky noise (which is not the only source of noise, by the way - there are also read noise, thermal noise and target shot noise), so there is no single sky noise. Increase the sampling rate, and for the same exposure time the sky brightness per pixel will go down, and so will the associated noise.
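A sketch of how per-pixel sky signal and its shot noise scale with sampling rate. The sky flux value is a made-up illustrative number, not from the post:

```python
import math

sky_e_per_arcsec2 = 100.0   # hypothetical sky flux, e-/arcsec^2 per exposure

for scale in (2.0, 1.0, 0.5):                  # image scale in "/px
    area = scale ** 2                          # sky area (arcsec^2) covered by one pixel
    sky_e = sky_e_per_arcsec2 * area           # sky electrons landing in that pixel
    shot = math.sqrt(sky_e)                    # Poisson (shot) noise of that sky signal
    print(f'{scale}"/px: sky = {sky_e:.0f} e-, sky noise = {shot:.1f} e-')
```

Halving the image scale quarters the sky area per pixel, so the per-pixel sky signal drops x4 and its shot noise drops x2 - the scaling the post describes.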
  23. Ok, so here is the measurement of FWHM values in AstroImageJ. It hovers around 3.7 pixels, and if your sampling rate is 0.47"/px - that equates to 1.74" FWHM.
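The conversion is just FWHM in pixels times image scale:

```python
fwhm_px = 3.7     # measured FWHM in pixels (from AstroImageJ)
scale = 0.47      # image scale, arcsec per pixel
print(round(fwhm_px * scale, 2))   # -> 1.74 (arcsec)
```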