Posts posted by CCD Imager

  1. 28 minutes ago, tomato said:

    I image small galaxies with an Esprit 150/ASI178 mono dual rig, usually binned 2x2, so imaging at 0.94 arcsec per pixel. The ASI678 OSC camera is described as the successor to the 178; it has smaller pixels and no amp glow, so I am going to try to use it to capture RGB alongside a 178 for Lum, thereby avoiding the dreaded cloud-induced missing-channel syndrome. The colour data will be binned in software. If I ever want to try lunar, planetary or the ISS, then I should have a suitable camera.

    For starters, I like your telescope 🤣 I used to own an Astro-Physics 160, but sold it at a ridiculous price to purchase the 10Micron mount. The Esprit has performed just as well; it would take very special circumstances for the AP to edge ahead.

    Interesting that you have chosen a planetary camera to image deep-sky objects, and uncooled too, but I have often thought about doing just that when you don't need the FOV: the noise characteristics of modern cameras almost negate the need for cooling, so it is a much cheaper proposition. In that regard I have a relatively old ASI183, but I would love to see a modern version without the horrible amp glow.

  2. 22 minutes ago, Elp said:

    I'm guessing his internet speed will say no. Funnily enough it's one I haven't imaged yet, but on here collectively they'd be far more than 200 hrs.

    It would need the commitment of around 20-40 imagers who have the data. The laborious part would be calibrating, registering and combining, all on the latest and greatest PC. It doesn't have to be M51; there was also a recent very deep M31 image showing an arc not seen previously.
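    For what it's worth, the combining step is the easy part once everything is calibrated and registered. Here is a minimal numpy sketch of kappa-sigma clipped stacking (the function name, the 3-sigma threshold and the fake frames are my own illustration, not anyone's actual pipeline):

```python
import numpy as np

def sigma_clipped_stack(frames, kappa=3.0):
    """Combine registered frames with per-pixel kappa-sigma clipping.

    frames: sequence of 2-D arrays, all the same shape, already
    calibrated (darks/flats) and registered to a common grid.
    """
    cube = np.asarray(frames, dtype=np.float64)        # (n_frames, H, W)
    med = np.median(cube, axis=0)
    std = np.std(cube, axis=0)
    # Reject pixels further than kappa sigma from the per-pixel median
    # (satellite trails, cosmic rays, the odd bad frame).
    keep = np.abs(cube - med) <= kappa * std
    # Mean of the surviving pixels only.
    return (cube * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

# Example with 40 fake, already-registered 100x100 frames.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 10.0, size=(40, 100, 100))
stacked = sigma_clipped_stack(frames)
```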

  3. 10 hours ago, Elp said:

    It's not the highest res for sure, but it's a great collaboration result that you don't see too often. For more detail or a deeper field, that's what space telescopes are for.

    It's a collaboration effort, predominantly from Americans, and was started from scratch. It leads me to wonder how many M51 images we have seen presented on SGL. We wouldn't need to start from scratch and ask members to start imaging; surely the data is already there. How many hours would we have?

    I'll nominate Olly to do the processing :) 

    Adrian

  4. 29 minutes ago, Elp said:

    Cage rattled (225hrs)?

    www.galactic-hunter.com/amp/m51-the-whirlpool-galaxy

    Interesting that you should link that collaboration image. As discussed earlier in this thread, there are two major aspects of an image I look for when assessing its overall quality. The sheer amount of signal that has been acquired is unrivalled for a ground-based telescope, with new nebulosity apparent; this is clearly what has struck a chord with many viewers. But look a little closer and assess the resolution/detail in the image: it is really quite low. Take a look at the detail in the spiral arms in this thread and compare.

    I guess it's what floats your boat.

    Adrian

  5. 1 hour ago, ollypenrice said:

    I'll become really interested when I'm shown an amateur image of M51 which is significantly better than those shot at around an arcsecond per pixel. I don't want to be shown calculations and measurements, I want to be shown a picture in which I can see a worthwhile difference.

    Please do rattle my cage when this image is available!

    OK Olly, show us your best image of M51 at 1 arc sec/pixel; I would be interested to see it. I'm guessing you'll have the advantage of excellent S/N from your skies.

  6. 1 minute ago, vlaiv said:

    And he uses bicubic resampling.

    I've already linked and explained how important the choice of resampling method is.

    Although more modern re-sampling methods may give more reliable results, the differences are really quite small. This afternoon I ran all 9 algorithms in PixInsight, and they all gave essentially the same result.
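    Out of interest, the same kind of comparison can be run outside PixInsight. Below is a sketch using Pillow's resampling filters (Pillow >= 9.1) on a synthetic star field; the field parameters and the choice of Lanczos as the reference are arbitrary on my part:

```python
import numpy as np
from PIL import Image

# Synthetic star field: Gaussian "stars" on a noisy background.
rng = np.random.default_rng(1)
field = rng.normal(100.0, 5.0, size=(512, 512))
yy, xx = np.mgrid[0:512, 0:512]
for _ in range(50):
    x0, y0 = rng.uniform(0, 512, size=2)
    field += 1000.0 * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * 2.0 ** 2))

img = Image.fromarray(field.astype(np.float32), mode="F")
filters = {
    "nearest": Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
    "lanczos": Image.Resampling.LANCZOS,
}
# Downsample 2x with each filter.
small = {name: np.asarray(img.resize((256, 256), resample=f))
         for name, f in filters.items()}

# RMS difference of each method against Lanczos.
ref = small["lanczos"]
for name, arr in small.items():
    print(f"{name:8s} vs lanczos: RMS diff = {np.sqrt(np.mean((arr - ref) ** 2)):.2f}")
```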

  7. 6 minutes ago, vlaiv said:

    Pixels are not squares, they are point samples - no size, no width. On camera, yes - but even there they are not squares; they are more round, like small glass windows (depending on the technology and the micro-lenses applied):

    They are squares; why else would the spec sheet give pixel dimensions like 3.8µ x 3.8µ?

    Your photographs show spheres on top of the pixels; these are micro-lenses.

  8. 5 minutes ago, vlaiv said:

    I'm sorry to say - that is complete nonsense and an utter misunderstanding of the Nyquist sampling theorem.

    I don't understand why people insist on relating x2 to something in the spatial domain when it is clear in what it says:

    [attached image: statement of the Nyquist sampling theorem]

    So it is not twice the FWHM, it is not twice the Rayleigh criterion, it is not twice the Gaussian sigma - it is none of that.

    You need to perform a Fourier transform of the signal to find out where the cut-off frequency is, and then use twice that value to determine the sampling rate.

    As for the 2-D case, here is a simplified proof that the x2 max frequency criterion still stands even for a non-optimal rectangular sampling grid (the optimal sampling grid in the 2-D case is actually hexagonal, but that is a different matter).

    For a sine wave that is either vertical or horizontal, we have a reduction to the 1-D case. Any wave at an angle - neither horizontal nor vertical - will be sampled at a higher rate in X and Y than its wavelength suggests:

    [attached diagram: a wave at an angle to the sampling grid; green and blue arrows mark the wavelengths discussed below]

    So any wave at an angle produces a sine wave along X and Y with a longer wavelength; if you sample twice per green arrow in the X and Y directions, you will produce more than two samples per blue arrow (or, in fact, along the X axis).

    This theory was born with telegraph electrical signals, where the signal was of significant duration. An image or stellar profile is completely different - a snapshot - and requires a modification of Nyquist, as Stan has pointed out. How can you ignore square pixels sampling round stars?

    I suspect neither of us will change our thoughts, so, as you said, maybe it is best to leave it here.
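    As an aside, the projection step in the quoted geometric argument is easy to check numerically. A small sketch (mine, independent of either side of the debate) showing that a wave's apparent wavelength along a single axis is never shorter than its true wavelength:

```python
import numpy as np

# A plane wave of wavelength lam at angle theta to the x axis:
#     f(x, y) = sin(2*pi*(x*cos(theta) + y*sin(theta)) / lam)
# Along the x axis alone its apparent wavelength is lam / cos(theta),
# which is always >= lam. So a grid giving 2 samples per lam along x
# and y gives at least 2 samples per apparent cycle on each axis.
lam = 10.0
for theta_deg in (0, 15, 30, 45, 60, 75):
    theta = np.radians(theta_deg)
    lam_x = lam / max(np.cos(theta), 1e-12)  # projected wavelength on x
    print(f"theta = {theta_deg:2d} deg: wavelength along x = {lam_x:6.1f}"
          f" (true wavelength {lam})")
```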

     

  9. It hovers around 3.7 pixels and if your sampling rate is 0.47"/px - that equates to 1.74" FWHM

    1.74? Wrong! I have never heard of or used AstroImageJ, so I have no idea how reliable it is, but it is certainly NOT consistent with several astronomical programs like PixInsight, MaxIm DL, ASTAP and CCDStack. When I took the sub-exposure measuring 1.2 arc secs, I sent it to colleagues to verify.

    Your measurement is intriguing, and when there is doubt it always pays to measure another way. As mentioned, the image has been through the mill with other astro programs, so look at it from another angle and visually inspect the stellar profile via a line graph. I have drawn where the half maximum is and extrapolated down to the full width. Tell me, what do you think the FWHM is?

    [attached image: Stellar-profile-2.jpg - line graph through a star, with the half-maximum level marked]
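    One way to put a number on a profile like that is to fit a Gaussian plus a flat background and read off 2.355 sigma. A scipy sketch (the values in `profile` are placeholders for the real line-graph data; 0.47"/px is the scale quoted above):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, bg):
    return bg + amp * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2))

# 1-D line cut through the star, in ADU. These values are invented
# for illustration - substitute the real profile data.
profile = np.array([51, 53, 60, 95, 180, 240, 185, 100, 62, 54, 52], float)
x = np.arange(profile.size)

p0 = [profile.max() - profile.min(), float(profile.argmax()), 1.5, profile.min()]
(amp, x0, sigma, bg), _ = curve_fit(gaussian, x, profile, p0=p0)

fwhm_px = 2.355 * abs(sigma)
pixel_scale = 0.47  # arcsec/pixel, from the discussion above
print(f"FWHM = {fwhm_px:.2f} px = {fwhm_px * pixel_scale:.2f} arcsec")
```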

  10. 38 minutes ago, vlaiv said:

    There is no different interpretation of the Nyquist sampling rate. It is very clear in what it states: you need to sample at x2 the maximum spatial frequency of the image or, in other words, take 2 samples per shortest wavelength in the image.

    Oh yes there is. Here is a passage from Stan Moore, a well-known and respected astro imager; he was the author of CCDStack and someone I conversed with for a long time in the early days of CCD imaging. This is what he wrote:

    "There is a long-standing controversy in amateur circles as to the minimum sample that preserves resolution.  The Nyquist criterion of 2 is often cited and applied as critical sample = 2*FWHM.  But that Nyquist criterion is specific to the minimum sample necessary to capture and reconstruct an audio sine wave.  The more general solution of the Nyquist criterion is the width (standard deviation) of the function, which for Gaussian is FWHM = 2.355 pixels.  But this criterion is measured across a single axis. To measure resolution across the diagonal of square pixels it is necessary to multiply that value by sqrt(2), which yields a critical sampling frequency of FWHM = 3.33 pixels."

    So, a different interpretation from yours, and it shows that the original theorem, designed for an audio sine wave, does not transfer directly to a stellar image, as I have tried to point out. Sorry to differ, but I agree with Stan Moore.

    Also, you mention an ideal sampling rate for my image with a FWHM of 1.4", but if I had not "over-sampled" I would never have realised this FWHM. And if you recommend a scale of 0.875"/px, then according to Nyquist, even with a sampling rate of 2x, I could not achieve a resolution better than 1.75", so your method doesn't make sense.
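    To make the numbers in this disagreement explicit, here is a trivial sketch of the two rules of thumb being argued over - 2 pixels per FWHM versus Stan Moore's 3.33 - applied to the 1.4" FWHM under discussion:

```python
def scale_for(fwhm_arcsec, pixels_per_fwhm):
    """Pixel scale (arcsec/px) that puts the given number of pixels across the FWHM."""
    return fwhm_arcsec / pixels_per_fwhm

fwhm = 1.4  # arcsec, the FWHM under discussion
print(f"2 px/FWHM    (Nyquist as commonly quoted):      {scale_for(fwhm, 2.0):.3f} arcsec/px")   # 0.700
print(f"3.33 px/FWHM (Stan Moore's diagonal criterion): {scale_for(fwhm, 3.33):.3f} arcsec/px")  # 0.420
```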

    49 minutes ago, vlaiv said:

    Just to give you an illustration - imagine that you are imaging in superb seeing with a 1" FWHM seeing influence, on an exceptional mount with 0.3" RMS performance (tracked or guided), with a 6" diffraction-limited scope.

    The total FWHM that you can expect will be ~1.93".

    But I have shown you an image with 1.4 arc sec resolution.

    51 minutes ago, vlaiv said:

    Now we just calculate the square root of the sum of the squares, so we have sqrt(0.635^2 + 0.3^2 + 0.425^2) = 0.8209" RMS, and to get the FWHM we multiply back by 2.355, giving ~1.93".

    It seems I am defying mathematical laws.
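    The arithmetic in that last quote is just independent Gaussian blur sources added in quadrature. A two-function sketch reproducing the ~1.93" figure (the three sigma values are taken from the quote; 0.425" is 1" FWHM seeing divided by 2.355):

```python
import math

def fwhm_to_sigma(fwhm):
    return fwhm / 2.355

def sigma_to_fwhm(sigma):
    return sigma * 2.355

# Sigma values (arcsec RMS) from the quoted calculation: one 0.635"
# blur term, 0.3" mount RMS, and 1" FWHM seeing converted to sigma.
sigmas = [0.635, 0.3, fwhm_to_sigma(1.0)]   # fwhm_to_sigma(1.0) ~ 0.425
total_sigma = math.sqrt(sum(s ** 2 for s in sigmas))
print(f'total sigma = {total_sigma:.4f}" RMS'
      f' -> FWHM = {sigma_to_fwhm(total_sigma):.2f}"')  # ~1.93"
```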

  11. 2 hours ago, vlaiv said:

    The fact that you over-sampled simply means that you have zeros or noise past one point in the graph.

    I'm afraid you are taking me too literally; maybe that is my fault for generalising. Deconvolution will work best when sampling at the Nyquist rate, which for me equates to 3x and for you 2x or less. You advocate 1.0 arc sec/pixel sampling (correct me if I am wrong), so in my image, with a FWHM of 1.4 arc secs, stars would fall on 1 pixel with some spillage. Deconvolution works on a central pixel (or central group), compares it with the surrounding pixels, and then adjusts the value of the centre higher and the surrounding pixels lower. It just wouldn't have an effect on my image.

    After seeing your hand-drawn graph, maybe you should revisit the line graph through a star in my image (see below): there is data spreading around 7 pixels from the centre, which equates to over 2 arc secs and looks similar to your top graph.

    Take a small refractor with a FL of 400mm and a common camera with 3.8µ pixels: the sampling rate becomes 2 arc sec/pixel, and if the seeing is around 2.0 arc secs, deconvolution is basically useless (see the plate-scale sketch after the image).

    [attached image: line graph through a star in the author's image, showing data spreading about 7 pixels from the centre]
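    That 2 arc sec/pixel figure comes straight from the standard plate-scale formula, scale = 206.265 x pixel size (µm) / focal length (mm). A one-liner sketch; the second call is my cross-check against the 0.94"/px figure from the first post, assuming an ASI178 binned 2x2 (4.8µ effective pixels) on a 1050mm Esprit 150:

```python
def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

print(pixel_scale(3.8, 400))    # ~1.96 arcsec/px: the small-refractor case above
print(pixel_scale(4.8, 1050))   # ~0.94 arcsec/px: ASI178 binned 2x2 on an Esprit 150
```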

  12. 2 hours ago, vlaiv said:

    Wrong again. Software binning works, it works predictably, and it works much like hardware binning as far as SNR goes. The only difference is that it does not reduce per-pixel read noise.

    I now ignore read noise; it is vanishingly small, less than 10% of the values seen in CCD cameras 5-10 years ago, especially if you use sufficiently long exposures.
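    A quick numeric sketch of both halves of that exchange: 2x2 software binning in numpy, and why a ~2 e- read noise barely matters next to shot noise (all of the numbers are illustrative):

```python
import numpy as np

def software_bin2x2(img):
    """Sum 2x2 pixel blocks (software binning); trims any odd edge."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    v = img[:h, :w]
    return v.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Shot-noise-limited frame: mean signal 100 e-/px (Poisson) plus a
# 2 e- RMS read noise, a typical modern CMOS figure.
rng = np.random.default_rng(2)
frame = rng.poisson(100.0, size=(1000, 1000)).astype(float)
frame += rng.normal(0.0, 2.0, size=frame.shape)

binned = software_bin2x2(frame)
print("per-pixel SNR, native:", frame.mean() / frame.std())    # ~ 9.8
print("per-pixel SNR, binned:", binned.mean() / binned.std())  # ~19.6, i.e. ~2x

# Hardware binning would incur the read noise once per binned pixel:
# noise sqrt(400 + 2^2) = 20.1 instead of sqrt(400 + 4*2^2) = 20.4 -
# a negligible difference at read noise levels this low.
```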

    Another comment regarding supposedly compromised S/N with over-sampling. S/N is governed by aperture (among other obvious parameters: exposure duration, transparency, light pollution, the object, etc.), not by focal length or pixel sampling. The object's S/N remains the same with over-sampling; it is just spread over more pixels. You don't lose S/N with modern CMOS cameras, and while you are unable to bin in camera (only cosmetically, in software), it doesn't matter if you are over-sampled: simply re-sample in post to achieve your desired sampling, and nothing is lost. The opposite case is that you are potentially missing out on resolution. And lastly, deconvolution is more effective on over-sampled images. Try it: take an image from a little refractor and watch for zero improvement in FWHM. Don't get me wrong, I do like the wide vistas that small refractors can produce - great for those huge narrowband nebulae.
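    A sketch of the "signal is just spread over more pixels" point: the same star and sky, sampled 1x, 2x and 4x finer, give essentially the same aperture SNR. The star flux, sky level and aperture radius are all invented for the demonstration, and read noise is ignored, as argued above:

```python
import numpy as np

rng = np.random.default_rng(3)
total_flux = 100_000.0  # star photons collected: set by aperture/exposure, not sampling
sky = 20.0              # sky photons per coarse (1x) pixel

def star_snr(oversample):
    """Aperture SNR of a Gaussian star sampled `oversample` times finer."""
    n = 64 * oversample
    sigma = 4.0 * oversample                    # same star width on sky, more pixels across it
    yy, xx = np.mgrid[0:n, 0:n] - n / 2.0
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    star = total_flux * psf / psf.sum()         # total flux conserved at any sampling
    sky_px = sky / oversample ** 2              # sky per (finer) pixel
    frame = rng.poisson(star + sky_px).astype(float)
    aperture = (xx ** 2 + yy ** 2) < (3 * sigma) ** 2
    counts = frame[aperture].sum()
    signal = counts - sky_px * aperture.sum()   # subtract the expected sky
    noise = np.sqrt(counts)                     # shot-noise-limited
    return signal / noise

for k in (1, 2, 4):
    print(f"{k}x oversampled: aperture SNR ~ {star_snr(k):.0f}")  # all ~ the same
```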

    Firstly, I appreciate you taking the time to discuss this image, and also comparing my image to Hubble's; I take that as a compliment :)

    A couple of points: my image and, no doubt, the image from Hubble have both been significantly processed, probably differently.

    Secondly, the raw FITS image is 71 MB while the image I presented here is 1.8 MB, so an awful lot of data has been lost.

    I'm happy to run your re-sampling experiments on my raw data. I use PixInsight, which is very powerful and has many re-sampling algorithms, including both Lanczos-3 and Lanczos-4; which would you prefer?

    We'll just have to disagree. You have a wonderful site to image from; when I visited 7 years ago, I had a QSI683 with whopping 5.4µ pixels, so the maximum resolution I could achieve was 2.0 arc secs. Every image I took that week was 2.0 arc secs or thereabouts: I was sampling-limited. I am sure that your site has raw sub-arc-sec seeing. Buy a 10Micron, Astro-Physics or Paramount mount with absolute encoders and exclude guiding error from your blurring equation :)

    I know you disagree on this too, but the resolution of my images has definitely improved since owning the GM2000; Per, even from the heavens, would agree with me.

    Adrian
