
CCD Imager


Posts posted by CCD Imager

  1. 4 hours ago, ollypenrice said:

    Well, this is one definition of under-sampling, but it isn't a definition that's going to get me worked up into a tizzy. This image looks, to me, very much like images of M51 which I've shot at 0.6"PP and 0.9"PP. I remember very clearly Vlaiv doing the same test on my data that he performed on yours, and I was convinced.

    You have captured what most competent imagers will capture at non-Atacama/mountaintop observing sites. You are not, in any meaningful sense, undersampled. That's to say, if you reduced your arcsecs per pixel you would gain no new detail that anyone would be able to see. That's my definition of over/under-sampling. Definitions which don't involve what you can see are, for me, so much waffle. While all this chatter is going on, what really matters in amateur astrophotography is being missed, and that is going deeper. What I've discovered since moving to super-fast systems is that going deeper is a darned sight more interesting than trying to go for more resolution. It's not just that you go deeper: you also gain more control over dynamic range in processing.

    Olly

    With your sampling at 0.9 arc sec/pixel, even granting Nyquist at 2x sampling, you would not achieve resolution better than 1.8 arc secs. That is quite a lot worse than 1.4 arc secs, and the difference would be visually noticeable. I think you have answered your own question with your belief that higher S/N is more gratifying. To me the two most important factors are BOTH S/N and resolution; I treat them with equal respect.
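    For anyone who wants to check the arithmetic, here is a minimal sketch (plain Python; the function name is my own) of the Nyquist relationship between pixel scale and the finest recoverable detail:

    ```python
    # Minimal sketch: with Nyquist's 2x criterion, the finest detail a given
    # pixel scale can honestly record is twice the pixel scale.

    def nyquist_resolution(pixel_scale_arcsec: float) -> float:
        """Smallest resolvable detail (arcsec) for a pixel scale at 2x sampling."""
        return 2.0 * pixel_scale_arcsec

    print(nyquist_resolution(0.9))   # 1.8 arcsec -- coarser than a 1.4" FWHM
    print(nyquist_resolution(0.47))  # 0.94 arcsec
    ```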

  2. Vlaiv, it is impossible to properly interpret a JPEG, especially one that has already been processed.

    I did a quick clone of the TIFF image, resampled it, then increased the magnification to match. My result is below.

    But really I need to go back to the original data and make the comparison with resampling. I must add that the original will be much more amenable to deconvolution algorithms like BlurXTerminator.

    Resampled.jpg
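    For anyone curious, the resample-and-compare step I describe looks roughly like this - a sketch using Pillow, where the file names and downsample factor are placeholders rather than my actual files:

    ```python
    # Rough sketch of a resample-and-compare step using Pillow.
    # File names and the downsample factor are placeholders for illustration.
    from PIL import Image

    src = Image.open("original.tif")      # the finely sampled original
    factor = 2                            # simulate a coarser pixel scale

    small = src.resize((src.width // factor, src.height // factor),
                       resample=Image.LANCZOS)
    # Scale back up to the original size so both versions can be compared
    # at the same on-screen magnification.
    matched = small.resize(src.size, resample=Image.LANCZOS)
    matched.save("resampled.tif")
    ```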

  3. 53 minutes ago, Mandy D said:

    Obviously, you can apply Nyquist's theorem to non-sinusoidal signals by virtue of the fact that any complex signal can be reduced to a set of sinusoids via Fourier transforms. The only component necessary to consider for this is the highest harmonic you need to reproduce.

    Indeed, and it is applied to astro images, but there are differences in its application.

  4. 38 minutes ago, vlaiv said:

    From the above conditions it is fairly easy to find the proper sampling frequency - we take a Gaussian shape of a certain FWHM, take its Fourier transform (which is again a Gaussian function), find the point at which it falls below the 90% threshold, and set our sampling at that frequency.

    I have read the Wiki page, many other articles, and also watched presenters discuss Nyquist. Most make reference to a continuous electrical signal.

    The problem with your proposal is that FWHM varies wildly. It is better to know the best FWHM achievable with an over-sampled system and use that. Should the FWHM be higher, you can always bin/resize to suit. Under-sampling equals loss of resolution.

    And as most astro imagers suffer from less-than-perfect mount tracking/guiding/focusing, there is an acceptance that the resultant FWHM values are the best they can achieve. However, if the "real" seeing is known, it brings home how much their equipment is compromising FWHM. Near-perfect optics, a mount that doesn't require guiding, and better autofocus hardware and algorithms would suit greater over-sampling. In other words, your proper sampling frequency completely changes.

    Sorry, not your theory, just what you were advocating.
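    For what it's worth, the recipe vlaiv describes can be sketched numerically. This is my own rough rendering of it, reading the threshold as the amplitude having dropped by 90%, i.e. to 10% - an assumption on my part, since the Fourier transform of a Gaussian never reaches exactly zero:

    ```python
    # Sketch of the "Fourier transform of a Gaussian, cut at a threshold,
    # sample at twice the cutoff" recipe. The 10% amplitude cut-off is an
    # assumed reading of the threshold; some cut-off has to be chosen.
    import math

    def pixel_scale_from_fwhm(fwhm_arcsec: float, threshold: float = 0.1) -> float:
        """Pixel scale (arcsec/px) implied by the Fourier-cutoff argument."""
        sigma = fwhm_arcsec / (2.0 * math.sqrt(2.0 * math.log(2.0)))
        # FT amplitude of exp(-x^2 / (2 sigma^2)) is exp(-2 pi^2 sigma^2 f^2);
        # solve for the frequency where it drops to `threshold`.
        f_cut = math.sqrt(-math.log(threshold)) / (math.pi * sigma * math.sqrt(2.0))
        return 1.0 / (2.0 * f_cut)    # Nyquist: sample at twice the cutoff

    print(round(pixel_scale_from_fwhm(1.4), 2))   # ~0.87 arcsec/px at 1.4" FWHM
    ```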

  5. 44 minutes ago, vlaiv said:

    Incorrect. The Nyquist sampling theorem clearly states what it applies to - any band-limited signal/function (which really means that the Fourier transform of that function has a maximum parameter value after which all values of the Fourier transform are zero).

    Nyquist formulated the theory based on telegraph signals that have a CONTINUOUS sinusoidal waveform. It has since been adapted to other signal forms, including measuring stars, but there are significant differences, not least that you are measuring a round signal with square pixels.
    In simplistic terms, it has been translated as needing 2x sampling to reproduce the signal - and can I stress this is AT LEAST 2x sampling; the more samples, the more accurate the reconstruction is likely to be. Because of the 3D nature of the signal and the wasted area of square pixels, the consensus in the knowledgeable astro community is that 3x sampling is necessary.
    I see this in practice in this very M51 image that I have uploaded. Given a sampling rate of 0.47 arc secs/pixel, I should be able to resolve 1.4 arc secs; according to your theory I could resolve 0.94, but when you plot a line graph through a star it is plainly obvious there are insufficient points (see the sketch below). Just from eyeballing, I believe the 1.1 measurement is false, purely because it is under-sampled.
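    To illustrate the "insufficient points" problem with a synthetic star rather than the actual sub (the numbers here are illustrative only):

    ```python
    # Synthetic illustration: how many pixel samples land across a star's
    # FWHM at two pixel scales. Exact counts shift slightly with where the
    # pixel grid happens to fall relative to the star's centre.
    import numpy as np

    fwhm = 1.4                                     # arcsec
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))

    for scale in (0.47, 0.94):                     # arcsec per pixel
        x = np.arange(-5, 5, scale)                # pixel centres along the cut
        profile = np.exp(-x**2 / (2 * sigma**2))   # Gaussian stellar profile
        print(f'{scale}"/px: {np.sum(profile >= 0.5)} samples above half maximum')
    ```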

  6. Here is the processed luminance image, taken over 2 nights and shown at 100% full resolution, but please bear in mind that a JPEG will reduce its quality; the average FWHM was 1.4 arc secs. I would like to point out the beauty of over-sampling: it gives deconvolution something to get its teeth into.

    I've uploaded the luminance only so that your eye is directed to resolution; colour images in this respect are distracting :)

    Hopefully now, I have satisfied Olly's curiosity.

    Adrian

    M51-Lum-Pr.jpg

  7. 19 minutes ago, vlaiv said:

    Please do understand that the fact that you can represent a function in 3 dimensions does not make it 3D. In order for a function to be 3D, it must be a function of 3 parameters.

    A sine function is not a 2D function because we can draw it on a piece of paper (a 2D medium) or a computer screen (again a 2D medium) - it is a 1D function because it accepts 1 parameter - sin(x).

    An image is a 2D function because the value/intensity of the image depends on two parameters, x and y, so image = f(x, y). It is not 3D.

    A point source does indeed form a quasi-Gaussian shape. It is not a true Gaussian shape because it consists (mainly) of an Airy pattern convolved with two Gaussian functions (seeing and mount tracking/guiding performance); however, that has nothing to do with

    a) the image being a 2D function in x and y

    b) your ability or inability to measure something from a profile plot of that function (which is itself a 1D function, not 2D, as you are taking a slice of a 2D function)

     

    Instead of muddying the water further with confusing comments, could you please answer one of the questions that you've been asked by either Olly or me -

    namely, presenting a sub that is under-sampled at 0.5"/px, or presenting a sub that has 1.5" FWHM or less and was made with a 150mm aperture - or explaining how x3 per FWHM is optimum sampling given the Nyquist theorem, or rather, given its definition, how x3 per FWHM relates to twice the maximum spatial frequency of the image?

    I don't like comparing an electrical sine wave to a Gaussian plot of a star - apples and oranges.
    Let's remember that Nyquist's 2x sampling was specifically related to continuous electrical signals; a stellar plot is a snapshot and presents a Gaussian profile. The Nyquist theorem was adapted for application to stellar images, and measuring circular stars with square pixels has its issues, which explains why you need greater sampling.
    I can't agree with you that a stellar profile is not 3D. Thirty years ago, this was discussed ad nauseam on CompuServe and agreed by many mathematicians and astrophysicists.

  8. 15 minutes ago, ollypenrice said:

    But why was it a single sub? If you failed to achieve this resolution on the rest, what does that say about your assertion that anything less than 0.5"PP is under-sampled? I spent two years imaging at a little over 0.6"PP and was never, ever, able to present an image at full size. Unfortunately the camera in use refused to bin properly, so we had to resample downwards in software. I then started shooting the same kind of targets, and sometimes the same targets, at 0.9"PP and found no consistent difference.

    It was a single sub simply because the seeing was at its best that night; other subs had higher FWHM measurements. As mentioned, the final image, which was taken over a couple of nights, had an FWHM of 1.4, so if you apply the Nyquist sampling theorem I need a scale of less than 0.5 arc sec/pixel to take advantage of that excellent seeing; anything more and I am missing out.
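    (To spell out the arithmetic at 3 samples per FWHM: 1.4 arc secs ÷ 3 ≈ 0.47 arc sec/pixel - the scale I was shooting at.)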

  9. 7 minutes ago, vlaiv said:

    Not sure what you are trying to say here in response to my assertion that an image is in fact a 2D function - intensity depends on the x and y coordinates of a pixel, and f(x, y) is in fact 2D, not 3D.

    Further, any image you try to measure will consist of a finite number of samples - it is not a continuous function but rather a discrete one - and that is what sampling is. The Nyquist sampling theorem deals precisely with that: it gives the criterion for when the original signal can be fully restored from the sampled function (a function that has values only at certain points) and, furthermore, how to properly restore it.

    Whenever I took a line plot of any star, I always had enough samples to do some sort of meaningful measurement, so I'm not really sure what you mean by "insufficient points".

    Could you provide an example, please, and explain what you find insufficient for meaningful measurement?

    A stellar profile has three axes: x (right ascension), y (declination), and intensity. Brighter stars have higher intensities. We shouldn't compare it with a sine wave, which is a completely different signal; an optical system forms the image of a point source as a Gaussian curve.
    Let me dig out the image in question and show you a line plot of it. It is on CN, but I need to find it again to post here.
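    While I look for it, here is a synthetic sketch of what I mean by the three axes - a made-up star stamp at the 0.47"/px scale, not the actual data:

    ```python
    # Synthetic stellar profile: intensity as a function of x and y, plus a
    # 1D line plot (a slice) through the centre. Illustrative values only.
    import numpy as np

    scale = 0.47                              # arcsec per pixel
    fwhm = 1.4
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))

    coords = (np.arange(21) - 10) * scale     # a 21x21 pixel stamp, in arcsec
    xx, yy = np.meshgrid(coords, coords)
    star = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))   # (x, y) -> intensity

    line = star[10, :]                        # horizontal cut through the peak
    print([round(v, 2) for v in line[7:14]])  # the few points across the core
    ```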

  10. 13 minutes ago, vlaiv said:

    Not misinterpreting anything. It is a 2D function.

    It's like saying a sine is a 2D function because it has height/intensity - it is not, it is a 1D function.

    That doesn't fit with my image-analysis experience: when a line plot was drawn through a star, it was obvious there were insufficient points for a meaningful measurement. Do go and have a look at the CN discussion.

  11. 1 minute ago, ollypenrice said:

    I couldn't care less about the Nyquest theorem (or the Nyquist :grin:), I want to see these images which were undersampled at 0.5"PP!

    And I couldn't care less about discussions on CN or about FWHM, neither of which I can see when I look at an astrophoto. I want to see an actual image which was undersampled at 0.5"PP because it will be the most detailed image of the object in question that I have ever seen from an amateur system.

    Post the image and win the argument!

    Olly

    Lol, Olly -
    apologies about the spelling, I forgot you were a teacher!
    The image in question is a single sub, and the final image had an FWHM of 1.4 arc secs. I don't think you would be enamoured with a single sub?

    Adrian

  12. 3 minutes ago, vlaiv said:

    Pixels are not 3D and are certainly not rectangular. For the purposes of this discussion, pixels are point samples.

    Even if you are referring to camera pixels not being perfect point-sampling devices - the effect of that is convolution with pixel blur, which is a much smaller contributing factor than telescope aperture and seeing, and in any case it only adds blur and reduces resolution rather than enhancing it.

    The 2D sampling case is the same as the 1D case for a rectangular sampling grid - except that we must break it down into X and Y directions, in which case the X sampling rate must be twice the maximum frequency in the X direction and the Y sampling rate twice the maximum frequency in the Y direction.

    The optimum 2D sampling grid is hexagonal - but no one is making such a sensor, and since the sensor is a square grid, the X and Y sampling rates are the same, and any wave in a direction other than pure X or Y will have longer X and Y wavelengths (a vector projected onto the unit basis vectors has a smaller length in each basis than the vector itself) - so the original requirement still stands: we need to sample at twice the highest frequency component regardless of wave orientation.

     

    You are misinterpreting: it is the stellar profile that you are measuring, and that is 3D - x, y, and intensity.

  13. 5 minutes ago, ollypenrice said:

    This is a deep sky imaging thread, so not connected with 'lucky imaging.'

    If you have been under-sampled at 0.47"PP you have some astonishingly detailed images to show us and I'll look at them with interest - not to say astonishment. In the absence of such images I'll have to put this claim down to a triumph of theory over practice.

    Olly

    There was a discussion on CN about an image I took with 1.2 arc sec FWHM. The debate centred on insufficient pixels to properly sample it, in addition to insufficient aperture! I used the SW Esprit 150. However, I have regularly captured images at around 1.5 arc secs FWHM.

  14. 5 minutes ago, vlaiv said:

    Of course I have, and for the record, it states the following:

    "Given band limited signal, you need to sample at twice highest frequency component of that signal in order to be able to perfectly restore it".

    Can you tell me how you would justify "x3 per FWHM" being twice the highest frequency component of a band-limited signal?

    Because the Nyquist theorem was applicable to continuous audio signals, whereas astronomical measurements are 3D and pixels are rectangular, not circular.

  15. I don't know how Meteoblue calculates their FWHM; I tried to find a reference to it, but alas, nothing. However, they may not be far off. Nearly 30 years ago, I performed seeing analysis on the majority of clear nights over one year, using an SBIG STV and the DIMM method. Seeing was mostly sub-2.0 arc secs, often less than 1.5, and occasionally less than 1.0.

    This is the raw base seeing, but you then have to add the effects of optical quality, focus, guiding, etc., which blur the FWHM, so users usually end up with a much higher figure (a rough blur budget is sketched below). Over the many years since I measured the raw seeing, I have been working to reduce these additional blurring components. Should you manage to own a high-quality triplet refractor and a mount that tracks to less than 0.5 arc sec, your resultant images will have a much better FWHM.

     

    Most astro imagers aim for a sampling of 1.0 arc sec/pixel, a good compromise for the majority of users.
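    As a rough sketch of that blur budget - assuming each component is approximately Gaussian, so FWHMs add in quadrature; the component values are invented for illustration, not measurements:

    ```python
    # Rough blur budget: independent, roughly-Gaussian blur sources combine
    # in quadrature (variances of convolved Gaussians add). The component
    # values below are invented for illustration, not measurements.
    import math

    def combined_fwhm(*components_arcsec: float) -> float:
        """Total FWHM from independent, roughly-Gaussian blur components."""
        return math.sqrt(sum(c * c for c in components_arcsec))

    seeing  = 1.5   # raw DIMM-style seeing, arcsec
    guiding = 0.5   # mount tracking/guiding error
    optics  = 0.4   # optical quality and focus error

    print(round(combined_fwhm(seeing, guiding, optics), 2))   # ~1.63 arcsec
    ```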

    Adrian

  16. Anything over 0.5 arc sec is under-sampled! Seriously, UK skies offer around 1.5 to 2.5 arc secs FWHM, and you need to sample at 3x for optimal sampling. Not only that: if the seeing is better than your optical resolution, you have missed out, while if you end up over-sampling you can bin in post to suit your taste (a quick sketch of that is below). And lastly, deconvolution algorithms just love an over-sampled image - give BlurXTerminator a decent chance!
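    Here is a minimal sketch of the kind of software binning I mean (NumPy; a 2x2 average of my own devising, purely illustrative):

    ```python
    # Minimal 2x2 software binning: average each 2x2 block, halving the
    # sampling rate of an over-sampled image after capture.
    import numpy as np

    def bin2x2(img: np.ndarray) -> np.ndarray:
        """Average-bin a 2D image by 2 in each axis (odd edges are trimmed)."""
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    img = np.random.rand(101, 101)   # stand-in for real image data
    print(bin2x2(img).shape)         # (50, 50)
    ```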

    Adrian
