
Optical resolution in DS imaging.



35 minutes ago, ollypenrice said:

Ho hum, I'm in exactly the same position. Some time ago I bought a nice Meade ACF 10 inch second hand. I've yet to try it because the TEC is proving so competent in matching the old 14 inch images that I'm not all that tempted to do so. I don't think I'm doing its resale value much good though!

 

Interestingly I'm heading in completely the opposite direction.  I concluded that a high QE, large pixel, low read noise one-shot-colour camera (i.e. my Sony A7S) should be an excellent match for my Celestron C11.  Here is my first serious attempt, using just 2 hours of total exposure: 

As always, there are a few issues to iron out but I think this combination will be a solid performer.

Mark

8 minutes ago, sharkmelley said:

Interestingly I'm heading in completely the opposite direction.  I concluded that a high QE, large pixel, low read noise one-shot-colour camera (i.e. my Sony A7S) should be an excellent match for my Celestron C11.  Here is my first serious attempt, using just 2 hours of total exposure: 

As always, there are a few issues to iron out but I think this combination will be a solid performer.

Mark

Phew, you've just restored the resale value of our SCTs!

That's a great image, all the more so for being only 2 hours. My TEC M51 had 12 hours. I don't know what kind of skies you have, Mark?

So many ways to skin a cat, it seems.

Olly

 


3 minutes ago, ollypenrice said:

Phew, you've just restored the resale value of our SCTs!

That's a great image, all the more so for being only 2 hours. My TEC M51 had 12 hours. I don't know what kind of skies you have, Mark?

So many ways to skin a cat, it seems.

Olly

 

Thanks. It was taken on a slightly better than average night, with an SQM reading of around 21.1. An average night is around 20.9.

Mark


7 hours ago, sharkmelley said:

Interestingly I'm heading in completely the opposite direction.  I concluded that a high QE, large pixel, low read noise one-shot-colour camera (i.e. my Sony A7S) should be an excellent match for my Celestron C11.  Here is my first serious attempt, using just 2 hours of total exposure: 

As always, there are a few issues to iron out but I think this combination will be a solid performer.

Mark

Thanks for that Mark, an amazing image that tells me I may keep my 11" SCT, and I bet against Olly coming up with something similar with his 4" refractor and minute camera...

By the way, this is what M51 looks like through a 2 meter RC - 3.6 hours of data from the Liverpool scope that I processed recently, so aperture cannot be totally ignored.

LT M51 r NebRGB PS27sign.jpg


6 minutes ago, gorann said:

Thanks for that Mark, an amazing image that tells me I may keep my 11" SCT, and I bet against Olly coming up with something similar with his 4" refractor and minute camera...

By the way, this is what M51 looks like through a 2 meter RC - 3.6 hours of data from the Liverpool scope that I processed recently, so aperture cannot be totally ignored.

LT M51 r NebRGB PS27sign.jpg

Wow!  Absolutely stunning detail!

Mark


22 hours ago, alan4908 said:

Total resolution = SQRT(Imaging scale^2 + atmospheric effects^2 + mount guiding error^2 + optical resolution^2)

 

22 hours ago, vlaiv said:

I've seen this approach before, but I don't seem to get the reasoning behind it.

Why would we assume that different blur components (their sigma or RMS) add as linearly independent vectors rather than by means of convolution?

Convolution is indeed the correct way to combine sequential "blurring" elements. However, if you assume the blurring is Gaussian (as is often done and often the case) then this is equivalent to the approximation given by alan4908. This is because the convolution of two centered Gaussian functions is a third Gaussian function with a variance that is the sum of the convolved functions' variances: G1(0,v1)*G2(0,v2) -> G3(0,v1+v2), where * is convolution and G(x,y) is a Gaussian function of mean x and variance y. Variance is the square of the standard deviation by definition, and for a Gaussian there is a well defined relation between standard deviation and FWHM.

While the Gaussian assumption is often quite good, for square or rectangular pixels it clearly is not, hence the debate of side length vs diagonal. As vlaiv points out, it may also not be good for guide errors.
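(If anyone wants to check the variance-addition numerically, here is a minimal numpy sketch - the sigmas and grid are just illustrative:)

import numpy as np

# Two zero-mean Gaussians with variances v1 and v2, sampled on a fine grid
x = np.linspace(-30, 30, 6001)
v1, v2 = 1.5**2, 2.0**2
g1 = np.exp(-x**2 / (2 * v1))
g2 = np.exp(-x**2 / (2 * v2))

# Convolve them, then measure the variance of the result numerically
g3 = np.convolve(g1, g2, mode="same")
g3 /= np.trapz(g3, x)                          # normalise to unit area
var3 = np.trapz(x**2 * g3, x)                  # second central moment (mean is 0)

print(var3, v1 + v2)                           # both ~6.25: the variances add
print(2 * np.sqrt(2 * np.log(2)) * np.sqrt(var3))   # FWHM = 2.355 * combined sigma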

Regards Andrew


10 hours ago, gorann said:

By the way, this is what M51 looks like through a 2 meter RC - 3.6 hours of data from the Liverpool scope that I processed recently, so aperture cannot be totally ignored.

Very nice processing. I think this illustrates both the importance of the net technical resolution and the importance of S/N ratio in our perception of resolution.

Regards Andrew


1 hour ago, andrew s said:

Convolution is indeed the correct way to combine sequential "blurring" elements. However, if you assume the blurring is Gaussian (as is often done and often the case) then this is equivalent to the approximation given by alan4908. This is because the convolution of two centered Gaussian functions is a third Gaussian function with a variance that is the sum of the convolved functions' variances: G1(0,v1)*G2(0,v2) -> G3(0,v1+v2), where * is convolution and G(x,y) is a Gaussian function of mean x and variance y. Variance is the square of the standard deviation by definition, and for a Gaussian there is a well defined relation between standard deviation and FWHM.

While the Gaussian assumption is often quite good, for square or rectangular pixels it clearly is not, hence the debate of side length vs diagonal. As vlaiv points out, it may also not be good for guide errors.

Thanks for the explanation Andrew.

On the point of guiding: since I now image unguided, after the creation of a sky model, do you think that any tracking errors would approximate a Gaussian function?

Alan


1 hour ago, andrew s said:

 

Convolution is indeed the correct way to combine sequential "blurring" elements. However, if you assume the blurring is Gaussian (as is often done and often the case) then this is equivalent to the approximation given by alan4908. This is because the convolution of two centered Gaussian functions is a third Gaussian function with a variance that is the sum of the convolved functions' variances: G1(0,v1)*G2(0,v2) -> G3(0,v1+v2), where * is convolution and G(x,y) is a Gaussian function of mean x and variance y. Variance is the square of the standard deviation by definition, and for a Gaussian there is a well defined relation between standard deviation and FWHM.

While the Gaussian assumption is often quite good, for square or rectangular pixels it clearly is not, hence the debate of side length vs diagonal. As vlaiv points out, it may also not be good for guide errors.

Regards Andrew

Ah, my bad.

For some reason I thought the convolution of two Gaussians produces a Gaussian whose standard deviations add. But you are right - looking at the Fourier transform, the variance is in the exponent, so multiplying in the frequency domain means adding the exponents.


34 minutes ago, alan4908 said:

do you think that any tracking errors would approximate a Gaussian function?

Simple answer is I don't know. You could look at a tracking log (even if not making corrections) and then plot a histogram of the deviations. Another simple approach is to look at a bright star's intensity profile and, if you are happy that it looks OK, then even if the tracking/guiding is not Gaussian it is too small to matter.

Things that tend to give a Gaussian response are either round, random, lots of parts, lots of samples, or a combination of these. Things that don't are those with hysteresis, square shapes, or few/one-off samples.

If in doubt, either measure (my approach) or see if your eyeball/brain is happy with it (Olly's approach). It's a hobby; both work fine in their respective domains.

Regards Andrew


1 hour ago, andrew s said:

Simple answer is I don't know. You could look at a tracking log (even if not making corrections) and then plot a histogram of the deviations. Another simple approach is to look at a bright star's intensity profile and, if you are happy that it looks OK, then even if the tracking/guiding is not Gaussian it is too small to matter.

Things that tend to give a Gaussian response are either round, random, lots of parts, lots of samples, or a combination of these. Things that don't are those with hysteresis, square shapes, or few/one-off samples.

If in doubt, either measure (my approach) or see if your eyeball/brain is happy with it (Olly's approach). It's a hobby; both work fine in their respective domains.

Regards Andrew

Thanks Andrew. 

I don't have a tracking log, but I've analysed the images of stars, including just looking at them.

When I measure these with CCDInspector I generally get single-digit aspect ratios for my normal LRGB sub length of 600s. My average FWHM (as measured by CCDInspector) is 2.4 arc seconds. Since I'm not chasing the seeing, I presume the measured aspect ratios of the stars are an indication of the mount's tracking accuracy. If this is correct, can I estimate the tracking accuracy by saying that a 5% aspect ratio would translate into an error of 0.05 x 2.4 = 0.12 arc seconds?

Alan 


26 minutes ago, alan4908 said:

Thanks Andrew. 

I don't have a tracking log, but I've analysed the images of stars, including just looking at them.

When I measure these with CCDInspector I generally get single-digit aspect ratios for my normal LRGB sub length of 600s. My average FWHM (as measured by CCDInspector) is 2.4 arc seconds. Since I'm not chasing the seeing, I presume the measured aspect ratios of the stars are an indication of the mount's tracking accuracy. If this is correct, can I estimate the tracking accuracy by saying that a 5% aspect ratio would translate into an error of 0.05 x 2.4 = 0.12 arc seconds?

Alan 

I don't think you can use the ratio of RA to DEC guide error to estimate total error - it's a bit like saying you can use the ratio of a rectangle's sides to calculate its area (while you can use the ratio in the area formula, you are still missing one unknown).

The best calculation I've come up with so far, including the correction from the above discussion (the square root of the sum of squares is indeed the proper one, since variances add under convolution, not standard deviations as I wrongly presumed), is as follows:

Sigma airy disk = 7200*ATAN(0.45*lambda/(1000000*aperture))*180/PI()

(where lambda is in nm and aperture is in mm)

Sigma seeing = seeing/(2*SQRT(2*LN(2)))

(where seeing is FWHM in arc seconds)

Sigma guide - this is still an unknown term; the only measure we have is guide RMS, but we don't know how it relates to the sigma of a Gaussian PSF, or whether it is indeed a Gaussian PSF - to date I have assumed they are equal.

Sigma total blur = sqrt(airy_sigma^2 + seeing_sigma^2 + guide_sigma^2)

And to convert back to FWHM if one wishes to compare to measured values:

Blur FWHM = sigma_blur * (2*SQRT(2*LN(2)))

(btw this conversion factor is often quoted rounded up to 2.355)

In order to finish the above calculation we need to see how guide RMS relates to Gaussian sigma (if it indeed produces a Gaussian profile).

Two things come to mind in order to do this:

1. We need to quantify how guiding error happens (and possibly classify guide error type according to the mount used) - just to be clear what I mean by this: I suspect that a high-end mount will drift steadily from the 0,0 position to the measured guide deviation and then be brought back with a short, steady correction, while a low-end mount will have more "jumpy" movement from the original position to where it is measured in the guide exposure.

I actually have an idea how to "see" what is happening - guide as usual, find a very bright star and image it with very short exposures (something like Vega and 10 ms) to produce a frame sequence like in planetary imaging. Then it is a matter of plotting the subframe centroid vs time - that will show how the mount moves between guide exposures: sudden jumps or steady motion to the offset position.

2. Based on the above, we just run a simulation: on a fine grid we accumulate, in each cell, the "time spent" in that cell over the duration of a single sub exposure (or even multiple exposures, since the final image will be stacked from many subs), then bin that fine grid to a coarser grid - that gives us the "sampled" PSF (close to the actual image sampling we use, like 0.5"/pixel or so) - and fit a Gaussian to see its sigma and how well it fits.

Then we can compare the simulated PSF's Gaussian sigma to the sigma of the positions used in the simulation to see if there is a relation and what the conversion factor would be.
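(For convenience, the same recipe transcribed into a small Python sketch - the guide term is simply assumed equal to the reported guide RMS, as discussed above, and the example numbers are only placeholders:)

import math

def total_blur_fwhm(aperture_mm, lambda_nm, seeing_fwhm_arcsec, guide_rms_arcsec):
    # Gaussian-approximation sigma for the Airy disk, exactly as in the formula above
    sigma_airy = 7200 * math.atan(0.45 * lambda_nm / (1000000 * aperture_mm)) * 180 / math.pi
    # Seeing FWHM converted to sigma
    sigma_seeing = seeing_fwhm_arcsec / (2 * math.sqrt(2 * math.log(2)))
    # Guide term: assumed equal to the reported guide RMS (the open question above)
    sigma_guide = guide_rms_arcsec
    # Variances add under convolution of Gaussians
    sigma_total = math.sqrt(sigma_airy**2 + sigma_seeing**2 + sigma_guide**2)
    # Convert back to FWHM for comparison with measured star FWHM
    return sigma_total * 2 * math.sqrt(2 * math.log(2))

# Example: 280 mm aperture, 550 nm, 2.0" seeing FWHM, 0.5" guide RMS (placeholder values)
print(total_blur_fwhm(280, 550, 2.0, 0.5))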

 


Here is a little experiment I just did (not really scientific, but it's interesting to see the results):

image.thumb.png.41781884ca83d3367ad585f403d9e773.png

So I created a stack of 100 subs - on each sub I drew by hand one line going from roughly the centre in an arbitrary direction and of arbitrary length (humans never seem to do well when asked to produce random output, so this might have some strong bias, like me trying to cover every direction or all distances). This sort of line drawing corresponds to steady drift rather than sudden jumps.

The stack was then summed and binned to produce the second image, and the profile is plotted - it indeed starts to look Gaussian (as I believe it should, due to the Central Limit Theorem). So while we need more simulations (to actually create random distributions of endpoints with a particular RMS, or to see the difference between steady drift and sudden jumps), I think it is a good indicator that a Gaussian profile is the right one to use as an approximation.
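(And a rough sketch of the "time spent per cell" simulation described above - random drift segments accumulated on a fine grid, binned to camera-like pixels, and the width of the resulting profile measured; all parameters are arbitrary:)

import numpy as np

rng = np.random.default_rng(0)
scale = 0.01                                    # fine grid: 0.01 arcsec per cell
grid = np.zeros((401, 401))                     # covers +/- 2 arcsec, centre at index 200

# 100 "guide cycles": drift steadily from the centre to a random offset and back again,
# accumulating the time spent in each fine cell along the way (steady drift, not jumps)
for _ in range(100):
    end = rng.normal(0.0, 0.5, size=2)          # drift endpoint, ~0.5" RMS per axis
    for t in np.concatenate([np.linspace(0, 1, 200), np.linspace(1, 0, 200)]):
        ix = int(np.clip(np.rint(t * end[0] / scale) + 200, 0, 400))
        iy = int(np.clip(np.rint(t * end[1] / scale) + 200, 0, 400))
        grid[iy, ix] += 1

# Bin the fine grid 20x (to 0.2"/pixel) to mimic camera sampling
binned = grid[:400, :400].reshape(20, 20, 20, 20).sum(axis=(1, 3))

# Crude sigma of the resulting "guide blur" profile along one axis, in arcsec
profile = binned.sum(axis=0)
idx = np.arange(20)
centre = (idx * profile).sum() / profile.sum()
sigma = np.sqrt((((idx - centre) ** 2) * profile).sum() / profile.sum()) * 0.2
print(sigma)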


2 hours ago, vlaiv said:

I don't think you can use the ratio of RA to DEC guide error to estimate total error - it's a bit like saying you can use the ratio of a rectangle's sides to calculate its area (while you can use the ratio in the area formula, you are still missing one unknown).

The best calculation I've come up with so far, including the correction from the above discussion (the square root of the sum of squares is indeed the proper one, since variances add under convolution, not standard deviations as I wrongly presumed), is as follows:

Sigma airy disk = 7200*ATAN(0.45*lambda/(1000000*aperture))*180/PI()

(where lambda is in nm and aperture is in mm)

Sigma seeing = seeing/(2*SQRT(2*LN(2)))

(where seeing is FWHM in arc seconds)

Sigma guide - this is still an unknown term; the only measure we have is guide RMS, but we don't know how it relates to the sigma of a Gaussian PSF, or whether it is indeed a Gaussian PSF - to date I have assumed they are equal.

Sigma total blur = sqrt(airy_sigma^2 + seeing_sigma^2 + guide_sigma^2)

And to convert back to FWHM if one wishes to compare to measured values:

Blur FWHM = sigma_blur * (2*SQRT(2*LN(2)))

(btw this conversion factor is often quoted rounded up to 2.355)

In order to finish the above calculation we need to see how guide RMS relates to Gaussian sigma (if it indeed produces a Gaussian profile).

Two things come to mind in order to do this:

1. We need to quantify how guiding error happens (and possibly classify guide error type according to the mount used) - just to be clear what I mean by this: I suspect that a high-end mount will drift steadily from the 0,0 position to the measured guide deviation and then be brought back with a short, steady correction, while a low-end mount will have more "jumpy" movement from the original position to where it is measured in the guide exposure.

I actually have an idea how to "see" what is happening - guide as usual, find a very bright star and image it with very short exposures (something like Vega and 10 ms) to produce a frame sequence like in planetary imaging. Then it is a matter of plotting the subframe centroid vs time - that will show how the mount moves between guide exposures: sudden jumps or steady motion to the offset position.

2. Based on the above, we just run a simulation: on a fine grid we accumulate, in each cell, the "time spent" in that cell over the duration of a single sub exposure (or even multiple exposures, since the final image will be stacked from many subs), then bin that fine grid to a coarser grid - that gives us the "sampled" PSF (close to the actual image sampling we use, like 0.5"/pixel or so) - and fit a Gaussian to see its sigma and how well it fits.

Then we can compare the simulated PSF's Gaussian sigma to the sigma of the positions used in the simulation to see if there is a relation and what the conversion factor would be.

Hi Vlaiv 

Thanks for the detailed response.

Thinking about this, given that I use MaximDL to acquire my images, I should be able to create a mount tracking log by simply activating guiding but turning off the mount corrections. This would tell me the tracking error of the mount. 

Assuming I'm following your analysis correctly... I think you have omitted the blur created by the CCD imaging process from the total blur calculation (above).

Alan


1 minute ago, alan4908 said:

Hi Vlaiv 

Thanks for the detailed response.

Thinking about this, given that I use MaximDL to acquire my images, I should be able to create a mount tracking log by simply activating guiding but turning off the mount corrections. This would tell me the tracking error of the mount. 

Assuming I'm following your analysis correctly... I think you have omitted the blur created by the CCD imaging process from the total blur calculation (above).

Alan

Not sure what a mount tracking log would be good for (except quantifying PA error and PE) - if one is already guiding, then guide error blur is created by miniature departures from a fixed position which are then guided back. If you turn off guiding you will effectively record unguided blur, which tends to be much larger (this is the reason we guide in the first place).

What would be the blur created by the CCD imaging process? I have not heard of such a thing, so I have no idea what it is, why it appears, or how it behaves.


23 minutes ago, alan4908 said:

Hi Vlaiv 

Thanks for the detailed response.

Thinking about this, given that I use MaximDL to acquire my images, I should be able to create a mount tracking log by simply activating guiding but turning off the mount corrections. This would tell me the tracking error of the mount. 

Assuming I'm following your analysis correctly... I think you have omitted the blur created by the CCD imaging process from the total blur calculation (above).

Alan

Ah found it - you were referring to sampling resolution? (as per one of your previous posts).

I have to disagree here - sampling resolution does not have a blurring effect like the three stated blur types (it is certainly not Gaussian in nature). It does act as a low-pass filter, cutting off frequencies higher than the sampling frequency, but it is only a cut-off; it does not blur in the sense of attenuating frequencies. All of the above act prior to any sampling and influence image resolution at the imaging plane. I would say these should be examined independently of sampling, and the sampling then chosen partly on the basis of the resolution achievable from those three factors.


1 hour ago, vlaiv said:

Not sure what a mount tracking log would be good for (except quantifying PA error and PE) - if one is already guiding, then guide error blur is created by miniature departures from a fixed position which are then guided back. If you turn off guiding you will effectively record unguided blur, which tends to be much larger (this is the reason we guide in the first place).

Well, hopefully, this would enable me to estimate the tracking performance of my mount when it is not guiding.  As stated above I image unguided but I don't have a mount tracking error figure that I can put into the total system blur calculation. 

45 minutes ago, vlaiv said:

I have to disagree here - sampling resolution does not have a blurring effect like the three stated blur types (it is certainly not Gaussian in nature). It does act as a low-pass filter, cutting off frequencies higher than the sampling frequency, but it is only a cut-off; it does not blur in the sense of attenuating frequencies. All of the above act prior to any sampling and influence image resolution at the imaging plane. I would say these should be examined independently of sampling, and the sampling then chosen partly on the basis of the resolution achievable from those three factors.

That's interesting....

After a bit more digging, I've found another reference to calculating the total system blur of an imaging system, taken from The Astrophotography Manual by Chris Woodhouse (1st edition, page 35, under Imaging Resolution). Basically, it states that if your seeing is z arc seconds, your image scale is y arc seconds and your diffraction-limited resolution is x arc seconds, then, ignoring mount guiding errors, the total system blur = SQRT(x^2 + y^2 + z^2). In other words, it says you need to include the effects of the CCD sampling process in the total system calculation. You might be able to view the text online if you do an internet search.

Alan


9 minutes ago, alan4908 said:

Well, hopefully, this would enable me to estimate the tracking performance of my mount when it is not guiding.  As stated above I image unguided but I don't have a mount tracking error figure that I can put into the total system blur calculation. 

That's interesting....

After a bit more digging, I've found another reference to calculating the total system blur of an imaging system, taken from The Astrophotography Manual by Chris Woodhouse (1st edition, page 35, under Imaging Resolution). Basically, it states that if your seeing is z arc seconds, your image scale is y arc seconds and your diffraction-limited resolution is x arc seconds, then, ignoring mount guiding errors, the total system blur = SQRT(x^2 + y^2 + z^2). In other words, it says you need to include the effects of the CCD sampling process in the total system calculation. You might be able to view the text online if you do an internet search.

Alan

There is a really simple way to test the above equation. It states that the blur depends on sampling resolution.

If it is agreed that a stellar image represents the total blur in the image, then any of us can take their own image (a linear stack prior to stretching, or a single calibrated sub), measure the FWHM, bin the image x2, measure the FWHM again and see how the measured FWHM changes (if it changes at all).

If we follow the given formula, the resulting FWHM should be equal to 2.355*SQRT((FWHM/2.355)^2 - original_sampling^2 + binned_sampling^2)

(we calculate the blur without the contribution of sampling by subtracting the "sampling term" and then, because we binned the image, add back the new "correct" sampling term).

Just be sure to convert pixel FWHM to arcseconds in both images and compare.
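(A sketch of that check in Python, assuming the FWHM values have already been converted to arc seconds - the numbers are placeholders:)

import math

def predicted_fwhm_after_binning(fwhm_arcsec, orig_sampling, binned_sampling):
    # Remove the original "sampling term" in quadrature, then add the binned one back
    k = 2 * math.sqrt(2 * math.log(2))          # FWHM = k * sigma, k ~ 2.355
    sigma_sq_no_sampling = (fwhm_arcsec / k) ** 2 - orig_sampling ** 2
    return k * math.sqrt(sigma_sq_no_sampling + binned_sampling ** 2)

# Example: 4.11" FWHM measured at 0.5"/pixel, prediction after binning x2 (1"/pixel)
print(predicted_fwhm_after_binning(4.11, 0.5, 1.0))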


The effect of the pixels is, on reflection, quite complex.

They provide a square sampling pattern of the continuous PSF.

Each pixel (as vlaiv pointed out before) does not provide a point sample but averages over the size of the pixel.

Clearly sampling must impact resolution: if a stellar PSF just covers, and is centred on, one pixel, all information about the PSF is lost except its integrated intensity.

The phase of the PSF with respect to the pixels (i.e. where the PSF falls on the pixels, e.g. peak at a boundary or in the middle) can have a significant impact on the data recorded, and two identical PSFs can look quite different.

Given we don't reconstruct the signal, we can see the pixelation as we zoom in, so there is a residual impact on resolution relative to the original continuous PSF, even if our eye/brain system filters it out at more distant viewing.

Viewed like this, I think you need simulations to draw out the effects in different circumstances, as vlaiv has done and as was done in the paper I linked to early on in the debate. I doubt a simple formula will give more than a general impression, but that may be all you want or need.
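(A tiny sketch of the phase effect in Python - the same narrow Gaussian PSF integrated over unit-width pixels, once centred on a pixel and once on a pixel boundary; the numbers are purely illustrative:)

import numpy as np
from math import erf, sqrt

def pixel_values(centre, sigma=0.6, npix=7):
    # Flux collected by each 1-unit-wide pixel from a 1D Gaussian PSF at 'centre'
    edges = np.arange(npix + 1) - npix / 2           # pixel boundaries
    cdf = [0.5 * (1 + erf((e - centre) / (sigma * sqrt(2)))) for e in edges]
    return np.diff(cdf)

print(np.round(pixel_values(0.0), 3))   # PSF centred on a pixel: one dominant pixel
print(np.round(pixel_values(0.5), 3))   # PSF centred on a boundary: two equal pixels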

Regards Andrew

 

 


5 minutes ago, andrew s said:

The effect of the pixels is, on reflection, quite complex.

They provide a square sampling pattern of the continuous PSF.

Each pixel (as vlaiv pointed out before) does not provide a point sample but averages over the size of the pixel.

Clearly sampling must impact resolution: if a stellar PSF just covers, and is centred on, one pixel, all information about the PSF is lost except its integrated intensity.

The phase of the PSF with respect to the pixels (i.e. where the PSF falls on the pixels, e.g. peak at a boundary or in the middle) can have a significant impact on the data recorded, and two identical PSFs can look quite different.

Given we don't reconstruct the signal, we can see the pixelation as we zoom in, so there is a residual impact on resolution relative to the original continuous PSF, even if our eye/brain system filters it out at more distant viewing.

Viewed like this, I think you need simulations to draw out the effects in different circumstances, as vlaiv has done and as was done in the paper I linked to early on in the debate. I doubt a simple formula will give more than a general impression, but that may be all you want or need.

Regards Andrew

 

 

We do see pixelation because of the way certain software displays a magnified image. Most modern software uses some sort of filter to avoid pixelation. Look at the following example:

image.png.be3622de47fed48e9269c774e8b6647a.png

Left is the pixelated example (800% zoom on a two-star crop from a processed image), while right is Lanczos resampled to 800% - if we exclude the noise artifacts it looks pretty good, apart from the lost frequencies.

While the exact method of recording light in this case is surface-integral sampling, in image processing theory and algorithms each pixel is treated as a point sample (no dimension, just a value at coordinates), and when treated as such you can do the above resampling, which is close enough to proper signal reconstruction (apart from the information lost due to sampling).
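(For anyone who wants to try the same comparison on a small crop, a minimal sketch using Pillow's Lanczos filter, treating each pixel as a point sample - the synthetic image here is purely illustrative:)

import numpy as np
from PIL import Image

# A small synthetic "two star" crop on a 32x32 float grid
yy, xx = np.mgrid[0:32, 0:32]
img = (np.exp(-((xx - 12) ** 2 + (yy - 16) ** 2) / 4.0)
       + 0.6 * np.exp(-((xx - 20) ** 2 + (yy - 15) ** 2) / 4.0)).astype(np.float32)

im = Image.fromarray(img)                                  # 32-bit float image
nearest = im.resize((256, 256), resample=Image.NEAREST)    # pixelated 800% zoom
lanczos = im.resize((256, 256), resample=Image.LANCZOS)    # smooth reconstruction

# Convert back to arrays for display or saving with your favourite viewer
print(np.asarray(nearest).shape, np.asarray(lanczos).shape)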


20 hours ago, gorann said:

Thanks for that Mark, an amazing image that tells me I may keep my 11" SCT, and I bet against Olly coming up with something similar with his 4" refractor and minute camera...

By the way, this is what M51 looks like through a 2 meter RC - 3.6 hours of data from the Liverpool scope that I processed recently, so aperture cannot be totally ignored.

LT M51 r NebRGB PS27sign.jpg

I have often used your Liverpool Telescope images to assert on here the importance of aperture over F ratio, so I would never argue with its importance if we remove all practical boundaries. However, buying something like the Liverpool Telescope lies outside those boundaries, so my theme in opening this discussion was to consider the role of different 'amateur apertures'. I also bet heavily against coming up with something similar in a 4 inch with small pixels. I don't think the 4 inch will match the 5.5 inch or the 14 inch even if the pixel scale is the same. I'm interested in why, though.

A different direction for the discussion: the image above, to my eye, lies on a continuum with my pair of M51s, one in the 14 inch and the other in the 5.5 inch. The continuum goes in aperture order. The revelation of new features is not the key factor which defines the continuum for me. In my opinion it is defined more by the intensity of fairly small scale local contrasts. Slightly larger, that is, than the limits imposed purely by resolution. If this is exasperatingly imprecise to the theorists I can only apologize, but I make pictures. On the other hand this observation does seem to agree with vlaiv's simulations.

 

1923039414_M51opticalresolutionthread.jpg.2d9b9f17442c24045d0df07034c7fb58.jpg

In this quick reprocess of just the spiral, made easier by not having to balance it with the hard stretch of the faint extensions, I feel I've taken the TEC data out of its comfort zone in trying to match the intensity of your local contrasts. In my 'real' version I wouldn't push this hard. Now hard processing needs depth of signal and depth of signal needs light and light arrives via aperture. Or time.

Please note that this is not how I chose to present my TEC data, it's an exploration of its limits and, in my view, beyond its limits.

Olly

 


36 minutes ago, ollypenrice said:

In my opinion it is defined more by the intensity of fairly small scale local contrasts.

This is a key sentence in defining the impact of resolution loss. Observe what happens to two close point sources as we apply larger and larger Gaussian blur to the image:

image.thumb.png.e108af023e4d5b93f17b3025dcb75bb4.png

Above is the stretched image with 0.5 to 5.0 sigma Gaussian blur; below is the linear intensity plot. Local contrast keeps falling until it is so low that the two features can no longer be distinguished - if we can't measure a difference we say those features are not resolved (this is where "detail" loss starts to be evident in this case; for closer/smaller features it starts sooner). One thing to emphasise: all stars in this image have the same intrinsic intensity - it is the loss of detail (loss of local contrast) that makes them look less bright. So it is worth noting that if you are after very faint point-like sources, your SNR will improve if your resolution is good (good seeing, good guiding, large scope) and you undersample (because, as we said, we are dealing with surface-integral sampling rather than point sampling - you want to integrate all that spread-out signal into fewer pixels).
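(The same effect can be reproduced in a few lines of Python - two equal point sources, Gaussian blur of increasing sigma, and the dip between them as a measure of local contrast; the spacing and sigmas are arbitrary:)

import numpy as np

x = np.linspace(-10, 10, 2001)
separation = 3.0                                # arcsec between the two point sources

for sigma in [0.5, 1.0, 1.5, 2.0, 2.5]:
    # Two identical point sources convolved with a Gaussian PSF of this sigma
    profile = (np.exp(-(x - separation / 2) ** 2 / (2 * sigma ** 2))
               + np.exp(-(x + separation / 2) ** 2 / (2 * sigma ** 2)))
    peak = profile.max()
    dip = profile[np.abs(x).argmin()]           # value midway between the sources
    print(sigma, round(1 - dip / peak, 3))      # local contrast; 0 means unresolved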

On a separate note, I did what I suggested above to verify how the sampling rate enters the equation. Here are the results (I used a single calibrated sub and IRIS to calculate FWHM). The sub was originally acquired at ~0.5"/pixel, and binned x2 and x3 for resolutions of 1"/pixel and 1.5"/pixel:

Measured FWHM in each image (same star in all three versions of the sub), in arc seconds:

4.11
4.26
4.32

Appropriate sigmas:

1.745223
1.808917
1.834395

From the above formula, the "sampling-blur-free" sigma should be, in each case:

1.672066
1.507375
1.055938

And we clearly see that these don't match. What is interesting, though, is that the measured FWHM does increase with binning (but I can't spot a pattern in the increase related to sampling resolution) - it could be down to error in the Gaussian fitting because fewer pixels make up the Gaussian profile (the fitting error simply increases at lower resolution).


Well, I'm still attempting to comprehend all this.....

In the meantime, I've found a very interesting post by Jon Rista on the subject of large telescopes/small telescopes and the limits of resolution.  He also addresses how you work out the limiting factors for resolution and the calculation for the total system blur for a system. Interestingly, his approach mirrors that of Chris Woodhouse that I mentioned above. 

Have a look here (you will need to scroll down to his post): https://www.cloudynights.com/topic/559188-understanding-sampling-resolution/

Alan


12 minutes ago, alan4908 said:

Well, I'm still attempting to comprehend all this.....

In the meantime, I've found a very interesting post by Jon Rista on the subject of large telescopes/small telescopes and the limits of resolution.  He also addresses how you work out the limiting factors for resolution and the calculation for the total system blur for a system. Interestingly, his approach mirrors that of Chris Woodhouse that I mentioned above. 

Have a look here (you will need to scroll down to his post): https://www.cloudynights.com/topic/559188-understanding-sampling-resolution/

Alan

It might be that I'm missing something, but in that article all blur sources are treated the same, and the measure of total blur lacks a proper definition. How do we quantify the level of blur?

Blur represents a low-pass filter, and the only way to quantify/describe a blur is to look at its response curve in the frequency domain.

If a blur has a strictly defined mathematical form, we can choose a parameter that represents a characteristic of that blur and use it to compare two different blurs of the same type. If we have rectangles, we can compare their diagonals. If we take a rectangle and a circle, it is meaningless to compare diagonals because a circle does not have one, but we can do something a bit different and say that for the circle we take the diameter as the comparison parameter (provided it makes sense to do so), or we take the area instead.

Let's look at the frequency response for Airy disk blur:

image.png.5b9795e8a60f376ce8612aa85d55db6e.png

This is the description of the blur for an Airy disk - the blue curve is for a diffraction-limited system (clear aperture), the red one for a possibly obstructed aperture or an aperture with some level of aberration.

So any blur component that changes the wavefront irrespective of tracking/seeing - let's call it the "optics bench" wavefront - will be incorporated into this curve (so quality of optics, aperture size, filters used).

Seeing and guiding errors we will approximate with a Gaussian; the justification for that is the Central Limit Theorem or, as andrew s puts it: "Things that tend to give a Gaussian response are either round, random, lots of parts, lots of samples, or a combination of these", with emphasis on the last part (lots of samples) for cases where the sub length tends to infinity.

What does Gaussian frequency response look like?

Like a Gaussian (precisely so):

image.png.7ece2852a0075d2e5859d3c92af8b70a.png

How would we model defocus blur in this manner?

Look at its response curve (for mild defocus):

image.png.5b7e4a4e931af78697210ff3d2f13137.png

What would the sampling response curve be?

Well, that one is a bit harder to explain because it does not act like the other types of blur. All the above-mentioned blurs work as convolution in the spatial domain (and hence multiplication in the frequency domain). Sampling is a bit different: it is modelled via a pulse train (delta functions repeating every pixel) and is done via multiplication in the spatial domain (or convolution in the frequency domain). This graph might help:

image.png.0753b91139648486b318dd264da49de1.png

And the question at the bottom of the image is an important one for understanding what is going on. When we reduce the "time" or make the pulse train denser (increase resolution / reduce pixel size), the pulse train in the frequency domain is stretched (individual pulses are pulled apart). Each pulse in the frequency domain is a "placeholder" for the Fourier transform of the original signal, so, as I mentioned before, when sampling we get the original frequencies up to the sampling limit and then, at each successive pulse, another copy of the Fourier transform of the original signal shifted in the frequency domain (a sort of higher harmonic of the original signal's frequencies) - this is why we need a low-pass filter (like Lanczos / sinc) when trying to reconstruct the original signal; if we don't use one we get artifacts like aliasing / pixelisation.

So you see, and this is the important bit: all sources of blur have slightly different shapes, and sampling is something different altogether (you can think about sampling like this: it cuts off all frequencies above the sampling frequency and leaves all frequencies below the sampling frequency intact - and this holds for an infinite pulse train; an image is finite in extent so there is some attenuation of frequencies - the image below explains this, but compared to other blur sources it is very minor).

Limited pulse train (each pulse having width):

image.png.f1917dca7870da9327f6d8e3a978bdca.png

This means the convolution function will have some width to it as well, and higher harmonics will be attenuated. The convolution function will indeed do some blurring, but in the frequency domain rather than the spatial one.

Bottom line: don't count sampling resolution as blur, but rather as a hard limit on resolution regardless of other blur sources, and choose your sampling resolution so that it captures the frequencies that are still present in the signal (not attenuated to a very small level or cut off completely).

Back to the above formula. It works well if you approximate your blur sources as Gaussians. You can do that with the Airy disk, or the MTF of the whole system (but if you have an optically poor instrument you need a different coefficient than 0.45 lambda to approximate it well - that coefficient is good for a Gaussian approximation of a perfect unobstructed system if we match the areas under the curves; for matching peak values one should use 0.42 lambda, for example). You can do that with seeing. You can do that with tracking error. One might be able to do it with defocus blur, but since that is under our control it should always be eliminated/minimised, so I don't see much point in including it in our approximation.

So for the Airy disk, seeing and guide error it makes sense to use the above formula provided you model each as a Gaussian, but you need to "convert" some value of each into a meaningful Gaussian approximation - the relation of seeing FWHM to the sigma of the approximating Gaussian, for example, or the Airy disk diameter to an approximate Gaussian sigma. Similarly with guide error blur - we need to estimate how to convert guide RMS into the sigma of a Gaussian approximation. You can't just take any numbers, treat them equally and stick them into the formula - for example, why would you mix Airy disk diameter (or radius) with seeing FWHM? Those are not "compatible" values - both need to be converted into their respective sigmas before being put into the formula.

Just to mention a few things about the graphs - all frequency response (MTF) graphs shown here have an X axis that is the "inverse" of feature size. So if something is big in the image it is very close to the origin in X, while if something is very small it is further away (the frequency vs wavelength relation, 1/x). Also worth noting is that the above graphs have not been scaled to the same frequency units (those depend on the blur parameters), so don't treat them as if they had been (e.g. concluding that Gaussian blur attenuates high frequencies roughly the same as Airy disk blur - it looks like that on the graphs, but each graph is stretched in the X direction according to its particular parameters - Gaussian sigma and Airy disk diameter).
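(To make the "cut-off vs attenuation" distinction concrete, a small numpy sketch: a 1D Gaussian star profile, once blurred with a Gaussian and once simply sampled more coarsely, comparing the amplitude spectra at a few spatial frequencies - all numbers are arbitrary:)

import numpy as np

dx = 0.05                                        # fine sampling: 0.05"/sample
x = np.arange(-51.2, 51.2, dx)                   # 2048 samples
star = np.exp(-x**2 / 2.0)                       # Gaussian "star", sigma = 1"

def spectrum(signal, step):
    freq = np.fft.rfftfreq(signal.size, d=step)  # spatial frequency, cycles per arcsec
    amp = np.abs(np.fft.rfft(signal)) * step     # amplitude spectrum
    return freq, amp

f0, a0 = spectrum(star, dx)

# Gaussian blur (e.g. seeing, sigma = 0.8"): attenuates every frequency progressively
kernel = np.exp(-x**2 / (2 * 0.8**2))
fb, ab = spectrum(np.convolve(star, kernel / kernel.sum(), mode="same"), dx)

# Coarser sampling (keep every 16th sample, 0.8"/sample): frequencies below the new
# Nyquist limit are left essentially intact, everything above is simply gone
coarse = star[::16]
fc, ac = spectrum(coarse, dx * 16)

for f in [0.2, 0.5, 1.0]:
    orig = a0[np.argmin(np.abs(f0 - f))]
    blur = ab[np.argmin(np.abs(fb - f))]
    samp = ac[np.argmin(np.abs(fc - f))] if f < fc.max() else "cut off"
    print(f, round(orig, 3), round(blur, 3), samp if isinstance(samp, str) else round(samp, 3))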

