
Optical resolution in DS imaging.


Recommended Posts

Excellent. Thanks to Andrew and Vlaiv. For the first time I think I'm getting somewhere with this. As a practical imager, my interest lies in the perceived detail a system can deliver. This may be deplorable to the pure theorist but it makes perfect sense to anyone intent on making attractive and informative pictures. (I believe we can distinguish between perceived legitimate detail and artefact. Artefacts, for one thing, will be very variable from one system to the next and from one processing job of the same data to the next. And we can refer to Hubble images, as I did when admiring Peter Goodhew's excellent galaxy pair, in which the inner core of the lenticular might have been artefact but certainly wasn't.)

What we need to do here at the observatory end of the discussion is to get the very small pixel/very small aperture setup working to its absolute limit to see if it supports the theory. If it turns out that an 80/480 really can do good galaxy imaging with 2.4 micron pixels then there are going to be some happy bunnies out there.

Olly

 


4 minutes ago, ollypenrice said:

Excellent. Thanks to Andrew and Vlaiv. For the first time I think I'm getting somewhere with this. As a practical imager, my interest lies in the perceived detail a system can deliver. This may be deplorable to the pure theorist but it makes perfect sense to anyone intent on making attractive and informative pictures. (I believe we can distinguish between perceived legitimate detail and artefact. Artefacts, for one thing, will be very variable from one system to the next and from one processing job of the same data to the next. And we can refer to Hubble images, as I did when admiring Peter Goodhew's excellent galaxy pair, in which the inner core of the lenticular might have been artefact but certainly wasn't.)

What we need to do here at the observatory end of the discussion is to get the very small pixel/very small aperture setup working to its absolute limit to see if it supports the theory. If it turns out that an 80/480 really can do good galaxy imaging with 2.4 micron pixels then there are going to be some happy bunnies out there.

Olly

 

I'm just finishing a set of comparison images between 14", 140mm and 80mm scopes in 1", 1.5" and 2" seeing (with great guiding, so it won't vary across the set). This will let you see what sort of difference in resolution there is between each, and under what sky conditions a 14" scope and an 80mm scope won't differ by much. I'm going to provide the images at two resolutions, just to rule out under/over-sampling issues (around 0.4"/pixel and around 1"/pixel).


1 hour ago, vlaiv said:

Actually, I believe there is reconstruction based on Nyquist sampling when it comes to enlarging images: Lanczos resampling.

 

1 hour ago, vlaiv said:

Sinc filter in this case ( https://en.wikipedia.org/wiki/Sinc_filter ), which acts as an ideal low-pass filter in the frequency domain. Lanczos resampling uses sinc-kernel convolution when resampling, producing a "true sampled signal" and the fewest artifacts when resampling.

I know it can be done (within the limitations of the finite size of the detector etc.) but I am not sure it is done as a general rule during processing. I did not find it in the packages I used in the past. Maybe more modern packages use it, e.g. PixInsight?

Olly, have you been aware of this and/or used it?

Regards Andrew
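
For reference, a minimal 1-D sketch of the Lanczos kernel vlaiv describes (the 2-D image case applies the same kernel separably; the function names here are illustrative, not from any particular package):

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos kernel: a sinc windowed by a wider sinc, zero outside |x| < a."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)  # np.sinc(x) = sin(pi*x)/(pi*x)
    out[np.abs(x) >= a] = 0.0
    return out

def lanczos_resample_1d(samples, positions, a=3):
    """Evaluate a sampled signal at arbitrary fractional positions."""
    result = np.empty(len(positions))
    for k, p in enumerate(positions):
        i = np.arange(int(np.floor(p)) - a + 1, int(np.floor(p)) + a + 1)
        weights = lanczos_kernel(i - p, a)      # kernel centred on p
        idx = np.clip(i, 0, len(samples) - 1)   # clamp indices at the edges
        result[k] = np.sum(samples[idx] * weights)
    return result

# Example: enlarge a coarsely sampled sine wave 8x
coarse = np.sin(np.linspace(0, 4 * np.pi, 32))
fine = lanczos_resample_1d(coarse, np.linspace(0, 31, 256))
```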


OK, so here is the result of the "simulation" at 0.4"/pixel:

[Image: simulated star-test grid - three rows (80mm, 140mm, 354mm scopes) by four seeing columns]

Best viewed at original resolution (right click and open in new window).

First row is the 80mm scope - just the Airy disk (0" seeing), then 1" seeing, 1.5" seeing, 2" seeing (all mixed with Mesu-mount-like guide performance).

Second row is the 140mm scope - again Airy disk, 1" seeing, 1.5", 2".

Third row is the 354mm scope (unobstructed) - again 0", 1", 1.5", 2".

Mind you, 1", 1.5" and 2" seeing is not the FWHM of the resulting stars - it is instrument-independent seeing combined with a small guide error (less than half of the smallest seeing blur).

The blur in each image is a combination of the respective Airy disk, seeing of the given FWHM and a small guide error.

All images have been normalized to peak value (in reality it is not so - the worse the seeing, the lower the peak star pixel value, because the blur spreads light over more pixels; that is what the simulation produced, so I had to normalize each frame for visual comparison).

I don't know if I should post the 1"/pixel version - it is really tiny (x2.5 smaller than this) - so I'm not sure it will show the differences, and the sampling will cut into the 14" scope's details (in fact, even the Airy-disk-only frames are already degraded for both the 140mm and the 14" at 0.4"/pixel - for the 80mm I think the sampling is OK).

Anyway, this shows that the 80mm scope in 1" seeing can almost match the 140mm in 2" seeing, while the 14" scope outclasses all.
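
A minimal sketch of how such a simulation can be put together - convolve an Airy PSF with Gaussian kernels for the seeing and the guide error. The parameter values below are illustrative assumptions, not the exact figures used for the images above:

```python
import numpy as np
from scipy.special import j1
from scipy.signal import fftconvolve

ARCSEC_TO_RAD = np.pi / (180 * 3600)

def airy_psf(aperture_m, wavelength_m, scale_arcsec, size=129):
    """Airy pattern for a circular aperture, sampled at scale_arcsec/pixel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.hypot(x, y) * scale_arcsec * ARCSEC_TO_RAD
    v = np.pi * aperture_m * theta / wavelength_m
    psf = np.ones_like(v, dtype=float)
    nz = v != 0
    psf[nz] = (2 * j1(v[nz]) / v[nz]) ** 2
    return psf / psf.sum()

def gaussian_psf(fwhm_arcsec, scale_arcsec, size=129):
    """Gaussian blur (seeing or guide error) of a given FWHM."""
    sigma = fwhm_arcsec / 2.355 / scale_arcsec
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

# Assumed example: 80 mm aperture at 550 nm, 1" seeing, 0.4" guide blur,
# all sampled at 0.4"/pixel
total = airy_psf(0.08, 550e-9, 0.4)
for fwhm in (1.0, 0.4):
    total = fftconvolve(total, gaussian_psf(fwhm, 0.4), mode="same")
# 'total' is the combined PSF; convolve it with a test image to simulate.
```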


Here is a little play with RegiStax wavelets and frequency restoration - 80mm scope, 1" seeing:

[Image: the same 80mm / 1" seeing frame after wavelet sharpening]

It makes the largest-letter sentence almost readable.

Mind you, this image is noise free, and RegiStax is not really suited to processing 32-bit data - so I exported the image as 16-bit before messing with it.
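
A generic stand-in for this kind of frequency restoration (not RegiStax's actual algorithm - just a plain Wiener-style deconvolution to show the idea; the constant k is an assumed noise-damping term):

```python
import numpy as np

def wiener_sharpen(image, psf, k=0.01):
    """Divide out the PSF's transfer function in the frequency domain,
    with the constant k limiting noise amplification at high frequencies.
    Assumes psf is centred and has the same shape as image."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(G * W))
```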


3 hours ago, andrew s said:

 

I know it can be done (within the limitations of the finite size of the detector etc.) but I am not sure it is done as a general rule during processing. I did not find it in the packages I used in the past. Maybe more modern packages use it, e.g. PixInsight?

Olly, have you been aware of this and/or used it?

Regards Andrew

No, never, unless it's incorporated into some of the generic sharpening algorithms under another name.

Olly


On 05/06/2018 at 11:49, ollypenrice said:

If we put a 2.4 micron pixel camera in an 80/480 refractor we have 1.5" optical resolution, which won't become the limiting factor until we are sampling between 0.63"PP and 0.42"PP. Since we are imaging at 1"PP, the optical resolution should, therefore, be irrelevant. On a night of 1 arcsecond seeing this setup should, according to theory (or my interpretation of it!), match literally anything else - say, a 350mm with 0.33" resolution sampling at 0.64"PP. But does anybody really believe this?

I think the key to understanding this is to understand how the individual blurring elements of the system interact to yield the total resolution of the system. You can approximate the total resolution of the telescope system by taking the square root of the sum of the squares of the individual resolution elements (see: http://www.stanmooreastro.com/pixel_size.htm)

So:

Total resolution = SQRT(imaging scale^2 + atmospheric effects^2 + mount guiding error^2 + optical resolution^2)

In your above example of the 80/480 refractor, let's assume that you have 1" of blurring due to atmospheric effects, an imaging scale of 0.62 arc seconds/pixel, and an optical resolution of 1.5". If we make the assumption that the mount tracking error is insignificant, then we have:

Total resolution = SQRT(0.62^2 + 1^2 + 0^2 + 1.5^2)

Total resolution = 1.9 arc seconds. In this case the most significant contribution to the blurring is the optical resolution of the scope. Your sampling rate would be 1.9/0.62 = 3x.

On the other hand, if your distortion from the atmosphere was 2 arc seconds then you would have:

Total resolution = SQRT(0.62^2 + 2^2 + 1.5^2)

Total resolution = 2.6 arc seconds. In this case the most significant contribution to the blurring is the atmospheric effect. Your sampling rate would be 2.6/0.62 = 4.2x.

If you take the example of the much larger 350mm diameter scope with the 0.33 arc second optical resolution, which has an image scale of 0.64 arc seconds/pixel, then you'd get:

Total resolution = SQRT(0.64^2 + 1^2 + 0.33^2), assuming 1" of atmospheric distortion

Total resolution = 1.2 arc seconds. In this case the most significant contribution to the blurring is the atmospheric distortion. Here the mount tracking error, assuming it is half the plate scale, will be 0.32 arc seconds, i.e. it will now be comparable to the blur introduced by the optical resolution and so cannot be assumed to be zero. So, if you redo the calculation taking into account mount tracking error you get:

Total resolution = SQRT(0.64^2 + 1^2 + 0.33^2 + 0.32^2)

Total resolution = 1.3 arc seconds - not much difference, since the result is still dominated by atmospheric distortion.
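
For convenience, the worked examples above can be scripted (a minimal sketch; the function name is mine):

```python
import math

def total_resolution(imaging_scale, seeing, guiding, optics):
    """Root-sum-square blur estimate; all inputs in arcseconds."""
    return math.sqrt(imaging_scale**2 + seeing**2 + guiding**2 + optics**2)

fwhm = total_resolution(0.62, 1.0, 0.0, 1.5)   # the 80/480 example
print(f"{fwhm:.1f} arcsec, sampled at {fwhm / 0.62:.1f}x")  # 1.9", ~3x
```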

Alan


3 minutes ago, alan4908 said:

I think the key to understanding this is to understand how the individual blurring elements of the system interact to yield the total resolution of the system. You can approximate the total resolution of the telescope system by taking the square root of the sum of the squares of the individual resolution elements (see: http://www.stanmooreastro.com/pixel_size.htm)

So:

Total resolution = SQRT(imaging scale^2 + atmospheric effects^2 + mount guiding error^2 + optical resolution^2)

In your above example of the 80/480 refractor, let's assume that you have 1" of blurring due to atmospheric effects, an imaging scale of 0.62 arc seconds/pixel, and an optical resolution of 1.5". If we make the assumption that the mount tracking error is insignificant, then we have:

Total resolution = SQRT(0.62^2 + 1^2 + 0^2 + 1.5^2)

Total resolution = 1.9 arc seconds. In this case the most significant contribution to the blurring is the optical resolution of the scope. Your sampling rate would be 1.9/0.62 = 3x.

On the other hand, if your distortion from the atmosphere was 2 arc seconds then you would have:

Total resolution = SQRT(0.62^2 + 2^2 + 1.5^2)

Total resolution = 2.6 arc seconds. In this case the most significant contribution to the blurring is the atmospheric effect. Your sampling rate would be 2.6/0.62 = 4.2x.

If you take the example of the much larger 350mm diameter scope with the 0.33 arc second optical resolution, which has an image scale of 0.64 arc seconds/pixel, then you'd get:

Total resolution = SQRT(0.64^2 + 1^2 + 0.33^2), assuming 1" of atmospheric distortion

Total resolution = 1.2 arc seconds. In this case the most significant contribution to the blurring is the atmospheric distortion. Here the mount tracking error, assuming it is half the plate scale, will be 0.32 arc seconds, i.e. it will now be comparable to the blur introduced by the optical resolution and so cannot be assumed to be zero. So, if you redo the calculation taking into account mount tracking error you get:

Total resolution = SQRT(0.64^2 + 1^2 + 0.33^2 + 0.32^2)

Total resolution = 1.3 arc seconds - not much difference, since the result is still dominated by atmospheric distortion.

Alan

I've seen this approach before, but I don't quite get the reasoning behind it.

Why would we assume that different blur components (their sigma or RMS) add as linearly independent vectors rather than combining by convolution?

Some things stated at the given link are wrong:

" There is a long-standing controversy in amateur circles as to the minimum sample that preserves resolution.  The Nyquist criterion of 2 is often cited and applied as critical sample = 2*FWHM.  But that Nyquist criterion is specific to the minimum sample necessary to capture and reconstruct an audio sine wave.  The more general solution of the Nyquist criterion is the width (standard deviation) of the function, which for Gaussian is FWHM = 2.355 pixels.  But this criterion is measured across a single axis. To measure resolution across the diagonal of square pixels it is necessary to multiply that value by sqrt(2), which yields a critical sampling frequency of FWHM = 3.33 pixels. "

This, for example, makes no sense at all. For one, the Nyquist criterion for 2D sampling on a rectangular grid is the same as for the 1D case: twice the maximum frequency in the X direction and twice the maximum frequency in the Y direction - or, for square pixels, simply twice the maximum frequency (since the X and Y sampling rates are equal). I must stress frequency: arbitrarily stating a sampling rate in spatial elements without explaining how those spatial elements relate to the frequency domain - like saying 2*FWHM - is wrong. There is no justification for saying that the length of the FWHM in either X or Y corresponds to the wavelength of the highest frequency component in that direction (and it does not - a true Gaussian decomposes into an infinite number of frequencies).
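
To make the last point concrete: the Fourier transform of a Gaussian is another Gaussian, so a Gaussian PSF has no hard frequency cutoff - any "critical sampling" rule must pick a threshold below which the remaining power is treated as negligible. A quick sketch (the FWHM value is just an example):

```python
import numpy as np

fwhm_px = 4.0
sigma = fwhm_px / 2.355
freqs = np.fft.rfftfreq(256)                     # cycles/pixel, 0 .. 0.5
mtf = np.exp(-2 * (np.pi * sigma * freqs) ** 2)  # FT of a unit Gaussian
print(f"relative power at Nyquist: {mtf[-1]**2:.1e}")  # tiny, but not zero
```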


@alan4908 I, too, wondered why we would add the three sources of blur linearly. On the other hand, it may not be right to regard any one source of blur (the one with the lowest resolution) as defining the limit of resolution. Anecdotally (and I no longer have the original capture files) I have the impression that the 14 inch was less affected by the seeing than the 140. This really is strictly anecdotal and may not be correct, but it remains a strong subjective impression.

Olly


1 hour ago, vlaiv said:

I've seen this approach before, but I don't quite get the reasoning behind it.

Why would we assume that different blur components (their sigma or RMS) add as linearly independent vectors rather than combining by convolution?

Some things stated at the given link are wrong:

" There is a long-standing controversy in amateur circles as to the minimum sample that preserves resolution.  The Nyquist criterion of 2 is often cited and applied as critical sample = 2*FWHM.  But that Nyquist criterion is specific to the minimum sample necessary to capture and reconstruct an audio sine wave.  The more general solution of the Nyquist criterion is the width (standard deviation) of the function, which for Gaussian is FWHM = 2.355 pixels.  But this criterion is measured across a single axis. To measure resolution across the diagonal of square pixels it is necessary to multiply that value by sqrt(2), which yields a critical sampling frequency of FWHM = 3.33 pixels. "

This, for example, makes no sense at all. For one, the Nyquist criterion for 2D sampling on a rectangular grid is the same as for the 1D case: twice the maximum frequency in the X direction and twice the maximum frequency in the Y direction - or, for square pixels, simply twice the maximum frequency (since the X and Y sampling rates are equal). I must stress frequency: arbitrarily stating a sampling rate in spatial elements without explaining how those spatial elements relate to the frequency domain - like saying 2*FWHM - is wrong. There is no justification for saying that the length of the FWHM in either X or Y corresponds to the wavelength of the highest frequency component in that direction (and it does not - a true Gaussian decomposes into an infinite number of frequencies).

Hi Vlaiv

I'm simply following Stan Moore's analysis. My logic is that since he's a very highly respected member of the astro community, this information is also of high quality (you might also want to read his chapter "The Theory of Astronomical Imaging" in the book Lessons from the Masters, edited by Robert Gendler). Since you disagree with his analysis, why don't you approach him directly to try to understand his perspective? His contact details are on the main web site from which this article is taken (http://www.stanmooreastro.com).

Alan 


1 hour ago, ollypenrice said:

@alan4908 I, too, wondered why we would add the three sources of blur linearly. On the other hand, it may not be right to regard any one source of blur (the one with the lowest resolution) as defining the limit of resolution. Anecdotally (and I no longer have the original capture files) I have the impression that the 14 inch was less affected by the seeing than the 140. This really is strictly anecdotal and may not be correct, but it remains a strong subjective impression.

Olly

Yes - all very interesting, perhaps even more so since a lot of experts appear to be in disagreement over how to analyse the situation. Perhaps this would be a good topic for a PhD thesis?

Alan


8 minutes ago, alan4908 said:

Yes - all very interesting, perhaps even more so since a lot of experts appear to be in disagreement over how to analyse the situation. Perhaps this would be a good topic for a PhD thesis?

Alan

...or just an experiment with an 80/480 triplet and a 2.4 micron pixel camera? (The role of simpleton comes easily to me!!!)

Olly


4 minutes ago, ollypenrice said:

...or just an experiment with an 80/480 triplet and a 2.4 micron pixel camera? (The role of simpleton comes easily to me!!!)

Olly

Yes, a simple experiment would be just fine - can you mount the 80/480 and a larger scope side by side, pointing at the same target? That way we have consistent tracking error and seeing, and any difference in measured star FWHM would be down to aperture alone.


Spectating here, I would say 100% that some empirical data is what's needed.

On a very basic level (my brain fizzes at the complicated stuff), seeing errors appear to be Gaussian. Guiding errors don't appear to be: they seem to be of relatively narrow bandwidth and remarkably consistent amplitude - more like a sine wave with a combination of dither and Gaussian error overlaid. The imaging resolution applies a very sharp low-pass filter to the data, and all bets are off for diffraction-limit effects, which create an Airy disc potentially appearing as several concentric rings.

It's really beyond my pay grade to work out how to sum those four types of error, and I don't know whether a Gaussian approximation and an RMS approach is a fair estimate or not.

It strikes me that in imaging people always try to tackle too many variables at once.

To understand the impact of the different factors, surely an empirical approach is needed: fixing all the input variables but one and measuring just one dependent variable - possibly Fourier analysis of the image.

Would an ideal image show a frequency spectrum with detail right up to the imaging scale and then dropping off? I suspect this would look horrible, and a smoother transition would look better, just as oversampled audio sounds better.


4 minutes ago, vlaiv said:

Yes, a simple experiment would be just fine - can you mount the 80/480 and a larger scope side by side, pointing at the same target? That way we have consistent tracking error and seeing, and any difference in measured star FWHM would be down to aperture alone.

This does need to be put to the test.

But to summarize our progress on the theory, it seems that we don't know how to combine the sources of fuzziness. Interesting.

Olly


2 minutes ago, Stub Mandrel said:

Spectating here, I would say 100% that some empirical data is what's needed.

On a very basic level (my brain fizzes at the complicated stuff), seeing errors appear to be Gaussian. Guiding errors don't appear to be: they seem to be of relatively narrow bandwidth and remarkably consistent amplitude - more like a sine wave with a combination of dither and Gaussian error overlaid. The imaging resolution applies a very sharp low-pass filter to the data, and all bets are off for diffraction-limit effects, which create an Airy disc potentially appearing as several concentric rings.

It's really beyond my pay grade to work out how to sum those four types of error, and I don't know whether a Gaussian approximation and an RMS approach is a fair estimate or not.

It strikes me that in imaging people always try to tackle too many variables at once.

To understand the impact of the different factors, surely an empirical approach is needed: fixing all the input variables but one and measuring just one dependent variable - possibly Fourier analysis of the image.

Would an ideal image show a frequency spectrum with detail right up to the imaging scale and then dropping off? I suspect this would look horrible, and a smoother transition would look better, just as oversampled audio sounds better.

Here it is:

[Image: FFT of a 512x512 crop of an M51 image - linear power spectrum (left) and log power spectrum (right)]

Left is the linear power spectrum, right is the log power spectrum. This is the FFT of part of an M51 image (I just took a random 512x512 crop, including some stars and part of the galaxy).

From this, to my eyes, a couple of things are evident: a Gaussian-type low-pass filter acted on the image (to be expected, as the combination of blurs is effectively approximated by a Gaussian); the image is noisy (random noise produces a random frequency distribution); and diffraction spikes at 45 degrees are evident.

The Gaussian falls off at about 3.4 pixels per cycle, meaning that I oversampled the image in terms of resolution - this was imaged at 1"/pixel, and my guiding and seeing were not up to it on that particular night. The proper resolution for this image is somewhere around 1.7"/pixel.
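
If anyone wants to repeat this measurement on their own data, a radially averaged log power spectrum of a square crop is enough to see where the signal falls into the noise floor (a minimal sketch, assuming a 2-D numpy array as input):

```python
import numpy as np

def radial_log_power(img):
    """Radially averaged log10 power spectrum of a square image crop."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    n = img.shape[0]
    y, x = np.indices(power.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    profile = (np.bincount(r.ravel(), weights=power.ravel())
               / np.maximum(np.bincount(r.ravel()), 1))
    return np.log10(profile[: n // 2] + 1e-12)  # index = cycles per frame
```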


37 minutes ago, Thalestris24 said:

I think it's normal to sum the squares of uncorrelated/independent errors. But I'm not sure if that's an accurate model?

Louise

ps replying to Olly's last post :)

Yes, that is the right way to add uncorrelated errors when the errors displace an observed value - like the error sources for a pixel value: there is a "true" value of the pixel and each error displaces it, adding a bit to it or subtracting a bit from it.

So we can either say the total error is the square root of the sum of the squares, or we can "apply" each error in sequence.

We can use a similar approach here, except that we are not displacing a value but convolving with a PSF (blurring).

I'm pretty confident about how to calculate the resulting blur if we approximate each PSF with a Gaussian - combine the sigmas in quadrature and that gives the sigma of the "summed" blur (the convolution of a Gaussian with a Gaussian is a Gaussian whose variance is the sum of the two variances). The seeing PSF is Gaussian due to the Central Limit Theorem, and the Airy disk can be well approximated by a Gaussian.

One thing that I am worried about is, as @Stub Mandrel pointed out, the justification for modelling guide/track error as a Gaussian PSF. If I'm interpreting the Central Limit Theorem correctly, averages of the mount's individual "motions" (a slow or fast, roughly linear departure from the correct position, followed by a swift linear return to approximately the original position during one guide cycle) will still tend to a Gaussian distribution. The problem is how to obtain the Gaussian PSF's sigma from the guide RMS (and how the two are related under different mount error models - sudden shifts vs slow drift).
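
The Gaussian-convolution claim is easy to verify numerically - the sigmas combine in quadrature (a quick 1-D check with arbitrary example sigmas):

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
g = fftconvolve(gaussian(x, 1.2), gaussian(x, 0.9), mode="same") * dx
sigma_measured = np.sqrt(np.sum(x**2 * g) / np.sum(g))  # second moment
print(sigma_measured, np.hypot(1.2, 0.9))               # both ~1.5
```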


On 05/06/2018 at 17:21, ollypenrice said:

 If it turns out that an 80/480 really can do good galaxy imaging with 2.4 micron pixels then there are going to be some happy bunnies out there.

 

If I correctly understand where you are going with this, you are asking if you can get the same detail in a galaxy image taken with a small scope and a small-pixel sensor as you can with a large scope and a large-pixel sensor. In broad terms (with lots of caveats) I think the answer is yes.

However, let's suppose for example that the large scope has 3x the aperture of the small scope and the large sensor has 3x the pixel size of the small sensor. In terms of final image resolution they may well be matched (if the focal ratios are similar). BUT, which one collects the most light from the galaxy? The big-aperture scope will collect 9x as much light from the galaxy, and this will give 3x the signal-to-noise ratio of the small scope. The image quality (in terms of noise and revealing faint structures) of the big set of equipment will be far superior to that of the small set of equipment. Why do observatories use big mirrors? It's all about light collection.

I think the bunnies will need to put their happiness on hold.
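
For reference, the 9x light / 3x SNR arithmetic in a couple of lines (shot-noise-limited case assumed; the apertures are illustrative):

```python
import math

d_small, d_large = 80.0, 240.0          # mm, a 3x aperture ratio
light_ratio = (d_large / d_small) ** 2  # photon rate scales with area: 9x
snr_ratio = math.sqrt(light_ratio)      # shot-noise SNR scales as sqrt: 3x
print(light_ratio, snr_ratio)
```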

Mark


18 minutes ago, sharkmelley said:

If I correctly understand where you are going with this, you are asking if you can get the same detail in a galaxy image taken with a small scope and a small-pixel sensor as you can with a large scope and a large-pixel sensor. In broad terms (with lots of caveats) I think the answer is yes.

However, let's suppose for example that the large scope has 3x the aperture of the small scope and the large sensor has 3x the pixel size of the small sensor. In terms of final image resolution they may well be matched (if the focal ratios are similar). BUT, which one collects the most light from the galaxy? The big-aperture scope will collect 9x as much light from the galaxy, and this will give 3x the signal-to-noise ratio of the small scope. The image quality (in terms of noise and revealing faint structures) of the big set of equipment will be far superior to that of the small set of equipment. Why do observatories use big mirrors? It's all about light collection.

I think the bunnies will need to put their happiness on hold.

Mark

Thanks Mark and, yes, this is the thrust of my investigation. The question is, at what point does the ability to diminish both aperture and pixel size break down? For my own purposes (finding visually perceptible detail in galaxies) and to my own satisfaction, I've found I can get as much detail, so defined, out of the TEC140/Atik 460 as I could out of the 14 inch/SXVH36. Initial attempts to carry on the trend with a 106mm scope and 2.4 micron pixels have failed to match the detail in the two larger scopes but we have had other issues to contend with. We'll be trying again for sure.

I agree with everything you say about the light grasp and aperture. However, the small pixel cameras are more modern and more sensitive. I find that I need less time, if anything, in the TEC/460 setup than in the 14 inch. This will be down to the cameras. I've made the same point about professional telescopes and their size myself and I agree but the professionals have a more rigorous definition of resolution and also want to go much deeper.

My own feeling is that the bunnies will be pretty darned happy but that they'd still prefer a TEC140... At some point the small aperture-small pixel trick has to break down but the question is, when and why? This is the first thread I've read on this subject in which an answer seems to be within sight. 

Olly


Olly,

the result of your experiment will be very interesting, and I am slightly scared. Maybe it will tell me that I should sell my 11" EdgeHD before I have even had a chance to use it and buy a cheap small-pixel camera instead - if there is still a market for large SCTs after your experiment...


6 minutes ago, gorann said:

Olly,

the result of your experiment will be very interesting, and I am slightly scared. Maybe it will tell me that I should sell my 11" EdgeHD before I have even had a chance to use it and buy a cheap small-pixel camera instead - if there is still a market for large SCTs after your experiment...

Ho hum, I'm in exactly the same position. Some time ago I bought a nice Meade ACF 10 inch second hand. I've yet to try it because the TEC is proving so competent at matching the old 14 inch images that I'm not all that tempted to do so. I don't think I'm doing its resale value much good though!

Olly


10 minutes ago, ollypenrice said:

Thanks Mark and, yes, this is the thrust of my investigation. The question is, at what point does the ability to diminish both aperture and pixel size break down? For my own purposes (finding visually perceptible detail in galaxies) and to my own satisfaction, I've found I can get as much detail, so defined, out of the TEC140/Atik 460 as I could out of the 14 inch/SXVH36. Initial attempts to carry on the trend with a 106mm scope and 2.4 micron pixels have failed to match the detail in the two larger scopes but we have had other issues to contend with. We'll be trying again for sure.

I agree with everything you say about the light grasp and aperture. However, the small pixel cameras are more modern and more sensitive. I find that I need less time, if anything, in the TEC/460 setup than in the 14 inch. This will be down to the cameras.

My own feeling is that the bunnies will be pretty darned happy but that they'd still prefer a TEC140... At some point the small aperture-small pixel trick has to break down but the question is, when and why? This is the first thread I've read on this subject in which an answer seems to be within sight. 

Olly

It's all a very interesting question! You rightly allude to the fact that there are also big differences in sensor technology, in terms of QE and read noise, that need to be taken into account. So the total equation is fairly complex. Increases in QE can partly compensate for the reduction in light grasp, but there's a limit to the improvement - the SXVH36 is already at 50% peak QE (I think), so a 2x improvement is the absolute maximum that can be achieved. Read noise in modern CMOS sensors has come down by an extraordinary amount, so that should also be of some benefit.

I honestly don't think your 2.4 micron pixels will ever deliver the image quality to which you are accustomed but I'm looking forward to your results!

Mark


5 minutes ago, sharkmelley said:

 

I honestly don't think your 2.4 micron pixels will ever deliver the image quality to which you are accustomed but I'm looking forward to your results!

Mark

I'm glad you say this because neither do I! But how close will they get? Cliffhanger...

Olly


18 minutes ago, ollypenrice said:

Ho hum, I'm in exactly the same position. Some time ago I bought a nice Meade ACF 10 inch second hand. I've yet to try it because the TEC is proving so competent at matching the old 14 inch images that I'm not all that tempted to do so. I don't think I'm doing its resale value much good though!

Olly

I just wonder how much the optical quality of the instrument has to do with resolution. Another point is obstructed vs unobstructed scope. A third point that might work against the C14 is the mount it was used on, and the guide performance (OAG vs guide scope) - mirror shifts are likely to happen unless it was an Edge version with a mirror lock. Was it properly collimated - did you inspect only the central region? SCTs (classical ones) have coma...

Poorer optics will "inflate" the Gaussian approximation of the Airy pattern (as more light shifts from the central disk into the rings, the Gaussian approximation becomes less tall and more "fat", thus increasing its sigma).
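
For reference, the Gaussian-equivalent width of a perfect Airy core is easy to estimate - its FWHM is about 1.025 λ/D, and a Gaussian's FWHM is 2.355 sigma (the values below assume 550 nm and an 80 mm aperture, matching the ~1.5" figure quoted earlier in the thread):

```python
import numpy as np

lam, D = 550e-9, 0.08  # assumed wavelength (m) and aperture (m)
fwhm_arcsec = np.degrees(1.025 * lam / D) * 3600
sigma_arcsec = fwhm_arcsec / 2.355
print(f'Airy FWHM ~ {fwhm_arcsec:.2f}", Gaussian sigma ~ {sigma_arcsec:.2f}"')
# ~1.45" FWHM for a perfect 80 mm aperture; optical defects only inflate this.
```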


Archived

This topic is now archived and is closed to further replies.
