

High resolution deep sky imaging


alan4908


I'm currently contemplating modifying my current imaging rig to yield higher resolution deep sky images.

Here are my thoughts; I'd be interested in any comments!

Alan

 

THEORY

Currently, I'm acquiring my deep sky imaging data at 0.7 arc seconds per pixel with an SW Esprit 150 and an SX Trius 814 camera. I believe this is the highest resolution I can usefully obtain given average seeing conditions at my site. For example, if I take the calibrated, stacked Lum data for my most recent image (NGC5907) and put it through CCDInspector, it tells me that my average FWHM is 2.43 arc seconds, which implies that I need to sample at 2.43/3.5 = 0.69 arc seconds/pixel. Here, I'm using Stan Moore's recommendation of x3.5 rather than the ideal x2 rate (see http://www.stanmooreastro.com/pixel_size.htm).

However, if I look at the individual calibrated 600s Lum subs, I can see that some of them have a much better FWHM, reflecting better seeing conditions. According to CCDInspector, the best of these has an average FWHM of 1.49 arc seconds, implying a sampling of 1.49/3.5 = 0.43 arc seconds/pixel. So, in theory, there would be some benefit to sampling more finely, although this would only be apparent if I kept only the very best images, e.g. those with a much smaller FWHM than the average.

However, given that the UK has infrequent clear skies, the last thing I want to do is discard the vast majority of my 600s Lum subs in return for a marginal increase in resolution. So, for this to be practical, I'd have to significantly reduce the Lum exposure time to (say) 60s, which means this higher resolution approach would probably only be applicable to parts of very bright deep sky objects (e.g. the Cat's Eye Nebula). This is perhaps OK, since I could construct a High Dynamic Range image of the entire object from the individual 60s and 600s subs.

PRACTICE

Since I image unguided, I should be able to collect large quantities of 60s Lum images reasonably efficiently, with a view to throwing (say) 80% of them away.

The simplest way to sample more finely, whilst attempting to maintain image contrast, would be to use a 2x Powermate. This would halve my current pixel scale to 0.35 arc seconds/pixel.

I'd need to build new V-curves for FocusMax, new filter offsets and a new sky model for the mount. 

Current setup: Field Flattener->Spacer#1->Filter Wheel->CCD (Spacer#1 places the Field Flattener 96mm from the CCD)

Proposed setup: Field Flattener->Spacer#2->Filter Wheel->Powermate->CCD. Question: is this configuration correct? What is the required distance between the CCD and the Field Flattener?



Here is my view on this. I think that you are already oversampling. I'll explain.

First, the reasoning at the given link is flawed.

I agree that the blur PSF can be approximated with a Gaussian profile, but in my view the stated formula for blur size is not correct. Blur contributions from aperture, seeing and guiding error combine by convolution rather than by simple addition. The seeing PSF is convolved with the Airy disk PSF of the optics (not a Gaussian but an Airy profile, though it can be approximated by a Gaussian), and also convolved with the guide error PSF.

None of these PSFs is truly Gaussian in nature, but each can be approximated by one (both the seeing and guide error PSFs tend towards a Gaussian shape as integration time tends to infinity - the central limit theorem). The convolution of a Gaussian with a Gaussian is again a Gaussian, whose variance is the sum of the variances of the two convolved Gaussians - that is, the standard deviations add in quadrature: sigma^2 = sigma1^2 + sigma2^2.
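As a quick numerical check of that claim (a minimal Python sketch; the sigma values are arbitrary):

```python
import numpy as np
from scipy.signal import fftconvolve

# Two Gaussian PSFs with arbitrary standard deviations (in pixels)
sigma1, sigma2 = 2.0, 3.0
x = np.arange(-100, 101)
g1 = np.exp(-x**2 / (2 * sigma1**2))
g2 = np.exp(-x**2 / (2 * sigma2**2))

# Convolve them and measure the standard deviation of the result
conv = fftconvolve(g1, g2, mode="same")
conv /= conv.sum()                            # normalise to unit area
sigma_conv = np.sqrt(np.sum(conv * x**2))     # second moment (mean is 0)

print(sigma_conv)                             # ~3.606
print(np.sqrt(sigma1**2 + sigma2**2))         # 3.606: quadrature sum, not 5.0
```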

This does not by itself determine the choice of resolution; it is useful, however, when discussing the appropriate resolution if we approximate the total blur PSF as a Gaussian (and we have reason to do so, considering how that blur is formed).

Given that you have the average FWHM of your subs, that is a reasonable starting point for choosing resolution. It is important to emphasize the following: if the blur PSF were a true Gaussian, there would be no theoretical limit to resolution! However, because the blur PSF is not a true Gaussian (we only approximate it with one) - one of the convolution components being the Airy PSF - there is a physical limit to the resolution that can be achieved. Planetary imaging uses this upper limit to determine sampling.

The choice of sampling rate in the given text is also flawed. First, the Nyquist theorem for 2D sampling on a rectangular grid is the same as the Nyquist theorem for the 1D case (with evenly spaced samples). It states that we should sample at twice the highest frequency we want to capture, in both the x and y directions. We are using a square grid, so we don't have to worry about different sampling rates in different directions.

The problem I've seen in applying the Nyquist theorem to image sampling is a lack of understanding of the Fourier transform of 2D signals. It is often said that the x2 should relate to some value in the spatial domain - like x2 per FWHM or something like that. This is simply not correct. We need to look at the Fourier transform of the signal in the frequency domain and understand which frequencies we can cut off - frequencies above some point will not be meaningful to capture - and based on that we can choose our sampling.

In this case we have a Gaussian approximation to the blur PSF that acts on our original image by convolution. There is a link between convolution and Fourier transforms: convolution in the spatial domain equals multiplication in the frequency domain. This, coupled with the fact that the Fourier transform of a Gaussian profile is again a Gaussian profile, lets us choose a good sampling frequency.
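The convolution theorem itself is easy to verify numerically (a minimal sketch with numpy; the arrays are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(100)
b = rng.standard_normal(100)

# Direct linear convolution in the spatial domain
direct = np.convolve(a, b)

# The same result via the frequency domain: multiply the FFTs
n = len(a) + len(b) - 1        # zero-pad so circular wrap-around doesn't occur
via_fft = np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)))

print(np.allclose(direct, via_fft))   # True
```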

Look at the following graph:

[Image: Gaussian attenuation curve in the frequency domain]

And now let me remind you of a piece of old audio equipment - the graphic equalizer:

[Image: graphic equalizer with per-band sliders]

These two can be thought of as the same thing - meaning that Gaussian PSF blur acts like an equalizer that attenuates higher frequencies more and more (multiplies them by values less than one). Since a Gaussian never reaches 0, it theoretically does not cut off (multiply by 0) any frequency, so in theory there is no upper limit on resolution.

Not so for the Airy disk PSF; look at its Fourier transform (better known as the MTF):

[Image: MTF of an Airy disk PSF, showing the cut-off frequency]

There is a clear cut-off point (depending on aperture size), beyond which all frequency components are effectively 0 - as we know, there is a limiting resolution for a telescope of a given size, and this is why.
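For a feel for where that cut-off sits, it can be computed directly (a sketch; the 150mm aperture and 550nm wavelength are assumed values, chosen to roughly match the Esprit 150):

```python
import numpy as np

D = 0.150       # aperture diameter in metres (assumed: Esprit 150)
lam = 550e-9    # wavelength in metres (assumed: mid-visible)

# The incoherent MTF cuts off at spatial frequency D / lambda (cycles/radian),
# i.e. the finest resolvable period is lambda / D radians
period_arcsec = np.degrees(lam / D) * 3600

# Nyquist: at least two samples per finest period
critical_sampling = period_arcsec / 2
print(f"finest period: {period_arcsec:.2f}\" -> critical sampling: {critical_sampling:.2f}\"/pixel")
# finest period: 0.76" -> critical sampling: 0.38"/pixel
```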

OK, now that we understand how the blur PSF attenuates frequencies, we can start thinking about resolution. A sensible option for maximum resolution would be to choose the frequency at which attenuation exceeds some selected value - say, there is no point in recording frequencies that are attenuated to 1% or less of their original value. This is where you have a choice, because you can actually recover these high frequencies (up to the cut-off point of the Airy PSF, which is quite a bit higher than the frequencies we are examining) with some algorithm for frequency restoration - like deconvolution, or wavelet-based frequency restoration (the famous wavelets in RegiStax). The problem is that by restoring frequencies you also intensify noise (noise being a signal whose frequency components are randomly distributed - so if the noise is large, it will also have some large high-frequency components that you will "boost", making the noise worse).

As you can see, maximum resolution is tied to SNR - the greater the SNR, the more frequency restoration you will be able to do while keeping SNR at an acceptable level (that is, if you plan to do frequency restoration on your signal).

So my advice would be: keep frequencies that are attenuated to no less than 0.1-0.05 of their original value if you don't plan to do frequency restoration, and keep frequencies down to 0.01 of their original value if you do plan to do frequency restoration and have high SNR in your images.

Now we need to derive what sort of sampling will give us such frequencies. I've already done that and made a thread about it, so I'm going to link it here and just list the formula for the sampling resolution based on the chosen attenuation factor.

So formula is:

PI / sqrt( -2 * ln(fraction) / (FWHM / 2.355)^2 )

Let's plug in some numbers and see what we get from this:

You mentioned that you have 2.43" FWHM. Here are the three values, for attenuation factors of 0.1, 0.05 and 0.01 respectively:

1.51"/pixel, 1.32"/pixel, 1.06"/pixel
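A few lines of code make these numbers easy to reproduce, and to recompute for other FWHM values (a minimal sketch; only the 2.43" FWHM comes from your post, the rest is the formula above):

```python
import numpy as np

def max_pixel_scale(fwhm_arcsec, fraction):
    """Coarsest pixel scale ("/pixel) that still samples every frequency
    attenuated by no more than `fraction`, per the formula above."""
    sigma = fwhm_arcsec / 2.355                       # FWHM -> Gaussian sigma
    return np.pi / np.sqrt(-2 * np.log(fraction) / sigma**2)

for frac in (0.1, 0.05, 0.01):
    print(f"{frac}: {max_pixel_scale(2.43, frac):.2f}\"/pixel")
# 0.1: 1.51"/pixel, 0.05: 1.32"/pixel, 0.01: 1.07"/pixel
# (the last matches the 1.06 above to within rounding)
```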

This is why I've said you are oversampling. To keep even the frequency components attenuated by up to 99%, for the given FWHM, you only need to sample at 1.06"/pixel - and even then you would need to do frequency restoration to bring out the captured detail as fully as possible.

You can further improve the resolution of your images by improving your FWHM - better guiding, making sure you image when the seeing is exceptional, using a larger aperture, or using some advanced stacking algorithms. As you have noticed, there is variation in FWHM between frames. Ideally you would deconvolve the individual frames to lower their FWHM to match the reference frame - this in turn increases the noise in those frames, so in the end you want to do a noise-dependent weighted combine of the frames (I've developed a rather interesting algorithm to deal with such cases, and am trying to find time to implement and test it - it is also useful under light pollution, because different parts of the sky have different levels of LP and hence different SNRs for the same target).
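For the per-frame deconvolution step, something along these lines could serve as a starting point (a minimal sketch, not the algorithm I mention above; it assumes scikit-image is available, that a circular Gaussian is an adequate PSF model, and that FWHM has been measured per frame):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def match_frame_fwhm(frame, fwhm_px, target_fwhm_px, iterations=30):
    """Deconvolve one calibrated sub so its FWHM roughly matches the
    sharpest (reference) frame. Gaussian variances subtract in quadrature,
    so only the 'difference' blur is removed. Requires fwhm_px > target."""
    sigma = np.sqrt(fwhm_px**2 - target_fwhm_px**2) / 2.355
    size = 2 * int(np.ceil(3 * sigma)) + 1        # odd kernel, covers +/- 3 sigma
    psf = np.zeros((size, size))
    psf[size // 2, size // 2] = 1.0
    psf = gaussian_filter(psf, sigma)             # sampled Gaussian PSF, sums to ~1
    # clip=False keeps the data in its native intensity range
    return richardson_lucy(frame.astype(float), psf, iterations, clip=False)
```

The noise this amplifies in the weaker frames is exactly what then motivates the noise-dependent weighting when combining.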

 

 

 

 


Alan,

From a practical point of view, I think it will be pretty hard to achieve good results across the whole frame by combining a field flattener and a barlow/Powermate. These components are probably not designed to work with each other. Also from a practical point of view, I am not sure it makes sense to go much beyond 0.7"/px with a 150mm aperture instrument. You could give it a try with a smaller-pixel camera (like the QHY183M - quite high QE, and CMOS cameras are also better suited to short-exposure subframes), but it also depends on the actual resolution your Esprit can provide. I used a 130mm APO for some time with a camera at 1.5"/px and then changed to one at 1.05"/px. I noticed some resolution improvement in my subframes, but it was not much. At my location the average FWHM is also in the range 2.4-2.5"; on some exceptional nights I am able to take 120s subframes with FWHM better than 2".


6 hours ago, vlaiv said:

You can further improve the resolution of your images by improving your FWHM - better guiding, making sure you image when the seeing is exceptional, using a larger aperture, or using some advanced stacking algorithms. (...)

 

Vlaiv - many thanks for your very detailed response. It's interesting that you disagree with the Stan Moore analysis; unfortunately, my technical knowledge of this area is limited. However, I do understand the consequences of your conclusion, which suggests that I should abandon my Powermate proposal :happy8:  Thanks also for the suggestions on how I can improve my existing FWHM.

4 hours ago, drjolo said:

From a practical point of view, I think it will be pretty hard to achieve good results across the whole frame by combining a field flattener and a barlow/Powermate. (...)

Thanks for the pragmatic response. My conclusion is that this is another nail in the coffin for the powermate route, taking into account my current sampling rate and average FWHM.

The QHY183M camera does look interesting and does seem to be a way of exploiting lucky imaging. It's also interesting that you seem to have approximately the same average seeing conditions, and that you noticed some small improvement when you changed your sampling from 1.5 to 1.05 arc seconds per pixel.

Anyway, I will park the powermate proposal for now. :happy11:

Alan


Imo, the question really is whether you want to optimise your rig for average imaging conditions or for the best imaging conditions. Since you already have a very stable mount, my guess is the latter. Why not go for a finer pixel scale, so you're prepared for those (rare?) crisp clear nights, and live with the fact that most of the time your images will be oversampled? (And when that happens, maybe image 2 x 2 binned? See the sketch below.)
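(A minimal numpy sketch of 2 x 2 software binning; odd edge rows/columns are simply trimmed:)

```python
import numpy as np

def bin2x2(img):
    """Sum each 2x2 block, halving resolution on both axes and improving
    per-pixel SNR when the image is oversampled."""
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]             # trim to even dimensions
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```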

"Up here", those crisp, clear nights most often occur late winter or early spring, just when they're needed for galaxy season. And since you already have multiple scopes and cameras, you can swap depending on target and season. That's the way I would go, if I had the funds for it.

Btw, I wouldn't discard luminance subs, just because seeing wasn't perfect.

Just my €0.02


10 hours ago, wimvb said:

Imo, the question really is whether you want to optimise your rig for average imaging conditions or for the best imaging conditions. (...)

Hi Wim

Hmm, that is an interesting point.

One item that I neglected to mention above is that I routinely use a little deconvolution on the high signal-to-noise parts of my stacked Lum images (using CCDStack), which virtually always improves the overall image resolution. Given that deconvolution should only work on oversampled images, this presumably indicates that I'm a little oversampled the majority of the time, as a consequence of my average seeing conditions. Assuming this is correct, it doesn't seem too bad a place to be, since I'm giving up a little signal to noise for the chance of capturing higher resolution data when seeing conditions allow.

I think I shall try to understand a little more about the techniques available for deep sky lucky imaging before I make any further purchases!

Alan

 

 

 


6 minutes ago, alan4908 said:

One item that I neglected to mention above is that I routinely use a little deconvolution on the high signal-to-noise parts of my stacked Lum images (using CCDStack), which virtually always improves the overall image resolution. Given that deconvolution should only work on oversampled images, this presumably indicates that I'm a little oversampled the majority of the time, as a consequence of my average seeing conditions. (...)

You can try a drizzle + deconvolution combo with your current setup. When I had a 1.5"/px setup that was definitely not oversampled, I tried this combination a few times with good results. I used PixInsight Drizzle Integration and then deconvolution with a PSF model and a mask. Drizzle integration makes sense if you think the image is undersampled.


Drizzle makes sense when undersampled, as noted. But it comes at the cost of SNR, and you end up with large files (assuming 2 x drizzle). Deconvolution can work even with somewhat undersampled images, if you're careful. But yes, using it with caution on oversampled images is best practice.

Afaik, lucky imaging for DSOs only gets you so far. To beat the seeing you need very short exposures, typically under 5 seconds (a very conservative estimate). I wonder how deep you can go in just 5-second exposures. I don't recall ever having seen IFN images where this technique was used with success. I think that once camera technology gets there, we can all safely ditch guiding. (And unfortunately that nice mount of yours will then be overkill. :wink:)


I'm working on a magazine article at the moment, looking at how refractor high res images compete with ones from larger reflectors. As I've often said, I take the theory with a pinch of salt because I rarely find it gives a terribly good prediction of the practical results. I like much of your thinking, especially the idea of using only the best images to get the sharpest results on the bright parts.

My inclination, like drjolo's, would be to try a very small pixel CMOS camera for the reasons he suggests. One of my robotic guests is doing just this with his FSQ106, and initial results suggest remarkable resolution for the aperture and FL.

Personally I doubt that you'll gain much by going below your present 0.7"PP, but who knows, you might gain a bit. Julian Shaw used an AP Advanced Barlow and a 6 inch refractor to capture this rendition of the Sombrero (well placed in NZ for the shoot). I think it's a great capture. Scroll down on the second page of the thread to see it.

 

 


1 hour ago, alan4908 said:

One item that I neglected to mention above is that I routinely use a little deconvolution on the high signal-to-noise parts of my stacked Lum images (using CCDStack), which virtually always improves the overall image resolution. (...)

There is always some sense in doing a bit of frequency restoration (either by deconvolution or by wavelet decomposition), regardless of where you are with sampling and depending on your SNR.


The only frequencies you won't be able to restore are those that are effectively multiplied by 0 - those past the resolving power of the telescope.

If one had an image with high enough SNR, one would be able to restore all frequencies up to the resolving power of the telescope, regardless of the seeing and guide errors.

The problem, of course, is that by going to a higher sampling frequency (using smaller pixels, or a longer focal length) you hurt your SNR more and more, and you will be able to do much less frequency restoration because of the poor SNR.

41 minutes ago, ollypenrice said:

I'm working on a magazine article at the moment, looking at how refractor high res images compete with ones from larger reflectors. (...)

 

 

Olly, is there a full resolution version of this image anywhere?

The image linked in the post is presented at a resolution of about 0.75"/pixel; I don't know what the original capture resolution was, or whether any captured detail was lost in scaling down.

There is not much point in capturing an image at 0.4"/pixel if one is going to scale it down to 0.8"/pixel - then you can just skip the x2 barlow and shoot at 0.8"/pixel; you won't lose any detail in doing so compared with shooting at 0.4"/pixel and then scaling down.


2 hours ago, drjolo said:

You can try a drizzle + deconvolution combo with your current setup. (...)

I have never tried drizzle integration - perhaps I should, since if the image deteriorates in clarity then I presume this would be further (practical) evidence that my images are slightly oversampled.

1 hour ago, wimvb said:

Drizzle makes sense when undersampled, as noted. (...)

I guess I was thinking of applying lucky DSO imaging in a slightly different and perhaps more limited context. To explain: some DSOs consist of very bright and very dim parts - an extreme example is the Cat's Eye Nebula, where the central eye is very bright and very small but contains a lot of detail. I was thinking that the lucky imaging technique could be applied to the central eye, and "normal" DSO imaging to the rest of the nebula. So I would take the Cat's Eye with my existing setup, then retake it with my to-be-defined lucky imaging setup. I'd then combine the two images and end up with a better looking image... hopefully :happy11:

 

51 minutes ago, ollypenrice said:

I'm working on a magazine article at the moment, looking at how refractor high res images compete with ones from larger reflectors. (...)

Thanks for the suggestion Olly - I shall definitely take a detailed look at very small pixel, very high QE cameras to see what is available. The Sombrero picture is impressive, and the Astro-Physics Advanced Barlow you mention it was taken with also looks interesting: http://www.astro-physics.com/index.htm?products/accessories/visual_acc/visual_acc

Alan


2 hours ago, vlaiv said:

 

There is not much point in capturing an image at 0.4"/pixel if one is going to scale it down to 0.8"/pixel - then you can just skip the x2 barlow and shoot at 0.8"/pixel; you won't lose any detail in doing so compared with shooting at 0.4"/pixel and then scaling down.

This may not be correct. The stars were smaller in the Barlowed image, it seems. Julian didn't take many subs pre-Barlow, from what I remember, but he showed me what he had and the Barlowed stars were smaller. I can't be more precise because this wasn't my data. I think there is a Roland Christen article about the advantages of the Advanced Barlow on the net. So the idea with the Barlow was not, as one might suspect, to present a larger final image but one with slightly enhanced resolution at the normal size. Without full data sets for comparison it isn't possible to give chapter and verse but since you're interested in squeezing the last drop of resolution from your scope this might be a line to follow up on.

Olly


31 minutes ago, ollypenrice said:

This may not be correct. The stars were smaller in the Barlowed image, it seems. (...)

The only way I can think of a barlow producing a sharper image - if one is already sampling at a high enough rate without it, and frequency restoration is not included - is if the barlow itself somehow corrects the optics of the given telescope (like the APM barlow correcting for coma in Newtonians, or reports of the Baader VIP sharpening up the view with some eyepieces; possibly less field curvature due to the narrower field?).

On the other hand, you say that the stars were indeed smaller. How did you measure? By Gaussian fitting, or just by looking at the star "diameter" in the subs? If the latter, did you make sure that both frames were normalized and given an equal stretch? (Even in a single sub, stars can appear to have a different "diameter" depending on how much one stretches the data.)
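(For reference, a minimal sketch of the Gaussian-fitting approach, assuming scipy and a small cutout centred on an isolated, unsaturated star:)

```python
import numpy as np
from scipy.optimize import curve_fit

def measure_fwhm(cutout):
    """Fit a circular 2D Gaussian to a star cutout; return the FWHM in pixels."""
    h, w = cutout.shape
    y, x = np.mgrid[:h, :w]

    def gauss2d(coords, amp, x0, y0, sigma, bg):
        xx, yy = coords
        return (bg + amp * np.exp(-((xx - x0)**2 + (yy - y0)**2)
                                  / (2 * sigma**2))).ravel()

    bg0 = np.median(cutout)
    p0 = [cutout.max() - bg0, w / 2, h / 2, 2.0, bg0]   # rough initial guesses
    popt, _ = curve_fit(gauss2d, (x, y), cutout.ravel(), p0=p0)
    return 2.355 * abs(popt[3])           # FWHM = 2*sqrt(2*ln2) * sigma
```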

 


5 hours ago, vlaiv said:

The only way I can think of a barlow producing a sharper image - if one is already sampling at a high enough rate without it, and frequency restoration is not included - is if the barlow itself somehow corrects the optics of the given telescope. (...)

I just took a quick look at the Barlowed and unbarlowed stacks. Since I'm going to use my eyes to look at my images, that's what I use to process them! :evil4: :D Not strictly true, since I do measure things from time to time, most notably background sky per colour.

What does occur to me in this thread, though, is that it may be a mistake to take FWHM as an absolute measure of final processed resolution. Stars don't behave like extended sources. FWHM is certainly a good indicator of seeing, but is it a perfect indicator of final resolution? Maybe it isn't.

Olly


50 minutes ago, ollypenrice said:

I just took a quick look at the Barlowed and unbarlowed stacks. (...)

Actually, it is a perfect indicator :D - one couldn't hope for a better one.

The same PSF acts on each "point" of the image (not each pixel, but each infinitesimally small point). Well, this is not strictly 100% true, since we can't say the seeing is exactly equal across a larger FOV, but the average of the seeing over a long exposure comes very close to being the same everywhere in the image (like 99.999% the same). Nor is it 100% true for the Airy PSF - that depends on the characteristics of the scope and on off-axis aberrations, so not every point will have exactly the same PSF. Guide error is indeed equal everywhere if polar alignment is good and there is no significant field rotation (again, we are talking 99.999%). But let's take the idealized case; it certainly holds that the PSF is the same over some area around a given star in the image. A star is, for all intents and purposes, an ideal single point with no inherent width or height (it does have them, but they are so small, due to the distance to the star, that the projected star radius is effectively 0) - it is what is called in maths a delta function. So the star's shape shows exactly how light from an infinitesimally small point is spread out - since all the captured light indeed comes from the star, which is a single point of light.

Or to put it in mathematical terms: the convolution of a PSF with a delta function is that same PSF (centred at the delta function). This means that the light of a single star traces exactly the shape of the PSF acting on every other single point (with the restrictions already stated) - which, in terms of the Fourier transform, defines the filter that acts on the frequencies, degrading (blurring) the image.
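(A tiny numpy illustration of that identity - convolving an arbitrary PSF with a delta function returns the PSF itself:)

```python
import numpy as np

psf = np.array([0.05, 0.25, 0.40, 0.25, 0.05])   # arbitrary 1D "PSF"
delta = np.zeros(51)
delta[25] = 1.0                                   # a star: single point of light

star_image = np.convolve(delta, psf, mode="same")
print(np.allclose(star_image[23:28], psf))        # True: the star image IS the PSF
```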


10 minutes ago, ollypenrice said:

Of course you can! Your eyes looking at the finally processed image are the best arbiters...

Olly

Well, it seems that we look at achieved resolution from different perspectives (neither necessarily wrong; it depends on where one's interests in this hobby lie).

In my view, what counts as resolution is a measurable thing - being able to "scientifically" distinguish features, rather than the image looking "sharp" and "rich in detail". Eyes are easily deceived (optical illusions), and we often perceive a noisy image as sharper regardless of the actual (true) detail present in the image. Images are sometimes even altered to increase perceived detail and sharpness - like morphological rounding and shrinking of stars - none of which adds actual detail (at least not detail that is physically there in the real object imaged), but such changes do tend to make images look sharper (or more detailed).


13 hours ago, ollypenrice said:

I'm working on a magazine article at the moment, looking at how refractor high res images compete with ones from larger reflectors.

Interesting. I'd like to read that when it becomes available.


2 hours ago, vlaiv said:

Well, it seems that we look at achieved resolution from different perspectives (neither necessarily wrong; it depends on where one's interests in this hobby lie). (...)

You're quite right. All perfectly true.

38 minutes ago, wimvb said:

Interesting. I'd like to read that when it becomes available.

I don't want to pre-empt the article for the magazine's sake but I'll be happy to send you the full res comparison files once it's out.

Olly


10 hours ago, ollypenrice said:

Of course you can! Your eyes looking at the finally processed image are the best arbiters...

Remember that we are talking about FWHM being a measure of seeing, not of how "pretty" the final image turns out to be after it has been through countless stages of non-linear processing - including, ultimately, the response of the monitor and the human eye (and, for a magazine article, the printing process and its corrections) - all of which make any opinion highly subjective.

It is also worth considering a perturbation analysis of the "input" parameters - the scope, the additional optics (barlows, flatteners, etc.), the seeing and the image SNR - to determine how much effect each has on the PSF. That is what the OP is asking: what effect will increasing the resolution of the sensor have?
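A rough sketch of such a sensitivity check, treating each contribution as an independent Gaussian so the FWHMs add in quadrature (only an approximation, as discussed earlier in the thread; all input values and the pixel-blur factor here are illustrative assumptions):

```python
import numpy as np

def system_fwhm(seeing, guide_rms, aperture_mm, pixel_scale, lam_nm=550):
    """Very rough system FWHM in arcsec from independent Gaussian terms."""
    airy = 1.02 * lam_nm * 1e-9 / (aperture_mm * 1e-3) * 206265  # diffraction FWHM
    guiding = 2.355 * guide_rms           # RMS guide error -> FWHM
    pixel = 0.9 * pixel_scale             # crude pixel-blur term (assumed factor)
    return np.sqrt(seeing**2 + airy**2 + guiding**2 + pixel**2)

# Perturb one input at a time to see which dominates the PSF
base = system_fwhm(seeing=2.0, guide_rms=0.3, aperture_mm=150, pixel_scale=0.70)
fine = system_fwhm(seeing=2.0, guide_rms=0.3, aperture_mm=150, pixel_scale=0.35)
print(f"{base:.2f}\" -> {fine:.2f}\": finer sampling barely moves the total")
```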


22 hours ago, pete_l said:

Remember that we are talking about FWHM being a measure of seeing, not of how "pretty" the final image turns out to be after it has been through countless stages of non-linear processing. (...)

I don't disagree. I hope to have the opportunity to try a small step down in pixel scale by borrowing a CMOS camera with slightly smaller pixels than my CCD (0.8"PP compared with 0.9"PP). It will be interesting to see if this translates into a visible improvement. If the improvement isn't visible, I don't think I'll be calling it an improvement! In truth, I think the seeing will be by far the most influential factor.

In comparing 14 inch images of two galaxies I've taken here with 5.5 inch images at, respectively, 0.63"PP and 0.9"PP, I find that in only one tiny part of one of the images is there convincing evidence of more detail from the bigger scope. 

Olly


 

A fascinating comparison of Theory and Real World.

 

15 hours ago, ollypenrice said:

In comparing 14 inch images of two galaxies I've taken here with 5.5 inch images at, respectively, 0.63"PP and 0.9"PP, I find that in only one tiny part of one of the images is there convincing evidence of more detail from the bigger scope. 

Presumably the 5.5 inch image was cropped to the same galaxy size as the 14 inch?

Any problems imaging at 0.63 arc-sec per pixel, which some might consider to be oversampled?

Michael


12 hours ago, michael8554 said:

 

Presumably the 5.5 inch image was cropped to the same galaxy size as the 14 inch?

Any problems imaging at 0.63 arc-sec per pixel, which some might consider to be oversampled? (...)

No problems, other than that there was next to no gain in visible resolution of detail, only in image size. I made the comparison both with the larger image at full size and with it resized down to the size of the smaller. The guide traces suggested that it was the seeing rather than the guiding that was costing us the extra theoretical resolution. The key thing is that no new features were revealed when the larger image was seen at full size as opposed to reduced; the same features just became a little larger. I also asked an experienced imager to make the same comparison, and his conclusions were the same.

Personally I'm delighted by this discovery because I would rather image with a refractor on the grounds of simplicity - and because I already have one!

Olly

 


1 minute ago, Thalestris24 said:

I've often wondered whether it would be possible for an amateur to combine the outputs of two scopes spaced, say, a metre or two apart, via aperture synthesis, thereby increasing the resolution. But it looks a bit complicated, unfortunately. A girl can dream!

Louise

No, unfortunately it is impossible.

In order to do aperture synthesis successfully you need phase information, and CCD sensors don't record phase information. Even if you somehow managed to record the phase of the light, the frequencies of visible light are so high that you would need an extremely fast sampler and extreme precision in timing (like atomic clocks) to combine the signals.

There are two types of signal combination in radio aperture synthesis. One is physical: two dishes have waveguides that physically bring the captured signal to the same place, and then wave interference does the job. The second is done in the data; this works for the VLBA and the like, where the scopes are physically too far apart to lay waveguides between them. It relies on very precise phase sampling, and then the "interference" is done on the data itself via calculation. It is usually done only at radio frequencies, which are much lower than those of visible light.

There is, however, a trick that one can do with a single telescope. I don't know much about it, but you can have a look to see if it interests you - it is called speckle interferometry. I believe an aperture mask with two (or more) holes is used, which produces interference patterns that can be recorded on the sensor. From these interference patterns an image (or data) can be reconstructed. I'm not sure if it is used just to overcome seeing, or whether there is an additional gain in "resolution" (not for full images, but some features can be extracted - it is often used for double stars).

Here is a brief read on the subject: https://en.wikipedia.org/wiki/Speckle_imaging

(it outlines a couple of techniques; lucky imaging is related to one of them).

