
Optical resolution in DS imaging.



Root mean square is a reasonable way to sum random variables, but I can think of three questions that could apply:

  1. Are the random variables truly Gaussian? Bear in mind that at low flux (i.e. faint features just above the noise floor) they might well be better modelled with a Poisson distribution, which could give a broader, less contrasty image.
  2. How well do you really know the variability of each variable? Seeing is far from constant over a session; the average seeing over each sub is probably measurably different. Also, what is the impact of changes in transparency on moving the noise floor? Or atmospheric dispersion - its impact on planetary imaging is profound at relatively low elevations, much greater than the sort of figures being discussed here (obviously irrelevant to narrowband).
  3. Are the variables actually random at all? It's already been stated that the impact of the sampling frequency (pixel size) is neither Gaussian nor random. It doesn't blur the data features; it just sets the resolution at which those features are sampled.

In fact I would argue that it is being incorporated at totally the wrong point in the calculation.

Rather than asking 'what impact does pixel size have on the image?', instead ask 'what pixel size is needed to capture all the useful information in the image?'.

The finest-grained information recorded is the faint stars that just peep above the noise floor.

 

17 hours ago, andrew s said:

The phase of the PSF with respect to the pixels (i.e. where the PSF falls on the pixels, e.g. peak at a boundary or in the middle) can have a significant impact on the data recorded, and two identical PSFs can look quite different.

Consider sampling a pure sine wave at twice its frequency - right on the Nyquist limit. Depending on the phase relationship, the recorded 'waveform' can be a square wave with an amplitude anywhere between that of the original waveform and zero (if it consistently samples the zero crossing point). Slightly more rapid sampling 'drifts' and allows a better picture of the waveform to be built up.

The sine wave becomes a stepped approximation. Nyquist tells us the 'steps' don't need to follow the curve exactly, as application of a low-pass filter recreates the original waveform, BUT the fidelity of this does increase (with a law of diminishing returns as the sampling frequency increases).
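To make that concrete, here is a minimal numerical sketch (Python/numpy; the frequency and phases are arbitrary choices, not anything from the thread): sampling a sine exactly at the Nyquist rate gives a recorded amplitude that depends entirely on the sampling phase.

```python
import numpy as np

f = 10.0           # signal frequency (arbitrary)
fs = 2 * f         # sampling exactly at the Nyquist rate
n = np.arange(16)  # sample indices

for phase_deg in (0, 45, 90):
    phase = np.deg2rad(phase_deg)
    samples = np.sin(2 * np.pi * f * n / fs + phase)
    # every sample is +/- sin(phase), so the recorded amplitude is set by the phase alone
    print(f"phase {phase_deg:2d} deg -> recorded amplitude {np.max(np.abs(samples)):.3f}")

# phase  0 deg -> ~0.000 (we always hit the zero crossings)
# phase 45 deg -> ~0.707
# phase 90 deg -> ~1.000 (we always hit the peaks)
```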

This does apply to images.

Think of a typical guide star on a PHD2 display, showing a central peak flanked by two much smaller peaks - obviously oversampled from the perspective of showing there is a star there in an image, but that oversampling is vital for positioning the centroid to sub-pixel accuracy.

What this tells us is that faint stars spread over several pixels might not appear at all! To show the presence of the faintest stars you need big enough pixels to collect enough signal to get them above the noise floor, but there are two dangers - too big and they disappear into the gaps between pixels, or you lose the sampling resolution to split close doubles.

 

So this is my contention - as pixels get smaller, the ability to resolve the image improves to the point where ultimately, in ideal conditions, each star could show as an Airy disc, but with the downside that when the light from fainter stars is spread over too many pixels they will disappear into the noise and you will lose detail.

This makes ideal pixel size a CHOICE not an ABSOLUTE. For each subject and each imager's personal preference there will be a compromise solution that balances faint stars and fine detail.

 

If you don't believe me, when was the last time you saw a finished image that included stars just one or two pixels in size?

 

On 07/06/2018 at 10:50, alan4908 said:

Total resolution = SQRT(Imaging scale^2 + atmospheric effects^2 + mount guiding error^2 + optical resolution^2)

As you already touched upon, this formula is best used to get a feeling for where to put your money/effort to upgrade a setup. If all the numbers for camera, optics, guiding and seeing are plugged in, the formula should give a clear indication of what to improve next. The results may very well show that guiding needs to be improved before even considering smaller pixels. Btw, I would enter expected best seeing conditions rather than average; otherwise one would be optimising the setup for average performance.
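As a trivial illustration of using the formula as an upgrade guide, here is a rough sketch (the numbers are made-up placeholders, not anyone's actual setup):

```python
import math

# Hypothetical contributions, all expressed as FWHM in arc seconds
budget = {
    "image scale (sampling)": 1.0,
    "atmospheric seeing":     2.0,
    "mount guiding error":    1.5,
    "optical resolution":     0.8,
}

total = math.sqrt(sum(v ** 2 for v in budget.values()))
print(f"total resolution estimate: {total:.2f}\"")

# the largest single term is the one worth spending money on first
worst = max(budget, key=budget.get)
print(f"dominant term: {worst} ({budget[worst]:.1f}\")")
```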


This has been a stimulating discussion and, pending extensive data input from Olly, I thought I would try to summarise where I think we have got to and point a way towards a practical method of choosing a detector.

Firstly I think the problem can be decomposed into 3 sections.

1) Everything from the star (assumed to be a point source) to the detector which presents a PSF.

2) The detector, treated as a sampled-data system that adds noise and has a certain quantum efficiency.

3) A measurement system - image processing system that measures the resultant image of the star.

1) Can be covered by the approximate "sqrt of sum of sqrs" type formula via simulation or via MTF as in Star Testing Astronomical Telescopes by H R Suiter.

    However, the output is a characterisation of the PSF - e.g. peak height and FWHM (for a full S/N assessment you want the limiting magnitude, bandwidth etc.).

2) The detector samples the resultant PSF on a fixed grid depending on pixel size, reduces its peak height, applies its quantum efficiency and adds noise.

3) The measurement system would then assess the resultant "image" of the ADU values of the PSF.

     This would be in terms of the same characteristics as the initial PSF, e.g. height and FWHM, but also S/N.

        Note that, depending on how these are measured, the measurement itself can add a varying amount of noise - e.g. is a Gaussian fitted to the data, is linear or cubic spline interpolation used, or some other filtering approach?

In principle, with this approach, you could optimise your choice of detector by specifying 1) and the outcome required in 3).

While this may seem fanciful, these ideas are used in exposure time calculators, even for amateur work. I have used them in spectroscope design, where I put in basic data on the sky seeing, telescope, spectroscope, target star magnitude and spectral type etc., then altered the detector characteristics and tried to optimise spectral resolution while minimising exposure for a given S/N. (Lower RON beat higher QE.)
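As a toy version of that kind of calculation, here is a hedged sketch comparing two invented detectors with a very simplified point-source S/N formula (the flux, background and exposure values are arbitrary; the background is taken as already-detected electrons, so QE is applied only to the source):

```python
import math

def snr(source_e_per_s, background_e_per_s, read_noise_e, qe, exposure_s):
    """Simplified S/N: detected signal over shot noise + background noise + read noise."""
    signal = source_e_per_s * qe * exposure_s
    background = background_e_per_s * exposure_s
    return signal / math.sqrt(signal + background + read_noise_e ** 2)

# Two hypothetical detectors for a faint, read-noise-limited signal (as in spectroscopy)
for name, qe, ron in [("high QE ", 0.9, 8.0), ("low RON ", 0.6, 1.5)]:
    value = snr(source_e_per_s=0.05, background_e_per_s=0.02,
                read_noise_e=ron, qe=qe, exposure_s=300)
    print(name, round(value, 2))
# with these numbers the low read-noise detector wins, echoing the point above
```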

Regards Andrew 

 


2 hours ago, Stub Mandrel said:

Consider sampling a pure sine wave at twice its frequency - right on the Nyquist limit. Depending on the phase relationship, the recorded 'waveform' can be a square wave with an amplitude anywhere between that of the original waveform and zero (if it consistently samples the zero crossing point). Slightly more rapid sampling 'drifts' and allows a better picture of the waveform to be built up.

The sine wave becomes a stepped approximation. Nyquist tells us the 'steps' don't need to follow the curve exactly, as application of a low-pass filter recreates the original waveform, BUT the fidelity of this does increase (with a law of diminishing returns as the sampling frequency increases).

This is not quite how sampling works. Sampling works like this: there is a pulse train - a function that is 0 everywhere except at the "sampling points", where it has a value of 1. The sampling points are usually enumerated 0, 1, 2, 3, ... (so we can visualise the X axis as the real numbers, with a spike of value 1 at each integer).

This function is multiplied with our waveform to obtain the sampled function. The characteristic of the sampled function is that it is 0 everywhere except at the sampling points, where it has the value of the original waveform (everywhere else it is multiplied by the 0 of the pulse train, giving 0, and at the sample points it is multiplied by 1, hence exactly the same value). The sampled function is therefore not a stepped function like a bar graph; it is more like a point scatter - no data between the values (or rather, the values there are 0, so there is no information).

When we want to reconstruct the signal we do some sort of interpolation of the missing values. Possible interpolations are: nearest neighbour - a bar graph / stepped function; linear interpolation - straight lines "connecting the dots"; a polynomial of some degree (both of the prior methods are polynomials, of degree 0 and 1 respectively); or some sort of spline function. Whichever method of interpolation we choose, we need to cut off frequencies above half the sampling frequency, because all frequencies appearing above that limit after signal reconstruction are artifacts of the interpolation rather than actual signal - so we use a sinc or similar filter.
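A rough numerical sketch of that description, for anyone who wants to play with it (Python/numpy; the signal frequency is an arbitrary choice well below the Nyquist limit, and the interior window simply avoids edge effects of the finite pulse train):

```python
import numpy as np

fs = 10.0                                  # sampling rate
t_samp = np.arange(0, 2, 1 / fs)           # the 'pulse train' positions
f = 1.3                                    # signal frequency, well below fs / 2
samples = np.sin(2 * np.pi * f * t_samp)   # point samples: nothing exists between them

def sinc_reconstruct(t, samples, t_samp, fs):
    """Whittaker-Shannon reconstruction: one sinc kernel per sample, summed."""
    return sum(s * np.sinc((t - ts) * fs) for s, ts in zip(samples, t_samp))

t_fine = np.linspace(0, 2, 2000)
recon = sinc_reconstruct(t_fine, samples, t_samp, fs)
truth = np.sin(2 * np.pi * f * t_fine)

interior = (t_fine > 0.3) & (t_fine < 1.7)   # ignore edges of the finite sample set
print("max interior reconstruction error:", np.max(np.abs(recon - truth)[interior]))
```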


19 hours ago, vlaiv said:

On a separate note, I did what I suggested should be done to verify how the sampling rate enters the above equation; here are the results (I used a single calibrated sub and IRIS to calculate FWHM). The sub was originally acquired at ~0.5"/pixel, and binned x2 and x3 for resolutions of 1"/pixel and 1.5"/pixel:

Measured FWHM in each image (same star in all three versions of the sub, in arc seconds):

4.11
4.26
4.32

Appropriate sigmas:

1.745223
1.808917
1.834395

From the above formula, the "sampling blur free" sigma should be, in each case:

1.672066
1.507375
1.055938

And we clearly see that these don't match. What is interesting, though, is that the measured FWHM does increase with binning (but I can't spot a pattern in the increase related to sampling resolution) - it could be down to error in the Gaussian fitting because of the smaller number of pixels making up the Gaussian profile (the fitting error simply increases at lower resolution).

Thanks for all the explanations, I think I'm slowly comprehending this (hopefully).

What is confusing me is the following. Above, you give a demonstration on what the formula would predict if you took an image, measured the FWHM, binned the image 2x2 and then remeasured the FWHM. As you say, you can test the formula by comparing the measured binned FWHM with the predicted FWHM. After doing this you seem to conclude that the formula is inaccurate. 

I was attempting to repeat your analysis and I came to a different conclusion. To explain: if you take a well sampled image that you've taken with your scope, the formula is saying that the measured FWHM should be: 2.355*SQRT((Image Scale/2.355)^2 + X1). Here X1 represents the unknown contribution from the seeing, guiding error etc. Remembering my math, I can re-express this as:

FWHM = SQRT(Image Scale^2 + X2) where X2 is unknown.

So, if I take the image and bin it 2x2, X2 will not change but the image scale will double. So:

FWHM (binned) = SQRT(4*Image Scale^2 + X2).

So, now we attempt to eliminate X2 by squaring both equations:

FWHM^2 = Image Scale^2 + X2

and 

FWHM (binned)^2 = 4*Image Scale^2 + X2

If you then subtract the two equations to eliminate the unknown X2 you get:

FWHM (binned)^2 - FWHM^2 = 3*Image Scale^2

or rearranging

FWHM (binned) = SQRT (3*Image Scale^2 + FWHM^2).
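For what it's worth, the elimination step can be checked symbolically (a sketch using sympy; s stands for the image scale and X2 for the unknown quadrature term):

```python
import sympy as sp

s, X2 = sp.symbols("s X2", positive=True)
fwhm = sp.sqrt(s ** 2 + X2)              # unbinned
fwhm_binned = sp.sqrt(4 * s ** 2 + X2)   # binned 2x2: image scale doubled

# eliminating X2 leaves only the image-scale term
print(sp.simplify(fwhm_binned ** 2 - fwhm ** 2))                   # 3*s**2
print(sp.simplify(sp.sqrt(3 * s ** 2 + fwhm ** 2) - fwhm_binned))  # 0
```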

In your test you have an image scale of approximately 0.5" and a measured FWHM value before binning of 4.11. So if you put these into the equation you get a predicted FWHM after binning of:

FWHM (binned) = SQRT(3 x 0.5^2 + 4.11^2) = 4.2. This compares to your measured value of 4.26.

This appears to suggest a reasonably accurate prediction by the equation, at least when using oversampled data.

So what is the error in my analysis?

Alan

 


10 minutes ago, alan4908 said:

Thanks for all the explanations, I think I'm slowly comprehending this (hopefully).

What is confusing me is the following. Above, you give a demonstration on what the formula would predict if you took an image, measured the FWHM, binned the image 2x2 and then remeasured the FWHM. As you say, you can test the formula by comparing the measured binned FWHM with the predicted FWHM. After doing this you seem to conclude that the formula is inaccurate. 

I was attempting to repeat your analysis and I came to a different conclusion. To explain: if you take a well sampled image that you've taken with your scope, the formula is saying that the measured FWHM should be: 2.355*SQRT((Image Scale/2.355)^2 + X1). Here X1 represents the unknown contribution from the seeing, guiding error etc. Remembering my math, I can re-express this as:

FWHM = SQRT(Image Scale^2 + X2) where X2 is unknown.

So, if I take the image and bin it 2x2, X2 will not change but the image scale will double. So:

FWHM (binned) = SQRT(4*Image Scale^2 + X2).

So, now we attempt to eliminate X2 by squaring both equations:

FWHM^2 = Image Scale^2 + X2

and 

FWHM (binned)^2 = 4*Image Scale^2 + X2

If you then subtract the two equations to eliminate the unknown X2 you get:

FWHM (binned)^2 - FWHM^2 = 3*Image Scale^2

or rearranging

FWHM (binned) = SQRT (3*Image Scale^2 + FWHM).

In your test you have an image scale of approximately 0.5" and a measured FWHM value before binning of 4.11. So if you put these into the equation you get a predicted FWHM after binning of:

FWHM (binned) = SQRT(3 x 0.5^2 + 4.11) = 4.2. This compares to your measured value of 4.26.

This appears to suggest a reasonably accurate prediction by the equation, at least when using oversampled data.

So what is the error in my analysis?

Alan

 

FWHM (binned) = SQRT(3 x 0.5^2 + 4.11) = 4.2

How? When I put that expression into a calculator, the value I get is ~2.2, not 4.2.

 


1 hour ago, vlaiv said:

This is not quite how sampling works. Sampling works like this: there is a pulse train - a function that is 0 everywhere except at the "sampling points", where it has a value of 1. The sampling points are usually enumerated 0, 1, 2, 3, ... (so we can visualise the X axis as the real numbers, with a spike of value 1 at each integer).

This function is multiplied with our waveform to obtain the sampled function. The characteristic of the sampled function is that it is 0 everywhere except at the sampling points, where it has the value of the original waveform (everywhere else it is multiplied by the 0 of the pulse train, giving 0, and at the sample points it is multiplied by 1, hence exactly the same value). The sampled function is therefore not a stepped function like a bar graph; it is more like a point scatter - no data between the values (or rather, the values there are 0, so there is no information).

When we want to reconstruct the signal we do some sort of interpolation of the missing values. Possible interpolations are: nearest neighbour - a bar graph / stepped function; linear interpolation - straight lines "connecting the dots"; a polynomial of some degree (both of the prior methods are polynomials, of degree 0 and 1 respectively); or some sort of spline function. Whichever method of interpolation we choose, we need to cut off frequencies above half the sampling frequency, because all frequencies appearing above that limit after signal reconstruction are artifacts of the interpolation rather than actual signal - so we use a sinc or similar filter.

True, but a basic conversion just outputs a signal and changes it at each new data point to create a stepped output; I've made a simple digital waveform generator that uses this approach for its D2A. This is analogous to a 1:1 output of data to pixels.

The approaches you describe are analogous to the various ways (Lanczos, B-spline...) of resampling an image to increase the pixel count, or, in audio, to the oversampling used in better quality D2A converters.

In audio a low-pass filter can be as simple as an RC network across the output; in imaging the low-pass filtering is applied by looking at the final result from far enough away not to be able to distinguish individual pixels? I suppose in audio terms this would be equivalent to directly driving a class D amp with the digital data and letting the response of the loudspeaker/listener's ears do the smoothing!


1 hour ago, vlaiv said:

FWHM (binned) = SQRT(3 x 0.5^2 + 4.11) = 4.2

How? When I put that expression into a calculator, the value I get is ~2.2, not 4.2.

 

Ah yes, well spotted, a typo on the formula which I have now edited. Have another look.

It should have been:

1 hour ago, alan4908 said:

FWHM (binned) = SQRT (3*Image Scale^2 + FWHM^2).

Alan


3 hours ago, alan4908 said:

Ah yes, well spotted, a typo on the formula which I have now edited. Have another look.

It should have been:

Alan

Well, if you do that formula then it indeed LOOKS like 4.2 is close to 4.26, but let's change the formula a bit and say that we are going to use an arbitrary factor next to image_scale^2 - let it be 5, for example:

so instead of using: FWHM (binned) = SQRT (3*Image Scale^2 + FWHM^2)

we try FWHM (binned) = SQRT (5*Image Scale^2 + FWHM^2) then we get 4.259 - that looks like 4.26 even more :D.

I know that putting 5 instead of 3 is not justified by logic - but it shows one important thing: 4.26 is only a bit larger than 4.11, and any small value X, with X << 4.11, when added in quadrature to 4.11 will increase it by only a very small amount. This means that by adding a small number it might look as though we are confirming the above formula within measurement error, but we are confirming the formula with 5 instead of 3 in just the same way.
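The arithmetic behind that point, spelled out (using the numbers quoted above):

```python
import math

fwhm_base = 4.11     # measured FWHM before binning, arc seconds
image_scale = 0.5    # arc seconds per pixel before binning

for factor in (3, 5):
    predicted = math.sqrt(factor * image_scale ** 2 + fwhm_base ** 2)
    print(f"factor {factor}: predicted binned FWHM = {predicted:.3f}")

# factor 3 gives 4.200 and factor 5 gives 4.259 - both sit close to the measured 4.26,
# because adding a small term in quadrature barely moves a much larger one
```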

 

 


5 hours ago, alan4908 said:

Ah yes, well spotted, a typo on the formula which I have now edited. Have another look.

It should have been:

Alan

To avoid measurement / fitting error as much as possible, I've created an artificial image with a single star blurred by a Gaussian of sigma 2. I assumed a resolution of 1"/pixel (easy to convert to arc seconds). I also found a very good macro for Gaussian fitting in ImageJ - the image is 32 bit, so there is no loss of precision due to a lower bit count.

Here are results:

[attached screenshot: Gaussian fit results]

As you can see, the first Gaussian fit is very accurate (to at least 5 digits). The measured base FWHM is 4.7096 (as would be expected for sigma 2, since 2 x ~2.355 is ~4.71).

The x2 binned image gives an FWHM of 2.4295, or, converted back to arc seconds (times 2 because we binned x2), 4.859.

According to your formula above: FWHM binned = SQRT(3*1^2 + FWHM base^2) = 5.018

or a 0.16 difference from the prediction - note that the difference between the two measured FWHMs is smaller than that, at 0.15.

So I still maintain that adding the sampling rate as above is not correct. On the other hand, I did some thinking and there is indeed some increase in measured FWHM after binning; I think I understand why it happens, but as yet I am unable to quantify it.

It has to do with the frequency cut-off that the sampling rate produces. Take a look at this image:

[attached image: Gaussian in the frequency domain, with the cut-off marked by red bars]

If you observe the Gaussian without the red bars (representing the cut-off), it "looks" wider than if you cut off everything outside the red bars (and set it to 0).

There is a relationship between the Gaussian in the frequency domain (above) and in the spatial domain (related to FWHM) - their "widths" are reciprocal (a small blur attenuates only the very highest frequencies, while a large blur attenuates all but the lowest frequencies - it is again the wavelength vs frequency 1/x relationship).

This means that if we "narrow down" (via the sampling cut-off) in the frequency domain, we will "broaden" in the spatial domain, and a Gaussian fit (since, once cut off, it will no longer be a perfect Gaussian in the spatial domain) will produce a slightly larger FWHM. So there is indeed a bit of additional blurring depending on where you cut off frequencies with sampling, but it does not behave according to the above formula.

I still maintain that sampling rate should not be a part of the equation.
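A rough sketch of that frequency-domain argument, for anyone who wants to reproduce it (Python/numpy; the sigma and the cut-off frequency are arbitrary illustrative choices, and the FWHM is read off by interpolating the half-maximum crossings rather than by a Gaussian fit):

```python
import numpy as np

n = 1024
x = np.arange(n) - n // 2
sigma = 2.0
g = np.exp(-x ** 2 / (2 * sigma ** 2))        # Gaussian profile, sigma = 2 pixels

def fwhm(profile, x):
    """FWHM from linear interpolation of the two half-maximum crossings."""
    half = profile.max() / 2
    above = np.where(profile >= half)[0]
    lo, hi = above[0], above[-1]
    x_lo = np.interp(half, [profile[lo - 1], profile[lo]], [x[lo - 1], x[lo]])
    x_hi = np.interp(half, [profile[hi + 1], profile[hi]], [x[hi + 1], x[hi]])
    return x_hi - x_lo

spectrum = np.fft.rfft(np.fft.ifftshift(g))
freqs = np.fft.rfftfreq(n)                    # cycles per pixel
spectrum[freqs > 0.1] = 0                     # hard cut-off, as if band-limited by sampling
g_cut = np.fft.fftshift(np.fft.irfft(spectrum, n))

print("FWHM before cut-off:", round(fwhm(g, x), 3))       # close to 2.355 * sigma ~ 4.71 px
print("FWHM after  cut-off:", round(fwhm(g_cut, x), 3))   # noticeably larger
```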


Ha!

This is very interesting.

First, I would like to apologize to everyone if I'm spamming this thread with my "exploration" - I do believe it is related to the subject, and I hope someone will find it interesting and informative.

I just realized that binning is somewhat different from coarser sampling in terms of point samples (the pulse train that is usually used to describe sampling mathematically). I noted this previously but was not sure about the difference between surface-integral sampling and point sampling. Binning is a form of interpolation (we sum two adjacent pixels, or average them - very much like linear interpolation - but then "skip" one sample, average again, and so forth), and because of that it is bound to alter the frequency response; as we have seen, the FWHM changes in binned images.

I just did another set of measurements. This time I used a synthetic Gaussian with sigma 2 (as in the previous case) but, instead of reducing resolution by binning (which is what I think happens when you image with larger pixels), I treated the pixel values as point samples and reduced the resolution by using every second pixel value in X and every second pixel value in Y (like extracting the red channel from an OSC raw - I actually used that plugin for splitting the channels of an OSC image, without any linear debayering or super-pixel mode, just channel extraction). I was in effect making sure the pulses in the sampling pulse train are further apart. Here are the results:

[attached screenshot: Gaussian fit results for the every-second-pixel version]

As you can see there is minimal change - the FWHM is the same to four-digit rounding.

This of course supports the theory that the sampling rate does not add blur. Binning, on the other hand, does at some level, and that makes me wonder how we should theoretically approach the fact that a pixel is not a point-sampling device but rather collects light across its surface.
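Here is a hedged sketch of that comparison, using moments rather than ImageJ's Gaussian fit to estimate the FWHM (a synthetic Gaussian 'star' with sigma 2 px, then 2x2 binning versus simply keeping every second pixel):

```python
import numpy as np

def gaussian_star(n=64, sigma=2.0):
    """Synthetic star: 2-D Gaussian point-sampled on an n x n grid."""
    y, x = np.mgrid[0:n, 0:n]
    cx = cy = n / 2 - 0.5
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def moment_fwhm(img, pixel_size=1.0):
    """FWHM from the intensity-weighted second moment along x (no fitting needed)."""
    x = (np.arange(img.shape[1]) + 0.5) * pixel_size   # pixel-centre coordinates
    w = img.sum(axis=0)
    mu = np.sum(w * x) / np.sum(w)
    var = np.sum(w * (x - mu) ** 2) / np.sum(w)
    return 2.355 * np.sqrt(var)

img = gaussian_star()
binned = img.reshape(32, 2, 32, 2).sum(axis=(1, 3))    # 2x2 binning (surface-style sum)
decimated = img[::2, ::2]                              # every second sample (point sampling)

print("original FWHM :", round(moment_fwhm(img), 3))            # ~4.71 px
print("binned FWHM   :", round(moment_fwhm(binned, 2.0), 3))    # ~4.85 px: binning broadens slightly
print("decimated FWHM:", round(moment_fwhm(decimated, 2.0), 3)) # ~4.71 px: unchanged
```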


10 hours ago, vlaiv said:

I still maintain that sampling rate should not be a part of the equation.

and

 

9 hours ago, vlaiv said:

I just realized that binning is somewhat different from coarser sampling in terms of point samples (the pulse train that is usually used to describe sampling mathematically).

I am inclined to agree that sampling can't be added to the simplified sqrt-of-sum-of-squares equation. In the case of excessive oversampling it would be a reasonable approximation, but near critical sampling the result certainly does not look Gaussian (even by eye!).

However, it must be included to see what effect it has on the output. I have looked across several resources but can't find an analytical approach to treating the sampling correctly. The paper I linked to before, https://arxiv.org/pdf/1707.06455.pdf, treats it numerically but says this:

" The integration of the received intensity signal over the width of a pixel has the effect of smoothing the incident intensity profile, through convolution with a rectangular profile having the width of one pixel. This broadens the effective LSF" - for LSF read PSF

So it might be possible to treat it analytically, but it will be complex due to the phasing of the PSF with the rectangular profile.
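If one is happy to approximate things as Gaussian, there is a hedged analytic shortcut here: convolution with a one-pixel-wide rectangle adds the rectangle's variance (width^2/12) to the PSF's, so the effective FWHM is roughly 2.355*sqrt(sigma^2 + p^2/12). A quick numerical check of that claim:

```python
import numpy as np

sigma = 2.0          # PSF sigma in pixels
p = 1.0              # pixel width (one pixel)

dx = 0.01            # fine sub-pixel grid
x = np.arange(-20, 20, dx)
psf = np.exp(-x ** 2 / (2 * sigma ** 2))
box = ((x >= -p / 2) & (x < p / 2)).astype(float)   # one-pixel-wide rectangle

blurred = np.convolve(psf, box, mode="same") * dx   # pixel integration as a convolution

def moment_sigma(profile, x):
    mu = np.sum(profile * x) / np.sum(profile)
    return np.sqrt(np.sum(profile * (x - mu) ** 2) / np.sum(profile))

print("measured sigma :", round(moment_sigma(blurred, x), 4))           # should match the prediction closely
print("predicted sigma:", round(np.sqrt(sigma ** 2 + p ** 2 / 12), 4))  # sqrt(4 + 1/12) ~ 2.021
```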

Regards Andrew 


In time sampling, as said before, the sampling function is 0 except during a very short, pulse-like time period, at which instant it takes the value of the signal to be sampled. Spatial sampling with pixels is more like the sample-and-hold method in time sampling. This comes closer to the stepped appearance of a digitised view (= image). This is, afaik, equivalent to Neil's argument rather than Vlaiv's. The difference between spatial (image) sampling and time (music) sampling is that in the latter an analog signal is recreated from the sampled signal. In imaging, this is never done. Rather, the sampling needs to be so fine that our eyes can no longer distinguish between the original, analog view and the sampled image. But the image itself is still the digitised, sampled version of an analog view or scene.

Btw, I'm not really trying to add to this discussion, but rather attempting to make sense of it all. It's far too long since I learned this stuff, and most of it is gone by now. So, please do correct my ramblings.


13 minutes ago, wimvb said:

Rather, the sampling needs to be so fine that our eyes can no longer distinguish between the original, analog view, and the sampled image

... but we must not forget even the original is digitized by our eyes (rods & cones) - now that is a whole new topic... ?

Regards Andrew

PS I followed the new convention of adding an emoji for a post intended to be humorous.


20 minutes ago, andrew s said:

... but we must not forget even the original is digitized by our eyes (rods & cones) - now that is a whole new topic... ?

Regards Andrew

PS I followed the new convention of adding an emoji for a post intended to be humorous.

I realised that when I wrote my reply, but I (also) didn't want to go there.

I know, don't you just hate it that you have to do that nowadays? Imagine what a play from Shakespeare would look like in modern days ...


1 hour ago, andrew s said:

I am inclined to agree that sampling can't be added to the simplified sqrt-of-sum-of-squares equation. In the case of excessive oversampling it would be a reasonable approximation, but near critical sampling the result certainly does not look Gaussian (even by eye!).

However, it must be included to see what effect it has on the output. I have looked across several resources but can't find an analytical approach to treating the sampling correctly. The paper I linked to before, https://arxiv.org/pdf/1707.06455.pdf, treats it numerically but says this:

" The integration of the received intensity signal over the width of a pixel has the effect of smoothing the incident intensity profile, through convolution with a rectangular profile having the width of one pixel. This broadens the effective LSF" - for LSF read PSF

So it might be possible to treat it analytically, but it will be complex due to the phasing of the PSF with the rectangular profile.

Regards Andrew 

Andrew

Having experimented with my own data, the sum-of-squares equation does seem to give reasonably accurate results only when used with highly oversampled data. So I tend to agree with your summary.

However, since I'd really like to have a practical analytical guide to the total imaging resolution of a system, surely there must be a way to approximate this?

Alan

 


1 hour ago, alan4908 said:

surely there must be a way to approximate this?

Maybe we could use the following idea based on the separation of Gaussian profiles - a modern take on Dawes' limit / Rayleigh criterion of the Airy disk.

The picture is from the paper https://arxiv.org/pdf/1707.06455.pdf and it shows that once you get to 3 or 4 pixels per FWHM, the resolution becomes less dependent on phase. Looking at this, I would conclude that 3 or 4 pixels per FWHM is optimal for resolving details of a given FWHM in deep-sky work. 5 or more might be better for the planets.
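Turning that rule of thumb into pixel scales (a trivial helper; the seeing values below are just examples):

```python
def pixel_scale_for(fwhm_arcsec, pixels_per_fwhm=3.5):
    """Pixel scale (arcsec/pixel) that puts the chosen number of pixels across the FWHM."""
    return fwhm_arcsec / pixels_per_fwhm

for seeing_fwhm in (1.5, 2.5, 3.5):
    print(f'{seeing_fwhm}" seeing -> {pixel_scale_for(seeing_fwhm):.2f} "/pixel at 3.5 px/FWHM')
```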

Hope this helps.

Regards Andrew 

[attached figure: ccd_resolution.png, from the paper linked above]


46 minutes ago, andrew s said:

Maybe we could use the following idea based on the separation of Gaussian profiles - a modern take on Dawes' limit / Rayleigh criterion of the Airy disk.

The picture is from the paper https://arxiv.org/pdf/1707.06455.pdf and it shows that once you get to 3 or 4 pixels per FWHM, the resolution becomes less dependent on phase. Looking at this, I would conclude that 3 or 4 pixels per FWHM is optimal for resolving details of a given FWHM in deep-sky work. 5 or more might be better for the planets.

Hope this helps.

Regards Andrew 

Andrew

Thanks for the information. It is not really what I'm looking for, but it's interesting that your conclusion of 3 to 4 pixels per FWHM for optimal resolution seems to agree with the Stan Moore analysis (http://www.stanmooreastro.com/pixel_size.htm), which states that an FWHM of 3.5 pixels or more is optimum.

My luminance stack for NGC5907 gives an FWHM of 2.43 arc seconds (as measured by CCDInspector), so since I'm at 0.71 arc seconds/pixel that is an FWHM of 3.42 pixels. However, the best 600s subframe has an FWHM of 1.49 arc seconds, or 2.1 pixels, which suggests that some improvement is possible if I go for a camera with a smaller pixel size / higher QE; however, I'd only expect to see this on nights of very good/excellent seeing from my site. Given that I'd be increasing my oversampling rate, stronger deconvolution in areas of high SNR should enable me to extract this higher resolution from the averaged stack.

So, does this provide justification for purchasing a new camera with smaller pixels and higher QE? I have to say that I'm still not really convinced.

Alan


Alan

48 minutes ago, alan4908 said:

it is not really what I'm looking for

Sorry about that. At a bit of a loss to know what more I can offer.

However, the Dawes limit is about 0.8 arc seconds for your scope, and assuming your kit is in the UK I think you are getting into the area of diminishing returns.

It would be interesting to see the distribution of FWHM across all your frames - maybe lucky imaging would be a possible approach? 

I don't think high QE would help as much as lower read noise - at least that's what my modelling showed.

Good luck with whatever you decide

Regards Andrew


1 hour ago, andrew s said:

Sorry about that. At a bit of a loss to know what more I can offer.

However, the Dawes limit is about 0.8 arc seconds for your scope, and assuming your kit is in the UK I think you are getting into the area of diminishing returns.

It would be interesting to see the distribution of FWHM across all your frames - maybe lucky imaging would be a possible approach? 

I don't think high QE would help as much as lower read noise - at least that's what my modelling showed.

Good luck with whatever you decide

Regards Andrew

Thanks Andrew for your suggestions.

I do like my current camera (SX Trius 814) and given that my site is in the UK (aka my back garden) I'm increasingly of the opinion that the benefit I'd derive from a new camera with smaller pixels and higher QE would not justify the cost. 

Due to the poor UK weather, I do implement a form of lucky imaging, but not in the conventional sense. Since my imaging is automated and unguided, I've configured my system to allow imaging when it is a little cloudy; this is because there's a chance that, in the direction my scope is pointing, the sky is actually clear for the duration of the sub frame (normally 600s for LRGB). If a cloud happens to wander into the field of view, the image resolution will be degraded either significantly or hardly at all, depending on the duration of the obstruction. Obviously this approach does generate more throw-away subs, due to clouds, but it also generates additional usable subframes that I wouldn't have obtained if I'd configured my system to image only when the skies are crystal clear (a somewhat rare event).

Alan

 


I stand corrected on the matter of sensor PSF.

There is indeed sensor "blurring" in CCD sensors (I'm not sure how it relates to CMOS sensors, but I suspect there is a similar effect there as well). It is not, however, related to pixel size and sampling frequency, but is rather a consequence of electron diffusion between pixels - fuller pixel wells tend to spill electrons into adjacent cells because of the voltage difference relative to less filled wells.

It is also interesting that such blur is not "round" but rather elliptical, owing to the column vs row architecture of the CCD. More on the topic can be found in this paper (it describes the CCD PSF measurement process and analysis):

https://arxiv.org/pdf/1412.5382.pdf

 


32 minutes ago, vlaiv said:

There is indeed sensor "blurring" in CCD sensors (I'm not sure how it relates to CMOS sensors, but I suspect there is a similar effect there as well). It is not, however, related to pixel size and sampling frequency, but is rather a consequence of electron diffusion between pixels - fuller pixel wells tend to spill electrons into adjacent cells because of the voltage difference relative to less filled wells.

It is also interesting that such blur is not "round" but rather elliptical, owing to the column vs row architecture of the CCD.

 

I have read that this effect is a characteristic of CCD rather than CMOS, and is only a problem when the wells are near full.


On the matter of "surface integral sampling" vs point sampling :D - it turns out that they are exactly the same (differing only by a multiplicative factor) and it is fairly easy to show that.

Let's observe the sine function sin(x). Point sampling is just sin(X), where X is one of our sampling points. With integral sampling, the obtained value would be:

the integral from X-A to X+A of sin(x), where A is some value representing the "half width" (half a pixel in our case).

The integral of sin(x) is -cos(x), so in our case the sampled value is:

-cos(X+A) - (-cos(X-A)) = cos(X-A) - cos(X+A); now we apply the sum-angle trigonometric identities and get:

( cos(X)cos(A) + sin(X)sin(A) ) - ( cos(X)cos(A) - sin(X)sin(A) ) = 2 sin(A) sin(X)

Since A is fixed for each sample point X, 2 sin(A) is a constant, so integral sampling of a sine function gives c*sin(X) - a scaled point sample of the function at X.

Since we are dealing with the Fourier transform, and all functions (strictly, periodic functions) can be represented as a sum of sine and cosine functions (a cosine is just a phase-shifted sine), and the integral of a sum is the sum of the integrals, we get exactly the same thing as with point sampling if we use the integral.

Hence - a CCD/CMOS is a point-sampling device.
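A quick numerical check of that identity (a sketch; the half-width A and the sample positions are arbitrary):

```python
import numpy as np

A = 0.5                            # half the pixel width
X = np.linspace(0, 2 * np.pi, 9)   # arbitrary sample positions

# the pixel integral over [X-A, X+A] of sin(x), evaluated analytically
integral = np.cos(X - A) - np.cos(X + A)
point = np.sin(X)

# the two agree up to the constant factor 2*sin(A)
print(np.allclose(integral, 2 * np.sin(A) * point))   # True
```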

(The drizzle algorithm is actually based on this, as is the fact that using every second sample produces Gaussians with the same FWHM.)

I'm really trying to understand where the "sampling resolution" blur term comes from (if it exists at all), but so far I have not been able to account for it. The only thing left to explore is the fact that we are not dealing with an infinite pulse train but with a limited set (hence the number of pixels/samples plays a role in the frequencies captured).

 


15 minutes ago, vlaiv said:

On the matter of "surface integral sampling" vs point sampling :D - it turns out that they are exactly the same

Here is a simple example:

Take a pulse of unit height and width DX, the same size as a pixel. If it is exactly centred on a pixel then the point-sampled sequence will go ...0,0,1,0,0... etc. Now if it is halfway between two pixels it will go ...0,0,0.5,0.5,0... etc. Integrating will give ...0,0,DX,0,0... and ...0,0,0.5DX,0.5DX,0... respectively.

Nice spot

Regards Andrew


1 hour ago, andrew s said:

Here is a simple example:

Take a pulse of unit height and width DX, the same size as a pixel. If it is exactly centred on a pixel then the point-sampled sequence will go ...0,0,1,0,0... etc. Now if it is halfway between two pixels it will go ...0,0,0.5,0.5,0... etc. Integrating will give ...0,0,DX,0,0... and ...0,0,0.5DX,0.5DX,0... respectively.

Nice spot

Regards Andrew

Would you agree that if the average noise level is close to the signal from the star (say close to 0.5) it might be visible in the first instance, but not in the second?
