Astronomy tools CCD suitability


vlaiv


Been meaning to ask this elsewhere, but it seems relevant here.....

2 hours ago, vlaiv said:

This is why over sampling is very bad - especially for a beginner. 

Is the correct advice not to over sample, or is the advice to bin to the appropriate level?

I ask this because:

- it's not like there's a massive range of pixel sizes available in the mainstream cameras - if I stick to four of the most recommended ZWO cameras (1600, 2600 and 533 at roughly 3.7 microns, 183 at 2.4) the pixel sizes are similar when compared to the older CCDs

- The low read noise on these cameras makes binning less of an issue, and they tend to have higher QE than older CCDs, so without doing the maths, I'd imagine one achieves a better SNR with these cameras + binning over an older CCD with larger pixels?

- Startools (and probably other tools) offers fractional binning, so I can bin to the exact level needed, even adapting to seeing if needed

 

If the guide is primarily telling people what camera to buy/use - it's important to know if a newer, efficient CMOS camera with small pixels + binning is going to outperform a choice that has the optimal native pixel size. 


2 minutes ago, rnobleeddy said:

Is the correct advice not to over sample, or is the advice to bin to the appropriate level?

The two are essentially the same. If you don't want to over sample and you already have small pixels - then it is worth binning.

You can bin in software after acquisition - the only thing to remember is to account for read noise appropriately.

The effect of read noise depends on background levels, and if you over sample, the same thing happens to the background as happens to the regular signal (they are both light signal) - it gets spread over more pixels. In effect this means that if you over sample by, say, a factor of x2 with a CMOS sensor because of small pixels, you'll need to make each single exposure x4 longer than you would with the same aperture and focal length but with larger pixels of the same read noise.
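To make that concrete, here is a minimal sketch (with made-up numbers, not from the thread) of how the required sub length scales with oversampling when the read noise stays the same:

```python
def required_sub_length(base_sub_s, oversampling_factor):
    """Sky background per pixel drops by oversampling_factor**2, so the sub
    must be that much longer to keep read noise swamped to the same degree
    (assuming the same read noise per pixel)."""
    return base_sub_s * oversampling_factor ** 2

# Hypothetical example: 60 s subs were enough at the "correct" pixel scale;
# oversampling x2 with the same read noise pushes that to 240 s.
print(required_sub_length(60, 2))  # -> 240
```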

The good thing about all of this is that if you have the correct exposure dialed in for your pixel size as-is, you don't have to change anything to bin your data later in software - read noise versus exposure duration will be correct for those cases as well, regardless of whether you bin x2 or x3.

I think that binning in software with small pixels is the best of both worlds - you can get a good sampling rate on a night of good seeing by binning x2, and on a night of not-so-good seeing bin x3, so there is some flexibility there.

Although fractional binning exists, it is not real binning and does not behave like true binning. True binning does not introduce pixel-to-pixel correlation, so no additional blur is created. It also has a precisely defined SNR improvement (if the noise is truly random): the SNR improvement is equal to the bin factor, so bin x2 will improve SNR by x2, bin x3 by x3 and so on.
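A quick way to convince yourself of the x2 figure is to bin synthetic data carrying purely random noise; a rough Python sketch (the numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0                                       # flat signal per pixel
img = signal + rng.normal(0.0, 10.0, (1024, 1024))   # purely random noise, sigma = 10

def bin2x2(a):
    # true software binning: average non-overlapping 2x2 blocks
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

print(signal / img.std())            # SNR before, ~10
print(signal / bin2x2(img).std())    # SNR after bin x2, ~20 (improvement ~x2)
```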

Fractional binning does not do that - it introduces pixel-to-pixel correlation and is no longer really predictable (maybe it is, just not as simple as above, and it probably requires rigorous mathematical analysis to confirm the relationship). Even a small fraction above the bin factor can mess things up - for example, fractional binning x2.1 might improve SNR by a factor of, say, x2.5. We might say that's a good thing, but blurring the image also increases SNR - some of that SNR improvement is due to blurring caused by pixel-to-pixel correlation, and that is not something you want.

I'm yet to analyze the best way to go about all of this, but so far my gut feeling is that if you want to bin by, say, x2.5, I'd actually do it in one of two ways (one simple and one very advanced):

1. Bin x3 and then, when aligning the frames, upscale each frame so that the resulting "zoom" is the same as if you had binned x2.5 - making the sampling rate adequate for the detail. This will lose a bit of sharpness but will boost SNR more and will allow you to sharpen your image a bit without bringing in too much noise - so the net result will be as if you did x2.5 binning (sort of).

2. Since the subs that go into the stack don't all have the same FWHM - some larger, some smaller - we set a target FWHM prior to stacking, one that corresponds to a certain bin factor.

All subs with a larger FWHM will be deconvolved to reduce their FWHM down to the target FWHM.

All subs with a smaller FWHM will be convolved with a Gaussian to hit the target FWHM.

This will create subs with very different SNR, but there is an algorithm that can deal with this situation and it will create a close-to-optimum stack of such frames.
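For the "convolve the sharper subs" half of this, a minimal sketch could look like the following (assuming roughly Gaussian star profiles, for which widths add in quadrature); the deconvolution half is the hard part and is not shown:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # sigma = FWHM / 2.355

def blur_to_target_fwhm(sub, fwhm_sub_px, fwhm_target_px):
    """Convolve a sharper-than-target sub with a Gaussian so its stars reach
    the target FWHM; Gaussian widths add in quadrature."""
    if fwhm_sub_px >= fwhm_target_px:
        return sub  # this sub would need deconvolution instead
    sigma_extra = FWHM_TO_SIGMA * np.sqrt(fwhm_target_px**2 - fwhm_sub_px**2)
    return gaussian_filter(sub, sigma_extra)
```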

Now, this is something I'm working on - so it's not really available in software yet - I just wanted to share it as an alternative option.

Hope this answers your question 

To recap: it does not matter if you image with small pixels, as long as you are aware of it, select your sub duration appropriately and then bin your data to recover the lost SNR while the data is still linear - that is the same as imaging at the correct sampling rate in the first place.


15 hours ago, Mr Spock said:

While a lot of these discussions are interesting, I feel they are straying away from the subject of updating the tools. There is the danger of making things too complicated for people to understand. 
 

As with the previous poster, if I, also a numpty, can understand it, then its purpose is served. 

I see your point, but consider this: deep technical discussion is relevant here as we are also trying to establish the correct methodology required to improve the implementation of such a tool. While the above may well be too complicated for the majority, it is essential that those with the required understanding derive a better / correct methodology, in order to provide the majority with a tool that a "numpty" can use and understand without needing to see or understand the necessary underlying complexities being discussed here.

It is possible that one answer is to have an advanced user mode for the tool that uncovers additional options.

I suspect that when the tool was produced the market was dominated by CCDs with pixels much larger than the CMOS chips now available, and that as such the model did not need to be additionally bounded to prevent extreme combinations of imaging scope and pixel size from being incorrectly validated. For me those would include instances where the combination falls in the heart of the green zone but the aperture of the scope simply can't support the resolution being suggested. In such cases the tool is recommending a smaller-pixel camera but should not be doing so, as the only thing a small-pixel camera will result in is lower signal to noise with no advantage in resolvable detail.

Beyond this, adding features such as a tickbox for OSC vs mono should not overstretch the user.

Adam


I've implemented fractional binning in an EEVA context to allow a user to match the effective pixel size to seeing (in principle) to achieve 'effective critical sampling'. This is all done using a slider, with immediate visual feedback to see how far you can go before detail is lost. At that point SNR improvements due to binning are optimised with no loss in resolution. That's the idea anyway!

From a mathematical point of view, the approach underlying fractional binning is a perfectly well-defined operation and while normal binning (ie adding groups of pixels) might seem somehow purer and correct, it has poorer mathematical properties in terms of what it is doing to the spatial frequency content of the image. 
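One way fractional binning can be realised (not necessarily how Martin's EEVA tool does it) is area-weighted resampling, e.g. OpenCV's INTER_AREA; a rough sketch with a hypothetical factor:

```python
import cv2
import numpy as np

def fractional_bin(img, factor):
    """Downsample by a (possibly non-integer) factor using area-weighted
    resampling; for integer factors this behaves like ordinary binning."""
    h, w = img.shape[:2]
    new_size = (int(round(w / factor)), int(round(h / factor)))  # (width, height)
    return cv2.resize(img.astype(np.float32), new_size, interpolation=cv2.INTER_AREA)

# e.g. take a 0.8"/px capture to roughly a 2"/px target sampling:
# binned = fractional_bin(sub, 2.0 / 0.8)
```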

But this is probably off-topic for the current thread -- I suggest we have a separate thread on fractional binning if there is interest.

Martin

 

BTW There is quite a lot of debate elsewhere about the topics of this thread. Here's some links from CN. 

https://www.cloudynights.com/topic/654570-another-view-of-small-pixels-vs-big-pixels/

https://www.cloudynights.com/topic/789663-pixinsight-square-star-issue/?hl=square+stars#entry11370250

https://www.cloudynights.com/topic/757590-undersampling-and-oversampling/?hl=square+stars#entry10907380

https://www.cloudynights.com/topic/756676-advice-on-sensor-sizeunder-oversampling/?hl=square+stars#entry10896038

https://www.cloudynights.com/topic/747946-arc-seconds-per-pixel-confusion-can-you-help/?hl=square+stars#entry10766943

https://www.cloudynights.com/topic/728911-on-sampling-and-seeing-good-sources-for/page-2?hl=square stars

 

 


2 hours ago, Martin Meredith said:

BTW There is quite a lot of debate elsewhere about the topics of this thread. Here's some links from CN. 

I had a look at those, and some of the misconceptions that we touched upon here seem to be prevalent in those discussions as well.

I'm not going to go and dissect each of those threads, but if there is something in particular that needs to be further discussed or explained, I'll be happy to do it.

Everything we have said so far can really be condensed into a few points:

1. Seeing is not the only thing determining the resulting star FWHM in the image - aperture size, guiding and seeing all play a part, and we have a defined relationship between those and the resulting FWHM.

2. For a given FWHM of stars in the image (effectively a blur PSF) there is a simple relationship between it and the "optimum" sampling rate (I can explain why I put optimum in quotation marks).

3. Over sampling: very bad. Under sampling: not bad at all - and no, stars will not be square because of it.

By the way, I have already presented images in this thread at 4"/px, for example, and those did not look bad. In fact, whenever you look at a large image on screen that is scaled to fit, it is under sampled - yet it looks fine.

Here is example:

[image: crop from the M13 image, scaled to fit the screen]

This is a crop from an M13 image, scaled to fit the screen - shown here at 28% of its 2"/px resolution, which is effectively 7.14"/px.

Same image viewed at 100% (again crop):

[image: the same crop viewed at 100%]

and I'll now use a feature of IrfanView - the software I'm using to view this image and make screenshots - and zoom in to 300%:

[image: the same crop zoomed to 300%]

Yes, stars look much softer in the zoomed image - but there is no "pixelation".

The moral of this is that images simply should not be looked at past 100% zoom, and they will look fine no matter what sampling rate was used. If one imaged at 3"/px and expects to see detail at a 300% zoom level (effectively 1"/px) - well, they will be disappointed.


In the EEVA use case I often zoom well past 3x and it is a perfectly acceptable thing to do. I don't expect resolution to increase of course. But zooming has other purposes.

And on your point 3, we will just have to agree to disagree rather than go round in circles. My opinion: Oversampling: not bad at all (because of binning). Undersampling: (potentially) irrecoverable loss of detail coupled with square stars, unless a (potentially) artefact-inducing interpolation algorithm is used, always assuming you have good criteria to decide which one to use:

source: https://matplotlib.org/stable/gallery/images_contours_and_fields/interpolation_methods.html

[screenshot: matplotlib gallery comparison of image interpolation methods]

(and yes, I spotted that None and none and nearest are the same; not my image, no need to jump in on this; and I know that interpolation is going to be needed; just that starting off with a small pixel sensor means it is simpler for it to do its job)

Martin


5 minutes ago, Martin Meredith said:

Oversampling: not bad at all (because of binning)

You don't care about the loss of SNR?

Binning will have the same effect as using larger pixels. If you can bin to optimum sampling, then fine, but if you bin to the same level of under sampling, things will be the same as regular under sampling.

You can't recover SNR and still be over sampled (although there is no need to be - but again - if you don't agree with that, there is only so much I can do about it).

7 minutes ago, Martin Meredith said:

Undersampling: (potentially) irrecoverable loss of detail coupled with square stars unless a (potentially) artefact-inducing interpolation algorithm is used, always assuming you have good criteria to decide which one to use:

"None" in the above image is wrong.

It should look like this:

[image: how the "None" panel should actually look]

Everything else is some sort of interpolation algorithm, and nearest neighbor is the most "artifact-inducing" one.

If you really want to know what sort of artifacts each interpolation method induces, I suggest you do some tests.

In fact, I'll do them later today and post here. It is rather simple: we make a function, sample it, reconstruct it using different interpolation methods and compare to the original (subtract the two and look at the residual).
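A minimal 1D version of that test, sketched in Python (the test function and sample spacing are arbitrary choices, not vlaiv's):

```python
import numpy as np
from scipy.interpolate import interp1d

x_fine = np.linspace(0, 10, 2001)        # dense grid serving as "ground truth"
f = lambda x: np.sin(2 * np.pi * 0.4 * x) + 0.5 * np.sin(2 * np.pi * 0.9 * x)
truth = f(x_fine)

x_samp = np.linspace(0, 10, 41)          # coarse sampling of the same function
samples = f(x_samp)

for kind in ("nearest", "linear", "cubic"):
    recon = interp1d(x_samp, samples, kind=kind)(x_fine)
    print(kind, np.sqrt(np.mean((truth - recon) ** 2)))  # RMS residual vs the original
```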


28 minutes ago, Martin Meredith said:

Oversampling: not bad at all (because of binning). Undersampling: (potentially) irrecoverable loss of detail coupled with square stars unless a (potentially) artefact-inducing interpolation algorithm is used

Sorry for being late to the party. I was about to post the same approach to the discussion. Fully agree with Martin 👍

From a practical point of view, don't we just want to get the camera with the smallest pixels, the best SNR (low read noise) and the best quantum efficiency we can afford? Then we don't run into the problem of undersampling, and if we feel that we oversample, binning comes to the rescue.


Maybe as a "scientific" addition to the discussion, here is a paper (publication) from an IISW (International Image Sensor Society) workshop (2013) that has analysed the relation between low light performance (that's what we are after) and pixel size in oversampling situations. Here is the abstract (summary) of the paper:

Quote

This paper considers the oversampling cameras and their image quality. The formulas for analyzing the low light performance are developed and they are compared to subjective testing results. In addition to the developed formulas, the key findings show that there is a significant amount of spatial information available above Nyqvist frequency of the camera system that can be captured with an oversampling camera. We also show that the low light performance is essentially defined by image sensor rather than the pixel size.

and the paper can be found here: https://www.imagesensors.org/Past Workshops/2013 Workshop/2013 Papers/13-1_071-Alakarhu.pdf should anyone want to follow the math.

I love the result that "there is a significant amount of spatial information" (our star images) "above Nyquist frequency of the camera system that can be captured with oversampling". That should be useful for our application, I'd say.


3 hours ago, alex_stars said:

and the paper can be found here: https://www.imagesensors.org/Past Workshops/2013 Workshop/2013 Papers/13-1_071-Alakarhu.pdf should anyone want to follow the math.

That paper discusses something completely different and is questionable in its methodology and conclusions.

As an example, they claim that there is a mobile phone that is oversampling.

[screenshot from the paper describing the oversampling phone camera]

In order to critically sample an F/2.4 lens, one needs a 0.66µm pixel size.

[image: formula for the diffraction cutoff frequency of a lens, 1 / (λ · F)]

If we put the above F/2.4 lens and 550nm into the equation we get:

1 / (0.00055mm * 2.4) = 757.576 cycles per millimeter. You need twice that many pixels per millimeter to properly sample (Nyquist), so optimal sampling is a pixel size of 1 / (2 * 757.6) mm = ~0.66µm.

The pixels in this "over sampling" camera are at least twice as large as they should be for optimal sampling, let alone over sampling.
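A small sketch of that calculation (550 nm and F/2.4 are just the values used above):

```python
def optimal_pixel_um(f_ratio, wavelength_nm=550):
    """Nyquist pixel size (µm) for a diffraction-limited lens:
    cutoff = 1 / (lambda * F) cycles/mm, and we need two samples per cycle."""
    wavelength_mm = wavelength_nm * 1e-6
    cutoff_cyc_per_mm = 1.0 / (wavelength_mm * f_ratio)   # ~757.6 for F/2.4 at 550 nm
    return 1000.0 / (2.0 * cutoff_cyc_per_mm)             # ~0.66 µm

print(optimal_pixel_um(2.4))
```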

What they have shown in that paper is the following:

If you sample an image at a higher frequency than another image and then resample it down using one of the resampling algorithms (which have cut-off filters built into them), you will get fewer aliasing artifacts - not because of the sampling, but because of the cut-off filter that is an essential part of the interpolation algorithm used for image reconstruction.

That is their conclusion:

[screenshot: side-by-side comparison images from the paper]

Both images are made the same way, only the Nokia one is taken at the original sampling and then resampled down to 5Mpix - and it suffers from less moiré because of that (but is not completely free of it, as moiré is an aliasing artifact that you can't avoid if your signal contains a lot of strong high frequency components).

 


4 minutes ago, Martin Meredith said:

How do you know it's not peer-reviewed? Many conferences have very strict peer reviewing (up to 4 reviewers for one in my field). And what's so odd about all the authors coming from the same institution, discussing their company's product? It is really very normal in the scientific world.

 

Well, maybe it is published somewhere - but I doubt that it would pass peer review with such statements.

They even contradict themselves in the text - for example - look at the graph:

[graph from the paper: MTF curves with marked sampling frequencies for different pixel sizes]

They mark lines that represent sampling for different pixel sizes, with the 1.4µm pixel being rightmost. A 1.4µm pixel corresponds to a 2.8µm cycle, and there will be 1000µm / 2.8µm = ~357 cycles per millimeter.

a) they have marked half of that on the graph - ~180 cycles per millimeter - clearly using half the frequency instead of two times it, for some odd reason

b) the lenses discussed have a much higher MTF cutoff frequency than that - over 600 cycles/mm - probably reaching the 757 cycles per mm that is the theoretical maximum for an F/2.4 lens

How can that phone then be over sampling? Yet that is the premise of the paper and also its conclusion:

Quote

The image quality of oversampling cameras, including low light performance and sharpness, has been considered in this paper. The results show that in a well-designed system low light performance is essentially defined by the optics, sensor size and spectral response, rather than the pixel size. The results also show that smartphone optics has significant amount of higher than Nyqvist information that can be utilized by oversampling.

This is the start of their conclusion. I agree with the first part - if you control the "/px of your setup then yes, sensor size and choice of optics to get the wanted "/px dictate the speed of the setup.

Then it goes on to say that smartphone optics has higher than "Nyquist information" - whatever that means. This simply can't be true or mean anything sensible.

The amount of information is dictated by aperture size. This follows from the theory of light and diffraction and is well established. The Nyquist sampling theorem is also well established - it dates back to the 1950s. If it were somehow flawed, well, we've had 70-odd years to disprove it (it is really a mathematical proof, so there is nothing to disprove unless we turn math upside down), yet much of modern telecommunications and signal processing relies on it.

 


Look, this is going a bit in circles now.

Do you have any objections on any of the three points that I wrote?

@Martin Meredith

You seem to be under the impression that under sampling is a bad thing, but you seem to agree with the rest of it?

Do you still think that the squares are indeed actual pixels, or have you accepted that the squareness is down to the choice of interpolation?

Here is one more simple explanation that should be easy for most to understand.

Let's look at only two adjacent pixels. Let it actually be a 1D case (2D is just the same) - like sound samples, or pixels on the number line.

Pixel at position 0 has value of 10 and pixel at position 1 has value of 11. That is what we know.

The question is: what is the value of this function at positions 0.25, 0.5 and 0.75? We don't really have an actual number for any of those positions. We just have two numbers - one at position 0 and one at position 1.

We can say - well, that is easy - let's round the position and take the number from the position that we have.

round(0.25) = 0 - so value at position 0 is 10

0.25 has value of 10

round(0.5) - we can choose to round up or down - so let's round it down and so it is 0 and again value is 10

0.5 has value of 10

round(0.75) = 1 - now we take position one and value is 11

But hold on - wouldn't it be better if we drew a line between the two points that we have and then looked at the height of that line instead?

In that case, at position 0.25 we will have a value of 10.25, at 0.5 it will be 10.5 and at 0.75 it will be 10.75.

I just described two different ways of "filling in the blanks" - or interpolation.
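The same two-pixel example in a couple of lines of Python (note that NumPy rounds 0.5 to the nearest even integer, i.e. down to 0 here, matching the choice above):

```python
import numpy as np

positions = np.array([0.25, 0.5, 0.75])
xp, fp = np.array([0.0, 1.0]), np.array([10.0, 11.0])   # the two known samples

nearest = fp[np.round(positions).astype(int)]   # [10., 10., 11.]
linear = np.interp(positions, xp, fp)           # [10.25, 10.5, 10.75]
```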

Here is an image from Wikipedia showing three basic interpolation types - nearest neighbor, linear and cubic:

[image: Wikipedia comparison of nearest-neighbor, linear and cubic interpolation in 1D and 2D]

As you can see, we only have the dots - the rest depends on how we fill in the gaps.

Among other things, the Nyquist sampling theorem specifies how to interpolate in order to perfectly restore the original data / function that we sampled.

https://en.wikipedia.org/wiki/Whittaker–Shannon_interpolation_formula

Why is the sinc function not simply used for interpolation? Because sinc extends off to infinity on both sides (or, in the case of images, in both width and height), so it is not practical to use. The next best thing is Lanczos - which is a windowed sinc function.
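For reference, the Lanczos kernel is just the sinc truncated by another, wider sinc; a minimal sketch:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Windowed sinc: sinc(x) * sinc(x / a) for |x| < a, zero outside.
    np.sinc is the normalised sinc, sin(pi*x) / (pi*x)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)
```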


3 minutes ago, Martin Meredith said:

We are in complete agreement on one thing -- that it is going round in circles.

Could you be more constructive than that and say exactly what you are disagreeing with?

6 hours ago, Martin Meredith said:

My opinion: Oversampling: not bad at all (because of binning). Undersampling: (potentially) irrecoverable loss of detail coupled with square stars unless a (potentially) artefact-inducing interpolation algorithm is used, always assuming you have good criteria to decide which one to use:

Is this your final position regardless of things I've pointed out above?

 


I'm not sure what is lacking in my explanations that you still maintain your position, so I'll give it one more try with different articles from Wikipedia explaining the concepts:

[excerpt from the Wikipedia article on multivariate interpolation]

found at: https://en.wikipedia.org/wiki/Multivariate_interpolation

[excerpt from the Wikipedia article on image scaling]

found at: https://en.wikipedia.org/wiki/Image_scaling

As well as the dedicated page on https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation


2 hours ago, vlaiv said:

By the way - that paper you linked is not a scientific paper - it is not peer reviewed nor published in any journal, and funnily enough the authors all have e-mail addresses at Nokia

@vlaiv, your scepticism towards anything except your own opinion is as fascinating as ever.

2 hours ago, Martin Meredith said:

How do you know it's not peer-reviewed? Many conferences have very strict peer reviewing (up to 4 reviewers for one in my field). And what's so odd about all the authors coming from the same institution, discussing their company's product? It is really very normal in the scientific world.

I agree with @Martin Meredith - how do you know whether the paper I posted is peer-reviewed or not? Also, in my field of work there are reviewers for conference proceedings, and corporate researchers often contribute with the same level of scientific quality as academics. I have served quite a few times as chief editor for scientific conference proceedings and always relied on peer review to ensure high-quality contributions.

BTW, our colleagues from the phone company were invited by the session chairs to contribute, which is normally a sign of high standing in the respective field (see proceedings here). One of those session chairs was Shigetoshi Sugawa, a well-respected professor and expert on CMOS sensors, with an h-index of 45 and over 8700 citations. Here is his IEEE profile just in case you are curious. I think it is fair to say that those people have "some clue" about what they are talking about, wouldn't you agree? But reading your posts above, you obviously don't 😒

So before you jump the gun and judge others to be "not scientific", maybe you would be so kind as to share your scientific credentials with us? Yes? No? Maybe start with your last peer-reviewed contribution on image sensor technology? Otherwise your above comment is just trolling and not amusing at all!


2 minutes ago, alex_stars said:

@vlaiv, your scepticism against anything except your own opinion is fascinating as ever.

Do you have any counterarguments to the arguments that I made about the validity of the claims in that paper?

Is their phone really over sampling like they claim?


2 hours ago, vlaiv said:

They even contradict themselves in the text - for example - look at the graph:

[graph from the paper: MTF curves with marked sampling frequencies for different pixel sizes]

They mark lines that represent sampling for different pixel sizes, with the 1.4µm pixel being rightmost. A 1.4µm pixel corresponds to a 2.8µm cycle, and there will be 1000µm / 2.8µm = ~357 cycles per millimeter.

That reminds me of something: https://stargazerslounge.com/topic/371424-mtf-of-a-telescope/

Wasn't that an 11-page discussion on MTFs with the same result? We concluded that we agree to disagree. Well, we are still on page "3" here, so plenty of room left for fun.


4 minutes ago, alex_stars said:

So before you jump the gun and judge others to be "not scientific", maybe you would be so kind as to share your scientific credentials with us? Yes? No? Maybe start with your last peer-reviewed contribution on image sensor technology? Otherwise your above comment is just trolling and not amusing at all!

I can, and most of the time do, cite sources that you can verify for the things I "claim". I always try to make my examples repeatable by others. If you are ever in doubt about how I did something, please ask and I'll walk you through it step by step.


1 minute ago, alex_stars said:

That reminds me of something: https://stargazerslounge.com/topic/371424-mtf-of-a-telescope/

Wasn't that an 11-page discussion on MTFs with the same result? We concluded that we agree to disagree. Well, we are still on page "3" here, so plenty of room left for fun.

Actually, in that particular example I learned something new thanks to you - the slanted edge method.


33 minutes ago, vlaiv said:

Do you have any counter arguments to arguments that I made about validity of the claims in that paper?

Is their phone really over sampling like they claim?

Yes, in fact I do. You actually missed the important point in the paper:

[screenshot: the relevant passage from the paper]

So here is what our colleagues argue:

  1. Their camera technically operates at 1.4 µm and has 41.5 Mpix, which in the above example produces a 38 Mpix image with 1.4 µm pixels after readout.
  2. They do not claim that the actual 1.4 µm sensor is oversampling as such. And BTW your math is correct: taking 500 nm light wavelength (lambda) we get x = 1.22 * lambda * F = 1.22 * 5e-7 * 2.4 = 1.464e-6 m, or 1.46 µm. And you are right, we should have 1/2 of this.
  3. However, they compare their results to a normal 5 Mpix camera with 3.8 µm pixels and argue that, compared to such a camera, their camera is oversampling by 3.8/1.4 = 2.71, so more than a factor of two.
  4. Another argument they make is that in their optical systems (camera phones) the optics are normally not bandwidth limited (and I think they mean diffraction limited?).

So the final argument boils down to this:

  1. A normal camera utilizing the non-diffraction-limited optics at f/2.4 would have 3.8 µm pixels. That's their baseline. Only a diffraction limited optical system could sample at double the Nyquist "resolution", but who owns such a system?
  2. From their baseline, the "normal" camera with f/2.4 and 3.8 µm pixels, they make a resolution jump down to 1.4 µm, which is a factor of 2.71 smaller - so, from the baseline of 3.8 µm, oversampling.
  3. They implicitly assume that the 3.8 µm pixel, 5 Mpix camera is the optimal setup for the non-diffraction-limited optics they have.

I think we CAN learn from this paper as we all probably don't own diffraction limited telescopes. Or do you @vlaiv?

So maybe we should switch perspective and not ask what the diffraction limited "theoretical" resolution limit of our optics is (according to the specs on the box), but rather what a useful sampling resolution is for the optics we own and the "seeing" conditions of the night at hand.

When seen from that angle, oversampling is a good thing and actually what one wants, because then you are ready for the really good nights, rare as they are.

Q.E.D

ps. and now please excuse me, I have to leave and look through my scope.


1 hour ago, alex_stars said:

I think we CAN learn from this paper as we all probably don't own diffraction limited telescopes. Or do you @vlaiv?

I think that most of us do in fact own diffraction limited telescopes. Most of us have telescopes with a Strehl ratio above 0.8.

In fact, if a telescope is not diffraction limited, most of us would call it a lemon and get another telescope, as high power views would be noticeably blurred.

1 hour ago, alex_stars said:

So we should maybe switch the perspective and not ask what is the diffraction limited "theoretical" resolution limit of our optics (according to the specs on the box), but as what is a useful sampling resolution for the optics we own and the "seeing" conditions of the night at hand.

This thread deals precisely with that aspect, among other things. A formula is given to approximate the resulting FWHM of the star profile in exposures given seeing, telescope aperture size and guiding performance.

I will address some of the things you have quoted now regarding their paper:

1 hour ago, alex_stars said:

Another argument they make is that in their optical system (camera phones), the optics are normally not bandwidth limited (and I think they mean diffraction limited?)

You can't have an optical system with a limited aperture size that is not bandwidth limited. Bandwidth limited has a different meaning from diffraction limited, although they have some things in common.

Bandwidth limited means that there is a limit to what can be resolved with a given lens. Some lenses are sharper and some less sharp, but each has a limit to its sharpness - a limit on the bandwidth of information it can record. There is no such thing as a lens that is band unlimited - that would equate to an 80mm telescope able to resolve rocks on Ganymede from Earth's orbit, which simply can't exist.

Diffraction limited means that the lens is basically as good as it can be and that the blur is not due to imperfections in the glass but rather the laws of physics - it is the wave nature of light that is causing the blur. More specifically, diffraction limited is said of apertures that have a Strehl ratio of 0.8 or higher.

1 hour ago, alex_stars said:

A normal camera utilizing the non diffraction limited optics of f2.4 would have 3.8 µm. That's their baseline. Only a diffraction limited optical system could sample at double the Nyquist "resolution", but who own such a system

Every system that is band limited (so any lens / aperture, as they are band limited by their nature) has its own Nyquist rate. By definition, the Nyquist rate - the required sampling rate - is twice the highest frequency component of the signal, and any band limited signal has a maximum frequency. Only band unlimited signals don't have a maximum frequency, as their frequencies extend to infinity.

1 hour ago, alex_stars said:

They implicitly assume that the 3.8 µm pixel, 5Mpix camera is the optimal setup for the non diffraction limited optics they have.

They have the MTF of their lenses - they don't have to assume anything. It is enough to have the MTF, and you can easily read off the required sampling for that lens.

For example:

[image: example MTF curves for two lenses]

The teal dashed MTF ends at 310 cycles per mm - that means the period of its maximum frequency component is 1000µm / 310 = 3.226µm, and the optimum pixel size for that lens is half that, ~1.6µm. The orange dashed MTF ends at 420 cycles per mm, so its maximum-frequency period is 1000µm / 420 = 2.4µm and half that is 1.2µm - so for that lens the optimum pixel size is 1.2µm.
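The same arithmetic as a tiny helper, should anyone want to plug in cutoff values read off an MTF chart:

```python
def pixel_from_mtf_cutoff_um(cutoff_cycles_per_mm):
    """Optimum (Nyquist) pixel size in µm for a lens whose MTF reaches zero
    at the given spatial frequency: one cycle spans 1000/cutoff µm, and we
    want two pixels per cycle."""
    return 1000.0 / (2.0 * cutoff_cycles_per_mm)

print(pixel_from_mtf_cutoff_um(310))   # ~1.6 µm
print(pixel_from_mtf_cutoff_um(420))   # ~1.2 µm
```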

 

