
Astronomy tools ccd suitability


vlaiv


9 hours ago, vlaiv said:

I think that most of us in fact own diffraction limited telescopes. Most of us have telescopes where the Strehl ratio is above 0.8.

Yes, I know the definition of diffraction limited and of the Strehl ratio (https://www.telescope-optics.net/Strehl.htm). My point was that telescopes are sold to us with high Strehl ratios, but I find pseudo-certificates like this one:

[attached image: manufacturer's Strehl test certificate]

somewhat like a sales pitch, not really a credible lab report. Don't get me wrong, I really enjoy TS APOs, owning one myself. Nevertheless the question arises: how well do you know the actual Strehl ratio of your instruments?


9 hours ago, vlaiv said:

Bandwidth limited means that there is a limit to what can be resolved with a given lens. Some lenses are sharper and some are less sharp - but each has a limit of sharpness - a limit on the bandwidth of information it can record. There is no such thing as a lens that is band unlimited - that would equate to an 80mm telescope able to resolve rocks on Ganymede from Earth's orbit - which simply can't exist.

Diffraction limited means that the lens is basically as good as it can be and that the blur is not due to imperfections in the glass but rather to the laws of physics - it is the wave nature of light that is causing the blur. More specifically, diffraction limited is said of apertures that have a Strehl ratio of 0.8 or higher.

That is exactly my point @vlaiv. Maybe I was not clear enough in my comment on this. And BTW thank you for summarizing the definitions; I think this is indeed very useful for everybody reading along.

So we can discuss theoretical details all day, but we have to bring it back to the practicality of observing and imaging in astronomy.

I find the discussion of the Strehl ratio of the optical system in a laboratory setting (i.e. on an optical bench) rather beside the point, as we never, really never (!), get to use the full potential of our possibly perfect telescopes (and how perfect they are is also not the point of this discussion).

We can NOT forget that there is an atmosphere between us and the targets we want to observe (imaging and visual observations alike); we always have at least the following setup in our optical train (and I know you and everybody else know that):

  1. Distant object we want to observe (from Moon to far away galaxies, etc)
  2. The Earth's atmosphere at our location and time
  3. Our telescope, perfect as it may be...
  4. Our sensor (human eye or CMOS camera....)

We spend way too much time talking about #3 and possibly #4 and so often forget about #2.

It's like owning a super-powered sports car, discussing the intricate details of its engine, and never getting the chance to blast along at its possible 300 km/h. It's in the garage and it was INDEED expensive, but you can hardly unleash its potential.

Link to comment
Share on other sites

So, to summarize what I would like to contribute to this discussion, let me start off with a quote from Wikipedia (https://en.wikipedia.org/wiki/Strehl_ratio#Usage) on the Strehl ratio:

Quote

The ratio is commonly used to assess the quality of astronomical seeing in the presence of atmospheric turbulence and assess the performance of any adaptive optical correction system. It is also used for the selection of short exposure images in the lucky imaging method.

We often only think of the Strehl ratio as a measure describing the perfection of our telescopes. However, it is really noteworthy that it is also often used to assess the astronomical seeing conditions on site at a given time. We amateur astronomers don't do that too often. But the point is:

There is an ever-changing atmosphere between our telescopes and the "stars", and we really don't get around that so easily.

So even though we can discuss theoretical limits of resolution, derive optimal pixel sizes from them, and even take it as far as discussing "over-sampling" and "under-sampling" with respect to these theoretical limits, it is not really useful for the practical application we aim for. In the end we can get a camera with the theoretically perfect pixel size, but out in the night you may already be "over-sampling", depending on what the seeing is doing.

To summarize, I really like the images from this page (http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/diflim.html):

[attached image: intensity curves for two point sources - "resolved", at the Rayleigh criterion, and "unresolved"]

[attached image: the corresponding Airy disc patterns]

So we see, on top, the light intensity curves for resolving two distant light sources in the cases "properly resolved", "Rayleigh criterion" and "unresolved", and below, what the Airy discs look like. In the end I guess we strive for the leftmost case.

Now we can ask ourselves what this looks like on an imaging chip:

[attached image: two point sources on a CCD chip - the diffraction-limited case and the Rayleigh case]

and we see the diffraction case and the Rayleigh case in this image. On the left are the "simulations" of how these look on the chip, and on the right how they theoretically should look.

The point is that all of this assumes no atmosphere with seeing; it is just the basic theoretical physics limit. And the atmosphere is going to make it worse... as we know.

So in the end I would advocate getting the camera with the best quantum efficiency, the smallest pixels possible and the lowest read noise. Why? Well, then you are technically ready for all kinds of conditions, even the best seeing nights. If you end up effectively over-sampling your images (due to the seeing conditions on that night) you can "post-process" the images with binning and whatnot (see our phone company colleagues for another application) and hopefully recover more detail than anticipated (as we see in good lucky-imaging cases).

Ideally we want an imaging system which can adapt its effective pixel size to the task at hand:

  • large effective pixels to collect most photons on faint targets for the benefit of good signal to noise ratios
  • small effective pixels to resolve the finest possible details (whatever the atmosphere allows on that night (location and time))

If I am not mistaken, binning allows us that, although with CMOS sensors unfortunately only in post-processing. So if we had that phone chip with the 1.4 µm pixel size, we could choose effective pixel sizes of 1.4, 2.8, 4.2 or 5.6 µm with different binning strategies. If you got the 5.6 µm pixel size camera in the first place, you can't go to higher resolutions when needed - or better put, when "possible", given your local seeing conditions.
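To put rough numbers on this, here is a minimal Python sketch of how the effective pixel size and the sampling rate scale with the bin factor. The 1.4 µm pixel is the phone chip example from above; the 400 mm focal length is just an assumed value for illustration:

```python
pixel_um, focal_mm = 1.4, 400.0                  # assumed example values

for bin_factor in (1, 2, 3, 4):
    eff_um = pixel_um * bin_factor               # effective pixel after binning
    scale = 206.265 * eff_um / focal_mm          # sampling rate in arcsec/px
    print(f"bin {bin_factor}x{bin_factor}: {eff_um:.1f} um -> {scale:.2f} arcsec/px")
```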

 


On 31/01/2022 at 21:32, Catanonia said:

Vlaiv,

No doubting your expertise in this subject and I would not even try to dispute any of it.

BUT

is there a danger that if your recommendations are applied to the site, it becomes "too" complicated for the newbie / average human and therefore effectively worthless?

The site is supposed to be a top level guide for those that don't understand the science as you may understand it.

All I would say is that, reading the original descriptions on the site, they make sense to my small brain, but reading your descriptions hurts my head.

I would only say, whatever is decided upon, dumb it down to simple language for us simpler folk.

 

 

Problem is that people (I was guilty of this also) take what's written as gospel truth and believe those sampling figures shouldn't be crossed.

Truth is, it doesn't matter... do people freak out when they're using a DSLR and a lens?

I've just used the system to illustrate this... using a Canon 600D you need to be using a 450mm lens to get under the undersampling limit of 2"/px.

I'm not contesting Vlaiv - he knows more or less everything on the subject and feels very passionate about it; whether he does it professionally or not I've no idea.

So the criteria of 0.67-2.0"/px set as optimum sampling is way out... or didn't quite fit when going from analog to digital signal in the first place.

I know lots of images have won lots of competitions by significantly undersampling...


2 hours ago, alex_stars said:

We spend way too much time talking about #3 and possibly #4 and so often forget about #2.

In this case - we did not forget about it. It has been modeled. We are not looking at the diffraction limit of the telescope, but instead saying: let's calculate the approximate expected FWHM given certain parameters - scope size, seeing and mount tracking.

That is one of the objections to the original tool, which just uses seeing but does not account for mount tracking performance or scope aperture. All three add up in long exposure astrophotography to create the resulting star profile FWHM - which determines the actual detail in the image.

This FWHM is then used to calculate the optimum sampling rate and give a recommendation based on that (a rough sketch of the FWHM estimate follows the list below). There can be three cases:

1) Over-sampling. I advocate that in this case binning should be presented as an option, with the drawbacks of over-sampling in long exposure astrophotography explained - namely lower SNR without extra detail if one does not bin to recover the SNR.

2) Proper sampling - this should get an A-OK, with maybe a note that binning will improve speed at a very slight reduction in detail.

3) Under-sampling - this should get an A-OK, with a note that some detail sharpness is lost because of this, but the system will be fast.
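For the curious, the FWHM estimate itself can be sketched in a few lines of Python. This is only a back-of-the-envelope model - it treats each blur source as roughly Gaussian and adds them in quadrature, which is a common approximation rather than the exact formula the tool would have to use:

```python
import math

def expected_fwhm(seeing_arcsec, aperture_mm, guide_rms_arcsec, wavelength_nm=550):
    """Rough long-exposure star FWHM (arcsec): treat seeing, diffraction and
    mount error as approximately Gaussian blurs and add them in quadrature."""
    # Airy pattern FWHM of the aperture, ~1.02 * lambda / D, converted to arcsec
    diffraction = 1.02 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265
    mount = 2.355 * guide_rms_arcsec            # RMS -> FWHM for a Gaussian
    return math.sqrt(seeing_arcsec ** 2 + diffraction ** 2 + mount ** 2)

# e.g. an 80 mm scope in 2" seeing guided at 0.7" RMS gives roughly 3" stars
print(round(expected_fwhm(2.0, 80, 0.7), 2))
```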

1 hour ago, alex_stars said:

Ideally we want an imaging system which can adapt its effective pixel size to the task at hand […]

I completely agree.

This is also the problem - most novice astrophotographers looking at the Astronomy Tools CCD suitability tool are not aware of how binning should be utilized and what resolution should be used given the FWHM in their images. They probably don't even know how to measure FWHM.

This is something that they have to be told - and the above tool should tell them: "look, you are sampling at say 0.8"/px - but you'll have 4" FWHM stars with your little scope. You should really be sampling at 4"/1.6 = 2.5"/px - bin your images x3 to get to 2.4"/px, which is close enough to your sampling target".
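That advice is easy to turn into a rule. A minimal Python sketch of the recommendation logic, using the FWHM/1.6 target and the numbers from the example above:

```python
def recommend_bin(current_sampling, star_fwhm):
    """Suggest an integer bin factor that brings the sampling rate close to
    the FWHM / 1.6 optimum discussed above (arcsec per pixel throughout)."""
    target = star_fwhm / 1.6                           # optimum sampling rate
    factor = max(1, round(target / current_sampling))  # nearest integer bin
    return target, factor, current_sampling * factor

target, factor, binned = recommend_bin(0.8, 4.0)
print(f"target {target}\"/px -> bin x{factor} -> {binned:.1f}\"/px")
```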

 


1 hour ago, newbie alert said:

Problem is that people (I was guilty of this also) take what's written as gospel truth and believe those sampling figures shouldn't be crossed. […]

We are in total agreement here. That was my position from the start.

I think that over-sampling is bad and under-sampling can be used for a wider field without any issues.

The tool needs to reflect this, and it should correctly calculate over / under / correct sampling.

Currently it is wrong, as it puts optimum sampling higher than it really is - causing most people to over-sample while thinking they are in the safe zone.


2 hours ago, vlaiv said:

I completely agree.

This is also the problem - most novice astrophotographers looking at the Astronomy Tools CCD suitability tool are not aware of how binning should be utilized […]

Fantastic, we agree to agree. I think it will be a wonderful tool! Looking forward to seeing the new setup.

Thanks for your input @vlaiv - from my side, always a pleasure to have a good discussion on these matters.


On 03/02/2022 at 19:29, vlaiv said:

Could you be more constructive than that and say exactly what you are disagreeing with?

Is this your final position regardless of things I've pointed out above?

 

I don't have a final position. You?

If you don't like my objections to undersampling so far, don't worry, I have others.

 

From a theoretical perspective, you're fond of citing Nyquist and reconstruction, but perfect reconstruction applies to bandlimited signals only. Under what conditions do we increase our chances of failing to meet that condition -- i.e. of having spatial frequency content above the Nyquist limit? When we are undersampled; and the more undersampled we are, the less 'perfect' our reconstruction becomes.

To frame the discussion of undersampling in Nyquist terms is a bit of a mistake in my opinion.

Nyquist is incapable of providing a complete picture of the frequency content of the signal prior to sampling.

We can even be undersampled when astronomy.tools tells us we're oversampled, unless we incorporate a spatial lowpass filter at some point prior to sampling. Fortunately (?), the scope itself acts as a spatial lowpass filter, and the atmosphere, poor tracking and other disturbances help us out in practice. How much do they help us out? It varies from night to night and from kit to kit. Theory isn't going to be much help there. What we can say is that if the problems I allude to are going to surface, they are going to do so on the nights of best seeing. And by failing to remove frequencies above the Nyquist limit prior to sampling, we will not be able to perfectly reconstruct even the band-limited signal.

Any discussion of undersampling only makes sense in terms of seeing first, and Nyquist second. Only by taking seeing into account can we estimate just how bad our 'perfect' reconstruction is going to be.

As an aside, I work in audio/speech and for a long time the received wisdom was that there was no useful information in speech above 8 kHz, leading to sampling rates of 18-20 kHz. Now we know that there is quite a lot of informative energy above 8 kHz, demonstrated for instance by colleagues working in speech synthesis who absolutely insist on training their systems using speech data sampled at up to 50 kHz (to catch frequencies up to ~20 kHz). You could say we were undersampled all along -- but at least we were able to use a lowpass filter to ensure that the conditions for the Nyquist to apply were met, and reconstruction was near-to-perfect.
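The effect of that lowpass filter is easy to reproduce numerically. A minimal Python sketch with a made-up 7 kHz tone, downsampled x4 (to a 12 kHz rate, i.e. a 6 kHz Nyquist limit) with and without an anti-aliasing filter:

```python
import numpy as np
from scipy.signal import decimate

fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 7_000 * t)    # 7 kHz tone, one second

q = 4                                    # new rate 12 kHz, Nyquist 6 kHz
naive = tone[::q]                        # no prefilter: 7 kHz aliases to 5 kHz
proper = decimate(tone, q)               # lowpass filter first, then downsample

print(naive.std(), proper.std())         # the alias survives at full strength;
                                         # the filtered version is almost silent
```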

Back to images:

So in a hypothetical future where Vlaiv's arguments have convinced readers to go out and buy a sensor with (too-)large pixels, then what?

We go ahead and sample the night sky. And we have no way at all of knowing how much energy at high spatial frequencies has found its way into our sensor. They're just photons after all. They don't know that they're part of some cosmic configuration with their fellow photons that collectively represent a spatial frequency too high to be captured by our too-large pixels. But these photons, having made it to our scope, have to go somewhere. They don't necessarily produce the aliasing artefacts so beloved of computer graphics/vision textbooks. But they're there just the same. Unwanted noise. I've always felt globular clusters are good candidates as generators of high spatial frequencies.

So on a night of excellent seeing -- the ones we all pine for and celebrate and spend long hours observing in -- those of us with large-pixel sensors (and of course I mean with respect to focal length -- I'm talking about resolution here) may well harbour a nagging suspicion that (1) our sensor is not going to capture the additional detail that's available, and worse, (2) that the lost detail is translated into noise that is impossible to detect in our captures. Yes, it's a double whammy. (We've mainly been focused in this thread on the first part.)

Here's a shot (in EEVA mode, nothing exciting) from the best night's seeing I can remember in the last 10 years. Do I believe I was undersampled on this occasion? I have absolutely no way of knowing, and neither does Vlaiv. (No need to look too closely at those stars -- you already know what you're going to see) BTW Vlaiv: no need to mention Lanczos again -- I get that I can smooth the rough edges off these stars, but perfect reconstruction? Who can say. See above.
[attached image: IC 5146 (Cocoon Nebula), EEVA capture]

I never guide and I use alt-az, not eq, so you can imagine that things would be even worse (since I'd be successfully focussing the signal onto a tight range of pixels) if I were to guide and use the mount in eq mode. At least I can use the mount and within-sub field rotation to cover up the sins of undersampling.

In the days of high read-noise CCDs there was maybe a point in not trying too hard to avoid undersampling, but with CMOS,  small pixels and low read noise, why risk it? Fractional binning to achieve critical sampling and the best possible SNR may well be the way to go -- more experiments are needed in that direction. But that option is just not there if you're in the region of being undersampled to begin with.

Now I neither expect nor indeed desire to convince you, Vlaiv. I just want to leave this here so that anyone coming to this thread doesn't think that everyone agrees with you.

Martin

 

Added in edit: there are techniques like [1] that can handle some classes of undersampled images by eliminating aliasing to an extent

[1] https://arxiv.org/pdf/astro-ph/9810394.pdf

 


A minor comment or two. Formally, Nyquist requires point sampling (in addition to other criteria), but a CCD or CMOS pixel integrates over an area. This also means they are not shift invariant, i.e. the result depends on where the image falls in relation to the pixels.

So again, formally, linear analyses like MTF etc. don't strictly apply, and simulation is needed for accurate results. In my area, spectroscopy, sampling at higher density than Nyquist is required for accurate estimation of line centre, line width etc.
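A tiny numerical sketch of that shift variance, using a hypothetical Gaussian "star" of one pixel FWHM imaged by area-integrating pixels:

```python
import numpy as np

fine = 1000                                  # sub-samples per pixel
x = (np.arange(10 * fine) + 0.5) / fine      # fine grid spanning 10 pixels

def star(centre_px, fwhm_px=1.0):
    sigma = fwhm_px / 2.355
    return np.exp(-0.5 * ((x - centre_px) / sigma) ** 2)

def pixel_values(scene):
    return scene.reshape(10, fine).mean(axis=1)   # each pixel integrates its area

print(np.round(pixel_values(star(4.5)), 3))  # star centred on pixel 4
print(np.round(pixel_values(star(5.0)), 3))  # same star on a pixel boundary:
                                             # a clearly different response
```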

Regards Andrew


16 hours ago, Martin Meredith said:

Now I neither expect nor indeed desire to convince you, Vlaiv. I just want to leave this here so that anyone coming to this thread doesn't think that everyone agrees with you.

I don't think the point of this thread is to agree with me - at least that is not why I started it - but I would be happy if everyone agreed on the correctness of the theoretical framework, at least those who bothered to follow through and try to understand what is being said.

I would be very thankful if you actually put some effort into convincing me otherwise if I'm wrong about something. It does not require much - just a pointer to a mathematical explanation of why I'm wrong (I'm very capable of following mathematical arguments) and/or an example.

When you mistakenly thought that pixels are square, I did put some effort into showing you that it is actually not the case - that pixels are point samples, and squareness arises from the use of nearest neighbor interpolation.

If you still have doubts about that, I can offer yet another example to confirm pixels as point samples. That is why I asked:

17 hours ago, Martin Meredith said:

I don't have a final position. You?

If you don't like my objections to undersampling so far, don't worry, I have others.

It's not about me liking your objections. The question is - do they have merit? The one about square pixels does not.

I also appreciate you having other objections - like band limited signal and any artifacts arising from any possible higher frequencies.

17 hours ago, Martin Meredith said:

As an aside, I work in audio/speech and for a long time the received wisdom was that there was no useful information in speech above 8 kHz […] at least we were able to use a lowpass filter to ensure that the conditions for Nyquist to apply were met, and reconstruction was near-to-perfect.

Telescope aperture is an ideal low pass filter. It effectively kills off any frequency above the critical frequency. They are not just attenuated - they are completely removed. The MTF of a perfect aperture looks like this:

[attached image: MTF of a perfect aperture with no central obstruction, falling to zero at the cut-off frequency]

Seeing and guiding provide another strong low pass filter. These produce Gaussian blur over a long exposure (Central limit theorem: https://en.wikipedia.org/wiki/Central_limit_theorem). The Fourier transform of a Gaussian is a Gaussian - so these also act as a low pass filter with a Gaussian shape in the frequency domain: https://en.wikipedia.org/wiki/Gaussian_filter

[attached image: Gaussian filter frequency response]

All of these restrict possible high frequencies and act as strong low pass filters.

We have a means of determining the approximate cut-off frequency by just examining the FWHM of stars in the image. They are well approximated by a Gaussian shape. Given that the combined PSF convolves the image and stars are point sources, the star profile is the PSF profile.
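Measuring that FWHM is straightforward. A minimal Python sketch that fits a Gaussian to a star profile - synthetic here, but in practice it would be a cut through a star in your own stack:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma, base):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + base

# Synthetic 1D cut through a star (amp 1000, sigma 1.7 px, sky level 50)
rng = np.random.default_rng(1)
x = np.arange(41.0)
profile = gauss(x, 1000, 20.3, 1.7, 50) + rng.normal(0, 5, x.size)

p0 = [profile.max() - np.median(profile), x[np.argmax(profile)], 2.0, np.median(profile)]
(amp, mu, sigma, base), _ = curve_fit(gauss, x, profile, p0=p0)

pixel_scale = 1.0                            # "/px - set this for your own setup
print("FWHM:", 2.355 * abs(sigma) * pixel_scale, "arcsec")
```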

For this reason we never see aliasing artifacts in astronomical images. There are plenty of under-sampled images created by amateur astronomers - just look at wide field shots at 3.5"/px or coarser. Some of those images end up on APOD. We can even demonstrate that under-sampling does not create visual artifacts in the final image - even in high frequency scenarios like globular clusters. Here is an example:

[attached image: M13 sampled at 8"/px]

This is M13 sampled at 8"/px - that is very undersampled - yet the image looks fine. There is visually nothing wrong with it - it does not show any artifacts.

17 hours ago, Martin Meredith said:

So in a hypothetical future where Vlaiv's arguments have convinced readers to go out and buy a sensor with (too-)large pixels, then what?

I don't think you properly understand what I'm saying.

I'm not trying to convince anyone to purchase a large pixel camera - no more than the Astronomy.tools CCD suitability tool is already doing.

What I'm saying is - we need to change that tool to:

1) Correctly determine over / under / correct sampling for the set of parameters the user enters - like seeing, scope aperture and mount performance.

The tool already does this, but in a limited capacity which sometimes leads to wrong results.

2) Advise users on any ill effects of their choice.

Over-sampling has the big problem of making the system much slower than it needs to be unless the data is binned. No advantage comes from over-sampling in long exposure imaging - no additional detail will be captured.

Under-sampling won't capture all the detail available - but besides that it won't cause any other problems. The only problem that could arise from under-sampling is aliasing artifacts. Those, however, don't appear in astronomical images due to the strong low pass filter that seeing + guiding + scope aperture represent. Higher frequencies are killed off, and those that are not are attenuated below the noise floor (noise has a uniform distribution in the frequency domain, while the signal is attenuated by the Gaussian and MTF curves).

17 hours ago, Martin Meredith said:

Fractional binning to achieve critical sampling and the best possible SNR may well be the way to go -- more experiments are needed in that direction. But that option is just not there if you're in the region of being undersampled to begin with.

Some people want a wide field of view, and there is nothing wrong with being under-sampled in that case. They are willing to let go of that last 10% of detail/sharpness/contrast for a wide field image.

You can't have both - at least not yet. We don't have good large sensor astronomy cameras with pixels below 2 µm. When those cameras are made, the tool will still be viable. People wanting wide field will be advised to bin by a certain factor, given their small pixel size, for best results.


Forgive me for not grasping aspects of the topic, but what is the difference between a single large pixel and binning at say 2x2 to achieve the same size? I have noted the points about over sampling introducing noise and slowing the system, but you gain in definition; surely this is a good thing?

If, for example, I am not interested in wide field, which from the posts seems to be the only point of under sampling, is there another benefit to under sampling?

Not sure that the mount should be introduced into the calculation, as you are trying to match the telescope to the camera?

Is it possible that EEVA and long exposure imaging look for different aspects from the camera and maybe that could be a tickbox?

1) Correctly determine over / under / correct sampling for set of parameters user enters - definitely

2) Advise users on any ill effects of their choice. - definitely

 

 


1 hour ago, M40 said:

Forgive me for not grasping aspects of the topic but, what is the difference between a single large pixel and binning at say 2x2 to achieve the same size? I have noted the points about over sampling introducing noise and slowing the system, but you gain in definition, surely this is a good thing?

There is almost no difference between the two. With a CCD camera, there truly is no difference.

With CMOS cameras and software binning, there is only a minor difference in read noise. Read noise is larger if you bin than for regular pixels - but with stacking that is not really an issue, as we adjust the exposure length to overcome read noise.

We could say that this is a small drawback of using small pixels and binning - the need for longer single exposures in the stack. Many people have heard that CMOS sensors enable short exposures, and that is true, but when you over-sample that advantage slowly diminishes.

If you over-sample by a factor of two, for example (with the intention of binning x2 later), your single sub needs to be x4 longer. If with optimal sampling one would use 1 minute exposures, then with x2 over-sampling one needs 4 minute exposures to overcome read noise.
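In numbers, for a hypothetical camera with 1.7 e- read noise - a minimal sketch of both effects:

```python
def binned_read_noise(read_noise_e, k):
    """Effective read noise of a k x k software-binned (summed) pixel, in e-.
    Noise adds in quadrature over k*k pixels: sqrt(k * k) = k times one pixel."""
    return read_noise_e * k

def required_sub_length(base_minutes, oversample_factor):
    """Each pixel collects 1/oversample_factor**2 of the light, so keeping the
    same sky-to-read-noise ratio per pixel needs subs longer by that factor."""
    return base_minutes * oversample_factor ** 2

print(binned_read_noise(1.7, 2))    # 1.7 e- binned 2x2 -> 3.4 e-
print(required_sub_length(1, 2))    # 1 min at optimal sampling -> 4 min oversampled x2
```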

If you over-sample, you don't get any additional definition. There is nothing to be recorded above the optimum sampling frequency. The image simply won't look any sharper or more detailed than if you properly sampled.

In fact, that is one of the tests for over-sampling. If you can:

- reduce your final image to a smaller size by resampling, and then

- enlarge it back to the original size - and it does not change, then you over-sampled.

Images usually lose detail when you downsize them and then upsize them back - unless they were over-sampled to begin with and the detail was never there.
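The round-trip test is simple to script. A minimal sketch with scipy (my own function name, not an established tool):

```python
import numpy as np
from scipy.ndimage import zoom

def oversampling_test(img, factor=2):
    """Shrink the image, enlarge it back and compare with the original.
    A residual near zero suggests no detail existed beyond the coarser grid,
    i.e. the image was over-sampled."""
    small = zoom(img, 1.0 / factor, order=3)
    back = zoom(small, factor, order=3)
    h = min(img.shape[0], back.shape[0])
    w = min(img.shape[1], back.shape[1])
    return (img[:h, :w] - back[:h, :w]).std() / img.std()

# usage: oversampling_test(my_stack) -> relative residual; near 0 == over-sampled
```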

1 hour ago, M40 said:

If, for example, I am not interested in wide field, which from the posts seems to be the only point to under sampling, is there another benefit to under sampling?

If you are not interested in wide field and you want to maximize the detail that you capture, you should then aim for optimum sampling. This is what this tool should be for - if you have an idea of what you want, it should tell you what combination of camera and scope will provide you with that.

The workflow would be like this: you set your goal for the detail level you want to achieve - say 1.5"/px - then you estimate the average seeing at your location and note down your mount performance (the usual guide RMS you get with your mount). Then you have all the info to try out different camera / scope combinations and find the one that is the closest match for your target resolution.

The tool will also be useful if you set some unrealistic goal like sampling at 1"/px. It is very hard to achieve that sort of resolution in all but the best conditions with a moderately large aperture (8"+). It will show you that there is little to be done if your seeing is regularly 2"-3" or your mount performance is 1" RMS guided.

1 hour ago, M40 said:

Not sure that the mount should be introduced into the calculation as you are trying to match the telescope to the camera?

It needs to be, as it adds to the overall blur of the image and reduces detail (in technical jargon, it acts as a low pass filter and removes the high frequencies we have been mentioning above).

If the image is blurred, it does not make sense to use very small pixels / high imaging resolution, as there is simply no detail to be captured. This is what we mean by over-sampling - using pixels too small for the level of blur in your image.

We can never know what sort of skies and mount performance we will have on any given night, but we can estimate from our usual conditions. If the mount never guides better than say 0.7" RMS - well, there is no point in expecting it to suddenly start guiding at 0.3" and reduce its blur in the final image.

If you are doing EEVA, you might say that the mount error is, say, 0" RMS. For very short exposures the mount simply does not have time to drift / be corrected / drift / be corrected, and so on for many cycles over one exposure. In a few seconds' exposure the mount error will be minimal - you can say it is effectively 0. This in turn means that FWHM estimates are based on aperture and seeing only.

You can check whether there is a mount contribution by comparing FWHM values of a final stack of, say, 2s exposures vs the exposures you usually use for EEVA - like 20 or 30s.

1 hour ago, M40 said:

Is it possible that EEVA and long exposure imaging look for different aspects from the camera and maybe that could be a tickbox?

1) Correctly determine over / under / correct sampling for set of parameters user enters - definitely

2) Advise users on any ill effects of their choice. - definitely

The only difference that I see is the use of guiding - and it can be handled like above or, as you suggest, with a tick box: if that tick box is ticked then the mount error is set to 0 and you can't choose mount / mount error in further calculations.

Other than that - maybe insist on optimum sampling and give a stronger warning when oversampling? That is for EEVA practitioners to advise - what is more important: get the image as fast as possible, or get the best definition of that image (the difference is very small for slight under-sampling, but it can be significant in the amount of time needed to observe the target).


11 hours ago, vlaiv said:

If you are not interested in wide field and you want to maximize detail that you capture - you should then aim for optimum sampling. This is what this tool should be for - if you have idea what you want - it should tell you what combination of camera and scope will provide you with that.

As someone who has used the existing tools.... a lot, that was my aim.

Real world example for you: I currently have no interest in long exposure imaging/processing, so I looked for a camera for EEVA purposes to get the best or optimum sampling together with FOV on a specific telescope. I knew my targets, so I selected the camera on the very simple basis of no Barlow, no reducer and no binning, aiming for optimal sampling and FOV. I did the spreadsheet thing so I knew roughly what I was looking for, and the tools then gave me a very easy method of comparing cameras.

Purely from my very basic use of the tools: if I were to change them, maybe the FOV and CCD suitability tools could be combined so that it's all on one page, with extra tick boxes for the use of the camera, i.e. solar system / EEVA / long exposure, which could then include the mount calculations.


There is also the question of OSC sensors that @Adam J pointed out.

The problem with OSC sensors is that, contrary to popular belief, they are not the same as their equivalent mono versions as far as sampling / resolution / detail go.

OSC sensors always sample at half the resolution the pixel size leads you to believe, and the final image is just enlarged x2 in software.

For example, if we take a modern DSLR with small pixels that has something like 6000 x 4000 pixels (24 Mpixel), we expect the resulting image to be 6000 x 4000 px - but it is not, as far as sampling goes. It is actually 3000 x 2000 (6 Mpixel), artificially scaled up x2 so that the pixel count is "returned" to 24 Mpix.

What does this mean?

1. We can't calculate the sampling rate from the physical pixel size - we need to use a pixel twice that size.

2. The problem is that people won't easily accept this lower resolution - after all, the software produces the correct image size, right?

3. Binning such an enlarged image after stacking simply does not bring the SNR benefit of binning a regular image that was not enlarged (true data).

4. The preferred way to debayer such images is either super pixel mode or, if one really wants to exploit the full resolution, bayer drizzle (if it is implemented correctly).

The main problem here is that people will have a hard time adapting to all of this, because so much software just ignores the above and "works out of the box" with interpolation debayering. It takes quite a bit of effort to process OSC data properly.
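For reference, super pixel debayering itself is only a few lines. A minimal sketch for an RGGB mosaic - check your camera's actual Bayer order before using anything like this:

```python
import numpy as np

def superpixel_debayer(raw):
    """Super pixel mode for an RGGB Bayer mosaic (assumed order!): every 2x2
    cell becomes one RGB pixel - half the resolution, but only true samples."""
    r = raw[0::2, 0::2]
    g = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])   # average the two greens
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])                     # shape (H/2, W/2, 3)

# a 6000 x 4000 OSC frame becomes a 3000 x 2000 RGB image, with no interpolation
```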

 


On 04/02/2022 at 23:07, andrew s said:

So again, formally, linear analysis like MTF etc. don't strictly apply and simulation is needed for accurate results.

True, and we so often tend to forget. Do we know if anybody has done some simulations on the subject we discuss?


On 04/02/2022 at 20:44, Martin Meredith said:

In the days of high read-noise CCDs there was maybe a point in not trying too hard to avoid undersampling, but with CMOS,  small pixels and low read noise, why risk it? Fractional binning to achieve critical sampling and the best possible SNR may well be the way to go -- more experiments are needed in that direction. But that option is just not there if you're in the region of being undersampled to begin with

Glad to read this. I think solid experiments on binning of small pixel CMOS cameras would be great. Is anybody doing those?


2 hours ago, alex_stars said:

Glad to read this. I think solid experiments on binning of small pixel CMOS cameras would be great. Is anybody doing those?

I do it regularly.

I have an RC8 and an ASI1600. Natively that is 0.48"/px - way over-sampled. Every image I take with that combination needs to be binned.


20 minutes ago, newbie alert said:

Do you bin 3x3 on that setup Vlaiv?

Sometimes even x4.

I've posted an M51 taken with that setup that is best sampled at bin x4. That also goes for this image, for example - taken on a night of rather poor seeing:

[attached image: Pacman Nebula, binned x3]

Just look at the size of those stars - very bloated. I binned this x3 (it is something like 1500 x 1100 px in size) while in reality it is better suited to bin x4.


12 hours ago, vlaiv said:

I do it regularly.

So where do you publish your experimental results?

Just for clarification: I was NOT asking whether somebody is using binning to process their astronomical images. I was referring to @Martin Meredith's original post:

On 04/02/2022 at 20:44, Martin Meredith said:

Fractional binning to achieve critical sampling and the best possible SNR may well be the way to go -- more experiments are needed in that direction.

The keyword here is "fractional". So we are not talking about binning 2x2, 3x3 or even 4x4, but fractionally, with values like 1.7x1.7 and such. Here is an example of what is meant:

  • Say you have a CMOS camera with 2.9 µm pixels and you have worked out that you actually want 4.8 µm pixels (given your telescope's specs) to achieve critical sampling.
  • If you "just" bin the usual way, you get effective 5.8 µm pixels with 2x2 binning, or 8.7 µm pixels with 3x3.
  • But that is not what you want; you want 4.8 µm pixels. With fractional binning you can get that almost exactly by binning 1.65x1.65, which leads to effective pixels of about 4.8 µm.

That is what we are talking about. So again, do you experiment with fractional binning @vlaiv? If so, which software do you use and where do you publish the results? It would be really interesting to see those because, as @Martin Meredith points out, such experiments would be very interesting and are needed.
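Just to make the arithmetic explicit (same numbers as above):

```python
native_um, target_um = 2.9, 4.8                     # the example values from above

for k in (2, 3):
    print(f"bin {k}x{k}: {native_um * k:.1f} um")   # 5.8 and 8.7 um - both miss 4.8

frac = target_um / native_um
print(f"fractional factor needed: {frac:.3f}")      # ~1.655, i.e. bin ~1.65x1.65
```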


10 minutes ago, alex_stars said:

so where do you publish your experimental results?

I made numerous posts about fractional binning and related mathematical concepts here on SGL.

I'm not very fond of fractional binning - because in essence it is mathematically very similar to (if not the same as) bilinear interpolation - which is arguably the worst kind of interpolation.

This can be seen if we take a simple example - a 3x3 matrix of pixels binned by a factor of 1.5 (thus reduced to 2x2):

[attached image: 3x3 grid of input pixels (black) overlaid by the 2x2 output grid (red)]

Here we think of pixels as squares for the purpose of calculating their effective sampling "area" (although on the sensor they might not be squares - they may have an oval or some other shape).

Let's think about how we are going to calculate the top left red pixel from the underlying black pixels.

We need to take the whole of (1,1), half of (1,2), half of (2,1) and a quarter of (2,2) (by value, of course) and add them up (we can then divide by the "total" area if we want the average).

This would be the fractional binning process. Here are some things to observe:

1. Pixel (1,2) will contribute its value to both the top left and top right red pixels. Similarly, the central black pixel (2,2) will contribute its value to all four resulting red pixels.

This introduces pixel-to-pixel correlation - which equates to blur - and the pixels are no longer linearly independent (this might matter in scientific analysis, as their associated noise is no longer random but in part depends on the same underlying pixel).

2. The process is the same as bilinear interpolation.

Imagine that we want to calculate the value of a point at coordinates (1.66, 1.66), given that the above black pixels have coordinates in the 1,2,3 range for (x,y).

What would mathematical expression for that be?

[attached image: bilinear interpolation weight expression]

We now sum four values weighted by the inverses of the distances - and you'll see that you again get 1 : 0.5 : 0.5 : 0.25.

3. We don't get a predictable SNR improvement. With regular binning, if the data is genuine, we get a predictable SNR improvement - x2 for 2x2 binning and x3 for 3x3 - but here we don't get a x1.5 improvement in SNR if we bin x1.5.
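Point 1 is easy to check numerically. A minimal Monte Carlo sketch (my own construction, hard-coding the factor 1.5 case with the 1 : 0.5 : 0.5 : 0.25 weights above): fractionally binning a pure-noise frame leaves neighbouring output pixels correlated, while integer binning does not:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (600, 600))        # a pure "read noise" frame

def bin_int(a, k):
    h, w = a.shape[0] // k * k, a.shape[1] // k * k
    return a[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def bin_frac_1p5(a):
    """Area-overlap binning by 1.5: every 3x3 input block -> 2x2 output."""
    h, w = a.shape[0] // 3 * 3, a.shape[1] // 3 * 3
    blocks = a[:h, :w].reshape(h // 3, 3, w // 3, 3)
    W = np.array([[1.0, 0.5, 0.0],              # 1D overlap weights, giving the
                  [0.0, 0.5, 1.0]]) / 1.5       # 1 : 0.5 : 0.5 : 0.25 pattern in 2D
    out = np.einsum('ik,mknl,jl->minj', W, blocks, W)
    return out.reshape(h // 3 * 2, w // 3 * 2)

def neighbour_corr(a):
    return np.corrcoef(a[:, :-1].ravel(), a[:, 1:].ravel())[0, 1]

print(neighbour_corr(bin_int(noise, 2)))        # ~0: binned pixels stay independent
print(neighbour_corr(bin_frac_1p5(noise)))      # ~0.1: fractional binning correlates them
```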


Just to give an example of a possible experiment, I found a very interesting post over at the other forum:

https://www.cloudynights.com/topic/526385-planetary-camera-rant-11-micron-pixels/?p=7067915

and I shamelessly copy the images here for a summary (😀 reference above).

So the setup of "Vanguard" is as follows:

Quote

I used a tripod mounted Williams Optics 66mm ED, Focal length = 388mm, F/5.9 to examine a paper resolution target at 10 meters (30 feet).  I was able to reduce the aperture of the telescope to F/8.35, F/11 and F/18.75 by inserting black cardboard disks with a center hole in front of the objective lens.

Cameras:
AmScope MU1000: pixel size 1.667µm, 2x2 binned for 3.34µm and 4x4 binned for 6.7µm.
Celestron NexImage 5: pixel size 2.2µm
MallinCam SSIc: pixel size 3.75µm

and just as a teaser, some results at F/8.35:

[attached image: resolution target results at F/8.35 for the different pixel sizes]

We can actually see how well the 1.66 µm images do. Just imagine you only own the 3.75 µm camera - all is lost for the really good seeing nights 😢.


If you really want to match that resolution, maybe the best approach would be the following:

Bin to the first integer factor that gets you under-sampled below the target resolution - in your case a 5.8 µm pixel size - and then drizzle to the wanted resolution with appropriate parameters.

 

 


1 minute ago, vlaiv said:

Bin to first integer factor that will get you under sampled below target resolution - in your case 5.8um pixel size and then drizzle to wanted resolution with appropriate parameters.

Also an interesting idea. So we need resolution charts to actually quantify what happens and test ideas. That way we can make progress.


9 minutes ago, vlaiv said:

I made numerous posts about fractional binning and related mathematical concepts here on SGL.

I'm not very fond of fractional binning - because in essence it is mathematically very similar to (if not the same as) bilinear interpolation - which is arguably the worst kind of interpolation. […]

Sorry @vlaiv, I was not asking for your elaboration on how you would do fractional binning or whether "you" are fond of it; I was asking for experiments with quantifiable results, so that we can discuss which strategy would be best to work with.

Nevertheless, thanks for always putting in the work to answer in great detail 👍

Either way, I think it is still clear that "one" wants a sensor with the smallest pixels possible, else one does not even get the chance to experiment with binning, whether fractional or integer.

 
