Sampling

Does this illustrative tool have any role today in determining what camera to match with what telescope? Or does the move to CMOS sensors with generally very small pixels and shorter exposures negate the need to worry about matching kit any more?


58 minutes ago, jambouk said:

Does this illustrative tool have any role today in determining what camera to match with what telescope?

No

59 minutes ago, jambouk said:

Or does the move to CMOS sensors with generally very small pixels and shorter exposures negate the need to worry about matching kit any more?

No

I'd say that nowadays we have much more accurate ways of matching camera to telescope with respect to resolution, both for planetary and long-exposure imaging.


Vlaiv, what tool do you recommend people would use?


1 minute ago, jambouk said:

Vlaiv, what tool do you recommend people would use?

Math / physics

For planetary imaging, everything related to sampling can be expressed in one simple formula:

F/ratio = pixel_size * 2 / wavelength_of_light

Where the wavelength of light is expressed in the same units as the pixel size (convert both to micrometers, say) and represents the shortest wavelength of light one is interested in recording. That can be 400nm for colour imaging, or perhaps 656nm for H-alpha imaging (solar Ha, or lunar with a Ha filter), or maybe 540nm for white light with a Baader solar continuum filter - you get the idea.
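As a quick sketch, the formula above can be wrapped in a few lines of Python (the function name and unit conversion are my own choices; the post just gives the relation):

```python
def optimal_f_ratio(pixel_size_um, wavelength_nm):
    """Critical-sampling focal ratio: F = 2 * pixel_size / wavelength,
    with both quantities converted to the same unit (micrometers here)."""
    wavelength_um = wavelength_nm / 1000.0
    return 2.0 * pixel_size_um / wavelength_um

# 2.9 um pixels, sampling down to 400 nm (colour imaging):
print(round(optimal_f_ratio(2.9, 400), 1))  # 14.5
```

So a 2.9µm planetary camera would be critically sampled at roughly F/14.5 for colour work - in line with the F/15 example that comes up later in the thread.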

For deep sky imaging there are two basic formulae - the first is:

sampling_rate (in "/px) = expected_FWHM / 1.6

and second:

expected_FWHM = sqrt(seeing_fwhm^2 + tracking_fwhm^2 + airy_disk_gaussian_approximation_fwhm^2)

Or you don't need to calculate the expected FWHM at all - you can measure it from your existing subs.

The only tricky part is knowing the seeing FWHM for a given location (or estimating average / min / max from existing subs).
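A small sketch of those two formulae in Python (the function names are mine; it takes the sampling rate in arcseconds per pixel as the expected FWHM divided by 1.6):

```python
import math

def expected_fwhm(seeing_fwhm, tracking_fwhm, airy_fwhm):
    """Combine the three blur sources in quadrature (all in arcseconds)."""
    return math.sqrt(seeing_fwhm**2 + tracking_fwhm**2 + airy_fwhm**2)

def max_sampling_rate(fwhm_arcsec):
    """Coarsest pixel scale that still captures all the detail ("/px)."""
    return fwhm_arcsec / 1.6

# 2" seeing, 0.5" guide RMS (tracking FWHM = 2.355 * 0.5), ~0.6" Airy FWHM:
fwhm = expected_fwhm(2.0, 2.355 * 0.5, 0.6)
print(round(max_sampling_rate(fwhm), 1))  # 1.5
```

Which lands on the 1.5"/px figure quoted further down for 150-200mm scopes.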

Anything below these sampling values is OK - you can choose to undersample (without any ill effects) if you want to get a wide-field image.

Anything above these sampling values is going to hurt SNR and should be avoided.

Use of binning can help - it effectively increases the pixel size by the bin factor (bin x2 gives an x2 larger pixel, and so on).


OK.

For the planetary equation, this is using focal ratio, not focal length. Two telescopes with the same focal ratio, but different focal lengths (and hence apertures), will presumably not be equal? I'm also not sure how one identifies the optimum sampling rate (I presume sampling is measured in arcseconds per pixel) from this equation.

For DSO imaging, if one doesn't know the expected FWHM as this is the first camera to be used, what value would one use? What is bog standard average UK seeing?

Thanks.

James


3 minutes ago, jambouk said:

For the planetary equation, this is using focal ratio, not focal length. Two telescopes with the same focal ratio, but different focal lengths (and hence apertures), will presumably not be equal?

Resolving power of the telescope grows with aperture size - a large aperture resolves more than a small one.

For a given pixel size, a longer focal length gives a higher sampling rate than a shorter one (more pixels per arcsecond).

It turns out that these two grow at the same rate for fixed F/ratio - thus if a 6" scope is optimally sampled at F/15 for a given pixel size, so is an 8" scope: the difference in resolving power between 6" and 8" is exactly matched by the difference in sampling rate between 2250mm and 3000mm of focal length.

6 minutes ago, jambouk said:

I'm also not sure how one identifies the optimum sampling rate (I presume sampling is measured in arcseconds per pixel) from this equation.

Optimum sampling rate for planetary is the sampling rate at which all possible information is captured.

A telescope aperture, due to the nature of light, can only capture so much detail (depending on aperture size). This is identified as the frequency cut-off point: if we analyze the signal produced by the telescope at the focal plane with a Fourier transform, we find that there is a certain frequency above which all higher frequencies are 0 - there is simply no signal there.

The Nyquist sampling theorem states that one should sample at twice the highest frequency present in a band-limited signal.

We put those two together and we arrive at optimum / critical sampling.

The cut-off frequency of a diffraction-limited optical system is given by:

f0 = 1 / (lambda * F#)

where f0 is the cut-off frequency, lambda is the wavelength of light and F# is the f-number (F/ratio) of the optical system. We combine that with the Nyquist sampling theorem, which says we need two samples (two pixels) per period corresponding to the highest frequency (a period of 1/f0). We rearrange for F# and we get:

F# = 2 * pixel_size / lambda

Sampling is measured in arcseconds - but there is a simple relationship between pixel size in microns and arcseconds for a given focal length. (This shows that the sampling rate in arcseconds is not fixed: since the F/ratio is fixed, it grows with aperture - larger scopes resolve more in terms of arcseconds, i.e. have higher angular resolution.)
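That pixel-size-to-arcseconds relationship is just plate-scale arithmetic; a minimal sketch (206.265 comes from 206265 arcseconds per radian, scaled for µm and mm):

```python
def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    """Sampling rate in arcseconds per pixel for a given focal length."""
    return 206.265 * pixel_size_um / focal_length_mm

# The same 2.9 um pixel at F/15 on a 6" (2250 mm) vs an 8" (3000 mm) scope:
print(round(pixel_scale_arcsec(2.9, 2250), 2))  # 0.27
print(round(pixel_scale_arcsec(2.9, 3000), 2))  # 0.2
```

The bigger scope ends up sampling more finely in arcseconds, matching the point that angular resolution grows with aperture when the F/ratio is held fixed.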

14 minutes ago, jambouk said:

For DSO imaging, if one doesn't know the expected FWHM as this is the first camera to be used, what value would one use? What is bog standard average UK seeing?

There is a relationship between mount guide RMS and FWHM, given by FWHM = 2.355 * RMS.

A stock HEQ5 has a guide RMS of around 1".

A modded/tuned HEQ5 has a guide RMS of 0.7-0.8", or even down to 0.5" depending on the mods applied (the same goes for the EQ6, really).

Higher-tier mounts can go as low as 0.2"-0.3" RMS (Mesu 200, for example).

Average seeing is 2" FWHM. Good seeing is 1.5" FWHM. Excellent seeing is 1" FWHM and below (below 1" happens only on mountain tops / deserts - special sites).
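The RMS-to-FWHM conversion is a one-liner; a sketch using the mount figures above (assuming the tracking error is roughly Gaussian, so FWHM = 2.355 * sigma):

```python
def tracking_fwhm(guide_rms_arcsec):
    """FWHM of a Gaussian blur whose sigma is the guide RMS (arcseconds)."""
    return 2.355 * guide_rms_arcsec

print(tracking_fwhm(1.0))   # 2.355   - stock HEQ5 (~1" RMS)
print(tracking_fwhm(0.25))  # 0.58875 - premium mount (~0.25" RMS)
```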

You can always check a seeing forecast for your location (this is without local influence - actual seeing might be worse due to local heat sources).

You will often find that the forecast seeing is below 2" on a cloudy day / night - but when it is night time and clear, it is going to be around 2" (a stable atmosphere helps with cloud formation).

In the end - here is a sort of rule of thumb related to aperture size:

with <100mm scopes - limit yourself to widefield imaging as you'll be limited to say about 2"/px

100mm - 150mm - upper sampling limit would be 1.8"/px

150mm - 200mm - 1.5"/px

above 200mm - 1.3-1.4"/px

One can attempt 1"/px only on the best of nights, using a very good mount (RMS 0.5" or less).

In all reality - odds are you won't get an image that has detail below 1"/px in 99.99% of conditions / equipment.


I am rubbish at maths and, as much as I try, it's all Greek to me.

So I use this calculator instead: astronomy.tools


13 minutes ago, bomberbaz said:

I am rubbish at maths and as much as I try, it's all greek to me.

SO I use this calculator instead astronomy.tools

It would be really nice if it actually worked - but unfortunately it does not.


2 hours ago, bomberbaz said:

I am rubbish at maths and as much as I try, it's all greek to me.

SO I use this calculator instead astronomy.tools

The results Astronomy Tools comes up with might be meaningful for deep sky imaging, but they are nonsense when it comes to planetary imaging.

Rather than using a Barlow with my Esprit 150, it suggests that under OK seeing conditions I should actually be using a focal reducer with my ZWO ASI 462 planetary camera (pixel size 2.9µm).

John


11 hours ago, vlaiv said:

F/ratio = pixel_size * 2 / wavelength_of_light

Hi @vlaiv. I've been thinking about this formula a fair bit since we discussed sampling on my Jupiter thread over the last couple of days. My understanding of the Nyquist theorem (which is very limited) is that sampling must be 'at least' twice the frequency of the analogue signal, so is there some wriggle room in using a number greater than 2? Would that contribute to why some planetary imagers are imaging at significantly longer focal lengths? I.e. what if we used 2.25, 2.5 or 3 in the above equation - why wouldn't we? If I use, say, pixel_size * 3 / wavelength_of_light with my 2.9µm camera pixels, then I'd get 2.9 * 3 * (1000/400) = 21.75. So I guess I'm asking: why are we restricted to the x2 factor? It's a pretty crude component of the formula, isn't it?


17 minutes ago, geoflewis said:

My understanding of the Nyquist theorem (which is very limited) is that sampling must be 'at least' twice the frequency of the analogue signal, so is there some wriggle room in using a number greater than 2?

Yes, you are right: what Nyquist says is that sampling with less than x2 will lead to aliasing artifacts in the reconstructed signal, and that you need x2 or more in order to perfectly reconstruct the signal.

As I already pointed out - you can certainly sample at a higher rate than the optimum / critical sampling rate and you won't lose any detail / you will capture the detail just as well as at the optimum sampling rate.

There is however other part of the story and that is SNR part.

We need good total SNR, but we also need good per-sub SNR. The former we need for a good resulting image, but the latter is needed for software to be able to do its thing properly.

Software needs to do two primary things - first is to undo distortion that is created by tilt component of the seeing across the planet's face - that is what alignment points are for. Second thing is to identify quality frames.

For both of these things the per-sub SNR needs to be high enough, otherwise alignment might not be correct - it would be performed on noise rather than on features - and of course noise can be mistaken for detail, so an otherwise blurred sub might be accepted (AS!3 has a special option to handle this, called noise robustness, that you can increase if your subs are too noisy).

In any case - sampling higher than x2 will decrease per-sub and consequently overall SNR, but will not otherwise contribute to captured detail - all the detail is already available at x2. This naturally leads to the question - why do it then?


Thanks @vlaiv, I am gradually beginning to understand this better - I think 🤔


18 minutes ago, vlaiv said:

why do it then?

I think there are a couple of reasons why you would choose to use a longer FR (but I’m not saying that these are the reasons why it is done by many)

1) Steeper light cone leads to reduced aberration due to ADC prisms (we touched on this a few weeks ago). Though I admit I don’t know how to quantify the ADC aberration at various FR in order to compare it to the loss of SNR between those FR.

2) More barlow power  = larger diffraction limited field diameter = Beneficial for collimation as more tolerance to any sag/droop in imaging train?

Considering that the large aperture scopes (and sensitive latest cameras) most planetary imagers use bring in a lot of signal to begin with anyway, throwing a little away to get the above benefits could be a good decision?


36 minutes ago, CraigT82 said:

Considering that the large aperture scopes (and sensitive latest cameras) most planetary imagers use bring in a lot of signal to begin with anyway, throwing a little away to get the above benefits could be a good decision?

While I can't comment on the usefulness of the points you listed, I'll just point out that a large aperture in this case simply means nothing for signal.

Per-pixel signal will be equal in a 3" scope and a 14" scope if one uses critical sampling for both - or the same F/ratio for both.

We can see that with a simple example - if we double the aperture and keep the F/ratio the same, we double the focal length.

With double the aperture we quadruple the light gathering (it goes with aperture area), so x4 photons are collected.

With doubling of the focal length we double the sampling rate, and reduce each pixel's sky coverage to a quarter (1/2 squared - as it is again an area).

We end up with x4 more photons spread over x4 more pixels (as each pixel now covers 1/4 of the sky). Signal per pixel remains the same.

On the other hand - using an x2 longer focal length than needed cuts the signal by x4, and thus per-sub SNR falls by half (or more, depending on how large the read noise is compared to the shot noise).
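The scaling argument above can be checked with toy arithmetic (a sketch of the shot-noise-limited case only; read noise would make the oversampling penalty worse):

```python
import math

# Double the aperture at fixed F/ratio: light grasp goes as aperture area (x4),
# but focal length doubles too, so each pixel covers 1/4 of the sky (x4 pixels).
photons = 2.0 ** 2
pixels = 2.0 ** 2
print(photons / pixels)  # 1.0 - per-pixel signal unchanged

# Use an x2 longer focal length than needed: per-pixel signal drops x4,
# and shot-noise-limited SNR scales as sqrt(signal):
print(math.sqrt(1.0 / 2.0 ** 2))  # 0.5 - per-sub SNR halved
```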


We find ourselves in a strange position as deep sky imagers these days.  CMOS pixels are getting smaller and smaller and there's no hardware binning possible. This is great for creating widefield images at short focal lengths without the need for mosaics, but how do we best exploit the optical resolution of large apertures and longer focal lengths? How effective is software binning and what's the best way to do it?

Olly


32 minutes ago, ollypenrice said:

We find ourselves in a strange position as deep sky imagers these days.  CMOS pixels are getting smaller and smaller and there's no hardware binning possible. This is great for creating widefield images at short focal lengths without the need for mosaics, but how do we best exploit the optical resolution of large apertures and longer focal lengths? How effective is software binning and what's the best way to do it?

Olly

Software binning is as effective as hardware binning if one accepts that the sensor has a bit more read noise.

If one exposes properly for read noise at native resolution, then nothing else needs to be done.

Say that we are swamping the read noise with sky background noise at 5:1, and that we have a read noise of 2e.

This means the sky background signal is (2e * 5)^2 = 10^2 = 100e.

When we bin x2, the following happens - we effectively increase the read noise by the bin factor, so we now have 4e of read noise - and the background still swamps that by a factor of 5:1.

Adding 4 background pixels together produces 400e of signal, which has sqrt(400) = 20e of sky background noise - and that is still x5 larger than the 4e of read noise.

As long as one is exposing properly for the native resolution, all is good as far as read noise goes. In all other aspects, software binning acts like hardware binning.
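The read-noise bookkeeping above, as a short sketch:

```python
import math

read_noise = 2.0                                 # e- per pixel
swamp_factor = 5.0
sky_signal = (read_noise * swamp_factor) ** 2    # 100 e- background per pixel

# Bin 2x2 in software: read noise adds in quadrature over 4 pixels (-> x2),
# while the background signal simply sums (-> x4).
binned_read_noise = read_noise * math.sqrt(4)    # 4 e-
binned_sky_noise = math.sqrt(sky_signal * 4)     # sqrt(400) = 20 e-

print(binned_sky_noise / binned_read_noise)      # 5.0 - swamp factor preserved
```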

In fact the best way, in my view, to bin the data is "not to bin at all" - but to do something else that shows how binning actually works, even when it is software binning.

Imagine we are stacking 100 subs at, say, 4000x3000px and we want to bin x2. Instead of binning, we do something similar to split debayering.

We take each sub and split it into 4 subs without changing a single pixel. We take the odd, odd pixels (in vertical and horizontal) of the original sub and make the first "sub sub". Then we take the odd, even pixels (again in vertical and horizontal) and create the second "sub sub", and so on - even, odd and even, even.

The resulting "sub subs" have 2000x1500 pixels. Each pixel in them is spaced at twice the distance of the original, so the resolution is half the original. We haven't changed a single pixel value in this process, so there is no change in read noise or anything else - but now we have 400 subs to stack instead of 100.

Resulting stack will have x2 higher SNR and twice lower sampling rate.

If we do it like that, we avoid the whole summing-pixels-together business and just have twice lower resolution and the regular stacking we are used to - just with x4 more subs for x2 better SNR.

(as a bonus - this method gives ever so slightly sharper results as it aligns each of those subs with sub pixel accuracy, and that is why I think it is best way to do it).
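A sketch of that split in plain Python (no library assumed; a sub is just a list of pixel rows):

```python
def split_bin2(sub):
    """Split one sub into four half-resolution 'sub subs' by taking the
    odd/even rows and columns - no pixel value is ever changed."""
    return [
        [row[0::2] for row in sub[0::2]],  # odd rows, odd columns
        [row[1::2] for row in sub[0::2]],  # odd rows, even columns
        [row[0::2] for row in sub[1::2]],  # even rows, odd columns
        [row[1::2] for row in sub[1::2]],  # even rows, even columns
    ]

# Toy 4x4 "sub": every pixel of the original lands in exactly one piece.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
parts = split_bin2(frame)
print([len(p) for p in parts])  # [2, 2, 2, 2] - each piece is 2x2
```

Stacking all four pieces of every sub then behaves like bin x2 SNR-wise, as described above.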


6 hours ago, vlaiv said:

We take each sub and split it into 4 subs without changing a single pixel. We take odd, odd (in vertical and horizontal) pixels of original sub and just make first "sub sub". Then we take odd, even pixels (again in vertical and horizontal) and we create second "sub sub", and so on - even, odd and even, even

Thanks. Very ingenious. What I'm now wondering is how to make the pixel selections you suggest...

Olly


22 hours ago, johnturley said:

The results Astronomy Tools comes up with might be meaningful for deep sky imaging, but they are nonsense when it comes to planetary imaging.

Rather than using a Barlow with my Esprit 150, it suggests that under OK seeing conditions I should actually be using a focal reducer with my ZWO ASI 462 planetary camera (pixel size 2.9µm).

John

Nonsense for deep sky too.


I would add to the discussion, and specifically to @vlaiv's post above regarding the upper limits:

On 09/11/2022 at 11:54, vlaiv said:

with <100mm scopes - limit yourself to widefield imaging as you'll be limited to say about 2"/px

100mm - 150mm - upper sampling limit would be 1.8"/px

150mm - 200mm - 1.5"/px

above 200mm - 1.3-1.4"/px

One can attempt 1"/px only on the best of nights, using a very good mount (RMS 0.5" or less).

In all reality - odds are you won't get an image that has detail below 1"/px in 99.99% of conditions / equipment.

That even if you do have a 200mm scope, you might want to prepare for 2'' actual imaged resolution, or worse if the conditions are really bad. So this should be taken as the upper reasonable limit (as it was written - but someone might still interpret it as a target rather than a limit).

My setup is currently a 200mm Newtonian with a quality coma corrector (TeleVue Paracorr) and a decent mount, the AZ-EQ6, which guides at around 0.6'' RMS. The guiding performance could probably be improved to 0.5'', maybe 0.4'' on a very good night, but I'm not so sure about that - mine is not a very good unit when it comes to RA worm roughness and DEC backlash.

With this setup I am getting 3.5'' FWHM stars at least 3/4 of the time, better than 3'' FWHM maybe half the time, and 2.5'' FWHM or better (required for 1.5'' actual resolution) maybe 1/4 or 1/5 of the time - so not really an option for building up long integrations. This suggests my usual sky conditions make a 2.2'' imaging resolution the most convenient target (which I can do at bin x3).

The variation is mostly in seeing, which I have found to be at 2'' FWHM or slightly worse when transparency is good, and 1.5'' or lower when transparency is very bad and clouds are about to roll in - so building up a long integration from these poor-transparency, low-SNR subs would take years.

So my take on this is that if one wants a realistic target resolution, it's much better to aim up towards 2'' per pixel with your typical 8-inch Newtonian that runs fairly well, rather than towards the upper limit.


1 hour ago, ollypenrice said:

Thanks. Very ingenious. What I'm now wondering is how to make the pixel selections you suggest...

Olly

I'm sure it can easily be done.

I think there is a script for Siril that does something similar - split debayer. I'm sure the same thing can be done for regular subs, and that it can also be done in any software that has scripting support - like PI or ImageJ.

I've written a plugin for ImageJ that is capable of doing just that (among other things).


Look at the "split CFA" type command in the software of your choice.

It is intended to debayer OSC data - or rather, to extract the colour information into separate R, G and B subs instead of creating a combined image via interpolation - but in essence it performs a bin x2 type operation on mono data, splitting a sub up into 4 subs.

That is immediately available for testing purposes.


6 minutes ago, ONIKKINEN said:

So my take on this is that if one wants a realistic target resolution, it's much better to aim up towards 2'' per pixel with your typical 8-inch Newtonian that runs fairly well, rather than towards the upper limit.

Indeed - seeing plays a major part, and depending on the skies, one might find that a lower sampling rate is needed more often due to poor seeing.

Interestingly enough, slightly better results are obtained with NB filters. NB is a bit less sensitive to seeing, especially at longer wavelengths like Ha.


The most convenient way to run the split debayer command in Siril on a set of images is to open a sequence of calibrated, but not debayered, subs and just write seqsplit_CFA nameofyoursequence on the command line. You choose the sequence name when importing the subs into Siril initially, using the "convert" tab. If you use symbolic links they won't take any storage space, so the whole operation is really fast. Siril is really fast anyway - calibrating and splitting 500 subs into 2000 mono split subs takes maybe 5 minutes.

The resulting images are named CFA_0, CFA_1, CFA_2, CFA_3. The user has to figure out for themselves which two of these are from the green pixels, which is red and which is blue. For my IMX571 camera, CFA_0 and CFA_3 are green, CFA_1 is red and CFA_2 is blue - so Siril splits it as GRBG rather than RGGB for some reason.


1 minute ago, ONIKKINEN said:

The resulting images are named CFA_0, CFA_1, CFA_2, CFA_3. The user has to figure out for themselves which two of these are from the green pixels, which is red and which is blue. For my IMX571 camera, CFA_0 and CFA_3 are green, CFA_1 is red and CFA_2 is blue - so Siril splits it as GRBG rather than RGGB for some reason.

Here we are talking about split binning on mono data, so the naming won't really matter, as all the subs will still be mono.

The only drawback of the split_cfa approach is that it is limited to bin x2 (or bin x4 if bin x2 is repeated on the output of the first bin x2). It can't do a bin x3 split.


3 minutes ago, vlaiv said:

Here we are talking about split binning on mono data, so the naming won't really matter, as all the subs will still be mono.

The only drawback of the split_cfa approach is that it is limited to bin x2 (or bin x4 if bin x2 is repeated on the output of the first bin x2). It can't do a bin x3 split.

Right - it seems I didn't read the whole discussion properly. I just tested the command with already-split (so mono) data and the command still runs. The files are still named CFA_0,1,2,3, but now of course the names don't matter, since all of them are mono anyway, so one could throw them all together.

On the bin x3 issue, I am experimenting with the best way to reach that - or one could interpret it as the best way to reach bin x1.5 with OSC subs split to mono at half resolution, which locks away bin x3 from the raw data (this also applies to wanting to bin x1.5 with a mono camera). I am experimenting with: just resampling down to 67% (or thereabouts); debayering the data instead of splitting and then binning; and lastly, simplified x2 drizzle integrating (super-resolution stacking in Siril, not real drizzle) the mono subs to 200% size and then binning that x3. I'm not sure what to make of the data I have gathered yet, so I won't say anything concrete, but it looks like either the split + drizzle x2 + bin x3 route or the debayer + bin x3 route is the best way. Resampling with a smart algorithm gives good sharpness, but SNR won't increase all that much; using a simpler algorithm goes the other way. Maybe there is a middle-ground algorithm that does a good job, but I'm not sure what that would be.

It sounds like a lot of trouble to drizzle-integrate thousands of subs, but Siril actually runs through a mountain of subs that way in not that long. Registering and drizzle stacking more than a thousand subs is a 20-40 minute process on a decent PC. Compare that to APP (the other stacking software I use), which takes maybe 5 hours to do all that WITHOUT the drizzle.
