
Let's talk about (under)sampling


Martin Meredith

Recommended Posts

As a newcomer to astronomical image acquisition, I realised just now that there's something odd about the way that under-sampling is considered for astro images. This something odd is almost certainly my lack of understanding, hence this post.


The Nyquist theorem says that the sampling frequency must be at least twice the highest frequency present in the signal. If we're interested in capturing this level of spatial detail, we must ensure that the f/ratio and camera pixel scale are well matched.


Oversampling doesn't buy us any new information but is otherwise harmless.


Undersampling, on the other hand, loses information that is potentially captured by the optics. Berry and Burnell, in their Handbook of Astronomical Image Processing, say that "undersampling is not necessarily a bad thing since it may be a tradeoff necessary to cover a large field of view".


It is at this point that I have a problem. There are really two things we need to take from Nyquist regarding undersampling. One is the pixel-scale point mentioned above -- undersample and you lose resolution. But a second consequence of undersampling is aliasing, which introduces (often non-random) noise into the acquired image. Why? Because energy doesn't just disappear when sampled; it is squeezed into different regions of the spatial frequency spectrum.
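As a concrete (audio-flavoured, purely illustrative) sketch of that squeezing: a tone above the Nyquist frequency yields exactly the same samples as one folded below it, so its energy reappears at a lower frequency rather than vanishing:

```python
import numpy as np

fs = 10_000                  # sampling rate, Hz (illustrative numbers)
n = np.arange(64)            # sample indices

# a 7 kHz tone is above the 5 kHz Nyquist frequency, so its energy cannot
# just disappear: the samples it produces are *identical* to a 3 kHz tone
tone_7k = np.cos(2 * np.pi * 7_000 * n / fs)
tone_3k = np.cos(2 * np.pi * 3_000 * n / fs)

assert np.allclose(tone_7k, tone_3k)   # indistinguishable once sampled
```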


The way this is dealt with in audio is to use an analogue lowpass filter prior to sampling to ensure that there is practically no energy at frequencies above half the sampling rate. Note that this has to be done to the analogue signal: once digitised, it is already too late.


So I got to thinking about where the effective analogue LP filter is in the astronomical image acquisition chain. Two possibilities are (i) the atmosphere itself and (ii) the aperture of the objective lens. 


Ignoring seeing, the aperture determines the maximum resolution, meaning that smaller apertures act as LP filters with a lower spatial cutoff frequency. Hence any undersampling at the CCD is not a problem -- no aliasing. But what if you couple an undersampled CCD (if you see what I mean) with a larger aperture? In that case, I assume aliasing is a potential problem.
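To put rough numbers on the aperture-as-lowpass-filter idea, here is a back-of-envelope sketch (my own rule of thumb, with illustrative values; it ignores the detailed MTF and any central obstruction):

```python
ARCSEC_PER_RAD = 206_265  # arcseconds in a radian (approx.)

def rayleigh_limit_arcsec(aperture_m: float, wavelength_m: float = 550e-9) -> float:
    """Rayleigh resolution limit of a circular aperture, in arcseconds."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

def max_pixel_scale(aperture_m: float) -> float:
    """Coarsest pixel scale (arcsec/pixel) giving two pixels per resolution
    element -- the simple Nyquist-style rule of thumb."""
    return rayleigh_limit_arcsec(aperture_m) / 2

# e.g. an 80 mm aperture resolves ~1.7", so ~0.87"/px or finer;
# a 250 mm aperture would need ~0.28"/px to avoid undersampling the optics.
```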


In a nutshell what I'm asking is: is aliasing ever a problem in realistic astronomical image acquisition? 


Feel free to correct any mistakes in the above chain of 'reasoning'  :smiley:


Martin
Link to comment
Share on other sites

" In a nutshell what I'm asking is: is aliasing ever a problem in realistic astronomical image acquisition? " 

I have to admit I've never seen aliasing, if what you mean by aliasing is a secondary or interference signal. All I've ever noticed is blur. You just don't get that lovely sharpness in the stars or dust filaments.

I almost always undersample and dither because I want wide-field images rather than pin-sharp ones, and have no desire to mosaic for weeks on end :)

So what I'm saying is, " No idea. Sorry "

Dave.

Link to comment
Share on other sites

The only time I've given aliasing a thought is in processing one-shot colour images. The software I used (AstroArt) has an adjustable anti-aliasing filter which I gormlessly left at default because there didn't seem to be any need to change it.

Is oversampling harmless? I wouldn't have said so. It deprives overly small pixels of light, causing a reduction in S/N ratio.

Olly

Link to comment
Share on other sites

" In a nutshell what I'm asking is: is aliasing ever a problem in realistic astronomical image acquisition? " 

I have to admit I've never seen aliasing, if what you mean by aliasing is a secondary or interference signal. All I've ever noticed is blur. You just don't get that lovely sharpness in the stars or dust filaments.

I almost always undersample and dither because I want wide-field images rather than pin-sharp ones, and have no desire to mosaic for weeks on end :)

So what I'm saying is, " No idea. Sorry "

Dave.

Thanks for reading. The thing is, aliasing won't necessarily be obvious. All the common examples are just illustrative, e.g. using stripes to show Moiré patterns, but once high spatial frequency energy is in the signal it has to be assigned to the CCD photosites somehow -- it is there unless it is filtered out before hitting the sensor. It is typically only obvious (as interference) when the high-frequency signal is narrowband (e.g. in audio terms, a sine wave), but broadband "noise" at high frequencies will get mapped to broadband noise on the sensor too.

I'd like to know the extent of the problem because, were it to be an important "noise" component, it would be possible to deal with :laugh: , at least at the rule-of-thumb level -- "simply" a case of estimating the effective lowpass filtering characteristic of the image chain prior to the sensor, and then using that cutoff frequency as the minimum pixel sampling rate. I suspect it is very much tied up with seeing, and becomes more of an issue when seeing is very good.
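The rule of thumb I have in mind might look something like this sketch (speculative on my part: it simply treats the seeing FWHM as the effective pre-sensor cutoff and asks for two pixels across it):

```python
# Speculative rule of thumb: take the seeing FWHM (arcsec) as the effective
# pre-sensor lowpass cutoff and require at least two pixels across it.
def min_sampling_arcsec_per_px(seeing_fwhm_arcsec: float,
                               samples_per_fwhm: float = 2.0) -> float:
    return seeing_fwhm_arcsec / samples_per_fwhm

# 3" seeing -> 1.5"/px suffices; in excellent 1" seeing the same camera
# needs 0.5"/px, which is exactly where aliasing would start to matter.
```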

Martin

Link to comment
Share on other sites

The only time I've given aliasing a thought is in processing one-shot colour images. The software I used (AstroArt) has an adjustable anti-aliasing filter which I gormlessly left at default because there didn't seem to be any need to change it.

Is oversampling harmless? I wouldn't have said so. It deprives overly small pixels of light, causing a reduction in S/N ratio.

Olly

Thanks. I'm guessing that AstroArt's antialiasing refers to post-acquisition down-sampling? Any sensible default would work, I'm sure. But there's nothing you can do in software to remove aliasing present at the time of capture, according to my understanding of the theory.

We often oversample in audio without decreasing SNR but then normally we're not dealing with faint signals nor noisy reads. I suppose that's the difference here.

Martin

Link to comment
Share on other sites

Thanks. I'm guessing that AstroArt's antialiasing refers to post-acquisition down-sampling? Any sensible default would work, I'm sure. But there's nothing you can do in software to remove aliasing present at the time of capture, according to my understanding of the theory.

We often oversample in audio without decreasing SNR but then normally we're not dealing with faint signals nor noisy reads. I suppose that's the difference here.

Martin

This is really over my pragmatist's head, but some of the technically minded imagers are doubting the application of the audio-based Nyquist theorem to the visual situation in astronomy. I'm not competent to comment, only to pass on the existence of the debate.

Olly

Link to comment
Share on other sites

There are many that say the audio theory doesn't fully translate to the visual.

For what it's worth, and this is me making things up as I go along, the anti-aliasing you seek is probably built into a mono CCD already. An OSC could benefit, but just how blurred do you want your images?

I would also suspect that you'd need to be fairly severe with the undersampling for it to show. If it does show, then stand further away from the laptop screen :) If it were a great and known problem we would all know about it, I'm sure. There would be software companies trying to sell us stuff to cure it, other than the normal lowpass stuff we have now?

Dave.

Edit. Knocking around the back of my skull was the idea that a camera maker had used a wacky anti-aliasing filter that wasn't glass-based. I've had a look around, and the Pentax K-3 uses its ultrasonic anti-dust mechanism to blur the chip! Knew I'd seen one.

Link to comment
Share on other sites

This is really over my pragmatist's head, but some of the technically minded imagers are doubting the application of the audio-based Nyquist theorem to the visual situation in astronomy. I'm not competent to comment, only to pass on the existence of the debate.

Olly

Thanks - that's very interesting to know. Actually, Nyquist/aliasing is only an issue for a regularly-sampled grid. However, the effect of sampling irregularly (more like the eye) is to spread aliased energy around the spectrum, effectively reducing contrast anyway (i.e. swapping Moiré bands for low-intensity noise).

There is a case to be made that atmospheric disturbances produce something like irregular sampling. Clearly, the CCD remains regular, but the image itself is slightly distorted from moment to moment, which might be considered as equivalent to irregular sampling. In that sense the imagers may well be right.

Still, it's fascinating to compare the audio and image domains.

Martin

Link to comment
Share on other sites

There are many that say the audio theory doesn't fully translate to the visual.

For what it's worth, and this is me making things up as I go along, the anti-aliasing you seek is probably built into a mono CCD already. An OSC could benefit, but just how blurred do you want your images?

I guess that makes most sense!  :smiley:

Martin

Link to comment
Share on other sites

The reason you might beat the Nyquist limit in practice might be spelled deconvolution (and similar algorithms). Deconvolution was used to recover resolution in Hubble pictures before the corrector lenses were installed, and (without bothering to look into the maths) I can see where oversampling might help the deconvolution algorithm get a good result. Like Olly said, however, you'd better really want that resolution gain, or be patient, because the loss in light gathering will be severe.

Link to comment
Share on other sites

In audio you would be making one set of samples of the signal. In astro-imaging that is rarely the case. Multiple exposures are taken and combined to get sufficient SNR. Inevitably that is coupled with dithering, i.e. re-pointing the scope between exposures, whether intentionally or through poor guiding, field rotation, etc. Basically that is a great anti-aliasing mechanism.

To prove the point, conversely, you can use drizzling to recover additional resolution from a set of under-sampled and dithered images. In practice most amateurs would only do so on really bright targets (e.g. the Moon) as you'd need an awful lot of samples on fainter objects to get satisfactory SNR in the drizzled image.
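The averaging-out effect of dithering on an undersampled pattern can be seen in a toy 1-D simulation (illustrative only: point sampling, a pure above-Nyquist tone, no noise, and made-up numbers; real stacking would also register each frame on the stars, which cannot re-align the above-Nyquist component):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(200)   # pixel positions
f = 0.7              # scene frequency, cycles/pixel -- above the 0.5 cy/px Nyquist limit

def exposure(offset):
    # point-sample the scene with the pixel grid shifted by `offset` pixels
    return np.cos(2 * np.pi * f * (x + offset))

single = exposure(0.0)   # one frame: the tone folds down and appears at full strength
stack = np.mean([exposure(rng.uniform(0, 50)) for _ in range(200)], axis=0)

assert single.std() > 0.5   # aliased pattern is strong in a single frame
assert stack.std() < 0.2    # dithering + averaging suppresses it
```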

Also worth bearing in mind that OSC cameras (specifically DSLRs) usually have an anti-aliasing filter in front of the sensor (but not so for proper mono CCDs).

Link to comment
Share on other sites

 
You will occasionally run into claims that for FWHM <2 pixels, images are undersampled because the Nyquist theorem. The Nyquist criterion doesn’t really have much to do with CCD data as it is based on the idea of sampling a function in discrete intervals with delta functions (unlike CCDs which integrate over contiguous intervals).
 
Above is an interesting quote from a course run at Lick Observatory.
 
NigelM
Link to comment
Share on other sites

The reason you might beat the Nyquist limit in practice might be spelled deconvolution (and similar algorithms). Deconvolution was used to recover resolution in Hubble pictures before the corrector lenses were installed, and (without bothering to look into the maths) I can see where oversampling might help the deconvolution algorithm get a good result. Like Olly said, however, you'd better really want that resolution gain, or be patient, because the loss in light gathering will be severe.

Thanks for the Hubble pointer. There's some interesting material there on the benefits of defocusing, which it is claimed helped with the original spherical aberration and, as a nice side effect, performed some anti-aliasing.

But just to re-emphasise: my concern is not the loss of resolution caused by undersampling. My concern is the additional aliased energy which gets distributed about the image in the absence of an analogue lowpass filter. The two are separate. Oversampling doesn't solve the aliasing problem completely. According to the theory, no processing can remove the effects of aliasing once digitisation has taken place. Note in passing that what computer graphics calls anti-aliasing is about mitigating the obvious visual appearance of aliasing, and doesn't actually remove aliased energy (except by accident).

Even in the case of effectively irregular sampling (let's say caused by the kind of things IanL mentions) any energy from higher spatial frequencies must still be in the image unless removed before digitisation. I don't see a way around this. It is another "noise" source which would be interesting to quantify with real images. What we need to know is how much energy at high spatial frequencies exists in typical astronomical signals. Unfortunately I don't have my copy of Suiter's Star Testing book with me but I've a feeling this is covered there. I'm prepared to accept that there may not be much energy there and so no need for lowpass filtering prior to acquisition, but it seems to be something that may -- ironically -- degrade performance in good seeing.

There's also an interesting thread here which both clarifies and muddies the waters further.  :smiley:  

Martin

Link to comment
Share on other sites

To be honest, I think you're looking for a problem that exists in theory but doesn't manifest itself in practice (at least not in any obvious way). Here is my take on it; I'd be interested to see what others think:

- First, let's assume we take a single set of full samples: that is, a single exposure with a pixel scale at (or finer than) that required by Nyquist to fully sample the resolving power of your optics. Also, the image is taken in space (no seeing or other effects to limit the ability of the imaging system to achieve its maximum potential resolution).

- In that case, would there be aliasing of frequencies higher than half the sampling rate of the system? No. Those higher frequencies are simply not resolved by the optics. In other words, the optics are an anti-aliasing filter that precisely cuts off any frequency higher than the resolving power, hence no aliasing occurs. Hopefully that is uncontroversial?

- But you are actually talking about what happens when you under-sample, not fully sample:

- If we take the same set of optics as above, in space, etc., but take a single image with a camera that gives a pixel scale coarser than Nyquist requires, would there be aliasing? Theory says yes, there should be. Would we see it in practice? I don't know. Whether it was visually apparent would rather depend on the target, I suspect, but certainly it ought to be measurable in some way. Those higher frequencies can't just disappear, so they must be aliased to lower ones. I guess there must be professional papers out there on this topic?

- Space telescopes are not the real world for most of us though, so what else comes in to play for mere mortals?

- First we have to consider seeing. For a single image, seeing will limit the available resolution in the form of a (spatially and temporally variable) point spread function, i.e. an anti-aliasing filter again. So any frequencies higher than the net seeing limit are filtered out before we start sampling. What that means is that we can have a camera that is undersampling vs. the resolving power of the optics, but still fully sample the scene, provided our pixel scale puts at least two pixels across the current seeing disc. We're not eliminating the aliasing issue here, just changing the definition of 'fully sampled' to the correct one for the real world. Again, hopefully this is uncontroversial?

- As an aside, we also have to consider tracking/guiding accuracy if we are taking long exposures (i.e. anything other than very short planetary/lunar/solar images). For simplicity, let's assume we can track/guide to the seeing limit for the duration of the exposure. If not, again we'd just have to redefine 'fully sampled' to mean the net resolution afforded by our tracking/guiding accuracy.

- So we're only going to be concerned with aliasing if we have optics that can resolve to the current seeing limit. With a very small aperture or very good seeing, we might not be able to do so. Again the optics become a highly effective anti-aliasing filter if they cannot resolve to the seeing limit.

- In practice most of us will be using optics that can resolve more than the current seeing conditions will permit. If our camera's pixel scale gives us fewer than two samples across the current seeing disc, aliasing through under-sampling should be a concern. So why isn't it?

- Well for one thing I suspect in a single image, aliasing is often not going to be that visually apparent (I am fully prepared to be shown examples that prove otherwise of course).  Either the scene is such that it is not particularly obvious to the eye, or in a single exposure the SNR may be fairly low.  Picking out subtle aliasing effects might be difficult in such images. Bear in mind that there are very few regular/repeating patterns at any scale in astronomical images.  I'd contend that visually distinguishing minor variations in brightness due to (say) nebulosity vs. variations of a similar scale and magnitude due to aliasing effects might be something of a tall order.

- If I was going to look for aliasing I would start with single, fully sampled exposures of something bright like the Moon or Sun. It might be interesting for someone who has such images to take a look and see if any aliasing is visually apparent. That said, I suspect some kind of image analysis would be needed to identify/characterise it (e.g. comparing fully sampled and under-sampled images of the same subject). Bear in mind these images would have to be taken at a sampling rate of at least 2 x seeing and with a mono CCD camera. An OSC camera won't do, as the Bayer pattern filters would basically act as an anti-aliasing filter for the colour component, and certainly not an unmodified DSLR, as they have anti-aliasing filters built in.

- I don't think it is right to dismiss anti-aliasing in computer graphics as being irrelevant to this topic. In computer rendering, aliasing occurs because the sampling resolution is determined by the pitch of the pixels on the display screen. The scene itself has (potentially) infinite frequencies within it, as it is just a mathematical model. The Moiré patterns and jagged lines that appear on a low(er) resolution display are not some different kind of 'aliasing'; they are exactly the same as the aliasing one gets with optics, cameras and a real-world scene, i.e. higher frequencies manifesting themselves as lower frequencies due to the sampling limit of the display device vs. the same for the sampling limit of the capture device.

(Youngsters may not be particularly aware of this issue, since modern displays are very high resolution, meaning your Mk1 eyeball performs as an anti-aliasing filter unless you jam it right up against the screen.  Also most modern GUIs anti-alias everything like fonts and UI elements, not just fancy 3D graphics.  In ye-olden days of the green screen terminal and 4 colour CGA graphics, etc. we were very aware of aliasing effects. As an aside, aliasing on computer screens only became a problem with the advent of raster graphics displays as these use relatively large dots to make up the pixels.  Older vector graphics displays just scanned the electron beam on to the phosphor display in a continuous line or curve.  The sampling resolution of a vector display was the size of the individual phosphor molecules on the screen, vs. the really large dots created by the dot mask for a raster display.)

- Anyway, in computer rendering, anti-aliasing is most usually performed by multi-sampling. A simple method is to render an image at (say) double the intended display resolution and then down-sample it to mitigate the appearance of aliasing effects (Moire patterns and jagged line edges). The higher the rendering resolution vs. the display resolution, the better aliasing artefacts are mitigated. In practice various short cuts may be employed to make the process less computationally expensive so you can get nearly the effect of 4 x down-sampling without having to render at the full 4 x resolution for example. The key point is that you sample the scene at a higher resolution than the one at which you intend to display it, and the subsequent down-sampling process acts as an anti-aliasing filter. The upshot is that you reduce the appearance of aliasing by averaging it out through down-sampling.
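The render-high-then-downsample idea sketches easily (a toy stripe pattern and made-up resolutions, purely to show the mechanism):

```python
import numpy as np

def render(width, height, scale=1):
    # toy "scene": fine diagonal stripes near the display's Nyquist limit
    y, x = np.mgrid[0:height * scale, 0:width * scale] / scale
    return (np.sin(2 * np.pi * 0.45 * (x + y)) > 0).astype(float)

def box_downsample(img, k):
    # average k x k blocks -- the box filter acting as the anti-aliasing step
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

direct = render(64, 64)                          # one sample per display pixel
aa = box_downsample(render(64, 64, scale=4), 4)  # 4x supersampled, then averaged

# the direct render is hard black/white; the anti-aliased version contains
# intermediate grey levels that soften the jagged/Moiré artefacts
assert set(np.unique(direct)) == {0.0, 1.0}
assert np.any((aa > 0) & (aa < 1))
```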

- Dithering and stacking (integration) is a precise analogy of this process. We have a scene (coming through the telescope optics) that contains frequencies higher than our (under-)sampling rate. As previously discussed, a single image will contain those frequencies aliased to lower ones. But by taking multiple images we are in effect multi-sampling the scene: any one image still shows aliasing effects, but by moving the scope a bit and taking a new image (set of samples) we get a different realisation of the aliasing of those higher frequencies.

- Now if, when we dither, the grid of camera pixels moves a whole number of pixels across the scene, we gain nothing. Exactly the same aliasing effect will appear in the next image, since we sampled it in the same way. Conversely, if we re-point the scope so that the camera pixels move some whole number of pixels plus a fraction of a pixel, we will get a different aliasing effect in the next image. Repeat multiple times. The stacking process then combines all those multi-samples and displays them at the original sampling resolution, basically averaging out the aliasing effects in a manner analogous to the graphics example above.

- Sub-pixel dithering is something you might deliberately do with a professional or high-end system. In a low-end system the inherent inaccuracies in guiding and tracking, plus field rotation due to less-than-perfect polar alignment, will make it impossible to precisely align the camera pixels with the scene from exposure to exposure. (Put it this way: if I asked you to do that, how many of you are confident that you could, for 20 or 30 exposures?)

- I contend that there is no difference between 'visually mitigating the appearance of aliasing' in computer graphics and doing the same by dithering in imaging.  The upshot is the same; of course the aliased 'energy' (computationally modelled in one case, measured in the other) remains in both cases - you can't destroy it, all you can do is mitigate the appearance of aliasing (or live with it if that is more valid for your particular objective).

- Finally, of course, how many of us actually display our images as they came from the camera? Stacking aside, a whole multitude of processing operations is performed -- noise reduction, dynamic range compression, etc. -- which must have a significant effect on the visibility of any residual aliasing effects after the stacking process. Also, most images are down-sampled for display on the web or in print.

Link to comment
Share on other sites

Isn't  the fact that stars come out looking square essentially what is traditionally called aliasing (think blocky text)? I reckon that is about the only effect in astro work.

One might also wonder whether you really are sampling a continuous distribution in the first place. In astro work we are often talking about detecting individual (or a handful of) photons. They come ready discretised (if we ignore quantum mechanics)!

NigelM

Link to comment
Share on other sites

Isn't  the fact that stars come out looking square essentially what is traditionally called aliasing (think blocky text)? I reckon that is about the only effect in astro work.

One might also wonder whether you really are sampling a continuous distribution in the first place. In astro work we are often talking about detecting individual (or a handful of) photons. They come ready discretised (if we ignore quantum mechanics)!

NigelM

Blocky/jagged edges on what are in fact straight lines, curves, etc. is indeed one manifestation of aliasing and square stars definitely fits the bill.

The other common manifestation, which is what I was thinking about more, is the Moiré pattern. You might have seen it on TV when the presenter wears an outfit with fine lines and you get that shimmering/wavy sort of effect. It is due to a high-frequency pattern (e.g. fine stripes) being under-sampled by the sensor. It is most obvious when the rows and columns in the sensor are at a different angle to the stripes, e.g. in this series of pictures of a parrot's feathers:

[Images: the same photograph of parrot feathers at three decreasing resolutions, showing increasingly strong Moiré interference -- Moire_on_parrot_feathers.jpg]

(From Wikimedia Commons, Attribution "Fir0002/Flagstaffotos").

The effect is twofold. Firstly, what are actually fine, straight or singly-curved lines running from the centre of the feathers diagonally to the edges appear rather jagged and blocky, especially in the lowest-resolution image. Secondly, an interference pattern manifests itself as multiple curving darker bands on the feathers that don't exist in reality. As you can see, different sampling rates in the three images change both the "blockiness" of the fine lines and the interference patterns.

Indeed, rotating the angle of the camera to the scene (or the parrot to the camera) would cause the interference pattern to shift and change. You can often see this on TV as the presenter moves their stripy clothing around. Note that the lines in the sensor and in the scene can be perfectly parallel and you will still get interference in the form of alternating light and dark bands, at a frequency that depends on the relative frequencies and phases of the stripes and the sensor elements.

The important point is that the same thing is happening with the piece of wood the parrot is sitting on. It's not special, and is subject to the same aliasing effects as the feathers. You don't see it, though, because there are no/few straight lines and no repeating high-frequency patterns to create the interference effect. In essence the OP is asking if the same sort of thing happens in under-sampled astro-images, and if not, why not?

My 2p's worth is that it probably does happen in single images if under-sampled, but it would be equally hard to see, since astro-images are more like the wood than the feathers. Furthermore, assuming it is a stuffed parrot and we re-oriented the camera to take several images which we then stacked, we should reduce if not eliminate the aliasing artefacts. (Easy when you have stars as reference markers to perform appropriate transforms to the images; harder with a parrot.)

Link to comment
Share on other sites

Ian: thanks for those very considered postings which I think will be very useful to many -- certainly to me. I agree with nearly all the points you make.

One nuance for me remains the way the term anti-aliasing is used in computer graphics. It may well be equivalent to dithering. However, it seems to be the case that no anti-aliasing algorithm can do the job of an analogue filter designed to perform anti-aliasing, and we need to distinguish the two. The former is approximate (and might even add more distortion elsewhere). To this end, I came across an interesting quote in the original Drizzle paper:

"Thus, it appears that no additional measurable astrometric error has been introduced by Drizzle. Rather, we are simply observing the limitations of our ability to centroid on images that contain power that is not fully Nyquist sampled even when using pixels one‐half the original size." 

which I take as implying that energy from higher spatial frequencies is still present in the image and there's not a lot that Drizzle can do about that, but at least Drizzle doesn't introduce any new distortions. If my interpretation is right, that seems to me like a realistic summary of what the better "antialiasing" algorithms can achieve.

My main interest is not in the overt visual appearance of the excess energy that we have allowed (admittedly, perhaps rarely) to enter the digitised data. I'm pretty sure that if it were so visually obvious in astro images it would have been recognised long ago. My interest is exactly captured by your last post regarding the wood and not the parrot. What if that extra energy is essentially adding relatively unstructured noise to the image? If so, we might not notice, and it is difficult to predict what the effect of dealing with it would be. I say relatively unstructured because if it were uniform, we could presumably handle it by subtraction. It is highly unlikely to be either uniform or simple (like the cases that show up as Moiré patterns). Instead it would be a function of the continuous spatial frequency spectrum above the Nyquist rate.

It seems quite likely to me that there is a significant amount of high spatial frequency energy in the real signal (ignoring any lowpass filtering effects of aperture and seeing). Perhaps this is simplistic, but compare the sort of image the 'average' amateur might take of say M33 with a Hubble image. The latter surely possesses more detail at high spatial frequencies, and from the point of view of the source of the signal that detail corresponds to real structures that the amateur image has no hope of capturing. Increasing resolution might result in shifts of energy to higher spatial frequencies as unresolved, large "phantom" objects get resolved into real objects. Think globulars -- perhaps the best place to look for aliasing.

I take the point about all the other processing that is done to images (stretching etc) but the difference here is that, if aliasing were to be demonstrated as a real problem, we would be talking about it in the same vein as darks, flats and biases i.e. things which we  know how to deal with and for which there is a well-understood solution.

Let's take an audio analogy (I know not all accept the equivalence, but I remain to be convinced there is any fundamental difference). The appearance of aliased energy in speech signals can be very subtle. But it is possible to demonstrate that not handling aliased energy can make a difference to estimates of properties such as fundamental frequency or vocal tract length. Given that in audio we understand the theory and consequences of undersampling, there is no excuse ever to allow such a thing to contribute to measurement error -- after all, there are plenty of other contributors that we don't fully understand. [As an aside, you might be wondering why aliasing ever occurs, since it is so simple to deal with in audio. It's worth stating that aliasing is a problem not just at the point where the signal is converted from analogue to digital, but at any point where it is downsampled, including from digital to digital, unless the downsampling is handled correctly. It is sometimes easy to forget to do this, speaking for myself.]
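To illustrate the digital-to-digital point in that aside, here is a minimal numpy sketch (with a hand-rolled windowed-sinc lowpass; the numbers are arbitrary): naive 2:1 decimation folds an above-band tone into the new band, while filtering first removes it:

```python
import numpy as np

n = np.arange(4096)
tone = np.cos(2 * np.pi * 0.35 * n)   # 0.35 cycles/sample: fine at the original rate,
                                      # but above the 0.25 Nyquist of the halved rate

# naive 2:1 decimation -- the tone folds down to 0.3 cycles/sample and survives
naive = tone[::2]

# hand-rolled windowed-sinc lowpass, cutoff 0.25 cycles/sample, applied *first*
taps = 101
k = np.arange(taps) - taps // 2
h = 0.5 * np.sinc(0.5 * k) * np.hamming(taps)
filtered = np.convolve(tone, h, mode="same")[taps:-taps][::2]  # trim filter edges

assert naive.std() > 0.5        # aliased energy is still there
assert filtered.std() < 0.05    # removed before the sample-rate change
```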

Having said all of the above, I'm leaning to the view that it may not be a big problem in astronomical imaging simply because I'd have expected to see more mention of it, and some web trawling hasn't thrown up too much, and not all of that consistent. But I'm going to stick my neck out and predict that there may be a few rare situations (good seeing, large aperture, pixels a bit too large) where just a slight amount of defocusing will lead to a better image!

Martin

Link to comment
Share on other sites

The Nyquist criterion doesn’t really have much to do with CCD data as it is based on the idea of sampling a function in discrete intervals with delta functions (unlike CCDs which integrate over contiguous intervals).
 
Above is an interesting quote from a course run at Lick Observatory.
 
NigelM

This quote above is very important.  You really need to understand its implications before going further or you could really confuse yourself.  

The fact that you're using an "accumulating square-top" sample, rather than an ideal delta, adds a pretty aggressive spatial LPF to the incoming signal before it is sampled. In the 1D signal world, imagine the difference between a lowpass track-and-hold before conversion and a straight instantaneous conversion. (1D signal processing and 2D image processing work with exactly the same maths; that's what makes maths such a lightweight and useful toolbox to have.) This is such a big effect that you have to include it in your thinking before deciding whether you have a problem.
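To put a rough number on that square-top response, here's a numpy sketch, assuming a 100% fill-factor pixel so each sample integrates over exactly one pixel pitch; the box average behaves as a sinc in the frequency domain.

```python
import numpy as np

# Averaging over one pixel width is convolution with a boxcar, whose
# magnitude response is |sinc(f / fs)|, fs being the pixel sampling rate.
# (np.sinc(x) is the normalised sinc, sin(pi*x)/(pi*x).)
def box_response(f, fs):
    return np.abs(np.sinc(f / fs))

fs = 1.0  # normalised pixel pitch
print(box_response(0.5 * fs, fs))  # at Nyquist: 2/pi ~ 0.64, about -3.9 dB
print(box_response(1.5 * fs, fs))  # first alias band: ~0.21, about -13.5 dB
```

So the pixel aperture does attenuate frequencies above Nyquist; whether that counts as aggressive depends on how much rejection you need.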

Also, frequencies above the Nyquist frequency are not randomly redistributed when sampled: they "fold back" at the Nyquist frequency. So if you sample at Fs and have a signal at (Fs/2)+n, it will appear after sampling at (Fs/2)-n. So if the problem exists, it should be fairly easy to locate.
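That fold-back rule is easy to encode (plain Python; the 6 kHz sample rate is just an illustration):

```python
def alias_frequency(f, fs):
    """Apparent frequency of a component at f after sampling at rate fs."""
    f = f % fs               # the sampled spectrum repeats every fs
    return min(f, fs - f)    # fold back about the Nyquist frequency fs / 2

# A component at (Fs/2) + n shows up at (Fs/2) - n:
print(alias_frequency(3500, 6000))  # Fs/2 = 3000, n = 500 -> 2500
```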

Mike



I'm not sure mathematics is a lightweight toolbox unless you're dealing in approximations. So let's start with one: it is well known that unweighted averaging is a pretty poor approximation to a lowpass filter and has side-effects, which is why in audio it is very rare to see rectangular windows. Sure, they have a main lobe near DC, but they also have only moderate out-of-band rejection and some rather large sidelobes. If you want to get rid of aliased energy, it is much better to design a filter to do the job than to rely on a side-effect of accumulation, in my opinion -- at least in audio. There may well be tradeoffs in astro, e.g. the difficulty of designing a suitable optical filter without affecting transmission too much.
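The sidelobe claim is easy to check numerically. A numpy sketch (30 taps, to match the downsampling factor in the example that follows; the numbers are illustrative):

```python
import numpy as np

# Magnitude response of a length-30 boxcar (plain averaging over 30
# samples). Its first null sits at f = fs/30, but the first sidelobe is
# only around 13 dB down -- the "moderate out-of-band rejection" above.
n = 30
h = np.ones(n) / n
H = np.abs(np.fft.rfft(h, 8192))       # zero-padded for a smooth curve
freqs = np.fft.rfftfreq(8192)          # normalised to fs = 1

# Peak of the first sidelobe, between the first two nulls at 1/30 and 2/30.
band = (freqs > 1 / 30) & (freqs < 2 / 30)
sidelobe_db = 20 * np.log10(H[band].max())
print(round(sidelobe_db, 1))  # -> about -13.2 (dB)
```

A purpose-designed lowpass (e.g. a windowed-sinc FIR of adequate length) can push the stopband tens of dB lower, which is the point about designing the filter rather than inheriting one from the accumulation.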

Here's an audio example. The signals are tones at frequencies of 978 and 4519 Hz with randomised starting phase, originally sampled at 200 kHz. Their frequency spectrum is shown in the upper-left plot. The other plots show different ways of downsampling by a factor of 30 (to a new Nyquist of about 3333 Hz). The upper right uses ideal delta sampling with no anti-aliasing filter; it shows very little rejection of the aliased component at around 2147 Hz. The lower left simulates the accumulation method (taking the mean of every 30 contiguous samples); we see a little more rejection, but the aliasing is still quite strong. The lower right shows the result of lowpass filtering prior to downsampling; here the rejection is much stronger. Still not perfect, but then there is no such thing as an "ideal" lowpass filter implementation.
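In case anyone wants to reproduce the first two cases without the plots, here is a numpy sketch using the same parameters (equal-amplitude tones, random phases; the 3-second duration is my own choice, picked so both tones land on exact FFT bins):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, factor, dur = 200_000, 30, 3
t = np.arange(fs * dur) / fs
x = (np.sin(2 * np.pi * 978 * t + rng.uniform(0, 2 * np.pi))
     + np.sin(2 * np.pi * 4519 * t + rng.uniform(0, 2 * np.pi)))
fs_new = fs / factor                      # ~6666.7 Hz, Nyquist ~3333 Hz

def peak_freqs(y, fs_y, npeaks=2):
    """Frequencies of the strongest spectral peaks, kept >50 Hz apart."""
    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), 1 / fs_y)
    found = []
    for i in np.argsort(spec)[::-1]:
        if all(abs(freqs[i] - g) > 50 for g in found):
            found.append(freqs[i])
        if len(found) == npeaks:
            break
    return sorted(found)

# Delta sampling (keep every 30th sample, no filter): the 4519 Hz tone
# folds back to fs_new - 4519 ~ 2147.7 Hz.
naive = peak_freqs(x[::factor], fs_new)
print([round(f) for f in naive])          # -> [978, 2148]

# Accumulation (mean of each block of 30): same alias frequency, only
# moderately attenuated by the box response.
blocks = x[: len(x) // factor * factor].reshape(-1, factor).mean(axis=1)
spec = np.abs(np.fft.rfft(blocks))
freqs = np.fft.rfftfreq(len(blocks), 1 / fs_new)
ratio = (spec[np.argmin(np.abs(freqs - (fs_new - 4519)))]
         / spec[np.argmin(np.abs(freqs - 978))])
print(round(ratio, 2))                    # alias still ~0.41 of the kept tone
```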

[Attached figure "undersamp": spectra for the original signal and the three downsampling methods]

I prefer not to discuss other people's course notes, as one has to understand the context and the fact that there is often a verbal commentary. My interpretation is that "doesn't really have much to do with" is a somewhat flexible statement rather than a mathematical one, and probably not intended as such. From what I've gleaned so far, I would put it differently: it seems fairer to say that area sampling goes some of the way towards attenuating high spatial frequencies, but in a less-than-optimal way. However, we can quantify the attenuation relative to other, more principled approaches -- and probably should.

I don't think anyone is claiming that frequencies above Nyquist are randomly redistributed. But knowing where they fold to doesn't make them easy to spot without also going to a higher resolution (which rather defeats the point).

There's a danger of self-perpetuation here too. Again drawing from audio: most practitioners looking at speech spectrograms may well think there is very little energy above 6-8 kHz in speech. Why? Because it isn't very linguistically informative ... hence that's the conventional upper plotting limit ... hence the Nyquist limit for any new recordings ... hence we never see any energy above it. But try collecting speech at, say, 44.1 kHz and, sure enough, there's plenty of energy above 6 kHz -- and without analogue LP filtering it's going to end up aliased in your recordings.

Martin

