
Oversample or undersample?



An interesting debate, so here is my two pennies' worth. I have some difficulty with the use of the Nyquist sampling criterion in this context. Nyquist basically says that if you have a function and you sample it at greater than twice the highest frequency it contains, then you can (given certain other constraints) completely reconstruct the initial waveform.

The key word here is reconstruct. Normally this is not done in a way that satisfies the Nyquist criterion. The reconstruction requires that the recovery algorithm compute an infinite weighted superposition of shifted sinc functions. These conditions can't be met in practice with a CCD image, and normally not even an approximation is used. In fact we just look at the image and rely on the way the eye/brain works to reconstruct it. This is easy to illustrate, as we tend to automatically seek an image size that is neither so magnified as to make pixelation obvious nor so reduced that we can't see the detail.
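To make that concrete, here is a minimal sketch (Python with NumPy, a one-dimensional illustrative signal rather than a CCD image) of the Whittaker-Shannon sinc reconstruction described above. Because the sum is truncated to a finite set of samples it is only an approximation, which is exactly the practical limitation being pointed out.

```python
import numpy as np

def sinc_reconstruct(samples, t_sample, t_eval):
    """Whittaker-Shannon interpolation: a sum of shifted sinc functions weighted by the samples.
    Truncated to the available samples, so only approximate (worst near the edges)."""
    # np.sinc(x) = sin(pi*x)/(pi*x), which is exactly the kernel required here
    return sum(s * np.sinc((t_eval - t) / t_sample)
               for s, t in zip(samples, np.arange(len(samples)) * t_sample))

# Example: a 10 Hz sine sampled at 25 Hz (comfortably above the 20 Hz Nyquist rate)
t_sample = 1 / 25.0
n = np.arange(64)
samples = np.sin(2 * np.pi * 10 * n * t_sample)

t_fine = np.linspace(0.2, 0.8, 500)           # evaluate away from the edges
reconstructed = sinc_reconstruct(samples, t_sample, t_fine)
truth = np.sin(2 * np.pi * 10 * t_fine)
print(np.max(np.abs(reconstructed - truth)))  # small, but not zero: the sum is finite
```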

The degree of sampling of the object, to my mind, depends on what you want to do with the image (look at it, plate solve, spectra, radial velocities etc.) and how you process it (drizzle, sharpen, smooth, convolution etc.).

To conclude, I would say that Nyquist can get you into the ballpark, but it is only a starting point, as others have proposed. I don't think there is one theoretical optimum even for an instrument above the atmosphere!

Regards Andrew


Thanks guys

A most illuminating discussion. It's definitely helped the decision making process regarding which CCDs to consider. It could easily be early 2017 before I have the funds in place to actually consider buying something, but it's given me a starting point when looking at CCD cameras at Astrofest in February.


I think there is every possibility that there will be major changes in technology coming up soon. In the nearly ten years that I've been doing this I've often read comments to the effect, 'Why worry, by next year what you buy will be obsolete anyway.' This is, I believe, totally incorrect and does not stand up to scrutiny. In truth the technology evolves remarkably slowly in our astrophotographic neck of the woods. There is precisely nothing you could take with an Atik 414 that you could not take with a 314 or a 16HR. APODs have been garnered by the Kodak 11 meg and Tak 106 since I've been imaging, and they are still appearing. There have been gains in convenience and in detail, but there has not been a development I would call exciting (since I don't have the price of a new Porsche to spend on a camera) since I started. By comparison, when I started AP it was the case (I believe; it's not my world) that no big-name fashion mag would have put a digital image on the cover. That has changed.

I'm guessing, but in the next three years we might just be ready for serious excitement. It will hang on whether or not monochrome technology is considered worthy of attention by the big boys.

Olly


Scientific CMOS looks like it might be viable for amateurs in the (near?) future, although it's certainly not cheap at the moment; for a higher-end camera with a decent (roughly APS-sized) chip we're probably talking Vauxhall rather than Porsche prices, but car prices all the same. The performance you get (i.e. ~1e- read noise, ~30,000 e- well depth, 6.5 micron pixels, 16-bit ADC and deep cooling) would be well ahead of any currently available amateur camera though. And prices are dropping all the time (you can get a smaller-chipped, lower-spec model for under $10,000 now, bargain!  :grin: ).

Paul


With your scopes, which are similar to my setup, and for under £1000 I would go for the Atik 414EX. I love it and have not found a bad review yet, just my opinion though... :)

AB

My scope setup would be changing at the same time. Whilst the Orion 80ED and 66mm scope will probably be staying, the plan is to upgrade my venerable old 8" LX200 to a new mount and a new SCT (probably a Skywatcher AZ EQ6 GT and a Celestron 9.25"). A CCD purchase would be a possible part of this upgrade.


As far as I can tell, it all goes back to the old chestnut: "The best upgrade you can buy is a dark sky."

I have yet to see a piece of technology that will guarantee a clear sky with no light pollution.

Olly has already got it right. Not necessarily about the technology (although time will tell), but about choosing where to live. ;)


Ahhh, well. There are two possibilities that come to mind.

The first is remote operation. Using someone else's observatory across the internet.

The second (and I'm out on a limb here) could be imaging in the near infra-red. I stand to be corrected, but NIR doesn't suffer much from man-made light pollution, since there's little reason to illuminate in that part of the spectrum; night-time security cameras notwithstanding, and they contribute nowhere near as much LP as their visible-light counterparts.


If you are mainly going to be using a short focal length scope (<600mm) then it seems to make sense to go for small pixels (<6 microns). If you are going to be imaging at long focal lengths then big pixels (>7 microns) seem like a good idea. If you have a range of focal length options then 6-7 microns sounds like a plan. Of course, as has been said, you can always bin small-pixelled cameras at long focal lengths. This all ignores the other big decision, which is the size of the chip!!
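For anyone who wants to put numbers on this, the usual rule of thumb is image scale ≈ 206.265 × pixel size (µm) ÷ focal length (mm), in arcseconds per pixel. A quick sketch; the focal lengths and pixel sizes below are purely illustrative:

```python
def image_scale(pixel_um, focal_length_mm, binning=1):
    """Arcseconds per (binned) pixel: 206.265 * pixel size in microns / focal length in mm."""
    return 206.265 * pixel_um * binning / focal_length_mm

# Illustrative numbers only
print(image_scale(4.54, 600))       # ~1.56"/px: small pixels on a short refractor
print(image_scale(9.0, 2350))       # ~0.79"/px: big pixels at SCT focal lengths
print(image_scale(4.54, 2350, 2))   # ~0.80"/px: or bin the small pixels 2x2 instead
```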

One thing that is sometimes ignored is consideration of how you are going to display images. If you want to produce large posters of your images then sampling rates and maximum resolution have more relevance than if you are going to be presenting online via a computer monitor. In the latter case all that optimal sampling goes out of the window because of monitor constraints. Full-resolution versions get posted, but without being able to view the whole image at once you have to zoom in on little bits. If you like inspecting noise and star roundness at the corners, then great, but this is a seriously nerdy thing to do!

APODs taken with KAF11000-chip cameras and an FSQ 106 are ten a penny and wonderful; the oversampling isn't evident on a computer monitor. Not many of us have looked at an APOD on anything other than a monitor!


Awe... sob, my life's work goes into these oversize whoppers! :grin:   I agree about the nerdiness of pixel peeping, absolutely, but I like creating large, multi-object vistas which can then be zoomed in on simply to explore the objects in a different way. I just like the idea of making long focal length widefields, I guess! A bit like 100 degree EPs, the experience is 'immersive,' as they say. The fact that you can make big prints is a bonus, though the biggest Orion Tom and I have yet seen in print is only 1.8 metres high. We dream of 6 metres...

Another thing about the big high res mosaics is that they are great for presenting astronomy to newcomers. I can put this one on the screen, say,

[image: IC443 / M35 / Monkey Head Ha-OIII-LRGB mosaic]

and then zoom in on the two clusters to explain stellar evolution and spectral types, head down to the Jellyfish so it fills the screen and go over supernovae, then cover emission nebulae with the Monkey Head nice and close. I somehow feel this is better than going over the same topics using discrete images.

An interesting difference between this forum and the French one I frequent is that on Webastro you will always be asked for a link to 'la full' - the full-size version - if you don't post one. Most reactions that follow from visits to the full-size image express healthy enjoyment of details. Pixel-peeping reactions are very rare and are, I think, disapproved of.

Olly


Olly, I think the move to touch screens, with the ability to simply zoom in and out and pan across an image, has changed the way we appreciate images, so feel free to keep on going large! That's the plus side; on the down side, I don't think many devices allow you to set and correct the colour balance. Indeed I wonder if imagers do that any more before processing their works of art? The colour balance on my laptop shifts as I move my head from side to side, so no hope there at all!

I agree with Paul that development in CCD technology has probably gone as far as it is going to, and CMOS will be the way forward.

Regards Andrew


That's an interesting point about touch screens! I don't have one myself but you're right, people do indeed zoom in and 'navigate' images routinely with this technology and it might well influence the way photographers work.

(The interaction between the way we view and the viewing technology available is interesting, too. 25 years ago we had a debate at the dinner table about the usefulness of being able to buy films on video tape. Many felt that it was pointless and that rental would be the way. At that time people didn't normally watch films more than once and felt that when they'd seen it they'd seen it. Now it is the norm for people to have favourite films which they watch regularly. That's a very significant change.)

I agree that CMOS is likely to be the future. We just have to hope that the astro market is big enough to be interesting to the powers that be.

Olly



Ah but then Olly you have to factor in the cost of buying a beautiful farmhouse atop a remote hillside in Haute Provence!


Here are some random musings. I have spent quite a lot of brain power on understanding the limitations of sampling, as I once built a simple digital oscilloscope and explored its performance. Please treat the following as musings to stimulate and participate in the discussion - not proven facts!

Perhaps the real problem is that any such calculations assume that the aim is to capture every last piece of data available in the view. This is valid if you are imaging a small object and wish to display it as large as possible, or if you wish to be able to inspect the details of an image. But for most purposes the real aim is to create a well-framed image, and the need is for the resolution to be sufficiently well matched to the resolution of the final print or display on which the image is viewed.

It strikes me that Olly's 'large pixel' approach is a pragmatic one which sets image quality (low noise, good contrast) above extreme resolution, and his results speak for themselves. Zooming in results in pixellation rather than 'loss of detail' and the obvious aesthetic choice is to stop zooming before pixellation becomes apparent.

If a picture's true resolution is coarser than the pixel size, it will look soft and fuzzy if zoomed in on too much. This doesn't really matter if the image is normally viewed at a more appropriate zoom level.

However, most calculations on this subject assume the aim is to capture all possible detail and data. What Nyquist does do is draw attention to what we actually mean by this. By analogy with sound sampling, we can take an imaginary line across an image projected onto a sensor and plot the actual and recorded signal levels, exactly analogously to a sound signal. Note that this makes NO assumptions whatsoever about contrast, diffraction effects etc. These are all properties of the image that determine the scale of detail we wish to capture.

To record a hard line between black and white, the 'worst case' will always be a grey pixel between fully black and fully white ones, regardless of resolution.

To separate two dots (binary stars) a darker pixel is needed between two brighter ones, and this is where Nyquist comes in. It doesn't need to be a black pixel, just darker. The worst case is two equally bright points spaced exactly two pixel widths apart. If the points are centrally placed on pixels then two bright pixels will have a darker pixel between them. If the points lie exactly on pixel boundaries, then the resulting output will be a minimum of four pixels illuminated, with the two central ones of equal brightness. For there to be a central darker pixel, however the points happen to fall relative to the pixel grid, their spacing must be at least twice the pixel width - Nyquist.
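A toy sketch of that worst case, assuming two idealised, very narrow strips of light dropped onto a one-dimensional row of pixels with no seeing or diffraction (the narrow width is only there so that a source sitting exactly on a boundary splits between two pixels; the numbers are purely illustrative):

```python
import numpy as np

def sample_points(separation_px, phase_px, width_px=0.2, n_pixels=12):
    """Two equal 'stars', each a narrow top-hat of light width_px wide, dropped onto a
    1-D row of unit-width pixels. Returns the flux collected by each pixel."""
    counts = np.zeros(n_pixels)
    for centre in (phase_px, phase_px + separation_px):
        lo, hi = centre - width_px / 2, centre + width_px / 2
        for p in range(n_pixels):
            overlap = max(0.0, min(hi, p + 1) - max(lo, p))
            counts[p] += overlap / width_px
    return counts

# Two points exactly two pixel widths apart:
print(sample_points(2.0, 4.5))   # centred on pixels: bright, dark, bright -> resolved
print(sample_points(2.0, 4.0))   # on pixel boundaries: four pixels lit, no central dip
print(sample_points(1.0, 4.5))   # closer than two pixel widths: no darker pixel between
```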

One point about Nyquist is that near the limit the original waveform is quite distorted (a sine wave can become a square wave and be phase shifted by up to 90 degrees). In images these effects would be seen as pixelisation and slight shifts in location.
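The phase sensitivity is easiest to see right at the two-samples-per-cycle limit; a two-line illustration:

```python
import numpy as np

n = np.arange(8)
print(np.sin(np.pi * n))              # sine sampled exactly twice per cycle, on its zero crossings: all ~0
print(np.sin(np.pi * n + np.pi / 2))  # same frequency, shifted 90 degrees: +1, -1, +1... a square-wave look
```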

When seeing and diffraction turn points of light into Airy disks (or fuzzy spots) the picture becomes more complicated: the resolution needs to be sufficient to detect darker areas between lighter ones, the same issue as when looking at different criteria for splitting binary stars. Here's the 'double double' imaged with a Lifecam at 1200mm, but the binaries are just disks, not separately resolved points. The image is clearly oversampled:

[attached image: the 'double double' with a Lifecam at 1200mm]

Here's an extract from another picture at 1200mm using a DSLR with pixels about twice as big under better conditions. Looking closely at the smallest stars, this image could easily have been produced using a lower resolution, as small dim stars are about four pixels across. Bearing in mind the desire to avoid square stars, the pixels could probably be twice as big, but not four times.

[attached image: DSLR crop at 1200mm]

Finally, here's a DSLR shot taken using a 135mm lens. While the stars could all be resolved (and some of the binaries split) at a lower resolution, I think this would result in 'square stars', and I think this image is a pretty much optimal match of resolution to pixel size.

[attached image: DSLR shot with a 135mm lens]

One interesting corollary, which I have never seen mentioned, is that images are two-dimensional and we wish to distinguish objects (e.g. binary stars or lines of contrast) that will rarely be aligned along the horizontal or vertical axis. When the alignment is at 45 degrees the effective resolution of the chip is reduced by a factor of 1.414, so you can, very justifiably, argue that the pixel sizes calculated by most techniques are 1.4 times too large for optimum resolution. A hexagonal array of sensors would be optimal, but would also require a hexagonal array of pixels for display. Just ask any insect, or indeed look at the distribution of rods and cones in the mammalian eye.
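That 1.414 is simply the diagonal of the square sampling grid; a trivial check (the pixel pitch here is picked arbitrarily):

```python
import math

pixel_pitch = 4.54                              # microns, arbitrary example value
axis_spacing = pixel_pitch                      # sample spacing along rows or columns
diagonal_spacing = pixel_pitch * math.sqrt(2)   # spacing of sample centres along a 45-degree line
print(diagonal_spacing / axis_spacing)          # 1.414..., hence the suggestion to shrink pixels by that factor
```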

[attachment: Random stars.tif]


The other thing that should be remembered about applying Nyquist's theorem is that reconstructing a sampled input gives rise to a lot of higher-order harmonics. The system then needs a low-pass filter applied to the output in order to suppress them.

The other, other thing is to do with stacking. The application of Nyquist's theorem, or any other signal-processing algebra, needs to take account of the differences between subs - not just noise, but turbulence, flexing and mount jitters too.

And anyway. Nyquist's theorem applies to a constant input. Stars (or point sources :grin: ) are closer to impulses than to steady-state signals when it comes to processing them.


I just wanted to add that there are two effects of undersampling: the obvious one (loss of high spatial frequencies), and a less obvious but pernicious one, the addition of aliasing noise into the image. Think of it like this: high spatial frequencies are likely to be present (if seeing is good enough…), but the photons from those spatial frequencies are not represented in an orderly fashion in the image. But they're still there. That is, the luminance they possess contributes to the photon counts in certain pixel positions. This is aliasing noise.

As Pete says, we normally get rid of aliasing noise by low-pass filtering. But this has to be done *before* the signal is digitised and not post-reconstruction. There is no clear way to remove aliasing noise once the signal is sampled. In practice, an analogue spatial antialiasing filter is possible but not frequently used in astro as far as I can see, and instead I guess we rely on the great lowpass filter in the sky to reduce the level of high spatial frequencies in the signal.
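Here's a rough sketch of the digital-to-digital version of this, using a one-dimensional signal rather than an image and purely illustrative frequencies (scipy's decimate applies a low-pass filter before downsampling; naive slicing does not):

```python
import numpy as np
from scipy.signal import decimate

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# Low-frequency content plus a high-frequency component that cannot survive 10x downsampling
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)

naive = signal[::10]                 # no filtering: the 180 Hz component aliases down to 20 Hz
filtered = decimate(signal, 10)      # low-pass filtered *before* decimation

# Compare against what we actually wanted to keep (the 5 Hz part alone)
wanted = np.sin(2 * np.pi * 5 * t[::10])
print(np.sqrt(np.mean((naive - wanted) ** 2)))     # large residual: aliasing noise
print(np.sqrt(np.mean((filtered - wanted) ** 2)))  # much smaller
```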

One interesting aspect of aliasing noise is that we can modify its spatial frequency distribution by changing the grid over which we sample. Sampling on a regular grid tends to concentrate aliasing noise into a narrow band (I'm not sure, but I imagine this is also the case -- perhaps more so -- for hexagonal grids). If we want to reduce the peak level of aliasing noise by spreading it around in spatial frequency, the optimal approach is stochastic sampling, i.e. the use of an irregular grid. I believe the mammalian eye can be considered to be doing this to some extent. (The situation is directly analogous to the effects of straight versus curved spider vanes.)

How important aliasing noise is in practice is arguable. Just how much signal is there at high spatial frequencies? How much gets through the atmosphere unscathed on a typical night? I don't know. But I suspect that there are luminous astronomical objects that can 'generate' high spatial frequencies (globular clusters for one).

Martin


For anyone who would like a better grasp of Nyquist et al, the following link is quite good and I hope not too technical: https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem. If the mathematics is too hard, it at least shows an image that is undersampled and has a moiré pattern due to aliasing.

Moiré fringes can be an issue for professionals doing spectroscopy at good sites, when the star image gets smaller than the slit and the spectroscope CCD has been matched to the full slit or fibre width.

I feel some of the posts (and I am happy to include mine) don't fully represent the theorem and its limits correctly.

Regards Andrew

Edit: if nothing else, read the section "Application to multivariable signals and images"; it makes a number of points key to CCD images.



Thanks for an excellent link.


Martin, only in stacking one shot colour CCD images have I ever been invited by software to apply and adjust an anti-aliasing filter. I confess that since the default worked fine I just left it as was. I have never given any thought to why adjustable anti-aliasing would feature in OSC processing but not in monochrome. Would it be because of the under-sampling of the colour since this is done at a large scale via the Bayer matrix?

Olly


Strangely, it is the aliasing that encodes the additional information, obtained during image capture with sub-pixel dithering, that allows the Drizzle algorithm to enhance the resolution of the final image beyond that of the individual captures.

Regards Andrew


I'm not sure, Olly. While the case I mention above applies antialiasing before converting to digital, it is also true that any digital-to-digital downsampling also requires an antialiasing filter. In the latter case it is applied in software. So one possibility is that there was some downsampling going on prior to image combination? Another possibility is that some heuristic approach to antialiasing was being applied as is sometimes seen in computer graphics (e.g. replacing sharp staircase edges with shades of gray). This is not necessarily strict antialiasing in the Nyquist sense.

If it were the former, I'd be surprised if the software offered you a choice of antialiasing approach as it is a well-understood problem that shouldn't require user intervention.

Martin



That's interesting, thanks. I didn't know that.

I always thought (perhaps simplistically) that drizzle effectively provides a higher resolution because the grid of sample points, when combined across drizzled images, approximates a sensor with smaller pixels (or equivalently a finer grid). Also, I would imagine that the random-ish shifts between drizzled images will go some way to approximating stochastic sampling on this finer grid, thereby reducing the spikiness of any aliasing artefacts.
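As a rough sketch of that idea, here is a one-dimensional toy: naive shift-and-add of dithered, undersampled frames onto a finer grid. This is not the real Drizzle algorithm (which also shrinks the input 'drops' and carries weight maps), and all the values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
FACTOR = 4                                  # one coarse (camera) pixel = 4 fine grid cells

def capture(scene_fine, dither):
    """One undersampled frame: shift the finely-gridded 1-D scene by a sub-pixel dither
    (in fine-grid units), then bin into coarse pixels FACTOR cells wide."""
    return np.roll(scene_fine, dither).reshape(-1, FACTOR).sum(axis=1)

# Two point-like sources 1.25 coarse pixels apart -- often blended in any single frame
scene = np.zeros(64)
scene[[30, 35]] = 1.0

print(capture(scene, 2))                    # this dither puts both sources in adjacent pixels: no dip

# Naive shift-and-add onto the fine grid:
stack = np.zeros_like(scene)
hits = np.zeros_like(scene)
for dither in rng.integers(0, FACTOR, size=40):
    frame = capture(scene, int(dither))
    for i, flux in enumerate(frame):
        lo = i * FACTOR - int(dither)       # undo the dither when placing the coarse pixel
        if lo < 0:
            continue                        # ignore the wrapped edge pixel for simplicity
        stack[lo:lo + FACTOR] += flux / FACTOR
        hits[lo:lo + FACTOR] += 1
result = stack / np.maximum(hits, 1)
print(np.round(result[26:40], 2))           # two humps with a shallow dip between them reappear
```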

[edit] So, thinking this through, if one drizzled a series of images that had been properly down sampled, logically it would not be possible to increase resolution using drizzle...

Martin
