

Sensor size, resolution, field of view...


jm_rim


I have been trying to figure out the differences between CCD cameras and the many aspects in which they can differ. One thing I haven't been able to find the answer to is regarding sensor size, resolution, and field of view. I found another thread about the same question, although with no definitive answer, so I thought I would give it a try.

Often when I read other threads, forums, and elsewhere regarding which CCD to choose, a common answer is to buy the largest sensor that one can afford, to gain a larger field of view.
However, as discussed in the link above, why not just buy a CCD camera with smaller pixels and a scope with a shorter focal length? I know that there are a lot of other factors besides the sensor size, but is there a major benefit in purchasing a CCD with a large sensor, and presumably larger pixels?

Jesper
 


Larger pixels provide a bigger target for photons to hit and thereby be registered.

You also have the Dawes limit, which will restrict the resolution you can achieve with smaller-aperture scopes.
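For what it's worth, the Dawes limit is usually quoted as roughly 116 / aperture-in-mm arcseconds, so a quick sketch (the apertures below are just example values, not anyone's actual kit):

def dawes_limit_arcsec(aperture_mm):
    # empirical Dawes criterion: ~116 / D arcseconds for aperture D in mm
    return 116.0 / aperture_mm

for d in (80, 130, 250):   # example apertures in mm
    print(f"{d} mm aperture -> ~{dawes_limit_arcsec(d):.2f} arcsec")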

 

Personally I have a small frac with a smallish CCD, but I know its limitations: I will never get great shots of small galaxies and planetary nebulae, as the scope won't be able to resolve them well enough, but larger targets are all good.


Hi Jesper

Olly explains this sort of stuff very well, but I'll give it a try....

If the sensor is wider than the illuminated field of the particular OTA, then the images will be vignetted.

If it's a high-frame-rate camera imaging bright planets, then the more pixels the better the resolution; the loss of sensitivity due to smaller pixels should not, within reason, be a problem.

However, for DSOs "seeing" comes into play, and then sampling finer than 1 to 2 arcsec/pixel can be detrimental - or so I've read.

Michael


I'll do my best!

The FOV is governed by the focal length and the sensor size but, as Michael says, it cannot be taken for granted that the largest sensors will be covered by telescope optics. In fact very few telescopes can cover a full frame DSLR or CCD. You also need to take manufacturers' claims about field coverage with a large pinch of salt. Takahashi say the Baby Q will cover full frame. Well it just won't and that's that. Don't commit to a telescope for a large sensor until you've seen some real images.

'Resolution' is a terrible term because it has too many meanings. In its proper sense it describes the real level of detail distinguishable in a final image. In imaging it tends to be used to describe the number of arcseconds per pixel the system is capturing. (This is governed by focal length and pixel size.) These two definitions are not synonymous, though, because it is very easy to make a system which delivers 0.5 arcsecs per pixel but very difficult to find guiding and seeing good enough to turn this into real final detail. And then, alas, there's the idiotic daytime camera tendency to use 'resolution' to mean pixel count without reference to chip size. This should simply be ignored!

'However, as discussed in the link above, why not just buy a CCD camera with smaller pixels and a scope with a shorter focal length?'  Very good question! It may well be the case that you could get remarkably similar results from two entirely different ways of reaching the same resolution in arcseconds per pixel. What we need, but what I don't have, is a direct comparison between a smaller scope with smaller pixels and a larger scope with larger pixels. I'm working on it, though.
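To put a number on that, here is a minimal sketch using the standard pixel-scale relation (arcsec per pixel = 206.265 * pixel size in microns / focal length in mm); the two combinations below are purely hypothetical examples, not specific cameras or scopes:

def pixel_scale(pixel_um, focal_mm):
    # arcsec per pixel from pixel size (microns) and focal length (mm)
    return 206.265 * pixel_um / focal_mm

# Two hypothetical routes to essentially the same sampling:
print(pixel_scale(4.54, 530))    # small pixels on a short focal length -> ~1.77 "/px
print(pixel_scale(9.0, 1050))    # bigger pixels on a longer focal length -> ~1.77 "/px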

'...but is there a major benefit in purchasing a CCD with a large sensor, and presumably larger pixels?'  There is if you have a big telescope with a long focal length. You'll get a bigger field and a better pixel scale. What I and many others would like, though, is a large sensor with smaller pixels. I put up with a low resolution of over 3"P/P because nobody makes a full frame chip with smaller pixels. I have the optics to cover, easily, the full frame chip and I do want that expansive FOV, so I put up with what's available. CMOS chips may come along and change that, but they have to be mono for me.

Olly

 

 


This discussion can be as complicated - or as simple - as you want to make it :) 

Scope aperture: Big scopes collect more photons (light) than small ones. Obvious, yes? Not so obvious is the need for the light to be placed onto the CCD sensor so that as much of it as possible gets recorded as image information. The image circle is a fundamental metric in this respect and it varies according to optical design, much as camera lenses [used to be] designed to work with a specific 'format' of film (110, 35mm, 'medium format' and so on). The point being that the optical format needs to match the recording medium. As astro-imagers we are in the position of choosing from a variety of sensor sizes, seemingly giving little regard to optical 'format', but not all of these sensor sizes are suitable for use with our particular scope. If the sensor is too big for the optical 'format' of the scope then the result is clear and we see vignetting. However, if the opposite is true and the sensor is too small, then we may waste a large proportion of the light the telescope is collecting. How large?

An example would be my own Televue NP127is: with its generous 50mm image circle it can cover a large format sensor (37mm square), yet I initially used it with a small Sony sensor (an Atik 490EX CCD camera with a sensor approx 12mm x 10mm). So let's see... the surface area of the sensor was 12 * 10 = 120 mm squared, while the area of the scope's image circle was (25 * 25) * 3.142 = 1,965 mm squared. Hmm, that means I was actually using a paltry 6.1% of the light collected by the scope, or to put it more starkly, throwing away 93.9% of the light I was collecting! So I moved to a larger 37mm x 37mm sensor (Kodak 16803) which has a surface area of 37 * 37 = 1,369 mm squared, meaning I now see 70% of the collected light in my images - a big improvement. Of course you're not going to reach 100% without serious vignetting because sensors are rectangular or square and image circles are... circles :)
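A minimal sketch of that arithmetic, so you can plug in your own image circle and sensor dimensions (the numbers below are the ones from the example above):

import math

def light_used_fraction(sensor_w_mm, sensor_h_mm, image_circle_mm):
    # fraction of the illuminated circle's area that the sensor actually covers
    circle_area = math.pi * (image_circle_mm / 2.0) ** 2
    return (sensor_w_mm * sensor_h_mm) / circle_area

print(light_used_fraction(12, 10, 50))   # ~0.061, i.e. ~6% for the small Sony sensor
print(light_used_fraction(37, 37, 50))   # ~0.70, i.e. ~70% for the 37mm square Kodak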

So that's just one aspect of your choice - but an important one. How you divvy up the photons you actually record on the face of the sensor governs the scale (resolution) of the image (as discussed above by Olly).

ChrisH


17 hours ago, jm_rim said:

Often when I read other threads, forums, and elsewhere regarding which CCD to choose, a common answer is to buy the largest sensor that one can afford, to gain a larger field of view.
However, as discussed in the link above, why not just buy a CCD camera with smaller pixels and a scope with a shorter focal length? I know that there are a lot of other factors besides the sensor size, but is there a major benefit in purchasing a CCD with a large sensor, and presumably larger pixels?

That's an option, of course, to a certain degree. Don't forget that we are counting photons, so the scope/lens diameter has to stay the same if you want similar results. (The situation is eased a bit by the fact that some large consumer chips nowadays deliver crappy QE, horrible read noise and dismal dark noise - see the KAI-11002.) The Dawes limit for resolution also argues for keeping the optics' diameter.

What you still lose is full well capacity, i.e. dynamic range. Otherwise an Atik 490EX on a good 2.8/300 lens could serve you just as well as an Atik 11000 on a 107/700 APO.
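On the full-well point, the usual back-of-envelope figure for dynamic range is full well divided by read noise, expressed in stops or dB - a minimal sketch with purely illustrative numbers (not the real specs of either camera, check the data sheets):

import math

def dynamic_range(full_well_e, read_noise_e):
    # dynamic range = full well / read noise, returned in stops and in dB
    ratio = full_well_e / read_noise_e
    return math.log2(ratio), 20.0 * math.log10(ratio)

# Illustrative values only
print(dynamic_range(18000, 5))    # smaller pixels: ~11.8 stops, ~71 dB
print(dynamic_range(60000, 11))   # larger pixels:  ~12.4 stops, ~75 dB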


On 12/12/2016 at 13:56, GTom said:

Don't forget that we are counting photons, so the scope/lens diameter has to stay the same if you want similar results. (The situation is eased a bit by the fact that some large consumer chips nowadays deliver crappy QE, horrible read noise and dismal dark noise - see the KAI-11002.)

 

That bad?

[Images: Horsehead and Flame (2016); vdB 149/150/152 in Ha/OIII/LRGB, 50 hrs; modified Straton; Leo Triplet (TEC140, 2015); M42 (TEC140, LRGB)]

Olly


10 hours ago, ollypenrice said:

That bad?

...

Olly

No idea until I know how much work and what kind of scope you used. QE and noise problems can be balanced with longer collection - what I am curious about is how much more exposure time an Atik 11000 would need than a 460exm for a small target. Time is a precious commodity for most of us.


17 minutes ago, GTom said:

No idea until I know how much work and what kind of scope you used. QE and noise problems can be balanced with longer collection - what I am curious about is how much more exposure time an Atik 11000 would need than a 460exm for a small target. Time is a precious commodity for most of us.

Of course time is precious, more so for me than you might think. As an imaging provider I want people on short visits to go home with nice bulging data sets. When working with Tom on our own projects, sure, we spend 'as long as it takes.'  You can bash the rustic old Kodak 11 meg chip on its noise and QE, but it catches more of the light from our optics than smaller chips simply because it is bigger. Our current optics are a Tak FSQ106 and a TEC140 with TEC flattener, so if you don't have a big chip an awful lot of that expensive light is going nowhere. The 460 chip will give you a FOV of 1.3 x 0.9 degrees in a 106 while the Kodak will give you 3.9 x 2.6 degrees. That is to say 1.17 square degrees against 10.14 square degrees. Yes, the Kodak is covering 8.6x as much sky. Before knocking it too hard you might want to work out your mosaic-making time with a 460 chip to cover the same area. Or you could use the big Kodak in big optics:

[Image: M51]
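For anyone who wants to re-run the field-of-view arithmetic, a rough sketch (the 530mm focal length and the sensor dimensions are nominal values I've assumed, so the small-chip figure comes out a touch larger than the one quoted above):

import math

def fov_deg(sensor_mm, focal_mm):
    # angular field spanned by one sensor dimension at a given focal length
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

focal = 530.0   # assumed FSQ106 focal length in mm
for name, w, h in (("Sony ICX694 (460-class)", 12.5, 10.0),    # nominal dimensions in mm
                   ("Kodak KAI-11002 (11000)", 36.1, 24.0)):
    fw, fh = fov_deg(w, focal), fov_deg(h, focal)
    print(f"{name}: {fw:.1f} x {fh:.1f} deg, {fw * fh:.2f} sq deg")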

The thing is that, so far, no camera does everything. Seen one way the big Kodak is rubbish. Seen another way the small Sony is rubbish. In reality they are both brilliant. 

Olly

 


47 minutes ago, GTom said:

No idea until I know how much work and what kind of scope you used. QE and noise problems can be balanced with longer collection - what I am curious about is how much more exposure time an Atik 11000 would need than a 460exm for a small target. Time is a precious commodity for most of us.

Neither noise nor QE are problems - it would only be a 'problem' if (for example) the noise could not be compensated for. In fact, what noise there is from the big Kodak sensor is easily corrected. The QE is a weak point, being just 45% at the red end of the spectrum (IIRC), which is less than the ~70% for Sony sensors, but that hardly makes it unusable, and the benefit of having a sensor with a large surface area far outweighs any perceived lack of sensitivity. You are in danger of being drawn into comparing numbers (specifications) in the absence of real-world experience, and whilst it's very tempting to do this it can often lead you down the wrong track. The 1600MM looks great until you run up against the huge amp glow it produces, and then you'll be using the same arguments which apply to the Kodak (i.e., you can correct for noise with calibration frames!). The bottom line is there is no perfect instrument - all sensors have good and bad points - so just look at what can and (importantly) has been achieved with each of them :)

ChrisH


Interestingly, the QHY sibling of the ASI camera doesn't show much amp glow.

I do not question the fact that the 11k is a "real estate" chip (btw its red/H-alpha sensitivity is around 30%); if you want to cover several square degrees, you don't have much other choice. Maybe a monochrome-modded, cooled Canon 6D, but that won't really be better, due to the loss of microlenses...


I came across this thread as I was going to ask about these things myself.

What I have found in my digging is that you can calculate a useful number for pixel size.

The resolution per pixel on the CCD is known as the sampling rate. The sampling rate in arc seconds can be determined by performing the following small calculation:

Sampling rate (theoretical resolution) in arc seconds = (CCD pixel size in microns / telescope focal length in mm) * 206.265

Now, notice the use of the word 'theoretical' above. One of the biggest influences on image resolution is not the instrument in use but the atmosphere, and in particular the 'seeing' (the steadiness of the atmosphere). Poor seeing results in loss of detail, particularly when imaging the Moon and planets, where close-up views of these bright objects show them shimmering and pulsating quite alarmingly. Deep sky objects are not as badly affected as solar system objects, but the shimmering can reduce the detail captured over the long exposure times that are required. Typical seeing from a back garden location in the UK is between 3 and 4 arc seconds, although really good nights might reduce this figure down to 2 to 3 arc seconds.

For deep sky imaging, if you aim for a sampling rate of between 1.5 and 3 arcseconds per pixel you can't go far wrong.

So, you can buy a camera that, combined with your scope, gives a low sampling rate, let's say 1. You are getting more theoretical detail, but if you never have seeing better than 3 then you are not gaining anything. On the other hand, if your sampling rate is coarser (a larger number of arcseconds per pixel) than your seeing, then there is practical detail you are not able to collect.

Rearranging the formula above, you can enter the sampling rate (seeing) and telescope focal length, and get the best pixel size out the other end.

(CCD Pixel Size / Telescope Focal Length) * 206.265 = Sampling rate

becomes

(Sampling rate * Focal Length) / 206.265 = CCD Pixel size.

So for (great?) seeing of 2 arc seconds with a 500mm refractor you get:

(2 * 500) / 206.265 ~ 4.85 micron
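If it helps, both forms of the formula are easy to play with in a few lines of Python (just the arithmetic above, nothing more):

K = 206.265   # 206,265 arcsec per radian, scaled for microns and millimetres

def sampling_rate(pixel_um, focal_mm):
    # arcsec per pixel from pixel size (microns) and focal length (mm)
    return (pixel_um / focal_mm) * K

def ideal_pixel_size(target_arcsec, focal_mm):
    # rearranged: pixel size (microns) needed for a target sampling rate
    return (target_arcsec * focal_mm) / K

print(sampling_rate(4.85, 500))     # ~2.0 "/px
print(ideal_pixel_size(2.0, 500))   # ~4.85 microns, as in the worked example above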

 

This was very theoretical, and I have some questions about it myself. However, for me it seems like there is no point in chasing the lowest sampling rate when it's only a theoretical benefit. And to top it off, seeing might not be the limiting factor - it might be wind, guiding, and so on.

When it comes to chip size, as has been mentioned, a larger chip gives a bigger FOV, but go too large and you need a more expensive scope with a bigger focuser to avoid the light not covering the sensor.

 

So is 2 arcsecond seeing something I would get at my dark site with green to yellow LP? I don't know, and I don't know how to find out. (It is what I was going to ask, but I probably should not hijack the thread :) )

 

Some links

http://www.iankingimaging.com/show_article.php?id=5&system=1

http://nightskyimages.co.uk/sampling_rate.htm


The discussion of what can be realized in terms of resolution goes round and round. I don't believe 2 arcsecs per pixel can't be beaten. Nor do I believe that 3.5 arcsecs per pixel will give horrible results. Actually putting all this to the test is quite an investment!

My instinct says that high resolution might mean about an arcsecond per pixel. I'm not a resolution chaser, I'm more interested in the big picture. I know I can do 1.8"PP and I've done a lot of imaging at 0.66"PP. What I lack is experience of the values between the two.

Olly


That's interesting.

I guess it takes a lot of experience to know these things. Just looking at the theories and 'calculating', it's hard to see what matters.

 

However, by using the formula and looking at cameras with a sampling rate of 2 ~ 2.5, and matching sensor size with what I want as FOV, the ATIK 383L+ seems almost perfect for a 500mm frac @ 2.23 arcsec, and this seems to correlate with what I have heard (good things) about that camera.
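For what it's worth, that 2.23 figure drops straight out of the formula earlier in the thread, assuming the 383L+'s nominal 5.4 micron pixels:

print((5.4 / 500) * 206.265)   # nominal 5.4 um pixels on a 500mm scope -> ~2.23 "/px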

 

It is pricy though :)


9 hours ago, ChrisLX200 said:

As I understand it, you need at least two pixels at any particular resolution to resolve at that level - so if you want to resolve 2 arcsec then you need to collect data at 1 arcsec.

ChrisH

I know you are quoting the often-touted conventional wisdom, but this is a mis-application of the Nyquist sampling criterion. More correctly, for a 2-dimensional signal you need more like 3 pixels - some (including me) would argue that for critical sampling you need 3.3 pixels to sample correctly.

Derrick


2 hours ago, derrickf said:

I know you are quoting the often-touted conventional wisdom, but this is a mis-application of the Nyquist sampling criterion. More correctly, for a 2-dimensional signal you need more like 3 pixels - some (including me) would argue that for critical sampling you need 3.3 pixels to sample correctly.

Derrick

Well yes, but my point was that after all the careful calculation for achieving the 'ideal' pixel size to match your optics, the result suddenly goes out the window because of the introduction of one new parameter. Now, instead of matching the local seeing at 3 arcsec/pxl, we [apparently] need to be imaging at 1 arcsec/pxl to achieve the best theoretical resolution, and that profoundly changes the choice of camera - if you pay attention to such things anyway :)  Some people do take such calculations very seriously.

ChrisH


It depends on your imaging preferences - if widefield nebulae, galaxy fields etc. are what float your boat, you probably don't care much about critical sampling - FOV is king. OTOH if your interest is in imaging PNebs, galaxies and other small objects and extracting the maximum detail for the seeing conditions (as mine primarily is), then critical sampling is ... well, critical :-)

 


Don't be too spooked by vignetting. Remember: you are under no obligation to use the entire field of your CCD in the final image. You can crop the edges as much as you please. It's also worth noting that many astronomical targets are quite small, so will fit quite comfortably on a smallish CCD - or on the unvignetted part. Plus, unless you're purposely imaging wide starfields, it doesn't matter if the stars around the edge are dimmer or that there are fewer of them visible. Nobody will be counting them!

Also, a word about megapixels. Do we really need them all? An HD screen is only 2MPix yet it displays images extremely well. One can make a strong case that a 2MPix final image is just fine in the vast majority of cases - so don't worry if the CCD of your dreams is "only" 2 or 3 or 4 MPix.


6 hours ago, derrickf said:

I know you are quoting the often-touted conventional wisdom, but this is a mis-application of the Nyquist sampling criterion. More correctly, for a 2-dimensional signal you need more like 3 pixels - some (including me) would argue that for critical sampling you need 3.3 pixels to sample correctly.

Derrick

 

This might be correct, but it only speaks of a CCD's effectiveness in registering light. It does not mean that a single pixel suddenly gets more effective.

So if you would expect seeing of, let's say, 1 arc second, and you have a camera/scope with a sampling rate of 1 arcsec, then you are at the maximum practical resolution or capability of registering light. If you need three pixels to register a signal, you will resolve at best 3 arc seconds.

You cannot use a camera/scope combo with a sampling rate of 0.33 to end up truly resolving 1 arc second. As the seeing only 'allows' detecting 1 arcsec, a sampling rate below 1 is completely unnecessary, as the [pixels!] will not gain any info (due to environmental limitations).

The pixels are the physical sensors; the fact that three pixels need to sense something to register a signal is an electronic limitation.

 

That's how I see it anyways....

 

// Edit: Just to be clear, I did not mean to post that as a fact, only as an opinion - I would be happy to hear if it's correct, or if not, how it's wrong :)


Magnus,

you might want to read about sampling theory: a system that can sample at roughly 0.33 arcseconds is indeed required to properly reconstruct 2D features separated by 1 arcsecond - it is not resolving features at 0.33 arcseconds.

Clearly one cannot capture more detail than is permitted by local environmental and atmospheric effects, otherwise NASA would not have invested billions of dollars sending telescopes into space. Earth-based imaging will always be limited by the seeing, which we often evaluate using the FWHM of star images. So if a star is blurred by atmospheric conditions into a disc with a FWHM of 1 arcsecond, we need a system capable of collecting 3.3 samples across the disc at full width half maximum intensity.
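Following that logic, a rough sketch of what 3.3 samples across the seeing FWHM implies for pixel scale and pixel size (the 1000mm focal length is just an example value):

def required_pixel_scale(seeing_fwhm_arcsec, samples_per_fwhm=3.3):
    # arcsec per pixel needed to put ~3.3 samples across the star's FWHM
    return seeing_fwhm_arcsec / samples_per_fwhm

def required_pixel_um(seeing_fwhm_arcsec, focal_mm, samples_per_fwhm=3.3):
    # convert that pixel scale into a physical pixel size for a given focal length
    return required_pixel_scale(seeing_fwhm_arcsec, samples_per_fwhm) * focal_mm / 206.265

print(required_pixel_scale(2.0))       # ~0.61 "/px for 2 arcsec seeing
print(required_pixel_um(2.0, 1000))    # ~2.9 micron pixels at 1000mm focal length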

