

Resolution ("dummy" level responses, please)



First thing to say is that I am rounding all my figures in this post, to try to get at the basic principles without getting bogged down in the math.

I have a 71mm scope that gives me a resolution of about 2"/px with my cameras. As I understand it, this is just about right - any fewer "/px and you start running into problems with atmospheric distortion, any more "/px and you start getting square stars.

Now, if I image a galaxy that is 5' across, this gives me a galaxy on my image that is 150px across. So far so good.

If I use my 8" scope with the same camera, I get 0.67"/px, so to get to my 2"/px target, I need to bin my image 3*3. And when I do this, I end up with a galaxy on my image that is the same 150px across. So what, if anything, have I gained?

Well, I suppose one thing is that, by binning 3*3, I am capturing data 9* as fast. Now my 8" is f/6 (with the reducer/flattener) and my 71mm is f/4.9 (for the purposes of this post, assume it is f/5). So the light I am getting in a given time is 9*(5*5)/(6*6) = 6.25*, so I can use exposures only 16% as long to get the same amount of light, but I have to guide a bigger/heavier scope, which causes extra stress (on me, even if the system can cope!).
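
As a rough sanity check of those numbers (a sketch only - the 3.8 µm pixel size is an assumption, since the post only quotes the resulting ~2"/px figure):

```python
# Pixel scale and relative speed per output pixel (rule-of-thumb arithmetic).
def pixel_scale(pixel_um, focal_length_mm):
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_length_mm

def speed_per_output_pixel(f_ratio, bin_factor):
    """Relative photon rate per (binned) output pixel, proportional to bin^2 / f^2."""
    return bin_factor ** 2 / f_ratio ** 2

pix = 3.8                 # um -- assumed pixel size
fl_71 = 71 * 4.9          # ~348 mm
fl_8in = 203 * 6.0        # ~1218 mm with the reducer/flattener

print(round(pixel_scale(pix, fl_71), 2))    # ~2.25 "/px ("about 2")
print(round(pixel_scale(pix, fl_8in), 2))   # ~0.64 "/px ("0.67")

# 8" binned 3x3 vs 71mm unbinned, per output pixel:
print(speed_per_output_pixel(6.0, 3) / speed_per_output_pixel(5.0, 1))   # 6.25 -> ~16% exposure lengths
```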

But, as far as I can see, I cannot get more resolution (i.e. a "better" picture) with the 8" scope than with the 71mm scope.

So why use a larger scope, even when imaging really tiny objects?

What, if anything, am I missing in my analysis?

Thanks.



The first thing is light gathering. Assuming you have exactly the same pixel scale, the 8" scope will give you about 8 times more light: (200/71)^2 ≈ 7.9. So if you image galaxies, for example, and you shoot one galaxy over a night with the 71mm, you could shoot 8 galaxies with the 200mm scope at the same pixel scale and with a similar result.

Another thing is resolution. With a 2"/px scale you cannot actually sample well an image that has, for example, 2" resolution - the image is undersampled. You can fight this to some extent with drizzling. Resolution is limited by a few factors (seeing, aperture, optics quality), but to sample a 2" resolution image well you need at least a 2x finer pixel scale (i.e. 1"/px). For a 71mm scope that may not make much sense, because even a perfect 71mm scope is diffraction limited to about 2" resolution. That, plus seeing, tracking errors, etc., will give much worse resolution.
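
For reference, a quick sketch of the empirical diffraction-limit formulas behind that "about 2 arcsec" figure (Dawes ≈ 116/D, Rayleigh ≈ 138/D, with D in mm - rules of thumb, not exact optics):

```python
# Diffraction limits in arcseconds for a few apertures.
for d_mm in (71, 130, 200, 250):
    print(f'{d_mm:3d} mm:  Dawes ~{116 / d_mm:.2f}"   Rayleigh ~{138 / d_mm:.2f}"')
# 71 mm -> ~1.6-1.9", which becomes "about 2 arcsec" once seeing and guiding are added on top.
```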

I used to have a 130mm triplet and there was a visible improvement in detail when I changed camera from 5.4um pixels (1.5"/px scale) to 3.8um pixels (1"/px scale), when seeing was good. Now I have a 10" scope with a native pixel scale of 0.44"/px, and I still noticed some improvement in resolution. Of course, again - only when seeing is good and the mount behaves well. But I must admit my old EQ6 mount works at the very edge at this 0.44"/px scale :)


Thanks for the response, Lucas.

I'm guessing that I am confusing two things that you are saying are different. When I speak of "resolution" I am quoting the figure I got from CCD Calc, which actually says "image scale" - I thought they were the same. I'm now going to have to go away and try to find out which one it is that I'm supposed to get to 2".

Be back in a while ...


Hi Demonperformer,

I'll be as interested in other's answers as you are, but can think of a few things, which I'll just list.

1) 9 x faster via binning is nothing to scoff at. It'll translate to better tracking and a better S/N ratio, and will be faster, so lots of gains there.

2) Seeing is not always limited to 2" - it's not hard and fast and it is possible to go better than this, though you may need to throw away some subs. A 71mm scope has a Dawes limit of about 1.6" (from a rough calculation) - if seeing allows imaging better than this then an 8" will have obvious resolution advantages. Also, on planetary imaging (using lucky imaging) you can get down below 0.2" on a good night - may or may not be relevant to you.

3) [I think - from Chris Woodhouse's Astrophotography Manual] The different sources of error in an imaging train are not independent of one another, but stack. Same for visual with scopes and eyepieces - the idea (often expressed) that each item is just a "bottleneck" and the scope will perform as per its weakest part is wrong - every part introduces error into the system (though it makes sense to go after the biggest error when looking to improve). In a simplified imaging train you might have: a. error from the resolution limit of the scope; b. from seeing; c. from your camera resolution; d. shot noise; e. noise from the camera itself (bias, thermal). The total error in the system grows as the square root of the sum of squared errors. Anything you can do to bring any of these down helps, though some involve trade-offs (binning sacrifices resolution, but that's very little loss compared to the gains if you are oversampling).

If point 3 above is correct (it will be interesting to hear what more experienced imagers think on this), then a hypothetical, simplified system with a 1.6" resolution scope and 2" resolution pixels would be getting a "real life" resolution of about 2.5", while moving to an 8" (0.57" resolution) would get you closer to 2".
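
For what it's worth, a one-line check of those quadrature figures (treating the error sources as independent, which is the simplification point 3 describes):

```python
# Total error as the square root of the sum of squared errors.
from math import hypot   # hypot(a, b) = sqrt(a*a + b*b)

print(round(hypot(1.6, 2.0), 2))    # 2.56" -> 71mm scope (1.6") + 2" pixels
print(round(hypot(0.57, 2.0), 2))   # 2.08" -> 8" scope (0.57") + 2" pixels
```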

Billy.


25 minutes ago, Demonperformer said:

When I speak of "resolution" I am quoting the figure I got from CCD Calc, which actually says "image scale" - I thought they were the same. I'm now going to have to go away and try to find out which one it is that I'm supposed to get to 2".

Yep, these are two different things. Resolution is the detail level of the image created by your optics. It is affected by a few factors - diffraction limit, optics quality, seeing, tracking, maybe something else. That's why a star with an apparent diameter of, say, 0.01" is turned into a 2-3" FWHM star image.

Pixel scale is about sampling this image - it can be optimal, undersampled or oversampled. With an undersampled image you will lose some resolution. Optimal will be optimal, and with an oversampled image you will not gain any more resolution and will start to lose some SNR. Not much though - it depends on sky quality, filter used, subframe exposure time and camera noise.

18 minutes ago, billyharris72 said:

3) [I think - from Chris Woodhouse's Astrophotography Manual] The different sources of error in an imaging train are not independent of one another, but stack. Same for visual with scopes and eyepieces - the idea (often expressed) that each item is just a "bottleneck" and the scope will perform as per its weakest part is wrong - every part introduces error into the system (though it makes sense to go after the biggest error when looking to improve). In a simplified imaging train you might have: a. error from the resolution limit of the scope; b. from seeing; c. from your camera resolution; d. shot noise; e. noise from the camera itself (bias, thermal). The total error in the system grows as the square root of the sum of squared errors. Anything you can do to bring any of these down helps, though some involve trade-offs (binning sacrifices resolution, but that's very little loss compared to the gains if you are oversampling).

If point 3 above is correct (it will be interesting to hear what more experienced imagers think on this), then a hypothetical, simplified system with a 1.6" resolution scope and 2" resolution pixels would be getting a "real life" resolution of about 2.5", while moving to an 8" (0.57" resolution) would get you closer to 2".

That is correct. There are usually one or two dominant factors in the total error - for my system it is seeing and tracking. Since it is the square root of the sum of squared errors, improving a small error component will not improve the total error much. For example, if we have a seeing error of 2.5" and two mounts - one with 1" tracking and a second with 0.2" tracking - then the total error will be:

For the first one 2.69"
For the second one 2.51"

There are of course some simplifications here - for example, tracking error is usually anisotropic - it is different in different directions (elongated stars). But the idea of stacking errors is correct. However, I am not sure about putting pixel scale (camera resolution) into this stack. I prefer to consider it separately, as image sampling.


In thinking about resolution start with the idea of what is the smallest scale you can resolve in your situation. This is a function of aperture, seeing and how accurately you can track the stars. Let’s say it is 2” and seeing limited. 

To make the most of that your camera should have a pixel scale 2 or 3 times smaller. Any smaller is not worse but also has no benefit.

drjolo beat me to it!


All right, trying to cut through my confusion, I have now found this page - something that will tell me simply whether I am going to have any problems.

With the 8" scope, I should use the camera unbinned only in "exceptional" seeing conditions, in "good" seeing I should bin 2*2, in "OK" seeing I should bin either 3*3 or 4*4, this last one being ok for "poor" seeing as well. This seems to make it a good combination for all circumstances.

With the 71mm scope, the only way I can avoid under-sampling is to use the cameras unbinned and hope for poor seeing conditions! To be in the right place for "OK" seeing, I would need to find a camera with pixels about half the size of the pixels on my current cameras (about 2μm) - a cursory survey suggests these are not common. Maybe this new 71mm scope was not the best idea after all!
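
To put a number on that (a sketch, using the focal lengths implied earlier in the thread - roughly 348 mm for the 71mm f/4.9 and 1218 mm for the 8" at f/6):

```python
# Pixel size (um) needed for a target image scale at a given focal length.
def pixel_size_um(target_scale_arcsec_per_px, focal_length_mm):
    return target_scale_arcsec_per_px * focal_length_mm / 206.265

print(round(pixel_size_um(1.0, 348), 2))    # ~1.7 um for 1"/px on the 71mm -> the "about 2um" above
print(round(pixel_size_um(2.0, 1218), 2))   # ~11.8 um for 2"/px on the 8" -> close to 3x3-binned 3.8 um pixels
```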

Thanks.


A 71mm f/4.9 scope is excellent for medium/wide-field imaging, especially with a large sensor under a dark sky. And for wide fields I think undersampling is not such an issue, and you can still use drizzling. But it is not a galaxies or planetary nebulae performer :)


I'm fairly sure that I've read from a number of different sources that ideal image scale is roughly seeing/3, i.e. if seeing is around 2" then an image scale around 0.67"/px is good. I think I've also read that oversampling is less of a problem than undersampling, but I'm a newcomer to astroimaging myself and despite trying to learn as much as I can as quickly as I can, I'm aware that there are many times that I've misunderstood some of the concepts. 


There's no need to obsess over oversampling - it has no adverse effects. Whether to bin or not depends on the camera. Can't be done with a colour camera. With a mono CMOS camera there is no real benefit, as the binning takes place in software and the read noise is the same. You should be aiming to expose to a level that buries the read noise, so with no change in read noise you should be exposing subs the same amount whether binned or not. With a mono CCD camera binning gives real benefits as regards read noise, and you can reduce the exposure time per sub. That is, you can bury the read noise with shorter exposures. So if you can bin without undersampling, it is worthwhile.

57 minutes ago, GraemeH said:

I'm fairly sure that I've read from a number of different sources that ideal image scale is roughly seeing/3, i.e. if seeing is around 2" then an image scale around 0.67"/px is good. I think I've also read that oversampling is less of a problem than undersampling, but I'm a newcomer to astroimaging myself and despite trying to learn as much as I can as quickly as I can, I'm aware that there are many times that I've misunderstood some of the concepts. 

Yes - as most of us are seeing-limited, a pixel scale of 1/3 of the seeing is quite reasonable to ensure the smallest resolvable details are properly sampled.

 

 


There are a couple of things in your post, Ken, that do not make sense to me - don't be offended, I'm sure the problem is with me and not you!

1 hour ago, kens said:

Can't be done with a colour camera.

Now I understand that there is a matrix of red, green and blue sensors and nominally each group of 4 is receiving light from one "point" that becomes one pixel in the (unbinned) final result. However, if you bin, for example, 2*2, you now have a group of 16 sensors, each of which is still receiving only its particular colour light. Granted that the "point" has now become 4* - (2*2)* - as large, but that is equally true of a monochrome camera. If all the red sensors and all the green sensors and all the blue sensors are combined, resulting in one pixel in the final result, surely one has binned the colour camera 2*2. Granted that the technical execution may be more complicated than just dumping all the light from 4 monochrome sensors together, but still, I don't see why it can't be done.

1 hour ago, kens said:

With a mono CMOS camera there is no real benefit

Now I understand what you are saying about read noise. If each of the 4 binned pixels is producing its usual amount of read noise, then quadrupling the signal has also quadrupled the noise. But this just leaves me with the same S/N ratio as before. However, for each pixel in the final result, I have captured 4* as much light in a given time compared to an unbinned image. So, when I calibrate out the read noise (master bias?) then I am left with 4* the signal for each pixel in the final image for the same exposure time without the noise. I would consider that to be a real benefit. No?

1 hour ago, kens said:

You should be aiming to expose to a level that buries the read noise

OK, so how does that work in practice, how do I calculate the exposure time I should be using for my camera. My monochrome camera is a 1600MM, which has a read noise (according to FLO website) of "-1.2e@30db gain". So, what exposure length should I be using with this camera? Presumably this would alter if I use a different gain, but how?

Thanks for your help.


4 hours ago, Demonperformer said:

Now I understand that there is a matrix of red, green and blue sensors and nominally each group of 4 is receiving light from one "point" that becomes one pixel in the (unbinned) final result. However, if you bin, for example, 2*2, you now have a group of 16 sensors, each of which is still receiving only its particular colour light. Granted that the "point" has now become 4* - (2*2)* - as large, but that is equally true of a monochrome camera. If all the red sensors and all the green sensors and all the blue sensors are combined, resulting in one pixel in the final result, surely one has binned the colour camera 2*2.

I think you're misunderstanding how a Bayer matrix works - there is no 2x2 group that becomes a single pixel.  If a pixel is red sensitive then it only gets its G and B values from the interpolation algorithm of the debayering process, likewise for the green and blue pixels.
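
A toy illustration of that point (illustrative only - real debayering uses far more sophisticated interpolation than this, and the values are made up):

```python
# 4x4 RGGB Bayer mosaic: each physical pixel records only ONE colour channel,
# so the other two channels at that location have to be interpolated from neighbours.
pattern = ["RG", "GB"]
cfa = [[pattern[y % 2][x % 2] for x in range(4)] for y in range(4)]
for row in cfa:
    print(" ".join(row))              # R G R G / G B G B / ...

raw = [[10, 20, 12, 22],              # made-up raw sensor values
       [18, 30, 19, 31],
       [11, 21, 13, 23],
       [17, 29, 16, 28]]
# Crudest possible estimate of G at the red pixel (0, 0): average its green neighbours.
print((raw[0][1] + raw[1][0]) / 2)    # 19.0
```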


7 hours ago, GraemeH said:

I think you're misunderstanding how a Bayer matrix works

That is certainly a (strong) possibility. I have only given it a cursory glance, but I do know my colour CCD has the option to bin 2*2 (and I have used it in that mode).

Let me go away and read up a bit on Bayer matrices and I will get back ...


Right, I have had a read on the way the Bayer matrix works on siliconimaging.com/.

Fully accept that it works differently from the way I was suggesting. But, if I've got it now, every pixel ends up with a red value, a green value and a blue value - one directly measured and the other two interpolated.

So to bin 2*2, all that would be needed is for the individual resultant pixel values for each colour to be added in groups of 4.

Or am I still missing something?

Thanks.


35 minutes ago, Demonperformer said:

Right, I have had a read on the way the Bayer matrix works on siliconimaging.com/.

Fully accept that it works differently from the way I was suggesting. But, if I've got it now, every pixel ends up with a red value, a green value and a blue value - one directly measured and the other two interpolated.

So to bin 2*2, all that would be needed is for the individual resultant pixel values for each colour to be added in groups of 4.

Or am I still missing something?

Thanks.

Yes: the superpixel made of 4 normal ones in Bin 2 will include red, green and blue filtered ones but this will be read as one. Therefore there is no colour information getting into the capture software. It has been merged before being read.

On the wider point there are a few caveats. Does binning 2x2 really reduce exposure time by 4? I don't think it gets anywhere near that. It's a fairly easy experiment but when I tried it I thought the gain was less than 2x. My test wasn't hugely scientific but my intentions are not scientific either. Also, don't bank on all cameras binning well. I've worked with three which didn't. They threw up an assortment of artefacts. Didn't Sara measure the binning effect? Maybe check her website.

You can certainly work below an arcsecond per pixel. At what point it becomes a waste of time doing so does depend on the seeing. But here's a curious thing, which flies in the face of the received wisdom: when I was working at 0.66"PP with Yves' 14 inch I don't recall the seeing being much of an issue. But when I'm working with our TEC140 at 0.9"PP the seeing is a very big deal. It is often not worth shooting colour. The standard wisdom says the smaller scope should be less affected by the seeing. My subjective/anecdotal impression is rather the reverse. (This may be an illusion arising from the fact that I focused with a B-mask on the 14 inch since FWHM proved unworkable. I do use FWHM on the TEC and this tells you straight away what the seeing is like.)

So how does the 14 inch with large pixels compare with the TEC140 and small pixels on the same target? I think the 14 inch gets it but only by a whisker. If Santa offered me a TEC or a 14 inch I'd take the TEC. The hassle factor comes into it and I prefer spike-free stars. Also remember that DS imagers usually find SCT stars to be large and soft. (Why, then, are they so good on planets? Don't know.)

Cloudy nights can lead to poring over numbers. Resist this temptation because you'll find in practice that the numbers are overwhelmed by the unidentified realities. Experiment is all!

Olly


19 minutes ago, ollypenrice said:

Does binning 2x2 really reduce exposure time by 4? I don't think it gets anywhere near that. It's a fairly easy experiment but when I tried it I thought the gain was less than 2x.

My experience of binning is similar. When I bin my 414 with the Edge11 I get a x2 gain not a x4. Not sure why this should be so, but I do wonder if the binning process introduces a level of noise that counteracts, to some extent, the gain from reduced read noise.

Cheers, Ian


Here is 100 minutes of Ha made with the same camera, filter, mount, location and similar conditions (however not the same night). In my notes I have written that FWHM was in the range 2.3-2.4" in long exposed frames. The only difference is the scope used:

 - at left it is 130mm APO triplet with pixel scale 1.04"/px 
 - at right it is Meade ACF 10" scope with pixel scale 0.44"/px

ACF image is resized 50% and APO image is aligned to ACF.

[Image: 130vsACF.jpg - side-by-side Ha crops, 130mm APO triplet (left) vs 10" Meade ACF (right)]

What is immediately visible is of course the lower noise in the 10" scope image, due to the larger aperture. But the stars are also smaller, and in my opinion the detail is better in the right image - the one from the 10" scope.


I see that a sampling rate of seeing/3 is mentioned as a good sampling rate.

I would disagree with this. Proper sampling, considering the star FWHM in the image, should be FWHM * 0.622, so roughly double the suggested arcseconds per pixel (meaning lower resolution). This level of sampling cuts off frequencies that are 1% or less of the power spectrum, resulting in a contrast loss of only a few percent (the sum of all lost frequencies) - it will certainly be buried in the noise (we very rarely get SNR over 100).

While a higher resolution will capture those frequencies with power <=1%, they will not show up in the image, because in post-processing we tend to bring out signal while keeping noise down, so any difference in contrast that is at the level of the noise (or less) will not be noticeable.
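
In other words, a quick sketch of the rule being proposed here:

```python
# Suggested sampling rate: pixel scale ~ star FWHM * 0.622 (arcseconds per pixel).
for fwhm in (1.6, 2.0, 2.4, 3.0):
    print(f'FWHM {fwhm:.1f}"  ->  ~{fwhm * 0.622:.2f} "/px')
# e.g. the 2.3-2.4" FWHM quoted for the Ha comparison above -> roughly 1.4-1.5 "/px
```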


18 hours ago, Demonperformer said:

There are a couple of things in your post, Ken, that do not make sense to me - don't be offended, I'm sure the problem is with me and not you!

Now I understand that there is a matrix of red, green and blue sensors and nominally each group of 4 is receiving light from one "point" that becomes one pixel in the (unbinned) final result. However, if you bin, for example, 2*2, you now have a group of 16 sensors, each of which is still receiving only its particular colour light. Granted that the "point" has now become 4* - (2*2)* - as large, but that is equally true of a monochrome camera. If all the red sensors and all the green sensors and all the blue sensors are combined, resulting in one pixel in the final result, surely one has binned the colour camera 2*2. Granted that the technical execution may be more complicated than just dumping all the light from 4 monochrome sensors together, but still, I don't see why it can't be done.

Now I understand what you are saying about read noise. If each of the 4 binned pixels is producing its usual amount of read noise, then quadrupling the signal has also quadrupled the noise. But this just leaves me with the same S/N ratio as before. However, for each pixel in the final result, I have captured 4* as much light in a given time compared to an unbinned image. So, when I calibrate out the read noise (master bias?) then I am left with 4* the signal for each pixel in the final image for the same exposure time without the noise. I would consider that to be a real benefit. No?

OK, so how does that work in practice, how do I calculate the exposure time I should be using for my camera. My monochrome camera is a 1600MM, which has a read noise (according to FLO website) of "-1.2e@30db gain". So, what exposure length should I be using with this camera? Presumably this would alter if I use a different gain, but how?

Thanks for your help.

Looks like the colour binning bit has been well covered by others so I'll leave it there.

As regards binning a CMOS camera. You don't calibrate out read noise. By stacking a number of subframes you can average it out. And binning does also improve the SNR - but the thing here is that you get as good results "binning" in post processing as you do at capture time. A CCD on the other hand combines the electrons from the 4 pixels BEFORE they are read out and read noise gets introduced. So there is an SNR improvement at read time and the same benefits from stacking.

There's a huge amount of info on exposure times over at Cloudy Nights, but at the risk of going beyond "dummy" level... the aim is to expose the sky background to a level well above the read noise. Amounts of 3 to 10 times the square of the read noise are touted (it's not an exact science). At this level the sky background (the lowest signal you are interested in) is so much higher than the read noise that it ceases to be the major noise contributor.

The trick is to convert the, say, 10 times read noise squared (10.RN^2) from electrons to ADU for a given level of gain. The ASI1600MM gain is in decibels times 10, relative to zero gain. Unity gain (1 electron per ADU) is 139, so let's work with that. At a gain of 139 the read noise is around 1.8e- (from the graph on the web site), which would appear to equate to 1.8 ADU. BUT you also need to add the offset, say 50, which gives 51.8 ADU, and then you need to multiply by 16 to convert the 12-bit output to 16 bits. So at gain 139 and offset 50 the read noise equates to 829 ADU. But we are looking for the ADU value equivalent to 10.RN^2, which is 10 x 1.8 x 1.8 = 32.4. We add the offset of 50 and multiply by 16 to get 1318 ADU. So if you expose your sky background, the peak of the histogram, to around 1300 or so, then read noise is not the major noise contributor.
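
The same arithmetic as a short sketch (the gain-139 read noise of 1.8e- and offset of 50 are the figures quoted above; the factor of 10 is a rule of thumb, not an exact target):

```python
# Sky-background target (16-bit ADU) that swamps read noise by a factor k,
# for an ASI1600-style 12-bit camera whose output is scaled up to 16 bits.
def sky_target_adu(read_noise_e, offset_adu, k=10, e_per_adu=1.0, scale_12_to_16=16):
    electrons = k * read_noise_e ** 2                  # 10 * 1.8^2 = 32.4 e-
    adu_12bit = electrons / e_per_adu + offset_adu     # + offset 50 = 82.4 ADU (12-bit)
    return adu_12bit * scale_12_to_16                  # x16 -> 16-bit ADU

print(round(sky_target_adu(1.8, 50)))    # ~1318 ADU -> aim the histogram peak around 1300
```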

 


1 hour ago, drjolo said:

Here is 100 minutes of Ha made with the same camera, filter, mount, location and similar conditions (however not the same night). In my notes I have written that FWHM was in the range 2.3-2.4" in long exposed frames. The only difference is the scope used:

 - at left it is 130mm APO triplet with pixel scale 1.04"/px 
 - at right it is Meade ACF 10" scope with pixel scale 0.44"/px

ACF image is resized 50% and APO image is aligned to ACF.

[Image: 130vsACF.jpg - side-by-side Ha crops, 130mm APO triplet (left) vs 10" Meade ACF (right)]

What is immediately visible is of course the lower noise in the 10" scope image, due to the larger aperture. But the stars are also smaller, and in my opinion the detail is better in the right image - the one from the 10" scope.

That's an excellent post.  I agree with your conclusions and am surprised by the star size.

Olly


2 hours ago, ollypenrice said:

Therefore there is no colour information getting into the capture software. It has been merged before being read.

From my position of monumental ignorance, I beg to differ:

RGB is usually stored in a 24-bit format in which the first 8 bits are red, the second 8 bits are green and the remaining 8 bits are blue. Each of my 4 "normal" pixels will therefore contain data that can be extracted as an independent figure by ANDing it with the appropriate mask (and shifting).

Converting to hexadecimal, and using arbitrary figures, I might have:
pixel 1: 0x123456 in which the red is 0x12, the green is 0x34 and the blue is 0x56
pixel 2: 0x789ABC in which the red is 0x78, the green is 0x9A and the blue is 0xBC
pixel 3: 0xDEF012 in which the red is 0xDE, the green is 0xF0 and the blue is 0x12
pixel 4: 0x345678 in which the red is 0x34, the green is 0x56 and the blue is 0x78

The software would merely have to extract these individual values, add them and then (presumably) divide by 4 to produce another 24-bit RGB "superpixel", in this case 0x678567.

Surely, I have now "binned" my 4 "normal" RGB pixels into one RGB "super" pixel?
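
A minimal sketch of the channel-by-channel averaging being described (note this is software averaging of already-debayered data rather than on-sensor binning - a distinction drawn a little further down the thread):

```python
# Average four packed 24-bit RGB values channel by channel.
def average_rgb(pixels):
    r = sum((p >> 16) & 0xFF for p in pixels) // len(pixels)
    g = sum((p >> 8) & 0xFF for p in pixels) // len(pixels)
    b = sum(p & 0xFF for p in pixels) // len(pixels)
    return (r << 16) | (g << 8) | b

print(hex(average_rgb([0x123456, 0x789ABC, 0xDEF012, 0x345678])))   # 0x678567
```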

 

That said, I take your point about experience often bearing (apparently) very little similarity to theory. But there was a practical application to my original question - albeit that I was confusing two different ideas that each happened to coincide with 2" in my particular circumstances ("seeing" and "image scale"). From the perspective of my confusion, and the fact that it is considerably easier to load the small scope onto the mount than the big one, the "why would I do it?" question seemed a reasonable one. The "binning" question sort of got tacked onto that and has taken on a life of its own (albeit with my help).

Thanks.


It depends on where the binning is happening. With a CCD camera it is on the sensor and before debayering. For the ASI1600 (ignoring its hardware binning option) it happens in the driver. I suspect/expect that is also before any debayering. In post-processing it is after debayering.

 

ETA: I hesitate to call it "binning" when it takes place in the driver or post processing - downsampling is probably more appropriate. To me, binning is a hardware operation.


1 hour ago, drjolo said:

What is immediately visible is of course the lower noise in the 10" scope image, due to the larger aperture. But the stars are also smaller, and in my opinion the detail is better in the right image - the one from the 10" scope.

Thanks, Lucas. I think that admirably demonstrates the answer to my basic question: Why would I use the bigger scope?


13 minutes ago, kens said:

the thing here is that you get as good results "binning" in post processing as you do at capture time.

So there is no saving in time (whether 1/4 or 1/2 as discussed in the previous posts)?

14 minutes ago, kens said:

So if you expose your sky background, the peak of the histogram, to around 1300 or so, then read noise is not the major noise contributor.

So this is not going to be a major contributing factor to my imaging - I would normally want to be exposing to a point where the histogram is further over to the right than that anyway (assuming it runs from 0 to 65535, as you have converted 12-bit to 16-bit in your calculation)?

Thanks.

