How do I choose which cooled CMOS camera?


Petergoodhew


I would base the decision on

Budget

OSC or mono (depending on your level of light pollution and whether you plan narrowband or LRGB imaging)

Sensor size

Read noise

Pixel size ("/pixel)

The ASI 1600 has proven itself very capable; it seems to hit the sweet spot for most of the above criteria. Most cameras with larger sensors only come as OSC, while cameras with small pixels and small sensors are mainly used for planetary and galaxy imaging.
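For the pixel size criterion, the usual quick check is the image scale formula: scale ("/pixel) = 206.265 x pixel size (um) / focal length (mm). A minimal sketch of that calculation (the camera and scope values are only illustrative):

```python
def image_scale(pixel_um: float, focal_length_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_length_mm

# Illustrative values: 3.8 um pixels (ASI 1600-sized) on a 1000 mm scope.
print(round(image_scale(3.8, 1000), 2))  # ~0.78 "/pixel
```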

Link to comment
Share on other sites

I have been deep sky imaging for 13 years and over this time have used a variety of cameras - DSLR, CCD, mono, OSC. Recently I have been using an ASI 1600 Pro and wouldn't hesitate to recommend it to anyone.

It's a contentious issue but I much prefer mono to osc.

QHY also make a very nice camera using the same chip.

Link to comment
Share on other sites

I don’t think you can pick a camera – it’s a delicate balancing act between just too many variables: desired FOV and pixel resolution; focal length of scope(s); exposure length; just how many and how big image files you’re prepared to process...; noise level; ...

I just picked up my first CMOS at AstroFest and went for an ASI 294. 11 Mpixels – fewer and bigger ones than the 1600.  But I have a comprehensive spreadsheet with all my scopes and possible cameras showing FOV and resolution.  First light last night, very briefly. But what gain to use?  We’ll see how it goes!
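That spreadsheet approach only needs two formulas - image scale as above, and FOV (arcmin) = 3437.75 x sensor dimension (mm) / focal length (mm). A rough sketch with illustrative scope/camera entries (swap in your own):

```python
# Hypothetical entries - replace with your own scopes and cameras.
scopes_mm = {"1200mm refractor": 1200, "C11": 2880}
cameras = {"ASI 294 (4.63 um, 19.1 x 13.0 mm)": (4.63, 19.1, 13.0)}

for scope, fl in scopes_mm.items():
    for cam, (pix_um, w_mm, h_mm) in cameras.items():
        scale = 206.265 * pix_um / fl     # arcsec per pixel
        fov_w = 3437.75 * w_mm / fl       # field width in arcmin
        fov_h = 3437.75 * h_mm / fl       # field height in arcmin
        print(f"{scope} + {cam}: {scale:.2f} arcsec/px, "
              f"FOV {fov_w:.0f} x {fov_h:.0f} arcmin")
```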

Link to comment
Share on other sites

1 minute ago, AKB said:

I don’t think you can pick a camera – it’s a delicate balancing act between just too many variables: desired FOV and pixel resolution; focal length of scope(s); exposure length; just how many and how big image files you’re prepared to process...; noise level; ...

I just picked up my first CMOS at AstroFest and went for an ASI 294. 11 Mpixels – fewer and bigger ones than the 1600.  But I have a comprehensive spreadsheet with all my scopes and possible cameras showing FOV and resolution.  First light last night, very briefly. But what gain to use?  We’ll see how it goes!

This is true, which is why the OP finds the choice perplexing! Other than for very long focal lengths, and assuming an adequate budget, I am happy to recommend an ASI 1600 Pro or similar.  Too many of these threads end up generating a lot of heat and little light.

What are the principal issues that have been perplexing you, Peter?

Link to comment
Share on other sites

Thanks for all of the comments. I can narrow down the choice to mono and cooled (to minimise noise).

I actually have two potential uses:

1. For planetary, lunar and small deep-sky objects (planetary nebulae) on a Celestron C11 and so long focal length (2880mm). A small sensor would be fine except for lunar mosaics where a larger sensor would help.

2. As an alternative to a CCD camera on a 1200mm focal length refractor which is part of a multi-rig setup, if I cannot overcome issues of differential flexure (which seems to be an issue on most multi-rigs). My thinking is that CMOS gives me very short exposures and thus reduces the impact of any flexure.

Link to comment
Share on other sites

1 hour ago, MartinB said:

What are the principal issues that have been perplexing you, Peter?

Martin, primarily the sheer choice of models, such as ASI1600MM vs ASI183MM vs ASI178MM etc. I read that some of these are only suitable for short focal lengths (whereas my need is for long focal lengths), and yet I see amazing images on Astrobin produced with them on long focal length scopes.

Link to comment
Share on other sites

The problem with small pixels at longer focal lengths is that atmospheric turbulence smears the signal across the pixels. The consequence is that you don't benefit from the small pixel size and are better off binning your pixels. Unfortunately binning doesn't work as well with CMOS as it does with CCD (the four binned pixels are still read individually, so there is no reduction in read noise - low read noise enables shorter exposure times and gives increased dynamic range). For deep sky a CCD is probably still a better option for long focal lengths but, as you have seen, results in practice often defy the theory!

Link to comment
Share on other sites

1 hour ago, Petergoodhew said:

1. For planetary, lunar and small deep-sky objects (planetary nebulae) on a Celestron C11 and so long focal length (2880mm). A small sensor would be fine except for lunar mosaics where a larger sensor would help.

ASI174MM-cool

1 hour ago, Petergoodhew said:

2. As an alternative to a CCD Camera on a 1200mm focal length refractor which is part of a multi-rig setup if I cannot overcome issues of differential flexure (which seems to be an issue on most multi-rigs). My thinking is that CMOS gives me very short exposures and thus reducing the impact of any flexure.

ASI1600MM-cool or ASI071MC-cool

Link to comment
Share on other sites

5 hours ago, Petergoodhew said:

Thanks for all of the comments. I can narrow down the choice to mono and cooled (to minimise noise).

I actually have two potential uses:

1. For planetary, lunar and small deep-sky objects (planetary nebulae) on a Celestron C11 and so long focal length (2880mm). A small sensor would be fine except for lunar mosaics where a larger sensor would help.

2. As an alternative to a CCD camera on a 1200mm focal length refractor which is part of a multi-rig setup, if I cannot overcome issues of differential flexure (which seems to be an issue on most multi-rigs). My thinking is that CMOS gives me very short exposures and thus reduces the impact of any flexure.

For the first point - ASI290.

Planetary / lunar imaging is governed by two parameters - QE and read noise. You want to maximise the first while minimising the second. Pixel size is not important, as you can adjust it with Barlows / telecentric lenses. For use on PNs/SNRs - see whether you will have enough FOV with such a small sensor. You can always bin it in long exposure to get the needed SNR.

For the second point - I think the ASI1600 would be the best, binned x2. This will give you a resolution of about 1.3"/pixel - which is a very good resolution to work with. You can achieve something similar with the ASI183, but you will need to bin it x3 and you will be giving up FOV.

 

4 hours ago, MartinB said:

The problem with small pixels at longer focal lengths is that atmospheric turbulence smears the signal across the pixels. The consequence is that you don't benefit from the small pixel size and are better off binning your pixels. Unfortunately binning doesn't work as well with CMOS as it does with CCD (the four binned pixels are still read individually, so there is no reduction in read noise - low read noise enables shorter exposure times and gives increased dynamic range). For deep sky a CCD is probably still a better option for long focal lengths but, as you have seen, results in practice often defy the theory!

Hm, not sure on that one ...

Even if you do a 2x2 bin on CMOS to get a larger pixel, you still end up with better specs than a large-pixel CCD. The only benefit of CCD in this particular case would be a larger sensor (more field with a long focal length scope, if it provides a corrected field), as we have not seen any mono CMOS sensors larger than 4/3 as far as I know.

Let's take the ASI1600 for example - pixel size is 3.8um. If you bin it in software, you'll get the equivalent of a 7.6um pixel with a read noise of 3.4e. Do you know any such CCD?

Link to comment
Share on other sites

2 hours ago, vlaiv said:

 

Hm, not sure on that one ...

Even if you do a 2x2 bin on CMOS to get a larger pixel, you still end up with better specs than a large-pixel CCD. The only benefit of CCD in this particular case would be a larger sensor (more field with a long focal length scope, if it provides a corrected field), as we have not seen any mono CMOS sensors larger than 4/3 as far as I know.

Let's take the ASI1600 for example - pixel size is 3.8um. If you bin it in software, you'll get the equivalent of a 7.6um pixel with a read noise of 3.4e. Do you know any such CCD?

The point about binning with CCD is that the read noise stays the same with binning, but with 2x2 binning you are gaining four times the signal, which is a major benefit, especially when trying to capture very faint signal. When you bin a CMOS sensor each pixel is read, so the read noise is quadrupled; the final read noise will obviously depend on the gain applied. You could achieve a binned read noise of 3.4e but at the expense of dynamic range. An Atik one would have a binned read noise of 5e and a pixel size of 7.4um, pretty handy for imaging at 1200mm. I am the proud owner of an ASI1600 and think it is a wonderful camera. I would happily use it at 1200mm but I don't think it fully matches CCD at this focal length. However, a big apology... Peter wasn't asking for an opinion on CCD vs CMOS! So Peter, please ignore all my CCD related comments!

Link to comment
Share on other sites

2 minutes ago, MartinB said:

The point about binning with CCD is that the read noise stays the same with binning, but with 2x2 binning you are gaining four times the signal, which is a major benefit, especially when trying to capture very faint signal. When you bin a CMOS sensor each pixel is read, so the read noise is quadrupled; the final read noise will obviously depend on the gain applied. You could achieve a binned read noise of 3.4e but at the expense of dynamic range. An Atik one would have a binned read noise of 5e and a pixel size of 7.4um, pretty handy for imaging at 1200mm. I am the proud owner of an ASI1600 and think it is a wonderful camera. I would happily use it at 1200mm but I don't think it fully matches CCD at this focal length. However, a big apology... Peter wasn't asking for an opinion on CCD vs CMOS! So Peter, please ignore all my CCD related comments!

Yes, my apologies as well for going slightly off topic.

I just need to clarify this, as I believe it will be beneficial to the community.

There is no need to do anything at the expense of dynamic range, nor to fiddle with gain, to get what I described. A simple x2 bin at unity gain will do it. You are absolutely right about the difference between hardware binning and software binning, but let's leave that aside and just consider CMOS pixels as they are - and do it with the ASI1600 as an example.

Let's take the default specs for the ASI1600 and observe a single pixel - it has a 3.8um side, a read noise of 1.7e at unity gain, and a 4096e well depth at unity gain (this value is clipped only because of the 12-bit ADC; otherwise it is higher).

Now let's look at 4 adjacent pixels in a 2x2 grid and observe what happens if we sum their values.

1. The sum of the signal will have exactly the same value as a single pixel of the same QE but with 7.6um x 7.6um sides. This is true for both the target signal and the LP signal. What about shot noise? Let's do it for the target signal; the LP signal works the same way. Let V stand for the signal of one pixel - the associated noise is sqrt(V). If we add the signals we get 4V, the same as in the larger pixel. The shot noise of the larger pixel is sqrt(4V) = 2 x sqrt(V). If we add the shot noise of the 4 small pixels in quadrature we get sqrt(4 x sqrt(V)^2) = 2 x sqrt(V). The same as with the larger pixel.

2. Let's add the read noise of each of the 4 pixels. Again we have sqrt(4 x 1.7^2) = 2 x 1.7 = 3.4e.

3. It's worth mentioning that dark current is also added, and the shot noise of the dark current is also increased by a factor of 2, but with sub-zero set-point cooling this value is really small in both CCD and CMOS.

If you look at the above, a 2x2 binned CMOS pixel differs from a larger pixel of the same QE in just two things - double the read noise and double the dark current noise. And that is with software binning. There is no need to fiddle with gain, or to put it another way - when you bin in software you end up with x2 higher read noise over the baseline value (whatever gain you used to capture the data and its associated read noise).
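As a quick numerical check of the above, here is a rough simulation - the flat 100e signal level is just an assumption; the 1.7e read noise is the unity-gain figure quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0     # e- per pixel from the sky, assumed flat (illustrative)
read_noise = 1.7   # e- RMS per pixel at unity gain

# One simulated sub: per-pixel shot noise plus read noise.
frame = rng.poisson(signal, (1000, 1000)).astype(float)
frame += rng.normal(0.0, read_noise, frame.shape)

# Software 2x2 bin: sum each 2x2 block of pixels.
binned = frame.reshape(500, 2, 500, 2).sum(axis=(1, 3))

# Expected: shot noise sqrt(4 * signal) = 20e, read noise 2 * 1.7 = 3.4e.
expected = np.sqrt(4 * signal + (2 * read_noise) ** 2)
print(f"binned std = {binned.std():.2f} e-, expected = {expected:.2f} e-")
```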

Link to comment
Share on other sites

4 hours ago, vlaiv said:

Yes, my apologies as well for going slightly off topic.

I just need to clarify this, as I believe it will be beneficial to the community.

There is no need to do anything at the expense of dynamic range, nor to fiddle with gain, to get what I described. A simple x2 bin at unity gain will do it. You are absolutely right about the difference between hardware binning and software binning, but let's leave that aside and just consider CMOS pixels as they are - and do it with the ASI1600 as an example.

Let's take the default specs for the ASI1600 and observe a single pixel - it has a 3.8um side, a read noise of 1.7e at unity gain, and a 4096e well depth at unity gain (this value is clipped only because of the 12-bit ADC; otherwise it is higher).

Now let's look at 4 adjacent pixels in a 2x2 grid and observe what happens if we sum their values.

1. The sum of the signal will have exactly the same value as a single pixel of the same QE but with 7.6um x 7.6um sides. This is true for both the target signal and the LP signal. What about shot noise? Let's do it for the target signal; the LP signal works the same way. Let V stand for the signal of one pixel - the associated noise is sqrt(V). If we add the signals we get 4V, the same as in the larger pixel. The shot noise of the larger pixel is sqrt(4V) = 2 x sqrt(V). If we add the shot noise of the 4 small pixels in quadrature we get sqrt(4 x sqrt(V)^2) = 2 x sqrt(V). The same as with the larger pixel.

2. Let's add the read noise of each of the 4 pixels. Again we have sqrt(4 x 1.7^2) = 2 x 1.7 = 3.4e.

3. It's worth mentioning that dark current is also added, and the shot noise of the dark current is also increased by a factor of 2, but with sub-zero set-point cooling this value is really small in both CCD and CMOS.

If you look at the above, a 2x2 binned CMOS pixel differs from a larger pixel of the same QE in just two things - double the read noise and double the dark current noise. And that is with software binning. There is no need to fiddle with gain, or to put it another way - when you bin in software you end up with x2 higher read noise over the baseline value (whatever gain you used to capture the data and its associated read noise).

Thanks Vlaiv, I agree with all that, and you have shown nicely that read noise is doubled, not quadrupled as I had suggested. The absolute value for read noise is related to gain, of course. I also agree that binning doesn't increase the signal to shot noise ratio. Binning a CCD chip will deliver a larger benefit in improved SNR, because of the reduction in read noise, than binning a CMOS chip. This doesn't mean it's not worth binning with CMOS; in fact you are probably better off binning earlier than you would with CCD because of the pixel size. The low read noise and dark current advantages of CMOS aren't entirely lost when binned, but when comparing CCD and CMOS there is obviously more to consider than read noise and dark current. I have a venerable QSI with a KAF-3200 chip. It has 6.8 micron pixels, a max QE of 87%, a full well depth > 60K, 7e dark current, just 3.2 megapixels, crude microlensing and a chip just a little smaller than my ASI 1600. It couldn't be a more different chip to the ASI, with very different characteristics, but if I'm running my 10" LX200 it will still be my go-to camera. In fact right now it is a clear night, my ASI is attached to a 200mm lens and my QSI is running on an FSQ106 reduced to F3.7. Despite the low count of bulky pixels I still think that old chip will do a decent job, but I'd better go and check it!

Link to comment
Share on other sites

It's definitely worth binning a CMOS camera: while in the strictest sense you don't change the signal-to-noise ratio, you do increase perceived image quality by an equivalent amount. If you are over-sampling in the first instance there is no disadvantage to binning 2x2, only advantages. You will not lose any detail, as you probably did not resolve anything at the 1x1 pixel scale anyhow, but you will be able to stretch the image more, etc.

Link to comment
Share on other sites

2 hours ago, Adam J said:

It's definitely worth binning a CMOS camera: while in the strictest sense you don't change the signal-to-noise ratio, you do increase perceived image quality by an equivalent amount. If you are over-sampling in the first instance there is no disadvantage to binning 2x2, only advantages. You will not lose any detail, as you probably did not resolve anything at the 1x1 pixel scale anyhow, but you will be able to stretch the image more, etc.

What do you mean by not changing SNR in "strictest sense"?

Link to comment
Share on other sites

3 hours ago, vlaiv said:

What do you mean by not changing SNR in "strictest sense"?

Because the camera collects a given number of photons in total, integrated over all its pixels and converted to electrons. The camera also has a total amount of noise in electrons, integrated across all its pixels. Those numbers are fixed, so the S/N ratio is fixed and will not change with digital processing. Sampling 2x2 in processing does not, strictly speaking, change those values. You have improved image quality by exchanging detail for a smoother image - digital processing gain, if you will. But the data still contains the original noise and the original signal regardless.

For example, take a uniform signal across the entire sensor averaging 10e / second / pixel, a read noise of 1e and a dark current of 1e / pixel / second. Take a 1-second exposure and you will get an array of pixels with a distribution of values centred on 12e. If you then take the extreme case and average across the entire sensor, creating a single super pixel, you will get an average value of 12e, but that value will vary far less between exposures than any given single pixel value. As such you have a more accurate measurement of signal + noise, but your average value is still made up of 10e of signal and 2e of noise, and without knowing the signal level in advance you still have no way of knowing how much of the total 12e is signal and how much is noise. It could be 5e of signal and 7e of noise; you have no way of knowing exactly.

Adam
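As an illustration of the example above, a rough simulation using the same assumed numbers (10e/s signal, 1e/s dark current, 1e read noise, 1 s exposure):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 1_000_000                       # hypothetical sensor size
signal, dark, read = 10.0, 1.0, 1.0     # e-/s/px, e-/s/px, e- RMS

# One 1-second exposure: photon shot noise + dark shot noise + read noise.
sub = (rng.poisson(signal, n_pix) + rng.poisson(dark, n_pix)
       + rng.normal(0.0, read, n_pix))

# Per-pixel values scatter widely; the sensor-wide average barely moves
# between exposures (read noise has zero mean, so the average sits near
# signal + dark), but one frame still cannot separate signal from dark.
print(f"per-pixel mean {sub.mean():.2f} e-, per-pixel std {sub.std():.2f} e-")
print(f"uncertainty of the sensor-wide average: {sub.std() / np.sqrt(n_pix):.4f} e-")
```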

Link to comment
Share on other sites

2 minutes ago, Adam J said:

Because the camera collects a given number of photons in total, integrated over all its pixels and converted to electrons. The camera also has a total amount of noise in electrons, integrated across all its pixels. Those numbers are fixed, so the S/N ratio is fixed and will not change with digital processing. Sampling 2x2 in processing does not, strictly speaking, change those values. You have improved image quality by exchanging detail for a smoother image - digital processing gain, if you will. But the data still contains the original noise and the original signal regardless.

For example, take a uniform signal across the entire sensor averaging 10e / second / pixel, a read noise of 1e and a dark current of 1e / pixel / second. Take a 1-second exposure and you will get an array of pixels with a distribution of values centred on 12e. If you then take the extreme case and average across the entire sensor, creating a single super pixel, you will get an average value of 12e, but that value will vary far less between exposures than any given single pixel value. As such you have a more accurate measurement of signal, but your average value is still made up of 10e of signal and 2e of noise.

Adam

There is no inherent noise in a signal value. Noise can be thought of as a consequence of the physical processes associated with sampling / measuring that value.

Let's say that we have a light source that shines with a certain intensity (even a very low intensity, like 0.8 photons per second per some unit area). Is there noise associated with this value - 0.8 photons/s? Not intrinsically. There is, however, noise associated with the measurement / recording of that value, as you can't measure for indefinitely long. So you are restricted to a certain measurement period, and the number of photons will follow a Poisson distribution based on the real intensity value.

Measure the above value (with a perfect detector that adds no noise of its own) for 10 seconds - you will end up with one SNR value.

Measure it again (again with a perfect detector) for 100 seconds - and you will end up with a different SNR value. How can a signal have two associated noise values? It can't - it does not inherently have a noise associated with it. Noise is associated with the way the value is recorded.

Software binning can be thought of as another part of the sampling / measuring process. It changes the amount of noise associated with the signal (and other noise sources) in a real and measurable way - in the same way a longer exposure changes the noise: an alteration of the measurement procedure.

You don't even have to think about software binning 2x2 (for example) in mathematical terms. You can view it as taking 4 times as many recordings at a coarser sampling resolution. If you think about it, binning can be done by simply taking every other pixel and putting it in a separate sub.

So if we take the X-even / Y-even samples and put them in the first sub, the X-odd / Y-even samples in the second sub, ... and the X-odd / Y-odd samples in the fourth sub, we have effectively done nothing to the SNR of each pixel as recorded, but we have produced 4 times more subs. Stacking those will yield x2 better SNR. In the case of oversampling we did not even trade detail for better SNR - the detail was not there to begin with.

Binning indeed produces higher SNR for a single sub if adjacent pixels are averaged / summed, but you don't have to look at it that way - you can look at it as producing x4 more samples, which when stacked again produce the same x2 increase in SNR.

You could argue that stacking is not really increasing SNR, that it's just "smoothing" the image - but stacking is precisely increasing SNR, because it alters the data collection / measurement process, and SNR depends on this process, not on the original value.
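A minimal sketch of that even/odd split, just to show it is numerically the same as a 2x2 average bin (the array here is arbitrary):

```python
import numpy as np

def split_into_four_subs(frame: np.ndarray) -> list[np.ndarray]:
    """Split one frame into four half-resolution 'subs' by taking
    even/odd rows and columns, as described above."""
    return [frame[0::2, 0::2], frame[0::2, 1::2],
            frame[1::2, 0::2], frame[1::2, 1::2]]

frame = np.arange(16, dtype=float).reshape(4, 4)

# Stacking (averaging) the four subs...
stacked = np.mean(split_into_four_subs(frame), axis=0)
# ...equals a 2x2 average bin of the original frame.
binned = frame.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(np.allclose(stacked, binned))  # True
```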

Link to comment
Share on other sites

2 hours ago, vlaiv said:

 

There is no inherent noise in a signal value. Noise can be thought of as a consequence of the physical processes associated with sampling / measuring that value.

Never said there was.

Let's say that we have a light source that shines with a certain intensity (even a very low intensity, like 0.8 photons per second per some unit area). Is there noise associated with this value - 0.8 photons/s? Not intrinsically. There is, however, noise associated with the measurement / recording of that value, as you can't measure for indefinitely long. So you are restricted to a certain measurement period, and the number of photons will follow a Poisson distribution based on the real intensity value.

You will still get variability on any signal due to shot noise, even if the source is uniform with time.

Measure the above value (with a perfect detector that adds no noise of its own) for 10 seconds - you will end up with one SNR value.

Measure it again (again with a perfect detector) for 100 seconds - and you will end up with a different SNR value. How can a signal have two associated noise values? It can't - it does not inherently have a noise associated with it. Noise is associated with the way the value is recorded.

Up until now we have been considering binning data that is already in digital form and so variable exposure is not a factor in the argument I was making. 

Software binning can be thought of as another part of the sampling / measuring process. It changes the amount of noise associated with the signal (and other noise sources) in a real and measurable way - in the same way a longer exposure changes the noise: an alteration of the measurement procedure.

You don't even have to think about software binning 2x2 (for example) in mathematical terms. You can view it as taking 4 times as many recordings at a coarser sampling resolution. If you think about it, binning can be done by simply taking every other pixel and putting it in a separate sub.

So if we take the X-even / Y-even samples and put them in the first sub, the X-odd / Y-even samples in the second sub, ... and the X-odd / Y-odd samples in the fourth sub, we have effectively done nothing to the SNR of each pixel as recorded, but we have produced 4 times more subs. Stacking those will yield x2 better SNR. In the case of oversampling we did not even trade detail for better SNR - the detail was not there to begin with.

I think I made this exact point above when arguing that there is a point in binning CMOS data. 

Binning indeed produces higher SNR for a single sub if adjacent pixels are averaged / summed, but you don't have to look at it that way - you can look at it as producing x4 more samples, which when stacked again produce the same x2 increase in SNR.

You could argue that stacking is not really increasing SNR, that it's just "smoothing" the image - but stacking is precisely increasing SNR, because it alters the data collection / measurement process, and SNR depends on this process, not on the original value.

That is indeed my argument: by binning you are reducing measurement uncertainty by increasing sample size, and that works really well in the artificial uniform-illumination example we used, as resolution is not a factor in improving measurement accuracy. However, in a real image you're not doing that at all. If I want to measure the brightness of a 1 arc-second by 1 arc-second section of nebula, then averaging it with three other pixels will not do that; in fact those additional pixels might as well be considered additional noise and increase my measurement uncertainty - you have gained no additional information by binning. You have not reduced the measurement error in regard to how bright that 1 arc-second square is; however, if you stack more images you do, hence they are different processes.

What you may have done is increase the perceived image quality - certainly so in the case of an over-sampled setup. However, there is a point at which you will start reducing image quality if you bin: if I bin 8x8, the loss of detail in most cases will result in a decrease in perceived image quality. Hence SNR does not equal image quality once you are digitally manipulating the image, but higher SNR at readout does always equal higher image quality.

Once you are critically sampled, all you can do to increase image quality is add more subs. As such binning does not have an equivalent effect to adding more subs.

For me, I don't like using the term SNR once you have started to digitally manipulate the image, as at that point all you are doing is trading one thing for another. SNR for me is to do with the measurement process / readout itself, and anything after that is better discussed in terms of image quality. I know many people who have a preference for higher resolution with more residual 'speckle' than myself, and some people who have a preference for a much smoother image than me. Digital noise reduction is, after all, analogous to binning, but with noise reduction you at least have the option to apply it only to the parts of the image with low signal.

For me, once you are working in the digital domain your true SNR is fixed, and all you are doing is finding your personal happy medium whereby you perceive the image quality to be optimal. However, irrespective of how you digitally process the image to your own taste, you will always end up with a better image if the original SNR (prior to digital manipulation) is higher. On the other hand, more binning does not universally equal a better image.

To sum that up: higher SNR as you define it does not universally equal higher image quality beyond the highly under-sampled case. But SNR as I use it always results in higher image quality, irrespective of sampling.

Adam

 

Link to comment
Share on other sites

58 minutes ago, Adam J said:

If I want to measure the brightness of a 1 arc-second by 1 arc-second section of nebula, then averaging it with three other pixels will not do that; in fact those additional pixels might as well be considered additional noise and increase my measurement uncertainty - you have gained no additional information by binning. You have not reduced the measurement error in regard to how bright that 1 arc-second square is; however, if you stack more images you do, hence they are different processes.

Just as a thought experiment, let's say you have a 1"x1" patch of sky that contains some structure - meaning it is not uniformly lit up by whatever object is there - and we have an optical system, coupled with the atmosphere, that blurs the detail and creates uniform light in that 1"x1" patch of sky.

Now we do the following:

Sample that 1"x1" patch of sky at 1"/pixel resolution - i.e. measure it with a single pixel, by means of time integration. We do the same at a resolution of 0.5"/pixel - meaning we have 4 adjacent pixels covering the exact same patch of sky.

The single-pixel measurement will yield a certain SNR, and so will the 2x2 pixel array in the second case - each pixel will have a certain SNR. Since the 1"x1" patch contains the same level of signal throughout, because it has been smoothed out by the optics and seeing, we can say that each of the 4 pixels has the same measured value (think of it as measuring the depth of a flat-bottomed pond at four different places).

When we add up those 4 pixels, or rather their values, do we introduce anything unnatural compared with a single larger pixel? Will we get the same value as with the single pixel? I believe we will. On the other hand, each of the 0.5"x0.5" pixels will have a certain noise associated with its measured value - the measured value will deviate from the true value by a small amount. Averaging (or adding - same thing) will do the same thing as stacking 4 different measurements. We are averaging 4 different measurements of the same value. The value is the same not because of the pixels but because of the optical system and seeing.

We will indeed improve our measurement precision by adding the 4 adjacent pixels, compared with each single-pixel measurement - thus increasing SNR.

Link to comment
Share on other sites

2 minutes ago, vlaiv said:

Just as a thought experiment, let's say you have a 1"x1" patch of sky that contains some structure - meaning it is not uniformly lit up by whatever object is there - and we have an optical system, coupled with the atmosphere, that blurs the detail and creates uniform light in that 1"x1" patch of sky.

Now we do the following:

Sample that 1"x1" patch of sky at 1"/pixel resolution - i.e. measure it with a single pixel, by means of time integration. We do the same at a resolution of 0.5"/pixel - meaning we have 4 adjacent pixels covering the exact same patch of sky.

The single-pixel measurement will yield a certain SNR, and so will the 2x2 pixel array in the second case - each pixel will have a certain SNR. Since the 1"x1" patch contains the same level of signal throughout, because it has been smoothed out by the optics and seeing, we can say that each of the 4 pixels has the same measured value (think of it as measuring the depth of a flat-bottomed pond at four different places).

When we add up those 4 pixels, or rather their values, do we introduce anything unnatural compared with a single larger pixel? Will we get the same value as with the single pixel? I believe we will. On the other hand, each of the 0.5"x0.5" pixels will have a certain noise associated with its measured value - the measured value will deviate from the true value by a small amount. Averaging (or adding - same thing) will do the same thing as stacking 4 different measurements. We are averaging 4 different measurements of the same value. The value is the same not because of the pixels but because of the optical system and seeing.

We will indeed improve our measurement precision by adding the 4 adjacent pixels, compared with each single-pixel measurement - thus increasing SNR.

I agree - I did say in my last post that the exception is when over-sampled.

As a thought, for my sky I perceive that I am over-sampled even at 1.2"/pixel on all but maybe a couple of nights per year. But at the same time, if I bin I end up with 2.4"/pixel and that is too much. What I tend to do is re-sample the image to about 1.8"/pixel; it also makes the image feel smoother, of course. Yet it seems to give more detail than I get at 2.4"/pixel (2x2). Now that should not be possible if I really am over-sampled, as anything below 2x2 should give no more improvement in resolution than 2x2. As such my assumption is that detail is resolved at the pixel level, but that the subtle differences between pixels are not perceived at 1x1 because they are hidden by the pixel-to-pixel noise. As such I never shoot in 2x2 on the camera and always shoot in 1x1, as it gives me more flexibility in processing.

However, what this also implies to me is that I would see detail at 1x1 if I took more subs.

Like I say, in any case my entire argument is that while image quality is directly linked to SNR before digital processing, SNR is not a direct measure of image quality once you start using noise reduction and re-sampling / resizing.

For a moment let's imagine that image quality becomes visually pleasing once we have an SNR of >20 in the weakest part of the target that we want to bring out. The target is the Veil Nebula, so we assume it has structure down to sub-arcsecond resolution.

Example:

Image 1-1

Taken at 1.5 arc-seconds per pixel in good seeing at 1x1 with a SNR of 20/1

Image 1-2

Taken with the same setup as image 1-1 but binned 2x2 (3 arc-seconds per pixel) with a SNR of 40/1

Image 2-1

Taken at 2.0 arc-seconds per pixel with a SNR of 15/1

Image 2-2

Taken with the same setup as image 2-1 but binned 2x2 (4 arc-seconds per pixel) with a SNR of 30/1

Personally I would rank the images as follows:

Image 1-1

Image 2-1

Image 1-2

Image 2-2

So yes, while you can gain if you are over-sampled, this is not the case for the vast majority of setups, as people tend to try to avoid this when selecting their equipment.

 

Adam
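Going back to the re-sampling point above (dropping 1.2"/pixel data to about 1.8"/pixel rather than binning 2x2): a fractional resample like that only takes a couple of lines. A rough sketch, assuming scipy is available; the array and scales are placeholders:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_scale(frame: np.ndarray, scale_in: float, scale_out: float) -> np.ndarray:
    """Resample an image from one pixel scale (arcsec/px) to a coarser one."""
    return zoom(frame, scale_in / scale_out, order=3)  # cubic interpolation

# Illustrative: 1.2 "/px data resampled to 1.8 "/px (x2/3 along each axis).
frame = np.random.default_rng(2).normal(size=(1500, 1500))
print(resample_to_scale(frame, 1.2, 1.8).shape)  # (1000, 1000)
```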

 

 

Link to comment
Share on other sites

3 hours ago, Adam J said:

as such my assumption is that detail is resolved at the pixel level, but that the subtle differences between pixels are not perceived at 1x1 because they are hidden by the pixel-to-pixel noise. As such I never shoot in 2x2 on the camera and always shoot in 1x1, as it gives me more flexibility in processing.

The thing with resolved detail is quite a bit more complicated than this. You never resolve anything within a single pixel; it is the ratio between adjacent pixels that resolves detail.

From what I've gathered, most people don't understand what "resolved" means in terms of imaging, nor the impact of undersampling on the image. This is probably because most people go by double star separation and the visual "resolve" criterion - if two stars are separated by a distance larger than the radius of the Airy disk, they are resolved. This is certainly true for visual use and for point-like sources that one is trying to assess as two distinct stellar sources.

With imaging, resolving is about local contrast - it's about a smooth transition from "there is sharp contrast" to "there is no contrast at all" as you remove "detail". Shapes in the image don't vanish all of a sudden when undersampling - they start to lack contrast until they "vanish into the background" or "join into larger forms".

Here is an example - 4 images: the first is sampled at the "proper" sampling rate, the second is x2 undersampled, the third x3 undersampled and the fourth x4 undersampled - all resampled back to the original resolution (note that there are no "blocky" stars, often associated with undersampling; the idea that one should get square stars when undersampling is another myth).

[Image: the same test frame at the optimal sampling rate, then x2, x3 and x4 undersampled, all resampled back to the original resolution]

You can see that even when heavily undersampled - x3 of optimum - you can still see most of the features, and with a bit of skill you would be able to read what the text says.
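For anyone who wants to try this sort of comparison on their own data, a rough sketch of one way to simulate undersampling (average-bin by a factor, then resample back up) - nothing here is specific to the image above:

```python
import numpy as np
from scipy.ndimage import zoom

def undersample(frame: np.ndarray, factor: int) -> np.ndarray:
    """Simulate undersampling: average-bin by 'factor', then resample
    back up to the original size for a side-by-side comparison."""
    h = frame.shape[0] - frame.shape[0] % factor
    w = frame.shape[1] - frame.shape[1] % factor
    binned = frame[:h, :w].reshape(h // factor, factor,
                                   w // factor, factor).mean(axis=(1, 3))
    return zoom(binned, factor, order=1)  # bilinear back up

img = np.random.default_rng(3).normal(size=(512, 512))
versions = [undersample(img, f) for f in (2, 3, 4)]
print([v.shape for v in versions])  # back near the original size (cropped to a multiple of each factor)
```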

Link to comment
Share on other sites
