
M13 RGB


Rodd


2 minutes ago, DaveS said:

I think I have to take some of the blame, sorry. It was that image I chucked into the ring that did it :grin:.

 I'm still following it off and on, but it's getting a bit beyond me at the moment.

No blame, David - it's just further illuminated an area of AP that I don't understand. It may be peeling an onion, but if Vlaiv or someone can help peel some of the layers for me, that would be good - though it might just make me cry of course..... :icon_rolleyes:


1 hour ago, vlaiv said:

I understand what you are saying. I guess that is personal preference, and this image was obviously taken in rather poor seeing conditions.

I rather like it when an image looks like this in the detail it presents (that is the equivalent of x4 bin for this particular image):

[image attachment: the example image at the bin x4 equivalent scale]

Now there is no trace of noise in the image, the contrast is good and the image looks sharp. I would be more than happy if the image looked like that at this resolution:

[image attachment: the same data at 50% of the original size - bin x2 equivalent]

But it starts to go a little soft and a little noisy at this resolution (which is 50% of original - equivalent to bin x2). By the way - this is in the ballpark of what can be expected from amateur setups with larger apertures and good mounts - a sampling rate of about 1"/px-1.2"/px. Anything below that is 99.99% unrealistic for amateurs in long exposure imaging.

Ahh--this brings us to an ancillary point--focal length and aperture.  The reason I am using the C11 is to image small galaxies that the TOA 130 can't really resolve.  Even if the pixel scale is identical--say 0.7 arcsec/pix--with the TOA, NGC 6050, say, will be a speck.  Enlarging it will not be fruitful--there are not enough photons to allow it.  At 2800mm with 11" of aperture, the galaxy (three actually) will still be small, but at full resolution it will be recognizable.  I guess what I am saying is that to image above a certain focal length you have to have enough aperture to support it.  A 4" scope shooting at 5,000mm will not yield decent results for tiny objects, while a 14" scope shooting at 5,000mm can yield excellent results.  Even if resolution is somehow the same between the systems (same pixel scale), there is obviously a benefit beyond the other idea of resolution (sharpness and fine detail).  Note to those who get confused here--there are three meanings of the word resolution (that I am aware of): one technical meaning, which is the number of pixels (i.e. the resolution of a camera expressed in MP); another technical meaning, which is pixel scale (how fine a detail can be picked up by a system, expressed in arcsec/pixel); and the colloquial, lay person's meaning of how sharp and clear an image looks, expressed in terms of clarity.


48 minutes ago, geoflewis said:

Hi everyone,

I'm continuing to enjoy this discussion with the addition of @DaveS image and binning experiment, but it's taken me into territory that I really don't understand, so I'm hoping that someone can clarify for me. The bottom line is that I don't fully understand the terminology, nor the real effects on an image of some of the operations being discussed. What do terms like downsample, upscale, binning actually mean in terms of image quality, resolution, or whatever is the correct term? I think I understand what hardware binning means, as that actually combines a grid of pixels (2x2, 3x3, 4x4, etc) on the sensor into an effective single pixel before downloading the data. Software binning is similar (?), but works differently (I think), but what about downsample, upscale, etc. Are these terms (especially downsample and binning) interchangeable, and if not then what is the difference? If I bin 2x2 in software, then upscale x2, do I end up in 'exactly' the same place? What if I resize/upscale x2 then bin 2x2, is this also the same, i.e. what is actually happening with the image data as these operations are applied? Are resize x2 and upscale x2 even the same? What is downsampling? Is it the opposite of upscaling? Sorry if I'm showing my ignorance too much, but I see these terms bandied about all over the place, and I can't find an authoritative reference to distinguish between some of them. Anyone fancy taking a punt at this for me please?

Rodd, sorry for hijacking your thread, but it was following the discussion that started with your superb M13 that got me here.

OK, so let's clarify things.

First is sampling rate / resolution. Resolution is maybe not the best term to use, as it appears in so many contexts and has different meanings.

Sampling rate just means how many pixels / sample points cover a certain part of the sky. It is expressed in arc seconds per pixel. It can be thought of as the size of a pixel at a certain focal length - or maybe a better way to think about it is as the distance between image samples (hence sampling rate).

When we work with an image - samples don't have size. Physical pixels have size, but samples are dimensionless points - just "coordinates" of the places where we measure light intensity. Sampling rate is the distance between these points.
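
As a concrete aside (my own illustration, not part of the post): the sampling rate follows directly from pixel size and focal length via the usual small-angle formula. A minimal Python sketch, with example values picked purely for illustration:

```python
# Sampling rate ("/px) from pixel size and focal length - illustrative sketch.
# 206.265 is 206265 arcsec per radian, scaled for pixel size in um and focal length in mm.
def sampling_rate_arcsec_per_px(pixel_size_um: float, focal_length_mm: float) -> float:
    """Angular distance on the sky between adjacent sample points."""
    return 206.265 * pixel_size_um / focal_length_mm

print(sampling_rate_arcsec_per_px(3.8, 2800.0))   # ~0.28 "/px (example values only)
```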

Up sampling and down sampling simply mean a change of sampling rate. The general process is called resampling - changing the sampling rate.

Up sampling means adding more sampling points - sampling at smaller intervals, while down sampling means using fewer sampling points.

The result of up sampling is a lower number of arc seconds per pixel, while the result of down sampling is a higher number of arc seconds per pixel. Say the original sampling rate is 1"/px - or in other words - there is one arc second between each pair of consecutive samples (both in the X and Y directions).

The up sampled image would then have 0.5"/px and the down sampled image would for example have 2"/px (I used x2 in this example to go both up and down - but you can use any number).

Binning is one form of resampling. In some ways it is the same as any other type of resampling, but in other ways it is quite special.

It is the same as other resampling methods in that it changes the sampling rate - in particular, it down samples by a factor equal to the bin factor. Say you bin x2 - you get a twice down sampled image. For bin x3 - you get a 3 times down sampled image.

It is specific in that it:

- has a very precisely defined improvement in SNR - the improvement in SNR is equal to the bin factor - bin x2 yields a x2 improvement in SNR, bin x3 yields a x3 improvement in SNR, etc ... (see the sketch just after this list)

- does not introduce sample to sample correlation (this is math stuff - we can go into it - but it's not that important for this discussion)

- increases "pixel blur". This is equivalent to saying that there is no difference between hardware and software binning - the result of binning in software, where we add / average a group of samples, is the same as imaging with an equivalently larger pixel. This pixel blur is a consequence of the fact that pixels have physical size and are not point samples, yet we treat the values they produce as point samples (btw - there is no point in treating them otherwise - in case you were wondering what if ...)
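
To illustrate the first point numerically, here is a small sketch (mine, with made-up numbers, not from the post): average-binning a uniform noisy frame and checking that the noise drops by roughly the bin factor.

```python
import numpy as np

# Illustrative only: constant signal plus Gaussian noise, then software average binning.
rng = np.random.default_rng(0)
img = 100.0 + rng.normal(0.0, 10.0, size=(1200, 1200))   # noise sigma = 10

def bin_average(data: np.ndarray, factor: int) -> np.ndarray:
    """Average-bin a 2D array by an integer factor (shape assumed divisible)."""
    h, w = data.shape
    return data.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

for f in (2, 3, 4):
    print(f, round(img.std() / bin_average(img, f).std(), 2))   # ~2, ~3, ~4 - SNR improves by the bin factor
```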

In the end - it is important to understand that a certain sampling rate can record information of a certain resolution. You can't record finer detail than what is possible at a given sampling rate. For example - you can't resolve two stars that are separated by a smaller distance than the sampling rate. This sort of stands to reason.

There is a well defined theory about sampling and the famous Nyquist-Shannon sampling theorem, which defines when a signal can be perfectly restored. It needs to be band limited and you need to sample at twice the highest frequency component of that band limited signal in order to record it in such a way that it is possible to perfectly reconstruct it.

Up sampling will not decrease the level of detail - but it will not add any detail either.

Down sampling (and binning as a form of down sampling) will lose detail that is finer than the particular sampling rate can sustain (from the Nyquist criterion above - this really means that we will lose high frequency components above half the sampling frequency). However - if the image is blurred and does not contain that finer detail - we can down sample it without losing anything - it can be restored to the original even from the lower sampling rate.

In the example above, where I down sampled the image and then up sampled it - I played on that card. Up sampling will not do anything in terms of detail loss / gain, and down sampling will only lose detail if the detail is there. If it is not - the result will be the same.
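
A small 1D sketch of that last point (my own, purely illustrative): if the data is already blurred enough, sampling it more coarsely and then interpolating back reproduces the original almost exactly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
fine = gaussian_filter1d(rng.normal(size=2000), sigma=12.0)  # heavily blurred ("band limited") signal
x = np.arange(fine.size)

coarse = fine[::3]                       # down sample: keep every 3rd sample point
restored = np.interp(x, x[::3], coarse)  # up sample back by interpolation

# the difference is a small fraction of the signal range, because there was no fine detail to lose
print(np.max(np.abs(restored - fine)) / np.ptp(fine))
```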

As for other differences between other sorts of down sampling and binning:

- other types of resampling first interpolate the point samples with a function:

[diagram: pixel values drawn as bars, sample points as red dots at the pixel centres, and a green interpolation curve drawn through them]

Let's assume that the bars are pixels, the red dots are sample points (the values at the "centers" of those pixels) and the green line is the interpolation function.

Now that we have the green line - we can just "measure" it at any point we want - we can take a set of sampling points at the wanted spacing and get a new set of samples - either more of them - up sampling, or fewer of them - down sampling.

The type of curve that we "draw" through our original set of sampling points defines the type of interpolation that the resampling uses. It can be bilinear (meaning just linear in both X and Y), bicubic (which is cubic in X and Y) and various others like splines, or windowed functions like Lanczos - which is windowed Sinc interpolation - and so on ...

All those different interpolation algorithms have different properties and do different things to your data. They improve SNR by different factors, cause different levels of blur similar to pixel blur, etc ...

If you look at it like that, then binning x2 is the same as a half-sample shift plus down sampling using bilinear interpolation.
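
That equivalence is easy to check in 1D (again my own sketch): averaging pairs gives the same values as linearly interpolating the original samples at the half-shifted positions 0.5, 2.5, 4.5, ...

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.random(16)

binned_x2 = v.reshape(-1, 2).mean(axis=1)                                   # bin x2 by averaging pairs
half_shift = np.interp(np.arange(0.5, v.size, 2.0), np.arange(v.size), v)  # linear interp at 0.5, 2.5, ...

print(np.allclose(binned_x2, half_shift))   # True
```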

Hope this helps a bit in understanding samples / resampling and binning.

 

 


@vlaiv many thanks for the detailed reply. I understood some, but by no means all of it. Certainly the section on sampling rates helps, and your clarifying that binning is a form, albeit a specialist form, of resampling is very good, thanks. I am going to read and re-read these notes several times to see if I can force a bit more understanding into my grey matter.

I have another question, which arises because when I checked my ImagesPlus astro processing software I see that there are 3 operation methods for binning - Add, Average and Median - for each of bin 2x2, 3x3, 4x4, 5x5. So when folks say bin 2x2, which operation do they mean: Add, Average or Median? I note that 'Add' clearly brightens the image, whereas 'Average' and 'Median' somewhat preserve similar brightness to the original, but perhaps are not quite the same.

The ImagesPlus reference summary for the binning function says...

 "Bin Image - Applies an add, average, or median 2x2, 3x3, 4x4, or 5x5 bin operation to an open image."

That's not the most helpful note, as it gives no clue as to when to use which of those operations, so I never have used them...

TIA


1 hour ago, geoflewis said:

@vlaiv many thanks for the detailed reply. I understood some, but by no means all of it. Certainly the section on sampling rates helps, and your clarifying that binning is a form, albeit a specialist form, of resampling is very good, thanks. I am going to read and re-read these notes several times to see if I can force a bit more understanding into my grey matter.

I have another question, which arises because when I checked my ImagesPlus astro processing software I see that there are 3 operation methods for binning - Add, Average and Median - for each of bin 2x2, 3x3, 4x4, 5x5. So when folks say bin 2x2, which operation do they mean: Add, Average or Median? I note that 'Add' clearly brightens the image, whereas 'Average' and 'Median' somewhat preserve similar brightness to the original, but perhaps are not quite the same.

The ImagesPlus reference summary for the binning function says...

 "Bin Image - Applies an add, average, or median 2x2, 3x3, 4x4, or 5x5 bin operation to an open image."

That's not the most helpful note, as it gives no clue as to when to use which of those operations, so I never have used them...

TIA

Add, average and median refer to the math operation used to produce a single value (a single sample) from multiple adjacent pixel values - in the bin x2 case there are 4 samples / pixel values that are used (in bin x3 it's nine - a 3x3 grid of samples for each resulting sample).

Add and average are virtually the same if the underlying data type is a floating point number. If the underlying data type is integer - different things happen.

Add can saturate, as the resulting value is the sum of the individual values and hence higher than each - it can hit the max for that integer type. With average - it can lead to loss of precision. For example - what is the average of 2 and 3? It is obviously 2.5 - but if you need to store it as an integer value - you need to discard that decimal place and you'll store it as 2 (or 3 if you round up) - so there is additional error introduced in some cases.

If the numbers are floating point - then add and average are the same in principle. Add behaves like regular hardware binning - where electrons from each pixel are joined and then read out - here they are "joined" by addition. Average is just the addition divided by the number of samples - and for each group of 2x2 pixels that divider is the same - it's 4. Average binning is the same as hardware binning where you changed e/ADU by a factor of 4.

As far as the content of the image is concerned - they are identical (as you can scale and set black and white point on any image in processing).

Median is there just for special cases. It is pretty much the same as median stacking. Sometimes it can do a better job if your camera has very strange noise patterns - it will for example do a better job of eliminating hot pixels if you don't use darks or a hot pixel map. It is less sensitive to outliers and can sometimes be used instead of average (in fact - for a perfect Gaussian distribution both median and average give the same result; that is why median is sometimes used for bias frames - those should be pure Gaussian type noise).
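
To make the three operations and the integer pitfalls concrete, here is a small numeric sketch (mine; the pixel values are just the ones geoflewis uses a little further down, 15k / 18k / 18k / 20k):

```python
import numpy as np

group = np.array([15000.0, 18000.0, 18000.0, 20000.0])   # one 2x2 group of pixel values

print(group.sum())        # add:     71000 - would clip in 16 bit integer data (max 65535)
print(group.mean())       # average: 17750 - just the sum divided by 4
print(np.median(group))   # median:  18000 - a genuinely different operation

# the integer precision issue: the true average of 2 and 3 is 2.5
print(np.array([2, 3]).mean())                        # 2.5 when computed in floating point
print((np.uint16(2) + np.uint16(3)) // np.uint16(2))  # 2 when forced to stay integer - the 0.5 is lost
```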


@vlaiv - Again thank you for the reply, but I am still not clear which option I should use when. I understood the mathematical differences between add, average and median, and I usually use median for sigma stacking as I understand that it handles outliers better. When it comes to binning I was thinking average, as I process at 32bit floating point, so was thinking average would be best.

8 hours ago, vlaiv said:

Add and average are virtually the same if the underlying data type is a floating point number. If the underlying data type is integer - different things happen.

Did you mean add and average are the same, or median and average are the same? Add seems completely different to me, as you also state.....

8 hours ago, vlaiv said:

Add can saturate, as the resulting value is the sum of the individual values and hence higher than each - it can hit the max for that integer type.

This is what makes me think that add in software binning is not a good option, as if I have a region of 4 pixels with ADU values of say 15k, 18k, 18k, 20k, then their sum is 71k, but the max is ~65k, so the result will be saturation, which isn't good.

8 hours ago, vlaiv said:

As far as the content of the image is concerned - they are identical (as you can scale and set black and white point on any image in processing).

Which are identical? All 3 or just Average and Median? If all 3 then how can setting white point 'correct' a saturated binned pixel from Addition, like in my example above - addition above 65k ADU is stuck at 65k regardless of the value of the binned pixels.

Vlaiv, you are really helping to gradually unlock some of this for me, so I hope that you don't mind the slew of questions while I continue to pick away at this.

Best regards,


1 hour ago, geoflewis said:

Again thank you for the reply, but I am still not clear which option I should use when. I understood the mathematical differences between add, average and median, and I usually use median for sigma stacking as I understand that it handles outliers better. When it comes to binning I was thinking average, as I process at 32bit floating point, so was thinking average would be best.

Don't use median if you use 32bit float (as you should). Sigma stacking (or sigma clip / sigma-kappa reject, and all the other names it comes under) handles outliers with a regular average.

Average produces a better value than median. For a perfect distribution with a very large number of samples, median and average tend to each other - but for real data, average is often better.

1 hour ago, geoflewis said:

Did you mean add and average are the same, or median and average are the same? Add seems completely different to me, as you also state.....

As far as the image is concerned, if you work with 32bit values - add and average are the same - or at least equivalent.

Think of it this way. If you have an image in 32bit format and you multiply it by some number (meaning you multiply every pixel value by that number), will the image change? An image represents the relative intensities of pixels with respect to other pixels, and when you add a constant value or multiply by a constant value - the content of the image does not change.

You can get the same image using two minute exposures or four minute exposures. You can get the same image using 1e/ADU or 0.5e/ADU gain.

Both of these "multiply" the image by some value. You then further multiply pixel values in processing to get them into the wanted range for color rendition - you do linear scaling (and non linear - but I'm not talking about that part).

The only difference between addition and average is that for the average you divide at the end by some value. The sum of 4 numbers is a+b+c+d, while the average of those numbers is (a+b+c+d)/4. If you divide every pixel in the image (and you will, because you'll calculate the average of every pixel group) - that is the same as dividing the whole image by the same number.

Take an average binned image and multiply it by a number and you'll get the sum binned image - and you can do it the other way around as well.

Take a median binned image - and there is no theoretical way to convert that into either the average or the sum binned image.
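
A quick check of that relationship (my own sketch with random numbers): the sum-binned result is just the average-binned result times 4 (for bin x2), while the median-binned result is not a simple rescaling of either.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((8, 8))
groups = img.reshape(4, 2, 4, 2).swapaxes(1, 2).reshape(4, 4, 4)   # each 2x2 group laid out as 4 values

print(np.allclose(groups.sum(axis=2), 4 * groups.mean(axis=2)))    # True - same image, different scaling
print(np.allclose(np.median(groups, axis=2), groups.mean(axis=2))) # False in general
```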

2 hours ago, geoflewis said:

This is what makes me think that add in software binning is not a good option, as if I have a region of 4 pixels with ADU values of say 15k, 18k, 18k, 20k, then their sum is 71k, but the max is ~65k, so the result will be saturation, which isn't good.

Quite the opposite. This shows that software binning is better than hardware binning - or binning in firmware. Both hardware binning and software binning "in camera" (or in camera firmware) still work with 16 bit data and there is an upper limit to how large a number you can store - it will produce clipping in the example you quoted.

If you work with 32bit floating point numbers (and you should), well, there is also a max value there, but it is something like ≈ 3.4028235 × 10^38 and you are not likely to reach it even if you add up all the pixels in the image.

When working with 32bit float - you can bin and you will neither clip nor lose precision due to rounding.

2 hours ago, geoflewis said:

Which are identical? All 3 or just Average and Median? If all 3 then how can setting white point 'correct' a saturated binned pixel from Addition, like in my example above - addition above 65k ADU is stuck at 65k regardless of the value of the binned pixels.

To reiterate:

- average and add are identical if you use 32bit float precision; median is different. You can easily get the sum binned image from the average binned image by multiplying pixel values by the number of pixels in each bin group (4 for bin x2), and the same is true in reverse - divide the sum binned image by that number to get the average. There is no way - even in theory - to reconstruct either the average or the sum bin from the median bin, and vice versa

- average and add produce good / correct / unsaturated / not-rounded values when you use 32bit floating point precision. This is better than hardware binning or software binning "in firmware".


2 hours ago, vlaiv said:

Quite the opposite. This shows that software binning is better than hardware binning - or binning in firmware. Both hardware binning and software binning "in camera" (or in camera firmware) still work with 16 bit data and there is an upper limit to how large a number you can store - it will produce clipping in the example you quoted.

If you work with 32bit floating point numbers (and you should), well, there is also a max value there, but it is something like ≈ 3.4028235 × 10^38 and you are not likely to reach it even if you add up all the pixels in the image.

Wow--did not know it is better.  However, when binning in hardware or in firmware (in camera) each sub is binned individually, so there is no chance of exceeding the 16 bit capacity, especially with 12 and 14 bit cameras.  Binning a five hour stack in software is, of course, another story.  If we remove the 16 bit capacity issue, do you still feel software binning is superior to hardware binning?


26 minutes ago, Rodd said:

Wow--did not know it is better.  However, when binning in hardware or in firmware (in camera) each sub is binned individually, so there is no chance of exceeding the 16 bit capacity, especially with 12 and 14 bit cameras.  Binning a five hour stack in software is, of course, another story.  If we remove the 16 bit capacity issue, do you still feel software binning is superior to hardware binning?

You can still reach 16bit capacity in some cases.

If you have 14bit camera - anything over bin x2 will potentially overflow 16bits. With 12bit camera - it is harder to do so - you need to bin more than 4x4 in order to potentially overflow.
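
The arithmetic behind that, as a back-of-envelope check (my own, assuming add-binning of fully saturated pixels):

```python
for bits in (14, 12):
    max_adu = 2**bits - 1
    for factor in (2, 3, 4, 5):
        total = max_adu * factor**2
        print(f"{bits}-bit, bin x{factor}: {total} ->", "overflows 16 bit" if total > 65535 else "fits")
# 14-bit: bin x2 just fits (65532), x3 and above overflow; 12-bit: only above 4x4 overflows
```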

I still feel that binning in software is better than binning on camera / in firmware, for a couple more reasons.

1. You can use different bin methods - like "smart" binning, which will take into account the noise statistics of each pixel in the group and possibly hot / dead pixels, and assign weights. Normally each pixel is weighted at 0.25 with average bin (1/4 of the sum) - but if you can tell that a pixel is noisier - CMOS cameras often have telegraph type noise - then based on pixel values and that statistic you can assign different weights.

For example - let's say that you have a group of 2x2 pixels and you know that one of those pixels exhibits telegraph type noise (you know this from the statistics of your darks); you also know that those pixels represent background (from their values compared to other values in the image) - you can then decide to lower the contribution of that noisy pixel.

Similarly - if you have a dead pixel that does not record any value - you simply exclude it from the group and average the other three pixels.

2. Regular binning produces something called pixel blur. Or rather - the fact that pixels have physical dimensions (and are not simple points) causes pixel blur. When you bin - you are getting the same effect as using a larger pixel. Larger pixel - larger pixel blur.

Since we align and stack our subs - we can exploit this and do special kind of binning - split bin.

Split bin produces all the same effects as regular binning - reduction in sampling rate, predictable improvement in SNR, no pixel to pixel correlation - but it avoids the increased pixel blur.

This is not very well known (I'm not sure if anyone besides me has thought of it really - I've never heard of it other than me thinking of it and testing it out - it works) - but it can be easily implemented (sketched below).

It works by producing 4 smaller subs from a regular sub (for x2 bin) - or 9 smaller subs for x3 bin ... Pixel size is kept the same (no increase in pixel blur) - but since we use every other (or every third) pixel - the sampling rate is reduced. We end up stacking many more subs - and the improvement in SNR is obvious.
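
Here is a minimal sketch of how I read that description (my own interpretation and naming, not the actual implementation): for x2, each calibrated sub is split into 4 half-sampling-rate subs by taking every other pixel, keeping the original pixel size, and all 4 then go into alignment and stacking like ordinary subs.

```python
import numpy as np

def split_bin_x2(sub: np.ndarray) -> list:
    """Split one sub into 4 subs at half the sampling rate, without enlarging the pixels."""
    return [sub[0::2, 0::2], sub[0::2, 1::2],
            sub[1::2, 0::2], sub[1::2, 1::2]]

# a stack of N full-resolution subs becomes a stack of 4N smaller subs,
# so the SNR improvement comes from the deeper stack rather than from bigger pixels
```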

3. You can do fractional binning - or a combination of resampling and binning. If you are far from your target sampling rate - then you can do this kind of combination. It will result in a bit lower SNR than using pure binning - but a higher SNR than binning to the "closest" integer bin.

Say you natively have 0.6"/px - but you figure out that optimum sampling rate is 1.5"/px. You can bin x2 to get to 1.2"/px - and that is still over sampling, or you can bin x3 to get to 1.8"/px. Those work with integer bin factors.

How about binning x2.5 - to get exactly 1.5"/px? You can't do that with regular binning - but you can with various fractional binning techniques.
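
One possible way to approximate that (again my own sketch, not a feature of any particular program): do the integer part with a plain average bin, then resample the rest of the way with ordinary interpolation, e.g. 0.6"/px -> bin x2 -> 1.2"/px -> resample x1.25 -> 1.5"/px.

```python
import numpy as np
from scipy.ndimage import zoom

def fractional_bin(img: np.ndarray, total_factor: float, integer_part: int = 2) -> np.ndarray:
    h, w = img.shape
    img = img[:h - h % integer_part, :w - w % integer_part]        # crop to a multiple of the bin factor
    binned = img.reshape(img.shape[0] // integer_part, integer_part,
                         img.shape[1] // integer_part, integer_part).mean(axis=(1, 3))
    return zoom(binned, integer_part / total_factor, order=3)      # resample the remaining factor

# e.g. fractional_bin(sub, total_factor=2.5) to go from 0.6"/px to roughly 1.5"/px
```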

 


6 minutes ago, vlaiv said:

I still feel that binning in software is better than binning on camera / in firmware, for a couple more reasons.

Perhaps all true--but for the person who uses regular software to bin, such as PI (for example), and does not write their own scripts, most of these advanced methods are not part of the tool box.  So my question is, how many of these advantages are realized by the typical imager - for as you say, few but you (and only you in one case) are even aware of these things, let alone have the ability to take advantage of them.  It would be kind of like saying "CMOS is better than CCD because the $4,000,000 CMOS sensor I work with at NASA is far superior to the CCDs one can get at the local store".  So, for normal people, such as someone using generic resampling tools in PI, is software binning still better?

 

12 minutes ago, vlaiv said:

If you have 14bit camera - anything over bin x2 will potentially overflow 16bits. With 12bit camera - it is harder to do so - you need to bin more than 4x4 in order to potentially overflow.

Remember, most of the targets we are imaging are very faint--sometimes not particularly well defined in a single sub--even with a 16 bit camera.  For faint targets, say the IFN, or faint galaxies, you think 16 bit capacity will be exceeded?


3 minutes ago, Rodd said:

Remember, most of the targets we are imaging are very faint--sometimes not particularly well defined in a single sub--even with a 16 bit camera.  For faint targets, say the IFN, or faint galaxies, you think 16 bit capacity will be exceeded?

No, of course not - it will be exceeded only where it is normally exceeded - bright star cores or perhaps galaxy cores.

It really does not matter if those are exceeded - if we use short subs to fill in those parts.

5 minutes ago, Rodd said:

Perhaps all true--but for the person who uses regular software to bin, such as PI (for example), and does not write their own scripts, most of these advanced methods are not part of the tool box.  So my question is, how many of these advantages are realized by the typical imager - for as you say, few but you (and only you in one case) are even aware of these things, let alone have the ability to take advantage of them.  It would be kind of like saying "CMOS is better than CCD because the $4,000,000 CMOS sensor I work with at NASA is far superior to the CCDs one can get at the local store".  So, for normal people, such as someone using generic resampling tools in PI, is software binning still better?

You have a point there - it is down to software availability.

Most people just use what their software offers them. This does not mean we should just stick to basic functionality - it means that our software should be upgraded with new features, and then people will use them.

I still think that it is better - regardless of the fact that it is not readily available.

I also think that a Ferrari is a better sports car than my Dacia Duster - but I still drive a Dacia Duster :D - a matter of availability.


10 minutes ago, vlaiv said:

No, of course not - it will be exceeded only where it is normally exceeded - bright star cores or perhaps galaxy cores.

It really does not matter if those are exceeded - if we use short subs to fill in those parts.

Here's a question for you....If we bin in hardware--the pixels are combined, their FWC increased, and data is collected, let's say not exceeding the FWC.  For software binning (or firmware in camera) this is done after the data is collected.  So if 4 pixels are combined into 1 (bin 2), and one or two of those pixels were saturated prior to being combined--won't that impact the binned super pixel?  Clipping will be passed on to the super pixel and only half of it will really be viable.  Once the data is clipped--it's lost, and can't be retrieved just by increasing the size of the bucket that the data is put into.  So won't the unclipped pixels combine their data into the super pixel, but the clipped pixels only be able to impart the data they contain--which does not represent all the data that would be maintained in hardware binning?

There is a high probability that we may miss one another with this due to my inability to articulate what I am really trying to say.  


18 minutes ago, Rodd said:

Here's a question for you....If we bin in hardware--the pixels are combined, their FWC increased, and data is collected, let's say not exceeding the FWC.  For software binning (or firmware in camera) this is done after the data is collected.  So if 4 pixels are combined into 1 (bin 2), and one or two of those pixels were saturated prior to being combined--won't that impact the binned super pixel?  Clipping will be passed on to the super pixel and only half of it will really be viable.  Once the data is clipped--it's lost, and can't be retrieved just by increasing the size of the bucket that the data is put into.  So won't the unclipped pixels combine their data into the super pixel, but the clipped pixels only be able to impart the data they contain--which does not represent all the data that would be maintained in hardware binning?

There is a high probability that we may miss one another with this due to my inability to articulate what I am really trying to say.  

I understand what you want to say - and you are right - but that happens to both hardware and software binning.

When you image with hardware binning turned on - binning happens on read out. With a CCD sensor, the ADC is away from the pixels - and electrons are "marshalled" to the ADC column by column. This is where pixel binning happens. But before that there is the pixel's potential well that gathers electrons - and if you saturate that one - any surplus electrons are drained away by the ABG (anti-blooming gate).

Only CCD sensors without ABG keep those electrons and let them "spill over" into other pixels - this creates the funny looking saturated columns on bright stars that look like this:

[image: blooming on a bright, saturated star - the excess charge bleeds into a bright vertical column]

In any case, with an anti-blooming gate - the excess electrons are lost before binning - in both CCDs and CMOS sensors - in both hardware and software binning.


24 minutes ago, vlaiv said:

but that happens to both hardware and software binning

But what I am saying is that it could happen if you software bin but not if you hardware bin, under the right circumstances.  In hardware binning, let's say the FWC is 80,000 (20,000 x 4).  And you fill it up to 79,000...no clipping.  But if you fill up the 4 unbinned pixels to 30,000, 30,000, 15,000 and 15,000--20,000 of those photons will be lost to clipping, so you would be left with 20,000 + 20,000 + 15,000 + 15,000, which is 70,000.  You have lost 20,000.


15 minutes ago, Rodd said:

In hardware binning, let's say the FWC is 80,000 (20,000 x 4).  And you fill it up to 79,000...no clipping.

There is no single pixel well for binned pixels in hardware binning. There are again 4 x 20,000 wells and not a single 80,000 well.

If you manage to put 79,000e into those 4 x 20,000 wells without saturation, then you'll be able to do so in software as well. If any one of those 20,000 wells saturates - it will do so in hardware binning as well.

Hardware binning is the same as software binning that adds electrons together - but it adds them after the exposure and before the ADC stage - before read noise is added - so we get a single dose of read noise for the total value.

In fact - it looks like hardware binning might have issues at higher bin factors that software binning does not have.

Quote

Hi Mike,

It depends.  With CCDs, binning is accomplished by first transferring the rows into the "serial register", which is like a CCD row except covered so light can't get to it.  Each pixel in the serial register typically has 2x the well capacity of an imaging pixel.  So binning 2x rows won't overflow the serial register.  Binning 3x rows might overflow.  Then, the serial register is moved, pixel by pixel, into the "summing well," which is what is sampled for the readout.  Again, it looks like an imaging pixel, but with an opaque cover so that no incident light interferes.

For most scientific CCDs, like those from E2V, the summing well has 4x the well capacity of an imaging pixel.  So binning 2x2 won't overflow the summing well.  Binning 3x3 may overflow.  For most commercial CCDs, the summing well only has 2x the well capacity of an imaging pixel.  So even binning 2x2 may overflow that "pixel".

The best way to tell is to do a linearity curve and see where overflow begins.  If you can also obtain the gain factor, then you know how many electrons are involved.  If you look at Berry & Burnell's great book, you can get information on how to perform these tests.

Arne

Taken from here:

https://www.aavso.org/does-binned-full-well-capacity-differ-among-different-vendor-camera-designs

As you can see - first two pixels are summed - but each of them can saturate its own well prior to moving into the serial register. Then the serial register is read out and two successive pixels are again summed in the summing well - which has x4 capacity - meaning that the summing process won't overflow for 2x2 bin (but it might for 3x3 - unless both the serial register and the summing well are designed with enough capacity).

However - individual pixel full well capacity is still a limit - you don't get to "overflow" electrons from one pixel into another to save them for the binning stage - if a pixel hits saturation, it will saturate prior to any binning being done.


8 minutes ago, vlaiv said:

As you can see -

So the concept of 4 pixels being added together to equal one big pixel is just not true.  That is what is often explained.  They make it sound like the pixels are added together on the sensor and after 2x2 binning you have pixels that are 4x as big.  They make it sound like the super pixel acts like a normal pixel--just more sensitive, with a greater FWC.  As if the boundary between pixels is removed and they merge.  Not true.  Not true at all.  Really, 2x2 binning is only creating a larger receptacle (in the serial register) where data from the individual pixels is measured together.  So in reality--hardware binning is still software binning--it's just done in the readout registers of the sensor as opposed to a camera's firmware--or a computer.


1 minute ago, Rodd said:

So the concept of 4 pixels being added together to equal one big pixel is just not true.  That is what is often explained.  They make it sound like the pixels are added together on the sensor and after 2x2 binning you have pixels that are 4x as big.  They make it sound like the super pixel acts like a normal pixel--just more sensitive, with a greater FWC.  As if the boundary between pixels is removed and they merge.  Not true.  Not true at all.  Really, 2x2 binning is only creating a larger receptacle (in the serial register) where data from the individual pixels is measured together.  So in reality--hardware binning is still software binning--it's just done in the readout registers of the sensor as opposed to a camera's firmware--or a computer.

Except for the small issue of FWC - for all intents and purposes - both software and hardware binning are the same as using a larger pixel.

If you don't saturate individual pixels in the 2x2 group - you won't notice any difference - and if you saturate one pixel - odds are you are going to saturate the others as well. If there is a big difference between adjacent pixels - well, that means detail, and you are under sampling - most of the values will be very similar in that 2x2 group, as the signal changes very gradually - it is smooth (except for noise of course).

In any case - it is the same except for that FWC thing - and saturation in pixels is handled differently (by using shorter filler exposures). There is no camera that has infinite FWC - and there will always be a star that is bright enough to saturate your pixels - no matter how big the FWC is.


7 minutes ago, vlaiv said:

Except for the small issue of FWC - for all intents and purposes - both software and hardware binning are the same as using a larger pixel.

Not the larger pixel to which I refer.  The explanation often given is that the larger pixel works as I said--it has a 100,000 FWC, say, and it acts like a single pixel, only clipping if you exceed 100,000.  But in reality, it does not.  The individual pixels that make up this super pixel still behave as individual pixels, and if their FWC is exceeded--that will be passed on to the super pixel and data will be lost...or more specifically, not as much data will be collected (for software, firmware and hardware binning alike).


From reading this it sounds as though it might be better to image unbinned and then bin in software.

Am I reading this correctly?

If software binning is better then that would save me from having to get a new set of calibration frames at 2x2 but I would lose the advantage in download times and file size.

This refers to the G3 16200 which has the option of hardware binning being a CCD. CMOS cameras AFAIK can only bin in software after capture.


13 minutes ago, DaveS said:

From reading this it sounds as though it might be better to image unbinned and then bin in software.

Am I reading this correctly?

If software binning is better then that would save me from having to get a new set of calibration frames at 2x2 but I would lose the advantage in download times and file size.

This refers to the G3 16200 which has the option of hardware binning being a CCD. CMOS cameras AFAIK can only bin in software after capture.

Yep--that's how I see it now


8 minutes ago, DaveS said:

From reading this it sounds as though it might be better to image unbinned and then bin in software.

Am I reading this correctly?

If software binning is better then that would save me from having to get a new set of calibration frames at 2x2 but I would lose the advantage in download times and file size.

This refers to the G3 16200 which has the option of hardware binning being a CCD. CMOS cameras AFAIK can only bin in software after capture.

There is advantage with CCD cameras in using hardware binning.

If you use hardware binning - you maintain read noise at the original level. CCD cameras have the same read noise for regular pixel readout, bin x2, bin x3 - etc ...

Software binning multiplies read noise. If you have a camera like, say, the ASI1600, which has 1.7e of read noise (at unity gain), and you bin that x2 - it will behave like a camera with twice as large pixels - but with 3.4e read noise. If you bin x3 - it will have 5.1e read noise, etc ...
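
The numbers follow from adding independent read noises in quadrature - a tiny sketch (mine, reusing the figures from the paragraph above):

```python
import math

read_noise = 1.7   # e per pixel read (ASI1600 at unity gain, figure from the post above)
for factor in (2, 3):
    combined = math.sqrt(factor**2) * read_noise   # N = factor^2 reads, noise adds in quadrature
    print(f"software bin x{factor}: {combined:.1f} e")   # 3.4 e, 5.1 e
# hardware binning reads the combined charge only once, so it stays at 1.7 e
```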

If you have a CCD - on one side you have faster download times, smaller files and the read noise advantage - on the other side you have the benefits of software binning presented above.

With CMOS it is far simpler - as you only give up the smaller file size. Download times are fast as is and there is no read noise benefit - so it stands to reason to go for full software binning over firmware binning.

With CCDs - well, its up to you to weigh pros and cons and decide which one is better for you.


9 minutes ago, vlaiv said:

Software binning multiplies read noise. If you have a camera like, say, the ASI1600, which has 1.7e of read noise (at unity gain), and you bin that x2 - it will behave like a camera with twice as large pixels - but with 3.4e read noise. If you bin x3 - it will have 5.1e read noise, etc ...

Would you be able to show how this manifests?  Offhand I want to say that read noise is a tiny part of the whole story.  That may not be true.  Any way to show the difference between 1.7 and 5.1, say?


21 minutes ago, Rodd said:

Not the larger pixel to which I refer.  The explanation often given is that the larger pixel works as I said--it has a 100,000 FWC, say, and it acts like a single pixel, only clipping if you exceed 100,000.  But in reality, it does not.  The individual pixels that make up this super pixel still behave as individual pixels, and if their FWC is exceeded--that will be passed on to the super pixel and data will be lost...or more specifically, not as much data will be collected (for software, firmware and hardware binning alike).

That is correct - and that is an edge case.

Say you have pixels with 25,000e FWC and a binned version that has "100,000e" combined FWC.

There are (minority) cases where a large pixel with a true 100,000 FWC would not saturate but the binned version will. These cases are rare indeed. The worst case is that the binned pixel saturates at 25,000 - if one of the 4 pixels is at 25,000 and the other 3 are precisely 0. But you can see how rare that case is if you work with over sampled data - there is no way that one pixel would be that high while the surrounding pixels have 0 value. That only happens with highly under sampled systems - where the whole star fits on just one pixel and even its "wings" don't make it onto other pixels.

However, you can easily see that if the signal in the original pixels is less than 25,000 - you get a much larger effective FWC. Say you have about 20,000e in each of the individual pixels - you can easily bin to get 80,000e in the binned pixel. The same value as in the larger pixel, and a much larger value than an individual pixel can capture. This is a much more common scenario with over sampled data - which is where we advocate the use of binning.

Here there is a much greater chance that if any one pixel is saturated, and thus the binned pixel is saturated - that would also happen with the larger pixel.

But again - saturation of pixels is an edge case and is handled differently - by the use of shorter exposures to replace saturated values.


Thanks Vlaiv

I wish we had enough clear nights to be able to use them for testing rather than imaging. Guess I will have to do some more reading and thinking.


5 minutes ago, vlaiv said:

that would also happen with the larger pixel.

Not if the big pixel was an individual, non-binned pixel.  Say a camera with 24 um pixels or something.  That is what I am trying to get at.  If the pixel is truly a single pixel with 100,000 FWC--it will not clip until it surpasses 100,000.  If the single pixel is really a collection of binned pixels--it can "miss out on data" that was clipped from individual pixels in the binned association....no?
