
Next obstacle to understand


alan potts

Recommended Posts

Can anyone point me to, or explain in dimwit language, what binning is please? I have two cameras now that I believe will do this, but when does one bin? I have an 071 and a 183, both cooled models. I got the 183MC for the Borg 77EDII, which has a focal length of 331mm, so fairly short and widefield. I also have a scope at 420mm which, whilst not in the same class, is meant to be ED, so should at least be usable.

Alan 


Under Resources > Astronomy Tools at the top of this page you can enter your scope/camera combination and get an arcseconds-per-pixel figure; this is best between 1 and 3. You can enter binning to see if it helps to achieve the best figure.

Dave
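
For reference, the calculator is applying the standard image-scale formula; here is a minimal Python sketch, assuming the commonly quoted pixel sizes of 4.78 µm for the 071 and 2.4 µm for the 183 (treat those values as assumptions, not gospel):

```python
# Image scale ("/px) = 206.265 * pixel size (um) / focal length (mm).
# Binning n x n simply multiplies the effective pixel size by n.
def pixel_scale(pixel_um: float, focal_mm: float, binning: int = 1) -> float:
    return 206.265 * pixel_um * binning / focal_mm

print(round(pixel_scale(4.78, 331), 2))  # 071 on the Borg 331mm -> 2.98
print(round(pixel_scale(4.78, 662), 2))  # ...with a 2x Barlow   -> 1.49
print(round(pixel_scale(2.4, 331), 2))   # 183 on the Borg 331mm -> 1.5
```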


Binning (for example, 2x2 binning) is a mode on a digital camera that treats groups of four adjacent pixels as a single 'super-pixel'. This super-pixel has greater sensitivity than a standard pixel, but NOT 4 times greater sensitivity! This mode can be useful for imaging dim objects in shorter exposure lengths and for improving the signal-to-noise ratio in an image but, of course, there is a trade-off: although the same full field of view is retained, the resolution reduces proportionally.

Have a quick read of this article.
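
To make the trade-off concrete, here is a minimal numpy sketch of 2x2 software binning on a mono frame. (Hardware binning on a CCD sums charge before readout, which is kinder to read noise; this sketch only shows the geometry and the signal gain.)

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block into one output pixel: ~4x signal, half the resolution."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2  # drop any odd edge row/column
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.random.poisson(100, (1024, 1280)).astype(np.float64)  # stand-in exposure
binned = bin2x2(frame)  # 512 x 640; same field of view, coarser sampling
```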


5 minutes ago, DaveS said:

My understanding is that you can't bin colour cameras due to the Bayer pattern.

That's correct: the Bayer matrix will be destroyed, so you will only get a mono image and lose some of the sensitivity gain because of the filter matrix, but it can physically be done.


7 minutes ago, DaveS said:

My understanding is that you can't bin colour cameras due to the Bayer pattern.

Oh, I don't know. I would have thought Steve would have noticed, he is very good, and I would have thought most in AP know the 071 is only a colour camera. Maybe you're right, but it seems to do live view in binning mode, albeit in B&W.

Alan


13 hours ago, Davey-T said:

The 071 on the Borg will give you 2.97 arc seconds per pixel, so it doesn't need binning; you could use a 2x Barlow to get a usable 1.49.

Dave

I actually bought the 183MC for the Borg as I really like widefield, and whilst I guess I could use the 071 and get a result with the decent skies I get here, I thought it was worth the outlay to have a camera aimed at short scopes. Whilst the Borg is not quite as short as the RedCat it's not far behind, and it is meant to be very high quality from what I read. I am trying to understand how you would use a Barlow or Powermate with a camera: do you just screw it onto the end, with a different sleeve to slot in like an eyepiece?

Alan


16 hours ago, DaveS said:

My understanding is that you can't bin colour cameras due to the Bayer pattern.

 

16 hours ago, steppenwolf said:

That's correct: the Bayer matrix will be destroyed, so you will only get a mono image and lose some of the sensitivity gain because of the filter matrix, but it can physically be done.

Actually... 

You can't bin a colour camera in hardware due to the Bayer matrix. But you can do it in software during the calibration process. This is called superpixel deBayering: the colour information of one 2x2 pixel group (RGGB) is used to create one colour pixel, without interpolation. The only advantage of this technique that I know of is to lower the resolution of an otherwise oversampled image, giving a more realistic pixel scale and smaller files to work with.
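
For anyone curious what that looks like in practice, here is a small numpy sketch of superpixel deBayering, assuming an RGGB pattern (the two greens are averaged into a single channel):

```python
import numpy as np

def superpixel_debayer(raw: np.ndarray) -> np.ndarray:
    """Collapse each RGGB 2x2 cell into one RGB pixel; output is half-size per axis."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

raw = np.random.poisson(50, (1024, 1280)).astype(np.float64)  # stand-in OSC sub
rgb = superpixel_debayer(raw)  # 512 x 640 x 3, no interpolated ("made up") pixels
```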


50 minutes ago, wimvb said:

Actually... 

You can't bin a colour camera in hardware due to the Bayer matrix.

This is an interesting comment, as Starlight Xpress indicate that you can bin images 'in camera' using my Starlight Xpress SXV-M25C colour CCD camera. Here is an extract from the manual:

Using the ‘Binned’ modes:

Up to this point, I have assumed that the full resolution, imaging mode is being used. This is essential for colour imaging, but it will often provide more resolution than the optical system, or the seeing, allows. ‘Binned 2x2’ mode sums groups of 4 pixels into one output pixel, thus creating a 696 x 520 pixel image with 4 times the effective sensitivity. Using 2x2 binning, you can considerably improve the sensitivity of the SXV-M25C without losing a great deal of resolving power, so you may like to use this mode to capture many faint deep-sky objects in monochrome. Other binning modes (3x3 and 4x4) are available and will further increase the image brightness and reduce its resolution. However, generally, these are more useful for finding faint objects, than for imaging, as the colour information is lost in all these modes.

However, it is entirely possible that SX have not described the process in sufficient detail. My own interpretation is that, colour or not, the sensor itself is still just a mono sensor (and, therefore, can be binned) with a set of filters (the Bayer matrix) sublimated on top of it, which can later be used to produce a colour image during the de-Bayering process.


As Starlight point out, when you bin a colour image internally in their camera, you lose the colour data. The presumption is that binning an OSC image should preserve the colour information.


In principle, superpixel deBayering could be done in the camera; it just needs a processor to do so. If the calibration frames are treated the same way, then it would still be possible to calibrate and stack as normal. (This is equivalent to processing (lossless) jpeg images from a DSLR.) But in reality, the sensitivity in the colour channels is different, and the monochrome image you get from binning RGGB data will be weighted by the transmission characteristics of the dye that is used in the Bayer matrix, as well as the QE of the pixels. If 3x3 binning is used on an OSC camera, these colour differences may actually make it into the final monochrome image, since some binned pixels will have more green, while others will have more red or more blue, depending on the position on the sensor. I would want to see the result before I'd be convinced that this actually works, as this kind of binning could very well lead to Moiré patterns.

But maybe we're digressing from the original post.

1 hour ago, pete_l said:

As Starlight point out, when you bin a colour image internally in their camera, you lose the colour data.

I probably should have clarified that, rather than just imply it.


2 hours ago, wimvb said:

In principle, superpixel deBayering could be done in the camera; it just needs a processor to do so. If the calibration frames are treated the same way, then it would still be possible to calibrate and stack as normal. (This is equivalent to processing (lossless) jpeg images from a DSLR.) But in reality, the sensitivity in the colour channels is different, and the monochrome image you get from binning RGGB data will be weighted by the transmission characteristics of the dye that is used in the Bayer matrix, as well as the QE of the pixels. If 3x3 binning is used on an OSC camera, these colour differences may actually make it into the final monochrome image, since some binned pixels will have more green, while others will have more red or more blue, depending on the position on the sensor. I would want to see the result before I'd be convinced that this actually works, as this kind of binning could very well lead to Moiré patterns.

But maybe we're digressing from the original post.

I probably should have clarified that, rather than just imply it.

No, not at all, interesting reading Wim and Steve; not sure I understand it all, but it is interesting. Let me push you further then: if the cameras we are talking about are basically mono sensors with a matrix, can we actually make mono images with the use of filters? I shot a black and white image, the first I ever did, before I understood how to make it colour.

Alan 


Here is my ASI178MC camera (in my all sky camera) binned 4x4 with duly increased sensitivity.  Captured in KStars/Ekos.  I've been trying to sort out the focus motor drive.

[Screenshot: the ASI178MC in KStars/Ekos, binned 4x4]


Even if you lose the colour information when binning a colour CMOS, it could be a useful procedure for creating a lum layer for an over-sampled image, as the S/N ratio would be increased. You can then process this separately from the RGB data and finally put it as lum on top of the RGB image (unbinned, but downsampled to match the binned lum).
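
A rough numpy sketch of that workflow, assuming the colour data has already been deBayered to an RGB stack (names and numbers here are purely illustrative):

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of a mono image into one pixel."""
    h, w = (d - d % 2 for d in img.shape)
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

rgb = np.random.poisson(80, (1024, 1280, 3)).astype(np.float64)  # deBayered stack
lum = bin2x2(rgb.sum(axis=2))  # binned synthetic luminance: better S/N, half scale
rgb_small = (rgb[0::2, 0::2] + rgb[0::2, 1::2] +
             rgb[1::2, 0::2] + rgb[1::2, 1::2]) / 4.0  # RGB downsampled to match
# Process lum and rgb_small separately, then combine LRGB-style: lum carries
# the detail and S/N, rgb_small supplies the colour.
```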


Color cameras can, of course, be binned.

Depending on the type of binning applied, you can either lose the color information or retain it.

I've been meaning to write one extensive post about different aspects of binning and what I consider to be the best way to bin the data when building images from multiple exposures (for other uses, other methods of binning might be more suitable).

Anyway, I'll briefly outline some things related to color imaging that people might not be aware of (there's not much talk about it elsewhere) and explain how binning can both lose and preserve color data.

The first thing to understand is that the resolution of a color sensor is not the same as that of a mono sensor. We can argue that it is twice as coarse.

I've included an image of the Bayer pattern for easier understanding:

[Image: RGGB Bayer pattern]

Now, with a mono sensor we think of the resolution as "how much sky is covered by a single pixel", expressed in "/px and calculated from the pixel size. This works fine for mono sensors, but there is an alternative way of thinking that is better when we want to discuss both mono and color sensors. Instead of thinking about the "width" of a pixel and the length of sky covered by that width, let's think of pixels as points (without dimension) and of resolution as the "distance" between those points. If we place one such point at the center of each pixel of a mono camera, the distance between two points will be the same as the pixel width, so the effective resolution is unchanged (the two views are compatible in this case).

However, when we apply the "pixel" approach to the Bayer pattern of a color sensor, we have a slight problem. Look at the red pixels in the image above: there is a pixel, then a gap, then a pixel, then a gap (going horizontally, and the same in the vertical direction). How do we incorporate these gaps into our thinking about resolution if we accept the "pixel per sky area/length" approach?

It is much easier to think in terms of sampling points (without dimensions, only the distance between them). In this case we can see that red is sampled every two "pixel lengths" rather than one: the distance between sampling points is twice the width (and, vertically, twice the height) of a single pixel.

If the pixel/resolution calculation for this sensor gives 1"/px, red will actually be sampled at 2"/px. The same is true for blue. In principle the same is true for green, although it is not quite as clear why (the X pattern of the green pixels complicates things); but because both red and blue are sampled at twice the calculated resolution, we should treat green the same way. In fact, we can "split" green into green1 and green2, where green1 and green2 are the two respective pixels in a 2x2 element of the Bayer matrix, like this:

[Image: a 2x2 Bayer cell with the two greens labelled Gr and Gb]

Here green1 is denoted Gb and green2 is denoted Gr. Treating each green component separately, we can see that each of the two components of green (although they share the same filter) is indeed sampled at twice the "pixel size" resolution.

Now that we understand why the resolution of a color sensor is half that of the mono version of the same sensor, we can look at how to debayer such a sub and at the different ways to bin it.

First, let's do a Bayer split. Most people think of the debayering process as creating a color image out of a mono one. It is perhaps better to think of it as a "channel extraction" process. Seen like that, you turn a single color sub into corresponding R, G and B subs, which are in principle the same as if taken with a mono camera and filters (apart from the fact that regular filters have different response curves and QE).

Regular debayering employs interpolation to fill in the missing pixels and produces a color sub of the same dimensions (pixel count in height and width) as the mono sub. The problem with this approach is that you are effectively "making up" the missing values. It is done cleverly so that the image remains smooth (for example, averaging two adjacent red pixels to create the missing value between them), but the fundamental problem remains: you cannot recover missing detail. An image created this way still has only the detail of an image sampled at half the rate (which it in fact is); you would get the same thing by taking an image sampled at half the rate and upsampling it to the larger size (which is what debayering in fact does).

If you want a sharp image (or rather a sharper one) from a Bayer matrix, you are better off splitting the image into colors without resampling / without debayering the regular way.

The easiest way to describe this process is with the following image:

[Image: Bayer channels split into sparse per-color grids, then either interpolated (regular debayering) or condensed]

The first part of the process is to split the colors into separate "sparse" grids; the next step (denoted "interpolation" in the image, because it illustrates the regular debayering workflow) is instead to "condense" those sparse grids rather than interpolate the missing pixels.

This way a Bayer matrix sub is split into four smaller subs: one containing only the red pixels, one the blue pixels, and two the green pixels ("green1" into one smaller sub and "green2" into another). Now we have color subs as if we had taken them with a mono camera and the appropriate filter, with two small differences: the sampling rate is halved (because of the color sensor and Bayer matrix) and you end up with twice as many green subs as red or blue. (This is because the Bayer matrix was developed primarily for daytime photography: the eye is most sensitive in green, and the green response curve closely matches that of perceived luminance in human vision, so having more green gives better SNR and definition in the part of the spectrum that matters most to human vision.)

The resulting subs have half the pixels in height and width, giving a x4 smaller file.

After we have done this, we can bin each of those color subs further in software to increase SNR and decrease the sampling rate. This way we preserve the color information and increase SNR. Note, however, that an x2-binned color sub will in effect have x4 fewer pixels in height and width than the original sub. This might look odd, but only because we are used to thinking in terms of the full pixel count for color subs from an OSC camera when in fact, as shown above, each color only has half the pixel count in both height and width.

Of course, when you bin a raw image the regular way, you add one red pixel, one blue pixel and two green pixels to get a single value, which turns "mono" simply because there is no way to reconstruct which numbers were added together to produce the result (4 can be 2+2, or 1+3, or 4+0; in principle infinitely many combinations, like -8+12, add up to 4, so given 4 you can't know which original numbers formed it).
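
A short numpy sketch of this split-then-bin approach, again assuming an RGGB pattern (colour is preserved because the channels are never mixed):

```python
import numpy as np

def bayer_split(raw: np.ndarray) -> dict:
    """Extract each RGGB channel into its own half-size sub (no interpolation)."""
    return {"R": raw[0::2, 0::2], "G1": raw[0::2, 1::2],
            "G2": raw[1::2, 0::2], "B": raw[1::2, 1::2]}

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block into one pixel."""
    h, w = (d - d % 2 for d in img.shape)
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.random.poisson(60, (1024, 1280)).astype(np.float64)  # stand-in OSC sub
subs = bayer_split(raw)                                       # four 512 x 640 mono subs
binned = {name: bin2x2(sub) for name, sub in subs.items()}    # 256 x 320 each
# Summing R + G1 + G2 + B instead would give the mono result described above:
# the four addends cannot be recovered from their sum, so the colour is lost.
```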

 


59 minutes ago, alan potts said:

No, not at all, interesting reading Wim and Steve; not sure I understand it all, but it is interesting. Let me push you further then: if the cameras we are talking about are basically mono sensors with a matrix, can we actually make mono images with the use of filters? I shot a black and white image, the first I ever did, before I understood how to make it colour.

Alan 

Pixels on sensors only detect the light that falls on them. They don't care about the colour of that light. The colour information in an OSC (one shot colour camera) is contained in the Bayer matrix.

On top of each pixel sits a dot of dye (red, green or blue) that absorbs all the other colours. If you put another filter in front of the camera (e.g. a red Ha filter), virtually no light will reach the pixels that have a green or blue dye dot on top of them; only the "red" pixels will register light. In the deBayering process, the information from the red pixels is used as colour information for the green and blue pixels. You can make B/W images with a colour camera by just ignoring any colour information. The problem is that if you don't deBayer, the pixel pattern will show through even in mono images. This is because the pixels have different sensitivities to the three colours. If you were to take a B/W image of a neutral grey wall, the green pixels in your colour camera would register more signal, simply because the pixels are more sensitive to green light than to red or blue. That's why a raw image from a colour camera looks pixelated.

Here's an extreme crop of a raw image from a DSLR that has not been deBayered. It's B/W, and you can clearly see that adjacent pixels have different intensities, even though the subject had much less detail.

[Image: extreme crop of an un-deBayered DSLR raw frame, showing the checkered pixel pattern]
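
A toy numpy illustration of that checkering, using made-up relative channel sensitivities (the QE numbers below are invented for the demonstration):

```python
import numpy as np

qe = {"R": 0.45, "G": 0.75, "B": 0.40}  # hypothetical relative sensitivities
raw = np.full((6, 6), 1000.0)           # a perfectly uniform grey "wall"
raw[0::2, 0::2] *= qe["R"]              # apply the RGGB dye pattern
raw[0::2, 1::2] *= qe["G"]
raw[1::2, 0::2] *= qe["G"]
raw[1::2, 1::2] *= qe["B"]
print(raw)  # adjacent pixels now differ even though the scene is featureless
```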


Interesting topic!

I'm currently trying to use a 071 CMOS with 2x binning on my EdgeHD at f/7. This gives me about 1" per pixel. Without the binning in APT I'd be at 0.5" per pixel, which is clearly oversampled.

From reading this thread, am I doing something wrong by selecting 2x binning in APT?


I'm thinking that binning will only work with my ASC if the image is slightly out of focus. If focus is sharp, each star will land on just one pixel and binning would not increase the sensitivity. OTOH, defocussing reduces the light falling on each pixel. Also, with an OSC (which I'm using ATM), a sharp image is liable to "lose" stars because they fall on the wrong dye blob. Or, maybe worse, the star colour will depend on which pixel the star falls on. Hmmm... Stars will only show their natural colour with a poorly focused image. Getting star colour, and a reddish tinge in the Milky Way from hydrogen gas emission, was one reason for using an OSC camera; the other was distinguishing between blue sky and grey cloud in daylight.

Now I'm wondering if I would be better off using the ASI178MM for nighttime and a separate ASC for daytime. A daytime camera would not need to be an astro camera, or have cooling, or a dew heater, etc. A mono camera has better sensitivity by not having the Bayer matrix.


2 hours ago, Ken82 said:

Interesting topic!

I'm currently trying to use a 071 CMOS with 2x binning on my EdgeHD at f/7. This gives me about 1" per pixel. Without the binning in APT I'd be at 0.5" per pixel, which is clearly oversampled.

From reading this thread, am I doing something wrong by selecting 2x binning in APT?

If you are doing binning in the drivers (which I discourage for a number of reasons), the likely thing that is happening is the following:

I'll explain for the red channel, but the others work the same. Going from left to right, and focusing on just one row (the rest is the same): the first 2x2 group of red pixels is summed to form a single pixel, then the next 2x2 group of red pixels is summed to get the next pixel, and so on.

But these pixels are not placed next to each other; rather, one "space" is left between them and then interpolated, as in regular debayering.

[Image: binned red pixels placed with gaps between them, which are then interpolated]

The result of this is the same as I described above: you get x4 less width and height, then upscaled by x2, so you are effectively sampling at 2"/px this way (not that there is anything wrong with that; only minimal detail would be lost, and only in the very best seeing).

Btw, this is another way to bin color data and still retain color (it is similar to the method outlined above, except the result is upscaled x2 to make the image larger, of the size one would "expect" from binning x2 a sensor with that many megapixels, as with the mono version).

If you don't want to bin x2, you can shoot unbinned. You will still sample each color at 1"/px, as I described above, if you extract channels as described rather than debayering normally. There is another way to look at all of this.

One part of pixel QE depends on the effective light-collecting area of the pixel. This is why sensors with microlenses have better QE than those without: the lens helps increase the light-collecting area. The "regular" QE of a pixel is in fact multiplied by the ratio of the effective light-collecting area to the geometric pixel area (pixel size x pixel size).

Maybe these images will help in understanding it better:

[Images: two different sensors under a microscope, showing the microlenses slightly smaller than the pixel squares]

These are different sensors under a microscope. As you can see, the lens is a bit smaller than the pixel "square".

Why am I mentioning this?

Because you can treat a color sensor, for "speed" calculations, in pretty much the same way as a mono one, provided you account for the light-gathering surface. In the above example of the 071 on your EdgeHD, where the sampling resolution is normally calculated as 0.5"/px, if you shoot color data unbinned, each color is effectively sampled at 1"/px, as we have shown. But the light-collecting area for each color will not be 1"x1"; it will be 0.5"x0.5". You can get around this by still using 1"x1" if you modify the quantum efficiency of the sensor, taking one quarter of it, because the red collecting area is one quarter of the area of the Bayer cell.

Thus color sensors have roughly 1/4 of the sensitivity of the mono version. In practice we don't see that much loss because one integrates OSC data x4 longer per channel: if you shoot for 4h with LRGB filters on a mono camera, you shoot each channel for one hour, whereas with a color camera you shoot all channels at the same time, for 4h each (thus spending x4 the time on each "filter").

