

Software binning to counter smaller pixels, guide errors and longer focal length scopes?


dyfiastro


Good evening everyone

Been doing a fair amount of reading and I am looking for any kind of input and help.

I am currently using a 130PDS and a 600D on an HEQ5. This gives me around 1.37" per pixel, which is above my average guiding error of between 0.8" - 1" (something I am still working to improve).
Now I have just purchased a deforked Meade LX10 8" SCT for the summer / planetary work, but I also intend to try to use it along with the 0.63x focal reducer for some of the smaller targets and a smaller FOV.

Given the same guiding setup and average error, I would then be well below my guiding error as well as oversampling.

Am I correct in saying that I could use software binning (2x2) with either a DSLR or, at some point, a dedicated CMOS camera with around the same pixel pitch as my 600D, to bring the sampling rate up to around what I am getting at present?
A 600D using the LX10 with 2x2 binning would bring me up to around 1.41" per pixel. 
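
For anyone checking the arithmetic, here is the image scale calculation as a minimal Python sketch. The 600D's ~4.3 um pixel pitch and the nominal focal lengths (130PDS ~650 mm, 8" SCT ~2000 mm with the 0.63x reducer) are assumptions for illustration:

```python
# Image scale in arcseconds per (binned) pixel:
#   scale = 206.265 * pixel_size_um * binning / focal_length_mm

def image_scale(pixel_um: float, focal_mm: float, binning: int = 1) -> float:
    return 206.265 * pixel_um * binning / focal_mm

print(image_scale(4.3, 650))                     # 130PDS: ~1.36 "/px
print(image_scale(4.3, 2000 * 0.63))             # LX10 + 0.63x FR: ~0.70 "/px
print(image_scale(4.3, 2000 * 0.63, binning=2))  # 2x2 binned: ~1.41 "/px
```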

Any help or advice on this would be fantastic. Trying to find an affordable, somewhat modern camera, be it DSLR or dedicated, with large enough native pixels is not easy. A camera like that would give me more scope going forward and would still allow me to use the same equipment on smaller scopes without the use of binning.

Thanks in advance

Mark



 

 


In one word - yes :D

The actual description of what is going on is a bit more complicated and involved. There is a difference between color and mono sensors and their resolutions. Color sensors are effectively already sampling at twice the spacing. If you are using a color DSLR and calculated 1.37"/pixel from the pixel pitch, you are in effect sampling at 2.74"/pixel - or, better put in this case, 2.74"/sample.

There is also a difference in the way one would bin color images vs mono images. If you debayer your color image prior to stacking, you are effectively not gaining any resolution from it - debayering in effect "makes up" the missing pixels, and this does not add resolution or information to the image. Binning a debayered image will not produce the wanted result in terms of SNR, since you will be averaging pixels with adjacent pixels that were themselves produced by averaging the surrounding pixels.

The best way to deal with this situation is to use super pixel mode - this will create an image at half the sampling resolution, or to put it better, it will utilize the sampling resolution that is inherent in the sensor. There is an approach that is one step better - decomposition instead of debayering - which creates 4 sub images from each light frame (one red, one blue, and two green) and then stacks each color onto its own stack; but if the software lacks this feature, super pixel mode is close enough.
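
To make the decomposition idea concrete, here is a minimal numpy sketch. An RGGB Bayer pattern and a 2D raw array already loaded from the light frame are both assumptions for illustration, not any particular program's method:

```python
import numpy as np

def split_cfa(raw: np.ndarray):
    """Split an RGGB raw frame into four half-resolution planes: R, G1, G2, B."""
    r  = raw[0::2, 0::2]  # red pixels
    g1 = raw[0::2, 1::2]  # first green
    g2 = raw[1::2, 0::2]  # second green
    b  = raw[1::2, 1::2]  # blue pixels
    return r, g1, g2, b
```

Each plane would then go onto its own stack (red, blue, and the two greens combined into one green stack).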

Anyway, it's a complicated topic, and not necessarily one that needs a deep discussion in order to address your concern. The simple answer is yes - use super pixel mode for debayering and you will get the effect you are after.


33 minutes ago, vlaiv said:


Thank you very much indeed for the information.

If I was able to bin the images before debayering, would that have the required effect?
I was looking at maybe using something like PiPP. There is an option to bin 2x2, and I could disable the debayering part of the process; the output FITS could then be run through stacking software as normal.
Would this work? PiPP has the option of using the sum or the average of the pixels; if this could work, which would be the better one to try?

Thanks again in advance


You can "bin" before you debayer, but it depends on how you bin.

Let me explain this for a moment (it will answer one of your questions as well). The regular binning method just takes groups of 4 pixels - 2x2 in size - and computes either their average or their sum. From a signal to noise perspective there is absolutely no distinction between the two, since the ratio of two numbers remains the same if you divide (or multiply) both by the same number; in the case of the average you are just dividing by 4. Sum can be thought of as adding photon counts together, while average can be thought of as preserving brightness while binning - but it is the same thing really. The problem with this approach is that OSC sensors contain color information, and each of these 2x2 pixel groups contains pixels that registered different colors - light of different wavelengths. That is not the same signal and can't be averaged. One can average those values, but the result is not what one would expect - you will end up with a mono image and lose the color information.
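
Here is the sum-vs-average point as a small numpy sketch, assuming a mono frame (just an illustration, not PiPP's actual implementation):

```python
import numpy as np

def bin2x2(img: np.ndarray, mode: str = "average") -> np.ndarray:
    """2x2 bin a mono image. Sum and average differ only by a constant
    factor of 4, so their signal to noise ratio is identical."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # trim odd edges
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    summed = blocks.sum(axis=(1, 3), dtype=float)        # 2x2 block sums
    return summed if mode == "sum" else summed / 4.0
```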

What you can do is average 2x2 pixels of each color - but this will in effect be the same as lowering the base resolution by x4, not something you might want to do as it will likely lead to loss of detail. Here is what I mean by this; take a look at the Bayer matrix of pixels:

[Image: RGGB Bayer matrix pixel layout]

If a single pixel is, let's say, 4um wide and this gives you 1.37"/pixel, most people think that 4um covers a distance of 1.37" in the sky - and that is true, but this is not what sampling rate means. Sampling rate means "how far do I need to move before I read off the next sample". Now look at the red pixels, for example: you read off the first red pixel in the first row, then you move 4um, or 1.37", to the next pixel - but you don't read that one out as it's not red, so you move another space, for a total of 8um / 2.74", and then you read out the next red pixel, and so on. This is why I say that a color sensor already has half the resolution you would expect from the pixel size calculation. The same goes for blue, and the same goes for green - but you need to look at the above matrix to see that there are "two different greens" (they are the same, but their positions / offsets are different). There is the green at position 1,1, which combines with the greens at 1,3; 1,5 ...; 3,1; 3,3 ...; and then there is the green at position 2,2, which combines with 2,4; 2,6 ...; 4,2; 4,4 ...

Now, when you want to bin the un-debayered matrix you can't average adjacent pixels like in mono - you need to average adjacent 2x2 groups of the same color.

So for one red pixel we average the following red pixels: 2,1; 4,1; 2,3; 4,3 - and this produces 1 red pixel. But note that we have covered a 4x4 group of pixels to extract this one red pixel - which is why we are reducing resolution by a factor of 4 over the "native" pixel-size based resolution.
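
As a sketch of what that same-color binning looks like in code (numpy, RGGB assumed; the same slicing pattern applies to the other three colors):

```python
import numpy as np

def bin_cfa_red(raw: np.ndarray) -> np.ndarray:
    """Average 2x2 groups of red pixels from an RGGB raw frame.
    Each output value draws on a 4x4 patch of raw pixels, so the
    result is at 1/4 of the native pixel-size based resolution."""
    r = raw[0::2, 0::2]                              # half-resolution red plane
    h, w = r.shape[0] // 2 * 2, r.shape[1] // 2 * 2  # trim odd edges
    return r[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```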

Super pixel mode works like this:

[Image: super pixel mode - the Bayer mosaic split into sparse red, green, and blue planes]

Here blue and red are separated, but green is not separated into two distinct "subs" - it really should be, but I copied the image from the internet, and it is what it is :D

Now, if you observe the top left "sparse" image of red, there is a lot of "room" between the pixels (black in the above image) - but like we said, resolution is not the size of a pixel but the spacing between pixels - and this shows how you are effectively sampling at half the rate you would think based on pixel size alone. It also shows how you can extract color without debayering, or rather without interpolation debayering to fill in the remaining gaps. That interpolation is just averaging the pixels you do have and "guessing" what the missing value would be - it does not add real resolution; it is the same as just "condensing" the red pixels into a smaller image (at half the resolution). So when you debayer something, creating the missing pixels, and then use those made-up pixels for binning, you are not really gaining much. You can do it like that, but I think it will blur the image more than bring the SNR gain one would expect from binning.
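
For completeness, a minimal numpy sketch of super pixel mode (RGGB and even frame dimensions assumed; an illustration, not DSS's actual code). Each 2x2 Bayer cell becomes one RGB pixel, with the two greens averaged, so no pixels are made up:

```python
import numpy as np

def superpixel(raw: np.ndarray) -> np.ndarray:
    """Turn an RGGB raw frame into a half-resolution RGB image."""
    r = raw[0::2, 0::2].astype(float)                        # red plane
    g = (raw[0::2, 1::2].astype(float) + raw[1::2, 0::2]) / 2.0  # both greens averaged
    b = raw[1::2, 1::2].astype(float)                        # blue plane
    return np.stack([r, g, b], axis=-1)                      # (H/2, W/2, 3)
```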

The simplest two things that you can try on your data are:

1. Use super pixel mode - it is there in DSS, for example, under debayer method

2. Use some sort of debayering, stack your image, and then bin the result of the stack 2x2

Compare the two results and see if you prefer one over the other (in terms of both noise and "blurriness").


5 minutes ago, vlaiv said:

You can "bin" before you debayer, but it depends on how you bin.

Let me explain this for a moment (and it will answer one of your questions as well) - regular binning method is just taking groups of 4 pixels - 2x2 in size and getting either their average or their sum. From signal to noise perspective there is absolutely no distinction between the two since ratio of two numbers remains the same if you divide (or multiply) both with same number. In case of average you are just dividing with 4. Sum can be thought of as adding photon counts together, while average can be thought of as preserving brightness while binning - but same thing really. Problem with this approach is that OSC sensors contain color information, and each of these 2x2 pixel groups contains pixels that registered different color - light of different wavelength. That is not same signal and can't be averaged. I mean one can average those values, but result is not what one would expect - you will end up with mono image and loose color information.

What you can do is average 2x2 pixels of each color - but this will in effect be the same as lowering base resolution by x4 - not something that you might want to do as it will likely lead to loss of detail. Here is what I mean by this, take a look at bayer matrix of pixels:

image.png.93498a9e1022aa28b426d2cf544056bd.png

If we have single pixel width of let's say 4um and this gives you 1.37"/pixel - most people thing that 4um covers distance of 1.37" in sky - and that is true, but this is not what sampling rate means. Sampling rate means "how much do I need to move before I read of next sample". Now look at red pixels for example - you read off first red pixel in first row, then you have to move 4um or 1.37" to next pixel - but you don't read out that one as it's not red, so you move another space, or total of 8um / 2.74" and then you read out next red pixel and so on and so on ... This is why I say that color sensor already has half resolution of the one you would expect from pixel size calculation. Same thing goes for blue, and same thing goes for green - but you need to look at above matrix to see that there are "two different greens" (they are the same, but positions / offsets are different). There is green at 1,1 position and it combines with green at 1,3 ; 1,5 ... ; 3,1 ; 3,3 .... and then there is green at 2,2 position which combines with 2,4;2,6 ... ; 4,2; 4,4 ....

Now when you want to bin unbayered matrix you can't average adjacent pixels like in mono - you need to average adjacent 2x2 of same color.

So for one red pixel we will average following red pixels: 2,1; 4,1; 2,3; 4,3 - and this will produce 1 red pixel, but note that we have covered 4x4 group of pixels to extract this one red pixel - this is why we are reducing resolution by factor of 4 over "native" pixel size based resolution.

Super pixel mode works like this:

image.png.5a20b4a4b6056e8c74f5ae4db728ff43.png

Here blue and red are separated, but green is not separated in two distinct "subs" - it really should be, but I copied image from internet, and it is what it is :D

Now, if you observe top left "sparse" image of red - there is a lot of "room" between pixels (black in above image) - but like we said - resolution is not size of pixel, but space between pixels - and this shows how you are effectively sampling at half the rate you think you are sampling based on pixel size alone. It also shows how you can extract color without debayering, or rather interpolation debayering to fill in remaining gaps. This interpolation is just averaging of pixels that you have and "guessing" based on that what would be missing value - but in this process you are not adding real resolution - it is the same as when you just "condense" red pixels to smaller image (and half the resolution). So when you debayer something and create missing pixels, and then use those missing pixels for binning - you are not really gaining much. You can do it like that, but I think it will blur image more than bring in any SNR gain that one would expect from binning.

Simplest two things that you can try on your data is:

1. use super pixel mode - it is there in DSS for example under debayer method

2. use some sort of debayering, stack your image and then bin 2x2 result of stack

Compare two results and see if you prefer one over the other (in terms of both nose and "blurriness").

Thanks again

I will check this out over the coming weeks / months, once the new scope is ready (and weather permits).
 

Mark

