
Discussion on Software Binning / RC Reducer Review and final M81 Image


Catanonia

Recommended Posts

19 minutes ago, vlaiv said:

You can even bin after stacking (although this can have different results depending on what sort of alignment interpolation you used) - just make sure you do it while the data is still linear, prior to any processing.

Ah, this is key and not what I was doing. I was integer binning (in PI) after all the processing - doh!! - and that is probably why I did not see any differences or any reason to go down the software binning route.

My process will now be as follows (see the binning sketch after the list):

  • Take the images at 1x bin
  • Calibrate them
  • Debayer them
  • Integer bin them
  • Stack them
  • Process as normal
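
For anyone wondering what the "Integer bin" step actually does to the pixels, here is a minimal sketch of 2x2 binning of a linear frame in plain numpy - purely illustrative, not the PixInsight process itself:

```python
# Minimal sketch of 2x2 software ("integer") binning of a linear frame.
import numpy as np

def integer_bin(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Sum non-overlapping factor x factor blocks of pixels."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor              # crop to a multiple of the factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))                     # use .mean(...) to keep the original scale

# e.g. binned = integer_bin(calibrated_sub, 2)   # 'calibrated_sub' is a hypothetical 2D array
```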

 


53 minutes ago, Catanonia said:
  • Debayer them
  • Integer bin them

If binning is to work - you need to be careful about what sort of debayering you are using.

Any sort of interpolation debayering will artificially increase the resolution of the image, and further binning of that data will be a "null operation" (like multiplying and dividing by the same number).

In order for binning to work - you need to treat OSC data as it is - at lower resolution to start with. The best way to debayer the data is "split debayering".

If you are using PixInsight - there should be a script to do that. @ONIKKINEN mentioned it a few times, but since I don't use PixInsight - I don't really remember what it's called or how it's used.
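
To make the idea concrete, here is a minimal sketch of split debayering, assuming an RGGB Bayer pattern - illustrative numpy only, not the PixInsight or Siril implementation:

```python
# "Split debayering" for an RGGB mosaic: no interpolation, each CFA plane is
# simply extracted at half the sensor resolution.
# The RGGB ordering is an assumption - check your camera's actual Bayer pattern.
import numpy as np

def split_cfa(raw: np.ndarray):
    """Return the four half-resolution CFA planes of an RGGB mosaic."""
    r  = raw[0::2, 0::2]   # red pixels
    g1 = raw[0::2, 1::2]   # first green plane
    g2 = raw[1::2, 0::2]   # second green plane
    b  = raw[1::2, 1::2]   # blue pixels
    return r, g1, g2, b
```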


17 minutes ago, ONIKKINEN said:

Actually I have not used PixInsight either; the script I use is for Siril.

The command in question for Siril is seqsplit_cfa "your sequencename".

I would be surprised if PI doesn't have the same feature though.

Thanks, trying to work this out in PI and keep coming up with Drizzle. Not sure if I am going down a dead end here.

Edit - In PixInsight it is called SplitCFA and MergeCFA - Reading up on it now

 


13 minutes ago, Catanonia said:

Thanks, trying to work this out in PI and keep coming up with Drizzle. Not sure if I am going down a dead end here.

Edit - In PixInsight it is called SplitCFA and MergeCFA - Reading up on it now

 

Yes, SplitCFA sounds like the proper name for that procedure. Drizzle, on the other hand, is something you want to skip - except for Bayer drizzle, which is legit, but it is used when you want to make a color sensor reach the same resolution as the mono version (which means that you are not oversampled at the resolution given by the pixel size).


Update.

The 2 tools in PI are SplitCFA and MergeCFA. 

SplitCFA will take a load of images and split them into the 4 channels in a nice directory tree structure, BUT - and here is the kicker - MergeCFA can only take 1 set of files, i.e. it cannot process a whole directory like the split does. This means realistically it is limited to being done on a stacked linear image, otherwise I would have to manually merge hundreds of files at a time - very long and laborious clicking on hundreds of sets of 4 images and getting them in the right order.

So the routine in PI is:

  • Calibrate
  • Debayer
  • Align
  • Combine - linear stack
  • SplitCFA
  • IntegerResample to 2x or 3x on each channel
  • MergeCFA to give back the final 2x or 3x binned stack

It seems to work


14 minutes ago, Catanonia said:

Update.

The 2 tools in PI are SplitCFA and MergeCFA.

....

That does not seem right. Here is what you should do:

- calibrate

- SplitCFA

- take the images from one green directory and move them into the second green directory. This is because the Bayer matrix has one red, one blue and two green pixels - the 4 resulting directories will contain the red, the blue and the two green parts of the image. You want all the green subs in a single place (see the sketch after this list)

From this point the workflow does not differ from an LRGB workflow - you now have "per filter" subs in your directories (minus luminance, as an OSC camera does not produce that).

Next step can be:

- IntegerResample (or this can be done later)

- Integration (integrate each color separately but align them all against the same reference frame)

- RGB combine

- (you can do IntegerResample at this stage as well if you haven't done it above)

- Process your image as you normally would
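
For the "move the greens" step above, something along these lines would do it outside PixInsight - a minimal sketch with hypothetical directory names (SplitCFA's actual output layout may differ):

```python
# Merge the two green CFA directories into one, renaming the moved files to
# avoid name collisions. Directory names are hypothetical examples - adjust
# them to whatever layout SplitCFA actually produces on your system.
from pathlib import Path
import shutil

src = Path("split_output/CFA2")   # second green plane
dst = Path("split_output/CFA1")   # first green plane - keep all green subs here

for f in sorted(src.glob("*.fits")):
    shutil.move(str(f), str(dst / f"{f.stem}_g2{f.suffix}"))
```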

 


14 minutes ago, vlaiv said:

That does not seem right. Here is what you should do:

....

 

So bypass Debayer and do it yourself with SplitCFA, which of course allows you to IntegerResample at will :)

Just did a SplitCFA on a non-debayered sub and it did split it down into 4 components, which is nice.

So if I do this to all the calibrated images and move the greens, I can then process like an RGB image as I used to with mono cameras.

A bit more complicated, but not too much hassle.

Thanks for your time @vlaiv for this discussion. Much learnt.

 

 


3 minutes ago, Catanonia said:

So if I do this to all the calibrated images and move the greens, I can then process like an RGB image as I used to with mono cameras.

Exactly.

If you think about it - a color sensor / Bayer matrix is nothing more than RGB filters on top of the pixels. It is just a matter of grouping each color into an image - and that is what SplitCFA does.

You'll also notice that each of those 4 images is in fact half the height and width of the sensor. This is the reason why color sensors effectively have a lower resolution than the one indicated by the pixel size (every pixel of each color is really spaced "two pixels apart", as they are interleaved depending on the color filter on them).


2 hours ago, vlaiv said:

Exactly.

....

Tried out the process on some old data and worked out how to do it.

Will remember this now for the next imaging session and go with 2000mm F8 @ bin 1 and use software binning. I will only use the reducer when I "need" the wider field.

Shame I have a couple more images with the reducer to process; binning them would not be wise at 2.2"/px, so I will not re-bin them.

Thanks @vlaiv for taking an extraordinary amount of time to explain this.


3 minutes ago, Catanonia said:

Shame I have a couple more images with the reducer to process; binning them would not be wise at 2.2"/px, so I will not re-bin them.

Actually - maybe try it, you might be surprised.

The proper sampling rate can be deduced from the data - look at the average FWHM in arcseconds in your subs and you can get a sense of the sampling rate required. The relation is rather straightforward: FWHM / 1.6 will give you the sampling rate.

If you have 3.52" FWHM or larger, 2.2"/px is actually the proper sampling rate for that case. Even if your FWHM is smaller than 3.52" you might not lose much in terms of detail, yet you might gain quite a bit in terms of SNR.
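
As a quick illustration of that rule of thumb (plain Python, values taken from the numbers quoted in this thread):

```python
# The FWHM / 1.6 rule of thumb for target sampling rate, as described above.
def target_sampling_rate(fwhm_arcsec: float) -> float:
    return fwhm_arcsec / 1.6

print(target_sampling_rate(3.52))   # 2.2 "/px, matching the example above
print(target_sampling_rate(2.4))    # 1.5 "/px
```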


The process for those wanting to follow - basically a summary.

For my scope and camera combination I should be aiming at 1.5"/px.

If I bin my data 2x (software or hardware) at 2000mm @ F8 then this will be almost twice as fast as using the 0.67 reducer at 1400mm and F5.3, with no loss in resolution.
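
A rough sanity check of the "almost twice as fast" figure, assuming the speed per pixel scales with the sky area each pixel covers (same aperture in both setups; sampling rates as worked out later in this thread):

```python
# Flux per pixel scales with the sky area each pixel covers (same aperture).
native_binned_scale = 2 * 0.776   # "/px: split-CFA scale at 2000 mm, then binned 2x
reduced_scale       = 1.085       # "/px: split-CFA scale at ~1430 mm with the 0.67x reducer

print((native_binned_scale / reduced_scale) ** 2)   # ~2.0 -> roughly twice the signal per pixel
```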

Therefore I will only use the reducer "if and when" I need a larger FOV for a target.

 

I will do software binning in PI

The workflow is as follows for me with the OSC ZWO 2600 MC Pro:

  • Calibrate all images with darks, flats and bias
  • Blink images and remove bad ones
  • Perform a SplitCFA process on the calibrated subs to split them into CFA0 (R), CFA1 (G), CFA2 (G), CFA3 (B) channel directories
  • Combine the 2 greens directories into 1
  • Align all the images to a reference one with ImageRegistration
  • ImageIntegration to produce Red, Green and Blue channels
  • Perform an IntegerResample 2x bin on each of the RGB channel integrations
  • Use LRGBCombination to bring them back to an RGB image
  • Process as normal

 


32 minutes ago, vlaiv said:

Actually - maybe try it, you might be surprised.

....

I did a quick check on unbinned (software or hardware) files.

I put in 0.79 as the arcsec/pixel - is this correct??

If so, then I am hovering in that range and so it may be worth it.

BTW this is pretty poor data that was rejected, as only the 16 subs listed here worked out of 100.

 

Sampler.jpg


36 minutes ago, Catanonia said:

I put in 0.79 as the arcsec/pixel - is this correct??

Depends on what kind of subs you measured. To be sure, you can plate solve a single sub to get the exact sampling rate.

Without the focal reducer, a regularly debayered image will have a sampling rate of 3.76 * 206.3 / 2000 = ~0.388"/px

Without the focal reducer, a splitCFA image (or rather a color channel sub) will have twice the above sampling rate, so 0.776"/px

With the focal reducer and 1430mm of FL, a regularly debayered image will have 3.76 * 206.3 / 1430 = ~0.542"/px

splitCFA + reducer will hence have 1.085"/px

To be honest, I'm not surprised by those FWHM results. Even slightly worse than excellent seeing will easily push resolution above 3" FWHM (or to around a 2"/px equivalent).
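
The same arithmetic as a small helper, using the pixel-size formula from the post (3.76 µm pixels; 206.3 is the arcseconds-per-radian constant 206265 scaled for µm and mm):

```python
# Sampling rate ("/px) from pixel size and focal length, as used above.
def pixel_scale(pixel_um: float, fl_mm: float, cfa_split: bool = False) -> float:
    scale = pixel_um * 206.3 / fl_mm
    return scale * 2 if cfa_split else scale   # split-CFA planes sample twice as coarsely

print(pixel_scale(3.76, 2000))                   # ~0.388 "/px, debayered at native FL
print(pixel_scale(3.76, 2000, cfa_split=True))   # ~0.776 "/px
print(pixel_scale(3.76, 1430))                   # ~0.542 "/px, debayered with the reducer
print(pixel_scale(3.76, 1430, cfa_split=True))   # ~1.085 "/px
```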

 


2 hours ago, Catanonia said:

The process for those wanting to follow - basically a summary.

....

 

This workflow is more or less what I do in Siril to my data. It looks scary but once you get used to it it's just another step in the voodoo magic of astrophoto processing, and it's not that big of a hassle in the end. Splitting has some very small benefits over debayering and then binning, but everything in AP processing is done to get a small gain, so why not do it, I think.

I don't blink my data, or really visually inspect the subs in any way for that matter. I just use statistics measured from the subs to weed out the bad ones. I use measured FWHM, star roundness, background levels and SNR to make the decisions. In Siril I use the dynamic PSF function to create a plot of all the subs and it's just a few clicks to determine what gets kept and what gets thrown out. I think in PI you could use the SubframeSelector tool to do the same, but with much better stats than what Siril can provide. Blinking can be fun for seeing asteroids and other anomalies in the subs, but other than that I don't give it much value for determining whether a sub is kept or not.

Sometimes after a night I see that almost all the subs are less than ideal and they get thrown out, but with long integrations I care less about losing a few hours of data since it's nowhere near done anyway. I have 2 in-progress projects that are at 15+ hours so far and I estimate I may need to double that to reach a result that I really like, so what's losing a few hours when I'm looking at 15+ more hours in the end? For the most demanding objects I see that I not only need a long integration, but will also need all of the data to be better than average to get a sharp and deep image in the end.


1 minute ago, ONIKKINEN said:

This workflow is more or less what I do in Siril to my data.

....

I have an algorithm that will keep all the data and use most of it :D

I just don't have time to implement it - but hopefully that will change soon.

We often use deconvolution on the final stack to sharpen it up - but that is really not the best way to use deconvolution. Most deconvolution algorithms are designed with a simple premise in mind - there is some read noise and some Poisson / shot noise, and the image has been blurred with a known kernel.

When we deconvolve a stacked image, it is no longer simple statistics. We have changed the noise statistics by using interpolation for aligning, and stacking combines the individual PSFs / blur kernels into a very complex shape. We then use an approximate blur kernel for deconvolution.

My idea goes like this.

We split the data into three groups - subs at the target FWHM (with very small variation from the target FWHM), subs below the target FWHM and subs above the target FWHM.

We stack the subs at the target FWHM to get a reference PSF.

For each sub below the target FWHM we derive a convolution kernel by deconvolving the reference stars with the sub's stars and averaging the result. We use this kernel to convolve the sharper images to make their stars the same as the reference frame. We do a similar but opposite thing with subs that have higher FWHM - we find a deconvolution kernel and then deconvolve those subs.

After this operation all subs will have about the same FWHM (this will also correct for star elongation if we form the reference from subs with round stars). Subs from the below-target group will have improved SNR, while subs from the above-target group will have worse SNR.

In the end we need a very good adaptive algorithm that can take into account different levels of SNR (per pixel, not per image, as there is no single SNR value per image). I already have that bit implemented.

The above should produce better SNR than throwing away poor subs - without loss of resolution.
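
A minimal sketch of the FWHM-matching step, under the simplifying assumption of Gaussian PSFs so the matching kernel can be written down analytically. The proposal above derives the kernels from the measured stars themselves, and also handles subs that are softer than the target (by deconvolution); nothing here is the actual implementation:

```python
# Match a sharper-than-target sub to the target FWHM, assuming Gaussian PSFs
# so the matching kernel is itself a Gaussian with
# sigma_match^2 = sigma_target^2 - sigma_sub^2.
import numpy as np
from scipy.ndimage import gaussian_filter

SIGMA_PER_FWHM = 1.0 / 2.3548   # sigma = FWHM / (2 * sqrt(2 * ln 2))

def match_to_target(sub: np.ndarray, fwhm_px: float, target_fwhm_px: float) -> np.ndarray:
    """Blur a sharper sub so its effective FWHM roughly matches the target."""
    if fwhm_px >= target_fwhm_px:
        return sub                                  # at or above target: left alone in this sketch
    sigma_sub = fwhm_px * SIGMA_PER_FWHM
    sigma_target = target_fwhm_px * SIGMA_PER_FWHM
    sigma_match = np.sqrt(sigma_target**2 - sigma_sub**2)
    return gaussian_filter(sub, sigma_match)
```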


19 minutes ago, vlaiv said:

I have an algorithm that will keep all the data and use most of it :D

....

If I understood correctly how it works, I think this kind of automated process on subs of various quality would be almost revolutionary for astrophotography processing?

You are a seemingly endless well of knowledge and wisdom when it comes to this hobby, thanks for sharing some of that with us mere mortals.


1 minute ago, ONIKKINEN said:

If I understood correctly how it works, I think this kind of automated process on subs of various quality would be almost revolutionary for astrophotography processing?

....

Yes, it would be quite automated. It would read the FWHM of all subs and show you the distribution. You would only need to select a target FWHM somewhere in that distribution and it would do the rest.

There is a similar thing that can be done to correct known aberrations that might also be interesting.

I see quite a few people image without a coma corrector to start with. This approach could use synthetic coma to reverse its effects. It would have some drawbacks - since coma depends on distance from the centre of the field, it would have to be a variable-PSF deconvolution, and as such it would introduce more noise in the outer regions of the FOV - but I guess it is worth a try to implement something like that.


11 hours ago, vlaiv said:

Depends on what kind of subs you measured. To be sure, you can plate solve a single sub to get the exact sampling rate.

....

 

Thanks, those are the magic numbers I needed. 

The rest I can work out from a couple of images dropped into PI analysis :)


10 hours ago, ONIKKINEN said:

This workflow is more or less what I do in Siril to my data.

....

I use Blink for Elon (Starlink) trails and obvious issues like frames being out of focus.

I will then start to use SubframeSelector before integration of the channels to check and keep the best ones. Played with it a few times so I know how it works.

