
ZWO 294MM Bin1 v Bin2



Hi all,

I was trying out some different camera settings the other night, mainly a test to see whether using the "unlocked" bin1 mode on my 294 would be worth it (I've previously avoided it because I figured my local seeing would not support a resolution higher than what I can achieve using the "standard" bin2, and 47MP files take up quite a bit more storage space). My thoughts are below; I'd invite anyone to share theirs on this too.

Acquisition & analysis info

  • The target, for no particular reason other than it was visible, was M13
  • Bin 1: gain 108 (unity), offset 30, -10C, 60s subs, image scale = 0.87"/pixel
  • Bin 2: gain 120 (unity), offset 30, -10C, 60s subs, image scale = 1.74"/pixel
  • 100mm triplet refractor, 550mm focal length
  • Astronomik L3 filter
  • Guiding was nice and consistent, around 0.5 - 0.6" RMS throughout capture, fairly equal in RA and DEC
  • Bin 1 frames taken first, immediately followed by the bin 2 frames
  • Each stack calibrated with its own set of darks, flats, and flat darks
  • Calibration, registration, and stacking done in Siril
  • Image analysis done in Pixinsight using FWHMEccentricity script on default settings - reported figures are median FWHMs expressed in pixels, not arcseconds

Results

For all images, the measured FWHM (in pixels) is stated in the top right-hand corner. Also note that the annotation says "slight midtones stretch", but this was done with an STF and the data remained linear.

Comparison of stacks

The bin1 stack gave a reported FWHM of 3.210 pixels, which equals 2.793". Following the advice I've seen from @vlaiv on here (shameless tagging to get your attention 😃) of an optimal sampling rate of FWHM/1.6, that would imply an optimal sampling rate of 1.75"/pixel. This seemed to confirm my first thought (i.e. my conditions/equipment don't support a higher resolution than can be achieved using bin2; in fact my sampling rate at bin2 is pretty much perfect, for the conditions that night at least). However, the bin2 stack gave a reported FWHM of 2.071 pixels (3.604"), somewhat higher than the bin1 data suggests should have been possible.
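
For anyone who wants to check the arithmetic behind that figure, it's just a couple of lines (values as quoted above):

fwhm_arcsec = 3.210 * 0.87          # bin1 stack FWHM (px) x bin1 image scale ("/px)
optimal_scale = fwhm_arcsec / 1.6   # the FWHM/1.6 rule of thumb
print(round(fwhm_arcsec, 3), round(optimal_scale, 2))   # -> 2.793 and 1.75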

I then decided to bin the bin1 image x2 using IntegerResample in PI so both images were at the same scale for direct comparison (all the bin1 images below have been binned x2 in that way). Now, the x2 binned bin1 image gives a FWHM of 1.780 pixels (3.097") - when converted to arcseconds, it's a little higher than for the native bin1 image*, but it shows a ~9% improvement in measured FWHM compared to the native bin2 image (ok, not massive, but it's a free resolution improvement - the words "free" and "astrophotography" don't often cross paths 🤣). The bin1 image also appears visually sharper (again, not massively, but I think it is noticeable when viewed at 100%).

*I'm sure this is probably expected, but I don't know why - perhaps someone can explain that to me.
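
For anyone without PI, the x2 software bin can be sketched in a few lines of numpy. This is only the averaging idea, not PI's actual IntegerResample implementation, and it assumes a mono image whose dimensions are divisible by 2:

import numpy as np

def bin2x2(img):
    # Average 2x2 blocks of pixels (software bin x2).
    # Assumes a mono frame with even height and width.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))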

[Attached image: comparison of stacks]

 

Comparison of raw subs

I thought maybe the seeing had suddenly worsened during the bin2 exposures, so I looked at two subs taken immediately after one another (the last bin1 sub and the first bin2 sub). These gave very similar FWHMs, with bin1 still taking the lead.

[Attached image: comparison of raw subs]

 

Comparison of calibrated subs

Just to check the calibration wasn't doing something weird (you never know with a 294!! 😆), I compared those same subs. Very slight changes, but they're still quite similar.

[Attached image: comparison of calibrated subs]

 

Comparison of calibrated and registered subs

Same two frames again, this time registered with Siril's default method (pixel area relation). A-ha! A 6% increase in FWHM for bin1, but a 12% increase for bin2. The issue with the bin2 images must therefore stem from the interpolation method used during image registration!

[Attached image: comparison of calibrated and registered subs]

 

Comparison of stacks using different registration methods (Bin2)

It is clear that different interpolation algorithms lead to different FWHMs in the stack. For this dataset, Lanczos-4 gives the best result and bilinear the worst (bilinear also gave the same result as Siril's default of pixel area relation), but even the best is still not quite as good as the x2 binned bin1 stack from above.

[Attached image: bin2 registration comparison]
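
The effect is easy to reproduce on synthetic data, by the way. Here's a rough sketch using scipy (which only offers spline interpolation, not Siril's pixel area or Lanczos kernels, so it just illustrates the general trend): shift a fake Gaussian star by a sub-pixel amount with a low-order and a higher-order interpolator and see how much the measured FWHM grows.

import numpy as np
from scipy.ndimage import shift

def gaussian_star(size=64, fwhm=3.0):
    # Synthetic star: 2D Gaussian with the given FWHM in pixels
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size] - size // 2
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

def measured_fwhm(img):
    # FWHM estimated from the intensity-weighted second moment
    _, x = np.indices(img.shape)
    total = img.sum()
    xc = (x * img).sum() / total
    var = ((x - xc)**2 * img).sum() / total
    return 2.355 * np.sqrt(var)

star = gaussian_star()
print("unshifted:", round(measured_fwhm(star), 3))
for order, name in [(1, "bilinear"), (3, "cubic spline")]:
    shifted = shift(star, (0.37, 0.52), order=order)   # arbitrary sub-pixel shift
    print(name + ":", round(measured_fwhm(shifted), 3))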

 

Comparison of stacks using different registration methods (bin1, stacks binned x2)

Same exercise using the bin1 data. This time bicubic has given the best result and bilinear again the worst (and, also again, the bilinear result is the same as Siril's default pixel area relation algorithm). In each of these cases, the x2 binned bin1 stacks show a 5 - 9% improvement over the equivalent native bin2 stack, and even the worst bin1 stack still has a lower FWHM than the best bin2 stack.

[Attached image: bin1 registration comparison (stacks binned x2)]

 

Conclusion

Based on this (admittedly very limited) data, my preliminary conclusion is that capturing data using the "unlocked" bin1 mode of the 294MM, then binning the stack x2 in software, does have the potential to net a slight resolution increase in the stacked image compared to capturing with the "standard" bin2 mode. The difference between individual subs captured using the two modes is virtually non-existent, so the improvement seems to stem solely from better registration of the bin1 data. As to why this appears to be the case, I am assuming the finer sampling of the bin1 data allows the registration algorithms to do their job with a bit more precision.

Happy to hear any thoughts on this, including anyone completely tearing apart the conclusion I've come to and/or pointing out any obvious or stupid mistakes I've made!

Thanks


For almost all cases I have been satisfied with bayer-splitting (effectively bin2) my OSC data to an end resolution of 1.52"/px, but this spring season I managed to spend some time under decent seeing and that's no longer the case. I noticed that APP reports the registration RMS as around 0.2 px in both cases, so obviously the bin2 data suffers twice as much on the sky when the error is quoted in pixels.

So I concluded that, in my case at least, if the data is really good there are actual gains in stacking at a higher resolution. I guess registering frames can only be so accurate, and FWHM values of less than 3 px are affected quite a lot.
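
Back-of-envelope, taking that ~0.2 px RMS at face value (and assuming the bin1-equivalent scale is simply half my 1.52"/px figure):

reg_rms_px = 0.2
scale_bin1 = 0.76   # "/px - assumed, half of the bayer-split scale
scale_bin2 = 1.52   # "/px - the bayer-split scale mentioned above
print(reg_rms_px * scale_bin1, "arcsec at bin1 vs", reg_rms_px * scale_bin2, "arcsec at bin2")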


1 hour ago, The Lazy Astronomer said:

*I'm sure this is probably expected, but I don't know why - perhaps someone can explain that to me.

There is "pixel blur" component to the FWHM which is usually small, but can make a difference. Larger pixels contributed to larger FWHM because of this.

The cause is that pixels are not point-sampling devices; rather, they "integrate" over their surface, and this integration creates a tiny blur at the pixel level. Since all blurs add up, this increases the FWHM when converted to arcseconds.
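
As a very rough model (my approximation only): treat each blur source as roughly Gaussian and add FWHMs in quadrature, with the pixel contributing a blur of about its own width. The seeing value here is hypothetical, just to show the direction of the effect:

import numpy as np

seeing_fwhm = 2.6                   # arcsec - hypothetical seeing + optics blur
for pixel_scale in (0.87, 1.74):    # bin1 vs bin2 image scales from above
    total = np.hypot(seeing_fwhm, pixel_scale)   # quadrature sum of the two blurs
    print(pixel_scale, '"/px -> total FWHM ~', round(total, 2), '"')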

Btw - it is always preferable to stack at higher resolution and bin at the end.

Stacking involves registration, and registration uses some sort of interpolation to do its job. Interpolation introduces another little bit of blurring to the data, very similar to that pixel blur. The level of blur depends on the interpolation algorithm used (I've written about this before in another topic): Lanczos provides the best results, while surface methods and bilinear interpolation provide the worst (in terms of blur introduced).

In any case, interpolating binned pixels makes the interpolation blur operate on a larger pixel size, and hence introduces a larger blur when converted into arcseconds.

I think that best approach would be this:

1. use bin1 to capture the data

2. prepare data in bin x2 format - not by regular binning but by using split bin (similar to split debayer - it leaves pixel size the same but halves resolution and produces x4 more images to stack)

3. stack that with Lanczos resampling for alignment

This approach should produce best FWHM of resulting stack compared to other methods.
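
In numpy terms, split bin x2 is just pulling the four 2x2 phases out as separate frames (a sketch only, assuming a mono frame with even dimensions):

import numpy as np

def split_bin2(img):
    # Split bin x2: return the four 2x2 phases as separate half-resolution frames.
    # No pixel values are changed and pixel size stays the same -
    # analogous to split debayer, just without a Bayer matrix.
    return [img[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]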


1 hour ago, vlaiv said:

I think that best approach would be this:

1. use bin1 to capture the data

2. prepare data in bin x2 format - not by regular binning but by using split bin (similar to split debayer - it leaves pixel size the same but halves resolution and produces x4 more images to stack)

3. stack that with Lanczos resampling for alignment

This approach should produce best FWHM of resulting stack compared to other methods.

Thanks @vlaiv - I think I've done this process correctly:

1. Used the bin1 data

2. Wasn't sure how to do this in Siril (or if it can do it at all), so used Pixinsight's SplitCFA process on the calibrated subs - this gave me 4x as many images of 4144 x 2822 px (native bin1 is 8288 x 5644 px), so I think that's done what you suggest.

3. Aligned using Lanczos-4 and stacked

At the end of that I have one integrated image of 4144 x 2822 px, with a measured FWHM of 1.639 px or 2.85".

Overall, that's an additional 9% improvement over Lanczos-4 registration of the native bin2 data, and a 21% improvement compared to the native bin2 data registered using Siril's default method!

Before I commit to capturing all my future data using the "unlocked" bin1 mode and stacking as above (and probably also having to purchase a few more TB of storage for the 89MB subs!) - are there any downsides you can think of when handling the data this way, in terms of the SNR of the stack, compared to capturing with "normal" bin2 mode and aligning/stacking without the split binning?

Thanks in advance!


8 hours ago, The Lazy Astronomer said:

I think I've done this process correctly:

Yes, that is the correct process. SplitCFA can be used instead of split bin x2. If you want to bin by a higher factor, however, you'll need to use a dedicated script for that (SplitCFA only works for 2x2, not 3x3, 4x4 and so on).

8 hours ago, The Lazy Astronomer said:

Before I commit to capturing all my future data using the "unlocked" bin1 mode and stacking as above (and probably also having to purchase a few more TB of storage for the 89MB subs!) - are there any downsides you can think of when handling the data this way, in terms of the SNR of the stack, compared to capturing with "normal" bin2 mode and aligning/stacking without the split binning?

There are no downsides if you want "good" SNR improvement.

Any blur will in fact increase SNR somewhat, but that is "poor" SNR improvement because it comes from pixel-to-pixel correlation that is not present in the original data.

If you measure the SNR of the different stacking methods, you will see that FWHM and SNR are in an inverse relationship because of that. The SNR gains due to this are small (as are the FWHM gains) compared to the overall gain from binning.

Ideally, you would want to do the "traditional" shift-and-add technique. This method allows you to skip interpolation altogether, but it requires the ability to guide very precisely and for the guide system to be connected to the main camera, or for the software to understand the relationship between the guide sensor orientation and the main sensor orientation. When dithering, the software can then issue a move command that places the main sensor an integer number of pixels in X and Y away from the last sub.

When stacking, all you then need to do is shift each sub in the opposite direction to that dither.

(If you think about it, nothing is different from regular imaging except that you ensure no interpolation is needed - just a shift of pixel indices to align the subs. The data is left as is; no pixel values are changed in the alignment process.)
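
As a sketch, the whole alignment step then reduces to integer index shifts (assuming the dither offsets are already known in main-camera pixels):

import numpy as np

def shift_and_add(subs, offsets):
    # subs:    list of 2D arrays
    # offsets: list of (dy, dx) integer dithers applied at capture time
    # Each sub is shifted by whole pixels only (opposite to its dither),
    # so no pixel values are changed before averaging.
    # (np.roll wraps at the edges - in practice you would crop to the common overlap.)
    stack = np.zeros_like(subs[0], dtype=np.float64)
    for sub, (dy, dx) in zip(subs, offsets):
        stack += np.roll(sub, (-dy, -dx), axis=(0, 1))
    return stack / len(subs)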

Polar alignment also needs to be perfect to avoid any field rotation.

In any case, since this is very hard to do on amateur setups with the gear and software that we use, the above method is the next best thing. It preserves the most sharpness (introduces the least amount of blur) and keeps the data as clean as possible (which is beneficial at the processing stage - denoise algorithms work better if the data is good than if it is already too correlated).


9 hours ago, vlaiv said:

There are no downsides if you want "good" SNR improvement.

Music to my ears! 😁

I did notice some stars had a slightly diamond-like shape - perhaps I need to play with the clamping threshold a bit.

9 hours ago, vlaiv said:

Ideally, you would want to do the "traditional" shift-and-add technique. This method allows you to skip interpolation altogether, but it requires the ability to guide very precisely and for the guide system to be connected to the main camera

Assuming it'd be possible to work out the guiding problem, is this potentially where we could get to in the near future if the re-emergence of dual sensor cameras takes off, do you think? 


3 minutes ago, The Lazy Astronomer said:

Assuming it'd be possible to work out the guiding problem, is this potentially where we could get to in the near future if the re-emergence of dual sensor cameras takes off, do you think? 

Actually, all it takes is for imaging software to talk to guiding software.

It already does that at some basic level - imaging software sends dithering commands and reads the guiding graph (waiting for it to settle before continuing exposures).

What the imaging software needs is the ability to read the guider calibration, or to send its own plate-solve data to the guider.

Instead of dithers being completely random, they need to be random but constrained so that star positions in two different light frames end up an integer number of pixels apart.

In principle, if you don't have significant field rotation, you can already do this - force sub alignment to land on an integer pixel boundary. The problem is that you'll increase FWHM significantly, because there can be a sub-pixel shift between frames: the guider and imaging system are not in sync when performing the dither, so the dither can have an arbitrary value - and it often does, like a 10.35 pixel shift in RA.

Once you have two subs that need to be shifted by 10.35 px to best align them, but you only shift them by 10 px (an integer value, so no interpolation is needed), you introduce that 0.35 px of elongation into your stars - i.e. you increase the FWHM. Ideally, the dither needs to be 10 px to start with (in imaging-camera pixel space).
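
A toy example to put a number on that (a sketch only): average two copies of a Gaussian star offset by the leftover 0.35 px and compare the FWHM with the perfectly aligned case.

import numpy as np

def star(size=64, fwhm=3.0, dx=0.0):
    # Analytic Gaussian star of the given FWHM, optionally offset by dx pixels
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size] - size // 2
    return np.exp(-((x - dx)**2 + y**2) / (2 * sigma**2))

def fwhm_x(img):
    # FWHM along x from the intensity-weighted second moment
    _, x = np.indices(img.shape)
    xc = (x * img).sum() / img.sum()
    var = ((x - xc)**2 * img).sum() / img.sum()
    return 2.355 * np.sqrt(var)

aligned    = (star() + star()) / 2            # dither undone exactly
misaligned = (star() + star(dx=0.35)) / 2     # 0.35 px left over after an integer shift
print(fwhm_x(aligned), fwhm_x(misaligned))    # the second comes out slightly broader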

 


When I look at a picture, that's what I do. I look at a picture. When I want to compare two pictures I compare them, and if they are from the same data I can 'blink compare' them. I don't trawl back through loads of numbers, agonising over them. What am I missing?

Olly


Ever the pragmatist, @ollypenrice 😁

I wouldn't say I was agonising though, more just curious. I suppose I liked the certainty of an objective assessment afforded by an image analysis tool, in addition to the subjective one made by my eyes (and indeed, where the difference was too small to see visually, it was the only way I could judge it).

I'll freely admit that, visually, I couldn't tell the difference between any of the bin1 images (I could, however, see an improvement in the [x2 binned] bin1 vs native bin2), but I have learned something interesting (well, I think it's interesting):

(1) My image integration routine was adding a not-insignificant amount of extra blurring, not present at capture, and (2) @vlaiv has shown me how to minimise this extra blurring with some pretty simple changes.

Now, the key question: is it worth it? I'll let you know when I've had to buy ANOTHER storage drive 😄


6 hours ago, The Lazy Astronomer said:

Ever the pragmatist, @ollypenrice 😁

I wouldn't say I was agonising though, more just curious. I suppose I liked the certainty of an objective assessment afforded by an image analysis tool, in addition to the subjective one made by my eyes (and indeed, where the difference was too small to see visually, it was the only way I could judge it).

I'll freely admit that, visually, I couldn't tell the difference between any of the bin1 images (I could, however, see an improvement in the [x2 binned] bin1 vs native bin2), but I have learned something interesting (well, I think it's interesting):

(1) My image integration routine was adding a not-insignificant amount of extra blurring, not present at capture, and (2) @vlaiv has shown me how to minimise this extra blurring with some pretty simple changes.

Now, the key question: is it worth it? I'll let you know when I've had to buy ANOTHER storage drive 😄

I'm not averse to using the tool but, in a case where I disagree with its findings, I'll give precedence to my subjective opinion. I'm certainly a pragmatist and remember Dennis Isaacs saying, 'It isn't the histogram you hang on the wall.'

A more general point: we know that stars are 'exceptional cases' in the world of optics. I wonder to what extent we can rely, therefore, on stellar FWHM as an indicator of the resolution of non-stellar details?

Olly


2 hours ago, ollypenrice said:

A more general point: we know that stars are 'exceptional cases' in the world of optics. I wonder to what extent we can rely, therefore, on stellar FWHM as an indicator of the resolution of non-stellar details?

Olly

Optical theory would say yes, you can. Regards Andrew

PS On reflection that's not quite true, as a CMOS or CCD sensor has finite-sized pixels and is thus not shift invariant. That is, it depends on exactly where the point source (star) falls on the pixel.

However, for all practical purposes it does, as other methods like MTF are similarly impacted.


2 hours ago, andrew s said:

PS On reflection that's not quite true, as a CMOS or CCD sensor has finite-sized pixels and is thus not shift invariant. That is, it depends on exactly where the point source (star) falls on the pixel.

However, for all practical purposes it does, as other methods like MTF are similarly impacted.

I'm not entirely sure that it will make a difference in the case where you are sampling optimally.

Sure, if you take two images with finite-sized pixels and a positional shift between them, then the sample values will be different - but those sample values are tied to different locations (sampling positions), so it's normal for them to differ. The question is: when you restore the original function from both sets of data using the ideal interpolation function (sinc), do you get the same thing or a different thing?

My guess is that you get the same thing - it will be the original function produced by the optical system, convolved with a function representing the pixel shape and sensitivity over that shape, and if I'm not mistaken, convolution is shift invariant.
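
In symbols (just restating the above): the sensor records samples of s(x) = (object ∗ PSF ∗ pixel aperture)(x), taken at positions x_k = x_0 + k·p. Because convolution is shift invariant, s(x) itself does not depend on where the grid origin x_0 happens to fall - so provided s is adequately band limited (i.e. we are sampling optimally), sinc reconstruction from either set of samples should return the same continuous function.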


1 hour ago, vlaiv said:

I'm not entirely sure that it will make a difference in the case where you are sampling optimally.

Sure, if you take two images with finite-sized pixels and a positional shift between them, then the sample values will be different - but those sample values are tied to different locations (sampling positions), so it's normal for them to differ. The question is: when you restore the original function from both sets of data using the ideal interpolation function (sinc), do you get the same thing or a different thing?

My guess is that you get the same thing - it will be the original function produced by the optical system, convolved with a function representing the pixel shape and sensitivity over that shape, and if I'm not mistaken, convolution is shift invariant.

I had a good paper on this which, of course, I can't find now.

Shift invariance is fundamental to all linear imaging theory - hence the point sampling of Nyquist. Things like MTF, convolution and optimal sampling don't formally apply to integrating areal detectors.

I do have a paper showing this via simulation for slit spectrographs, if you're interested.

I doubt that, with pixels getting ever smaller and the oversampling this makes possible, there will be any practical difference.

Regards Andrew 

PS "Theoretical Bases And Measurement Of The MTF Of Integrated Image Sensors" is behind a pay wall. However the abstract makes my point 

"By analogy with optics, the spatial resolution of image sensors is generally characterized by the Modulation Transfer Function (MTF). This notion assumes the system being a linear filter, which is not the case in integrated image sensors, since they have a discrete photoelement structure. These sensors must in fact be considered as integral samplers. Their response to any irradiance distribution can thus be computed, knowing the pitch of photoelements and using a characteristic function. This function is more or less similar to the MTF. Once exact theoretical foundations have been defined, a computer simulation enables the various MTF measuring methods to be compared this makes it possible to rule out er-rors inherent to experiments. The most accurate and reliable method appears to be the knife edge method, applied with a relative displacement of the sensor and of the image. This avoids the occurence of aliasing phenomenon. Experimentation of this method for measurement of the CCD sensors characteristic function, which we call MTF as agreed, is described. This method also makes it possible to evaluate the transfer inefficiency of shift registers."

