1x1 bin vs 2x2 bin comparison


swag72

I have been imaging with a C9.25 of late, and with this I have been using a small-chip camera. I know that this is not ideal: 3.9µm pixels when imaging at 2.3m focal length.

I have been told that it's best to use 2x2 bin with this, as otherwise I am going to be over-sampling - which is not ideal.

So if anyone is interested, here are the facts of the matter.

1x1 bin: 0.32 arc-seconds/pixel image scale

2x2 bin: 0.65 arc-seconds/pixel image scale
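For anyone who wants to check the arithmetic, the standard plate-scale formula is scale = 206.265 × pixel size (µm) / focal length (mm). A minimal sketch, assuming the C9.25's native 2350mm focal length (the post's round "2.3m"); note the formula gives roughly 0.34/0.68 rather than 0.32/0.65, which matches a quibble in a later reply:

```python
def plate_scale(pixel_size_um, focal_length_mm, binning=1):
    # Plate scale in arc-seconds per pixel:
    #   206.265 * pixel size (um) * binning / focal length (mm)
    return 206.265 * pixel_size_um * binning / focal_length_mm

print(plate_scale(3.9, 2350))     # unbinned: ~0.34 "/pixel
print(plate_scale(3.9, 2350, 2))  # 2x2 binned: ~0.68 "/pixel
```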

Theory (That one beginning with N that I can never spell, let alone understand) says that we should be imaging at around 1.5-2 arc seconds / pixel image scale. So here I am, plugging away at stupid figures.

This example wasn't done on the same night, but both images are 58x300s worth of exposures.

I'll let you make up your own mind, but I know what I will be doing in future. I would be interested in your thoughts based on the following comparison. Would you be imaging now at 0.32 arc-seconds/pixel or 0.65?

I hope that this will generate an interesting discussion and food for thought for all those long focal length imagers out there.

[Attached image: 1x1 bin vs 2x2 bin comparison]

Interesting. I don't think it's an entirely fair comparison though as you've reduced the size of the bin 1 image and I'd like to see them both at 1:1. If you're over sampled then deconvolution can be very effective.

Theory says the bin 2 will be less noisy - do you have some standard deviation figures?

The bin 1 image is showing some faint stars that I can't see in the bin 2 and this is surprising. I'm guessing this is due to the differing screen stretch between the two.

The PI team recommend dispensing with L images and just imaging with RGB at bin 1, but this approach requires a lot more time to acquire the images. LRGB with binned colours is a compromise for the time-challenged! The results can be impressive though.

Andrew

I stacked these in PI and just zoomed them to the same size on the screen and then used the same auto stretch on them both. Would this not give a fair comparison?  I've certainly not resized anything.

No standard deviation figure - I'd not know that if it bit me on the nose - I'm afraid I try to deal with pretty pictures over figures. I have an innate fear of maths.

Your images are reproduced on the screen at 1:1 and 1:2. While you haven't actually resized the images, the screen representation is different for each.

Pixel values for the bin 2 will be four times larger. I'm not sure how the auto-stretch copes with this. On my monitor the faint stars are less visible on the bin 2 image and this image lacks some contrast.

The eye will take you some way in assessing the quality of an image, we all know when something looks good. It is very easily deceived though. That's when the numbers can come to your rescue. This demonstration is very effective: https://games.yahoo.com/blogs/plugged-in/amazing-shade-illusion-see-light-024226513.html. Trust your eyes now? :shocked:

Andrew

I have wondered long and hard about whether to use 1x1 or 2x2 bin at 2.3m and f10 with such small pixels. All I have done is take a similar image at each image scale and tried to decide what I think makes a sharper image.

To my eyes, rightly or wrongly, it appears that the 1x1 is sharper, although more noisy (the latter I am not surprised about). Will it create a pleasing image? That is in the eye of the beholder.

I guess I am interested to see that focal lengths and imaging speeds we imagers would all baulk at and say are not doable are looking OK in reality. I am not good with mathematical theory, I'm afraid. If something looks better to me, then that's good enough. I'll post the finished image sometime so that people can make their own decision.

On the bare facts I would have to go with the image that looks best - 1x1. Who cares what the maths say? Maybe you could try software binning the 1x1 image to see how that then compares to the 2x2? Do you lose the apparent improvement in resolution?

ChrisH

It is an interesting comparison. I get a bit lost when on the one hand you have planetary imagers using 0.2" and on the other deep-sky imagers being told it's not worth going below 1.5-2". Also, whilst I can think of ways of reducing background noise in software I can't think of any way of regaining lost resolution that was never captured in the first place. Yes, you can enhance what is already there (deconvolution) but beyond that all you can do is interpolate between pixels (i.e., a calculated guess). So I will continue to image at unrecommended arc-sec/pxl resolutions until such point that I see it isn't working.

The bottom line is how it looks to your eyes, if there's no improvement then it's not worth doing.

ChrisH

It might be better to first work out your gain between unbinned and binned, then you will have a better idea of whether the trade-off in scale is worth it. Working at f/10, you need all the speed you can lay your hands on, especially if you're putting a mosaic together, because the sum of the binned panes will still make for a large image.

To look at this another way, which image has less noise? To my eyes, it's the 2x2. It's not really about getting more stuff when binning; it's getting the same stuff, but with a lot less noise. However, that means a complete new set of calibration frames is needed.

I agree Rob that the 2x2 has less noise, but I guess there's a possible trade off between sharpness and noise in this example. If only I understood the technical stuff behind it all!

Would it not be the case that you can make the 1x1 less noisy by taking more subs but you can't make the 2x2 any sharper?

Also, as you took these on different nights, could it be the sky conditions have had an effect?

Hi Sara,

To my eyes and on my monitor the 2x2 is almost as good. Given the saving in imaging time, I think that it is a worthwhile exercise.

A.G

I'm only looking at the images on my phone right now, so can't really accurately assess them - though 1x1 seems to be sharper. Anyway, as far as I understand it, the N theory is all to do with multiples of the seeing... so your minimum recommended resolution is determined by your seeing. Do you know what that is/was, Sara? Nyquist says your pixel scale should be no coarser than half the seeing. So with your set-up at 1x1, if your seeing is around 0.64" or better, you would be fine... I think!
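That seeing-based rule of thumb can be written down as a one-liner (a sketch of my reading of Nyquist here, so treat it with the same "I think!" caveat):

```python
def nyquist_ok(pixel_scale_arcsec, seeing_fwhm_arcsec):
    # Nyquist rule of thumb: the pixel scale should be no coarser
    # than half the seeing FWHM to capture the available detail.
    return pixel_scale_arcsec <= seeing_fwhm_arcsec / 2.0

print(nyquist_ok(0.32, 0.64))  # True: 0.32"/px fully samples 0.64" seeing
print(nyquist_ok(0.65, 1.0))   # False: 2x2 bin undersamples 1" seeing
```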

Great to see the comparison though, thanks for sharing.

I don't think the results are so much surprising as they are pleasing.

Nyquist says that you should sample at twice the rate of the highest frequency you want to reproduce. (In discussions the waters get muddied a bit, as the theory is based on one-dimensional waveforms and infinitely narrow samples, whereas in reality imaging is done in two dimensions using pixel samples of a not-infinitely-narrow size, but I think your example demonstrates it holds up well enough to be valid.)

So assuming perfect seeing and using the Dawes limit, the C9.25 should be able to resolve to 0.49 arc-seconds, and Nyquist means you should be imaging at a scale of 0.245 arc-seconds per pixel to resolve all of the available detail.  In reality seeing effects will mean that the effective resolving power is usually going to be worse than 0.49 arc-seconds.
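The Dawes-limit arithmetic above can be sketched as follows, using the usual approximation R = 116/D (D in mm) and the C9.25's 235mm aperture:

```python
def dawes_limit_arcsec(aperture_mm):
    # Dawes limit (arc-seconds): R = 116 / D, with D in mm
    return 116.0 / aperture_mm

d = dawes_limit_arcsec(235.0)  # C9.25 aperture is ~235 mm
print(d)      # ~0.49": theoretical resolving limit
print(d / 2)  # ~0.25"/pixel: Nyquist-sampled image scale
```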

- For a murky UK location the limit is anecdotally given as somewhere between 2 and 4 arc-seconds (so you'd want a pixel scale of 1 to 2 arc-seconds depending on the seeing). I assume this is where your statement above about "theory says" comes from? I've not seen any rigorous data/analysis to support this 2-4 arc-second average; clearly on a few nights you will get superb seeing and on others (many) you may do much worse than 4 arc-seconds. In practice this number seems about right to me, but has anyone got links to data showing the average figures by location that they can share, perhaps?

- As for Spain, I have no idea but it seems obvious you were doing much better than 1.36 arc-seconds of seeing since your 0.34 app (I don't make it 0.32 app but let's not quibble) image shows more detail than your 0.68 app image.

As you say, the image is noisier, per expectations. (The unbinned pixels capture a quarter as many photons as the binned ones, so the SNR is lower.) Ultimately you have to decide which is more pleasing - more noise and more detail, less noise and less detail, or spending more time on the unbinned version to reach a similar SNR to the binned version. Beauty is in the eye of the beholder; personally I like the unbinned one better, but it might be different for the whole image rather than this small section!
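The quarter-of-the-photons point can be illustrated with a quick Monte Carlo sketch (pure Python, Gaussian approximation to shot noise; the 100e- signal and 10e- read noise are made-up illustration numbers, not measurements from these images):

```python
import random
import statistics

random.seed(1)

def snr(signal, n_pixels, read_noise, n_trials=20000):
    # Monte Carlo SNR of one (possibly binned) pixel. Shot noise is
    # approximated as Gaussian with variance equal to the signal;
    # hardware binning sums n_pixels of signal but incurs the read
    # noise only once per binned readout.
    samples = []
    for _ in range(n_trials):
        shot = sum(random.gauss(signal, signal ** 0.5) for _ in range(n_pixels))
        samples.append(shot + random.gauss(0.0, read_noise))
    return statistics.mean(samples) / statistics.stdev(samples)

unbinned = snr(100.0, 1, read_noise=10.0)
binned = snr(100.0, 4, read_noise=10.0)  # 2x2 bin: 4x photons, one read
# The improvement lands between the shot-noise-limited factor of 2
# and the read-noise-limited factor of 4 (here roughly 2.5).
print(binned / unbinned)
```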

You can compare the noise in the two images in one of two ways:

1. Use the PI 'statistics' process.  Open your pair of images and then run Statistics from the process menu.  Click on the 'spanner' and make sure the MAD (Median Absolute Deviation) box is checked.  Close the dialog and then select one of the images in the dropdown at the top of the statistics window.  Look at the figure in the MAD row (you'll get one figure for a mono image, or three figures for the RGB channels if colour).  Now select the other image and look at the MAD figure.  The one with the lowest value is the least noisy image - usually it is obvious by eye, but if you have marginal differences in capture or processing this can give you a clear indication of the better image in terms of noise.

One caveat is that for the MAD figures to be comparable, the two images must contain the same part of the sky.  If you have slightly different framing then create two previews that cover the same area (I find dragging the preview box from one star to another star gives a pretty reliable way of doing that).  You'd compare the MADs by selecting the previews in the statistics dropdown.

2. The other way is to use the NoiseEvaluation script on the script menu. You just run it on a given pair of images - again, for the figures to be comparable you need to make sure they are of the same part of the sky by using previews. Look at the value of σK for each channel - again, smaller numbers are less noisy. (Bear in mind the numbers are given in scientific notation, so you need to move the decimal point the number of places specified after the e, i.e. 1.34e-2 is 0.0134 and 1.34e2 is 134.0.)
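For anyone without PI to hand, MAD itself is simple to compute. Here's a stand-in sketch in Python using synthetic data (not the real frames) to show how the comparison works:

```python
import random
import statistics

random.seed(42)

def mad(pixels):
    # Median Absolute Deviation - the figure PI's Statistics process
    # reports when the MAD box is checked.
    med = statistics.median(pixels)
    return statistics.median(abs(p - med) for p in pixels)

# Two synthetic "frames" of the same field: identical signal level,
# different noise levels standing in for the 2x2 and 1x1 stacks.
quiet = [0.5 + random.gauss(0.0, 0.01) for _ in range(10000)]
noisy = [0.5 + random.gauss(0.0, 0.03) for _ in range(10000)]
print(mad(quiet) < mad(noisy))  # the lower MAD flags the cleaner image
```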

Just to make your life even more joyous, I suggest you have a poke about on the PixInsight forums and read up on the newly released DrizzleIntegration tools if you haven't already. Your binned image is undersampled, and even your unbinned image might be undersampled if you had truly excellent seeing. Using Drizzle isn't much harder than normal integration (they've even put it into the BatchPreProcessing script). You can recover more detail, assuming your images are undersampled and that you also used dithering between frames. (The downside is more noise, but again you have to judge what the best trade-off is.) Here's one I did on my 80ED / DSLR, which is definitely undersampled even for typical UK seeing (1.9 app). Left is normal, right is drizzled - you can see the stars are much rounder/smoother (click to enlarge!):

[Attached image: normal vs drizzled integration comparison]

(Yes the left image has been scaled up to the same size as the right image for comparison purposes, but that's the whole point - there is more detail in the drizzled image).
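For the curious, the core idea behind drizzle can be sketched in a few lines. This is a toy point-kernel version (the pixfrac → 0 limit) in plain Python - real DrizzleIntegration uses overlapping drop footprints and proper weighting - just to show how dithered, undersampled frames can fill a finer output grid:

```python
def drizzle(frames, offsets, scale=2):
    # frames: list of equal-size 2D lists; offsets: (dy, dx) dither
    # shifts in input-pixel units. Each input pixel is dropped onto
    # the nearest cell of a grid `scale` times finer, then averaged.
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * (w * scale) for _ in range(h * scale)]
    cnt = [[0] * (w * scale) for _ in range(h * scale)]
    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(h):
            for x in range(w):
                oy = round((y + dy) * scale)
                ox = round((x + dx) * scale)
                if 0 <= oy < h * scale and 0 <= ox < w * scale:
                    out[oy][ox] += frame[y][x]
                    cnt[oy][ox] += 1
    return [[out[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(w * scale)] for y in range(h * scale)]

# Four 2x2 frames dithered by half-pixel steps fill the 4x4 grid
frames = [[[1.0, 2.0], [3.0, 4.0]] for _ in range(4)]
offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
result = drizzle(frames, offsets)
print(len(result), len(result[0]))  # 4 4
```

Without the dithering, half of the finer grid would stay empty - which is why dithered subs are a prerequisite for drizzle.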

(By the way, it isn't a fair comparison to talk about long exposure imaging vs. planetary imaging when trying to decide the correct pixel scale. The seeing will blur the incoming photons over an area called the point spread function. Deconvolution attempts to measure or model the PSF and reverse the process to some extent. For a long exposure image of a few seconds or more, the PSF will be the average effect of the seeing over the duration of the exposure. For planetary imaging, the target is much brighter and so you can take a really short exposure of hundredths or thousandths of a second, so the PSF will tend to be smaller as there is less time for the seeing to shift the photons about, and for some of the frames in some parts of the image, the PSF will be very small or nil. Planetary stacking tries to identify the sharp parts of the image in each frame and average them into a master image - that's why its technical name is 'Lucky Imaging' - but you have to shoot hundreds or thousands of frames to get enough luck. In theory you could do the same for DSO imaging, but you can't get sufficient SNR in such short exposures for it to work. TL;DR: planetary imagers try to get a pixel scale that is twice the Dawes limit of their scope, as for some fraction of the time they can capture that detail, whereas DSO imagers don't, because seeing will always win for exposures that long.)

I would have said that imaging in the 0.3 area would be a waste of time, but your results don't show that. Like you I'm a pragmatist, and you do have slightly better resolution unbinned. However, you also have significantly more noise. This leads me to three further questions:

1) How do the images respond to sharpening in post processing? Could the Bin 2, with its lower noise, actually end up as sharp as or sharper than the sharpened Bin 1? (I don't know, it is a genuine question.)

2) Why not shoot both? In combining them, weight the contributions to favour the dark stuff from Bin 2 and the bright stuff from Bin 1.

3) How does 1.6 hours in Bin 1 compare with 1 hour in Bin 2? (Expect a real-world gain of about 1.6, or a little better, from binning. This was what Dennis found by actually measuring them both - nothing like the 4x gain claimed by some theorists.)
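Reading that "gain of about 1.6" as a speed gain (my interpretation, not necessarily the author's), the time comparison in question 3 is just multiplication:

```python
def equivalent_unbinned_hours(binned_hours, speed_gain):
    # If binning delivers a speed gain g, it takes g hours of
    # unbinned exposure to match the SNR of one hour binned.
    return binned_hours * speed_gain

print(equivalent_unbinned_hours(1.0, 1.6))  # measured real-world gain
print(equivalent_unbinned_hours(1.0, 4.0))  # theoretical 4x claim
```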

I would also factor in the nature of the image. If it has lots of detail to resolve, Bin 1. If it is all about faint stuff, Bin 2. If it has both, both. (Leo Triplet: get the galaxy cores in Bin 1 but go for Bin 2 for the tidal tail.)

Olly

With the usually poor seeing we have in the UK, I have found that using 0.9" pixel resolution with my MN190 and 460EX definitely comes out worse than binning at 2x2 and 1.8" resolution. With the FWHM showing 3 or 4 unbinned, I believe I'm considerably over-sampling. Often the seeing is even worse - it has been quite bad in recent nights - and I've found that even with the Esprit 80ED and 2.2" resolution, unbinned shows a poorer image than binned 2x2. With the short nights ATM, going to dozens of 20m exposures for NB would take until Christmas and the target would have submerged below the trees :( DSO imaging, and particularly a matrix, is in the lap of the Gods in this country. I have a go though :D

Nothing is easy and there's certainly no free lunch :grin:

I'd love to say that I will look into the drizzle function in PI - I may actually do that if it's not too cumbersome. In true PI style it's probably harder than fitting into a size zero bikini!! - no hang on, that's impossible!!

I was fully expecting from this little experiment to do both 2x2 bin and 1x1 depending on the target and what I want to get out of it.

Lots to think about  .............. I thank you! :smiley:

If you're already doing all of your calibration, registration and integration in PI, I think you'll be pleasantly surprised at how easy it is to get drizzle working - it's just a few more boxes to tick in the existing tools to make it happen. You will want to experiment with the handful of settings to get the best trade-off between increased resolution and increased noise, but it isn't like some of the tools where it can be hours of trial and error (I'm looking at you, Deconvolution!). Of course the results rely on having enough under-sampled / dithered subs to feed in, but the same could be said of the rest of the workflow - you need good data to start with.

I got drizzle working in PI by using BPP, then StarAlignment to generate the drizzle data, and then a DrizzleIntegration. The result was not bad on my image, but true to PI style it is a manual, time-consuming process - nothing like the drag-and-drop routine of DSS. There are also some parameters that are supposed to make a difference, but life is too short to learn all these in PI. For me the default settings will have to do, or I'll just use DSS to drizzle.

Regards,

A.G

It seems to me the noise in the 1x1 is of much higher frequency than the smallest 'real' detail element in the image - meaning it would be easy to remove whilst leaving all the useful image data intact. However, without the raw data to hand to process, I cannot be certain how well it would clean up. Just how it looks to me.

ChrisH
