
Cameras with larger pixels (for longer FL)? Or just binning?


StuartT


I currently have two ZWO cameras. They both have a pixel size of 3.76 µm. With my Esprit 150 and reducer this gives me a pixel scale of 0.96"/px, which is just about OK (verging slightly on oversampled).

But I also have a 20cm SCT with a focal length of 2030mm, for which both cameras would definitely be oversampling.

So.. for the SCT, should I (a) try and find a camera with bigger pixels? If so, any recommendations? Or (b) should I just use 2x2 binning on the existing cameras?

Is there a difference between the two approaches?  


Binning joins pixels together into one super pixel; whether you bin 2x2 or 3x3 determines the size of that super pixel.

Binning comes at the cost of resolution.

I'm sampling at 0.71"/px with my 8-inch SCT and reducer, and I'm in the middle of a test comparing 2x2 binning with the standard 1x1... work in progress, as they say.

Edited by newbie alert
Added info

Not sure how well OSC CMOS cameras bin, TBH. It's not the same as CCD binning, which is done at the hardware level; CMOS binning, I think, is done only in software, so it's not true binning….🤔🤔

I may not have explained that very well, and I'm sure someone will correct me, or explain it better….👍🏼

Edited by Stuart1971

It is true that CMOS binning is done in software (or hardware), but you do still get the S/N benefits. I'm not sure how much benefit there would be from buying a camera with much larger pixels, other than more cost. I use an RC8 with 2x or 3x binning, which is fine. Similarly, I use an FMA180 at around 5"/px, but there is no serious loss of resolution unless pixel peeping.

I think the binned 2600 would be absolutely fine.


@vlaiv has tried to explain it to me several times and in various threads, but I'm not sure I "get it" fully yet. From what I understood on the matter, OSC binning is not the same as mono binning and the data has to be treated accordingly. ASTAP has a binning option that can be used after capture and works for my OSC data, but I won't claim I understand why.

Another option is splitting the calibrated subs into their individual channels instead of debayering with interpolation, which results in 1 R, 2 G and 1 B sub, each at the actual resolution of capture - i.e. an effective pixel size of twice the nominal, since only every fourth pixel contributes to a given channel. Then stack them as mono subs and process accordingly. I tried this as a proof of concept, but since I am shooting hundreds of subs per target I am really not (yet) willing to do this. Maybe once I get a new mount and can take longer subs.
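For anyone curious, a minimal numpy/astropy sketch of the idea - assuming an RGGB Bayer pattern and an already-calibrated (not debayered) sub; the filenames are placeholders:

```python
# Split a CFA frame into its R, G1, G2 and B channels (no interpolation).
import numpy as np
from astropy.io import fits

raw = fits.getdata("calibrated_sub.fits").astype(np.float32)  # placeholder filename

# Each channel comes out at half the sensor's nominal resolution,
# since only one pixel in every 2x2 Bayer cell contributes to it.
channels = {
    "R":  raw[0::2, 0::2],
    "G1": raw[0::2, 1::2],
    "G2": raw[1::2, 0::2],
    "B":  raw[1::2, 1::2],
}

for name, chan in channels.items():
    fits.writeto(f"sub_{name}.fits", chan, overwrite=True)
```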


I just bin when I shoot with my asi533/asi1600 and the C9.25x0.67

And I mean, you don't need to bin when stacking - you can bin later when editing. If I'm using StarTools, I tend to bin, as it really only works well if you can get the noise down. But if I'm editing in APP/Affinity/Topaz I tend not to bother binning till the end. Then I use Gigapixel to bin to 50%.

stu


I've written so many times about this subject that it sort of feels like repeating myself :D

Mono:

Hardware binning is almost the same as software binning - only difference is read noise. Everything else is the same.

Say you have a 3.75µm camera and you wonder whether you should get one that has 7.5µm pixels or bin? That depends on the sensor size of the larger-pixel camera, whether you need a larger FOV, and whether your scope can handle a larger imaging circle. A 2000x2000 px sensor with 3.75µm pixels will be x4 smaller by surface than a 2000x2000 px sensor with 7.5µm pixels.

If you plan on purchasing same size sensor - then just don't. Bin. FOV when binning remains the same - only thing that changes (for software binning) - is amount of read noise and hence needed exposure length. If you bin x2 - you'll have x2 read noise over regular image. If you bin x3 - you'll have x3 read noise. This might seem large - but it is not.

Modern CMOS sensors have ~2e of read noise. x2 is ~4e, x3 is ~6e read noise - still lower than most CCDs regardless of pixel size.

Color - things are more difficult to explain because color sensors don't operate on resolution suggested by their pixel size.

If you for example debayer images using regular debayering (interpolation), stack them and then bin x2 like @powerlord suggests - you'll get coarser sampling rate - but you won't get SNR improvement that you are looking for (by the way - this approach works fine on mono sensors as those don't debayer).

You can bin color sensor in software and do it properly so that it produces expected results - but it is not as straight forward as mono.

In the end - 0.96"/px is still oversampling.

If you want to know the sampling rate that is appropriate for any given image - measure the average FWHM of stars in the image (in arc seconds) and divide that value by 1.6. If you measure 3.2" FWHM - sampling rate should be 2"/px, if you measure 2.56" - then sampling should be 1.6"/px, etc ... (in order to sample at 1"/px - you need 1.6" FWHM stars in your image).
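As a quick back-of-the-envelope illustration of that rule (the FWHM and pixel scale values below are just examples):

```python
# Rough helper for the FWHM/1.6 rule of thumb described above.
def target_sampling(fwhm_arcsec):
    """Coarsest sampling rate ("/px) that still captures all the detail."""
    return fwhm_arcsec / 1.6

def suggested_bin(current_scale_arcsec_px, fwhm_arcsec):
    """Integer bin factor that brings the current scale closest to the target."""
    return max(1, round(target_sampling(fwhm_arcsec) / current_scale_arcsec_px))

print(target_sampling(3.2))       # 2.0 "/px for 3.2" FWHM stars
print(suggested_bin(0.96, 3.2))   # 2  -> bin 2x2 when imaging at 0.96 "/px
```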


Side note @vlaiv: where does the 1.6" figure come from? I tend to follow it when deciding how much to bin my images, but I don't really know "why" in terms of the theory behind it. 

You've probably explained that about 1000 times already before too 😁 (in fact, you may have already explained it specifically to me before! Apologies if so, my memory can be very selective)


can I ask a really dumb question vlaiv - my asi1600 and my asi533 have the same pixel size. one is colour, one is mono.

so..

4656*3520 3.8um on the asi1600

3008x3008 3.8um on the asi533

so, i get that the asi1600 being mono, every pixel is that size and mono output.

on the asi533, being colour, there is a bayer mask, and each pixel is actually 4 subpixels right? each a 1/4 of the area. And they must be able to be read out separately as 4 values?

So, if by some magic you took the bayer mask off the asi533, would you have a 4 x 9mp = 36mp sensor ?

And my asi1600s effective pixel size if I'm shooting, say Ha vs the asi533 with an L-extreme is gonna be 4x area on asi1600 as on the asi533, 3 out of the 4 subpixels are picking up no Ha at all ?

I'm kinda hoping I've not got that all wrong, but as they say - there's no such thing as a stupid question*

stu

*only a stupid person asking it

 


4 minutes ago, The Lazy Astronomer said:

Side note @vlaiv: where does the 1.6" figure come from? I tend to follow it when deciding how much to bin my images, but I don't really know "why" in terms of the theory behind it. 

You've probably explained that about 1000 times already before too 😁 (in fact, you may have already explained it specifically to me before! Apologies if so, my memory can be very selective)

In fact - that 1.6 figure is approximation - not definitive value, but it works exceptionally well and is backed by theory.

It comes from several approximations - first is that stars can be approximated by Gaussian distribution (each time we talk about FWHM - we talk of FWHM of Gaussian approximation to star profile). Next ingredient is Fourier transform of that PSF to get frequency response. FT of Gaussian is again Gaussian. We take that "frequency Gaussian" and see for what frequency its value falls below 0.1 - or 10%. Above that frequency all higher frequencies are attenuated to less than 10% of their original value by low pass filter that is PSF (essentially gaussian blur that has profile of single star PSF in the image).

I chose 10% because even with sharpening it is very hard to restore frequencies that are attenuated to less than 10% of their original value - you need to multiply them with reciprocal - or in this case larger value than 10. This brings noise - and you can do that only if you have very high SNR in order not to make noise very noticeable - not something we have in abundance in AP.

You can take different threshold - like 5% - and that will change that 1.6 number slightly - but in reality - even those frequencies that are attenuated to 10% can't be restored fully and thus can be considered cut off point.
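Putting numbers to that - a compact sketch of how the 1.6 drops out of those assumptions (Gaussian PSF, 10% frequency cut-off, Nyquist sampling):

```latex
% Gaussian PSF and its (also Gaussian) Fourier transform:
\[
  \mathrm{PSF}(x) = e^{-x^2 / 2\sigma^2}, \qquad
  \mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.355\,\sigma, \qquad
  \widehat{\mathrm{PSF}}(\nu) \propto e^{-2\pi^2 \sigma^2 \nu^2}
\]
% Frequency where the response drops to 10%:
\[
  e^{-2\pi^2 \sigma^2 \nu_c^2} = 0.1
  \;\Longrightarrow\;
  \nu_c = \frac{\sqrt{\tfrac{1}{2}\ln 10}}{\pi\sigma} \approx \frac{0.342}{\sigma}
\]
% Nyquist: the pixel size is half the period of that cut-off frequency:
\[
  p = \frac{1}{2\nu_c} \approx 1.46\,\sigma
  \quad\Longrightarrow\quad
  \frac{\mathrm{FWHM}}{p} \approx \frac{2.355}{1.46} \approx 1.6
\]
```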

7 minutes ago, powerlord said:

4656*3520 3.8um on the asi1600

3008x3008 3.8um on the asi533

so, i get that the asi1600 being mono, every pixel is that size and mono output.

on the asi533, being colour, there is a bayer mask, and each pixel is actually 4 subpixels right? each a 1/4 of the area. And they must be able to be read out separately as 4 values?

So, if by some magic you took the bayer mask off the asi533, would you have a 4 x 9mp = 36mp sensor ?

Your logic is sound and that is how it goes - but there is slight error in what you've written.

ASI533 is indeed a 9mp sensor and the bayer matrix does consist of 2x2 pixels - but not sub pixels. This means that the 9mp number already contains what you call "sub pixels", and by your logic the sensor should instead be seen as 2.25mp, so that when you count in all the sub pixels you get 4 x 2.25mp = 9mp.

Out of those 3008 x 3008 pixels you really have only 1504 x 1504 pixels that are red, 1504 x 1504 that are blue and two times 1504 x 1504 that are green.

11 minutes ago, powerlord said:

And my asi1600s effective pixel size if I'm shooting, say Ha vs the asi533 with an L-extreme is gonna be 4x area on asi1600 as on the asi533, 3 out of the 4 subpixels are picking up no Ha at all ?

When you shoot Ha with ASI533 you are effectively only using red pixels (not really true because QE is never 0 for all colors, so both green and blue catch some light - but let's put that aside for now) - you are using only 2.25mp of your sensor.

It is as if you had a 7.5µm pixel sensor with 1/4 of the QE that is declared for the red pixel.

In fact - that is one way of looking at bayer matrix element.

You can look at it as pixel that has twice the side (hence x4 area) of specified pixel, captures all three channels at the same time, but has 1/4 of declared QE in red and blue and 1/2 of declared QE in green.

With the L-extreme it is a bit of a different story, as you put all pixels to use at the same time.

You'll capture OIII with both green and blue pixels (with a bit different QE) and Ha with red pixels at the same time. This offsets a bit the fact that only part of the pixels are working for any given filter (because you "image in parallel").


38 minutes ago, vlaiv said:

I've written so many times about this subject that it sort of feels like repeating myself :D

Every time you do, someone learns valuable info that seems to be difficult to come by. 

Perhaps there should be a BIN sticky thread somewhere? Pixels get smaller and smaller so binning is just a necessity for most folks. OSC is also alive and well and like you said, not the same as mono.


30 minutes ago, vlaiv said:

Out of those 3008 x 3008 pixels you really have only 1504 x 1504 pixels that are red, 1504 x 1504 that are blue and two times 1504 x 1504 that are green.

 

huh. that I did not know. so it's a swizz basically. you've got sort of 9mp luminance, but only 1/4 of the res for R and B, 1/2 for green.

so in regular photography too.. I just always thought a 10mp sensor was 10m pixels, each with 4 subpixels. sneaky.

which does somewhat explain why the results I get with my asi1600 sometimes seem massively better than my asi533 - 16mp real vs sort of 2.25mp.

understanding that now makes things clearer, thanks.


14 hours ago, vlaiv said:

Color - things are more difficult to explain because color sensors don't operate on resolution suggested by their pixel size.

If you for example debayer images using regular debayering (interpolation), stack them and then bin x2 like @powerlord suggests - you'll get coarser sampling rate - but you won't get SNR improvement that you are looking for (by the way - this approach works fine on mono sensors as those don't debayer).

You can bin color sensor in software and do it properly so that it produces expected results - but it is not as straight forward as mono.

In the end - 0.96"/px is still oversampling.

Thanks for this @vlaiv and sorry you have had to repeat yourself.

So first thing is, my camera is colour, so it sounds like what you are saying is that there is not a lot of point in binning as it won't improve my images in any meaningful way.

BTW - According to this, 0.96 is not oversampled.

[attached screenshot of the sampling tool's result]


14 hours ago, vlaiv said:

If you for example debayer images using regular debayering (interpolation), stack them and then bin x2 like @powerlord suggests - you'll get coarser sampling rate - but you won't get SNR improvement that you are looking for (by the way - this approach works fine on mono sensors as those don't debayer).
 

Just a note that Vlaiv is absolutely correct here, but at least in my experience, the improvement Gigapixel can make with its AI/sharpening/etc. when binning to 50% is far more pleasing to my eye than the very basic PI/StarTools/etc. binning algorithms, even though you don't get the SNR benefit.

But I understand that's definitely a personal thing - if your aim is maximum scientific accuracy then it's not the way to go. For me, it's all about something that looks nice to hang on the wall 😁


47 minutes ago, StuartT said:

So first thing is, my camera is colour, so it sounds like what you are saying is that there is not a lot of point in binning as it won't improve my images in any meaningful way.

You can of course bin color data and depending on how you do it - it will provide some or all "benefits" of binning.

There are two benefits of binning - and we often bin to achieve them both:

1. bring image to more suitable image resolution (this can also be done with resizing / resampling of the image).

For example - your image is 0.67"/px and when you zoom in 100% - you find that your stars are bloated, image looks blurry and detail is missing and you want your image to look good even when zoomed in 100% - for those that want to see even tiniest galaxies in the background.

2. Improve SNR. In this regard - binning works as stacking - if you average some samples - you improve SNR. With stacking - we take same images and average them whole, while with binning we take "same pixels" and average those locally (if image is over sampled - adjacent pixels can be thought as having almost the same signal).

When you have OSC data - you can have either of above - or both, but most people don't manage to get both - I'll briefly explain why.

Data from OSC sensor is "sparse" - meaning not every pixel is filled in with all colors. Some have only red, some have only green and some have only blue - rest is missing. You now have two options to resolve this:

1. Fill in the blanks (this is usually done)

2. Squeeze the data (this is super pixel mode of debayering)

With option 1. - you simply make up missing data. There are algorithms developed to do a good or better job of that, but the point is - you don't have new original data in those places - you have data that is made up based on the data you have. When you try to bin such data - it will not provide you with SNR improvement because you are not stacking original data. It is like trying to stack 10 copies of the same image and hoping that it will somehow improve SNR - it won't, as you are not stacking original data - you are stacking data that you derived from data you already have - and that does not improve SNR.

With option 2 - you will start with all data there - but your resolution / sampling rate will already be altered. It will in fact - be "normal" for that sensor - you will only see it as halved - because you are thinking of that sensor in terms of pixel size it has on the label - which is not correct. OSC sensors simply have lower resolution than equivalent mono sensors (there are special cases where they can have the same resolution - and that is for example bayer drizzle, but that is beyond the scope of this reply :D ).

There is option where you can have your cake and eat it too - where you prepare your data so it's like it came from OSC sensor with bigger pixels. I'm not aware of any software being capable of doing that - because that would mean x4 lower sampling rate (two times because it is OSC sensor and additional two times because of binning). This is actually feasible on some sensors with very small pixels - like ASI183mc for example - that one has 2.4µm pixel size and in some cases that is way too small.

In the end - If you are over sampled, then I would recommend using one of two approaches:

- one that @powerlord recommended few posts ago - just stack your image and bin linear stack. This will not have the same effect as binning mono data, but if you dither (and you should) - some SNR improvement will still happen (effect is similar to bayer drizzle - but not completely the same).

- if you are hugely over sampled (by a factor of x4) or you are doing a mosaic and you can afford to bin further (doing wider field instead of going for max resolution) - then use super pixel mode debayering and then bin again after that for a true x2 SNR improvement (see the sketch below)
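A minimal sketch of that second approach (super pixel debayer, then a further 2x2 average bin), assuming an RGGB pattern - the filename is a placeholder and this is not any particular program's implementation:

```python
# Super-pixel "debayer" of a CFA frame, then a further 2x2 average bin.
import numpy as np
from astropy.io import fits

raw = fits.getdata("calibrated_stackable.fits").astype(np.float32)
h, w = (raw.shape[0] // 2) * 2, (raw.shape[1] // 2) * 2
raw = raw[:h, :w]                                 # trim to even dimensions

# Super pixel mode: each 2x2 Bayer cell becomes one RGB pixel
# (half the nominal resolution, no interpolation, no made-up data).
r = raw[0::2, 0::2]
g = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])
b = raw[1::2, 1::2]

def bin2(img):
    """2x2 average binning of a 2D image (trims odd rows/columns)."""
    hh, ww = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:hh, :ww]
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2] +
                   img[1::2, 0::2] + img[1::2, 1::2])

# Bin again for the extra (true) x2 SNR improvement - x4 coarser than the
# sensor grid overall.
rgb_binned = np.stack([bin2(c) for c in (r, g, b)])
```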

1 hour ago, StuartT said:

BTW - According to this, 0.96 is not oversampled.

If we understand that tool in actual / math terms - then it is simply wrong.

We should look at it like this: Given your camera and telescope - image will look "ok" in that range of sampling rates.

I can actually go through the math and science of why that is so - or you can do a simple experiment. Post any image that is at 0.96"/px and I'll show you that it is over sampled :D. In fact - you can do it yourself. Take your image at 0.96"/px, estimate the optimum sampling rate by the criterion I gave you - FWHM/1.6 - then resize your image to a smaller size to match that optimum sampling rate. Take the small image - and resample it back to the larger size.

If you can't spot any difference between original and resized - that just means that original contained no detail that smaller image could not also hold - so, smaller image was sampled enough to hold all detail in the image.

(I probably over complicated that last sentence - but if you have properly sampled image and you reduce its size and enlarge it back - you'll be able to see the difference. If you can't see the difference - then no finer detail is present in larger image to start with - it is over sampled).
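If you want to run that experiment on your own data, one rough way to do it, assuming Pillow and numpy - the filename and the 66% factor are just the example values from above:

```python
# Downscale / upscale test for oversampling, as described above.
from PIL import Image
import numpy as np

orig = Image.open("my_image_at_0.96_arcsec_px.png")    # placeholder filename
w, h = orig.size

small = orig.resize((int(w * 0.66), int(h * 0.66)), Image.LANCZOS)  # ~1.5"/px
back = small.resize((w, h), Image.LANCZOS)              # back to original size

# Blink orig against back, or look at their difference: if only a little
# noise shows up, the original held no detail the smaller version lost.
diff = np.abs(np.asarray(orig, np.float32) - np.asarray(back, np.float32))
print("max pixel difference:", diff.max())
```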

 


Thanks again @vlaiv I don't know how you know all this stuff, but it's truly impressive. Unfortunately, I don't really understand much - if anything - of what you actually wrote. I am clearly out of my intellectual depth here (and I am a scientist!! 😬 )

I don't really know what you mean about finding out if my images are oversampled. I don't know how I would measure FWHM (which I think is the size of the stars, right?). But all my images are shot with the same rig (so at 0.96"/px) so perhaps you can tell me if they are oversampled? For example, 

https://stargazerslounge.com/topic/387610-m33-and-m42-last-night/
https://stargazerslounge.com/topic/386331-final-image-of-the-eastern-veil-nebula-for-this-year/
https://stargazerslounge.com/topic/385080-eastern-veil-with-l-extreme/

 

Edited by StuartT

57 minutes ago, StuartT said:

I don't really know what you mean about finding out if my images are oversampled. I don't know how I would measure FWHM (which I think is the size of the stars, right?). But all my images are shot with the same rig (so at 0.96"/px) so perhaps you can tell me if they are oversampled? For example, 

FWHM is measured in software. I'm sure PI has that functionality and you can also use AstroImageJ - free software for astronomical use. This is done on linear images after stacking (or calibrated subs if you wish to measure sub).

In any case - I'll assume that optimum working resolution is ~1.5"/px rather than 0.96"/px - and we can check if that is true. I'll use parts of M33 and M42 images.

First M42 image (actually part of M43) - you need to watch this on computer screen - don't use phone or small device - you want to see all the details in these images:

[attached image: m43_original.jpg]

[attached image: m43_rescaled.jpg]

Ok, so can you tell the difference between these two images - in nebulosity or stars? There is small difference in noise - one of them has a bit smoother noise profile (sampling down does that), but stars and signal - do you see the difference?

Second image was created by taking first image, then scaling it down to 66% of original size:

[attached image: m43_smaller.jpg]

and then enlarging it back up to original size. If you look at this smaller image - every detail that you can see in the larger image is here as well - every star, every feature - nothing is missing, yet things are smoother and the noise is smaller.

This just shows that you could have captured this image using 1.5"/px - without losing any sort of detail in the image. You did not need to go for 0.96"/px resolution.

In fact - it is quite possible that on a given night, actual resolution that you could have used is 2"/px without loss of any detail - that corresponds to 3.2" FWHM stars - and that is quite "common" star size for ~2" FWHM seeing.

Let's do that with second image as well:

First original:

[attached image: m33_original.jpg]

then rescaled (can you tell?):

[attached image: m33_rescaled.jpg]

and of course - smaller image:

[attached image: m33_smaller.jpg]

One way to visually tell if you are at good sampling rate is to look at your image at 100% zoom - and look at faintest stars - they really need to be point like - as soon as you see them as little circles - you are oversampled.

For example:

[attached image] vs [attached image]

Makes sense?

Why is this important at all? Well - because if you over sample - you needlessly lose SNR. In the same imaging time you could have a smoother, deeper image if the light is more concentrated rather than spread over more pixels than it needs to be.


6 minutes ago, powerlord said:

And here's the M33 one shrunk to 50% with gigapixel, then standard resize up just for comparison.

50% could be too much as I'm starting to see the difference - and also, quality depends on the choice of resampling algorithm. The above was done in IrfanView with Lanczos resampling.


Sorry, yeh my point here (if I had one) is that there will be differences with Topaz - it's using AI stuff to remove noise, sharpen things, even make things up, etc. depending on how far you take the options. But it uses that original data to 'make it up'. I suppose I could apply exactly the same settings to your 50% reduced and enlarged one and see if the result is the same? But dunno what I'd be trying to prove really.

I suppose only, Topaz is black magic - and a lot cheaper than buying another OTA or camera 😉


Actually @vlaiv I can't see any difference in the two versions - either in terms of signal or noise! But clearly you can, so I shall take your word for it. 

So I guess all I need to do is reduce the size of my images after I have finished processing them. I can do this in Photoshop - so just resize down to 66%. Thanks.

One last thing... you said

Quote

You did not need to go for 0.96"/px resolution

 the word 'need' suggests I have a choice in this. But the 0.96 is a function of my camera and my focal length, so I don't really have a choice. Unless I can find a camera with larger pixels (which you advised against). So I guess just resizing down to 66% is the solution

 


22 minutes ago, StuartT said:

So I guess all I need to do is reduce the size of my images after I have finished processing them. I can do this in Photoshop - so just resize down to 66%. Thanks.

Well - no :D

I mean - you can, but it is not the same thing as binning or resizing prior to processing - result is not the same.

Binning is a process that produces a known amount of SNR improvement. Simple resampling will also produce SNR improvement - but it depends on the data and the algorithm used for resampling. It also introduces pixel to pixel correlation - or a certain amount of blur. Resampling algorithms can be broadly split into two categories - ones that improve SNR more - but also blur more, and those that don't blur as much - but also don't improve SNR as much.

For that reason - it is best to bin if you can.

Reducing image after processing - will just make it look nicer / sharper when viewed at 100% zoom - but it will not improve SNR. It will not make it less noisy when stretching your data.
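To see that "known amount" concretely, a tiny synthetic check (pure Gaussian noise, nothing to do with any particular image): 2x2 average binning halves the noise standard deviation - the x2 SNR gain for the same signal.

```python
# 2x2 average binning of pure Gaussian noise cuts its std in half.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(1024, 1024))

binned = 0.25 * (noise[0::2, 0::2] + noise[0::2, 1::2] +
                 noise[1::2, 0::2] + noise[1::2, 1::2])

print(noise.std(), binned.std())   # ~1.0 vs ~0.5
```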

22 minutes ago, StuartT said:

the word 'need' suggests I have a choice in this. But the 0.96 is a function of my camera and my focal length, so I don't really have a choice. Unless I can find a camera with larger pixels (which you advised against). So I guess just resizing down to 66% is the solution

One thing you can do is - to change camera. Other thing that you can do is to bin, so you have a bit of choice in there.

Instead of working at 0.96 - bin x2 and work with 1.92"/px. You won't lose much detail at all - images will look the same as far as FOV is concerned - but you might find it easier to process with less noise.

In fact - you have nothing to lose - you can take the linear data from those images you linked to - and bin those x2 before you process them again. If the weather is poor - there is something you can do when not imaging - reprocess your images with bin x2 at the linear stage.

If you are using PI - just take your stacked data and before you start any stretching / processing - do linear resample x2 with average (that is binning) and then proceed to process.
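Outside PI, the same "linear resample x2 with average" can be done on a stacked FITS with a few lines of numpy/astropy - a rough sketch, with placeholder filenames and minimal header handling:

```python
# 2x2 average binning of a linear stack, mono (H, W) or channel-first (3, H, W).
import numpy as np
from astropy.io import fits

data, hdr = fits.getdata("stacked_linear.fits", header=True)
data = data.astype(np.float32)

h, w = (data.shape[-2] // 2) * 2, (data.shape[-1] // 2) * 2
d = data[..., :h, :w]                               # trim to even dimensions
binned = 0.25 * (d[..., 0::2, 0::2] + d[..., 0::2, 1::2] +
                 d[..., 1::2, 0::2] + d[..., 1::2, 1::2])

fits.writeto("stacked_linear_bin2.fits", binned, hdr, overwrite=True)
```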

22 minutes ago, StuartT said:

Actually @vlaiv I can't see any difference in the two versions - either in terms of signal or noise! But clearly you can, so I shall take your word for it. 

Yes, it is very hard to tell - but you can see some difference in the noise - if you blink the two images or if you make difference of them (subtract one from another):

[attached image: stretched difference image]

This is the M33 ones subtracted from one another. The darker part is where the signal is stronger and the SNR is better (less noise) - so there is not much difference between the two - in the fainter parts it's more noisy. This is a very stretched difference image.

Here is what unstretched difference image looks like:

[attached image: unstretched difference image]

There is a bit of grain that can be seen as difference - and that is subtle difference in noise that I was talking about - but no difference in signal - because smaller image contains all the signal in larger image

 

Edited by vlaiv
maybe autocorrect - or maybe I have no idea of what I'm typing :D

great! Thanks so much. I shall bin 2x2 from now on then. I can set that in NINA easy enough. I think it will also reduce my file size yes?

Quote

In fact - you have nothing to lose - you can take the linear data from those images you linked to - and bin those x2 before you process them again. If the weather is poor - there is something you can do when not imaging - reprocess your images with bin x2 at the linear stage.

Not really sure how I would bin after the images are taken. I thought binning was a camera setting? But maybe there is image processing software that can do this? I use SiriL for stacking and processing, with a little Photoshop at the end to remove gradients. 


2 minutes ago, StuartT said:

great! Thanks so much. I shall bin 2x2 from now on then. I can set that in NINA easy enough. I think it will also reduce my file size yes?

With CCDs - binning is hardware thing (although you can bin in software later as well), but with CMOS - it's always software thing - whether it is done in drivers in time of the capture or later - in processing.

I advocate that you do it in processing - later, not at the time of capture - unless you have compelling reason to do so (like file size savings - but storage is cheap these days), because this lets you control the process. You can stack with and without binning and compare results and you can choose bin method that suits you best.

I'm not sure that NINA will do good job of binning OSC data, and I would advise you to bin your stacked image while still linear.

Siril does not have bin functionality as far as I can tell, but you can always save your stack in Siril, open it in ImageJ - do the binning and then load it back in Siril for processing it further (color calibration and histogram stretch and so on). ImageJ is open source software that can easily bin your data (and do some other fun stuff).

I can now see why people don't do this on regular basis - it's just too much work - and new stuff to learn. I would expect Siril to have bin - and apparently it did at one point, but it was removed when some library for debayering was changed and they now plan to reintroduce it.

