
Bubble Nebula rehashed L(Ha)SHO, Foraxx palette


windjammer


By the way @CCD Imager, if you want to test whether your image is oversampled, there is another handy method - just do a frequency analysis of the image. Here is an example:

[attached image: FFT of the image]

You see that concentration of the signal in the center and the "empty", noisy space around it - that shows oversampling.
[attached image: the same FFT with the central signal region circled]

That circle should extend right to the edge for a properly sampled image. Here is what happens when I resample that image and do the same:

[attached image: FFTs of the downsampled versions]

The left one is the FFT of the image downsampled x2 and the right one is the FFT of the image downsampled x3.

For reference, here is where the signal ends in the frequency domain:

[attached image: FFT with the signal cutoff marked]

So a x2 reduction (or 50%) is still not quite properly sampled, but x3 is undersampled, as there is clipping in the frequency domain.
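For anyone who wants to try this check on their own data, here is a minimal NumPy sketch of the idea (the file name is hypothetical, and the frame is assumed to be a linear, background-subtracted mono image):

```python
# Sketch of the FFT over-sampling check described above: compute the centred
# magnitude spectrum of a frame and look at how far the signal extends before
# it drops into the flat noise floor.
import numpy as np
from astropy.io import fits  # assumes the sub is available as a FITS file

img = fits.getdata("sub.fits").astype(float)   # hypothetical file name
img -= np.median(img)                          # remove the background pedestal

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

# Radial profile of the magnitude spectrum: if it flattens out well before the
# edge (the Nyquist frequency), the image is over sampled.
cy, cx = np.array(spectrum.shape) // 2
y, x = np.indices(spectrum.shape)
r = np.hypot(y - cy, x - cx).astype(int)
profile = np.bincount(r.ravel(), spectrum.ravel()) / np.maximum(np.bincount(r.ravel()), 1)

for radius in range(0, r.max(), max(1, r.max() // 10)):
    print(radius, profile[radius])
```

If the profile reaches the flat noise floor at, say, half of the maximum radius, the data could be binned x2 without losing real detail.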

 


Vlaiv, it is impossible to interpret a JPEG image, and one that has been processed at that.

I did a quick clone of the TIFF image, resampled it, then increased the magnification to match. My result is below.

But really I need to go back to the original data and make the comparison with resampling. I must add that the original will be much more amenable to deconvolution algorithms like BlurXTerminator.

Resampled.jpg


1 hour ago, vlaiv said:

Most of what you need to know about Nyquist sampling theorem can be found here:

https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

It's not that hard to read up on a proven mathematical theorem, when it is applicable, and how to apply it.

I never stated "my theory", so please refrain from such comments.

If you wish to know what I base my recommendation to sample at FWHM/1.6 on - I'm happy to explain the details; it is fairly simple to follow for anyone "knowledgeable" enough. The explanation goes like this:

- A Gaussian profile does not qualify for the Nyquist sampling theorem as it is not a band-limited signal.

- A star profile is a band-limited signal because it was captured with an aperture of limited size and is not a pure Gaussian. It is in fact a convolution of the Airy pattern with at least two Gaussian forms (seeing and mount performance). This is for a perfect aperture and perfectly random tracking error (seeing is believed to be random enough over the course of a long exposure to qualify for the central limit theorem). Optical aberrations and less than perfect tracking will only lower the resolution of the image, so we are aiming for the best case scenario. The Airy pattern is a band-limited signal and therefore its convolution with other signals (multiplication in the frequency domain) will be a band-limited signal.

- As such, we must select the sampling limit so as to most closely restore the original signal (https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem#Sampling_below_the_Nyquist_rate_under_additional_restrictions).

- In an ideal world, regardless of other factors, we would need to sample at the rate determined by the aperture of the telescope, as that determines the true cut-off frequency. In the real world, however, seeing and mount performance dominate the Airy pattern enough that star profiles are well approximated by a Gaussian profile, and we have the presence of noise.

- In such conditions, a sensible cut-off point is one where frequencies are attenuated by more than 90% in the frequency spectrum (sharpening past that point will simply bring too much noise forward).

- From the above it is fairly easy to find the proper sampling frequency: we take a Gaussian of a certain FWHM, do a Fourier transform of it (which is again a Gaussian), find the point at which it falls to 10% of its peak (i.e. is attenuated by 90%), and set our sampling at that frequency.

 

So, if I understand you correctly, the 1.6xFWHM recommendation will capture the signal from frequency components with greater than 10% of the peak frequency amplitude.  And we are talking about the spatial frequencies in the FT of the point response fn. 

And by 1.6xFWHM you mean: if my FWHM is 3.4 asec my sensor resolution should be 3.4/1.6 ~ 2asec per pixel.

An explanation of your 1.6 factor would be very helpful.

Simon


Just now, windjammer said:

So, if I understand you correctly, the 1.6xFWHM recommendation will capture the signal from frequency components with greater than 10% of the peak frequency amplitude.

Not quite. Here is an explanation of how it works.

The optical system and seeing attenuate high frequencies (and the aperture at some point cuts them off completely), so if we examine our image in the frequency spectrum we will have something like this:

[attached sketch: image spectrum, the attenuation (filter) function, and their product]

The first graph is our representation of the image in the frequency domain. This is the image unaltered by the optical system and atmosphere (think of a Hubble-class image outside our atmosphere). The second graph is the attenuation function, or filter function, in the frequency domain. It works much like an equalizer for sound - it shows how much, on a scale of 0-1, a given frequency is attenuated. The third graph is the result of modulating the original image with the optical system and atmosphere.

In the spatial domain this operation is called convolution (application of blur), which is equal to multiplication in the frequency domain - the third graph is (or at least should be, disregard my drawing skills) the first two functions multiplied.

My proposal - which translates to using a sampling rate of FWHM/1.6 (so your understanding of that part is correct - if the FWHM is 3.2" you should sample at 2"/px) - is to take the frequency cut-off at the place where the optics + atmosphere response drops below 10%:

[attached sketch: the same filter curve with the 10% cutoff marked]

Now, pure Nyquist says that you need to choose the place where this function drops to 0. That is the proper place to be in order to record all there is, but I've modified this for several reasons:

1. We are assuming a Gaussian star profile when we talk about FWHM - and a Gaussian never hits 0 in the frequency domain, so it is illusory to try to take the value where it hits 0, as that never happens.

2. We must consider the fact that our images are polluted by noise, and this noise pollution is not affected by the blur - it happens later. There is another interesting fact about this noise - it has equal energy density at all scales.

If you do an FFT of pure noise, you'll get something like this:

[attached image: FFT of pure noise]

It will be random in intensity, but the average intensity will be the same regardless of frequency.
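As a quick illustration of that flat noise spectrum, here is a small NumPy sketch (not from the thread, just a demonstration under the stated assumption of pure white noise):

```python
# The magnitude spectrum of pure white noise is flat on average, i.e. every
# spatial frequency carries roughly the same amount of noise.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(512, 512))
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(noise)))

cy, cx = 256, 256
y, x = np.indices(spectrum.shape)
r = np.hypot(y - cy, x - cx).astype(int)
profile = np.bincount(r.ravel(), spectrum.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
print(profile[10], profile[100], profile[200])   # roughly equal average magnitudes
```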

Now - remember that SNR we so love? In the frequency domain we have a curious thing: the higher the frequency, the more the signal is attenuated, but the noise remains the same - so the SNR per frequency component goes down. In fact, the SNR above that cut-off point is at least x10 lower than where the signal is close to 100% strength - and you need at least x100 the exposure time to be able to sharpen above that 10% level.

By the way, sharpening is the inverse operation to that multiplication - we divide by the same frequency response of the blur (which is why sharpening is tricky - most often we don't know the exact shape of the blur). There is a second part to that: when you divide by something smaller than 1, you actually amplify the value - so we amplify both signal and noise, and since the noise is not falling off but staying constant, the more we sharpen, the more noise will appear.
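Here is a hedged, one-dimensional sketch of that "division in the frequency domain" and how it amplifies noise; the kernel width and noise level are made up purely for illustration:

```python
# Blur a signal, add a little noise, then apply a naive inverse filter
# (division by the blur's frequency response) and watch the noise explode
# at the strongly attenuated frequencies.
import numpy as np

rng = np.random.default_rng(1)
n = 256
signal = np.zeros(n)
signal[n // 2] = 1.0                                   # a single "star"

x = np.arange(n) - n // 2
kernel = np.exp(-x**2 / (2 * 3.0**2))                  # Gaussian blur kernel
kernel /= kernel.sum()
H = np.fft.fft(np.fft.ifftshift(kernel))               # its frequency response

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H)) # convolution
noisy = blurred + rng.normal(0.0, 1e-3, n)             # add faint noise

restored = np.real(np.fft.ifft(np.fft.fft(noisy) / H)) # naive deconvolution

print(np.std(noisy - blurred))      # small: the noise we added
print(np.std(restored - signal))    # huge: noise amplified where H is tiny
```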

So that 10% cutoff was chosen as the place where noise will simply take over any sharpening attempts, and you would need a crazy number of subs (on the order of a few thousand) just to approach a detectable SNR after sharpening (SNR ≈ 5).

Hope this makes sense.


6 minutes ago, windjammer said:

..and BTW, that is a v nice looking bit of FFT software. Is that readily available ?  Also the line profile generator ?

Simon

ImageJ is packed with all sorts of tools that you'll find useful for this. It is free / open source software aimed at scientific microscopy image processing - but it is loaded with plugins (which you can choose to install) and offers many tools that you can use in general image analysis.


25 minutes ago, vlaiv said:

Not quite. Here is an explanation of how it works. [...] So that 10% cutoff was chosen as the place where noise will simply take over any sharpening attempts...

>>Hope this makes sense.

Yes, actually very clear.  And the 1.6 - is that just the point at which the 'generalised' gaussian reaches our 10% cutoff point (I understand the FT of a gaussian is another gaussian)?  And the Nyquist x2 is higher because his 'simple' case does not include noise considerations?

Simon


30 minutes ago, vlaiv said:

ImageJ is packed with all sorts of tools that you'll find useful for this. It is free / open source software aimed at scientific microscopy image processing - but it is loaded with plugins (which you can choose to install) and offers many tools that you can use in general image analysis.

I will have a look


6 minutes ago, windjammer said:

>>Hope this makes sense.

Yes, actually very clear.  And the 1.6 - is that just the point at which the 'generalised' gaussian reaches our 10% cutoff point (I understand the FT of a gaussian is another gaussian)?  And the Nyquist x2 is higher because his 'simple' case does not include noise considerations?

Simon

Yes, the 1.6 factor is derived by taking a Gaussian with a certain FWHM, doing an FFT of that Gaussian, finding where it hits 10% in the frequency domain, taking that frequency as the cutoff, getting it back into the spatial domain, and comparing it to the original FWHM value - and we get a factor of 1.6.

The Nyquist sampling theorem always states to sample at twice the maximum frequency component we want to capture; that is included in the above calculation when we return from the frequency domain to the spatial domain. The only difference between "pure" Nyquist sampling and the above case is the choice of cut-off frequency. In the "pure" case we choose the cut-off at the place where the signal literally cuts off - frequency components hit zero and remain zero - which is why we call it the highest frequency component, as all others have no value (or rather have a value of 0 and don't contribute to the sum in the FT).

In this case we choose the cut-off frequency not on that criterion but on the criterion that above this threshold any signal we might capture will simply be eaten up by noise, and there is no point in capturing it. The whole calculation is sketched below.
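Here is a short numeric check of those steps, assuming a purely Gaussian star profile (a sketch of the same calculation, not a definitive derivation):

```python
# Recover the 1.6 factor numerically from the steps described above.
import numpy as np

fwhm = 3.2                                      # arcsec, example value
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))     # FWHM = 2.355 * sigma

# The Fourier transform of a Gaussian is a Gaussian with sigma_f = 1/(2*pi*sigma).
sigma_f = 1 / (2 * np.pi * sigma)

# Frequency at which the attenuation falls to 10% of its peak value.
f_cut = sigma_f * np.sqrt(2 * np.log(10))

# Nyquist: sample at twice that frequency, i.e. pixel size = 1 / (2 * f_cut).
pixel = 1 / (2 * f_cut)

print(pixel)            # ~2.0 arcsec/px for a 3.2" FWHM
print(fwhm / pixel)     # ~1.6, the factor discussed above
```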

In theory you could sample all the way up to the planetary critical sampling frequency, which depends only on aperture - and in lucky imaging we do just that - but there we can do it because:

a) the planetary signal is much, much stronger

b) we tend to minimize the impact of atmospheric blur and remove the impact of mount-induced blur, so our Gaussian is much "sharper" in the spatial domain, which makes it much broader in the frequency domain - it attenuates much more slowly

c) we stack thousands of subs to improve SNR

This gives us a chance to sharpen all the way - and even then we often hit the noise wall in how much we can sharpen things up.

In long exposure imaging we have the opposite:

- the signal is much weaker

- we collect the bulk of the seeing and mount performance influence in our long exposures

- we stack only hundreds of subs to improve SNR

This is why we need to "cut our losses" at some point and accept that we simply can't sharpen up the data past a certain point, as we don't have good enough SNR.


Just now, CCD Imager said:

Vlaiv

As a follow-up on downsizing the image 2x, using the original unprocessed data, the FWHM increased from 1.4 arc secs to 1.6 arc secs. So evidence I wasn't over-sampled with 3x sampling.

Adrian

What method of downsampling did you use? Resampling can and will introduce additional blur if a simple resampling method is used (namely bilinear or bicubic) - in fact, there is a thread over here where I discuss resampling algorithms and their frequency response curves.

Here is a comparison of attenuation curves for different resampling algorithms:

[attached image: attenuation curves for different resampling algorithms]

Bilinear, bicubic, cubic spline, quintic spline

If you want to preserve as much detail as possible with interpolation, use a sophisticated algorithm like Lanczos, Mitchell-Netravali / Catmull-Rom, or a high-degree B-spline - but in general I think Lanczos is the best for astronomical images.
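For illustration, a minimal Pillow sketch of downsampling the same frame with different filters so their softness can be compared directly (the file name is hypothetical; PixInsight, IrfanView and similar tools expose the same choices):

```python
# Downsample one frame with three different interpolation filters and save the
# results for side-by-side comparison.
from PIL import Image

img = Image.open("stack.tif")                       # hypothetical file name
half = (img.width // 2, img.height // 2)

# Image.Resampling requires Pillow >= 9.1; older versions use Image.LANCZOS etc.
for name, method in [("bilinear", Image.Resampling.BILINEAR),
                     ("bicubic", Image.Resampling.BICUBIC),
                     ("lanczos", Image.Resampling.LANCZOS)]:
    img.resize(half, resample=method).save(f"half_{name}.tif")
```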


19 hours ago, CCD Imager said:

Here is the single sub image taken through a blue filter. Stars in the center measure around 1.1 arc secs and periphery around 1.2 arc secs. BUT, a line plot thru stars gives insufficient points to adequately measure, i.e. under-sampled

M51b sub.jpg

Well, this is one definition of under sampling but it isn't a definition that's going to get me worked up into a tizzy. This image looks, to me, very much like images of M51 which I've shot at 0.6"PP and 0.9"PP. I remember very clearly Vlaiv doing the same test on my data that he performed on yours, and I was convinced.

You have captured what most competent imagers will capture at non-Atacama/mountaintop observing sites. You are not, in any meaningful sense, undersampled. That's to say, if you reduced your arcsecs per pixel you would gain no new detail that anyone would be able to see. That's my definition of over/under sampling. Definitions which don't involve what you can see are, for me, so much waffle. While all this chatter is going on, what really matters in amateur astrophotography is being missed, and that is going deeper. What I've discovered since moving to super-fast systems is that going deeper is a darned sight more interesting than trying to go for more resolution. It's not just that you go deeper: you also gain more control over dynamic range in processing.

Olly


3 minutes ago, ollypenrice said:

While all this chatter is going on, what really matters in amateur astrophotography is being missed, and that is going deeper.

Over sampling is one of the main adversaries of going deeper - people just waste SNR in the hope of capturing detail that is simply not there.


4 hours ago, ollypenrice said:

Well, this is one definition of under sampling but it isn't a definition that's going to get me worked up into a tizzy. This image looks, to me, very much like images of M51 which I've shot at 0.6"PP and 0.9"PP. [...]

With your sampling at 0.9 arc sec/pixel, even given Nyquist at 2x sampling, you would not achieve resolution better than 1.8 arc secs.  That is quite a lot worse than 1.4 arc secs and would be visually noticeable. I think you have answered your own question with your belief that higher S/N is more gratifying. To me the two most important factors are BOTH S/N and resolution, I treat them with equal respect.


8 hours ago, CCD Imager said:

With your sampling at 0.9 arc sec/pixel, even given Nyquist at 2x sampling, you would not achieve resolution better than 1.8 arc secs. [...]

Well, as I said earlier, I've shot the same targets at 0.6"PP and at 0.9"PP, the former with a far higher theoretical resolution (350mm aperture versus 140mm) but I could find no significant or consistent difference in real detail captured. I think we are, quite simply, seeing-limited and that shooting at less than an arcsecond per pixel is a waste of time.

Olly


We'll just have to disagree. You have a wonderful site to image from. When I visited 7 years ago, I had a QSI683 with whopping 5.4µm pixels, so the maximum resolution I could achieve was 2.0 arc secs. Every image I took that week was 2.0 arc secs or thereabouts - I was sampling limited. I am sure that your site has raw sub-arcsecond seeing. Buy a 10Micron, Astro-Physics or Paramount mount with absolute encoders and exclude guiding error from your blurring equation :)

I know you disagree on this too, but the resolution of my images has definitely improved since owning the GM2000; Per, even from the heavens, would agree with me.

Adrian


10 hours ago, CCD Imager said:

With your sampling at 0.9 arc sec/pixel, even given Nyquist at 2x sampling, you would not achieve resolution better than 1.8 arc secs. [...]

OK, you say that you have the ability to "resolve" at 0.47"/px or whatever your rate is - and I'll show you that this is not true.

To resolve means to distinguish between two or more entities (not to record or detect, as some people mistakenly believe - you don't resolve the Cassini division, for example - you detect it).

Look at the following example:

[attached image: Hubble M51 crop (left) versus the posted image at 100% zoom (right)]

On the left is Hubble's M51 reduced to about the same size as your image, and on the right is your image at 100% zoom. The Hubble image clearly resolves a group of stars at this sampling rate. There is no mistaking what those are. In your image we simply see a blob of something - no way of telling if it is dust or nebulosity or a galaxy (like the galaxy next to it - the shape is 100% the same).

To understand at what scale you have achieved the resolution of your system, you can run a set of comparisons. Do a progressive reduction of the size of your image (resample it - use Lanczos resampling, for example in the free software IrfanView) and compare it to a Hubble image at the same size. As long as the Hubble image looks sharper, you have not matched the sampling rate to the resolution of the data - or in other words, at that sampling there could be more detail than your image contains (as the Hubble image clearly shows).
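A minimal sketch of that comparison loop (the file names are hypothetical, and the two crops are assumed to already cover the same field at the same pixel scale):

```python
# Produce progressively smaller Lanczos-resampled copies of both images so
# they can be compared side by side at each scale.
from PIL import Image

mine = Image.open("my_m51_crop.png")          # hypothetical file names
hubble = Image.open("hubble_m51_crop.png")    # assumed matched in scale

for percent in (100, 50, 40):
    size = (mine.width * percent // 100, mine.height * percent // 100)
    mine.resize(size, Image.Resampling.LANCZOS).save(f"mine_{percent}.png")
    hubble.resize(size, Image.Resampling.LANCZOS).save(f"hubble_{percent}.png")
```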

Here is an example of this procedure with your image - first at 100%, as you recorded it:

[attached image: the posted image (left) versus Hubble (right) at 100%]

On the left is your image and on the right is Hubble's (which I converted to black and white to make them more comparable). It is clear that your image does not contain detail that requires this resolution, as the Hubble image is sharper.

Now let's look at the images resampled to 50%:

[attached image: the same comparison at 50%]

OK, now we are getting somewhere. They look more like each other, but the Hubble image is still sharper by a tiny bit. This means that your image does not have detail sharp enough to need this resolution either.

Let's do another round and this time reduce it to, say, 40%:

[attached image: the same comparison at 40%]

I would say that now the detail in the two is about the same. There are a few differences, mostly due to processing and level of contrast, but not in detail.

This simply matches what I've shown you with the frequency analysis - you are oversampled by a factor of ~2.5 in this image.

If your original sampling was 0.47"/px, then in reality what you've achieved is 0.47"/px * 2.5 = 1.175"/px.

Don't get me wrong - I think that is superb resolution for an amateur setup. As I've mentioned, those who approach 1"/px in the actual detail they capture are at the top of what 99% of amateurs can hope to achieve.

The thing is, you wasted a lot of SNR getting there because you sampled at 0.47"/px.

Again, you don't have to image at 1.175"/px - you can just choose a bin factor close to that and simply bin your data in processing. You won't lose detail, because there is no detail there - just look at the images above.


Firstly, I appreciate your taking the time to discuss this image and also comparing my image to Hubble - I take that as a compliment :)

A couple of points: my image and no doubt the image from Hubble have both been significantly processed, probably differently.

Secondly, the raw FITS image is 71 MB while the image I presented here is 1.8 MB - an awful lot of data has been lost.

I'm happy to run your resampling experiments on my raw data. I use PixInsight, which is very powerful and has many resampling algorithms, including Lanczos 3 and 4 - which would you prefer?


8 minutes ago, CCD Imager said:

Secondly the raw FITS image is 71MB

Do you mind sharing your raw data for FWHM measurement, as you've been asked before?

No need to upload the whole image - just a crop containing a few stars will be enough.


Another comment regarding compromised S/N with over-sampling. S/N is governed by aperture (among other obvious parameters - exposure duration, transparency, light pollution, object etc.) and not by focal length or pixel sampling. Object S/N remains the same with over-sampling; it is just spread out over more pixels. You don't lose S/N with modern CMOS cameras, and you are unable to bin in camera, only cosmetically with software. In this respect, it doesn't matter if you are over-sampled: simply re-sample in post to achieve your desired sampling, nothing lost. The opposite is that you are potentially missing out on resolution. And lastly, deconvolution is more effective on over-sampled images. Try it: take an image from a little refractor and watch zero improvement in FWHM. Don't get me wrong, I do like the wide vistas that small refractors can produce, great for those huge narrowband nebulae.


Just now, CCD Imager said:

Another comment regarding compromised S/N with over-sampling. S/N is governed by aperture (among other obvious parameters - exposure duration, transparency, light pollution, object etc.) and not by focal length or pixel sampling. Object S/N remains the same with over-sampling; it is just spread out over more pixels.

Well, that is the key. You spread the same number of photons over more pixels with a higher sampling rate.

- Photon shot noise is related to the number of photons that land on a pixel - it is equal to the square root of the number of photons.

- Dark current noise is not related to resolution; it is always the same per pixel.

- Read noise is not related to resolution either - it is again per pixel.

Say that you have 100 photons land on either 4 pixels or 25 pixels. Let's say that dark noise and read noise are 0 for this case (but they just contribute to SNR drop - they can't improve it as they just add noise)

The 4 pixels will have 25 photons each, so the signal is 25 and the shot noise is the square root of the signal (Poisson distribution), i.e. 5.

SNR is therefore 25 / 5 = 5.

In the second case we have 25 pixels, so each gets 4 photons (100/25 = 4).

The signal is 4 and again, the Poisson distribution tells us that the noise is the square root of that, or 2. SNR here is 4/2 = 2.

Look! Just by using 25 pixels instead of 4 - and that is oversampling by x2.5 (a 2x2 square vs a 5x5 square) - we went from SNR 5 to SNR 2 for the same number of photons (same exposure length with the same aperture, light pollution, transparency and the rest).

You would need roughly x6.25 more imaging time to compensate!
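The arithmetic of that example, spelled out as a tiny script (shot noise only, dark and read noise ignored exactly as in the example):

```python
# 100 photons spread over 4 pixels versus 25 pixels: per-pixel SNR drops, and
# the exposure time needed to recover it grows with the square of that drop.
import numpy as np

photons = 100.0

for pixels in (4, 25):
    signal = photons / pixels
    noise = np.sqrt(signal)           # Poisson shot noise
    print(pixels, signal / noise)     # SNR: 5.0 for 4 px, 2.0 for 25 px

print((5 / 2) ** 2)                   # ~6.25x more imaging time to compensate
```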

8 minutes ago, CCD Imager said:

You don't lose S/N with modern CMOS cameras, and you are unable to bin in camera, only cosmetically with software.

Wrong again. Software binning works, it works predictably, and it works much like hardware binning as far as SNR goes. The only difference is that it does not reduce per-pixel read noise. With hardware binning you read out fewer pixels and each pixel has the read noise that it has. With software binning you read out more pixels and each has its own read noise - so you end up with more read noise with software binning - but that is something you can control with your choice of exposure length: as long as you swamp the read noise with LP noise, it makes no noticeable difference.

Software binning works much like stacking - averaging samples reduces the noise and keeps the average signal the same, so SNR goes up. There is no magic in it - that is how hardware binning works after all - there is really no difference between software and hardware binning apart from some technical aspects (the read noise point above versus more flexibility with software binning).

In fact, you don't need to bin your data at all if that worries you.

You can simply split each of your sub exposures into 4 separate subs, each containing the odd/even pixels. That way you don't change any of the pixel data - each of these 4 new subs has half the sampling rate (as binning does) - and you end up with x4 more subs to stack, which improves SNR by x2, the same as binning 2x2.

Binning is simply the same thing (or rather a very, very similar thing) as stacking. Both options are sketched below.
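A hedged NumPy sketch of those two options for a single linear sub held as a 2D array (names and numbers are made up for the illustration):

```python
# Option 1: plain 2x2 software binning (average of each 2x2 block).
# Option 2: split one sub into four half-resolution subs without mixing pixels.
import numpy as np

def bin2x2(sub):
    """Average 2x2 blocks -> half the sampling rate, noise reduced by ~2."""
    h, w = sub.shape[0] // 2 * 2, sub.shape[1] // 2 * 2
    s = sub[:h, :w]
    return (s[0::2, 0::2] + s[0::2, 1::2] + s[1::2, 0::2] + s[1::2, 1::2]) / 4.0

def split4(sub):
    """Four half-resolution subs (even/even, even/odd, odd/even, odd/odd pixels)."""
    return [sub[0::2, 0::2], sub[0::2, 1::2], sub[1::2, 0::2], sub[1::2, 1::2]]

rng = np.random.default_rng(2)
sub = rng.poisson(100, size=(200, 200)).astype(float)   # fake shot-noise-only sub
print(np.std(sub), np.std(bin2x2(sub)))                  # noise drops by roughly x2
print(len(split4(sub)), split4(sub)[0].shape)            # 4 subs at half resolution
```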

14 minutes ago, CCD Imager said:

In this respect, it doesn't matter if you are over-sampled: simply re-sample in post to achieve your desired sampling, nothing lost.

Well, not true. Binning and resampling are not the same thing. Binning is a form of resampling, but not all resampling is the same. In order to achieve the wanted SNR you need to bin / resample the data while it is linear, before you start processing it.

The difference between binning and resampling is in pixel-to-pixel correlation. Resampling also improves SNR - but at the expense of pixel-to-pixel correlation (it introduces a kind of blur into the image), and it does so as a trade-off. Most resampling algorithms don't improve SNR as much as binning does - they do it less, and unpredictably. Binning always improves SNR by a set factor (and that is the bin size - for bin x2 the SNR improvement is x2, bin x3 = SNR x3 and so on).

There are resampling methods that improve SNR even a bit more than binning, but they do that at the expense of resolution - they blur the image in doing so.

Even binning by addition blurs the data very slightly, because of the already mentioned pixel blur. The larger the pixel, the larger the pixel blur, but it is still much lower than the other blur contributions discussed.

In fact, the method of splitting subs is the only one that lets you have your cake and eat it too - it improves SNR but does not introduce any additional blur, even in the form of pixel blur. Perks of software binning and of choosing how best to do it.

20 minutes ago, CCD Imager said:

And lastly, deconvolution is more effective on over-sampled images.

And once again - you are wrong.

Sharpening and deconvolution are really the same thing, only done with different algorithms. If you remember my explanation above about multiplication by a certain filter shape in the frequency domain - deconvolution is the inverse operation. Convolution/deconvolution correspond to multiplication/division in the frequency domain (see the convolution theorem, https://en.wikipedia.org/wiki/Convolution_theorem ).

The fact that you oversampled simply means that you have zeros or pure noise past a certain point in the graph:

[attached sketch: frequency spectrum of the data and its sharpened form, with proper and oversampled cutoffs marked]

Above is the frequency graph of your data and its sharpened form. The blue line is proper sampling with respect to the 0 / noise point and the red line is oversampling. You restore the low frequencies properly by dividing the data by the FT of the appropriate blur kernel, but what do you end up with if there is no data to restore? You have 0, or in a real case just some noise at those scales - the signal is much lower than the noise.

You end up with amplified noise at those frequencies - that is all.


One area of noise which has grown in importance with small pixel CMOS cameras is random telegraph noise. If it's been covered above please forgive my not reading the whole thread.

I have not seen it discussed much, but it can be the dominant noise. C. Buil discusses it on his site and gives an algorithm for reducing it in over-sampled images here, in section 6 (wrongly labelled as 5). It is in French but Google Translate does a fair job.

There is also some English discussion of it in the CMOS camera reviews. Here, for example.

Regards Andrew 

Note: his data refers to the native camera bit depth, not 16 bit, unless it is a 16-bit camera.


2 minutes ago, andrew s said:

I have not seen it discussed much, but it can be the dominant noise. C. Buil discusses it on his site and gives an algorithm for reducing it in over-sampled images here, in section 6 (wrongly labelled as 5). It is in French but Google Translate does a fair job.

I think that there are better methods - I do agree on the issue with telegraph noise if the camera is susceptible to it - but selective binning is a better way of dealing with it than using a median filter before binning.

A median filter acts as a noise shaper and reshapes the noise in funny ways, and if possible it would be best to avoid it, as algorithms further down the pipeline often assume noise of a certain shape (most notably Gaussian + Poisson).

In any case, telegraph noise can be dealt with in a manner similar to hot pixels - by using sigma clipping. If enough samples show that there should not be an anomalous value there, then the anomalous value should be excluded from the bin / stack (rather than all values being replaced with some median value).
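A hedged sketch of that sigma-clipping idea for the set of samples that fall into one bin or one stack position (the threshold and sample values are made up for the illustration):

```python
# Iteratively reject samples that sit far from the rest, then average what is
# left - an outlier such as a telegraph-noise excursion gets excluded instead
# of being smeared into the result by a median filter.
import numpy as np

def sigma_clipped_mean(values, kappa=3.0, iterations=3):
    """Mean of `values` after iteratively rejecting outliers beyond kappa*std."""
    values = np.asarray(values, dtype=float)
    mask = np.ones(values.shape, dtype=bool)
    for _ in range(iterations):
        mean, std = values[mask].mean(), values[mask].std()
        mask = np.abs(values - mean) <= kappa * std
    return values[mask].mean()

# One anomalous sample among otherwise well-behaved ones.
samples = [101, 99, 102, 100, 98, 100, 250, 101,
           99, 100, 102, 98, 101, 100, 99, 100]
print(sigma_clipped_mean(samples))   # ~100, the outlier is rejected
```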
