
Bubble Nebula rehashed L(Ha)SHO, Foraxx palette


windjammer

Recommended Posts

21 minutes ago, vlaiv said:

I think that there are better methods - I do agree on the issue with telegraph noise if the camera is susceptible to it - but selective binning is a better way of dealing with it than using a median filter before binning.

A median filter acts as a noise shaper and reshapes the noise in funny ways; if possible it is best avoided, as algorithms further down the pipeline often assume noise of a certain shape (most notably Gaussian + Poisson).

In any case - telegraph noise can be dealt with in a manner similar to hot pixels - by using sigma clipping. If enough samples show that there should not be an anomalous value there, then the anomalous value should be excluded from the bin / stack (rather than all values being replaced with some median value).
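For concreteness, a sigma-clipped bin along the lines described in the quote might look like this - a rough numpy sketch under assumed parameters, not anyone's actual implementation (the function name and defaults are made up):

```python
import numpy as np

def sigma_clipped_bin(img, factor=2, kappa=3.0):
    """Average-bin `img` by `factor`, rejecting outliers (hot pixels,
    telegraph noise) from each bin via a kappa-sigma clip around the
    block median, instead of median-replacing values."""
    h = img.shape[0] - img.shape[0] % factor
    w = img.shape[1] - img.shape[1] % factor
    # gather each factor x factor block into the last axis
    blocks = (img[:h, :w]
              .reshape(h // factor, factor, w // factor, factor)
              .transpose(0, 2, 1, 3)
              .reshape(h // factor, w // factor, factor * factor))
    med = np.median(blocks, axis=-1, keepdims=True)
    sig = blocks.std(axis=-1, keepdims=True)
    keep = np.abs(blocks - med) <= kappa * sig + 1e-12
    # average only the samples that survived the clip; the count is
    # never zero because values at the median always pass the test
    return (blocks * keep).sum(-1) / keep.sum(-1)
```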

I am surprised at such a quick dismissal. Christian is a careful worker and I respect his opinions as I do yours.

However, given the speed of your response, I assume you have looked at his algorithm before and compared the results. If so, could you share the results?

Regards Andrew 


2 hours ago, CCD Imager said:

 Buy a 10 micron, Astrophysics or Paramount mount with absolute encoders and exclude guiding error from your blurring equation :)

 

 

I host or have hosted four 10 Microns and five Mesus. Returns to manufacturer stand at 10 Micron four, Mesu zero.  I'm hardly likely to want to change... :grin:  

Olly


Just now, andrew s said:

I am surprised at such a quick dismissal. Christian is a careful worker and I respect his opinions as I do yours.

However, given the speed of your response, I assume you have looked at his algorithm before and compared the results. If so, could you share the results?

Regards Andrew 

I'm not dismissive of his work related to spectroscopy, quite the opposite. I was just looking at how his workflow fits into long exposure imaging.

The two are quite different as far as the conditions for applying the algorithm go.

In the low light regime, where SNR is below 1, doing a median on a single sub can be detrimental to the final result. Oversampling by a factor of x4 just to be able to do a median filter and later bin again is extremely impractical as far as long exposure imaging goes.

I suspect that dithering is much more difficult with spectroscopy and stacking is not utilized often?

In any case, before I make a qualitative statement on his method I'd need to assess a few things:

- how does median filter reshape noise

- how does median filter impact resolution of the image (what is its frequency response and can we even talk about general frequency response or is it random in nature)

My gut feeling (an educated one, we might say :D ) tells me that on both counts we might expect a surprise, possibly not a nice one (there might be side effects of using the method that we might not be aware of).

Here is an example:

image.png.85b9dffebbe00baba149673c6c61474a.png

Gaussian noise at top left, the median x3 version below it; the corresponding frequency-domain images are at top right and bottom right respectively.

As you can see, the median filter does funny things to the data and attenuates high frequencies in a weird way.
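The experiment above is easy to reproduce. A rough sketch with assumed parameters (unit Gaussian noise, a 3x3 median via scipy, and arbitrarily chosen frequency-band edges), comparing power at low versus high spatial frequencies:

```python
import numpy as np
from scipy.ndimage import median_filter

# Gaussian white noise, 3x3 median filter, then compare spectral
# power in a low and a high frequency band. (The median filter is
# nonlinear, so this shows average behaviour, not a true frequency
# response.)
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (256, 256))
filtered = median_filter(noise, size=3)

def band_power(img):
    """Mean spectral power at radii r < 32 (low) and r > 96 (high)."""
    p = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    y, x = np.indices(img.shape)
    r = np.hypot(x - 128, y - 128)
    return p[r < 32].mean(), p[r > 96].mean()

lo0, hi0 = band_power(noise)      # white noise: bands roughly equal
lo1, hi1 = band_power(filtered)   # median: high band strongly attenuated
print(hi0 / lo0, hi1 / lo1)
```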

 


47 minutes ago, vlaiv said:

In any case, before I make a qualitative statement on his method I'd need to assess a few things:

Don't feel you need to do this unless you want to. However, if you do, you can't use Gaussian noise alone, as RTN (random telegraph noise) is not Gaussian. Best to use the noise from a small-pixel CMOS camera. Regards Andrew 


10 minutes ago, andrew s said:

Don't feel you need to do this unless you want to. However, if you do, you can't use Gaussian noise alone, as RTN (random telegraph noise) is not Gaussian. Best to use the noise from a small-pixel CMOS camera. Regards Andrew 

I'd be more concerned with the Gaussian filter in the workflow, in light of the above discussion, to be honest.

It is a deliberate reduction in resolution without any real reason for it.

image.png.06569f3bf9ab2fae196d074e99a61c2d.png

Even if the data is oversampled, using a Gaussian filter on it will further reduce detail.

Now I wonder if just using a Gaussian with a FWHM of 2-4 pixels and then binning the data x3 (similar to the median x3 above, but better behaved) would yield the same noise reduction of x4.

Alternatively, I do wonder how a stack of such subs with selective bin x4 would behave (also in terms of resolution).


2 hours ago, vlaiv said:

Wrong again. Software binning works and it works predictably and works much like hardware binning works as far as SNR goes. Only difference is that it does not reduce per pixel read noise

I now ignore read noise; it is vanishingly small, less than 10% of the values seen in CCD cameras 5-10 years ago, especially if you have sufficiently long exposures.
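The quoted claim about software binning is easy to verify numerically. A sketch with a hypothetical shot-noise-limited flat frame (made-up signal level): a 2x2 average bin roughly doubles per-pixel SNR, just as hardware binning would, read noise aside.

```python
import numpy as np

# Hypothetical flat frame dominated by shot noise: ~100 e-/px.
rng = np.random.default_rng(1)
img = rng.poisson(100.0, (256, 256)).astype(float)

# 2x2 software bin by averaging blocks.
binned = img.reshape(128, 2, 128, 2).mean(axis=(1, 3))

snr0 = img.mean() / img.std()       # per-pixel SNR before binning
snr1 = binned.mean() / binned.std() # per-pixel SNR after binning
print(round(snr1 / snr0, 2))        # ~2
```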


2 minutes ago, CCD Imager said:

No argument with your pixel mathematics, but my point is that object S/N remains the same, no matter how many pixels are used to sample it

Can you explain what "object S/N" is?

How do you define it?


44 minutes ago, CCD Imager said:

Wrong!

You hosted mine and it's been perfect for 8 years :)

:grin: I meant 'host permanently.' One of the four I host permanently has never gone wrong and one I host permanently has been back twice. I didn't say that every 10M I host permanently has been back... Naturally, I'm delighted that yours has been so good but, on my sample, I would have to stay with Mesu myself.

Olly


2 hours ago, vlaiv said:

Fact that you over sampled simply means that you have zeros or noise past one point in graph

I'm afraid you are taking me too literally; maybe my fault for generalizing. Deconvolution will work best with sampling at the Nyquist sampling rate; for me, that equates to 3x, and for you, 2x or less. You advocate 1.0 arc sec/pixel sampling (correct me if I am wrong), so in my image with a FWHM of 1.4 arc secs, stars fall on one pixel with some spillage. Deconvolution works on a central pixel or central group and compares it with surrounding pixels, then adjusts the value of the center higher and the surrounding pixels lower. It just wouldn't have an effect on my image.

 

After seeing your hand-drawn graph, maybe you should revisit the line graph through a star in my image (see below): there is data spreading around 7 pixels from the center, which equates to over 2 arc secs and looks similar to your top graph.

 

Take a small refractor with a FL of 400mm and a common camera with 3.8u pixels: the sampling rate becomes 2 arc sec/pixel, and if the seeing is around 2.0 arc sec, deconvolution is basically useless.

 

image.thumb.jpeg.ee6629a102816fc92f93423b7d1019bb.jpeg


47 minutes ago, vlaiv said:

I'd be more concerned with the Gaussian filter in the workflow, in light of the above discussion, to be honest.

It is a deliberate reduction in resolution without any real reason for it.

image.png.06569f3bf9ab2fae196d074e99a61c2d.png

Even if the data is oversampled, using a Gaussian filter on it will further reduce detail.

Now I wonder if just using a Gaussian with a FWHM of 2-4 pixels and then binning the data x3 (similar to the median x3 above, but better behaved) would yield the same noise reduction of x4.

Alternatively, I do wonder how a stack of such subs with selective bin x4 would behave (also in terms of resolution).

Maybe, as you said, it's too different a use case. In high resolution spectroscopy you have one long exposure image, so stacking is not normally an option. In the text he does discuss the issues you raise, but obviously answers them in his context.

Regards Andrew 


1 hour ago, vlaiv said:

Can you explain what "object S/N" is?

How do you define it?

Object light intensity divided by sky noise (light pollution, transparency etc.). Whatever camera you point at the object, at whatever resolution, sensitivity or aperture, object S/N remains the same. It is a useful and important start to understanding imaging.


1 hour ago, CCD Imager said:

Deconvolution will work best with sampling at the Nyquist sampling rate; for me, that equates to 3x, and for you, 2x or less. You advocate 1.0 arc sec/pixel sampling (correct me if I am wrong), so in my image with a FWHM of 1.4 arc secs, stars fall on one pixel with some spillage.

There is no different interpretation of the Nyquist sampling rate. It is very clear in what it states: you need to sample at x2 the maximum spatial frequency of the image, or in other words, 2 samples per shortest wavelength in the image.

I'm not advocating 1"/px sampling - I mentioned that sampling rate as one that is exceptionally hard to attain as far as the signal in one's image goes.

Here is a breakdown of what I'm saying, just to be clear:

1. The ideal sampling rate of a long exposure image is around x1.6 smaller than the average FWHM of the image. This is based on a correct application of the Nyquist theorem in the frequency domain, an approximation of the blur affecting the image with a Gaussian distribution (justified by the fact that both tracking and seeing errors are Gaussian in nature and overwhelm the Airy pattern on all but the smallest telescopes), and the limit imposed by noise.

In simple words - sample at a x1.6 smaller sampling rate than your FWHM. If your FWHM is 3.2", then your optimal sampling rate is 3.2/1.6 = 2"/px.

If your FWHM is 1.4", then your optimal sampling rate is 1.4"/1.6 = 0.875"/px, etc ...
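The rule of thumb in point 1, as a trivial helper (hypothetical function name, just restating the arithmetic above):

```python
# Optimal sampling rate ("/px) as measured FWHM divided by ~1.6,
# per the rule of thumb above.
def optimal_sampling(fwhm_arcsec):
    return fwhm_arcsec / 1.6

print(optimal_sampling(3.2))   # ~2.0 "/px
print(optimal_sampling(1.4))   # ~0.875 "/px
```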

2. Most amateur setups are simply not able to attain data with resolution higher than 1"/px - or in other words, most amateur setups produce stars with FWHM higher than 1.6", regardless of the sampling (unless severely undersampled).

This is because of a couple of things:

1. aperture size

2. seeing effects

3. guiding performance

Just to give you an illustration - imagine that you are imaging in superb seeing with 1" FWHM seeing influence, on exceptional mount that has 0.3" RMS performance (tracked or guided) with 6" diffraction limited scope.

Your total FWHM that you can expect will be ~1.93"

This is because total gaussian rms will be square root of sum of respective gaussian rms-s:

Seeing RMS will therefore be 1" FWHM / 2.355 = ~0.425"

Mount RMS is already given as RMS = 0.3"

Airy disk RMS is about 2/3 of Airy disk radius (0.84/1.22 to be precise). For 6" scope Airy disk radius is 0.923" so RMS is 0.635"

Now we just calculate the square root of the sum of squares of those, so we have sqrt(0.635^2 + 0.3^2 + 0.425^2) = 0.8209" RMS, and to get FWHM we multiply back by 2.355, so it is ~1.93".

In other words - if you have close to perfect conditions, the best you can capture is 1.93"/1.6 = 1.2"/px - which matches almost perfectly the resolution that we established for your image (about x2.5 oversampled given 0.47"/px sampling, or about 1.175"/px).
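The budget above can be worked through in a few lines (all numbers taken from the post itself; RMS values of independent Gaussian blurs add in quadrature):

```python
import math

FWHM_TO_SIGMA = 1.0 / 2.355

seeing_rms = 1.0 * FWHM_TO_SIGMA       # 1" FWHM seeing -> ~0.425"
mount_rms = 0.3                        # guide/track RMS, arcsec
airy_radius = 0.923                    # 6" scope Airy radius, arcsec
airy_rms = airy_radius * 0.84 / 1.22   # ~2/3 of Airy radius -> ~0.635"

total_rms = math.sqrt(seeing_rms**2 + mount_rms**2 + airy_rms**2)
total_fwhm = total_rms * 2.355
print(round(total_fwhm, 2))            # ~1.93"
print(round(total_fwhm / 1.6, 2))      # optimal sampling, ~1.2 "/px
```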

This is not something that I'm making up - I can quote source for every single statement.

And just to address your comment that a 1.4" FWHM star pattern at 1"/px sampling will be on a single pixel with some spillage ...

1.4 / 2.355 = ~0.6, so this Gaussian profile has a sigma of 0.6".

One pixel spans +/- 0.5 px; if we assume that the center of the Gaussian and the center of the pixel coincide, 0.5/0.6 = ~0.833 sigma.

Between -0.833 sigma and +0.833 sigma lies only about 60% of the signal in the 1D case; in the 2D case it will be only about 36% of the total signal. So the star is definitely spread over many more pixels than a single one.
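The enclosed-flux figures can be checked with the error function; a quick sketch (the 2D number follows from the separability of the Gaussian, i.e. squaring the 1D fraction):

```python
import math

sigma = 1.4 / 2.355            # 1.4" FWHM star at 1"/px sampling
z = 0.5 / sigma                # half-pixel width in sigma units, ~0.833
frac_1d = math.erf(z / math.sqrt(2))   # flux within +/- z sigma in 1D
frac_2d = frac_1d ** 2                 # separable 2D Gaussian
print(round(frac_1d, 2), round(frac_2d, 2))   # ~0.6 and ~0.36
```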

 


14 minutes ago, CCD Imager said:

Here is a cropped part of my M51 image. It is RAW without any processing.

Binning or resampling, whatever you want to call it, results in loss of resolution - 1.4 to 1.6 arc secs.

Adrian

M51-cropped.fits 3.73 MB · 0 downloads

OK, so here is a measurement of FWHM values in AstroImageJ:

image.png.30438b031a48f243c4297e5777301810.png

It hovers around 3.7 pixels, and if your sampling rate is 0.47"/px, that equates to 1.74" FWHM.


29 minutes ago, CCD Imager said:

Object light intensity divided by sky noise (light pollution, transparency etc). Whatever camera you point at the object at whatever resolution, sensitivity, aperture, object S/N remains the same. It is a useful and important start to understanding imaging

It is a completely wrong concept if you want to understand imaging.

You are not doing photometry of the object - you are imaging it.

The more you "magnify" the object, the darker the image becomes, because we are working with surface brightness, not integrated brightness.

This also applies to sky noise (which is not the only source of noise, by the way - there are also read noise, thermal noise and target shot noise), so there is no single "sky noise".

Increase the sampling rate and, for the same exposure time, per-pixel sky brightness will go down and so will the associated noise.
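A hypothetical numeric example of that scaling (the sky rate is made up): sky photons arrive per unit solid angle, so halving the pixel scale quarters the sky signal per pixel and halves the associated shot noise.

```python
# Hypothetical sky rate at 2"/px sampling.
sky_rate = 100.0                        # e-/px/s at 2"/px

# Same sky at 1"/px: each pixel now covers 1/4 of the solid angle.
fine_rate = sky_rate * (1.0 / 2.0) ** 2

print(fine_rate)                        # 25.0 e-/px/s
print(fine_rate**0.5 / sky_rate**0.5)   # shot-noise ratio: 0.5
```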


1 hour ago, CCD Imager said:

Deconvolution works on a central pixel or central group and compares it with surrounding pixels, then adjusts the value of the center higher and the surrounding pixels lower. It just wouldn't have an effect on my image.

Deconvolution is applied to the whole image, not just star cores (or at least it should be done that way).

1 hour ago, CCD Imager said:

After seeing your hand-drawn graph, maybe you should revisit the line graph through a star in my image (see below): there is data spreading around 7 pixels from the center, which equates to over 2 arc secs and looks similar to your top graph.

My graph represents data in the frequency domain - it is not a star profile; it is what you get when you apply a Fourier transform to your data (the whole image, not just stars).

1 hour ago, CCD Imager said:

Take a small refractor with a FL of 400mm and a common camera with 3.8u pixels: the sampling rate becomes 2 arc sec/pixel, and if the seeing is around 2.0 arc sec, deconvolution is basically useless.

Again, not true.

I just happen to have such data, and I'll show you in a minute - I just need to dig it out.


Here it is: an 80mm TS F/6 APO with a x0.79 flattener/reducer and an ASI1600 with 3.8um pixels - effective sampling rate around 2"/px.

image.png.bceb1e724c89401be9d7e14281c7a999.png

image.png.f301c8b13e508f93d260641ee3454e03.png

This was just a quick process with slight sharpening, to show you that it can make a difference.


38 minutes ago, vlaiv said:

There is no different interpretation of the Nyquist sampling rate. It is very clear in what it states: you need to sample at x2 the maximum spatial frequency of the image, or in other words, 2 samples per shortest wavelength in the image.

Oh yes there is. Here is text from Stan Moore, a well known and respected astro imager; he was the author of CCDStack and someone I conversed with for a long time in the early days of CCD imaging. This is what he wrote:

"There is a long-standing controversy in amateur circles as to the minimum sample that preserves resolution.  The Nyquist criterion of 2 is often cited and applied as critical sample = 2*FWHM.  But that Nyquist criterion is specific to the minimum sample necessary to capture and reconstruct an audio sine wave.  The more general solution of the Nyquist criterion is the width (standard deviation) of the function, which for Gaussian is FWHM = 2.355 pixels.  But this criterion is measured across a single axis. To measure resolution across the diagonal of square pixels it is necessary to multiply that value by sqrt(2), which yields a critical sampling frequency of FWHM = 3.33 pixels."

So, a different interpretation from yours, and it shows that the original theorem, designed for an audio sine wave, applies differently to a stellar image, as I have tried to point out. Sorry to differ, but I agree with Stan Moore.

 

Also, you mention an ideal sampling rate for my image with a FWHM of 1.4", but if I had not "over-sampled" I would never have realised this FWHM. And if you recommend a scale of 0.875"/px, then according to Nyquist, even with a sampling rate of 2x, I could not achieve better resolution than 1.75", so your method doesn't make sense.

49 minutes ago, vlaiv said:

Just to give you an illustration - imagine that you are imaging in superb seeing with 1" FWHM seeing influence, on exceptional mount that has 0.3" RMS performance (tracked or guided) with 6" diffraction limited scope.

Your total FWHM that you can expect will be ~1.93"

But I have shown you an image with 1.4 arc sec resolution

 

51 minutes ago, vlaiv said:

Now we just calculate square root of sum of squares of those so we have sqrt(0.635^2 + 0.3^2 + 0.425^2) = 0.8209" RMS and to get FWHM we need to multiply back with 2.355 so it is ~1.93"

Seems like I am defying mathematical laws


It hovers around 3.7 pixels and if your sampling rate is 0.47"/px - that equates to 1.74" FWHM

1.74?? Wrong! I have never heard of or used AstroImageJ; I have no idea how reliable it is, but it is certainly NOT consistent with several astronomical programs like PixInsight, MaxIm DL, ASTAP and CCDStack. When I took the sub exposure of 1.2 arc secs, I sent it to colleagues to verify.

Your measurement is intriguing, and when there is doubt, always look for another way to measure. As mentioned, the data has been through the mill with other astro programs, so look from another angle and visually inspect the stellar profile via a line graph. I have drawn where the half max is and extrapolated down to the full width. Tell me what you think the FWHM is?

Stellar-profile-2.jpg


4 minutes ago, CCD Imager said:

"There is a long-standing controversy in amateur circles as to the minimum sample that preserves resolution.  The Nyquist criterion of 2 is often cited and applied as critical sample = 2*FWHM.  But that Nyquist criterion is specific to the minimum sample necessary to capture and reconstruct an audio sine wave.  The more general solution of the Nyquist criterion is the width (standard deviation) of the function, which for Gaussian is FWHM = 2.355 pixels.  But this criterion is measured across a single axis. To measure resolution across the diagonal of square pixels it is necessary to multiply that value by sqrt(2), which yields a critical sampling frequency of FWHM = 3.33 pixels."

I'm sorry to say, that is complete nonsense and an utter misunderstanding of the Nyquist sampling theorem.

I don't understand why people insist on relating the x2 to something in the spatial domain when it is clear in what it says:

image.png.d88411355e02a9ced8e244d6a84d5809.png

So it is not twice the FWHM, it's not twice the Rayleigh criterion, it is not twice the Gaussian sigma - it is none of that.

You need to perform a Fourier transform of the signal to find out where the cut-off frequency is, and then use twice that value to determine the sampling rate.

As for the 2D case, here is a simplified argument that sampling at x2 the maximum frequency component still suffices even in the case of a non-optimal rectangular sampling grid (the optimal sampling grid for the 2D case is actually hexagonal, but that is a different matter).

For a sine wave that is either vertical or horizontal, we have a reduction to the 1D case. Any wave at an angle - so neither horizontal nor vertical - will be sampled at a higher rate in X and Y than its wavelength suggests:

image.png.e6ed9f7dcf433ab899240fd36cbbb7c0.png

So any wave at an angle projects onto X and Y as sine waves with longer wavelengths; if you sample twice per green arrow in the X and Y directions, you'll produce more than two samples per blue arrow (or indeed along the X axis).
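The projection argument can be sketched numerically (angles chosen arbitrarily): a plane wave of wavelength lam at angle theta to the x-axis has period lam / cos(theta) along the x-axis, which is always at least lam, so per-axis sampling at 2 samples per lam still satisfies Nyquist for the diagonal wave.

```python
import math

lam = 1.0  # wavelength of the diagonal plane wave

# Wavelength of the wave as seen along the x-axis, per angle.
proj = {deg: lam / math.cos(math.radians(deg))
        for deg in (15, 30, 45, 60, 75)}

for deg, lx in proj.items():
    # projected wavelength is never shorter than lam
    print(deg, round(lx, 3))
```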

 

 


58 minutes ago, vlaiv said:

It is completely wrong concept if you want to understand imaging.

It is where imaging starts: understanding object S/N. Read noise and thermal noise are not relevant to object S/N.

When imaging with a camera, sure, all of what you mention comes into play.

