# Bubble Nebula rehashed L(Ha)SHO, Foraxx palette

## Recommended Posts

Just now, CCD Imager said:

Lol Olly, apologies about the spelling - I forgot you were a teacher!
The image in question is a single sub, and the final image had a FWHM of 1.4 arcsec. I don't think you would be enamoured with a single sub?

I'd be delighted to look at a single sub, Adrian. Most of us can see where a single sub will go if backed up by 80 more of the same.

But why was it a single sub? If you failed to achieve this resolution on the rest, what does that say about your assertion that anything less than 0.5"PP is under sampled? I spent two years imaging at a little over 0.6"PP and was never, ever, able to present an image at full size. Unfortunately the camera in use refused to bin properly so we had to resample downwards in software. I then started shooting the same kind of targets, and sometimes the same targets, at 0.9"PP and found no consistent difference.

Olly

• Replies 130

13 minutes ago, vlaiv said:

Not misinterpreting anything. It is a 2D function.

It's like saying a sine is a 2D function because it has height/intensity - it is not; it is a 1D function.

That doesn't fit with image analysis experience: when a line plot was drawn through a star, it was obvious there were insufficient points for meaningful measurement. Do go and have a look at the CN discussion.

##### Share on other sites

Just now, CCD Imager said:

That doesn't fit with image analysis experience: when a line plot was drawn through a star, it was obvious there were insufficient points for meaningful measurement. Do go and have a look at the CN discussion.

Not sure what you are trying to say here in response to my assertion that an image is in fact a 2D function - intensity depends on the x and y coordinates of a pixel, and f(x,y) is in fact 2D, not 3D.

Further, any image you try to measure will consist of a finite number of samples - it is not a continuous function but a discrete one - and that is what sampling is. The Nyquist sampling theorem deals precisely with that: it gives the criterion for when the original signal can be fully restored from the sampled function (a function that has values only at certain points) and, furthermore, how to properly restore it.

Whenever I took a line plot of any star, I always had enough samples to do some sort of meaningful measurement, so I'm not really sure what you mean by "insufficient points".

Could you provide an example, please, and explain what you find insufficient for meaningful measurement?
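The restoration the theorem guarantees can be made concrete with a short numpy sketch (names and numbers are mine, purely illustrative): sample a band-limited tone above its Nyquist rate, then recover the values *between* the samples. For a periodic window, ideal sinc interpolation is equivalent to zero-padding the FFT spectrum.

```python
import numpy as np

# A band-limited signal sampled above its Nyquist rate can be restored
# exactly. Over a periodic window, ideal (sinc) interpolation is the same
# as zero-padding the signal's FFT spectrum and inverting.
n = 64
t = np.arange(n) / n
samples = np.sin(2 * np.pi * 5 * t)        # 5 cycles per window; Nyquist = 32

up = 4                                      # interpolate x4 between samples
spec = np.fft.rfft(samples)
padded = np.zeros(n * up // 2 + 1, dtype=complex)
padded[: spec.size] = spec                  # higher frequencies are truly zero
rebuilt = np.fft.irfft(padded, n * up) * up # rescale for irfft normalisation

t_fine = np.arange(n * up) / (n * up)
err = np.max(np.abs(rebuilt - np.sin(2 * np.pi * 5 * t_fine)))
print(err < 1e-9)   # True: the in-between values are recovered exactly
```

The point is that nothing between the samples was "lost" - for a band-limited signal the samples determine the whole function.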

##### Share on other sites

7 minutes ago, vlaiv said:

Not sure what you are trying to say here in response to my assertion that an image is in fact a 2D function - intensity depends on the x and y coordinates of a pixel, and f(x,y) is in fact 2D, not 3D.

Further, any image you try to measure will consist of a finite number of samples - it is not a continuous function but a discrete one - and that is what sampling is. The Nyquist sampling theorem deals precisely with that: it gives the criterion for when the original signal can be fully restored from the sampled function (a function that has values only at certain points) and, furthermore, how to properly restore it.

Whenever I took a line plot of any star, I always had enough samples to do some sort of meaningful measurement, so I'm not really sure what you mean by "insufficient points".

Could you provide an example, please, and explain what you find insufficient for meaningful measurement?

A stellar profile has three axes: x (right ascension), y (declination), and intensity. Brighter stars have higher intensities. We shouldn't compare with a sine wave, which is a completely different signal; an optical system forms the image of a point source as a Gaussian curve.
Let me dig out the image in question and show you a line plot of it. It is on CN, but I need to find it again for here.
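Whether a line plot through a star has "enough points" is easy to make concrete. A sketch, assuming an idealised Gaussian PSF (the function and numbers are mine, not taken from the actual sub): at 0.47"/px a ~1.4" star is about 3 px FWHM, so a line cut has only a few samples above half maximum.

```python
import numpy as np

def synthetic_star(fwhm_px, size=31, amp=1000.0):
    """Synthetic star image: a 2D Gaussian PSF with the given FWHM in px."""
    sigma = fwhm_px / 2.3548                 # FWHM = 2.3548 * sigma
    y, x = np.mgrid[:size, :size] - size // 2
    return amp * np.exp(-(x**2 + y**2) / (2 * sigma**2))

star = synthetic_star(fwhm_px=3.0)           # ~1.4" FWHM at 0.47"/px
profile = star[15, :]                        # line plot through the centre row
above_half = np.sum(profile >= profile.max() / 2)
print(above_half)   # -> 3 samples above half maximum
```

Three points across the half-maximum width is the kind of sparse line plot being debated here; whether that is "insufficient" is exactly what the Nyquist argument is about.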

##### Share on other sites

15 minutes ago, ollypenrice said:

But why was it a single sub? If you failed to achieve this resolution on the rest, what does that say about your assertion that anything less than 0.5"PP is under sampled? I spent two years imaging at a little over 0.6"PP and was never, ever, able to present an image at full size. Unfortunately the camera in use refused to bin properly so we had to resample downwards in software. I then started shooting the same kind of targets, and sometimes the same targets, at 0.9"PP and found no consistent difference.

It was a single sub simply because the seeing was at its best that night; other subs had higher FWHM measurements. As mentioned, the final image, which was taken over a couple of nights, had a FWHM of 1.4", so if you apply the Nyquist sampling theorem, I need a scale of less than 0.5 arcsec/pixel to take advantage of that excellent seeing; anything more and I am missing out.
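The arithmetic behind that scale, as a sketch (the 3-samples-per-FWHM rule is the poster's criterion; a literal 2x reading of Nyquist gives a coarser scale):

```python
fwhm = 1.4                   # arcsec, best measured FWHM over the run
print(round(fwhm / 3, 2))    # 0.47 arcsec/px under the 3x-per-FWHM rule
print(round(fwhm / 2, 2))    # 0.7 arcsec/px under a literal 2x reading
```

The disagreement in the thread is precisely over which divisor is justified.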

##### Share on other sites

Here is the single sub image taken through a blue filter. Stars in the center measure around 1.1 arcsec and at the periphery around 1.2 arcsec. BUT a line plot through the stars gives insufficient points to measure adequately, i.e. it is under-sampled.

##### Share on other sites

8 minutes ago, CCD Imager said:

A stellar profile has three axes: x (right ascension), y (declination), and intensity. Brighter stars have higher intensities. We shouldn't compare with a sine wave, which is a completely different signal; an optical system forms the image of a point source as a Gaussian curve.
Let me dig out the image in question and show you a line plot of it. It is on CN, but I need to find it again for here.

Please do understand that the fact that you can represent a function in 3 dimensions does not make it 3D. For a function to be 3D, it must be a function of 3 parameters.

Sine is not a 2D function because we can draw it on a piece of paper (a 2D medium) or a computer screen (again a 2D medium) - it is a 1D function because it accepts 1 parameter: sin(x).

An image is a 2D function because the value/intensity of the image depends on two parameters, x and y, so image = f(x,y). It is not 3D.

A point source does indeed form a quasi-Gaussian shape. It is not a true Gaussian shape because it consists (mainly) of an Airy pattern convolved with two Gaussian functions (seeing and mount tracking/guiding performance); however, that has nothing to do with

a) the image being a 2D function in x and y

b) your ability or inability to measure something from a profile plot of that function (which is itself a 1D function, not 2D, as you are taking a slice of a 2D function)

namely, presenting a sub that is under-sampled at 0.5"/px, or presenting a sub that has 1.5" FWHM or less and was made with a 150 mm aperture - or explaining how x3 FWHM is optimum sampling given the Nyquist theorem or, rather, given its definition, how x3 FWHM is related to the x2 maximum spatial frequency of the image?

##### Share on other sites

3 minutes ago, CCD Imager said:

Here is the single sub image taken through a blue filter. Stars in the center measure around 1.1 arcsec and at the periphery around 1.2 arcsec. BUT a line plot through the stars gives insufficient points to measure adequately, i.e. it is under-sampled.

Can you provide the linear, unaltered sub? This sub has been stretched and modified in image manipulation software.

##### Share on other sites

In either case, a quick measurement (though not an accurate one, as the data does not seem to be linear) gives an average FWHM of 5.6 px or, with sampling of 0.47"/px, a FWHM of ~2.6".
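For reference, a rough FWHM estimate of this sort can be sketched as a half-maximum crossing measurement on a line profile. This is a simplified stand-in for whatever tool was actually used (it assumes a near-zero background and a clean, singly peaked profile):

```python
import numpy as np

def fwhm_from_profile(profile):
    """FWHM in pixels from a 1D star profile, by linear interpolation
    of the two half-maximum crossings (background assumed ~0)."""
    half = profile.max() / 2
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # interpolate each crossing between the bracketing samples
    l = left - (profile[left] - half) / (profile[left] - profile[left - 1])
    r = right + (profile[right] - half) / (profile[right] - profile[right + 1])
    return r - l

# make a synthetic 5.6 px FWHM Gaussian profile to exercise the function
sigma = 5.6 / 2.3548
x = np.arange(41) - 20
profile = 1000 * np.exp(-x**2 / (2 * sigma**2))
px = fwhm_from_profile(profile)
print(round(px, 1), round(px * 0.47, 1))   # -> 5.6 px, 2.6 arcsec
```

The px-to-arcsec step is just FWHM_px multiplied by the pixel scale, which is where the ~2.6" figure above comes from.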

##### Share on other sites

Here is the profile plot of one star in the center:

For reference, here is what the linear data looks like (if you can see the core) and what the profile of that star looks like in the linear data:

And this is from a sub with a similar resolution, of about 2.6" FWHM.

##### Share on other sites

19 minutes ago, vlaiv said:

Please do understand that the fact that you can represent a function in 3 dimensions does not make it 3D. For a function to be 3D, it must be a function of 3 parameters.

Sine is not a 2D function because we can draw it on a piece of paper (a 2D medium) or a computer screen (again a 2D medium) - it is a 1D function because it accepts 1 parameter: sin(x).

An image is a 2D function because the value/intensity of the image depends on two parameters, x and y, so image = f(x,y). It is not 3D.

A point source does indeed form a quasi-Gaussian shape. It is not a true Gaussian shape because it consists (mainly) of an Airy pattern convolved with two Gaussian functions (seeing and mount tracking/guiding performance); however, that has nothing to do with

a) the image being a 2D function in x and y

b) your ability or inability to measure something from a profile plot of that function (which is itself a 1D function, not 2D, as you are taking a slice of a 2D function)

namely, presenting a sub that is under-sampled at 0.5"/px, or presenting a sub that has 1.5" FWHM or less and was made with a 150 mm aperture - or explaining how x3 FWHM is optimum sampling given the Nyquist theorem or, rather, given its definition, how x3 FWHM is related to the x2 maximum spatial frequency of the image?

I don't like comparing an electrical sine wave to a Gaussian plot of a star - apples and oranges.
Let's remember that Nyquist's 2x sampling was specifically related to continuous electrical signals; a stellar plot is a snapshot and presents a Gaussian profile. The Nyquist theorem was adapted for application to stellar images, and measuring circular stars with square pixels has its issues, which explains why you need greater sampling.
I can't agree with you that a stellar profile is not 3D. Thirty years ago this was discussed ad nauseam on CompuServe and agreed by many mathematicians and astrophysicists.

##### Share on other sites

24 minutes ago, vlaiv said:

In either case, a quick measurement (though not an accurate one, as the data does not seem to be linear) gives an average FWHM of 5.6 px or, with sampling of 0.47"/px, a FWHM of ~2.6".

Never play with a JPEG and stretched data!

##### Share on other sites

And here is the linear unstretched data - but to perform an analysis you need the raw FITS data.

This is purely to give the visual interpretation that Olly was asking for.

##### Share on other sites

Here is the processed luminance image taken over 2 nights, shown at 100% full resolution - but please bear in mind that a JPEG will reduce its quality. The average FWHM was 1.4 arcsec. I would like to point out the beauty of over-sampling: it gives deconvolution something to get its teeth into.

I've uploaded the luminance only so that your eye is directed to resolution; colour images are distracting in this respect.

Hopefully I have now satisfied Olly's curiosity.

##### Share on other sites

10 hours ago, CCD Imager said:

Let's remember that Nyquist's 2x sampling was specifically related to continuous electrical signals; a stellar plot is a snapshot and presents a Gaussian profile.

Incorrect. The Nyquist sampling theorem clearly states what it applies to: any band-limited signal/function (which really means that the Fourier transform of that function is zero beyond some maximum frequency).

##### Share on other sites

10 hours ago, CCD Imager said:

Never play with a JPEG and stretched data!

Could you please provide the FITS, then, to verify the 1.5" FWHM claim?

##### Share on other sites

11 minutes ago, vlaiv said:

The Nyquist sampling theorem clearly states what it applies to: any band-limited signal/function (which really means that the Fourier transform of that function is zero beyond some maximum frequency).

Strictly, it also has to be point-sampled. This is not the case with areal sensors like CMOS cameras.

Regards Andrew

##### Share on other sites

1 minute ago, andrew s said:

Strictly, it also has to be point-sampled. This is not the case with areal sensors like CMOS cameras.

Regards Andrew

Yes, but as we have discussed before, sampling with an area-sampling device is the same as point-sampling the function after it has been convolved with the appropriate pixel response - which in general lowers the resolution of the data.

In magnitude it is a rather small contribution, and it further blurs the image, thus lowering rather than enhancing the resolution of the image.
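The size of that contribution can be put in numbers. A square pixel of width p has an MTF of |sinc(p·f)| (unnormalised sinc); even at the Nyquist frequency, this only attenuates amplitude to 2/π - a modest blur next to seeing. A back-of-envelope sketch:

```python
import math

# MTF of a square pixel of width p at spatial frequency f is |sinc(p * f)|,
# i.e. |sin(pi * p * f) / (pi * p * f)|. At the Nyquist frequency f = 1/(2p)
# the argument is pi/2:
mtf_at_nyquist = math.sin(math.pi / 2) / (math.pi / 2)
print(round(mtf_at_nyquist, 3))   # -> 0.637, i.e. ~36% attenuation at worst
```

So area sampling never enhances resolution; it multiplies the spectrum by a factor ≤ 1 everywhere, which is exactly the "further blurs the image" point.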

##### Share on other sites

44 minutes ago, vlaiv said:

Incorrect. The Nyquist sampling theorem clearly states what it applies to: any band-limited signal/function (which really means that the Fourier transform of that function is zero beyond some maximum frequency).

Nyquist formulated the theory based on telegraph electrical signals that have a CONTINUOUS sinusoidal waveform. It has since been adapted to other signal forms, including measuring stars. But there are significant differences, not least that you are measuring a round signal with square pixels.
In simplistic terms, it has been translated as: you need 2x sampling to reproduce the signal - and can I stress this is AT LEAST 2x sampling; the more, the more likely it is to be accurate. Because of the 3D nature and the wasted area of square pixels, the consensus in the knowledgeable astro community is that 3x sampling is necessary.
I see this in practice in this very M51 image that I have uploaded. Given a sampling rate of 0.47 arcsec, I should be able to resolve 1.4 arcsec, but according to your theory I could resolve 0.94", and when you perform a stellar line graph it's plainly obvious there are insufficient points. Just from eyeballing, I believe the 1.1" measurement is false, purely because it is under-sampled.

##### Share on other sites

11 minutes ago, CCD Imager said:

Nyquist formulated the theory based on telegraph electrical signals that have a CONTINUOUS sinusoidal waveform. It has since been adapted to other signal forms, including measuring stars. But there are significant differences, not least that you are measuring a round signal with square pixels.
In simplistic terms, it has been translated as: you need 2x sampling to reproduce the signal - and can I stress this is AT LEAST 2x sampling; the more, the more likely it is to be accurate. Because of the 3D nature and the wasted area of square pixels, the consensus in the knowledgeable astro community is that 3x sampling is necessary.

Most of what you need to know about the Nyquist sampling theorem can be found here:

It's not that hard to read up on a proven mathematical theorem - when it is applicable and how to apply it.

13 minutes ago, CCD Imager said:

I see this in practice in this very M51 image that I have uploaded. Given a sampling rate of 0.47 arcsec, I should be able to resolve 1.4 arcsec, but according to your theory I could resolve 0.94", and when you perform a stellar line graph it's plainly obvious there are insufficient points. Just from eyeballing, I believe the 1.1" measurement is false, purely because it is under-sampled.

If you wish to know what I base my recommendation to sample at x1.6 of FWHM on, I'm happy to explain the details - it is fairly simple to follow for anyone "knowledgeable" enough. The explanation goes like this:

- A Gaussian form does not qualify for the Nyquist sampling theorem, as it is not a band-limited signal.

- A star profile is a band-limited signal because it was captured with an aperture of limited size and is not a true Gaussian form. It is in fact a convolution of an Airy pattern with at least two Gaussian forms (seeing and mount performance). This is for a perfect aperture and a perfectly random tracking error (seeing is believed to be random enough over the course of a long exposure to qualify for the central limit theorem). Optical aberrations and less-than-perfect tracking will only lower the resolution of the image, so we are aiming for the best-case scenario. An Airy pattern is a band-limited signal, and therefore its convolution with other signals (multiplication in the frequency domain) will be a band-limited signal.

- As such, we must select the sampling limit so as to most closely restore the original signal (https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem#Sampling_below_the_Nyquist_rate_under_additional_restrictions).

- In an ideal world, regardless of other factors, we would need to sample at the rate determined by the aperture of the telescope, as that determines the true cut-off frequency. However, in the real world, seeing and mount performance dominate the Airy pattern enough that star profiles are well approximated by a Gaussian profile, and we have the presence of noise.

- In such conditions, a sensible cut-off point is the one where frequencies are attenuated by more than 90% in the frequency spectrum (sharpening past that point will simply bring too much noise forward).

- From the above conditions it is fairly easy to find the proper sampling frequency: we take a Gaussian shape of a certain FWHM, take its Fourier transform (which is again a Gaussian function), find the point at which the amplitude falls below 10% (i.e. attenuation of more than 90%), and set our sampling at twice that frequency.
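The steps above can be sketched numerically (the function name is mine; the 10% amplitude point corresponds to the >90% attenuation criterion, and "sample at twice the cut-off" is the Nyquist step):

```python
import math

def optimal_pixel_scale(fwhm_arcsec, cutoff=0.1):
    """Pixel scale implied by the steps above: Fourier-transform a Gaussian
    of the given FWHM, find where its amplitude drops below `cutoff`,
    and sample at twice that frequency (Nyquist)."""
    sigma = fwhm_arcsec / (2 * math.sqrt(2 * math.log(2)))  # FWHM -> sigma
    sigma_f = 1 / (2 * math.pi * sigma)      # FT of a Gaussian is a Gaussian
    f_cut = sigma_f * math.sqrt(2 * math.log(1 / cutoff))   # 10% amplitude
    return 1 / (2 * f_cut)                   # arcsec per pixel

scale = optimal_pixel_scale(1.6)
print(round(1.6 / scale, 2))   # -> 1.61 samples per FWHM: the x1.6 figure
```

The ratio of FWHM to pixel scale is independent of the FWHM itself, which is why the recommendation can be stated as a single x1.6 factor.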

##### Share on other sites

24 minutes ago, CCD Imager said:

Nyquist formulated the theory based on telegraph electrical signals that have a CONTINUOUS sinusoidal waveform. It has since been adapted to other signal forms, including measuring stars. But there are significant differences, not least that you are measuring a round signal with square pixels.
In simplistic terms, it has been translated as: you need 2x sampling to reproduce the signal - and can I stress this is AT LEAST 2x sampling; the more, the more likely it is to be accurate. Because of the 3D nature and the wasted area of square pixels, the consensus in the knowledgeable astro community is that 3x sampling is necessary.
I see this in practice in this very M51 image that I have uploaded. Given a sampling rate of 0.47 arcsec, I should be able to resolve 1.4 arcsec, but according to your theory I could resolve 0.94", and when you perform a stellar line graph it's plainly obvious there are insufficient points. Just from eyeballing, I believe the 1.1" measurement is false, purely because it is under-sampled.

Obviously, you can apply Nyquist's theorem to non-sinusoidal signals by virtue of the fact that any complex signal can be decomposed into a set of sinusoids via the Fourier transform. The only component you need to consider for this is the highest harmonic you need to reproduce.
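That decomposition is easy to see numerically. A sketch with a square wave (my own construction, chosen because its spectrum contains only odd harmonics):

```python
import numpy as np

n = 1024
t = np.arange(n) / n
square = np.sign(np.sin(2 * np.pi * 5 * t))        # 5 Hz square wave
spec = np.abs(np.fft.rfft(square)) / (n / 2)       # amplitude spectrum
strong = np.where(spec > 0.1)[0]                    # bins with amplitude > 0.1
print(strong[:4])    # -> odd harmonics of 5 Hz: 5, 15, 25, 35
```

To reproduce the square wave faithfully you must sample above twice the highest harmonic you care about - exactly the point being made.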

##### Share on other sites

38 minutes ago, vlaiv said:

From the above conditions it is fairly easy to find the proper sampling frequency: we take a Gaussian shape of a certain FWHM, take its Fourier transform (which is again a Gaussian function), find the point at which the amplitude falls below 10% (i.e. attenuation of more than 90%), and set our sampling at twice that frequency.

I have read the Wiki article and many others, and have also watched presenters discuss Nyquist. Most make reference to a continuous electrical signal.

The problem with your proposal is that FWHM varies wildly. It is better to know the best FWHM with an over-sampled system and use that. Should the FWHM be higher, you can always bin/resize to suit. Under-sampling equals loss of resolution.

And as most astro imagers suffer from less-than-perfect mount tracking/guiding/focusing, there is an acceptance that the resultant FWHM values are the best they can achieve. However, if the "real seeing" is known, it brings home how their equipment is compromising FWHM. Having near-perfect optics, a mount that doesn't require guiding, and better autofocus hardware and algorithms would suit greater over-sampling. In other words, your proper sampling frequency completely changes.

##### Share on other sites

53 minutes ago, Mandy D said:

Obviously, you can apply Nyquist's theorem to non-sinusoidal signals by virtue of the fact that any complex signal can be decomposed into a set of sinusoids via the Fourier transform. The only component you need to consider for this is the highest harmonic you need to reproduce.

Indeed, and it is applied to astro images, but there are differences in its application.

##### Share on other sites

3 minutes ago, CCD Imager said:

In other words, your proper sampling frequency completely changes.

The proper sampling frequency is simply related to the quality of the data - regardless of how good or bad that data is.

Over-sampling is much, much more detrimental to astronomical data than under-sampling is. Here is an example:

This is the image that you posted. The left one is the original; the middle one was downsampled to 50% and then upsampled, so it effectively has only half the sampling rate of your original data. Virtually no change there.

The third one was downsampled to 33% of the original - and yes, there is some change in the level of detail; the absence of sharpness is obvious - but this is a x3 coarser sampling rate.

The first part shows that your image is over-sampled by a factor of x2, and the second part shows that there is only a slight loss of sharpness when you under-sample. In fact, if I present the image at the proper sampling rate, the difference is even less obvious:

The left image is your image at the proper sampling rate, and the right is one that has been under-sampled by a factor of x3 compared to your original image, or x1.5 compared to proper sampling. Hardly any difference in detail.

In any case, over-sampling brings a needless loss of SNR and for that reason is a much bigger problem than slight under-sampling (if any).
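A quick way to see why halving the sampling costs so little, as a sketch (the numbers assume ~1.4" FWHM at 0.47"/px, i.e. a ~3 px Gaussian star - my reading of the data in question): compute how much spectral amplitude survives at the Nyquist frequency of the half-rate grid.

```python
import math

def psf_amplitude(freq_cyc_per_px, fwhm_px):
    """Relative amplitude of a Gaussian PSF's spectrum at a spatial frequency
    given in cycles per (original) pixel."""
    sigma = fwhm_px / (2 * math.sqrt(2 * math.log(2)))
    sigma_f = 1 / (2 * math.pi * sigma)
    return math.exp(-freq_cyc_per_px**2 / (2 * sigma_f**2))

fwhm_px = 1.4 / 0.47                  # ~3 px FWHM at the original sampling
# The Nyquist frequency of the 50% grid is 0.25 cycles per original pixel:
print(round(psf_amplitude(0.25, fwhm_px), 2))   # -> 0.14
```

Only ~14% of the amplitude survives beyond what the half-rate grid can carry, which is why the 50% version above looks virtually unchanged while still doubling the signal per pixel.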

