
Imaging small galaxies with focal length under 800mm


TheDadure


1 hour ago, vlaiv said:

 

I found it rather interesting that you both actually saw a difference between 0.9" and 1.1". I think that most people would be hard pressed to see a difference between doubled sampling rates - for example 0.8"/px and 1.6"/px.

Could you at least tell me what sort of difference you saw?

image.png.96f99c489f91fbd895eb62a63787ea85.png

Here is some text that has been blurred so that we hit the optimum sampling rate - one version has been sampled at 1.1"/px and the other at 0.9"/px.

Do you see any difference - apart from the left image obviously being slightly smaller? To my eye it just seems a bit sharper than the image sampled at 0.9"/px (the larger one), although the smaller letters might be harder to read without glasses :D

 

The main advantage I was seeing may just have been in final image size rather than in the isolation of new features. Over-sampling does become visually obvious somewhere along the line. The 14 inch data I obtained at about 0.6"PP did not look appealing at full size, ever. On the other hand I regularly post my 0.9"PP images at full size and cropped. Here's an example below at full size and very heavily cropped. I'd rather have it at this screen size than 20% smaller because it's already minute!  🤣

1893364839_NGC7479LRGBtightcrop.jpg.df62e0ab3db53fb319a01ae3b298bd28.jpg

Olly

 

 

 


14 minutes ago, vlaiv said:

But look - I just made it even bigger now!

 

All it took was CTRL+ in my browser to do so ...

😁  Magic!

But the image is now, to my eye, screaming 'empty resolution' where the original was just about OK.

Olly


8 minutes ago, ollypenrice said:

😁  Magic!

But the image is now, to my eye, screaming 'empty resolution' where the original was just about OK.

Olly

I always love to measure things - simple math in this case. Take your stack (luminance is fine) while it is still linear and measure the FWHM. Divide that by 1.6 and you get the optimum resolution for your image.

0.9"/px is a tall order - you need something like 1.44" FWHM.

Btw, purely visually I would say that your crop above is over sampled as well. Here is what I would call a visually properly sampled image:

image.png.614e9f7a3151900bc96557f082369ac9.png

Stars really need to be pinpoints. Then again, some people don't mind if the image is a bit blurry as long as they don't have to put too much effort into seeing the details.
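
In Python terms the arithmetic looks something like this (just a sketch - the function names are illustrative, and 1.6 is the rule-of-thumb divisor from above):

```python
def optimum_sampling(fwhm_arcsec):
    """Rule-of-thumb optimum sampling rate ("/px) for a measured star FWHM."""
    return fwhm_arcsec / 1.6

def nearest_bin(native_scale, fwhm_arcsec):
    """Integer bin factor whose resulting pixel scale is closest to the optimum."""
    return max(1, round(optimum_sampling(fwhm_arcsec) / native_scale))

print(optimum_sampling(1.44))   # ~0.9 "/px, the example above
print(nearest_bin(0.52, 3.0))   # 4, i.e. about 2.08 "/px
```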

 

  • Thanks 1
Link to comment
Share on other sites

I think we are into aesthetics and personal preference. A lot depends on screen / image size and viewing distance.

Where should you view an impressionist painting from? Close up to see the dots, or from afar to see the whole merge into ...

Regards Andrew 


3 minutes ago, vlaiv said:

I always love to measure things - simple math in this case. Take your stack (luminance is fine) while it is still linear and measure the FWHM. Divide that by 1.6 and you get the optimum resolution for your image.

0.9"/px is a tall order - you need something like 1.44" FWHM.

Btw, purely visually I would say that your crop above is over sampled as well. Here is what I would call a visually properly sampled image:

image.png.614e9f7a3151900bc96557f082369ac9.png

Stars really need to be pinpoints. Then again, some people don't mind if the image is a bit blurry as long as they don't have to put too much effort into seeing the details.

 

Yes, that's why I considered the image 'just about OK' at 1:1.  Choosing between 1:1 and a downsampled presentation is done subjectively, I'd say. 

We do occasionally get to FWHM 1.4 on this rig but it varies wildly.

Olly


I'm losing sight of what the point is. The difference in my rig between bin 1 and bin 2 is 0.52" and 1.04". The seeing is the limiting factor, obviously. This is measurable.

Going into what we can "see" is subjective, as we are essentially artists pixel peeping at thousands of images. I find this equally important, regardless of the measurements, because we become experts at our own rigs and outputs. As such, I find the comparisons Vlaiv made with text to be borderline nonsense, because our brains observe text in a massively different way than they do images, especially close-ups of star fields that we all know so well.

Anyway, I tried to measure something on existing images. Same target, IC410, in Ha and OIII. It is still not a fair comparison, because the Ha is 3nm while the O3 is 8nm, and the exposure times are different. They are also from different nights, because that's how I roll. But, all other things being equal, that shouldn't affect FWHM too much.

2020-05-06_1359.png.e1a87ff0fed945d2f78821791404cb61.png

2020-05-06_1400.png.0799fc6cbad73f714e32e19889d668bd.png

Bin1: 0.52" per pixel      Bin2: 1.04" per pixel

So, the Ha is not clipped (much) and, if I'm mathing correctly, 1.576 px * 0.52"/px = 0.81952" seeing? With that, I should be able to see a visible difference in an image sampled above and below that. Agree?

I'm not sure what to conclude about the O3. It is clipped in some stars, so that probably skews the measurement?


30 minutes ago, Datalord said:

I'm not sure what to conclude about the O3. It is clipped in some stars, so that probably skews the measurement?

Could you post a crop of both frames - just a piece, so it'll be a small upload - but with enough unclipped stars? I want to measure both with AstroImageJ.

32 minutes ago, Datalord said:

As such, I find the comparisons Vlaiv made with text to be borderline nonsense, because our brains observe text in a massively different way than they do images, especially close-ups of star fields that we all know so well.

I've done a similar comparison with actual data and I'd be happy to do it again for you if you wish. The point is in the numbers rather than the text - the difference between two subs when we know that resolution has been limited by blur of a certain FWHM.


32 minutes ago, vlaiv said:

Could you post a crop of both frames - just a piece, so it'll be a small upload - but with enough unclipped stars? I want to measure both with AstroImageJ.

I've done a similar comparison with actual data and I'd be happy to do it again for you if you wish. The point is in the numbers rather than the text - the difference between two subs when we know that resolution has been limited by blur of a certain FWHM.

How confident are we that FWHM is an accurate indicator of non-stellar resolution?

Olly


10 minutes ago, ollypenrice said:

How confident are we that FWHM is an accurate indicator of non-stellar resolution?

Olly

In principle - there is little doubt that it is.

Stellar objects are far enough away to really be considered point sources in our case (even the closest stars have diameters measured in milliarcseconds - Betelgeuse, for example, is about 50 mas across, and it is gigantic and very close - most others are under 1 mas, which is about 1/1000 of our usual sampling rates).

Blur, by definition, describes how light from a single point spreads around.

A star profile contains only light from that star, so it is indeed the PSF of the combined optics, tracking and atmosphere (in fact of everything that contributes).

Most star profiles are indeed close to Gaussian. Some will say that Moffat is a better PSF approximation for astronomy, and it probably is for telescopes with very good tracking - professional telescopes. Unfortunately, amateur setups don't usually track that perfectly, and because tracking error comes into it, the star profile is probably more Gaussian than Moffat - but both are pretty close:

image.png.8e2d415b0c3c6badbc88c127d6581062.png

FWHM is a rather good indicator of the shape of a Gaussian, as it is tied to the key parameter of the curve via the simple relation sigma = FWHM / 2.355 (more precisely, FWHM = 2·sqrt(2·ln2)·sigma ≈ 2.355·sigma).

We get different FWHM values for the stars in our image (even in a single sub) because the optics have different aberrations depending on how far we are from the optical axis. Different stellar classes emit different amounts of the various wavelengths - hot stars are bluer and produce more blue light, while red stars are obviously rich in red wavelengths. Different wavelengths behave differently in the atmosphere and in the optics - even with mirrored systems, the Airy disk has a different diameter.

All of that creates different FWHM values across the image, but these differences are very small compared to the actual resolution. As we have seen, even a sampling rate change of x2 does not have a drastic impact on what we see in the image.

Small differences in FWHM, depending on where in the image we are looking and on the wavelength distribution of the source, don't have a drastic impact on resolution. No one ever said "look, half of my frame is blurrier than the other half" :D - well, maybe a Newtonian owner without a coma corrector might think that, but they won't say it out loud :D.

If you are doubtful about the FWHM approach, there is a simple way to test it. Take one of your images and measure its FWHM. Take a high resolution image of the same object from the Hubble Space Telescope. Scale the Hubble image so that it is at the same resolution as yours, and then apply a Gaussian blur to it with sigma equal to your FWHM / 2.355.

Compare the results visually and you should see about the same level of blur in both images (this is not strictly correct, as you'll be blurring a stretched image rather than linear data, but it is a good enough approximation).
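
In Python that test is essentially one call (a rough sketch - it assumes you have already registered and rescaled the reference image to your pixel scale and loaded both as arrays):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_reference(reference: np.ndarray, measured_fwhm_px: float) -> np.ndarray:
    """Blur a reference image (already rescaled to our pixel scale) with a
    Gaussian of sigma = FWHM / 2.355, matching the FWHM measured on our data."""
    return gaussian_filter(reference, sigma=measured_fwhm_px / 2.355)
```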


11 minutes ago, vlaiv said:

In principle - there is little doubt that it is.

Stellar objects are far enough away to really be considered point sources in our case (even the closest stars have diameters measured in milliarcseconds - Betelgeuse, for example, is about 50 mas across, and it is gigantic and very close - most others are under 1 mas, which is about 1/1000 of our usual sampling rates).

Blur, by definition, describes how light from a single point spreads around.

A star profile contains only light from that star, so it is indeed the PSF of the combined optics, tracking and atmosphere (in fact of everything that contributes).

Most star profiles are indeed close to Gaussian. Some will say that Moffat is a better PSF approximation for astronomy, and it probably is for telescopes with very good tracking - professional telescopes. Unfortunately, amateur setups don't usually track that perfectly, and because tracking error comes into it, the star profile is probably more Gaussian than Moffat - but both are pretty close:

image.png.8e2d415b0c3c6badbc88c127d6581062.png

FWHM is a rather good indicator of the shape of a Gaussian, as it is tied to the key parameter of the curve via the simple relation sigma = FWHM / 2.355 (more precisely, FWHM = 2·sqrt(2·ln2)·sigma ≈ 2.355·sigma).

We get different FWHM values for the stars in our image (even in a single sub) because the optics have different aberrations depending on how far we are from the optical axis. Different stellar classes emit different amounts of the various wavelengths - hot stars are bluer and produce more blue light, while red stars are obviously rich in red wavelengths. Different wavelengths behave differently in the atmosphere and in the optics - even with mirrored systems, the Airy disk has a different diameter.

All of that creates different FWHM values across the image, but these differences are very small compared to the actual resolution. As we have seen, even a sampling rate change of x2 does not have a drastic impact on what we see in the image.

Small differences in FWHM, depending on where in the image we are looking and on the wavelength distribution of the source, don't have a drastic impact on resolution. No one ever said "look, half of my frame is blurrier than the other half" :D - well, maybe a Newtonian owner without a coma corrector might think that, but they won't say it out loud :D.

If you are doubtful about the FWHM approach, there is a simple way to test it. Take one of your images and measure its FWHM. Take a high resolution image of the same object from the Hubble Space Telescope. Scale the Hubble image so that it is at the same resolution as yours, and then apply a Gaussian blur to it with sigma equal to your FWHM / 2.355.

Compare the results visually and you should see about the same level of blur in both images (this is not strictly correct, as you'll be blurring a stretched image rather than linear data, but it is a good enough approximation).

But what happens in the case of those many telescopes which fail to control the short wavelengths all that well? Won't their performance in delivering a small FWHM be worse than their ability to resolve, say, fine detail in Ha?

Olly


34 minutes ago, ollypenrice said:

But what happens in the case of those many telescopes which fail to control the short wavelengths all that well? Won't their performance in delivering a small FWHM be worse than their ability to resolve, say, fine detail in Ha?

Olly

You are quite correct. In fact, there are several scenarios to consider there.

First, let me just say that star FWHM is a very good indicator of captured resolution in well corrected systems. This means stars that look pretty much the same on axis and in the far corners, and colour correction that is apochromatic. The system also needs to be diffraction limited across the range of wavelengths (which pretty much excludes achromats unless they are crazy slow).

When those conditions are not met, different things can happen. If one images with an Ha filter and such a scope, the star FWHM matches the resolution of the Ha nebulosity.

If one images with a luminance filter, some stars will have a larger FWHM than others, depending on star type. In general, red stars will have a FWHM that is a better match to the Ha resolution than blue stars. However, we assume perfect focus here. What if focus was done using FWHM as the measure on a hot blue star? Then Ha will be slightly out of focus and its resolution will suffer for it - the FWHM is then again a better match to the Ha resolution, although the FWHM suffers from CA and the Ha suffers from slight defocus.

In fact, an interesting experiment can be done - take any scope and an OSC sub and debayer it using some simple technique like super-pixel or channel splitting. Measure the FWHM of each channel for a single star - you will find they differ.
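
For anyone who wants to try it, the super-pixel split is just array slicing (a minimal sketch assuming an RGGB Bayer pattern - swap the slices for other orientations):

```python
import numpy as np

def superpixel_split(raw: np.ndarray):
    """Split an RGGB mosaic into R, G (average of the two greens) and B planes,
    each at half the raw resolution - one 'super pixel' per 2x2 Bayer cell."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    g  = (g1.astype(np.float64) + g2) / 2.0   # average the two green samples
    return r, g, b
```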

image.png.2a091deb3f7368c0238b75c5b6802409.png

image.png.40d30506b4424f1e001a259a8816d18a.png

image.png.1d7b5ccfe7960182f501a6525f52b663.png

It could be that I accidentally swapped blue/red, as I was not paying attention to the Bayer matrix orientation (green is easy - there are two of them, so you can't miss it). The scope is a TS 80 F/6 APO, and the sampling rate is around 2.6"/px (1.3"/px raw, but each channel uses every other pixel of the Bayer matrix).

The FWHM varies by 20%, but you can't really see that in the image. That is too small a variation between red and blue to be noticeable - sub-to-sub variation in FWHM might be as large as that.

And this is a well corrected scope: Strehl 0.95 in green and red, and over 0.8 in blue.


3 hours ago, vlaiv said:

Could you post a crop of both frames - just a piece, so it'll be a small upload - but with enough unclipped stars? I want to measure both with AstroImageJ.

I've done a similar comparison with actual data and I'd be happy to do it again for you if you wish. The point is in the numbers rather than the text - the difference between two subs when we know that resolution has been limited by blur of a certain FWHM.

I tried to match the crops of those particular images to the same area. They were raw subs without alignment.

L_Ha_2020-01-06_23-40-59_Bin1x1_1200s__-40C_IC410-crop.fit L_OII_2020-01-12_20-27-54_Bin2x2_600s__-40C_IC 410-crop.fit


14 minutes ago, Datalord said:

I suspected that PI got silly results - and indeed it seems to be the case.

I'm rather reluctant to speak against PI because it will look like I'm bad mouthing their developers, but on several occasions I found that their algorithms and explanations don't make complete sense (photometric color calibration, their explanation for drizzle, their SNR estimation, ...).

In any case, here is what AstroImageJ reports for some stars in the Ha image:

image.png.8400f0e647b9fe24b9d3ec5281c90b33.png

5.81, 5.83, 5.72, 5.75, 5.86 - let's call that an average of 5.8 px, which at a resolution of 0.52"/px gives a FWHM of 3.016", or about 3". You should in fact bin x4 rather than x2, as that gives you 2.08"/px, which is closer to the real resolution of 1.885"/px that this frame supports.

image.png.523c9980bc73e5b1fcae0c5ebe314ecf.png

The OIII sub has values of 3.83, 3.85, 3.89, 3.5, 3.82, 3.7, 3.76 - I would again say the average is somewhere around 3.75 px (maybe 3.8). This time we are sampling at 1.04"/px, so the resulting FWHM is about 3.9".

It is to be expected that 500 nm will be worse than 656 nm due to the atmosphere. The ideal sampling rate here is again around 2"/px, as 3.9 / 1.6 = 2.4375 (actually it would be bin x5, which is closer at 2.6"/px).

 


4 minutes ago, Datalord said:

So, what you're saying is that I would get the same quality image if I shoot at bin 4 and upsample?

Here is the comparison I did with text earlier - only this time applied to the OIII sub that you supplied:

image.png.e04e96cec8b9591a69b9b63794a04e6a.png

I used simple bicubic interpolation to scale up (nothing fancy - it was easier to control the exact position).

This is the stretched difference and its histogram (the display range is -0.003 to 0.003 and the image was scaled to the 0-1 range):

image.png.e635ba565c93f76b4c9dedb3445212ef.png

image.png.8275f04facde9d981c21b81b61fd379c.png

Not much difference between these two images.

I would bin x4 and leave it as is - upsampling will produce the same result we discussed above: empty resolution. You can bin x2 everything and then decide at the end whether to bin another x2 in software or, if you have really good SNR, do good and careful sharpening of your data.
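
If you want to reproduce that check on your own subs, it is only a few lines (a rough sketch with numpy/scipy - `sub` is assumed to be a linear 2-D sub already loaded as an array):

```python
import numpy as np
from scipy.ndimage import zoom

def bin_average(img: np.ndarray, factor: int) -> np.ndarray:
    """Software bin by averaging factor x factor blocks (any remainder is cropped)."""
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def bin_upsample_residual(sub: np.ndarray, factor: int = 2) -> np.ndarray:
    """Bin, interpolate back up to the original size, and return the difference image."""
    binned = bin_average(sub, factor)
    restored = zoom(binned, factor, order=3)   # cubic interpolation back up
    h, w = restored.shape
    return sub[:h, :w] - restored
```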

 


30 minutes ago, vlaiv said:

I would bin x4 and leave it as is

This makes no sense to me. Don't get me wrong, if I could get the same result by using 16x less time, I'm going to be all over it. But I can't. And it's pretty easy to see why.

image.png.4671b4bc2833951d681c3d7a3e9a5a1a.png

That's a close-up of the brightest star in the Ha image. If I were to bin x4 on that, I would have a star covering just these 9 pixels:

image.png.d9fbeff02d807900eb8e73027dc77241.png

You're saying I wouldn't lose information in my images by doing this?

Btw, I sent my entire set of images through CCDInspector to measure the FWHM. These are the results it gave:

image.png.de57889abcabddd71b55e008aca65ea4.png

Finally, this is the actual image I made with these frames. I really want to see the process by which I can achieve this result, at this same "empty resolution", with 16 times less imaging. Mind you, I used 19 hours of imaging time, so if I can achieve this result in 1½ hours, I'm SO game!

IC410.thumb.jpg.1a85bf5e6088806ab5a6e4c8af712dd9.jpg


37 minutes ago, Datalord said:

Btw, I sent my entire set of images through CCDInspector to measure the FWHM. These are the results it gave:

That is very interesting.

Here is the profile of the brightest star in the Ha crop that you attached above:

image.png.6c6d49c9284b2d21d648af8b8b416f48.png

A simple, not very precise measurement puts the left side of the half-height at about 12 and the right side at about 18 - a difference of 6 px - the same as the AstroImageJ measurement above, which gave around 5.8 px as the average FWHM for this particular crop.

42 minutes ago, Datalord said:

This makes no sense to me. Don't get me wrong, if I could get the same result by using 16x less time, I'm going to be all over it. But I can't. And it's pretty easy to see why.

Ok, I understand that it is hard to see, and I can offer three ways of explaining it. In fact, I'll do the first two right away:

1. I'll show you

2. I'll provide you with an analogy

3. If you want, we can go through the complete math behind it so you can see that it is correct (or maybe we can spot the error if there is one)

1. Let's do a demo:

image.png.569e5dca7210929bb2171cd0c0850e8b.png

This is the brightest star in that Ha crop, enlarged x10 with nearest-neighbour resampling (which is not really resampling, but rather just drawing large squares instead of sample points - this is often the reason people think pixels are little squares; the other reason is that camera pixels are indeed almost square - not quite, but almost).

Indeed, when we bin 4x4 we get this:

image.png.dce8ea5744328dc42c5a53be3d7d4aad.png

Almost only 3x3 pixels (not quite - some surrounding pixels have some value as well).

Look what we get when we enlarge this little star back up:

image.png.1dcbf3720c13105ff0584f9743122fdc.png

We get that same star, just minus the noise. Look how big it is - the left one is 14 px across and the right one is 14 px across - the same diameter.

Look what happens when I add some random Gaussian noise to the right image:

image.png.05e0807d7b05fb2fee2526d894831a39.png

Yes - the star on the right was actually binned x4, and it now looks the same once we enlarge it and add back some of the noise that was removed when we binned x4 (a x4 SNR improvement).
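
Here is a toy version of that demo (a sketch with a synthetic star - the size, FWHM and noise level are made up, not taken from the real sub):

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(1)

def bin_mean(img, f):
    """Average f x f blocks (software binning)."""
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# A smooth Gaussian star profile plus noise, like the crop above
y, x = np.mgrid[:32, :32]
star = np.exp(-((x - 16.0) ** 2 + (y - 16.0) ** 2) / (2 * 2.5 ** 2))
star += rng.normal(0, 0.05, star.shape)

binned = bin_mean(star, 4)                       # 8 x 8, noise reduced ~x4
restored = zoom(binned, 4, order=0)              # enlarge back, nearest neighbour
restored += rng.normal(0, 0.05, restored.shape)  # add noise back: looks like the original
```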

2. Let me try an analogy.

Take a regular linear function - just a straight line. You can take 20 points on it and record all 20, but you don't need all 20 of them to describe that straight line exactly - you need only two. You can throw away 18 points and you won't lose any precision - you'll still be able to draw that exact straight line. You won't have all the points stored, but you'll be able to calculate and draw any point on it.

A line is highly regular - it has no bumps, no detail, it is just straight - and for that reason it takes only 2 points to know every point on it.

Let's go a bit more complex and take a quadratic function. Here you can no longer know the function from only 2 points. You can still record 50 or 100 points, but you don't need all of them - you in fact need only 3 points to record a quadratic perfectly. You can throw the other points away.

Now we have introduced a bit of curvature - we no longer have a straight line - we can have one bump.

It turns out that for any polynomial there is an exact number of points you need to record that function. The higher the polynomial degree, the more points you need, but also the more "intricate" (or wavy) the function is. There is a relationship between the smoothness (or waviness, if there is such a word) of a function and the number of points needed to record it.

This happens with images as well - particularly with blurred images. Blur adds a certain level of smoothness to an image, and an image can be viewed as a 2D function where pixel intensity is the value of that function at the (X, Y) pixel coordinates. A pixel becomes a single point in a 2D plane (rather than being a square). There is a similar relationship to the one above with polynomials - depending on how smooth the 2D function is, you need a certain number of points to record it. The smoother it is, the fewer points you need.

The FWHM of a star tells you how smooth the image is - it shows you the blur kernel that has been applied to the image (more or less - see the discussion above on how well the star PSF describes the blur of the image). It tells you how smooth the function is, and from that you can determine how many points you need to record the image.

The last part of the analogy: if you have 2 points on a line, you can know every other point on that line. Likewise, if you sample at a certain resolution, you can reconstruct the image at a higher resolution - you'll know the value of the smooth function at the higher resolution - but it will not contain any extra detail, because the detail was not there in the first place when you recorded it: the blur smoothed out detail on that scale.

3. I won't go into the complete math here unless you want me to, but it requires a bit of the Nyquist theorem, a bit of Fourier transforms and basic knowledge of the Gaussian distribution - and we get basically the same thing as with polynomials: a rule for how many sampling points (what sampling rate) we need to use on a 2D function with a certain smoothness (a band limited signal) to record it completely. We can also discuss noise and how it behaves in all of that.

 


8 minutes ago, vlaiv said:

Yes - the star on the right was actually binned x4, and it now looks the same once we enlarge it and add back some of the noise that was removed when we binned x4 (a x4 SNR improvement).

It's an interesting exercise, but those images are simply not comparable from a processing point of view, imho. The one that was binned, enlarged and had noise added is simply noisier. It is a harder image to process. For that individual star, yes, I can see how it makes sense, but when you start processing nebula or galaxy dust, that noise will be a nightmare to deal with. It is simply not the same. Ok, we can create a star field from a 100x100 image, sprinkle a few pixels, enlarge it, add noise, great. That is not detail.

Have you actually done this on an image? Do you have some real data from such upsampling that results in a final image of reasonable resolution?


4 minutes ago, Datalord said:

It's an interesting exercise, but those images are simply not comparable from a processing point of view, imho. The one that was binned, enlarged and had noise added is simply noisier. It is a harder image to process. For that individual star, yes, I can see how it makes sense, but when you start processing nebula or galaxy dust, that noise will be a nightmare to deal with. It is simply not the same. Ok, we can create a star field from a 100x100 image, sprinkle a few pixels, enlarge it, add noise, great. That is not detail.

Have you actually done this on an image? Do you have some real data from such upsampling that results in a final image of reasonable resolution?

I don't upsample images - I leave them at a resolution that is compatible with the level of detail that has been captured - or at least that is what I advocate people do.

What would be the point of upsampling? In one of my posts above I showed how easy it is for anyone viewing the image to upsample it for viewing - just hit CTRL + in the browser and hey presto, you get a larger image without additional detail. Why would you make that a processing step when anyone can do it themselves if they choose to?

If anything, I often bin data if it is over sampled - it improves SNR and it looks nicer. (Binning removes noise, btw - I added noise above just to show that it is the same thing. With the smooth star it is not obvious right away, since the original star is noisy and looks a bit different, and sometimes it is hard to tell what is detail and what is noise - so I added noise to show that the original image does not contain detail, it is just a smooth star profile plus noise.)

There is an additional bonus in all of this that we have not discussed. Even at the proper sampling rate, the data is still blurred. Because it is blurred we get to sample it at that rate, but we can then use frequency restoration techniques to sharpen the image further, show all the detail there is at that level, and make the image really sharp. In order to do this we need very good SNR, and one way to get that SNR is to bin in the first place - or at least not to over sample, as that spreads the signal and lowers the SNR.

Look at this image captured by Rodd and processed by me (only luminance):

Here is the original thread:

And here is my processing of the luminance (this is actually a crop of the central region):

image.png.7865e84f173b4033dacd5a67484b8a02.png

In my book this is the opposite of empty resolution - fully exploited resolution. This data is also binned x2. If you look at the other processing examples in that thread you will find full resolution versions of this image - look at them at 1:1 and you'll see "empty resolution".

This data had very high SNR (a lot of hours stacked) and that is what enabled me to sharpen things up to this level at this resolution.

I feel that we are again digressing into what looks nice vs what actually needs to be recorded in the first place. The math and examples above show that you don't need a high sampling rate if your FWHM is at a certain level. Whether you choose to present at high resolution is up to you - if you like it like that, larger and all, again up to you, and I'm certain there is no right or wrong there. However, there is a right sampling rate for a certain blur if you want to optimize things.

 

 

 


I need to see a real-life comparison, back to back, from start to finish, between data gathered at bin 1 and bin 4 (or 3) to be convinced of this. It's counter-intuitive and goes against what I have found when I process subs at bin 1 vs bin 2. I find it massively better to have bin 1 data for the final result.

Do you know of a real-life comparison somewhere?


8 hours ago, Datalord said:

I need to see a real-life comparison, back to back, from start to finish, between data gathered at bin 1 and bin 4 (or 3) to be convinced of this. It's counter-intuitive and goes against what I have found when I process subs at bin 1 vs bin 2. I find it massively better to have bin 1 data for the final result.

Do you know of a real-life comparison somewhere?

Like you, I'm a pragmatist. I need to take the process through from beginning to end. However, I'm sufficiently piqued by vlaiv's arguments to want to give it a go - especially since, at 0.9"PP, I'm easily oversampling on nights of indifferent seeing anyway.

An aspect of the argument which I found particularly convincing concerned sharpening. Take the core of M101 since I, too, processed Rodd's data and, by chance, another member's on M101 the following day. I then went back to my own for a reprocess, so three different captures. In none of these cases did a basic log stretch reveal much spiral structure right into the core. It was all pretty soft. However, in each case, sharpening revealed a spectacular level of spiral structure right into the core. Better still, the structures were the same in all three datasets and they agree with the Hubble image insofar as they go. This convinces me that the sharpening routines are revealing genuine information contained in the capture. Now for the crux...

Why was this particularly applicable to the core? Because it had the best SNR. The further out from the core, the less it was possible to sharpen as the SNR tailed off. (My own image was criticized on the French forum, perfectly reasonably, for being out of balance with itself in terms of sharpening, the core being sharper than the rest.) So might we actually do better to get more signal of lower resolution and to sharpen harder than to chase more resolution at an inferior SNR?

I'm up for an experiment.

Olly


9 hours ago, Datalord said:

I need to see a real-life comparison, back to back, from start to finish, between data gathered at bin 1 and bin 4 (or 3) to be convinced of this. It's counter-intuitive and goes against what I have found when I process subs at bin 1 vs bin 2. I find it massively better to have bin 1 data for the final result.

Do you know of a real-life comparison somewhere?

You have the data, and you have already processed it at bin x1 or bin x2 - just take those stacks and bin them until you get to about 2"/px, then process them again.

Software binning is available in PI as Integer Resample (hopefully it works ok) - just select the average method.

1 hour ago, ollypenrice said:

Like you, I'm a pragmatist. I need to take the process through from beginning to end. However, I'm sufficiently piqued by vlaiv's arguments to want to give it a go - especially since, at 0.9"PP, I'm easily oversampling on nights of indifferent seeing anyway.

An aspect of the argument which I found particularly convincing concerned sharpening. Take the core of M101 since I, too, processed Rodd's data and, by chance, another member's on M101 the following day. I then went back to my own for a reprocess, so three different captures. In none of these cases did a basic log stretch reveal much spiral structure right into the core. It was all pretty soft. However, in each case, sharpening revealed a spectacular level of spiral structure right into the core. Better still, the structures were the same in all three datasets and they agree with the Hubble image insofar as they go. This convinces me that the sharpening routines are revealing genuine information contained in the capture. Now for the crux...

Why was this particularly applicable to the core? Because it had the best SNR. The further out from the core, the less it was possible to sharpen as the SNR tailed off. (My own image was criticized on the French forum, perfectly reasonably, for being out of balance with itself in terms of sharpening, the core being sharper than the rest.) So might we actually do better to get more signal of lower resolution and to sharpen harder than to chase more resolution at an inferior SNR?

I'm up for an experiment.

Olly

This does warrant some additional theoretical explanation. I'll try to keep it to a minimum, but still provide a correct and understandable explanation of what is going on.

Blur is the convolution of a function with another function (we will call this other function the blur kernel) - this is included for correctness, and if you want you can look up convolution as a mathematical operation.

The blur kernel in our case is well approximated by a Gaussian shape - it is the point spread function and also the star profile (all three are connected and effectively the same in our case).

Convolution in the spatial domain is the same as multiplication in the frequency domain. In other words, if you blur with some kernel (convolution), you are in fact filtering (multiplying Fourier transforms - multiplication in the frequency domain).

This is the crucial step for understanding what is going on, since convolution is not easy to visualize and analyze, but multiplication is - we are used to it.

Another important bit of information is that the Fourier transform of a Gaussian profile is another Gaussian profile. This means that our blur is in fact a filter with a Gaussian shape. We now have the means to understand what happens to the data when it gets blurred. We just need to add some noise into the mix.

Here is a screenshot that will be important:

image.png.b36aabb8e68f9e94bd9b8acaf19eaed9.png

I had to do this for my own benefit - I was not sure that this would happen (I thought it would), so I needed to check; we want to be right in our explanation. The first image is just pure random (Gaussian) noise. The second image is the Fourier transform of the first, and the third is the second plotted as a 2D function.

I wanted to show that random noise is distributed equally across all frequencies.
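
You can reproduce that check in a few lines of Python (a rough sketch - the array size, the PSF sigma and the "low frequency" radius are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.arange(n) - n // 2
xx, yy = np.meshgrid(x, x)
low = np.hypot(xx, yy) < n // 8            # "low frequency" region of the centred spectrum

def spectrum(img):
    """Centred Fourier amplitude spectrum of an image."""
    return np.abs(np.fft.fftshift(np.fft.fft2(img)))

# White Gaussian noise: power is spread evenly over all spatial frequencies.
noise_spec = spectrum(rng.normal(size=(n, n)))

# Gaussian blur kernel (PSF): its transform is another Gaussian, a smooth low-pass filter.
psf_spec = spectrum(np.exp(-(xx**2 + yy**2) / (2 * 3.0**2)))

print(noise_spec[low].mean() / noise_spec[~low].mean())  # ~1: flat spectrum
print(psf_spec[low].mean() / psf_spec[~low].mean())      # >> 1: high frequencies suppressed
```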

image.png.42f905d11c237e3f17aa0721d486a646.png

Now we have this - and it explains all of the above, if we just analyze it right. The curve represents our filter response in the frequency domain - a Gaussian (the FT of the Gaussian blur kernel) - and the red represents noise distributed over the frequencies.

On the left of the X axis are low frequencies, on the right are high frequencies. At 0 this graph is 1, or 100%. As you go higher in frequency, the value of the graph falls and approaches 0. Remember, we are multiplying with this.

A number multiplied by 1 gives the same number.

A number multiplied by less than 1 - say 1/2 - gives a smaller number.

A number multiplied by 0 gives 0 (regardless of the original number - we lose its value).

At some point the multiplication factor gets very low - like 1%, or 0.01 as a number - which means that this frequency and all frequencies above it end up very small; in fact, at some point they fall below the noise floor on the graph. We don't need to consider those frequencies - there simply is no meaningful information there any more. It has been filtered out by the blur, and the noise is probably larger than the signal: the SNR at those frequencies is less than 1 (and progressively less as the frequencies get higher).

Now the Nyquist theorem comes in, which says you need to sample at twice the highest frequency you want to record. This is how blur size (FWHM) relates to sampling rate - simple as that. We take a Gaussian profile with that FWHM, Fourier transform it to get another Gaussian, and look at the frequency where that other Gaussian falls below some very small value - like 1% - since there is no point in trying to record frequencies higher than that.
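
As a numeric sketch (assuming a purely Gaussian MTF - exactly where you put the cutoff is a judgement call, and a cutoff of roughly 10% is what reproduces the FWHM / 1.6 rule of thumb used earlier):

```python
import math

def nyquist_pixel_scale(fwhm_arcsec, cutoff=0.1):
    """Pixel scale ("/px) that Nyquist-samples a Gaussian blur of the given FWHM,
    taking the highest useful frequency as the point where the Gaussian MTF
    drops to `cutoff` of its peak value."""
    sigma = fwhm_arcsec / (2 * math.sqrt(2 * math.log(2)))       # FWHM -> sigma
    nu_c = math.sqrt(math.log(1 / cutoff)) / (math.pi * sigma * math.sqrt(2))
    return 1 / (2 * nu_c)                                         # Nyquist: 2 samples per cycle

print(nyquist_pixel_scale(1.44, cutoff=0.1))   # ~0.9 "/px, i.e. roughly FWHM / 1.6
print(nyquist_pixel_scale(1.44, cutoff=0.01))  # ~0.63 "/px for a stricter 1% cutoff
```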

What does this have to do with SNR and sharpening? Let's take another look at the graph above:

image.png.e68eb6a6bc51da9662de66a4b89b1513.png

We decided on our sampling rate (the red vertical line - we record only frequencies below it, those to the left). An ideal filter would be the red box: every frequency up to our sampling frequency passes fully (multiplication by 1 gives back exactly the same number) and everything after that frequency is 0 - we simply don't care about higher frequencies, since we have already chosen our sampling rate. However, the blur in our image does not act like that. It falls gradually from 1 down to close to 0 - the blue line.

In the process it attenuates all the frequencies by multiplying them by some number less than 1. We lose all the information in the shaded part of the graph. Well, we don't actually lose it - it is still there, just attenuated, multiplied by a number less than 1 (the height of the blue curve). In order to restore the full information, we need to multiply each of these frequencies by the inverse of the number it was originally multiplied by.

This means that if a frequency was multiplied by 0.3, we need to multiply it by 1/0.3 (or in other words, divide it by 0.3 - the inverse operation) to get the original value back.

That is what sharpening does - and the Gaussian curve above explains why we can sharpen at all: our blur is not a simple frequency cut-off, it is a gradual filter that slowly attenuates higher frequencies, and we sharpen by restoring those frequencies up to the limit set by our sampling rate.

The last bit is to understand how noise is affected by this - just look at the previous filter image, the one that shows both the Gaussian and the noise. Each time we restore a certain frequency we push the blue line back up towards 1 - and we do the same to the red line; we push it up equally (as we make a frequency component larger, we do so along with its associated noise - we increase the noise at that frequency as well). This is why sharpening increases noise in the image, and that is the reason you can only sharpen if you have high enough SNR. If not, you will just amplify the noise.
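
To make that concrete, here is a naive frequency-restoration sketch (it assumes the blur really is the Gaussian discussed above; real tools use more careful deconvolution, such as regularised or Richardson-Lucy methods, but the principle is the same):

```python
import numpy as np

def gaussian_mtf(shape, fwhm_px):
    """Frequency response (MTF) of a Gaussian blur with the given FWHM in pixels."""
    sigma = fwhm_px / 2.355
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))

def restore(blurred, fwhm_px, floor=0.05):
    """Divide each frequency by the factor the blur multiplied it with, clipped
    at `floor` so frequencies already buried in noise are not boosted without
    bound - the clipping is exactly the SNR trade-off described above."""
    mtf = gaussian_mtf(blurred.shape, fwhm_px)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) / np.maximum(mtf, floor)))
```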

 


@vlaiv excellent description. Your ability to put these complex ideas across is very impressive; you clearly have a good grasp of the subject.

I do wonder if some of the differences seen by others are down to the way different software renders the image onto the monitor. Do you have any thoughts on this?

Regards Andrew 


24 minutes ago, andrew s said:

@vlaiv excellent description. Your ability to put these complex ideas across is very impressive; you clearly have a good grasp of the subject.

I do wonder if some of the differences seen by others are down to the way different software renders the image onto the monitor. Do you have any thoughts on this?

Regards Andrew

Thanks.

I'm not 100% sure what you are referring to as "differences seen by others"?

If you mean that they see, for example, 1.1"/px as containing less information than 0.9"/px (which could be true, depending on the actual blur, but is hardly visible to a human in an image), then I don't think it is due to software. I can't imagine what a piece of software could do to make it so.

On the other hand, I do have a candidate explanation for why people might think that over sampled images contain more information / detail than they actually do.

It is pretty much the same mechanism that causes us to see the man in the moon or shapes in the clouds. Our brain is good at spotting patterns - so good that it will see patterns where there are none. It will pick patterns out of the noise - something that resembles something else we have seen before.

Over sampled images are noisy and "more zoomed in". The brain just tries to make sense of that - since the image is more zoomed in, it expects to see more detail, even if there is none. Noise is often interpreted as sharpness and detail, because our brain wants to see detail - it expects it if the image is already zoomed in that much.

A smooth image with no noise will look artificial, while the same image with added noise will look better, even though no true detail has been added. The example above is a good one - the smooth star looks very zoomed in and artificial; add a bit of noise and it looks much more like the original star. Although the two noise patterns are random and nothing alike, our brain reads that randomness as similarity.

We often mistake noise for sharpness, because noise has those high frequency components - here is an example:

image.png.e22f9270a3970e4debea211850d96da2.png

Which image looks more blurred? It is the same image - the left one just had some random noise added.

