Planetary imaging - what do I need to know.....?


fwm891

So, time for a little test... I took an image of white noise from Wikipedia and made two copies of it: the first at twice the canvas size, with the original image centred on a black background (thus still at the original sampling); the second by resizing the original image with smart resize in Paint Shop Pro (thus oversampled by a factor of 2). These three images (original plus two copies) were then processed in Fiji using the FFTJ plug-in:

image.thumb.png.34bafc198d86c1ada4c30968d273999d.png

The top row shows the three images, the bottom row their frequency spectra. At the lower left I placed a copy of the left spectrum to indicate two dark regions that can be seen in all three spectra (is the original image truly white noise, then?).

Adding more canvas causes FFTJ to enlarge the output image and to stretch the frequency response over the whole canvas (the two dark regions move further apart), as a result of which the entire image is filled with data (and as a bonus we get a central cross due to the black background). Resizing the original image by 200% also causes FFTJ to enlarge the output canvas, but the frequency response remains at its original size (the two dark regions stay in their original places).
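The two cases can be sketched in numpy. This is an illustration under stated assumptions, not the original test: the noise image is generated rather than taken from Wikipedia, and PSP's smart resize is stood in for by a band-limited (Fourier zero-padding) upscale, which makes the oversampling exact. Padding the canvas spreads the spectrum over the whole output, while upscaling leaves the outer half of the spectrum empty:

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.random((128, 128))          # stand-in for the white-noise image

# Case 1: double the canvas, original centred on black (sampling unchanged)
padded = np.zeros((256, 256))
padded[64:192, 64:192] = noise

# Case 2: band-limited 2x upscale (a stand-in for smart resize):
# zero-pad the spectrum, so the result is oversampled by a factor of 2
def fourier_upscale(img, factor=2):
    n = img.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(img))
    big = np.zeros((n * factor, n * factor), dtype=complex)
    off = (n * factor - n) // 2
    big[off:off + n, off:off + n] = spec
    return np.real(np.fft.ifft2(np.fft.ifftshift(big))) * factor ** 2

upscaled = fourier_upscale(noise)

# Fraction of spectral amplitude outside the central half of the spectrum
def outer_fraction(img):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    n = spec.shape[0]
    q = n // 4
    return 1 - spec[q:3 * q, q:3 * q].sum() / spec.sum()

# padding fills the whole spectrum (plus the central cross from the black
# border), while the oversampled image keeps its spectrum in the centre
print(outer_fraction(padded), outer_fraction(upscaled))
```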

So oversampling is indeed detected by this method. That leaves the question of why the greyscale version of my Jupiter image gives such a different response compared to the green channel; I suspect it is due to all the processing steps I applied (LRGB combine in WinJupos, saturation in PSP and denoising in Topaz AI).

Nicolàs


24 minutes ago, inFINNity Deck said:

So oversampling is indeed detected by this method. That leaves the question of why the greyscale version of my Jupiter image gives such a different response compared to the green channel; I suspect it is due to all the processing steps I applied (LRGB combine in WinJupos, saturation in PSP and denoising in Topaz AI).

Yes, very nice analysis.

By the way, there is a regular FFT in ImageJ that has a very nice feature. It does not offer the flexibility of FFTJ, but it does provide a way to measure the frequencies in question very quickly.

It is located under the Process / FFT / FFT menu item.

Once executed (and it works even on RGB images), it creates a sort of hybrid image: one that displays the log of the intensity in the frequency domain, but keeps the phase as well, so it can perform the inverse FFT when you finish modifying it. It is displayed in 8-bit mode and heavily stretched. When you hover the cursor over parts of the image, it gives you information on the frequency:

image.png.b85aa095d6b7deea254e73e5899420cd.png

Here is the FFT of a crop of one of the images used in this discussion (I think it is the Tak image); I marked with a pointer where the cursor was when I took the screenshot (a screenshot won't capture the cursor).

Above we can see that the frequency, or rather the wavelength, is 5.17 pixels per cycle at that point.

One more note: this FFT function "cheats" a bit. It scales the image up to the nearest power of two and then performs the FFT, so it is very fast.
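The relation behind that cursor readout is simple: for an N x N spectrum with DC at the centre, the wavelength at a spectrum pixel is N divided by its distance from the centre. A small sketch of that relation (the function name and the example coordinates are illustrative, not from ImageJ):

```python
import numpy as np

def pixels_per_cycle(x, y, n=512):
    """Wavelength in pixels per cycle at spectrum position (x, y),
    for an n x n FFT with DC at the centre (n/2, n/2)."""
    r = np.hypot(x - n / 2, y - n / 2)   # distance from DC in frequency pixels
    return n / r                         # spatial wavelength of that frequency

# At the very edge of the spectrum (r = n/2) the wavelength is the
# Nyquist limit of 2 px/cycle:
print(pixels_per_cycle(512, 256, n=512))  # → 2.0
```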


What was left to test is how seeing affects the frequency spectrum. For this I took three images from the article in which Siebren van der Werf and I discuss the effect of seeing and aperture on the visibility of sunspots (and other small detail). The three images were produced for a C11 at f/10 with a 2.95 micron camera, to which a convolution of Gaussian seeing and Fraunhofer diffraction was applied (as per our article). From left to right they represent perfect seeing (i.e. no seeing, only Fraunhofer diffraction applied), 1" seeing and 2" seeing, with seeing defined as 2 sigma (some seeing measurement systems use 2.355 sigma, i.e. the FWHM).

image.thumb.png.03aef02ef310e7e523c601474e054996.png

As can be seen, and as expected, the seeing has a dramatic effect on the frequency distribution, roughly doubling the oversampling per arcsecond of seeing (I am surprised to see that the first image is already oversampled by a factor of 2, but that is most likely due to the image that was used as input).

Of course these represent single images as one would capture them under these circumstances; stacking would improve the result.
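The mechanism can be sketched in numpy (a generic illustration, not the images or convolution code from the article): a Gaussian seeing blur multiplies the spectrum by a Gaussian MTF, so the high frequencies are attenuated more and more as the seeing FWHM grows, which is exactly what shrinks the usable part of the spectrum:

```python
import numpy as np

def seeing_mtf(n, fwhm_px):
    """MTF of a Gaussian seeing blur with the given FWHM (in pixels):
    convolving with a Gaussian PSF multiplies the spectrum by a Gaussian."""
    sigma = fwhm_px / 2.355                       # FWHM = 2.355 sigma
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    return np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

rng = np.random.default_rng(1)
img = rng.random((256, 256))                      # test target: white noise

def high_freq_energy(img, mtf):
    """Spectral amplitude beyond half the Nyquist frequency after blurring."""
    spec = np.abs(np.fft.fft2(img) * mtf)
    f = np.hypot(*np.meshgrid(np.fft.fftfreq(256), np.fft.fftfreq(256),
                              indexing="ij"))
    return spec[f > 0.25].sum()

# worse seeing -> stronger attenuation of the high frequencies
e0 = high_freq_energy(img, seeing_mtf(256, 0.001))  # essentially no seeing
e1 = high_freq_energy(img, seeing_mtf(256, 2.0))    # moderate blur
e2 = high_freq_energy(img, seeing_mtf(256, 4.0))    # heavy blur
```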

Nicolàs


  • 3 weeks later...

Had a go using the normal FFT function and got this (loaded up my WinJupos derotation output OSC with no further post-processing/sharpening). The edge of the circle is 2.06 pixels per cycle.

 

Does this look right, @vlaiv? There seems to be a large concentration of low frequencies.

 

image.png.32b95cd66a85c3f0b86c92ec44fef0a6.png

 

If I try the same thing on the fully post-processed image (no rescaling occurred) I get this weirdness:

 

 

image.png.5f94c111c5bf60dd9150f869d243db17.png

 

 


5 hours ago, CraigT82 said:

Just having a go with the FFTJ plugin, but I think I'm doing something wrong... what settings do you use in this box?

The difference between FFTJ and the FFT plugin in ImageJ is how they handle the data. Both do the same thing (although FFTJ does it with greater precision), but with FFTJ you have a range of options for the output data.

You can choose the real and imaginary parts, or you can choose the frequency or power spectrum (the power spectrum being the square of the frequency spectrum) - these are linear.

You can also choose a logarithmic representation of the power or frequency spectrum.

The standard FFT is the logarithmic frequency spectrum scaled in a certain way (I am not sure what the scaling function is, but the minimum values hover around 90 while the maximum is 255). FFTJ leaves the actual floating-point numbers, and how they look will depend on how you set the white and black points.

image.png.bca9ba966191ecf9e5b1bab1aa1addf3.png

Here is an example - I took a screenshot of the image and did both FFTs on it, and the result is the same except for the white and black points.

4 hours ago, CraigT82 said:

If I try the same thing on the fully post-processed image (no rescaling occurred) I get this weirdness:

That is quite normal for a processed image.

I'll try to explain what is happening. It is the effect of sharpening.

The image from the telescope has this sort of signature:

image.png.9e795cfdb11e65734e072a6d1b5e9335.png

The curve represents a top / max / clamping function: no signal can go above it, and the actual signal is limited by that function (or multiplied by it).

An ideally sharp image would have this sort of shape (or clamping function):

image.png.e76dc068ec2e181171486c969f4bfbf3.png

This means that all frequencies remain the same - none are attenuated.

With sharpening, what we really do is this:

image.png.82a6b1299eb77e78c0852daaec9d097e.png

We try to boost each frequency back to its original, unattenuated value (by multiplying it by some factor, which is the inverse of the original MTF of the image).

The problem is that this MTF is unknown to us and depends on the combination of the telescope MTF and the seeing MTF (whatever seeing was left in the stack - we don't get all perfect frames in our stacks).

For this reason we try to guess the MTF in different ways - we move sliders in Registax, or guess the sigma of the Gaussian for the deconvolution kernel, and so on - until we are visually happy with the result.

The thing is, what satisfies us as a result might not be the actual inverse of the blur that was applied.

What this shows:

image.png.a5f40fc76f60a08469f083c71c94f996.png

is that your processing, instead of this:

image.png.82a6b1299eb77e78c0852daaec9d097e.png

resulted in something like this:

image.png.0162ac4e07bd34acc695964d902da394.png

You did not manage to restore all frequencies properly - a "gap" was left in there. Not your fault - it is just that the settings you applied when sharpening resulted in such a restoration.
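The under-restoration can be illustrated with a one-dimensional sketch: if the true blur is Gaussian with one sigma but the sharpening boost is built from the inverse of a Gaussian with a smaller sigma, the net MTF never climbs back to 1 away from zero frequency. The sigma values below are arbitrary illustrative choices, not measured from the image:

```python
import numpy as np

f = np.linspace(0, 0.5, 256)             # spatial frequency, cycles/pixel

def gaussian_mtf(sigma):
    # MTF of a Gaussian PSF with standard deviation sigma (pixels)
    return np.exp(-2 * (np.pi * sigma * f) ** 2)

true_blur = gaussian_mtf(2.0)            # what optics + seeing actually did
sharpening = 1 / gaussian_mtf(1.5)       # boost built from a wrong PSF guess

restored = true_blur * sharpening        # net MTF after "sharpening"
# restored == 1 at f = 0 but sags below 1 at higher frequencies: the "gap"
```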

 


34 minutes ago, vlaiv said:

You did not manage to restore all frequencies properly - a "gap" was left in there. Not your fault - it is just that the settings you applied when sharpening resulted in such a restoration.

Thanks Vlaiv, super informative as ever.
 

The individual stacked tiffs were all sharpened with wavelets and deconvolution before going into WJ…. And I guessed at the kernel size during the deconvolution step, exactly as you said.

Any ideas how I could properly restore all frequencies?! 

The sampling looks ok though, doesn't it? I'm not sure.


11 hours ago, CraigT82 said:

The sampling looks ok though, doesn't it? I'm not sure.

It does look good - 2.06 px/cycle is very close to 2 px/cycle.

Have you done the F-ratio vs pixel size maths on it to confirm the sampling rate?

It might be that the inner circle is the actual data limit and the outer circle is some sort of stacking artefact.

11 hours ago, CraigT82 said:

Any ideas how I could properly restore all frequencies?! 

It is always guesswork.

The resulting MTF of the image is a combination of the telescope MTF and the residual seeing from all the little seeing distortions combined when stacking subs. It will change whenever we change the subs that went into the stack, and we hope that they average out to something close to a Gaussian (the maths says that in the limit, whatever the seeing is, stacking will tend to a Gaussian shape). However, we don't know the sigma/FWHM of that Gaussian, and that is part of the guesswork.

Different algorithms approach this in different ways.

Wavelets do a detail decomposition into several images. There is a GIMP plugin that does this - it decomposes the image into several layers, each layer containing an image composed of a set of frequencies.

In a graph it would look something like this (the theoretical exact case - I don't think wavelets manage to decompose perfectly):

image.png.45579352b57a87acbb7e688362626690.png

So we get six images, each consisting of a certain set of frequencies.

Then when you sharpen - or move a slider in Registax - you multiply that segment by some constant, and you aim to get that constant just right, ending up with something like this:

image.png.650f852ce0520021c3b57b28826ab1a8.png

(each part of the original curve is raised back to some position, hopefully close to where it should be) - the more layers there are, the better the restoration, i.e. the closer to the original curve.

This is just a simplified explanation - the curves don't really look straight. For example, in Gaussian wavelets the decomposition is done with a Gaussian kernel, so those boxes actually look something like this:

image.png.c6ff31f314466f4f0b9059df02c7dc22.png

(yes, those squiggly lines are supposed to be Gaussian bell shapes :D ).
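A minimal sketch of this kind of layered decomposition (an à-trous-style difference-of-Gaussians split, not the exact Registax or GIMP algorithm): the layers are differences between progressively blurred copies, recombining with per-layer gains of 1.0 reproduces the input exactly, and sharpening amounts to raising the gains of the fine layers:

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """Blur via multiplication with a Gaussian MTF in the frequency domain."""
    n = img.shape[0]
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    mtf = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))

def decompose(img, sigmas=(1, 2, 4, 8)):
    """Split img into frequency-band layers plus a low-pass residual."""
    layers, prev = [], img
    for s in sigmas:
        blurred = gaussian_blur_fft(img, s)
        layers.append(prev - blurred)     # band between two blur scales
        prev = blurred
    layers.append(prev)                   # residual low-pass layer
    return layers

def recombine(layers, gains):
    """Weighted sum of layers; gains > 1 on the fine layers = sharpening."""
    return sum(g * l for g, l in zip(gains, layers))

rng = np.random.default_rng(3)
img = rng.random((64, 64))
layers = decompose(img)
flat = recombine(layers, [1.0] * len(layers))   # unity gains: telescoping sum
```

With unity gains the sum telescopes back to the original image, which is why moving no sliders changes nothing; the per-band gains are the sliders.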

Deconvolution, on the other hand, handles things differently. In fact there is no single deconvolution algorithm, and basic deconvolution (which might well be the best for this application) is simply division.

There is a mathematical relationship between the spatial and frequency domains which says that convolution in one domain is multiplication in the other (and vice versa).

So convolution in the spatial domain is multiplication in the frequency domain, and therefore one could simply generate the above curve by some means, use it to divide the Fourier transform (division being the inverse of multiplication), and then do the inverse Fourier transform.
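A sketch of that division approach, with one addition: plain division blows up noise wherever the MTF is near zero, so a small regularisation term eps is included below (an assumption on my part, essentially a crude Wiener filter rather than pure division):

```python
import numpy as np

def divide_deconvolve(img, mtf, eps=1e-4):
    """Frequency-domain deconvolution: divide the spectrum by the (guessed)
    MTF, with a small eps so near-zero MTF values don't amplify noise."""
    spec = np.fft.fft2(img)
    restored = spec * np.conj(mtf) / (np.abs(mtf) ** 2 + eps)
    return np.real(np.fft.ifft2(restored))

# Round trip: blur with a known mild Gaussian MTF, then divide it back out
n = 64
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
mtf = np.exp(-2 * (np.pi * 0.5) ** 2 * (fx ** 2 + fy ** 2))

rng = np.random.default_rng(7)
orig = rng.random((n, n))
blurred = np.real(np.fft.ifft2(np.fft.fft2(orig) * mtf))
recovered = divide_deconvolve(blurred, mtf)
```

This works well here only because the MTF used for division is exactly the one that did the blurring; with a guessed MTF the mismatch shows up as the gap discussed earlier.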

Other deconvolution algorithms try to deal with the problem in the spatial domain. They examine the PSF and try to reconstruct the image from the blurred image and the guessed PSF - they answer the question: what would the image need to look like so that, blurred by this PSF, it produces this result? They often use a probabilistic approach because of the random noise involved in the process: they ask which image has the highest probability of being the solution, given these starting conditions, and then use clever maths to solve that.

 


8 hours ago, vlaiv said:

It does look good - 2.06 px/cycle is very close to 2 px/cycle ...

 

Thanks! I'm not going to pretend I understood much of that, but it has given me some direction for further reading. Thanks for taking the time to respond 😊

I have looked at the planet size in the image and back-calculated the focal ratio, and it came out at F/17, so I thought I was actually oversampled with my 2.9um pixels.

Edited by CraigT82
Link to comment
Share on other sites

46 minutes ago, CraigT82 said:

I have looked at the planet size in the image and back-calculated the focal ratio, and it came out at F/17, so I thought I was actually oversampled with my 2.9um pixels.

For 2.9um pixels the optimum sampling is at F/14.5, so yes, it is a bit oversampled.

It is oversampled by a factor of 17/14.5 = ~1.1724, so the signal should end at ~2.34 px/cycle,

or about here:

image.png.cc5446f2907e3ca2a35b74ec365fd07f.png

You can verify that by looking at the pixels-per-cycle readout in that FFT in ImageJ (alternatively, it will be at 1024 / 2.34 = ~438 px away from the centre of the image).

Ok, so it is that outer ring after all, and not the inner one. The inner ring must be a sharpening artefact.
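The arithmetic above follows from critical sampling at pixel = λF/2. Taking λ = 0.4 µm is an assumption on my part (the blue end of the band), but it reproduces the F/14.5 figure quoted for 2.9 µm pixels:

```python
def optimum_fratio(pixel_um, wavelength_um=0.4):
    """F-ratio at which the pixel pitch critically samples the optics:
    pixel = lambda * F / 2."""
    return 2 * pixel_um / wavelength_um

def signal_cutoff_px_per_cycle(actual_f, pixel_um, wavelength_um=0.4):
    """Wavelength (px/cycle) where the signal should end in the FFT:
    the 2 px/cycle Nyquist limit times the oversampling factor."""
    return 2 * actual_f / optimum_fratio(pixel_um, wavelength_um)

print(optimum_fratio(2.9))                          # ≈ 14.5
print(signal_cutoff_px_per_cycle(17, 2.9))          # ≈ 2.34
print(1024 / signal_cutoff_px_per_cycle(17, 2.9))   # ≈ 437 px from centre
```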

 

 
