
Flatteners/coma correctors in software


Robindonne


It is possible, but you won't like the results.

The problem is that the math is there for pure signal, but our images are never pure signal - in fact SNR (signal to noise ratio) is something we deal with on a regular basis. Mathematically correcting for defocus, coma or other aberrations that can be represented by Zernike polynomials comes down to a process called deconvolution (with a changing PSF).

One part of the problem is that the PSF (point spread function) is not known unless an exact model of the optics is established. It can be approximated by examining stars in the image (which are in fact the PSF of the system, since stars are point sources) and by mathematical approximation (coma depends on distance from the center, the tail always points away from the center, all that stuff). The other, more important problem is that noise is random and therefore not subject to the PSF in the classical sense. It is either related to light intensity (shot noise) or not (read noise and dark current noise), and we can't include it in the "restoration process" - yet it is embedded in the image and can't be separated out, otherwise we would have perfect noise-free images.

When we try to restore the original image by deconvolution, the noise - which was not convolved in the first place - undergoes the reverse operation, and that just makes things much worse.

If you want to see that in action, take a blurry image that has a bit of noise in it and then sharpen it. You will see that sharpening really brings the noise up. The same thing happens when you do deconvolution (which is just a fancy word for the sort of sharpening we are talking about) - the noise will be blown up and will become non-random (and rather ugly looking).
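You can sketch this effect in a few lines of numpy (the image, noise level and unsharp-mask strength below are made-up illustration values):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# A smooth "signal" plus a little noise, as in a real sub-exposure.
x = np.linspace(0, 1, 256)
signal = gaussian_filter(np.outer(x, x), sigma=8)
noisy = signal + rng.normal(0.0, 0.01, signal.shape)

# Unsharp mask - a simple form of sharpening.
sharpened = noisy + 3.0 * (noisy - gaussian_filter(noisy, sigma=2))

# Noise level (std of the residual from the smooth signal) before and after.
noise_before = np.std(noisy - signal)
noise_after = np.std(sharpened - signal)
print(noise_after > 2 * noise_before)  # sharpening clearly amplified the noise
```

The unsharp mask just stands in for any sharpening here; deconvolution amplifies noise in the same way.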

If one has high enough SNR in the image, then coma correction and field flattening in software is actually feasible. When processing images, people use deconvolution to sharpen the parts of the image where the signal is strong, and it works.


Okok, I can only agree with you 😅

I didn't think of all the other problems. I just thought that without a flattener you get an increasing stretch away from the center. What a coma corrector does I can't really tell - I don't know in which directions the image is deformed. I thought it just increasingly shrinks the image until the stars are round. I have too little experience. But thanks for explaining it so well - I'm going to read some more about what you wrote.


40 minutes ago, Robindonne said:

Okok, I can only agree with you 😅

I didn't think of all the other problems. I just thought that without a flattener you get an increasing stretch away from the center. What a coma corrector does I can't really tell - I don't know in which directions the image is deformed. I thought it just increasingly shrinks the image until the stars are round. I have too little experience. But thanks for explaining it so well - I'm going to read some more about what you wrote.

What you are suggesting is related to geometric distortion of the image, not field flattening.

It is part of the stacking routine when one works with a short focal length instrument, so it is easily done and is usually performed by the software without user intervention. The celestial sphere is, well, a sphere, and we are trying to map part of that sphere onto a flat surface - a 2D image. If the focal length is long there is almost no problem, because the distortion is very small.

When the focal length is short we get large distortion, due to the type of projection used by the lens. Take for example an all-sky camera lens, which is a sort of fisheye lens. It produces images like these:

image.png.d808b052e6453ac570c915fdbeb49b80.png

That is a very distorted image, and if you tried to match, for example, a triangle of stars in it to the actual triangle in the sky, you would see it is very distorted (the angles have changed).

Maybe a better example is this:

image.png.8bd7a9493a6acccda877f1e53d41ef20.png

No, those walls are not bent - they are straight walls. It is just the projection, when the scene is placed onto the 2D surface of the image, that shows them as bent. Btw, observe the central part of the image - it is almost undistorted. That is the same as using a long focal length: the longer the focal length of the instrument, the less distortion.

Field curvature is something else, and coma too. These are optical aberrations that affect each single point rather than the geometry of the image. This is why we can tell from a star's shape whether it has been affected by coma or field curvature (defocus, really) - but we could not tell the above geometric distortion from a single star image, as it would still be a single point (maybe displaced, but still just a point).

Field curvature is really defocus that depends on distance from the optical axis. Coma is a bit different, but both are blur rather than geometric distortion.

Here is an example of field curvature:

image.png.402830affd29fbc8a3b58b2f73711f4d.png

It is not about the slight curving of the straight lines (that again is geometric distortion) - it is the blurring of the lines further away from the center, as if they were out of focus. In fact, that is what field curvature is: out-of-focus outer parts of the image. It happens because the surface of best focus is not flat like the imaging sensor, but curved like this:

image.png.7a2a94597ac3a08f1fb62ff5e6781a67.png

Either your center is in focus (more often) and the edges are out of focus, or the center is out of focus and the edges are in focus - but they can't both be in focus at the same time.

Btw, look what happens when you try to deconvolve a noisy image vs the original noise-free image.

Here is a base sharp image that we are going to use in this example:

image.png.89cc875d5da70963efeb38184608a1d6.png

Here is the blur kernel and the convolved (blurred) image:

image.png.59297e3320ad2ac225c08a4ee1e07117.png

The blur is just the PSF - which would be a coma-shaped blur in the coma case, or a simple round disk in the defocus case (each can be calculated from the aperture image and Zernike polynomials with a Fourier transform).
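As a rough sketch of that aperture-to-PSF calculation (grid size and defocus amount are arbitrary illustration values): build the pupil function, add a Zernike defocus phase term, and the PSF is the squared magnitude of its Fourier transform.

```python
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x**2 + y**2

# Circular pupil (aperture) of radius 0.5 on the grid.
aperture = (r2 <= 0.25).astype(float)

# Zernike defocus term 2*rho^2 - 1 over the pupil (rho = r / pupil radius).
rho2 = np.where(aperture > 0, r2 / 0.25, 0.0)
defocus_waves = 2.0  # amount of defocus, illustration value
phase = 2 * np.pi * defocus_waves * (2 * rho2 - 1)

# PSF = |Fourier transform of the complex pupil function|^2, normalised.
pupil = aperture * np.exp(1j * phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()
```

With `defocus_waves = 0` the same code gives the in-focus (Airy-like) PSF; the defocused PSF is broader with a lower peak.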

Now let's look at the result of deconvolution:

image.png.a33cf9af58edd5845e72794b7fe4a0ec.png

The algorithm used here is naive inverse filtering, which is just division in the frequency domain (look up the Fourier transform and the convolution / multiplication duality).

Pretty good result - if we know the blur kernel / PSF, we can get a pretty good sharp image back from the blurred version.
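The round trip can be sketched in a few lines of numpy: convolution is multiplication of spectra, and naive inverse filtering is division of spectra (image and kernel below are arbitrary; note there is no noise yet):

```python
import numpy as np

rng = np.random.default_rng(1)

# A "sharp" test image and a blur kernel (PSF) padded to the image size.
image = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[:5, :5] = 1.0 / 25.0  # 5x5 box blur as the PSF

# Convolution is multiplication in the frequency domain.
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

# Naive inverse filter: divide by the kernel's spectrum.
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / np.fft.fft2(psf)))
print(np.max(np.abs(restored - image)))  # essentially zero without noise
```

This only works so cleanly because the division undoes the multiplication exactly; the moment noise enters, the division blows it up wherever the kernel's spectrum is small.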

But look what happens if I add some noise into the mix:

image.png.b2d521081918578d14a70aa57c3c0723.png

Here I added signal-dependent Poisson noise and additive Gaussian noise, simulating shot noise from the target plus read noise from the camera (we could play around and add LP noise and thermal noise, but it really does not matter - this will be enough for the example).
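For reference, a minimal sketch of that noise model (signal levels and read noise figure are made up): shot noise is drawn from a Poisson distribution whose mean is the signal, and read noise is additive Gaussian.

```python
import numpy as np

rng = np.random.default_rng(2)

# A simple "target" signal in electrons.
signal = np.full((128, 128), 200.0)
signal[32:96, 32:96] = 1500.0  # bright patch on a dim background

# Shot noise is Poisson (depends on the signal); read noise is additive Gaussian.
read_noise_e = 5.0  # assumed read noise in electrons
observed = rng.poisson(signal) + rng.normal(0.0, read_noise_e, signal.shape)

# Shot noise grows as sqrt(signal): the bright patch is noisier in absolute terms.
dim_std = np.std(observed[:16, :16])
bright_std = np.std(observed[48:80, 48:80])
print(bright_std > dim_std)  # True
```

Because shot noise scales as the square root of the signal, the bright patch ends up noisier in absolute terms - but its SNR is still higher, which is exactly why deconvolution only works well where the signal is strong.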

Here is the restoration by naive inverse filtering:

image.png.05ef5d8e91c53dbc4a18fa85e2c5c7cb.png

Doesn't look anywhere near as good as what we were hoping for, right?

Luckily there are much better / more advanced algorithms that can deal with noise in better ways - for example Lucy-Richardson deconvolution (often used in astronomy applications):

image.png.1316c2b3cb43b91837fcd5c9e7b40e42.png

Much better, but still not as nice and sharp as the noise-free example from the beginning. There are even better algorithms, like regularized LR deconvolution (LR with total variation regularization):

image.png.8f90f482e91690d0c3d9bf0112478cc3.png
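The basic (unregularized) Lucy-Richardson iteration is short enough to sketch directly: repeatedly multiply the estimate by the PSF-correlated ratio of the observed image to the re-blurred estimate. The scene, PSF and iteration count below are made up for illustration:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Basic Lucy-Richardson deconvolution (no regularisation)."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

rng = np.random.default_rng(3)

# Sharp scene: a few "stars" on a dark background.
scene = np.zeros((64, 64))
scene[20, 20] = scene[40, 30] = scene[30, 50] = 1000.0

# Gaussian PSF and a blurred, shot-noise-limited observation.
g = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
blurred = fftconvolve(scene, psf, mode="same")
observed = rng.poisson(np.maximum(blurred, 0.0)).astype(float)

# Deconvolution re-concentrates the star light into sharper peaks.
restored = richardson_lucy(observed, psf)
```

Unlike the naive inverse filter, this iteration never divides by a near-zero spectrum, which is why it degrades gracefully in the presence of noise.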

Keep in mind that these are synthetic examples and I have used a constant blur kernel. With the above approach one would need to use a changing kernel, and real-world results will be worse.

It can be done with high enough SNR or very specific algorithms and approaches, but in reality it is far, far simpler to purchase a suitable coma corrector or field flattener and use that instead.

 


Wow, what an explanation - and a very helpful one. Thx a lot, bit by bit learning more. I can't do anything other than buy that coma corrector. I already had a flattener, and while checking if it was still clean I began to think about what it actually does other than flattening. I thought it must "shrink" the view increasingly away from the center, and never thought of that area being slightly out of focus. I do wonder how much an "out of focus" star will look "in focus" when it's squeezed back to its original round shape. I'm going to read your text again, and maybe again. Thx a lot.


You've taught me a lot of terms and explained everything very well. I don't know if I will ever come across an image that's corrected in this exact way. Do you know if, besides the "sharpening" with this method, this software also applies the correction in a way that (don't know the exact translation) increases from the center?

So let's say M13 is a perfectly round star cluster and is imaged slightly oval and out of focus because it was in the curved area while being photographed. Does this method sharpen the individual stars and leave the oval shape? Or does it reduce the stretch of the whole image back towards the center and end up with a perfectly round cluster of perfectly round stars?

 


8 hours ago, Robindonne said:

You've taught me a lot of terms and explained everything very well. I don't know if I will ever come across an image that's corrected in this exact way. Do you know if, besides the "sharpening" with this method, this software also applies the correction in a way that (don't know the exact translation) increases from the center?

So let's say M13 is a perfectly round star cluster and is imaged slightly oval and out of focus because it was in the curved area while being photographed. Does this method sharpen the individual stars and leave the oval shape? Or does it reduce the stretch of the whole image back towards the center and end up with a perfectly round cluster of perfectly round stars?

 

Deconvolution (sharpening) is a very difficult process, while geometric distortion correction is fairly easy. I'll explain why in a minute.

The two are handled by different algorithms, of course, and geometric distortion correction is readily available in software. For example, in Gimp there is a lens distortion plugin:

image.png.82ea065f52170c1370eb7458d1803deb.png

That can easily reverse geometric distortion introduced by a lens / telescope.

The difference between the two operations can be explained like this. Imagine you have a very large table and some glasses of water on it. Geometric distortion is just moving the glasses around - changing their positions. It is rather easy to correct for that: just move the glasses back to where they were (you need a way of figuring out how they were moved, but that is easy - you know the positions of the stars, or you can take an image of a grid like the one above to see how the lens distorts the image).
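A tiny numpy sketch of that point (the shear-like distortion here is arbitrary): pure geometric distortion only moves pixel values around, so it can be undone exactly.

```python
import numpy as np

rng = np.random.default_rng(4)
image = rng.random((32, 32))

# "Moving the glasses": shift each row by an amount that depends on the row
# (a simple shear-like geometric distortion - pixel values only change place).
shifts = np.arange(32) % 5
distorted = np.stack([np.roll(row, s) for row, s in zip(image, shifts)])

# Correcting it is just moving every pixel back where it was.
corrected = np.stack([np.roll(row, -s) for row, s in zip(distorted, shifts)])
print(np.array_equal(corrected, image))  # True - nothing was lost
```

Blur has no such exact inverse, because it mixes pixel values together instead of merely relocating them.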

Blurring due to coma or field curvature does something different: it takes a bit of water from each glass and transfers it to other glasses - it mixes the water between the glasses - and now you need to "unmix" it back. That is a much more difficult problem than geometric distortion, even if you take test shots.

Btw - moving the glasses is moving pixel values around, and mixing the water is changing pixel values in exactly the same way: you take some of the value of one pixel, spread it around and add it to the values of other pixels, and you do that for every pixel in the image. That is what blurring does.

Deconvolution is not that common in software, and where it is implemented it is usually implemented for a constant kernel - which is useful for regular blur, like defocus blur, seeing blur or motion blur, where all the pixels in the image are affected in the same way.

Coma blur and field curvature blur add another level of complexity: they change how "the water is spread around" depending on which pixel we are talking about. In the center of the image there is almost no coma and no field curvature (in fact there is none), and the amount grows as we move away from the optical axis. It is very difficult to model and calculate how to get the image back under these changing circumstances.


3 minutes ago, Robindonne said:

I tried it with an image with a couple of stars - blurred it and tried to refocus it, but never got it back as sharp. But thx again, I will go and play with the info.

What software did you use? Do you have the image of the blurred star for me to try to deconvolve?


Well, it was not really representative - while at work I tried it on my phone, online.

I started with a basic three white dots on a black background, blurred it, and reopened the blurred file with their software to sharpen it. But after doing maximum sharpening over the image four times, I couldn't see much difference compared to the blurred version. I realized it was indeed not how it went in my imagination yesterday 😬

But it was a small try. I definitely want to see the outcome when trying to form a sharp dot from a blurred ellipse.

E6B8EE86-20E5-4D16-AE94-53510DB00B46.png

AAE972D6-09D3-439D-8D13-112B503281BB.png


Hi 

Interesting discussion.  

I have been thinking about how to fix guide scope sag, and the guide algorithm hunting about its equilibrium position, both of which give elliptical star images. I had a couple of occasions when the errors happened to be such that the star trail ran horizontally across the frame, which triggered a few thoughts about a simple case.

Suppose we determine from the elliptical star image that the light is smeared equally over L pixels - equivalent to a constant sag over the exposure duration, say left to right for the sake of argument.

Suppose the true intensity at the i-th pixel is N(i). The camera records

N'(i) = (1/L) · Sum N(j), for j = i … i+L.

At the adjacent pixel k = i+1, the camera records

N'(k) = (1/L) · Sum N(j), for j = i+1 … i+1+L,

so N'(i) − N'(k) = (1/L) · (N(i) − N(i+1+L)).

So if we knew just one of the N(i+1+L) values, we could work backwards and untangle the whole row to produce all the N(i) values.

The edges of the frame would be boundary conditions where things go differently - but there might be enough redundancy to enable an estimate of a starting or ending N(i) to be made... but my brain goes blank at this point.
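A sketch of that recursion (using the slightly simpler convention of averaging over L pixels, j = i … i+L−1, so the recurrence becomes N(i+L) = N(i) − L·(N'(i) − N'(i+1)); the values below are arbitrary and noise-free):

```python
import numpy as np

rng = np.random.default_rng(5)
L = 4
true_row = rng.random(40)

# The camera records the mean of L consecutive true pixels (horizontal smear).
smeared = np.array([true_row[i:i + L].mean()
                    for i in range(len(true_row) - L + 1)])

# If the first L true values are known (the boundary condition), the rest
# follow from the recurrence N(i+L) = N(i) - L * (N'(i) - N'(i+1)).
recovered = np.empty_like(true_row)
recovered[:L] = true_row[:L]  # assumed known boundary values
for i in range(len(smeared) - 1):
    recovered[i + L] = recovered[i] - L * (smeared[i] - smeared[i + 1])

print(np.allclose(recovered, true_row))  # True in the noise-free case
```

With noise present, each step propagates the errors of earlier steps, which is exactly the noise-amplification problem described earlier in the thread.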

There is a subtraction of one pixel from another, which would double the noise variance (about 1.4× the standard deviation).

Simon


10 hours ago, windjammer said:

I have been thinking about how to fix guide scope sag, and the guide algorithm hunting about its equilibrium position, both of which give elliptical star images. I had a couple of occasions when the errors happened to be such that the star trail ran horizontally across the frame.

In my experience, guiding issues are again best solved "in hardware". What you are describing sounds more like a periodic error issue than guide scope sag / guiding issues (well, it can possibly be addressed with guide settings, but sometimes the periodic error is fast enough that guiding simply can't deal with it efficiently).

You can check whether it is indeed periodic error by aligning your frame with RA/DEC - make the RA direction horizontal, for example, and DEC vertical. This is easily done if you start an exposure and slew the mount in RA - you should get a horizontal line from a bright star. If it is at an angle, rotate the camera a bit until it is horizontal.

Now just image as usual and later check the direction of elongation of the stars.

But you are right - for the sake of argument and for academic reasons, let's discuss what can be done in software.

10 hours ago, windjammer said:

Suppose the true intensity at the i-th pixel is N(i). The camera records N'(i) = (1/L) · Sum N(j), for j = i … i+L.

What you are describing here is known as the PSF, or point spread function - the way a single point (or pixel) spreads its value onto adjacent pixels. You have constructed a very simple case where the PSF is just a single line, L pixels long, with uniform distribution.

In reality the PSF will be a bit distorted.

Deconvolution does precisely what you have been trying to do. As long as you know the PSF - the description of how a single point spreads - you can reverse the process (to an extent, depending on how much noise there is).

I played with this idea a long time ago, when my HEQ5 mount had quite a bit of periodic error (I have since added the belt mod and tuned my mount, and I no longer have these issues).

Here is an example of what I was able to achieve and a description of how I went about it. By the way, tracking errors are a much better candidate for fixing by deconvolution than coma or field curvature, because the PSF is constant across the image - the whole image was subject to the same spread of pixel values.

Here is base image:

M27.png

You can see that the stars are elongated into ellipses. I shot this image at fairly high resolution, and as a consequence even the small PE that I was not able to guide out resulted in star elongation. If you check, you will see that the elongation is in the RA direction (compare to Stellarium, for example).

The first step is to estimate the blur kernel. I did the following:

Convolution is a bit like multiplication - if you convolve (blur) A with B, you get the same result as if you convolved B with A (it is a bit strange to think of blurring the PSF with your whole image, but it works).

This helps us find the blur kernel. I took a couple of stars from the image and deconvolved each of them with a "perfect looking star". I did this for each channel.

kernels_red.png

These are the resulting blur PSFs - 5 of them. I then took their average as the estimated PSF. Testing it back on a single star:

star_compare.png

We managed to remove some of the elliptical shape of the star. Here is what the final "product" looks like after fixing the elongation issue:

M27_127F_final.png

Of course, the process of deconvolution raised the noise floor and it was much harder to process this image (hence the black clipping and less background detail) - but the result can be seen: the stars do look rounder and the detail is better in the central part of the nebula.

If you are a PixInsight user, you might want to check out the dynamic PSF extraction tool (I think it is called something like that). I don't use PI so I can't be certain, but I believe it will give you the first step - the average star shape over the image. You can then deconvolve that with a perfect star shape to get the blur kernel, and in turn use that blur kernel to deconvolve the whole image.
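That kernel-estimation step can be sketched like this (sizes, star width and regularization constant are made-up illustration values; the frequency-domain division is stabilized Wiener-style rather than naive):

```python
import numpy as np

n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

# A "perfect" round star profile (Gaussian) centred in the frame.
perfect_star = np.exp(-(x**2 + y**2) / (2 * 2.0**2))

# The unknown blur kernel: a 7-pixel horizontal smear (e.g. a tracking error).
kernel = np.zeros((n, n))
kernel[n // 2, n // 2 - 3:n // 2 + 4] = 1.0 / 7.0

# A star as it appears in the image = perfect star convolved with the kernel.
F_perfect = np.fft.fft2(np.fft.ifftshift(perfect_star))
F_kernel = np.fft.fft2(np.fft.ifftshift(kernel))
star_in_image = np.real(np.fft.ifft2(F_perfect * F_kernel))

# Recover the kernel by deconvolving the imaged star with the perfect star
# (regularised division in the frequency domain to avoid blow-up).
F_star = np.fft.fft2(star_in_image)
eps = 1e-3 * np.abs(F_perfect).max()
recovered = np.fft.fftshift(np.real(np.fft.ifft2(
    F_star * np.conj(F_perfect) / (np.abs(F_perfect)**2 + eps**2))))
```

The recovered kernel is a slightly smoothed version of the true smear (the regularization suppresses frequencies where the perfect star's spectrum is tiny), but it preserves the elongation direction, which is what matters for the subsequent deconvolution of the whole image.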

 


On 16/06/2020 at 19:17, vlaiv said:

"... it happens because the surface of best focus is not flat like the imaging sensor, but curved like this:

image.png.7a2a94597ac3a08f1fb62ff5e6781a67.png

Either your center is in focus (more often) and the edges are out of focus, or the center is out of focus and the edges are in focus - but they can't both be in focus at the same time."

We will have to wait for this: https://petapixel.com/2017/07/21/nikon-patents-35mm-f2-lens-full-frame-camera-curved-sensor/

 

Christer, Sweden

 


Thanks for the info on PSFs. You achieved a very good improvement on the M27 pic - a good motivation to experiment.

I have just started to use PI and was vaguely aware of processes with PSF in their names! I am still climbing the learning curve on PI and expect to be on that trek for a fair while, but this would be a good excuse to have a play with some of this stuff.

I definitely have guide scope flex - over the space of a few hours the target will drift a bit across the frame; the star ellipses have their major axes aligned with the drift direction, and successive exposures' ellipses join where the previous exposure left off. And guide algorithm hunting - I can see that happening in real time. I agree that sorting the mechanics is best for flex, and adding brackets and draw tube locks to my rig has improved it immensely; still a little way to go on that. My hunting issue comes from a poor guide scope star image and incorrect settings, which, given a decent run of weather (!), I might actually solve.

Simon

