
Mapping the point-spread function over the image



In another thread (http://stargazerslounge.com/imaging-deep-sky/170063-more-testing-mesu2-odk14.html) a discussion on the roundness of stars popped up. Usually this is judged qualitatively, but I proposed measuring it quantitatively using the so-called attribute filters we have developed here. These filters allow measurement of the roundness, elongation, curvature, triangularity, etc. of stars over the entire image in a matter of seconds for 12-megapixel images. The program could in principle map the optical performance and potentially separate guiding errors from collimation issues. An interesting point raised by Neil Hankey would be to use this information to improve the image.

I will try to get one or two of our students to develop such a program (cross-platform portable). Any input would be welcome.
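To make the idea concrete, here is a minimal sketch (Python/NumPy; the function name and the moment-based measure are my own stand-ins, not the actual attribute-filter code) of estimating a single star's elongation and orientation from intensity-weighted second-order moments:

```python
import numpy as np

def star_shape(cutout):
    # Elongation and orientation of one star from intensity-weighted
    # second-order image moments (a simple stand-in for the attribute
    # filters described above).
    cut = cutout - cutout.min()              # crude background removal
    yy, xx = np.mgrid[:cut.shape[0], :cut.shape[1]]
    w = cut.sum()
    cx = (xx * cut).sum() / w
    cy = (yy * cut).sum() / w
    mxx = ((xx - cx) ** 2 * cut).sum() / w
    myy = ((yy - cy) ** 2 * cut).sum() / w
    mxy = ((xx - cx) * (yy - cy) * cut).sum() / w
    # Eigenvalues of the moment matrix give the squared major/minor axes.
    tr = mxx + myy
    disc = np.sqrt((mxx - myy) ** 2 / 4 + mxy ** 2)
    a, b = tr / 2 + disc, tr / 2 - disc
    elongation = np.sqrt(a / b)              # 1.0 for a perfectly round star
    angle = 0.5 * np.degrees(np.arctan2(2 * mxy, mxx - myy))
    return elongation, angle
```

Run over every detected star, this yields an elongation and position-angle map of the field, which is the kind of quantitative roundness measure meant above.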


I will look for the link, but a Belgian university has developed a superior deconvolution algorithm ...

That would be very nice. Many deconvolution programs still assume a constant PSF over the image and try to estimate it from the image itself (blind deconvolution). Here we try to extract information from the image about variations of the PSF across the field, so the assumption of a linear shift-invariant system is not made. It may well be that such spatially variant methods already exist.

However, these are not based on morphological attribute filters (if they were, I would know :icon_scratch:), so we can do some new science here.
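As a sketch of the distinction (assuming nothing about the Belgian code, and with all names illustrative): a crude spatially variant scheme can be had by tiling the image and running an ordinary Richardson-Lucy deconvolution per tile with that tile's locally measured PSF. A real implementation would blend overlapping tiles to avoid seams.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def richardson_lucy(image, psf, iters=20):
    # Plain Richardson-Lucy; assumes the PSF is shift-invariant within
    # the region it is applied to, stored with its peak at pixel (0, 0)
    # (FFT wrap-around convention).
    psf = psf / psf.sum()
    otf = fft2(psf, s=image.shape)
    est = np.full(image.shape, image.mean())
    for _ in range(iters):
        conv = np.real(ifft2(fft2(est) * otf))
        ratio = image / np.maximum(conv, 1e-12)
        est *= np.real(ifft2(fft2(ratio) * np.conj(otf)))
    return est

def tiled_deconvolve(image, psf_grid, iters=20):
    # Crude spatially variant scheme: each tile gets its own locally
    # measured PSF. psf_grid[i][j] is the PSF for tile (i, j).
    rows, cols = len(psf_grid), len(psf_grid[0])
    th, tw = image.shape[0] // rows, image.shape[1] // cols
    out = np.empty_like(image)
    for i in range(rows):
        for j in range(cols):
            tile = image[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = \
                richardson_lucy(tile, psf_grid[i][j], iters)
    return out
```

With a delta-function PSF this reduces to the identity, which makes a convenient sanity check; tile-boundary blending and PSF interpolation between tiles are where the real work lies.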


Thinking about this.

If all the program could do was state how far the collimation is out, how good or bad the tracking was, and give numbers for coma etc., it would be hugely useful.

Fixing the problems in software would be the icing on the cake.

Derek


No optical system is perfect, unfortunately, not even an RC, although it's closer than most. :icon_scratch:

I agree about noise and deconvolution, no issues there, but if you do this you will know, or be able to infer, the geometry of the optical system in question. If that geometry is stable and reproducible, then there exists the possibility of correcting it mathematically.

At least the starting point should be a quality optical system etc...

We should also be able to separate the effects of optically induced aberration and mount-induced guiding issues, right?

I think applying a single function to describe the PSF over the whole image (air cell) might be a step too far, but what the hell, let's give it a try! Nothing ventured, nothing gained.


No optical system is perfect, unfortunately, not even an RC, although it's closer than most. :icon_scratch:

I agree about noise and deconvolution, no issues there, but if you do this you will know, or be able to infer, the geometry of the optical system in question. If that geometry is stable and reproducible, then there exists the possibility of correcting it mathematically.

At least the starting point should be a quality optical system etc...

We should also be able to separate the effects of optically induced aberration and mount-induced guiding issues, right?

That separation might be possible, but we will have to see. It might have to rely on models of the kinds of errors that occur.


That separation might be possible, but we will have to see. It might have to rely on models of the kinds of errors that occur.

Well, we can start with the following assumptions:

1. Optical aberration will have a defined spatial centre, increasing in effect towards the edges of the CCD frame.

2. Guiding errors will predominantly be in RA (east-west), assuming good polar alignment to within 2" of the NCP.

3. Field rotation should be minimal, considering the above and the fact that most CCD exposures are 10-15 minutes or shorter.

Of course, we can't correct everything, so the following provisos seem reasonable:

1. Well-collimated optics

2. Good alignment with the NCP

3. Reasonably stable (repeatable) mount/guiding function

Then the only variable left is the air cell above the telescope creating the PSF. Or am I wrong?
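Assumptions 1 and 2 suggest a simple statistical test: if guiding is the culprit, every star shares one elongation axis; if off-axis aberration dominates, the axis rotates with position angle about the optical centre. A toy sketch (Python/NumPy; the names and the spread statistic are my own choices, using angle doubling because star elongation is axial, i.e. defined modulo 180 degrees):

```python
import numpy as np

def classify_elongation(xs, ys, angles_deg, cx, cy):
    # Compare two hypotheses for the per-star elongation angles (degrees):
    #   A: guiding error  -> one constant direction across the field
    #   B: aberration     -> direction follows position angle about (cx, cy)
    # Returns the circular spread (0 = perfect fit) under each hypothesis;
    # the smaller spread indicates the better-fitting model.
    ang = np.radians(np.asarray(angles_deg))
    # Doubling the angle maps the 180-degree axial ambiguity onto a circle.
    spread_const = 1 - np.abs(np.mean(np.exp(2j * ang)))
    pa = np.arctan2(np.asarray(ys) - cy, np.asarray(xs) - cx)
    spread_radial = 1 - np.abs(np.mean(np.exp(2j * (ang - pa))))
    return spread_const, spread_radial
```

Feeding in the per-star positions and measured elongation angles, the hypothesis with the smaller spread wins; a mixture of the two (and flexure, thermal effects, etc.) would of course need a richer model.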


Hmm, OK, maybe I was too optimistic about the Belgian part ;-)

And you probably need a supercomputer or at least GPU acceleration.

AIP: deconvolution

Yves.

Well, we have a few of those available :icon_scratch:. I am about to get a 48-core compute server at our institute, and might put in a few Tesla boards or similar as well.

Let's first get some students interested, then get it working, and only then think about acceleration on supercomputers and/or GPUs.


Well, we can start with the following assumptions:

1. Optical aberration will have a defined spatial centre, increasing in effect towards the edges of the CCD frame.

2. Guiding errors will predominantly be in RA (east-west), assuming good polar alignment to within 2" of the NCP.

3. Field rotation should be minimal, considering the above and the fact that most CCD exposures are 10-15 minutes or shorter.

Of course, we can't correct everything, so the following provisos seem reasonable:

1. Well-collimated optics

2. Good alignment with the NCP

3. Reasonably stable (repeatable) mount/guiding function

Then the only variable left is the air cell above the telescope creating the PSF. Or am I wrong?

We need to be careful about assumptions. The optical centre might not coincide with the centre of the CCD. The system will show flexure when not pointing to zenith, and some mounts are set up every time, rather than sitting nicely in an observatory. Collimation can vary (especially in imaging Newtonians). Fluctuations in temperature occur, leading to (differential) thermal expansion problems.

Field rotation can be kept under control, I assume.

A tool which can detect these errors is the first aim of the project.


We need to be careful about assumptions. The optical centre might not coincide with the centre of the CCD. The system will show flexure when not pointing to zenith, and some mounts are set up every time, rather than sitting nicely in an observatory. Collimation can vary (especially in imaging Newtonians). Fluctuations in temperature occur, leading to (differential) thermal expansion problems.

Field rotation can be kept under control, I assume.

A tool which can detect these errors is the first aim of the project.

A FLAT frame could be used to determine the centre of the optical system, since it will have a clearly defined centroid, or hot spot, graduating in luminosity out towards the edges... The temperature effect on this should be minimal, since it's not at focus, but flexure is the real enemy.


A FLAT frame could be used to determine the centre of the optical system, since it will have a clearly defined centroid, or hot spot, graduating in luminosity out towards the edges... The temperature effect on this should be minimal, since it's not at focus, but flexure is the real enemy.

I agree that a flat would be helpful, but the exact centre of a flat frame could be difficult to find, because in a decent system the centre is quite flat. Besides, flats measure the centre of vignetting, not necessarily the optical centre of the rest of the system (they should coincide, or at least be close). I have seen badly collimated Newtonians with the best image quality off centre. Temperature effects on collimation and pinching of the optics (TS65 mm quadruplet anyone?) could be present. I think we should measure rather than assume.
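One way around the flat-centre-is-flat problem is to fit a smooth surface to the whole flat and take its apex, rather than hunting for a peak pixel. A sketch under the (rough) assumption that vignetting near the centre is approximately quadratic; the function name is my own:

```python
import numpy as np

def vignetting_centre(flat):
    # Fit a 2-D quadratic surface z = a x^2 + b y^2 + c xy + d x + e y + f
    # to the whole flat frame by least squares and return its apex, i.e.
    # the centre of vignetting. Using every pixel sidesteps the problem
    # that the centre region itself is nearly flat.
    h, w = flat.shape
    yy, xx = np.mgrid[:h, :w]
    x = xx.ravel().astype(float)
    y = yy.ravel().astype(float)
    z = flat.ravel().astype(float)
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # Apex: where the gradient of the fitted surface vanishes.
    M = np.array([[2 * a, c], [c, 2 * b]])
    cx, cy = np.linalg.solve(M, np.array([-d, -e]))
    return cx, cy
```

Real vignetting follows roughly a cos^4 law rather than a pure quadratic, so for precision one would fit that model instead, but the apex of the quadratic fit already locates the centre of symmetry. It measures the vignetting centre only; as noted above, that need not coincide with the optical centre of the rest of the system.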


FLATS can also do more damage when something is wrong with them, and including them will increase the risk of mis-correction and make it harder to troubleshoot ...

In a perfect world FLATS would be perfect ...

Yves.


A minor aside here for just a sec: are we certain that an absolutely flat field would look right to the eye? Because the FOV is tiny in most astronomical images, I think it likely that the answer will be 'yes', but as the son of a perception theorist I have to ask! I can no longer ask the man himself, alas.

Olly


A minor aside here for just a sec: are we certain that an absolutely flat field would look right to the eye? Because the FOV is tiny in most astronomical images, I think it likely that the answer will be 'yes', but as the son of a perception theorist I have to ask! I can no longer ask the man himself, alas.

Olly

Interesting point. Given the tendency of the naked eye to suppress low spatial frequencies (slow variations) in favour of high spatial frequencies (sharp edges, points), I think the answer is that the human eye will not notice the difference.


How do we kick this ball off, and what language will the solution be programmed in? :(:confused:;)

The reason I ask is that I'm not interested in a bespoke app that only runs on a supercomputer at some university, sorry for that. The final solution must run on a standard multicore PC, even if it takes longer to run and output the results.

Language choices are, in my opinion:

1. C

2. C++

3. Java

4. Python

5. Any mix of the above!

Python and Java are ideal choices for the interface, while C and C++ are the obvious choices for the mathematically intensive routines.

Please don't mention FORTAN :icon_scratch::rolleyes::D


Why would anyone mention FORTAN?

Now, Fortran, that's a language! :-)

Nothing personal, but a dead language I hope!!!:icon_scratch::rolleyes::D

We work quite a lot with the nuclear industry and they have tons of unconverted legacy Fortan code!!!

Personally, I think they keep it in Fortan deliberately because no one understands it anymore???


They keep it in Fortran because that is the language of choice for scientific/engineering computation. That was true in 1957 and it is even more true now, the latest revision of the standard includes parallelism constructs. And yes, I make and sell Fortran products :-)


Nothing personal, but a dead language I hope!!!:icon_scratch::rolleyes::D

We work quite a lot with the nuclear industry and they have tons of unconverted legacy Fortan code!!!

Personally, I think they keep it in Fortan deliberately because no one understands it anymore???

I help replace all the obsolete **** in nuclear plant processing systems. It might be prehistoric, but it is very robust, and it just isn't replaceable on a like-for-like basis, so it is a huge headache to verify that any new systems are fully compliant and replicate the original systems.

That's one of the reasons we still have 15-axis hydraulic manipulators, which we use for in-reactor inspection and repair work, controlled via DOS-based computers :(


They keep it in Fortran because that is the language of choice for scientific/engineering computation. That was true in 1957 and it is even more true now, the latest revision of the standard includes parallelism constructs. And yes, I make and sell Fortran products :-)

I do scientific and engineering programming in C/C++ (and MATLAB). I worked with Fortran (77/90/95) myself, and coming from a Pascal/Modula/C background I HATE the lack of proper scoping (don't talk to me about common blocks) and the lack of strict typing (I still don't like that in C, but at least I MUST declare every variable's type in C). The parallel constructs are available in C/C++ with OpenMP in a very transparent way.

I "fondly" remember getting an error message from a NAG routine (with its meaningful name D02BAE) in FORTRAN which read "Impossible error". This was due to a named common block being shared accidentally on a parallel machine, so different instances of the routine were overwriting each other's data. There was no workaround for that (AARGH). They had forgotten to compile the library on our Cray J932 (long since dead) with the --taskcommon switch, which makes private copies of common blocks.

Debugging Fortran when some programmer has declared the same named common block in two different ways in different subroutines is great fun. I know F90 and F95 improved things by moving to a modular structure, but that was a very clunky solution.

According to some, editing Fortran without wearing a blue tie is a syntax error :icon_scratch:

Give me C(++) multi-threading and OpenMP pragmas any time.


Archived

This topic is now archived and is closed to further replies.
