
Undoing seeing in post processing


NickK

Recommended Posts

The attached image is a slice showing the effect of seeing on a star over time.

Time starts at the top left and ends at the bottom right (X and Y are at their respective rotations). The green smears are stars - the large smudge is a single bright star. There's also a slight amount of noise in there too, as these are raw frames.

The interesting point here is that you can see the tube alter its structure according to the seeing and guiding errors. It even drops out at one point during the time period. Using techniques from CRT scan analysis it should be possible to (a) identify the star(s) specifically and then (b) create a corrective mapping to the expected point spread function.

Interesting eh? :D

[attached image: post-9952-0-51308600-1339744283_thumb.pn - time-slice showing the star trails]
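For anyone who wants to play along at home, here's a minimal sketch of step (a) - picking the bright tracks out of the noise and tagging them as separate objects. Everything in it is an assumption: "slice_raw.npy" is a made-up filename standing in for the attached slice, and the 5-sigma cut would need tuning on real data.

```python
# Sketch of step (a): pick the star tracks out of a time-slice frame.
# "slice_raw.npy" is a made-up filename standing in for the attached slice;
# the 5-sigma threshold is a guess and would need tuning on real data.
import numpy as np
from scipy import ndimage

frame = np.load("slice_raw.npy").astype(float)

# Robust background and noise estimate (median + MAD).
background = np.median(frame)
noise = 1.4826 * np.median(np.abs(frame - background))
mask = frame > background + 5.0 * noise

# Label the connected bright regions; each label is a candidate star track.
labels, n_tracks = ndimage.label(mask)
centroids = ndimage.center_of_mass(frame - background, labels, range(1, n_tracks + 1))
print(f"found {n_tracks} candidate track(s)")
```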


:eek: You want us to do SUMS now?

:)

Hmmm... find a bright pixel... plot its "neighbours"... work out the track angle... grab everything along the track (in chunks as wide as the track is high?) and park them all on the top leftmost point...

..yeah... piece of cake :)
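Pretty much - here's a rough sketch of that recipe, assuming the smear is roughly linear and 'track' is a 2D cut-out holding one smeared star (the names are all made up): find the bright pixels, get the track angle from their principal axis, then project everything onto the track so it can all be parked back at one end.

```python
# Rough sketch of the recipe above: find bright pixels, work out the track
# angle, and project everything along the track. 'track' is assumed to be
# a 2D cut-out containing a single smeared star.
import numpy as np

def collapse_track(track, threshold):
    ys, xs = np.nonzero(track > threshold)        # bright pixels only
    weights = track[ys, xs].astype(float)

    # Track angle from the principal axis of the bright pixels.
    x0 = np.average(xs, weights=weights)
    y0 = np.average(ys, weights=weights)
    cov = np.cov(np.vstack([xs - x0, ys - y0]), aweights=weights)
    _, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, -1]                    # unit vector along the smear

    # Position of each bright pixel along the track; sorting by it gives the
    # 1D flux profile that can be parked back onto the leftmost point.
    distance = (xs - x0) * direction[0] + (ys - y0) * direction[1]
    order = np.argsort(distance)
    return distance[order], weights[order]
```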


The same techniques are used in my area of science (high-resolution spectroscopy) to deconvolute images to correct for known image-broadening phenomena. I think it should be possible with star images, but it might get very complicated if you were trying to deconvolute the image of a DSO. However, I'm sure there's someone out there who is good enough at maths (or has the software) who could give it a go....?

Chris
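If anyone wants to try it, the textbook way to undo a known broadening is iterative deconvolution. Below is a minimal Richardson-Lucy sketch - not any particular package's implementation - which assumes 'blurred' and 'psf' are 2D numpy arrays supplied from elsewhere.

```python
# Minimal Richardson-Lucy deconvolution: undo a *known* broadening kernel,
# the same idea as correcting a spectrum for instrument broadening.
# 'blurred' and 'psf' are assumed to be 2D numpy arrays from elsewhere.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    psf = psf / psf.sum()
    psf_flipped = psf[::-1, ::-1]
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)   # guard against divide-by-zero
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```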


The same techniques are used in my area of science (high-resolution spectroscopy) to deconvolute images to correct for known image-broadening phenomena. I think it should be possible with star images, but it might get very complicated if you were trying to deconvolute the image of a DSO. However, I'm sure there's someone out there who is good enough at maths (or has the software) who could give it a go....?

This is exceptionally similar :D The first hurdle is to associate the 3D structures into objects, then filter the DSOs out to leave just the stars, then create a vector space to correct with (this could be a space of functions too).

The idea is to take the guider input frames and produce a vector mapping that corrects the stars, then apply those mappings to the final main image after capture.

Mmm.. vector spaces.. :D

CCDsharp does that.

Um.. nope. This is very different. Lucy-Richardson (LR) analyses the image itself and corrects it iteration by iteration to sharpen it in 2D.

This is a different approach. It's using a model made from analysing the guider to deconvolute the main exposure.
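To make that concrete, here's a sketch of what the guider-derived model could look like. Everything in it is assumed: 'offsets' is one (dx, dy) centroid error per guide frame, already scaled to main-camera pixels, and the resulting kernel would be fed into a deconvolution like the Richardson-Lucy sketch above.

```python
# Sketch of the guider-driven model: accumulate the guide-star centroid
# offsets measured during the exposure into a blur kernel for the main frame.
# 'offsets' is assumed to be one (dx, dy) per guide frame, in main-camera pixels.
import numpy as np

def kernel_from_guider(offsets, size=15):
    kernel = np.zeros((size, size))
    centre = size // 2
    for dx, dy in offsets:
        x = int(round(centre + dx))
        y = int(round(centre + dy))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] += 1.0               # each guide frame contributes equally
    return kernel / kernel.sum()

# e.g. corrected = richardson_lucy(main_frame, kernel_from_guider(offsets))
```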

:eek: You want us to do SUMS now?

:)

Hmmm... find a bright pixel... plot its "neighbours"... work out the track angle... grab everything along the track (in chunks as wide as the track is high?) and park them all on the top leftmost point...

..yeah... piece of cake :)

For now.. I'm just thinking the seeing is bad enough... however the process should cope with tracking changes automatically (assuming the 3D object association works).

Hmm.. small slices ;)


Thing is - if you think of a normal star image as a circular collection of pixels, then the leftmost one will be its normal value, the second one will be the sum of itself plus the leftmost, and the third one will be...

So if you assume the width of the star is the height of the track, you can "build" the "original" star. Then park it in the leftmost place and bin everything to the right.

Ish :)
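If the trail really does build up like that - each pixel carrying itself plus everything before it - the original profile drops straight out by differencing. A toy 1D example with made-up numbers:

```python
# Toy example of the "each pixel is itself plus everything to its left" idea:
# if the trail is a running sum along the track, differencing rebuilds the star.
import numpy as np

star = np.array([1.0, 4.0, 6.0, 4.0, 1.0])    # made-up star profile
trail = np.cumsum(star)                        # smeared track: 1, 5, 11, 15, 16

rebuilt = np.diff(trail, prepend=0.0)          # back to 1, 4, 6, 4, 1
print(np.allclose(rebuilt, star))              # True - the star is "parked" back
```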

