Photoshop Sneak - De-Blurring!


DrNeb

Recommended Posts

Well, I came across this and I thought it might be of some interest to all you imaging types out there.

A new function for a future release (we hope).

It seems to be able to tidy up any blurry image.

I'm not sure if it will be of any use to all you pros... but for someone like me, doing my best with a digiscope adapter and a manual Dob, it could prove useful!



Cool, we can add this to the Photoshop sticky in the imaging section of the forum. I tend to use the Unsharp Mask filter in PS CS5 for blurry shots, and Warren Keller has a massive collection of tutorials available from his website on one DVD.

Sent from my GT-S5670 using Tapatalk


This popped up in another thread the other day...

http://stargazerslounge.com/imaging-discussion/159324-cool-new-photoshop-feature.html#post1982638

There are already tools out there for out-of-focus blur and linear motion-blur correction (I tend to use Focus Magic), but the fact that this seems to be able to handle non-linear blur is impressive...
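For anyone wondering what "linear motion blur" actually means: it is just convolution with a kernel that is a straight line segment. A minimal sketch, assuming NumPy (the function name is my own, not from any of the tools mentioned):

```python
import numpy as np

def linear_motion_kernel(length, angle_deg):
    """Build a normalised linear motion-blur kernel: a line of roughly
    `length` pixels through the kernel centre at `angle_deg` degrees."""
    size = length if length % 2 else length + 1  # odd size so the centre is a pixel
    k = np.zeros((size, size))
    c = size // 2
    dx = np.cos(np.radians(angle_deg))
    dy = np.sin(np.radians(angle_deg))
    # Step densely along the line and mark every pixel it crosses.
    for t in np.linspace(-c, c, 4 * size):
        x, y = int(round(c + t * dx)), int(round(c + t * dy))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] = 1.0
    return k / k.sum()
```

Convolving a sharp image with such a kernel simulates the blur; tools like Focus Magic effectively estimate a kernel of this shape and then invert the convolution. Non-linear blur means the "line" is a curved, wandering path instead, which is much harder to estimate.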

Peter...


So pick a few points on the image and track their movement.

In fact, cameras nowadays have GPS sensors in them, or they can also include accelerometers, as in iPhones and Droids. These can track the motion (not global position, just relative motion) during the shot, and then software can use that data to deblur.

No need for image movement detection at all really.

Even my camera has GPS; I am sure an accelerometer wouldn't add much to the cost.
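A minimal sketch of that idea, assuming NumPy. The function names, the `focal_px` scale factor, and the sample spacing `dt` are all made up for illustration; a real system would also have to handle rotation and sensor noise:

```python
import numpy as np

def blur_path_from_accel(accel_xy, dt, focal_px):
    """Integrate accelerometer samples (shape (n, 2), in m/s^2) twice to
    estimate the camera's drift during the exposure, then scale the
    displacement to pixels with the (hypothetical) `focal_px` factor."""
    accel = np.asarray(accel_xy, dtype=float)
    vel = np.cumsum(accel, axis=0) * dt   # first integration:  acceleration -> velocity
    pos = np.cumsum(vel, axis=0) * dt     # second integration: velocity -> displacement
    return pos * focal_px                 # displacement path in pixels

def kernel_from_path(path, size=21):
    """Rasterise the motion path into a normalised blur kernel (a PSF)."""
    k = np.zeros((size, size))
    c = size // 2
    for x, y in path:
        xi, yi = int(round(c + x)), int(round(c + y))
        if 0 <= xi < size and 0 <= yi < size:
            k[yi, xi] += 1.0
    return k / k.sum()
```

The resulting kernel can then be fed straight into a deconvolution step, with no need to estimate the motion from the image content at all.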


The main problem with this is that ordinary users of Photoshop won't get to use it (without a third-party plugin) because of the high cost of Photoshop. People are still using CS3, as it is the cheapest at retail, and even that is not cheap.

That is 3 (THREE) iterations behind; that is what, 4 to 6 years behind?

Anyway, how often do you find yourself deblurring an image? Most of the time you will just take another shot and delete the blurred one.

Not worth forking out big money for a PS upgrade for that single feature, lol.


GPS sensors would be useless for this type of correction; they are nowhere near accurate enough. Accelerometers/gyros may have benefits, if the picture-taking process captured all of the tiny camera movements that occur during the capture and then made that data available for post-processing.

This motion analysis of the image content has been around for a while, and it's nice to see it hit the mainstream, BUT...

All of this post-processing has to cost something, and the cost will be overall image resolution, i.e. it may look 'nice' but the fine detail will be long gone.

IMO, if the image is blurred from movement or is out of focus, then bin it and try again; there is no magic cure-all for bad photographic technique.


GPS is never going to be sub-millimetre accurate, so it can't notice camera shake; it's irrelevant to this...

The key point is that detecting camera movement is fine, if it's the camera that's moving. Although TECHNICALLY, in reference to the rest of the universe, in astrophotography it's the camera that is moving (rotating with the world), in photographic terms it's the subject that is moving against the field of view of the camera.

Cool feature though... and yeah, as someone who is about to invest in CS5 (mainly for work reasons), it'd be interesting to know whether this feature will be in CS6 imminently, as I might hold off the purchase.

Ben


No need to worry about multiple threads...

And yes, the processing required and the algorithm... beyond me by a zillion miles. The results do look absolutely fantastic though...

I guess octo-core processors with loads of RAM will be needed sooner rather than later.

Peter...

Depends; Photoshop uses CUDA. The new AMD Bulldozers aren't looking too good either...

Ivy Bridge will probably show us the way forward.


I currently use CS5 on my monster:

i7 2600K @ 4.8 GHz

16 GB Corsair Vengeance @ 1600, CAS 8

GTX 295 & 8800 GTX (PhysX) - waiting for the Kepler release in Q1 2012

RAM's dead cheap at the moment. I would be surprised if anyone didn't get at least 8 GB when putting together a new build.


Well, that really depends on your OS.

64-bit is still poorly supported by applications, so anything over 4 GB is pointless for most people.

I have just gone up to 8 GB myself, though, to take advantage of 64-bit PixInsight, and was very pleased with the improvement. I would have gone to 16 GB, but getting 4 GB sticks of DDR2 for my quad-core motherboard is not easy.

I'd like to upgrade but have no real need to do so yet.


Again, it is not a complicated algorithm; you can also track the "path" of the colour differences along an edge or point, which would indicate the movement.

There are many cues in the image to use to track movement.

You don't half talk some rubbish. In fact, your posting style seems to swing from the obnoxious to the rude and back again - something I feel you may need to work on.

This does look quite good - I'll rewatch the clip when I haven't got an 8-year-old telling me to turn it down :rolleyes: because he cannot hear Tom and Jerry!

Ant



Most likely it analyzes the “shape” of the camera shake — probably by isolating a point-source highlight (using either high-pass filtering or image matching, or both) — then, it uses this shape to generate a path along which it maps point spread functions. A point spread function is sort of like an impulse response function, with the difference being that while an impulse response tracks a single-dimensioned value with respect to time, a point spread function gives you the response of an imaging system (2-D) to a point source. They’re both basically the same idea, though, and you can apply the same techniques to both. Further, by generating this path, you can map the point spread function in terms of space (because it’s two-dimensional) and time. And this is where it gets really cool:

Just like an LTI impulse response, you can deconvolve the output (the blurry image) with your new mapped-in-time point spread function, and get something much closer to the original scene (a sharper image). Because a photosensor (or film) is basically a 2-dimensional integrator, the whole thing is linear, so this method works. The only added step I see is that every lens/sensor system has a different point spread function, which varies further w/r/t the lens focusing distance and depth of field, so you'll need this data too, but (most importantly) you can get this data at your leisure, either empirically or through modelling. Incidentally, this custom point spread can also be used to de-blur images with bad focus but no shaking blur.
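A minimal sketch of that deconvolution step, assuming NumPy. This uses a Wiener filter, a standard regularised frequency-domain deconvolution; the noise constant `k` is an assumed tuning parameter, not anything confirmed from Adobe's demo:

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Deconvolve `blurred` with the point spread function `psf`.
    In the frequency domain, blurring is G = H * F, so we estimate
    F ~ G * conj(H) / (|H|^2 + k); the constant `k` stops near-zero
    frequencies of H from amplifying noise without bound."""
    H = np.fft.fft2(psf, s=blurred.shape)  # PSF's frequency response
    G = np.fft.fft2(blurred)               # blurred image's spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + k)  # Wiener filter
    return np.real(np.fft.ifft2(G * W))    # back to a sharper image
```

With a time-mapped (spatially varying) PSF like the one described above, you would apply this patch by patch rather than in one global FFT, but the principle is the same.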

But you knew all that, right? I wish I was smart enough to spend 25 GBP on space USB cables ;)


Archived

This topic is now archived and is closed to further replies.
