
AI Gigapixel testing



Topaz Labs has released new AI-based image upscaling and enhancement software, and they claim it to be "stellar":

https://topazlabs.com/ai-gigapixel/

I have been briefly testing it and thought I would post my findings here as well.

 

Here is the first test: the original image is on the right, and on the left is a version reduced to 1/4 resolution, to find out how well the AI can improve it and perhaps restore it. In the middle is the AI 400% upscaling result.

Files are large so I only post links:

Test 1
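For anyone wanting to reproduce the comparison, here is a minimal Python sketch of the test setup: it makes the 1/4-resolution input and then measures how close an upscaled result comes to the original. The upscaling step itself is done in the application, and the filenames and the RMSE metric are just assumptions for illustration.

```python
# Sketch of the Test 1 setup: build the 1/4-resolution input, upscale it
# externally (e.g. 400% in AI Gigapixel), then compare against the original.
# Filenames are placeholders.
import numpy as np
from PIL import Image

original = Image.open("original.tif")
reduced = original.resize((original.width // 4, original.height // 4), Image.LANCZOS)
reduced.save("reduced_quarter.tif")   # feed this file to the upscaler at 400%

# ...after upscaling externally, load the result and score it...
restored = Image.open("ai_upscaled_400.tif").resize(original.size, Image.LANCZOS)
diff = np.asarray(original, dtype=float) - np.asarray(restored, dtype=float)
print(f"RMSE vs original: {np.sqrt(np.mean(diff ** 2)):.2f}")
```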

 

The second test is an upscale of the native-resolution image, then viewed at the same size as the original:

Test 2

 

It seems that the AI works better, at least to my eye, when the resolution is first increased and then reduced back to normal, using it as a sharpener of some sort:

Test 3
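A minimal sketch of that upscale-then-downscale workflow, assuming the AI-enlarged frame has already been saved out of the application (the filenames are just examples):

```python
# Downsample an AI-upscaled frame back to its native size so the extra detail
# acts like a sharpening pass on the original-resolution image.
from PIL import Image

upscaled = Image.open("upscaled_4x.tif")                 # 400% AI result
target = (upscaled.width // 4, upscaled.height // 4)     # back to native size
sharpened = upscaled.resize(target, Image.LANCZOS)       # Lanczos reduction
sharpened.save("sharpened_native.tif")
```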


Interesting, but the performance of 'AI' (actually, in this case, it seems to be a neural net) is critically dependent on the training set used to set it up in the first place.  So I do wonder whether this would help much for astro images, unless, of course, there was the capability to define your own training sets (it seems not?)  Even then, this is a science in itself, so casual attempts to teach a network generally fail spectacularly.
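If such a capability ever appeared, the data preparation side is the easy part; it is the training itself that is hard. A hypothetical sketch of building low-res/high-res pairs from one's own calibrated frames (directory names and the 4x factor are assumptions, and this has nothing to do with how Topaz train their model):

```python
# Build (low-res, high-res) training pairs from your own astro frames, the kind
# of data a super-resolution network would be trained on. Purely illustrative.
from pathlib import Path
from PIL import Image

SCALE = 4  # teach the network to undo a 4x reduction

def make_pairs(src_dir="calibrated_frames", out_dir="training_pairs"):
    out = Path(out_dir)
    (out / "low").mkdir(parents=True, exist_ok=True)
    (out / "high").mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.tif"):
        hi = Image.open(path)
        lo = hi.resize((hi.width // SCALE, hi.height // SCALE), Image.LANCZOS)
        hi.save(out / "high" / path.name)   # ground truth
        lo.save(out / "low" / path.name)    # degraded input the network learns to restore

make_pairs()
```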

Whilst all of us imagers lean heavily on digital signal processing techniques, it's hard to compete with choosing the right resolution and acquiring masses of quality data in the first place!

 


Thanks for posting this info, Herra.

Just gave the free trial a go. It is very slow and the results are not that different from using PS. The price is way too high for what it does.

Have now deleted it.

 


What is funny is that AI deep-style networks and normal image filters aren't that far apart: they both use what are effectively kernels. I've done a fair amount of both.
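To illustrate the point: a classic sharpening filter and a convolutional layer are both just 2D convolutions with a small kernel; the only real difference is whether the weights are hand-chosen or learned. A quick sketch (the random "learned" weights are obviously a stand-in):

```python
# A hand-designed Laplacian sharpener and a "learned" CNN-style kernel applied
# the same way: both are plain 2D convolutions.
import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(64, 64)               # stand-in for a mono astro frame

sharpen_kernel = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=float)   # hand-chosen weights

learned_kernel = np.random.randn(3, 3)       # a CNN would learn these instead

classic = convolve(image, sharpen_kernel, mode="reflect")
neural_style = convolve(image, learned_kernel, mode="reflect")
```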

I always get the feeling that Topaz oversell aggressively in their adverts, almost as if the company is over-compensating.

My favourite processing approach is one where a swarm, implemented on the GPU, is used with back-propagation to estimate the PSF at a number of key points; the PSF is then interpolated across the image, PSF deconvolution is applied, and a final deconvolved, sharp image appears.

The disadvantage of this solution is that it takes hours and hours of 100% GPU time..
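For anyone curious, the general shape of the idea is something like the toy CPU sketch below: estimate a PSF per region (here a simple Gaussian width stands in for the swarm-based estimate), then deconvolve each region with its local PSF. This is not the GPU implementation described above, just an outline under those assumptions.

```python
# Toy spatially-varying deconvolution: one PSF estimate per tile, Richardson-Lucy
# applied tile by tile. Assumes a normalised (0..1) mono image whose dimensions
# divide evenly into the tile grid.
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma, size=15):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def deconvolve_tiled(image, sigmas):
    """sigmas: 2D grid of PSF widths measured at key points (e.g. from stars)."""
    out = np.zeros_like(image)
    ty, tx = sigmas.shape
    th, tw = image.shape[0] // ty, image.shape[1] // tx
    for j in range(ty):
        for i in range(tx):
            tile = image[j*th:(j+1)*th, i*tw:(i+1)*tw]
            psf = gaussian_psf(sigmas[j, i])
            out[j*th:(j+1)*th, i*tw:(i+1)*tw] = richardson_lucy(tile, psf, 20)
    return out
```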

I'm tempted to buy a separate small tower computer and put a couple of GPUs in it for experimentation, as I've bust one GPU due to heat in my MacBook Pro. However, the other half has plans too..


25 minutes ago, NickK said:

My favourite processing approach is one where a swarm, implemented on the GPU, is used with back-propagation to estimate the PSF at a number of key points; the PSF is then interpolated across the image, PSF deconvolution is applied, and a final deconvolved, sharp image appears.

Not sure I understand what you said in describing your favourite processing method. What software do you use to do the steps you describe?

Can you share an example?


If the data isn't there in the first place it is just making it up.

Some software can be better at making it up than others and produce a nicer image.

Now if it were part of the stacking process, using the raw data, it might be able to eke out more actual information rather than creating it out of thin air.
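A quick numerical illustration of that difference, using made-up numbers: stacking recovers real signal because the noise averages down roughly as the square root of the number of subs, whereas upscaling one finished image has nothing extra to draw on.

```python
# Noise in a stack of 64 simulated subs drops by about sqrt(64) = 8x compared
# with a single frame; no new information is invented, it was in the raw data.
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0
subs = true_signal + rng.normal(0, 25, size=(64, 512, 512))   # 64 noisy frames

print(f"single-frame noise: {subs[0].std():.1f}")             # ~25
print(f"stacked noise:      {subs.mean(axis=0).std():.1f}")   # ~3.1
```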

They will naturally say it is "stellar", "game changing", etc. They are hardly going to say it is 0.5% better than their rivals now, are they :)


I am not too impressed by this. It has some success at sharpening the image without exploding the noise too much, but I have a feeling that the same kind of spurious detail appears as I see in certain classical approaches (which take a lot less compute power and are more predictable than much of the AI stuff I see).


1 hour ago, Marci said:

Non-retina generation with ATI GPU by any chance?

I've bust both an nVidia one and this one, a non-retina ATI GPU - basically Apple's design isn't good enough to cope with severe workloads, as opposed to the "high" workloads that people normally create.

It was nVidia at the time - a university paper exists that demonstrates the brute-force mechanism to reverse-engineer the PSF. It essentially compared a fitted PSF against the known PSF of stars across the image.
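The "known PSF of stars" part can be sketched quite simply: cut stamps around detected stars and average them into an empirical PSF, which a brute-force fit can then work against. A rough outline, assuming star positions come from some detection step not shown here:

```python
# Average small stamps around star positions to form an empirical PSF.
import numpy as np

def empirical_psf(image, star_xy, half=7):
    """star_xy: (x, y) pixel positions of isolated stars from a detection step."""
    stamps = []
    for x, y in star_xy:
        stamp = image[y-half:y+half+1, x-half:x+half+1].astype(float)
        if stamp.shape == (2*half + 1, 2*half + 1):   # skip stars too near the edge
            stamps.append(stamp / stamp.sum())        # normalise each star to unit flux
    psf = np.mean(stamps, axis=0)
    return psf / psf.sum()
```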

I think there could be some optimisations - I've done IIR filters by pole-fitting the PSF to a single pixel. Although a simple gather works better for non-symmetric PSFs, the IIR approach for a 2D image and a 2D PSF was stupidly quick.

 

