
Bit of coding and my 383 is now an integrating video camera :D


NickK


Some solar fun .. data from nearly a year ago. 2000 frames to be precise - aligned by rotation and translation, then stacked using the GPU. I still need to sort out sub-pixel alignment and histograms, and I'll also add a 2x or 4x upscale for a specific region of interest.

12/07/2014 20:25:41.197 ExampleApplication[47679]: 2000 : Rotate: 5.368171, Translate: 4.925136, render:0.732781
So 11.026088 seconds in total for 2000 frames loaded from disk, or 181.38 frames/second of processing if the read from disk is not counted…
[attached image]
12/07/2014 20:45:01.566 ExampleApplication[47774]: 2000 : Rotate: 5.769692, Translate: 5.155306, render:0.757205,  total 11.68, or 171 frames/sec (processing count, not disk read)

[attached image]

Slightly blurry, however proper stacking and frame grading with a reject option should help considerably in future.

So this is the 383L doing planetary:

[attached images]

The second image here uses a stacking weight of 0.2 for each added frame - hence darker. I should do a normalisation on this - the images are maintained in the range 0.0 to 1.0, so this would be straightforward.
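A minimal sketch of that kind of weighted accumulation plus a final normalisation (numpy standing in for the GPU kernel; the function name and the fixed 0.2 weight are only illustrative):

```python
import numpy as np

def stack_weighted(frames, weight=0.2):
    """Accumulate frames at a fixed per-frame weight, then normalise
    the result back into the 0.0..1.0 range."""
    acc = np.zeros_like(frames[0], dtype=np.float32)
    for frame in frames:
        acc += weight * frame                  # each frame adds weight * pixel value
    peak = acc.max()
    return acc / peak if peak > 0 else acc     # rescale brightest pixel to 1.0
```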

2014-07-12 21:04:33.476 ExampleApplication[47956:303] 8 : Rotate: 0.075574, Translate: 0.056363, render:0.003151, the 17.1MB images reduce the frame rate to 59 frames/sec… 



Good stuff Nick, coming on leaps and bounds now.. Davy

Thank you :)

Just adding grading and PSF deconvolution .. this means you can set a threshold and reject frames that aren't of high enough quality, then instantly apply a PSF - although not a mass-transference deconvolution, it will do..


First result of on-the-fly grading - this allows me to set a minimum grade; frames that don't make it are rejected.

Same data as above - just with the grading throwing away duff frames. Look at the edge of the sun and the spot. Far more defined.

[attached image]
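A minimal sketch of one way to grade frames on the fly (the actual metric used isn't specified here; variance of the Laplacian is a common stand-in, and the threshold value is purely illustrative):

```python
import numpy as np
from scipy.ndimage import laplace

def grade(frame):
    """Sharpness score: variance of the Laplacian - fine detail
    boosts the score, blur suppresses it."""
    return float(laplace(frame.astype(np.float32)).var())

def keep_good_frames(frames, min_grade):
    # on-the-fly grading: reject any frame that falls below the threshold
    return [f for f in frames if grade(f) >= min_grade]
```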


Currently debugging the PSF-based deconvolution.

See the bugs :D There's definitely an alignment bug, and the PSF looks like it needs some TLC. It doesn't seem to be showing the instability effects you'd expect at the moment, so that's a worry too..

[attached image]

So with this you can create a simulated PSF for your scope - well, a single lens for the pedantic - over a range of wavelengths (needed for broadband imaging!). I should also add a QE curve so that the response % over the wavelength range is catered for. It uses a basic Bessel-based algorithm to generate the PSF from the pixel size, the focal ratio etc.
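As a sketch of that kind of Bessel-based PSF generation (an Airy pattern for an unobstructed aperture, with scipy's j1 standing in for the missing GPU Bessel routine; the parameter values are examples, not the app's actual settings):

```python
import numpy as np
from scipy.special import j1

def airy_psf(size, pixel_um, f_ratio, wavelength_nm):
    """Airy-pattern PSF sampled on the sensor grid:
    I(r) = (2*J1(x)/x)^2 with x = pi*r / (wavelength * f_ratio)."""
    lam_um = wavelength_nm / 1000.0
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.hypot(x, y) * pixel_um                # radius in microns on the sensor
    arg = np.pi * r / (lam_um * f_ratio)
    arg[half, half] = 1e-9                       # dodge the 0/0 at the centre
    psf = (2.0 * j1(arg) / arg) ** 2
    psf[half, half] = 1.0                        # analytic limit of the Airy core
    return psf / psf.sum()                       # normalise to unit energy

# broadband: sum the PSF over a range of wavelengths, each weighted
# by the sensor's QE at that wavelength (equal weights shown here)
psf = sum(airy_psf(33, 5.4, 32.0, wl) for wl in (450, 550, 650)) / 3
```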

So once this is finished (given another evening perhaps), I'll start work on the sub-pixel alignment (although it has a form of sub-pixel alignment using centroids - these aren't as accurate as an upscaled second alignment).
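For reference, centroid-style sub-pixel positioning is just an intensity-weighted mean - a minimal sketch (the background subtraction is one assumption for keeping sky noise from dragging the centroid around):

```python
import numpy as np

def centroid(img, background=0.0):
    """Intensity-weighted centroid: a sub-pixel (y, x) estimate."""
    w = np.clip(img - background, 0.0, None)   # suppress the background level
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total
```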


So far - this is looking quite interesting.

Here's the sub..

[attached image]

Here it is sharpened using an inverse:

[attached image]

Still got to locate the source of the shift… however I'm fairly sure I have boundary issues - those are the horizontal artefacts.

In fact I think I still have this set up as 1340mm focal length rather than the 3350mm this was actually taken at. Here's the difference getting the right focal length into the PSF makes! Actually the next sub is better:

[attached image]
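On the 'inverse' sharpening above: the exact filter isn't spelled out, but a typical regularised inverse in the Fourier domain looks like this (a sketch; the damping constant k is an assumption, and the roll that keeps the PSF centred on pixel (0,0) is worth noting - forgetting it is a classic source of exactly the kind of output shift mentioned):

```python
import numpy as np

def inverse_sharpen(img, psf, k=1e-3):
    """Regularised (Wiener-style) inverse filter: divide out the PSF's
    transfer function, damped by k so weak frequencies don't explode."""
    pad = np.zeros(img.shape, dtype=np.float64)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    # roll the PSF centre to pixel (0, 0) to avoid shifting the output
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(img)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```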


Mwahaha :D

Now this is subtle sharpening :) Currently waiting for the computer to generate the broadband PSF...

One of these is stacked without the scope PSF removed… the other is… can you see which is which?

[attached image]

[attached image]


Sorted out a heap of issues in what should have been simple.. over-sharpening at the moment - note the ringing:

[attached image]

Currently playing with the PSF to get a good feel for how it responds.

Solar - single sub at 6700mm f/64 .. 

[attached images]

So there's still a little to do with scaling (it looks like the image values are wrapping).


Ok, last one for a bit as everyone is probably sick of me by now ;)

This Ha data was taken at OllyP's using his 60mm Lunt. I have two images here - the first is the 158 frames pushed through the pipeline with sharpening on and sub-pixel alignment (using interpolation rather than a sub-FFT upscale drizzle), but without grading switched on or any stretching (or normalisation, which should be done!). You'll note that the first image acts as the keyframe, so only the subs that overlap the first frame are stacked.

[attached image]

The second image is a full-on Registax version using all the frames, following the drift (as the Lunt was set up on a static mount) as the sun moves across the camera. The Registax version has grading, then the frames were aligned before stacking in PixInsight and stretched as required:

[attached image]

So, I'm getting there - the difference is that the post-processing of the second took an hour.. the top one ran at the speedy rate of 'realtime':

2014-07-19 22:23:17.921 ExampleApplication[11216:303] setImage start

...

2014-07-19 22:23:27.667 ExampleApplication[11216:303] 158 : Rotate: 1.760034, Translate: 0.366592, render:0.049403

So a bit slow as it's not optimal (by a long way), and that includes the time to read the images from disk - giving me 16 frames/sec including the disk reads..


4x upsampled drizzle with sub-pixel alignment now works (click for full resolution):

[attached image]

As Paul noted with LL - simple bi-linear interpolation doesn't scale as well as cubic splines etc, but I can add those at a later date. The benefit of bi-linear is that it's supported in hardware on GPUs, so you get the speed for very little effort. Making a spline is easy enough on a GPU given its horsepower.
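For reference, the hardware bi-linear sample is just a weighted blend of the four surrounding pixels - a CPU sketch of what the GPU texture unit gives for free:

```python
import numpy as np

def bilinear(img, y, x):
    """Sample img at fractional (y, x) by blending the four neighbours,
    the same arithmetic a GPU texture unit performs in hardware."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    fy, fx = y - y0, x - x0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```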

I've also added statistics for linear stretching (the above auto-stretches each frame) and I have histogram stretches almost complete (just need a way to edit the desired correction to the curve). The non-linear stretch and splined interpolation will then give me the same faint details as the Registax version.
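A minimal sketch of a statistics-driven linear stretch (the percentile black/white points are an assumption for how the auto-stretch might pick its endpoints):

```python
import numpy as np

def linear_stretch(img, lo_pct=0.5, hi_pct=99.5):
    """Map the [lo_pct, hi_pct] percentile range linearly onto 0..1."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
```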

I could do a final sharpen on the render as well on each frame but for now.. I'm happy with that.


Here's the same process that a small camera would use to provide hi-res, but applied to Jupiter using the 383L - at f/32 the image is dim enough through my 4" APO that the exposure can be done with a mechanical shutter! At that focal length the 4" APO is sampling below its Dawes limit..
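To put rough numbers on that (assuming a 102mm aperture for the 4" APO and the 383L's 5.4µm pixels - both figures are assumptions, not stated in the post):

```python
# rough sampling arithmetic - assumed: 102mm aperture, 5.4um pixels, f/32
aperture_mm, pixel_um, f_ratio = 102.0, 5.4, 32.0

dawes_arcsec = 116.0 / aperture_mm                 # Dawes limit, ~1.14"
focal_mm = aperture_mm * f_ratio                   # ~3264mm focal length
scale_arcsec = 206.265 * pixel_um / focal_mm       # pixel scale, ~0.34"/px

# each pixel spans roughly a third of the Dawes limit, i.e. oversampled
print(dawes_arcsec, scale_arcsec)
```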

Here the system is running 8x8 drizzle in realtime. If you click on the screenshot you'll get it at full resolution - showing pretty good detail considering!

[attached image]

This time using grading to reject the below-average frames - only 4 of the 9 frames passed.

[attached image]

I can define the grading characteristics, and I think I may need to move the threshold up a little as it's still not pulling out the finer details.


Testing out on some lunar images taken at SGL8 using the 383L.

Here's a jpeg of the full frame:

[attached image]

Next, a single sub stretched using PixInsight - the zooming does bi-cubic:

[attached image]

And the final screenshot image from the app using 12 of the 19 images:

[attached image]

[attached image]

I've added the following comparison - single sub vs the rendered output, downloaded from the GPU to a file. Both auto-stretched using PI then zoomed. It shows the differences nicely:

[attached image]


Just looking at Lanczos sampling - http://en.wikipedia.org/wiki/Lanczos_resampling - this is relatively straightforward, however I think I'll add my own slant on it in the spirit of experimentation.

I have two options here - use the obscene GPU horsepower directly, or use a pre-built lookup table with linear interpolation. The GPU workgroup has local memory that a neat lookup table would quite happily fit in - GPU global memory is already faster than CPU memory access, and the local GPU memory is akin to the CPU's cache, so it gives a stupid speed increase with some limitations.
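A minimal sketch of that lookup-table option: tabulate the Lanczos-3 kernel once, then fetch values with a linear interpolation between table entries instead of evaluating two sin() terms per tap (the table size is an arbitrary choice):

```python
import numpy as np

A = 3                 # Lanczos-3
TABLE_SIZE = 1024     # resolution of the pre-built kernel table

def lanczos(x, a=A):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# one-sided table over [0, A) - the kernel is symmetric
TABLE = lanczos(np.linspace(0.0, A, TABLE_SIZE, endpoint=False))

def lanczos_lut(x):
    """Kernel value via the table plus linear interpolation - the same
    trick a GPU can run out of fast local memory."""
    t = abs(x) * TABLE_SIZE / A
    if t >= TABLE_SIZE - 1:
        return 0.0            # outside (or at the very edge of) the support
    i = int(t)
    f = t - i
    return TABLE[i] * (1 - f) + TABLE[i + 1] * f
```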


Continuing the testing, with a focus on upscaling :)

First - everything off - no pipeline processing other than alignment and stacking - image 1:1, image with max pinch expand:

[attached images]

Next, using bi-linear drizzle - image 1:1, image with max pinch expand:

[attached images]

Next, the region-based zoom - this is a 60x60 pixel region (approximately a 10:1 zoom) with bi-linear. This is at full image size (not pinch zoomed!):

[attached image]

Finally, the region-based zoom again - the same 60x60 pixel region with the Lanczos-3 horizontal/vertical method:

[attached images]

Hmm slight scaling difference - still need to work out why that's the case.

You'll note the blocking - this is because of the pixelation pattern. The new experimental method using all four pixels within a radial distance works better (less blocking and more detail as it takes more points), but I've not finished that yet.

The interesting point is that the built-in OpenCL sampling gives pretty good zoom results, but when you study the two closely the Lanczos gives better detail, just with a bit of blocking.

I also have an idea for an even faster version of the OpenCL implementation (currently it does global memory accesses rather than caching into registers).


So if you're wondering where I've disappeared to - I'm currently looking at an interesting issue with the FFT and Hann windowing: glare. This is where the glare from solar white-light images causes enough noise in the window to produce a higher energy reading from the window than from the image itself. It shows up as: every so often (i.e. 1 in 50 or 100 frames) the image FFT locks onto the window itself, giving a translation of (0,0) rather than, say, (120,-11). With Ha this is far less of an issue. The proper term is "aliasing", where the signal and the noise appear as one. The good point here is that I'm causing the noise with the window.. so it's controllable.

It's the FFT equivalent of having stars so dim that you can't see anything to align to. I'm evaluating pre-processing options to reduce the windowing effect - including window-based deconvolution (i.e. flattening the window glare, which works for translation peaks inside the window but not outside!) along with median-based background glare estimation. I'm trying not to use a basic low-frequency blocking filter, as that seems a blunt way of solving the issue.
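For context, the windowed FFT alignment in question is essentially phase correlation - a minimal sketch (the pre-flattening options above aren't shown; the Hann window here is the part that injects the glare term under discussion):

```python
import numpy as np

def phase_correlate(ref, img):
    """Estimate the (dy, dx) translation between two frames via
    phase correlation, with a Hann window softening the frame edges."""
    win = np.hanning(ref.shape[0])[:, None] * np.hanning(ref.shape[1])[None, :]
    F1 = np.fft.fft2(ref * win)
    F2 = np.fft.fft2(img * win)
    cross = F1 * np.conj(F2)
    corr = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts past the halfway point wrap around to negative values
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx
```

When residual glare dominates the windowed frames, the correlation peak parks at (0,0) - the spurious lock described above.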


So I've been continuing to play.. I couldn't do any solar yesterday - other priorities :/ - but this morning, with the wind and rain, I have been checking out some of the changes I've made.

Pointing at the bottom of the garden again - through a rain-soaked window.. with wind… The alignment is picking up the movement of the plants in the wind, and the stacking is working. PSF deconv is also set up for the 670mm focal length, but only for narrowband atm (generating a PSF over a full frequency range takes a few minutes on the CPU - OpenCL doesn't have a Bessel j0 implementation at the moment, so it can't be done on the GPU, which is a bit annoying!). The green line is the prototype focusing line - it shows the level of detail (clipping out high and low values) in each frame. It is light-level sensitive, as more light means more detail signal strength at the CCD sensor :D Both sets of images are set to renew the keyframe every 16 frames.

[attached image]

Oops - forgot to turn on sub-pixel alignment! Here's everything on (including Lanczos-3 interpolation).. the flowers are stacking to the point of burnout (I have a multiplier set up for the final rendering - the same for both images - it helps with an Apple laptop screen auto-adjusting the light levels):

[attached image]

You can see it on the graph - the increase in the green curve matches the image quality, where you can now make out the raindrops on the plant leaves.
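The exact detail metric behind the green line isn't given; a plausible stand-in is gradient energy with the extreme pixel values clipped out, which would show the same light-level sensitivity - a sketch with purely illustrative clip points:

```python
import numpy as np

def detail_score(frame, lo=0.02, hi=0.98):
    """Focus/detail metric: mean gradient energy over pixels that
    aren't clipped high or low (glare and dead areas excluded)."""
    f = frame.astype(np.float32)
    mask = (f > lo) & (f < hi)
    gy, gx = np.gradient(f)
    energy = np.hypot(gy, gx)
    return float(energy[mask].mean()) if mask.any() else 0.0
```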

