
Community Reputation

506 Excellent

About NickK

  • Rank
    White Dwarf

Profile Information

  • Gender
    Not Telling
  • Location
    Near Guildford, UK
  1. Even non-aligned 30-second exposures can give a decent result with a CCD; I haven't bothered with noise reduction here (hence the squiggly pixels):
  2. The issue with compute over USB is that the data needs to travel both ways - once as input to processing and once to read back the results. An eGPU doing the compute, with the monitor connected directly to the GPU, would be more effective, as the data then only needs to traverse the slower PCIe/ThunderThingy bus once: Data (in CPU memory) -> GPU (stored in GPU memory) -> GPU processing -> GPU results stored in memory -> GPU render to screen -> SCREEN.
  3. Another option is Linux on something like the ODroid series of small systems. Good to see someone finally seeing some merit in this style of astronomy.
  4. Yup - most astro images are 16-bit, and I found that for processing you want a wider precision. Annoyingly, most GPUs aren't fully IEEE-compliant either (for my work at the time).
  5. NoiseChisel paper (direct link): https://arxiv.org/pdf/1505.01664.pdf Interestingly, I already use the PSF to detect local noise and reject it.
  6. Just had an idea regarding blurred signal and whether the noise-vs-time behaviour can be used - not only for deconvolution and super-resolution but for noise signal analysis at the same time. Just need time to think through some maths.
  7. https://www.sciencealert.com/this-enhanced-image-has-allowed-us-to-peer-deeper-into-space-than-ever-before This looks interesting as a technique. If you dig into the process, it's described in this paper: https://arxiv.org/pdf/1810.00002.pdf
  8. Hi, sorry for the delay responding - just got back yesterday. I'll have to have a look again when I have time; I hope soon. Nick
  9. Here's the swarm-based paper: http://www.inf.ufpr.br/vri/alumni/2013-PeterPerroni/GPU-PSF-Estimation-with-CPSO-dissertation.pdf
  10. I've bust both an nVidia GPU and this one, a non-Retina ATI GPU - basically Apple's design isn't good enough to cope with severe workloads, as opposed to the "high" workloads people normally create. It was the nVidia one at the time. A university paper exists that demonstrates a brute-force mechanism to reverse-engineer the PSF - essentially comparing a candidate PSF against the known PSF of stars across the image. I think there could be some optimisations: I've done IIR filters by pole-fitting the PSF to a single pixel. Although a simple gather works better for non-symmetric PSFs, the IIR for a 2D image and 2D PSF was stupidly quick.
  11. What is funny is that AI deep-style networks and normal image filters aren't that far apart - both effectively use kernels, and I've done a fair amount of both. I've always got the feeling that Topaz oversell aggressively in their adverts, almost as if the company is over-compensating. My favourite processing is where swarming implemented on the GPU was used to back-propagate and define the PSF at a number of key points; then, interpolating the PSF between those points across the image, the PSF deconvolution is applied - and a final deconvolved sharp image appears. The disadvantage of this solution is that it takes hours and hours of 100% GPU time. Tempted to buy a separate small tower computer and put in a couple of GPUs for experimentation, as I've bust one GPU due to heat in my MacBook Pro. However the other half has plans too..
  12. Oversaturation does mean you lose measured information. Just loading the unstretched PSF and the image into PI, channel splitting, running separate RGB deconvolution, then recombining (losing some detail in PNG) - looking carefully I think it could work, but I suspect trying to source from PNG has already killed it:
  13. I've already implemented this - one fast option that works well on GPU is phase correlation. Do a 2D FFT of the image and of the PSF, then do an image-wide phase correlation. There will now be a pixel-by-pixel map - look at the phase correlation for the suspected hot pixel and voilà! In the case below I'm using the guide star as the PSF rather than a Gaussian. Here: if you look at the first image, at the top you can see noise (a hot pixel). If you process without using PSF phase correlation to detect the local noise, you get image 2, where you can see noise processed into the image. With PSF phase-correlation noise removal you get the 3rd. The 4th is an image from aladin.fr to demonstrate what is noise vs signal. In this scenario I use a quick detector to find the suspects, then apply the check only as needed, for speed.
  14. Nice use of convolutional networks to remove point noise: https://arxiv.org/pdf/1612.04526.pdf - although this can also be done against the PSF (i.e. point noise appears unaffected by the PSF that blurs the rest of the image).
  15. A nice summary - including depth of focus using coded apertures: http://kiss.caltech.edu/workshops/imaging/presentations/fergus.pdf Which got me thinking: if we use a coded aperture, could we remove near-field issues such as atmospheric distortion? If we can detect the ranges of objects, we could reject those outside the range over time, and thus remove the error. For lunar images this would be interesting: using a split prism, it could be possible to have one camera with a coded aperture estimate the error while the main camera produces the image; post-processing then delivers the corrected image. For deep space the image is at infinity, so anything coded outside of that range indicates non-deep-space data.
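The spatially varying deconvolution in post 11 - back-propagating to estimate the PSF at key points, then interpolating between those PSFs across the image - can be sketched as follows. This is a minimal one-axis sketch, not the swarm/GPU implementation the post describes; the `gaussian_psf` helper and the two edge PSFs are assumed stand-ins for PSFs actually estimated from the data:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalised Gaussian kernel, standing in for a measured PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def interpolate_psf(psf_a, psf_b, x, width):
    """Linearly blend two key-point PSFs for image column x in [0, width)."""
    t = x / (width - 1)
    blended = (1.0 - t) * psf_a + t * psf_b
    return blended / blended.sum()        # keep the kernel normalised

# PSFs "estimated" at the left and right edges of a 1024-pixel-wide frame
left = gaussian_psf(9, 1.0)               # tight PSF near the optical axis
right = gaussian_psf(9, 2.5)              # broader PSF towards the edge
mid = interpolate_psf(left, right, 512, 1024)
```

Deconvolution would then run tile by tile, each tile using the PSF interpolated at its centre - which is where the hours of 100% GPU time mentioned in the post go.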
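The channel-split workflow in post 12 - load the unstretched PSF and image, split into R, G and B, deconvolve each channel separately, recombine - can be sketched with a frequency-domain Wiener filter. This is an illustrative stand-in, not PixInsight's actual deconvolution algorithm; the regularisation constant `k` and the synthetic test scene are assumptions:

```python
import numpy as np

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def psf_otf(psf, shape):
    """Zero-pad the PSF to the image size and centre it at the origin,
    so the transfer function applies no spatial shift."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                     axis=(0, 1))
    return np.fft.fft2(padded)

def wiener_deconvolve(channel, psf, k=1e-3):
    """Wiener deconvolution of one channel; k regularises against noise."""
    H = psf_otf(psf, channel.shape)
    G = np.fft.fft2(channel)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + k)))

def deconvolve_rgb(image, psf, k=1e-3):
    """Channel split -> per-channel deconvolution -> recombine."""
    return np.dstack([wiener_deconvolve(image[..., c], psf, k)
                      for c in range(image.shape[-1])])

# demo: blur a smooth synthetic scene, then recover it channel by channel
psf = gaussian_psf(9, 1.5)
H = psf_otf(psf, (64, 64))
blob = gaussian_psf(64, 3.0)
blob = blob / blob.max()                  # smooth test scene, peak value 1
sharp = np.dstack([blob, 0.8 * blob, 0.6 * blob])
blurred = np.dstack([np.real(np.fft.ifft2(np.fft.fft2(sharp[..., c]) * H))
                     for c in range(3)])
restored = deconvolve_rgb(blurred, psf, k=1e-4)
```

As the post notes, doing this from a PNG loses precision before the deconvolution even starts; in practice you would feed it unstretched, full-bit-depth channels.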
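The phase-correlation idea in post 13 can be sketched in NumPy: take the 2D FFT of the image and of the PSF, normalise the cross-power spectrum so only phase remains, and inverse-transform to get a pixel-by-pixel correlation surface. The demo below shows the mechanics - locating a PSF-shaped feature by its correlation peak - rather than the full hot-pixel rejection pipeline; the Gaussian stand-in for the guide-star PSF is an assumption:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Stand-in for the guide-star PSF used in the post."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def phase_correlate(image, template):
    """Phase correlation: FFT both inputs, keep only the phase of the
    cross-power spectrum, inverse-FFT to a correlation surface."""
    F1 = np.fft.fft2(image)
    F2 = np.fft.fft2(template, s=image.shape)   # zero-pad the template
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12              # discard magnitude
    return np.real(np.fft.ifft2(cross))

# demo: a PSF-shaped star is located by a sharp correlation peak
psf = gaussian_psf(9, 1.5)
img = np.zeros((64, 64))
img[20:29, 30:39] += psf          # star centred at (24, 34)
surface = phase_correlate(img, psf)
peak = np.unravel_index(np.argmax(surface), surface.shape)
# peak is the star's offset from the template origin at (4, 4)
```

In the post's pipeline, a cheap detector first flags suspect pixels, and only those locations are checked against the correlation map, which keeps the cost down.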