Everything posted by NickK

  1. Just starting to write a full image version. Here's the guider star as a PSF. Just don't attempt mesh() with a long exposure... zzzz. Also, there's a library called ATLAS that can be compiled on your system; if Octave finds it, it will supposedly use it. Essentially, ATLAS provides multi-core/threaded support for maths functions.
  2. So if we take a single-width line, we can convolve it with a simple point spread function that simulates our atmosphere. Then we add these convolved stars into our picture at positions 23, 400 and 850, and add random noise of up to 20% to simulate the camera noise. I've also added 20% noise to the PSF above, so we have a noisy guider PSF star. Next we use phase correlation to detect the PSF within the picture. The output here is the correlation, where we notice three peaks over 90% correlated; these are at positions 23, 400 and 850! Let's take the noisy image, cut the 60-pixel PSF sub-images out of it and stack them: we see the noise level of the averaged PSF drop. The more stars we have (i.e. the more PSF sub-images we have), the more we can average out and remove the noise. You'll notice the noise is dropping; it's now about 10%. This process then allows us to use the newly stacked sub-images as a new average PSF (here's where the x,y variation means our error is growing). Alternatively, we can simply map out the variation over x,y based on the image stars, so we can build a function(x,y) that describes the atmosphere. We can also use the new PSF (plural, depending on whether we want x,y) to ask "how much of this pixel is noise?" by looking at the correlation strength: the lower the correlation, the higher the probability of noise. We can then use a reconstruction filter to process the noisy image using the PSF, and also deconvolve the picture! The above is a simplistic example due to the speed of Octave (it's single-threaded).
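A minimal 1-D sketch of the detection step described in the post above, assuming numpy. The post uses phase correlation; for robustness at this noise level the sketch uses a plain FFT cross-correlation (matched filter) against the PSF instead, which it names plainly. All names, sizes and thresholds are illustrative, not the original Octave code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

# A 61-tap Gaussian "atmosphere" PSF, normalised to unit sum.
taps = np.arange(61) - 30
psf = np.exp(-0.5 * (taps / 4.0) ** 2)
psf /= psf.sum()

# Three unit stars at the positions from the post, blurred then noised (~20%).
signal = np.zeros(n)
signal[[23, 400, 850]] = 1.0
blurred = np.convolve(signal, psf, mode="same")
noisy = blurred + 0.2 * blurred.max() * rng.standard_normal(n)

# Circular cross-correlation with the PSF via the FFT. The PSF is rolled so
# its centre sits at index 0, making correlation peaks land on star positions.
kernel = np.zeros(n)
kernel[:61] = psf
kernel = np.roll(kernel, -30)
corr = np.fft.ifft(np.fft.fft(noisy) * np.conj(np.fft.fft(kernel))).real

# Keep samples above 70% of the peak and collapse each contiguous run into
# a single detection (the strongest sample in the run).
cand = np.flatnonzero(corr > 0.7 * corr.max())
groups = np.split(cand, np.flatnonzero(np.diff(cand) > 10) + 1)
detected = sorted(int(g[np.argmax(corr[g])]) for g in groups)
print(detected)  # three peaks at, or within a couple of pixels of, 23, 400, 850
```

The stacking step from the post then follows naturally: slice a window around each detected position out of `noisy` and average the slices to get a lower-noise PSF estimate.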
  3. I forgot to update my email address and home address... so I tried to get them to change it over email... no response.
  4. Thinking about whether this could be used to clean up images statistically, although they're of more use in that harder research! Most commercial QC work is focused on quantum-inspired algorithms, though. I spent the last 6 months founding, building and driving one of the world's largest banks' quantum computing forums (financial, data science and security). I don't have the post-doc maths/statistics required; the forum had members with post-docs in probability, statistics, spin, semiconductor quantum ducting, and quantum and particle physics.
  5. I have been away... playing with the idea of building a DAC for my Mac mini with headphones from scratch (an R2R ladder DAC, for the audio peeps). The fun part is that I spent a week solidly researching the maths behind filtering, which is extremely closely related to images: interpolation, FIR/IIR and (de)convolution are all seen as filters, for example. This morning, with a head full of Octave sinc filters, it occurred to me that my deconvolution technique detailed previously in the thread isn't far from a possibly new noise-removal method for astro images. In audio filtering the FFT is used because sound is a cyclic sine-wave pattern, and that fact means noise, in the form of random sample noise, can be removed relatively easily. If we want to remove CCD noise, then we look for patterns in the image signal (the analogue of the sinusoidal cycles in audio). It occurred to me that every bit of light travels through the atmosphere, with a blurring point spread function. This represents a repeating pattern... a cycle of point spread functions (ignoring turbulence variation over x,y for the time being). That means we can distinguish signals that have travelled through the atmosphere from those that have not (noise from the CCD). That got me thinking: we could make a system that uses the atmospheric blurring in a long exposure to reduce noise. The PSF noise from a guide star, or from the image, can be reduced by stacking the stars we've identified in the long exposure; this can then be used repeatedly to reduce noise in the PSF itself. (There is a PSF error here too, based on x,y turbulence, but let's ignore that; we'll just average it.) Using this lower-noise PSF, we can re-run the phase correlation on the original image and use the correlation strengths and the PSF as a reconstruction filter. Hey presto, a new reduced-noise image. As part of the reconstruction filter, it could also deconvolve.
I'm going to investigate this using a Jupyter Notebook with Octave (Python sucks): first the maths, using a 1D slice from a real image, then the same for a full 2D image. Given Octave is single-threaded and memory-based, it may run out of memory, so I may have to code it up in C/C++ with FFTW.
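A minimal 1-D sketch of the "reconstruction filter" step, assuming numpy and a constant noise-to-signal ratio: this is a plain Wiener deconvolution, one standard way to realise the idea. The function name and the NSR constant are illustrative, not from the post.

```python
import numpy as np

def wiener_deconvolve(observed, psf, nsr=0.01):
    """Deconvolve `observed` by `psf`, damping bands where noise dominates."""
    n = len(observed)
    kernel = np.zeros(n)
    kernel[:len(psf)] = psf
    kernel = np.roll(kernel, -(len(psf) // 2))  # centre the PSF at index 0
    H = np.fft.fft(kernel)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter
    return np.fft.ifft(np.fft.fft(observed) * G).real

# Noiseless demo: a single star blurred by a Gaussian PSF sharpens back up.
x = np.arange(61) - 30
psf = np.exp(-0.5 * (x / 4.0) ** 2)
psf /= psf.sum()
star = np.zeros(1024)
star[400] = 1.0
blurred = np.convolve(star, psf, mode="same")
restored = wiener_deconvolve(blurred, psf)
```

With real, noisy data the `nsr` term is what keeps the inversion stable: raising it suppresses the high-frequency bands where the CCD noise lives, at the cost of less sharpening.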
  6. @jbrazio the motor heat on this is a function of voltage and power. This is why I use a current-chopper controller (DRV8825): it chops the current at a very high rate, resulting in very little heating of the coils. The stepper is stone cold even after hours of movement at 12V, with load, on a 3.8V motor. Could you add support for two limit switches? Open until the focuser approaches its limit at either end, then closed, stopping the focuser unless a command to move away from the limit is received. In my original focuser I made the controller cycle from limit to limit on startup/reset so that it knew the range and scale available. For multi-night runs, the software may need checking to see if it allows a re-calibration step at the start of the session; the focuser could then re-check the limits.
  7. Deconvolution is an interesting problem - typically a PSF is created and used to deconvolve the large image. It seems that this has moved on quite a bit over the last few years. Plenty of tensor or GPU versions, but this one stands out: https://www.hindawi.com/journals/jcse/2008/530803/ The simulation and parameterisation used for simulated annealing can also be run on a quantum annealer (given the embedding and the constraints of mapping the algorithm to the topology). Essentially, by using annealing to find the lowest energy point (i.e. the best deconvolved image), the same could be run easily both on digital annealers (e.g. Fujitsu's Digital Annealer) and on the likes of D-Wave etc. (see previous points on constraints). The result is a system that could estimate the PSF from an image to deconvolve it without needing to consume vast amounts of computational time. Would love to hear from the academics.
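A toy sketch of the annealing idea above, assuming numpy. This is not the linked paper's algorithm: it anneals a single parameter (a Gaussian PSF width) against a known synthetic scene, purely to show the energy-minimisation loop; the energy function, cooling schedule and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_psf(sigma, taps=61):
    """Normalised Gaussian PSF of width `sigma`."""
    x = np.arange(taps) - taps // 2
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

# Synthetic observation: a sparse star field blurred by the "true" PSF (sigma=3).
truth = np.zeros(512)
truth[[60, 200, 330, 450]] = rng.uniform(0.5, 1.0, 4)
observed = np.convolve(truth, gaussian_psf(3.0), mode="same")

def energy(sigma):
    """Sum-squared residual between observation and a re-blur at width `sigma`."""
    model = np.convolve(truth, gaussian_psf(sigma), mode="same")
    return float(np.sum((observed - model) ** 2))

# Simulated annealing: random proposals, Metropolis acceptance, geometric cooling.
sigma = 7.0                     # start far from the true width
E_cur = energy(sigma)
best_sigma, best_E = sigma, E_cur
temp = 1.0
for _ in range(3000):
    cand = abs(sigma + rng.normal(0.0, 0.4)) + 1e-3   # keep width positive
    E_cand = energy(cand)
    if E_cand < E_cur or rng.random() < np.exp((E_cur - E_cand) / temp):
        sigma, E_cur = cand, E_cand
        if E_cur < best_E:
            best_sigma, best_E = sigma, E_cur
    temp *= 0.996               # geometric cooling schedule

print(round(best_sigma, 2))     # converges towards the true width of 3
```

A real blind-deconvolution annealer would search a much larger state (the PSF itself, or PSF plus image), which is exactly where the embedding and topology constraints mentioned above start to bite.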
  8. For the record, it appears R and Octave (a MATLAB clone) are single-threaded. The libraries that perform functions may be multi-threaded internally if they support it (unlikely for most), and Octave's parallel package provides ndpar_arrayfun() and pararrayfun() for parallelisation over n-dimensional arrays. In Python you still have to essentially manage the parallel processes operating on the data yourself, as you would in C++ or Objective-C++.
  9. The issues come when the PSF for an extended, saturated star is deconvolved, as you're then forced to either rescale or expand the range. I want to keep the detail. On the OSC, it's probably better running a video with a constant movement across the matrix: the frames and offsets from the movement should give a better image, akin to super-resolution, filling in the gaps of missing colour data. I could simply take the average value, removing the outliers. It may be faster, but then you have information that is not being used.
  10. My thinking is whether you can use some maths to better estimate the noise levels. For example, here they are using gems and thieves; the same could be viewed as a pixel and the composition of noise - how much is signal, how much is signal overlapping from elsewhere (the PSF), and how much is the various noise.
  11. Yes, it's possible to use the GPU to inspect the noise - you don't need a neural network per se, as you could simply use a kernel to detect high values vs local values. Better still (and a method I've used in the original code for this) is to use the guider image to detect whether the noise is a remote object by phase correlation - if there's no correlation, then it doesn't look like light that has come through the atmosphere. This was from 5 years ago... using the GPU in the laptop with FFT phase correlation to align the image (you'll note the image stays reasonably steady but the white borders move). However, this is post-processing, as the GPU in most laptops/cards doesn't have the required floating-point range. I could use an eGPU with the mini (the original plan) but at the moment... this will do. Oh - if you like solar processing, there was an nVidia deconvolution using swarms to create the PSF to help improve the image.
  12. The annoying piece is I keep having to re-write the code - first Grand Central, then OpenCL, then C++ because OpenCL became unsupported, and Metal is now the in-thing; however I need wider precision and Metal is single-precision floating point. So... it's either write for Octave (a free MATLAB clone) or Python. I understand Python is interpreted, as is Octave, but it seems more scalable with a move to arrays with pre-coded libraries. The old code did phase correlation with the PSF from the guide star in the Z axis to rebuild saturated stars. This works well but expands the dynamic range (hence needing a larger range). I did realtime GPU-based alignment and stacking too, but this is more about processing offline, concentrating on the maths rather than re-writing based on Apple's whims. The reason I've picked something Linux-friendly (Python) is that I have an ODroid 4-core ARM that runs INDI + KStars and performs plate solving all onboard.
  13. I've been away for too long; as it happens I've been made redundant again... but not before spending 6 months playing with Python and quantum computing as part of work. So that got me thinking... and playing. I've started looking at using Python with astropy (astropy.org), which can load FITS images. Coupled with a Jupyter notebook, it seems to work. As my MBP finally died, I've now switched to a Mac Mini i7 12-core with 32GB RAM, which seems happy as Larry. So my intent is to move the GPU/C++ code I had for processing images into Python and continue working on it whilst looking for a new job. I also have some new ideas for processing.
  14. Even non-aligned 30-second exposures can give a decent result with a CCD; I don't bother with noise here (squiggly pixels):
  15. The issue with compute over USB is that the data needs to travel both ways - once as input for processing and once for reading back the results. An eGPU with compute, where the monitor is connected to the GPU, would be more effective, as the data only needs to traverse the slower PCIe/ThunderThingy bus once: Data (in CPU memory) -> GPU (stored in GPU memory) -> GPU processing -> GPU results stored in memory -> GPU render to screen -> SCREEN.
  16. Other options include Linux on something like the ODroid series of small systems. Good to see someone finally sees some merit in this style of astronomy.
  17. Yup - most astro images are 16-bit, and I found that for processing you want a wider precision. Annoyingly, most GPUs are not IEEE-compliant either... (for my work at the time)
  18. NoiseChisel paper (direct link): https://arxiv.org/pdf/1505.01664.pdf Interestingly, I already use the PSF to detect local noise and reject it.
  19. Just had an idea w.r.t. blurred signal and whether noise vs time can be used - not only for deconvolution and super-resolution but for noise-signal analysis at the same time. Just need time to think through some maths.
  20. https://www.sciencealert.com/this-enhanced-image-has-allowed-us-to-peer-deeper-into-space-than-ever-before This looks interesting as a technique... if you dig into the process, it's described in this paper: https://arxiv.org/pdf/1810.00002.pdf
  21. Hi, Sorry for the delay responding - just back yesterday. I'll have to have a look again when I have time. I hope soon. Nick
  22. Here's the swarm-based paper: http://www.inf.ufpr.br/vri/alumni/2013-PeterPerroni/GPU-PSF-Estimation-with-CPSO-dissertation.pdf
  23. I've bust both an nVidia GPU and this one, a non-Retina ATI GPU - basically Apple's design isn't good enough to cope with severe workloads vs the "high" workloads that people normally create. It was nVidia at the time - a university paper exists that demonstrates the brute-force mechanism to reverse engineer the PSF. It was essentially fitting a PSF against a known PSF of stars over the image. I think there could be some optimisations - I've done IIR filters by pole-fitting the PSF to a single pixel. Although a simple gather works better for non-symmetric PSFs, the IIR for a 2D image and 2D PSF was stupidly quick.
  24. What is funny is that AI deep-style networks and normal image filters aren't that far apart: they both use what are effectively kernels. I've done a fair amount of both. I always get the feeling that Topaz oversell aggressively in their adverts, almost as if the company is over-compensating. My favourite processing is where swarming, implemented on the GPU, was used to back-propagate to define the PSF at a number of key points; then, interpolating the PSF across the image, PSF deconvolution is applied - and a final deconvolved sharp image appears. The disadvantage of this solution is that it takes hours and hours of 100% GPU time... I'm tempted to buy a separate small tower computer and put in a couple of GPUs for experimentation, as I've bust one GPU due to heat in my MacBook Pro. However, the other half has plans too...
  25. Oversaturation does mean you lose measured information. Just loading the unstretched PSF and the image into PI, channel-splitting, running separate RGB deconvolutions, then recombining (losing some detail in the PNG): looking carefully I think it could work, but I think trying to source from PNG has already killed it: