Posts posted by NickK

  1. Not much progress since the last update; the job has been OTT busy :/

    I have just watched an interesting lecture on deep learning for image processing. The interesting bit is that the model they use for identification looks similar to what I'm doing here, i.e. using convolution filters and then a 3D output of correlations for the neural net. The difference is that I'm using an FFT over the full image, whereas they're using linear sub-images that are down-sized. In essence they're doing a correlation and then working out whether the pattern of correlations from multiple feature filters = man, dog or raspberry, based on the net recognising the location and the object input patterns it's trained on.
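
    In case it helps anyone reading, here's a minimal numpy sketch of the FFT full-image correlation idea (illustrative only; the function and variable names are mine, not the lecture's or my actual pipeline's):

    import numpy as np

    def fft_correlate(image, kernel):
        """Correlate a full image with a feature kernel via the FFT.

        Equivalent to sliding the kernel over every pixel, but done in
        one pass in the frequency domain (the correlation theorem).
        """
        F_img = np.fft.fft2(image)
        # Zero-pad the kernel to the image size; multiplying by the
        # conjugate spectrum gives correlation rather than convolution.
        F_ker = np.fft.fft2(kernel, s=image.shape)
        corr = np.fft.ifft2(F_img * np.conj(F_ker)).real
        return np.fft.fftshift(corr)

    The deep-learning pipelines do the same thing with many small kernels over down-sized tiles; here it's one kernel over the whole frame.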

    The job has a (legal) mandatory 2-week vacation, so as part of that I get to tune out, and one of the things on the list is to have a think about this in further detail.

     

  2. I found yet another negative aspect: any noise in the PSF becomes valid signal at stacking.

    I have an idea to remove the noise from the PSF (as I do with the main image), which should improve things massively.
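
    Something along these lines (a rough sketch of the idea, assuming the PSF is a small numpy stamp; the border-based threshold here is my placeholder, not the final method):

    import numpy as np

    def clean_psf(psf, n_sigma=3.0):
        """Zero out PSF pixels that sit at the noise floor.

        Background statistics are taken from the outer border of the
        stamp, which should hold no real signal (assumed approach).
        """
        border = np.concatenate([psf[0, :], psf[-1, :], psf[:, 0], psf[:, -1]])
        floor = border.mean() + n_sigma * border.std()
        cleaned = np.where(psf > floor, psf, 0.0)
        return cleaned / cleaned.sum()  # re-normalise so total flux is preserved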

    Now that I'm comfortable the processes are good, after the noise work I may start being a bit more rigorous and optimise (possibly GPU it).

  3. So... I've been playing more..

    Centre is a 3D plot of the input area, showing the input with stars, including a saturated star (the large sloped one). The output looks like it loses detail, but that's down to scale: the right plot shows the rebuilt saturated stars and the deconvolution, and the other shows an autostretched, lower-clipped version (you can see the base of the PSF as a weird circle), simply because the full-scale plots of the saturated stars really dwarf the normal signal level!

    So the output needs double precision! I'm currently using TIFF but I think I may need to switch to FITS!
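
    For example, with astropy (one possible way to keep double precision on disk; the array and file name are just illustrative):

    import numpy as np
    from astropy.io import fits

    result = np.zeros((1024, 1024))           # stand-in for the deconvolved output
    hdu = fits.PrimaryHDU(result.astype(np.float64))  # stored as BITPIX = -64
    hdu.writeto('deconvolved.fits', overwrite=True)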

     

    Screen Shot 2016-12-07 at 20.16.37.png

     

    So, improvements: there is still a little noise being added by the PSF (the little peaks near the larger stars), which is next on the hit list!

     

  4. Found another, slightly larger bug in my logic: phase correlation measures phase strength, so scale is irrelevant. Hence if you scale a PSF to estimate, it will simply tell you the same thing: all scales have the same correlation.

    Now if you do an (x,z)*(y,z) correlation at the centre point, that will give you the scale correlation without needing to estimate using iterations..

    #start coding...

    (x,y) correlation will give you the pinpoint of the PSF to cover the saturation with, but it ignores scale (which is why this method is useful); for incremental, step-based iterative estimation it's not the complete solution.

    (x,z) and (y,z) correlation should then take the 3D data in slices to estimate the scale (rather than attempting to do a full 3D FFT), and thus should be faster.
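
    For reference, a minimal numpy sketch of the plain (x,y) phase correlation; the cross-power spectrum is normalised to unit magnitude, which is exactly why scale drops out (names are mine, illustrative only):

    import numpy as np

    def phase_correlate(a, b):
        """Find the (y, x) offset of b within a by phase correlation.

        Normalising the cross-power spectrum to unit magnitude keeps
        only phase, so any scaling of b gives the same peak location:
        the 'bug' described above.
        """
        Fa = np.fft.fft2(a)
        Fb = np.fft.fft2(b, s=a.shape)
        cross = Fa * np.conj(Fb)
        r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        return np.unravel_index(np.argmax(r), r.shape)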

  5. So I've had a play over the last couple of nights, including leaving the laptop running (30 mins per frame! CPUs are slow!), then used PI drizzle stack and autostretch:

    drizzle_integration_Preview01.png

    The PNG loses it a little; the PI 64-bit version is a little more subtle on the noise square (I've yet to sort out the noise level adjustment, so this is with noise).

    The two galaxies have appeared with some additional shape (in PI and a little above).

    I've also optimised this a little (hence only 30 minutes!).. but there's still some more to go in terms of algorithm and optimisation.

  6. 17 hours ago, freiform said:

    Hi all,

    interesting thread. I played around with INDI on a RPi and connecting to it from my laptop. While everything was fine control-wise, I had issues displaying images, i.e. to judge focus, field or whatnot. The (raw) images loaded very slowly, which I found rather annoying. I read on the INDI forums [1] that a good alternative might be using USB/IP [2], but I didn't get around to testing it. How do you handle this with remote setups? Or isn't this an issue for you?

    Sven

    [1] http://www.indilib.org/forum/general/800-raspberry-image-download-duration.html

    [2] http://usbip.sourceforge.net/

     

    I use the C2 with both Ekos and INDI on the same embedded board, running bleeding edge.

    INDI is the driver and control process, not the display/analysis/guiding of images.

    Ekos does the capture and control, and this seems fine; however, the FITS viewing is likewise slow.

    An example of file-system slowness can be seen when using astrometry.net. With annotated images the process takes minutes; without annotation, just the RA/DEC results etc., the process takes 14s with the entire 32GB set of indices in use. Astrometry is a separate project from INDI and Ekos, so the only shared components are open-source libraries such as cfitsio and a few others, along with the operating system.

    I have pointed out the efficiency aspect of INDI handling large images before (i.e. the architecture makes it inefficient); however, this would point to it being more the FITS handling and the underlying OS file-system performance.

     

  7. What I have noted is that Linux is just as susceptible to problems with USB-serial devices.

    For example, with an Arduino and serial issues: if you disconnect in INDI, then unplug and replug the Arduino's USB, the driver simply creates another /dev/ttyACM0 -> /dev/ttyACM1 -> /dev/ttyACM2 ... and eventually the device link fails, requiring a reboot (as it's a kernel-space driver). It also randomly stops working.

    I never had a problem like that in OSX or Windows. OSX seems very reliable, but that is likely down to the CP210x driver itself on the host system; not INDI's and not Arduino's fault.

  8. From my understanding, the INDI folks have a few things they're looking at:

    * lighter capture functionality

    * INDI management through web interfaces rather than always needing a big application

    Just remember that there are some companies such as CloudMakers that are creating additional components that are quasi-commercial but outside of the INDI tree.

     

  9. On 2 June 2016 at 23:09, ajk said:

    So, installing INDI. I had some Ubuntus lying about but the install failed; I noticed INDI wanted 15.04 minimum for their latest code.

    So, download 16.04 Ubuntu ISO. Hour later and the VM is running. Let's install INDI now....

    
    root@ubuntu:~# apt-add-repository ppa:mutlaqja/ppa
      Latest INDI Library and drivers!  More info: https://launchpad.net/~mutlaqja/+archive/ubuntu/ppa
      Press [ENTER] to continue or ctrl-c to cancel adding it
    gpg: keyring `/tmp/tmp_y2chf3d/secring.gpg' created
    gpg: keyring `/tmp/tmp_y2chf3d/pubring.gpg' created
    gpg: requesting key 3F33A288 from hkp server keyserver.ubuntu.com
    gpg: /tmp/tmp_y2chf3d/trustdb.gpg: trustdb created
    gpg: key 3F33A288: public key "Launchpad INDI" imported
    gpg: Total number processed: 1
    gpg:               imported: 1  (RSA: 1)
    OK
    root@ubuntu:~# apt-get update
    <snipped lots of updates>
    root@ubuntu:~# apt-get install indi-full
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested
    an impossible situation or if you are using the unstable distribution that
    some required packages have not yet been created or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
      indi-full : Depends: indi-asicam but it is not installable
    E: Unable to correct problems, you have held broken packages.

    I believe this is the main reason people pay for Windows (whilst students with plenty of free time don't). If someone wants to let me know when INDI becomes installable on Ubuntu, I will continue with the focuser driver development. (And please, no, don't ask me to use another distro; if it doesn't work on Ubuntu, it's "too exclusive" a requirement for me.)

    [edit: yes, I Googled it and nothing obvious sprang up in the way of a fix for Indi. I guess I could get the source code and compile from scratch but I'd like the main user installs to at least work before putting effort into a project]

    [edit2: developer manual recommends Kubuntu (http://www.kubuntu.org/getkubuntu/) for actual development. Downloading ISO now and will try that. If it works then this would be my recommended distro if you want to use Indi. If it doesn't work I'll ping the maintainers and bark at them a bit :) ]

     

    This is the problem: indi-asicam uses binaries that are provided by the manufacturer for specific architectures, so those like my ARMv8 64-bit will fail to build or won't have a pre-built version in the repository.

  10. 21 hours ago, psamathe said:

    (Sorry, just noted who I am quoting/responding to, so it's not for me to be pointing out where Software Bisque are aiming to go INDI-wise and development-wise; but TheSkyX's (whole package, NOT specific driver support) development status on Mac is another matter.)

    I developed a driver for ATIK on OSX and also rewrote TheSkyX plugin for ATIK on OSX for Software Bisque/ATIK.

    They both have different approaches and challenges. For me, though, one thing about the separate user-level drivers in apps is that if one component fails, it's faster to restart just that component than to effectively shut down the guider and the capture, then restart and re-align. That's less of a problem if you have more predictable clear skies compared to the cloud-ridden and AP-limiting UK climate! INDI's development came out of observatory use, globally, in clearer locations. Now, with adoption by more "amateur" users, new requirements appear.

     

  11. My only concern with Ekos is that, because it seems bound to kstars, if kstars fails (due to a bug) it takes out the guiding/tracking/scheduling component of the system, so a failure 5 minutes into an 8-hour unattended session could cost you the rest of it. However, with other apps it's almost the same, and the growing number of users means the system gets heavily tested; hence I'll accept that risk.

     

  12. One thing that is apparent in doing this method: you need good tracking! I was going to add some more commentary to the shot/post above but didn't have time.

    With this method, the guider tracking method normally used is "centroid mass": the system looks for the brightest, largest mass and then works out its centre point (see the sketch below). This works if your target isn't subject to atmospheric convolution. Most will then use a 2-second or so tracking shot to guide on.
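
    For clarity, centroid-mass guiding boils down to an intensity-weighted mean of pixel positions; a minimal sketch (my illustration, not any particular guider's code):

    import numpy as np

    def centroid(stamp):
        """Intensity-weighted centre of mass of a star stamp."""
        stamp = stamp - stamp.min()        # crude background removal
        ys, xs = np.indices(stamp.shape)
        total = stamp.sum()
        return (ys * stamp).sum() / total, (xs * stamp).sum() / total

    The wobble problem is that this centre moves frame to frame with the seeing, so guiding ends up chasing atmospheric motion as well as mount error.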

    I know that people would suggest that utilising the star shapes in the final long exposure is the same, i.e. that the sum is the same as taking the guider images; however, from what I've seen it's more of a simplistic approximation that does not provide the fainter detail. The final image loses some possible signal into the noise due to the approximation.

    Is this a bad thing? No, it just misses what is there.

    So if your tracking is that good you don't need guiding, and then this method becomes even more valuable in that its results are more reliable. In the shot above, ignoring the mismatch of noise floor that causes the boxes, the integration confirms the problems that "integration" using star matching actually has with wobbling guider stars.

    I'm starting to lean more towards guider-less operation and mount accuracy being the ultimate way to really recover detail in images. With my recent interest in automation and stepper motor work for the focuser, I'm starting to think that making a direct drive mount may be the way to get this, then using this method to wring out the final detail possible for the scope's size.

    I can see why professional astronomers use a laser to provide a false star in this sense, but that is not an option, and I think this method with a direct drive is possibly the way to go.

  13. The ODroid C2 thread I've pointed at uses Ubuntu Mate 16.04 64bit for ARM. If you use Intel/AMD then there is more support.

    The pre-built ISO for the VM is probably a good way to test the move into the world of Linux. Oracle's VirtualBox will work nicely if your machine has enough memory/clout.

    Once you have that working you can switch over as fast as you want.

    The main things on Linux are:

    * distribution - the collection of kernel and packages that are bundled together to make the version..

    * kernel - the underlying OS version, this can be important for drivers etc .. if something says "recompile the kernel" then find a different path (it's convoluted and ongoing support is on your own head).

    * desktop manager - some have different requirements, others have links into applications that mean things don't always work as they should if funky features are used.

    * package management - this is *the* kicker for most.. each distribution has its own "tool", just to make it a pain..

    * architecture - this is the CPU target (the type, and 32/64-bit for most). Stick with Intel/AMD and you'll have a lot of support..

    The cleverer you attempt to get, the more you'll find you're managing Linux rather than doing any stargazing. Linux can sometimes be a hard lesson in "it works for me..."

     

  14. I figured I'd had enough of playing with a single frame and attempting to pick details out of the output.. so I ran it overnight (the CPU-based code is a little slow).. the i7 finished at 03:40 this morning.

    This is autostretched in PI, and I know why the boxes are there (that's the easier part to remove).. I did 37 images, registered them in PI and did a simple integration:

    PI-integration_27.png

     

    The fun is this bit... galaxies that don't appear very well in the normal integration:

    Screen Shot 2016-05-25 at 07.26.44.png

  15. I've started playing with this again late last week on the train - with some decent progress in terms of the noise level matching.

    I may have a look at the sigma stretching that the objective FITS library does (I now use raw), because it seems to provide a good stretching mechanism for human-viewed images.
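
    Roughly what I understand by sigma stretching (my own sketch of the idea, not that library's code): clip at a few standard deviations around the median, then rescale to [0, 1]:

    import numpy as np

    def sigma_stretch(img, low=-2.0, high=5.0):
        """Clip at median + [low, high] sigmas, then rescale to [0, 1]."""
        med, std = np.median(img), img.std()
        lo, hi = med + low * std, med + high * std
        return (np.clip(img, lo, hi) - lo) / (hi - lo)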

    I'm also hoping that with the automation/focusing projects almost done.. that once the job settles a little I'll have chance to capture more data.

     

  16. So I've been sorting out a new noise identification system in addition to the deconvolution..

    Left - the raw 383L image (autostretched in PI)
    Centre left - autostretched deconvolution output prior to the noise system; note that some 'stars' are actually the noise 'hot pixels' in the original.
    Centre right - autostretched new output from the deconv with the new noise identification system in place. (On close analysis these are "hot" in the sense of a normalised value of 0.0684 vs the background 0.0054-0.0061 range. I identify a pixel as suspect if it stands 10% above the surrounding pixels; if suspect, I check whether it's a close match to the guider point spread function. If not, then it's definitely noise. See the sketch after this list.)
    Right - Aladin, non-autostretched
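
    The suspect/PSF-match test above, as a rough sketch (the thresholds and the correlation test are illustrative, not the exact implementation):

    import numpy as np
    from scipy.ndimage import median_filter

    def flag_noise(img, psf, rise=0.10, match=0.8):
        """Flag pixels that stand ~10% above their neighbours and do
        not correlate well with the guider PSF => likely noise."""
        local = median_filter(img, size=3)
        suspects = np.argwhere(img > local * (1.0 + rise))
        half = psf.shape[0] // 2
        noise = []
        for y, x in suspects:
            stamp = img[y - half:y + half + 1, x - half:x + half + 1]
            if stamp.shape != psf.shape:
                continue                    # too close to the frame edge
            # normalised correlation of the local stamp with the PSF
            s = (stamp - stamp.mean()) / (stamp.std() + 1e-12)
            p = (psf - psf.mean()) / (psf.std() + 1e-12)
            if (s * p).mean() < match:      # poor PSF match: not a star
                noise.append((y, x))
        return noise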

    Screen Shot 2016-04-25 at 23.36.11.png

    So the image on the centre right has significantly less noise (and thus fewer false positives). I'll tune this some more, as at the moment it's done on the input, and to be honest I prefer the noisy input, as signal is hidden in there.

    Still to go is re-enabling the saturation scaling and a second deconvolution step, which, now the noise is gone/heavily reduced, should be far better.

    This is one of the galaxies I'm after in the noise.. you can see the stars in the dim background (e.g. bottom left of the pixelated view) as well as the swirl of the galaxy.. (just in case you're wondering)

    Screen Shot 2016-04-25 at 23.57.38.png

  17. Well I had a slow day today.. and ended up waking up at 7:30.. whilst the Mrs slept .. I decided to geek out a bit.. 

    I've made the system not rebuild the saturated stars at the moment; this is because I find the estimation is a little out, and it then compresses the range for the existing background noise. However, it does still scale the deconvolution (hence the two brightest saturated stars having a box, although this too is being quickly removed with some noise processing on the PSF).

    Here's the output on the left - inverted and flipped (to align with the Aladin image on the right). There's a slight difference in rotation between the two images - the Aladin image is aligned vertically during their observations; mine is as I observed it :D

    Screen Shot 2016-04-24 at 18.21.15.png

  18. Just an example of the current working output, but the system is pulling some interesting information out.

    Left - the raw FITS image, autostretched in PI; middle - the output of the processing, autostretched in PI; right - a screenshot of the location in Aladin. You'll note a bit of mirror flipping; just look at the double star to orientate. Click the image for the native version (otherwise it may be hard to see the detail).

    Screen Shot 2016-04-10 at 21.25.11.png

    I've still got some work to go (which will remove the squares for the estimates and a few other things).

     

    Just goes to show.. in your noise... there is still signal...

  19. Starting to pull together: the system is now far faster (orders of magnitude) with some additional optimisation, and I wanted to show the star saturation system working.

    On the left is the output - note the sharp peak, as you'd expect - and on the right the raw image, indicating the flat top of a saturated star. The rebuild idea is sketched below.
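
    Roughly: fit the PSF amplitude to the unsaturated wings of the star, then replace the clipped core with the scaled PSF. A simplified sketch (assumes the PSF stamp is centred on, and the same size as, the star stamp; not the exact pipeline):

    import numpy as np

    def rebuild_saturated(stamp, psf, sat_level):
        """Replace the clipped core of a saturated star with a scaled PSF."""
        wings = stamp < sat_level * 0.95       # pixels we can still trust
        # Least-squares amplitude fit of the PSF against the wing pixels.
        scale = (stamp[wings] * psf[wings]).sum() / (psf[wings] ** 2).sum()
        rebuilt = stamp.copy()
        rebuilt[~wings] = scale * psf[~wings]  # restore the sharp peak
        return rebuilt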

    Screen Shot 2016-04-10 at 19.43.45.png

     

    I'm continuing to work on this and I'm starting to see some good results.

     
