Posts posted by NickK
-
Had a chance to play around with this today.. still got a little way to go but.. looking interesting.
First - non-stretched. Output of processing on the left and the raw sub on the right:
Second, showing a stretch using PI autostretch.
You can see the PSF image border - a natural aspect - but you can also see the reduction in motion blur; however, I still have some work to do on this. It's picking up a few hot pixels and there are a few other issues.
I also want to cross-reference the output against Aladin images to see if the background data is real or spurious noise.
The system is auto-estimating the heavily saturated stars.
edit: noted there's possibly a little mishap with the coordinate system for the reconstruction. Will be interesting to see once that's checked.. It would explain some of the odd appearances (the Y-flipped double star at the top of the Aladin image and the bottom of the processed image).
-
Finally... on Tuesday.. 8th interview.. 4th month.. I've been offered the job... there are still some details to go through, but due to the nature of the role that takes about 6 weeks..
So as the role has a technical element.. I may have to dig out the development again
Idle thumbs and all..
-
Well, I was sat in Kwik Fit for an hour for the MOT today.. armed with a pen and paper I think I have an idea of how to make the peak estimation work even better (and faster).
I need to code it up at some point and see, but it looks like a decent way of doing it, and without some of the issues the above form has....
-
So continuing... you know when you find that you've over-saturated a star - all you get is the maximum pixel value?
Well.. I'm trying a technique to recover the true peak value. Effectively, like an autofocus, it will defocus the smaller stars but then find the focus for the saturated stars. This should allow the deconvolution to work a lot better. Simply put, it's 3D curve fitting using FFT correlation.
Left to right - processing with successive saturation. The bright saturated star becomes neater and finally the peak value on the right suggests a 20x saturation.
The idea is to perform this to create a 3D data set that gives each pixel the best peak value. The map will then be used to process the scene better.
Still early days.. but the idea seems to have merit.
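For anyone curious, the gist (a rough Python sketch, not the actual code - the PSF template, saturation level and trial range are whatever your setup gives you):

import numpy as np

def estimate_saturated_peak(patch, psf, sat_level, trial_peaks):
    # patch: small cut-out centred on the saturated star
    # psf:   measured PSF template, same size as patch, also centred
    # For each trial peak, build a model star clipped at the saturation level
    # and score how well it matches the patch via FFT cross-correlation.
    best_peak, best_score = None, -np.inf
    for peak in trial_peaks:
        model = np.clip(psf / psf.max() * peak, 0.0, sat_level)
        corr = np.real(np.fft.ifft2(np.fft.fft2(patch) * np.conj(np.fft.fft2(model))))
        score = corr.max() / (np.linalg.norm(patch) * np.linalg.norm(model) + 1e-12)
        if score > best_score:
            best_peak, best_score = peak, score
    return best_peak

# e.g. trial peaks from 1x to 30x the 16-bit saturation level:
# peak = estimate_saturated_peak(patch, psf, 65535, np.linspace(1, 30, 120) * 65535)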
-
First thing to do with that sort of thing is to fit the metal camera cap and take a long exposure dark. That demonstrates whether it's a camera issue or something further up.
A dodgy USB cable isn't likely to cause a gradient, as the data transmitted is digital bytes - unless it's the power.
The KAI chip is a full frame chip. That means the image is read by moving the CCD charge down the sensor one row at a time and reading out the last row. The result is that if light is still reaching the sensor (i.e. through the shutter or something else), the rows furthest from the readout take longer to move across the chip and you would see a gradient.
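If it helps to picture it, here's a toy numpy simulation of that readout smear (the leak-per-shift value is made up - it just shows the gradient building towards the rows that are read out last):

import numpy as np

rows, cols = 200, 10
leak_per_shift = 5.0               # ADU picked up each time the charge moves one row (made up)
frame = np.zeros((rows, cols))     # a perfectly dark frame at the start of readout

readout = np.empty_like(frame)
for shift in range(rows):
    frame += leak_per_shift                 # light keeps arriving while the charge is shifted
    readout[rows - 1 - shift] = frame[-1]   # bottom row reaches the readout register
    frame = np.roll(frame, 1, axis=0)       # shift the remaining charge down one row
    frame[0] = 0                            # nothing enters from above the top row

print(readout[:, 0][::50])         # e.g. [1000. 750. 500. 250.] - a smooth gradient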
-
Some more images.. including the other side...
^^ this matches the Aladin pictures. The image on the right is the input into the processing - the left is the output..
Whole image scaled to fit:
So in all.. it appears to make a very good processing tool.
I'll post up a guide to how I do it over the next week..
Location of the star field: Field center: (RA H:M:S, Dec D:M:S) = (08:46:39.224, +20:37:51.823).
-
Looks good Nick, do you have a link to this program? I have a new Mac and am still trying to get my head round using it.
Thanks
Les
Hi Les, It should be an easy standalone application - so basically drag your images to it and it will sort things out.. however at the moment it's still being tested (I've just run the images myself today).
It takes about 2 minutes on a 383L image at the moment - my plan is to just provide the application as open source.. then people can steal/borrow the idea for their own applications.
-
That last image.. The deconvoluted sub (l) output against the stretched raw 60 second 383L sub input. I need to check.. but I think the deconvolution is pulling detail out of the noise in the background. I'm still checking and I'll re-run the tests again...
Unfortunately the phone rang at the same time I was editing :F
-
The double star is about 13 arcseconds across.. not too shabby for detail. I've since noted that one of the images should be removed (it's got really bad mount movement - beyond the guider track)..
Left - Aladin, right a 30 image deconvoluted stack
Also interesting is the appearance of some galaxies that aren't on the initial sub.. however I think the one on the left is either an artefact or something that isn't in the Aladin image. Below is a single deconvoluted sub (r) and the Aladin image (l), annotated, side by side. Sub time is 60 seconds with a 383L at -15degC. The guider image time is 2 seconds, cooled CCD (not sure if it was the old 16IC or the Titan).
Still playing - one thing is certain: the result I'm currently getting is affected by the guider noise levels.
I have an idea for another way to apply the deconvolution; that way it will not have the kernel noise squares, but it will take much longer to process and may lose some info.. still thinking..
-
Hehe 1parsec- I'm using your FITS ObjectiveC library on that too!
-
Firstly.. the first image..
On the left is the original raw long exposure - taken at SGL earlier in the year with little to no regard for mount alignment.
On the right is a new system I'm playing with - currently at its simplest (and I rotated/flipped the image by 90 degrees by accident - such is experimental programming).
You'll note the massive reduction in elongation and the sharpening. It's only taken me a year to get around to actually writing this after the MBP died, and it's all 100% CPU atm.. this is asymmetric 2D PSF deconvolution (no Gaussian here). It's passive optics: I use the guider star images, process them and then apply them to reconstruct the image.
Not bad for an initial play.. still working on refining it
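I won't post the real code yet, but for a flavour of the idea - a measured, asymmetric guider-star PSF driving the deconvolution instead of a Gaussian kernel - here's a rough Python sketch (using Richardson-Lucy purely as an example, not necessarily what my code does):

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    # psf: a normalised, background-subtracted stamp of a guider star
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate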
-
The stars have to be in certain locations to allow the approximations of the maths model to work.
-
It's funny that BBC BASIC is interpreted.. effectively a crude virtual machine language before its time.. now C#, Java, and the like think they started it all..
-
Have you tried implementing a drizzle?
You'll need to (there's a rough sketch below):
* interpolate between pixel values for coordinate fractions (i.e. 0.1-0.9)
* find the centre point using the interpolated values - try the centroid technique
* scale up the output to 2 or 4 times
* sample each value for the output pixel given the interpolation
* stack the output using an addition or an average..
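A very naive Python sketch of those steps (scipy's shift/zoom doing the interpolation, and centroiding the whole frame rather than a chosen star - adjust to taste):

import numpy as np
from scipy import ndimage

def centroid(img):
    # intensity-weighted centre (step 2) - in practice you'd centroid a chosen star
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def drizzle_stack(frames, scale=2):
    # align each frame on its centroid to sub-pixel accuracy, resample onto a
    # grid `scale` times finer (steps 1, 3 and 4), then average (step 5)
    ref = centroid(frames[0])
    out = None
    for frame in frames:
        cy, cx = centroid(frame)
        shifted = ndimage.shift(frame, (ref[0] - cy, ref[1] - cx), order=1)  # bilinear
        up = ndimage.zoom(shifted, scale, order=1)
        out = up if out is None else out + up
    return out / len(frames)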
-
There is nothing that is 100% secure.
Secure environments factor in the following:
* Layers - like an onion
* Time to get past each layer equates to a longer time to detect
* Effective detection at each layer
* Action on detection
* Defence against disablement
So start with your fence, then the space between the fence and the obsy, then the obsy itself, then the space within the obsy (i.e. locked-away components), then the larger stuff itself.
On disablement - ensure that wires are not accessible, that you have multiple ways to detect, and that disabling one doesn't cause a failure of the other detectors - i.e. pressure pads and sensors that don't all go to one easily disabled point.
-
Hi James,
OSX only but.. I've just upgraded to 10.9 and have been reading through the App Nap documentation - have you checked out the NSProcessInfo API in 10.9? One of the methods relates to categorising the processing being done by the application ([NSProcessInfo beginActivityWithOptions:reason:] etc.), but there's also a section in the documentation about I/O prioritisation for latency-sensitive work such as video I/O.
Although I think the Prevent App Nap Finder option also helps, I think this will eventually become 'less optional'..
Still digging..
-
Hmm good point..
-
That's rather handy. I didn't realise there were Atik cameras using the same standard.
James
I'd still use the drivers.. just in case there's some changes/specialisations in there.
-
I discovered last night that whilst Point Grey provide their own SDK with all sorts of "proprietary, do not distribute" FUD, their cameras largely seem to implement some sort of machine vision camera standard called either IIDC or DCAM that is related to IEEE1394. That's rather handy because there's an open source camera library that implements the standard and supports not only (the more usual, I think) FireWire-connected cameras, but also some of the USB ones as well. I compiled one of the demos from the source distribution and it grabbed frames from my USB Firefly MV quite happily, so it looks as though for the cost of writing a small amount of code to integrate the library with my API I might end up with support for all of the FireWire and USB-connected Point Grey cameras on Linux and OSX.
Need to stay focused though. Must get the UVC stuff for the TIS cameras sorted first.
James
Awesome.. ATIK GP should work too
-
There won't be too many visible changes in the next release as my main goal is to get something working on OSX and it'll mainly just be bugfixes otherwise, but as part of the OSX port I've now massaged the sources into a state where I have exactly the same code building working binaries on 32-bit and 64-bit intel Linux, Raspberry Pi and OSX. Basically from the top of the source tree it is possible to do:
$ ./configure && make
and it does the rest. I think the main task remaining is to work out a tidy way of loading the firmware into the QHY cameras on OSX (I can do that manually myself, but apparently the average Apple user can't cope if it doesn't have a point'n'drool user interface). What I'll probably do is just have a bit of code at the start that matches the USB VID and PID values before the firmware is loaded and runs the relevant commands to do the work. I also need to sort out wrapping everything up into an installable package for OSX.
I'm probably going to leave one ugly OSX issue present for the time being because I'm not sure how to sort it out yet which is that the buttons in the UI get rendered at what appears to be twice the desired size on Retina displays when they contain an icon. If I plug in my desktop display then everything looks fine, but there's clearly some sort of scaling issue relating to high pixel density displays somewhere that I haven't been able to run to ground yet.
James
Is the firmware a Cypress based chip download?
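If it is, the host-side sequence is usually the standard FX2 RAM loader - roughly this (a Python/pyusb sketch; the VID/PID here are placeholders for whatever the un-programmed camera enumerates as, not confirmed values):

import usb.core   # pyusb

VENDOR_ID, PRODUCT_ID = 0x1618, 0x0921    # placeholders - check with lsusb

dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
if dev is None:
    raise SystemExit("no un-programmed camera found")
dev.set_configuration()

def fx2_write_ram(dev, addr, data):
    # vendor request 0xA0 writes into FX2 internal RAM (the standard Cypress loader)
    dev.ctrl_transfer(0x40, 0xA0, addr, 0, data)

CPUCS = 0xE600
fx2_write_ram(dev, CPUCS, b"\x01")   # hold the 8051 core in reset
# ... write the firmware records here (e.g. parsed from the .hex file) ...
fx2_write_ram(dev, CPUCS, b"\x00")   # release reset; the camera re-enumerates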
-
That's a great question: where do I go from here? To Provence, actually.
My rig will be transported down to the new remote observatory this summer, where it will share a roof with 3 remote rigs. My horrible Swedish seeing is holding me back too much, and I want to fully unleash my rig's potential under perfect skies. Everyone who's visited Olly Penrice's location knows what I'm talking about; it's no coincidence that so much astronomy goes on in that region.
So with significantly darker skies and probably 4 times as many clear nights as my part of Sweden, I will probably start doing really big & crazy (stupid) projects, since I have a hard time saying no to challenges. And I believe Mr Penrice once dared me to make a full mosaic of the entire Cygnus Loop at this resolution, so yeah, I'm just getting started
So Olly will need a new dedicated internet connection for the mass of image data being moved!!
The image really shows what is possible..
-
I found that using a finder scope aligned with the scope was faster - even with the goto setup. The reason is that you get a wide field, so although you may not be massively precise, it will get you set up faster. It means that when you look at Stellarium and at the sky you're not looking at a microscopic portion of it to navigate.
I also found that most systems assume a perfectly level setup (hence the constant rate for alignments).
My setup goes:
1. Pull out the Garmin Gekko GPS - get the GPS position, elevation and accurate north.
2. Position EQ6, I'm lucky my spirit level in the EQ6 seems to be pretty good.
3. Sit gekko on the top and align with the gekko.
4. Mount kit and rotate about to align - this simply means the entire kit isn't going to cause a [removed word] in alignment.
5. Polar align (if nighttime)
Now for AP this gets more complicated.. but basically that will get me to a good visual accuracy. I was doing solar at 6700mm with minimal movement just doing steps 1-4. However, using the alignment tools for EQMOD I've done 20 minute exposures (at 1340mm) on the EQ6 - BUT, and here's the but, this is usually without a meridian flip, at which point the mount alignment usually needs checking and redoing.
-
James - what version of Linux are you using? I'm assuming you're using libusb?
Playing around with deconvolution (programming)
Been getting very involved in this.
Issues I've been tackling:
a. Per-pixel PSF deconvolution with a measured (not mathematically modelled) PSF - this is straightforward except for saturated stars..
b. Saturated stars - this is complex. I have tried a varied set of ways to do this while still maintaining the detail in the image. I can estimate the stars; the complexity increases when you have overlapping stars vs saturated stars. I can estimate star intensity by iterative matching of the intensity (this can be improved by using an LR-style iterative update), however the interesting situation is differentiating between the phase correlation result for two low-intensity matches and the intermediate test for intensity. In short I have to look at the peak form and use some intelligent processing with masking/ranges.
To add to the fun - doing phase correlation with a large image to register is more noise robust, as the signal to noise ratio is high.. with smaller regions there is less signal and so the SNR drops.
After finding this I did find a US Livermore paper that correlates with my findings and experimentation (their research is nothing to do with astro, though), so I'm obviously on the right track....
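For reference, the phase correlation itself is just the normalised cross-power spectrum - roughly, in Python:

import numpy as np

def phase_correlate(a, b, eps=1e-12):
    # returns the (dy, dx) shift of b relative to a, plus the correlation peak
    # height - the height/shape of that peak is what I have to inspect to tell
    # a genuine low-intensity match from an intermediate intensity test
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    surface = np.real(np.fft.ifft2(cross / (np.abs(cross) + eps)))
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    if dy > a.shape[0] // 2: dy -= a.shape[0]   # wrap into the +/- half range
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return (dy, dx), surface.max()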