
oaCapture 0.4.0 release


JamesF


Hi again,

I tested that with the MacBook Air and it does indeed work at 60fps... This is even better news as I am actually planning to use it with the laptop rather than the desktop, but it still puzzles me as to why it does not work on the iMac at 60fps... Maybe it is conflicting with something else I have installed also?

Go figure :/

Best regards,

Ferran.

Are both machines USB3?  I can't see why that should make a difference, but if one is USB2 and the other USB3 perhaps it might mean that different code is executed in the OS to transfer the data and it works with one and not the other.  I'll try it on a USB2 Mac Mini later to see if that has problems.  My MBP (where it works) is USB3.

James



Very strange.  I did that loads when testing.  I'll look into it.

James

Had another play and tried both frames and seconds, with strange results: set to 500 frames it takes 1000, and set to 1000 frames it takes 500; set to 45 seconds it takes 30, set to 60 seconds it takes 90, and set to 90 seconds it takes 60. I didn't try any others. The pause button works, but the stop button won't stop it.

Dave

USB3


Are both machines USB3?  I can't see why that should make a difference, but if one is USB2 and the other USB3 perhaps it might mean that different code is executed in the OS to transfer the data and it works with one and not the other.  I'll try it on a USB2 Mac Mini later to see if that has problems.  My MBP (where it works) is USB3.

James

AFAIK they both are USB3... I purchased them both less than two years ago and they work well and fast with my USB3 HDDs and pendrives, so I would say they are both USB3.


Had another play and tried both frames and seconds, with strange results: set to 500 frames it takes 1000, and set to 1000 frames it takes 500; set to 45 seconds it takes 30, set to 60 seconds it takes 90, and set to 90 seconds it takes 60. I didn't try any others. The pause button works, but the stop button won't stop it.

Very strange.  I wonder if something is messing up the menus somehow.  One possible option might be to quit the program, remove the plist file that contains the saved settings and then restart.  That should be in $HOME/Library/Preferences/com.openastro.oaCapture.plist I think.
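
If it helps, from a terminal (with the program not running) that would be something like this -- the domain name here is my guess from the path above:

$ defaults delete com.openastro.oaCapture
$ rm ~/Library/Preferences/com.openastro.oaCapture.plist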

Is the stop button failing to work when you have the capture paused, or after you've unpaused?  Or is it completely unrelated to pausing?

James


AFAIK they both are USB3... I purchased them both less than two years ago and they work well and fast with my USB3 HDDs and pendrives, so I would say they are both USB3.

Then I am completely mystified as to why it should work at 60fps on one and not the other :(

James


Is the stop button failing to work when you have the capture paused, or after you've unpaused?  Or is it completely unrelated to pausing?

Aha!  I can stop the capture without a problem when there is no limit in place, but if there's a limit on the number of frames or time then I can't.  That's definitely a bug.

James


The USB VID/PID of 1618:0920 is what the camera has before the firmware has loaded.  Once the firmware is loaded it should reset and reconnect to the USB bus as 1618:0921 (as far as I'm aware -- I don't have one of these to test myself but I believe it behaves exactly the same as the QHY5L-II that I do have).
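
(You can check which ID the camera currently has with lsusb:

$ lsusb -d 1618:

That should show the product ID as 0920 before the firmware loads and 0921 afterwards.)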

That suggests to me that either the udev rules are not working or that fxload is not installed.  If you haven't copied the udev rules, they're in the tar file and need to be put in /etc/udev/rules.d.  On Mint 17 you do need to restart udev manually when the udev rules are updated, either by rebooting or running:

# service udev restart
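
For reference, the rule that kicks off the firmware load is roughly of this shape -- the firmware path and filename here are illustrative rather than necessarily what ships in the tar file:

ATTRS{idVendor}=="1618", ATTRS{idProduct}=="0920", RUN+="/sbin/fxload -t fx2 -I /lib/firmware/qhy/QHY5II.HEX -D $env{DEVNAME}"

That is, when the pre-firmware 1618:0920 device appears, udev runs fxload to push the firmware, after which the camera re-enumerates as 1618:0921.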

If fxload isn't installed then you just need to run

# apt-get install fxload

Finally, if you're not a member of the "users" group you'll need to add yourself to that so you have access rights to connect to the USB device for the camera.  That will require logging in again to take effect.
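
(Adding yourself to the group is just:

# usermod -a -G users yourusername

then log out and back in.)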

Most of this should get sorted when I get around to packaging everything up as .deb and .rpm files, but that's proving rather more tricky than expected to achieve.

James

Checked fxload is definitely installed and the latest version. I ran the install-binaries.sh for 0.4.0 and now when I plug in the QHY I get the amended VID/PID 1618:0921 as you suggested, but when I open oacapture I still cannot see the camera. As this is launched from the command line I can see some errors showing whilst oacapture is running:

[attached screenshot: the errors printed in the terminal while oacapture is running]

I rechecked the udev rules in /etc/udev/rules.d to make sure they were the same as those in the tar and then restarted the system, but still got the same result.

Just as an aside, when I first tried to run oacapture it caused an error because it could not find the libcfitsio package. I got around this by installing the package from the package manager.


Oooh, that's very helpful.  Thank you.

Both the QHY5-II and QHY5L-II have the same USB VID and PID, despite having a completely different interface.  It's necessary therefore to try to work out which is connected before trying to send commands to it.  The colour and mono versions of the QHY5L-II have a similar issue in that the colour model has some additional controls that the mono model doesn't.

It looks like I am correctly separating the mono and colour QHY5L-II, but not correctly identifying the QHY5-II.  I'm not 100% surprised by this as I don't have one to test with.  I will look into this.  It may be a simple error.

James


There's a lot of work gone into this release, almost none of which is user-visible :(

I have completely rewritten the camera-handling library so that it works far more elegantly, and along the way I've resolved a number of design problems from my original version as well as fixing a few outstanding bugs.  I hope I've reached the point where I'll now tend to be extending functionality rather than changing it.

More user-visible changes include support for the ZWO "high speed" control and building against the latest ZWO SDK.  Support for the Microsoft Lifecam should also hopefully be somewhat less broken than it has been.  In addition I've added "experimental" support for the ZWO v2 SDK, though I've not had much luck getting it to work with anything other than the ASI120MM-S, so it can only be enabled by manually editing the configuration file for the time being.

With the exception of a few bugfixes, that's pretty much it.

For the next release perhaps the most important planned change is to enable the creation of output files for raw colour data that are recognised on Windows and can therefore be read straight into Registax or AS!2.  This may affect mono data too, but I've not tested that.  I also want to change the filter wheel library in the same way that I've just changed the camera library.  And there appears to be some sort of colour-handling problem with colour TIS FireWire cameras.  It probably stems from the fact that they claim to provide an 8-bit greyscale image when in fact it's raw colour, but this is something I'm having to work on blind unless I can find such a camera to work with myself.

Downloads at http://www.openastroproject.org/downloads/

James

Good luck with the release


Last night I observed Jupiter continuously for 5.5 hours using oacapture's Autorun feature.  I ended up with 916GB of video from my ASI120MM-S.

Thank you again for this great piece of software.   It rocks!


One of my targets for the next release is to get some sort of focusing aid implemented.  There appear to be many ways to do that, some of which stretch my recollection of A-level maths to breaking point, but with a little help from Cath (thank you very much, Cath) guiding me in the right direction I'm working on one that first requires the ability to find edges in the image.  As this process basically entails creating a new image containing just the edges, I thought that to assist with coding the next stage I'd actually display that image rather than the one from the camera.  So here's what I have after a few hours at a red-hot keyboard this evening:

[attached image: edges.png -- the edge-detected preview]

James


Wow, that was quick!

Did you end up writing the Sobel from scratch to fit in with your own internal image storage method, I wonder?

I'm not saying it's the best way to do focusing, just that it's the way I decided to do it myself. I've not seen anyone else do it this way by the way.


Wow, that was quick!

Did you end up writing the Sobel from scratch to fit in with your own internal image storage method, I wonder?

I'm not saying it's the best way to do focusing, just that it's the way I decided to do it myself. I've not seen anyone else do it this way by the way.

Sobel does appear to be a common method for doing this sort of thing once you start looking for it so it seems like a good place to start.  Once you find the right sets of keywords to put into Google though, loads of things fall out :)  I've found a number of other similar methods (Roberts Cross, Prewitt, Scharr, Canny) and a few rather more "out there" ideas.  Scharr looks like a possible improvement in some areas though I'm not sure it will make much difference for my use-case.  Canny also looks interesting, but at first glance appears to be rather more computationally expensive.  I'm not going to process every single frame, but when the frame rate is 100fps or more, "computationally expensive" says to me "run away!"

I did write the filter from scratch, purely because I realised both gradients could be processed neatly in one pass given the way I have the data stored, as well as calculating the mean (which is handy if I decide that the standard deviation is the right way to measure focus accuracy).  I also discovered that it's a common "optimisation" to use "abs( gx ) + abs( gy )" to calculate the final gradient value for a point instead of "sqrt ( gx * gx + gy * gy )".  Apparently it gives a slightly less accurate result but a significant improvement in performance thanks to not having to calculate the square root for each pixel.

I probably do need to look at some way to eliminate some of the noise in the image, but it may be that just having a threshold below which gradient values are clamped to zero will be sufficient for that.
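
To make the one-pass idea concrete, here's a rough sketch of the sort of thing I mean -- illustrative rather than the actual oaCapture code, and assuming a simple 8-bit greyscale buffer:

#include <stdint.h>
#include <stdlib.h>

/* Single-pass Sobel over an 8-bit greyscale frame: both gradients come
 * from the same 3x3 neighbourhood, abs( gx ) + abs( gy ) replaces the
 * square root, values below a noise threshold are clamped to zero and
 * the mean of the output is accumulated as we go. */
double sobelPass ( const uint8_t* in, uint8_t* out, int w, int h, int threshold )
{
  long sum = 0;
  int  x, y;

  for ( y = 1; y < h - 1; y++ ) {
    for ( x = 1; x < w - 1; x++ ) {
      const uint8_t* p = in + y * w + x;
      int gx = p[-w+1] + 2 * p[1] + p[w+1] - p[-w-1] - 2 * p[-1] - p[w-1];
      int gy = p[w-1] + 2 * p[w] + p[w+1] - p[-w-1] - 2 * p[-w] - p[-w+1];
      int g = abs ( gx ) + abs ( gy );   /* the cheap magnitude */
      if ( g < threshold ) { g = 0; }    /* noise clamp */
      if ( g > 255 ) { g = 255; }
      out[ y * w + x ] = g;
      sum += g;
    }
  }
  return ( double ) sum / (( w - 2 ) * ( h - 2 ));
}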

James


Canny is an algorithm for creating drawn lines, with Sobel often being what it uses as its source data. The raw Canny thin-line output is then usually passed on to other algorithms, normally for feature detection/extraction.

So Canny is most unhelpful for focusing, as it basically gives an all-or-nothing (thresholded) thin line along those edge gradients :(


I take it you used an FFT with a pass filter?

If so, you may want to be able to select a target region - that way you can use it more easily for lunar and solar.

I can't recall the relationship between FFT and convolution, though I'm sure there is one.  In this case I've used Sobel convolution to do the edge detection.
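
(Looking it up, the relationship I was trying to remember is the convolution theorem: convolving in the image domain is equivalent to a pointwise multiplication in the frequency domain,

FFT( f * g ) = FFT( f ) . FFT( g )

so a filter kernel can be applied by transforming the image, multiplying by the transformed kernel and transforming back.  For a kernel as small as Sobel's 3x3 the direct convolution is normally cheaper anyway.)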

James


I'm not going to process every single frame, but when the frame rate is 100fps or more, "computationally expensive" says to me "run away!"

Well, don't be so quick to rule out being able to process every frame. That temporal averager does the simple averaging in a very efficient way - so long as your pixels are integers and not floating point.

But I don't know how you store the image data within your software James.
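
The usual cheap way is a ring buffer of recent frames plus a running per-pixel sum. A minimal sketch, assuming zero-initialised state, 8-bit mono frames and an illustrative fixed frame size:

#include <stdint.h>
#include <string.h>

#define DEPTH   8               /* frames averaged; power of two so the divide is a shift */
#define PIXELS  ( 640 * 480 )   /* illustrative frame size */

typedef struct {
  uint8_t   frames[ DEPTH ][ PIXELS ];  /* ring buffer of recent frames */
  uint32_t  sum[ PIXELS ];              /* running per-pixel sum */
  int       next;                       /* slot to overwrite next */
} averager;

/* Add a frame and emit the current average: one subtract, one add and
 * one shift per pixel, with no per-frame re-summation. */
void addFrame ( averager* a, const uint8_t* in, uint8_t* out )
{
  uint8_t* oldest = a->frames[ a->next ];
  int i;

  for ( i = 0; i < PIXELS; i++ ) {
    a->sum[i] += in[i] - oldest[i];     /* drop oldest pixel, add newest */
    out[i] = a->sum[i] >> 3;            /* divide by DEPTH ( 8 ) */
  }
  memcpy ( oldest, in, PIXELS );
  a->next = ( a->next + 1 ) % DEPTH;
}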


If you're looking to extend into custom filters with FFT, the CPU-based FFTW will be very useful.

Another common optimisation is to split the 2D filter into two 1D passes and then combine afterwards. This has the added advantage of being able to scale over CPU cores without the passes bumping into each other, so you're going beyond SSE in-core parallelisation. Naturally this is not the same as a pure radial approach, but it is faster.
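
As it happens Sobel itself splits that way, because each kernel is the outer product of two 1D vectors - the horizontal-gradient kernel is a [ 1 2 1 ] smooth down the columns followed by a [ -1 0 1 ] derivative along the rows. A sketch of the two passes, assuming an 8-bit input and a signed 16-bit intermediate:

#include <stdint.h>

/* Pass 1: vertical smooth with [ 1 2 1 ] down each column. */
void smoothPass ( const uint8_t* in, int16_t* tmp, int w, int h )
{
  int x, y;
  for ( y = 1; y < h - 1; y++ )
    for ( x = 0; x < w; x++ )
      tmp[ y * w + x ] = in[ ( y - 1 ) * w + x ] + 2 * in[ y * w + x ] + in[ ( y + 1 ) * w + x ];
}

/* Pass 2: horizontal derivative with [ -1 0 1 ] along each row, giving gx.
 * Rows are independent, so this pass parallelises trivially across cores. */
void derivPass ( const int16_t* tmp, int16_t* gx, int w, int h )
{
  int x, y;
  for ( y = 1; y < h - 1; y++ )
    for ( x = 1; x < w - 1; x++ )
      gx[ y * w + x ] = tmp[ y * w + x + 1 ] - tmp[ y * w + x - 1 ];
}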

As focus is often about the steepness (contrast) of edges, you could use a histogram of the steepness rather than a single average value. Then as the image comes into focus, the histogram mass moves towards the steeper end of the scale. You can then score on the histogram characteristics, which makes it easier when you have large objects and noise.
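
A sketch of that kind of scoring over an 8-bit gradient image like the Sobel output above - the bin count and the 'steepest quarter' cut-off are arbitrary illustrative choices:

#include <stdint.h>
#include <string.h>

#define BINS 16

/* Score focus as the fraction of gradient mass sitting in the steepest
 * quarter of the histogram; higher scores mean sharper edges. */
double focusScore ( const uint8_t* gradient, int numPixels )
{
  long hist[ BINS ];
  long total = 0, steep = 0;
  int  i, b;

  memset ( hist, 0, sizeof ( hist ));
  for ( i = 0; i < numPixels; i++ )
    hist[ gradient[i] / ( 256 / BINS ) ]++;

  for ( b = 0; b < BINS; b++ ) {
    long mass = hist[b] * b;            /* weight each bin by its steepness */
    total += mass;
    if ( b >= BINS - BINS / 4 )         /* the steepest quarter of the bins */
      steep += mass;
  }
  return total ? ( double ) steep / total : 0.0;
}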

The nasty is that an optical system pointed at stars can give spikes too when it's out of focus - though doing this seems to make PHD more reliable!


If you're looking to extend into custom filters with FFT, the CPU-based FFTW will be very useful.

As focus is often about the steepness (contrast) of edges, you could use a histogram of the steepness rather than a single average value. Then as the image comes into focus, the histogram mass moves towards the steeper end of the scale. You can then score on the histogram characteristics, which makes it easier when you have large objects and noise.

Yes, I'm with Nick on FFTW; it's a nicely optimised library if you want to use the FFT.

And using the histogram sounds like a good idea too!

One of the nice things about learning to write your own software, James, is that it gives you the ability to play with all kinds of ideas: not just the 'normal' way you're told it's done, but your own ideas that, if you're lucky, may pop up from time to time (often in a semi sleep/wake state, I find :confused: - if you're in one of those periods of intensive software writing).


The fftw library has been "on my radar" for some time, but I need a better understanding of the process there before trying that route.  The other nice "feature" of the Sobel convolution is that I understand the maths :)

I found an interesting paper evaluating about a dozen different methods of focus scoring late last night.  I can't claim to have understood more than 50% of it at the time though :)  It was targeted at automated microscopy, where focus assessment of the camera images from the microscope was used to fine-tune the focus position, and I'm guessing that the same rules may well not apply to astroimaging, but I shall return to it at some point and attempt to understand it better.

James

