Capture software for Linux


JamesF

Here's an example of the problem, this time from my capture program:

I think the fact that this happens in both my capture code and in their demo program which share no common code above the C/C++ and USB libraries strongly suggests to me that the likely cause is their library.

James

I've seen very similar results from my QHY5L-II in one of the FireCapture betas, though it was solved by resetting the camera - perhaps more to do with timing issues at a hardware level.

ZWO are suggesting that it happens if there's insufficient bandwidth on the USB bus and that there's a control I can use to alleviate the problem, but as yet I have no documentation for it at all :(  I've picked the shared library apart a little (without access to the source) and it appears there's a separate execution thread that manages the camera and drops data into a buffer for the caller.  I suspect there may be synchronisation problems if the USB side can't keep up.  Personally, if I didn't have a new frame ready I'd probably arrange for the user to get the same frame twice rather than give them a damaged frame without flagging an error, but I guess that's a design choice...

For the moment it might be a case of just tinkering with the controls to see what happens.  I've read the limits for the control from the library which says it should range from 40 to 100, but that the value set in the camera from cold start is 1.  That doesn't bode well.  I'll have to check that the camera default settings are in range and sort them out before starting a capture run I think.
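Pulling a camera control's power-on default back into the range the library advertises is straightforward; a minimal sketch (hypothetical helper name, plain C++):

```cpp
#include <algorithm>

// Hypothetical helper: bring a control's cold-start default back into the
// [min, max] range reported by the library before starting a capture run.
long clampControl(long value, long minValue, long maxValue)
{
    return std::min(std::max(value, minValue), maxValue);
}
```

With the range above (40 to 100), a cold-start value of 1 would be pulled up to 40 by something like this before any capture begins.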

James

I've seen very similar results from my QHY5L-II in one of the FireCapture betas, though it was solved by resetting the camera - perhaps more to do with timing issues at a hardware level.

That does seem consistent with what ZWO are telling me.  I have an inkling that I might sometimes have seen it with the ASI120 in FireCapture too, though not often.

Either the Point Grey Firefly or QHY cameras are next on my list to get working, so we shall see what happens there.  I need to get monochrome capture working before going too much further.  I've not tried that at all yet.

James

Aha!  Set the control to 40 and I get an absolutely rock-solid image at 1280x960.  That's a step forward.  Now I just need to sort out why the red and blue channels appear to be transposed and why the ROI control still isn't working properly.

Shame I ought to be working really, but this is far more interesting :)

James

Well, having put the camera to one side for a few hours to allow me to get on with some work, I've returned to it this evening and, having basically decided that setting the USB bandwidth control variable to 40 is the only option for getting things working, I now have functioning capture with the 120MC :D

I found something bright red and something bright blue and verified that the blue and red channels did appear to be swapped over.  Fortunately there's a Qt function that does the swap.  Now I just need to sort out writing an AVI file in BGR format rather than RGB.  I don't think that should be too hard as long as the Ut Video codec is happy about it.

I also still need to sort out the binning and there is a 12-bit mode that may not be supported outside the ASCOM driver, but I shall come back to the first after getting the 120MM to work and worry about 12-bit later I think.

James

Ah, well, there's always just one more gotcha :)  The Ut Video codec appears not to want to play if I tell it the file format is BGR24 rather than RGB24.

Fortunately I believe the ffmpeg swscale library can convert from the former to the latter.  I just have to work out how to use it.

James

No, it does not accept BGR24.  I just used a for loop to swap the pixels around; I found that easier than figuring out the swscale library!
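For anyone curious, a loop along those lines might look like this (hypothetical function name; assumes a packed 24-bit frame):

```cpp
#include <cstdint>
#include <cstddef>

// Minimal sketch of the in-place swap: walk the frame three bytes at a
// time and exchange the first and third byte of each pixel.  It works in
// either direction (BGR24 -> RGB24 or back) since it's its own inverse.
void swapRedBlue(uint8_t *frame, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        uint8_t tmp = frame[i * 3];
        frame[i * 3] = frame[i * 3 + 2];
        frame[i * 3 + 2] = tmp;
    }
}
```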

Chris

By strange coincidence I had barely two minutes ago come to the same decision :D

James

Sounds like you are charging ahead with your project, James, good stuff.

It's quite unexpected in a way.  I've spent most of the last year or so writing code for a single project and it's quite hard to bring myself to sit down at a computer and write even more code after finishing each day, but having started this I'm really very much enjoying it.  It's perhaps not the tightest code I've ever written, but I have learnt a fair bit about Qt, video formats, the ffmpeg library, threads and got myself back into a bit of C++ programming, so I really can't complain there.

To top it all I plugged in my 120MM a moment ago and it "just worked" even without monochrome support.  I couldn't work out why until I discovered that the mono camera driver actually supports RGB24 as well :)  I'm going to force it into mono mode and make that work anyhow, because I'll surely need that for other cameras along the way and it would be nice to keep the output file sizes down where possible.

James

I know what you mean; it can be hard to convince yourself to sit down at a computer in the evenings and weekends when that is what you do for your day job.  With PIPP I tend to work in bursts, completing blocks of functionality and then taking breaks in between where I only work on bug fixes.  Working in this way keeps my enthusiasm for the development going and the project stays alive.

Yes, you definitely should get monochrome support working, including raw un-debayered video from colour cameras that support it, both to maximise the frame rate across the USB cable and to minimise the final file size.

Chris

Hmmm.  I'm not sure I have a camera that supports raw colour output.  I know the SPC900 will if you change the firmware, but I don't really want to do that right now.  I'll have to check to see if the 120MC will.  I need a goal for tomorrow anyhow.  The 120MM works fine now :D  I'd still like to find a decent non-lossy and widely supported codec that can handle 8-bit and 16-bit greyscale video though.  At the moment I'm writing the data raw and whilst it's smaller than the RGB equivalent, I'm sure it could be better.

James

The 120MC definitely supports raw output.  For greyscale video I just set R, G and B to the same value in the Ut Video codec and that does a good job of compressing the file to less than 50% of its original size (depending on the video).  I played around with a lot of other lossless compression codecs but found nothing available that was any better than the Ut Video codec.
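Feeding a greyscale frame to an RGB24-only codec that way just means replicating each grey value into all three channels; a minimal sketch (hypothetical function name):

```cpp
#include <cstdint>
#include <cstddef>

// Sketch: replicate each 8-bit grey value into R, G and B so a greyscale
// frame can be fed to a codec that only takes RGB24 input.
void greyToRgb24(const uint8_t *grey, uint8_t *rgb, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        rgb[i * 3]     = grey[i];
        rgb[i * 3 + 1] = grey[i];
        rgb[i * 3 + 2] = grey[i];
    }
}
```

Because all three channels are identical, a lossless codec can compress such frames well, which fits the observed result.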

For 16-bit greyscale video I have seen the Y16 codec, but I don't know of anybody that has used it, and I know of no 16-bit video codec with compression.

My thoughts are that we need a better system than the flaky AVI format, which causes no end of grief.  What we need is something like the SER format, but one that also supports colour data and compression.  We need some brave individual or team to write a specification and provide open source code for creating and reading the format!  I thought briefly of taking this on myself but quickly realised that I do not have any spare time.

Cheers,

Chris

It's a tricky one, certainly.  Normal RGB is great to work with from a capture point of view because you don't need to do much with it, but it doesn't compress very easily.  Splitting the RGB into colour planes would make it easier to compress because of the increased coherence, at the expense of more computation.  And then there are cameras that can only do YUV or somesuch.  It would be nice to cope with those, but expanding the data to RGB to store it in a file isn't ideal.

However, I guess we're not really desperately interested in getting the absolute best possible compression, are we?  Merely reducing the data volume to a more manageable level.  That would allow some compromises to be made.  The requirements for storing capture data are not the same as those for storing hours' worth of television or film video in a usable format.

Perhaps the path of least resistance would be to expand SER to store RGB data in three colour planes and use one of the existing codecs (or something derived from one) to compress the colour planes.
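The planar split itself is cheap; a sketch of de-interleaving an RGB24 frame into three separate planes (hypothetical function name):

```cpp
#include <cstdint>
#include <cstddef>

// Sketch of the planar split: de-interleave a packed RGB24 frame into
// three separate colour planes.  Each plane is internally more coherent
// than the interleaved stream, which is what makes it compress better.
void splitPlanes(const uint8_t *rgb, uint8_t *r, uint8_t *g, uint8_t *b,
                 size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        r[i] = rgb[i * 3];
        g[i] = rgb[i * 3 + 1];
        b[i] = rgb[i * 3 + 2];
    }
}
```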

James

I've just been having a look at all the examples and documentation that come with the Point Grey SDK.  I think I almost prefer none :)  Definitely suffering from information overload now.  I think I'll do the QHY cameras next :D

James

And here we have 1280x960 mono with 2x binning to 640x480 :D
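For reference, 2x binning in software is just combining each 2x2 block into one output pixel; a minimal sketch for an 8-bit mono frame, assuming even dimensions (this version averages, whereas hardware binning often sums):

```cpp
#include <cstdint>
#include <cstddef>

// Sketch of 2x2 software binning for an 8-bit mono frame: average each
// 2x2 block into one output pixel, e.g. 1280x960 -> 640x480.  Assumes
// width and height are both even.
void bin2x2(const uint8_t *in, uint8_t *out, size_t width, size_t height)
{
    size_t outWidth = width / 2;
    for (size_t y = 0; y + 1 < height; y += 2) {
        for (size_t x = 0; x + 1 < width; x += 2) {
            unsigned sum = in[y * width + x] + in[y * width + x + 1]
                         + in[(y + 1) * width + x] + in[(y + 1) * width + x + 1];
            out[(y / 2) * outWidth + x / 2] = static_cast<uint8_t>(sum / 4);
        }
    }
}
```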

James

Just wondering how many glasses of something strong I would need to be able to replicate that view :grin: :grin: ...........

It looks like you are rocketing along with this code.

If you are interested I have several SPC900s and I think that one of them has been modified for RAW colour mode. If not it could be very easily converted as I have the software to do it. If you would like your code tested with one I'm quite willing to do this for you. You can always email me if you wish.

As you can see I'm now back from our sojourn to Snowdonia and Sixpenny Handley.

all the best,

Alan

Thank you, Alan.

After I got the binning done last night I tidied up a few other bits such as disabling binning for colour (I'm not really sure I understand what that might mean in real terms), disabling SER output for colour (not that SER output is even implemented yet) and displaying the current FPS and camera temperature.

I then got a bit stuck trying to decide what to do next.

I want to have a fairly rigorous clean-up of the code and put together some sort of build system before releasing the source.  Some of it is a bit chaotic because I wasn't sure where I was going when I started (I didn't even own at least one of the cameras at that point :) ).  That doesn't however prevent people using a binary release should they wish, so I can perhaps delay doing that work a little.

I really would like it to support at least some of the Point Grey, QHY and TIS cameras fairly early on.  I have a Firefly MV and at first sight the PGR documentation and SDK look very comprehensive, but I can't help thinking there probably aren't as many Point Grey users out there as there are QHY and TIS.  I have a QHY camera (a couple now, in fact), but whilst the alleged Linux SDK is open source it is also completely undocumented, and what comments there are in the code appear to be in Chinese.  I think it only supports a subset of the cameras, too.  There are other open source projects that support some of the "missing" cameras, but again it's code-only and I'd need to spend some time just getting my head around all the camera-side code rather than working on the application.

I don't have a TIS camera at all.  I was tempted to bid on a DMK21AU04 on eBay last week, but decided that if I'm going to get one I'd rather have the 618 in preference to what is pretty much an SPC900 in different clothes.  At least then I might be able to run some comparisons between the TIS camera and the ASI120.  I might look for a reasonably-priced second-hand one that I can sell on at a later date.  It would also be quite nice to work out why the SPC900 works on some releases and not others and fix that somehow.

On the other hand, if I write a bit of documentation, get a working histogram display, perhaps implement a reticle for the preview window and rip out or disable the UI for the bits I've not done yet, then it's probably a usable capture application that supports the SPC900, a range of other webcams including the Lifecams (and probably the Xbox cam given a bit of tinkering with the driver) and both the ASI120 colour and mono cameras, as well as perhaps the ASI130.  I could get it out there and get some feedback whilst working on the other bits.  I'm quite tempted by this.  I'm quite happy to give the source to anyone who wants it at the same time, on the understanding that it has sharp edges and it's not my fault if they hurt themselves whilst they're trying to make it work :)

I'm a little apprehensive about the aftermath of releasing it, I have to admit :)  Last time I did anything similar there were probably only two different Linux distributions that were both installed from a stack of 3.5" floppies, any potential user could be relied upon to be in possession of a significant amount of clue and everything was far simpler.  These days you've no idea what you're letting yourself in for :D

James

Well, I'm getting there with the histogram:

[screenshot: histogram.png]

Now I want to add some text to display the maximum pixel value and percentage of maximum, and then to do the same for RGB images (this one is from a greyscale capture) to display individual red, green and blue histograms on the same graph.
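Those statistics fall out of the same pass that builds the histogram; a sketch for 8-bit data (hypothetical struct and function names):

```cpp
#include <cstdint>
#include <cstddef>

// Sketch of the 8-bit histogram plus the two statistics mentioned above:
// a 256-bin count, the maximum pixel value seen, and that maximum
// expressed as a percentage of full scale (255).
struct HistStats {
    unsigned bins[256];
    uint8_t maxValue;
    double percentOfMax;
};

HistStats buildHistogram(const uint8_t *pixels, size_t count)
{
    HistStats h = {};  // zero-initialise bins and maxValue
    for (size_t i = 0; i < count; ++i) {
        ++h.bins[pixels[i]];
        if (pixels[i] > h.maxValue)
            h.maxValue = pixels[i];
    }
    h.percentOfMax = 100.0 * h.maxValue / 255.0;
    return h;
}
```

For RGB frames the same pass would simply be run once per channel to get the three separate histograms.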

James

RGB histogram and reticle done.  I think I might have to tone the green down in the histogram.  It's a bit, err, vibrant...

[screenshot: hist-reticle.png]

Later I'll add some options for a set of tram-lines or something similar.

James

Whilst implementing a few other things this evening I've been pondering on raw (undebayered) colour images.

It looks like there are dozens of debayering algorithms, some of which are very good but computationally quite expensive.  I'm thinking that it would make good sense where raw data is available from the camera to use a quick debayering algorithm for the preview to keep the frame rate up (bilinear, perhaps) and store the raw data on disk for later processing using any number of algorithms that aren't really feasible in "real time".  I'm even given to wondering if there may not be some method for "scoring" the debayered output such that one file could be automagically processed using a number of different algorithms and those that score badly discarded without human intervention, but that's for another time...
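As a point of comparison for the fast preview path, even simpler than bilinear is a "superpixel" decode that collapses each 2x2 Bayer cell into one RGB pixel at half resolution; a sketch assuming an RGGB pattern (hypothetical function name, and bilinear would interpolate at full resolution instead):

```cpp
#include <cstdint>
#include <cstddef>

// Quick-and-dirty "superpixel" decode of a raw RGGB Bayer frame.  Each
// 2x2 cell (R G / G B) collapses to one RGB pixel, averaging the two
// greens; the output is half the width and height of the input.
void debayerSuperpixel(const uint8_t *raw, uint8_t *rgb,
                       size_t width, size_t height)
{
    size_t outWidth = width / 2;
    for (size_t y = 0; y + 1 < height; y += 2) {
        for (size_t x = 0; x + 1 < width; x += 2) {
            size_t o = ((y / 2) * outWidth + x / 2) * 3;
            rgb[o] = raw[y * width + x];                          // R
            rgb[o + 1] = static_cast<uint8_t>(
                (raw[y * width + x + 1] + raw[(y + 1) * width + x]) / 2); // G
            rgb[o + 2] = raw[(y + 1) * width + x + 1];            // B
        }
    }
}
```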

Anyhow...  Back to the plot.

I can't find any recognised pixel format definition for raw video frames, nor a codec that directly handles them.  Is anyone else aware of one, or shall I just shove them in an uncompressed raw video stream and assume the user will sort the details out later?

James
