
Live Capture and generic video input?


Macavity

General question. Paul maybe? But "No Pressure"! It might not fit in with the general way in which live stacking progs work.   ;)

Basically: is it possible to get Lodestar Live <wink> to intercept the output of a typical, standard, USB video capture card? :p

I note that FireCapture (along with branded cameras) allows a generic "Webcam" input - it will even live stack as a preprocessor option! As far as I can see it's a summation of data - it *saturates* rapidly on typical terrestrial scenes! Undeveloped? But it looks to be doing the right thing?

Of course, there is no hardware camera control or "settings", but it might still attract uz Video Astronomers who use Watecs... or others with OSD or "hardware"-controlled cameras?  :)

Link to comment
Share on other sites

Am I getting right what you mean... analogue cameras via a video grabber, then manipulation of the digital signal? I had wondered about that myself :)

Yes, that's basically it! The sort of question my "management" used to ask - with the tacit assumption it "must be easy" to implement. :D

(The truth, of course, might be another story entirely...)

Link to comment
Share on other sites

In theory I'm sure it's possible.  In practice, however... well, put it this way: in theory I could climb Mount Everest, but in practice I shan't be packing my ice-axe this afternoon :)

I think it's probably fair to say that most astro cameras that we're used to dealing with are actually single-exposure cameras as far as the software is concerned.  When it looks like video, that's just because we grab lots of single frames very quickly.  It makes life easy though because the software can grab a frame, do the processing, grab a frame, and so on.  I'm pretty sure that's the case with the SX cameras.
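
Very roughly, in Python (the camera object and its take_exposure method here are made-up stand-ins rather than any real driver API), that loop looks something like this:

    import numpy as np

    def live_stack(camera, exposure_seconds, n_frames):
        """Ask for one exposure at a time and process each at our own pace."""
        stack = None
        for _ in range(n_frames):
            frame = camera.take_exposure(exposure_seconds)  # blocks until readout
            frame = np.asarray(frame, dtype=np.float32)
            stack = frame if stack is None else stack + frame
        return stack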

With a video camera it's effectively "free running" and you have to be ready to accept the frames as they arrive or risk dropping them, so in terms of coding it requires a different approach.  It's easier when there's a driver provided by the manufacturer to handle some of the problems, but when you've got to write your own it's rather more of a slog.
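
A rough sketch of what that different approach tends to look like - a capture thread that accepts each frame as it arrives and quietly drops it if the processing side hasn't caught up (done here with OpenCV's generic webcam interface; the device index 0 is just an assumption):

    import queue
    import threading
    import cv2

    frames = queue.Queue(maxsize=2)    # small buffer between capture and processing

    def capture_loop():
        cap = cv2.VideoCapture(0)      # the grabber appears as a standard webcam
        while True:
            ok, frame = cap.read()     # frames arrive at the video rate, ready or not
            if not ok:
                break
            try:
                frames.put_nowait(frame)   # accept the frame...
            except queue.Full:
                pass                       # ...or drop it rather than fall behind

    threading.Thread(target=capture_loop, daemon=True).start()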

For colour cameras it's probably also rather more complex to decode video images as they're often presented as luminance and chrominance signals rather than the RGB colour maps that we're perhaps more used to.  It's not hard to do the conversion once you know how, but there are a multitude of formats in use.
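
For what it's worth, a sketch of one such conversion (this assumes packed YUYV/YUY2 data; other devices deliver UYVY, NV12 and so on, each needing a different conversion constant):

    import numpy as np
    import cv2

    def decode_yuyv(raw_bytes, width, height):
        # YUYV packs two bytes per pixel (luminance plus shared chrominance).
        yuv = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(height, width, 2)
        return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB_YUYV)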

The biggest pain though is that once you have all this support it needs testing and debugging whenever you change code that might affect it and that can be exceptionally time-consuming, which isn't too appealing when you're doing the work for the fun of it.  Doubly so when you're dealing with multi-threaded execution (which you often can be with applications controlling cameras) and you get bugs that appear and disappear depending upon the exact timing of execution of the threads.

Which isn't to suggest that Paul won't take up the baton, of course :)

James

Link to comment
Share on other sites

Indeed so. A fine thing indeed... I suppose it's down to the "terrible twins" of the Video Capture FILTER / PIN? Invoked as some sort of sub-process?  :p

The issue seems to be transmitting / retaining data across the interface.

But we can dream... Leave this one open though. You never know...  ;)

Link to comment
Share on other sites

It's an interesting idea - it's possible. James pretty much covers the technical issues. The LL design is open to the way a video capture card would work, though, as the stacking and display processing are fed data by the camera controller, so it would be the same data flow.

In the future, when the immediate list of features etc. is complete, it's something I can look into further, as digital processing applied to analogue video will still prove very effective.

My main caveat would be that any capture device used would need to appear to the application as a standard capture device accessible via the OS - no bespoke APIs (once drivers are installed).

Paul

Link to comment
Share on other sites

Doesn't s/w like Deep Sky Stacker Live and "others" already do this, in that they just monitor a folder, see a file (JPEG etc.) and process the file(s) - in essence using the hard drive as a buffer instead of memory buffers, the latter being faster but limited to a smaller "buffer"? In oldie terms it's just doing batch processing of data instead of real time. Therefore, so long as you have the ability to interface with the OS and produce a data stream in a predetermined, recognised/valid/compatible format, the rest of the workflow stays the same.
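
Something along these lines, as a very rough Python sketch of the folder-monitoring idea (the folder name, file type and polling interval are arbitrary examples, not anything DSS Live actually does):

    import time
    from pathlib import Path
    import numpy as np
    from PIL import Image

    watch_dir = Path("incoming_frames")
    seen = set()
    stack = None

    while True:
        for path in sorted(watch_dir.glob("*.jpg")):
            if path in seen:
                continue
            seen.add(path)
            frame = np.asarray(Image.open(path), dtype=np.float32)
            stack = frame if stack is None else stack + frame
        time.sleep(1.0)    # the hard drive is the buffer, so latency is relaxed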

There are utilities such as FFmpeg (and others) which can do a lot of this and work across platforms (Unix, Linux, Windows, OS X?), which just leaves the specialist end processing.
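
For example, something like this (called from Python just to keep the examples in one language; the capture driver and device path are assumptions - on Windows it would be dshow and a device name instead) turns a grabber into a plain video file that any stacker could pick up:

    import subprocess

    subprocess.run([
        "ffmpeg",
        "-f", "v4l2",           # capture input driver on Linux
        "-i", "/dev/video0",    # the frame grabber as the OS sees it
        "-t", "10",             # grab ten seconds of video
        "frames.avi",
    ], check=True)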

 

There are, like with all things, compromises - e.g. slower working, drivers, having to load multiple processors from different vendors, etc., plus, as JamesF states, TESTING across platforms - the bugbear of all software.

In essence it means working like EQMOD, where the workflow is split into blocks which act at different stages (capture/stream/process etc.) but have to conform to a "standard" so that all parts can work together.

Sounds oh so simple when you just write it down - doing it is somewhat more complicated and messy!

I will now climb back under my stone.

Link to comment
Share on other sites

The live stacking feature is something that I'm working on right now in SharpCap.

In the current released version it's quite basic (no alignment yet) and only works for dedicated astro cameras that SharpCap accesses via their own APIs (ie ZWO, QHY, Altair, iNova, ASCOM, etc), but it can stack frames and save them to a .fits when you feel you've collected enough. There are also some basic white and black level controls to allow you to tweak the display of the stack to pull out faint detail.
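
In outline (this is just an illustrative Python/astropy sketch, not SharpCap's actual code), that basic version amounts to:

    import numpy as np
    from astropy.io import fits

    def add_to_stack(stack, frame):
        # Sum each incoming frame into a floating-point running stack.
        return frame.astype(np.float32) if stack is None else stack + frame

    def save_stack(stack, filename="stack.fits"):
        fits.PrimaryHDU(stack).writeto(filename, overwrite=True)

    def display_stretch(stack, black_level, white_level):
        # Map [black, white] onto 0..1 for display to pull out faint detail.
        scaled = (stack - black_level) / max(white_level - black_level, 1e-6)
        return np.clip(scaled, 0.0, 1.0)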

In the development version I have alignment working (including derotation) based on being able to find at least 3 stars in the image, and it will work with DirectShow/WDM/Webcam devices, so the frame grabber should work. I've still got a fair bit of work to do to make this new functionality presentable enough for release though...
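
The alignment step itself is conceptually something like this (an illustrative sketch only - it assumes the star centroids have already been detected and matched, which is where most of the real work is):

    import numpy as np
    import cv2

    def align_to_reference(frame, new_stars, ref_stars):
        # With at least 3 matched star positions, a partial affine (similarity)
        # transform covers drift, field rotation and small scale changes.
        matrix, _ = cv2.estimateAffinePartial2D(
            np.asarray(new_stars, dtype=np.float32),
            np.asarray(ref_stars, dtype=np.float32))
        h, w = frame.shape[:2]
        return cv2.warpAffine(frame, matrix, (w, h))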

The biggest problem with frame grabbers (as mentioned above) is that when you connect them to a camera that does long(ish) exposures, they will tend to send the frame to the PC many times (25fps), even though the image will only change every few seconds as the camera grabs a new frame. I think that my code will cope with this at least to some extent - if it takes 1s to do the alignment and stacking calculations then it will ignore the other 24 frames that arrive during that second and then process the next one that arrives after it has finished and so on. Not ideal as frames may get stacked a few times each, but better than trying to stack them 50 times each. Obviously this really needs a better solution, but the current situation is probably going to be usable.
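
The coping strategy boils down to keeping only the most recent frame from the grabber and letting the stacking side take it whenever it's free - roughly this (illustrative Python, with 'process' standing in for the align-and-stack step; not SharpCap's actual code):

    import threading
    import time
    import cv2

    latest = {"frame": None}
    lock = threading.Lock()

    def capture_loop():
        cap = cv2.VideoCapture(0)          # frame grabber; device index is an assumption
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            with lock:
                latest["frame"] = frame    # newer frames simply overwrite older ones

    def stacking_loop(process):
        while True:
            with lock:
                frame, latest["frame"] = latest["frame"], None
            if frame is None:
                time.sleep(0.01)           # nothing new yet
                continue
            process(frame)                 # while this runs, intervening frames are skipped

    threading.Thread(target=capture_loop, daemon=True).start()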

Fortunately the grabber device and DirectShow between them handle most of the other problems (so that each frame is presented in a standard RGB format and I don't need to worry about composite video, video compression, etc).

cheers,

Robin

Link to comment
Share on other sites
