

Binoviewing Revolution R2?


DAVIDC89

Recommended Posts

This is just a crazy idea I've had in my head for a while. A couple of questions/thoughts:

Is it possible to do, using Y splitters to run both cameras into the monitor?

Would it make a difference in image quality or would it be pointless?

Has anybody tried doing it?

 


I'm curious: do you mean splitting the light from the scope between two cameras? If so, I'm pretty sure it would decrease the light each camera gets, so you'd end up back at square one.

Maybe worse, since each camera is getting less light, which means longer exposures for each. That's if I've understood your question, of course.


Can see your thoughts on this. I'm sure Shevill Mather did this a while back; if memory serves me right he was using ZWO cameras. It was a few years ago, I think it was either on his website or VAF.

The same scenario is done via an off-axis guider: instead of sending the image for guiding, it could be sent to a computer, monitor or DVR.

A few years ago I made a multi-camera array using three cameras with different lenses, sending images to a 4-channel CCTV DVR. The AV feed was split: one feed went to a monitor, the second went to a video grabber and into a laptop. A lot of work setting up, but it was intended for an observatory for meteor capture.

I was using an Allview version of the Samsung SCB-2000, a Phil Dyer camera and a Samsung cam 2000; this was a portable set-up, a few years back lol




Still working on the concept of multiple-camera capture, though not through the same scope at this time. I'm using a Star Adventurer with a ZWO ASI178MC and a 130mm f/2.8 FD lens, plus a Canon 600D with a 50mm f/1.2 lens controlled via a CamFi WiFi unit from my phone.

Always been interested in using multiple cameras as the weather is poor at mine, so if I can get twice as much capture I'll be happy.

Still waiting for good weather for testing



On a monitor it probably wouldn't make much difference, as I don't think the monitor would combine the images; one frame coming in would just replace the previous one. Plus, unless your cameras are perfectly aligned, you'd see the picture jump as each camera's frame replaced the other's. Depending on how fast your frames come in, it might be rather jittery. It might be better to send them to different monitors, or to a split monitor like the one above, and use a filter on one to show the differences (e.g. H-alpha on one camera, no filter on the other).

If you sent them to a computer through a common frame grabber, it could stack faster, so the image would build in less time. For example, with both cameras imaging at 10-second exposures but offset by 5 seconds, you'd get the longer-exposure detail in half the time. Plus, you might get a noise benefit: one camera will likely differ from the other, so any hot pixels would average down. I'm not sure how you'd coordinate when the second camera starts, though.
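To put a rough number on that noise benefit, here's a minimal NumPy sketch (made-up signal and noise levels, not tied to any particular camera) showing that stacking frames from two cameras reaches a given noise level in half the time and averages down a hot pixel that only one camera has:

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0          # "real" sky brightness, arbitrary units
n_frames = 20                # frames captured per camera in the same clock time

def capture(n, hot_pixel):
    """Simulate n noisy 64x64 frames from one camera, with an optional hot pixel."""
    frames = true_signal + rng.normal(0, 10, size=(n, 64, 64))  # shot/read noise
    frames[:, 5, 5] += hot_pixel                                # camera-specific defect
    return frames

cam_a = capture(n_frames, hot_pixel=500.0)   # this camera has a hot pixel at (5, 5)
cam_b = capture(n_frames, hot_pixel=0.0)     # this one doesn't

stack_one = cam_a.mean(axis=0)                            # one camera stacked
stack_two = np.concatenate([cam_a, cam_b]).mean(axis=0)   # both cameras stacked

# Measure noise away from the hot pixel: two cameras give roughly 1/sqrt(2) of the noise.
print("noise, one camera :", stack_one[10:, 10:].std())
print("noise, two cameras:", stack_two[10:, 10:].std())

# The hot pixel only appears in half of the combined frames, so it averages down.
print("hot pixel, one camera :", stack_one[5, 5])
print("hot pixel, two cameras:", stack_two[5, 5])
```

The exact offset between the two start times doesn't matter for this; it's simply the doubled frame count, and the fact that the hot pixel is only in half the frames, that does the work.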


Hi Rob, I think that if both sets of images were sent to a capture folder, possibly Astrotoaster could stack both images into one. Phew, the mind boggles, lol, starting to get intense.


10 hours ago, shirva said:

possibly Astrotoaster could stack both images into one

I don't think stacking would be a problem with Astrotoaster, but precise alignment might be, as Astrotoaster uses Deep Sky Stacker. This is what the manual says:
"During the alignment process the best picture (the picture with the best score) will be used as the reference frame unless you choose another reference frame using the context menu.
All the offsets and rotation angles are computed relative to this reference frame.

The offsets and rotation angles are computed by identifying patterns of stars in the frames.
To put it simply the algorithm is looking for the largest triangles of which the side distances (and so the angles between the sides) are the closest.
When a sufficient number of such triangles is detected between the reference frame and the frame to be aligned the offsets and rotation are computed and validated using the least square method.
Depending on the number of stars a bisquared or bilinear transformation is used."

So IMHO, and I confess I don't know the maths, it would have problems if the "angles" are too great due to large differences in the images, but it's worth a try. You could be the guinea pig with your new dual-camera set-up; you'd need two identical cameras, I would think :-)
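For anyone curious what that triangle matching looks like in practice, here's a toy Python sketch (made-up star coordinates, and emphatically not DSS's actual code): it builds triangles from the star positions in each frame and compares their side lengths after normalising by the perimeter, which is why a match survives a shift, rotation or scale change between the two cameras.

```python
from itertools import combinations
import numpy as np

def triangle_signature(p1, p2, p3):
    """Sorted side lengths divided by the perimeter: unchanged by shift, rotation and scale."""
    sides = sorted([np.hypot(*(p1 - p2)), np.hypot(*(p2 - p3)), np.hypot(*(p3 - p1))])
    return np.array(sides) / sum(sides)

def match_triangles(stars_a, stars_b, tol=1e-3):
    """Return pairs of star triples whose triangle signatures are nearly identical."""
    matches = []
    for tri_a in combinations(stars_a, 3):
        sig_a = triangle_signature(*tri_a)
        for tri_b in combinations(stars_b, 3):
            if np.allclose(sig_a, triangle_signature(*tri_b), atol=tol):
                matches.append((tri_a, tri_b))
    return matches

# Hypothetical star positions; frame B is frame A rotated by 10 degrees and shifted.
stars_a = np.array([[10.0, 12.0], [55.0, 40.0], [30.0, 80.0], [70.0, 75.0]])
theta = np.radians(10)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
stars_b = stars_a @ rot.T + np.array([5.0, -3.0])

print(len(match_triangles(stars_a, stars_b)), "matching triangles found")
```

DSS then uses the matched triangles to solve for the offset and rotation, so a modest rotation between the two cameras shouldn't by itself stop alignment; very different fields of view would be more of a worry, simply because there'd be fewer stars in common to match.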


16 hours ago, shirva said:

Hi Rob, I think that if both sets of images were sent to a capture folder, possibly Astrotoaster could stack both images into one. Phew, the mind boggles, lol, starting to get intense.

The next version of SharpCap (3.2) is reportedly going to be able to monitor a folder as a DSLR workaround (https://forums.sharpcap.co.uk/viewtopic.php?f=22&t=833), so that should work as well. The only other issue would be capturing the frames. Robin recently stated that SharpCap can be opened in multiple instances by double-clicking it again and assigning each instance a different camera. So you'd need three instances: one for each camera and one to monitor the folder. You'd then capture frames with the first two and send them to the monitored folder. Though I'm not sure how it would tell the difference between two of the same camera when you try to connect them (I don't think it would show two options).
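In the meantime, the "send them to the monitored folder" step can be done with a small script. This is only a rough sketch using the Python standard library, with assumed folder names and file extension; it just polls the two capture folders and copies anything new into the one folder the stacking program is watching:

```python
import shutil
import time
from pathlib import Path

SOURCES = [Path("capture_cam1"), Path("capture_cam2")]  # assumed capture folders
MONITORED = Path("stack_monitor")                       # folder the stacker watches
MONITORED.mkdir(exist_ok=True)

seen = set()
while True:
    for src in SOURCES:
        for frame in src.glob("*.fits"):                # assumed file extension
            if frame not in seen:
                # Prefix with the source folder so the two cameras can't clash on filenames.
                shutil.copy2(frame, MONITORED / f"{src.name}_{frame.name}")
                seen.add(frame)
    time.sleep(2)                                       # poll every couple of seconds
```

Stop it with Ctrl+C when the session is done.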


Yeah, seen Robin's post about the DSLR support. I asked the question of whether you could have the two cameras working, i.e. one DSLR, one ZWO; awaiting an answer. I know two video cameras can be used, hopefully it will still work that way if one is a DSLR.


2 minutes ago, shirva said:

Yeah, seen Robin's post about the DSLR support. I asked the question of whether you could have the two cameras working, i.e. one DSLR, one ZWO; awaiting an answer. I know two video cameras can be used, hopefully it will still work that way if one is a DSLR.

I would think they'd need to be the same resolution so SharpCap can match pixel to pixel. For example, when you switch the binning during live stacking (essentially changing the resolution), it starts the stacking over.


Toaster will most definitely take the two images from two cameras and stack them without problem; it doesn't care where the image came from. There's a maximum derotation angle setting in DSS itself, which will allow slight differences between the two frames and still stack. But I will suggest a better way to 'collimate' the two cameras.

A handy tool for overlaying the two images, just to check the orientation of the two cameras, is a free Windows app called Peek Through. Once it's loaded you set up a hotkey, which in my case is Windows-A. At the start of the evening, go to a bunch of bright stars and take a short, high-gain test image of the target stars with one camera, then set the second camera to the same exposure and gain. With the two photos side by side (one from each camera), click on one camera's image so that window is active and press Windows-A (in my case); that window becomes transparent. You then simply drag the transparent window over the non-transparent one and drag the window edges to make it exactly the same size/scale. Bingo: you'll see whether the stars are aligned, because the foreground transparent image either overlays the background one or it doesn't. By turning one camera slightly, and thanks to the high gain on the bright target stars, you should see the stars from the camera you're turning rotate over the top of the image from the other camera. Then lock it down. Both cameras are now collimated, and so closely that DSS won't have any trouble stacking the images from them.

I know some people use a camera as an e-guider open in one copy of SharpCap, with another copy of SharpCap opened to run the main imaging camera. I haven't done it myself, but I've heard people say they do; else ask Robin. You'd set two copies of SC to run the two bino cameras and set both to save to the Toaster folder, and it will see every new shot from either camera; it doesn't know the difference, it just sees each new frame as another frame to stack.
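If Peek Through isn't to hand, the same overlay check can be done in software. Here's a rough sketch using OpenCV (the filenames are hypothetical test frames saved from each camera): a 50/50 blend shows each star once if the cameras are aligned, and as a doubled or smeared pair if they're not.

```python
import cv2

# Hypothetical test frames, one saved from each camera.
frame_a = cv2.imread("cam_a_test.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("cam_b_test.png", cv2.IMREAD_GRAYSCALE)

# Resize the second frame to the first frame's dimensions so they can be blended.
frame_b = cv2.resize(frame_b, (frame_a.shape[1], frame_a.shape[0]))

# 50/50 blend: aligned cameras give single stars, misaligned cameras give doubles.
overlay = cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0)

cv2.imshow("alignment check", overlay)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

It's the same idea as the transparent window, just without dragging and resizing anything by hand.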

It won't 'brighten' the image, but sure as heck you'll decrease noise (and so increase SNR) twice as fast as with just one camera: you collect twice as many frames in the same time, and since stacked noise falls with the square root of the frame count, you reach a given SNR in half the clock time. So you'd get to the final shot you're happy to accept as nice and noise-free sooner. But also, if you still went for the same total exposure time as with one camera, the better SNR would let you tease out more detail, and in effect get a 'brighter' shot with more detail.

Go for it mate !!!!  I'd love to see your results!


Great info Howie, I had an inkling that Toaster would pick it up like that.

Think it's worth a bash with the binoviewer and Revolution cameras, good luck with it.


You know, years ago, in the days when Mallincam AV cameras were king, I saw a post on another forum about a guy who was sending the analogue video feed from the MC camera into a Sony HMD, i.e. the early days of virtual reality headsets. Those Sony HMDs were uber expensive (for me, anyway!). But I loved the idea.

So when you folks get these dual-camera setups working, as well as using the twin setups to reduce noise and increase SNR more rapidly, it would be interesting to see if there's a VR headset that will display the two feeds side by side as a stereoscopic pair.

Now be careful about what I'm saying here, as I very much doubt the separation of the two cameras is going to make any difference at all in yielding a 3D effect; they are simply nowhere near far enough apart to do that given the distances to the objects being viewed. But the two images each contain different noise and different bits of clarity, and just like the stacking software, your brain should combine the two into one, instantly presenting you with a cleaner and more detailed 'view'. And, as with the Mallincam Sony HMD guy way back about seven years ago, in the absence of any peripheral light or distractions it would give the viewer a totally immersive, floating-in-space experience.


Yeah, I remember that, virtual headsets, a great idea that never quite took off due to price.

Bit like when we were talking about tracking via a smartphone in VR goggles; I bought a pile of Arduino kit and followed the instructions but could not get the prototype to work.

Now that would have been something special: you look around the night sky with VR goggles and the telescope follows your head movements.

Something that maybe should be revisited?

