
PS3Eye raw bayer (and 10bit) in Linux - success!


furrysocks2


I've previously mucked around with the gspca_ov534 kernel module in Linux to add additional frame rates... then I got a Windows laptop and used the CLEye driver, which made life easier.

https://stargazerslounge.com/topic/284481-ps3-eye-low-frame-rates-under-linux/

One of the problems with the way the kernel module is written is that it can't nicely support frame rates below one frame per second without some changes. That may yet happen, though.

 

I want to try debayering one of these cameras again, but I haven't found a way to capture raw frames in Windows other than using a custom app, which is still possible. I figured getting it working in Linux would be the first step.

Just found https://github.com/inspirit/PS3EYEDriver, which is a Linux-kernel-derived "driver" that can be made to run under Windows but isn't actually a driver for using the camera with the likes of SharpCap... it's a free alternative to the CLEye SDK for custom-app raw Bayer capture in Windows.

I jumped back into the ov534 kernel module source code in Linux, copied (wholesale) the initialisation routines for the bridge and sensor from PS3EYEDriver, modified a couple of colourspace and bit-depth parameters, and ran up guvcview...

blob.png.bab733f6b4f443e79fa26fed098a5332.png

blob.png.aade093bc5df3ce04b2863d7b1e54767.png
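(For context, those "colourspace and bit-depth parameters" roughly amount to swapping the driver's YUYV mode-table entries for raw Bayer ones. A sketch of what that looks like, not the exact patch - the array name is just for illustration:)

#include <linux/videodev2.h>

/* Raw-Bayer mode table for the gspca ov534 driver: one byte per pixel,
 * GRBG ordering straight off the sensor, no conversion in the bridge. */
static const struct v4l2_pix_format ov772x_raw_mode[] = {
	{320, 240, V4L2_PIX_FMT_SGRBG8, V4L2_FIELD_NONE,
	 .bytesperline = 320,
	 .sizeimage = 320 * 240,
	 .colorspace = V4L2_COLORSPACE_SRGB,
	 .priv = 1},
	{640, 480, V4L2_PIX_FMT_SGRBG8, V4L2_FIELD_NONE,
	 .bytesperline = 640,
	 .sizeimage = 640 * 480,
	 .colorspace = V4L2_COLORSPACE_SRGB,
	 .priv = 0},
};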

I saved off a single frame as .raw and ran it through https://github.com/jdthomas/bayer2rgb with the GRBG pattern. Opened it in GIMP, applied auto white balance, and success:

blob.png.305be1241621df719016312e1b4a7784.png
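(If you just want to sanity-check a .raw capture without bayer2rgb, a quick-and-dirty "superpixel" debayer will do: each GRBG 2x2 cell - G R on top, B G below - becomes one half-resolution RGB pixel. A rough sketch, assuming 8-bit data and even width/height; bayer2rgb does proper interpolation:)

#include <stdint.h>
#include <stddef.h>

/* Collapse each GRBG 2x2 cell into one RGB pixel (half resolution). */
static void grbg_superpixel(const uint8_t *raw, int w, int h, uint8_t *rgb)
{
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x += 2) {
            const uint8_t *p = raw + (size_t)y * w + x;
            uint8_t *out = rgb + 3 * ((size_t)(y / 2) * (w / 2) + x / 2);

            out[0] = p[1];                              /* R: top right       */
            out[1] = (uint8_t)((p[0] + p[w + 1]) / 2);  /* G: average of two  */
            out[2] = p[w];                              /* B: bottom left     */
        }
    }
}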

 

I have a suspicion that the CLEye Windows driver does actually grab raw frames from the camera but doesn't appear to offer them through the driver interface. I may be wrong - please someone tell me if raw capture is possible in Windows without resorting to custom apps.

 

Anyway, nice option to have in the back pocket for now.


Gamma control proof of concept.

Used a spreadsheet to calculate values for the 16 registers that control gamma, for gamma values of 0.25, 0.5, 1, 2 and 4...

blob.png.b9f8eb315b0e3550d138283e52ab589d.png

Hardcoded these tables in the kernel module and added a gamma control to the interface. Here's what came back from the camera:

blob.png.bdfa884e86078854ab6b9fb4794cb986.png

blob.png.f3d1a7d69e01a2eb7e1dcd6cf206572f.png

blob.png.9d47c6eb12979d66993fcd2920756108.png

blob.png.b7b95cef6272478892c2641bf315d659.png

blob.png.1eeafa9934297513464cc247dbcd1a47.png

Next step would be to compute the register values based on the gamma slider value, to give me more than just 5 hardcoded gamma correction curves.

(Edit: apparently you can't do floating-point operations in the kernel, so I can't raise anything to the power of 1/γ in the driver...)
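The spreadsheet step is trivial to redo in userspace C, though - something like this could dump tables for the module to carry around. The segment endpoints, the exponent convention and the formatting here are illustrative assumptions, not values from the datasheet:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* segment input endpoints (0-255): illustrative values only -
     * take the real ones from the sensor datasheet */
    static const int x[16] = {   4,   8,  16,  32,  48,  64,  80,  96,
                               112, 128, 144, 160, 176, 192, 224, 255 };
    static const double gammas[] = { 0.25, 0.5, 1.0, 2.0, 4.0 };

    for (unsigned g = 0; g < sizeof(gammas) / sizeof(gammas[0]); g++) {
        printf("/* gamma = %.2f */ { ", gammas[g]);
        for (int i = 0; i < 16; i++)
            /* out = 255 * (in/255)^(1/gamma); swap the exponent to gamma
             * if the sensor expects the other convention */
            printf("%s0x%02x", i ? ", " : "",
                   (int)(255.0 * pow(x[i] / 255.0, 1.0 / gammas[g]) + 0.5));
        printf(" },\n");
    }
    return 0;
}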

 

 

 


As I can't calculate curves on the fly, I've pre-computed 256 gamma correction curves from 0.25 to 4 and stored them in a 4K table. Now gamma works a treat on the PS3 Eye.
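On the kernel side it's then just a lookup: the V4L2 gamma control value indexes the table and the 16 values get written to the sensor. Roughly like this, assuming the driver's sccb_reg_write() helper; the table contents and the register list are placeholders, not the real OV7720 addresses:

#include <linux/types.h>

#define GAMMA_CURVES	256
#define GAMMA_REGS	16

/* 256 curves x 16 registers = the 4K table; contents generated offline */
static const u8 gamma_curves[GAMMA_CURVES][GAMMA_REGS];

/* placeholder sensor register addresses */
static const u8 gamma_regs[GAMMA_REGS];

static void ov534_set_gamma(struct gspca_dev *gspca_dev, u8 val)
{
	int i;

	/* the gamma control value indexes straight into the table */
	for (i = 0; i < GAMMA_REGS; i++)
		sccb_reg_write(gspca_dev, gamma_regs[i], gamma_curves[val][i]);
}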

I expect that in most cases it's best to just obtain raw Bayer from the sensor and process on the computer, but it might be useful for something like focusing.


  • 2 weeks later...

Recomputed the gamma table, because I'd assumed the segment input endpoint values for each register were uniformly distributed, but they're not... RTFM! Also reduced from 256 curves to 64, still in the range 0.25 to 4. Edit: looking at the following graph, I could improve the distribution of curves.

ov534_gamma.jpg.24a77c6e512350d970c161d2b3a9bb4d.jpg

 

In the process of removing the Bayer matrix from one just now... short work with a no. 11 carbon steel scalpel blade... hope it's not too much. It feels like three layers: the top layer is like butter, the next two can chatter off together if you're not careful, but if the second is removed then the last layer peels. I've got it plugged into the laptop while I'm doing it, and the difference in sensitivity is quite noticeable! Just paused to get my air bulb - I've not been very tidy with it so far. I'll try to collect samples of each layer to pop onto some microscope slides.

I expect it won't be the last one I'll do...


39 minutes ago, Mick J said:

Good luck with this furrysocks2,  touched one of these many moons ago and know how fragile they are, interesting.

Cheers, Mick.

First one fried under the knife... I didn't think I was outside the lines to the circuitry at the side, but perhaps having it powered up makes a difference. The chip started getting warm anyway, which is what happened with the one I tried last year.

Second one I cut through the leads trying to get the glass off. Dead.

Third one, I broke the glass but got it off. Successfully scraped. I tried a pencil at one point but it left a load of carbon deposit - I didn't have any IPA so used xylene on a cotton bud. It's not pretty to look at under the microscope, and I've scratched a few pixels in one corner right at the end... perhaps scalpelling the top two layers off, maybe taking the third back too, and finishing with polishing compound would be better. I'll pick up another couple soon... they're only 50p!

Anyway, I've reassembled it with stock lens/housing... just about to turn the lights out and do a side by side comparison with an unmodified one.

Matt.


Test setup:

  • Windows
  • SharpCap 3.0
  • Lights out - maybe a little light left, but not much
  • 0.1 fps, exposure max, gain min
  • Take a 5-frame dark
  • Left is a single frame, right is a 5-frame stack
  • Converted to grayscale in GIMP (the stock scene was colour; the modified one was grey but had colour noise)

 

Stock camera:

image.png.4bc4178e6da8e1b8bc24b059223a55f1.png

 

Modified:

image.png.26b58f2a7a145b4262581636f243054f.png

 

Focus seems off on the modified camera; perhaps removing the microlenses/matrix has changed the focus, or perhaps the lenses differ between the two models - I didn't keep specific parts together. Also, the glow of the LEDs is much more clearly visible in the stock image... I can't get both sets of lights on both cameras to come on at the same time, but the blue on the stock is certainly brighter; the red may be much the same. So the stock scene has more illumination.

 

Currently, I don't have a way to do raw Bayer stacking with dark subtraction in Linux. Also, I think I could leave the matrix in place if I were to try an 850nm IR filter, as the colour filter components may all pass IR - an interesting comparison to try, perhaps. The key for that filter is raw capture rather than debayering the sensor.
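(The stacking step itself is simple enough - a minimal sketch of what such a tool would do, assuming 8-bit raw Bayer frames already loaded into memory; the function name is just for illustration:)

#include <stdint.h>
#include <stddef.h>

/* Subtract a master dark from each light frame, clamp at zero,
 * and average the results into a 16-bit stack. */
static void stack_with_dark(const uint8_t **lights, size_t nframes,
                            const uint8_t *master_dark,
                            uint16_t *stacked, size_t npixels)
{
    for (size_t p = 0; p < npixels; p++) {
        unsigned long sum = 0;

        for (size_t f = 0; f < nframes; f++) {
            int v = (int)lights[f][p] - (int)master_dark[p];
            sum += v > 0 ? (unsigned)v : 0;   /* clamp negative values */
        }
        stacked[p] = (uint16_t)(sum / nframes);
    }
}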

 

I still want to try and push this wee camera on the moon through my dob... the humble PS3 Eye.

Stack_245frames_12s.png.eea2ca3522cd493b602b875cf09e4a1e.png


Trying to see if I can get the camera to output 10 bit instead of 8 bit...

 

https://www.amazon.co.uk/Sony-PlayStation-Eye-Camera-EyeCreate/dp/B000W3YQ1Y says:

8 bit or 10 bit dynamic range

but also says:

6 or 75 degrees field or view zoom lens

which contains at least two mistakes.

 

https://en.wikipedia.org/wiki/PlayStation_Eye says:

8 bit per pixel is the sensor native color depth

but doesn't cite a source. Nothing relevant in the original press release.

 

The camera consists of an OV538 bridge processor and an OV7720 sensor. Data sheets found via Google:

 

The OV538 can handle 10-bit raw:

image.png.f7ec4b18f2c0194890a6afcfb6593363.png

Camera Interface - "takes either 10-bit RGB raw data or 8-bit YUV data"
Color Converter - "can also bypass ... RAW8 and RAW10 formats."

image.png.b030a7f6ef0b8b2559f3d2b1bf3cf9d0.png

 

As for the OV7720 sensor, it's a bit ambiguous:

General Description - "The OV7720/OV7221 provides full-frame, sub-sampled or windowed 8-bit/10-bit images in a wide range of formats..."
Features - "Output support for Raw RGB ..."
Key Specifications - "Output Format (8-bit): ... Raw RGB Data"
A/D Converters - "the bayer pattern Raw signal is fed to a 10-bit analog-to-digital (A/D) converter"
Digital Signal Processor (DSP) - "Transfer 10-bit data to 8-bit"

image.png.cdab76ce93944416177202880d543e01.png

 

I got excited re-reading when I found this one:

COM10 - "Common Control 10, Bit[0]: Output data range selection, 0: Full range, 1: Data from [10] to [F0] (8 MSBs)"

but it appears it's already set to 0.

 

 

I know I've got undebayered images in RAW8 working, because they're the right frame size (640x480x1) and, from an unmodified camera, they demosaic nicely. When I change to RAW10, I get a 25% increase in frame size (8-bit -> 10-bit), but I see "ff ff ff ff 00" repeated. The way 10-bit is packed gives four bytes of [9:2] MSBs, followed by one byte packed with the [1:0] LSBs of those four pixels. So it's still just 8-bit resolution. However, I've modified oacapture to give me an 8-bit preview of the 10-bit formatted data coming from the camera, discarding the two LSBs, which are 00 anyway for now. I tried RAW16, inspected the USB packets and got "fc 03" repeated, which is effectively the same thing: 8 bits.
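The preview conversion is trivial: for each 5-byte group, keep the four MSB bytes and skip the packed-LSB byte. A sketch of what I mean, not the exact oacapture code - the function name is just for illustration:

#include <stdint.h>
#include <stddef.h>

/* 8-bit preview of the packed RAW10 stream: four MSB[9:2] bytes per
 * group, then one byte of packed [1:0] LSBs which is simply skipped. */
static void raw10_to_8bit_preview(const uint8_t *src, uint8_t *dst, size_t npixels)
{
    size_t out = 0;

    for (size_t in = 0; out + 4 <= npixels; in += 5) {
        dst[out++] = src[in];
        dst[out++] = src[in + 1];
        dst[out++] = src[in + 2];
        dst[out++] = src[in + 3];
        /* src[in + 4] holds the four pixels' [1:0] bits - discarded here */
    }
}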

 

So I don't know whether the sensor is outputting 10 bits to the bridge or not, or, if it is, whether the bridge is dropping two bits. I don't know if it's even possible, but it's nice to think it might be. Those extra 2 bits would give 1024 levels of grey instead of 256.


I'd already added a 10-to-8-bit conversion routine to oacapture, and I got it to print debug output whenever it saw a non-zero fifth byte, which would trigger if I ever stumbled upon a register change that gave me true 10-bit resolution.

I think I found one (see the previous post) but wasn't sure how to test it... the camera's plugged into a different computer in a dark-ish room and I can't physically get to it, but I've been remoting into it.

 

I modified the 10-to-8-bit conversion routine in oacapture again so that, for each frame, it prints out the numerical values of five different groups of four pixels. Each line shows the MSB[9:2] byte for each of the four pixels, and the last byte packs the LSB[1:0] bits for all four.
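Something along these lines (a sketch, not the exact routine, and dump_raw10_samples() is just an illustrative name) - %x gives the bare hex values you see below:

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Print five evenly spaced 5-byte groups from a packed RAW10 frame:
 * four MSB[9:2] bytes, then the byte packing the four pixels' [1:0] bits. */
static void dump_raw10_samples(const uint8_t *frame, size_t nbytes)
{
    size_t ngroups = nbytes / 5;

    for (int i = 0; i < 5; i++) {
        const uint8_t *g = frame + (ngroups / 5) * i * 5;

        printf("%x %x %x %x %x\n", g[0], g[1], g[2], g[3], g[4]);
    }
}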

I upped the gain/exposure and dropped the frame rate to saturate the image, giving me:

ff ff ff ff aa
ff ff ff ff aa
ff ff ff ff aa
ff ff ff ff aa
ff ff ff ff aa

Each pixel here becomes "1111 1111 10" in binary, or 1022 out of a range 0-1023.

 

Then I tried the other way and dropped the exposure and gain at a higher frame rate to give a much darker frame. I dropped the gamma until all pixels were reading 0:

0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0

Definitively 0. Then, slowly incrementing the gamma, I was able to get one of the lines fluttering between "0 0 0 0 0" and "0 0 0 0 10"... that's hex, so the 10 at the end is "00 01 00 00" in binary, meaning the LSBs of one of the pixels were 01 - a single 10-bit pixel fluttering between 0 and 1. As I increased the gamma, that line of numbers (representing four adjacent pixels in what must have been a relatively brighter part of the frame) started to jiggle more, with low single-digit numbers appearing in the first four columns and the last digit dancing around. Increasing it further saw other lines start to show (pixels in darker regions): first the last digit changing, then the others.

0 0 0 0 0
0 1 1 0 5d
1 3 2 1 cd
0 0 1 0 64
0 0 0 1 1

Seeing those numbers behave in that way was what I needed to see.

 

Cool test. You probably have to be into coding to get all that, but good fun for a late night. ;)

I'll do fully illuminated 8-bit/10-bit comparison images tomorrow; I might need a 10-to-16-bit conversion routine in order to save frames off first, and a stretch to actually see any difference on my screen. It might all get lost in the noise - we'll see.


The PS3 eye in your image has the bad lens for modification, you need the curved lens as the IR filter on the flat lens forms part of the focusing for the image train.

A good idea for spare parts is to remove the board mounts from the bad/bricked cams, I think they have a 12 mm thread suitable for telescope nose pieces.


1 minute ago, Bruce Leeroy said:

The PS3 eye in your image has the bad lens for modification, you need the curved lens as the IR filter on the flat lens forms part of the focusing for the image train.

A good idea for spare parts is to remove the board mounts from the bad/bricked cams, I think they have a 12 mm thread suitable for telescope nose pieces.

I just learned about that tonight. I've a growing box of lenses and parts. ;) I'd like to see if I can get a wide-field sky view, so I'll play with a couple of lenses, but I'll likely be removing them in favour of 1.25" or C-mount nosepieces and full-size IR-cut or IR-pass filters.


Will do the 10-bit conversion later; starting with RAW16 out of the camera... larger frames over the wire, but easier to handle in existing tools.

guvcview wasn't getting it quite right:

image.png.6081229582e8618f7950f28fd3a0566c.png

I'm sure all the information is there, just a byte order or bit shift issue.

Saved off a .raw and I could see 10 bits of information.

image.png.fa8a98231536cfbad65e543d64b45605.png

Saved off a .png, cropped a portion where the change occurs, pulled up a histogram:

image.png.6178a987d3fc07cf03085aac8f8d1b56.png

The curve at the bottom is the top one cut up and put back together again... looks more like it.

 

 

Edit: it looks like it's right-aligned, i.e. it uses the 10 LSBs, so it's either a left-shift << 6 and then discard the lower byte, or a right-shift >> 2 and use the lower byte. That doesn't feel normal... but I've nobbled oacapture somewhat to give me a faithful 8-bit preview.
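In other words, something like this for the 8-bit preview (a sketch, assuming little-endian 16-bit words with the 10 bits right-aligned; the function name is illustrative):

#include <stdint.h>
#include <stddef.h>

/* Collapse right-aligned 10-bit samples (one per 16-bit little-endian
 * word) to an 8-bit preview by keeping bits [9:2]. */
static void raw16_to_8bit_preview(const uint8_t *src, uint8_t *dst, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        uint16_t v = (uint16_t)src[2 * i] | ((uint16_t)src[2 * i + 1] << 8);

        dst[i] = (uint8_t)(v >> 2);   /* same result as (v << 6) >> 8 */
    }
}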


Gave up on the 16-bit for now and opted to go with a routine to convert 10-bit into 16-bit, so that I minimise bandwidth as intended, and made oacapture save a frame off as a 16-bit greyscale PNG.

I wasn't certain I was going to get the 5th byte split up correctly into the LSBs for each of the 4 pixels in a block, and worried that if I got them in the wrong order, the subtlety would be lost in the noise.

I modified my 10-to-16-bit conversion routine to give me the same third of the image processed three times: once with the LSBs assigned in the order I thought they should be, once in the other order, and again discarding the two bits to leave 8-bit. I put the camera in front of the window and covered the lens with a piece of tissue to generate a flat:

blob.png.74f4cc9b12ff4e3ed32ad1818bd9f911.png

You can see how poor a finish I left after removing the bayer matrix.

I thought that by stretching it I could inspect pixels and see a difference, but no.

 

So I modified the routine again to generate a 10-bit CSV histogram for each portion of my output frame, and graphed it:

fullhistthree.jpg.c944400dbef4c87ab7c785e242f20630.jpg
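(The histogram generation itself is nothing fancy - roughly this, assuming each portion has already been unpacked to one 10-bit value per pixel; write_hist10_csv() is just an illustrative name:)

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* 10-bit histogram of a region, written out as value,count CSV rows
 * for graphing in a spreadsheet. */
static void write_hist10_csv(const uint16_t *pixels, size_t npixels, const char *path)
{
    unsigned long hist[1024] = { 0 };
    FILE *f = fopen(path, "w");

    if (!f)
        return;
    for (size_t i = 0; i < npixels; i++)
        hist[pixels[i] & 0x3ff]++;
    for (int v = 0; v < 1024; v++)
        fprintf(f, "%d,%lu\n", v, hist[v]);
    fclose(f);
}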

 

Zooming in on the area of interest: blue is my proposed order for assigning LSBs to pixels, red is the other order (just in case), and yellow is the 8-bit:

zoomedhistthree.jpg.c1b6aa8fac53e7084f4284f5f3b4c8ba.jpg

 

It's clear that the 8-bit (yellow) has a higher occurrence of fewer distinct values. I didn't expect a visual difference between blue and red, but there is one, and it's quite significant:

zoomedhisttwo.jpg.cc573dade1217b23120b1565e7e4e4ce.jpg

I've tried to describe the difference below:

Blue: 11111111 22222222 33333333 44444444 44332211
Red:  11111111 22222222 33333333 44444444 11223344

I really thought that getting the last two bits assigned correctly would not be that significant (lost to noise), and that if I couldn't tell which order was right, the whole exercise would be somewhat pointless... I'm still at a loss to explain why reversing the order in which the LSBs are assigned to their respective pixels is statistically significant.

However, the last graph above strongly suggests which method to choose, and happily it also confirms that I got the order right in the first place. Unless I've gone wrong in the process.
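For reference, the "blue" ordering unpacked to 16-bit looks roughly like this (a sketch, not the exact oacapture routine; raw10_to_16bit() is just an illustrative name):

#include <stdint.h>
#include <stddef.h>

/* Unpack RAW10 using the "blue" ordering above: pixel n's [1:0] bits
 * sit in bits [2n+1:2n] of the fifth byte. The 10-bit value is shifted
 * up to fill 16 bits for saving as a 16-bit greyscale PNG. */
static void raw10_to_16bit(const uint8_t *src, uint16_t *dst, size_t npixels)
{
    size_t out = 0;

    for (size_t in = 0; out + 4 <= npixels; in += 5) {
        uint8_t lsbs = src[in + 4];

        for (int n = 0; n < 4; n++) {
            uint16_t v = ((uint16_t)src[in + n] << 2) | ((lsbs >> (2 * n)) & 0x3);

            dst[out++] = (uint16_t)(v << 6);   /* scale 10-bit up to 16-bit range */
        }
    }
}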


I modified oacapture to output two consecutive frames, one saved as 8-bit and the next as 10-bit, and shot a very dark scene:

blob.png.6cf4a55cf57ae88518a589faabdf8138.png

Two histograms:

blob.png.40d9148c46ef656ea3e5de977c42740d.png

Two identical heavy stretches and the scenes look visually similar, 8bit left, 10bit right:

blob.png.458354f3a879e65d678738007c94c47a.png

Their histograms look different:

blob.png.2abd320a9682e3a5da3f0e1a310876fc.png 

And zooming waaay in, allowing for the fact that these were two different frames and the noise will be different, the 8bit does visually have fewer gray tones in it - so more dynamic range at 10bpp.

blob.png.0ae376488d5523d3d9d00d5c860f0c7c.png

 

So, RAW 640x480x10bpp @ 75fps from a PS3Eye, all thanks to DSP_Ctrl4[0].

 



Cheers, Bruce.

Hoping to improve on the lunar imaging I've done with this cam before... I've always had the frame rate; now I've added sensitivity and pixel depth - just need a tracking mount. I'd like to try an IR-pass filter too.

At the other end of things, it would be really nice to be able to express exposure in milliseconds instead of changing a frame rate and then an exposure slider. There's also a windowed mode (ROI) that could be useful for planetary imaging... particularly if it allows for higher (sub-)frame rates. I don't have a handle on all the clock registers, timing constraints, etc. yet, and I don't know how far I could take it under V4L2...


No, though I'd like both of course.

 

 IR/UV cut gives you visible light but no IR or UV.

image.png.38f5c8cf1c5ab08f9da963ca348fd67e.png

Source: https://www.firstlightoptics.com/uv-ir-filters/zwo-1-25inch-iruv-cut-filter.html

 

850nm IR band pass cuts out visible light and gives you IR only.

image.png.03b77d76ad96ae1881475e27f9c8e382.png

Source: http://www.365astronomy.com/ZWO-850nm-IR-Band-pass-Filter-31.7mm-1.25-for-IR-Sensitive-Cameras.html, (though FLO sell them too, they just didn't have the spectrum on their page)

 

There's also the likes of these: http://www.astronomik.com/en/infrarot-passfilter-infrared-pass-filters.html, or many more.

 


A benefit I've read of the 850nm IR band-pass, at least in relation to some ASI cameras, is that the colour Bayer matrix is transparent to IR at those wavelengths, so you can leave it in place and you'll get a fairly mono image anyway. The efficiency of the sensor perhaps drops off at longer wavelengths, but that's offset by a bright source such as the moon and the lessening effect of the atmosphere - or so I have read.

I've also read that the PS3 Eyes are "good" at 850nm; I know a lot of folk use them IR-only.


Just now, JamesF said:

Very impressed with the work you've done here and pleased that oacapture has been a help with some of it.

James

Cheers, James!

Couldn't have done it without you! Hope 10bit's not too radical, hoping to push something to my fork soon-ish.

Very lucky to have stumbled upon that register change, though - you keep pressing on with the next thing you want to get done but that was a special moment last night!


  • 1 month later...

Could you condense what you've done into a brief how-to?

I'm interested in using this camera for some scientific imaging. I've gone through the thread and it's not clear to me what software I need to get started, how to set the necessary register bits, etc.

