Is the Kinect a CPU hog? – a follow up

by Stephen Hobley on December 7, 2010


After upsetting a few people with my previous post regarding the amount of post-processing required to use gesture recognition with the Kinect – I thought it was worth posting a follow-up, now that I’ve had time to delve into the device a bit more deeply.

First off – I wanted to address the dispute over my comment about the 3×3 checkerboard arrangement of the projected dot pattern. Here’s some footage I shot with an IR camera – actually a modified EyeToy camera with the internal IR-cut filter removed and a visible-light blocking filter on the front.

You can clearly see the dot pattern in this movie, and it is in fact arranged in a 3×3 grid of light and dark regions, each one with a central bright spot.

Now that I’ve had a chance to look at the pattern, it could well be that the Kinect is doing a form of ‘average light returned’ measurement for each sample point – as the reflecting surface moves further away, the dot density will decrease. I think the jury is still out on that one. Parallax detection with such a dense and possibly random pattern is quite impressive.

Via a comment posted to the Youtube video I was directed to this patent that shows that the pattern of dots is actually significant to the depth mapping algorithm.

Now on to deciphering the sensor data…

I compiled the C++ client application that plots a depth map using OpenGL – this really smokes my laptop (roughly 2.5 GHz dual core) and drops frames like crazy. However, if you restrict your attention to just the depth map and run the blob detection routines on that alone, things become more manageable.

Also, it is critical to compile the image processing library in Release mode – OpenCV does much better in this configuration, and the frame processing is much slicker. It is possible to define a region of interest within the depth data and reduce CPU processing time to well within the 1/30th-of-a-second frame window. This still leaves your machine somewhat “busy”, with a reduced amount of CPU available for other tasks.
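To make the region-of-interest idea concrete, here’s a minimal sketch of the kind of processing I mean – it assumes a raw 640×480 16-bit depth buffer from the driver, and the ROI rectangle and depth thresholds are placeholder values, not numbers from any actual Kinect code:

    // Minimal ROI sketch: only run blob detection on part of the
    // depth frame. Assumes a raw 640x480 16-bit depth buffer.
    #include <opencv2/opencv.hpp>
    #include <stdint.h>
    #include <vector>

    void processDepthFrame(uint16_t* depth)
    {
        // Wrap the raw buffer without copying it.
        cv::Mat depthMap(480, 640, CV_16UC1, depth);

        // Only look at the middle of the frame - this is where most
        // of the CPU saving comes from. Coordinates are placeholders.
        cv::Mat roi = depthMap(cv::Rect(160, 120, 320, 240));

        // Keep only pixels inside a depth band (raw units,
        // hypothetical thresholds) - produces an 8-bit mask.
        cv::Mat mask;
        cv::inRange(roi, cv::Scalar(400), cv::Scalar(800), mask);

        // Cheap blob detection: external contours of the mask.
        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(mask, contours, CV_RETR_EXTERNAL,
                         CV_CHAIN_APPROX_SIMPLE);

        // ...act on contour centroids here...
    }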

However, I’ve not been able to get a data rate any faster than 30 fps. For most applications this will be acceptable, but for a real-time music controller it’s only really good enough for driving continuous control data – not discrete note-on / note-off events. Triggering rhythmic samples is right out: at 30 fps the frames arrive roughly 33 ms apart, so if you miss the beat the sample will be off for its whole duration. For this kind of thing you *need* at least 60 fps, and higher if possible.

I still think that the best approach is to use a separate machine for extracting the control data, and then pass this along to a rendering machine to handle the output.
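As a rough sketch of that split, the vision machine could fire off one small UDP datagram of control data per frame to the rendering machine – the address, port and message layout here are all placeholders, not code from any real project:

    // Two-machine sketch: send one tiny control message per frame
    // from the vision box to the rendering box over UDP.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>

    int main()
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        sockaddr_in dest;
        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(9000);                        // hypothetical port
        inet_pton(AF_INET, "192.168.1.50", &dest.sin_addr); // rendering machine

        // One small text message per frame: "x y depth".
        char msg[64];
        int len = snprintf(msg, sizeof(msg), "%d %d %d", 320, 240, 600);
        sendto(sock, msg, len, 0, (sockaddr*)&dest, sizeof(dest));

        close(sock);
        return 0;
    }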

I had a chance to talk to the makers of the Kinect Piano shown in the last post; they’ve been able to optimize the performance of their system (they use Python) to get the response time much tighter, and I think it shows in this video.

They persuaded me to give up my C++ code in favo[u]r of a Python interpreter – I’ll be giving this a shot and publishing the results as soon as I can. Additionally, I’ll be trying things out on a Mac to see if it’s any smoother. All the cool kids seem to be using Macs nowadays.

Finally I’ve tried to use the Kinect as a 3D scanner – it seems to do quite well on most simple forms, although it’s not so good at recognizing a Dalek…

UPDATE – OK, so I fired up my Mac and “gitted” the latest OpenKinect driver. When I go to build, it tells me that the std::map m_devices has no access method called at(), as used in createDevices() defined in libfreenect.hpp. I’m on OS X 10.5.8 – anyone Mac-savvy know how to get around this?

UPDATE UPDATE: Fixed it m’self by replacing .at(_index) with [_index] – it was a guess, but it seems to work.
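For anyone who hits the same wall: std::map only gained an at() method with C++0x/C++11, so a pre-C++11 standard library (like the toolchain on 10.5.8) simply doesn’t have it. Here’s the essence of the change – FreenectDevice is my shorthand for whatever the map actually holds, not necessarily the exact declaration in libfreenect.hpp:

    #include <map>

    class FreenectDevice;  // stand-in; the real type lives in libfreenect.hpp

    FreenectDevice* getDevice(std::map<int, FreenectDevice*>& m_devices,
                              int _index)
    {
        // Fails to build on a C++03 library - std::map has no at():
        // return m_devices.at(_index);

        // Builds everywhere, but note the different semantics: a missing
        // key is default-constructed (to a null pointer here) and
        // inserted, where at() would have thrown std::out_of_range.
        return m_devices[_index];
    }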


4 comments

Alex December 7, 2010 at 10:26 pm

> They persuaded me to give up my C++ code in favo[u]r of a Python interpreter – I’ll be giving this a shot and publishing the results as soon as I can

Processing latency has nothing to do with Python itself. It’s some stupid form of advocacy.

Stephen Hobley December 7, 2010 at 11:20 pm

We’ll see…

I’ve been trying to build OpenKinect on my Mac for the last hour or so, but it’s telling me that std::map does not contain the method at(int) when I go for the final ‘make’ – so that’s got me stumped at the moment.

hirsch December 8, 2010 at 3:17 am

std::map has no at() method the way std::vector does. std::map models the "Sorted Associative Container" concept, while std::vector is a "Random Access Container", where the at() method gives "safe" element access because it throws an out_of_range exception. That check also makes it slower than operator[], whose behaviour on an out-of-bounds index is undefined – in practice it often just segfaults.
Using operator[] on a std::map, on the other hand, simply inserts a new element if the key does not exist, so no at() method is required.
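A quick standalone illustration of both behaviours (a made-up example, nothing from the libfreenect source):

    #include <iostream>
    #include <map>
    #include <stdexcept>
    #include <vector>

    int main()
    {
        std::vector<int> v(3, 0);
        try {
            v.at(10);                  // checked access: throws
        } catch (const std::out_of_range&) {
            std::cout << "vector::at threw out_of_range\n";
        }

        std::map<int, int> m;
        m[42] = 7;                     // key 42 was missing: silently inserted
        std::cout << "map now holds " << m.size() << " element(s)\n";
        return 0;
    }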

Stephen Hobley December 8, 2010 at 7:59 am

hirsch – Thanks for the reply – the [] operator was just a guess, but when I looked at the documentation I realized that there was no at() operation defined. So this could be a bug in the code, or something else – not really sure.

Replacing the call to at() with [] when returning a newly inserted element seems to get everything to build, and I can connect to the Kinect and control the LEDs and motor, but I get no video or depth information back.

I get an Isochronous data error in the output window.
