Open Source Kinect Fusion – Update

There is an update on the open source implementation of Microsoft's Kinect Fusion algorithm by the developers of the open source Point Cloud Library.

They have improved on the Microsoft implementation with their algorithm, called KinFu Large, which can scan multiple volumes in one pass, allowing larger scenes to be captured in one go.

The Point Cloud Library (PCL) is available as prebuilt binaries for Linux, Windows and OS X, as well as in source code from their SVN repository. The code relies heavily on the NVIDIA CUDA development libraries for GPU optimizations and requires a compatible GPU for best results. Information on how to set up your own build environment and the required dependencies is available on their site.

Besides the Kinect, the library supports several other sensors via OpenNI.
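The trick that lets KinFu Large grow beyond a single fixed volume is, roughly, a shifting active volume: as the camera moves, regions that leave the volume are dumped into a global world model and empty space scrolls in. A toy sketch of that idea, with a 1-D deque of slices standing in for the 3-D TSDF volume (all names here are illustrative, not the actual PCL API):

```python
from collections import deque

def make_volume(num_slices, fill=None):
    """A toy 1-D stand-in for the active TSDF volume: a deque of slices."""
    return deque([fill] * num_slices)

def shift_volume(volume, world_cloud, steps):
    """Shift the active volume forward as the camera translates.

    Slices pushed out of the volume are kept in the global world model;
    fresh empty slices enter on the other side.
    """
    for _ in range(steps):
        exiting = volume.popleft()       # slice leaving the active volume
        if exiting is not None:
            world_cloud.append(exiting)  # preserve it in the world model
        volume.append(None)              # empty space entering the volume
    return volume
```

The real implementation shifts a dense 3-D voxel grid on the GPU, but the bookkeeping pattern is the same: nothing that leaves the active region is lost.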


Coding for Kinect

If you want to get started with coding for the Kinect and tend to use Microsoft's API instead of OpenNI, I highly recommend having a look at Microsoft's Channel 9 – Coding4Fun articles on the Kinect. There are lots of practical examples with source code and technical background information. For all the web dudes out there, they even have examples for a Kinect WebSocket server: WebSocketing the Kinect with Kinection; Kinect, HTML5, WebSockets and Canvas.

Not all examples have been updated to the current Kinect for Windows SDK, which introduced a new interface for Kinect access. So check the dependencies if you want to build one.
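The WebSocket pattern in those samples is straightforward: a server tracks the skeleton and pushes joint positions as JSON frames, and a browser client maps them onto an HTML5 canvas. A minimal sketch of the payload side, assuming normalized joint coordinates (the message shape and names are made up for illustration, not the actual Channel 9 protocol):

```python
import json

def skeleton_message(joints):
    """Encode tracked joints ({name: (x, y)}, normalized 0..1) as the kind
    of JSON frame a Kinect WebSocket server might push to the browser."""
    return json.dumps({
        "type": "skeleton",
        "joints": [{"name": name, "x": x, "y": y}
                   for name, (x, y) in sorted(joints.items())],
    })

def to_canvas(joint, width, height):
    """Map a normalized joint onto HTML5 canvas pixel coordinates."""
    return (int(joint["x"] * width), int(joint["y"] * height))
```

On the browser side, the canvas client would simply `JSON.parse` each frame and draw circles at the mapped pixel positions.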

Also worth a look is the Kinect for Windows blog.


KinectFusion – Open Source

UPDATE: There is a more recent post on this topic here: Open Source Kinect Fusion – Update

Developers of the open source Point Cloud Library have implemented the Kinect Fusion algorithm as published in the paper by Microsoft.

The preliminary source code is currently available in their SVN repository. The code relies heavily on the NVIDIA CUDA development libraries for GPU optimizations and requires a compatible GPU for best results.

Besides the Kinect, the library supports several other sensors. Moving forward, the developers want to continue refining the system and hope to improve upon the original algorithm in order to model larger-scale environments in the near future. The code is still beta; a stable release is planned to coincide with the upcoming PCL 2.0 release.
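At the heart of the published Kinect Fusion algorithm is volumetric integration: each voxel stores a truncated signed distance to the nearest surface plus a weight, and every new depth frame is folded in as a weighted running average. A per-voxel sketch of that update (parameter names and the simple weight-of-one-per-frame scheme are illustrative; see the paper for the full formulation):

```python
def truncate(sdf, mu):
    """Clamp a signed distance to the truncation band [-mu, mu]."""
    return max(-mu, min(mu, sdf))

def fuse_voxel(tsdf, weight, measurement, mu, max_weight=128):
    """Fold one depth measurement into a voxel.

    New value is the weighted running average of the stored truncated
    signed distance and the new (truncated) measurement, which averages
    away sensor noise over successive frames.
    """
    d = truncate(measurement, mu)
    new_tsdf = (tsdf * weight + d) / (weight + 1)
    new_weight = min(weight + 1, max_weight)  # cap so the model can adapt
    return new_tsdf, new_weight
```

The GPU implementation runs this update for millions of voxels in parallel per frame, which is why a capable CUDA GPU matters so much here.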

I’m definitely looking forward to what the Kinect community is going to do with that.



Replacing the TV Remote

PrimeSense, the company providing the technology behind Microsoft's Kinect, is working with Asus on the long awaited 🙂 replacement for the TV remote. One result of this cooperation is a new Asus product, the “WAVI Xtion”. The technology behind it is the same as the Kinect's. Unlike Microsoft, Asus is not aiming for the gaming market but for the living room in general, seeing it as the door opener for bringing the PC into the living room.

Amongst other things, one important part of this development is increasing the resolution of the sensor to allow tracking of smaller structures like fingers.

[via Technology Review].


Controlling WoW with a Kinect

A first shot at implementing more complex interaction with the Kinect, by the Institute for Creative Technologies at the University of Southern California.

They are developing the Flexible Action and Articulated Skeleton Toolkit (FAAST), which is middleware to facilitate integration of full-body control with games and VR applications. FAAST currently supports the PrimeSensor and the Microsoft Kinect using the OpenNI framework.

In this video, they show how FAAST can be used to control off-the-shelf video games such as World of Warcraft. Since these games would not normally support motion sensing devices, FAAST emulates keyboard input triggered by body posture and specific gestures. The controls can be dynamically configured for different applications and games.
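Conceptually, the emulation layer is a mapping from continuous posture measurements to held keys: while a measured value (say, a lean angle) exceeds a configured threshold, the bound key is pressed. A hedged sketch of that idea (the binding format and measure names here are invented for illustration and are not FAAST's actual configuration syntax):

```python
def posture_to_keys(pose, bindings):
    """Return the set of keys to hold for the current pose.

    pose:     {measure_name: value}, e.g. {"lean_forward_deg": 20.0}
    bindings: list of (measure_name, threshold, key) tuples; a key is
              held while its measured value exceeds the threshold.
    """
    return {key for measure, threshold, key in bindings
            if pose.get(measure, 0.0) > threshold}
```

Called once per tracking frame, the result tells the input-injection layer which virtual key-down events to maintain, so a game like World of Warcraft just sees an ordinary keyboard.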

FAAST is free software that uses OpenNI. It is currently being prepared for an open-source release. You can download FAAST at: http://people.ict.usc.edu/~suma/faast/
