The V Motion Project

The V Motion Project is a visually powerful Kinect-based musical “instrument” developed by multiple artists for a marketing campaign.

On the technical side, they found a very creative, steampunk-like solution to the problem of multiple Kinects interfering with each other:

Matt Tizard found a white paper and video that explained an ingenious solution: wiggle the cameras. That’s it! Normally, the Kinect projects a pattern of infrared dots into space. An infrared sensor looks to see how this pattern has been distorted, and thus the shape of any objects in front of it. When you’ve got two cameras, they get confused when they see each other’s dots. If you wiggle one of the cameras, it sees its own dots as normal but the other camera’s dots are blurred streaks it can ignore. Paul built a little battery operated wiggling device from a model car kit, and then our Kinects were the best of friends.


Coding for Kinect

If you want to get started with coding for the Kinect and tend to use Microsoft's API instead of OpenNI, I highly recommend having a look at Microsoft's Channel 9 – Coding4Fun articles on the Kinect. There are lots of practical examples with source code and technical background information. For all the web dudes out there, they even have examples for a Kinect WebSocket server: WebSocketing the Kinect with Kinection; Kinect, HTML5, WebSockets and Canvas.
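If you just want to see skeleton data flowing into the browser, the client side of such a WebSocket setup can be very small. The following TypeScript sketch assumes a hypothetical endpoint and JSON joint format (the actual Coding4Fun samples define their own protocol) and simply draws each received joint onto a canvas:

```typescript
// Minimal browser-side sketch: receive joint positions from a Kinect WebSocket
// server and draw them on a canvas. The endpoint (ws://localhost:8181) and the
// JSON message shape are assumptions for illustration, not the Coding4Fun
// samples' actual protocol.

interface Joint {
  name: string;
  x: number; // normalized 0..1, left to right
  y: number; // normalized 0..1, top to bottom
}

const canvas = document.getElementById("skeleton") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

const socket = new WebSocket("ws://localhost:8181");

socket.onmessage = (event: MessageEvent) => {
  const joints: Joint[] = JSON.parse(event.data as string);

  // Clear the previous frame and draw one dot per tracked joint.
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = "#3b9";
  for (const joint of joints) {
    ctx.beginPath();
    ctx.arc(joint.x * canvas.width, joint.y * canvas.height, 5, 0, Math.PI * 2);
    ctx.fill();
  }
};

socket.onerror = () => console.warn("Could not reach the Kinect WebSocket server.");
```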

Not all examples have been updated to the current Kinect for Windows SDK, which introduced a new interface for Kinect access, so check the dependencies if you want to build one.

Also worth a look is the Kinect for Windows blog.


The Leap: Gesture control like Kinect

A new USB device called The Leap, by Leap Motion, creates an 8-cubic-foot bubble of “interaction space”. It claims to detect your hand gestures down to an accuracy of 0.01 millimeters — about 200 times more accurate than “existing touch-free products and technologies,” such as your smartphone’s touchscreen… or Microsoft Kinect.

Wired's Gadget Lab had a detailed first look at the device.

The Leap is available for pre-order right now and will ship sometime during the December-through-February time frame. Leap Motion will make an SDK and APIs available to developers, and plans on shipping the first batches of the hardware to developers as well. An application to sign up to be one of the first coders to work with the Leap is on the company’s site.

 


KinectFusion – Open Source

UPDATE: There is a more recent post on this topic here: Open Source Kinect Fusion – Update

Developers of the open-source Point Cloud Library have implemented the KinectFusion algorithm as published in the paper by Microsoft.

The preliminary source code is currently available in their SVN repository. The code relies heavily on the NVIDIA CUDA development libraries for GPU optimizations and will require a compatible GPU for best results.

Besides the Kinect, the library supports several other sensors. Moving forward, the developers want to continue to refine and improve the system, and hope to improve upon the original algorithm in order to model larger-scale environments in the near future. The code is still beta; a stable release is planned to coincide with the upcoming PCL 2.0 release.

I’m definitely looking forward to what the Kinect community is going to do with that.

 

 


Kinect as a 3D Scanner – reloaded

Maybe you remember that some months ago Microsoft Research published information about KinectFusion, an application that lets you incrementally digitize a 3D scene with the Kinect. Until now they haven’t made it available to the public, but some engineers have been able to reproduce the functionality and have made it available for non-commercial use.

ReconstructMe works on top of the OpenNI framework and thus can also use the Asus Xtion sensor. The application makes use of the GPU for generating the 3D data. If you don’t have a powerful enough GPU, you can still process recorded data, just not in real time. The generated 3D scene can be exported in the STL and OBJ formats.
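To give an idea of what that exported data looks like, here is a tiny, purely illustrative sketch of the Wavefront OBJ format (this is not ReconstructMe's exporter, just the minimal structure downstream tools expect):

```typescript
// Wavefront OBJ in its simplest form: one "v x y z" line per vertex and one
// 1-indexed "f a b c" line per triangle. Illustrative only.

type Vec3 = [number, number, number];

function meshToObj(vertices: Vec3[], triangles: [number, number, number][]): string {
  const lines: string[] = [];
  for (const [x, y, z] of vertices) {
    lines.push(`v ${x} ${y} ${z}`);
  }
  for (const [a, b, c] of triangles) {
    // OBJ face indices are 1-based.
    lines.push(`f ${a + 1} ${b + 1} ${c + 1}`);
  }
  return lines.join("\n") + "\n";
}

// Example: a single triangle.
console.log(meshToObj([[0, 0, 0], [1, 0, 0], [0, 1, 0]], [[0, 1, 2]]));
```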


Kinect as a 3D-Scanner


This is the coolest Kinect hack I’ve seen so far. The project, called “KinectFusion” and initiated by Microsoft Research, was shown at the SIGGRAPH computer graphics conference in Vancouver, B.C., and you can see it in the video below. With that integrated into a mobile device, AR really would start to make sense. It’s also interesting that you should be able to increase the resolution of the scan by scanning the scene multiple times.
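To see why repeated scanning helps: KinectFusion keeps a truncated signed distance value and a weight for every voxel of the scene volume and folds each new depth frame into a weighted running average, so sensor noise averages out over time. A small illustrative sketch of that per-voxel update (not Microsoft's actual code):

```typescript
// Illustrative sketch of the per-voxel fusion step behind KinectFusion: each
// voxel stores a truncated signed distance (tsdf) and a weight; every new
// depth observation is merged as a weighted running average, which is why
// sweeping the camera over a scene repeatedly sharpens the reconstruction.

interface Voxel {
  tsdf: number;   // truncated signed distance to the nearest surface
  weight: number; // how many (weighted) observations were fused so far
}

const MAX_WEIGHT = 128; // cap so the model can still adapt to scene changes

function fuseObservation(voxel: Voxel, measuredTsdf: number, measurementWeight = 1): Voxel {
  const newWeight = voxel.weight + measurementWeight;
  return {
    tsdf: (voxel.tsdf * voxel.weight + measuredTsdf * measurementWeight) / newWeight,
    weight: Math.min(newWeight, MAX_WEIGHT),
  };
}

// Example: noisy observations of the same voxel converge toward the true distance.
let v: Voxel = { tsdf: 0, weight: 0 };
for (const d of [0.12, 0.08, 0.11, 0.09]) {
  v = fuseObservation(v, d);
}
console.log(v.tsdf.toFixed(3)); // ~0.100
```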

Engadget has some more details on the presentation given at SIGGRAPH.


Augmented Videoconferencing

No matter how hard Skype and others try to convince us otherwise, we still do most of our web communications via text or, if entirely unavoidable, by voice. Maybe we’re luddites or maybe video calling has yet to prove its value. Hoping to reverse such archaic views, researchers at the MIT Media Lab have harnessed a Kinect‘s powers of depth and human perception to provide some newfangled videoconferencing functionality. First up, you can blur out everything on screen but the speaker to keep focus where it needs to be. Then, if you want to get fancier, you can freeze a frame of yourself in the still-moving video feed for when you need to do something off-camera, and to finish things off, you can even drop some 3D-aware augmented reality on your viewers. It’s all a little unrefined at the moment, but the ideas are there and well worth seeing. Jump past the break to do just that.
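The “keep the speaker in focus” trick essentially boils down to using the Kinect depth map as a mask over the color image. Here is a rough sketch of that idea, a simplified approximation rather than the MIT Media Lab implementation; the aligned per-pixel depth map and the cut-off distance are assumptions:

```typescript
// Blur everything farther away than a cut-off distance, keeping near pixels
// (the speaker) sharp. Assumes the depth map has one value per color pixel
// and is already aligned to the color frame.

function blurBackground(
  video: HTMLVideoElement,
  depthMm: Uint16Array,        // depth in millimetres, one entry per pixel
  canvas: HTMLCanvasElement,
  cutoffMm = 1500              // everything beyond ~1.5 m gets blurred
): void {
  const { width, height } = canvas;
  const ctx = canvas.getContext("2d")!;

  // Sharp copy of the current frame.
  ctx.filter = "none";
  ctx.drawImage(video, 0, 0, width, height);
  const sharp = ctx.getImageData(0, 0, width, height);

  // Blurred copy of the same frame.
  ctx.filter = "blur(8px)";
  ctx.drawImage(video, 0, 0, width, height);
  const blurred = ctx.getImageData(0, 0, width, height);

  // Per pixel: keep near pixels sharp, use the blurred version everywhere else
  // (a depth of 0 means "no reading" and is treated as background).
  const out = ctx.createImageData(width, height);
  for (let i = 0; i < depthMm.length; i++) {
    const src = depthMm[i] > 0 && depthMm[i] <= cutoffMm ? sharp : blurred;
    const p = i * 4;
    out.data[p] = src.data[p];
    out.data[p + 1] = src.data[p + 1];
    out.data[p + 2] = src.data[p + 2];
    out.data[p + 3] = 255;
  }
  ctx.putImageData(out, 0, 0);
}
```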

 

Find more details on the project here: http://kinectedconference.media.mit.edu/
