Coding for Kinect

If you want to get started with coding for the Kinect and tend to use Microsoft's API instead of OpenNI, I highly recommend having a look at Microsoft's Channel 9 – Coding4Fun articles on the Kinect. There are lots of practical examples with source code and technical background info. For all the web developers out there, they even have examples for a Kinect WebSocket server: WebSocketing the Kinect with Kinection; Kinect, HTML5, WebSockets and Canvas.
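To get a feel for what such a bridge pushes over the wire, here is a minimal sketch of serializing skeleton joint data to JSON for an HTML5/Canvas client. The frame layout and joint names are my own assumptions for illustration, not the actual protocol of the linked projects.

```python
import json

def skeleton_frame_to_json(joints):
    """Serialize tracked joints into a JSON frame for a Canvas client.

    `joints` maps a joint name to its (x, y, z) camera-space position in
    meters. The message layout is a made-up example, not the format used
    by the Kinection project.
    """
    return json.dumps({
        "type": "skeleton",
        "joints": {name: {"x": x, "y": y, "z": z}
                   for name, (x, y, z) in joints.items()},
    }, sort_keys=True)

# One hypothetical frame with two tracked joints
frame = skeleton_frame_to_json({
    "head": (0.01, 0.62, 2.10),
    "hand_right": (0.35, 0.10, 1.85),
})
print(frame)
```

A browser client would receive such frames over the WebSocket connection, parse the JSON, and draw the joints onto a canvas.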

Not all examples have been updated to the current Kinect for Windows SDK, which introduced a new interface for accessing the Kinect, so check the dependencies before you build one.

Also worth a look is the Kinect for Windows blog.

Problem Steps Recorder

Just discovered a really great tool that comes as part of Windows 7: the Problem Steps Recorder. It records user interaction into a document, making it easy to reproduce and document a problem.
To do so, it takes a screenshot of every user action, highlighting the focus area of the ongoing interaction with a colored frame, and it provides an automatically generated textual description of every step. Everything is stored in an MHTML container, an archive format that bundles the HTML and the screenshots into a single file that can be viewed with Internet Explorer. Several solutions exist to convert this file if Internet Explorer is not your favorite tool.

To launch it just type “psr” at the prompt in the Windows start menu.


KinectFusion – Open Source

UPDATE: There is a more recent post on this topic here: Open Source Kinect Fusion – Update

Developers of the open source Point Cloud Library (PCL) have implemented the KinectFusion algorithm as published in the paper by Microsoft Research.
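At the heart of KinectFusion, each voxel of a volumetric grid stores a truncated signed distance (TSDF) value plus a weight, and incoming depth measurements are merged as a weighted running average. Below is a simplified toy sketch of that fusion step in Python; the truncation distance and weights are assumed values, and this is not the actual PCL implementation.

```python
TRUNCATION = 0.1  # truncate distances beyond +/- 10 cm (assumed value)

def truncate(d):
    """Clamp a signed distance to the truncation band."""
    return max(-TRUNCATION, min(TRUNCATION, d))

def fuse(voxel, measurement, new_weight=1.0, max_weight=64.0):
    """Merge one signed-distance measurement into a (tsdf, weight) voxel."""
    tsdf, weight = voxel
    d = truncate(measurement)
    fused = (weight * tsdf + new_weight * d) / (weight + new_weight)
    return fused, min(weight + new_weight, max_weight)

# Repeated noisy measurements of a surface ~3 cm in front of one voxel:
# the fused value averages out the sensor noise over time.
voxel = (0.0, 0.0)
for d in [0.032, 0.029, 0.031, 0.028]:
    voxel = fuse(voxel, d)
print(round(voxel[0], 3))  # prints 0.03
```

Capping the weight at a maximum keeps the model responsive to change, so a moving object does not stay burned into the reconstruction forever.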

The preliminary source code is currently available in their SVN repository. The code relies heavily on the NVIDIA CUDA development libraries for GPU optimization and requires a compatible GPU for best results.

Besides the Kinect, the library supports several other sensors. Moving forward, the developers want to continue refining and improving the system, and they hope to improve upon the original algorithm in order to model larger-scale environments in the near future. The code is still beta; a stable release is planned to coincide with the upcoming PCL 2.0 release.

I’m definitely looking forward to what the Kinect community is going to do with that.



Projecting Desk Lamp Shares Workspace

Microsoft Research shows a very simple but compelling way to share your physical local desktop with a remote one.

The project, called IllumiShare, integrates a camera and a small projector into a desk lamp. The device overlays the image captured from the remote desk over your own as a projection.

3-D Models Created by a Cell Phone

Capturing an object in three dimensions doesn’t require the budget of Avatar; a new cell phone app developed by Microsoft researchers can be sufficient. The software uses overlapping snapshots to build a photo-realistic 3-D model that can be spun around and viewed from any angle.

Another company providing a similar solution is 3Dmedia.

[via technology review]