You may remember that some months ago Microsoft Research published details about KinectFusion, an application that incrementally digitizes a 3D scene with the Kinect. So far they haven't made it available to the public, but now some engineers have reproduced the functionality and released it for non-commercial use.
ReconstructMe works on top of the OpenNI framework and can therefore also use the Asus Xtion sensor. The application uses the GPU to generate the 3D data; if your GPU isn't powerful enough, you can still process recorded data, just not in real time. The generated 3D scene can be exported in the STL and OBJ formats.
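The post doesn't show ReconstructMe's exporter itself, but the ASCII STL format it can write is simple enough to sketch in a few lines of Python (the function name and the one-triangle mesh below are made up for illustration):

```python
# Minimal sketch of the ASCII STL format that tools like ReconstructMe
# export: each facet lists a surface normal and three vertices.
# Illustrative only; ReconstructMe's own writer is not public.

def write_ascii_stl(triangles, name="mesh"):
    """Serialize a list of ((nx, ny, nz), [v1, v2, v3]) facets to ASCII STL."""
    lines = ["solid %s" % name]
    for normal, verts in triangles:
        lines.append("  facet normal %f %f %f" % normal)
        lines.append("    outer loop")
        for v in verts:
            lines.append("      vertex %f %f %f" % v)
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid %s" % name)
    return "\n".join(lines)

# One right triangle in the z=0 plane, normal pointing along +z.
stl = write_ascii_stl([((0.0, 0.0, 1.0),
                        [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])])
print(stl.splitlines()[0])  # → solid mesh
```

Binary STL is more compact, but the ASCII variant makes it obvious why the format carries no connectivity or texture information, which is also why OBJ is offered as an alternative.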
Not so long ago, augmented reality (AR) was an experimental technology that rarely left the lab and required a high level of technical expertise and knowledge to create new applications. Now, thanks to advances in smartphone hardware, AR technology is much more available and easily accessible for users and developers alike.
No matter how hard Skype and others try to convince us otherwise, we still do most of our web communications via text or, if entirely unavoidable, by voice. Maybe we're luddites, or maybe video calling has yet to prove its value. Hoping to reverse such archaic views, researchers at the MIT Media Lab have harnessed a Kinect's powers of depth and human perception to provide some newfangled videoconferencing functionality. First up, you can blur out everything on screen but the speaker to keep focus where it needs to be. Then, if you want to get fancier, you can freeze a frame of yourself in the still-moving video feed for when you need to do something off-camera. To finish things off, you can even drop some 3D-aware augmented reality on your viewers. It's all a little unrefined at the moment, but the ideas are there and well worth seeing. Jump past the break to do just that.
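The post doesn't include the MIT code, but the speaker-only blur is conceptually a depth test: keep the pixels the Kinect reports as near, blur the rest. A minimal grayscale sketch in Python, assuming a made-up threshold and function name (a real implementation would use a proper blur kernel and the actual depth stream):

```python
import numpy as np

def blur_background(frame, depth, near_mm=1200):
    """Keep pixels nearer than near_mm sharp; apply a 3x3 box blur elsewhere.

    frame: 2-D float array (grayscale for simplicity); depth: same shape,
    in millimeters. Threshold and names are illustrative assumptions,
    not taken from the MIT Media Lab demo.
    """
    blurred = sum(np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return np.where(depth < near_mm, frame, blurred)

# Toy scene: a bright vertical stripe, with the left half "near" the camera.
frame = np.zeros((8, 8)); frame[:, 3] = 1.0
depth = np.full((8, 8), 3000.0); depth[:, :4] = 800.0
out = blur_background(frame, depth)
```

In the toy scene the stripe stays sharp where the depth map marks it as near, while the far half of the image is smeared, which is exactly the "focus on the speaker" effect described above.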
Capturing an object in three dimensions doesn't require the budget of Avatar. A new cell phone app developed by Microsoft researchers can be sufficient. The software uses overlapping snapshots to build a photo-realistic 3D model that can be spun around and viewed from any angle.
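The researchers haven't published the app's internals, but the geometric core of building 3D from overlapping snapshots is triangulation: once the same point is matched in two views with known camera poses, its position follows from a small linear system. A sketch of the standard linear (DLT) method, with made-up camera parameters:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Returns the 3-D point in world coordinates.
    """
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)      # solution is the null vector of A
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize

def project(P, X):
    """Project a 3-D point through a camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two cameras with the same (made-up) intrinsics; the second is shifted one
# unit along x -- standing in for two overlapping snapshots.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 5.0])   # a point on the captured object
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

A real app repeats this for thousands of matched features across many snapshots and must also estimate the camera poses themselves, but every photo-realistic point in the final model ultimately comes from this kind of two-view intersection.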
Another company providing a similar solution is 3Dmedia.
Blair MacIntyre, associate professor in Georgia Tech’s College of Computing and member of Tech’s GVU Center, discusses the release of Argon, the first mobile augmented reality browser based on open Web standards.
MacIntyre and his Augmented Environments Lab in the School of Interactive Computing developed Argon to move the Web into the world. It does so by taking video from the phone’s camera and rendering graphical content on top of the video to provide users with an experience that merges space with cyberspace.
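The rendering step Argon performs, drawing graphics on top of live camera frames, comes down to per-pixel alpha compositing. A minimal numpy sketch of that idea (array shapes and names are illustrative assumptions, not Argon's actual API):

```python
import numpy as np

def composite(frame, overlay, alpha):
    """Alpha-blend rendered overlay pixels onto a camera frame.

    frame, overlay: HxWx3 float arrays with values in [0, 1];
    alpha: HxW coverage mask (1 where graphics fully cover the video).
    """
    a = alpha[..., None]             # broadcast alpha over color channels
    return a * overlay + (1.0 - a) * frame

frame = np.zeros((4, 4, 3))                       # "camera video": all black
overlay = np.ones((4, 4, 3))                      # "graphics": all white
alpha = np.zeros((4, 4)); alpha[1:3, 1:3] = 1.0   # graphics cover the center
out = composite(frame, overlay, alpha)
```

The harder part of an AR browser is deciding *where* the overlay pixels go, i.e. registering Web content to tracked real-world positions; the blend itself is this one line per pixel.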