This is the coolest Kinect hack I’ve seen so far. The project, called “Kinect Fusion” and initiated by Microsoft Research, was shown at the SIGGRAPH computer graphics conference in Vancouver, B.C., and you can see it in the video below. With that integrated into a mobile device, AR really would start to make sense. It’s also interesting that you can increase the resolution of the scan by scanning the scene multiple times.
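Why does rescanning sharpen the model? Each depth frame is noisy, and Kinect Fusion folds every new frame into a running per-voxel average (a truncated signed distance function, or TSDF), so noise cancels out over time. Here is a minimal toy sketch of that averaging idea for a single surface point; the function name and numbers are my own illustration, not Microsoft’s code.

```python
import random

# Toy illustration: fusing repeated noisy depth measurements of the same
# surface point reduces noise, which is roughly why rescanning a scene
# improves the reconstruction. Kinect Fusion does this per voxel in a TSDF;
# here we just keep a running weighted average, as in the TSDF update rule.

def fuse(avg, weight, sample, sample_weight=1.0):
    """Running weighted average update (the core of TSDF-style fusion)."""
    new_weight = weight + sample_weight
    new_avg = (avg * weight + sample * sample_weight) / new_weight
    return new_avg, new_weight

random.seed(0)
true_depth = 2.0  # metres (assumed ground truth for the demo)
avg, weight = 0.0, 0.0
for _ in range(100):  # 100 noisy "scans" of the same point
    noisy = true_depth + random.gauss(0, 0.05)  # 5 cm sensor noise
    avg, weight = fuse(avg, weight, noisy)

print(round(avg, 2))  # converges close to 2.0
```

A single frame is off by up to several centimetres; after a hundred fused frames the estimate sits within a few millimetres of the true depth.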
Engadget has some more details on the presentation given at SIGGRAPH.
Not so long ago, augmented reality (AR) was an experimental technology that rarely left the lab and required a high level of technical expertise to create new applications. Now, thanks to advances in smartphone hardware, AR technology is far more accessible to users and developers alike.
No matter how hard Skype and others try to convince us otherwise, we still do most of our web communication via text or, if entirely unavoidable, by voice. Maybe we’re luddites, or maybe video calling has yet to prove its value. Hoping to reverse such archaic views, researchers at the MIT Media Lab have harnessed a Kinect‘s powers of depth and human perception to provide some newfangled videoconferencing features. First up, you can blur out everything on screen but the speaker, keeping the focus where it needs to be. If you want to get fancier, you can freeze a frame of yourself within the still-moving video feed for when you need to do something off-camera. To finish things off, you can even drop some 3D-aware augmented reality on your viewers. It’s all a little unrefined at the moment, but the ideas are there and well worth seeing. Jump past the break to do just that.
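The speaker-only blur is possible because the Kinect delivers a depth value for every pixel, so “foreground” and “background” can be separated with a simple distance cutoff. Here is a minimal sketch of that depth-keyed blur on a 1-D scanline of pixels; the function names, values, and cutoff are my own toy illustration, not the MIT Media Lab code.

```python
# Toy sketch of the depth-keyed blur idea: with a per-pixel depth map (as
# the Kinect provides), pixels beyond a threshold are treated as background
# and blurred, while nearby pixels (the speaker) stay sharp. Real systems
# operate on full 2-D frames; this demo works on a 1-D "scanline".

def box_blur(row, radius=1):
    """Simple box blur over a 1-D list of pixel intensities."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def depth_keyed_blur(colors, depths, cutoff):
    """Keep pixels closer than `cutoff` sharp; replace the rest with blur."""
    blurred = box_blur(colors)
    return [c if d < cutoff else b
            for c, d, b in zip(colors, depths, blurred)]

colors = [10, 200, 220, 210, 30, 40]          # pixel intensities
depths = [3.0, 1.0, 1.1, 1.0, 3.2, 3.1]       # metres from the sensor
print(depth_keyed_blur(colors, depths, cutoff=2.0))
```

The three “near” pixels (the speaker, at about one metre) come through untouched, while the far pixels are smeared by the blur.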
Capturing an object in three dimensions doesn’t require the budget of Avatar; a new cell phone app developed by Microsoft researchers can be enough. The software uses overlapping snapshots to build a photo-realistic 3-D model that can be spun around and viewed from any angle.
Another company providing a similar solution is 3Dmedia.
Bruce Sterling published the foreword of “The Insider’s Guide to Mobile”, the new book by Raimo van der Klein, CEO of Layar, the company developing the augmented reality browser of the same name.
In it, he speaks about three years of developing the AR browser.
From the foreword:
Basically, we have now entered the third wave of mobile. First was Communication, second was Content, and now the third is Context. We are barely scratching the surface of this third wave. Context is restructuring mobile services so that they utilise contextual datapoints to optimize the service experience for users. Contextual datapoints are, for example, location, proximity to objects, proximity to friends, the user’s viewing angle, the current time, your direct surroundings and much more.
It’s an augmented-reality, OCR-capable translation app, but that’s a poor description. A better one would be “magic.” Word Lens looks at any printed text through the iPhone’s camera, reads it, and translates between Spanish and English. That’s pretty impressive already, and it does it in real time, but it also matches the color, font and perspective of the text and remaps it onto the image. It’s as if the world itself has been translated.
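Stripped of the rendering magic, the pipeline is: recognize text in the frame, translate it, then paint the translation back into the original bounding box. Here is a minimal toy sketch of that flow; the tiny Spanish-to-English lexicon and the `(text, bbox)` tuple are my own stand-ins for real OCR output, not Word Lens internals.

```python
# Toy sketch of the Word Lens pipeline: detect text, translate it, then put
# the translation back where the original text was. Real OCR, translation,
# and perspective-correct font rendering are far more involved; this demo
# uses a tiny Spanish->English dictionary and a (text, bounding_box) tuple.

LEXICON = {"salida": "exit", "peligro": "danger", "abierto": "open"}

def translate_region(region):
    """region: (text, bbox) as a toy stand-in for one OCR detection."""
    text, bbox = region
    # Word-by-word lookup; unknown words pass through unchanged.
    translated = " ".join(LEXICON.get(w, w) for w in text.lower().split())
    # The real app re-renders the translation into bbox with the original
    # font, colour, and perspective; here we simply return the pair.
    return (translated, bbox)

print(translate_region(("SALIDA", (40, 10, 120, 30))))
```

Running this at camera frame rate, region by region, is what creates the illusion that the sign itself was printed in English.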