The VR hardware market exploded after Facebook signaled that VR was worth investing in with its $2 billion acquisition of Oculus in 2014. A lot of serious players have entered the game since, and Q1 2016 is going to be the culmination of this: all of the major players (Oculus, HTC/Valve, Sony, and Microsoft with its AR headset) have announced public releases of their consumer products for the beginning of next year. What still seems most interesting, though, is what Magic Leap can come up with.
I always thought that the real magic of VR and AR technology doesn’t lie in entertainment but in its great potential as a tool for engineering and creation in general. So far only one company, HTC, reflects this idea in its marketing leading up to next year's big battle of the platforms. The following video is one example of this.
The British all-terrain vehicle maker Land Rover has found a practical application for Augmented Reality: a driver-assist system that creates a virtually transparent bonnet/hood.
Several cameras in the grille capture the ground below the front of the car. The combined images from these cameras are projected onto a head-up display in the lower half of the windshield, allowing the driver to “look through” the bonnet/hood and engine block at the ground below.
The application is supposed to assist the driver in difficult situations, for example on steep climbs or when maneuvering in confined spaces. Additionally, it displays important vehicle data, including speed, incline, roll angle, steering position, and drive mode.
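Land Rover hasn't published any implementation details, but the basic compositing step, blending the warped ground-camera image into the HUD region of the driver's view, can be sketched in a few lines of NumPy. All names and numbers here are illustrative, not Land Rover's:

```python
import numpy as np

def composite_transparent_bonnet(scene, ground, hud_region, alpha=0.5):
    """Blend a ground-camera image into the HUD region of the scene.

    scene      -- H x W x 3 view through the windshield
    ground     -- ground image from the grille cameras, already warped
                  to the HUD region's size
    hud_region -- (top, left) offset of the HUD within the scene
    alpha      -- opacity of the ground overlay (1.0 = fully opaque)
    """
    out = scene.astype(np.float32).copy()
    t, l = hud_region
    h, w = ground.shape[:2]
    patch = out[t:t + h, l:l + w]
    out[t:t + h, l:l + w] = alpha * ground + (1 - alpha) * patch
    return out.astype(np.uint8)

# Toy frames: a dark scene with a bright "ground" view blended into
# the lower quarter of the windshield.
scene = np.full((120, 160, 3), 40, np.uint8)
ground = np.full((40, 160, 3), 200, np.uint8)
result = composite_transparent_bonnet(scene, ground, hud_region=(80, 0))
```

The semi-transparent blend (rather than a fully opaque overlay) is what keeps the bonnet visible as a reference frame while still showing the terrain underneath.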
The concept study, dubbed “Discovery Vision Concept”, will be shown at the New York International Auto Show.
I remember thinking, when the Kinect was first released and later when the first 3D AR demos leveraging KinectFusion-style algorithms became public, that this would be a very interesting thing to have on a mobile platform.
Now Google has made its Project Tango public, which promises to deliver exactly this.
Their prototype, a 5″ phone, is capable of tracking full 3D motion while simultaneously creating a 3D map of its environment. It runs Android, and APIs provide position, orientation, and depth data to standard Android applications written in Java or C/C++, as well as to the Unity game engine.
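Google hasn't released code alongside the announcement, but the core step that combines those two data streams, turning per-frame depth points into a consistent world-space map by applying the tracked device pose, can be sketched as a plain rigid-body transform. The function and variable names below are my own, not Tango's:

```python
import numpy as np

def depth_to_world(points_cam, pose_rotation, pose_translation):
    """Transform camera-space depth points into world space using the
    tracked device pose.

    points_cam       -- N x 3 array of 3D points in the camera frame
    pose_rotation    -- 3 x 3 rotation of the device in the world frame
    pose_translation -- 3-vector position of the device in the world frame
    """
    return points_cam @ pose_rotation.T + pose_translation

# Toy example: the device has moved 1 m along x with no rotation,
# and sees a point 2 m straight ahead.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
pts = np.array([[0.0, 0.0, 2.0]])
world = depth_to_world(pts, R, t)
```

Accumulating these transformed points frame after frame is what lets a moving device build up a 3D map of a whole room.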
Algorithms and APIs are still in active development. You can apply to a developer program to receive one of the 200 prototype devices currently available. They expect to distribute all of their available units by March 14th, 2014.
castAR, a Kickstarter campaign by Technical Illusions that recently reached its $400K funding goal, is developing a new pair of AR glasses. Two micro projectors, one for each eye, are mounted onto a glasses frame and project either onto a retro-reflective surface or onto a clip-on in front of the glasses. The projectors run at 120 Hz. In addition, a camera integrated into the glasses tracks a set of IR LEDs for position tracking. Overall, they are exploring many different approaches to AR and 3D user input. Have a look at the video for an in-depth explanation.
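Technical Illusions hasn't detailed its tracking math, but the basic idea behind LED-based position tracking is easy to sketch: with a pinhole camera model, the known physical separation of two LEDs and their observed pixel separation give you depth, and the image position of their midpoint then back-projects to a 3D position. Everything below (names, focal length, LED spacing) is illustrative:

```python
import numpy as np

def position_from_led_pair(px_a, px_b, led_separation_m, focal_px, principal_pt):
    """Estimate 3D position from two tracked IR LED image points.

    px_a, px_b       -- (u, v) pixel coordinates of the two LEDs
    led_separation_m -- known physical distance between the LEDs
    focal_px         -- camera focal length in pixels
    principal_pt     -- (u, v) image center of the camera
    """
    px_a, px_b = np.asarray(px_a, float), np.asarray(px_b, float)
    pixel_dist = np.linalg.norm(px_a - px_b)
    z = focal_px * led_separation_m / pixel_dist       # depth in metres
    center_px = (px_a + px_b) / 2 - np.asarray(principal_pt, float)
    x, y = center_px * z / focal_px                    # back-project midpoint
    return np.array([x, y, z])

# Toy example: a 10 cm LED bar, centered in the image, seen 100 px
# apart by a camera with a 500 px focal length.
pos = position_from_led_pair((270, 240), (370, 240), 0.10, 500.0, (320, 240))
```

Two LEDs only pin down position; recovering full orientation as well takes three or more non-collinear LEDs and a perspective-n-point solve, which is presumably closer to what the glasses actually do.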
The preliminary source code is currently available in their SVN repository. The code relies heavily on the NVIDIA CUDA development libraries for GPU optimizations and will require a compatible GPU for best results.
Besides the Kinect, the library supports several other sensors. Moving forward, the developers want to continue to refine and improve the system, and they hope to improve on the original algorithm in order to model larger-scale environments. The code is still beta; a stable release is planned to coincide with the upcoming PCL 2.0 release.
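The real code does all of this per frame on the GPU with a tracked camera pose, but the data structure at the heart of KinectFusion, a truncated signed distance field (TSDF) updated as a running weighted average, can be sketched in plain NumPy. This is a much-simplified illustration (camera fixed at the origin, no pose), not the PCL implementation:

```python
import numpy as np

def update_tsdf(tsdf, weights, voxel_centers, depth, fx, fy, cx, cy, trunc=0.05):
    """One simplified TSDF integration step: project each voxel into the
    depth image, compare the measured depth to the voxel's own depth,
    truncate the difference, and fold it into a running weighted average."""
    X, Y, Z = voxel_centers[:, 0], voxel_centers[:, 1], voxel_centers[:, 2]
    u = np.round(fx * X / Z + cx).astype(int)
    v = np.round(fy * Y / Z + cy).astype(int)
    h, w = depth.shape
    valid = (Z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    sdf = depth[v[valid], u[valid]] - Z[valid]      # signed distance to surface
    tsdf_obs = np.clip(sdf / trunc, -1.0, 1.0)      # truncate to [-1, 1]
    idx = np.where(valid)[0]
    keep = sdf > -trunc                             # skip voxels far behind the surface
    idx = idx[keep]
    tsdf[idx] = (tsdf[idx] * weights[idx] + tsdf_obs[keep]) / (weights[idx] + 1)
    weights[idx] += 1
    return tsdf, weights

# Toy example: three voxels along the optical axis, a flat wall at 1 m.
centers = np.array([[0.0, 0.0, z] for z in (0.9, 1.0, 1.1)])
tsdf, weights = np.zeros(3), np.zeros(3)
depth = np.full((10, 10), 1.0)
tsdf, weights = update_tsdf(tsdf, weights, centers, depth, 10.0, 10.0, 5, 5)
```

Averaging truncated distances over many noisy depth frames is what makes the reconstructed surface (the TSDF's zero crossing) so much smoother than any single Kinect frame.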
I’m definitely looking forward to what the Kinect community is going to do with that.
Maybe you remember that some months ago Microsoft Research published information about KinectFusion, an application that allowed a 3D scene to be digitized incrementally with the Kinect. Until now they haven’t made it available to the public. Now some engineers have reproduced the functionality and made it available for non-commercial use.
ReconstructMe works on top of the OpenNI framework and thus can also use the Asus Xtion sensor. The application uses the GPU to generate the 3D data. If your GPU isn’t powerful enough, you can still process recorded data, just not in real time. The generated 3D scene can be exported in the STL and OBJ formats.
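To give a sense of how simple the OBJ side of that export is, here is a minimal sketch of a Wavefront OBJ writer (my own illustration, not ReconstructMe's code): the format is plain text, one `v x y z` line per vertex and one `f i j k` line per triangle, with 1-based vertex indices.

```python
def export_obj(vertices, faces):
    """Serialize a triangle mesh to Wavefront OBJ text.

    vertices -- iterable of (x, y, z) tuples
    faces    -- iterable of (a, b, c) 0-based vertex index triples
    """
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    # OBJ face indices are 1-based, hence the + 1.
    lines += [f"f {a + 1} {b + 1} {c + 1}" for a, b, c in faces]
    return "\n".join(lines) + "\n"

# A single triangle.
obj = export_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

STL is almost as simple (per-triangle vertices plus a normal), which is why both formats are the default interchange choice for 3D-printing pipelines.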