Great info on different projection techniques for VR video on Paul Bourke's site: paulbourke.net/dome/panodome/
An implementation of a render camera based on this in Blender: blender.org/manual/game_engine/camera/dome.html
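One of the standard mappings covered in panoramic rendering is the equirectangular projection, which maps a view direction to a point in the panorama texture. A minimal sketch of that mapping (the function name and coordinate conventions here are my own illustration, not taken from either page):

```python
import math

def dir_to_equirect(dx, dy, dz):
    """Map a unit view direction to (u, v) texture coordinates
    in an equirectangular panorama, both in the range [0, 1]."""
    lon = math.atan2(dx, dz)                   # yaw around the up axis
    lat = math.asin(max(-1.0, min(1.0, dy)))   # pitch above the horizon
    u = lon / (2.0 * math.pi) + 0.5
    v = 0.5 - lat / math.pi
    return u, v

# Looking straight ahead lands in the centre of the panorama:
print(dir_to_equirect(0.0, 0.0, 1.0))  # (0.5, 0.5)
```

Dome (fisheye) projections differ mainly in that they map the angle from the view axis to a radius instead of splitting into longitude and latitude.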
Schematics of the discrete part of a sensor node for the Valve/HTC Vive Lighthouse tracking system, courtesy of Alan Yates. More info in this thread on reddit: reddit.com/r/Vive/comments/465lqw/lighthouse_sensor_module_designs/, in Alan Yates' posts here: reddit.com/user/vk2zay, and in this video of him talking about it at Maker Faire:
Also interesting is the embedded.fm podcast episode with Alan Yates talking about Lighthouse development and the science behind it in general: http://embedded.fm/episodes/162
For an in-depth analysis of the Lighthouse system, read Oliver Kreylos' posts here: doc-ok.org
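The core idea those analyses describe is simple timing math: each base station emits an omnidirectional sync flash and then sweeps a laser plane across the room with a rotor, so the delay between flash and laser hit encodes the angle at which the sensor sits. A minimal sketch of that conversion, assuming an idealized single base station with a 60 Hz rotor and perfect timestamps (all numbers illustrative):

```python
# One full rotor revolution at 60 Hz (idealized assumption).
SWEEP_PERIOD = 1.0 / 60.0

def sweep_angle_deg(t_sync, t_hit):
    """Angle of the sweeping laser plane when it crossed the sensor,
    measured from the rotor position at the sync flash (degrees)."""
    dt = t_hit - t_sync
    return (dt / SWEEP_PERIOD) * 360.0

# A laser hit 1/240 s after the sync flash is a quarter turn:
print(sweep_angle_deg(0.0, 1.0 / 240.0))  # 90.0
```

Two such angles per base station (horizontal and vertical sweeps) define a ray from the station to the sensor; combining rays from multiple sensors or stations yields the full pose.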
Someone made a board from the schematics. The data is on GitHub: github.com/pdaderko/lighthouse_sensor/tree/master/hardware
Last but not least: Valve started a royalty-free licensing program that lets third parties use Lighthouse technology in their own products. Licensees need to pay $2,975 to attend a training course, but beyond that there are no licensing fees or royalties for using the tech.
Valve provides a Lighthouse ‘Licensee Dev Kit’ to companies who apply to use the technology. It includes:
The VR hardware market exploded after Facebook deemed VR worth investing in with its $2 billion acquisition of Oculus in 2014. A lot of serious players entered the game, and Q1/2016 is going to be the culmination of this. All the major players, Oculus, HTC/Valve, Sony, and Microsoft with its AR headset, have announced the public release of their consumer products for the beginning of next year. The most interesting thing to watch, though, is still what Magic Leap can come up with.
I always thought that the real magic of VR and AR technology doesn't lie in entertainment but in its great potential as a tool for engineering and creation in general. So far only one company, HTC, reflects this idea in its marketing leading up to the big battle of the platforms next year. The following video is an example of this.
castAR, a Kickstarter campaign by technicalillusions that recently reached its $400K funding goal, is developing a new pair of AR glasses. Two micro projectors, one for each eye, are mounted onto a glasses frame and project either onto a retroreflective surface or onto a clip-on in front of the glasses. The projectors run at 120 Hz. In addition, a camera integrated into the glasses tracks a set of IR LEDs for position tracking. Overall they are exploring many different approaches to AR and 3D user input. Have a look at the video for a detailed explanation.
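The camera-watching-LEDs approach to position tracking boils down to pinhole-camera geometry: LEDs with a known physical spacing appear closer together in the image the farther away they are. A toy sketch of the range estimate (every number and name here is made up for illustration; the real system solves a full pose, not just distance):

```python
def distance_from_led_pair(focal_px, led_spacing_m, pixel_spacing_px):
    """Pinhole-camera range estimate: distance = f * B / d, where f is
    the focal length in pixels, B the real LED spacing in metres, and
    d the apparent spacing of the two LEDs in the image in pixels."""
    return focal_px * led_spacing_m / pixel_spacing_px

# Hypothetical numbers: 800 px focal length, LEDs 0.2 m apart,
# imaged 80 px apart -> the LED pair is about 2 m from the camera.
print(distance_from_led_pair(800.0, 0.2, 80.0))  # 2.0
```

With four or more LEDs at known positions, the same geometry generalizes to a full 6-DOF pose estimate (the classic perspective-n-point problem).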
They improved on the Microsoft implementation with their algorithm, KinFu Large Scale, which can scan multiple volumes in one pass, allowing larger scenes to be captured in one go.
The Point Cloud Library (PCL) is available as prebuilt binaries for Linux, Windows, and OS X, as well as in source code from their svn repository. The code relies heavily on the NVIDIA CUDA development libraries for GPU optimization and will require a compatible GPU for best results. Information on how to set up your own build environment and the required dependencies is available on their site.
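At the heart of KinectFusion-style pipelines like KinFu is volumetric integration: every depth frame is fused into a voxel grid storing a truncated signed distance function (TSDF) as a running weighted average. A minimal per-voxel sketch of that update, assuming illustrative truncation and weight-cap values (not PCL's actual API or defaults):

```python
def fuse_tsdf(tsdf, weight, sdf_obs, trunc=0.1, max_weight=64.0):
    """Fuse one new signed-distance observation into a voxel.
    tsdf/weight: the voxel's current averaged distance and weight.
    sdf_obs: signed distance to the surface seen by the new frame."""
    d = max(-trunc, min(trunc, sdf_obs))   # truncate the observation
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    new_weight = min(weight + 1.0, max_weight)  # cap to stay adaptive
    return new_tsdf, new_weight

# First observation sets the value, later ones average in:
v, w = fuse_tsdf(0.0, 0.0, 0.05)
v, w = fuse_tsdf(v, w, 0.03)
print(round(v, 3), w)  # 0.04 2.0
```

The surface is then extracted as the zero crossing of the averaged distances; "large scale" variants shift or chain multiple such volumes so the scan is not limited to one fixed cube.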
A VR HMD currently in development.
Custom gesture-recognition software based on two Kinects.
A researcher at MIT’s Center for Future Civic Media has developed software that lets anyone quickly stitch together random displays to form what he calls a Junkyard Jumbotron. Using the Junkyard Jumbotron, groups of people can more easily view data, graphics or other information on what is essentially a larger virtual display.
To create the virtual display, a user goes to the Junkyard Jumbotron creation Website to receive a unique URL. This URL is then entered on all the devices that will be used in the virtual display system. Once the URL is entered, each device will display a visual code.
The next step is to take a photo of the ensemble of displays exhibiting the codes. The photo must then be e-mailed or uploaded to the creation Website. At this point, software developed by the center analyzes the photo to figure out where all the displays are located.
After this step, any image that the user desires to display is simply e-mailed to the site, and the software automatically slices up that image and places the pieces on the individual devices. This forms the larger virtual image. A user can then manipulate the image from any device, zooming and panning across devices.
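The slicing step above can be sketched in a few lines. This toy version treats the image as a 2D grid of pixel values and each detected display as an axis-aligned rectangle (the real software also handles rotated and perspective-distorted screens; all names here are illustrative):

```python
def slice_image(image, displays):
    """Cut one large image into per-device tiles.
    image: 2D list of pixel values (list of rows).
    displays: dict mapping device name -> (x, y, w, h), the rectangle
    the photo-analysis step found for that screen (axis-aligned here)."""
    tiles = {}
    for name, (x, y, w, h) in displays.items():
        tiles[name] = [row[x:x + w] for row in image[y:y + h]]
    return tiles

# A 4x4 "image" split across two 2x4 screens sitting side by side:
image = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = slice_image(image, {"left": (0, 0, 2, 4), "right": (2, 0, 2, 4)})
print(tiles["right"][0])  # [2, 3]
```

Panning or zooming then just means re-running the slice with shifted or scaled rectangles and pushing the new tiles to the devices.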
The source code of this project is available on GitHub.
This video shows how it works:
A nice 360° Video show-reel by the Australian company pixelcase.
Click the images to view the videos.