Lighthouse Sensor Schematics

Basic information on Lighthouse and schematics

Schematics of the discrete part of a sensor node for the Valve/HTC Vive Lighthouse Tracking System, courtesy of Alan Yates. More info can be found in this thread on reddit: in Alan Yates's posts here: and in this video of him talking about it at Maker Faire:

Also of interest is the embedded-fm podcast in which Alan Yates talks about Lighthouse development and the science behind it in general:

In-depth analysis of Lighthouse technology

For an in-depth analysis of the Lighthouse system, read Oliver Kreylos's post here:
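In brief, each Lighthouse base station emits an omnidirectional sync flash followed by a rotating laser sweep, and a sensor on the tracked object recovers its angle within the station's field from the time elapsed between the two events. Here is a minimal sketch of that timing-to-angle conversion, assuming a 60 Hz rotor (the figure commonly cited for the Vive base stations); the function name and constant are illustrative, not from any official SDK:

```python
import math

# Assumed sweep rate: one full rotation per 1/60 s (60 Hz rotor).
ROTOR_PERIOD_S = 1.0 / 60.0

def sweep_angle(sync_time_s: float, hit_time_s: float) -> float:
    """Convert the delay between the sync flash and the laser-sweep hit
    into the sensor's angular position within the sweep, in radians."""
    dt = hit_time_s - sync_time_s
    if not 0.0 <= dt < ROTOR_PERIOD_S:
        raise ValueError("hit must occur within one rotor period of the sync flash")
    # The rotor turns at a constant rate, so angle is proportional to elapsed time.
    return 2.0 * math.pi * dt / ROTOR_PERIOD_S

# A hit a quarter of a rotor period after the sync flash is a 90-degree angle:
angle = sweep_angle(0.0, ROTOR_PERIOD_S / 4.0)
```

Two such angles (one horizontal sweep, one vertical) per base station give each sensor a bearing; with several sensors at known positions on the tracked object, its full pose can then be solved for.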


Someone made a board from the schematics. The design data is on GitHub:


Open-source implementation

There is an open-source implementation of an indoor tracking system that uses the Vive base stations on GitHub:

Official Licensing of Lighthouse Technology

Last but not least: Valve has started a royalty-free licensing program for using Lighthouse technology in third-party products. Licensees need to pay $2,975 to attend a training course, but other than that there are no licensing fees or royalties for using the tech.

Valve provides a Lighthouse ‘Licensee Dev Kit’ to companies that apply to use the technology. It includes:

Dev Kit Contents

  • A modular reference tracked object suitable for attaching to prototype HMDs or other devices
  • Full complement of EVM circuit boards to enable rapid prototyping of your own tracked object
  • 40 individual sensors for building your own tracked object
  • Accessories to enable custom prototypes


  • Software toolkit to assist with optimal sensor placement
  • Calibration tools for prototyping and manufacturing


  • Schematics and layouts for all electronic components
  • Mechanical designs for the reference tracked object and accessories
  • Datasheets for the sensor ASICs

Virtual Reality as a Tool

The VR hardware market exploded after Facebook deemed VR worthy of investment with its $2 billion acquisition of Oculus in 2014. A lot of serious players have entered the game, and Q1 2016 is going to be the culmination of this: all the major players, Oculus, HTC/Valve, Sony, and Microsoft with its AR headset, have announced public releases of their consumer products for the beginning of next year. The most interesting thing to watch, though, is what Magic Leap can come up with.

I have always thought that the real magic of VR and AR technology lies not in entertainment but in its great potential as a tool for engineering and creation in general. So far only one company, HTC, reflects this idea in its marketing leading up to next year's big battle of the platforms. The following video is an example of this.

castAR – AR and VR Glasses Kickstarter

castAR, a Kickstarter campaign by Technical Illusions that recently reached its $400K funding goal, is developing a new pair of AR glasses. Two micro projectors, one for each eye, are mounted onto a glasses frame and project either onto a retro-reflective surface or onto a clip-on in front of the glasses. The projectors run at 120 Hz. In addition, a camera integrated into the glasses tracks a set of IR LEDs for position tracking. Overall, they are exploring many different approaches to AR and 3D user input. Have a look at the video for a detailed explanation.

Open Source Kinect Fusion – Update

There is an update on the open-source implementation of Microsoft's Kinect Fusion algorithm by developers of the open-source Point Cloud Library.

They improved on the Microsoft implementation with their algorithm, called KinFu Large, which can fuse multiple volumes in one pass, allowing larger scenes to be scanned in one go.
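At the core of Kinect Fusion-style pipelines is volumetric integration: each depth frame is fused into a voxel grid that stores a truncated signed distance function (TSDF) as a running weighted average. A minimal sketch of that per-voxel update, with an assumed truncation band of 5 cm (the actual PCL implementation runs this on the GPU over millions of voxels):

```python
# Assumed truncation band in metres; observations are clamped to +/- this value.
TRUNCATION = 0.05

def update_voxel(tsdf: float, weight: float, signed_dist: float) -> tuple[float, float]:
    """Fuse one new signed-distance observation into a voxel as a
    running weighted average, the incremental TSDF update."""
    # Clamp the observation to the truncation band around the surface.
    d = max(-TRUNCATION, min(TRUNCATION, signed_dist))
    new_weight = weight + 1.0
    new_tsdf = (tsdf * weight + d) / new_weight
    return new_tsdf, new_weight

# Fusing two noisy observations of the same surface point:
v, w = update_voxel(0.0, 0.0, 0.03)  # first frame
v, w = update_voxel(v, w, 0.01)      # second frame: average of 0.03 and 0.01
```

The averaging is what makes the reconstruction smoother than any single noisy depth frame; the surface is later extracted where the TSDF crosses zero.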

The Point Cloud Library (PCL) is available as prebuilt binaries for Linux, Windows, and OS X, as well as in source form from their SVN repository. The code relies heavily on the NVIDIA CUDA development libraries for GPU optimization and requires a compatible GPU for best results. Information on how to set up your own build environment and the required dependencies is available on their site.

Besides the Kinect, the library supports several other sensors via OpenNI.

Junkyard Jumbotron

A researcher at MIT’s Center for Future Civic Media has developed software that lets anyone quickly stitch together random displays to form what he calls a Junkyard Jumbotron. Using the Junkyard Jumbotron, groups of people can more easily view data, graphics or other information on what is essentially a larger virtual display.

To create the virtual display, a user goes to the Junkyard Jumbotron creation website to receive a unique URL. This URL is then entered on all the devices that will be used in the virtual display. Once the URL is entered, each device displays a visual code.

The next step is to take a photo of the ensemble of displays exhibiting the codes. The photo is then e-mailed or uploaded to the creation website, where software developed by the center analyzes it to figure out where each display is located.

After this step, any image the user wants to display is simply e-mailed to the site, and the software automatically slices it up and places the pieces on the individual devices, forming the larger virtual image. A user can then manipulate the image from any device, zooming and panning across devices.
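The slicing step above amounts to mapping each display's rectangle, as recovered from the group photo, onto a crop of the source image. A minimal sketch, assuming the photo analysis yields normalized rectangles (the function name and layout data here are hypothetical, not from the project's actual code):

```python
def slice_image(img_w: int, img_h: int, displays: dict) -> dict:
    """For each display, map its normalized rectangle (x, y, w, h in [0, 1],
    as recovered from the photo) to a pixel crop box of the source image."""
    crops = {}
    for name, (x, y, w, h) in displays.items():
        left, top = round(x * img_w), round(y * img_h)
        # Crop box as (left, top, right, bottom) in source-image pixels.
        crops[name] = (left, top, left + round(w * img_w), top + round(h * img_h))
    return crops

# Two phones detected side by side in the photo (hypothetical layout):
crops = slice_image(1600, 900, {
    "phone-a": (0.0, 0.0, 0.5, 1.0),
    "phone-b": (0.5, 0.0, 0.5, 1.0),
})
```

Each device then fetches and shows only its own crop, so together the screens reproduce the full image. The real system additionally corrects for each display's rotation and perspective in the photo.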

The source code of this project is available on GitHub.

This video shows how it works: