A nice realtime 3D visualization of a kitchen for OS X and Windows.
At re:publica 2013, the Berlin-based German data designers of OpenDataCity created a WiFi tracking network with 100 access points that allowed them to visualize the movements of about 6,700 different electronic devices during the conference.
The application, called re:log, is a dynamic map of the conference location that shows the approximate positions of the devices while they were connected to the local WiFi hotspots. An interactive timeline underneath allows users to explore the dynamic changes over time, while a rectangular area can be drawn to highlight and follow a smaller number of dots more specifically.
The visualization was based on tracking the MAC addresses of the devices according to the WiFi hotspot they were connected to. This data, which can be downloaded, was fully anonymized, yet the authors mention their desire to allow people to look up their own MAC address in the future.
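The anonymization step could look something like the following minimal sketch: a salted hash that replaces each MAC address with a stable pseudonym. This illustrates the general technique only, not OpenDataCity's actual pipeline; all names and the salt value are hypothetical.

```python
import hashlib

def anonymize_mac(mac: str, salt: str) -> str:
    """Replace a MAC address with a salted hash, so a device's movements
    can still be followed without exposing the hardware address."""
    normalized = mac.lower().replace("-", ":")
    digest = hashlib.sha256((salt + normalized).encode()).hexdigest()
    return digest[:12]  # shortened pseudonym

# The same device always maps to the same pseudonym ...
a = anonymize_mac("AA:BB:CC:DD:EE:FF", salt="republica13")
b = anonymize_mac("aa-bb-cc-dd-ee-ff", salt="republica13")
# ... while a different (secret) salt yields an unlinkable pseudonym,
# which is what would let someone look up only their own address later.
c = anonymize_mac("AA:BB:CC:DD:EE:FF", salt="other-event")
```

Keeping the salt secret is what prevents anyone from brute-forcing the fairly small MAC address space back to real devices.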
Beautiful dance performance augmented with 3D graphics.
internet-map.net shows a zoomable map of the internet based on Alexa traffic measurements. The Internet map is a two-dimensional presentation of links between websites: every site is a circle on the map, with its size determined by website traffic; the larger the amount of traffic, the bigger the circle. Users switching between websites form links, and the stronger the link, the closer the websites tend to arrange themselves to each other.
The data it is based on is a snapshot of the global network as of the end of 2011 (however, balloons show current statistics from Alexa). It encompasses over 350 thousand websites from 196 countries and all domain zones. Information about more than 2 million links between the websites has joined some of them together into topical clusters. As one might have expected, the largest clusters are formed by national websites, i.e. sites belonging to one country. For the sake of convenience, all websites belonging to a certain country carry the same color.
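The layout principle described above, circle size scaling with traffic and stronger links pulling sites closer together, can be sketched with a naive force-directed layout. The data and parameters below are made up for illustration; this is not the actual algorithm behind internet-map.net.

```python
import math
import random

# Toy data: site -> monthly visits, and weighted links between sites.
traffic = {"a.com": 1_000_000, "b.com": 250_000, "c.com": 50_000}
links = {("a.com", "b.com"): 0.9, ("b.com", "c.com"): 0.2}

def radius(visits, scale=0.01):
    # Circle area proportional to traffic => radius ~ sqrt(visits).
    return scale * math.sqrt(visits)

random.seed(1)
pos = {site: [random.random(), random.random()] for site in traffic}

def step(pos, links, attraction=0.05, repulsion=0.001):
    """One iteration of a naive force layout: linked sites pull together
    in proportion to link strength, and all sites weakly repel."""
    for (u, v), w in links.items():
        dx = pos[v][0] - pos[u][0]
        dy = pos[v][1] - pos[u][1]
        pos[u][0] += attraction * w * dx
        pos[u][1] += attraction * w * dy
        pos[v][0] -= attraction * w * dx
        pos[v][1] -= attraction * w * dy
    sites = list(pos)
    for i, u in enumerate(sites):
        for v in sites[i + 1:]:
            dx = pos[v][0] - pos[u][0]
            dy = pos[v][1] - pos[u][1]
            d2 = dx * dx + dy * dy + 1e-9
            pos[u][0] -= repulsion * dx / d2
            pos[u][1] -= repulsion * dy / d2
            pos[v][0] += repulsion * dx / d2
            pos[v][1] += repulsion * dy / d2

for _ in range(200):
    step(pos, links)
```

After a few hundred iterations the strongly linked pair ends up noticeably closer together than the weakly linked one, which is exactly the "stronger link, closer circles" effect the map exploits.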
The V Motion Project is a visually powerful Kinect-based musical “instrument” that was developed by multiple artists for a marketing campaign.
On the technical side, they found a very creative, steampunk-like solution to the problem of multiple Kinects interfering with each other:
Matt Tizard found a white paper and video that explained an ingenious solution: wiggle the cameras. That’s it! Normally, the Kinect projects a pattern of infrared dots into space. An infrared sensor looks to see how this pattern has been distorted, and thus the shape of any objects in front of it. When you’ve got two cameras, they get confused when they see each other’s dots. If you wiggle one of the cameras, it sees its own dots as normal but the other camera’s dots are blurred streaks it can ignore. Paul built a little battery operated wiggling device from a model car kit, and then our Kinects were the best of friends.
A Music Hack Day event in Boston has yielded a funny little web app: The Infinite Jukebox creates an infinitely long and ever-changing version of uploaded tracks and visualizes the process.
It also shows how today's chart music is copy-and-pasted together in the studio, as the app performs really well on such material, meaning you can't really hear the cuts. On older tracks, where larger portions of the song are recorded in one take, it doesn't work that well.
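The core idea behind such an endless player, keep playing beats in order but occasionally jump to another beat that sounds nearly the same, can be sketched like this. The beat indices and similarity table are made up for illustration and are not the app's actual analysis data.

```python
import random

# Hypothetical beat-similarity data: for each beat index, other beats
# that sound nearly identical (e.g. verse 1 matching verse 2).
similar = {0: [16], 4: [20], 8: [24], 16: [0], 20: [4], 24: [8]}
n_beats = 32

def infinite_walk(n_beats, similar, steps, jump_prob=0.3, seed=42):
    """Play beats sequentially, but with some probability jump to a
    similar-sounding beat, so the track never reaches a real ending
    and never repeats exactly."""
    rng = random.Random(seed)
    beat, played = 0, []
    for _ in range(steps):
        played.append(beat)
        if beat in similar and rng.random() < jump_prob:
            beat = rng.choice(similar[beat])  # seamless jump
        else:
            beat = (beat + 1) % n_beats       # otherwise: next beat
    return played

sequence = infinite_walk(n_beats, similar, steps=100)
```

This also hints at why heavily edited chart tracks work best: studio copy-and-paste produces many near-identical beats, so the jump graph is dense and the cuts are inaudible.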
If you want to try it: at the moment it only seems to work with Chrome and Safari, not with Firefox.
Custom gesture recognition software based on two Kinects.
A researcher at MIT’s Center for Future Civic Media has developed software that lets anyone quickly stitch together random displays to form what he calls a Junkyard Jumbotron. Using the Junkyard Jumbotron, groups of people can more easily view data, graphics or other information on what is essentially a larger virtual display.
To create the virtual display, a user goes to the Junkyard Jumbotron creation website to receive a unique URL. This URL is then entered on all the devices that will be used in the virtual display system. Once the URL is entered, each device displays a visual code.
The next step is to take a photo of the ensemble of displays exhibiting the codes. The photo must then be e-mailed or uploaded to the creation website. At this point, software developed by the center analyzes the photo to figure out where all the displays are located.
After this step, any image that the user wants to display is simply e-mailed to the site, and the software automatically slices up that image and places the pieces on the individual devices, forming the larger virtual image. A user can then manipulate the image on any device, zooming and panning across devices.
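The slicing step can be sketched as follows, assuming the photo analysis has already produced a bounding box for each display inside one shared virtual canvas. The display names, coordinates, and function are illustrative, not the project's actual API.

```python
# Hypothetical result of the photo analysis: each device's screen,
# located as (x, y, width, height) inside one shared virtual canvas.
displays = {
    "phone-1":  (0,   0, 320, 480),
    "tablet-1": (320, 0, 480, 480),
}

def slice_image(image_size, displays):
    """Map each display's canvas region to a crop box of the big image,
    so every device shows exactly its slice of the whole picture."""
    img_w, img_h = image_size
    canvas_w = max(x + w for x, y, w, h in displays.values())
    canvas_h = max(y + h for x, y, w, h in displays.values())
    crops = {}
    for name, (x, y, w, h) in displays.items():
        crops[name] = (
            round(x / canvas_w * img_w),
            round(y / canvas_h * img_h),
            round((x + w) / canvas_w * img_w),
            round((y + h) / canvas_h * img_h),
        )
    return crops

crops = slice_image((1600, 960), displays)
# phone-1 gets the left part of the image, tablet-1 the right part:
# crops["phone-1"]  == (0, 0, 640, 960)
# crops["tablet-1"] == (640, 0, 1600, 960)
```

Each crop box could then be cut out of the uploaded image (e.g. with an image library) and sent to the matching device; zooming and panning amounts to recomputing these boxes against a shifted or scaled canvas.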
The source code of this project is available on GitHub.
This video shows how it works: