With the SpaceX Dragon capsule that was recently berthed to the ISS (the CRS-3 resupply flight), NASA sent the HDEV (High Definition Earth Viewing) experiment to space. It consists of four off-the-shelf HD video cameras in a common housing together with a video encoder and router.
Today the ISS's robotic arm extracted the box from the Dragon's unpressurized cargo hold and mounted it outside the Columbus module.
Live stream showing video from all four cameras in a predefined sequence.
The four cameras are mounted so that one points forward along the station's velocity vector, two point aft and one points down towards the Earth.
The housing insulates the cameras from the extreme temperatures and vacuum of space but provides no significant shielding against radiation. That is on purpose: the main goal of the experiment is to find out how non-radiation-hardened cameras, and especially their sensors, fare in this environment.
The video signal is encoded into an H.264 stream for the downlink and broadcast live on a Ustream channel. The stream shows all four installed cameras in a preprogrammed sequence.
This is an interesting contribution to the race towards live satellite maps that several companies are now engaged in, as the ability to use off-the-shelf cameras would significantly reduce the cost of such ventures.
Researchers at ETH Zurich, Switzerland, have developed "Cubli", a 15 × 15 × 15 cm cube that can jump up and balance on its corner. Reaction wheels mounted on three faces of the cube rotate at high angular velocities and then brake suddenly, causing the Cubli to jump up. Once the Cubli has almost reached the corner stand-up position, controlled motor torques are applied to make it balance on its corner. In addition to balancing, the motor torques can also be used to achieve a controlled fall, such that the Cubli can be commanded to fall in any arbitrary direction. Combining these three abilities — jumping up, balancing, and controlled falling — the Cubli is able to 'walk'.
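The balancing part is a classic reaction-wheel inverted pendulum: gravity torque tips the body over, and the motor torque applied to the wheel reacts back on the body to counter it. Here is a minimal 1-D simulation sketch with made-up parameters and a simple PD controller; the real Cubli uses full state feedback over body and wheel states, so this only illustrates the principle:

```python
import math

# 1-D reaction-wheel pendulum sketch; all parameters are assumed, not Cubli's.
M_G_L = 0.6    # m*g*l, gravity torque coefficient (N*m)
I_BODY = 0.01  # body inertia about the pivot edge (kg*m^2)
DT = 0.001     # integration step (s)

def simulate(theta0, steps=5000, kp=2.0, kd=0.2):
    """Drive the body angle theta to 0 (corner stand) with a PD motor torque."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        tau = kp * theta + kd * omega                     # wheel torque, reacts on the body
        alpha = (M_G_L * math.sin(theta) - tau) / I_BODY  # body angular acceleration
        omega += alpha * DT                               # semi-implicit Euler step
        theta += omega * DT
    return theta

# starting 0.2 rad away from the balance point, theta decays back towards zero
final_angle = simulate(0.2)
```

The controller is stabilizing as long as the proportional gain exceeds the linearized gravity term (here kp = 2.0 > m·g·l = 0.6).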
At re:publica 2013, the Berlin-based German data designers from OpenDataCity set up a WiFi tracking network of 100 access points that allowed them to visualize the movements of about 6,700 different electronic devices during the conference.
The application, called re:log, is a dynamic map of the conference venue that shows the approximate locations of the devices while they were connected to the local WiFi hotspots. An interactive timeline underneath lets you explore the changes over time, while a rectangular area can be drawn on the map to highlight and follow a smaller number of dots.
The visualization is based on tracking the MAC addresses of the devices according to the WiFi hotspot they were connected to. This data, which can be downloaded, was fully anonymized, yet the authors mention their plan to let people look up their own MAC address in the future.
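For illustration, pseudonymizing MAC addresses before publishing such a data set can be done with a keyed hash: each MAC maps to a stable token, so a device's movements stay linkable without exposing the address. This is a hypothetical sketch, not necessarily how OpenDataCity anonymized their data:

```python
import hashlib
import hmac

# Assumed secret kept by the organizers and never published; without it,
# tokens cannot be reversed back to MAC addresses by brute force.
SECRET = b"conference-secret-not-published"

def anonymize(mac: str) -> str:
    """Map a MAC address to a stable 16-hex-digit pseudonym."""
    canonical = mac.lower().replace("-", ":")  # normalize notation first
    return hmac.new(SECRET, canonical.encode(), hashlib.sha256).hexdigest()[:16]

# the same device always yields the same token, regardless of notation
token = anonymize("AA:BB:CC:DD:EE:FF")
```

Looking up "your own MAC address" later, as the authors propose, would then just mean hashing the submitted address with the same key and searching the data set for the token.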
I suspect the solution used is based on MagicMap, a free WiFi/Bluetooth tracking architecture developed at the Humboldt University of Berlin. Their wiki has some more information.
Just recently I stumbled upon a video of this amazing kinetic installation called Hyper-Matrix. It was created for the Hyundai Motor Group Exhibition Pavilion at the 2012 EXPO in Korea. The installation consists of a specially made, huge steel construction supporting thousands of stepper motors that control 320 × 320 mm foam cubes projecting out of the internal facade of the building. The cubes are mounted on actuators that the steppers move forward and back, creating patterns across the three-sided display.
Comprised of what at first appear to be three blank white walls, the Hyper-Matrix installation quickly comes to life as thousands of individual cubic units forming a field of pixels begin to move, pulsate, and form dynamic images across the room, creating an infinite number of possibilities in the vertical, 180-degree landscape. In addition, as the boxes are arranged at intervals of only 5 mm, the wall can also serve as a moving screen for images projected onto it.
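The core mapping from an input image to the actuator grid is conceptually simple: each cube's extension follows the brightness of its pixel. A hypothetical sketch of that mapping (the step count and scaling are assumed, not taken from the installation):

```python
# Assumed travel of each stepper-driven cube, in motor steps.
MAX_STEPS = 200

def frame_to_depths(frame):
    """Map a 2-D grid of brightness values (0..255) to target step counts.

    Brighter pixels push their cube further out of the wall.
    """
    return [[round(px / 255 * MAX_STEPS) for px in row] for row in frame]

# a tiny 2x3 'frame': black cubes stay flush, white cubes extend fully
frame = [[0, 128, 255],
         [255, 128, 0]]
depths = frame_to_depths(frame)
```

Animating the wall then just means streaming successive frames through this mapping and letting each stepper move to its new target position.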
Quite a few very large images have been produced and published, but this 150-gigapixel panorama of Tokyo definitely stands out with its amazing detail and technical perfection. I have yet to find a significant stitching error. Captured from the top of the Tokyo Tower, it is 600,000 pixels wide and lets you zoom into an amazingly detailed Tokyo frozen in time. So detailed that Wired even created a photo essay with it.
Make sure to use a WebGL-capable browser for smooth performance.
Here is a video with a “fly through” of the image.
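Viewers for images of this size typically serve a multi-resolution tile pyramid rather than the full image. A quick back-of-the-envelope calculation from the published dimensions, assuming the common 256-pixel tile size:

```python
import math

# Published figures: 150 gigapixels total, 600,000 px wide.
WIDTH = 600_000
TOTAL_PIXELS = 150e9
HEIGHT = round(TOTAL_PIXELS / WIDTH)  # implied height: ~250,000 px

TILE = 256  # assumed tile edge length, the usual choice for deep-zoom viewers

# each zoom level halves the resolution until everything fits in one tile
levels = math.ceil(math.log2(max(WIDTH, HEIGHT) / TILE)) + 1

# tile count at full resolution: millions of small JPEGs
full_res_tiles = math.ceil(WIDTH / TILE) * math.ceil(HEIGHT / TILE)
```

So the viewer only ever fetches the handful of 256 px tiles visible at the current zoom level, which is why a 150-gigapixel image can pan smoothly in a browser.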
internet-map.net shows a zoomable map of the internet based on Alexa traffic measurements. The Internet map is a two-dimensional presentation of links between websites: every site is a circle on the map, with its size determined by website traffic; the larger the amount of traffic, the bigger the circle. Users' switching between websites forms links, and the stronger the link, the closer the websites tend to arrange themselves to each other.
The data it is based on is a snapshot of the global network as of the end of 2011 (however, the balloons show current statistics from Alexa). It encompasses over 350 thousand websites from 196 countries and all domain zones. Information about more than 2 million links between the websites has joined some of them together into topical clusters. As one might expect, the largest clusters are formed by national websites, i.e. sites belonging to one country. For the sake of convenience, all websites belonging to a certain country carry the same color.
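The behavior described above, where linked sites attract while all sites repel, is the classic force-directed layout. A toy sketch of that idea (not internet-map.net's actual algorithm; all constants are assumed):

```python
import math
import random

def layout(n_nodes, edges, steps=500, attract=0.05, repel=0.01):
    """Toy force-directed layout: edges is a list of (i, j, weight) tuples."""
    random.seed(0)  # deterministic start for the sketch
    pos = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(n_nodes)]
    for _ in range(steps):
        force = [[0.0, 0.0] for _ in range(n_nodes)]
        for i in range(n_nodes):               # every pair repels (1/d falloff)
            for j in range(i + 1, n_nodes):
                dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
                d2 = dx * dx + dy * dy + 1e-9
                fx, fy = repel * dx / d2, repel * dy / d2
                force[i][0] += fx; force[i][1] += fy
                force[j][0] -= fx; force[j][1] -= fy
        for i, j, w in edges:                  # linked nodes attract, scaled by weight
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            force[i][0] += attract * w * dx; force[i][1] += attract * w * dy
            force[j][0] -= attract * w * dx; force[j][1] -= attract * w * dy
        for i in range(n_nodes):
            pos[i][0] += force[i][0]; pos[i][1] += force[i][1]
    return pos

# two strongly linked sites plus one unlinked site: the pair ends up close,
# the loner drifts away
pos = layout(3, [(0, 1, 1.0)])
```

Stronger link weights increase the attraction, which is exactly why frequently co-visited sites cluster together on the map.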
They improved on the Microsoft implementation with their algorithm, called KinFu Large Scale, which can scan multiple volumes in one pass, making it possible to capture larger scenes in one go.
The Point Cloud Library (PCL) is available as prebuilt binaries for Linux, Windows and OS X, as well as in source code from their SVN repository. The code relies heavily on the NVIDIA CUDA development libraries for GPU optimizations and requires a compatible GPU for best results. Information on how to set up your own build environment and the required dependencies is available on their site.
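At the heart of KinectFusion-style pipelines like KinFu is TSDF fusion: each voxel keeps a weighted running average of its truncated signed distance to the observed surface, and new depth frames are integrated one at a time. A deliberately tiny 1-D illustration of that update rule; real KinFu runs it per voxel on a 3-D GPU volume, and the truncation distance here is assumed:

```python
# Truncation distance in metres (assumed); distances beyond it are clamped.
TRUNC = 0.1

def fuse(tsdf, weights, voxel_depths, surface_depth):
    """Integrate one depth measurement into a line of voxels along a camera ray."""
    for i, z in enumerate(voxel_depths):
        sdf = surface_depth - z              # signed distance to the observed surface
        if sdf < -TRUNC:                     # far behind the surface: occluded, skip
            continue
        d = max(-TRUNC, min(TRUNC, sdf))     # truncate to [-TRUNC, TRUNC]
        tsdf[i] = (tsdf[i] * weights[i] + d) / (weights[i] + 1)  # running average
        weights[i] += 1

voxels = [0.0] * 5
weights = [0] * 5
depths = [0.92, 0.96, 1.0, 1.04, 1.08]  # voxel positions along one ray (m)
for _ in range(3):                       # three identical depth frames at 1.0 m
    fuse(voxels, weights, depths, 1.0)
# the zero crossing of `voxels` marks the reconstructed surface at 1.0 m
```

Averaging over many noisy frames is what gives these pipelines their smooth reconstructions; the surface itself is later extracted at the TSDF zero crossing (e.g. by ray casting or marching cubes).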
The Texan company Zebra Imaging produces holographic prints that are pretty impressive. Pricing starts at $249 for a 290 × 290 mm monochrome print and $599 for a color print of the same size.
1. Raw 3-D graphic data of almost any kind (including LiDAR, aerial photographs, CAD and CAM) can be used to make a hologram.
2. The model data is processed and rendered by a proprietary rendering engine. Each digital hologram is composed of thousands of hogels (something like a three-dimensional pixel). The model data is broken down into subsets, one for each hogel.
3. A hologram of a 3-D model is formed by recording the interference pattern of two laser beams. One laser beam is encoded with the data using an LCD screen, which then scatters the image onto the recording medium. The second beam serves as a reference beam. The two beams are brought together and interfere on the recording medium (a photopolymer film). Each point in the object acts as a point source of light, and each of these point sources interferes with the reference beam, giving rise to an interference pattern. The pattern of light and dark areas, similar to zebra stripes, is recorded in the photopolymer. This process is repeated for each hogel to build up the entire hologram.
4. After recording and processing the film, the hologram is illuminated by a light source placed in a position similar to that of the reference beam it was recorded with. Each hogel's recorded interference pattern diffracts part of the reference beam to reconstruct the data beam. These individual reconstructed data beams add together to reconstruct the whole 3-D model. The viewer perceives a 3-D image in the reflected light that is identical to the 3-D model data.
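Step 3 boils down to two coherent beams crossing at an angle: the result is a sinusoidal fringe pattern whose spacing follows directly from the wavelength and the angle between the beams. A small sketch of that wave-optics relationship, with an assumed laser wavelength:

```python
import math

# Assumed green laser wavelength in metres; Zebra's actual lasers may differ.
WAVELENGTH = 532e-9

def fringe_spacing(angle_deg):
    """Period of the interference fringes for two beams crossing at angle_deg."""
    return WAVELENGTH / (2 * math.sin(math.radians(angle_deg) / 2))

def intensity(x, angle_deg):
    """Intensity across the recording plane: the bright/dark 'zebra stripes'.

    For two unit-amplitude plane waves, |exp(i*k1*x) + exp(i*k2*x)|^2
    reduces to 2*(1 + cos(delta_k * x)).
    """
    k = 2 * math.pi / fringe_spacing(angle_deg)
    return 2 * (1 + math.cos(k * x))

# at a 30-degree crossing angle the stripes repeat roughly every micrometre
spacing = fringe_spacing(30)
```

This micrometre-scale fringe period is why the pattern must be recorded in a fine-grained medium like a photopolymer film rather than an ordinary photographic print.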
The V Motion Project is a visually powerful Kinect-based musical "instrument" that was developed by multiple artists for a marketing campaign.
On the technical side, they found a very creative, steampunk-like solution to the problem of multiple Kinects interfering with each other:
Matt Tizard found a white paper and video that explained an ingenious solution: wiggle the cameras. That’s it! Normally, the Kinect projects a pattern of infrared dots into space. An infrared sensor looks to see how this pattern has been distorted, and thus the shape of any objects in front of it. When you’ve got two cameras, they get confused when they see each other’s dots. If you wiggle one of the cameras, it sees its own dots as normal but the other camera’s dots are blurred streaks it can ignore. Paul built a little battery operated wiggling device from a model car kit, and then our Kinects were the best of friends.