Researchers at ETH Zurich in Switzerland have developed “Cubli”, a 15 × 15 × 15 cm cube that can jump up and balance on its corner. Reaction wheels mounted on three faces of the cube spin at high angular velocities and then brake suddenly, causing the Cubli to jump up. Once the Cubli has almost reached the corner stand-up position, controlled motor torques are applied to make it balance on its corner. In addition to balancing, the motor torques can also be used to achieve a controlled fall, so the Cubli can be commanded to fall in any arbitrary direction. Combining these three abilities — jumping up, balancing, and controlled falling — the Cubli is able to ‘walk’.
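The jump works by conservation of angular momentum: braking the wheel transfers its spin to the cube body, which pivots up about a resting edge. A back-of-the-envelope sketch (all numbers below are illustrative assumptions, not ETH Zurich's actual specs):

```python
# Rough sketch of the Cubli's jump via conservation of angular momentum.
# All values are assumed for illustration only.

wheel_inertia = 5.7e-4      # kg*m^2, assumed reaction-wheel inertia
wheel_speed = 180.0         # rad/s, assumed wheel speed before braking
cube_inertia_edge = 3.3e-3  # kg*m^2, assumed cube inertia about a pivot edge

# Braking transfers the wheel's angular momentum to the cube body,
# which starts rotating about the edge it rests on.
cube_speed = wheel_inertia * wheel_speed / cube_inertia_edge
print(f"cube angular velocity after braking: {cube_speed:.1f} rad/s")
```

Whether this initial rotation is enough to carry the cube over its edge onto the corner then depends on the cube's mass and geometry.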
By now, most roboticists are familiar with the myriad gecko-type robots that employ Van der Waals forces (created by microscopic synthetic setae) to cling to walls. Less well-known is an electrically controllable alternative developed by researchers at SRI International (formerly the Stanford Research Institute) called “electroadhesion”. Impressively, the electroadhesive can support 0.2 to 1.4 N per square centimeter while requiring a mere 20 microwatts per newton. This means that a square meter of electroadhesive could hold at least 200 kg (440 lb) while consuming only 40 milliwatts, and it can be turned on and off at the flick of a switch! Read on for pictures, videos, and discussion.
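The quoted figures check out with a quick back-of-the-envelope calculation (a sketch, using g ≈ 9.81 m/s² and the lower end of the quoted pressure range):

```python
# Sanity-checking the electroadhesion numbers quoted above.
area_cm2 = 100 * 100       # one square meter expressed in cm^2
min_pressure = 0.2         # N/cm^2, lower end of the quoted range
power_per_newton = 20e-6   # W/N, i.e. 20 microwatts per newton

holding_force = min_pressure * area_cm2   # newtons
holding_mass = holding_force / 9.81       # kg of weight it can support
power = holding_force * power_per_newton  # watts

print(f"force: {holding_force:.0f} N, about {holding_mass:.0f} kg")
print(f"power: {power * 1e3:.0f} mW")
```

At 0.2 N/cm², one square meter supports 2000 N (roughly 204 kg) for 40 mW, matching the claim.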
News from Unlimited Detail, the famous voxel rendering engine by the Australian company Euclideon. Deemed vaporware by some, the company has now released information about Geosphere, a software package for viewing large geospatial voxel data sets. The technology can now also be licensed in the form of an SDK.
Still, some of the criticism voiced by people like John Carmack, e.g. that animation and lighting can’t be done in voxels, might be right after all, and might explain the shift from gaming to geospatial visualization, where these are not a requirement.
A researcher at MIT’s Center for Future Civic Media has developed software that lets anyone quickly stitch together random displays to form what he calls a Junkyard Jumbotron. Using the Junkyard Jumbotron, groups of people can more easily view data, graphics or other information on what is essentially a larger virtual display.
To create the virtual display, a user goes to the Junkyard Jumbotron creation Website to receive a unique URL. This URL is then entered on all the devices that will be used in the virtual display system. Once the URL is entered, each device will display a visual code.
The next step is to take a photo of the ensemble of displays exhibiting the codes. The photo must then be e-mailed or uploaded to the creation Website. At this point, software developed by the center analyzes the photo to figure out where all the displays are located.
After this step, any image that the user desires to display is simply e-mailed to the site, and the software automatically slices up that image and places the pieces on the individual devices. This forms the larger virtual image. A user can then manipulate the image on any device, zooming and panning across devices.
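The slicing step above can be sketched in a few lines. This is my own minimal illustration, not the project's actual code: I assume each display's position has been recovered from the photo as a rectangle in normalized (0 to 1) coordinates, and compute the pixel crop of the source image that each device should show.

```python
# Minimal sketch of slicing one image across several displays, assuming
# the photo-analysis step yields each display's normalized bounding box.

def slice_image(image_size, displays):
    """image_size: (width, height) of the image to show.
    displays: dict of name -> (x0, y0, x1, y1) in normalized photo coords.
    Returns a dict of name -> pixel crop box (left, top, right, bottom)."""
    w, h = image_size
    crops = {}
    for name, (x0, y0, x1, y1) in displays.items():
        crops[name] = (round(x0 * w), round(y0 * h),
                       round(x1 * w), round(y1 * h))
    return crops

# Example: two phones lying side by side, each covering half of the photo.
crops = slice_image((1920, 1080), {
    "phone_left":  (0.0, 0.0, 0.5, 1.0),
    "phone_right": (0.5, 0.0, 1.0, 1.0),
})
print(crops["phone_left"])   # left half of the image
print(crops["phone_right"])  # right half of the image
```

The real system additionally has to handle displays that are rotated or tilted in the photo, which turns the crop into a perspective warp rather than an axis-aligned rectangle.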
The source code of this project is available on GitHub.
“If you want to know what the large-scale, high-performance data processing infrastructure of the future looks like, my advice would be to read the Google research papers that are coming out right now,” Olson said during a recent panel discussion alongside Wired.
SMAA is a powerful alternative to FXAA and similar post-processing anti-aliasing techniques. New features include local contrast analysis for more reliable edge detection, and a simple and effective way to handle sharp geometric features and diagonal lines. Together with accelerated and accurate pattern classification, this allows for better reconstruction of silhouettes. It also looks much better for scenes in motion.
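The local contrast idea can be illustrated with a toy example. This is a simplified sketch of luma-based edge detection with contrast adaptation, not the actual SMAA shader: an edge candidate is kept only if its luma delta is comparable to the strongest nearby delta, so weak edges next to much stronger ones are suppressed. The threshold and the 0.5 adaptation factor are assumptions for illustration.

```python
# Toy luma-based edge detection with local contrast adaptation,
# loosely in the spirit of SMAA's edge-detection pass. Simplified sketch.

THRESHOLD = 0.1  # assumed luma threshold

def left_edges(luma):
    """luma: 2D list of per-pixel luminance values in [0, 1].
    Marks a left edge where the luma delta to the left neighbor exceeds
    the threshold AND is not dominated by the surrounding deltas."""
    h, w = len(luma), len(luma[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w):
            delta = abs(luma[y][x] - luma[y][x - 1])
            if delta < THRESHOLD:
                continue
            # Local contrast adaptation: measure the neighboring deltas
            # and suppress this edge if a nearby one is much stronger.
            neighbors = []
            if x >= 2:
                neighbors.append(abs(luma[y][x - 1] - luma[y][x - 2]))
            if x + 1 < w:
                neighbors.append(abs(luma[y][x + 1] - luma[y][x]))
            max_other = max(neighbors, default=0.0)
            if delta >= 0.5 * max_other:
                edges[y][x] = True
    return edges

# A single sharp step between pixels 1 and 2 yields one left edge at x == 2.
row = [[0.0, 0.0, 1.0, 1.0, 1.0]]
print(left_edges(row))
```

The real algorithm runs on the GPU, detects edges in both axes, and feeds them into pattern classification, but the contrast-adaptation principle is the same.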
Check out the video below for a comparison with several other algorithms. The video is also available for download in better quality.