UV4L was originally conceived as a modular collection of Video4Linux2-compliant, cross-platform, user-space drivers for real or virtual video input and output devices (externally indistinguishable from kernel drivers), along with other pluggable back-ends and front-ends.
It has evolved over the years and now includes a full-featured Streaming Server component with a customisable web UI, providing a set of standard, modern and unique solutions for encrypted, live, bidirectional data, audio and video streaming, mirroring or conferencing over the web and for the IoT. Recent releases also provide a RESTful API for developers who want to implement their own custom applications.
The Video4Linux2 uv4l-raspicam driver for the Raspberry Pi has been extended to support the TC358743 HDMI-to-MIPI converter chip. This chipset is often found on the B101 capture boards made by Auvidea.
This project uses an interesting DIY approach to broadcasting video via Wi-Fi: video transmitter and receiver are never directly associated; instead, the receiver is put into monitor mode. The result is a transmission that behaves more like an analog link: a weaker signal does not cause an immediate disruption but gradual degradation through packet loss.
With the SpaceX Dragon 3 capsule that was recently berthed to the ISS, NASA deployed the HDEV (High Definition Earth Viewing) experiment to space. It consists of four off-the-shelf HD video cameras in a common housing together with a video encoder and router.
Today the ISS's robotic arm extracted the box from the Dragon's unpressurized cargo hold and mounted it outside the Columbus module.
Live stream showing video of all four cameras in a predefined sequence.
The four cameras are mounted so that one camera points forward along the station's velocity vector, two point backwards, and one points down towards Earth.
The housing insulates the cameras from the extreme temperatures and vacuum of space but provides no significant shielding against radiation. That’s on purpose, as the main goal of the experiment is to find out how non-radiation-hardened cameras, especially their sensors, fare in this environment.
The video signal is encoded to an H.264 stream for the downlink and broadcast live on a Ustream channel. The stream shows all four installed cameras in a preprogrammed sequence.
This is an interesting contribution to the race towards live satellite maps that several companies have now joined, as the possibility to use off-the-shelf cameras would definitely limit the costs of such ventures.
The live555 Streaming Media framework makes it possible to stream content over RTP and comes with an RTSP server. Below are some hints on how to build it on Windows 7 64-bit with Visual Studio Express 2012; these should also work with VS 2013.
On the command line
More recent versions of the Visual Studio IDE do not support building makefile-based projects, so you need to build from the command line. Instructions for generating makefiles for the Windows command line can be found in the official documentation.
You can still build the project in the IDE. Check the second part of the post for instructions on how to modify the project for that.
From the Windows SDK’s include directory, copy “NtWin32.Mak” and “Win32.Mak” to the VC include directory, e.g. “C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\include”.
Edit win32config and make “TOOLS32” point to the location of your build tools; for me that was “C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC”. Change “msvcirt.lib” to “msvcrt.lib” in the “LINK_OPTS_0” param, and change the “LINKS” param to ‘ “$(TOOLS32)\bin\$(link)” -out:’ (without the ticks).
From the Start menu, launch the “Developer Command Prompt for VS2012”; this gives you a console configured for building with VS. Go to the directory where you unpacked the sources and execute “genWindowsMakefiles”.
Now run “nmake -f file.mak” in each subdirectory, e.g. “nmake -f groupsock.mak” in the groupsock directory. You should proceed in the following order:
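Put together, the command-line build might look like the sketch below. The directory order shown here is my assumption (static libraries before the programs that link against them), not something stated in the original post, so adjust it if a link step complains about missing libraries:

```shell
rem Run inside the "Developer Command Prompt for VS2012",
rem starting from the root of the unpacked live555 sources.
genWindowsMakefiles

rem Build each component with its generated makefile.
rem Library-first ordering is an assumption; reorder if needed.
cd liveMedia && nmake -f liveMedia.mak && cd ..
cd groupsock && nmake -f groupsock.mak && cd ..
cd UsageEnvironment && nmake -f UsageEnvironment.mak && cd ..
cd BasicUsageEnvironment && nmake -f BasicUsageEnvironment.mak && cd ..
cd testProgs && nmake -f testProgs.mak && cd ..
cd mediaServer && nmake -f mediaServer.mak && cd ..
```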
In the IDE
To get the project to build in the VS IDE you need to patch two files:
In “DelayQueue.cpp” after "const DelayInterval ETERNITY(INT_MAX, MILLION-1);" add
Several options exist to stream the picture of a webcam or the Raspberry Pi camera from the Pi. The first is an MJPEG stream. This is the most compatible option, as many applications and even browsers can display such a stream.
The second is H.264. Although H.264 can be encoded on the Pi's GPU, it has a very high latency, at least five seconds in my experience.
And last but not least you can simply pipe the video stream over netcat to transmit it to another client.
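The netcat variant can be sketched like this; the host name, port, resolution and player flags are assumptions you will want to adapt to your setup (and note that traditional netcat expects “-l -p 5001” instead of “-l 5001”):

```shell
# On the Pi: capture raw H.264 from the camera module and pipe it
# into netcat. -t 0 means no time limit, -o - writes to stdout.
raspivid -t 0 -w 1280 -h 720 -fps 25 -o - | nc -l 5001

# On the receiving machine: pull the stream and hand it to a player.
# mplayer's -fps and -cache options help smooth out network jitter.
nc raspberrypi.local 5001 | mplayer -fps 25 -cache 1024 -
```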
“Motion” can serve up an MJPEG stream. Apart from that, it has several other features, listed below, e.g. simple motion detection. It may run in the background as a Linux daemon. Here’s a guide on how to get Motion going with a PS3 Eye cam.
Taking snapshots of movement
Watch multiple video devices at the same time
Watch multiple inputs on one capture card at the same time
Live streaming webcam (using multipart/x-mixed-replace)
Real time creation of mpeg movies using libraries from ffmpeg
Take automated snapshots on regular intervals
Take automated snapshots at irregular intervals using cron
Execute external commands when detecting movement (and e.g. send SMS or email)
Motion tracking (camera follow motion – special hardware required)
Feed events to a MySQL or PostgreSQL database.
Feed video back to a video4linux loopback for real time viewing
Lots of user contributed related projects with web interfaces etc.
User configurable and user defined on screen display.
Control via browser (older versions used xml-rpc)
Automatic noise and threshold control
Motion is a daemon with low CPU consumption and small memory footprint.
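A minimal configuration covering the daemon, streaming and snapshot features above could look like the excerpt below. The option names match Motion's config file as I know it, but the values are only examples; older versions called the stream option “webcam_port” instead of “stream_port”:

```
# /etc/motion/motion.conf (excerpt) - a minimal sketch, values are examples
daemon on                 # run in the background as a Linux daemon
videodevice /dev/video0   # the V4L2 capture device to read from
width 640
height 480
framerate 15
stream_port 8081          # MJPEG live stream (multipart/x-mixed-replace)
stream_localhost off      # allow clients other than localhost
threshold 1500            # changed pixels needed to trigger motion detection
target_dir /var/lib/motion   # where snapshots and movies are stored
```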
Reflection for Mac makes it possible to mirror an iPad 2 or iPhone 4S screen on a Mac for presentations etc. By default the image is scaled to 1280×720 px, but it’s also possible to use the native device resolution. A detailed test of the application can be found here (German).
“From the press release: ‘In recognition of the growing importance that the Internet plays in the generation and consumption of video content, MPEG intends to develop a new video compression standard in line with the expected usage models of the Internet. The new standard is intended to achieve substantially better compression performance than that offered by MPEG-2 and possibly comparable to that offered by the AVC Baseline Profile. MPEG will issue a call for proposals on video compression technology at the end of its upcoming meeting in March 2011 that is expected to lead to a standard falling under ISO/IEC “Type-1 licensing”, i.e. intended to be “royalty free.”‘”