Tuesday, September 27, 2011

The Finished Article (or at least the first release)

So I actually finished the first release a couple of months ago, but wanted to have it running for a while to be sure it was stable before releasing it.

I have modified the firmware with a startup script (which you can download for viewing here) that checks for scripts on either the associated SMB share or a USB stick at startup; this allows the camera to load additional code at boot without the need to revise the firmware constantly. The other advantage is that the firmware has very little available space - there wasn't enough room to store the graphics for the text and volume meters.

The firmware can be found here; as before, whilst it works great for me, your mileage may vary.

The firmware released by Trendnet has been updated due to the security holes found by Console Cowboys - I have a new version of the firmware here.

I have also included the files I have put onto USB sticks on the camera; if you extract the following files (found here) into the root directory of a FAT-formatted memory stick and insert it into the camera, it will start the baby camera volume meter on reboot.

The additional files are very conservative with the timings, so the camera boot takes about 1 minute longer. I don't consider this to be an issue as the cameras are rarely restarted.

You can also pop the files on a Samba / Windows share and connect to it from the camera's interface; it will then load the files over the network on reboot, freeing up the USB port (although the network link is torn down when I unload the existing camera module to load my own, so this doesn't work for me with my apps).

For visualisation I am using Jogglers from O2. I bought these from the O2 store when they went on firesale for £50 and they are perfect for the job. I have leveraged work done by far smarter folks than me: they are now running Ubuntu 11.04 and I am using VLC to display the video. I added a couple of shortcuts to the desktop so the other half can easily start the video if they are rebooted. We have had them running for about 2 months without a restart, so I consider them perfectly stable enough for our use. They even survived my typical curse: whenever I go travelling for work it is inevitable that something I have hacked together fails - in this instance they have survived all trips without needing any assistance from me.

The URLs I use for VLC are:


- or -


You will need to substitute the appropriate hostname for your purposes.

I still have some to-do's which I will get round to shortly and post updates:

Add OpenSSH Server

This is compiled and tested; I just need to add it to the startup script.

Update Busybox

This is compiled and tested; I just need to add it to the startup script.

Add USB Temperature Sensor support

This is proving to be tricky; the USB support on the camera seems to be the bare minimum needed for storage devices, and I can't seem to get generic support working - if anyone out there has any ideas I would be grateful for suggestions.

Add 2 way audio / mp3 playback

The little fella finds it easier to sleep with white noise, and I would like to be able to fire it off, perhaps with a gradual fade out over time to ease him to sleep. This does require adding an external speaker to the camera, which is not ideal.

If any of you have any other suggestions, please let me know.

As ever, I welcome your feedback, please let me know your thoughts (good or bad).

Wednesday, June 29, 2011

Adding visual audio level monitors

With the ability to overlay images onto the video stream tackled, the next task was to analyse the audio and overlay VU meters to let us see how noisy the little tyke is without having to actually listen to him (it is a him, btw - he popped out on the 22nd of June).

The camera uses the Linux Open Sound System (OSS), which was replaced by ALSA in Linux 2.5 and above and isn't particularly nice to work with. The camera streaming server (camserv), for which there is no source code, opens the audio-in device on startup; as the device can only be opened once (and without it being available camserv refuses to work), I had to look at other avenues to get the audio.

Fortunately, camserv actually offers the audio out as a mangled stream which can be found at http://[cam-ip]/cgi/audio/audio.cgi - this stream provides chunks of 16-bit PCM audio data at what appears to be about 8 kHz (the frequency is only a guess). The stream isn't presented in a friendly format (i.e. browsers don't recognise it and simply try to download it as a series of files), but it is quite easy to write a bit of code to connect to the port and analyse the audio.

I put together some code to connect to the stream and display the current, peak and average volume overlaid on the video stream. It uses some of the same code used for the text display in the previous entry and runs around 1/4 second behind the actual audio (due to the audio being streamed locally, analysed and then overlaid onto the video).
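For anyone curious about the analysis itself, here is a minimal Python sketch of the same idea (this is not the code in the download): it takes a chunk of raw 16-bit little-endian PCM and tracks the current, peak and running-average levels. The 0.9/0.1 smoothing factor is my own choice, not the one the applet uses.

```python
import struct

def levels(pcm_bytes, state=None):
    """Current, peak and running-average level for one chunk of
    16-bit little-endian PCM samples; state carries (peak, avg)."""
    samples = struct.unpack("<%dh" % (len(pcm_bytes) // 2), pcm_bytes)
    current = max(abs(s) for s in samples) / 32768.0  # normalise to 0..1
    peak, avg = state if state else (0.0, 0.0)
    peak = max(peak, current)
    avg = avg * 0.9 + current * 0.1  # simple exponential moving average
    return current, (peak, avg)

# a quiet chunk followed by a loud one
quiet = struct.pack("<4h", 100, -120, 80, -90)
loud = struct.pack("<4h", 16000, -15000, 12000, -8000)
cur1, state = levels(quiet)
cur2, state = levels(loud, state)
```

In practice you would call this in a loop on each chunk read from the audio.cgi socket and hand the three values to the overlay code.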

I have included the source code and a compiled version here, and below is a short video of it being streamed to an iPad:

And here it is on Apple TV2 via XBMC:

To use the code on your camera it does need to authenticate to the local camera web server; it defaults to username:admin password:admin, as do the cameras out of the box. If you are using your own username and password (and I strongly suggest you do) you will need to specify the base64 encoding of the username and password on the command line via -a. There are a number of websites which can do this encoding, so I haven't included it in the code; the format is the base64 of 'username:password' (e.g. 'admin:admin' encodes to 'YWRtaW46YWRtaW4=').
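If you would rather not paste your credentials into a random website, the encoding is a one-liner in most languages - for example in Python (the 'admin:admin' case matches the example above):

```python
import base64

def auth_token(username, password):
    """Base64-encode 'username:password' for the -a command line argument."""
    return base64.b64encode(f"{username}:{password}".encode()).decode()

print(auth_token("admin", "admin"))  # YWRtaW46YWRtaW4=
```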

Wednesday, June 8, 2011

Overlaying text and images into the video stream

Having tinkered around with the toolchain for a while I have turned my attention to one of the core tasks of the project - that is overlaying text and images in the video stream.

The video capture, encoding and motion detection are all handled by the plmedia.o module; fortunately the source code for this module is provided in the toolchain, although there is no documentation.

After a fair bit of scratching my head it became apparent that the video capture hardware writes directly to memory on demand (by the module), and then a further request is sent to the encoder hardware, which encodes/compresses it directly to another buffer. This is the buffer used by the device when streaming video. There is further hardware which performs motion detection, using the video capture buffer.

I decided to inject into the video stream directly before the encoding takes place in order to be sure (with as much certainty as possible) that the frame has been captured in full.

plmedia.o source is made up of 3 core components: plgrabber, plencoder and plmd. plencoder is the component responsible for managing the encoding process and this is where I have added my code.

The video is captured in YCbCr format with 4:2:0 subsampling between the luminance and the chroma channels - that is, the Y channel is at 640x480 resolution with 8 bpp and the Cb & Cr channels are at 320x240 resolution with 8 bpp.

Each channel is captured simultaneously in its own buffer. I added an IOCTL command to pass a structure which contains the dimensions of the image to be added, a TTL (so images can expire) and pointers to buffers containing the appropriate images. The definitions for the IOCTL command and the buffer can be found in the source below in include/video/plmedia.h.
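Purely to illustrate the shape of such a request (the real definition lives in include/video/plmedia.h), here is a hypothetical ctypes mirror of that kind of structure - every field name and width below is my invention, not the module's actual layout:

```python
import ctypes

class OverlayRequest(ctypes.Structure):
    """Hypothetical mirror of an overlay IOCTL argument: position and
    size (in Y-channel pixel coordinates), a TTL, and one buffer per
    channel. Field names and types are illustrative guesses only."""
    _fields_ = [
        ("x",      ctypes.c_uint32),  # left edge on the Y plane
        ("y",      ctypes.c_uint32),  # top edge on the Y plane
        ("width",  ctypes.c_uint32),  # overlay width on the Y plane
        ("height", ctypes.c_uint32),  # overlay height on the Y plane
        ("ttl",    ctypes.c_uint32),  # frames until the overlay expires
        ("y_buf",  ctypes.c_void_p),  # width x height luma samples
        ("cb_buf", ctypes.c_void_p),  # (width/2) x (height/2) Cb samples
        ("cr_buf", ctypes.c_void_p),  # (width/2) x (height/2) Cr samples
    ]
```

The point is simply that one structure carries everything the encoder needs to blend (and later expire) an overlay, so a single IOCTL per image is enough.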

The heavy lifting is done in the plencoder.c source; look for a function called OverlayImages which iterates through the image buffers before each encode. It is interesting to note that I was having huge issues with image corruption which took some real head scratching to solve - the images would appear corrupt, with some elements of the video image showing through. It turns out that the processor has a small cache which needs to be flushed, otherwise the overlay isn't written to the buffer before the encoder kicks in.

In EncIOCTL you will find the new function to add an image to the buffer list - there is functionality to replace an existing buffer or add a new one. The replace function is useful for adding animation or moving images (such as a time stamp) without needlessly wasting resources.

Finally, InitEncoder and CleanupEncode contain additional code to create and release the buffers.

There is an arbitrary limit of up to 6 independent overlays; this can be increased by modifying the IMAGEBUFFERS constant in plencoder.h.

The image buffers must be passed in the right format - and this is where it can get tricky - images need to be converted to YCbCr format at the right resolution for each channel. The co-ordinates and dimensions refer to the Y channel and are appropriately converted for the Cb/Cr channels.
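As a rough illustration of that conversion, here is how a 24bpp RGB pixel becomes YCbCr using the standard full-range BT.601 equations, and how the chroma planes are halved in each direction - note the module's exact coefficients and rounding may differ:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr for one pixel (0..255 inputs)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

def subsample_chroma(plane):
    """Average each 2x2 block: a 640x480 chroma plane becomes 320x240."""
    h, w = len(plane), len(plane[0])
    return [[(plane[j][i] + plane[j][i + 1]
              + plane[j + 1][i] + plane[j + 1][i + 1]) // 4
             for i in range(0, w, 2)]
            for j in range(0, h, 2)]

# white stays white: maximum luma, neutral chroma
print(rgb_to_ycbcr(255, 255, 255))  # (255, 128, 128)
```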

I have also created an applet to add text to the video stream as part of the babycam project; the code is attached below. It uses a pre-created BMP consisting of a matrix of characters, white on black. The code extracts the relevant characters, builds a buffer, converts it and then sends it to the module. The same code can be used to send a BMP, although it needs to be in 24bpp uncompressed format (regular Windows .bmp's are fine).
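The glyph-extraction step boils down to locating a character's cell in the bitmap matrix. A sketch of that arithmetic, assuming a hypothetical atlas laid out as 16 columns of 8x16 pixel cells starting at ASCII 32 (the applet's actual layout may well differ):

```python
CELL_W, CELL_H, COLS, FIRST = 8, 16, 16, 32  # assumed atlas geometry

def glyph_rect(ch):
    """Top-left corner of ch's cell within the font bitmap."""
    index = ord(ch) - FIRST
    col, row = index % COLS, index // COLS
    return col * CELL_W, row * CELL_H

def text_extent(text):
    """Pixel size of the overlay buffer needed for a one-line string."""
    return len(text) * CELL_W, CELL_H
```

Each character's cell is copied into a destination buffer at `position * CELL_W`, and the finished buffer is then converted to YCbCr and handed to the module like any other image.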

The sample below shows a screen grab of the module working with 2 overlays added by the displaytext applet - although they are in black and white, there is full colour support.

And here is the same scene being streamed to an iPad:

The source and cross compiled code for the module can be downloaded here

The source and cross compiled code for the text overlay applet can be downloaded here

Wednesday, April 27, 2011

Telnetting to the device

As per my previous post, I provided instructions on how to enable telnet on the device - for those of you who would like access but don't have the time to install the toolchain and mess about, I have posted a modified firmware file here.

Please be aware that if you choose to use this, it is at your own risk - although it worked for me, I will not be held responsible if it bricks or otherwise kills your camera, cat, car or grandmother.

Once installed you should be able to telnet to the camera on port 15566; the telnetd limits it to 2 simultaneous connections and there is no user-level security.

I will post a further firmware shortly with SSH and SCP along with lots of other goodies when I get past the testing stage.

Thursday, March 17, 2011

Under the hood

Poking around the shell after telnetting in revealed the following hardware; a bit of googling later, it is actually very impressive given the relatively low cost of the device:

Prolific PL1029 ARM Processor

OmniVision OV7720 CMOS Sensor

4 Mbytes NOR (Flash)

32 Mbytes SDRAM

RALink Technology RT61 WiFi Adapter (802.11b/g)

Ethernet Adapter

2 x Status LED connected via GPIO
1 x IR LED connected via GPIO
1 x light sensor connected via GPIO
1 x momentary push button connected via GPIO

Wednesday, March 16, 2011

Taking a peek at the firmware

TRENDnet offer a download of the GPL source on their website; a quick download later allowed me to take a peek at what they had used to pull it together. Annoyingly, they had only included the bare essentials, with no buildchain or any other tools to build a custom firmware.

A bit more research showed that they have used a relatively common platform to build the camera (based on the PL1029 chipset), which is a very capable and self-contained device (more on this later).

It seems that other companies also make use of this platform for these types of device (indeed some look identical to the 312W I have here), and so I had a look around to see if anyone else had had any joy in getting into the firmware.

A company snappily called Airlink 101 Network Solutions offers a similar looking device, the AICN500W, which has a very similar case and the same functionality. A quick look around their website and a download later, and I had a much more complete firmware source tree and, crucially, the tools necessary to build a complete firmware image.

I have republished their toolchain here under the terms of the GPL.

The instructions provided with the image indicate that Fedora 3 is needed to use the buildchain; not having the desire to have yet another image on my Mac, I took the chance that it would work with Ubuntu. A couple of hours later (and a couple of fixes to the provided makefiles) I could get the entire toolchain to compile, providing the compiler, headers and source files for all the opensource components.

Not wanting to risk installing the firmware from another manufacturer - and bricking the camera - I looked around for a description of the firmware format to see if the 312W resembled that used in the Airlink devices.

I stumbled across a chap who has already done this for the AICN747W, which looks similar enough - and interestingly he has used it to enable telnet on the camera. After a quick peek at his instructions and a little tinkering, using the fwpacker source provided in the toolchain to reverse engineer the process, I wrote a script to extract the components in order to modify the firmware provided with the 312W.

You can download the source here

#!/usr/bin/perl
# TrendNet firmware unpacker
# Matt Brain (matt.brain@gmail.com)
# Usage: fwunpack.pl <firmware.bin>
# Extracts: prostub.bin
#           vmlinuz
#           minix.gz
#           autoboot.bat
#           footer.bin
# Writes log to fwunpack.log
#
# NB: the listing here is reconstructed from an incomplete excerpt -
# the complete, tested script is in the download above.

use strict;

my $scriptversion=1;                        #Version of this script

my $headerPointer=0x04;                     #Header reference in firmware
my $versionLocation=0x08;                   #Version location in firmware
my $md5Location=0x0c;                       #md5 location in firmware
my $md5Size=16;                             #md5 length
my $crcLocation=$md5Location+$md5Size;      #crc location
my $crcSize=4;                              #crc length
my $pkgSizeLocation=$crcLocation+$crcSize;  #package size location
my $headerSize=4096;                        #size of header found at $headerPointer

my $model="Unknown";
my $headerLocation=0x00;
my $tVersion=0x00;
my @version=(0,0,0,0);

my @chunkStart=(0);
my @chunkLength=(0);
my @chunkName=("vmlinuz", "minix.gz", "autoboot.bat");
my @chunk;

my $footer="";
my $prostub="";
my $numArgs=$#ARGV+1;

sub Usage {
    print "Usage: fwunpack.pl <firmware.bin>\n\n";
    exit(1);
}

Usage() if ($numArgs<1);

print "** fwunpack version $scriptversion **\n\n";
print "Opening $ARGV[0]\n";
open(FIRMWARE,"<",$ARGV[0]) or die "FATAL: Couldn't open $ARGV[0]\n";
binmode FIRMWARE;

#read the header location
seek(FIRMWARE,$headerPointer,0);
read(FIRMWARE,$headerLocation,4) or die "FATAL: Couldn't read the header location\n";

#firmware location is stored in little endian format so need to fiddle the bits
$headerLocation = unpack('L<',$headerLocation);
printf "Header location at 0x%x\n",$headerLocation;

#read the firmware version (four single-byte fields)
seek(FIRMWARE,$versionLocation,0);
read(FIRMWARE,$tVersion,4) or die "FATAL: Couldn't read the version\n";
@version = unpack('C4',$tVersion);
printf "Firmware version $version[3].$version[2].$version[1].$version[0]\n";

#read the model name (8 characters at the start of the header)
seek(FIRMWARE,$headerLocation,0);
read(FIRMWARE,$model,8) or die "FATAL: Couldn't read the model\n";
print "Model $model\n";

#the chunk data follows the 4096-byte header
my $chunkCursor=$headerLocation+$headerSize;

#walk the chunk table in the header: pairs of (deploy address, length)
for (my $counter=0;$counter<511;$counter++) {
    read(FIRMWARE,$chunkStart[$counter],4);
    read(FIRMWARE,$chunkLength[$counter],4);
    $chunkStart[$counter] = unpack('L<',$chunkStart[$counter]);
    $chunkLength[$counter] = unpack('L<',$chunkLength[$counter]);
    if ($chunkStart[$counter]>0) {
        printf "%s found, deploy at:0x%x length:%d\n",$chunkName[$counter],$chunkStart[$counter],$chunkLength[$counter];
        #remember our place in the table, read the chunk data, then return
        my $tablePos = tell(FIRMWARE);
        seek(FIRMWARE,$chunkCursor,0);
        read(FIRMWARE,$chunk[$counter],$chunkLength[$counter]);
        $chunkCursor = tell(FIRMWARE);
        seek(FIRMWARE,$tablePos,0);
        open(CHUNK,">","$chunkName[$counter]") or die $!;
        binmode CHUNK;
        print CHUNK $chunk[$counter];
        close CHUNK;
    }
}

#prostub is everything before the header
seek(FIRMWARE,0,0);
read(FIRMWARE,$prostub,$headerLocation);

#the footer is everything after the last chunk
seek(FIRMWARE,$chunkCursor,0);
{
    local $/;                               #slurp the remainder of the file
    $footer = <FIRMWARE>;
}
close FIRMWARE;

print "writing footer.bin\n";
open(FOOTER,">","footer.bin") or die $!;
binmode FOOTER;
print FOOTER $footer;
close FOOTER;

print "writing prostub.bin\n";
open(PROSTUB,">","prostub.bin") or die $!;
binmode PROSTUB;
print PROSTUB $prostub;
close PROSTUB;


Using the instructions provided with the AICN500W firmware I was then able to rebuild the firmware and do a comparison between the two - this was critical to show that I was pulling all the bits out as expected and could reconstruct them into an identical firmware file. I had to create an ipcam.dsc file describing the components in the firmware for the fwpacker tool, as below, and append the footer.bin to the output of fwpacker.

# ipcam.dsc

# describe the firmware version

# describe the model, which is a maximum of 8 characters
@model IPCAM001

# describe nor flash type and size
@norinit 1 "8x8,64x63"

# describe deployment location and input file
vmlinuz 0x20000 777008
minix.gz 0xe0000 2836160
autoboot.bat 0x2000 8192
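Those deploy addresses have to fit inside the camera's 4 Mbyte NOR flash without overlapping; a quick Python sanity check of the layout above:

```python
# (name, deploy address, length) taken from the ipcam.dsc above
parts = [
    ("autoboot.bat", 0x2000, 8192),
    ("vmlinuz", 0x20000, 777008),
    ("minix.gz", 0xe0000, 2836160),
]
FLASH = 4 * 1024 * 1024  # 4 Mbyte NOR flash

parts.sort(key=lambda p: p[1])
for (name, start, length), nxt in zip(parts, parts[1:] + [("<end of flash>", FLASH, 0)]):
    end = start + length
    assert end <= nxt[1], f"{name} overlaps {nxt[0]}"
    print(f"{name:12s} 0x{start:06x}-0x{end:06x}")
```

Each component ends before the next one begins, and minix.gz finishes comfortably inside the 4 Mbyte boundary.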

I then referred back to the instructions found earlier and enabled telnet access by uncommenting 2 lines in etc/rc.d/init.d/daemon.sh:

# echo "Starting telnetd ..."
# telnetd -p 15566

Gzipping the disk image back up produced a smaller image, which concerned me - but rebuilding the firmware and uploading it proved it worked (a bit of a gamble, I know) - I had telnet access on 15566.
You may download a bundle which contains the source for the fwunpacker along with the fwpacker and associated files from

It seems all the download links are broken - here is a bundle of all the files used in this project: Google Drive ZIP archive

Getting started

Upon returning to the homestead I opened the box and got started; it comes with a neat little wall-wart power supply and a setup CD. Having a Mac (and no desire to go through what looks like an unnecessary config process using a Windows VM session) I decided to connect it up to Ethernet and use the built-in web server to configure it. Out of the box it uses a static IP address on Ethernet, and it was a doddle to configure it to use the WiFi and get it fired up.

I won't go through the procedure (it is well documented in the instructions), but it didn't take long to get it streaming video into Firefox, albeit without audio, as streaming both video and audio requires the web client's ActiveX control under IE.

Out of curiosity I connected to it from a Windows VMware session with IE and found it also supports 2-way audio and motion detection - quite impressive for such a low cost device, although the handicapping of anything not using the ActiveX control is a bit of a pain.

A quick port scan revealed it was listening on HTTP and RTSP; I would have to look elsewhere to get to the console.

I also connected to it using VLC player (via RTSP) and, joy - the audio and video both work perfectly. This should make it easier to use with the target devices I had in mind.

The video quality is OK - I wasn't expecting amazing results, but it is certainly suitable for our needs. There is a slight purple tinge to the image, but I think this is due to the sensor being IR sensitive. Streaming is pretty good - it can get a little choppy at times, but I suspect this is due to the WiFi network being congested rather than an issue with the camera itself. I am still pondering the merits of WiFi over Ethernet, with all the possible implications for the little mite's health; I haven't discounted the idea of using Ethernet when it is being used for the baby.

Introducing the TRENDnet TV-IP312W

After looking around for a while I found there are lots of companies producing inexpensive IP webcams; knowing how reluctant companies can be about releasing the opensource components (or how they release only the bare minimum to meet the GPL license requirements), I needed to find something I stood a fair chance of modifying. In addition, I needed to find a cam with the following functionality:

1. Work well in low light conditions
  • We don't want to have to be burning 100 Watts and blinding the baby just so we can see it sleeping (or attempting to)
2. Stream video and audio with mjpeg and 3gpp protocols
  • We need something which pretty much already works with stuff we have - I don't have the time (or the knowledge) to build an entire streaming platform
3. Have a mechanism to monitor temperature, or at least expansion which could be adapted to monitor temperature
4. Be affordable
  • If she who must be obeyed (SWMBO) had found I had spent thousands of pounds on a fancy streaming platform I don't think she would be happy ;)
  • We also have friends who are in a similar position, and it would be nice to be able to gift them a similar solution
With these additional requirements in mind (and including the requirements from the initial request); I decided on a TRENDnet TV-IP312W.

This camera seems to tick all the boxes; a quick trawl through the internet identified the following features:

  • WiFi (802.11g) and Ethernet
  • Video capture at 640 x 480 and 30fps
  • Audio capture and audio out (hadn't even thought of using this)
  • Streaming with multiple formats simultaneously
  • USB port (add temperature module here I think)
  • Works down to 0 lux with the support of IR illumination
  • OpenSource firmware (downloadable from the TRENDnet website)
I looked around for a good price and, as luck would have it, found it for £99.00 (reduced from £145.00) at my local PC World, so I popped out and bought the last one in the store.

Building a baby monitor

We have a baby on the way, and in her infinite wisdom (and knowing my love of hacking around) the other half has requested a baby monitor for the nursery. I have a love of all things gadgetry and decided we needed to be able to view and hear the baby as well as monitor the temperature.

I drew up a short list of the functions needed and then had a look around to see if anything ticks all the boxes.

The requirements are:

1. View and Listen to the baby on devices we already own
  • This includes iOS devices such as the iPad, iPhone and Apple TV2
  • PC's
2. View and Listen to the baby on a dedicated device (and ideally something else we already own)
  • I have an O2 Joggler Picture Frame which I bought for doing other things with - this would be ideal
3. Allow us to view and listen remotely.
  • I'm not planning to be a bad parent (honest), but I travel a lot for work, so being able to virtually peek my head around the door and check in on the little mite would be a bonus.
4. Monitor and alert on events; specifically:
  • Temperature
  • Noise
5. Be easy to use
  • Unlike many of my hacks; it shouldn't require specific knowledge or constant pampering to operate, it should just work.
6. Be family friendly
  • We don't want to have a maze of wires and dongles floating around - it needs to be self contained
7. Have a privacy setting.
  • The last thing we want is guests seeing the good lady breastfeeding before popping the little mite to bed; there needs to be an easy way to turn off the video and audio, and for it to automatically restart should we forget to turn it back on
  • Ideally, the other monitoring (i.e. temperature and volume) should continue regardless
With all these requirements in mind I started hunting around for the perfect device. Whilst there are many good baby monitors available (some with cameras, screens, temperature monitors etc), none of them ticked all the boxes (and to be honest I relished the challenge).

I then started to look around the market for an IP camera which could be modified to meet our requirements. Having had some experience with embedded devices (such as Linksys routers, Gumstix boards and Pogoplugs) I thought it would be relatively trivial to enhance something out there based on an opensource Linux platform.