Plusnet IPv6 still delayed, so let’s go spelunking in a Hurricane Electric tunnel

When I last changed ISP (last year, when Sky bought my old ISP, BE Unlimited) one of my requirements was for my new ISP to have a roadmap for making IPv6 available. I’ve been waiting (impatiently) for my new ISP, Plusnet, to deliver on their initial “coming soon” statements. Sadly, like almost all ISPs, Plusnet are not moving quickly with IPv6, and are still investigating alternatives like carrier-grade NAT to extend the life of IPv4. I can sympathise with this position – IPv6 has limited device support, most of their customers are not ready to adopt it, and trying to provide support for the necessary dual-stack environment would not be easy. But the problem of IPv4 address exhaustion is not going away.

So at the end of last year they started their second controlled trial of IPv6. I was keen to join, but the conditions were onerous. I would get a second account, I would need to provide my own IPv6-capable router, I couldn’t have my existing IPv4 static IP address, I couldn’t have Reverse DNS on the line, and I had to commit to providing feedback on my “normal” workload. Much as I wanted to join the trial, I couldn’t, as I wouldn’t be able to run my mailserver.

So I decided to investigate alternatives until such time as Plusnet get native IPv6 support working. The default solution in cases like mine, where the ISP only provides an IPv4 connection, is to tunnel IPv6 conversations through that IPv4 connection to an ISP that does provide IPv6 connectivity to the Internet. There are two major players in this area for home users, SixXS and Hurricane Electric. Both provide all the basic functionality, as well as each having some individual specialist features. I’m just looking for a basic IPv6 connection and could use either, but in the end Hurricane Electric appeared vastly easier to register with, so I went with them.

My current internet connection is FTTC (fibre to the cabinet) via a BT Openreach VDSL2 modem and my ISP-supplied (cheap and nasty) combined broadband router, NAT and firewall. This gives me a private 16-bit IPv4 address space (192.168.0.0/16), for which my home server (a low-power mini-ITX system that runs 24×7) provides all the network management functions, such as DHCP and DNS.

What I want to add to this is a protocol-41 tunnel from the IPv6 ISP (Hurricane Electric, or HE) back through my NAT and firewall to my home server. By registering for such a tunnel, HE provide (for free) a personal /64 subnet to me through that tunnel, allowing me to use my home server to provision IPv6 addresses to all the devices on my LAN. However, this connection is neither NAT’ed nor firewalled: the IPv6 addresses are both globally addressable and visible. So I also want my home server to act as a firewall for all IPv6 communications through that tunnel, to protect the devices on my network, without forcing them all to adopt their own firewalls. I was initially concerned that because my home server also acts as an OpenVPN endpoint, and so uses a bridged network configuration, getting the tunnel working might be quite awkward, but it turns out to make very little difference to the solution.

So, to make this work, first you need a static IPv4 address on the internet, and to have ensured that your router will respond to ICMP requests (pings!). Then you can register with Hurricane Electric, and “Create a Regular Tunnel”, which will result in a page of information describing your tunnel. I printed this for future reference (and in case I broke my network while making changes) but you can always access this information from the HE website.

You now need to edit /etc/network/interfaces. Add lines to define the tunnel, as follows, substituting the values from your tunnel description:

# Define 6in4 ipv6 tunnel to Hurricane Electric
auto he-ipv6
iface he-ipv6 inet6 v4tunnel
    address [your "Client IPv6 Address"]
    netmask 64
    endpoint [your "Server IPv4 Address"]
    ttl 255
    up ip -6 route add default dev he-ipv6
    down ip -6 route del default dev he-ipv6
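
Once the rest of the configuration below is in place, you can bring the tunnel up without a reboot and check that it is passing traffic; something like:

sudo ifup he-ipv6
ping6 -c 3 ipv6.google.com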

Now add an address from your “Routed /64 IPv6 Prefix” to the appropriate interface – in my case this is the bridge interface br0, but it’s more likely to be eth0 for you. This defines your server’s static, globally accessible IPv6 address:

# Add an IPv6 address from the routed prefix to the br0 interface.
iface br0 inet6 static
    address [any IPv6 address from the Routed /64 IPv6 Prefix]
    netmask 64

Since I am running Ubuntu 12.04 I now need to install radvd, which will advertise the IPv6 subnet to any systems on our network that want to configure themselves an IPv6 connection. Think of it as a sort of DHCP for IPv6. However, when I move to 14.04 some time later this year I expect to be able to get rid of radvd and replace it with dnsmasq (which I already use for IPv4 DNS/DHCP), as the latest version of dnsmasq is reported to provide a superset of the radvd capabilities.

sudo apt-get update
sudo apt-get install radvd

Then configure radvd to advertise our Routed /64 IPv6 Prefix (from which clients will derive their own addresses), by creating the file /etc/radvd.conf, and entering the following into it:

interface [your interface, probably eth0]
{
    AdvSendAdvert on;
    AdvLinkMTU 1480;
    prefix [Your Routed /64 IPv6 Prefix, incl the /64]
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};

Any IPv6-capable devices will now configure themselves an IPv6 address in your Routed /64 subnet (via stateless autoconfiguration), derived from the MAC address of the interface that is requesting the IPv6 address.
Now uncomment the line:

# net.ipv6.conf.all.forwarding=1

in the file /etc/sysctl.conf. This will allow your server to act as a router for IPv6 traffic.
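
To apply the setting immediately, without a reboot, you can reload sysctl.conf (or set the value directly):

sudo sysctl -p
sudo sysctl -w net.ipv6.conf.all.forwarding=1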

Now we need to enable and then configure the firewall. I take no credit for this, as much of the information related to the firewall was gleaned from this post. As I run Ubuntu Server I’ll use ufw, the Uncomplicated Firewall utility, to configure the underlying iptables firewall. Alternative front-ends to iptables will work equally well, though the actual method of configuration will obviously differ. First I needed to enable the firewall for IPv6 by editing /etc/default/ufw, and ensuring the following options are set correctly:

# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=yes

and

# Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
# if you change this you will most likely want to adjust your rules
DEFAULT_FORWARD_POLICY="ACCEPT"

Now we need to enable the firewall (by default it’s disabled) and add some additional rules to it:

# Enable the firewall
sudo ufw enable
# Allow everything on my LAN to connect to anything
sudo ufw allow from 192.168.0.0/16
# Allow Protocol-41 connections from the Tunnel Endpoint Server (to run the tunnel)
sudo ufw allow from [Your "Server IPv4 Address"] proto ipv6
# Allow BOOTP service on port 67 from radvd
sudo ufw allow proto any to any port 67
# Allow my IPv6 addresses to access services on this server
sudo ufw allow from [Your "Routed /64 IPv6 Prefix" including the "/64"]

I also had to add a few more rules to cope with the external-facing services that my home server provides to the Internet (mail, web, ssh, ftp, vpn etc).
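
At any point you can review the resulting rule set with:

sudo ufw status verbose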

Finally, I want to prevent all but a few specific types of external IPv6 connection from being made inbound into my network. To do this, edit the file /etc/ufw/before6.rules, and add the following lines directly BEFORE the “COMMIT” statement at the end of the file:

# Forward IPv6 packets associated with an established connection
-A ufw6-before-forward -i he-ipv6 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow "No Next Header" to be forwarded or proto=59
# See http://www.ietf.org/rfc/rfc1883.txt (not sure if the length
# is needed as all IPv6 headers should be that size anyway).
-A ufw6-before-forward -p ipv6-nonxt -m length --length 40 -j ACCEPT

# allow MULTICAST to be forwarded
# These 2 need to be open to enable Auto-Discovery.
-A ufw6-before-forward -p icmpv6 -s ff00::/8 -j ACCEPT
-A ufw6-before-forward -p icmpv6 -d ff00::/8 -j ACCEPT

# ok icmp codes to forward
-A ufw6-before-forward -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-request -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-reply -j ACCEPT

# Don't forward any other packets to hosts behind this router.
-A ufw6-before-forward -i he-ipv6 -j ufw6-logging-deny
-A ufw6-before-forward -i he-ipv6 -j DROP

At this point I saved everything and rebooted (though you could just bring up the he-ipv6 interface), and everything came up correctly. I was able to confirm that I had a valid Global scope IPv6 address associated with (in my case) my br0 interface, and that I could successfully ping6 -c 5 ipv6.google.com via it. I was also able to check that my laptop had automatically found and configured a valid Global scope IPv6 address for its eth0 interface, that it could ping6 my home server and external IPv6 sites, and that it was possible to browse IPv6-only websites from it.

Simple install of OpenVPN with Ubuntu server and Mint client

Since I’ll have my laptop and phone with me while I’m in hospital, I’m expecting to be able to keep in touch with family and friends. However, it would also be useful to have access to my home network from hospital. I can already SSH into a command line on my home server, but I’ve been meaning to get a proper VPN set up for some time now, so this seemed like the excuse I needed to actually make it happen.

In my case, I have a home network with a NAT’d router that has a static IP address, and proper DNS & reverse DNS entries associated with it. I then have an Ubuntu Server running 24×7 behind it, providing various services to my home network. I simply want my Mint laptop to be able to VPN into my home network, securely, and on demand.

It turns out to be really straightforward, though there were a few quirks to overcome! Fundamentally I followed the instructions in this PDF, found on the Madison Linux User Group website, to set up the server side of the VPN. However, there are a few problems with it that need correcting:

  1. Under “Installing the Network Bridge and Configure Network Settings” on page 3, be aware that if you are using dnsmasq to manage your DNS and DHCP leases, you will need to change its configuration file, /etc/dnsmasq.conf, to listen on br0 rather than eth0.
  2. Under “Create the Server Keys and Certificates”, point (3), note that there are two KEY_EMAIL variables where there should be only one, and the variables KEY_CN, KEY_NAME and KEY_OU should ideally be filled in for completeness.
  3. Under “Create the Server Keys and Certificates”, before starting on point (4), you (a) need to edit the file /etc/openvpn/easy-rsa/whichopensslcnf, and remove all occurrences of the string “[[:alnum:]]”, and (b) need to make sure that you can write to the file “~/.rnd”; by default it isn’t writeable except by root, so you need to issue the command “sudo chown youruserid ~/.rnd”.
  4. Under “Generate the Client Keys and Certificates” on page 4, when you come to carry out the instructions in point (1), to create the client keys, you must set a unique KEY_CN by changing the last command to something like “KEY_CN=someuniqueclientcn ./pkitool client1”.
  5. Under “Create the OpenVPN Server Scripts”, on page 5, the code to go into the up.sh and down.sh scripts is wrong. The brctl command is located in /sbin, not /usr/sbin. Change both scripts accordingly (see the sketch after this list).
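
For reference, the corrected scripts end up looking something like this sketch; the br0 bridge name and the argument handling follow the usual OpenVPN bridging convention (OpenVPN passes the tap device name and MTU as the first two arguments), so adjust to match the guide if yours differ:

#!/bin/sh
# up.sh - attach the tap device OpenVPN just created to the bridge
BR=br0
DEV=$1
MTU=$2
/sbin/ifconfig $DEV mtu $MTU promisc up
/sbin/brctl addif $BR $DEV

and:

#!/bin/sh
# down.sh - detach the tap device from the bridge and take it down
BR=br0
DEV=$1
/sbin/brctl delif $BR $DEV
/sbin/ifconfig $DEV down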

For the client side of the VPN, I followed the instructions in this PDF, which worked perfectly.

Once everything has been rebooted and is working properly, it is possible to VPN into my home network from the internet. All traffic from my laptop then flows through my home network via a secure VPN. Essentially my laptop becomes part of my home network for the time the VPN is running.

Which is exactly what I wanted.

New ISP, and playing with callerid

Back at the start of last month (the 1st of March) Sky announced that they were going to acquire the UK fixed line businesses of Telefonica. This amounted to about 500,000 O2 and BE Unlimited broadband and telephone customers, including me.

On the surface, this looked like a really good deal for almost everyone – O2 and BE Unlimited had been steadily losing customers for a long time, because Telefonica wasn’t making the investments to allow them to keep up with technologies such as FTTC. Sky was already offering some of those technologies (particularly FTTC) and has a history of investing to create premium solutions for their customers. And Sky would gain the additional customers to jump them past Virgin, moving from the third largest ISP in the UK to the second largest, just behind BT.

However, there was a fly in the ointment; BE Unlimited in particular started life as the niche-market provider-of-choice for the demanding techie. As such they offered some less common features – static IP addresses (either individually or in blocks), full Reverse DNS resolution, Annex M ADSL2+ with user-configurable noise margin, multiple bonded connections … the types of features that are usually reserved for very expensive business class connections. And Sky, as a mass-market ISP, did not (and to my knowledge still do not) offer any of those kinds of features.

What the announcement did was galvanise many of the BE Unlimited customers to start looking for alternatives, in case the features that they depended on would go away when their service was transferred to Sky. In my case my requirements were fairly simple; at least 1 static IP address with configurable Reverse DNS, no port-blocking, no usage limits, and the ability to get at least 12Mb down and 2Mb up. And ideally I didn’t want to pay any more than I already was.

Frankly, I was expecting it to be a real challenge, but to my surprise the second ISP I called was able to meet all my needs. Admittedly the nice lady in the call-centre had never heard of Reverse DNS before, but she took down all my requirements, and put me on hold while she talked to their technical support team. Several long minutes later, and we were in business; their offer was 60Mbps down, 20Mbps up on an unlimited FTTC connection, with one static IP address with reverse-DNS, and no ports blocked. Even better, it was £2 a month less than I currently paid, provided I also transferred my phone service to them.

And so I became one of what appears to be the many who have left BE Unlimited rather than be moved to Sky. Which is rather sad really – they were an exceptional ISP who got bought by a large organisation (Telefonica) and then starved of the cash they needed to continue to grow. The end result is that they won’t exist soon.

The one thing that I have “lost” in this process of changing ISPs is the “anonymous caller rejection” feature on my phone line. BE provided that for £2 a month, whereas Plusnet wanted £4. Which is a lot for a feature that, in practice, I found rather limited anyway.

Which brings me to the point of this post. Rather than paying for Anonymous Call Reject, I’m going to use a modem to monitor the callerid of inbound phone calls, and then decide on a call-by-call basis what to do with each call. I can either let the call continue (and ring the phones) or I can pick up and immediately hang up the line (using the modem) before the phones even ring. That should enable me to cut off all the callers with withheld numbers, just as the £4 a month ISP option would. However, I can also reject calls from “unavailable” numbers, which are usually cold callers using VoIP systems. And I can also easily implement a system that will reject all calls outside of certain hours (say 8am to 6pm), unless the caller is on a whitelist of known contacts. Which is a lot better than the ISP anonymous caller rejection can achieve.

To do this I’m using an old USR 5668B serial-attached modem that correctly understands the UK’s completely proprietary callerid system. This modem is no longer available, and getting a clear answer on which modems can actually cope with UK callerid is, er, troublesome. However, I have discovered reports that the currently available USR 5637 USB fax modem works well with UK callerid. See here for ordering information (no affiliation).

To access the modem I’m using a simple self-written Perl script built on an external library called Device::Modem, which gives me a nice, high-level abstracted interface to the modem, making the programming much easier. My code is based on some example code provided with Device::Modem for reading the callerid, but in my case it now logs the inbound calls to a file so I can better investigate who to put on a future whitelist.

Update: code attached for interest.

#!/usr/bin/perl
#
# Capture Callerid information for incoming calls, using the modem attached to /dev/ttyS0
#
use Device::Modem;

#
# Init modem
#
my $port = '/dev/ttyS0';
my $baud = 57600;
my $modem = new Device::Modem ( port => $port );
die "calleridmonitor.pl: Can't connect to port $port : $!\n" unless $modem->connect( baudrate => $baud );

#
# Set up signal catcher to ensure modem is disconnected in event of kill signal
#
$SIG{INT} = \&tsktsk;

sub tsktsk {
    $modem->disconnect();
    die "calleridmonitor.pl: SIGINT received.";
}

#
# Enable modem to return Caller ID info
#
$modem->atsend('AT#CID=1'.Device::Modem::CR);
my $response = $modem->answer(undef, 2); # 2 seconds timeout

# Details of the most recent call
my $call_number = '';
my $call_date = '';
my $call_time = '';

# Do forever
while( 1 ) {

    # Listen for any data coming from modem
    my $cid_info = $modem->answer();

    $cid_info =~ s/\R//g; # Get rid of any line-break weirdness

    #
    # If something received, take a look at it
    #
    if( $cid_info =~ /^RINGDATE\s=\s(\d{4})TIME\s=\s(\d{4})NMBR\s=\s(.*)/ ) {
        #
        # It matches our (previously observed) callerid string
        #

        # Date (mmdd) in $1
        $call_date = $1;
        # Time (hhmm) in 24hr format in $2
        $call_time = $2;
        # Phone number (string) in $3
        $call_number = $3;

        #
        # Write out the date, time and number to a log file
        #

        # Assuming we don't want the file open all the time, so open, append and close the log
        open (FH, ">>/home/richard/callerid/phone.log") || die "calleridmonitor.pl: Couldn't open file: $!\n";
        print FH "$call_date $call_time $call_number\n";
        close FH;
    }
    # Repeat forever
}
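
I start the monitor in the background on the server, along these lines (the path is simply where I happen to keep the script):

chmod +x /home/richard/callerid/calleridmonitor.pl
nohup /home/richard/callerid/calleridmonitor.pl >/dev/null 2>&1 &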

Calling home, free of charge

I’ve just been through a period of travel hell; individually each of the three back-to-back trips was interesting, useful, and in some ways quite good fun. But they were back to back. So over a 17-day period, I’ve actually had only 3 days in the UK, and on two of those I was still working. Of course, mention business travel to anyone that doesn’t do it, and it brings to mind visions of exotic locations and lavish expense accounts. Whereas the reality tends to be cramped economy class travel, very long working days, and lonely hotel rooms a long way from friends and family.

More importantly it means that just doing the normal day-to-day family things, like chatting to your kids about their day in school, can rapidly become an extortionately expensive luxury that you feel ought to be kept to a brief minimum. Especially when the company that you’re travelling for thinks that it shouldn’t pay for those phone calls – which particularly irks me.

And that got me thinking – I actually have all the facilities I need to enable me to call home to my family for nothing. My company expects me to need an Internet connection in whatever hotel I stay in, and fully funds it. I carry a laptop and an Android Smartphone. In combination with the rather sophisticated phone that I have at home, I can talk to my family for as long as I want for no additional costs, using a technology called VoIP, based on an open standard called SIP.

My home phone is a Siemens Gigaset S685 IP DECT system, with 5 handsets. It’s what the marketing world likes to term “pro-sumer”, by which they really mean it’s got lots of features that most consumers will never use, but it’s not really got enough features for true commercial use. They also mean it’s a bit expensive for a consumer product.

But in this case, we’re talking about a DECT phone that connects to both my home phone line and my home broadband, and can have up to 7 logical phone lines connected to it – the physical “POTS” line, and 6 VoIP connections over the internet. The base unit can support separate conversations on up to 3 lines in parallel, with as many handsets connected to each of those lines as required. Each handset can act as a speaker-phone, or have either a headset or Bluetooth earpiece attached to it. It can even do call routing, where it looks at the number you dial, and decides which line to place your call on. In short, it’s absolutely packed with features.

The key to the free calls home is of course, the VoIP lines, because as of version 2.3, Android can also make VoIP calls. The trick is simply getting the two to talk to one another, while they are in different countries.

So first you need to find a SIP provider for your home phone and your Smartphone. The best way that I’ve found to do this is to set up a SIP-based virtual PBX. You create a free account with them (your PBX) and then add users to your PBX. Each user is given their own SIP credentials so they can log on, and you can associate an extension number with each of those users, allowing them to easily exchange calls – which is exactly what I need, as (unlike my Android smartphone) the handsets on my old Gigaset cannot call SIP URIs directly.

The first provider I came across that allows this is OnSip, in the USA. My experience with them has been good, but there are many others out there too. Unfortunately it’s not perfect – for me, there are a couple of quirks with their service. Firstly, they don’t seem to expect anyone outside the USA to be using their service, so I cannot enter my address correctly. Which I suspect is technically a breach of their T&Cs. And secondly, it means that all the call tones you’ll hear when using this (ringing, engaged, unobtainable etc) will be American rather than British. I can live with that, but if you choose to go down this route too, DO NOT ENABLE THE E911 OPTION. You could theoretically call out US emergency services to whatever pseudo-fictitious address you have registered with them, which would not be good.

To make it work:

  1. Register with your chosen free SIP PBX provider. I’ll assume OnSip, who will ask you to register a userid, then validate it by email, before letting you set up your first PBX.
  2. [Screenshot: Registering for OnSip PBX]

    When you come to define that PBX, you’ll see a screen similar to this one, asking for initial configuration information. You can call the domain name (which is essentially the Internet name of the PBX) anything you like. Incidentally, changing that country field seems to do nothing. It doesn’t even change the format of the information they collect for your address or (real) phone number.
  3. [Screenshot: Creating a new user]

    Having now got a SIP PBX available, we can add some users to it. Each user is roughly equivalent to a telephone on a normal PBX, but in this case the user accesses the PBX by way of some SIP credentials. The users can be called via those credentials, or (if the caller is another user of this PBX) the extension number that is associated with that user. This happens irrespective of the device that the user is using those credentials on, or its location.
  4. [Screenshot: List of virtual PBX users]

    After entering a user for the Gigaset (my house phone) and one for my smartphone, I have a PBX with two users associated with it. I’ve obscured some critical pieces of information to maintain some privacy, but fundamentally the OnSip system gives me a set of SIP credentials for each “user” of the system (bottom right on the image), and associates an extension with them too.
  5. Next we need to get the Gigaset to register with the SIP PBX. To do this, logon to the Gigaset web interface, select the “Settings” tab, and then navigate to Telephony, Connections.
    [Screenshots: Gigaset VoIP provider list; basic and advanced VoIP provider settings]

    Now define a new provider by clicking on one of the “Edit” buttons as seen in the first of these screenshots. This will bring up the basic settings page seen in the second screenshot. Into this screen enter the data found on the OnSip user configuration screen under the Phone Configuration heading. Copy the OnSip username field into the Gigaset username field, the OnSip Auth Username field into the Gigaset Authentication Username field, the OnSip SIP Password into the Gigaset Authentication Password field, and then click the “Show Advanced Settings” button, which will extend the form with the additional fields seen in the third screenshot. Now copy the OnSip Proxy/Domain field into the four Gigaset fields: Domain, Proxy Server Address, Registrar Server, and Outbound Proxy. When you save the settings the Gigaset will attempt to register with the OnSip PBX. Sometimes this takes a few seconds. You may need to refresh the browser to see the status change to Registered.
  6. Now we need to make the Android Smartphone register to the OnSip PBX too. To do this, bring up the Android Settings by pressing “Menu” from the home screen, and tap “Settings”.
    [Screenshots: Android call settings; Internet calling (SIP) accounts; SIP account details]

    Navigate to the Call Settings, and tap it to reveal the screen in the first screenshot.
    Tap on “Use Internet Calling” and set it to “Ask for each call”. Then tap on Accounts to bring up the Internet Calling (SIP) Accounts screen where we can enter SIP accounts associated with the phone. See the second screenshot.
    Now add a new account for the OnSip PBX by tapping “New Account”; this will bring up a screen like the one shown in the third screenshot, into which you need to enter your credentials.
    The content of each of the fields (from the OnSip phone configuration information) should be obvious by now. When you save the account you will want to check that the registration information is correct. The easiest way to do this is to enable the “Receive incoming calls” option (back on the first screenshot again), which will force Android to register all its accounts with their associated providers. If you get an error then either the provider is down (possible) or the settings are incorrect (more likely). Leaving this option enabled forces Android to keep those connections active, which runs all the radios and eats the battery, but allows incoming VoIP calls to your Smartphone (say from the Gigaset). In my experience it’s too power-hungry to use in this mode, other than very sparingly. Fortunately you can make outgoing calls from the Smartphone without enabling it.
  7. [Screenshot: Android Internet Calling enabled contact]

    Finally you need to define a contact in the Smartphone contacts to use with Internet Calling. As all my contacts are normally managed for me by Lotus Traveler, which has no concept of Internet Calling, I defined a new local-only contact that is not synched with any of my accounts (ie, Google or Lotus Traveler) and used that. Enter the name of the contact as normal, then scroll all the way to the bottom of the contact, where you will find a “More” section. Expand that, and continue to scroll to the bottom, where you will find a field for “Internet call”; into that simply enter either the OnSip SIP URI of your Gigaset, or its OnSip extension number.

Note that this really only works when connected to a reasonably good quality WiFi network. Poor quality networks seem to give quite variable results. Sometimes they still work, other times one end or the other may experience audio problems and/or dropped calls. It seems to work just fine through both the IBM and my home firewalls, even at the same time. I’ve not checked the actual audio codecs in use, but sound quality is subjectively better than a normal cellular call. Neither Android nor the Gigaset seems to inject comfort noise (ie low-level background noise when no-one is speaking), so the pauses in a conversation are totally silent, which can be slightly disconcerting.

Normally, if you want to dial the Smartphone from the Gigaset, you would need to indicate to the phone that it should send the call over the appropriate VoIP provider. This quickly becomes a pain, but it’s easy to set up a simple dial plan on the Gigaset so calls that start “700” (remember I made my extensions 7000 and 7001?) go out over the OnSip VoIP connection automatically, which makes the solution really easy to use (ie family-friendly) from the Gigaset.

Finally, there is a really interesting technology called iNum available which, when combined with SRV records in special DNS zones, would allow some really cool automatic rerouting of calls over different networks to different endpoints, depending on context. In theory the network could understand whether I was at home or out with my mobile, and route the call appropriately. It could also do smart things like look at the inbound number and the time, and route business calls to voicemail out of office hours, but still let personal calls through as normal. Sadly it’s not (as far as I can tell) implemented by any of the major telecoms ISPs in the UK.

If only it were implemented.

Building a scan server on Ubuntu Server 12.04

I have an old but capable little Canon scanner that I’ve used for various administrative tasks for a couple of years now. It connects to my laptop via USB, and draws its power from that link too, which makes it very convenient and easy to use.

Except that for the last few months my daughters have been making increasing use of the scanner for their homework too. Which is fine, but means that the scanner is being carried around the house, and regularly being plugged in and out of different laptops. Which is probably not a good recipe for a long and trouble-free working life.

So today I configured a simple scan server. The scanner is now “permanently” attached to my home server, and anyone who wants to scan documents can do so from their own computer, over the network, using the scanner on the home server. No more hunting for missing USB cables, or even a missing scanner!

This is surprisingly easy to achieve under Linux, as the scanning subsystem (SANE) is implemented as a client/server system, much like the printing system. The only thing that makes it a bit more convoluted is the involvement of a superserver like inetd or one of its equivalents.

On the server:

  1. Plug in the scanner
  2. sudo apt-get install sane-utils
  3. Make sure your /etc/services file contains:

    sane-port 6566/tcp sane saned # SANE

  4. Configure inetd or xinetd to autostart saned. In my case I use xinetd, so I need to ensure that /etc/xinetd.d/saned contains:

    service sane-port
    {
        port        = 6566
        socket_type = stream
        server      = /usr/sbin/saned
        protocol    = tcp
        user        = saned
        group       = saned
        wait        = no
        disable     = no
    }

    Now restart xinetd (by sudo /etc/init.d/xinetd restart) so that saned can be autostarted when it’s required.
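
    You can confirm that xinetd is now listening on the SANE port with something like:

    sudo netstat -ltnp | grep 6566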

  5. Next we need to configure saned. This is controlled by the file /etc/default/saned, which should be changed to look like:

    # Defaults for the saned initscript, from sane-utils

    # Set to yes to start saned
    RUN=yes

    # Set to the user saned should run as
    RUN_AS_USER=saned

  6. At this point we need to make sure that saned can access the scanner. I did this by setting up a udev rule to arrange for the permissions on the underlying device to be set so saned can access it. For my convenience I also set up a “well known” symbolic name (/dev/scanner) to the scanner device too, as that base device can change depending on what is plugged into the server at any point in time; I’m pretty sure saned doesn’t require this though. I achieved this by making the new file /etc/udev/rules.d/90-scanner.rules contain the single line:

    ATTRS{idVendor}=="04a9", ATTRS{idProduct}=="2206", SYMLINK+="scanner", MODE="0660", OWNER="root", GROUP="saned"

    The idVendor and idProduct are obtained by running the lsusb command, and extracting the USB vendor and product identifiers from the scanner entry.
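
    For example, on my system the scanner appears in the lsusb output something like this, with the ID field giving the idVendor:idProduct pair used above:

    Bus 001 Device 004: ID 04a9:2206 Canon, Inc. CanoScan N650U/N656U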

  7. Next we need to configure saned itself. In this case, all we need to do is define the systems that can connect to it. This is done by making the file /etc/sane.d/saned.conf read:

    # saned.conf
    # Configuration for the saned daemon

    ## Daemon options
    # Port range for the data connection. Choose a range inside [1024 - 65535].
    # Avoid specifying too large a range, for performance reasons.
    #
    # ONLY use this if your saned server is sitting behind a firewall. If your
    # firewall is a Linux machine, we strongly recommend using the
    # Netfilter nf_conntrack_sane connection tracking module instead.
    #
    # data_portrange = 10000 - 10100

    ## Access list
    # A list of host names, IP addresses or IP subnets (CIDR notation) that
    # are permitted to use local SANE devices. IPv6 addresses must be enclosed
    # in brackets, and should always be specified in their compressed form.
    #
    # The hostname matching is not case-sensitive.

    #scan-client.somedomain.firm
    #192.168.0.1
    #192.168.0.1/29
    #[2001:7a8:185e::42:12]
    #[2001:7a8:185e::42:12]/64
    192.168.255.0/24

    # NOTE: /etc/inetd.conf (or /etc/xinetd.conf) and
    # /etc/services must also be properly configured to start
    # the saned daemon as documented in saned(8), services(4)
    # and inetd.conf(4) (or xinetd.conf(5)).

    In this case you can see I’ve defined it so anything in the 192.168.255.xxx subnet can connect to saned.

On a standard Ubuntu desktop client, only one action needs to be taken to allow it to seamlessly make use of the scan server:

  1. Modify the /etc/sane.d/net.conf file, so it reads:

    # This is the net backend config file.

    ## net backend options
    # Timeout for the initial connection to saned. This will prevent the backend
    # from blocking for several minutes trying to connect to an unresponsive
    # saned host (network outage, host down, …). Value in seconds.
    connect_timeout = 20

    ## saned hosts
    # Each line names a host to attach to.
    # If you list "localhost" then your backends can be accessed either
    # directly or through the net backend. Going through the net backend
    # may be necessary to access devices that need special privileges.
    # localhost
    # My home server:
    192.168.255.20

  2. From this point onwards, you should be able to start “Simple Scan” on the client machine, and see the scanner attached to the server machine, as though it was locally attached. You can alter all the settings as required, and scan over the network as needed.
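
    The same check can be made from the command line with the sane-utils tools; the remote scanner should be listed with a net: prefix:

    scanimage -L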

Internet of things

Many years ago, back when I was still writing software for a living, I was involved in creating some software that is now called WebSphere MQ. This was designed to integrate different types of software and systems together, easily and without disruption. I always used to describe it as email for programs. Over time that whole concept was refined and extended, and the basic concepts are now used as the building blocks for what is often called the internet of things. This is a concept where sensors and actuators are connected together (usually via the internet) so they can be more easily accessed. IBM’s Smarter Planet theme takes these sensors and actuators, and applies intelligence to them, allowing us to gain incremental benefits, sometimes in unexpected areas.

For a long time now I’ve been monitoring various sensors around my home, connecting them to my own internal data network, where I’ve been able to process the output of these sensors in various ways. I’ve been doing this using a protocol called MQTT, implemented on a very simple server, called Really Small Message Broker (RSMB), which was written by a colleague of mine. This has worked really well for my needs, and being designed for embedded use, it requires practically no resources to run.

However, some time ago a service became available on the Internet called Pachube (now renamed to Cosm), which is essentially a central clearing house to which anyone can publicly connect and classify their sensors, creating a potentially huge number of datafeeds. These can then be aggregated, selected and reused by other people, giving individuals access to untold numbers of sensors, and allowing them to create innovative new services that others will eventually build further services on. It’s a great concept that I wanted to contribute to. Initially, as a trial, I decided to share the temperature feed from my home office with the world.

To achieve that, I did the following:

  1. Created an account on Cosm.
  2. Defined a datafeed, which I named (“Study Temperature”), tagged with appropriate metadata (“physical”, “temperature”, etc)
  3. I then created and named a datastream to be associated with that datafeed, consisting of a set of single data points (namely the temperature in my study).
  4. I then created an API key to provide access to that datafeed. Normally you would limit the permissions associated with that key, but because of the way I intended to use it, I created a key that would grant complete access to all my Cosm resources.
  5. Since I am already capturing the temperature in my study and feeding it into the RSMB on my home server, I just needed to share that data with Cosm, which I was able to do by connecting my RSMB to Cosm with a Broker Connection, and some topic replication. See this post by Nicholas O’Leary, another colleague, who describes exactly how to do this. Note also my comments, where I conclusively prove that RSMB cannot do name resolution – you can only use dotted-quad IP addresses.
  6. Once the broker.cfg file has been saved, and the RSMB restarted, you should see data starting to arrive in Cosm.

At this point, I added a little icing to the cake. I could see that I was now sharing information to the world via Cosm, but Cosm also integrates with some other services on the internet, which together can be made to give some value back to me. In particular, we experience occasional power outages at our house, and I would like to know as soon as possible if our power has gone out so that (if necessary) I can call our key-holder and get them to flip the power back on and save the contents of our freezers.

Since I now have a permanent connection between my home server and the Cosm servers, I can use a feature of RSMB called notifications. When enabled, this publishes a message to a predefined topic on each broker describing the state of the connection between them. Essentially once a connection is established it writes “the connection is up” to that topic, and each broker tells the other ‘if you lose contact with me, write “the connection is down”‘ to that topic. This means Cosm can have a datastream whose content describes the current status of the connection between my house and Cosm.

So next I use the Cosm web GUI to define another Cosm datastream and datafeed for these notifications.

By then adding a Cosm trigger to that datastream, I can arrange for Cosm to send a Tweet to an account on Twitter. Simplistically, I could get Cosm to Tweet directly at me. But in practice I have a separate Twitter account for my house (everyone has a twittering house these days) and so I get Cosm to send a Tweet to that account.

Note that when using the Cosm web interface to define the twitter trigger, the results under Firefox were so badly formatted as to not be usable. I swapped to Google’s Chromium browser instead, from which it was possible to actually get the trigger defined and working.

My Twitter account both follows my house’s Twitter account, and has arranged for Tweets from my house’s account to be enabled for mobile notifications (ie, they are sent, at no cost, to my mobile phone as SMS messages).

Finally, I update my RSMB broker.cfg configuration to enable notifications and name the notification topic, and restart RSMB.

What this means in practice is that whenever the connection between my house and Cosm changes (coming up or going down) a message is published to a Cosm datafeed, which triggers Cosm to send a Tweet to my house’s Twitter account, which then shows up on my Twitter feed, and also causes Twitter to send me a mobile notification (an SMS message to my mobile phone) containing the message that was originally written to the Cosm datafeed. From that SMS message I can tell (from anywhere in the world) when a power failure occurs at my house, and when it has been recovered.

Easy 🙂

Just for reference, if you want to replicate this, here is my broker.cfg. Be aware that it will be line-wrapped:

#
retained_persistence true
#
connection cosm
try_private false
address 216.52.233.121 1883
username MRxxxxxxxxxxxAPI_KEYxxxxxxxxxxxxxxxxxxxxxxxx0g
clientid rmappleby
notifications true
notification_topic /v2/feeds/63101/datastreams/0.csv
topic "" out house/temperature/study /v2/feeds/63014/datastreams/0.csv
#
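
If you want to check the topic replication without waiting for a real reading, you can publish a test message to the local broker; here using the mosquitto command-line client, if you have it installed (the value is made up):

mosquitto_pub -h localhost -t house/temperature/study -m 21.5

A few seconds later the datapoint should appear in the Cosm datastream.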

Bitten by a Viper

In this case, the viper in question is an old industrial control computer, based on an Intel PXA255 processor, in PC104 form factor, running a minimal Linux. It was originally manufactured by Arcom Ltd, who have in the intervening years been taken over by Eurotech S.p.A., and it comes with a nice complement of default communications options.

Mine is a standard “full-fat” Viper board, with no additional expansion cards, but mounted in an industrial enclosure with an integral UPS battery-backup. Sadly it’s been lying around gathering dust in my study for about the last 5 years, and as part of a recent attempt to tidy up some of my clutter I decided I either needed to make use of it, or get rid of it.

A little searching unearthed the power supply, and then, according to the little red LED on the back panel, we had life.

The first problem was how to talk to it. This is a true industrial device – no screen or keyboard. The options are either to telnet or ssh in over the network connection, or use an ASCII terminal over a serial connection. And therein lay the first problem; it didn’t connect to my network as I was expecting, leaving only the serial option open to me. But as I’m almost exclusively laptop-based these days, finding a machine with a serial port, and then a serial cable to link it to the Viper, took quite some effort.

But having done so, and set up minicom with the correct serial port parameters (helpfully documented in the Viper manuals) I was able to login to the Viper, at which point a little debugging of the network stack revealed the issue – at some point I’d defined the Viper with a static IP address, on a different subnet to my current home network.

A few quick edits to return it to DHCP operation, and a little configuration work on my home server’s dnsmasq configuration, returned the Viper to my home network at a readily accessible IP address, allowing me to ssh in again.
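
The dnsmasq side of that is a one-line dhcp-host entry, which pins the Viper to a known address whenever it asks for a lease (the MAC address and IP here are illustrative):

dhcp-host=00:80:66:12:34:56,viper,192.168.0.60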

At this point, having proved that the system is working again, I now need to decide what to do with it; and in this case, I’m going to use it as part of my plan to implement a set of DIY ambient orbs, as I mentioned back in this post. The Viper happens to consume only 9W while network connected, and has a built-in I2C interface, which can connect to BlinkM LEDs which I can mount in my B&Q Cubo housings. This will allow me to control the colour of the orbs from a program running on the Viper.

Better than that though, I can then install an MQTT client on the Viper, allowing my Viper program to subscribe to a topic on my home server’s MQTT micro-broker. This will allow me to have the Viper change the colours of the orbs in response to messages published to the broker on my home server. At that point I can trivially integrate any sort of external input to the orbs just by administering my home server, and without needing to make further updates to the Viper.

So all I need to do now is to get the AEL development kit (cross-compiler etc) set up so I can write some code for the Viper, and then buy some BlinkM LEDs. Which sounds simple, but unfortunately the AEL development kit hasn’t been updated for some 4 years now, and isn’t keen on installing on my latest and greatest Ubuntu 10.10 development machine. I’ve put some calls out to colleagues who have also had experience with these Arcom Vipers, but so far no joy. Next step will be to see if Eurotech can provide a little help – after all, I suspect the original Arcom engineers will mostly still be working there. I guess the worst case will be to set up a *really* old Linux system in a virtual machine, and install the AEL development kit on that instead.

Network upgrade

As most of you are aware, I’ve had a wireless network at home for something like the last 8 or 9 years. Originally an 802.11b network, powered by an original “flying saucer” Apple Airport, it’s been upgraded a couple of times, finally ending up as an 802.11g network powered by a Linksys WRT54GL router, acting as an access point. And it’s worked really well.

But a few years ago I bit the bullet, and installed my own structured cabling system, based around Cat5e, such that I had at least a couple of ethernet ports in each major room in the house. Because as we all know, wires are always faster and more reliable than wireless. This necessitated building a rack in the loft, and installing a 24 port 10/100 switch, and some patch panels to allow me to connect up all the modular faceplates that were scattered around the house. And again, it’s worked really well.

But this last year I wanted to extend my network to the (detached) garage, and discovered that my wireless didn’t have the range. I also noticed that there were some other “black spots” around my house that had relatively poor coverage. And all 3 of my kids are now equipped with laptops, streaming music and movies around the house via the 10/100 network. And while it was generally coping well, occasionally things would get a little overwhelmed.

So this month, I upgraded.

I replaced the old switch with a shiny new 24 port 10/100/1000 switch from TP-Link, and the original WRT54GL (acting as my access point) got upgraded to an 802.11n 300Mbps dedicated access point, again from TP-Link. The result has been pretty impressive. All congestion on the wired network has now vanished. The new access point definitely has significantly better range (thanks to its MIMO setup), now easily reaching my garage; it has eliminated all but one of the wireless black-spots around the house, and improved the wireless speed pretty much everywhere.

And since my WRT54GL has been freed up by this exercise, I’ve been able to relocate it close to the wireless black-spot, add a second cell to my wireless network, and completely eliminate that last remaining black-spot too.

Of course, now I have no excuse for not completing my wireless sensor hub project…

Building a wireless sensor hub

Those of you not thinking “A what?” are probably already thinking “Why?”; the best answer I have is mostly just because I can. Of course, there was a little more to it than that. I had several little problems or projects in mind, and this had the potential to solve several of them at a stroke.

So, what were these problems? Well, I wanted to:

  • extend my home network into the garage
  • be warned if the power failed in the garage
  • monitor the temperatures outside, in the garage, and inside the garage freezer
  • have the option to install a 1-wire weather station on the garage roof

What I needed was a small wireless computer system, with a couple of ethernet interfaces, capable of bridging my home wireless network back onto wired ethernet in the garage. It also needed to run some simple scripts to report its status over the wireless network, and be capable of managing both a 1-wire network, and any other sensors that I might need in the future. Clearly it would be a bonus if it were cheap.

And it turns out that thanks to the ubiquitous nature of Linux, and the inquisitive nature of the world’s hacker community, it’s perfectly feasible to do all that with a single, cheap, off-the-shelf box; a Linksys WRT54GL wireless router. Which I just happen to have lying around spare. Of course, it also needs a special third-party version of the firmware, in combination with a little bit of reconfiguration, to act as a wireless bridge. And it also needs some “minor” hardware alterations to enable some hidden serial ports, allowing a simple (but incorrectly drawn!) serial to 1-wire adapter to be added inside the router’s case. So it’s not exactly trivial, but the price is definitely right, and it makes for an interesting project for a rainy Sunday. Or two.

Earlier this week I ordered all the parts to enable the serial ports, and to construct the serial to 1-wire adapter. Once they arrive I’ll combine the two circuits onto a single small piece of stripboard, which I can then install within the router case, exposing the 1-wire interface through a panel-mounted socket. In the meantime I can at least get on with the other aspects of the conversion.

So today I replaced the firmware, updated the configuration so the system acts as a wireless bridge, and installed software on both the router and my home server to allow my home server to interrogate the 1-wire sensors attached to the router, over the wireless network. Until I modify the router hardware and attach some sensors I can run the software on the router in a test-mode, which simulates a range of sensors, allowing me to test the rest of the solution.

Taking this in more detail:

  1. I chose to use OpenWRT as my replacement firmware. This is fairly similar to a standard Linux distribution, but has the advantage of a simple web-based GUI (called Luci) for management. Better however, is a replacement web-based GUI called X-Wrt, who package up their GUI and the base OpenWRT firmware together, in ready-to-flash images. For my WRT54GL, I needed the image at http://downloads.x-wrt.org/xwrt/kamikaze/8.09.2/brcm-2.4/default/openwrt-wrt54g-squashfs.bin.
    This can be used with the standard Linksys firmware “upgrade” to reflash the router. If you have already reflashed the router with a third-party firmware then you’ll typically find that you must use a .trx file. Fortunately you can convert .bin files to .trx files, as .bin files are simply .trx files with a header on the front. Usually the header is 32 bytes long; you can tell by examining the .bin file, and looking for the start of the .trx data, which starts with “HDR0”:

    richard@t60p:~/Downloads$ hexdump -C openwrt-wrt54g-squashfs.bin
    00000000 57 35 34 47 00 00 00 00 09 0c 1d 04 47 01 55 32 |W54G........G.U2|
    00000010 4e 44 00 00 1f 00 00 00 00 00 00 00 00 00 00 00 |ND..............|
    00000020 48 44 52 30 00 10 21 00 f3 aa 87 16 00 00 01 00 |HDR0..!.........|
    00000030 1c 00 00 00 04 09 00 00 00 c0 07 00 1f 8b 08 00 |................|
    00000040 00 00 00 00 02 03 a5 57 4d 6c 5b 59 19 3d be ef |.......WMl[Y.=..|

    You can then do the conversion with the command:

    dd if=openwrt-wrt54g-squashfs.bin of=openwrt-wrt54g-squashfs.trx bs=32 skip=1
  2. Having loaded the firmware, the router will reboot. You must then telnet into the router using one of the LAN ports. The router will have an IP address of 192.168.1.1. At this stage there is no password, so set one using the passwd command, which will allow you to use SSH and the web-based GUI.
    You now need to disable the telnet service. You can do this most easily from the web-based GUI; surf to http://192.168.1.1/index.html. Disable the telnet service, and then stop it. Your telnet connection will be dropped. Log back into the router using SSH instead; ssh root@192.168.1.1, entering your new password.
  3. You now need to configure the networking on the router. This is done using the files /etc/config/network and /etc/config/wireless. Back them both up, and then change them as follows:

    # /etc/config/network

    #### VLAN configuration
    config switch eth0
        option vlan0 "0 1 2 3 5*"
        option vlan1 "4 5"

    #### Loopback configuration
    config interface loopback
        option ifname "lo"
        option proto static
        option ipaddr 127.0.0.1
        option netmask 255.0.0.0

    #### LAN configuration
    config interface lan
        option type bridge
        option ifname "eth0.0"
        option proto dhcp
        # -or-
        # option proto static
        # option ipaddr an-ip-on-your-subnet
        # option netmask your-subnet-netmask
        # option gateway your-gateway-ip-address
        # option dns your-dns-server-ip-address

    #### WAN configuration
    config interface wan
        option ifname "eth0.1"
        option proto dhcp

    and:

    # /etc/config/wireless
    config wifi-device wl0
        option type broadcom
        option channel auto

    config wifi-iface
        option device wl0
        option network lan
        option mode sta
        option ssid yourssid
        # psk = WPA Personal, psk2 = WPA2 Personal
        # wpa = WPA Enterprise, wpa2 = WPA2 Enterprise
        option encryption psk
        option key "your key here"
  4. Now disable and stop the dnsmasq and firewall services using the web-based GUI. Failing to do this will prevent the bridge from passing your DHCP requests through to your home network correctly.
  5. Reboot the router, and it should connect to your wireless network, requesting a dynamic IP address for its bridge interface. In my case, I configured my main DNS/DHCP server to always give my WRT54GL the same “dynamic” IP address whenever it connects, making it easier to access. You can achieve the same effect by defining the bridge interface with an appropriate static IP address if you prefer.
  6. I then SSH’d back into the router, and updated the package management system by running the command opkg update. I was then able to install OWServer on the router by simply entering the command opkg install owserver. As I have no 1-wire hardware for OWServer to examine, for the time being I use it in simulation mode, using the command owserver --fake=05,10,21 --foreground, which allows me to test my home server’s connectivity to the OWServer software.
  7. On my home server, I downloaded http://sourceforge.net/projects/owfs/files/owfs/2.7p29/owfs-2.7p29.tar.gz/download, which I saved locally. I then untarred this, and compiled (as a simple test) the OWShell programs. Run ./configure in the root of the untarred directory, then change down into owfs-2.7p29/module/owshell, and run the commands:

    make
    sudo make install

    I can now do owdir -s router-host:4304 and be presented with a response from the OWServer running on the router. From here it’s a short step to getting the Perl and PHP language bindings created, allowing me to easily script up access to the 1-wire network hosted on the WRT54GL. But first I need to get some real sensors attached to the router…
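
    In the meantime the owshell tools work happily against the fake sensors; reading one is a one-liner (the sensor ID below is one of the simulated devices, so treat it as illustrative):

    owread -s router-host:4304 /10.67C6697351FF/temperature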

Digitising DVDs with Linux to stream to PS3

Having posted about converting the format of video files to suit streaming to a PS3, I got an email asking me how I actually convert a DVD into a digital file, and how my network is set up to stream videos to the PS3.

So, a little more explanation. I have a small, very low-power server that runs 24×7 in my house, and is connected to my 100Mbps ethernet network. On that server I have quite a lot of disk storage, and run Mediatomb, which is a DLNA-compliant UPnP media server. It serves my collection of video and audio files to any device on my network that wants to access them.

In the case of the video files, that device is my PS3, which is also connected to my ethernet network, and also has an HDMI connection to my LCD TV. With this configuration I can start the PS3, which automatically detects the Mediatomb media server and puts an icon on its GUI interface. I can then select that icon, get a list of the available video files, and by clicking on one, have it played on my TV for me.

The only configuration involved in this solution is of Mediatomb, which involves a couple of documented changes to its configuration file, and specifying where my media files are via its web interface. All very simple.

To create the media files from a DVD, I do the following:

  1. Extract the content of the DVD onto my hard drive using a program called vobcopy.
  2. Use ffmpeg to convert that content into a more compact form, better suited to streaming over a network.
  3. Copy the resulting file to my server where Mediatomb can access it.

Now let’s look at the first two steps in more detail:

DVDs actually store their content as a series of VOBs; these contain the actual video that you see when you play back a DVD. In general there are separate VOBs for the main movie, any adverts, trailers, bonus features, etc. And just to make life a little more complex, some DVDs store the main movie in more than one VOB, though fortunately vobcopy can hide that from us.

To make things a little more difficult, the movie industry have then encrypted the DVD to make it harder to do what we are about to do. To get around this, you must have installed libdvdcss2, which is a DVD decryption library that is available from the Medibuntu repository.

To find which VOB to extract from the DVD, simply run “vobcopy --info”. This will produce some output like:

richard@t60p:~$ vobcopy --info
Vobcopy 1.1.0 - GPL Copyright (c) 2001 - 2007 robos@muon.de
[Hint] All lines starting with "libdvdread:" are not from vobcopy but from the libdvdread-library

[Info] Path to dvd: /dev/sr0
libdvdread: Using libdvdcss version 1.2.10 for DVD access
[Info] Name of the dvd: STARWARS2UK_D1_2_PAL
[Info] There are 21 titles on this DVD.
[Info] There are 103 chapters on the dvd.
[Info] Most chapters has title 1 with 51 chapters.
[Info] All titles:
[Info] Title 1 has 51 chapters.
[Info] Title 2 has 1 chapter.
[Info] Title 3 has 2 chapters.
[Info] Title 4 has 1 chapter.
[Info] Title 5 has 1 chapter.
[Info] Title 6 has 1 chapter.
[Info] Title 7 has 1 chapter.
[Info] Title 8 has 1 chapter.
[Info] Title 9 has 1 chapter.
[Info] Title 10 has 2 chapters.
[Info] Title 11 has 10 chapters.
[Info] Title 12 has 14 chapters.
[Info] Title 13 has 1 chapter.
[Info] Title 14 has 1 chapter.
[Info] Title 15 has 2 chapters.
[Info] Title 16 has 1 chapter.
[Info] Title 17 has 4 chapters.
[Info] Title 18 has 2 chapters.
[Info] Title 19 has 2 chapters.
[Info] Title 20 has 1 chapter.
[Info] Title 21 has 3 chapters.

[Info] There are 21 angles on this dvd.
[Info] All titles:
[Info] Title 1 has 1 angle.
[Info] Title 2 has 1 angle.
[Info] Title 3 has 1 angle.
[Info] Title 4 has 1 angle.
[Info] Title 5 has 1 angle.
[Info] Title 6 has 1 angle.
[Info] Title 7 has 1 angle.
[Info] Title 8 has 1 angle.
[Info] Title 9 has 1 angle.
[Info] Title 10 has 1 angle.
[Info] Title 11 has 1 angle.
[Info] Title 12 has 1 angle.
[Info] Title 13 has 1 angle.
[Info] Title 14 has 1 angle.
[Info] Title 15 has 1 angle.
[Info] Title 16 has 1 angle.
[Info] Title 17 has 1 angle.
[Info] Title 18 has 1 angle.
[Info] Title 19 has 1 angle.
[Info] Title 20 has 1 angle.
[Info] Title 21 has 1 angle.
[Info] Using Title: 1
[Info] Title has 51 chapters and 1 angles
[Info] Using Chapter: 1
[Info] Using Angle: 1

[Info] DVD-name: STARWARS2UK_D1_2_PAL
[Info] Disk free: 74104.160156 MB
[Info] Vobs size: 5733.919922 MB

It’s highly likely that the VOB with the most chapters is the main movie; in this case title 1. We can check that by running the command “mplayer dvd://[vob_number]”. If we see the main movie playing then we can extract that VOB to our hard disk by running the command “vobcopy --title-number [vob_number]”. vobcopy will then proceed to decrypt and copy that VOB to your hard disk (as STARWARS2UK_D1_2_PAL3-1.vob in this case).

Now we can convert that (very large) file into something smaller and easier to stream. This uses exactly the same command as the last post: ffmpeg. This time however, we need to make sensible guesses for the bitrates that we want to use for the video and audio streams. Personally I go with 2000k for the video, and 192k for the audio. It’s good enough in terms of quality, and produces a file roughly a third of the size of the original VOB, which is much more amenable to being streamed over a 100Mbps ethernet network. If you’re hoping to do this over wireless, then you’ll probably need to compress even more and sacrifice quality … wireless just doesn’t have the bandwidth to do good quality video streaming.

So, the command to convert that VOB to a .mp4 file is:

ffmpeg -i STARWARS2UK_D1_2_PAL3-1.vob -vcodec libx264 -b 2000k -acodec libfaac -ab 192k STARWARS2UK_D1_2_PAL3-1.mp4

That command will take at least as long to execute as the movie would have taken to play. But once completed, the resulting file can then be copied to my server and be available for instant access in the future.