Something of a milestone

For those who have been following this blog long enough, you might have noticed that March 2014 marks 5 years since I underwent the operation that removed my tumour. Five-year survival is seen as a big milestone for cancer patients. You hear all kinds of talk of “being cured”, or that the likelihood of the cancer returning is now “comparable to the risk of contracting the cancer in the first place”.

Sadly, neither is exactly true – it’s just a convenient measure of the effectiveness of the treatment regimes that the doctors use. You can read more on this here, but ultimately, XKCD explains what this really means to me far better than I can.

But having said all that, 5 years is still a major milestone. It’s a long time with (in my case) no sign of any recurrence of the cancer, and in general the longer I live with no sign of cancer, the better my chance of having actually beaten it. The bottom line for me is that I’ve seen too many people die who were diagnosed at the same time as me to feel anything other than incredibly lucky to still be here.

That alone has to be worth celebrating.

Updated weatherstation software

I’ve been enhancing the software that I use to read data from my weatherstation. It’s been working well since I added some extra code to detect when the sensor readings are obviously corrupt (radio interference, uncontrolled concurrent memory accesses, etc), but the tracking of the wind direction was still not quite good enough.

To improve that, I’ve extended the number of readings used in the running average, and enhanced the algorithms that average the wind direction to take into account not only the direction of each data point (unit vectors), but also the speed of the wind in that direction (true vectors).

The result seems to track significantly more accurately to nearby high quality stations, but I am conscious that this is still presenting a manipulation of the data, rather than the actual data that a better sensor would provide. Having said that, it’s now producing a pretty good result for hardware that cost me only £50.
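For the curious, the core of the averaging looks something like this (a simplified Python sketch rather than the actual code from my software; readings are assumed to arrive as (direction in degrees, speed) pairs):

import math

def average_wind_direction(readings):
    # Sum true wind vectors, so that stronger winds carry more weight
    # than light ones when deciding the average direction.
    x = y = 0.0
    for direction, speed in readings:
        angle = math.radians(direction)
        x += speed * math.sin(angle)   # east-west component
        y += speed * math.cos(angle)   # north-south component
    if x == 0 and y == 0:
        return None                    # flat calm, no meaningful direction
    return math.degrees(math.atan2(x, y)) % 360

# e.g. a gusty north-westerly
print(average_wind_direction([(300, 4.0), (320, 6.0), (340, 2.0), (310, 5.0)]))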

The software can be downloaded from here.

Playing with callerid again

Back in April last year I started playing with an old external serial-attached modem to read the callerid of incoming calls. My intention was to intercept calls from direct marketeers. The concept was good, but I ran into problems with the modem; it took up a lot of space, kept overheating, and lacked any voice facilities, limiting what I could do with it. In addition (probably because of the modem constantly overheating) the software I was running kept crashing.

So in the end, I gave up on the idea.

But recently we seem to have had a spate of annoying calls from direct marketeers based in India, selling products for UK companies that are cynically avoiding the UK’s regulations around direct marketing opt-outs. The straw that broke the camel’s back was the call that came through at 6am on a Saturday morning.

The problem here is that the phone companies don’t care about this. They make money from all these calls, so it’s not in their interest to block them. They’ll sell me a service to block “withheld” numbers, but not numbers that show as “unavailable”. Unfortunately, these days the majority of the problem calls come from India, and show as “unavailable” because the Indian call centres are using VoIP systems to place their calls to the UK, and they deliberately ensure that their callerid is “unavailable”.

So I’m back on the idea of making my own solution to this problem. First off, I purchased a USR 5637 USB fax modem that is compatible with the UK callerid protocols. Even better, this is a voice modem too, so it can interact with a caller, sending audio as well as data down the phone line, and recognise touchtones. It’s also small, self-powered, cool-running and very reliable.

Next I spent some time looking to see what other people have done in this space, and eventually found this webpage, that describes a simple Bash script that intercepts calls that are not on a whitelist, and plays modem tones to them before hanging up. Recognised callers are allowed to ring the phone as normal, progressing to an answerphone if necessary. It’s not exactly the functionality that I want, but the simplicity is beguiling, and it’s trivial to extend it to do something closer to what I do want. And anyway, anything more sophisticated is going to require something like Asterisk, and switching my whole phone system over to VoIP, which is not going to be very family-friendly.

So for now, I’m gathering lists of all incoming calls to establish a basic whitelist, before moving on to do some really basic screening of calls.
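To give a flavour of where I’m heading, here’s a rough Python sketch of the screening loop (this is not the Bash script from that webpage, nor my finished code; the device name, the whitelist location and the AT+VCID=1 callerid command are assumptions that happen to suit my modem):

import serial   # pyserial
import time

WHITELIST_FILE = "/etc/callerid/whitelist.txt"   # one phone number per line (hypothetical path)

def load_whitelist():
    with open(WHITELIST_FILE) as f:
        return {line.strip() for line in f if line.strip()}

whitelist = load_whitelist()
modem = serial.Serial("/dev/ttyACM0", 115200, timeout=60)
modem.write(b"AT+VCID=1\r")                  # ask the modem to report callerid between rings

while True:
    line = modem.readline().decode("ascii", "ignore").strip()
    if line.startswith("NMBR") and "=" in line:   # e.g. "NMBR = 01234567890", or "O" for unavailable
        number = line.split("=", 1)[1].strip()
        if number in whitelist:
            print("Known caller %s - leave the phone to ring" % number)
        else:
            print("Unknown caller %s - answer and play modem tones" % number)
            modem.write(b"ATA\r")            # pick the line up; the caller hears fax/modem tones
            time.sleep(4)
            modem.write(b"+++")              # Hayes escape back to command mode...
            time.sleep(2)
            modem.write(b"ATH0\r")           # ...then hang up on them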

Plusnet IPv6 still delayed, so let’s go spelunking in a Hurricane Electric tunnel

When I last changed ISP (last year, when Sky bought my old ISP, BE Unlimited) one of my requirements was for my new ISP to have a roadmap for making IPv6 available. I’ve been waiting (impatiently) for my new ISP, Plusnet, to deliver on their initial “coming soon” statements. Sadly, like almost all ISPs, Plusnet are not moving quickly with IPv6, and are still investigating alternatives like carrier grade NAT to extend the life of IPv4. I can sympathise with this position – IPv6 has limited device support, most of their customers are not ready to adopt it, and trying to provide support for the necessary dual-stack environment would not be easy. But, the problem of IPv4 address exhaustion is not going away.

So at the end of last year they started their second controlled trial of IPv6. I was keen to join, but the conditions were onerous. I would get a second account, I would need to provide my own IPv6-capable router, I couldn’t have my existing IPv4 static IP address, I couldn’t have Reverse DNS on the line, and I had to commit to providing feedback on my “normal” workload. So, much as I wanted to join the trial, I couldn’t, as I wouldn’t be able to run my mailserver.

So I decided to investigate alternatives until such time as Plusnet get native IPv6 support working. The default solution in cases like mine, where my ISP only provides me with an IPv4 connection, is to tunnel IPv6 conversations through my IPv4 connection to an ISP who does provide IPv6 connectivity to the Internet. There are two major players in this area for home users, SixXS and Hurricane Electric. Both provide all the basic functionality, as well as each having some individual specialist features. I’m just looking for a basic IPv6 connection and could use either, but in the end Hurricane Electric appeared vastly easier to register with, so I went with them.

My current internet connection is FTTC (fibre to the cabinet) via a BT OpenReach VDSL2 modem and my ISP-supplied (cheap and nasty) combined broadband router, NAT and firewall. This gives me a private 192.168.0.0/16 IPv4 address space, for which my home server (a low-power mini-ITX system that runs 24×7) provides all the network management functions, such as DHCP and DNS.

What I want to add to this is a protocol-41 tunnel from the IPv6 ISP (Hurricane Electric, or HE) back through my NAT & Firewall to my home server. By registering for such a tunnel, HE provide (for free) a personal /64 subnet to me through that tunnel, allowing me to use my home server to provision IPv6 addresses to all the devices on my LAN. However, this connection is neither NAT’ed nor firewalled. The IPv6 addresses are both globally addressable and visible. So I also want my home server to act as a firewall for all IPv6 communications through that tunnel, to protect the devices on my network, without forcing them to all adopt their own firewalls. I was initially concerned that because my home server also acts as an OpenVPN endpoint, and so uses a bridged network configuration, getting the tunnel working might be quite awkward, but it turns out to make very little difference to the solution.

So, to make this work, first you need a static IPv4 address on the internet, and to have ensured that your router will respond to ICMP requests (pings!). Then you can register with Hurricane Electric, and “Create a Regular Tunnel”, which will result in a page of information describing your tunnel. I printed this for future reference (and in case I broke my network while making changes) but you can always access this information from the HE website.

You now need to edit /etc/network/interfaces. Add lines to define the tunnel, as follows, substituting the values from your tunnel description:

# Define 6in4 IPv6 tunnel to Hurricane Electric
auto he-ipv6
iface he-ipv6 inet6 v4tunnel
    address [your "Client IPv6 Address"]
    netmask 64
    endpoint [your "Server IPv4 Address"]
    ttl 255
    up ip -6 route add default dev he-ipv6
    down ip -6 route del default dev he-ipv6

Now add an address from your “Routed /64 IPv6 Prefix” to the appropriate interface – in my case, this is the bridge interface br0, but it’s more likely to be eth0 for you. This defines your server’s static, globally accessible IPv6 address:

# Add an IPv6 address from the routed prefix to the br0 interface.
iface br0 inet6 static
    address [any IPv6 address from the Routed /64 IPv6 Prefix]
    netmask 64

Since I am running Ubuntu 12.04 I now need to install radvd, which will advertise the IPv6 subnet to any systems on our network that want to configure themselves an IPv6 connection. Think of it as a sort of DHCP for IPv6. However, when I move to 14.04 sometime later this year I expect to be able to get rid of radvd and replace it with dnsmasq (which I already use for IPv4 DNS/DHCP), as the latest version of dnsmasq is reported to provide a superset of the radvd capabilities.
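For reference, the dnsmasq equivalent should boil down to a couple of lines in its configuration (untested here, and based on my reading of the dnsmasq documentation rather than experience):

# Advertise our routed prefix via Router Advertisements, SLAAC-only
enable-ra
dhcp-range=[Your Routed /64 IPv6 Prefix network address],ra-only

For now, though, radvd it is: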

sudo apt-get update
sudo apt-get install radvd

Then configure radvd to advertise our Routed /64 IPv6 Prefix, by creating the file /etc/radvd.conf and entering the following into it:

interface [your interface, probably eth0]
{
    AdvSendAdvert on;
    AdvLinkMTU 1480;
    prefix [Your Routed /64 IPv6 Prefix, incl the /64]
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};

Any IPv6-capable devices will now configure themselves (via SLAAC) an IPv6 address in your Routed /64 subnet, derived from the MAC address of the interface in question.

Now uncomment the line:

# net.ipv6.conf.all.forwarding=1

from the file /etc/sysctl.conf. This will allow your server to act as a router for IPv6 traffic.
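To apply that change without a reboot, you can reload the settings:

sudo sysctl -p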

Now we need to enable and then configure the firewall. I take no credit for this, as much of the information related to the firewall was gleaned from this post. As I run Ubuntu Server I’ll use ufw, the Uncomplicated Firewall utility, to configure the underlying iptables firewall. Alternative front-ends to iptables will work equally well, though the actual method of configuration will obviously differ. First I needed to enable the firewall for IPv6 by editing /etc/default/ufw, and ensuring the following options are set correctly:

# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=yes

and

# Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
# if you change this you will most likely want to adjust your rules
DEFAULT_FORWARD_POLICY="ACCEPT"

Now we need to enable the firewall (by default it’s disabled) and add some additional rules to it:

# Enable the firewall
sudo ufw enable
# Allow everything on my LAN to connect to anything
sudo ufw allow from 192.168.0.0/16
# Allow Protocol-41 connections from the Tunnel Endpoint Server (to run the tunnel)
sudo ufw allow from [Your "Server IPv4 Address"] proto ipv6
# Allow BOOTP service on port 67 from radvd
sudo ufw allow proto any to any port 67
# Allow my IPv6 addresses to access services on this server
sudo ufw allow from [Your "Routed /64 IPv6 Prefix" including the "/64"]

I also had to add a few more rules to cope with the external facing services that my home server provides to the Internet (mail, web, ssh, ftp, vpn etc).
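For illustration, those extra rules look something like this (the exact list obviously depends on which services you expose):

# Externally visible services on this server (examples only)
sudo ufw allow 22/tcp     # ssh
sudo ufw allow 21/tcp     # ftp
sudo ufw allow 25/tcp     # smtp
sudo ufw allow 80/tcp     # http
sudo ufw allow 443/tcp    # https
sudo ufw allow 1194/udp   # openvpn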

Finally I want to prevent all but a few specific types of external IPv6 connection from being made inbound into my network. To do this, edit the file /etc/ufw/before6.rules, and add the following lines directly BEFORE the “COMMIT” statement at the end of the file:


# Forward IPv6 packets associated with an established connection
-A ufw6-before-forward -i he-ipv6 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow "No Next Header" to be forwarded or proto=59
# See http://www.ietf.org/rfc/rfc1883.txt (not sure if the length
# is needed as all IPv6 headers should be that size anyway).
-A ufw6-before-forward -p ipv6-nonxt -m length --length 40 -j ACCEPT

# allow MULTICAST to be forwarded
# These 2 need to be open to enable Auto-Discovery.
-A ufw6-before-forward -p icmpv6 -s ff00::/8 -j ACCEPT
-A ufw6-before-forward -p icmpv6 -d ff00::/8 -j ACCEPT

# ok icmp codes to forward
-A ufw6-before-forward -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-request -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-reply -j ACCEPT

# Don't forward any other packets to hosts behind this router.
-A ufw6-before-forward -i he-ipv6 -j ufw6-logging-deny
-A ufw6-before-forward -i he-ipv6 -j DROP

At this point I saved everything and rebooted (though you could just bring up the he-ipv6 interface) and everything came up correctly. I was able to test that I had a valid Global scope IPv6 address associated with (in my case) my br0 interface, and that I could successfully ping6 -c 5 ipv6.google.com via it. I was also able to check that my laptop had automatically found and configured a valid Global scope IPv6 address for its eth0 interface, and that it could ping6 my home server and external IPv6 sites, and that it was possible to browse IPv6-only websites from it.
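For reference, the checks on the server amounted to little more than:

# Bring the tunnel up by hand (instead of rebooting)
sudo ifup he-ipv6
# Check that br0 (or eth0) now has a "scope global" IPv6 address
ip -6 addr show dev br0
# Check that the default IPv6 route points at the tunnel, then test connectivity
ip -6 route
ping6 -c 5 ipv6.google.com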

Health update

The last time I mentioned my health was back in November of last year; at that point I was keen to get back to work. I was frustrated to be sitting at home with not much to occupy myself, and feeling somewhat guilty for exceeding the 3 months that I’d originally suggested to my management that I’d need to be away from work.

After some meetings with my management we agreed that I’d start a phased return to work in late November by working approximately half days from home, with no commuting. The main intent was for me to catch up on all the things that had gone on (including a large internal reorganisation) while I was out, get all my admin (and email!) up to date, rather than worry too much about any specific business goals.

And to my surprise I found it incredibly difficult. Initially I struggled to regularly work even half a day, and when I tried to “push on through” I failed. Spectacularly. I’d literally fall asleep at the keyboard. Over the 6 weeks running up to Christmas I did see my stamina improve a little, and I even managed some half days back in my local office. But progress was depressingly slow, and when I first tried to commute up to London for a meeting, I felt so unwell by the time I’d got there that I barely had time to attend the meeting before I had to leave for home again.

A fortnight’s break at Christmas was a welcome relief, during which I had another consultation with my surgeon, and brought up the issue of my tiredness and ongoing kidney pain. The result was a set of blood and urine tests.

The blood tests revealed little that was wrong, or at least unexpected; my kidney function appeared to be fine, but I was still showing the signs of a low-level background infection. Since my perineal wound was (and is) still open, this was only to be expected. But fundamentally, I was in as good health as anyone could expect – the suggestion was that I just needed more time to get over my last operation, and that all the other treatments that I’ve been through over the last 4 years probably weren’t helping.

The urine test, however, showed another drug-resistant UTI. More antibiotics put paid to that, but I was advised to see my urologist again too. He suspects that my problems with kidney pain and repeated UTIs are ultimately due to a problem called renal or vesicoureteral reflux. This is a condition most common in young children, but in my case it is almost certainly caused by the process of reimplanting my left ureter; it no longer acts as a one-way valve, allowing urine to flow back up into my kidney.

Of itself, this causes nothing more than mild discomfort. But in combination with UTIs, it can cause significant pain (as I discovered) and potentially further permanent damage to my kidney, which is most definitely not desirable. So for the next six months I’ve been prescribed a prophylactic dose of antibiotic (Trimethoprim) to keep the UTIs at bay.

And since returning to work after the Christmas break, I’ve noticed that my stamina has noticeably improved. I’m still a long way from what I would consider normal, but I’m managing to work much closer to full days now, and I’m coping with some commuting too. I can see real progress.

Of course, in retrospect the lesson to be learned is that I probably tried to come back to work too early. I suspect that if I’d stayed off work for another month or so my recovery would probably have been faster and easier. But I’d have been climbing the walls!

Restoring “normal” scrolling in Ubuntu

I’m not a fan of Canonical’s Unity desktop environment, so I always install an alternative. My current favourite is Cinnamon, which is much more familiar, and for me, much more intuitive. However, some of the UI design work that went into Unity has slopped over into the base configuration of the system, which is rather annoying. In particular, the near-unusable overlay scrollbars show up in other desktop environments.

To switch back to normal scrollbars, simply type:

gsettings set com.canonical.desktop.interface scrollbar-mode normal

in a terminal. It will take effect immediately, and give you normal scrollbars again.

However, those scrollbars don’t work exactly as I expect. Clicking anywhere in the “trough” of the scrollbar should (to my mind) scroll the window a page back or forward. These scrollbars whizz the window to that point in the output, as though you’d dragged the “puck” to that point. It’s particularly annoying when you’re dealing with very long command line histories. Fortunately that too can be changed. Simply create a file called “~/.config/gtk-3.0/settings.ini”, and make it contain:

[Settings]
gtk-primary-button-warps-slider = false

You’ll need to restart your applications (or log out and back in) to see the result.

Ditching the spinning rust

For some time now I’ve been thinking of switching my laptop storage over to an SSD. I like the idea of the massively improved performance, the slightly reduced power consumption, and the ability to better withstand the abuse of commuting. However, I don’t like the limited write cycles, or (since I need a reasonable size drive to hold all the data I’ve accumulated over the years) the massive price-premium over traditional drives. So I’ve been playing a waiting game over the last couple of years, and watching the technology develop.

But as the January sales started, I noticed the prices of 256GB SSDs had dipped to the point where I’m happy to “invest”. So I’ve picked up a Samsung 840 EVO 250GB SSD for my X201 Thinkpad; it’s essentially a mid-range SSD at a budget price-point, and should transform my laptop’s performance.

SSDs are very different beasts from traditional hard drives, and from reading around the Internet there appear to be several things that I should take into account if I want to obtain and then maintain the best performance from one. Predominant amongst these are ensuring the correct alignment of partitions on the SSD, ensuring proper support for the Trim command, and selecting the best file system for my needs.

But this laptop is supplied to me by my employer, and must have full system encryption implemented on it. I can achieve this using a combination of LUKS and LVM, but it complicates the implementation of things like Trim support. The disk is divided into a minimal unencrypted boot partition with the majority of the space turned into a LUKS-encrypted block device. That is then used to create an LVM logical volume, from which are allocated the partitions for the actual Linux install.

Clearly, once I start looking at partition alignment and different filesystem types, a reinstall becomes the simplest option, and the need for Trim support demands fairly recent versions of LUKS and LVM, driving me to a more recent distribution than my current Mint 14.1, which is getting rather old now. This gives me the opportunity to upgrade and fine-tune my install to better suit the new SSD. I did consider moving to the latest Mint 16, but my experiences with Mint have been quite mixed. I like their desktop environment very much, but am much less pleased with other aspects of the distribution, so I think I’ll switch back to the latest Ubuntu, but using the Cinnamon desktop environment from Mint; the best of all worlds for me.

Partition alignment

This article describes why there is a problem with modern drives that use 4K sectors internally, but represent themselves as having 512-byte sectors externally. The problem is actually magnified with SSDs, where it can cause significant issues with excessive wearing of the cells. Worse still, modern SSDs like my Samsung write in 4K pages, but erase in 1MB blocks of 256 pages. It means that partitions need to be aligned not to “just” 4K boundaries, but to 1MB boundaries.

Fortunately this is trivial in a modern Linux distribution; we partition the target drive with a GPT scheme using gdisk; on a new blank disk it will automatically align the partitions to 2048-sector, or 1MB, boundaries. On disks with existing partitions this can be enabled with the “l 2048” command in the advanced sub-menu, which will force alignment of newly created partitions on 1MB boundaries.
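It’s easy to check the result afterwards; for example (assuming the SSD appears as /dev/sdb):

# List the partitions - the start sectors should all be multiples of 2048
sudo gdisk -l /dev/sdb
# Or ask parted directly whether partition 1 is optimally aligned
sudo parted /dev/sdb align-check optimal 1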

Trim support

In the context of SSDs, TRIM is an ATA command that allows the operating system to tell the SSD which sectors are no longer in use, and so can be cleared, ready for rapid reuse. Wikipedia has some good information on it here. The key in my case is going to be to enable the filesystem to issue TRIM commands, and then enabling the LVM and LUKS containers that hold the filesystem to pass the TRIM commands on through to the actual SSD. There is more information on how to achieve this here.

However, there are significant questions over whether it is best to enable TRIM on the fstab options, getting the filesystem to issue TRIM commands automatically as it deletes sectors, or periodically running the user space command fstrim using something like a cron job or an init script. Both approaches still have scenarios that could result in significant performance degradation. At the moment I’m tending towards using fstrim in some fashion, but I need to do more research before making a final decision on this.
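To give an idea of what the plumbing involves, the TRIM-related changes through the stack look something like this (a sketch based on my reading so far, with placeholder device names and UUIDs, not a tested recipe):

# /etc/crypttab - add "discard" so the LUKS layer passes TRIM down to the SSD
sda2_crypt UUID=<uuid-of-the-luks-partition> none luks,discard

# /etc/lvm/lvm.conf - let LVM issue discards when logical volumes are removed or shrunk
devices {
    issue_discards = 1
}

# Option 1: automatic TRIM via the "discard" mount option in /etc/fstab
/dev/mapper/vg-root  /  btrfs  defaults,compress=lzo,discard  0  0

# Option 2: periodic TRIM instead - e.g. a weekly script in /etc/cron.weekly/fstrim
#!/bin/sh
fstrim /
fstrim /home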

File system choice

Fundamentally I need a filesystem that supports the TRIM command – not all do. But beyond that I would expect any filesystem to perform better on an SSD than it does on a hard drive, which is good.

However, as you would expect, different filesystems have different strengths and weaknesses, so by knowing my typical usage patterns I can select the “best” of the available filesystems for my system. And interestingly, according to these benchmarks, the LUKS/LVM containers that I will be forced to use can have a much more significant effect on some filesystems (particularly the almost-default ext4) than others.

So based on my reading of these benchmarks and the type of use that I typically make of my machine, my current thought is to run an Ubuntu 13.10 install on BTRFS filesystems with lzo compression for both my root and home partitions, both hosted in a single LUKS/LVM container. My boot partition will be a totally separate ext3 partition.

The slight concern with that choice is that BTRFS is still considered “beta” code, and still under heavy development. It is not currently the default on any major distribution, but it is offered as an installation choice on almost all. The advanced management capabilities such as on-the-fly compression, de-duplication, snapshots etc make it very attractive though, and ultimately unless people like me do adopt it, it will never become the default filesystem.

I’ll be implementing a robust backup plan though!

Program a better windvane

Back in August I wrote about my weather station, some of the problems I’d experienced with it, and what I’d done to fix them. The one thing that I’d not been able to quickly solve was the lack of damping on the wind vane, which meant it was difficult to accurately track the wind direction.

Having done some research on the web, it seems that everyone has this problem with the wind vane; it’s fundamentally a bad design. Some people have tried modifying them, usually by adding a much larger tail-piece, which then needs a larger nose-cone to counterbalance it. It usually also means that the unit needs to be remounted to avoid the wind vane colliding with the anemometer.

For a while I toyed with the idea of following this route, and redesigning the wind vane. However, I could see that I would be signing myself up for a lot of messing around at the top of a ladder, and winter is very fast approaching. Not a terribly attractive option.

Meanwhile I’ve been rewriting the software that I use to capture the data from the weather station, before I send it on to my PWS on Weather Underground. The software had a couple of little bugs that I wanted to resolve, and lacked some functions that I wanted to add. So I wondered if I could do something about damping the wind vane in that software. It turns out that I can. Sort of.

The way the weather station appears to work is that it has 4080 weather records in the console, which act as a circular buffer holding the weather history. By default, the console “creates” a new historical record every 30 minutes (giving an 85 day history), though this can be altered with software. The weather sensors, however, are read at a fixed interval of about every 50 seconds, and are apparently always written to the current record. So with the default configuration the console only records the last of about 36 sets of readings.

However, by connecting to the console via USB, it’s possible to capture some of those intermediate readings, which allows us to do something helpful. In my case, I read the sensor data from the console’s current weather record every minute, creating a running average of the last “n” wind direction readings, before uploading it to Weather Underground. At the moment, n=10, which produces a significant reduction in the extremes of the readings.

Of course, this isn’t really damping the wind vane. Rather, it’s mathematically manipulating all the data points I can see (some of which I know will be inaccurate due to the sensor design) and removing the more extreme values from the set that I process. So we’re actually losing data here. But the proof is in the pudding, and the results seem to track more expensive weather station designs more accurately.

You can see this in the following series of images. This first one is an example plot of a day of raw wind direction data from my weatherstation:

Graph of undamped raw wind direction

This is a plot of the wind direction data from a different day, using a high quality weather station (a Davis Vantage Pro 2):

Graph of wind direction from a Davis Vantage Pro 2

And this is a plot of the wind direction on the same day, using my weather station, but with the damping function enabled:

Graph of wind direction from my weather station, after damping

Ok, it’s not perfect, but it’s a lot better than it was. And I know that the mounting location of the Davis Vantage Pro 2 sensors is much better than mine, so I’m unlikely to ever get results as good as the Davis set anyway.

For anyone interested in the damping, I create an array of historical wind direction data. I then take each element of that array in turn, and convert it into a unit vector (X and Y components of the angle). I then average the X and Y components, before turning the result back into an angle. By sampling frequently, and modifying the length of the history buffer, it is possible to significantly reduce the amount of “noise” from the sensor, and produce a much better track from the sensor data.
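In Python-ish pseudo-code, the damping is essentially this (a simplified sketch, not the actual code):

import math

HISTORY_LENGTH = 10   # n=10 at the moment
history = []          # the last N wind direction readings, in degrees

def damped_direction(new_reading):
    history.append(new_reading)
    del history[:-HISTORY_LENGTH]                        # keep only the last N readings
    x = sum(math.sin(math.radians(d)) for d in history)  # sum the unit vectors...
    y = sum(math.cos(math.radians(d)) for d in history)
    return math.degrees(math.atan2(x, y)) % 360          # ...and turn the average back into an angle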

If that sounds too complicated, but you have a Fine Offset 1080 or 1081 weatherstation (such as the Maplin N96FY) and just want to get similar results to mine, then you will soon be able to find all the code, and instructions on how to use it, here.

Loss of DynDNS domains

Many years ago, back when I was first starting to experiment with running services on the Internet, I was using an ADSL connection with a dynamic IP address. This meant that the IP address I was working with could change, typically whenever my home router was rebooted. This made running Internet services (web, ftp, mail etc) rather awkward, but at the time I didn’t want to pay out for a connection with static IP addresses and a proper domain name.

Fortunately there was a solution to this; a company called Dyn offered a service called DynDNS where (as a private individual) you could register with them and get up to 5 sub-domains from a fairly large, reasonably-named set of base domains.

Better still, they had implemented a system where you could regularly nudge their DNS servers with your domain names and your current IP address, and they made sure that those domains were mapped to whatever IP address you had at the time. This became so popular that most of the router device manufacturers added support for the DynDNS protocol into their devices.

And so life was good. I used one of their free domains, appleby.homeip.net, to hang a website and a mailserver off, and my router ensured that the domain name was always properly mapped to my (changing) IP address. And it worked pretty well – I had a little laptop running my Internet services on a domestic ADSL connection, without any problems.

Of course, times changed. I moved to a higher speed connection, and obtained a block of static IP addresses and some real domain names. Thanks to the problems of SPAM, running a mailserver got a lot more complex, and the need for proper r-DNS meant I ended up moving my mail to one of my proper domains. However, my Postfix install still maintained my old userids@appleby.homeip.net as a virtual domain for anyone who was still using them. Similarly, Apache ran virtual hosts for all my old DynDNS domains as well as my new “real” ones.

And all was good with the world again.

Then back in May 2011 I noticed that there was a problem; somehow some of my DynDNS domains had fallen off the net. It seemed that Dyn had changed their policies around the frequency with which they expected their servers to be updated. Some fiddling with the Linux DynDNS client, ddclient, and with my DynDNS account settings eventually got the domains back up and running.

Then in November 2011 Dyn announced that they were going to reduce the number of free domains for private individuals from 5 to 1, although existing customers could keep any existing active domains. Alarm bells should have started ringing at this point. If I’d looked more closely I’d have noticed that Dyn were moving upmarket, and targeting corporate customers. Those of us with free accounts were becoming a nuisance, and were being encouraged to migrate to their paid-for “pro” accounts at $30 a year.

But sadly I didn’t notice any of that; I had bigger problems to deal with at the time…

Finally, this summer Dyn killed off the ability to create free accounts, and announced that to maintain existing free accounts you had to login to their website at least every 30 days; updates to the DynDNS service from a DynDNS-compatible client were no longer sufficient. Another turn of the screw to force freeloading users to go elsewhere.

In my case, by the time I got out of hospital to discover what they’d done, my free domains were already toast, which was really frustrating, as I was still getting occasional emails via the appleby.homeip.net domain, and I had a couple of low-traffic websites running on their domains. Worse, it seems my wife had still been receiving quite a lot of email via her appleby.homeip.net address.

While I am fairly frustrated by the way that Dyn have managed this, I can see it from their point of view too. I had free use of a service for many years that enabled me to do things that I otherwise wouldn’t have been able to. It didn’t cost Dyn much to provide that, but it did cost something. That was probably easy for them to justify when they were more of an Internet startup, but it’s a lot harder now that they’re a proper corporation with Wall Street breathing down their necks all the time. So I’m being philosophical about it; the moral of the story is, as always, never to depend on free services that are controlled by someone else.

I’ve now rationalised all my web content down onto a single new website, hosted on a domain that I completely control. So if you were interested in the content of appleby.is-a-geek.net, or super6.kicks-ass.org, you might want to check out theapplebyclan.com.

But the email addresses are more problematic. We’ve tracked down all the people we can think of who might be using that old domain and notified them of the change, but the nature of email is that we’re bound to have missed some. And for them, we’ve apparently just fallen off the edge of the (digital) world.

ASUS EeePC 1225B

My youngest daughter has just started high school, and wanted to replace her desktop PC with a new laptop. My older children have managed to do everything they’ve needed for school with netbooks running Linux, so that was my starting position; nice and portable, powerful enough, and not too expensive. Unfortunately, netbooks are getting really thin on the ground. If you want small and light the current answer is an ultrabook, which is usually at the “premium” end of the market, or a tablet, which just isn’t going to cut the mustard for this application.

ASUS seem to be the last company making netbooks, and even they appear to be exiting the market; their last offering, the EeePC 1225B, is now heading rapidly towards clearance deals – which in this case is good news. It comes with Windows 7 Home Premium preloaded, but given my daughter’s existing familiarity with Linux I’m installing Linux as her primary OS. However, in the future we may want to return the system to stock, and this machine comes with no recovery media, just a recovery partition. I can install Linux around both that and the Windows install, but it could be a real problem if the actual hard drive were to die. Fortunately ASUS have provided a way to back up the recovery partition onto a 16GB USB device, separating the recovery from the actual machine – which is really helpful. Except it can’t be installed on a flash device. Which is not helpful. Why on earth can’t I use a flash drive?! A rummage in my spare parts bin finally unearthed an ancient 20GB 2.5″ PATA drive and a USB caddy – we’re in business!

The process for backing up the recovery partition is to boot into the on-disk recovery partition (which appears to contain a minimal Windows system, some ASUS scripts and the compressed install images for the recoverable partitions) and then copy the whole recovery partition to the USB device, making it bootable in the process. That can then be used to recover the netbook to its original state.

Except, on careful reading of the manuals, that’s not quite the case. The recovery process only mentions recovery of the Windows partitions. Not the original recovery partition, or the (apparently non-functional) EFI boot partition. Not ideal. So with the help of a bootable Linux USB disk, a spare SATA hard drive, and a USB SATA caddy, I imaged all the partitions and the partition table from the original hard drive, using a mix of “dd” and “ntfsclone”. With those and a live Linux disk I ought to be able to rebuild the entire disk if necessary.
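For anyone attempting something similar, the commands involved are roughly these (device and partition numbers are illustrative; check yours with fdisk -l or gdisk -l first):

# Save the start of the disk (boot sector plus partition table; 1MB is more than enough)
sudo dd if=/dev/sda of=/media/backup/sda-start.img bs=1M count=1
# Raw-copy the small non-NTFS partitions (e.g. the EFI boot partition) with dd
sudo dd if=/dev/sda1 of=/media/backup/sda1.img bs=4M
# Image the NTFS Windows and recovery partitions more efficiently with ntfsclone
sudo ntfsclone --save-image --output /media/backup/sda2.img /dev/sda2
sudo ntfsclone --save-image --output /media/backup/sda4.img /dev/sda4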

And so having reassured myself that I have a workable restoration process, a quick application of gparted allowed me to shrink the existing Windows install down, creating space for my Linux installation. And then the problems started.

Initially I installed the latest Ubuntu (13.10, “Saucy”) but this simply crashed (apparently randomly) on booting, which wasn’t very reassuring. There seem to be some concerns about the last couple of Ubuntu releases not being as well-baked as usual, so I decided to try Mint rather than starting to debug Ubuntu. So I installed 32-bit Mint 15 (Olivia), and all appeared well until I updated to the latest kernel, at which point the keyboard and trackpad stopped working. Reverting to the older kernel left the system working fine, so I removed the newer kernel, pinned the older one, and completed the install & configuration of the laptop for my daughter.

Who was delighted, but reported that the laptop was randomly hanging – something I’d not seen while I was installing & configuring it. After quite a lot of further testing, it appears that the system is rock solid stable while running on AC power, but hangs randomly when running on battery. I suspect a horrible ACPI bug lurking at the root of this, which probably explains the problems I experienced with Ubuntu too. Clearly I need to resolve this for my daughter, but she is happy enough working in “tethered” mode at the moment, so at least I’ve some time to start debugging it.

Interestingly, I was surprised that ASUS didn’t seem to document the keys that cause the laptop to do useful things during startup. So for completeness, pressing F9 during boot starts the recovery utility, pressing F2 enters the BIOS, and pressing ESC allows you to select which media device to boot from. You need to tap them repeatedly while the system is booting. Bizarrely, F9 in particular appears to be very hit or miss.