Kitchen lighting

Having had such a positive result from tinkering with my chandelier, I’ve been thinking about how I could apply LED lighting to the upcoming refit of our kitchen. The current plan is LED down-lighters in the ceiling, tri-phosphor fluorescent lighting over the worktops, and standard 40w incandescent candle bulbs in the extractor hood.

I think that the main LED down-lighters will be fine, subject to getting the right colour temperature, and having a way to dim them. But I’m now much less happy with the idea of tri-phosphor fluorescent worktop lighting, and the incandescent lights in the extractor hood. I’d now like to go LED everywhere in the kitchen, if only to colour-match the lighting.

The extractor hood is slightly frustrating. All the different hoods are basically just a metal enclosure, some filtration, some fans, and some lights. The pricing for that varies wildly, and without much obvious logic to it. We wanted as large an extraction rate as possible, in a simple chimney style hood, with LED lighting. To get LED lighting we would have had to get a much lower rate of extraction, and pay a huge amount more, so in the end we decided the extraction rate was more important, and got the one with incandescent bulbs.

However, a bit of poking around in the showroom reveals that these bulbs fit into a pair of back to back SES/E14 sockets, and shine down through perspex lenses. So I’m already thinking that I could convert that to LED before it’s fitted. Another of those little 12w LED drivers from Amazon, a couple of these G4 “panel” type bulbs, and a pair of these neat little converters from ATEN Lighting (so I can use the existing SES sockets as mounting points) should see me good.

The worktop lighting is somewhat less clear. There are (hideously expensive) pre-made LED lights designed to fit under cupboards, and be daisy-chained together much like the old T5 fluorescent fittings. I guess that’s convenient for the electricians, but at anywhere from £50 to over £100 a metre (depending on what you buy, and where from!) that’s never going to fly from a budget perspective. On the other hand, you can now buy flexible strips of splash-proof LEDs that come with a self-adhesive backing on them for around £40 for 5m. Simply add a driver and you should be good to go.

Of course, it’s not quite that simple. There are a lot of different makers of strip LED, with different LED types, densities, etc. And what kind of driver do you need? And actually, I’m going to end up with several runs of this stuff, each on its own driver. How do I connect them all together? Worse, in an ideal world I want to be able to independently dim the ceiling down-lighters and the worktop lighting. Suddenly this is starting to look more complex. No wonder the kitchen fitter wanted to use fluorescent tubes!

Still, at the moment it looks like I need:

  1. A dual gang, low-wattage, trailing-edge mains voltage dimmer
  2. The down-lighters wired in parallel directly to the dimmer
  3. The various worktop strips to be dimmable, and also wired to dimmable constant current LED drivers
  4. Those drivers then wired in parallel to the other channel of the dimmer
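To put some rough numbers on the driver side of that plan, here’s a minimal sizing sketch. The 4.8w-per-metre figure and the 80% loading rule are my own assumptions for a typical low-density strip, not values from any particular product — check the datasheet of whatever strip you actually buy:

```python
# Rough LED driver sizing for under-cupboard strip runs.
# The watts-per-metre figure and the 80% loading rule are assumptions
# for a typical low-density strip -- check your strip's datasheet.

STRIP_WATTS_PER_METRE = 4.8   # assumed strip power draw
MAX_DRIVER_LOADING = 0.8      # leave ~20% headroom on the driver

def minimum_driver_watts(run_length_m):
    """Smallest driver rating (in watts) for a given run length."""
    load = run_length_m * STRIP_WATTS_PER_METRE
    return load / MAX_DRIVER_LOADING

# Three hypothetical worktop runs of 1.5m, 2m and 1m:
for run in (1.5, 2.0, 1.0):
    print(f"{run}m run -> at least {minimum_driver_watts(run):.1f}W driver")
```

In other words, even a fairly long worktop run stays comfortably within a small, cheap driver, which is encouraging.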

With a bit of thought it may even be possible to add some interesting “accent” lighting, as additional circuits in parallel with the down-lights. But at this point I need to do more research, and talk to people who’ve done this. So if anyone has any insight to add, please leave a comment!

Converting halogen chandelier to LED

About a decade back we bought a modern chandelier, powered by five 20w G4 halogen capsules. It’s a really lovely feature light, covering the hall, stairs and landing. Unfortunately, in the intervening time, electricity costs have rocketed, and we’ve all moved to lower-powered lighting using things like compact fluorescent lamps (CFLs), which cost a fifth of what old incandescent bulbs did to run. That chandelier is now the only incandescent fitting left in our house, and by far the most power hungry to run.

As a result we’re careful to turn it off whenever we don’t actually need it. As this defeats the whole point of having a feature light like a chandelier, I’ve been trying for some time to find a way to reduce its power consumption.

Newer technology halogen capsules were the first choice; the original 20w bulbs got switched out for 15w equivalents, that produced approximately the same light. But boy were they expensive, and they lasted 1000 hours at best – a year or so. So I’ve still been looking for alternatives; CFL was never going to work aesthetically, but LEDs seemed to hold promise … though the light output of early LEDs was not great, and no-one supported the G4 form factor anyway.

But recently I noticed manufacturers were trying again. Mostly they were building simple circuit boards with a few high power SMD LEDs on each side. I don’t doubt that they work, but I suspect the beam angles would make the field of light very patchy. They look awful too!

And then I noticed these: 24 SMD LEDs mounted onto a neat cross-shaped circuit board. Nominally 360 degree beam angles, with the whole thing encapsulated in some type of silicone for physical robustness. Finally, something worth the gamble.

G4 Halogen capsule and G4 LED alternative

The existing 12v halogen transformer is apparently not suitable for driving LEDs so I picked up a cheap 12W LED driver from Amazon to replace it. In total, a little over £20 for 5 LED G4 replacements, the LED driver and the P&P.

LED driver and G4 LED bulb with halogen G4 bulb and AA battery for comparison of size

And the result is pretty good. The LEDs are extremely bright, and look simply wonderful in the chandelier. However, the nature of LEDs is that they are highly directional, producing a tight beam of light. When you’re in alignment with the output beam they are extremely bright, but off-beam they are much less bright than an incandescent bulb, which acts more like a point light source, radiating in every direction. The fact that there are 120 little surface-mounted LEDs, each pointing in a slightly different direction, helps enormously, but the overall room illumination from the chandelier still just isn’t as bright as it was with the old halogens.

It is, however, more than bright enough, and only uses around 7.5w rather than 100w to run, so we can run it as much as we like now. That’s a pretty good compromise in my book!
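For the curious, the saving is easy to estimate. A quick sketch, assuming around 3 hours of use a day and a 13p/kWh unit price (both of which are my assumptions — plug in your own figures):

```python
# Estimated annual running cost, before and after the LED conversion.
# Usage hours and unit price are assumptions -- adjust to taste.

HOURS_PER_DAY = 3         # assumed average daily use
PENCE_PER_KWH = 13.0      # assumed UK electricity unit price

def annual_cost_pounds(watts):
    """Yearly running cost in pounds for a fitting of the given wattage."""
    kwh_per_year = watts / 1000 * HOURS_PER_DAY * 365
    return kwh_per_year * PENCE_PER_KWH / 100

halogen = annual_cost_pounds(100)   # five 20w halogen capsules
led = annual_cost_pounds(7.5)       # five LED replacements
print(f"halogen: ~£{halogen:.2f}/yr, LED: ~£{led:.2f}/yr, "
      f"saving ~£{halogen - led:.2f}/yr")
```

On those assumptions the conversion pays for its £20 outlay within a couple of years, and everything after that is pure saving.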

Update: I was asked for a picture of the end result. So here it is, taken at night, with the chandelier lit up, which makes the picture look a lot darker than it is in real life. The walls are also a light purple colour, accentuating the cold blue-white colour of the LEDs:
Picture of the lit chandelier

“We never know the worth of water till the well is dry” – Thomas Fuller

Between the increasing population density in my part of the UK, an apparently ever-increasing demand for water (150 litres per person per day, apparently) and the impact of climate change, our water supplies are becoming a lot less predictable. Here in Hampshire we have been seeing real problems with water shortages during the summer for the last decade or so.

The response from the relatively newly privatised water industry has been to unceremoniously raise prices to cover the cost of “improvements to the infrastructure”, and “years of under-investment”. To be fair to them, the official statistics say that their network is leaking a lot less water than it used to. Unfortunately the average house-owner (well, OK, me!) hasn’t noticed any changes whatsoever – except in the charges. Until last month.

Southern Water are in the vanguard of charging for water by usage, by installing water meters, rather than linking payment to relative size of house. This clearly penalises larger families in smaller houses, though the water company would prefer to suggest that it promotes sensible water usage, and is fairer to everyone. I really can see their argument, but as the owner of what I have always felt to be a relatively modest house, shared with four squeaky-clean women, the thought of being put on a water meter has always made my blood run cold.

But there was only so long that I was able to resist … and last month, the water company swapped out my existing stop-cock in the street for a new one with an integrated water meter. No choice, simply a fait accompli. My initial (tight-fisted) thought was to initiate rationing, but somehow common sense prevailed and I’ve let normal life run its course for the last month with no real mention of this change to my family.

And today I went out into the street, and prised up the cover from the water meter to read it. This in itself was irritating because, despite being a smart meter that can be read remotely, Southern Water don’t issue us with the means to do that, and the meter-manufacturer considers the protocols to be “proprietary” and won’t tell me how to build a decoder. Southern Water will apparently read it every 6 months – which means their (presumably expensive) smart meters are about as useful in changing usage behaviours as the original dumb ones. Which is to say not at all. What a wasted opportunity.

However, at least I got the result for 26 days of water usage, rather than having to wait 6 months for Southern Water to get around to telling me. A further 10 minutes with Southern Water’s charging information and a calculator results in the news that …

  • In our house we use about 82 litres of water per person per day
  • That works out at about 150m3 per year, which is significantly to the right side of “average” for a 5 person house
  • If I understand their charges correctly, my annual bill is about to be approximately halved.
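For anyone wanting to check my arithmetic, here’s roughly how those figures fall out. The meter reading itself isn’t quoted above, so the starting value of 10.66m3 is inferred back from the other numbers:

```python
# Working the usage figures back from a 26-day meter reading.
# The 10.66 m^3 reading is inferred from the figures in the post,
# not a value read directly off the meter.

METER_READING_M3 = 10.66   # assumed reading after 26 days
DAYS = 26
PEOPLE = 5

litres_per_person_per_day = METER_READING_M3 * 1000 / DAYS / PEOPLE
annual_m3 = litres_per_person_per_day * PEOPLE * 365 / 1000

print(f"~{litres_per_person_per_day:.0f} litres per person per day")
print(f"~{annual_m3:.0f} m3 per year")
```

Which reproduces the 82 litres per person per day and roughly 150m3 per year quoted above.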

I’m not sure whether to jump up and down shouting “RESULT!”, or slink off into the corner for not having elected to have a meter fitted a few years back when it first became an option. But clearly the next action has to be to find a way to read that smart meter in real time, and integrate it into my whole house monitoring, as I did with electricity, using my CurrentCost monitor.

Big Brother may not be watching you(*), but Google certainly is

I’m not a fan of advertising. In general my view is that the best products should sell themselves; I have a nagging suspicion that any product that needs to be heavily advertised is probably either unnecessary or not as good as its competitors. It tends to make me look suspiciously at the product and wonder what they’re trying to hide behind all that advertising spend.

I find advertising on the internet even worse. Because it’s so (relatively) inexpensive to reach people on the internet there appears to be very little time put into making the majority of advertising interesting; most of the advertisers seem to have adopted a “throw it at the wall and see what sticks” approach, figuring that if they throw enough, something will eventually stick. As a result most people I know are all fairly jaded, and tend to just ignore them all.

To counter this, the advertisers are trying to tailor the adverts that you see so that they are more closely matched to your circumstances or interests, presenting them to you almost as a “service”. The idea is that if they can show you adverts that are more likely to be of interest to you, then you’re more likely to “click through” and possibly buy something. So if you’ve just had (or even better, are about to have) a baby, they’ll show you adverts trying to sell you nappies, baby foods, etc. The idea is that you’re less irritated (because the adverts are now supposed to be useful), and they can charge the product owners more for each of those adverts because they’re supposed to be more effective. According to the advertisers, everyone wins.

Except, to be able to target those adverts at us, the advertisers need to know a lot about each of us. They need to be able to work out what kind of adverts we might be interested in. So they have to gather that information. Lots of it. And Google is gathering a lot more information than I care to think about. It monitors everything most people search for. It knows which of the search results it gave you that you decided to click on. It often knows a lot about which websites you visit during your normal web-surfing, because the owners of many of those websites use Google’s advertising services to deliver targeted adverts to you too. And now, thanks to Android, which now has a 75% share of the world-wide smartphone market, it usually knows exactly where you are when you do almost anything with the “mobile internet”. It brings a whole new meaning to tracking your browsing habits.

I’m becoming increasingly uncomfortable with this. For me, the straw that broke the camel’s back was Google wrapping their search results links in code that tracks you as you click on your choice of result. I’ve decided that I just don’t believe the platitudes from Google’s execs about their famous motto of “Don’t be evil”. It’s just too easy for those people to redefine what they mean by “evil”.

So I’m doing what I can to be a little less tracked. I now block all adverts, period. And although I still need to use Google, I’m doing my best to avoid letting them know much more than my search terms; I’m certainly not going to let them know which of their results I finally clicked on. It’s not much, but it’s a start towards keeping myself a bit further under the radar, in this increasingly Orwellian internet.

(*) Actually, Big Brother almost certainly is either watching you too, or getting Google to do it for them.

Simple install of OpenVPN with Ubuntu server and Mint client

Since I’ll have my laptop and phone with me while I’m in hospital, I’m expecting to be able to keep in touch with family and friends. However, it would also be useful to have access to my home network from hospital. I can already SSH into a command line on my home server, but I’ve been meaning to get a proper VPN set up for some time now, so this seemed like the excuse I needed to actually make it happen.

In my case, I have a home network with a NAT’d router that has a static IP address, and proper DNS & reverse DNS entries associated with it. I then have an Ubuntu Server running 24×7 behind it, providing various services to my home network. I simply want my Mint laptop to be able to VPN into my home network, securely, and on demand.

It turns out to be really straightforward, though there were a few quirks to overcome! Fundamentally I followed the instructions in this PDF, found on the Madison Linux User Group website, to set up the server side of the VPN. However, there are a few problems with it that need correcting:

  1. Under “Installing the Network Bridge and Configure Network Settings” on page 3, be aware that if you are using dnsmasq to manage your DNS and DHCP leases, you will need to change its configuration file, /etc/dnsmasq.conf, to listen on br0 rather than eth0.
  2. Under “Create the Server Keys and Certificates”, point (3), note that there are two KEY_EMAIL variables where there should be only one, and the variables KEY_CN, KEY_NAME and KEY_OU should ideally be filled in for completeness.
  3. Under “Create the Server Keys and Certificates”, before starting on point (4), you (a) need to edit the file /etc/openvpn/easy-rsa/whichopensslcnf, and remove all occurrences of the string “[[:alnum:]]”, and (b) need to make sure that you can write to the file “~/.rnd”; by default it isn’t writeable except by root, so you need to issue the command “sudo chown youruserid ~/.rnd”.
  4. Under “Generate the Client Keys and Certificates” on page 4, when you come to carry out the instructions in point (1), to create the client keys, you must set a unique KEY_CN by changing the last command to something like “KEY_CN=someuniqueclientcn ./pkitool client1”.
  5. Under “Create the OpenVPN Server Scripts”, on page 5, the code to go into the two scripts is wrong. The brctl command is located in /sbin, not /usr/sbin. Change both scripts accordingly.

For the client side of the VPN, I followed the instructions in this PDF, which worked perfectly.

Once everything has been rebooted and is working properly, it is possible to VPN into my home network from the internet. All traffic from my laptop then flows through my home network via a secure VPN. Essentially my laptop becomes part of my home network for the time the VPN is running.

Which is exactly what I wanted.

You don’t have to know the answer to everything – just how to find it

Since I work at IBM, I get to use the company’s own email system, which is based on what used to be called Lotus Notes. It’s recently had some extra “social media awareness” added to it, been rebranded “IBM Notes”, and repositioned as a desktop client for social business. Which is all very modern and hip, especially for a product that has its roots back in the early 1990s. However, most organisations (including IBM) tend to use it solely for email – for which it is the proverbial sledgehammer.

But having been using it for some 18 years now, I’m fairly comfortable with it. The only issue is that, having used it for so long, my mail archives contain a huge amount of useful information from old projects that I’ve worked on. I also have other information related to those projects stored elsewhere on my laptop hard drive, and pulling all that information together and searching it coherently isn’t a trivial problem. However, in recent years desktop search engines have begun to provide a really nice solution to this.

The problem here is that Lotus Notes is based on a series of binary databases which form the backbone of its ability to efficiently replicate documents between clients and servers. Desktop search engines generally don’t understand those databases, and hence do not work with Lotus Notes. So searching my laptop becomes a somewhat tedious process, involving the Lotus Notes client search feature, and manually correlating with a desktop search engine of some type. It works, but it’s just not as good as it could be.

What I really want, what I really really want (as the Spice Girls would sing) is a desktop search engine that can understand and integrate my Lotus Notes content. And that’s what this post is all about.

Since I run Linux I have a choice of open source desktop search engines such as Tracker or Beagle (now deceased). But my current preference is for Recoll, which I find to be very usable. And then, last year, I discovered that a colleague had written and published a filter, to enable Recoll to index documents inside Lotus Notes databases. So I had to get it working on my system!

Unfortunately, it turned out that my early attempts to get the filter working on my Ubuntu (now Mint) system completely failed. He was a Red Hat user, and there are quite a lot of packaging differences between a Debianesque Lotus Notes install and a Red Hat one, especially inside IBM, where we use our own internal versions of Java too. So the rest of this post is essentially a description of how I hacked his elegant code to pieces to make it work on my system. It’s particularly relevant to members of the IBM community who use the IBM “OCDC” extensions to Linux as their production workstation. I’m going to structure it as a description of how Recoll and the Notes filter work, then a description of how I chose to implement the indexing (to minimise wasteful re-indexing), and hence what files go where, and some links to allow people to download the various files that I know to work on my system.

At a very simplistic level, Recoll works by scanning your computer filesystem, and for each file it encounters, it works out what it is (plain text, HTML, Microsoft Word, etc) and then either indexes it (if it’s a format that it natively understands) using the Xapian framework, or passing it to a helper application or filter which returns a version of the file in a format that Recoll does understand, and so can index. In the case of container formats like zip files, Recoll extracts all the contents, and processes each of those extracted files in turn. This means Recoll can process documents to an arbitrary level of “nesting”, comfortably indexing a Word file inside a zip file inside a RAR archive for example. Once all your files are indexed, you can search the index with arbitrary queries. If you get any hits, Recoll will help to invoke an appropriate application to allow you to view the original file. The helper applications are already existing external applications like unRTF or PDFtotext that carry out conversions from formats that Recoll will commonly encounter, while filters are Python applications that enable Recoll to cope with specialist formats, such as Lotus Notes databases.

So, the way the Lotus Notes filter works, is that:

  1. Recoll encounters a Lotus Notes database, something.nsf
  2. To work out what to do with it, Recoll looks up the file type in its mimemap configuration file, and determines what “mimetype” to associate with that file
  3. It then looks up what action to take for that mimetype in the mimeconf configuration file, which tells it to invoke the rcllnotes filter
  4. It then invokes rcllnotes, passing it the URI to something.nsf
  5. rcllnotes then extracts all the documents (and their attachments) from the Notes database, passing them back to Recoll for indexing
  6. It does this by invoking a Java application, rcllnotes.jar, that must be run under the same JVM as Lotus Notes
  7. This Java application uses Lotus Notes’ Java APIs to access each document in the database in turn
  8. These are then either flattened into HTML output (using an XSLT stylesheet), which Recoll can consume directly, or, in the case of attachments, output as a document needing further processing; Recoll can tell which is which from the mimetype of the output. Included in the flattened HTML are a couple of metadata tags, one marking this HTML document as descended from a Lotus Notes database, and the other containing the complete Lotus Notes URI for the original document. This latter information can be used by the Lotus Notes client to directly access the document – which is crucial later in the search process
  9. Recoll then indexes the documents it receives, saving enough information to allow Recoll to use rcllnotes again to retrieve just the relevant document from within the Notes database.
  10. So, when a search results in a Notes document, Recoll can use the saved information (the URI of the database and the Notes UNID of the document?) and the rcllnotes filter to obtain either the flattened HTML version of the document, or a copy of an attachment. Recoll then uses the document’s mimetype to determine how to display it. In the case of an attachment, Recoll simply opens it with the appropriate application. In the case of the HTML, Recoll combines the expected “text/html” with the information in the metadata tag that describes this HTML as being derived from a Lotus Notes document. This produces a mimetype of “text/html|notesdoc”, which it then looks up in the mimeview configuration file, which causes it to use the rclOpenNotesClient script. That reads the Notes URI from the other HTML metadata field in the flattened HTML file, and then invokes the Lotus Notes client with it, causing the actual document of interest to be opened in Lotus Notes.

One of the problems with using Recoll with Lotus Notes databases is that it’s not possible to index just the few changed documents in a Notes database; you have to reindex an entire database worth of documents. Unfortunately there are usually a lot of documents in a Notes database, and the process of indexing a single document actually seems relatively slow, so it’s important to minimise how often you need to reindex a Notes database.

To achieve this, I make use of a feature of Recoll where it is possible to search multiple indexes in parallel. This allows me to partition my system into different types of data, creating separate indexes for each, but then searching against them all. To help with this, I made the decision to index only Notes databases associated with my email (either my current email database, or its archives) and a well-known (to me) subset of my filesystem data. Since my email archives are partitioned into separate databases, each holding about two years of content, I can easily partition the data I need to index into three categories: static Lotus Notes databases that never change (the old archives), dynamic Lotus Notes databases that change more frequently (my email database and its current archive), and other selected filesystem data.

I then create three separate indexes, one for each of those categories:

  1. The static Notes databases amount to about 8GB and take a little under 4 hours to index on my X201 laptop; however, since this is truly static, I only need to index it once.
  2. The dynamic Notes databases amount to about 1.5GB and take about 40 minutes to index; I reindex this once a week. This is a bigger job than it should be because I’ve been remiss and need to carve a big chunk of my current archive off into another “old” static one.
  3. Finally, the filesystem data runs to about another 20GB or so, and I expect this to change most frequently, but be the least expensive to reindex. Consequently I use “real time indexing” on this index; that means the whole 20GB is indexed once, and then inotify is used to determine whenever a file has been changed and trigger a reindex of just that file, immediately. That process runs in the background and is generally unnoticeable.

So, how to duplicate this setup on your system?

First you will need to install Recoll. Use sudo apt-get install recoll to achieve that. Then you need to add the Lotus Notes filter to Recoll. Normally you’d download the filter from here, and follow the instructions in the README. However, as I noted at the beginning, it won’t work “out the box” under IBM’s OCDC environment. So instead, you can download the version that I have modified.

Unpack that into a temporary directory. Now copy the files in RecollNotes/Filter (rcllnotes, rcllnotes.jar and rclOpenNotesClient) to the Recoll filter directory (normally /usr/share/recoll/filters), and ensure that they are executable (sudo chmod +x rcllnotes etc). You should also copy a Lotus Notes graphic into the Recoll images directory where it can be used in the search results; sudo cp /opt/ibm/lotus/notes/notes_48.png /usr/share/recoll/images/lotus-notes.png.

Now copy the main configuration file for the Notes filter to your home directory. It’s called RecollNotes/Configurations/.rcllnotes and once you have copied it to your home directory, you need to edit it, and add your Lotus Notes password in the appropriate line. Note that this is by default a “hidden” file, so won’t show up in Nautilus or normal “ls” commands. Use “ls -a” if necessary!

Next you need to set up and configure the three actual indexes. The installation of Recoll should have created a ~/.recoll/ configuration directory. Now create two more, such as ~/.recoll-static/ and ~/.recoll-dynamic/. Appropriately copy the configuration files from the subfolders of RecollNotes/Configurations/, into your three Recoll configuration folders. Now edit the recoll.conf files in ~/.recoll-static/ and ~/.recoll-dynamic/, updating the names of the Notes Databases that you wish to index. Now manually index these Notes databases by running the commands recollindex -c ~/.recoll-static -z and recollindex -c ~/.recoll-dynamic -z.

At this point it should be possible to start recoll against either of those indexes (recoll -c ~/.recoll-static for example) and run searches within databases in that index. I leave it as an exercise for the interested reader to work out how to automate the reindexing with CRON jobs.
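For what it’s worth, the weekly reindex could be driven by a crontab entry along these lines (edit with crontab -e; the schedule is just an example, and the paths assume the configuration directories described above):

```
# m h dom mon dow  command
# Reindex the dynamic Notes databases every Sunday at 03:00
0 3 * * 0  recollindex -c "$HOME/.recoll-dynamic" -z
```

The static index shouldn’t need an entry at all, since it only ever has to be built once.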

Next we wish to set up the indexing for the ~/.recoll/ configuration. This is the filesystem data that will run with a real-time indexer. So start by opening up the Recoll GUI. You will be asked if you want to start indexing immediately. I suggest that you select real-time indexing at startup, and let it start the indexing. Then immediately STOP the indexing process from the File menu. Now copy the file RecollNotes/recoll_index_on_ac to your personal scripts directory (mine is ~/.scripts), ensure it is executable, and then edit the file ~/.config/autostart/recollindex.desktop, changing the line that says Exec=recollindex -w 60 -m to Exec=~/.scripts/recoll_index_on_ac (or as appropriate). This script will in future be started instead of the normal indexer, and will ensure that indexing only runs when your laptop is on AC power, hopefully increasing your battery life. You can now start it manually with the command nohup ~/.scripts/recoll_index_on_ac &, but in future it will be started automatically whenever you login.

While your filesystem index is building, you can configure Recoll to use all three indexes at once. Start the Recoll GUI, and navigate to Preferences -> External Index dialog. Select “Add Index”, and navigate into the ~/.recoll-static/ and ~/.recoll-dynamic/ directories, selecting the xapiandb directory in each. Make sure each is selected. Searches done from the GUI will now use the default index (from the filesystem data) and the additional indexes from the two Lotus Notes configurations.

There is one final configuration worth carrying out, and that is to customise the presentation of the search results. If you look in the file in RecollNotes/reoll-result-list-customisation you will find some instructions to make the search results easier to read and use. Feel free to adopt them or not, as you wish.

Update: To answer the first question (by text message no less!), my indexes use up about 2.5GB of space, so no, it’s not insignificant, but I figure disk really is cheap these days.

Update: Corrected command to copy Notes icon to Recoll image directory to match configuration files, and a couple of the pathnames where I had introduced some typos.

Update: Added the main .rcllnotes configuration file to my archive of files, and updated the installation instructions to discuss installing it, and the need to add your Lotus Notes password to the file.

New ISP, and playing with callerid

Back at the start of last month (the 1st of March) Sky announced that they were going to acquire the UK fixed line businesses of Telefonica. This amounted to about 500,000 O2 and BE Unlimited broadband and telephone customers, including me.

On the surface, this looked like a really good deal for almost everyone – O2 and BE Unlimited had been steadily losing customers for a long time, because Telefonica wasn’t making the investments to allow them to keep up with technologies such as FTTC. Sky was already offering some of those technologies (particularly FTTC) and has a history of investing to create premium solutions for their customers. And Sky would gain the additional customers to jump them past Virgin, moving from the third largest ISP in the UK to the second largest, just behind BT.

However, there was a fly in the ointment; BE Unlimited in particular started life as the niche-market provider-of-choice for the demanding techie. As such they offered some less common features – static IP addresses (either individually or in blocks), full Reverse DNS resolution, Annex M ADSL2+ with user-configurable noise margin, multiple bonded connections … the types of features that are usually reserved for very expensive business class connections. And Sky, as a mass-market ISP, did not (and to my knowledge still do not) offer any of those kinds of features.

What the announcement did was galvanise many of the BE Unlimited customers to start looking for alternatives, in case the features that they depended on would go away when their service was transferred to Sky. In my case my requirements were fairly simple; at least 1 static IP address with configurable Reverse DNS, no port-blocking, no usage limits, and the ability to get at least 12Mb down and 2Mb up. And ideally I didn’t want to pay any more than I already was.

Frankly, I was expecting it to be a real challenge, but to my surprise the second ISP I called was able to meet all my needs. Admittedly the nice lady in the call-centre had never heard of Reverse DNS before, but she took down all my requirements, and put me on hold while she talked to their technical support team. Several long minutes later, and we were in business; their offer was 60Mbps down, 20Mbps up on an unlimited FTTC connection, with one static IP address with reverse-DNS, and no ports blocked. Even better, it was £2 a month less than I currently paid, provided I also transferred my phone service to them.

And so I became one of what appears to be the many who have left BE Unlimited rather than be moved to Sky. Which is rather sad really – they were an exceptional ISP who got bought by a large organisation (Telefonica) and then starved of the cash they needed to continue to grow. The end result is that soon they won’t exist at all.

The one thing that I have “lost” in the process of changing ISPs is the “anonymous caller rejection” feature on my phone line. BE provided that for £2 a month, whereas Plusnet (my new ISP) wants £4. Which is a lot for a feature that, in practice, I found rather limited anyway.

Which brings me to the point of this post. Rather than paying for the Anonymous Call Reject, I’m going to use a modem to monitor the callerid of inbound phone calls, and then decide on a call by call basis what to do with each one. I can either let the call continue (and ring the phones) or I can pick up and immediately hang up the line (using the modem) before the phones even ring. That should enable me to cut off all the callers with withheld numbers, just as the £4 a month ISP option would. However, I can also reject calls from “unavailable” numbers, which are usually cold callers using VoIP systems. And I can easily implement a system that rejects all calls outside of certain hours (say 8am to 6pm), unless the caller is on a whitelist of known contacts. Which is a lot better than the ISP anonymous caller rejection can achieve.
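That decision logic boils down to a few simple rules, which can be sketched as a small shell function. This is purely illustrative – my real implementation is in Perl – and the WITHHELD/UNAVAILABLE tokens are placeholders for whatever strings your modem actually reports in its callerid output:

```shell
# Sketch of the call-screening policy: always reject withheld/unavailable
# numbers; outside 8am-6pm, reject anything not on a whitelist.
should_reject() {
    number="$1"     # callerid string, e.g. "01234567890" or "WITHHELD"
    hhmm="$2"       # current time as HHMM, e.g. "0930"
    whitelist="$3"  # file of known-good numbers, one per line

    # Withheld and unavailable numbers are always rejected
    case "$number" in
        WITHHELD|UNAVAILABLE) echo reject; return ;;
    esac

    # Outside 8am to 6pm, only whitelisted callers get through
    if [ "$hhmm" -lt 0800 ] || [ "$hhmm" -ge 1800 ]; then
        grep -qx "$number" "$whitelist" || { echo reject; return; }
    fi

    echo allow
}
```

The time-window check is what the ISP service can never offer: the same caller gets through at midday but goes straight to a hang-up at midnight, unless they are on the whitelist.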

To do this I’m using an old USR 5668B serial-attached modem that correctly understands the UK’s completely proprietary callerid system. However, this modem is no longer available, and getting a clear answer on which modems can actually cope with UK callerid is, er, troublesome. Fortunately, I have discovered reports that the currently available USR 5637 USB fax modem works well with UK callerid. See here for ordering information (no affiliation).

To access the modem I’m using a simple script, written in Perl, using an external library called Device::Modem, which gives me a nice high-level abstracted interface to the modem, making the programming much easier. My code is based on some example code provided with Device::Modem for reading the callerid, but in my case it now logs the inbound calls to a file so I can better investigate who to put on a future whitelist.

Update: code attached for interest.

# Capture callerid information for incoming calls, using the modem attached to /dev/ttyS0
use strict;
use warnings;
use Device::Modem;

# Initialise the modem
my $port = '/dev/ttyS0';
my $baud = 57600;
my $modem = new Device::Modem ( port => $port );
die "Can't connect to port $port: $!\n" unless $modem->connect( baudrate => $baud );

# Set up a signal catcher to ensure the modem is disconnected in the event of a kill signal
$SIG{INT} = \&tsktsk;

sub tsktsk {
    $modem->disconnect;
    die "SIGINT received.";
}

# Enable the modem to return Caller ID information
my $response = $modem->answer(undef, 2); # 2 second timeout

# State of the most recent call
my $call_number = '';
my $call_date   = '';
my $call_time   = '';

# Loop forever, listening for calls
while( 1 ) {

    # Listen for any data coming from the modem
    my $cid_info = $modem->answer() // '';

    $cid_info =~ s/\R//g; # Get rid of any line break weirdness

    # If something was received, take a look at it
    if( $cid_info =~ /^RINGDATE\s=\s(\d{4})TIME\s=\s(\d{4})NMBR\s=\s(.*)/ ) {
        # It matches our (previously observed) callerid string

        # Date (mmdd) in $1
        $call_date = $1;
        # Time (hhmm) in 24hr format in $2
        $call_time = $2;
        # Phone number (string) in $3
        $call_number = $3;

        # Write out the date, time and number to a log file.
        # We don't want the file held open all the time, so open, append and close each time
        open (FH, ">>/home/richard/callerid/phone.log") || die "Couldn't open file: $!\n";
        print FH "$call_date $call_time $call_number\n";
        close FH;
    }
}
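Once the log has accumulated a few weeks of calls, picking whitelist candidates is just a matter of counting how often each number calls. A quick pipeline does the job (wrapped in a function here; the log format is the date/time/number lines the script writes):

```shell
# Count calls per number in the callerid log (each line is "mmdd hhmm number"),
# printing the most frequent callers first - likely whitelist candidates.
summarise_calls() {
    awk '{ print $3 }' "$1" | sort | uniq -c | sort -rn
}
```

Running this against the phone.log file gives a quick league table of callers; the frequent, recognisable numbers go on the whitelist, and the rest can be screened.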

Calling home, free of charge

I’ve just been through a period of travel hell; individually, each of the three back-to-back trips was interesting, useful, and in some ways quite good fun. But they were back to back, so over a 17 day period I’ve actually had only 3 days in the UK, and on two of those I was still working. Of course, mention business travel to anyone that doesn’t do it, and it brings to mind visions of exotic locations and lavish expense accounts. Whereas the reality tends to be cramped economy class travel, very long working days, and lonely hotel rooms a long way from friends and family.

More importantly, it means that just doing the normal day to day family things, like chatting to your kids about their day in school, can rapidly become an extortionately expensive luxury that you feel ought to be kept to a brief minimum. Especially when the company that you’re travelling for thinks that it shouldn’t pay for those phone calls – which particularly irks me.

And that got me thinking – I actually have all the facilities I need to call home to my family for nothing. My company expects me to need an Internet connection in whatever hotel I stay in, and fully funds it. I carry a laptop and an Android smartphone. In combination with the rather sophisticated phone that I have at home, I can talk to my family for as long as I want at no additional cost, using a technology called VoIP, based on an open standard called SIP.

My home phone is a Siemens Gigaset S685 IP DECT system, with 5 handsets. It’s what the marketing world likes to term “pro-sumer”, by which they really mean it’s got lots of features that most consumers will never use, but it’s not really got enough features for true commercial use. They also mean it’s a bit expensive for a consumer product.

But in this case, we’re talking about a DECT phone that connects to both my home phone line and my home broadband, and can have up to 7 logical phone lines connected to it – the physical “POTS” line, and 6 VoIP connections over the internet. The base unit can support separate conversations on up to 3 lines in parallel, with as many handsets connected to each of those lines as required. Each handset can act as a speaker-phone, or have either a headset or Bluetooth earpiece attached to it. It can even do call routing, where it looks at the number you dial and decides which line to place your call on. In short, it’s absolutely packed with features.

The key to the free calls home is of course, the VoIP lines, because as of version 2.3, Android can also make VoIP calls. The trick is simply getting the two to talk to one another, while they are in different countries.

So first you need to find a SIP provider for your home phone and your smartphone. The best way that I’ve found to do this is to set up a SIP-based virtual PBX. You create a free account with them (your PBX) and then add users to your PBX. Each user is given their own SIP credentials so they can log on, and you can associate an extension number with each of those users, allowing them to easily exchange calls – which is exactly what I need, as (unlike my Android smartphone) the handsets on my old Gigaset cannot call SIP URIs directly.

The first provider I came across that allows this is OnSip, in the USA. My experience with them has been good, but there are many others out there too. Unfortunately it’s not perfect – for me, there are a couple of quirks with their service. Firstly, they don’t seem to expect anyone outside the USA to be using their service, so I cannot enter my address correctly, which I suspect is technically a breach of their T&Cs. Secondly, it means that all the call tones you’ll hear when using this (ringing, engaged, unobtainable etc.) will be American rather than British. I can live with that, but if you choose to go down this route too, DO NOT ENABLE THE E911 OPTION. You could theoretically call out US emergency services to whatever pseudo-fictitious address you have registered with them, which would not be good.

To make it work:

  1. Register with your chosen free SIP PBX provider. I’ll assume OnSip, who will ask you to register a userid, then validate it by email, before letting you set up your first PBX.
  2. [Screenshot: Registering for OnSip PBX]

    When you come to define that PBX, you’ll see a screen similar to this one, asking for initial configuration information. You can call the domain name (which is essentially the Internet name of the PBX) anything you like. Incidentally, changing that country field seems to do nothing. It doesn’t even change the format of the information they collect for your address or (real) phone number.
  3. [Screenshot: Creating a new user]

    Having now got a SIP PBX available, we can add some users to it. Each user is roughly equivalent to a telephone on a normal PBX, but in this case the user accesses the PBX by way of some SIP credentials. The users can be called via those credentials, or (if the caller is another user of this PBX) the extension number that is associated with that user. This happens irrespective of the device that the user is using those credentials on, or its location.
  4. [Screenshot: List of virtual PBX users]

    After entering a user for the Gigaset (my house phone) and one for my smartphone, I have a PBX with two users associated with it. I’ve obscured some critical pieces of information to maintain some privacy, but fundamentally the OnSip system gives me a set of SIP credentials for each “user” of the system (bottom right on the image), and associates an extension with them too.
  5. Next we need to get the Gigaset to register with the SIP PBX. To do this, logon to the Gigaset web interface, select the “Settings” tab, and then navigate to Telephony, Connections.
    [Screenshots: Gigaset VoIP providers; Gigaset Basic VoIP Provider Settings; Gigaset Advanced VoIP Provider Settings]

    Now define a new provider by clicking on one of the “Edit” buttons as seen in the first of these screenshots. This will bring up the basic settings page seen in the second screenshot. Into this screen enter the data found on the OnSip user configuration screen under the Phone Configuration heading. Copy the OnSip username field into the Gigaset username field, the OnSip Auth Username field into the Gigaset Authentication Username field, the OnSip SIP Password into the Gigaset Authentication Password field, and then click the “Show Advanced Settings” button, which will extend the form with the additional fields seen in the third screenshot. Now copy the OnSip Proxy/Domain field into the four Gigaset fields: Domain, Proxy Server Address, Registrar Server, and Outbound Proxy. When you save the settings the Gigaset will attempt to register with the OnSip PBX. Sometimes this takes a few seconds. You may need to refresh the browser to see the status change to Registered.
  6. Now we need to make the Android Smartphone register to the OnSip PBX too. To do this, bring up the Android Settings by pressing “Menu” from the home screen, and tap “Settings”.
    [Screenshots: Android Call Settings; Android Internet Calling (SIP) Accounts; Android SIP Account Details]

    Navigate to the Call Settings, and tap it to reveal the screen in the first screenshot.
    Tap on “Use Internet Calling” and set it to “Ask for each call”. Then tap on Accounts to bring up the Internet Calling (SIP) Accounts screen where we can enter SIP accounts associated with the phone. See the second screenshot.
    Now add a new account for the OnSip PBX by tapping “New Account”; this will bring up a screen like the one shown in the third screenshot, into which you need to enter your credentials.
    The content of each of the fields (from the OnSip phone configuration information) should be obvious by now. When you save the account you will want to check that the registration information is correct. The easiest way to do this is to enable the “Receive incoming calls” option (back on the first screenshot again), which forces Android to register all its accounts with their associated providers. If you get an error then either the provider is down (possible) or the settings are incorrect (more likely). Leaving this option enabled forces Android to keep those connections active, which runs all the radios and eats the battery, but allows incoming VoIP calls to your smartphone (say, from the Gigaset). In my experience it’s too power-hungry to use in this mode, other than very sparingly. Fortunately you can make outgoing calls from the smartphone without enabling it.
  7. [Screenshot: Android Internet Calling enabled Contact]

    Finally you need to define a contact in the smartphone contacts to use with Internet Calling. As all my contacts are normally managed by Lotus Traveler, which has no concept of Internet Calling, I defined a new local-only contact that is not synched with any of my accounts (ie, Google or Lotus Traveler) and used that. Enter the name of the contact as normal, then scroll all the way to the bottom of the contact, where you will find a “More” section. Expand that, and continue to scroll to the bottom, where you will find a field for “Internet call”; into that simply enter either the OnSip SIP URI of your Gigaset, or its OnSip extension number.

Note that this really only works when connected to a reasonably good quality WiFi network. Poor quality networks seem to give quite variable results; sometimes they still work, other times one end or the other may experience audio problems and/or dropped calls. It seems to work just fine through both the IBM and my home firewalls, even at the same time. I’ve not checked the actual audio codecs in use, but sound quality is subjectively better than a normal cellular call. Neither Android nor the Gigaset seems to inject comfort noise (the artificial white-noise normally played when no-one is speaking), so the pauses in a conversation are totally silent, which can be slightly disconcerting.

Normally if you want to dial the Smartphone from the Gigaset you would need to indicate to the phone that it should send the call over the appropriate VoIP provider. This quickly becomes a pain, but it’s easy to set up a simple dial plan on the Gigaset so calls that start “700” (remember I made my extensions be 7000 and 7001?) go out over the OnSip VoIP connection automatically, which makes the solution really easy to use (ie family-friendly) from the Gigaset.

Finally, there is a really interesting technology called iNum. Sadly, as far as I can tell, it’s not implemented by any of the major telecoms ISPs in the UK. Combined with SRV records in DNS, it would allow some really cool automatic rerouting of calls over different networks to different endpoints, depending on context. In theory the network could understand whether I was at home or out with my mobile, and route the call appropriately. It could also do smart things like look at the inbound number and the time, and route business calls to voicemail out of office hours, but still let personal calls through as normal.

If only it were implemented.

Building a scan server on Ubuntu Server 12.04

I have an old but capable little Canon scanner that I’ve used for various administrative tasks for a couple of years now. It connects to my laptop via USB, and draws its power from that link too, which makes it very convenient and easy to use.

Except that for the last few months, my daughters have been making increasing use of the scanner for their homework too. Which is fine, but means that the scanner is being carried around the house, and regularly being plugged in and out of different laptops. Which is probably not a good recipe for a long and trouble-free working life.

So today I configured a simple scan server. The scanner is now “permanently” attached to my home server, and anyone who wants to scan documents can do so from their own computer, over the network, using the scanner on the home server. No more hunting for missing USB cables, or even a missing scanner!

This is surprisingly easy to achieve under Linux, as the scanning subsystem (SANE) is implemented as a client/server system, much like the printing system. The only thing that makes it a bit more convoluted is the involvement of a superserver like inetd or one of its equivalents.

On the server:

  1. Plug in the scanner
  2. sudo apt-get install sane-utils
  3. Make sure your /etc/services file contains:

    sane-port 6566/tcp sane saned # SANE

  4. Configure inetd or xinetd to autostart saned. In my case I use xinetd, so I need to ensure that /etc/xinetd.d/saned contains:

    service sane-port
    {
        port        = 6566
        socket_type = stream
        server      = /usr/sbin/saned
        protocol    = tcp
        user        = saned
        group       = saned
        wait        = no
        disable     = no
    }

    Now restart xinetd (by sudo /etc/init.d/xinetd restart) so that saned can be autostarted when it’s required.

  5. Next we need to configure how saned is started. This is controlled by the file /etc/default/saned, which should be changed to look like:

    # Defaults for the saned initscript, from sane-utils

    # Set to yes to start saned
    RUN=yes

    # Set to the user saned should run as
    RUN_AS_USER=saned
  6. At this point we need to make sure that saned can access the scanner. I did this by setting up a udev rule to arrange for the permissions on the underlying device to be set so saned can access it. For my convenience I also set up a “well known” symbolic name (/dev/scanner) to the scanner device too, as that base device can change depending on what is plugged into the server at any point in time; I’m pretty sure saned doesn’t require this though. I achieved this by making the new file /etc/udev/rules.d/90-scanner.rules contain the single line:


    The idVendor and idProduct are obtained by running the lsusb command, and extracting the USB vendor and product identifiers from the scanner entry.
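For illustration, a rule of this shape does the job. The vendor and product IDs here are placeholders and must be replaced with the values from your own scanner’s lsusb entry, and the group should match whatever saned runs as on your system:

```
ATTRS{idVendor}=="04a9", ATTRS{idProduct}=="2206", MODE="0660", GROUP="saned", SYMLINK+="scanner"
```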

  7. Next we need to configure saned itself; all we need to do is define the systems that are allowed to connect to it. This is done by making the file /etc/sane.d/saned.conf read:

    # saned.conf
    # Configuration for the saned daemon

    ## Daemon options
    # Port range for the data connection. Choose a range inside [1024 - 65535].
    # Avoid specifying too large a range, for performance reasons.
    # ONLY use this if your saned server is sitting behind a firewall. If your
    # firewall is a Linux machine, we strongly recommend using the
    # Netfilter nf_conntrack_sane connection tracking module instead.
    # data_portrange = 10000 - 10100

    ## Access list
    # A list of host names, IP addresses or IP subnets (CIDR notation) that
    # are permitted to use local SANE devices. IPv6 addresses must be enclosed
    # in brackets, and should always be specified in their compressed form.
    # The hostname matching is not case-sensitive.


    # NOTE: /etc/inetd.conf (or /etc/xinetd.conf) and
    # /etc/services must also be properly configured to start
    # the saned daemon as documented in saned(8), services(4)
    # and inetd.conf(4) (or xinetd.conf(5)).

    In this case you can see I’ve defined it so anything in the subnet can connect to saned.
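For reference, the access list entry is just a host name, IP address, or CIDR subnet on a line of its own at the end of that file; for a typical home network (the subnet here is a placeholder for your own) it would be something like:

```
192.168.1.0/24
```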

On a standard Ubuntu desktop client, only one action needs to be taken to allow it to seamlessly make use of the scan server:

  1. Modify the /etc/sane.d/net.conf file, so it reads:

    # This is the net backend config file.

    ## net backend options
    # Timeout for the initial connection to saned. This will prevent the backend
    # from blocking for several minutes trying to connect to an unresponsive
    # saned host (network outage, host down, …). Value in seconds.
    connect_timeout = 20

    ## saned hosts
    # Each line names a host to attach to.
    # If you list "localhost" then your backends can be accessed either
    # directly or through the net backend. Going through the net backend
    # may be necessary to access devices that need special privileges.
    # localhost
    # My home server:

  2. From this point onwards, you should be able to start “Simple Scan” on the client machine, and see the scanner attached to the server machine as though it were locally attached. You can alter all the settings as required, and scan over the network as needed.
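The host entry at the end of that net.conf listing is simply the scan server’s name or IP address on a line of its own; with a server at 192.168.1.10 (a placeholder address – use your own server’s name or address), the tail of the file would read:

```
# My home server:
192.168.1.10
```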

Getting Adobe Flash to work on Ubuntu 12.04

One of those “I’ll get around to it” jobs has been upgrading my wife’s laptop. She’s been happily running an old version of Ubuntu for some time now on an ancient Thinkpad without any real issues. Then very recently she ran into problems with uploading photographs to Snapfish. The culprit was clearly an update to Flash, but I decided since she wasn’t getting security updates any more, it was probably better to upgrade her to the latest 12.04 LTS version of Ubuntu to fix the problem.

Unfortunately she didn’t have enough spare disk space to go through a series of upgrades (this machine has an old 30GB hard drive!) so really this meant a hardware upgrade, and a proper reinstall too. However, as I was feeling a bit more awake today (and the weather was awful) this seemed like a good little task to take on. So I backed up the system to my server, found an old 80GB 2.5″ PATA drive in my spares box, and set to.

As with all Thinkpads, making hardware upgrades is fairly straightforward. Pop out one screw, and the drive assembly slides out the side. Remove another four screws from that assembly to separate the drive from its cage and plastic bezel. Swapping in the higher capacity drive was a simple matter of reversing the process.

This machine only has a USB 1.1 port, so I did the Ubuntu install from a CD, rather than my normally preferred USB memory stick. It’s a lot noisier than using the memory stick, but has an identical effect – I was soon running the latest version of Ubuntu. A quick update cycle got me the latest security fixes, and then I added in some key additional packages, sorted out the printer and wireless network accesses, and then restored all the user data that I’d backed up earlier.

Which was great, but Snapfish still didn’t work. It seems that Adobe Flash doesn’t work on 12.04 LTS either. Drat.

So after a bit more reading, it all became clear. Adobe don’t follow the normal Linux approach of issuing a specific version of their software with a given OS release, and then only providing security fixes to it. Instead, they regularly issue their current, latest code to all the different operating systems that they support. This means that when they introduce a bug, you get it on all the versions of all their supported operating systems.

Except in this case, just to make it more confusing, it seems that some people are seeing one set of problems, and others another, while some people are seeing no problems at all. And Adobe are apparently not particularly interested, as they don’t see supporting Linux as a priority.

So in the end, after a great deal of reading various forums on the Internet, it became increasingly apparent that there was no real fix for the problems that Adobe appear to have introduced in their latest code. Ultimately, the simplest solution is to just back-level the Flash plugin in the browser to an earlier version that doesn’t exhibit these problems.

To do this, I downloaded Adobe’s archive of previous Flash plugin releases. I chose the v11.1.102.55 release (174 MB), and extracted the Linux version, which is packaged as a shared library called libflashplayer.so. By searching the /usr tree I found that (in my case) there was a single copy of this library already installed in /usr/lib/flashplugin-installer/, which I then replaced with the back-level version.

After restarting Firefox, Adobe Flash works normally once more, and we can bulk upload photos to Snapfish again. Clearly this hack will be overwritten if Adobe issues a new version of the Flash plugin, but that’s probably what we want to happen anyway.
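The swap itself is worth wrapping in a tiny shell function so the original library is always backed up first. The paths mentioned in the comments are where the library lived on my system (you would run this with sudo there); the function itself just takes the two file names:

```shell
# Swap a plugin shared library for a back-level copy, keeping a backup of
# the original. On my system the live copy was
# /usr/lib/flashplugin-installer/libflashplayer.so - adjust to suit.
swap_plugin() {
    current="$1"      # the installed plugin library
    replacement="$2"  # the back-level copy extracted from the archive
    cp "$current" "$current.bak" &&
    cp "$replacement" "$current"
}
```

Keeping the .bak copy means the whole experiment is trivially reversible if the older plugin turns out to have problems of its own.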