Health update

The last time I mentioned my health was back in November of last year; at that point I was keen to get back to work. I was frustrated to be sitting at home with not much to occupy myself, and feeling somewhat guilty for exceeding the three months that I’d originally suggested to my management I’d need to be away from work.

After some meetings with my management we agreed that I’d start a phased return to work in late November by working approximately half days from home, with no commuting. The main intent was for me to catch up on all the things that had gone on while I was out (including a large internal reorganisation) and get all my admin (and email!) up to date, rather than worry too much about any specific business goals.

And to my surprise I found it incredibly difficult. Initially I struggled to regularly work even half a day, and when I tried to “push on through” I failed. Spectacularly. I’d literally fall asleep at the keyboard. Over the 6 weeks running up to Christmas I did see my stamina improve a little, and I even managed some half days back in my local office. But progress was depressingly slow, and when I first tried to commute up to London for a meeting, I felt so unwell by the time I’d got there that I barely had time to attend the meeting before I had to leave for home again.

A fortnight’s break at Christmas was a welcome relief, during which I had another consultation with my surgeon and brought up the issue of my tiredness and ongoing kidney pain. The result was a set of blood and urine tests.

The blood tests revealed little that was wrong, or at least nothing unexpected; my kidney function appeared to be fine, but I was still showing the signs of a low-level background infection. Since my perineal wound was (and is) still open, this was only to be expected. But fundamentally, I was in as good health as anyone could expect – the suggestion was that I just needed more time to get over my last operation, and that all the other treatments that I’ve been through over the last 4 years probably weren’t helping.

The urine test, however, showed another drug-resistant UTI. More antibiotics put paid to that, but I was advised to see my urologist again too. He suspects that my problems with kidney pain and repeated UTIs are ultimately due to a problem called renal or vesicoureteral reflux. This is normally a condition most common in young children, but in my case it is almost certainly caused by the process of reimplanting my left ureter; it no longer acts as a one-way valve, allowing urine to flow back up into my kidney.

Of itself, this causes nothing more than mild discomfort. But in combination with UTIs it can cause significant pain (as I discovered) and potentially further permanent damage to my kidney, which is most definitely not desirable. So for the next six months I’ve been prescribed a prophylactic dose of antibiotic (Trimethoprim) to keep the UTIs at bay.

And since returning to work after the Christmas break, I’ve noticed that my stamina has improved markedly. I’m still a long way from what I would consider normal, but I’m managing to work much closer to full days now, and I’m coping with some commuting too. I can see real progress.

Of course, in retrospect the lesson to be learned is that I probably tried to come back to work too early. I suspect that if I’d stayed off work for another month or so my recovery would probably have been faster and easier. But I’d have been climbing the walls!

Ditching the spinning rust

For some time now I’ve been thinking of switching my laptop storage over to an SSD. I like the idea of the massively improved performance, the slightly reduced power consumption, and the ability to better withstand the abuse of commuting. However, I don’t like the limited write cycles, or (since I need a reasonable size drive to hold all the data I’ve accumulated over the years) the massive price-premium over traditional drives. So I’ve been playing a waiting game over the last couple of years, and watching the technology develop.

But as the January sales started, I noticed that the prices of 256GB-class SSDs had dipped to the point where I’m happy to “invest”. So I’ve picked up a Samsung 840 EVO 250GB SSD for my X201 Thinkpad; it’s essentially a mid-range SSD at a budget price-point, and should transform my laptop’s performance.

SSDs are very different beasts from traditional hard drives, and from reading around the Internet there appear to be several things that I should take into account if I want to obtain and then maintain the best performance from one. Predominant amongst these are ensuring the correct alignment of partitions on the SSD, ensuring proper support for the Trim command, and selecting the best file system for my needs.

But this laptop is supplied to me by my employer, and must have full system encryption implemented on it. I can achieve this using a combination of LUKS and LVM, but it complicates the implementation of things like Trim support. The disk is divided into a minimal unencrypted boot partition, with the majority of the space turned into a LUKS-encrypted block device. That is then used as an LVM physical volume, from which the logical volumes for the actual Linux install are allocated.
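
For reference, here is a minimal sketch of how that stack might be built from the command line. The device names and volume sizes are purely illustrative assumptions, not my actual layout:

    # /dev/sda1 remains the small unencrypted /boot partition
    # /dev/sda2 becomes the LUKS container that holds everything else
    sudo cryptsetup luksFormat /dev/sda2          # create the encrypted container
    sudo cryptsetup luksOpen /dev/sda2 crypt      # unlock it as /dev/mapper/crypt
    sudo pvcreate /dev/mapper/crypt               # use the unlocked device as an LVM physical volume
    sudo vgcreate vg /dev/mapper/crypt            # ...inside a volume group...
    sudo lvcreate -L 30G -n root vg               # ...from which the install's volumes are carved
    sudo lvcreate -L 8G -n swap vg
    sudo lvcreate -l 100%FREE -n home vg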

Clearly, once I started looking at partition alignment and different filesystem types, a reinstall became the simplest option, and the need for Trim support dictates fairly recent versions of LUKS and LVM, driving me to a more recent distribution than my current Mint 14.1, which is getting rather old now. This gives me the opportunity to upgrade and fine-tune my install to better suit the new SSD. I did consider moving to the latest Mint 16, but my experiences with Mint have been quite mixed. I like their desktop environment very much, but am much less pleased with other aspects of the distribution, so I think I’ll switch back to the latest Ubuntu, but using the Cinnamon desktop environment from Mint; the best of all worlds for me.

Partition alignment

This article describes why there is a problem with modern drives that use 4K sectors internally, but present themselves as having 512-byte sectors externally. The problem is actually magnified with SSDs, where misalignment can cause significant issues with excessive wearing of the cells. Worse still, modern SSDs like my Samsung write in 4K pages, but erase in 1MB blocks of 256 pages. That means partitions need to be aligned not to “just” 4K boundaries, but to 1MB boundaries.

Fortunately this is trivial in a modern Linux distribution; we partition the target drive with a GPT scheme using gdisk, and on a new blank disk it will automatically align the partitions to 2048-sector (1MB) boundaries. On disks with existing partitions this can be enabled with the “l 2048” command in the advanced sub-menu, which will force alignment of newly created partitions on 1MB boundaries.
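
As a purely illustrative example, the same 1MB-aligned layout can also be scripted non-interactively with sgdisk, gdisk’s scriptable sibling; the device name and partition sizes below are assumptions rather than a recipe for my exact disk:

    sudo sgdisk --zap-all /dev/sda                                    # wipe any existing partition tables (careful!)
    sudo sgdisk -a 2048 -n 1:0:+256M -t 1:8300 -c 1:boot  /dev/sda    # small unencrypted /boot
    sudo sgdisk -a 2048 -n 2:0:0     -t 2:8300 -c 2:crypt /dev/sda    # the rest, for the LUKS container
    sudo sgdisk -p /dev/sda                                           # print the table; start sectors should all be multiples of 2048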

Trim support

In the context of SSDs, TRIM is an ATA command that allows the operating system to tell the SSD which sectors are no longer in use, and so can be cleared ready for rapid reuse. Wikipedia has some good information on it here. The key in my case is going to be enabling the filesystem to issue TRIM commands, and then enabling the LVM and LUKS containers that hold the filesystem to pass those TRIM commands on through to the actual SSD. There is more information on how to achieve this here.

However, there are significant questions over whether it is best to enable TRIM in the fstab mount options, so that the filesystem issues TRIM commands automatically as it deletes files, or to periodically run the user-space command fstrim from something like a cron job or an init script. Both approaches still have scenarios that could result in significant performance degradation. At the moment I’m tending towards using fstrim in some fashion, but I need to do more research before making a final decision on this.
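
To make the two approaches concrete, this is roughly what would be involved, with hypothetical device and mapper names and assuming a Debian/Ubuntu-style crypttab; it is a sketch, not my final configuration:

    # First, TRIM has to be allowed through the LUKS layer; on Debian/Ubuntu add
    # the "discard" option to the container's entry in /etc/crypttab:
    #   sda2_crypt  UUID=<luks-uuid>  none  luks,discard
    # (LVM passes discards straight through a linear volume on recent kernels.)

    # Option 1: online TRIM - add "discard" to the mount options in /etc/fstab:
    #   /dev/mapper/vg-root  /  btrfs  defaults,discard  0  0

    # Option 2: periodic TRIM - omit "discard" and batch it up instead,
    # for example from a weekly cron job or init script:
    sudo fstrim -v /
    sudo fstrim -v /home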

File system choice

Fundamentally I need a filesystem that supports the TRIM command – not all do. But beyond that I would expect any filesystem to perform better on an SSD than it does on a hard drive, which is good.

However, as you would expect, different filesystems have different strengths and weaknesses, so by knowing my typical usage patterns I can select the “best” of the available filesystems for my system. And interestingly, according to these benchmarks, the LUKS/LVM containers that I will be forced to use can have a much more significant effect on some filesystems (particularly the almost-default ext4) than others.

So based on my reading of these benchmarks and the type of use that I typically make of my machine, my current thought is to run an Ubuntu 13.10 install on BTRFS filesystems with lzo compression for both my root and home partitions, both hosted in a single LUKS/LVM container. My boot partition will be a totally separate ext3 partition.
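
In practice, and with an illustrative volume-group name matching the sketch above, the resulting filesystems and fstab entries might look something like this:

    sudo mkfs.btrfs -L root /dev/mapper/vg-root
    sudo mkfs.btrfs -L home /dev/mapper/vg-home

    # /etc/fstab - "ssd" and "compress=lzo" enable the SSD optimisations and on-the-fly lzo compression
    /dev/mapper/vg-root  /      btrfs  defaults,ssd,compress=lzo  0  0
    /dev/mapper/vg-home  /home  btrfs  defaults,ssd,compress=lzo  0  0
    /dev/sda1            /boot  ext3   defaults                   0  2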

The slight concern with that choice is that BTRFS is still considered “beta” code, and still under heavy development. It is not currently the default on any major distribution, but it is offered as an installation choice on almost all. The advanced management capabilities such as on-the-fly compression, de-duplication, snapshots etc make it very attractive though, and ultimately unless people like me do adopt it, it will never become the default filesystem.

I’ll be implementing a robust backup plan though!

More progress…

My meeting with my consultant was basically all good.

He removed the corrugated drain from my perineal wound, which has transformed my level of comfort. I can now sit down and move around infinitely more easily than before. He has left the actual wound tract open, which will allow continued drainage (so no relief from the joys of absorbent pads etc yet) but he tells me that with the drain removed, the amount of exudate from the wound should drop significantly.

His suspicion is that my bladder pains are actually a combination of irritation from the stent along with an infection, so I’ve been given a week’s supply of a targeted antibiotic that should knock most urinary infections on the head. Apparently this will also turn my urine orange, which will be novel. But in parallel to that he’s running a urine sample through the labs (results on Monday sometime) just to check in case I need something more powerful.

He also let me know that all the bits of me that got removed during my last operation went off to pathology for testing to see if there were any signs of cancer present. I wasn’t aware of this, or I might have been more nervous. However, the good news is that there was absolutely no sign of cancer anywhere in any of the excised tissue. Phew.

He next wants to see me in about 6-8 weeks’ time, and in the meantime wants me to start getting back to as normal a life as I can: building up my stamina, starting to do more gentle exercise, and continuing to eat and drink normally, with the only restrictions being on anything that might put undue pressure on my perineal wound (so no cycling for a couple more months at least).

I asked about returning to work, and from his perspective, I should start planning that once my perineal wound has stopped discharging, which sounds really positive. However, I must admit that at the moment I’d be a waste of space at work – I still can’t stand up for more than about 10 minutes at a time, and I’m cat-napping several times each day, as well as sleeping most of the night. However, if things follow the same course as after my first major operation then this phase will last about a month while my body continues healing, and then things will start to improve rapidly again. So I’m hopeful that I may get back to work in October, rather than “late November” as I had originally warned my management team.

We shall see!

You don’t have to know the answer to everything – just how to find it

Since I work at IBM, I get to use the company’s own email system, which is based on what used to be called Lotus Notes. It’s recently had some extra “social media awareness” added to it, been rebranded “IBM Notes”, and repositioned as a desktop client for social business. Which is all very modern and hip, especially for a product that has its roots back in the early 1990s. However, most organisations (including IBM) tend to use it solely for email – for which it is the proverbial sledgehammer.

But having been using it for some 18 years now, I’m fairly comfortable with it. The only issue I have is that, as I’ve been using it for so long, my mail archives contain a huge amount of useful information from old projects that I’ve worked on. I also have other information related to those projects stored elsewhere on my laptop hard drive, and pulling all that information together and searching it coherently isn’t a trivial problem. However, in recent years desktop search engines have begun to provide a really nice solution to this.

The problem here is that Lotus Notes is based on a series of binary databases which form the backbone of its ability to efficiently replicate documents between clients and servers. Desktop search engines generally don’t understand those databases, and hence do not work with Lotus Notes. So searching my laptop becomes a somewhat tedious process, involving the Lotus Notes client search feature, and manually correlating with a desktop search engine of some type. It works, but it’s just not as good as it could be.

What I really want, what I really really want (as the Spice Girls would sing) is a desktop search engine that can understand and integrate my Lotus Notes content. And that’s what this post is all about.

Since I run Linux I have a choice of open source desktop search engines such as Tracker or Beagle (now deceased). But my current preference is for Recoll, which I find to be very usable. And then, last year, I discovered that a colleague had written and published a filter, to enable Recoll to index documents inside Lotus Notes databases. So I had to get it working on my system!

Unfortunately, it turned out that my early attempts to get the filter working on my Ubuntu (now Mint) system completely failed. He was a Red Hat user, and there are quite a lot of packaging differences between a Debianesque Lotus Notes install and a Red Hat one, especially inside IBM where we use our own internal versions of Java too. So the rest of this post is essentially a description of how I hacked his elegant code to pieces to make it work on my system. It’s particularly relevant to members of the IBM community who use the IBM “OCDC” extensions to Linux as their production workstation. I’m going to structure it into a description of how Recoll and the Notes filter work, then a description of how I chose to implement the indexing (to minimise wasteful re-indexing), and hence what files go where, and some links to allow people to download the various files that I know to work on my system.

At a very simplistic level, Recoll works by scanning your computer filesystem; for each file it encounters, it works out what it is (plain text, HTML, Microsoft Word, etc.) and then either indexes it using the Xapian framework (if it’s a format that it natively understands), or passes it to a helper application or filter, which returns a version of the file in a format that Recoll does understand and so can index. In the case of container formats like zip files, Recoll extracts all the contents, and processes each of those extracted files in turn. This means Recoll can process documents to an arbitrary level of “nesting”, comfortably indexing a Word file inside a zip file inside a RAR archive, for example. Once all your files are indexed, you can search the index with arbitrary queries. If you get any hits, Recoll will help to invoke an appropriate application to allow you to view the original file. The helper applications are already existing external applications like unRTF or PDFtotext that carry out conversions from formats that Recoll will commonly encounter, while filters are Python applications that enable Recoll to cope with specialist formats, such as Lotus Notes databases.

So, the way the Lotus Notes filter works, is that:

  1. Recoll encounters a Lotus Notes database, something.nsf
  2. To work out what to do with it, Recoll looks up the file type in its mimemap configuration file, and determines what “mimetype” to associate with that file
  3. It then looks up what action to take for that mimetype in the mimeconf configuration file, which tells it to invoke the rcllnotes filter
  4. It then invokes rcllnotes, passing it the URI to something.nsf
  5. rcllnotes then extracts all the documents (and their attachments) from the Notes database, passing them back to Recoll for indexing
  6. It does this by invoking a Java application, rcllnotes.jar, that must be run under the same JVM as Lotus Notes
  7. This Java application uses Lotus Notes’ Java APIs to access each document in the database in turn
  8. These are then either flattened into HTML output (using an XSLT stylesheet) which Recoll can consume directly, or in the case of attachments, output as a document needing further processing; Recoll can tell which is which from the mimetype of the output. Included in the flattened HTML are a couple of metadata tags, one marking this HTML document as descended from a Lotus Notes database, and the other containing the complete Lotus Notes URI for the original document. This latter information can be used by the Lotus Notes client to directly access the document – which is crucial later in the search process
  9. Recoll then indexes the documents it receives, saving enough information to allow Recoll to use rcllnotes again to retrieve just the relevant document from within the Notes database.
  10. So, when a search results in a Notes document, Recoll can use the saved information (the URI of the database and the Notes UNID of the document?) and the rcllnotes filter to obtain either the flattened HTML version of the document, or a copy of an attachment. Recoll then uses the document’s mimetype to determine how to display it. In the case of an attachment, Recoll simply opens it with the appropriate application. In the case of the HTML, Recoll combines the expected “text/html” with the information in the metadata tag that describes this HTML as being derived from a Lotus Notes document. This produces a mimetype of “text/html|notesdoc”, which it then looks up in the mimeview configuration file, which causes it to use the rclOpenNotesClient script. That reads the Notes URI from the other HTML metadata field in the flattened HTML file, and then invokes the Lotus Notes client with it, causing the actual document of interest to be opened in Lotus Notes.

One of the problems with using Recoll with Lotus Notes databases is that it’s not possible to index just the few changed documents in a Notes database; you have to reindex an entire database’s worth of documents. Unfortunately there are usually a lot of documents in a Notes database, and the process of indexing a single document actually seems relatively slow, so it’s important to minimise how often you need to reindex a Notes database.

To achieve this, I make use of a feature of Recoll where it is possible to search multiple indexes in parallel. This allows me to partition my system into different types of data, creating separate indexes for each, but then searching against them all. To help with this, I made the decision to index only the Notes databases associated with my email (either my current email database or its archives) and a well-known (to me) subset of my filesystem data. Since my email archives are partitioned into separate databases, each holding about two years of content, I can easily partition the data I need to index into three categories: static Lotus Notes databases that never change (the old archives), dynamic Lotus Notes databases that change more frequently (my email database and its current archive), and other selected filesystem data.

I then create three separate indexes, one for each of those categories:

  1. The static Notes databases amount to about 8GB and take a little under 4 hours to index on my X201 laptop; however, since these are truly static, I only need to index them once.
  2. The dynamic Notes databases amount to about 1.5GB and take about 40 minutes to index; I reindex these once a week. This is a bigger job than it should be because I’ve been remiss and need to carve a big chunk of my current archive off into another “old” static one.
  3. Finally, the filesystem data runs to about another 20GB or so, and I expect this to change most frequently, but be the least expensive to reindex. Consequently I use “real-time indexing” on this index; that means the whole 20GB is indexed once, and then inotify is used to detect whenever a file has changed and trigger a reindex of just that file, immediately. That process runs in the background and is generally unnoticeable.

So, how to duplicate this setup on your system?

First you will need to install Recoll. Use sudo apt-get install recoll to achieve that. Then you need to add the Lotus Notes filter to Recoll. Normally you’d download the filter from here, and follow the instructions in the README. However, as I noted at the beginning, it won’t work “out of the box” under IBM’s OCDC environment. So instead, you can download the version that I have modified.

Unpack that into a temporary directory. Now copy the files in RecollNotes/Filter (rcllnotes, rcllnotes.jar and rclOpenNotesClient) to the Recoll filter directory (normally /usr/share/recoll/filters), and ensure that they are executable (sudo chmod +x rcllnotes etc). You should also copy a Lotus Notes graphic into the Recoll images directory where it can be used in the search results; sudo cp /opt/ibm/lotus/notes/notes_48.png /usr/share/recoll/images/lotus-notes.png.
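
Pulled together, and assuming the default filter directory of /usr/share/recoll/filters, that step looks something like this (the path to the unpacked archive is obviously a placeholder):

    cd /path/to/unpacked/RecollNotes
    sudo cp Filter/rcllnotes Filter/rcllnotes.jar Filter/rclOpenNotesClient /usr/share/recoll/filters/
    sudo chmod +x /usr/share/recoll/filters/rcllnotes /usr/share/recoll/filters/rclOpenNotesClient
    sudo cp /opt/ibm/lotus/notes/notes_48.png /usr/share/recoll/images/lotus-notes.png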

Now copy the main configuration file for the Notes filter to your home directory. It’s called RecollNotes/Configurations/.rcllnotes and once you have copied it to your home directory, you need to edit it, and add your Lotus Notes password in the appropriate line. Note that this is by default a “hidden” file, so won’t show up in Nautilus or normal “ls” commands. Use “ls -a” if necessary!

Next you need to set up and configure the three actual indexes. The installation of Recoll should have created a ~/.recoll/ configuration directory. Now create two more, such as ~/.recoll-static/ and ~/.recoll-dynamic/. Copy the configuration files from the subfolders of RecollNotes/Configurations/ into the corresponding three Recoll configuration folders. Now edit the recoll.conf files in ~/.recoll-static/ and ~/.recoll-dynamic/, updating the names of the Notes databases that you wish to index. Then manually index these Notes databases by running the commands recollindex -c ~/.recoll-static -z and recollindex -c ~/.recoll-dynamic -z.

At this point it should be possible to start recoll against either of those indexes (recoll -c ~/.recoll-static for example) and run searches within databases in that index. I leave it as an exercise for the interested reader to work out how to automate the reindexing with CRON jobs.
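
As a starting point, one possible (purely illustrative) crontab entry for the weekly reindex of the dynamic databases might look like this – add it with crontab -e, and note that cron does not expand ~, hence $HOME:

    # Reindex the dynamic Notes databases at 02:00 every Sunday
    0 2 * * 0  recollindex -c $HOME/.recoll-dynamic -z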

Next we wish to set up the indexing for the ~/.recoll/ configuration. This is the filesystem data that will run with a real-time indexer. So start by opening up the Recoll GUI. You will be asked if you want to start indexing immediately. I suggest that you select real-time indexing at startup, and let it start the indexing. Then immediately STOP the indexing process from the File menu. Now copy the file RecollNotes/recoll_index_on_ac to your personal scripts directory (mine is ~/.scripts), ensure it is executable, and then edit the file ~/.config/autostart/recollindex.desktop, changing the line that says Exec=recollindex -w 60 -m to Exec=~/.scripts/recoll_index_on_ac (or as appropriate). This script will in future be started instead of the normal indexer, and will ensure that indexing only runs when your laptop is on AC power, hopefully increasing your battery life. You can now start it manually with the command nohup ~/.scripts/recoll_index_on_ac &, but in future it will be started automatically whenever you login.
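
I won’t reproduce the script here, but as a rough illustration of the idea (this is my sketch, not necessarily what recoll_index_on_ac actually does), a wrapper could simply wait for mains power before launching the real-time indexer, using the on_ac_power helper from the powermgmt-base package:

    #!/bin/sh
    # Sketch only: wait until the laptop is on AC power, then start the
    # Recoll real-time indexer; the real recoll_index_on_ac may be smarter.
    while ! on_ac_power; do
        sleep 300        # on battery - check again in five minutes
    done
    exec recollindex -w 60 -m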

While your filesystem index is building, you can configure Recoll to use all three indexes at once. Start the Recoll GUI, and navigate to Preferences -> External Index dialog. Select “Add Index”, and navigate into the ~/.recoll-static/ and ~/.recoll-dynamic/ directories, selecting the xapiandb directory in each. Make sure each is selected. Searches done from the GUI will now use the default index (from the filesystem data) and the additional indexes from the two Lotus Notes configurations.

There is one final configuration worth carrying out, and that is to customise the presentation of the search results. If you look in the file RecollNotes/reoll-result-list-customisation you will find some instructions to make the search results easier to read and use. Feel free to adopt them or not, as you wish.

Update: To answer the first question (by text message no less!), my indexes use up about 2.5GB of space, so no, it’s not insignificant, but I figure disk really is cheap these days.

Update: Corrected command to copy Notes icon to Recoll image directory to match configuration files, and a couple of the pathnames where I had introduced some typos.

Update: Added the main .rcllnotes configuration file to my archive of files, and updated the installation instructions to discuss installing it, and the need to add your Lotus Notes password to the file.

Why I love working in IT, and for IBM

I was recently looking at some of the research projects that IBM has been investing in, and came across an article describing some work we’ve been doing on improving proton radiation therapy. This is a cutting-edge radiotherapy treatment; it can be more effective than standard X-ray radiotherapy because it directs the radiation to precisely where it is needed, with minimal damage to surrounding tissue – the kind of damage that has caused all my problems with lack of healing in my pelvis.

At the current time there are very few places offering this treatment. There are about 10 in the USA, and a few in Europe. There are none in the UK – the NHS sends what few patients can justify this treatment abroad to Switzerland and the USA. That’s at least partly because the systems are very expensive to install, and then can’t be heavily used (which would justify the high build cost) because each case requires a long and complex planning phase to ensure that the beam of protons only goes exactly where it should.

What our researchers are doing is automating a lot of the difficult manual work involved in planning the treatments, reducing the planning time for each treatment from several days to a few minutes. Ultimately this will probably result in a (very small) clinical improvement in the effectiveness of the treatment for each patient, but by making it possible to use the machine much more effectively it will enable their wider adoption, helping to bring this new radiotherapy treatment to a much larger number of patients.

Calling home, free of charge

I’ve just been through a period of travel hell; individually, each of the three back-to-back trips was interesting, useful, and in some ways quite good fun. But they were back to back. So over a 17-day period, I’ve actually had only 3 days in the UK, and two of those I was still working. Of course, mention business travel to anyone that doesn’t do it, and it brings to mind visions of exotic locations and lavish expense accounts. Whereas the reality tends to be cramped economy-class travel, very long working days, and lonely hotel rooms a long way from friends and family.

More importantly, it means that just doing the normal day-to-day family things, like chatting to your kids about their day in school, can rapidly become an extortionately expensive luxury that you feel ought to be kept to a brief minimum. Especially when the company that you’re travelling for thinks that it shouldn’t pay for those phone calls – which particularly irks me.

And that got me thinking – I actually have all the facilities I need to enable me to call home to my family for nothing. My company expects me to need an Internet connection in whatever hotel I stay in, and fully funds it. I carry a laptop and an Android smartphone. In combination with the rather sophisticated phone that I have at home, I can talk to my family for as long as I want at no additional cost, using a technology called VoIP, based on an open standard called SIP.

My home phone is a Siemens Gigaset S685 IP DECT system, with 5 handsets. It’s what the marketing world likes to term “pro-sumer”, by which they really mean it’s got lots of features that most consumers will never use, but it’s not really got enough features for true commercial use. They also mean it’s a bit expensive for a consumer product.

But in this case, we’re talking about a DECT phone that connects to both my home phone line and my home broadband, and can have up to 7 logical phone lines connected to it – the physical “POTS” line, and 6 VoIP connections over the internet. The base unit can support separate conversations on up to 3 lines in parallel, with as many handsets connected to each of those lines as required. Each handset can act as a speaker-phone, or have either a headset or a Bluetooth earpiece attached. It can even do call routing, where it looks at the number you dial, and decides which line to place your call on. In short, it’s absolutely packed with features.

The key to the free calls home is of course, the VoIP lines, because as of version 2.3, Android can also make VoIP calls. The trick is simply getting the two to talk to one another, while they are in different countries.

So first you need to find a SIP provider for your home phone and your smartphone. The best way that I’ve found to do this is to set up a SIP-based virtual PBX. You create a free account with a provider (your PBX) and then add users to it. Each user is given their own SIP credentials so they can log on, and you can associate an extension number with each of those users, allowing them to easily exchange calls – which is exactly what I need, as (unlike my Android smartphone) the handsets on my old Gigaset cannot call SIP URIs directly.

The first provider I came across that allows this is OnSip, in the USA. My experience with them has been good, but there are many others out there too. Unfortunately it’s not perfect – for me, there are a couple of quirks with their service. Firstly, they don’t seem to expect anyone outside the USA to be using their service, so I cannot enter my address correctly. Which I suspect is technically a breach of their T&Cs. And secondly, it means that all the call tones you’ll hear when using this (ringing, engaged, unobtainable etc.) will be American rather than British. I can live with that, but if you choose to go down this route too, DO NOT ENABLE THE E911 OPTION. You could theoretically call out US emergency services to whatever pseudo-fictitious address you have registered with them, which would not be good.

To make it work:

  1. Register with your chosen free SIP PBX provider. I’ll assume OnSip, who will ask you to register a userid, then validate it by email, before letting you set up your first PBX.
  2. Registering for OnSip PBX

    When you come to define that PBX, you’ll see a screen similar to this one, asking for initial configuration information. You can call the domain name (which is essentially the Internet name of the PBX) anything you like. Incidentally, changing that country field seems to do nothing. It doesn’t even change the format of the information they collect for your address or (real) phone number.
  3. Creating a new user

    Having now got a SIP PBX available, we can add some users to it. Each user is roughly equivalent to a telephone on a normal PBX, but in this case the user accesses the PBX by way of some SIP credentials. The users can be called via those credentials, or (if the caller is another user of this PBX) the extension number that is associated with that user. This happens irrespective of the device that the user is using those credentials on, or its location.
  4. List of virtual PBX users

    After entering a user for the Gigaset (my house phone) and one for my smartphone, I have a PBX with two users associated with it. I’ve obscured some critical pieces of information to maintain some privacy, but fundamentally the OnSip system gives me a set of SIP credentials for each “user” of the system (bottom right on the image), and associates an extension with them too.
  5. Next we need to get the Gigaset to register with the SIP PBX. To do this, logon to the Gigaset web interface, select the “Settings” tab, and then navigate to Telephony, Connections.
    Screenshots: (1) Gigaset VoIP providers, (2) Gigaset basic VoIP provider settings, (3) Gigaset advanced VoIP provider settings

    Now define a new provider by clicking on one of the “Edit” buttons as seen in the first of these screenshots. This will bring up the basic settings page seen in the second screenshot. Into this screen enter the data found on the OnSip user configuration screen under the Phone Configuration heading. Copy the OnSip username field into the Gigaset username field, the OnSip Auth Username field into the Gigaset Authentication Username field, the OnSip SIP Password into the Gigaset Authentication Password field, and then click the “Show Advanced Settings” button, which will extend the form with the additional fields seen in the third screenshot. Now copy the OnSip Proxy/Domain field into the four Gigaset fields: Domain, Proxy Server Address, Registrar Server, and Outbound Proxy. When you save the settings the Gigaset will attempt to register with the OnSip PBX. Sometimes this takes a few seconds. You may need to refresh the browser to see the status change to Registered.
  6. Now we need to make the Android Smartphone register to the OnSip PBX too. To do this, bring up the Android Settings by pressing “Menu” from the home screen, and tap “Settings”.
    Screenshots: (1) Android call settings, (2) Android Internet Calling (SIP) accounts, (3) Android SIP account details

    Navigate to the Call Settings, and tap it to reveal the screen in the first screenshot.
    Tap on “Use Internet Calling” and set it to “Ask for each call”. Then tap on Accounts to bring up the Internet Calling (SIP) Accounts screen where we can enter SIP accounts associated with the phone. See the second screenshot.
    Now add a new account for the OnSip PBX by tapping “New Account”; this will bring up a screen like the one shown in the third screenshot, into which you need to enter your credentials.
    The content of each of the fields (from the OnSip phone configuration information) should be obvious by now. When you save the account you will want to check that the registration information is correct. The easiest way to do this is to enable the “Receive incoming calls” option (back on the first screenshot again), which will force Android to register all its accounts with their associated providers. If you get an error then either the provider is down (possible) or the settings are incorrect (more likely). Leaving this option enabled forces Android to keep those connections active, which runs all the radios and eats the battery, but allows incoming VoIP calls to your smartphone (say, from the Gigaset). In my experience it’s too power-hungry to use in this mode, other than very sparingly. Fortunately you can make outgoing calls from the smartphone without enabling it.
  7. Android Internet Calling enabled Contact

    Finally you need to define a contact in the smartphone contacts to use with Internet Calling. As all my contacts are normally managed for me by Lotus Traveler, which has no concept of Internet Calling, I defined a new local-only contact that is not synched with any of my accounts (i.e. Google or Lotus Traveler) and used that. Enter the name of the contact as normal, then scroll all the way to the bottom of the contact, where you will find a “More” section. Expand that, and continue to scroll to the bottom, where you will find a field for “Internet call”; into that simply enter either the OnSip SIP URI of your Gigaset, or its OnSip extension number.

Note that this really only works when connected to a reasonably good quality WiFi network. Poor quality networks seem to give quite variable results. Sometimes they still work, other times one end or the other may experience audio problems and/or dropped calls. It seems to work just fine through both the IBM and my home firewalls, even at the same time. I’ve not checked the actual audio codecs in use, but sound quality is subjectively better than a normal cellular call. Neither Android nor the Gigaset seems to generate comfort noise (i.e. inject low-level white noise when no one is speaking), so the pauses in a conversation are totally silent, which can be slightly disconcerting.

Normally if you want to dial the smartphone from the Gigaset you would need to indicate to the phone that it should send the call over the appropriate VoIP provider. This quickly becomes a pain, but it’s easy to set up a simple dial plan on the Gigaset so that calls starting “700” (remember I made my extensions 7000 and 7001?) go out over the OnSip VoIP connection automatically, which makes the solution really easy to use (i.e. family-friendly) from the Gigaset.

Finally, there is a really interesting technology called iNum which, when combined with SRV records in DNS, would allow some really cool automatic rerouting of calls over different networks to different endpoints, depending on context. Sadly, as far as I can tell, it’s not implemented by any of the major telecoms ISPs in the UK. In theory the network could understand whether I was at home or out with my mobile, and route the call appropriately. It could also do smart things like look at the inbound number and the time, and route business calls to voicemail out of office hours, but still let personal calls through as normal.
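
For the curious, the SRV records involved are just ordinary DNS entries; a hypothetical zone snippet advertising a SIP service looks something like this (example.com and the host names are placeholders):

    ; _service._protocol.domain  TTL   class  SRV  priority  weight  port  target
    _sip._udp.example.com.       3600  IN     SRV  10        60      5060  sip1.example.com.
    _sip._udp.example.com.       3600  IN     SRV  20        0       5060  backup.example.com.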

If only it were implemented.

Eat well, drink in moderation, and sleep sound, in these three good health abound

The last couple of weeks seem to have been manically busy, and as a result I’ve been commuting full-time to London, rather than my more usual 3 or 4 days a week. I’ve found that really difficult, and as the days passed I was struggling to keep going.

The main issue is just plain tiredness. My ileostomy has both herniated, and become very enlarged since it was installed. The result is a large bulge on my abdomen which makes sleeping difficult; I rarely get more than a few hours sleep a night.

So, discovering that I had meetings all-day on Thursday, all-day on Friday, and was entertaining customers on Thursday night too, didn’t fill me with my usual enthusiasm. I honestly didn’t think I’d make it through Friday. So to ease the load I booked an hotel in London on Thursday night.

And had the best night’s sleep I can remember.

The reason, as far as I can tell, is that Marriott have invested in some spectacularly nice pillows. Their beds and duvets are pretty good too. But their pillows are something special. Wonderfully soft, but firm and supportive too. Mine are woefully poor in comparison.

Turns out that they even sell them. But in Europe they only sell smaller versions of the pillow I made use of (I think to suit European queen-sized beds), and even then, at £54 + p&p, it’s a pretty full-on price. Unfortunately I have a king-size bed, so I really need one a little larger.

The interesting thing I took from this is the description of the pillow, which has two chambers: a central one giving firm support, surrounded by a soft, fluffy outer layer giving comfort.

But this is the internet. Time to go hunting for someone who sells something similar, but in a size better suited to my bed. And here it is. Unfortunately it comes with what seems to me to be a terrifying £71.25 price tag. But then, if you believe the 10-year guarantee, that’s £7.13 a year, or a mere 2p per night.

So, what price a sound night’s sleep?

State of mind

It’s been a long time since I last posted anything here about my treatment. Actually, it’s been a long time since I posted almost anything here. And I’ve been pretty distant and difficult to contact in the real world too.

I could make a series of high-minded excuses, but actually the truth is much more prosaic; after the latest series of updates from my consultants I’ve been struggling to come to terms with my current situation, trying to work out what to do next, and how. And fundamentally I’ve not been coping too well; I’ve been acting rather like the archetypal “rabbit in the headlights”.

Tuesday will be the 3rd anniversary of my original cancer operation. I’ve been fighting medical problems for almost all of that time, as well as the 6 months prior to that while I was being diagnosed. And it’s fair to say that I’m tired. I’m tired of the hospital appointments, the consultations, and of being prodded and poked, injected, bled, and scanned. Of being outwardly positive and cheerful, even when I feel depressed and negative. And above it all, I’m heartily sick of still not being well again.

And I feel guilty for feeling like that, especially when actually I’m one of the lucky ones who seems to be surviving his brush with cancer, unlike so many others.

If this were a different problem I’d take a break and come back to the fight refreshed. But there is no break with this; at the moment it’s my new normal. So I have to just keep on going. But unfortunately this also seems to be an inflection point in my treatment; I need to make some irreversible decisions on the direction of that treatment, and they all seem to lead to fairly disagreeable end points. Which is not helping much.

So, to the friends and colleagues I’ve not been in touch with for a while, or been short and irritable with, I’m sorry. I’ll try to do a bit better. And since, according to Dorothy L. Sayers, a “trouble shared is trouble halved”, I’ll even try to post a bit more here too.

Grumpy bar-steward

I’ve noticed that recently I’ve been less positive and a bit more grumpy when thinking and talking about my treatment.

It’s not really that anything has radically changed, but time is passing. The constant fighting with the ileostomy, the lack of sleep and consequent tiredness, the limits on what I can physically do while I’ve got the ileostomy. Trying to cope with the frustrations of the treatment not working as planned. Not being able to move on with my life plan in some of the ways that I would like to. It all adds up. And wears me down.

I noticed that on the 26th it will have been 3 years that I’ve been fighting this, and I’ve realised that actually I’m never really going to be able to completely stop, because even if we do finally solve my current problems, I’ll still never completely recover back to the level of wellness that most people take for granted.

Meanwhile, because I’m basically coping with being back at work and I’m getting pretty good at hiding the outward signs of my condition, there seems to be an expectation that I can do all the mad things that everyone else is expected to do again. Which actually, I can’t. Fortunately my immediate colleagues shield me from some of the worst excesses of the IBM system – but it’s frustrating that they have to.

For the first time I have found myself wondering if I should declare myself “disabled” to the IBM HR machine. I guess it all hinges on your definition of disabled. I’ve never considered myself to be disabled – there are people in much worse situations than me – but I’m starting to wonder if I need that label to maintain awareness. It feels like a very large step though, and one I’d really rather not take. Somehow it also feels like giving in.

And in other news, my appointment for the MAG3 Renogram is now through for Thursday morning. The fight goes on.

RIP, old friend

My second ever laptop, my old Thinkpad 600, has finally died.

It replaced my original 760CD, and like that machine, I carried it with me on numerous business trips all over the world. It survived being dropped, splashed with coffee, slung into bags, and then crammed into overhead luggage racks or under seats where it invariably got stood on. Despite all this abuse it still worked faultlessly in temperatures as varied as -20°C in the Scandinavian winter through to +40°C in the humid summers of the US Deep South.

When IBM supplied me with an upgrade I bought my 600 off the company. For a time it was my personal system. Then it ran my home network, 24×7, until I could afford the parts to build a proper server. It was my wife’s main machine for a time, and lately my youngest daughter has been doing all her homework on it. Even in retirement it worked hard for its living.

I figure it must be 12 years old, and when it finally failed it was still running Ubuntu 11.04, which is a sophisticated and current operating system. Ok, the bizarre hybrid sound card wasn’t recognised (the 600 used a custom DSP to implement the modem and the sound system – a great idea, badly executed) and it wasn’t the fastest system on the block, but the overall experience was still pretty good.

At the end of the day it’s just a pile of electronics attached to a magnesium alloy chassis with some rather drab composite plastic covers, that no longer works. I should just recycle it. But there is an emotional attachment; I can’t bring myself to just take it to the recycling centre and dump it. But given that the motherboard has failed, the best I can offer it is the chance to donate some spare parts to new projects. So for now it’s going into my spare parts box; hopefully the screen, memory, disk and perhaps keyboard will show up in some future projects.

It may yet live again!