Health

It’s been a long time since I last posted about how I was getting on. It feels like an “old” topic, and as there hasn’t been much change, one that I’ve nothing very interesting to write about. But, I notice with some frustration that it’s been very nearly a year since my colostomy operation, and I still have a series of difficulties with it. So I thought I’d post about my current status, and hope you forgive me if it sounds like I’m just moaning.

So firstly, my colostomy hasn’t really settled down into the sort of natural rhythm that the medical professionals like to describe, where it operates once or maybe twice a day at fairly set times. Mine seems fairly erratic, which means being prepared to change my bag wherever I go. I suspect that this is at least partly down to the rather erratic schedule of my life, but doing any form of physical activity usually provokes it too, which makes doing regular exercise frustratingly difficult. As a result, I’ve also put on quite a lot of weight over the last 6 months, which I’m finding quite depressing too.

To try to overcome this, I’ve learned how to irrigate my colostomy, which is essentially a self-administered enema. The idea is that if I do this every two to three days then there is not enough waste in the colon between irrigations to cause problems with my daily life. The good news is that the technique works really well. The less good news is that rather than two to three days between irrigations, I only get about 20 hours. The actual irrigation process takes about an hour, and needs me to be relaxed. So if I can find a relaxing hour every day, I could use this approach to effectively manage my colostomy. The problem, of course, is who has a spare hour every day? Certainly not me, with my erratic schedule…

More worryingly my perineal wound (where my anus was removed) still hasn’t healed, and still weeps a significant amount of (what I assume to be) interstitial fluid, which means I need to dress (and change) the wound several times a day. It’s not very comfortable either, especially in this warm weather. Sadly, the healing problems I’m experiencing are again probably rooted in the preoperative radiation therapy that I had back in 2009. I sometimes wonder how things might have turned out if I had chosen not to have that radiation therapy: I might never have needed my colostomy. Or my perineal wound might be healed. Or I might be dead. Hmmm.

Of course, such deliberations are ultimately pointless, and instead I’m planning to discuss with my surgeon what options are open to me now to improve the wound when I next see him (in September).

On the urological front, the problem I was experiencing with my left kidney appears to have stabilised; the function won’t get any better, but it’s not getting any worse either. In short, the reimplantation of my left ureter seems to have been a complete success. I experienced a series of urological infections in the first few months after the operation, but six months on a low-grade antibiotic seems to have resolved that. I’ve been off the antibiotics for about five weeks now, and there have been no signs of any recurrence. Hopefully I won’t need to worry any more about my kidneys.

Meanwhile, time marches on, and on August the 11th I have another appointment with the CT Scanner to check to see if there is any sign of recurring cancer. I suspect that this may well be the last of those regular checks, and that I will be largely discharged from the cancer side of things after that, with the doctors concentrating on my perineal wound from now on. But I’m sure I’ll find out more when I next see my surgeon.

Fixing a bad block in an ext4 partition on an advanced format SATA drive

I recently got an email from my home server, warning me that it had detected an error on one of its hard drives. This was automatically generated by smartd, part of SmartMonTools, which monitors the health of the disk storage attached to my server by running a series of regular tests without my intervention.
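
If you want smartd to do something similar for you, the tests are scheduled from /etc/smartd.conf. My exact configuration isn’t shown here, but a minimal entry along the following lines (the schedule syntax is described in the smartd.conf man page) monitors the drive, runs a short self-test every day at 2am and a long self-test every Saturday at 3am, and emails root if anything fails:

# Monitor /dev/sda, schedule daily short and weekly long self-tests,
# and mail root when an error is detected
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m root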

To find out exactly what had been found, I used the smartctl command to see the logged results of the last few self-tests. As you can see, the daily Short Offline tests were all passing successfully, but the long-running weekly Extended Offline tests were showing a problem with the same LBA on each run, namely LBA 1318075984:


# smartctl -l xselftest /dev/sda
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-24-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, http://www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error        00%             9559  -
# 2  Short offline       Completed without error        00%             9535  -
# 3  Short offline       Completed without error        00%             9511  -
# 4  Extended offline    Completed: read failure        70%             9490  1318075984
# 5  Short offline       Completed without error        00%             9487  -
# 6  Short offline       Completed without error        00%             9463  -
# 7  Short offline       Completed without error        00%             9440  -
# 8  Short offline       Completed without error        00%             9416  -
# 9  Short offline       Completed without error        00%             9392  -
#10  Short offline       Completed without error        00%             9368  -
#11  Short offline       Completed without error        00%             9344  -
#12  Extended offline    Completed: read failure        70%             9322  1318075984
#13  Short offline       Completed without error        00%             9320  -
#14  Short offline       Completed without error        00%             9296  -
#15  Extended offline    Completed without error        00%             9204  -
#16  Short offline       Completed without error        00%             9198  -
#17  Short offline       Completed without error        00%             9176  -
#18  Short offline       Completed without error        00%             9152  -

The fact that this is a “read failure” probably means that this is a medium error. That can usually be resolved by writing fresh data to the block. This will either succeed (in the case of a transient problem), or cause the drive to reallocate a spare block to replace the now-failed block. The problem, of course, is that that block might be part of some important piece of data. Fortunately I have backups. But I’d prefer to restore only the damaged file, rather than the whole disk. The rest of this post discusses how to achieve that.

Firstly we need to look at the disk layout to determine what partition the affected block falls within:


# gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 453C41A1-848D-45CA-AC5C-FC3FE68E8280
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2157 sectors (1.1 MiB)

Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048         1050623    512.0 MiB   EF00
   2         1050624         4956159    1.9 GiB     8300
   3         4956160        44017663    18.6 GiB    8300
   4        44017664        52219903    3.9 GiB     8200
   5        52219904        61984767    4.7 GiB     8300
   6        61984768       940890111    419.1 GiB   8300
   7       940890112      3907028991    1.4 TiB     8300

We can see that the disk uses logical 512 byte sectors, and that the failing sector is in partition /dev/sda7. We also want to know (for later) the physical block size for this disk, which can be found by:


# cat /sys/block/sda/queue/physical_block_size
4096

Since this is larger than the logical sector size of 512 bytes, it means that it’s actually the whole physical block containing LBA 1318075984 that is failing, and therefore so are all the other LBAs in that physical block; in this case, that means 8 LBAs. Because of the way the SMART self-tests work, it’s likely that 1318075984 and the 7 LBAs that follow it will be failing, but we can test that later.
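
Since the physical blocks are aligned groups of 8 logical sectors, a quick bit of shell arithmetic shows which LBAs share the failing physical block:

# # First and last LBA of the 4K physical block containing LBA 1318075984:
# echo $(( 1318075984 / 8 * 8 )) $(( 1318075984 / 8 * 8 + 7 ))
1318075984 1318075991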

Next we need to understand what filesystem that partition has been formatted as. I happen to know that all my partitions are formatted as ext4 on this system, but you could find this information in the /etc/fstab configuration file.

The rest of this post is only directly relevant to ext4/3/2 filesystems. Feel free to use the general process, but please look elsewhere for detailed instructions for BTRFS, XFS, and so on.

The next thing to do is to determine the offset of the failing LBA into the sda7 partition. So, 1318075984 – 940890112, which is 377185872 blocks of 512 bytes. We now need to know how many filesystem blocks that is, so let’s find out what blocksize that partition is using:


# tune2fs -l /dev/sda7 | grep Block
Block count: 370767360
Block size: 4096
Blocks per group: 32768

So, each filesystem block is 4096 bytes. To determine which filesystem block contains the failing LBA, we divide its offset into the partition by 8 (4096/512), giving us filesystem block 47148234. Since this divides exactly, we know it happens to be the first logical LBA in that filesystem block that is causing the error (as we expected).
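
For the record, the same arithmetic can be done straight from the shell:

# # LBA offset into the sda7 partition:
# echo $(( 1318075984 - 940890112 ))
377185872
# # ...divided by 8 (4096 / 512) to give the filesystem block:
# echo $(( 377185872 / 8 ))
47148234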

Next we want to know if that filesystem block is in use, or part of the filesystem’s free space:


# debugfs /dev/sda7
debugfs 1.42.9 (4-Feb-2014)
debugfs: testb 47148234
Block 47148234 marked in use

So we know that filesystem block is part of a file – unfortunately. The question is which one?


debugfs: icheck 47148234
Block Inode number
47148234 123993
debugfs: ncheck 123993
Inode Pathname
123993 /media/homevideo/AA-20140806.mp4

Since the filesystem block size and the physical disk block size are the same, I could just assume that that’s the only block affected. But that’s probably not very wise. So let’s check the physical blocks on the disk before and after the one we know is failing, by reading the LBAs 8 below and 8 above the failing one:


# # The reported failing LBA:
# dd if=/dev/sda of=sector.bytes skip=1318075984 bs=512 count=1
dd: error reading ‘/dev/sda’: Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 2.85748 s, 0.0 kB/s
# # The reported failing LBA - 8:
# dd if=/dev/sda of=sector.bytes skip=1318075976 bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0246482 s, 20.8 kB/s
# # The reported failing LBA + 8:
# dd if=/dev/sda of=sector.bytes skip=1318075992 bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0246482 s, 20.8 kB/s

So, in this case we can see that the physical blocks before and after the failing disk block are both currently readable, meaning that we only need to deal with the single failing block.

Since I have a backup of the file that this failing block occurs in, we’ll complete the resolution of the problem by overwriting the failing physical block with zeros (definitely corrupting the containing file), which triggers the drive’s block-level reallocation routines, and then deleting the file, prior to recovering it from backup:


# dd if=/dev/zero of=/dev/sda seek=1318075984 bs=512 count=8
8+0 records in
8+0 records out
4096 bytes (4.1 kB) copied, 0.000756974 s, 5.4 MB/s
# rm /media/homevideo/AA-20140806.mp4
# dd if=/dev/sda of=sector.bytes skip=1318075984 bs=512 count=8
8+0 records in
8+0 records out
4096 bytes (4.1 kB) copied, 0.000857198 s, 4.8 MB/s

At this point I could run an immediate Extended Offline self-test, but I’m confident that as I can now successfully read the originally-failing block, the problem is solved, and I’ll just wait for the normally scheduled self-tests to be run by smartd again.
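
For reference, kicking off an immediate self-test by hand is just a case of:

# # Start an Extended Offline self-test now; it runs in the background and the
# # result appears in the self-test log once it completes
# smartctl -t long /dev/sda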

Update: I’ve experienced a situation where overwriting the failing physical sector with zeros using dd failed to trigger the drive’s automatic block reallocation routines. However, in that case I was able to resolve the situation by using hdparm instead. Use:

hdparm --read-sector 1318075984 /dev/sda

to (try to) read and display a block of data, and

hdparm --write-sector 1318075984 --yes-i-know-what-i-am-doing /dev/sda

to overwrite the block with zeros. Both these commands use the drive’s logical block size (normally 512 bytes), not the 4K physical sector size.

Something of a milestone

For those who have been following this blog long enough, you might have noticed that March 2014 marks 5 years since I underwent the operation that removed my tumour. Five-year survival is seen as a big thing for cancer patients. You hear all kinds of talk of “being cured”, or that the likelihood of the cancer returning is now “comparable to the risk of contracting the cancer in the first place”.

Sadly, neither is exactly true – it’s just a convenient measure of the effectiveness of the treatment regimes that the doctors use. You can read more on this here, but ultimately, XKCD explains what this really means to me far better than I can.

But having said all that, 5 years is still a major milestone. It’s a long time with (in my case) no sign of any recurrence of the cancer, and in general the longer I live with no sign of cancer, the better my chance of having actually beaten it. The bottom line for me is that I’ve seen too many people die who were diagnosed at the same time as me to feel anything other than incredibly lucky to still be here.

That alone has to be worth celebrating.

Updated weatherstation software

I’ve been enhancing the software that I use to read data from my weatherstation. It’s been working well since I added some extra code to detect when the sensor readings are obviously corrupt (radio interference, uncontrolled concurrent memory accesses, etc), but the tracking of the wind direction was still not quite good enough.

To improve that, I’ve extended the number of readings used in the running average, and enhanced the algorithms that average the wind direction to take into account not only the direction of each data point (unit vectors), but also the speed of the wind in that direction (true vectors).
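
The weatherstation code itself is more than I want to reproduce here, but the core of the true-vector averaging is simple enough to sketch in a few lines of awk, assuming for illustration that each reading arrives as a “bearing-in-degrees windspeed” pair: each reading is converted into north/east components scaled by the windspeed, the components are summed, and the sum is turned back into a bearing with atan2:

# Speed-weighted average wind direction from "degrees speed" pairs on stdin
awk 'BEGIN { pi = atan2(0, -1) }
     { e += $2 * sin($1 * pi / 180); n += $2 * cos($1 * pi / 180) }
     END { d = atan2(e, n) * 180 / pi; if (d < 0) d += 360; printf "%.1f\n", d }'

Feeding that two readings either side of north (say “350 5” and “10 5”) gives 0.0, which is exactly where a naive average of the raw bearings (180) goes badly wrong.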

The result seems to track significantly more accurately to nearby high quality stations, but I am conscious that this is still presenting a manipulation of the data, rather than the actual data that a better sensor would provide. Having said that, it’s now producing a pretty good result for hardware that cost me only £50.

The software can be downloaded from here.

Playing with callerid again

Back in April last year I started playing with an old external serial-attached modem to read the callerid of incoming calls. My intention was to intercept calls from direct marketeers. The concept was good, but I ran into problems with the modem; it took up a lot of space, kept overheating, and lacked any voice facilities, limiting what I could do with it. In addition (probably because of the modem constantly overheating) the software I was running kept crashing.

So in the end, I gave up on the idea.

But recently we seem to have had a spate of annoying calls from direct marketeers based in India, selling products for UK companies that are cynically avoiding the UK’s regulations around direct marketing opt-outs. The straw that broke the camel’s back was the call that came through at 6am on a Saturday morning.

The problem here is that the phone companies don’t care about this. They make money from all these calls, so it’s not in their interest to block them. They’ll sell me a service to block “withheld” numbers, but not numbers that show as “unavailable”. Unfortunately, these days the majority of the problem calls come from India, and show as “unavailable” because the Indian call centres are using VoIP systems to place their calls to the UK, and they deliberately ensure that their callerid is “unavailable”.

So I’m back on the idea of making my own solution to this problem. So first off, I purchased a USR 5637 USB fax modem that is compatible with the UK callerid protocols. Even better, this is a voice modem too, so it can interact with a caller, sending audio as well as data down the phone line, and recognise touchtones. It’s also small, self-powered, cool-running and very reliable.

Next I spent some time looking to see what other people have done in this space, and eventually found this webpage, which describes a simple Bash script that intercepts calls that are not on a whitelist, and plays modem tones to them before hanging up. Recognised callers are allowed to ring the phone as normal, progressing to an answerphone if necessary. It’s not exactly the functionality that I want, but the simplicity is beguiling, and it’s trivial to extend it to do something closer to what I do want. And anyway, anything more sophisticated is going to require something like Asterisk, and switching my whole phone system over to VoIP, which is not going to be very family-friendly.

So for now, I’m gathering lists of all incoming calls to establish a basic whitelist, before moving on to do some really basic screening of calls.
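
As a flavour of where this is heading, the whitelist check itself is trivial; here’s a minimal sketch in Bash, assuming the calling script has already extracted the caller’s number from the modem output (the file locations are just placeholders):

#!/bin/bash
# Log every incoming number, and report whether it is on the whitelist.
WHITELIST=/etc/callerid/whitelist.txt    # one number per line (placeholder path)
LOGFILE=/var/log/callerid.log            # running log of all callers (placeholder path)

number="$1"
echo "$(date --iso-8601=seconds) ${number:-unavailable}" >> "$LOGFILE"

if grep -qxF "$number" "$WHITELIST"; then
    exit 0    # known caller: let the phone ring through as normal
else
    exit 1    # unknown caller: the calling script can play modem tones and hang up
fi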

Plusnet IPv6 still delayed, so let’s go spelunking in a Hurricane Electric tunnel

When I last changed ISP (last year, when Sky bought my old ISP, BE Unlimited) one of my requirements was for my new ISP to have a roadmap for making IPv6 available. I’ve been waiting (impatiently) for my new ISP, Plusnet, to deliver on their initial “coming soon” statements. Sadly, like almost all ISPs, Plusnet are not moving quickly with IPv6, and are still investigating alternatives like carrier grade NAT to extend the life of IPv4. I can sympathise with this position – IPv6 has limited device support, most of their customers are not ready to adopt it, and trying to provide support for the necessary dual-stack environment would not be easy. But, the problem of IPv4 address exhaustion is not going away.

So at the end of last year they started their second controlled trial of IPv6. I was keen to join, but the conditions were onerous: I would need a second account, I would need to provide my own IPv6-capable router, I couldn’t keep my existing IPv4 static IP address, I couldn’t have Reverse DNS on the line, and I had to commit to providing feedback on my “normal” workload. So, much as I wanted to join the trial, I couldn’t, as I wouldn’t be able to run my mailserver.

So I decided to investigate alternatives until such time as Plusnet get native IPv6 support working. The default solution in cases like mine, where my ISP only provides me with an IPv4 connection, is to tunnel IPv6 conversations through my IPv4 connection to an ISP who does provide IPv6 connectivity to the Internet. There are two major players in this area for home users, SixXS and Hurricane Electric. Both provide all the basic functionality, as well as each having some individual specialist features. I’m just looking for a basic IPv6 connection and could use either, but in the end Hurricane Electric appeared vastly easier to register with, so I went with them.

My current internet connection is FTTC (fibre to the cabinet) via a BT OpenReach VDSL2 modem and my ISP-supplied (cheap and nasty) combined broadband router, NAT and firewall. This gives me a private 16-bit IPv4 address space, for which my home server (a low-power mini-ITX system that runs 24×7) provides all the network management functions, such as DHCP and DNS.

What I want to add to this is a protocol-41 tunnel from the IPv6 ISP (Hurricane Electric, or HE) back through my NAT & Firewall to my home server. By registering for such a tunnel, HE provide (for free) a personal /64 subnet to me through that tunnel, allowing me to use my home server to provision IPv6 addresses to all the devices on my LAN. However, this connection is neither NAT’ed nor firewalled. The IPv6 addresses are both globally addressable and visible. So I also want my home server to act as a firewall for all IPv6 communications through that tunnel, to protect the devices on my network, without forcing them to all adopt their own firewalls. I was initially concerned that because my home server also acts as an OpenVPN endpoint, and so uses a bridged network configuration, getting the tunnel working might be quite awkward, but it turns out to make very little difference to the solution.

So, to make this work, first you need a static IPv4 address on the internet, and to have ensured that your router will respond to ICMP requests (pings!). Then you can register with Hurricane Electric, and “Create a Regular Tunnel”, which will result in a page of information describing your tunnel. I printed this for future reference (and in case I broke my network while making changes) but you can always access this information from the HE website.

You now need to edit /etc/network/interfaces. Add lines to define the tunnel, as follows, substituting the values from your tunnel description:

# Define 6in4 ipv6 tunnel to Hurricane Electric
auto he-ipv6
iface he-ipv6 inet6 v4tunnel
    address [your "Client IPv6 Address"]
    netmask 64
    endpoint [your "Server IPv4 Address"]
    ttl 255

    up ip -6 route add default dev he-ipv6
    down ip -6 route del default dev he-ipv6

Now add an address from your “Routed /64 IPv6 Prefix” to the appropriate interface – in my case, this is the bridge interface br0, but it’s more likely to be eth0 for you. This defines your server’s static, globally accessible IPv6 address:

# Add an IPv6 address from the routed prefix to the br0 interface.
iface br0 inet6 static
    address [any IPv6 address from the Routed /64 IPv6 Prefix]
    netmask 64

Since I am running Ubuntu 12.04 I now need to install radvd, which will advertise the IPv6 subnet to any systems on our network that want to configure themselves an IPv6 connection. Think of it as a sort of DHCP for IPv6. However, when I move to 14.04 sometime later this year I expect to be able to get rid of radvd and replace it with dnsmasq (which I already use for IPv4 DNS/DHCP), as the latest version of dnsmasq is reported to provide a superset of the radvd capabilities (there’s a sketch of what that might look like a little further on).

sudo apt-get update
sudo apt-get install radvd

Then configure radvd to give out IPv6 addresses from our Routed /64 IPv6 Prefix, by creating the file /etc/radvd.conf, and entering the following into it:

interface [your interface, probably eth0]
{
    AdvSendAdvert on;
    AdvLinkMTU 1480;
    prefix [Your Routed /64 IPv6 Prefix, incl the /64]
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};

Any IPv6-capable devices will now ask for (and be allocated) an IPv6 address in your Routed /64 subnet, based on the MAC address of the interface that is requesting the IPv6 address.
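
While I’m on the subject, I believe the dnsmasq equivalent of the radvd setup above (untested by me so far, so treat it as a sketch based on the dnsmasq documentation) would just need a couple of lines in /etc/dnsmasq.conf to send router advertisements for the routed prefix:

# Send IPv6 Router Advertisements for the routed prefix (SLAAC only, no DHCPv6 leases)
enable-ra
dhcp-range=[Your Routed /64 IPv6 Prefix, without the /64], ra-only
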
Now uncomment the line:

# net.ipv6.conf.all.forwarding=1

from the file /etc/sysctl.conf. This will allow your server to act as a router for IPv6 traffic.
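
To pick up that change without a reboot, you can also set it immediately with sysctl:

sudo sysctl -w net.ipv6.conf.all.forwarding=1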

Now we need to enable and then configure the firewall. I take no credit for this, as much of the information related to the firewall was gleaned from this post. As I run Ubuntu Server I’ll use ufw, the Ubuntu Firewall utility, to configure the underlying iptables firewall. Alternative front-ends to iptables will work equally well, though the actual method of configuration will obviously differ. First I needed to enable the firewall for IPv6 by editing /etc/default/ufw, and ensuring the following options are set correctly:

# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=yes

and

# Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
# if you change this you will most likely want to adjust your rules
DEFAULT_FORWARD_POLICY="ACCEPT"

Now we need to enable the firewall (by default it’s disabled) and add some additional rules to it:

# Enable the firewall
sudo ufw enable
# Allow everything on my LAN to connect to anything
sudo ufw allow from 192.168.0.0/16
# Allow Protocol-41 connections from the Tunnel Endpoint Server (to run the tunnel)
sudo ufw allow from [Your "Server IPv4 Address"] proto ipv6
# Allow BOOTP service on port 67 from radvd
sudo ufw allow proto any to any port 67
# Allow my IPv6 addresses to access services on this server
sudo ufw allow from [Your "Routed /64 IPv6 Prefix" including the "/64"]

I also had to add a few more rules to cope with the external facing services that my home server provides to the Internet (mail, web, ssh, ftp, vpn etc).
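
For what it’s worth, those extra rules are just more of the same sort of thing; for example (your own list of ports and services will obviously differ):

# Examples only - open the ports for the externally-facing services you actually run
sudo ufw allow 22/tcp     # ssh
sudo ufw allow 25/tcp     # smtp
sudo ufw allow 80/tcp     # http
sudo ufw allow 443/tcp    # https
sudo ufw allow 1194/udp   # openvpn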

Finally I want to prevent all but a few specific types of external IPv6 connection to be made inbound into my network. To do this, edit the file /etc/ufw/before6.rules, and add the following lines directly BEFORE the “COMMIT” statement at the end of the file:


# Forward IPv6 packets associated with an established connection
-A ufw6-before-forward -i he-ipv6 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow "No Next Header" to be forwarded or proto=59
# See http://www.ietf.org/rfc/rfc1883.txt (not sure if the length
# is needed as all IPv6 headers should be that size anyway).
-A ufw6-before-forward -p ipv6-nonxt -m length --length 40 -j ACCEPT

# allow MULTICAST to be forwarded
# These 2 need to be open to enable Auto-Discovery.
-A ufw6-before-forward -p icmpv6 -s ff00::/8 -j ACCEPT
-A ufw6-before-forward -p icmpv6 -d ff00::/8 -j ACCEPT

# ok icmp codes to forward
-A ufw6-before-forward -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-request -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-reply -j ACCEPT

# Don't forward any other packets to hosts behind this router.
-A ufw6-before-forward -i he-ipv6 -j ufw6-logging-deny
-A ufw6-before-forward -i he-ipv6 -j DROP

At this point I saved everything and rebooted (though you could just bring up the he-ipv6 interface) and everything came up correctly. I was able to test that I had a valid Global scope IPv6 address associated with (in my case) my br0 interface, and that I could successfully ping6 -c 5 ipv6.google.com via it. I was also able to check that my laptop had automatically found and configured a valid Global scope IPv6 address for its eth0 interface, and that it could ping6 my home server and external IPv6 sites, and that it was possible to browse IPv6-only websites from it.
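
The checks themselves are nothing more sophisticated than:

# Confirm the interface has picked up a global scope IPv6 address
ip -6 addr show dev br0
# Confirm the default IPv6 route goes via the tunnel
ip -6 route show default
# Confirm we can reach the IPv6 internet
ping6 -c 5 ipv6.google.com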

Health update

The last time I mentioned my health was back in November of last year; at that point I was keen to get back to work. I was frustrated to be sitting at home with not much to occupy myself, and feeling somewhat guilty for exceeding the 3-months that I’d originally suggested to my management that I’d need to be away from work.

After some meetings with my management we agreed that I’d start a phased return to work in late November by working approximately half days from home, with no commuting. The main intent was for me to catch up on all the things that had gone on (including a large internal reorganisation) while I was out, get all my admin (and email!) up to date, rather than worry too much about any specific business goals.

And to my surprise I found it incredibly difficult. Initially I struggled to regularly work even half a day, and when I tried to “push on through” I failed. Spectacularly. I’d literally fall asleep at the keyboard. Over the 6 weeks running up to Christmas I did see my stamina improve a little, and I even managed some half days back in my local office. But progress was depressingly slow, and when I first tried to commute up to London for a meeting, I felt so unwell by the time I’d got there that I barely had time to attend the meeting before I had to leave for home again.

A fortnight’s break at Christmas was a welcome relief, during which I had another consultation with my surgeon, and brought up the issue of my tiredness and ongoing kidney pain. The result was a set of blood and urine tests.

The blood tests revealed little that was wrong, or at least unexpected; my kidney function appeared to be fine, but I was still showing the signs of a low-level background infection. Since my perineal wound was (and is) still open, this was only to be expected. But fundamentally, I was in as good health as anyone could expect – the suggestion was that I just needed more time to get over my last operation, and that all the other treatments that I’ve been through over the last 4 years probably weren’t helping.

The urine test, however, showed another drug-resistant UTI. More antibiotics put paid to that, but I was advised to see my urologist again too. He suspects that my problems with kidney pain and repeated UTIs are ultimately due to a problem called renal (or vesicoureteral) reflux. This is a condition normally most common in young children, but in my case it is almost certainly caused by the process of reimplanting my left ureter; it no longer acts as a one-way valve, allowing urine to flow back up into my kidney.

Of itself, this causes nothing more than mild discomfort. But in combination with UTIs, this can cause significant pain (as I discovered) and potentially further permanent damage to my kidney, which is most definitely not desirable. So for the next six months I’ve been prescribed a prophylactic dose of antibiotic (Trimethoprim) to keep the UTIs at bay.

And since returning to work after the Christmas break, I’ve noticed that my stamina has noticeably improved. I’m still a long way from what I would consider normal, but I’m managing to work much closer to full days now, and I’m coping with some commuting too. I can see real progress.

Of course, in retrospect the lesson to be learned is that I probably tried to come back to work too early. I suspect that if I’d stayed off work for another month or so my recovery would probably have been faster and easier. But I’d have been climbing the walls!