Fixing a bad block in an ext4 partition on an advanced format SATA drive

I recently got an email from my home server, warning me that it had detected an error on one of its hard drives. This was automatically generated by smartd, part of smartmontools, which monitors the health of the disk storage attached to my server by running a series of regular tests without my intervention.

To find out exactly what had been found, I used the smartctl command to look at the logged results of the last few self-tests. As you can see, the daily Short Offline tests were all passing successfully, but the long-running weekly Extended Offline tests were failing at the same LBA on each run, namely LBA 1318075984:


# smartctl -l xselftest /dev/sda
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-24-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Extended Self-test Log Version: 1 (1 sectors)
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 9559 -
# 2 Short offline Completed without error 00% 9535 -
# 3 Short offline Completed without error 00% 9511 -
# 4 Extended offline Completed: read failure 70% 9490 1318075984
# 5 Short offline Completed without error 00% 9487 -
# 6 Short offline Completed without error 00% 9463 -
# 7 Short offline Completed without error 00% 9440 -
# 8 Short offline Completed without error 00% 9416 -
# 9 Short offline Completed without error 00% 9392 -
#10 Short offline Completed without error 00% 9368 -
#11 Short offline Completed without error 00% 9344 -
#12 Extended offline Completed: read failure 70% 9322 1318075984
#13 Short offline Completed without error 00% 9320 -
#14 Short offline Completed without error 00% 9296 -
#15 Extended offline Completed without error 00% 9204 -
#16 Short offline Completed without error 00% 9198 -
#17 Short offline Completed without error 00% 9176 -
#18 Short offline Completed without error 00% 9152 -

The fact that this is a “read failure” probably means that this is a medium error, which can usually be resolved by writing fresh data to the block. That will either succeed (in the case of a transient problem), or cause the drive to reallocate a spare block to replace the now-failed one. The problem, of course, is that the block might be part of some important piece of data. Fortunately I have backups, but I’d prefer to restore only the damaged file rather than the whole disk. The rest of this post discusses how to achieve that.

Firstly we need to look at the disk layout to determine what partition the affected block falls within:


# gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.8

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 453C41A1-848D-45CA-AC5C-FC3FE68E8280
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2157 sectors (1.1 MiB)

Number Start (sector) End (sector) Size Code Name
1 2048 1050623 512.0 MiB EF00
2 1050624 4956159 1.9 GiB 8300
3 4956160 44017663 18.6 GiB 8300
4 44017664 52219903 3.9 GiB 8200
5 52219904 61984767 4.7 GiB 8300
6 61984768 940890111 419.1 GiB 8300
7 940890112 3907028991 1.4 TiB 8300

We can see that the disk uses logical 512-byte sectors, and that the failing sector is in partition /dev/sda7. We also want to know (for later) the physical block size for this disk, which can be found with:


# cat /sys/block/sda/queue/physical_block_size
4096

Since this is larger than the LBA size (of 512 bytes), it is actually the physical block containing LBA 1318075984 that is failing, and therefore so are all the other LBAs in that physical block; in this case, that means 8 LBAs. Because of the way the SMART self-tests work, it’s likely that 1318075984 and the following 7 are the failing ones, but we can test that later.

Next we need to understand what filesystem that partition has been formatted with. I happen to know that all my partitions are formatted as ext4 on this system, but you can find this information in the /etc/fstab configuration file.

The rest of this post is only directly relevant to ext4/3/2 filesystems. Feel free to use the general process, but please look elsewhere for detailed instructions for BTRFS, XFS, etc etc.

The next thing to do is to determine the offset of the failing LBA into the sda7 partition: 1318075984 - 940890112, which is 377185872 blocks of 512 bytes. We now need to know how many filesystem blocks that is, so let’s find out what block size that partition is using:


# tune2fs -l /dev/sda7 | grep Block
Block count: 370767360
Block size: 4096
Blocks per group: 32768

So, each filesystem block is 4096 bytes. To determine the offset of the failing LBA in the filesystem, we divide the LBA offset into the partition by 8 (4096/512), giving us a filesystem block offset of 47148234. Since this divides exactly, we know it happens to be the first logical LBA in that filesystem block that is causing the error (as we expected).
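
If you want to avoid doing that arithmetic by hand, a minimal shell sketch (using the figures from the smartctl, gdisk and tune2fs output above) looks like this:

# Figures taken from the output above
FAILING_LBA=1318075984     # LBA reported by the Extended Offline self-test
PART_START=940890112       # start sector of /dev/sda7 (from gdisk)
LBA_SIZE=512               # logical sector size (from gdisk)
FS_BLOCK_SIZE=4096         # filesystem block size (from tune2fs)

# Offset of the failing LBA within the partition, in 512-byte sectors
PART_OFFSET=$(( FAILING_LBA - PART_START ))

# Convert to filesystem blocks (4096 / 512 = 8 sectors per block)
FS_BLOCK=$(( PART_OFFSET / (FS_BLOCK_SIZE / LBA_SIZE) ))

echo "Filesystem block: $FS_BLOCK"    # prints 47148234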

Next we want to know whether that block is in use, or part of the filesystem’s free space:


# debugfs /dev/sda7
debugfs 1.42.9 (4-Feb-2014)
debugfs: testb 47148234
Block 47148234 marked in use

So we know that filesystem block is part of a file – unfortunately. The question is which one?


debugfs: icheck 47148234
Block Inode number
47148234 123993
debugfs: ncheck 123993
Inode Pathname
123993 /media/homevideo/AA-20140806.mp4

Since the filesystem block size and the physical disk block size are the same, I could just assume that that’s the only block affected. But that’s probably not very wise, so let’s check the physical blocks (on the disk) before and after the one we know is failing, by asking for the failing LBA plus and minus 8 LBAs:


# # The reported failing LBA:
# dd if=/dev/sda of=sector.bytes skip=1318075984 bs=512 count=1
dd: error reading ‘/dev/sda’: Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 2.85748 s, 0.0 kB/s
# # The reported failing LBA - 8:
# dd if=/dev/sda of=sector.bytes skip=1318075976 bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0246482 s, 20.8 kB/s
# # The reported failing LBA + 8:
# dd if=/dev/sda of=sector.bytes skip=1318075992 bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0246482 s, 20.8 kB/s

So, in this case we can see that the physical blocks before and after the failing disk block are both currently readable, meaning that we only need to deal with the single failing block.

Since I have a backup of the file that contains this failing block, we’ll complete the resolution of the problem by overwriting the failing physical block with zeros (definitely corrupting the containing file), triggering the drive’s block-level reallocation routines, and then deleting the file, prior to recovering it from backup:


# dd if=/dev/zero of=/dev/sda seek=1318075984 bs=512 count=8
8+0 records in
8+0 records out
4096 bytes (4.1 kB) copied, 0.000756974 s, 5.4 MB/s
# rm /media/homevideo/AA-20140806.mp4
# dd if=/dev/sda of=sector.bytes skip=1318075984 bs=512 count=8
8+0 records in
8+0 records out
4096 bytes (4.1 kB) copied, 0.000857198 s, 4.8 MB/s

At this point I could run an immediate Extended Offline self-test, but since I can now successfully read the originally-failing block I’m confident the problem is solved, and I’ll just wait for the normally scheduled self-tests to be run by smartd again.
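
For anyone who does want to force the check straight away, kicking off another Extended test by hand is just a couple of standard smartctl invocations:

# Start an Extended (long) Offline self-test in the background
smartctl -t long /dev/sda
# ...then, once the drive reports it has finished, review the log again
smartctl -l xselftest /dev/sda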

Update: I’ve experienced a situation where overwriting the failing physical sector with zeros using dd failed to trigger the drive’s automatic block reallocation routines. However, in that case I was able to resolve the situation by using hdparm instead. Use:

hdparm --read-sector 1318075984 /dev/sda

to (try to) read and display a block of data, and

hdparm --write-sector 1318075984 --yes-i-know-what-i-am-doing /dev/sda

to overwrite the block with zeros. Both these commands use the drive’s logical block size (normally 512 bytes), not the 4K physical sector size.

Plusnet IPv6 still delayed, so let’s go spelunking in a Hurricane Electric tunnel

When I last changed ISP (last year, when Sky bought my old ISP, BE Unlimited) one of my requirements was for my new ISP to have a roadmap for making IPv6 available. I’ve been waiting (impatiently) for my new ISP, Plusnet, to deliver on their initial “coming soon” statements. Sadly, like almost all ISPs, Plusnet are not moving quickly with IPv6, and are still investigating alternatives like carrier grade NAT to extend the life of IPv4. I can sympathise with this position – IPv6 has limited device support, most of their customers are not ready to adopt it, and trying to provide support for the necessary dual-stack environment would not be easy. But, the problem of IPv4 address exhaustion is not going away.

So at the end of last year they started their second controlled trial of IPv6. I was keen to join, but the conditions were onerous: I would need a second account, I would need to provide my own IPv6-capable router, I couldn’t keep my existing IPv4 static IP address, I couldn’t have reverse DNS on the line, and I had to commit to providing feedback on my “normal” workload. So, much as I wanted to join the trial, I couldn’t, as I wouldn’t be able to run my mailserver.

So I decided to investigate alternatives until such time as Plusnet get native IPv6 support working. The default solution in cases like mine, where my ISP only provides me with an IPv4 connection, is to tunnel IPv6 conversations through my IPv4 connection to an ISP who does provide IPv6 connectivity to the Internet. There are two major players in this area for home users, SixXS and Hurricane Electric. Both provide all the basic functionality, as well as each having some individual specialist features. I’m just looking for a basic IPv6 connection and could use either, but in the end Hurricane Electric appeared vastly easier to register with, so I went with them.

My current internet connection is FTTC (fibre to the cabinet) via a BT OpenReach VDSL2 modem and my ISP-supplied (cheap and nasty) combined broadband router, NAT and firewall. This gives me a private /16 IPv4 address space, for which my home server (a low-power mini-ITX system that runs 24×7) provides all the network management functions, such as DHCP and DNS.

What I want to add to this is a protocol-41 tunnel from the IPv6 ISP (Hurricane Electric, or HE) back through my NAT & Firewall to my home server. By registering for such a tunnel, HE provide (for free) a personal /64 subnet to me through that tunnel, allowing me to use my home server to provision IPv6 addresses to all the devices on my LAN. However, this connection is neither NAT’ed nor firewalled. The IPv6 addresses are both globally addressable and visible. So I also want my home server to act as a firewall for all IPv6 communications through that tunnel, to protect the devices on my network, without forcing them to all adopt their own firewalls. I was initially concerned that because my home server also acts as an OpenVPN endpoint, and so uses a bridged network configuration, getting the tunnel working might be quite awkward, but it turns out to make very little difference to the solution.

So, to make this work, first you need a static IPv4 address on the internet, and to have ensured that your router will respond to ICMP requests (pings!). Then you can register with Hurricane Electric, and “Create a Regular Tunnel”, which will result in a page of information describing your tunnel. I printed this for future reference (and in case I broke my network while making changes) but you can always access this information from the HE website.

You now need to edit /etc/network/interfaces. Add lines to define the tunnel, as follows, substituting the values from your tunnel description:

# Define 6in4 ipv6 tunnel to Hurricane Electric
auto he-ipv6
iface he-ipv6 inet6 v4tunnel
    address [your "Client IPv6 Address"]
    netmask 64
    endpoint [your "Server IPv4 Address"]
    ttl 255

    up ip -6 route add default dev he-ipv6
    down ip -6 route del default dev he-ipv6

Now add an address from your “Routed /64 IPv6 Prefix” to the appropriate interface – in my case, this is the bridge interface br0, but it’s more likely to be eth0 for you. This defines your server’s static, globally accessible IPv6 address:

# Add an IPv6 address from the routed prefix to the br0 interface.
iface br0 inet6 static
    address [any IPv6 address from the Routed /64 IPv6 Prefix]
    netmask 64

Since I am running Ubuntu 12.04 I now need to install radvd, which will advertise the IPv6 subnet to any systems on our network that want to configure themselves an IPv6 connection. Think of it as a sort of DHCP for IPv6. However, when I move to 14.04 sometime later this year I expect to be able to get rid of radvd and replace it with dnsmasq (which I already use for IPv4 DNS/DHCP), as the latest version of dnsmasq is reported to provide a superset of the radvd capabilities.

sudo apt-get update
sudo apt-get install radvd

Then configure radvd to give out IPv6 addresses from our Routed /64 IPv6 Prefix, by creating the file /etc/radvd.conf, and entering the following into it:

interface [your interface, probably eth0]
{
    AdvSendAdvert on;
    AdvLinkMTU 1480;
    prefix [Your Routed /64 IPv6 Prefix, incl the /64]
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};

Any IPv6-capable devices will now ask for (and be allocated) an IPv6 address in your Routed /64 subnet, based on the MAC address of the interface that is requesting the IPv6 address.
Now uncomment the line:

# net.ipv6.conf.all.forwarding=1

from the file /etc/sysctl.conf. This will allow your server to act as a router for IPv6 traffic.
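
To apply that change without waiting for a reboot, you can (I believe) just reload the sysctl settings and check the result:

# Re-read /etc/sysctl.conf so the forwarding setting takes effect immediately
sudo sysctl -p
# Should now report: net.ipv6.conf.all.forwarding = 1
sysctl net.ipv6.conf.all.forwarding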

Now we need to enable and then configure the firewall. I take no credit for this, as much of the information related to the firewall was gleaned from this post. As I run Ubuntu Server I’ll use ufw, the Ubuntu firewall utility, to configure the underlying iptables firewall. Alternative front-ends to iptables will work equally well, though the actual method of configuration will obviously differ. First I needed to enable the firewall for IPv6 by editing /etc/default/ufw, and ensuring the following options are set correctly:

# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take affect.
IPV6=yes

and

# Set the default forward policy to ACCEPT, DROP or REJECT. Please note that
# if you change this you will most likely want to adjust your rules
DEFAULT_FORWARD_POLICY="ACCEPT"

Now we need to enable the firewall (by default it’s disabled) and add some additional rules to it:

# Enable the firewall
sudo ufw enable
# Allow everything on my LAN to connect to anything
sudo ufw allow from 192.168.0.0/16
# Allow Protocol-41 connections from the Tunnel Endpoint Server (to run the tunnel)
sudo ufw allow from [Your "Server IPv4 Address"] proto ipv6
# Allow BOOTP service on port 67 from radvd
sudo ufw allow proto any to any port 67
# Allow my IPv6 addresses to access services on this server
sudo ufw allow from [Your "Routed /64 IPv6 Prefix" including the "/64"]

I also had to add a few more rules to cope with the external facing services that my home server provides to the Internet (mail, web, ssh, ftp, vpn etc).

Finally I want to prevent all but a few specific types of external IPv6 connection from being made inbound into my network. To do this, edit the file /etc/ufw/before6.rules, and add the following lines directly BEFORE the “COMMIT” statement at the end of the file:


# Forward IPv6 packets associated with an established connection
-A ufw6-before-forward -i he-ipv6 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow "No Next Header" to be forwarded or proto=59
# See http://www.ietf.org/rfc/rfc1883.txt (not sure if the length
# is needed as all IPv6 headers should be that size anyway).
-A ufw6-before-forward -p ipv6-nonxt -m length --length 40 -j ACCEPT

# allow MULTICAST to be forwarded
# These 2 need to be open to enable Auto-Discovery.
-A ufw6-before-forward -p icmpv6 -s ff00::/8 -j ACCEPT
-A ufw6-before-forward -p icmpv6 -d ff00::/8 -j ACCEPT

# ok icmp codes to forward
-A ufw6-before-forward -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-request -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-reply -j ACCEPT

# Don't forward any other packets to hosts behind this router.
-A ufw6-before-forward -i he-ipv6 -j ufw6-logging-deny
-A ufw6-before-forward -i he-ipv6 -j DROP

At this point I saved everything and rebooted (though you could just bring up the he-ipv6 interface) and everything came up correctly. I was able to test that I had a valid Global scope IPv6 address associated with (in my case) my br0 interface, and that I could successfully ping6 -c 5 ipv6.google.com via it. I was also able to check that my laptop had automatically found and configured a valid Global scope IPv6 address for its eth0 interface, that it could ping6 my home server and external IPv6 sites, and that it was possible to browse IPv6-only websites from it.
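
For reference, those checks on the server amount to little more than something like the following (the interface name matches the tunnel definition above):

# Confirm the tunnel interface has its address and that the default route uses it
ip -6 addr show dev he-ipv6
ip -6 route show
# Confirm end-to-end IPv6 connectivity
ping6 -c 5 ipv6.google.com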

Ditching the spinning rust

For some time now I’ve been thinking of switching my laptop storage over to an SSD. I like the idea of the massively improved performance, the slightly reduced power consumption, and the ability to better withstand the abuse of commuting. However, I don’t like the limited write cycles, or (since I need a reasonable size drive to hold all the data I’ve accumulated over the years) the massive price-premium over traditional drives. So I’ve been playing a waiting game over the last couple of years, and watching the technology develop.

But as the January sales started, I noticed that the prices of 256GB SSDs had dipped to the point where I’m happy to “invest”. So I’ve picked up a Samsung 840 EVO 250GB SSD for my X201 Thinkpad; it’s essentially a mid-range SSD at a budget price-point, and should transform my laptop’s performance.

SSDs are very different beasts from traditional hard drives, and from reading around the Internet there appear to be several things that I should take into account if I want to obtain, and then maintain, the best performance from one. Predominant amongst these are ensuring the correct alignment of partitions on the SSD, ensuring proper support for the TRIM command, and selecting the best file system for my needs.

But this laptop is supplied to me by my employer, and must have full system encryption implemented on it. I can achieve this using a combination of LUKS and LVM, but it complicates the implementation of things like Trim support. The disk is divided into a minimal unencrypted boot partition with the majority of the space turned into a LUKS-encrypted block device. That is then used to create an LVM logical volume, from which are allocated the partitions for the actual Linux install.

Clearly, once I started looking at partition alignment and different filesystem types, a reinstall becomes the simplest option, and the need for TRIM support requires fairly recent versions of LUKS and LVM, driving me to a more recent distribution than my current Mint 14.1, which is getting rather old now. This gives me the opportunity to upgrade and fine-tune my install to better suit the new SSD. I did consider moving to the latest Mint 16, but my experiences with Mint have been quite mixed: I like their desktop environment very much, but am much less pleased with other aspects of the distribution. So I think I’ll switch back to the latest Ubuntu, but using the Cinnamon desktop environment from Mint; the best of all worlds for me.

Partition alignment

This article describes why there is a problem with modern drives that use 4K sectors internally, but present themselves as having 512-byte sectors externally. The problem is actually magnified with SSDs, where it can cause significant issues with excessive wearing of the cells. Worse still, modern SSDs like my Samsung write in 4K pages, but erase in 1MB blocks of 256 pages. This means that partitions need to be aligned not to “just” 4K boundaries, but to 1MB boundaries.

Fortunately this is trivial in a modern Linux distribution: we partition the target drive with a GPT scheme using gdisk; on a new blank disk it will automatically align the partitions to 2048-sector, or 1MB, boundaries. On disks with existing partitions this can be enabled with the “l 2048” command in the advanced sub-menu, which will force alignment of newly created partitions on 1MB boundaries.
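
If you want to double-check the alignment afterwards, something like the following should do; parted’s align-check is one option, or you can simply confirm that every start sector is a multiple of 2048:

# Print the partition table; start sectors should all be multiples of 2048
sudo gdisk -l /dev/sda
# Or ask parted to verify optimal alignment of, say, the first partition
sudo parted /dev/sda align-check optimal 1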

Trim support

In the context of SSDs, TRIM is an ATA command that allows the operating system to tell the SSD which sectors are no longer in use, and so can be cleared, ready for rapid reuse. Wikipedia has some good information on it here. The key in my case is going to be enabling the filesystem to issue TRIM commands, and then enabling the LVM and LUKS containers that hold the filesystem to pass the TRIM commands on through to the actual SSD. There is more information on how to achieve this here.

However, there are significant questions over whether it is best to enable TRIM on the fstab options, getting the filesystem to issue TRIM commands automatically as it deletes sectors, or periodically running the user space command fstrim using something like a cron job or an init script. Both approaches still have scenarios that could result in significant performance degradation. At the moment I’m tending towards using fstrim in some fashion, but I need to do more research before making a final decision on this.
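
For reference, the two approaches boil down to something like this; treat both as illustrative sketches rather than a recommendation, since I haven’t settled on one yet, and the device name is a placeholder:

# Option 1: continuous TRIM, via the discard mount option in /etc/fstab
# /dev/mapper/vg-root  /  ext4  defaults,discard  0  1

# Option 2: periodic TRIM, run by hand or from a weekly cron job
sudo fstrim -v /
sudo fstrim -v /home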

File system choice

Fundamentally I need a filesystem that supports the TRIM command – not all do. But beyond that I would expect any filesystem to perform better on an SSD than it does on a hard drive, which is good.

However, as you would expect, different filesystems have different strengths and weaknesses, so by knowing my typical usage patterns I can select the “best” of the available filesystems for my system. And interestingly, according to these benchmarks, the LUKS/LVM containers that I will be forced to use can have a much more significant effect on some filesystems (particularly the almost-default ext4) than on others.

So based on my reading of these benchmarks and the type of use that I typically make of my machine, my current thought is to run an Ubuntu 13.10 install on BTRFS filesystems with lzo compression for both my root and home partitions, both hosted in a single LUKS/LVM container. My boot partition will be a totally separate ext3 partition.
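
By way of illustration only (I haven’t actually built this yet, and the device names and UUID below are placeholders), that layout would end up looking something like this in /etc/fstab, with the lzo compression simply being a mount option on the BTRFS filesystems:

# Separate, unencrypted ext3 boot partition
UUID=xxxxxxxx-xxxx            /boot  ext3   defaults               0  2
# Root and home as BTRFS inside the LUKS/LVM container, with lzo compression
/dev/mapper/vg-root           /      btrfs  defaults,compress=lzo  0  0
/dev/mapper/vg-home           /home  btrfs  defaults,compress=lzo  0  0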

The slight concern with that choice is that BTRFS is still considered “beta” code, and still under heavy development. It is not currently the default on any major distribution, but it is offered as an installation choice on almost all. The advanced management capabilities such as on-the-fly compression, de-duplication, snapshots etc make it very attractive though, and ultimately unless people like me do adopt it, it will never become the default filesystem.

I’ll be implementing a robust backup plan though!

Simple install of OpenVPN with Ubuntu server and Mint client

Since I’ll have my laptop and phone with me while I’m in hospital, I’m expecting to be able to keep in touch with family and friends. However, it would also be useful to have access to my home network from hospital. I can already SSH into a command line on my home server, but I’ve been meaning to get a proper VPN set up for some time now, so this seemed like the excuse I needed to actually make it happen.

In my case, I have a home network with a NAT’d router that has a static IP address, and proper DNS & reverse DNS entries associated with it. I then have an Ubuntu Server running 24×7 behind it, providing various services to my home network. I simply want my Mint laptop to be able to VPN into my home network, securely, and on demand.

It turns out to be really straightforward, though there were a few quirks to overcome! Fundamentally I followed the instructions in this PDF, found on the Madison Linux User Group website, to set up the server side of the VPN. However, there are a few problems with it that need correcting:

  1. Under “Installing the Network Bridge and Configure Network Settings” on page 3, be aware that if you are using dnsmasq to manage your DNS and DHCP leases, you will need to change its configuration file, /etc/dnsmasq.conf, to listen on br0 rather than eth0.
  2. Under “Create the Server Keys and Certificates”, point (3), note that there are two KEY_EMAIL variables where there should be only one, and the variables KEY_CN, KEY_NAME and KEY_OU should ideally be filled in for completeness.
  3. Under “Create the Server Keys and Certificates”, before starting on point (4), you (a) need to edit the file /etc/openvpn/easy-rsa/whichopensslcnf and remove all occurrences of the string “[[:alnum:]]”, and (b) need to make sure that you can write to the file “~/.rnd”; by default it isn’t writeable except by root, so you need to issue the command “sudo chown youruserid ~/.rnd”.
  4. Under “Generate the Client Keys and Certificates” on page 4, when you come to carry out the instructions in point (1), to create the client keys, you must set a unique KEY_CN by changing the last command to something like “KEY_CN=someuniqueclientcn ./pkitool client1”.
  5. Under “Create the OpenVPN Server Scripts”, on page 5, the code to go into the up.sh and down.sh scripts is wrong: the brctl command is located in /sbin, not /usr/sbin. Change both scripts accordingly (the sketch after this list shows roughly what they should end up looking like).

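By way of illustration, here is a sketch of the corrected scripts with the /sbin paths. It follows the standard OpenVPN bridging example, where OpenVPN passes the bridge name, tap device and MTU as the first three arguments, so do check it against the PDF rather than trusting my memory.

up.sh:

#!/bin/sh
# Bring the tap device up and attach it to the bridge
BR=$1
DEV=$2
MTU=$3
/sbin/ifconfig "$DEV" mtu "$MTU" promisc up
/sbin/brctl addif "$BR" "$DEV"

down.sh:

#!/bin/sh
# Detach the tap device from the bridge and shut it down
BR=$1
DEV=$2
/sbin/brctl delif "$BR" "$DEV"
/sbin/ifconfig "$DEV" down
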
For the client side of the VPN, I followed the instructions in this PDF, which worked perfectly.

Once everything has been rebooted and is working properly, it is possible to VPN into my home network from the internet. All traffic from my laptop then flows through my home network via a secure VPN. Essentially my laptop becomes part of my home network for the time the VPN is running.

Which is exactly what I wanted.

You don’t have to know the answer to everything – just how to find it

Since I work at IBM, I get to use the company’s own email system, which is based on what used to be called Lotus Notes. It’s recently had some extra “social media awareness” added to it, been rebranded “IBM Notes”, and been repositioned as a desktop client for social business. Which is all very modern and hip, especially for a product that has its roots back in the early 1990s. However, most organisations (including IBM) tend to use it solely for email – for which it is the proverbial sledgehammer.

But having been using it for some 18 years now, I’m fairly comfortable with it. The only issue I have is that because I’ve been using it for so long, my mail archives contain a huge amount of useful information from old projects that I’ve worked on. I also have other information related to those projects stored elsewhere on my laptop hard drive, and pulling all that information together and searching it coherently isn’t a trivial problem. However, in recent years desktop search engines have begun to provide a really nice solution to this.

The problem here is that Lotus Notes is based on a series of binary databases which form the backbone of its ability to efficiently replicate documents between clients and servers. Desktop search engines generally don’t understand those databases, and hence do not work with Lotus Notes. So searching my laptop becomes a somewhat tedious process, involving the Lotus Notes client search feature, and manually correlating with a desktop search engine of some type. It works, but it’s just not as good as it could be.

What I really want, what I really really want (as the Spice Girls would sing) is a desktop search engine that can understand and integrate my Lotus Notes content. And that’s what this post is all about.

Since I run Linux I have a choice of open source desktop search engines such as Tracker or Beagle (now deceased). But my current preference is for Recoll, which I find to be very usable. And then, last year, I discovered that a colleague had written and published a filter, to enable Recoll to index documents inside Lotus Notes databases. So I had to get it working on my system!

Unfortunately, it turned out that my early attempts to get the filter working on my Ubuntu (now Mint) system completely failed. He was a RedHat user, and there are quite a lot of packaging differences between a Debianesque Lotus Notes install and a RedHat one, especially inside IBM where we use our own internal versions of Java too. So the rest of this post is essentially a description of how I hacked his elegant code to pieces to make it work on my system. It’s particularly relevant to members of the IBM community who use the IBM “OCDC” extensions to Linux as their production workstation. I’m going to structure it into a description of how Recoll and the Notes filter work, then a description of how I chose to implement the indexing (to minimise wasteful re-indexing), and hence what files go where, and some links to allow people to download the various files that I know work on my system.

At a very simplistic level, Recoll works by scanning your computer filesystem, and for each file it encounters, it works out what it is (plain text, HTML, Microsoft Word, etc) and then either indexes it (if it’s a format that it natively understands) using the Xapian framework, or passing it to a helper application or filter which returns a version of the file in a format that Recoll does understand, and so can index. In the case of container formats like zip files, Recoll extracts all the contents, and processes each of those extracted files in turn. This means Recoll can process documents to an arbitrary level of “nesting”, comfortably indexing a Word file inside a zip file inside a RAR archive for example. Once all your files are indexed, you can search the index with arbitrary queries. If you get any hits, Recoll will help to invoke an appropriate application to allow you to view the original file. The helper applications are already existing external applications like unRTF or PDFtotext that carry out conversions from formats that Recoll will commonly encounter, while filters are Python applications that enable Recoll to cope with specialist formats, such as Lotus Notes databases.

So, the way the Lotus Notes filter works, is that:

  1. Recoll encounters a Lotus Notes database, something.nsf
  2. To work out what to do with it, Recoll looks up the file type in its mimemap configuration file, and determines what “mimetype” to associate with that file
  3. It then looks up what action to take for that mimetype in the mimeconf configuration file, which tells it to invoke the rcllnotes filter
  4. It then invokes rcllnotes, passing it the URI to something.nsf
  5. rcllnotes then extracts all the documents (and their attachments) from the Notes database, passing them back to Recoll for indexing
  6. It does this by invoking a Java application, rcllnotes.jar, that must be run under the same JVM as Lotus Notes
  7. This Java application uses Lotus Notes’ Java APIs to access each document in the database in turn
  8. These are then either flattened into HTML output (using an XSLT stylesheet) which Recoll can consume directly, or in the case of attachments, output as a document needing further processing; Recoll can tell which is which from the mimetype of the output. Included in the flattened HTML are a couple of metadata tags, one marking this HTML document as descended from a Lotus Notes database, and the other containing the complete Lotus Notes URI for the original document. This latter information can be used by the Lotus Notes client to directly access the document – which is crucial later in the search process
  9. Recoll then indexes the documents it receives, saving enough information to allow Recoll to use rcllnotes again to retrieve just the relevant document from within the Notes database.
  10. So, when a search results in a Notes document, Recoll can use the saved information (the URI of the database and the Notes UNID of the document?) and the rcllnotes filter to obtain either the flattened HTML version of the document, or a copy of an attachment. Recoll then uses the document’s mimetype to determine how to display it. In the case of an attachment, Recoll simply opens it with the appropriate application. In the case of the HTML, Recoll combines the expected “text/html” with the information in the metadata tag that describes this HTML as being derived from a Lotus Notes document. This produces a mimetype of “text/html|notesdoc”, which it then looks up in the mimeview configuration file, which causes it to use the rclOpenNotesClient script. That reads the Notes URI from the other HTML metadata field in the flattened HTML file, and then invokes the Lotus Notes client with it, causing the actual document of interest to be opened in Lotus Notes.

One of the problems with using Recoll with Lotus Notes databases is that it’s not possible to index just the few changed documents in a Notes database; you have to reindex an entire database worth of documents. Unfortunately there are usually a lot of documents in a Notes database, and the process of indexing a single document actually seems relatively slow, so it’s important to minimise how often you need to reindex a Notes database.

To achieve this, I make use of a feature of Recoll where it is possible to search multiple indexes in parallel. This allows me to partition my system into different types of data, creating separate indexes for each, but then searching against them all. To help with this, I made the decision to index only Notes databases associated with my email (either my current email database, or its archives) and a well-known (to me) subset of my filesystem data. Since my email archives are partitioned into separate databases, each holding about two years of content, I can easily partition the data I need to index into three categories: static Lotus Notes databases that never change (the old archives), dynamic Lotus Notes databases that change more frequently (my email database and its current archive), and other selected filesystem data.

I then create three separate indexes, one for each of those categories:

  1. The static Notes databases amount to about 8GB and take a little under 4 hours to index on my X201 laptop (when I first wrote this they were about 5.5GB and took about 2.5 hours); however, since this is truly static, I only need to index it once.
  2. The dynamic Notes databases amount to about 1.5GB and take about 40 minutes to index (originally about 4GB and 2 hours); I reindex this once a week. This is a bigger job than it should be because I’ve been remiss and need to carve a big chunk of my current archive off into another “old” static one.
  3. Finally, the filesystem data runs to about another 20GB or so, and I expect this to change most frequently, but be the least expensive to reindex. Consequently I use “real time indexing” on this index; that means the whole 20GB is indexed once, and then inotify is used to detect whenever a file has changed and trigger a reindex of just that file, immediately. That process runs in the background and is generally unnoticeable.

So, how to duplicate this setup on your system?

First you will need to install Recoll; use sudo apt-get install recoll to achieve that. Then you need to add the Lotus Notes filter to Recoll. Normally you’d download the filter from here, and follow the instructions in the README. However, as I noted at the beginning, it won’t work “out of the box” under IBM’s OCDC environment. So instead, you can download the version that I have modified.

Unpack that into a temporary directory. Now copy the files in RecollNotes/Filter (rcllnotes, rcllnotes.jar and rclOpenNotesClient) to the Recoll filter directory (normally /usr/share/recoll/filters), and ensure that they are executable (sudo chmod +x rcllnotes etc). You should also copy a Lotus Notes graphic into the Recoll images directory where it can be used in the search results; sudo cp /opt/ibm/lotus/notes/notes_48.png /usr/share/recoll/images/lotus-notes.png.

Now copy the main configuration file for the Notes filter to your home directory. It’s called RecollNotes/Configurations/.rcllnotes and once you have copied it to your home directory, you need to edit it, and add your Lotus Notes password in the appropriate line. Note that this is by default a “hidden” file, so won’t show up in Nautilus or normal “ls” commands. Use “ls -a” if necessary!

Next you need to set up and configure the three actual indexes. The installation of Recoll should have created a ~/.recoll/ configuration directory. Now create two more, such as ~/.recoll-static/ and ~/.recoll-dynamic/. Appropriately copy the configuration files from the subfolders of RecollNotes/Configurations/, into your three Recoll configuration folders. Now edit the recoll.conf files in ~/.recoll-static/ and ~/.recoll-dynamic/, updating the names of the Notes Databases that you wish to index. Now manually index these Notes databases by running the commands recollindex -c ~/.recoll-static -z and recollindex -c ~/.recoll-dynamic -z.

At this point it should be possible to start recoll against either of those indexes (recoll -c ~/.recoll-static for example) and run searches within databases in that index. I leave it as an exercise for the interested reader to work out how to automate the reindexing with CRON jobs.
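
For what it’s worth, a hypothetical user crontab entry (added with crontab -e) for the weekly reindex of the dynamic databases might look something like this; adjust the schedule and configuration directory to taste:

# Reindex the dynamic Notes databases at 3am every Sunday
0 3 * * 0  recollindex -c $HOME/.recoll-dynamic -z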

Next we wish to set up the indexing for the ~/.recoll/ configuration. This is the filesystem data that will run with a real-time indexer. So start by opening up the Recoll GUI. You will be asked if you want to start indexing immediately. I suggest that you select real-time indexing at startup, and let it start the indexing. Then immediately STOP the indexing process from the File menu. Now copy the file RecollNotes/recoll_index_on_ac to your personal scripts directory (mine is ~/.scripts), ensure it is executable, and then edit the file ~/.config/autostart/recollindex.desktop, changing the line that says Exec=recollindex -w 60 -m to Exec=~/.scripts/recoll_index_on_ac (or as appropriate). This script will in future be started instead of the normal indexer, and will ensure that indexing only runs when your laptop is on AC power, hopefully increasing your battery life. You can now start it manually with the command nohup ~/.scripts/recoll_index_on_ac &, but in future it will be started automatically whenever you login.

While your filesystem index is building, you can configure Recoll to use all three indexes at once. Start the Recoll GUI, and navigate to Preferences -> External Index dialog. Select “Add Index”, and navigate into the ~/.recoll-static/ and ~/.recoll-dynamic/ directories, selecting the xapiandb directory in each. Make sure each is selected. Searches done from the GUI will now use the default index (from the filesystem data) and the additional indexes from the two Lotus Notes configurations.

There is one final configuration worth carrying out, and that is to customise the presentation of the search results. If you look in the file in RecollNotes/reoll-result-list-customisation you will find some instructions to make the search results easier to read and use. Feel free to adopt them or not, as you wish.

Update: To answer the first question (by text message no less!), my indexes use up about 2.5GB of space, so no, it’s not insignificant, but I figure disk really is cheap these days.

Update: Corrected command to copy Notes icon to Recoll image directory to match configuration files, and a couple of the pathnames where I had introduced some typos.

Update: Added the main .rcllnotes configuration file to my archive of files, and updated the installation instructions to discuss installing it, and the need to add your Lotus Notes password to the file.

Building a scan server on Ubuntu Server 12.04

I have an old but capable little Canon scanner that I’ve used for various administrative tasks for a couple of years now. It connects to my laptop via USB, and draws its power from that link too, which makes it very convenient and easy to use.

Except that for the last few months, my daughters have started making increasing use of the scanner for their homework too. Which is fine, but means that the scanner is being carried around the house, and regularly being plugged into and out of different laptops. Which is probably not a good recipe for a long and trouble-free working life.

So today I configured a simple scan server. The scanner is now “permanently” attached to my home server, and anyone who wants to scan documents can do so from their own computer, over the network, using the scanner on the home server. No more hunting for missing USB cables, or even a missing scanner!

This is surprisingly easy to achieve under Linux, as the scanning subsystem (SANE) is implemented as a client/server system, much like the printing system. The only thing that makes it a bit more convoluted is the involvement of a superserver like inetd or one of its equivalents.

On the server:

  1. Plug in the scanner
  2. sudo apt-get install sane-utils
  3. Make sure your /etc/services file contains:

    sane-port 6566/tcp sane saned # SANE

  4. Configure inetd or xinetd to autostart saned. In my case I use xinetd, so I need to ensure that /etc/xinetd.d/saned contains:

    service sane-port
    {
        port        = 6566
        socket_type = stream
        server      = /usr/sbin/saned
        protocol    = tcp
        user        = saned
        group       = saned
        wait        = no
        disable     = no
    }

    Now restart xinetd (by sudo /etc/init.d/xinetd restart) so that saned can be autostarted when it’s required.

  5. Next we need to configure saned. This is controlled by the file /etc/default/saned, which should be changed to look like:

    # Defaults for the saned initscript, from sane-utils

    # Set to yes to start saned
    RUN=yes

    # Set to the user saned should run as
    RUN_AS_USER=saned

  6. At this point we need to make sure that saned can access the scanner. I did this by setting up a udev rule to arrange for the permissions on the underlying device to be set so saned can access it. For my convenience I also set up a “well known” symbolic name (/dev/scanner) to the scanner device too, as that base device can change depending on what is plugged into the server at any point in time; I’m pretty sure saned doesn’t require this though. I achieved this by making the new file /etc/udev/rules.d/90-scanner.rules contain the single line:

    ATTRS{idVendor}=="04a9",ATTRS{idProduct}=="2206",SYMLINK+="scanner",MODE="0660",OWNER="root",GROUP="saned"

    The idVendor and idProduct are obtained by running the lsusb command, and extracting the USB vendor and product identifiers from the scanner entry.

  7. Finally we need to tell saned which systems are allowed to connect to it. This is done by making the file /etc/sane.d/saned.conf read:

    # saned.conf
    # Configuration for the saned daemon

    ## Daemon options
    # Port range for the data connection. Choose a range inside [1024 - 65535].
    # Avoid specifying too large a range, for performance reasons.
    #
    # ONLY use this if your saned server is sitting behind a firewall. If your
    # firewall is a Linux machine, we strongly recommend using the
    # Netfilter nf_conntrack_sane connection tracking module instead.
    #
    # data_portrange = 10000 - 10100

    ## Access list
    # A list of host names, IP addresses or IP subnets (CIDR notation) that
    # are permitted to use local SANE devices. IPv6 addresses must be enclosed
    # in brackets, and should always be specified in their compressed form.
    #
    # The hostname matching is not case-sensitive.

    #scan-client.somedomain.firm
    #192.168.0.1
    #192.168.0.1/29
    #[2001:7a8:185e::42:12]
    #[2001:7a8:185e::42:12]/64
    192.168.255.0/24

    # NOTE: /etc/inetd.conf (or /etc/xinetd.conf) and
    # /etc/services must also be properly configured to start
    # the saned daemon as documented in saned(8), services(4)
    # and inetd.conf(4) (or xinetd.conf(5)).

    In this case you can see I’ve defined it so anything in the 192.168.255.xxx subnet can connect to saned.

On a standard Ubuntu desktop client, only one action needs to be taken to allow it to seamlessly make use of the scan server:

  1. Modify the /etc/sane.d/net.conf file, so it reads:

    # This is the net backend config file.

    ## net backend options
    # Timeout for the initial connection to saned. This will prevent the backend
    # from blocking for several minutes trying to connect to an unresponsive
    # saned host (network outage, host down, …). Value in seconds.
    connect_timeout = 20

    ## saned hosts
    # Each line names a host to attach to.
    # If you list “localhost” then your backends can be accessed either
    # directly or through the net backend. Going through the net backend
    # may be necessary to access devices that need special privileges.
    # localhost
    # My home server:
    192.168.255.20

  2. From this point onwards, you should be able to start “Simple Scan” on the client machine, and see the scanner attached to the server machine, as though it was locally attached. You can alter all the settings as required, and scan over the network as needed.
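
If the scanner doesn’t show up, a quick sanity check from the client command line (assuming sane-utils is installed on the client as well) is:

# Should list the server's scanner via the net backend, with a "net:" prefix
scanimage -L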

Getting Adobe Flash to work on Ubuntu 12.04

One of those “I’ll get around to it” jobs has been upgrading my wife’s laptop. She’s been happily running an old version of Ubuntu for some time now on an ancient Thinkpad without any real issues. Then very recently she ran into problems with uploading photographs to Snapfish. The culprit was clearly an update to Flash, but I decided since she wasn’t getting security updates any more, it was probably better to upgrade her to the latest 12.04 LTS version of Ubuntu to fix the problem.

Unfortunately she didn’t have enough spare disk space to go through a series of upgrades (this machine has an old 30GB hard drive!) so really this meant a hardware upgrade, and a proper reinstall too. However, as I was feeling a bit more awake today (and the weather was awful) this seemed like a good little task to take on. So I backed up the system to my server, found an old 80GB 2.5″ PATA drive in my spares box, and set to.

As with all Thinkpads, making hardware upgrades is fairly straightforward. Pop out one screw, and the drive assembly slides out of the side. Remove another four screws from that assembly to separate the drive from its cage and plastic bezel. Swapping in the higher-capacity drive was a simple matter of reversing the process.

This machine only has a USB 1.1 port, so I did the Ubuntu install from a CD, rather than my normally preferred USB memory stick. It’s a lot noisier than using the memory stick, but has an identical effect: I was soon running the latest version of Ubuntu. A quick update cycle got me the latest security fixes, and then I added in some key additional packages, sorted out the printer and wireless network access, and then restored all the user data that I’d backed up earlier.

Which was great, but Snapfish still didn’t work. It seems that Adobe Flash doesn’t work on 12.04 LTS either. Drat.

So after a bit more reading, it all became clear. Adobe don’t follow the normal Linux approach of issuing a specific version of their software with a given OS release, and then only providing security fixes to it. Instead, they regularly issue their current, latest code to all the different operating systems that they support. This means that when they introduce a bug, you get it on all the versions of all their supported operating systems.

Except in this case, just to make it more confusing, it seems that some people are seeing one set of problems, and others another, while some people are seeing no problems at all. And Adobe are apparently not particularly interested, as they don’t see supporting Linux as a priority.

So in the end, after a great deal of reading various forums on the Internet, it became increasingly clear that there was no clear fix for the problems that Adobe appear to have introduced in their latest code. Ultimately, it seems that the simplest solution is to just back-level the Flash plugin in the browser to an earlier version that doesn’t exhibit these problems.

To do this, I downloaded the archive of previous Adobe Flash plugins, which can be found here: http://helpx.adobe.com/flash-player/kb/archived-flash-player-versions.html. I chose the v11.1.102.55 releases (174 MB), and extracted the Linux version, which is packaged as a shared library called libflashplayer.so. By searching the /usr tree I found that (in my case) there was a single copy of this library already installed in /usr/lib/flashplugin-installer/libflashplayer.so, which I then replaced with the back-level version.
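
The actual swap was only a couple of commands, along these lines; the download path depends on where you extracted the archive, so treat it as a placeholder:

# Find any installed copies of the Flash plugin library
find /usr -name libflashplayer.so
# Keep a copy of the current (broken) version, just in case
sudo cp /usr/lib/flashplugin-installer/libflashplayer.so ~/libflashplayer.so.broken
# Replace it with the back-level v11.1.102.55 version extracted from the archive
sudo cp ~/Downloads/flash_archive/libflashplayer.so /usr/lib/flashplugin-installer/libflashplayer.so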

After restarting Firefox, Adobe Flash now works normally again, and we can bulk upload photos to Snapfish again. Clearly this hack will be overwritten if Adobe issues a new version of the flash plugin, but that’s probably what we want to happen anyway.

Mediatomb on Ubuntu 12.04

One of the things I use Ubuntu for is to run my home server, which serves my video collection to various players around the house. This is achieved with a DLNA server that ships with Ubuntu, called Mediatomb.

Unfortunately, despite having all the needed function, and being remarkably stable, Mediatomb hasn’t been under active development for the last year or two, and when I upgraded to the latest version of Ubuntu server, I discovered that a key feature of Mediatomb had been disabled; namely the support for writing import filters in Javascript.

This allows the collection of video to be sorted and filtered by their characteristics into a series of virtual folders, which can then be used by the media players to find whatever content is required. You could have folders of films sorted by leading actor, or director. Or folders of films by certification. Or date of release. The options are endless. It’s a great feature, and utterly indispensable when it’s suddenly removed.

The reason the feature is disabled is that Mediatomb depends on an out of support version of libjs, or Spidermonkey, the Mozilla Javascript engine. The Ubuntu developers don’t have time to fix this, so until the Mediatomb developers do, the Ubuntu developers have applied a quick fix and disabled the Javascript support so the package can still be shipped.

This post shows how to re-enable that Javascript support on Ubuntu 12.04 Server. It’s a bit of a dirty hack, but it will work until either:

  1. The Mediatomb developers fix this
  2. Someone (such as Raik Bieniek or Saito Kiyoshi) packages & maintains a better fix for 12.04 in a PPA
  3. This effort to replace the Javascript support with Python support completes (I couldn’t get it to compile)

The basic approach here is to rebuild the shipped package of Mediatomb, but with the Javascript support re-enabled. This requires that we have a version of Javascript on the system that Mediatomb can use. The current version in the 12.04 repositories won’t work (the APIs have changed), so we need to install an older “back-level” version, which we can get from Debian, who have still been applying security fixes to it.

  1. cd;mkdir temp;cd temp
  2. sudo apt-get install build-essential, to install the tools you will need to build packages on Ubuntu
  3. sudo apt-get build-dep mediatomb, to install all the dependencies needed for mediatomb to compile
  4. sudo apt-get source mediatomb, to get the source code for mediatomb, and unpack it into a convenient subdirectory
  5. sudo vi mediatomb-0.12.1/debian/rules, and change the line that says “--disable-libjs” to “--enable-libjs” (note that those are prefixed by double-dashes)
  6. Add a new entry to the changelog file in the same directory, incrementing the version number from zero to one. This will help prevent your changes being overwritten.
  7. Get an old copy of Spidermonkey from the Debian Squeeze distribution (on which Ubuntu is ultimately based). You need libmozjs2d and libmozjs-dev, in either the amd64 or i386 versions, depending on whether you are running in 64-bit or 32-bit mode. To determine which version you need, enter the command “dpkg --print-architecture” in a terminal. Then install the appropriate packages using sudo dpkg -i packagename
  8. In all likelihood you will get an error from one or both of those installs, complaining about dependencies. To resolve them and complete the installs, simply enter sudo apt-get install -f
  9. cd mediatomb-0.12.1 and then sudo ./configure. Lots of content will scroll past, but at the end there should be a summary; look for a line that says something like “libjs : yes”. If present then you have enabled Javascript support in the build, and satisfied the dependencies. You can now install any additional dependencies and reconfigure the build further if you wish.
  10. Switch back to your source code with cd ~/temp/mediatomb-0.12.1
  11. Start the compilation with sudo fakeroot debian/rules binary. Lots of compilation messages should scroll past.
  12. When it stops, you should have three .deb files in ~/temp. You can install them with sudo dpkg -i mediatomb*.deb

Finally, switch to root (sudo su) and then issue the command echo packagename hold | dpkg --set-selections for each of mediatomb, mediatomb-common, mediatomb-daemon, libmozjs2d and libmozjs-dev. Then drop back to your user by entering control-D. This will prevent your customised packages being overwritten as part of the normal update processes (they will be “held”.)
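
In practice that boils down to something like this (using sudo for each package rather than an actual root shell):

# Mark the rebuilt packages and the back-level Spidermonkey as "held"
for p in mediatomb mediatomb-common mediatomb-daemon libmozjs2d libmozjs-dev; do
    echo "$p hold" | sudo dpkg --set-selections
done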

You can now configure Mediatomb normally, including the use of custom import.js scripts by altering /etc/mediatomb/config.xml as desired.

Update: Having just been through a reboot on my server it seems that Mediatomb isn’t installed to autostart properly. To resolve this you need to run the command sudo update-rc.d mediatomb defaults which will install the various rcn.d startup and shutdown links.

Update2: I’ve noticed that sometimes after a reboot Mediatomb still isn’t autostarted properly. Turns out that there is a message in /var/log/mediatomb.log referring to The connection to the MySQL database has failed: mysql_error (2002). What this means is that if you are using MySQL rather than SQLite, there is a race condition where Upstart sometimes tries to bring up Mediatomb before the MySQL database is available. You can resolve this by editing /etc/init/mediatomb.conf, and changing:

start on (local-filesystems and net-device-up IFACE!=lo)

to

start on (started mysql and local-filesystems and net-device-up IFACE!=lo)

Upstart will then ensure that MySQL is running before attempting to start Mediatomb.

New KVM Option

The Edimax KVM will hopefully be on its way back to Amazon on Monday.

Extracting all the leads that I had pre-routed through the trunking behind my new built-in desk was a lot harder than I expected. This was largely because I’d run the wires through the trunking, and then attached the trunking to the wall/desk. Getting them back out without removing the trunking from the wall was not trivial. About an hour into the process I was really starting to regret (a lot) that I’d not tested the KVM before running all the cables. Needless to say, I won’t make that mistake again.

Speaking of which, the new KVM is now on its way.

This time I did a LOT more checking of the specifications and reviews before making my selection. Fundamentally my basic needs hadn’t changed, but this time I was a lot more aware of the subtle differences in the manufacturers descriptions of the units. It’s very clear that at the cheaper end of the market most of the manufacturers take one of a couple of old reference designs, and repackage & rebadge them. These wouldn’t support my needs. The key additional criteria I looked for this time was mention of support for laptops, and specifically support for Windows 7.

Windows 7 support is essential because it indicates a KVM that has full support for EDID emulation, which (I believe) is probably also a requirement for the newer Linux desktop environments such as Unity and Gnome Shell. I’m now pretty sure that this is where the Edimax fell short.

In the end, the two cheapest devices that did everything I needed seem to be the ATEN CS64U, and the IOGEAR GCS1804. The IOGEAR is the fuller featured device, with detachable leads etc, but it’s difficult to source in the UK, and comes with a £140 price point. Whereas the ATEN is a bit more restricted (no OSD for example), but is available from Amazon for only £40.

That made the decision trivial.

The search for a new desktop still continues

Back in my last post I described how I attempted to customise the Unity desktop environment to better suit my needs in what is admittedly a commercial setting. The results weren’t too bad, but the exercise exposed a series of issues, some of which were purely personal and subjective, and some of which I felt to be genuine bugs, which I raised against the Unity interface.

This post is all about the same attempt, but this time with the Gnome 3 Shell.

So, having freshly installed Ubuntu 11.10, we need to install Gnome Shell. It’s in the repositories, so it’s a simple matter of opening a terminal and issuing the command:
sudo apt-get install gnome-shell

The steps I then followed were:

  1. Getting to the run-command prompt with Alt + F2 has been disabled by default in Gnome Shell under Ubuntu 11.10. This is annoying, as without it you can’t (trivially) restart or debug the desktop environment. To fix it, open “System Settings”, go to the Keyboard settings, then Shortcuts, then System, click the “Disabled” entry next to “Show the run command prompt”, and press Alt + F2 to re-enable it.
  2. Next I added some Gnome Shell customisations from http://extensions.gnome.org, which make Gnome Shell look a lot more like Gnome 2. Heretical? Perhaps. So shoot me.
    1. Frippery Applications Menu – puts an old-school applications menu on the left side of the top menu
    2. Frippery Bottom Panel – adds a taskbar to the bottom of every workspace
    3. Frippery Move Clock – moves the clock back to where it used to be in Gnome 2.xx
    4. Frippery Panel Favorite – adds a copy of the launcher into the top panel
  3. Next, I wanted to remove the accessibility icon from the top panel. Last time I looked, I didn’t need any of those settings, so I certainly don’t need the icon there all the time. To get rid of it, I downloaded an extension from the website of the author of all the previous extensions:
    http://www.fpmurphy.com/gnome-shell-extensions/noa11y-2.0.tar.gz

    This needs to be unpacked into ~/.local/share/gnome-shell/extensions/ by cd’ing to that directory and running something like:
    tar xvf ~/Downloads/noa11y-2.0.tar.gz

    You then need to install the Gnome Tweak Tool to be able to enable the extension:
    sudo apt-get install gnome-tweak-tool
    and then run it (gnome-tweak-tool) to enable or disable whatever extensions you have loaded.

  4. As with Unity, I wanted to disable the overlay scrollbars:
    sudo su
    echo "export LIBOVERLAY_SCROLLBAR=0" > /etc/X11/Xsession.d/80overlayscrollbars

    Sadly the scrollbars still don’t have scroll buttons on them – which reinforces my thought that this is simply an issue with themes, and hence can probably be resolved once I’ve learned how themes work.

  5. Next, move my preferred set of three window buttons back to the top right of each window titlebar by running the command:

    gconftool-2 --set --type string /desktop/gnome/shell/windows/button_layout ":minimize,maximize,close"

  6. Around now it’s probably worthwhile to log out and back in again, or even reboot.

  7. Next I configured the general settings:

    1. Screen:
      Turn off at “30 minutes”
      Set screen lock “on” after “screen turns off”
    2. Power:
      Do nothing when lid closed (battery or AC)
      Don’t suspend when inactive (battery or AC)
      Shutdown if power critical on battery
    3. Time and Date:
      Panel clock to show 12hour format
    4. Removable Media:
      Tick “Never prompt or start programs on media insertion”
  8. Next I added the date to the clock:
    gsettings set org.gnome.shell.clock show-date true

  9. As with Unity, I configured the terminal by opening terminal preferences, and set the font to “Monospace 9”, the default terminal size to 100×40, and the scrollback to 10000 lines.

  10. As with Unity, I changed the default fonts throughout:

    gsettings set org.gnome.desktop.interface document-font-name 'Sans 10'
    gsettings set org.gnome.desktop.interface font-name 'Ubuntu 9'
    gsettings set org.gnome.desktop.interface monospace-font-name 'Ubuntu Mono 10'
    gsettings set org.gnome.nautilus.desktop font 'Ubuntu 10'
    gconftool-2 --set /apps/metacity/general/titlebar_font 'Ubuntu Bold 9' --type string

  11. I also removed the guest account:

    sudo gedit /etc/lightdm/lightdm.conf

    Make it read:

    [SeatDefaults]
    greeter-session=unity-greeter
    user-session=ubuntu
    allow-guest=false

  12. At this point I found I had a desktop that had almost all the features of my current Gnome 2 setup, plus access to the new metaphor of the Gnome Shell when I wanted it. The only problem is that when mousing to the newly added Applications menu, it’s far too easy to hit the “hot spot”, triggering the Overview Mode.

    There are extensions to move that hotspot elsewhere, but I’d prefer to change the way it works, so that it takes a more conscious effort to engage it. Unfortunately there is no configuration for this, so for this exercise I took a quick hack at the base code, which is a real kludge. I’m hopeful that it may be possible to do something cleaner with monkey patching, but for now my change proves the concept, and makes it necessary to click on the hotspot to engage it, or use the Super key (as now). To do this, edit:
    /usr/share/gnome-shell/js/ui/layout.js, and change the _onCornerEntered method so it reads:


    _onCornerEntered : function() {
        // if (!this._entered) {
        // Patched this to prevent the hot corner from engaging on cursor entering zone.
        if (this._entered) {
            this._entered = true;
            if (!Main.overview.animationInProgress) {
                this._activationTime = Date.now() / 1000;

                this.rippleAnimation();
                Main.overview.toggle();
            }
        }
        return false;
    },

    Now just entering the hotspot will not trigger the Overview Mode – but you can still click on the hotspot (which is the top left pixel), or just use the Super key.

  13. Since I was messing around in the source code I also decided to get rid of the “currently running application name” in the top panel, which again cannot be configured away. Quite why I’d need to be reminded of the name of the application I’m using is beyond me, and it takes up precious menubar space. This time, edit:

    /usr/share/gnome-shell/js/ui/panel.js, and find the AppMenuButton function prototype. Then scroll down to the “_sync” method, and make the first few lines read:

    _sync: function() {
        let tracker = Shell.WindowTracker.get_default();
        let lastStartedApp = null;
        let workspace = global.screen.get_active_workspace();
        // Add the following line to never show the button (application name)
        return;
        for (let i = 0; i < this._startingApps.length; i++)
            if (this._startingApps[i].is_on_workspace(workspace))
                lastStartedApp = this._startingApps[i];

    This works perfectly, but like the previous code change, the problem is that whenever Gnome Shell gets updated I would need to re-apply these hacks manually, which is not good. But as I mentioned before, I’m hopeful that monkey patching will come to the rescue and allow me to turn both changes into a proper extension (a rough sketch of what that might look like follows this list).
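
For what it’s worth, here is a minimal sketch of how such an extension might monkey-patch the hot corner rather than editing layout.js in place. It assumes the relevant class in js/ui/layout.js is exported as HotCorner via imports.ui.layout (the method name is the one patched above, but I haven’t checked the class name against every Shell version), and it would live in its own directory under ~/.local/share/gnome-shell/extensions/ alongside a suitable metadata.json:

// extension.js -- sketch: stop the hot corner triggering on hover alone
const Layout = imports.ui.layout;

let _originalOnCornerEntered;

function init() {
    // Nothing to do at load time.
}

function enable() {
    // Save the stock handler and replace it with a no-op, so merely moving
    // the pointer into the corner no longer opens the Overview. Clicking
    // the corner and the Super key are unaffected.
    _originalOnCornerEntered = Layout.HotCorner.prototype._onCornerEntered;
    Layout.HotCorner.prototype._onCornerEntered = function() {
        return false;
    };
}

function disable() {
    // Restore the stock behaviour when the extension is disabled.
    Layout.HotCorner.prototype._onCornerEntered = _originalOnCornerEntered;
}

One caveat: if the Shell binds the handler to the instance when the corner is created (Lang.bind at connect time), patching the prototype afterwards won’t affect the already-connected handler, and the extension would also need to reconnect or recreate the hot corner. So treat this purely as an illustration of the approach, not a finished extension. A similar wrapper around the _sync method in imports.ui.panel could, in principle, take care of the application-name button too.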

And the conclusion? Well, since it’s all written in Javascript and CSS, Gnome Shell is much easier to customise, and its extension system allows (in theory) for a robust and user-friendly mechanism to change anything that one wishes. I now have a DE that has all the features I like from an “old” Gnome 2 desktop, as well as access to all the new features of Gnome Shell.

There are still rough edges, some of which are probably bugs too, but overall I can get closer to what I want (which may not be what you, or the Gnome Shell developers want) with Gnome Shell than Unity.

So when I upgrade my work “production” laptop over Christmas, it will be to a Gnome Shell based desktop.