Ditching the spinning rust

For some time now I’ve been thinking of switching my laptop storage over to an SSD. I like the idea of the massively improved performance, the slightly reduced power consumption, and the ability to better withstand the abuse of commuting. However, I don’t like the limited write cycles, or (since I need a reasonable size drive to hold all the data I’ve accumulated over the years) the massive price-premium over traditional drives. So I’ve been playing a waiting game over the last couple of years, and watching the technology develop.

But as the January sales started, I noticed the prices of 256GB SSDs had dipped to the point where I’m happy to “invest”. So I’ve picked up a Samsung 840 EVO 250GB SSD for my X201 Thinkpad; it’s essentially a mid-range SSD at a budget price-point, and should transform my laptop’s performance.

SSDs are very different beasts from traditional hard drives, and from reading around the Internet there appear to be several things I should take into account if I want to obtain, and then maintain, the best performance from one. Predominant amongst these are ensuring the correct alignment of partitions on the SSD, ensuring proper support for the Trim command, and selecting the best file system for my needs.

But this laptop is supplied to me by my employer, and must have full system encryption implemented on it. I can achieve this using a combination of LUKS and LVM, but it complicates the implementation of things like Trim support. The disk is divided into a minimal unencrypted boot partition, with the majority of the space turned into a LUKS-encrypted block device. That is then used as the physical volume for an LVM volume group, from which the logical volumes for the actual Linux install are allocated.

Clearly, once I start looking at partition alignment and different filesystem types, a reinstall becomes the simplest option, and the need for Trim support requires fairly recent versions of LUKS and LVM, driving me to a more recent distribution than my current Mint 14.1, which is getting rather old now. This gives me the opportunity to upgrade and fine-tune my install to better suit the new SSD. I did consider moving to the latest Mint 16, but my experiences with Mint have been quite mixed: I like their desktop environment very much, but am much less pleased with other aspects of the distribution. So I think I’ll switch back to the latest Ubuntu, but using the Cinnamon desktop environment from Mint; the best of all worlds for me.

Partition alignment

This article describes why there is a problem with modern drives that use 4K sectors internally, but present themselves as having 512-byte sectors externally. The problem is actually magnified with SSDs, where misalignment can cause excessive wearing of the cells. Worse still, modern SSDs like my Samsung write in 4K pages, but erase in 1MB blocks of 256 pages. That means partitions need to be aligned not to “just” 4K boundaries, but to 1MB boundaries.

Fortunately this is trivial in a modern Linux distribution; we partition the target drive with a GPT scheme using gdisk, and on a new blank disk it will automatically align partitions to 2048-sector, or 1MB, boundaries. On disks with existing partitions this can be enabled with the “l 2048” command in the expert sub-menu, which will force alignment of newly created partitions on 1MB boundaries.
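
As a concrete sketch, gdisk’s scriptable sibling sgdisk can lay out an aligned boot-plus-LUKS scheme in one go. The device name, partition sizes and names below are just my assumptions for illustration, not a recommendation:

    # Assumes the SSD is /dev/sda -- check with lsblk first!
    sudo sgdisk --clear \
        --new=1:0:+512M --typecode=1:8300 --change-name=1:boot \
        --new=2:0:0     --typecode=2:8300 --change-name=2:crypt \
        /dev/sda
    # Print the table; start sectors should all be multiples of 2048 (1MB)
    sudo sgdisk -p /dev/sda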

Trim support

In the context of SSDs, TRIM is an ATA command that allows the operating system to tell the SSD which sectors are no longer in use, and so can be cleared, ready for rapid reuse. Wikipedia has some good information on it here. The key in my case is going to be to enable the filesystem to issue TRIM commands, and then to enable the LVM and LUKS containers that hold the filesystem to pass those TRIM commands on through to the actual SSD. There is more information on how to achieve this here.
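
For reference, a sketch of the sort of configuration involved in letting discards flow down through the stack; the device and mapping names are placeholders from my setup, not anything canonical:

    # /etc/crypttab -- the "discard" option lets dm-crypt pass discards through
    sda2_crypt  /dev/sda2  none  luks,discard

    # /etc/lvm/lvm.conf, in the "devices" section -- this governs discards that
    # LVM itself issues (e.g. on lvremove); filesystem-level discards already
    # pass through device-mapper on recent kernels
    issue_discards = 1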

However, there are significant questions over whether it is best to enable TRIM via the fstab mount options, getting the filesystem to issue TRIM commands automatically as it deletes sectors, or to periodically run the user-space command fstrim from something like a cron job or an init script. Both approaches still have scenarios that could result in significant performance degradation. At the moment I’m tending towards using fstrim in some fashion, but I need to do more research before making a final decision on this.
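
If I do go the periodic route, a minimal weekly job would be enough. This is just a sketch, assuming root and home are the only filesystems worth trimming:

    #!/bin/sh
    # /etc/cron.weekly/fstrim -- trim each mounted filesystem and log the result
    LOG=/var/log/fstrim.log
    echo "*** $(date -R) ***" >> $LOG
    fstrim -v /     >> $LOG 2>&1
    fstrim -v /home >> $LOG 2>&1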

File system choice

Fundamentally I need a filesystem that supports the TRIM command – not all do. But beyond that I would expect any filesystem to perform better on an SSD than it does on a hard drive, which is good.

However, as you would expect, different filesystems have different strengths and weaknesses, so by knowing my typical usage patterns I can select the “best” of the available filesystems for my system. And interestingly, according to these benchmarks, the LUKS/LVM containers that I will be forced to use can have a much more significant effect on some filesystems (particularly the almost-default ext4) than others.

So based on my reading of these benchmarks and the type of use that I typically make of my machine, my current thought is to run an Ubuntu 13.10 install on BTRFS filesystems with lzo compression for both my root and home partitions, both hosted in a single LUKS/LVM container. My boot partition will be a totally separate ext3 partition.
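
In fstab terms, that layout would look something like the sketch below; the volume group and logical volume names are just placeholders I’ve assumed:

    # /etc/fstab (sketch): root and home on btrfs with lzo compression,
    # inside the LUKS/LVM container; boot on a plain ext3 partition
    /dev/mapper/vg0-root  /      btrfs  noatime,compress=lzo  0  0
    /dev/mapper/vg0-home  /home  btrfs  noatime,compress=lzo  0  0
    /dev/sda1             /boot  ext3   noatime               0  2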

The slight concern with that choice is that BTRFS is still considered “beta” code, and still under heavy development. It is not currently the default on any major distribution, but it is offered as an installation choice on almost all. The advanced management capabilities such as on-the-fly compression, de-duplication, snapshots etc make it very attractive though, and ultimately unless people like me do adopt it, it will never become the default filesystem.

I’ll be implementing a robust backup plan though!

You don’t have to know the answer to everything – just how to find it

Since I work at IBM, I get to use the company’s own email system, which is based on what used to be called Lotus Notes. It’s recently had some extra “social media awareness” added to it, been rebranded “IBM Notes”, and repositioned as a desktop client for social business. Which is all very modern and hip, especially for a product that has its roots back in the early 1990s. However, most organisations (including IBM) tend to use it solely for email – for which it is the proverbial sledgehammer.

But having been using it for some 18 years now, I’m fairly comfortable with it. The only issue I have is that, as I’ve been using it for so long, my mail archives contain a huge amount of useful information from old projects that I’ve worked on. I also have other information related to those projects stored elsewhere on my laptop hard drive, and pulling all that information together and searching it coherently isn’t a trivial problem. However, in recent years desktop search engines have begun to provide a really nice solution to this.

The problem here is that Lotus Notes is based on a series of binary databases, which form the backbone of its ability to efficiently replicate documents between clients and servers. Desktop search engines generally don’t understand those databases, and hence do not work with Lotus Notes. So searching my laptop becomes a somewhat tedious process, involving the Lotus Notes client’s search feature, and manually correlating the results with those from a desktop search engine of some type. It works, but it’s just not as good as it could be.

What I really want, what I really really want (as the Spice Girls would sing) is a desktop search engine that can understand and integrate my Lotus Notes content. And that’s what this post is all about.

Since I run Linux I have a choice of open source desktop search engines, such as Tracker or Beagle (now deceased). But my current preference is for Recoll, which I find to be very usable. And then, last year, I discovered that a colleague had written and published a filter to enable Recoll to index documents inside Lotus Notes databases. So I had to get it working on my system!

Unfortunately, it turned out that my early attempts to get the filter working on my Ubuntu (now Mint) system completely failed. He was a Red Hat user, and there are quite a lot of packaging differences between a Debianesque Lotus Notes install and a Red Hat one, especially inside IBM where we use our own internal versions of Java too. So the rest of this post is essentially a description of how I hacked his elegant code to pieces to make it work on my system. It’s particularly relevant to members of the IBM community who use the IBM “OCDC” extensions to Linux as their production workstation. I’m going to structure it as a description of how Recoll and the Notes filter work, then a description of how I chose to implement the indexing (to minimise wasteful re-indexing), and hence which files go where, and some links to allow people to download the various files that I know to work on my system.

At a very simplistic level, Recoll works by scanning your computer’s filesystem and, for each file it encounters, working out what it is (plain text, HTML, Microsoft Word, etc.) and then either indexing it using the Xapian framework (if it’s a format that it natively understands), or passing it to a helper application or filter which returns a version of the file in a format that Recoll does understand, and so can index. In the case of container formats like zip files, Recoll extracts all the contents and processes each of those extracted files in turn. This means Recoll can process documents to an arbitrary level of “nesting”, comfortably indexing a Word file inside a zip file inside a RAR archive, for example. Once all your files are indexed, you can search the index with arbitrary queries. If you get any hits, Recoll will help to invoke an appropriate application to allow you to view the original file. The helper applications are existing external applications like unRTF or pdftotext that carry out conversions from formats that Recoll will commonly encounter, while filters are Python applications that enable Recoll to cope with specialist formats, such as Lotus Notes databases.

So, the way the Lotus Notes filter works, is that:

  1. Recoll encounters a Lotus Notes database, something.nsf
  2. To work out what to do with it, Recoll looks up the file type in its mimemap configuration file, and determines what “mimetype” to associate with that file
  3. It then looks up what action to take for that mimetype in the mimeconf configuration file, which tells it to invoke the rcllnotes filter (there’s a sketch of these configuration entries just after this list)
  4. It then invokes rcllnotes, passing it the URI to something.nsf
  5. rcllnotes then extracts all the documents (and their attachments) from the Notes database, passing them back to Recoll for indexing
  6. It does this by invoking a Java application, rcllnotes.jar, that must be run under the same JVM as Lotus Notes
  7. This Java application uses Lotus Notes’ Java APIs to access each document in the database in turn
  8. These are then either flattened into HTML output (using an XSLT stylesheet), which Recoll can consume directly, or, in the case of attachments, output as documents needing further processing; Recoll can tell which is which from the mimetype of the output. Included in the flattened HTML are a couple of metadata tags, one marking the HTML document as descended from a Lotus Notes database, and the other containing the complete Lotus Notes URI for the original document. This latter information can be used by the Lotus Notes client to directly access the document – which is crucial later in the search process
  9. Recoll then indexes the documents it receives, saving enough information to allow Recoll to use rcllnotes again to retrieve just the relevant document from within the Notes database.
  10. So, when a search turns up a Notes document, Recoll can use the saved information (the URI of the database and the Notes UNID of the document?) and the rcllnotes filter to obtain either the flattened HTML version of the document, or a copy of an attachment. Recoll then uses the document’s mimetype to determine how to display it. In the case of an attachment, Recoll simply opens it with the appropriate application. In the case of the HTML, Recoll combines the expected “text/html” with the information in the metadata tag that describes the HTML as being derived from a Lotus Notes document. This produces a mimetype of “text/html|notesdoc”, which it then looks up in the mimeview configuration file, which causes it to use the rclOpenNotesClient script. That reads the Notes URI from the other metadata field in the flattened HTML file, and then invokes the Lotus Notes client with it, causing the actual document of interest to be opened in Lotus Notes.
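
To make steps 2, 3 and 10 a little more concrete, the relevant configuration entries look roughly like the sketch below. I’ve had to guess at the exact mimetype string and invocation syntax here – the authoritative versions are in the configuration files shipped in the filter’s archive:

    # mimemap: map the .nsf extension to a mimetype (name assumed)
    .nsf = application/x-lotus-notes

    # mimeconf, [index] section: hand that mimetype to the rcllnotes filter
    application/x-lotus-notes = exec rcllnotes

    # mimeview, [view] section: open flattened Notes documents via the
    # rclOpenNotesClient script rather than a browser
    text/html|notesdoc = rclOpenNotesClient %f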

One of the problems with using Recoll with Lotus Notes databases is that it’s not possible to index just the few changed documents in a Notes database; you have to reindex an entire database’s worth of documents. Unfortunately there are usually a lot of documents in a Notes database, and the process of indexing a single document seems relatively slow, so it’s important to minimise how often you need to reindex a database.

To achieve this, I make use of a feature of Recoll whereby it is possible to search multiple indexes in parallel. This allows me to partition my system into different types of data, creating separate indexes for each, but then searching against them all. To help with this, I made the decision to index only the Notes databases associated with my email (either my current email database, or its archives) and a well-known (to me) subset of my filesystem data. Since my email archives are partitioned into separate databases, each holding about two years of content, I can easily partition the data I need to index into three categories: static Lotus Notes databases that never change (the old archives), dynamic Lotus Notes databases that change more frequently (my email database and its current archive), and other selected filesystem data.

I then create three separate indexes, one for each of those categories:

  1. The static Notes databases now amount to about 8GB (up from 5.5GB when I first wrote this) and take a little under 4 hours to index on my X201 laptop; however, since this content is truly static, I only need to index it once.
  2. The dynamic Notes databases now amount to about 1.5GB (down from 4GB originally) and take about 40 minutes to index; I reindex this once a week. This was a bigger job than it needed to be until I carved a big chunk of my current archive off into another “old” static one.
  3. Finally, the filesystem data runs to about another 20GB or so, and I expect this to change most frequently, but to be the least expensive to reindex. Consequently I use “real-time indexing” on this index; that means the whole 20GB is indexed once, and then inotify is used to spot whenever a file has changed and trigger a reindex of just that file, immediately. That process runs in the background and is generally unnoticeable.

So, how to duplicate this setup on your system?

First you will need to install Recoll. Use sudo apt-get install recoll to achieve that. Then you need to add the Lotus Notes filter to Recoll. Normally you’d download the filter from here, and follow the instructions in the README. However, as I noted at the beginning, it won’t work “out the box” under IBM’s OCDC environment. So instead, you can download the version that I have modified.

Unpack that into a temporary directory. Now copy the files in RecollNotes/Filter (rcllnotes, rcllnotes.jar and rclOpenNotesClient) to the Recoll filter directory (normally /usr/share/recoll/filters), and ensure that they are executable (sudo chmod +x rcllnotes etc). You should also copy a Lotus Notes graphic into the Recoll images directory where it can be used in the search results; sudo cp /opt/ibm/lotus/notes/notes_48.png /usr/share/recoll/images/lotus-notes.png.

Now copy the main configuration file for the Notes filter to your home directory. It’s called RecollNotes/Configurations/.rcllnotes and once you have copied it to your home directory, you need to edit it, and add your Lotus Notes password in the appropriate line. Note that this is by default a “hidden” file, so won’t show up in Nautilus or normal “ls” commands. Use “ls -a” if necessary!

Next you need to set up and configure the three actual indexes. The installation of Recoll should have created a ~/.recoll/ configuration directory. Now create two more, such as ~/.recoll-static/ and ~/.recoll-dynamic/. Copy the appropriate configuration files from the subfolders of RecollNotes/Configurations/ into your three Recoll configuration folders. Now edit the recoll.conf files in ~/.recoll-static/ and ~/.recoll-dynamic/, updating the names of the Notes databases that you wish to index. Finally, manually index these Notes databases by running the commands recollindex -c ~/.recoll-static -z and recollindex -c ~/.recoll-dynamic -z.
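
Pulled together, that stage looks something like this; the subfolder names under Configurations/ are made up here, so match them to whatever is actually in the archive:

    # Create the two extra Recoll configurations and build their indexes
    mkdir -p ~/.recoll-static ~/.recoll-dynamic
    cp RecollNotes/Configurations/static/*  ~/.recoll-static/
    cp RecollNotes/Configurations/dynamic/* ~/.recoll-dynamic/
    # (edit recoll.conf in each directory to name the Notes databases to index)
    recollindex -c ~/.recoll-static -z     # -z rebuilds the index from scratch
    recollindex -c ~/.recoll-dynamic -z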

At this point it should be possible to start Recoll against either of those indexes (recoll -c ~/.recoll-static, for example) and run searches within the databases in that index. I leave it as an exercise for the interested reader to work out how to automate the reindexing with cron jobs.
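
That said, something along the lines of the following crontab entries (via crontab -e) would be a reasonable starting point; the schedule is just an assumption about how often things change:

    # Rebuild the dynamic Notes index early every Sunday morning
    0 3 * * 0   recollindex -c $HOME/.recoll-dynamic
    # The static index only really needs building once, but a monthly
    # refresh catches the odd newly-added archive
    0 4 1 * *   recollindex -c $HOME/.recoll-static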

Next we wish to set up the indexing for the ~/.recoll/ configuration. This is the filesystem data that will run with a real-time indexer. Start by opening the Recoll GUI. You will be asked if you want to start indexing immediately; I suggest that you select real-time indexing at startup and let it start the indexing, then immediately STOP the indexing process from the File menu. Now copy the file RecollNotes/recoll_index_on_ac to your personal scripts directory (mine is ~/.scripts), ensure it is executable, and then edit the file ~/.config/autostart/recollindex.desktop, changing the line that says Exec=recollindex -w 60 -m to Exec=~/.scripts/recoll_index_on_ac (or as appropriate). This script will in future be started instead of the normal indexer, and will ensure that indexing only runs when your laptop is on AC power, hopefully improving your battery life. You can start it manually now with the command nohup ~/.scripts/recoll_index_on_ac &, but in future it will be started automatically whenever you log in.
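
For the curious, the essence of that wrapper is simply “don’t start the monitor until we’re on mains power”. A much-simplified sketch is below; the real script in the archive is more careful (for instance about what happens when you later unplug), and this assumes the on_ac_power utility from the powermgmt-base package is installed:

    #!/bin/sh
    # Wait until the laptop is on AC power, then hand over to the
    # real-time indexer (-m monitors with inotify, -w 60 delays the start)
    until on_ac_power; do
        sleep 300
    done
    exec recollindex -w 60 -m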

While your filesystem index is building, you can configure Recoll to use all three indexes at once. Start the Recoll GUI, and navigate to Preferences -> External Index dialog. Select “Add Index”, and navigate into the ~/.recoll-static/ and ~/.recoll-dynamic/ directories, selecting the xapiandb directory in each. Make sure each is selected. Searches done from the GUI will now use the default index (from the filesystem data) and the additional indexes from the two Lotus Notes configurations.

There is one final configuration step worth carrying out, and that is to customise the presentation of the search results. If you look in the file RecollNotes/reoll-result-list-customisation you will find some instructions to make the search results easier to read and use. Feel free to adopt them or not, as you wish.

Update: To answer the first question (by text message no less!), my indexes use up about 2.5GB of space, so no, it’s not insignificant, but I figure disk really is cheap these days.

Update: Corrected command to copy Notes icon to Recoll image directory to match configuration files, and a couple of the pathnames where I had introduced some typos.

Update: Added the main .rcllnotes configuration file to my archive of files, and updated the installation instructions to discuss installing it, and the need to add your Lotus Notes password to the file.

RIP, old friend

My second ever laptop, my old Thinkpad 600, has finally died.

It replaced my original 760CD, and like that machine, I carried it with me on numerous business trips all over the world. It survived being dropped, splashed with coffee, slung into bags, and then crammed into overhead luggage racks or under seats where it invariably got stood on. Despite all this abuse it still worked faultlessly in temperatures as varied as -20°C in the Scandinavian winter, through to +40°C in the humid summers of the US Deep South.

When IBM supplied me with an upgrade I bought my 600 off the company. For a time it was my personal system. Then it ran my home network, 24×7, until I could afford the parts to build a proper server. It was my wife’s main machine for a time, and lately my youngest daughter has been doing all her homework on it. Even in retirement it worked hard for its living.

I figure it must be 12 years old, and when it finally failed it was still running Ubuntu 11.04, which is a sophisticated and current operating system. OK, the bizarre hybrid sound card wasn’t recognised (the 600 used a custom DSP to implement both the modem and the sound system – a great idea, badly executed) and it wasn’t the fastest system on the block, but the overall experience was still pretty good.

At the end of the day it’s just a pile of electronics attached to a magnesium alloy chassis with some rather drab composite plastic covers, and it no longer works. I should just recycle it. But there is an emotional attachment; I can’t bring myself to just take it to the recycling centre and dump it. Given that the motherboard has failed, the best I can offer it is the chance to donate some spare parts to new projects. So for now it’s going into my spare parts box; hopefully the screen, memory, disk and perhaps keyboard will show up in some future projects.

It may yet live again!

The search for a new desktop

As I mentioned here, I’m less than happy with the move Ubuntu have made towards adopting new desktop environments that seem to be more suited to touchscreen devices than desktop computers. So I’ve been test-driving a few of the alternatives to try to find something that will let me get on with my work, without getting in the way all the time.

So far I’m very impressed with LXDE, which is available pre-packaged on top of the underlying Ubuntu 11.10 base as “Lubuntu”. Admittedly it’s very basic out the box (or off the USB key), but that seems to be because it’s been designed for very low-powered or old computers. All the default applications have also been selected to keep memory and CPU usage to a minimum. Nothing wrong with that, but in my case I have a ridiculously powerful laptop with lashings of disk and memory to run it on – so all I want is the good old-fashioned desktop metaphor back. Lowered system requirements are simply an added benefit.

So I’ve gone about hacking Lubuntu into something better suited to me. So far I’ve removed sylpheed sylpheed-doc sylpheed-i18n sylpheed-plugins mtpaint osmo xpad ace-of-penguins abiword abiword-common libabiword-2.8 gnumeric and gnumeric-common. That got rid of most of the default applications, and made way for me to replace them with something fuller-featured.

I then added thunderbird xscreensaver-data-extra xscreensaver-gl-extra recoll inkscape scribus gimp gimp-data gimp-data-extras gimp-help-en gimp-help-common dia shotwell libreoffice aisleriot gnome-sudoku freemind audacity musescore easytag pitivi and conky-all. That adds most of the applications that I would expect to need from the standard Ubuntu repositories.
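
For reference, the equivalent apt commands (all against the standard Ubuntu repositories) are:

    sudo apt-get remove sylpheed sylpheed-doc sylpheed-i18n sylpheed-plugins \
        mtpaint osmo xpad ace-of-penguins abiword abiword-common \
        libabiword-2.8 gnumeric gnumeric-common
    sudo apt-get install thunderbird xscreensaver-data-extra \
        xscreensaver-gl-extra recoll inkscape scribus gimp gimp-data \
        gimp-data-extras gimp-help-en gimp-help-common dia shotwell \
        libreoffice aisleriot gnome-sudoku freemind audacity musescore \
        easytag pitivi conky-all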

I then reconfigured the panels. Lubuntu comes with one panel on the bottom of the screen, a la Windows XP, but I’m used to the Gnome approach. So I moved the original panel to the top of the screen and added a second panel to the bottom. I then reconfigured the panels to match what I’m used to in a Gnome 2.x environment by moving around the various panel items – which was far easier than configuring the Gnome panels. So far so good.

Interesting to note that at this point, my test laptop (which is “only” a dual-core 2GHz machine with 2GB of RAM) absolutely flies. But I still have a lot of things to try:

I know it’s just eye-candy for the sake of it, but I tried to add a simple Conky installation, showing some key facts and figures on the desktop – date/time, CPU, RAM & disk monitoring etc. It turns out that Conky and LXDE’s PCManFM (which manages the desktop) don’t play well together out the box. You can get Conky scripts roughly working by altering them to contain “own_window_type normal” rather than “own_window_type overlay” or “own_window_type desktop”. However, the window can easily get minimised, with no way to recover it. Low priority, but more research required on that.
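
In .conkyrc terms (old pre-1.10 syntax), the window settings I ended up with look roughly like this – a sketch of the workaround rather than a complete configuration:

    # Fragment of ~/.conkyrc -- only the window-related settings
    own_window yes
    own_window_type normal
    own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
    own_window_transparent yes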

Currently most of the Thinkpad Fn+F1-F12 key combinations aren’t recognised. The only one I really care about is Fn+F4 for suspend, which I can work around using the menus, but I’d like to get at least that one enabled.

Next I need to test all the IBM-specific software that I normally use. I suspect that there may be some issues around the support of some of the Lotus products, which are built on top of an Eclipse base, and hence may not play well with my different desktop environment. Time will tell on that, but it’s critical for me.

Finally, assuming I can get all these basics working, I think I’ll be looking into producing a custom theme, as the standard Lubuntu (cold) blue isn’t at all to my taste.

Recovering recordings from dead Pace Twinview PVR

A couple of years ago the kids discovered the benefits of our PVR, and suddenly (a) there was no space on the PVR, and (b) we could never find anything we recorded amongst the zillions of recordings of CBeebies, CBBC and CITV. And then we inherited an old Pace Twinview PVR from my father-in-law, who had traded up to a Humax 9200. So I set it up for the kids on “their” TV. This meant that they could record whatever they wanted, without filling up the PVR in the lounge. And for a fair time, life was good.

And then the Twinview started playing up, and eventually died. And suddenly I have three kids who want me to recover all the recordings that they’ve been making.

No problem, I think – those recordings are probably just recorded directly off air as transport streams (.ts files), which I can easily transform (using ffmpeg) either into something like H.264 video and MP3 audio in an MP4 container, which they can then watch on the PS3, or into an MPEG-2 file which I can then make into a standard DVD. So I say not to worry, I’ll fix it for them.
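
The sort of thing I had in mind, sketched with a hypothetical recording.ts; the codec choices here are my assumptions rather than anything the PVR dictates:

    # H.264 video + MP3 audio in an MP4 container, for playback on the PS3
    ffmpeg -i recording.ts -c:v libx264 -c:a libmp3lame recording.mp4
    # ...or MPEG-2 output suitable for authoring onto a PAL DVD
    ffmpeg -i recording.ts -target pal-dvd recording.mpg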

Which was hasty. Very hasty. And possibly a big mistake.

So I extract the hard drive from the PVR. Good news, it’s a simple 20GB PATA laptop drive which will fit nicely into my Thinkpad ultrabay. So I boot up Ubuntu, and do a quick “fdisk -l /dev/sdb” to discover that there are three partitions, all of which are unrecognised by fdisk. They are however flagged with partition identifier 0xE0, which after a bit of Googling turns out to be a completely proprietary filesystem, designed by ST microelectronics, called ST AVFS – presumably the ST Microelectronics Audio Visual FileSystem.

So currently I’m struggling to find a way to get at the data. It’s not possible to just mount the partitions, but it turns out that there has been some work done on some Linux command-line tools (TwinRIP) to extract the data from those partitions. However, the tools are at least two years out of date, supplied in binary form only, and no longer run under any of my Ubuntu installs (lots of problems with missing libraries). Now, it transpires that there is a GUI program to do much the same under Windows (TDU), which looks to be more recently updated, and (plus point) can directly produce MPEG output files. It will be interesting to see if that runs under Windows 7 RC, which is the only Windows install I currently have on my Thinkpad!

Update: And the results are in. The TDU program doesn’t work any better than the Linux one, and Windows 7 doesn’t want to talk (reliably) to the hard drive, whether mounted in my Thinkpad ultrabay or a USB caddy. Worse, the author of TDU & TwinRIP has not published any source code, so there’s no chance of doing anything geeky at this point. I’ve declared defeat, and told the kids that their old recordings are now officially lost 😦

The only upside is that this gives me another free 20GB PATA laptop drive to play with, if only I can think of a use for something so small.

Smaller is better?

Work provide me with a laptop for business, changing them every three years or so. I’ve had a succession of IBM Thinkpads, which I have to say are truly the Rolls-Royce of laptops. Relatively compact and portable, stuffed with features and the kind of build quality that allows them to survive years of abuse (mine have travelled all over the world with me). My experiences with them have been exemplary … until the last few years, when IBM sold the Thinkpad business off to Lenovo. The last couple of machines have not been so great. My top of the range T40p needed 3 motherboards in as many years. My current T60p has already had 2 motherboards, a fan and a heatsink assembly in two years. And now the screen (a rather nice 15″ 1600×1200 job) is showing lines of dead pixels that probably indicate either the LVDS connector is failing, or I need yet another new motherboard. In addition Thinkpads seem to have been steadily growing. My last two machines have not fitted into my “old faithful” laptop bag, requiring a bigger heavier bag to carry them in.

So recently I found myself wondering why I needed a machine with 2 x 2.16GHz processors, 2GB of RAM, and a 1600×1200-pixel 15″ screen. Fundamentally I do the same stuff on this machine that I did on my old Thinkpad 600 some 10 years ago. I do email. I browse the web. I use IM. I follow some (real NNTP!) newsgroups. I create documents and I read documents. I do presentations. I occasionally print stuff. So why do I now need the above monster specification, when I used to get by on a single 300MHz processor, with 384MB of RAM, and a 1024×768 13″ screen? The more I thought about it, the less I could understand it. All I’ve done is adopt a much bigger and heavier machine that runs a lot hotter, to do the same stuff. The bigger machine isn’t actually allowing me to do anything faster … I still think at the same speed. In fact, in one area my performance has actually decreased – I walk a lot slower with it because it weighs so much.

So at Christmas I decided to set myself a personal “grand challenge” – to see how small a machine I could get away with to do my job. So I went out and bought my first ever laptop of my own – an Acer Aspire One “netbook”. I got the one with 512MB of RAM and a 120GB hard drive. It cost me 200 GBP here in the UK (I know, they’re a LOT cheaper in the USA – we get screwed on computer stuff here).

A day playing with the preloaded Linpus Lite Linux (a.k.a. hacked Fedora 6) convinced me that while it was set up to be a foolproof computing appliance for the computer-illiterate, I needed something fuller-functioned. So after a little fiddling with gparted, I now have the Aspire set up to triple-boot (a) the original Linpus Lite, (b) Windows XP SP3 and (c) Ubuntu 8.10. Ubuntu will be my OS of choice, with Windows there purely for the odd work application I have to use that isn’t supported under Linux. I kept Linpus purely to compare my performance tuning of Ubuntu against the stock preload.

To Ubuntu I added Thunderbird, Lotus Notes and Sametime (our corporate-standard groupware) and some VPN software. I then added OpenProj, Freemind and Dia, and upgraded the OpenOffice install to v3.0. And to my astonishment, in the last month I’ve needed nothing else.

There have been some teething problems though.

The first problem I experienced was that on Linux, Lotus Notes is implemented as an Eclipse plugin, so unfortunately by the time it’s loaded there’s very little left of the 512MB of RAM. I gave up after a couple of days, and sprang for another 1GB of RAM, at another 13 GBP. Installing it required the complete disassembly of the Aspire … but in actual fact only took about 20 minutes with a set of jeweller’s screwdrivers. This maxes out the memory on the Aspire One (at 1.5GB), but I now seem to have memory to burn, even when I have all my applications open at once.

The second problem is that as a long-term Thinkpad user, I’m used to having a trackpoint (the little red “nipple” embedded in the keyboard) for moving the cursor. Compared to the control that provides, a trackpad is a hopelessly painful experience. Fortunately I already had a Logitech Bluetooth mouse (a V470), but unfortunately the Aspire One doesn’t come with internal Bluetooth, so I had to add an external USB Bluetooth adaptor. I imported a USB Bluetooth 2.0 EDR micro-stick from Hong Kong for 6 USD, which resolved that problem, and it’s actually small enough to be left permanently attached, though longer term I’ll probably solder the internals of one to the motherboard somewhere.

Other than that, it’s proving a remarkably solid little performer. The keyboard is a little cramped, but just about big enough to almost-touch-type on. The screen is painfully small, but with multiple workspaces and compiz-fusion doing its thing, you can work around that. And I have external monitors at home and at the office, so it’s only really a problem while travelling anyway. If I have a lot of stuff going on then some applications can be a little sluggish, but even then it’s not problematic … just noticeable.

Boot times for standard Ubuntu were very slow compared to the preloaded Linpus Lite, but a custom kernel tailored to the Aspire One has got that down to about 25 seconds, which is faster than standard Ubuntu boots on my T60p. Still not as fast as Linpus though – which is up in about 15 seconds. Battery life on the standard 3-cell battery is only about 2:20; a high-capacity 6-cell third-party battery should take that to about 7 hours, but for my working patterns that’s unnecessary, and it would start to add to the weight and size again.

Which leads to the question: do I declare the challenge beaten or not? I’d justified buying the Aspire One on the basis that if I couldn’t live with it, I’d simply give it to one of my daughters, as I figured it would be fine as a first machine for school work etc. Well, at the moment I’m not prepared to give it up. It does everything that I need – just, and no more. In fact, these Aspire Ones would be great for my daughters, and the only thing holding me back from going out and buying them a pair is the faint hope that we may see some netbooks built around NVidia’s ION platform this year.

Replacing the Intel 945GSE chipset (which seems to be based on 3-4 year old designs) with the NVidia GeForce 9400 chipset ought to result in a stupendous graphical performance hike for no increase in power consumption. The only question is, will anyone do it? The problem is that it makes low-end netbooks compete with mid-range laptops (problematic for the manufacturers), and Intel is sure to price the Atom/945 combination in such a way that it’s almost as cheap to take the pair as an Atom on its own – which would stuff any chance that NVidia have. It will be interesting to see how it works out.

All change …

Due to poor support for my laptop hardware, I’ve been lagging behind the times recently, and I’ve not moved forward from Ubuntu 7.10 since I installed it about a year ago. And that’s despite the arrival of 8.04, which was supposed to be the long-term support version, with lots of stability updates.

So imagine my surprise last week, to discover that when I tried a trial fresh install of Ubuntu 8.10 (which is all about new function, not stability), almost every issue I had on 7.10 had been fixed.

So, I now have hardware accelerated graphics on my Mobile FireGL chipset (and hence Compiz support), the ability to switch the external VGA output on (so I can actually present from this laptop under Linux!) and working suspend (and resume afterwards). And the icing on the cake is that my wireless works properly in our corporate environment, for the first time ever. So as of now, I’ve switched over to Ubuntu 8.10 as my main OS, and I’m upgrading to the latest versions of all my applications as we speak.

Which brings me to this post, which I’m making through a new tool called gnome-blog. It’s a blogging applet that’s pared down to the minimum, extremely simple to use, and completely integrated into the Gnome panel, so it’s never more than a click away.

Hopefully it may get me posting a little more often.