Stoves spares

A few years ago we bought a range-style oven, with 3 ovens, a warming drawer and 7 gas burners. It’s a lovely thing, resplendent in bright royal blue paintwork, and makes cooking even the biggest of meals pretty easy. It’s also got all kinds of cooking modes, and various “designer” features, including these nifty sliding telescopic shelves:

[Image: a Stoves Richmond range cooker with telescopic sliding shelves]

These work by having telescopic runners attached to the shelf supports in the sides of the oven, and then the shelves clip to the runners and can be slid out of the oven so you can more easily access the cooking food. This seems like a great idea in the showroom, but turns out to be less useful in real life, because:

  • The runners are in fixed locations. So you can only use them if you want the spacing between the shelves that the manufacturer decided on when they made the shelf supports.
  • Although you can take the shelves off the runners, and use them in some intermediate locations, there just isn’t enough flexibility in the design of the shelf supports, and you often end up needing to use a second oven because you can’t space the shelves well enough. Not good for the environment, or the cook’s temper, to have to start a second oven part-way through cooking dinner for eight.
  • Getting the shelves in and out of the runners is awkward when the oven is cold, and extremely difficult when it’s hot, which means you tend to be forced to pick a shelf configuration at the point you turn on the oven. When you discover you guessed wrongly, it turns out that rearranging shelves of partly cooked food (at 200°C) is no-one’s idea of fun.
  • Finally, they just aren’t robust. If the shelf isn’t fully retracted into the oven when you close the door, there is a good chance that the runner will be pushed off the shelf support (along with a shower of tiny ball-bearings).

This latter problem is what finally drove us over the cliff. I repaired the runners twice but the third time I couldn’t find all the ball bearings, and it was clear that a replacement was required. Cost: over £100. Just to get all the same problems as before. So we decided we didn’t want telescopic shelves any more. We wanted old-fashioned low-tech shelf supports that didn’t telescope, but did have more options for height adjustment.

The manufacturer (Stoves) were not terribly helpful. Their helpline staff (and the outsourced spare-parts operation) were all operating purely on a system that required you to supply the oven serial number, and they’d tell you the official spare part that you needed for your device, and the cost to deliver it to you. Which is fine, until you’re trying to find out if they have any other shelf supports that will fit.

In the end I trawled through all the online suppliers I could find, looking at Stoves shelf supports for all their different models (including those sold under other brand names) checking the pictures and descriptions for likely candidate replacements. And it turns out that they do have exactly what we wanted, for £22. And a day of effort.

So if, like us, you have a Stoves Richmond Range Cooker, model DFT1000, and you’re at your wit’s end with the telescopic shelf supports in the main oven, then you can replace them with a “Guide Shelf Large Rh” and “Guide Shelf Large Lh 1979”, stock numbers ES924555 and ES924554 from http://www.espares.co.uk. They fit perfectly.

Blogging again

It’s been a long time since I last posted anything here. There are a mix of things behind that. Coronavirus, obviously, but also various other events going on in my life that initially left me with no spare time, and ultimately little or no enthusiasm to pick up the metaphorical pen again.

But time moves on, enthusiasm returns, and here we are again. Much as before, I’ll produce the odd post on things that I think are interesting or useful, and post it into the void to see if anyone is listening …

Giving new life to old routers

I’ve been planning to update my WiFi network for some time. I have three Access Points (APs) to cope with the all-brick interior walls of my house, but one had simply stopped working, and another could no longer be upgraded to cope with the latest security exploits. None were 5GHz capable, which also meant I was suffering from congestion caused by the many other nearby 2.4GHz WiFi networks. Ideally I wanted to move to 2.4/5GHz dual-radio APs with 802.11ac and MIMO aerials, which at the time of writing cost about £60 per device.

Fortunately there are a large number of BT Home Hub v5 home gateways available on UK eBay from about £10 delivered. These are very nice devices that more than meet my requirements, but the firmware is designed to prevent them from being used on any network other than BT’s, and to seriously limit their functionality.

However, it is possible to replace the router firmware with open source software that removes all those restrictions and allows the full potential of the hardware to be realised. The software I used is called OpenWRT/LEDE. Be warned, though: the process is fairly technically involved, and as it involves soldering wires to the motherboard of the router, it is definitely not for the faint-hearted. The “bible” for this process is available from this website, but it has steadily grown until it now comprises a 130-page document. It’s extremely complete, but hardly consumable.

So what follows is a summary of the process I followed, which worked perfectly for the three routers I converted.

My environment is a home network running in the address range 192.168.255.0-255, controlled by a router (running LEDE 17.1.04) and hosting a home server (running Ubuntu server 16.04.4 LTS). A machine needs to be connected to the router via a serial connection, and I used an Ubuntu 16.04.4 LTS laptop, with an FTDI adapter.

In broad terms, we need to force the router to boot into a debug mode where it talks to the laptop via the serial link. Then we download (via the serial link) and run a new, more functional version of the bootloader. We then use that to download and execute a basic OpenWRT/LEDE environment over the network, which we use to back up the original BT router firmware and repartition the flash storage, before finally installing OpenWRT/LEDE into the flash.

First I made up a set of flying leads to connect from the FTDI adapter to the Home Hub motherboard. Then I set up a TFTP server on the home server:

sudo apt-get update
sudo apt-get install tftpd-hpa
sudo service tftpd-hpa status
netstat -a | grep tftp

Then I created the directory that will be used by the TFTP server, and configured the server itself:

sudo mkdir -p /srv/tftp
sudo cp /etc/default/tftpd-hpa /etc/default/tftpd-hpa.ORIGINAL
sudo vi /etc/default/tftpd-hpa
TFTP_DIRECTORY="/srv/tftp"
TFTP_OPTIONS="--secure --create"
TFTP_ADDRESS=":69"

Next, I modified the permissions on the TFTP directory, and restarted the service. I set the ownership to my own userid, and allowed the TFTP service access via its group, as I also wanted to be able to easily scp files into and out of that same directory; I guess I could have simply added my userid to the tftp group instead:

sudo chown -R richard:tftp /srv/tftp
sudo service tftpd-hpa restart
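
At this point it’s worth a quick sanity check that the TFTP server is actually serving files. One simple way (assuming your curl build includes TFTP support, which most distribution builds do) is to drop a test file into the TFTP root on the home server and fetch it from another machine; in my setup the home server is 192.168.255.20:

echo "tftp test" > /srv/tftp/test.txt      # on the home server
curl -O tftp://192.168.255.20/test.txt     # on the laptop; downloads test.txt
cat test.txt                               # should print "tftp test"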

I went to the OpenWRT hardware page for the Home Hub v5, and then downloaded: 

lede-lantiq-xrx200-BTHOMEHUBV5A-installimage.bin and
lede-17.01.4-lantiq-xrx200-BTHOMEHUBV5A-squashfs-sysupgrade.bin

into the /srv/tftp/ folder on the home server. I then downloaded

lede-lantiq-bthomehubv5a_ram-u-boot.asc

to the Ubuntu laptop that I intended to connect to the router.

Now you need to open up the case of the Home Hub. It’s a difficult task, even for someone who has experience of these things. The designer clearly never intended for it to be opened, but there are some tutorials on YouTube that may help. Expect to break some of the clips on the sides of the unit, but don’t worry unduly, as these seem not to be essential. Of the four units I opened, I broke all the side clips on two, and none on the other two. All the units went back together again perfectly, so it’s clearly over-engineered. Go figure.

Once you have opened the case, you need to connect the FTDI cable to the UART on the motherboard. This involves soldering some wires from the FTDI cable to some extremely small solder pads. Even if you’re pretty experienced with a soldering iron, this is not a fun experience: the solder pads are tiny, and it’s very easy to damage the motherboard. This picture shows you what you are aiming to solder wires to. I destroyed the first one I attempted to convert, and I’ve been soldering electronics for decades …


  • Solder the FTDI Rx line to the Home Hub 5a Tx pad (R77)
  • Solder the FTDI Tx line to the Home Hub 5a Rx pad (R78)
  • Solder the FTDI Gnd to the Home Hub 5a Gnd (lower left of the 4 pins that run through the board on the top right)
  • Finally, you will need to ground the pad connected to BootSel2 (R45), but only momentarily while booting the router (ie, you don’t have to solder it, just hold a wire in place for a couple of seconds while turning on the router)

Now connect the serial-to-USB (FTDI) converter to a Linux machine. I couldn’t find a way to make this work with MacOS, because apparently the ascii-xfr utility needed to transfer the bootloader to the router isn’t available on MacOS, even via ‘brew’.

So on the Linux laptop, install picocom and minicom (which includes ascii-xfr):

sudo apt-get update
sudo apt-get install picocom minicom

Start picocom on the Linux laptop using the command:

picocom -b 115200 /dev/ttyUSB0 --send-cmd "ascii-xfr -s -n"

Turn on the Home Hub v5 with boot_sel2 held low (a couple of seconds only) and then wait until “CFG 04” shows up on the picocom serial console. This should take no more than a couple of seconds. If anything else happens, turn the Home Hub off, and try again. Check carefully that you really are connecting the boot_sel2 pad to one of the router’s ground points.

Once you have that “CFG 04” prompt in picocom, press Ctrl+a then Ctrl+s to trigger the send file function, and then input the full path and file name of the ascii bootloader (this is the lede-lantiq-bthomehubv5a_ram-u-boot.asc file that you downloaded to the laptop earlier). You’ll see lots of ‘#’ characters being printed to the picocom terminal while the file is downloaded to the router’s RAM.

After the file transfer is complete the image is booted and a command line prompt shows up. The default Home Hub v5 IP address and TFTP server addresses won’t work with my network (as they are in the wrong subnets), so we can change them, as follows:

set ipaddr 192.168.255.19
set serverip 192.168.255.20

You’ll need to change these to match your own configuration. Now we’re going to download and boot from a much larger image that is stored on the home server, accessed via the TFTP server:

tftpboot 0x81000000 lede-lantiq-xrx200-BTHOMEHUBV5A-installimage.bin; bootm 0x81000000

This will cause the Home Hub router to boot into a very basic OpenWRT/LEDE environment, running in RAM, that is designed just to help install the proper firmware into the router’s flash storage. Note that it takes a LONG time for the router to bring up its networking stack. Wait until you see some messages about “br-lan” come up on the picocom terminal, and then check the router’s IP address using “ifconfig”. If you get no output from that command, the network is still not up, so wait a bit longer. When the network does eventually come up, the router’s IP address will have changed again, so you will need to reset it to match your home network’s subnet. Use the command:

ifconfig br-lan 192.168.255.19
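
Before starting the backup, it’s also worth a quick check that the router can actually reach the home server (using the addresses from my setup):

ping -c 3 192.168.255.20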

You can now save the current BT firmware with nanddump. First, on the home server, run the following:

sudo apt-get install pv
ncat -l 60000 | pv -s 128m > hh5a-mtd4-nanddump

This will listen for the backup to be sent to it over the network, provide a progress indication, and save the backup into the file “hh5a-mtd4-nanddump”.

Then on the Home Hub 5a, take the backup, and send it over the network to the home server:

nanddump /dev/mtd4 | nc 192.168.255.20 60000

This will take approximately 20 minutes, as the router is running very slowly at this point. When this completes, transfer the version of OpenWRT/LEDE that we are going to install into the flash, to the router:

scp richard@192.168.255.20:/srv/tftp/lede-17.01.4-lantiq-xrx200-BTHOMEHUBV5A-squashfs-sysupgrade.bin /tmp/sysupgrade.bin

This will also transfer very slowly, taking around 4-5 minutes to download. I also found that it took a long time for scp to ask for confirmation of the SSH key fingerprint, and to request my password. Sometimes scp timed out on the home server, waiting for the router. Repeating the command invariably worked the second time.
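
Given how slow and flaky that transfer can be, it’s worth checking the image arrived intact before flashing it. One quick way is to compare checksums at both ends (the install image’s busybox should include md5sum; if not, at least compare the file sizes with ls -l):

md5sum /srv/tftp/lede-17.01.4-lantiq-xrx200-BTHOMEHUBV5A-squashfs-sysupgrade.bin   # on the home server
md5sum /tmp/sysupgrade.bin                                                         # on the router; the two sums should match exactly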

Once that firmware is transferred to the router you need to type:

prepare

on the router, and follow the instructions. Then to finally flash OpenWRT/LEDE to the router flash storage, enter the command:

sysupgrade /tmp/sysupgrade.bin

Once the router has been reflashed it will reboot automatically. If that results in you seeing the “CFG 04” prompt on the picocom terminal again then power cycle the router. It should then reboot into OpenWRT/LEDE. At that point you can turn off the router, disconnect the wires from the motherboard, and close up the case.

All you then need to do is configure the router to suit your needs, using the web interface built into it. Good luck!
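
If, like me, you just want these devices to act as plain wired-backhaul access points behind an existing router, the same configuration can also be sketched from the command line with uci instead of the web interface. This is only an outline: the addresses, SSID and passphrase below are examples, and the wireless section names may differ on your build (check with uci show wireless):

# Give the AP a static address on the existing LAN, pointing at the main router
uci set network.lan.ipaddr='192.168.255.21'
uci set network.lan.gateway='192.168.255.1'
uci set network.lan.dns='192.168.255.1'
# Enable the first radio and configure its first wireless interface
uci set wireless.radio0.disabled='0'
uci set wireless.@wifi-iface[0].ssid='MyHomeWiFi'
uci set wireless.@wifi-iface[0].encryption='psk2'
uci set wireless.@wifi-iface[0].key='a-long-passphrase'
uci commit
# The main router already provides DHCP, so stop these services on the AP
/etc/init.d/dnsmasq disable
/etc/init.d/odhcpd disable
reboot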

Creating media for the VW Discovery Pro II entertainment system

My new company car was delivered this week. It’s another VW Golf Mk7, but this time with the new Discovery Pro II entertainment system. This is a modular system that lets VW add different features to the entertainment system depending on what territory you live in, and how much money you are prepared to pay them for “options”.

The feature I use the most is part of the standard package; the media player. This allows the car to play digital music off various block devices (such as USB storage keys, SDCards and USB hard drives). However, in my old Golf, I never managed to get the media system to recognise the tags that are in all my MP3 media files, which prevented proper album art and track/album/artist information from being displayed. This time I was determined to do better.

It turns out that the new Discovery Pro II is much better at this than the original Composition Media system I had in the last Golf. But it’s still quirky. You need to get the tags “just right” for it to work.

For anyone trying to work out what “just right” is for an MP3 collection, let me help you out with the summary from my experiments with my collection. No more than 1,000 music files in a folder, and no more than 10,000 music files on any one device. MP3 files should be tagged with ID3v2.4.0 tags only. If your files also include ID3v1 tags (as many tagging programs add automatically) then the media system won’t read any of the tags. Using ID3v2.3.0 tags was very hit and miss, sometimes working, but usually not. APE tags (if you have them) don’t seem to cause any problems, so I think you can safely ignore them.

The best tagging program I’ve found to help get you to this nirvana is KID3. It’s free and open source, and available for Windows, Mac and Linux. Best of all, it comes as both a graphical editor and with a command-line interface, so you can call it from scripts. This allowed me to reprocess my entire music collection (some 15,000 tracks) to remove all the ID3v1 tags, and convert all my ID3v2.3 tags to ID3v2.4 from within a simple bash shell script. Took about 20 minutes to do the conversion, and most of the rest of the evening to work out which 10,000 tracks I’d put in the car on a 64GB SDcard!
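
Incidentally, if you want to check what tags a given file currently carries before converting anything, kid3-cli can show that too. Something like the following (the filename is just an example) should list the frames in each tag that is present:

kid3-cli -c 'get' SomeTrack.mp3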

The only drawback with KID3 is that the documentation is somewhat impenetrable. So for your information, I’ve attached the script I used on my Mac with GNU core utilities installed. It’s quick and dirty hacking, but worked fine for me. Note that the location of the music library is hard-coded, so you’ll need to change it to match what you have, and I assume you only have MP3 files in it.


#!/bin/bash

# Process all MP3 files below this node in the filesystem
find /Users/richard/projects/CarMusic -type f -iname '*.mp3' -printf '%h\0' | sort -zu |
while read -r -d $'\0' audio_dir
do
    # Change to the next directory and provide a progress indicator
    cd "$audio_dir"
    pwd
    # Select all audio files in the directory, create ID3v2.4 tags from any
    # existing metadata, then delete any ID3v1 tags, and save the changes
    kid3-cli -c 'select all' -c 'to24' -c 'remove 1' -c 'save'
done

Hopefully that will save someone else a lot of experimenting.

Dual boot Windows 10 and Ubuntu 15.10 Linux on HP Envy 13 D008NA

My eldest daughter has flown the nest and gone to university. For her first term, she took her trusty 7-year-old Acer Aspire netbook with her. With an unofficial memory expansion and a conversion to Ubuntu, it’s seen her through both her GCSEs and A-Levels, and is still going surprisingly well for its age. However, it’s really well past the time for it to be retired, and my daughter deserves something significantly more stylish and capable. She didn’t want an Apple, and wanted to keep Ubuntu as her primary OS. She also felt that keeping Windows available (as it would certainly be preloaded on anything we bought) made sense in case she needed it for anything specific.

So after a fair bit of investigation I bought her an HP Envy 13 model D008NA. This is a beautiful metal-bodied Ultrabook with the latest Intel Skylake Core i5 processor, 8GB of RAM and a 256GB SSD, preloaded with Windows 10 Home 64bit. A trial of the latest Ubuntu Linux (15.10) from a “live USB” works well, with only the built-in fingerprint reader seeming to be unsupported, so getting it running Linux and Windows 10 in a dual-boot configuration certainly looks possible, though there are a few roadblocks to overcome:

  • Microsoft no longer supply a Windows Certificate of Authenticity with OEM (preloaded) computers, so there is no visible indication of the Product Key for the preloaded Windows 10
  • HP don’t supply install media (only the ability to recover to the “factory state”)
  • The HP recovery process uses hidden Windows partitions on the laptop SSD
  • This is designed and shipped as a UEFI-only system

But we like a challenge! There are two approaches to this problem; either shrink down the existing pre-installed Windows 10, and install Linux next to it, or start from scratch, wipe the SSD, and install new copies of both operating systems. It turns out that HP (in common with most suppliers of consumer IT) ship their preloaded Windows 10 with quite a lot of bundled “trial” software that we don’t want. So my ideal endpoint is a fresh install of bare Windows 10 Home 64 bit, with a fresh install of Ubuntu Linux 15.10 64 bit, in such a way that both coexist happily, and are easy to keep up to date (especially as Ubuntu 16.04LTS isn’t far away now). So, that means finding Windows 10 install media and the product key. It also means finding a way to recover the laptop back to factory state (in case everything goes badly wrong) that doesn’t depend on the internal SSD, as I’ll be overwriting that. And then there are the usual problems of getting Windows to coexist with another operating system, complicated by the need to do everything within a UEFI context. For me that means having Grub be the system Boot Manager, which will then start either Linux or Windows (actually, it will chain to the Windows Boot Manager, which will do nothing but seamlessly start the single instance of Windows).

Getting hold of the Windows install media turns out to be trivial. Microsoft now make them freely available for download. Thankfully I can also get the full version, rather than the European-legislation compliant version that needs post-install updates to provide full multimedia functionality. Turning that ISO into a bootable USB key was harder than expected, but eventually I found this tool called Rufus. Since this is a UEFI-based system, it’s essential to get the options right, and because of a “glitch” in the Rufus tool’s GUI, you also need to get them right in the right order! The trick for me was to specify the USB device, then select the ISO, then specify the partition scheme (which needs to be GPT only), then the filesystem (NTFS). That (and only that!) combination produces a USB memory key that can be booted by the HP.

The product key was a concern, but after a lot of reading, it seems that the way this now works is that the manufacturers burn the Product Key for the preloaded Windows into the NVRAM of the motherboard at manufacture time (for interest the standard defined by Microsoft is available here). The good news is that NeoSmart Technologies provide a free tool to read that information out. So now I have the Product Key to activate Windows 10 Home 64bit, although if I understand the process correctly, I shouldn’t need it, as Windows should be able to detect and use the license key that is embedded in the hardware without any intervention on my part. See posts 10 and 11 of this thread for some really useful background to how Windows licensing and activation really works.

Next problem was how to get factory recovery working without access to the SSD. As it happens, we have warranty cover that would get us out of that hole if the worst came to the very worst. But I’d rather not have to call on HP (and the delays that would entail) if I need to just reimage the laptop. Fortunately, a little Googling revealed this HP Support Page that explains the options available. It turns out that it is possible to create a set of external recovery media, using a preloaded application, that is buried somewhere in the Windows 10 menu system (or possibly behind one of those strange, rotating blocks that Windows 8/8.1/10 seem so keen on). Anyway, 20 minutes with a 32GB USB memory stick, and we have a bootable recovery solution that ought to be able to reimage the whole laptop back to “factory fresh”. Result.

So now it’s time to take a firm grip of the keyboard, and start changing things. In particular, I need to check that I can restore the laptop back to its factory state. If that doesn’t work, then now is the time to invoke the HP warranty process! Fortunately, booting from the recovery USB key works perfectly, and restores the laptop back to its factory-fresh state in about 40 minutes.

With that reassurance, I booted into the Live Ubuntu 15.10 environment, and using gparted I wiped the SSD. I then repartitioned it (using the GPT scheme) as:

  1. 250MB, FAT32, “EFIBoot”, flags=esp/boot
  2. 80GB, unallocated (eventually for Windows 10)
  3. 4GB, Linux-swap, “Swap”
  4. 18GB, ext4, “Root”
  5. 136GB, ext4, “Home”
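
For anyone who prefers the command line to gparted, a rough equivalent of that layout looks something like the following. This is only a sketch: it assumes the SSD shows up as /dev/sda (check with lsblk first), and the offsets are approximate.

sudo parted /dev/sda -- mklabel gpt
sudo parted /dev/sda -- mkpart EFIBoot fat32 1MiB 251MiB
sudo parted /dev/sda -- set 1 esp on
# Leave roughly 80GB unallocated here for the Windows installer, then:
sudo parted /dev/sda -- mkpart Swap linux-swap 81GiB 85GiB
sudo parted /dev/sda -- mkpart Root ext4 85GiB 103GiB
sudo parted /dev/sda -- mkpart Home ext4 103GiB 100%
sudo mkfs.vfat -F 32 -n EFIBOOT /dev/sda1
sudo mkswap /dev/sda2
sudo mkfs.ext4 -L Root /dev/sda3
sudo mkfs.ext4 -L Home /dev/sda4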

When an OS is installed in a UEFI system it adds an entry to the EFI Boot Manager’s list of installed OSes, telling the EFI Boot Manager where its EFI file is, and getting a unique reference number. I wanted to remove the entry for the pre-installed Windows 10 from the UEFI system (it’s stored in NVRAM, not on the SSD) to minimise the opportunity for later confusion. To do that, I installed efibootmgr by running the command sudo apt-get install efibootmgr, and then ran it with sudo efibootmgr. I then deleted the entry for “Windows Boot Manager”. On my system that was entry 0000, so I did that by running the command sudo efibootmgr -b 0000 -B. I was EXTREMELY careful not to remove any other entries, as I could easily prevent the system from booting by deleting the wrong entries.
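
For reference, those efibootmgr steps collected together (the 0000 entry number is what I saw on my machine; yours will almost certainly differ, so check the listing carefully before deleting anything):

sudo apt-get install efibootmgr
sudo efibootmgr                 # list the current EFI boot entries
sudo efibootmgr -b 0000 -B      # delete entry Boot0000 ("Windows Boot Manager" on my system)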

And then I installed Ubuntu 15.10 onto the SSD, using the Swap, Root and Home partitions. Confusingly, the installer asks where the bootloader should be installed, but of course, as this is UEFI, the EFI boot code MUST be installed into the EFI partition (which is marked by the ESP/Boot flags). In practice, the Ubuntu installer knows this, and ignores whatever you tell it when installing in UEFI mode, but the dialogue is a little confusing.

Having tested that the installed Ubuntu system works, the next step is to install Windows 10. My first attempt to install Windows 10 failed, as the SecureBoot setting in the BIOS prevented the bootable Windows 10 installer from starting up. So I temporarily switched that off, booted the Windows 10 USB installer, and pointed it at the unallocated space that I left on the SSD. Windows then partitions that space into two partitions; a small one reserved for its own private use called the MSR, and the rest for the main Windows environment, ie the “C” drive. Unfortunately Windows insists on rebooting several times during the install process, and at this stage in the process it’s highly likely that those reboots will cause Ubuntu to start up, not continue the install of Windows. My approach was to “catch” each reboot by pressing ESC then F9, and then manually guide the reboot back to Windows again by navigating the UEFI firmware back to the Windows EFI bootmanager, which is at /efi/microsoft/boot/bootmgfw.efi in the EFI partition. As expected, Windows 10 never asked for the product key, and self-activated as soon as it connected to the Internet – which was remarkably easy.

Once Windows was actually installed, I followed advice I found on the Internet, and turned off the Windows “Fast Startup” feature while I did the final configuration of Grub. This is a Microsoft feature that caches system-state across restarts (similar to hibernating) so I’m not sure how this could interfere with reconfiguring Grub, but I figured better to be safe than sorry. I did this by:

  1. Open Control Panel (Win+X->Control Panel from the desktop in Windows 8+)
  2. Navigate to Power Options
  3. Click “Choose what the power button does”
  4. Click “Change settings that are currently unavailable”
  5. Uncheck the box that says “Turn on fast startup (Recommended)” at the bottom.

You should be able to re-enable Fast Startup after reconfiguring GRUB, but I’ve not bothered, as I’ve not noticed any significant change in boot performance with it turned off on this machine (boot times are incredibly fast anyway). At this point I shut down Windows, and rebooted into Ubuntu. There I reconfigured Grub, changing the file /etc/default/grub to make Grub remember the last OS I booted, and make it the default for the next boot, with a 15 second timeout while there is a visible OS Selection menu on the screen:

GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
# GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=15
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""

I then ran the command sudo update-grub to make those changes live, and to allow Grub to locate the Microsoft Boot Manager, and add it to its menu entries. This means that now when the system boots, Grub will get default control when the SSD is selected as the boot device. It then gives you the option to choose between Ubuntu or Windows 10, and will default to booting whichever was chosen last time. The advantage of this is that when Windows 10 runs its automatic update sequences it expects to be able to reboot itself multiple times, and this Grub configuration will accommodate that requirement without someone needing to sit in front of the machine and babysit Windows through its updates.

Finally, I booted back into Windows, and downloaded the HP Support Assistant, which is supposed to manage updates for all the drivers on the laptop automatically. My experience was that it only downloaded and installed about 4 or 5 of the 15 or so required drivers, so in the end I manually downloaded and installed all the HP drivers for this machine. I’m hoping (but not hopeful) that now the right drivers are all in place, the HP Support Assistant may manage them all properly from now on.

Now I just have to transfer my daughter’s data over to the Linux partition, and install any required applications; an easy job.

Suppressing duplicate email reports from smartd

One of the precautions I take to ensure that my home server keeps steadily ticking along is to monitor the health of the hard drives with smartmontools. This uses the SMART health monitoring interfaces built into almost every modern hard drive to predict if the drive is starting to exhibit problems that might lead to data loss, or even complete drive failure. To further improve on this, I run the monitoring system as a daemon, and have it run some simple tests each night, and an extensive test (lasting several hours) each week.

And this is great. The system will email me if it spots any problems, giving me the chance to either fix them, or (worst case) order a new hard drive before the old one finally dies. Because generally, when smartd spots a problem, it’s a sign of the beginning of the end for that drive.

But not always. My current hard drive has been reporting the same error to me for over 9 months now, patiently emailing me the same email every night:

This message was generated by the smartd daemon running on:
host name: house
DNS domain: xxxxxxx.com
The following warning/error was logged by the smartd daemon:
Device: /dev/sda [SAT], 3 Offline uncorrectable sectors
Device info:
WDC WD20EFRX-68AX9N0, S/N:WD-WMC30043xxxx, WWN:5-0014ee-0ae19da81, FW:80.00A80, 2.00 TB
For details see host’s SYSLOG.
You can also use the smartctl utility for further investigation.
The original message about this issue was sent at Sun Jun 29 08:07:30 2014 BST
Another message will be sent in 24 hours if the problem persists.

No matter what I try, I cannot get the drive to resolve the problem, but it’s not getting any worse, and the overall health of the drive is reported as “OK”. So actually, unless the system spots a new error, I just want it to stop emailing me, because otherwise I run the risk of ignoring the server that cried wolf …

So here is the way to get the smartd daemon as installed under Ubuntu Server 14.04 LTS, to not report the same SMART error over and over again:

  1. cd /usr/share/smartmontools
  2. sudo cp smartd-runner smartd-runner.backup

Now, open up smartd-runner in a text editor like vi or gedit, (sudo vi smartd-runner) and make it look like this:


#!/bin/bash -e

laststate="/var/run/smartd.saved.error.state"
# Generate a temporary filename for new error information
tmp=$(tempfile)
# Copy the new error information into the file
cat >$tmp

# Test if the new error information is different to the saved
# error information from our last run.
if ! cmp -s "$tmp" "$laststate"
then
    # Save the "new" latest error information for next time
    cp $tmp $laststate
    # Call the email routine
    run-parts --report --lsbsysinit --arg=$tmp --arg="$1" \
        --arg="$2" --arg="$3" -- /etc/smartmontools/run.d
fi
# Delete the temporary copy of the error information
rm -f $tmp

Save the file. The system will take one more run of the smartd daemon to “prime” the state into the system, but thereafter the system will not send you the same error twice in a row. Of course, this does mean that you now need to pay attention when the system does email you … or you could modify my code here, so it will send a duplicate “reminder” email again (say) every week, or month, or whatever works for you.
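
As a starting point for that, here is a sketch of one way to do a weekly reminder: resend whenever the saved state is unchanged but more than seven days old. It’s untested by me, and relies on the fact that the cp in the script refreshes the timestamp of the saved state file each time an email actually goes out:

# Email if the error text has changed, OR if it is unchanged but our saved
# copy is more than 7 days old (find prints the filename in that case)
if ! cmp -s "$tmp" "$laststate" || [ -n "$(find "$laststate" -mtime +7 2>/dev/null)" ]
then
    # Copying refreshes the file's timestamp, restarting the 7-day clock
    cp "$tmp" "$laststate"
    run-parts --report --lsbsysinit --arg="$tmp" --arg="$1" \
        --arg="$2" --arg="$3" -- /etc/smartmontools/run.d
fi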

Overcoming Mediatomb’s disabled Javascript import filters in Ubuntu 14.04 LTS

Nearly three years ago I wrote about how I had managed to re-enable the Javascript import filters for Mediatomb, that allowed one to control how Mediatomb imported (and represented) files in its database. Lots of people have followed the same approach, but when I moved to Ubuntu 14.04 LTS, I found it much harder than before because of the addition of various fixes, and the increasing age of the version of Spidermonkey that Mediatomb was tied to. I decided to follow a new path, which I document here.

The basic approach is to rebuild the shipped package of Mediatomb, but rather than trying to enable the Javascript support, we’ll update the in-built default C++ importer so it does what we need. This gets us away from needing to enable an ancient version of Spidermonkey, or having to rollback several fixes that have been made to Mediatomb by the Ubuntu maintainers. Here’s what you need to do:

  1. cd;mkdir temp;cd temp
  2. sudo apt-get install build-essential, to install the tools you will need to build packages on Ubuntu
  3. sudo apt-get build-dep mediatomb, to install all the dependencies needed for mediatomb to compile
  4. sudo apt-get source mediatomb, to get the source code for mediatomb, and unpack it into a convenient subdirectory
  5. cd mediatomb-0.12.1/src/layout, to get into the directory where we are going to make our changes
  6. cp fallback_layout.cc fallback_layout.cc.backup, to take a backup, just in case something should go wrong

In Mediatomb you define a series of directories (& optionally subdirectories) that should be scanned for media that can be included into Mediatomb’s database. When there is no Javascript importer, a routine in this file gets called for each found piece of media that is “video” in nature (there are also similar routines for audio and pictures in this same file).

If you open up fallback_layout.cc in a text editor (vi or gedit, or similar) and scroll down to (approximately) line 86 you will find a routine called “FallbackLayout::addVideo”. This gets passed an object that describes the video file that the server is trying to import into its database, along with (I think) the base directory from the “scan” that made Mediatomb include this object (file). In practice, I never saw that base directory path set to anything, but that may just be the way I have Mediatomb configured.

So, what I do is alter this routine, to:

  1. Tidy up the name of the file as reported in the Mediatomb database
  2. Build a proper expanding tree structure for my database, based on the directory structure where I store my videos

Hopefully this commented version of my FallbackLayout::addVideo import routine will make clear how this works. Much of the cleverness is already in the support functions provided by Mediatomb, especially addContainerChain(), and all I need to do is some fairly simple text manipulation. This should work for you too; you could simply replace the routine in your file with this one. I think that the only place you would need to make a change is the line dir = dir.substring(17); where you need to change the 17 to however many leading characters of your directory structure you want to remove from the Mediatomb GUI.


void FallbackLayout::addVideo(zmm::Ref<CdsObject> obj, String rootpath) {

    Ref<StringConverter> f2i = StringConverter::f2i();
    String dir;
    String title;

    // Grab the title (its filename) from the object obj, and tidy it up, converting
    // underscores to spaces, and removing the filetype
    title = obj->getTitle();
    title = title.replaceChar('_', ' ');
    title = title.substring(0, title.rindex('.') );

    // Write the result back as the object's new title
    obj->setTitle(title);

    // Get location (eg "/srv/media/video/Films/12/James_Bond/Living Daylights.mp4" )
    dir = f2i->convert(obj->getLocation());
    // Remove leading directories (eg "/srv/media/video/" -> "Films/12/James_Bond/Living Daylights.mp4")
    dir = dir.substring(17);
    // Remove filename and trailing slash (eg "/Living Daylights.mp4" -> "Films/12/James_Bond")
    dir = dir.substring(0, dir.rindex('/'));
    // Convert underscores to spaces -> "Films/12/James Bond"
    dir = dir.replaceChar('_', ' ');

    if (string_ok(dir))
    {
        // Iterate through the string, building the chain from the constituent parts,
        // escaping the text parts as necessary for Mediatomb:
        // ie: esc("Films")+"/"+esc("12")+"/"+esc("James Bond")
        //
        String container, Chain;
        int p;

        while ( (p = dir.index(0, '/')) > 0 ) {
            container = dir.substring(0,p);         // Grab the topmost directory name (container)
            Chain = Chain + "/" + esc(container);   // Add it to the chain
            dir = dir.substring(p+1);               // Cut off the container (and trailing slash)
        }
        Chain = Chain + "/" + esc(dir);             // Add final directory to chain

        // Add the new chain (addContainerChain takes care of duplication / overlaps etc)
        int id = ContentManager::getInstance()->addContainerChain(_("/Video") + Chain );

        // Now add the object (ie the file) to the chain, using its tidied up name etc
        if (obj->getID() != INVALID_OBJECT_ID)
        {
            obj->setRefID(obj->getID());
            add(obj, id);
        }
        else
        {
            add(obj, id);
            obj->setRefID(obj->getID());
        }
    }
}

And now we need to compile, install and test it. Though depending on what changes you need to make for your environment, and how rusty your C++ skills are, you may need to iterate around the edit / compile / test phases a few times until it all compiles and works cleanly.

  1. cd ~/temp/mediatomb-0.12.1 and then sudo ./configure. Lots of content will scroll past, but at the end there should be a summary; check that all your dependencies are satisfied.
  2. Start the compilation with sudo fakeroot debian/rules binary. Lots of compilation messages should scroll past.
  3. When it stops, you should have three .deb files in ~/temp. You can install them with sudo dpkg -i mediatomb*.deb

Finally, switch to root (sudo su) and then issue the command echo packagename hold | dpkg --set-selections for each of mediatomb, mediatomb-common and mediatomb-daemon. Then drop back to your user by entering control-D. This will prevent your customised packages being overwritten as part of the normal update processes (they will be “held”.)
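
Spelled out in full, that last step looks like this:

sudo su
echo mediatomb hold | dpkg --set-selections
echo mediatomb-common hold | dpkg --set-selections
echo mediatomb-daemon hold | dpkg --set-selections
exit    # or press control-D to drop back to your own user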

Virtual Windows

There is a small but vocal community in IBM who (with the company’s support) use Linux as their primary computing environment. It’s not a particularly easy path to follow, because although office productivity tooling under Linux is actually very good indeed, the ability to interoperate with Microsoft’s proprietary “Office” products is still a major limiting factor. Since the use of Office is pretty much universal in business, you need to either avoid that interoperability issue, or find a way to overcome it.

The open source community have been working hard to make this easier, and LibreOffice in particular can now do things that only 5 years ago would have seemed impossible. For many of my colleagues who are in more technically focused roles, where the need for perfect interoperability with Office is less of an issue, this gives a “good enough” solution. But for some of us, who are in more customer-focused roles, it’s still a problem.

IBM helps us by providing a pre-packaged virtual copy of Windows that we can run under Linux, onto which we can add the MS Office suite if our roles require it. Probably 95% of the time, LibreOffice running natively under Linux is good enough for me. But if it isn’t I can always fire up the real thing, without worrying about all the instability issues of running native Windows. Did I mention my laptop’s uptime is a shade over 62 days at the moment?!

Unfortunately, Windows XP (that I run Office on) has nearly reached its absolutely final, never to be extended again, end of support date. So I need to move up to Windows 7. This has advantages and disadvantages for me; it has better functionality, better application support, and proper support from Microsoft. But at the cost of being much more resource-hungry. Although IBM has been pressing us to upgrade for some time, I’ve been holding out to avoid the pain.

Pain? Well, most people in IBM use KVM/QEMU as their hypervisor, and so IBM packages our centrally licensed copies of Windows into qcow2 images. But for mostly historical reasons I’m running VirtualBox on my Ubuntu-based system, and whilst changing to KVM wouldn’t really be a problem, I like some of the management facilities (such as snapshots) that VirtualBox provides. And VirtualBox doesn’t support qcow2 images.

So it’s necessary to do some converting:

  1. First, install qemu-img: sudo apt-get install qemu-utils
  2. Next, convert the image: qemu-img convert -f qcow2 -O vdi <source.qcow2> <destination.vdi>

Unfortunately, this expands the VM size considerably, as it doesn’t preserve the sparseness of the original qcow2 format. It looks like it could be possible to do a two step conversion (qcow2 -> raw -> vdi) using a mix of qemu and VirtualBox tooling to preserve that, but I wasn’t sufficiently concerned to investigate further. Storage really is cheap these days.
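
For anyone who does care about keeping the image compact, the two-step route mentioned above would look something like this (untested by me, and the filenames are just examples):

# qcow2 -> raw with qemu-img, then raw -> VDI using VirtualBox's own tooling,
# which writes a dynamically allocated VDI
qemu-img convert -f qcow2 -O raw windows7.qcow2 windows7.raw
VBoxManage convertfromraw windows7.raw windows7.vdi --format VDI
# Remove the (potentially very large) intermediate raw image afterwards
rm windows7.raw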

And at this point I can run my IBM-supplied Windows 7 VM under VirtualBox.

But I also discovered a couple of interesting things while I was doing this.

  • Firstly, to my surprise it’s possible to mount vdi or vhd images so Linux can read and write directly to them. This is achieved by way of the FUSE filesystem, and a plugin called vbfuse. This used to be packaged for Ubuntu as “virtualbox-fuse”, but at the moment installing it takes a bit of hacking. Once installed however, it works beautifully, and allows direct access to the filesystems of (obviously stopped!) virtual machines.
  • Secondly, I discovered that there is a Linux command line tool called “chntpw” that lets you alter the Windows password database directly. Using this it is possible to promote normal users to administrators, and reset or set their passwords.

Together, these allow you to reset any forgotten password within a VM, which can be most useful if like me, you rarely use it, and are forced to regularly change your passwords with non-trivial complexity requirements. More information can be found here, and the code itself is in the standard Ubuntu repositories.

Rebooting an ASROCK Q1900 motherboard from Linux

Some time ago I replaced my original home server with a new one based on an Intel J1900 processor on an ASROCK Q1900 motherboard. This has slightly reduced the power consumption of my home server (which runs 24×7) but, more importantly, massively boosted its performance. However, it’s left me with one minor problem: the system always hangs on reboot, preventing me from remotely managing the system.

I finally got around to looking at it this evening, and this post provided a wealth of information on how to go about resolving the issue. In my case, the default reboot method (using the keyboard controller) wasn’t working. Switching to the ACPI method immediately resolved the problem for me, on Ubuntu 14.04.1 LTS.

To implement the fix, open a terminal, and edit the file /etc/default/grub, making the line:
GRUB_CMDLINE_LINUX=""
read
GRUB_CMDLINE_LINUX="reboot=acpi"
Then (again in the terminal) run the command sudo update-grub.

Since the setting takes effect the next time the OS starts, you’ll probably need to restart the server manually one more time before being able to remotely reboot the machine successfully.
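
Once that manual restart has happened, you can confirm the new setting is active by checking the kernel command line:

cat /proc/cmdline    # should now include reboot=acpi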