Dual boot Windows 10 and Ubuntu 15.10 Linux on HP Envy 13 D008NA

My eldest daughter has flown the nest and gone to university. For her first term, she took her trusty 7 year old Acer Aspire netbook with her. With an unofficial memory expansion and a conversion to Ubuntu, it’s seen her through both her GCSEs and A-Levels, and is still going surprisingly well for its age. However, it’s well past time for it to be retired, and my daughter deserves something significantly more stylish and capable. She didn’t want an Apple, and wanted to keep Ubuntu as her primary OS. She also felt that keeping Windows available (as it would certainly be preloaded on anything we bought) made sense in case she needed it for anything specific.

So after a fair bit of investigation I bought her an HP Envy 13 model D008NA. This is a beautiful metal-bodied Ultrabook with the latest Intel Skylake Core i5 processor, 8GB of RAM and a 256GB SSD, preloaded with Windows 10 Home 64bit. A trial of the latest Ubuntu Linux (15.10) from a “live USB” works well, with only the built-in fingerprint reader seeming to be unsupported, so getting it running Linux and Windows 10 in a dual-boot configuration certainly looks possible, though there are a few roadblocks to overcome:

  • Microsoft no longer supply a Windows Certificate of Authenticity with OEM (preloaded) computers, so there is no visible indication of the Product Key for the preloaded Windows 10
  • HP don’t supply install media (only the ability to recover to the “factory state”)
  • The HP recovery process uses hidden Windows partitions on the laptop SSD
  • This is designed and shipped as a UEFI-only system

But we like a challenge! There are two approaches to this problem: either shrink down the existing pre-installed Windows 10, and install Linux next to it, or start from scratch, wipe the SSD, and install new copies of both operating systems. It turns out that HP (in common with most suppliers of consumer IT) ship their preloaded Windows 10 with quite a lot of bundled “trial” software that we don’t want. So my ideal endpoint is a fresh install of bare Windows 10 Home 64 bit, with a fresh install of Ubuntu Linux 15.10 64 bit, in such a way that both coexist happily, and are easy to keep up to date (especially as Ubuntu 16.04 LTS isn’t far away now). So, that means finding Windows 10 install media and the product key. It also means finding a way to recover the laptop back to factory state (in case everything goes badly wrong) that doesn’t depend on the internal SSD, as I’ll be overwriting that. And then there are the usual problems of getting Windows to coexist with another operating system, complicated by the need to do everything within a UEFI context. For me that means having Grub be the system Boot Manager, which will then start either Linux or Windows (actually, it will chain to the Windows Boot Manager, which will do nothing but seamlessly start the single instance of Windows).

Getting hold of the Windows install media turns out to be trivial. Microsoft now make them freely available for download. Thankfully I can also get the full version, rather than the European-legislation compliant version that needs post-install updates to provide full multimedia functionality. Turning that ISO into a bootable USB key was harder than expected, but eventually I found this tool called Rufus. Since this is a UEFI-based system, it’s essential to get the options right, and because of a “glitch” in the Rufus tool’s GUI, you also need to get them right in the right order! The trick for me was to specify the USB device, then select the ISO, then specify the partition scheme (which needs to be GPT only), then the filesystem (NTFS). That (and only that!) combination produces a USB memory key that can be booted by the HP.

The product key was a concern, but after a lot of reading, it seems that the way this now works is that the manufacturers burn the Product Key for the preloaded Windows into the NVRAM of the motherboard at manufacture time (for interest the standard defined by Microsoft is available here). The good news is that NeoSmart Technologies provide a free tool to read that information out. So now I have the Product Key to activate Windows 10 Home 64bit, although if I understand the process correctly, I shouldn’t need it, as Windows should be able to detect and use the license key that is embedded in the hardware without any intervention on my part. See posts 10 and 11 of this thread for some really useful background to how Windows licensing and activation really works.
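For the curious, on Linux the firmware-embedded key can also be read directly from the ACPI MSDM table (/sys/firmware/acpi/tables/MSDM), which consists of a 36-byte ACPI table header, a 20-byte licensing-data header, and then the 29-character key itself. The sketch below builds a synthetic copy of such a table just to illustrate that layout; the filler bytes and the key are made up for the demonstration:

```shell
# Assemble a fake MSDM table: 36 filler bytes of ACPI header, 20 filler
# bytes of licensing-data header, then a (made-up) 29-character key.
key="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
demo=$(mktemp)
printf '%036d' 0  > "$demo"     # stand-in for the ACPI table header
printf '%020d' 0 >> "$demo"     # stand-in for the licensing-data header
printf '%s' "$key" >> "$demo"

# The key occupies the last 29 bytes of the table
tail -c 29 "$demo"              # prints XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
```

On a real machine you would point tail at /sys/firmware/acpi/tables/MSDM (as root) instead of the demo file.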

Next problem was how to get factory recovery working without access to the SSD. As it happens, we have warranty cover that would get us out of that hole if the worst came to the very worst. But I’d rather not have to call on HP (and the delays that would entail) if I need to just reimage the laptop. Fortunately, a little Googling revealed this HP Support Page that explains the options available. It turns out that it is possible to create a set of external recovery media, using a preloaded application, that is buried somewhere in the Windows 10 menu system (or possibly behind one of those strange, rotating blocks that Windows 8/8.1/10 seem so keen on). Anyway, 20 minutes with a 32GB USB memory stick, and we have a bootable recovery solution that ought to be able to reimage the whole laptop back to “factory fresh”. Result.

So now it’s time to take a firm grip of the keyboard, and start changing things. In particular, I need to check that I can restore the laptop back to its factory state. If that doesn’t work, then now is the time to invoke the HP warranty process! Fortunately, booting from the recovery USB key works perfectly, and restores the laptop back to its factory-fresh state in about 40 minutes.

With that reassurance, I booted into the Live Ubuntu 15.10 environment, and using gparted I wiped the SSD. I then repartitioned it (using the GPT scheme) as:

  1. 250MB, FAT32, “EFIBoot”, flags=esp/boot
  2. 80GB, unallocated (eventually for Windows 10)
  3. 4GB, Linux-swap, “Swap”
  4. 18GB, ext4, “Root”
  5. 136GB, ext4, “Home”

When an OS is installed in a UEFI system it adds an entry to the EFI Boot Manager’s list of installed OSes, telling the EFI Boot Manager where its EFI file is, and getting a unique reference number. I wanted to remove the entry for the pre-installed Windows 10 from the UEFI system (it’s stored in NVRAM, not on the SSD) to minimise the opportunity for later confusion. To do that, I installed efibootmgr by running the command sudo apt-get install efibootmgr, and then ran it with sudo efibootmgr. I then deleted the entry for “Windows Boot Manager”. On my system that was entry 0000, so I did that by running the command sudo efibootmgr -b 0000 -B. I was EXTREMELY careful not to remove any other entries, as I could easily prevent the system from booting by deleting the wrong entries.
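If you’re unsure which entry number to pass to efibootmgr, you can pick it out of the listing mechanically. The listing below is a simulated example (entry numbers and labels vary from machine to machine); the sed pattern extracts the four-digit number for the Windows entry:

```shell
# Simulated `efibootmgr` output; on a real machine you would capture it
# with: listing=$(efibootmgr)
listing='BootCurrent: 0001
BootOrder: 0000,0001
Boot0000* Windows Boot Manager
Boot0001* ubuntu'

# Extract the entry number for "Windows Boot Manager"
entry=$(printf '%s\n' "$listing" | \
    sed -n 's/^Boot\([0-9A-Fa-f]\{4\}\)\* Windows Boot Manager$/\1/p')
echo "$entry"    # prints 0000
```

That number is what would then be handed to sudo efibootmgr -b "$entry" -B.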

And then I installed Ubuntu 15.10 onto the SSD, using the Swap, Root and Home partitions. Confusingly, the installer asks where the bootloader should be installed, but of course, as this is UEFI, the EFI boot code MUST be installed into the EFI partition (which is marked by the ESP/Boot flags). In practice, the Ubuntu installer knows this, and ignores whatever you tell it when installing in UEFI mode, but the dialogue is a little confusing.

Having tested that the installed Ubuntu system works, the next step is to install Windows 10. My first attempt to install Windows 10 failed, as the SecureBoot setting in the BIOS prevented the bootable Windows 10 installer from starting up. So I temporarily switched that off, booted the Windows 10 USB installer, and pointed it at the unallocated space that I left on the SSD. Windows then partitions that space into two partitions; a small one reserved for its own private use called the MSR, and the rest for the main Windows environment, ie the “C” drive. Unfortunately Windows insists on rebooting several times during the install process, and at this stage in the process it’s highly likely that those reboots will cause Ubuntu to start up, not continue the install of Windows. My approach was to “catch” each reboot by pressing ESC then F9, and then manually guide the reboot back to Windows again by navigating the UEFI firmware back to the Windows EFI bootmanager, which is at /efi/microsoft/boot/bootmgfw.efi in the EFI partition. As expected, Windows 10 never asked for the product key, and self-activated as soon as it connected to the Internet – which was remarkably easy.

Once Windows was actually installed, I followed advice I found on the Internet, and turned off the Windows “Fast Startup” feature while I did the final configuration of Grub. This is a Microsoft feature that caches system-state across restarts (similar to hibernating) so I’m not sure how this could interfere with reconfiguring Grub, but I figured better to be safe than sorry. I did this by:

  1. Open Control Panel (Win+X->Control Panel from the desktop in Windows 8+)
  2. Navigate to Power Options
  3. Click “Choose what the power button does”
  4. Click “Change settings that are currently unavailable”
  5. Uncheck the box that says “Turn on fast startup (Recommended)” at the bottom.

You should be able to re-enable Fast Startup after reconfiguring GRUB, but I’ve not bothered, as I’ve not noticed any significant change in boot performance with it turned off on this machine (boot times are incredibly fast anyway). At this point I shut down Windows, and rebooted into Ubuntu. There I reconfigured Grub, changing the file /etc/default/grub to make Grub remember the last OS I booted, and make it the default for the next boot, with a 15 second timeout while there is a visible OS Selection menu on the screen:

GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
GRUB_TIMEOUT=15

I then ran the command sudo update-grub to make those changes live, and to allow Grub to locate the Microsoft Boot Manager, and add it to its menu entries. This means that now when the system boots, Grub will get default control when the SSD is selected as the boot device. It then gives you the option to choose between Ubuntu or Windows 10, and will default to booting whichever was chosen last time. The advantage of this is that when Windows 10 runs its automatic updates sequences it expects to be able to reboot itself multiple times, and this Grub configuration will accommodate that requirement without someone needing to sit in front of the machine and babysit Windows through its updates.

Finally, I booted back into Windows, and downloaded the HP Support Assistant, which is supposed to manage updates for all the drivers on the laptop automatically. My experience was that it only downloaded and installed about 4 or 5 of the 15 or so required drivers, so in the end I manually downloaded and installed all the HP drivers for this machine. I’m hoping (but not hopeful) that now the right drivers are all in place, the HP Support Assistant may manage them all properly from now on.

Now I just have to transfer my daughter’s data over to the Linux partition, and install any required applications; an easy job.

Suppressing duplicate email reports from smartd

One of the precautions I take to ensure that my home server keeps steadily ticking along is to monitor the health of the hard drives with smartmontools. This uses the SMART health monitoring interfaces built into almost every modern hard drive to predict if the drive is starting to exhibit problems that might lead to data loss, or even complete drive failure. To further improve on this, I run the monitoring system as a daemon, and have it run some simple tests each night, and an extensive test (lasting several hours) each week.

And this is great. The system will email me if it spots any problems, giving me the chance to either fix them, or (worst case) order a new hard drive before the old one finally dies. Because generally, when smartd spots a problem, it’s a sign of the beginning of the end for that drive.

But not always. My current hard drive has been reporting the same error to me for over 9 months now, patiently emailing me the same email every night:

This message was generated by the smartd daemon running on:
host name: house
DNS domain: xxxxxxx.com
The following warning/error was logged by the smartd daemon:
Device: /dev/sda [SAT], 3 Offline uncorrectable sectors
Device info:
WDC WD20EFRX-68AX9N0, S/N:WD-WMC30043xxxx, WWN:5-0014ee-0ae19da81, FW:80.00A80, 2.00 TB
For details see host’s SYSLOG.
You can also use the smartctl utility for further investigation.
The original message about this issue was sent at Sun Jun 29 08:07:30 2014 BST
Another message will be sent in 24 hours if the problem persists.

No matter what I try, I cannot get the drive to resolve the problem, but it’s not getting any worse, and the overall health of the drive is reported as “OK”. So actually, unless the system spots a new error, I just want it to stop emailing me, because otherwise I run the risk of ignoring the server that cried wolf …

So here is the way to get the smartd daemon, as installed under Ubuntu Server 14.04 LTS, to not report the same SMART error over and over again:

  1. cd /usr/share/smartmontools
  2. sudo cp smartd-runner smartd-runner.backup

Now, open up smartd-runner in a text editor like vi or gedit, (sudo vi smartd-runner) and make it look like this:

#!/bin/bash -e

# Where we remember the error information from our last run. This path is
# my choice; anywhere persistent and writable by root will do.
laststate=/var/lib/smartmontools/smartd-laststate

# Generate a temporary filename for new error information
tmp=$(mktemp)

# Copy the new error information into the file
cat > "$tmp"

# Test if the new error information is different to the saved
# error information from our last run.
if ! cmp -s "$tmp" "$laststate"; then
    # Save the "new" latest error information for next time
    cp "$tmp" "$laststate"
    # Call the email routine
    run-parts --report --lsbsysinit --arg="$tmp" --arg="$1" \
        --arg="$2" --arg="$3" -- /etc/smartmontools/run.d
fi

# Delete the temporary copy of the error information
rm -f "$tmp"

Save the file. The system will take one more run of the smartd daemon to “prime” the state into the system, but thereafter the system will not send you the same error twice in a row. Of course, this does mean that you now need to pay attention when the system does email you … or you could modify my code here, so it will send a duplicate “reminder” email again (say) every week, or month, or whatever works for you.
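To see the suppression logic in isolation, here is a self-contained sketch of the same compare-and-save pattern, with a counter standing in for the run-parts email step (the report strings are invented):

```shell
# Dedup demo: a "send" only happens when the report differs from the
# previous one. send_count stands in for the email routine.
state=$(mktemp)
send_count=0

process_report() {
    tmp=$(mktemp)
    printf '%s\n' "$1" > "$tmp"
    if ! cmp -s "$tmp" "$state"; then
        cp "$tmp" "$state"             # remember this report for next time
        send_count=$((send_count + 1)) # stand-in for run-parts / email
    fi
    rm -f "$tmp"
}

process_report "3 Offline uncorrectable sectors"  # new error: counts
process_report "3 Offline uncorrectable sectors"  # duplicate: suppressed
process_report "4 Offline uncorrectable sectors"  # changed: counts
echo "$send_count"                                # prints 2
```

The first and third reports trigger a "send"; the identical second one is silently dropped, which is exactly the behaviour the modified smartd-runner gives you.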

Overcoming Mediatomb’s disabled Javascript import filters in Ubuntu 14.04 LTS

Nearly three years ago I wrote about how I had managed to re-enable the Javascript import filters for Mediatomb, that allowed one to control how Mediatomb imported (and represented) files in its database. Lots of people have followed the same approach, but when I moved to Ubuntu 14.04 LTS, I found it much harder than before because of the addition of various fixes, and the increasing age of the version of Spidermonkey that Mediatomb was tied to. I decided to follow a new path, which I document here.

The basic approach is to rebuild the shipped package of Mediatomb, but rather than trying to enable the Javascript support, we’ll update the in-built default C++ importer so it does what we need. This gets us away from needing to enable an ancient version of Spidermonkey, or having to rollback several fixes that have been made to Mediatomb by the Ubuntu maintainers. Here’s what you need to do:

  1. cd;mkdir temp;cd temp
  2. sudo apt-get install build-essential, to install the tools you will need to build packages on Ubuntu
  3. sudo apt-get build-dep mediatomb, to install all the dependencies needed for mediatomb to compile
  4. sudo apt-get source mediatomb, to get the source code for mediatomb, and unpack it into a convenient subdirectory
  5. cd mediatomb-0.12.1/src/layout, to get into the directory where we are going to make our changes
  6. cp fallback_layout.cc fallback_layout.cc.backup, to take a backup, just in case something should go wrong

In Mediatomb you define a series of directories (& optionally subdirectories) that should be scanned for media that can be included in Mediatomb’s database. When there is no Javascript importer, a routine in this file gets called for each found piece of media that is “video” in nature (there are also similar routines for audio and pictures in this same file).

If you open up fallback_layout.cc in a text editor (vi or gedit, or similar) and scroll down to (approximately) line 86 you will find a routine called “FallbackLayout::addVideo”. This gets passed an object that describes the video file that the server is trying to import into its database, along with (I think) the base directory from the “scan” that made Mediatomb include this object (file). In practice, I never saw that base directory path set to anything, but that may just be the way I have Mediatomb configured.

So, what I do is alter this routine, to:

  1. Tidy up the name of the file as reported in the Mediatomb database
  2. Build a proper expanding tree structure for my database, based on the directory structure where I store my videos

Hopefully this commented version of my FallbackLayout::addVideo import routine will make clear how this works. Much of the cleverness is already in the support functions provided by Mediatomb, especially addContainerChain(), and all I need to do is some fairly simple text manipulation. This should work for you too; you could simply replace the routine in your file with this one. I think that the only place you would need to make a change is the line dir = dir.substring(17); where you need to change the 17 to however many leading characters of your directory structure you want to remove from the Mediatomb GUI.

void FallbackLayout::addVideo(zmm::Ref<CdsObject> obj, String rootpath)
{
    Ref<StringConverter> f2i = StringConverter::f2i();
    String dir;
    String title;

    // Grab the title (its filename) from the object obj, and tidy it up, converting
    // underscores to spaces, and removing the filetype
    title = obj->getTitle();
    title = title.replaceChar('_', ' ');
    title = title.substring(0, title.rindex('.'));

    // Write the result back as the object's new title
    obj->setTitle(title);

    // Get location (eg "/srv/media/video/Films/12/James_Bond/Living Daylights.mp4")
    dir = f2i->convert(obj->getLocation());
    // Remove leading directories (eg "/srv/media/video/" -> "Films/12/James_Bond/Living Daylights.mp4")
    dir = dir.substring(17);
    // Remove the filename and its leading slash (-> "Films/12/James_Bond")
    dir = dir.substring(0, dir.rindex('/'));
    // Convert underscores to spaces -> "Films/12/James Bond"
    dir = dir.replaceChar('_', ' ');

    if (string_ok(dir))
    {
        // Iterate through the string, building the chain from the constituent parts,
        // escaping the text parts as necessary for Mediatomb:
        // ie: esc("Films")+"/"+esc("12")+"/"+esc("James Bond")
        String container, chain;
        int p;

        while ((p = dir.index(0, '/')) > 0)
        {
            container = dir.substring(0, p);      // Grab the topmost directory name (container)
            chain = chain + "/" + esc(container); // Add it to the chain
            dir = dir.substring(p + 1);           // Cut off the container (and trailing slash)
        }
        chain = chain + "/" + esc(dir);           // Add the final directory to the chain

        // Add the new chain (addContainerChain takes care of duplication / overlaps etc)
        int id = ContentManager::getInstance()->addContainerChain(_("/Video") + chain);

        // Now add the object (ie the file) to the chain, using its tidied up name etc
        if (obj->getID() != INVALID_OBJECT_ID)
            add(obj, id);
    }
}
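As a quick sanity check of the string handling, here is the same transformation rendered in bash (the esc() escaping is omitted, and the sample path comes from the comments in the routine):

```shell
# Bash rendering of the C++ path-to-chain transformation, for checking
# the string manipulation outside Mediatomb.
loc="/srv/media/video/Films/12/James_Bond/Living_Daylights.mp4"

dir=${loc:17}      # drop the 17 leading chars: "/srv/media/video/"
dir=${dir%/*}      # drop the filename and its slash
dir=${dir//_/ }    # underscores become spaces

chain=""
while [[ "$dir" == */* ]]; do
    chain="$chain/${dir%%/*}"   # peel off the topmost container
    dir=${dir#*/}
done
chain="$chain/$dir"             # add the final container

echo "/Video$chain"             # prints /Video/Films/12/James Bond
```

If your media lives somewhere other than /srv/media/video/, the 17 changes to match, just as it does in the C++ version.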

And now we need to compile, install and test it. Though depending on what changes you need to make for your environment, and how rusty your C++ skills are, you may need to iterate around the edit / compile / test phases a few times until it all compiles and works cleanly.

  1. cd ~/temp/mediatomb-0.12.1 and then sudo ./configure. Lots of content will scroll past, but at the end there should be a summary; check that all your dependencies are satisfied.
  2. Start the compilation with sudo fakeroot debian/rules binary. Lots of compilation messages should scroll past.
  3. When it stops, you should have three .deb files in ~/temp. You can install them with sudo dpkg -i mediatomb*.deb

Finally, switch to root (sudo su) and then issue the command echo packagename hold | dpkg --set-selections for each of mediatomb, mediatomb-common and mediatomb-daemon. Then drop back to your user by entering control-D. This will prevent your customised packages being overwritten as part of the normal update processes (they will be “held”).
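The hold step can be scripted in one go. This sketch just prints the selection lines it would feed to dpkg, so you can eyeball them before piping them in for real:

```shell
# Build the "hold" selections for the three mediatomb packages. To apply
# them for real, pipe this output into: sudo dpkg --set-selections
selections=$(for pkg in mediatomb mediatomb-common mediatomb-daemon; do
    echo "$pkg hold"
done)
printf '%s\n' "$selections"
```

You can confirm the holds took effect afterwards with dpkg --get-selections | grep mediatomb.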

Virtual Windows

There is a small but vocal community in IBM who (with the company’s support) use Linux as their primary computing environment. It’s not a particularly easy path to follow, because although office productivity tooling under Linux is actually very good indeed, the ability to interoperate with Microsoft’s proprietary “Office” products is still a major limiting factor. Since the use of Office is pretty much universal in business, you need to either avoid that interoperability issue, or find a way to overcome it.

The open source community have been working hard to make this easier, and LibreOffice in particular can now do things that only 5 years ago would have seemed impossible. For many of my colleagues who are in more technically focused roles, where the need for perfect interoperability with Office is less of an issue, this gives a “good enough” solution. But for some of us, who are in more customer-focused roles, it’s still a problem.

IBM helps us by providing a pre-packaged virtual copy of Windows that we can run under Linux, onto which we can add the MS Office suite if our roles require it. Probably 95% of the time, LibreOffice running natively under Linux is good enough for me. But if it isn’t I can always fire up the real thing, without worrying about all the instability issues of running native Windows. Did I mention my laptop’s uptime is a shade over 62 days at the moment?!

Unfortunately, Windows XP (that I run Office on) has nearly reached its absolutely final, never to be extended again, end of support date. So I need to move up to Windows 7. This has advantages and disadvantages for me; it has better functionality, better application support, and proper support from Microsoft. But at the cost of being much more resource-hungry. Although IBM has been pressing us to upgrade for some time, I’ve been holding out to avoid the pain.

Pain? Well, most people in IBM use KVM/QEMU as their hypervisor, and so IBM packages our centrally licensed copies of Windows into qcow2 images. But for mostly historical reasons I’m running VirtualBox on my Ubuntu-based system, and whilst changing to KVM wouldn’t really be a problem, I like some of the management facilities (such as snapshots) that VirtualBox provides. And VirtualBox doesn’t support qcow2 images.

So it’s necessary to do some converting:

  1. First, install qemu-img: sudo apt-get install qemu-utils
  2. Next, convert the image: qemu-img convert -f qcow2 -O vdi <input>.qcow2 <output>.vdi (substituting your own image filenames)

Unfortunately, this expands the VM size considerably, as it doesn’t preserve the sparseness of the original qcow2 format. It looks like it could be possible to do a two step conversion (qcow2 -> raw -> vdi) using a mix of qemu and VirtualBox tooling to preserve that, but I wasn’t sufficiently concerned to investigate further. Storage really is cheap these days.
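For reference, the two-step route I didn’t pursue would look something like the commands below; the filenames are placeholders, and as I haven’t verified the end-to-end result myself, this just prints the commands rather than running them. qemu-img writes a raw image, then VBoxManage’s convertfromraw turns it into a VDI:

```shell
# Hypothetical two-step qcow2 -> raw -> vdi conversion; the filenames are
# made up, and we only emit the commands here rather than executing them.
src="windows7.qcow2"; raw="windows7.raw"; dst="windows7.vdi"
cmd1="qemu-img convert -f qcow2 -O raw $src $raw"
cmd2="VBoxManage convertfromraw $raw $dst --format VDI"
printf '%s\n%s\n' "$cmd1" "$cmd2"
```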

And at this point I can run my IBM-supplied Windows 7 VM under VirtualBox.

But I also discovered a couple of interesting things while I was doing this.

  • Firstly, to my surprise it’s possible to mount vdi or vhd images so Linux can read and write directly to them. This is achieved by way of the FUSE filesystem, and a plugin called vbfuse. This used to be packaged for Ubuntu as “virtualbox-fuse”, but at the moment installing it takes a bit of hacking. Once installed however, it works beautifully, and allows direct access to the filesystems of (obviously stopped!) virtual machines.
  • Secondly, I discovered that there is a Linux command line tool called “chntpw” that lets you alter the Windows password database directly. Using this it is possible to promote normal users to administrators, and reset or set their passwords.

Together, these allow you to reset any forgotten password within a VM, which can be most useful if like me, you rarely use it, and are forced to regularly change your passwords with non-trivial complexity requirements. More information can be found here, and the code itself is in the standard Ubuntu repositories.

Rebooting an ASROCK Q1900 motherboard from Linux

Some time ago I replaced my original home server with a new one based on an Intel J1900 processor on an ASROCK Q1900 motherboard. This has slightly reduced the power consumption of my home server (which runs 24×7) but more importantly, massively boosted its performance. However, it’s left me with one minor problem – the system always hangs on reboot, preventing me from remotely managing the system.

I finally got around to looking at it this evening, and this post provided a wealth of information on how to go about resolving the issue. In my case, the default reboot method (using the keyboard controller) wasn’t working. Switching to the ACPI method immediately resolved the problem for me, on Ubuntu 14.04.1 LTS.

To implement the fix, open a terminal, and edit the file /etc/default/grub, adding reboot=acpi to the kernel command line, so that the line reads something like:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash reboot=acpi"

Then (again in the terminal) run the command sudo update-grub.

Since the setting takes effect the next time the OS starts, you’ll probably need to restart the server manually one more time before being able to remotely reboot the machine successfully.


It’s been a long time since I last posted about how I was getting on. It feels like an “old” topic, and as there hasn’t been much change, one that I’ve nothing very interesting to write about. But, I notice with some frustration that it’s been very nearly a year since my colostomy operation, and I still have a series of difficulties with it. So I thought I’d post about my current status, and hope you forgive me if it sounds like I’m just moaning.

So firstly, my colostomy hasn’t really settled down into the sort of natural rhythm that the medical professions like to describe, where it operates once or maybe twice a day at fairly set times. Mine seems fairly erratic, which means being prepared to change my bag wherever I go. I suspect that this is at least partly down to the rather erratic schedule of my life, but doing any form of physical activity usually provokes it too, which makes doing regular exercise frustratingly difficult. As a result, I’ve also put on quite a lot of weight over the last 6 months, which I’m finding quite depressing too.

To try to overcome this, I’ve learned how to irrigate my colostomy, which is essentially a self-administered enema. The idea is that if I do this every two to three days then there is not enough waste in the colon between irrigations to cause problems with my daily life. The good news is that the technique works really well. The less good news is that rather than two to three days between irrigations, I only get about 20 hours. The actual irrigation process takes about an hour, and needs me to be relaxed. So if I can find a relaxing hour, every day, I could use this approach to effectively manage my colostomy. The problem, of course, is who has a spare hour every day? Certainly not me, with my erratic schedule…

More worryingly my perineal wound (where my anus was removed) still hasn’t healed, and still weeps a significant amount of (what I assume to be) interstitial fluid, which means I need to dress (and change) the wound several times a day. It’s not very comfortable either, especially in this warm weather. Sadly, the healing problems I’m experiencing are again probably rooted in the preoperative radiation therapy that I had back in 2009. I sometimes wonder how things might have turned out if I had chosen not to have that radiation therapy: I might never have needed my colostomy. Or my perineal wound might be healed. Or I might be dead. Hmmm.

Of course, such deliberations are ultimately pointless, and instead I’m planning to discuss with my surgeon what options are open to me now to improve the wound when I next see him (in September).

On the urological front, the problem I was experiencing with my left kidney appears to have stabilised; the function won’t get any better, but it’s not getting any worse either. In short, the reimplantation of my left ureter seems to have been a complete success. I experienced a series of urological infections in the first few months after the operation, but six months on a low-grade antibiotic seems to have resolved that. I’ve been off the antibiotics for about five weeks now, and there have been no signs of any recurrence. Hopefully I won’t need to worry any more about my kidneys.

Meanwhile, time marches on, and on August the 11th I have another appointment with the CT Scanner to check to see if there is any sign of recurring cancer. I suspect that this may well be the last of those regular checks, and that I will be largely discharged from the cancer side of things after that, with the doctors concentrating on my perineal wound from now on. But I’m sure I’ll find out more when I next see my surgeon.