Mediatomb on Ubuntu 12.04

One of the things I use Ubuntu for is to run my home server, which serves my video collection to various players around the house. This is achieved with a DLNA server called Mediatomb, which ships with Ubuntu.

Unfortunately, despite having all the functionality I need, and being remarkably stable, Mediatomb hasn’t been under active development for the last year or two, and when I upgraded to the latest version of Ubuntu Server I discovered that a key feature of Mediatomb had been disabled: the support for writing import filters in Javascript.

These import filters allow the video collection to be sorted and filtered by its characteristics into a series of virtual folders, which the media players can then use to find whatever content is required. You could have folders of films sorted by leading actor, or director. Or folders of films by certification. Or date of release. The options are endless. It’s a great feature, and one that turns out to be utterly indispensable when it’s suddenly removed.

The reason the feature is disabled is that Mediatomb depends on an out-of-support version of libjs (Spidermonkey, the Mozilla Javascript engine). The Ubuntu developers don’t have time to fix this themselves, so until the Mediatomb developers do, they have applied a quick fix and disabled the Javascript support so that the package can still be shipped.

This post shows how to re-enable that Javascript support on Ubuntu 12.04 Server. It’s a bit of a dirty hack, but it will work until either:

  1. The Mediatomb developers fix this
  2. Someone (such as Raik Bieniek or Saito Kiyoshi) packages & maintains a better fix for 12.04 in a PPA
  3. This effort to replace the Javascript support with Python support completes (I couldn’t get it to compile)

The basic approach here is to rebuild the shipped Mediatomb package with the Javascript support re-enabled. This requires a version of the Javascript engine on the system that Mediatomb can use. The current version in the 12.04 repositories won’t work (the APIs have changed), so we need to install an older “back-level” version, which we can get from Debian, who have still been applying security fixes to it.

  1. cd; mkdir temp; cd temp
  2. sudo apt-get install build-essential, to install the tools you will need to build packages on Ubuntu
  3. sudo apt-get build-dep mediatomb, to install all the dependencies needed for mediatomb to compile
  4. sudo apt-get source mediatomb, to get the source code for mediatomb, and unpack it into a convenient subdirectory
  5. sudo vi mediatomb-0.12.1/debian/rules, and change the line that says “--disable-libjs” to “--enable-libjs” (note that those are prefixed by double dashes)
  6. Add a new entry to the changelog file in the same directory, incrementing the version number from zero to one (a sample entry is shown after this list). This will help prevent your changes being overwritten.
  7. Get an old copy of Spidermonkey from the Debian Squeeze distribution (on which Ubuntu is ultimately based). You need libmozjs2d and libmozjs-dev, in either the amd64 or i386 versions, depending on whether you are running in 64-bit or 32-bit mode. To determine which version you need, enter the command “dpkg --print-architecture” in a terminal. Then install the appropriate packages using sudo dpkg -i packagename
  8. In all likelihood you will get an error from one or both of those installs, complaining about dependencies. To resolve them and complete the installs, simply enter sudo apt-get install -f
  9. cd mediatomb-0.12.1 and then sudo ./configure. Lots of content will scroll past, but at the end there should be a summary; look for a line that says something like “libjs : yes”. If present then you have enabled Javascript support in the build, and satisfied the dependencies. You can now install any additional dependencies and reconfigure the build further if you wish.
  10. Switch back to your source code with cd ~/temp/mediatomb-0.12.1
  11. Start the compilation with sudo fakeroot debian/rules binary. Lots of compilation messages should scroll past.
  12. When it stops, you should have three .deb files in ~/temp. You can install them with sudo dpkg -i mediatomb*.deb
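
As an example of step 6, the new entry at the top of mediatomb-0.12.1/debian/changelog might look something like this (the version string, name and date are illustrative; base yours on whatever is already at the top of the file and bump the revision):

mediatomb (0.12.1-1) precise; urgency=low

  * Rebuilt with --enable-libjs to restore Javascript import script support.

 -- Your Name <you@example.com>  Mon, 04 Jun 2012 12:00:00 +0100

The format is fussy about indentation and blank lines, so the easiest route is often to run dch -i (from the devscripts package) inside the source directory, which creates a correctly formatted entry for you to edit.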

Finally, switch to root (sudo su) and issue the command echo packagename hold | dpkg --set-selections for each of mediatomb, mediatomb-common, mediatomb-daemon, libmozjs2d and libmozjs-dev, then drop back to your user by pressing Control-D. This will prevent your customised packages being overwritten as part of the normal update process (they will be “held”).
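
If you’d rather not switch to root, the same holds can be applied with sudo in a one-off loop; this is just a convenience sketch, so trim the package list to match what you actually installed:

for pkg in mediatomb mediatomb-common mediatomb-daemon libmozjs2d libmozjs-dev; do
    echo "$pkg hold" | sudo dpkg --set-selections
done

You can confirm the result with dpkg --get-selections | grep -E 'mediatomb|mozjs', which should show each package as “hold”.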

You can now configure Mediatomb normally, including the use of custom import.js scripts by altering /etc/mediatomb/config.xml as desired.
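
As a pointer, the relevant section of a stock /etc/mediatomb/config.xml looks roughly like this (the element names and script paths here are from a standard 0.12.1 install, so check them against your own file); setting the virtual-layout type to “js” is what tells Mediatomb to build its folder layout using the import script:

<import hidden-files="no">
  <scripting script-charset="UTF-8">
    <common-script>/usr/share/mediatomb/js/common.js</common-script>
    <playlist-script>/usr/share/mediatomb/js/playlists.js</playlist-script>
    <virtual-layout type="js">
      <import-script>/usr/share/mediatomb/js/import.js</import-script>
    </virtual-layout>
  </scripting>
</import>

Point the import-script entry at your own copy of import.js once you start customising the layout.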

Update: Having just been through a reboot on my server, it seems that Mediatomb isn’t set up to autostart properly. To resolve this you need to run the command sudo update-rc.d mediatomb defaults, which will install the various rcN.d startup and shutdown links.
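
As a quick sanity check (not a required step), you can confirm the links were created by listing them:

ls /etc/rc*.d/*mediatomb*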

Update 2: I’ve noticed that sometimes after a reboot Mediatomb still isn’t autostarted properly. It turns out that there is a message in /var/log/mediatomb.log referring to “The connection to the MySQL database has failed: mysql_error (2002)”. What this means is that if you are using MySQL rather than SQLite, there is a race condition where Upstart sometimes tries to bring up Mediatomb before the MySQL database is available. You can resolve this by editing /etc/init/mediatomb.conf, and changing:

start on (local-filesystems and net-device-up IFACE!=lo)

to

start on (started mysql and local-filesystems and net-device-up IFACE!=lo)

Upstart will then ensure that MySQL is running before attempting to start Mediatomb.

Printer statistics (ongoing)

Time has been passing. And I’ve studiously ignored the warnings from the printer, and kept on sending it jobs. Which it has kept on printing. Beautifully.

I’m now at a total of 885 impressions (348 mono and 537 colour) and despite complaints that the magenta cartridge is now also “low”, there seems to be no difference in the quality of the output. I’ve now got replacement black and magenta toner cartridges “waiting to go”, but see no reason to install them until the print quality starts to actually degrade.

I’m cynically starting to wonder if Lexmark have set the warning levels artificially early as a means of promoting toner sales. It will be interesting to see if the printer enforces replacement before the toner actually runs out, and how much toner is left in the cartridge at that point…

Printer statistics

My Lexmark laser printer started complaining about low toner in the black cartridge today. I’ve had the printer just over a year, but even so, I was surprised that it had got through 2500 sides of text (what the black cartridge is rated at) so quickly.

So I checked the statistics. The 543DN has an inbuilt web server that provides all kinds of helpful information, including the fact that my cartridges are:

  • Black: low
  • Yellow: 80%
  • Magenta: 30%
  • Cyan: 50%

There is lots of information on the pages printed, including average job length, job type, etc. It turns out that I’ve printed a total of 519 jobs, of which 474 are 1 or 2 pages long. My longest job was 23 pages.

I’ve printed a total of 312 mono A4 sides, and 499 colour A4 sides, for a grand total of 811 sides overall.

And because my “2500 sheet high capacity black toner cartridge” is nearly empty after only 811 sides printed, this is where I can point out that the old adage about “lies, damned lies and statistics” is absolutely true, and the definition of a printed side (as used by the printer manufacturers) has no standing in the real world whatsoever.

If I assume my cartridge statistics are correct, 811 real world A4 impressions costs me all of a black cartridge, 70% of a magenta, 50% of a cyan, and 20% of a yellow. A total of 2.4 cartridges at £60 each, or £144. Which is 18p an impression. Which seems expensive, but given my preponderance of colour printing, perhaps isn’t as bad as it first seems.

Upgraded study music system

Like a lot of people, I sometimes work from home. In general it’s a day or two a week, and I’m lucky enough to have a small (2m x 2m) study dedicated to that. And since it’s the one room in the house that is genuinely “mine” and no-one else’s, it’s something of a refuge as well as a place of work.

My need for music while I work has meant that over the years I’ve moved from a pair of earphones plugged into the laptop, to a Joggler acting as a DIY Squeezebox Touch. This fed a pair of powered computer speakers from its headphone socket, which produced some basic noise, but wasn’t exactly high fidelity.

For some time now I’ve been watching the discussions around Gainclone and T-class amplifiers on the internet with interest, and this month I decided to spend some of my “toy fund” on improving my study music system. In the end, I was somewhat limited by the Joggler (which I wanted to keep) in my choice of improvements, as the only analogue audio output available is a low-quality headphone output. Which meant I needed to move into the digital domain, and flow my music over the USB output into an offboard DAC and amplifier.

I looked at various options (mostly involving second-hand DACs) but then I discovered this review of the Topping TP30, which is a combination T-class amplifier and USB DAC in a conveniently small package. Further investigation led me to this further (and more detailed) review.

At this point I was pretty much convinced, especially as I was able to get one delivered from a UK Ebay supplier for only £63. My only concern was that as the output is only 10 watts RMS per channel into 8 ohm loads, the speakers need to be reasonably efficient, even in as small a room as mine.


Fortunately I had an old pair of speakers from a Sony micro hifi, which looked like they might be good candidates – and were helpfully only 6 ohm loads too.

So, what does it sound like?

Surprisingly good. It’s no match for my old Audiolab 8000A and KEF reference speakers in the lounge, but it’s a massive step up from the old set of 2.1 powered speakers that I was using. It produces a very pleasant sound, tending towards the lean and analytical – which is the sound I prefer anyway. With a good recording, it can be quite revealing, and with the right track it happily grabs the beat and rocks along nicely. Volume levels are (as Rolls Royce would say) “adequate”; half volume is as much as I would ever want in this size room. My only criticism would be that it doesn’t have the reserves of power to provide sudden big slams of bass; but then I wasn’t expecting that it would.

What’s interesting to me from having done this is the minimal cost of creating a very impressive sounding system; the TP30 was £63 delivered. Speakers like mine are about £25 on Ebay. Jogglers are £50 on Ebay. Admittedly I already had the NAS with the Squeezeserver software & music store … but many people have a spare computer these days, or you could equally well feed the amplifier and speakers directly from a laptop. Hard to imagine a better value-for-money setup for a student. It’s even relatively portable.

Yellow light of death

My PS3 is one of the original 60GB launch systems; as such it’s getting a bit long in the tooth. But it still gets used every day, either for streaming my movies or for playing the kids’ games. Or both.

Until Tuesday evening, when it suddenly beeped three times and turned itself off, with the power light flashing red. Trying to turn it back on again didn’t work, and neither did power cycling it. At which point I did some research, and discovered that this is known as the “Yellow Light Of Death”, or YLOD for short. It’s the way the PS3 indicates a catch-all hardware problem, and basically means the console is dead and needs to be sent back to Sony for repair.

Which would be OK, but since mine is well out of warranty we’re talking a significant bill to repair it, and in all likelihood I wouldn’t get back my rather rare launch system, with all the additional hardware that helps with backwards compatibility for PS2 titles. Not good.

It transpires that early PS3 systems like mine used 90nm Cell B/E processors, which were very power-hungry, hot-running processors, and Sony didn’t do a very good job of mounting the heatsinks in the early systems. This resulted in the systems running hot, eventually causing dry solder joints to form between the CPU & GPU and the motherboard, resulting in these YLOD hardware failures. These problems are much less common on the newer PS3 Slims, which use newer 45nm Cell B/E processors that generate much less heat (200W total system consumption, compared to nearly 400W in mine).

Fortunately some enterprising people have managed to find a way of fixing this by making their own hot-air reflow solder station out of a standard hot air paint stripper, and have been able to re-flow-solder all the surface-mounted components around the CPU/GPU, fixing the dry solder joints and restoring their PS3s to life. And better yet they’ve posted videos of how to do it, so others can repeat their success.

So I did.

Going slowly, it took me about 3 hours to strip the PS3 down to its bare motherboard, heat up the components so their soldered connections re-flowed and remade themselves, and then rebuild it afterwards. I must admit that I wasn’t expecting it to work – but then again I had nothing to lose beyond a spare evening and £5 of thermal compound to remount the heatsinks with.

But the result is that the PS3 is now working again, and in practice, probably better than it has for a long time; by resolving the poorly mounted heatsinks and using a better quality of thermal compound the fan can now run much more slowly, and still provide better cooling. So it’s much quieter while streaming movies. Which is perfect.

Except I rather fancy getting a second generation Apple TV and hacking XBMC onto it to produce a really smart media hub, and fixing the PS3 has just made justifying the expenditure on that a lot harder!

International Broadcast Conference

As part of my new role supporting the Media and Entertainment industry for IBM, I’ve just spent a week in Amsterdam at the International Broadcast Conference (IBC2010), which was a simply fascinating experience.

Of course, I was primarily there to work – in my case mainly acting as a host for some of my customers who were attending the show. But I’m also still learning how the broadcast industry works, so this was a great opportunity to do a lot of self-education, see a lot of the basic broadcast technologies “up close”, and talk to the suppliers to understand how they are used. I was also able to network with some of IBM’s senior technical and executive team who were attending from around the world, and spend time learning in detail from them about the IBM technologies that they were at the show to demonstrate to our customers.

There was no doubt as to the theme of the show … everywhere you looked was 3D. From the cameras that shoot it, through specialist electronics that “fake” it, to specialist displays that show it, there was absolutely no escaping it. What was interesting (to me anyway) was how unconvincing I found it. The effects were very impressive, but somehow it didn’t seem to add very much to the overall experience. Worse, I found that after a relatively short time watching something in 3D, I started to feel slightly ill – something like a cross between motion sickness and a headache. I had the same result whether I was watching an active system (with the shuttered glasses) or a passive system (with the polarised glasses). After several days of watching these systems in action, I’m not convinced that the technology is really ready for the home; it doesn’t work as well for an “off center” viewer, it’s inconvenient (at best) for anyone who wears spectacles, and (from my informal polling of other attendees) a goodly proportion of people don’t actually find the effect very pleasant.

Add in the significant investment required in new equipment (a new TV, plus several extra sets of glasses) and it will be interesting to see if it really takes off as quickly as the industry would like.

On the other hand, some of the 4k resolution equipment was simply stunning. I spent a few minutes standing next to a Panasonic 152″, 4k (4096 x 2160) 3D plasma display. It was the size of a large wall, and yet even standing with my nose almost pressed to it, I still couldn’t see the individual pixels. The quality of the images it was displaying was simply unimaginable – very, very impressive. I’d really like to think it could be the next big thing after the industry has got itself over 3D.

Hifi disaster, rescued by KEF (part ii)

Just to conclude this: I disassembled the speaker with the failed HF drive unit, disconnected the HF drive unit from the crossover, and measured the resistance through the suspect coil, which proved to be open circuit – i.e. dead. So I went ahead and ordered a matched pair of replacement HF drive units from KEF, for £78 with next-day delivery.

The units dutifully turned up the next day, along with detailed instructions on how to replace them. In total it took about an hour, mostly because it involved some soldering of connections inside the speaker, which, although straightforward, was a bit physically awkward.

And the good news is that the speakers are now sounding as good as new again.

However, when that HF unit initially failed, I had started thinking of replacing my speakers with a 5.1 speaker set. And the thought of moving to a surround sound set-up has stuck with me, to the point where I started investigating how to create a 5.1 system using my KEF Reference 101/2s as my L+R front speakers.

After some good advice from the avforums website, and a chat with the historic speaker experts at KEF, I was advised to try to find another pair of Reference 101/2s for my rear speakers, and a Reference 100c or 90c for the centre (dialogue) speaker.

By sheer fluke, I found a pair of 101/2s up for sale locally (the only ones I’ve seen so far) and managed to get them for £90 – which seemed a good price. I also found a KEF Reference 90c centre speaker on Ebay for £65 delivered. I now need to decide on my AV receiver and a hifi rack (these look nice!), sort out some wall brackets, and ultimately decide whether I need a subwoofer or not. If I do, then I hear very good reports about these ones.

So far, between repairs and purchases, the speakers for my fledgling surround system have cost me £235. Admittedly they are not brand-new, or modern “lifestyle” designs, but they are all top quality, full-range speakers that would have cost nearly £1500 when new.

It’s not how I’d originally expected this to work out, but it is proving exceptionally cost-effective!

Digitising DVDs with Linux to stream to a PS3

Having posted about converting the format of video files to suit streaming to a PS3, I got an email asking me how I actually convert a DVD into a digital file, and how my network is set up to stream videos to the PS3.

So, a little more explanation. I have a small, very low-power server that runs 24×7 in my house, and is connected to my 100mbps ethernet network. On that server I have quite a lot of disk storage, and run Mediatomb, which is a DLNA-compliant UPnP media server. It serves my collection of video and audio files to any device on my network that wants to access them.

In the case of the video files, that device is my PS3, which is also connected to my ethernet network, and also has an HDMI connection to my LCD TV. With this configuration I can start the PS3, which automatically detects the Mediatomb media server and puts an icon on its GUI interface. I can then select that icon, get a list of the available video files, and by clicking on one, have it played on my TV for me.

The only configuration involved in this solution is of Mediatomb, which involves a couple of documented changes to its configuration file, and specifying where my media files are via its web interface. All very simple.

To create the media files from a DVD, I do the following:

  1. Extract the content of the DVD onto my hard drive using a program called vobcopy.
  2. Use ffmpeg to convert that content into a more compact form, better suited to streaming over a network.
  3. Copy the resulting file to my server where Mediatomb can access it.

Now let’s look at the first two steps in more detail:

DVDs actually store their content as a series of VOBs; these contain the actual video that you see when you play back a DVD. In general there are separate VOBs for the main movie, any adverts, trailers, bonus features, etc. And just to make life a little more complex, some DVDs store the main movie in more than one VOB, though fortunately vobcopy can hide that from us.

To make things a little more difficult, the movie industry have then encrypted the DVD to make it harder to do what we are about to do. To get around this, you must have installed libdvdcss2, which is a DVD decryption library that is available from the Medibuntu repository.
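
Assuming you already have the Medibuntu repository enabled, installing it is a one-liner (shown here for completeness):

sudo apt-get install libdvdcss2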

To find which VOB to extract from the DVD, simply run “vobcopy --info”. This will produce some output like:

richard@t60p:~$ vobcopy --info
Vobcopy 1.1.0 – GPL Copyright (c) 2001 – 2007 robos@muon.de
[Hint] All lines starting with “libdvdread:” are not from vobcopy but from the libdvdread-library

[Info] Path to dvd: /dev/sr0
libdvdread: Using libdvdcss version 1.2.10 for DVD access
[Info] Name of the dvd: STARWARS2UK_D1_2_PAL
[Info] There are 21 titles on this DVD.
[Info] There are 103 chapters on the dvd.
[Info] Most chapters has title 1 with 51 chapters.
[Info] All titles:
[Info] Title 1 has 51 chapters.
[Info] Title 2 has 1 chapter.
[Info] Title 3 has 2 chapters.
[Info] Title 4 has 1 chapter.
[Info] Title 5 has 1 chapter.
[Info] Title 6 has 1 chapter.
[Info] Title 7 has 1 chapter.
[Info] Title 8 has 1 chapter.
[Info] Title 9 has 1 chapter.
[Info] Title 10 has 2 chapters.
[Info] Title 11 has 10 chapters.
[Info] Title 12 has 14 chapters.
[Info] Title 13 has 1 chapter.
[Info] Title 14 has 1 chapter.
[Info] Title 15 has 2 chapters.
[Info] Title 16 has 1 chapter.
[Info] Title 17 has 4 chapters.
[Info] Title 18 has 2 chapters.
[Info] Title 19 has 2 chapters.
[Info] Title 20 has 1 chapter.
[Info] Title 21 has 3 chapters.

[Info] There are 21 angles on this dvd.
[Info] All titles:
[Info] Title 1 has 1 angle.
[Info] Title 2 has 1 angle.
[Info] Title 3 has 1 angle.
[Info] Title 4 has 1 angle.
[Info] Title 5 has 1 angle.
[Info] Title 6 has 1 angle.
[Info] Title 7 has 1 angle.
[Info] Title 8 has 1 angle.
[Info] Title 9 has 1 angle.
[Info] Title 10 has 1 angle.
[Info] Title 11 has 1 angle.
[Info] Title 12 has 1 angle.
[Info] Title 13 has 1 angle.
[Info] Title 14 has 1 angle.
[Info] Title 15 has 1 angle.
[Info] Title 16 has 1 angle.
[Info] Title 17 has 1 angle.
[Info] Title 18 has 1 angle.
[Info] Title 19 has 1 angle.
[Info] Title 20 has 1 angle.
[Info] Title 21 has 1 angle.
[Info] Using Title: 1
[Info] Title has 51 chapters and 1 angles
[Info] Using Chapter: 1
[Info] Using Angle: 1

[Info] DVD-name: STARWARS2UK_D1_2_PAL
[Info] Disk free: 74104.160156 MB
[Info] Vobs size: 5733.919922 MB

It’s highly likely that the title with the most chapters is the main movie; in this case title 1. We can check that by running the command “mplayer dvd://[title_number]”. If we see the main movie playing, then we can extract that title to our hard disk by running the command “vobcopy --title-number [title_number]”. vobcopy will then proceed to decrypt and copy the VOB to your hard disk (as STARWARS2UK_D1_2_PAL3-1.vob in this case).
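
Putting those two commands together for this disc, the check and the extraction look like this (title 1 being the 51-chapter title from the listing above):

mplayer dvd://1              # preview title 1 to confirm it's the main movie
vobcopy --title-number 1     # decrypt and copy that title to the current directory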

Now we can convert that (very large) file into something smaller and easier to stream. This uses exactly the same command as the last post: ffmpeg. This time however, we need to make sensible guesses for the bitrates that we want to use for the video and audio streams. Personally I go with 2000k for the video, and 192k for the audio. It’s good enough in terms of quality, and produces a file roughly a third of the size of the original VOB, which is much more amenable to being streamed over a 100mbps ethernet network. If you’re hoping to do this over wireless, then you’ll probably need to compress even more and sacrifice quality … wireless just doesn’t have the bandwidth to do good quality video streaming.

So, the command to convert that VOB to a .mp4 file is:

ffmpeg -i STARWARS2UK_D1_2_PAL3-1.vob -vcodec libx264 -b 2000k -acodec libfaac -ab 192k STARWARS2UK_D1_2_PAL3-1.mp4

That command will take at least as long to execute as the movie would have taken to play. But once completed, the resulting file can then be copied to my server and be available for instant access in the future.
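
If you end up ripping a lot of discs, the convert-and-copy steps are easy to wrap in a tiny script. This is only a sketch: it reuses the ffmpeg settings above, and the server name and destination path are hypothetical placeholders for wherever your Mediatomb server looks for media:

#!/bin/sh
# vob2mp4.sh -- convert an extracted VOB and copy it to the media server
# usage: ./vob2mp4.sh MOVIE.vob
VOB="$1"
MP4="${VOB%.vob}.mp4"
ffmpeg -i "$VOB" -vcodec libx264 -b 2000k -acodec libfaac -ab 192k "$MP4"
scp "$MP4" myserver:/srv/media/video/    # hypothetical server and path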

(Re)encoding video under Linux to stream to a PS3

I’ve recently moved to storing my video collection on my home server as a series of .mp4 files, and streaming them (with Mediatomb) to my PS3, which displays them on my TV. This means I can watch any of my DVDs without having to find the disc, which is very convenient.

However, I started digitising many years ago, so my collection contains a few older “.avi” files that contain video in early Divx and Xvid formats. Normally these “.avi” files can be streamed directly to the PS3 too, but sometimes the PS3 complains of “Unsupported data (error 800288bf)” or “Corrupted data”. I’ve failed to discover exactly what causes these problems, but they’re clearly related to the combination of the codecs used for the audio and video streams and the use of the avi container.

To resolve this, I simply convert (re-encode) these unsupported “.avi” videos into the newer more standards-based .mp4 format. This approach will work equally well for video in the many other formats that the PS3 doesn’t understand. To do this, I use ffmpeg, which is the swiss army knife of video & audio conversion for Linux.

If you intend to replicate this, you need to use a recent and full version of ffmpeg; the version in the standard Ubuntu repositories isn’t sufficient, as it doesn’t include support for patent-encumbered codecs, which you will need. Instead, you can get the version you need from the Medibuntu repository.

First find out what bitrates your existing video is using for its video and audio streams. Enter “ffmpeg -i video.avi”. The output will be something like:

richard@t60p:~/Data$ ffmpeg -i video.avi
FFmpeg version SVN-r19352-4:0.5+svn20090706-2ubuntu2, Copyright (c) 2000-2009 Fabrice Bellard, et al.
configuration: --extra-version=4:0.5+svn20090706-2ubuntu2 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --extra-cflags=-I/build/buildd/ffmpeg-0.5+svn20090706/debian/include --enable-shared --disable-static
libavutil 49.15. 0 / 49.15. 0
libavcodec 52.20. 0 / 52.20. 0
libavformat 52.31. 0 / 52.31. 0
libavdevice 52. 1. 0 / 52. 1. 0
libavfilter 0. 4. 0 / 0. 4. 0
libswscale 0. 7. 1 / 0. 7. 1
libpostproc 51. 2. 0 / 51. 2. 0
built on Oct 13 2009 22:15:16, gcc: 4.4.1
Input #0, avi, from ‘video.avi’:
Duration: 01:24:57.64, start: 0.000000, bitrate: 2281 kb/s
Stream #0.0: Video: mpeg4, yuv420p, 720×304 [PAR 1:1 DAR 45:19], 25 tbr, 25 tbn, 25 tbc
Stream #0.1: Audio: mp3, 44100 Hz, stereo, s16, 192 kb/s
At least one output file must be specified

The interesting parts are the overall bitrate on the Duration line (2281 kb/s) and the bitrate of the audio stream (192 kb/s). Take one from the other and you have the average bitrate of the video stream: 2281 - 192 = 2089 kb/s.

Now we need to re-encode the video into a format the PS3 likes. We’ll use H.264 video and AAC audio in an MP4 container, which the PS3 supports well. I’ve just reused the same bitrates as the input file, but in practice you can probably reduce both a little without affecting quality, as H.264 and AAC are more efficient than the older codecs:

ffmpeg -i video.avi -vcodec libx264 -sameq -b 2089k -acodec libfaac -ab 192k -t “00:05:00” video.mp4

Notes

  • -sameq keeps video quality the same
  • -b 2089k specifies the output video bitrate
  • -ab 192k specifies the output audio bitrate
  • -t “00:05:00” optionally produces only 5 minutes of output – ideal for testing your settings before you convert a long video
  • Encoding time on my dual core laptop is about 1 second for each second of video. So expect a long wait.
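
If you have a pile of these older files to get through, the probe-and-re-encode steps can be rolled into a small script. This is only a sketch: the sed parsing assumes ffmpeg’s usual “-i” output format (as shown above), and it simply reuses the source bitrates unchanged:

#!/bin/sh
# avi2mp4.sh -- re-encode an .avi for the PS3, keeping the source bitrates
# usage: ./avi2mp4.sh video.avi
IN="$1"
OUT="${IN%.avi}.mp4"
INFO=$(ffmpeg -i "$IN" 2>&1)
# overall bitrate from the Duration line, audio bitrate from the Audio stream line
TOTAL=$(echo "$INFO" | sed -n 's/.*bitrate: \([0-9]*\) kb\/s.*/\1/p' | head -1)
AUDIO=$(echo "$INFO" | sed -n 's/.*Audio:.* \([0-9]*\) kb\/s.*/\1/p' | head -1)
VIDEO=$((TOTAL - AUDIO))     # video bitrate = overall minus audio
echo "Re-encoding $IN at ${VIDEO}k video and ${AUDIO}k audio"
ffmpeg -i "$IN" -vcodec libx264 -b "${VIDEO}k" -acodec libfaac -ab "${AUDIO}k" "$OUT"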

Recovering recordings from dead Pace Twinview PVR

A couple of years ago the kids discovered the benefits of our PVR, and suddenly (a) there was no space on the PVR, and (b) we could never find anything we recorded amongst the zillions of recordings of CBeebies, CBBC and CITV. And then we inherited an old Pace Twinview PVR from my father-in-law, who had traded up to a Humax 9200. So I set it up for the kids on “their” TV. This meant that they could record whatever they wanted, without filling up the PVR in the lounge. And for a fair time, life was good.

And then the Twinview started playing up, and eventually died. And suddenly I have three kids who want me to recover all the recordings that they’ve been making.

No problem, I think – those recordings are probably just recorded directly off air as transport streams (.ts files), which I can easily transform (using ffmpeg) either into something like H.264 video and MP3 audio in an mp4 container, which they can then watch on the PS3, or into an MPEG2 file which I can then make into a standard DVD. So I say not to worry, I’ll fix it for them.

Which was hasty. Very hasty. And possibly a big mistake.

So I extract the hard drive from the PVR. Good news: it’s a simple 20GB PATA laptop drive which will fit nicely into my Thinkpad ultrabay. So I boot up Ubuntu, and do a quick “fdisk -l /dev/sdb” to discover that there are three partitions, all of which are unrecognised by fdisk. They are, however, flagged with partition identifier 0xE0, which after a bit of Googling turns out to be a completely proprietary filesystem designed by ST Microelectronics, called ST AVFS – presumably the ST Microelectronics Audio Visual FileSystem.

So currently I’m struggling to find a way to get at the data. It’s not possible to just mount the partitions, but it turns out that there has been some work done on some Linux command-line tools (TwinRIP) to extract the data from those partitions. However, the tools are at least two years out of date, supplied in binary form only, and no longer run under any of my Ubuntu installs (lots of problems with missing libraries). It transpires that there is a GUI program that does much the same under Windows (TDU), which looks to be more recently updated and (plus point) can directly produce MPEG output files. It will be interesting to see if that runs under Windows 7 RC, which is the only Windows install I currently have on my Thinkpad!

Update: And the results are in. The TDU program doesn’t work any better than the Linux one. And Windows 7 doesn’t want to talk (reliably) to the hard drive, whether mounted in my Thinkpad ultrabay or in a USB caddy. Worse, the author of TDU & TwinRIP has not published any source code, so there’s no chance of my doing anything geeky at this point. I’ve declared defeat, and told the kids that their old recordings are now officially lost 😦

The only upside is that this gives me another free 20GB PATA laptop drive to play with, if only I can think of a use for something so small.