Blog

January 07, 2022 22:53 +0000  |  Free Software Linux Majel 0

For the last two years I've been working on Majel, a project that allows you to control your computer with your voice. The first incarnation was released back in March, but was dependent on Mycroft, so I've been working to rewrite it to be independent. The end goal: to be able to release it as an image file for the Raspberry Pi so people can just download it, burn it onto an SD card, pop it into their Pi and have it Just Work™.

The development process has been surprisingly easy, with only a few hiccups around audio processing in Python. The new architecture has proven to be really solid and I'm excited to share that in detail at a later date -- but that's not what this post is about. This post is about packaging for the Raspberry Pi and what a nightmare it's been for me.

For the purposes of this post, you just need to know that I wrote a Python application that interfaces with GNOME, Firefox, Chromium, Skype, and other desktop tools to do cool stuff.

What follows is a series of things I now understand that came at the cost of a lot of time and hair-pulling. If you're thinking about going down this road for your own project, my hope is that sharing my experiences here will help save you frustration in the future.

The CPU Architecture

Anyone who knows anything about the Raspberry Pi project can tell you that these little devices don't run the same kind of CPU you're probably used to. Where most computers we use today (not including phones) use x86 processors (typically built by Intel or AMD), the Raspberry Pi uses ARM chips. If your knowledge of the situation (like mine) ended there, then I'm about to save you some pain.

Just like the x86 ecosystem (which consists of i386, i686, x86_64 and other "sub-architectures"), the ARM family includes a wide variety of architectures which you need to build explicitly for when you're making your own stuff. For example, if you've got a Python wheel labelled aarch64 it will only run on 64-bit ARM systems, while one labelled armv7l will run on 32-bit ARM systems.

The Raspberry Pi 4's hardware can run both, but the default "Raspberry Pi OS" is 32-bit and exclusively runs armv7l binaries. If you want to use aarch64, you must install an OS other than the default.
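If you're not sure which side of that divide a given system falls on, the machine string from uname -m tells you. A tiny sketch (the mapping is mine, for illustration, not anything official):

```shell
#!/bin/sh
# Map a machine string (as printed by `uname -m`) to the kind of
# Python wheel you should expect to find for it.
wheel_arch() {
    case "$1" in
        armv7l)  echo "32-bit ARM" ;;
        aarch64) echo "64-bit ARM" ;;
        x86_64)  echo "64-bit x86" ;;
        *)       echo "unknown: $1" ;;
    esac
}

wheel_arch "$(uname -m)"
```

On stock Raspberry Pi OS this prints "32-bit ARM"; on Manjaro ARM it prints "64-bit ARM".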

Python Support

In the Python world, the vast majority of packages on PyPI are "pure-python" (ie. they will run on any system already running Python). However there's a lot of packages out there that're bundled with some compiled code (usually C or C++). These packages must be compiled for your architecture in order to run, and if your architecture isn't supported by a pre-existing build, you either have to build it yourself (painful, especially on a Pi) or you're shit out of luck.

For example, the popular cryptography library is not pure python and therefore must be compiled for the architecture it's running on. Thankfully, that project's maintainers support a variety of platforms but note that armv7l isn't one of them.

In fact, finding a package on PyPI with support for armv7l is quite rare. Instead, Raspberry Pi users have a special "hack" on their system (one of many I discovered in my travels), an additional Python repo, enabled by default: piwheels.org.

If you're running Raspberry Pi OS, you'll find that nearly all of your not-pure-python packages are not coming from PyPI at all, but are rather coming from piwheels.org: a repository of .whl files, built exclusively for the Raspberry Pi. This is pretty great, though it was definitely a surprise.
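The wiring for this is mundane: pip is simply handed an extra package index in its config. On Raspberry Pi OS the entry lives in /etc/pip.conf; the sketch below writes the same stanza to a temp file so it's safe to run anywhere:

```shell
#!/bin/sh
# Recreate the piwheels stanza from Raspberry Pi OS's /etc/pip.conf.
# Writing to a scratch file here; on a real Pi the target would be
# /etc/pip.conf itself.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
[global]
extra-index-url=https://www.piwheels.org/simple
EOF

cat "$conf"
```

With that stanza in place, pip consults piwheels.org alongside PyPI when resolving wheels, which is how the Pi-specific builds sneak in.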

If however you're not using Raspberry Pi OS and are instead using an aarch64-based OS like Manjaro, then there's no piwheels.org for you. Instead, you have to hope that the package you need has pre-built support for your architecture. Thankfully, aarch64 is much more common in PyPI, but it's not everywhere. The vosk package for example has armv7l packages but not aarch64 ones.

Finally, Poetry has an annoying bug/limitation in it that means you can't configure your pyproject.toml file to work across architectures. Your poetry.lock file will only store hashes for one architecture at a time, so if you run poetry update on an x86_64 machine, the resulting poetry.lock will be entirely different from one generated on an aarch64 machine. As this undermines the whole idea of a consistent, distributable, versioned lock file, it's rather disappointing.

The Operating Systems

So now that we know a bit about the limitations of Python in different operating systems running on the same hardware, let's talk about those systems in more detail.

Raspberry Pi OS (formerly Raspbian)

Raspberry Pi OS is Debian-based, but critically it is not your typical Debian system. In an effort to make using the Pi easy for everyone from children to seasoned professionals, the Raspberry Pi Foundation has applied a lot of tweaks and hacks to standard Debian which can catch you off guard if you're not ready for them.

Old & Busted Software

Like any Debian system, everything is old-as-fuck as the maintainers prefer stability over modern features. If you're using your Pi to control humidity in a greenhouse, this is probably a good idea, but if you're hoping to take advantage of modern graphical user interfaces, you're going to have a bad time.

For example, the current version of GNOME available for the Pi is two versions behind the GNOME project's release schedule and it's a good bet that that gap will grow with time. As for Firefox, the most recent version you can get is Mozilla's "extended support release" (ESR), which is a nice way of saying "we promise to support this version for years and years but it won't be meaningfully updated during that time".

What's more, simply installing GNOME on a standard Raspberry Pi OS image absolutely will not work because there's something called pi-package installed by default that claims to have installed an inferior version of gnome-settings and that conflicts with the would-be-installed version. You must instead use a "Lite" version of the image (the one that doesn't come with X or LXDE) and then install GNOME from there.
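For the record, the sequence that worked for me boils down to something like the following. Treat the package name as a guess at a minimal set (gnome-core is Debian's smaller GNOME metapackage), not gospel:

```shell
# Starting from the "Lite" image (no X, no LXDE), pull GNOME in on
# top and make the graphical target the default at boot.
sudo apt update
sudo apt install gnome-core
sudo systemctl set-default graphical.target
sudo reboot
```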

Special Configuration Pattern

Configuration of the Pi is done with a program called raspi-config which is installed by default, but if you're using a Pi 4, most of the options you can select in this tool will fail to apply.

As best I can tell, Bluetooth is entirely broken from the start. None of the usual patterns I would expect to get it working (like running systemctl start bluetooth and opening the Bluetooth UI) resulted in success. This is not a hardware problem, but a software one. I can only assume that there's some special Raspbian way to do this.

Non-standard Re-packaging

Chromium is a first-class citizen in Piworld, installed by default on the standard image, but strangely listed as chromium-browser rather than the usual chromium. You can even get Widevine support in it (so you can watch encrypted video on Netflix & Prime) simply by running apt install libwidevinecdm0. This deviates from what you see on a typical Chromium install, since modern versions of Chromium allow you to download Widevine support automatically. I can only assume that this is a special concession for the armv7l architecture.

Widevine support in Firefox appears to be impossible.

Kodi has been compiled to exclusively run without an X server or Wayland present. Undoubtedly this is to allow Pi users to just install Kodi and run it without the overhead of a UI they aren't using, but if you want that standard overhead, you're SOL.

Building Your Own Image

If your goal, like mine, is to distribute your app as a Raspberry Pi image, then you'll want to look into pi-gen, an automated system that lets you build a Pi image on an x86-based machine. It's impressively simple, but critically only runs on Debian-based systems. If you're running Fedora, Arch, or some other system, they have a Docker-based runner, but I couldn't get it to work. To get it working on my Arch system, I started a Debian VM and it worked beautifully... after consuming a whopping 42GB of disk space!
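For reference, the happy path inside that Debian VM looks roughly like this. The knobs come from pi-gen's README (IMG_NAME is the one required setting in its config file) and may have changed since:

```shell
# Build a Raspberry Pi OS image with pi-gen on a Debian machine.
git clone https://github.com/RPi-Distro/pi-gen.git
cd pi-gen

# pi-gen reads its settings from a file called `config`.
echo 'IMG_NAME=majel' > config

# Native build (Debian hosts only)...
sudo ./build.sh
# ...or the Docker-based runner for other distros:
# ./build-docker.sh
```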

My original idea was to build Majel automatically via GitLab's CI. With a disk footprint like that however, I'm afraid I'm going to have to rethink that idea.

Manjaro

After running afoul of all of the above, I started looking into alternative base images to work with. Thankfully, Raspberry Pi's excellent Pi Imager (available on FlatHub) makes the burning of alternative images super-easy, and I found Manjaro Linux (a flavour of Arch Linux) to be a really good starting point for my project. In fact, there's a GNOME variant available so you can burn an image that boots into GNOME shell!

As an Arch-derivative, it runs really close to the bleeding edge, so installing a modern version of Firefox and Kodi was super-easy. There were a few surprises though.

It's Not the Same Architecture

While Raspberry Pi OS is running on armv7l, Manjaro builds all of its packages for aarch64. That means that piwheels.org is out of the question, and that there's still going to be some Python packages that aren't published to PyPI with support for Manjaro on a Pi (looking at you vosk).

Wayland is the Default

It's a "new hotness" sort of OS, which means that the default UI server isn't Xorg, but Wayland. For most people, this is probably ok, but for me, since my project relies heavily on Xorg (Majel uses pyautogui which can't do Wayland) this was a problem. Thankfully, you can switch to using Xorg simply by installing xorg-server and uncommenting the WaylandEnable=false line in /etc/gdm/custom.conf.
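Scripted, those two steps look like this. The block operates on a scratch copy of the file so it's safe to run anywhere, and the commented-out form of the line is an assumption about how GDM ships custom.conf:

```shell
#!/bin/sh
# Step 1 (commented out here): install Xorg with Manjaro's package manager.
# sudo pacman -S --needed xorg-server

# Step 2: uncomment WaylandEnable=false so GDM starts an Xorg session.
# Using a scratch copy; on a real system the target is /etc/gdm/custom.conf.
conf="$(mktemp)"
printf '%s\n' '[daemon]' '#WaylandEnable=false' > "$conf"

sed -i 's/^#WaylandEnable=false/WaylandEnable=false/' "$conf"
cat "$conf"
```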

Widevine is... Problematic

While getting Widevine support in Raspberry Pi OS is easy, getting it working in Manjaro is pretty sketchy. Sure you can install modern versions of both Chromium and Firefox and they work great, but Widevine isn't there, and it won't autodownload, even in Chromium.

Instead, you have to install this crazy/amazing package called chromium-docker from the AUR. The installation process builds a local Docker image of Ubuntu wherein you install Chromium and you can take advantage of the aforementioned libwidevinecdm0. Running it from that point forward involves starting the Docker container and running Chromium from inside it. That's just... bananas.

Packaging is Tricky

The easiest way to make my project installable on Arch-based systems is to contribute an AUR package, but writing one that will install properly on both aarch64 and x86_64 systems was surprisingly not straightforward.

All the docs you read will tell you that there's one variable you set for package sources, conveniently called source=(). What took far too long to find was that you can actually suffix this variable name with the name of the architecture: source_aarch64=() and source_x86_64=(). You then do the same for the sha512sums=() variables and finally, you write some sketchy if/else Bash in your package() function to check if ${CARCH} is equal to aarch64 or x86_64 etc. Have a look at what I had to do for the vosk library if you're curious.
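Distilled, the arch-switch inside package() is just a branch on ${CARCH}. Everything below uses placeholder file names, not the real vosk artifacts:

```shell
#!/bin/sh
# In the real PKGBUILD you'd also declare source_aarch64=() /
# source_x86_64=() and matching sha512sums_* arrays; this sketch only
# shows the ${CARCH} branch with made-up wheel names.
CARCH="${CARCH:-$(uname -m)}"

pick_wheel() {
    if [ "${CARCH}" = "aarch64" ]; then
        echo "pkg-aarch64.whl"
    else
        echo "pkg-x86_64.whl"
    fi
}

echo "would install: $(pick_wheel)"
```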

Creating Your Own Image Looks Easy

Manjaro has all of their OS builds available on GitHub, so from the outside it looks like making your own build should be easy. I haven't tried it yet though, so I can't comment.

Everything Else

With the exception of the above, working with Manjaro on the Raspberry Pi is delightful. Getting my Flic button paired with the Pi via Bluetooth was 100% painless and straightforward, and the OS in general has all sorts of nice creature comforts built into it, like zsh by default, a pretty drop-in replacement for cat, and a nice set of custom icons.

Ubuntu

Finally, there's Ubuntu, which admittedly I actively dislike. The whole proprietary Snap system, the ugly re-skinning of GNOME, the dependence on Debian unstable under the hood so everything is both old and broken... Ubuntu is everything I don't want in Linux under one roof. It's also hugely popular though, and likely the only place I'll be able to get Widevine easily out of the box.

The first time I installed it, it locked up the mouse and keyboard for minutes at a time during the initial setup phase. As I write this, I'm still waiting for the initial boot to finish and the mouse is frozen on the screen. I'm not confident that my desire to see this work will be strong enough to overcome my contempt for this distro.

In General

The Pi is marketed as a tiny computer that you can leverage to do anything your heart desires provided you have the time, patience, and are comfortable with a low-power device doing the lifting. The question is though: is something as complicated as a voice-activated desktop automation system that plays streaming video even possible on hardware as limited as a Raspberry Pi?

It turns out, it's totally doable. In Raspberry Pi OS, I managed to bring up simultaneous instances of Firefox and Chromium and play "The Witcher" on Netflix by way of voice command. All processing, even the speech-to-text handling was done on-device and the performance was admirable.

The only caveat I will mention is that streaming video at full screen will absolutely not work at 4K resolution. In fact, I didn't get anything resembling a good framerate until I bumped the resolution all the way down to 1280x720. For my purposes though, this is completely reasonable: this is basically a very smart television after all and the quality of stream I get from Amazon Prime is abysmal anyway.

Conclusion

As long as this post is, it isn't even the end of my development process. I still have to give Ubuntu a fair shake and decide which of the above will be the reference platform for Majel. It'll install just fine on x86-based systems, but as the Pi is what I always envisioned for it, I want to get this part right before I officially "release" the new Mycroft-free version 2.0. Hopefully that'll be sometime in the spring, as I only have a few hours a night to work on it.

Until then, maybe the above will be useful to someone. If it was, please leave a comment! If it wasn't and you have questions, feel free to ask :-)

July 18, 2013 17:24 +0100  |  Apple Linux 13

The end result

The colour is relevant you see. This is how Mac people tell their hardware apart. "Not the grey one that came out in 2010, but the silver one that was released in 2011"... or whatever.

For the sake of those who might search for something like this post, the specs of this particular MacBook are:

Serial No: W872632DYA8
White/2.16/2x1G/120/SD/AP/BT-NLD
EMC No: 2139

The backstory

I recently acquired an older MacBook as part of an experiment and learning experience. Either that or an attempt at self-flogging. You see, I'm forced to use a Mac at work (though thankfully they let me use Linux on it), but my understanding of how to get Linux installed and working on a Mac is still pretty limited, so I picked up one of these to try to turn it into a simple file server. This post is the result of that torturous experiment.

So here's the deal: the firmware on the white MacBooks is broken, placing it in a unique and problematic position:

The Problem

blinking question mark

Older CDs that do not support EFI

These will boot on the MacBook, but won't be able to install an EFI-aware bootloader, so when you're all finished you end up with a blinking folder with a question mark on it (see image).

Newer CDs that support EFI

These will hang with the super-awesome-and-totally-useless prompt:

Select CD-ROM Boot Type :
1.
2.

Searching for this returns all manner of panicked mac users and clueless Windows users trying to figure out what they did wrong and very little helpful information regarding why an ISO that boots just fine on normal computers will just flake out like this on a mac.

Bootable USB sticks

These just won't boot at all.

My attempts

A lot of the how-tos out there point you to rEFIt, which is now defunct, replaced by rEFInd. Both of these projects aim to allow for dual-booting, which I didn't care about, and neither solves the problem above. All you get is a dual-boot environment with one of those environments refusing to boot.

It turns out though that if you spend enough days, and enough distributions (I started with Gentoo, then Fedora, then Gentoo again, back to Fedora), you will eventually stumble upon what you need. In my case it was the 22nd comment in a bug report to the Fedora mailing list. It explained that the MacBook firmware was broken, and the only workaround is to rebuild the install CD without EFI support, which would force the MacBook to revert to BIOS mode, boot the disc, and from there do what you need to install an EFI-friendly bootloader.

In other words, the MacBook was broken, but Apple didn't care because their software works just fine on broken firmware. If you want to use it anyway, you have to break your software to play along.

That's roughly 7 days of pain I've saved you. You're welcome.

The actual solution

So enough with the griping. Here's what I had to do to make this work:

# Download a Fedora ISO and mount it (as root) to somewhere convenient
mount -o loop Fedora-Live-Desktop-x86_64-19-1.iso /mnt/floppy

# Create a temporary place to copy the disk data
mkdir /tmp/image
cp -a /mnt/floppy/* /tmp/image

# Create a new ISO, stripping out all of the stuff that confuses the MacBook
mkisofs -r -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -V Fedora-Live-Desktop-x86_64-19-1 -o /tmp/fedora.img /tmp/image/

# Burn that baby to a new disc
cdrecord /tmp/fedora.img

You may still run into a problem where the boot process complains about not being able to find the disk labelled "Fedora-Live-Desktop-x86_64-19-1" or whatever you've downloaded. If this happens, you'll be dumped into an emergency shell where you can find out what the CD is calling itself these days:

# ls -l /dev/disk/by-label/

You should see the name of your CD in there; take note of it, reboot, and at the very first screen where it asks if you'd like to install Fedora, hit Tab to edit the options, and change the relevant portion of the command to use the appropriate disk name.
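What that edit at the boot prompt amounts to is rewriting the CDLABEL in the kernel command line. The root=live:CDLABEL=... syntax is dracut's live-boot convention, and the surrounding arguments here are illustrative, not copied from a real prompt:

```shell
#!/bin/sh
# The boot line as shipped, pointing at a label the CD no longer has:
cmdline='vmlinuz initrd=initrd.img root=live:CDLABEL=Fedora-Live-Desktop-x86_64-19-1 ro'

# Whatever /dev/disk/by-label/ actually showed you:
newlabel='Fedora-Live-19'

# Swap the label in, as you would by hand at the boot prompt:
echo "$cmdline" | sed "s/CDLABEL=[^ ]*/CDLABEL=${newlabel}/"
```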

And that's it. I now have Fedora running on my MacBook, which isn't ideal, since I would have preferred Gentoo, but since this is already 7 days of my life I won't get back, I'm going to stop here and be happy.

June 28, 2009 19:39 +0100  |  Geek Stuff Linux 2

You may have noticed some sketchy uptime on this blog lately. For a few days there, my site would be online for a few hours, then drop offline for a few then return. It's done horrible things to my traffic as well as my personal productivity.

You see, my router, Serenity, was falling apart. The little compact-flash card I was using was starting to flake out and I was seeing data corruption, segfaults and lots and lots of kernel panics. Not fun. This could be managed with the occasional reboot, but that's not a fix. No, I had to buy a new CF card and rebuild Serenity from the ground up.

This sounds more difficult than it really is. I just hopped down to London Drugs, bought a new card ($23 for 2GB! Looks like SD really did win that race), brought it home, opened up the box with my trusty screwdriver, moved a few parts around and replaced the card. The only thing left was the install and configuration... except I got stupid.

The case fan was unplugged. I'd removed it months ago 'cause it was making noise and I wanted a quieter house. However, the CPU was *really* hot, so I thought it might be a good idea to test out if the noise was really tolerable or not. I took the little power wire and plugged it into a free set of pins and sure enough, the fan came on -- the server also rebooted.

I'd plugged one of the power cables right into the motherboard on some unlabeled pin. There was some scary-sounding beeping, and then the smell of burnt metal and plastic... even smoke. My curiosity, coupled with a stupid mistake (learn your cables, Dan!) had quite literally "toasted" my router.

So I'm now using my wireless router (originally just an access point) as my primary router and I'm already not liking it. I'd gotten used to handy things like IP blocking, and routing non-standard ports to standard ports to get around lame security on other networks -- all of it gone. However getting a replacement for serenity is looking to be around $400 so that's not going to happen anytime soon.

So let this be a lesson to you kids: be careful when playing with expensive hardware... one mistake and you really could fry your board :-(

June 15, 2009 20:07 +0100  |  Activism Drupal Free Software Linux PHP Software Technology Work [at] Play 0

I attended my first ever OpenWeb conference yesterday and as per company policy, I have to report on and share what I learnt, so what better way to do so than to make a blog post for all to read?

General

OpenWeb is awesome. It's a conference where people from all over the world come to talk about Open design and communication and hopefully, learn to build a better web in the process. Attendees include programmers, entrepreneurs, designers, activists and politicians all with shared goals and differing skillsets. I shook hands with Evan Prodromou, the founder of identi.ca and WikiTravel, heard talks from the guys who write Firefox and Thunderbird as well as the newly-elected representative for the Pirate Party in the European Parliament, Rickard Falkvinge. All kinds of awesome I tell you.

Rickard Falkvinge: Keynote - On the Pirate Party

Founder of the Pirate Party in Sweden and now a representative in the European Parliament (thanks to proportional representation), Falkvinge was a passionate and eloquent speaker who covered the history of copyright, the present fight for greater control of so-called intellectual property and more importantly the far-reaching and very misunderstood effects of some of the legislation being passed to "protect" copyright holders while eliminating privacy rights for the public.

The talk was very in depth and difficult to cover in a single post so I encourage you to ask me about it in person some time. For the impatient though, I'll try to summarise:

The copyright debate isn't about downloading music; that's just a byproduct of the evolution of technology. As the printing press gave the public greater access to information, so has the Internet managed to disperse that information further. The problem is that the changing landscape has rendered certain business models ineffective, and these businesses are fighting to change our laws to preserve said models rather than change with the times. Ranging from the frustratingly shortsighted attempts to ban technologies that further file sharing (legal or otherwise) to the instant wiretapping of every Internet connection (and by extension phone call) of every free citizen without a warrant, many of these changes are very, very scary.

"All of this has happened before, and it will happen again" he said. Every time a technological advancement creates serious change for citizen empowerment in society, the dominant forces in that society mobilise to crush it. The Catholic church, gatekeepers of the lion's share of human knowledge at the time actively worked to ban the printing press. They succeeded (if you can believe it) in France in 1535. This time, it's the media companies and they're willing to do anything, including associating file sharing with child pornography and terrorism to do it. Falkvinge's Pirate party is becoming the beachhead in the fight for copyright reform. Now the party with the largest youth delegation (30%!) in Sweden, they are working to get the crucial 4% of the seats in Parliament they need to hold the balance of power and they need your help. He'd like you to send the party 5€ or 10€ per month and I'm already on board.

Angie Byron: Keynote - Women in Open Source

Those of you who know me, know that I can get pretty hostile when it comes to treating women like a special class of people (be the light positive or negative) so I was somewhat skeptical about this one. Thankfully, I was happy to hear Byron cover a number of issues with the Free software community ranging from blatant sexism (CouchDB guys... seriously?) to basic barriers to entry for anyone new to a project. There were a lot of really helpful recommendations to people wanting to engage 100% of the community rather than just one half or the other.

Blake Mizerany: Sinatra

Sinatra is a Ruby framework that went in the opposite direction of things like my beloved Django or Ruby's Rails. Rather than hide the nuts and bolts of HTTP from the developer, Sinatra puts it right out there for you. Where traditional frameworks tend to muddle GET, POST, PUT, and DELETE into one input stream, this framework structures your whole program into blocks a lot like this:

  require 'rubygems'
  require 'sinatra'
  get '/hi' do
    "Hello World!"
  end

That little snippet up there handles the routing and display for a simple Hello World program. Sinatra's strength is that it's simple and elegant. It lets you get at the real power at the heart of HTTP which is really handy, but from what I could tell in the presentation, there's not a lot available outside of that. Database management is done separately, no ORM layer etc. etc. It's very good for what it does, but not at everything, which (at least in my book) makes it awesome.

Ben Galbraith and Dion Almaer: Mozilla Labs

These are the guys who make the Cool New Stuff that comes out of Mozilla. You know those guys, they write a nifty web browser called "Firefox", I'm sure you've heard of them.

Mozilla Labs is where the smart nerds get together to build and experiment with toys that will (hopefully) eventually make it into a finished product. Sometimes that product is an add-on or plug-in, other times it's an entirely new project. It's all about how useful something is to the public. And as always, the code is Free. You may have even heard of Ubiquity, an extension to Firefox that promises to reshape how we use a web browser... they're working on that.

This time through, they were demoing Bespin, a code editor in your web browser. Imagine opening a web browser, going to a page and doing your development there: no need for a local environment, but without the usual disadvantages of aggravating lag or difficult, text-only interface. Now imagine that you can share that development space with someone else in real time and that you can be doing this from your mobile device on a beach somewhere. Yeah, it's that awesome.

We watched as they demoed the crazy power that is the <canvas /> tag by creating a simple text editor, in Javascript right there in front of us... with about 15 lines of code. Really, really impressive.

David Ascher: Open Messaging on the Open Internet

Ascher's talk on Open Messaging was something I was really interested in since I've been actively searching for information on federated social networking for a while now. The presentation was divided into two parts: half covering the history of email and its slow deprecation in favour of a number of different technologies, as well as how people are using it in ways never intended for the architecture. Major problems with the protocol itself were touched on, as well as an explanation about how some of the alternatives out there are also flawed.

He then went on to talk about Mozilla Thunderbird 3 and the variety of cool stuff that's happening with it. "Your mail client knows a lot about you" he says "but until now, we haven't really done a lot with it". Some of the new features for Thunderbird 3 include conversation tracking (like you see in Gmail), helping you keep track of what kinds of email you spend the most time on, who you communicate with most etc. and even statistical charts about what time of day you use mail, what kind of mail you send and to whom how often. It's very neat stuff. Add to this the fact that they've completely rewritten the plug-in support, so new extensions to Thunderbird mean that your mail client will be as useful as you want it to be.

Evan Prodromou: Open Source Microblogging with Laconica

Up until this talk (and with the exception of Falkvinge's keynote), I'd been interested, but not excited about OpenWeb. Prodromou's coverage of Laconica changed all of that.

Founder of WikiTravel and one of the developers on WikiMedia (the software behind Wikipedia), Prodromou has built a federated microblogging platform called Laconica. Think Twitter, but with the ability for an individual to retain ownership of his/her posts and even handle distribution -- with little or no need for technical knowledge required. Here, I made you a diagram to explain:

Federated Laconica vs. Monolithic Twitter

Here's how it is: whereas Twitter is a single central source of information, controlled by a single entity (in this case, a corporation), Laconica distributes the load to any number of separate servers owned by different people that all know how to communicate. Where you might be on a server in Toronto, hosted by NetFirms, I could be using a Laconica service hosted by Dreamhost in Honolulu. My posts go to my server, yours go to yours, and when my Twitter client wants to fetch your posts, it talks to NetFirms and vice versa.

The advantages are clear:

  1. Infinite scalability: Twitter's monolithic model necessitates crazy amounts of funding, and they still don't have a profit model to account for those costs. Laconica on the other hand means that the load is distributed across potentially millions of hosts (much like the rest of the web).
  2. You control your identity, not a private corporation.

The future is where it gets really exciting though. By retaining ownership of your identity and data, you can start to attach a variety of other data types to the protocol. For the moment, Laconica only supports twitter-like messages, but they're already expanding into file-sharing as well. You'll be able to attach images, video and music files, upload them to your server and share them with whomever is following you. After that, I expect that they'll expand further to include Flickr-like photo streams, Facebook-like friendships and LiveJournal-like blog posts. These old, expensive monolithic systems are going away. In the future we'll have one identity, in one place, that we control that manages all of the data we want to share with others.

Really, really cool stuff.

I went home that night and signed up as a developer on Laconica. I've downloaded the source and will experiment with it this week before I take on anything on the "to do" list. I intend to focus on expanding the feature set to include stuff that will deprecate the monolithic models mentioned above... should be fun :-)

Drupal Oops

I closed out the evening with some socialising in the hallway and some ranting about how-very-awesome Laconica was to my coworker Ronn, who showed up late in the day. He wandered off in search of my other colleagues and I followed after finishing a recap with Karen Quinn Fung, a fellow transit fan and Free software fan. Unfortunately though, I wasn't really paying attention to where Ronn was going, I just followed out of curiosity. It turns out that I had stumbled into a Drupal social where I was almost immediately asked: "so, how do you use Drupal and how much do you love it?" by the social organiser. James gave me a horrified "what the hell are you doing here" look and searching for words, I said something to the effect of "Um, well, I was pretty much just dropping in here looking for my co-workers... oh here they are! -- I like Drupal because it makes it easy for people to make websites, but I don't really use it because it gets in my way. I prefer simple, elegant solutions and working around something just to get it to work is too aggravating." Considering the company, my response was pretty well received. I backed out quietly at the earliest opportunity :-)

So that was OpenWeb, well half of it anyway. I only got a pass for the Thursday. I can't recommend it enough though. Really interesting talks and really interesting people all over the place. I'll have to make sure that I go again next year.

May 17, 2009 08:31 +0100  |  Linux Python 1

In the midst of one of those "because I can" moods today, I wrote a fun Python script to get my battery status and colour-code it so it could be loaded into my prompt. I'm posting it here 'cause I think it's nifty:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import re

battery = "/proc/acpi/battery/BAT0"

def getMax(path):
    return getValueFromFile(path + "/info", "last full capacity")


def getRemaining(path):
    return getValueFromFile(path + "/state", "remaining capacity")


def getValueFromFile(name, value):
    # "with" guarantees the file gets closed when we're done with it
    with open(name, "r") as f:
        for line in f:
            match = re.match(r"^%s:\s+(\d+)" % value, line)
            if match:
                return match.group(1)


def isCharging(path):
    with open(path + "/state", "r") as f:
        for line in f:
            if re.match(r"^charging state:\s+charging", line):
                return True
    return False


def render(path):

    level = int((float(getRemaining(path)) / float(getMax(path))) * 100)

    if isCharging(path):
        colour = "\033[1;36m" # Cyan
    elif level < 25:
        colour = "\033[1;31m" # Red
    elif level < 50:
        colour = "\033[1;33m" # Yellow
    else:
        colour = "\033[1;32m" # Green

    # The trailing comma suppresses the newline (this is Python 2)
    print colour + str(level) + "%\033[0m",

render(battery)
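To actually load it into a prompt, the script's output can be embedded in PS1 with command substitution. A sketch, assuming the script above is saved as ~/bin/battery.py and marked executable (the path is just an example):

```shell
# Add to ~/.bashrc. The single quotes matter: they defer the
# command substitution so the script is re-run each time the
# prompt is drawn, not just once at login.
PS1='$(~/bin/battery.py) \u@\h \w \$ '
```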

May 13, 2009 02:40 +0100  |  Employment Geek Stuff Linux 3

It happens, especially in recessions and when it does, there's often little or no warning. You come into work on a Friday, work through the day, and at the end of the day, as you're heading out of the office, the boss comes to you and says something to the effect of: "Sorry, but you're done here."

Not long after you manage to get over your panic attack, your boss drops another bomb: you're not allowed to access your computer again. All of your personal email and/or files that you have on there are going to be backed up onto a hard drive somewhere and gods know what the sysadmin is going to do with it.

Now one might argue that if you're putting personal stuff on a company computer, the company owns that stuff, and legally speaking, you might be right, but morally, it's your stuff that you access at work because work takes up the vast majority of your day. It only seems fair that if they're going to give you the boot with zero notice, you should have a chance to keep your emails and IM conversations with friends and family private.

So, in case you've ever wondered what might be a good way to keep your data more-or-less safe in such situations, I thought that I would post a little how-to here.

Option One

Don't put personal information on your company computer. It will save you all kinds of hassles, even if it does make life at work considerably less bearable.

Option Two

If you're going to put personal information on your company computer anyway, the best way to secure it is to have your computer continuously check a remote source (under your control) for instructions. You can then leave the instructions blank until Something Bad happens. For example, on a Linux machine:

  1. Create a tiny script file (call it "remoterun" for the sake of this example) and put this in it:
          #!/usr/bin/env sh
          curl -s http://somesite.com/instructions.txt | sh
    Now make it executable.
  2. Log into the server hosting somesite.com and place a file called instructions.txt in the document root. It can contain anything you want to execute on your machine. I recommend the deletion of your home directory (so long as there's no company data in there) and the removal of your personal account from the box. If you choose though, you can be a little more zealous and delete your music files, background wallpapers, or whatever else you like. Just don't delete anything belonging to the company or they will be well within their rights to come and kick your ass in all kinds of unpleasant ways. Here's an example of a simple instructions file:
          # Delete my music
          rm -rf /opt/share/music
    
          # Delete my account
          userdel --force --remove daniel
    
          # Delete the remoterun script
          rm -f /path/to/remoterun
    This part is very important: Do not put anything in this file that you do not wish to run immediately. The above would nuke your personal data, so only put destructive instructions in the file when you actually want to delete stuff. Until then, you can just leave it blank.
  3. Now that you have an instructions file, you just need to make sure that your office computer runs the remoterun script every hour or so. That way, the machine will run your instructions within an hour of you setting them up on somesite.com. In Linux, you can do this with cron:
    # crontab -e
    That will allow you to edit the crontab for the current user (be root, it's best for this kind of thing). Now you just add the crontab line:
    00 * * * * /path/to/remoterun

That's all there is to it. Every hour, your office machine will connect to somesite.com and execute whatever instructions.txt says. Windows users, I'm afraid you're on your own but the theory is the same.
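One caveat with the curl-piped-to-sh pattern above: if the download dies partway, sh still runs whatever arrived. Here's a slightly more defensive sketch of the same remoterun idea, wrapped in a function (the URL is still the placeholder from step 2):

```shell
#!/usr/bin/env sh
# Defensive variant of remoterun: curl -f treats HTTP errors as
# failures, and the download is captured in full before anything
# is handed to sh, so a partial fetch is never executed.
remoterun() {
    url="${1:-http://somesite.com/instructions.txt}"
    instructions=$(curl -fs "$url") || return 0
    printf '%s\n' "$instructions" | sh
}
```

Point cron at it exactly as in step 3.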

Now remember kids, use your powers for Good, not Evil. I've provided the above so you can be a responsible person while protecting your private life from someone who shouldn't have access to it anyway. I hope that you will do the same.

March 05, 2009 07:11 +0000  |  Family Friends Japan Korea Linux Python Scrubby Travel 4

It's true. I'm still alive, though I couldn't blame you if you'd considered otherwise. I've been neglecting this blog of late. Actually, I've been neglecting most of my life lately but soon, very soon, I shall have a break and I wanted to get this Long List of Stuff out of the way before that happens so here goes:

Carmen

A little over a month ago, I attempted to expand my cultural horizons by taking in My First Opera at the Queen Elizabeth Theatre. I accompanied Margaret, Dianna, and Aisha to the show and like good opera-goers we dressed up pretty for the night, then quietly mocked the yahoos who felt that jeans and a tshirt was appropriate.

For my part, I can't say that I really enjoyed the opera. (Sorry Diana). I didn't hate it either though. Frankly, it didn't do much for me at all. I found much of the music frustratingly simple when compared to a symphony or even broadway show, and the characters completely unbelievable. The emotion they conveyed (quite brilliantly I admit) didn't make any sense when the story seemed so trivial. I guess Opera just isn't for me.

I still have trouble getting over the fact that they would hold something like an opera in a venue that doesn't really lend itself to acoustic projection. The QE Theatre, while quite functional as a normal theatre, doesn't hold a candle to the acoustics you find in The Orpheum, yet they hold rock concerts in the latter and opera in the former. This makes no sense to me.

Choir

Not too long after my night at the opera, I went to my first choir practise in years, with Simple Gifts, a local amateur choir run by Ieva Wool, and for the most part, I liked them. The people I sang with had talent, the director was patient and helpful and overall everyone in the room seemed to really enjoy the whole experience. The only negatives were the average age of the singers (~50ish) and the fact that the practise was held on Tuesday nights... I had no idea how tiring a regular weekday practise from 7:30 - 9:30 would be, but it was.

I had the opportunity to try out the choir for two practises before I decided whether or not I was "in", and the decision of whether or not to keep going came down to a simple gut feeling: I was just too tired. That is, the idea of going to choir on Tuesday felt more like a responsibility ("you're going to like this, so you have to go") as opposed to a joy ("yay! choir!"). I chalked it up to the general energy level of the choir (dear gods I miss Mr. Rhan sometimes) and my own energy reserves at the end of my work day. I just couldn't give anymore, so I declined to join.

If my situation changes for the next "term", I'll drop in again and give it another go, but for now, I just didn't feel like I was getting what I needed out of it.

The Super Secret Project

My father is an Idea man. Much like myself, he has new ideas all the time, though the difference between us is that his ideas are usually profit-driven while mine remain the betterment-of-mankind types. His latest idea however has been snowballing into a full-blown project and will likely launch this year. Through the life cycle of this beast, he's been coming back to me asking questions about how he could do "x" and I would work out with him roughly how everything would work... well, it's time: now he wants me to build it.

I've done some research and it looks like I'll be installing Gentoo Linux on one of these running a really cool Python script I wrote that captures mouse clicks and logs stuff to the database and then pushes said data over the Internet to a master server via one of these things. It's gonna be fun.

Korea and Japan

And now for the big one: I'm going to Korea on Saturday and then to Japan on the 14th, then home by the 22nd. It's gonna be frickin' cool. My friend Susan, who's currently teaching English in Daegu, Korea, was looking for company for a Japan trip and I jumped at the chance (finances be damned!). The way I see it, Japan is too foreign a country for me to be comfortable exploring on my own, and frankly, few of my friends have the money or the interest in making the trip. This opportunity was too rare to pass up... and so I go!

It looks like the total cost of flights, trains and accommodation will be in the neighbourhood of $3000CAD which may sound crazy high, but you have to remember that it is the other side of the world -- the two trans-Pacific flights alone make up 50% of that sum.

It'll be fun to hang with Susan though -- we never spent enough time together when we were both in Toronto, so this will give us time to catch up :-) She has her heart set on a traditional costuming thing that they do regularly in a park in Tokyo, and I'm really stoked about both riding the subway there and visiting the Nintendo headquarters in Kyoto... no, I don't know if they have tours, but I don't care. I just want my picture in front of the Nintendo sign :-)

I'm currently taking orders for stuff people want me to bring back, so if you want on the list, just drop me a comment. Also, if you think that there's something I should see out that way, let me know and I'll try to add it to our itinerary. The cities I'll be in are: Seoul, Daegu (maybe), Tokyo, Kyoto, Okinawa City, and Naha.

Alright, I figure that makes up for my rather long absence. I'll try to be more studious when I'm blogging on the other side of the planet :-)

November 28, 2008 19:26 +0000  |  Geek Stuff KDE Linux 0

One of the reasons I switched to Arch Linux was that I didn't want to have to compile all of my packages anymore. However, in leaving Gentoo for the Arch world, I also gave up a certain amount of ease of customisability (is that even a word?). Gentoo does, after all, excel in letting you do whatever you want to your machine and there are some circumstances where that's pretty important... even for users like myself.

Such a situation presented itself when I realised that the KDE binaries shipped with Arch do not include debugging support. This is obviously in place to improve performance, but for a bleeding-edge product like KDE, this also makes it very difficult to offer a good bug report. Thankfully, Arch's build system (abs) allows you to compile any program you want and install it with the package manager with little trouble... so I did just that.

Below is a quick script I wrote to rebuild all of my KDE binaries with debugging enabled. It's commented so you know what's going on:

  #!/usr/bin/env bash

  # Create a workspace if it isn't already there
  mkdir -p $HOME/abs

  # Fetch a list of kde packages from pacman
  PACKAGES=$(pacman -Qs kde | grep -v '^ ' | sed -e 's/ .*//' | sed -e 's/local\///' | grep '^kde')

  # Loop through the package list
  for PACKAGE in $PACKAGES; do

    echo $PACKAGE

    # Copy the package to your workspace
    cp -r /var/abs/extra/$PACKAGE $HOME/abs/
    cd $HOME/abs/$PACKAGE

    # Edit the PKGBUILD file to use debugging
    sed -i -e 's/DCMAKE_BUILD_TYPE=Release/DCMAKE_BUILD_TYPE=RelWithDebInfo/' PKGBUILD;
    echo "PATCHED"

    # Make the package
    makepkg -s

  done

Once you've built all of those (it'll take a long time... KDE is huge), you can install each one with pacman:

  # pacman -U PACKAGENAME-VERSION-i686.pkg.tar.gz

It's also a good idea to recompile qt as well. For that, you just add -debug to the configure list in its PKGBUILD file.
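The same sed trick used for the KDE packages above can make that qt edit too. A sketch (the exact ./configure line varies between PKGBUILD versions, so eyeball the file before relying on this):

```shell
# Print a PKGBUILD with -debug appended to its ./configure call;
# redirect the output (or switch to sed -i) to apply the change.
add_debug() {
    sed -e 's|\./configure|./configure -debug|' "$1"
}

# e.g. add_debug $HOME/abs/qt/PKGBUILD
```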

For more information, please visit the Arch Linux wiki page on ABS.

November 14, 2008 00:33 +0000  |  Geek Stuff Linux SSH 0

For the longest time, I've been fighting with this problem:

$ ssh someserver.ca
Received disconnect from 123.123.123.123: 2: Too many authentication failures for username

It never asked for my password, it just flat-out failed. After some digging, I realised that the force behind this was my use of ssh-agent, a daemon that holds onto the myriad of keys (and their respective passwords) that I use to access all of my servers. It turns out that by default ssh-agent attempts to use every key you've got to access a server. However, because the destination server usually rejects a connection after too many failed attempts (typically six), the whole thing blows up before it ever gets to the "enter your password" step.

The solution is this handy one-liner in your ssh client config (~/.ssh/config or /etc/ssh/ssh_config):

  Host *
    IdentitiesOnly yes

Contrary to what you might think this means, IdentitiesOnly doesn't force the use of identities, rather it tells the client to only use identities explicitly defined for this host. This way my client uses identities assigned to a host via the config, and if one isn't set, it isn't used.
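For completeness, here's a sketch of what an explicitly-assigned identity looks like alongside that setting (the key filename is made up):

```
Host someserver.ca
  IdentityFile ~/.ssh/someserver_key

Host *
  IdentitiesOnly yes
```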

Why this isn't the default is beyond me.

November 09, 2008 12:53 +0000  |  Geek Stuff Linux 0

I think that it's been more than 12 hours. More like 14... it's all a blur really.

I started today with a lofty goal: do a complete system wipe of Moulinrouge, my file/web/mail server that hosts pretty much all of my life... including this site. I decided to take the last step in my abandonment of Gentoo Linux in favour of my new love, Arch Linux, the process of which only added to the difficulty. I also moved my DNS and DHCP servers to Serenity, my firewall machine, as I'd gotten tired of the various exceptions I had to make to host those services with Moulinrouge.

Strictly speaking though, the whole thing went rather well. I had rsync'd my entire filesystem over to the 1TB USB2 drive, and the Arch install ran with no problems at all. The biggest hiccup came when I realised that Exim isn't packaged with MySQL support in Arch, so I had to do a manual compile for that one using ABS. A pretty cool experience I might add, though frustrating when you consider how common such a setup may be. For those interested, I followed a helpful forum post on what needed to be changed and created a simple patch file for PKGBUILD so I can use it again later:

# pacman -S abs
# abs
$ mkdir -p $HOME/abs
$ cp -r /var/abs/extra/exim $HOME/abs/
$ patch $HOME/abs/exim/PKGBUILD PKGBUILD.patch
$ cd $HOME/abs/exim
$ makepkg
# pacman -U exim-4.68-5-i686.pkg.tar.gz

The other fun bit I discovered was SSH's ability to not only run its own version of secure-ftp (sftp), but also run it in a chroot environment with ChrootDirectory. This required a lot of experimentation so I thought that I'd post a few notes here:

  • In a chroot environment, logging is not possible until OpenSSH 5.2. Don't try, it'll only cause you pain.
  • You cannot chroot a user into her or his home directory as the "new root" must be owned by the root user. Instead, what I found worked well was setting up a series of user directories owned by root under /srv/http/untrusted/username which then had the user's websites inside.

Here's my sshd_config snippet:

Subsystem  sftp  internal-sftp

Match Group untrusted
  X11Forwarding no
  AllowTcpForwarding no
  # Won't work 'till 5.2
  #ForceCommand internal-sftp -l VERBOSE
  ForceCommand internal-sftp
  ChrootDirectory /srv/http/untrusted/%u
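Creating those root-owned per-user directories can be sketched as a small helper (run as root; "alice" and the website subdirectory are just example names):

```shell
# Build a chroot tree for one sftp user: the chroot directory
# itself must be root-owned (sshd refuses it otherwise), with a
# user-writable subdirectory inside for the actual site files.
setup_chroot() {
    user="$1"
    base="${2:-/srv/http/untrusted}"
    mkdir -p "$base/$user/website"
    chown root:root "$base/$user"
    chmod 755 "$base/$user"
    chown "$user" "$base/$user/website"
}

# e.g. setup_chroot alice
```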

Lastly, PHP in Arch is very different from my experiences in Gentoo, Ubuntu, Debian, Suse and Redhat. Even FreeBSD was more intuitive. For starters, Arch uses some less-than-common defaults in php.ini:

  • error_reporting = E_ALL
  • magic_quotes_gpc = Off
  • short_open_tag = Off

Then, when you try to start up Apache, you find that it's not loading PHP. To make that happen, you have to add the following to httpd.conf and reload your webserver:

LoadModule php5_module modules/libphp5.so
Include conf/extra/php5_module.conf

After all that though, you'll notice that MySQL and a suite of other extensions you're used to seeing as part of PHP aren't there. If you stopped by this site earlier for example, you would have seen the glaring errors complaining that mysql_connect() didn't exist. To make all of that work, you have to go back into php.ini, scroll down to the bottom and un-comment the various extension lines... among them:

extension=mysqli.so
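Rather than hunting for those lines by hand, sed can flip them on. A sketch (the php.ini path on Arch is assumed to be /etc/php/php.ini -- adjust to taste):

```shell
# Print a php.ini with the named extension's line un-commented;
# redirect the output (or use sed -i) to apply it.
enable_ext() {
    sed -e "s/^;\(extension=$1\.so\)/\1/" "$2"
}

# e.g. enable_ext mysqli /etc/php/php.ini
```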

There were other fun problems, but this post is already quite long and it's almost 5am now. Must get some sleep so I can finish it all up tomorrow!