
July 28, 2023 11:56 +0000  |  Employment Management Software

When I started as tech lead at Limejump, it was my dream job. Finally, I was going to be able to actually lead a project's technical direction, rather than spend a whole lot of time arguing about how I thought it should work. As it turned out, this job was a whole lot more than that, and all of it has been a fantastic experience.

Now that I'm leaving though, I find myself having to explain just what my job is to those who will step in to replace me, and I thought it pertinent to write this all down. Maybe someone will see it and find it useful, or maybe they'll call me out on my bullshit. Either way, the result will likely be net-positive.

External Relations

The idea that as technical lead I'd be spending a big chunk of my time not talking to my team at all was a big surprise, but that's how this has worked out. Over the last 2½ years, a surprising amount of my work has consisted of communication with other teams, our product owner, my manager, and even upper management directly.

Coordination

For the most part, a lot of the comms are about coordination. My team needs X to be finished by Team Y, so I'm talking to them about what they need to get X done. I can then later go back to my team and set reasonable expectations about the future, which will sometimes include a conversation about workarounds for the interim.

It's more than just harassing other teams about deadlines though. In many cases, I'll be asking for advice around what those teams are doing, what works best for them, what they might need from us or trying to arrive at a consensus of best practice across the company.

It goes the other way too. Other teams, when curious about what mine is doing, will usually just reach out to me directly. "Why does service X do Y? Can it do Z instead or as well?" This is a big part of my day.

Technical PR

If your team builds Something Awesome, but no one knows about it, it'll never be used, and so those efforts are effectively wasted. So part of my job is talking to other teams (usually just the nerds) and promoting some of the cool stuff we're doing. Maybe we've got a new library we think others might benefit from, or a new process for our CI that has improved things. Talking about this with other nerds earns our team respect and helps the company as a whole build on our experience.

Taking those Bullets

No one likes going to meetings, especially engineers who would rather be writing code. Honestly, I'd much rather engineers never have to be in any meeting they don't want to be in 'cause their contributions toward actually building things are much more valuable. To that end, if someone has to go to a meeting, I usually volunteer. It's my job to know everything about what my team is doing technically, so theoretically I should be able to advise on any subject related to what we're doing. Let the nerds do what they love instead.

Criticising Management

When I became a tech lead, I thought I'd never be in a situation again where I had to argue with my boss about the right direction for things, but I've learned that as you move up the chain, you're still writing software, just through additional layers of abstraction ;-)

There have been a few times where upper management has made decisions that I've disagreed with. Whether it was a choice to keep an antiquated legacy service alive, or to migrate a bunch of systems to another standard, it's my job to be critical of things I disagree with.

Sometimes I've been persuasive, and other times I've simply had to adopt a position of "well, at least the truth is where it needs to be". I've even reconsidered my position a few times and gone back to my team to support the new direction. In any case, I think it's important that a tech lead speak out when they see the company doing something they think is wrong. It's basically a big reason they're paying us.

Institutional Knowledge

As someone who's not necessarily deep in the code but rather leading a team of people developing (19!) different projects, I'm in the unique position of being able to "mostly know what's going on" in a lot of different areas. As employee turnover churns, that knowledge becomes more valuable, such that on any given day, about 20% of my conversations are with newer colleagues asking me why something is the way it is and whether it can be safely changed to do something else.

A lot of companies think that you can solve this problem with thorough documentation, but in a start-up atmosphere, where things are developed, tested, partially adopted, and then thrown away because of a discovered failure (move fast and break things!), expecting that everything be documented is a bit nuts. Even if you could document it all, no one would ever read it. Hell, if you've read this far into this post, you're probably in the 1%.

So, the best you've got in a lot of cases is good communication between your longer-running staff and the newer staff. Make sure people ask why a lot, so we can pass on lessons learnt.

Greasing the Wheels

My team is awesome and they all really know what they're doing, but sometimes they run up against something that blocks their progress. If that problem is political (management needs to fix something, or someone above needs to approve something) then they go to our engineering manager, but if it's technical they come to me.

I have a lot of days where I'll spend an hour or more with my nerds troubleshooting a problem, pair programming, or just fiddling with configurations together to get things working. Sometimes it's just me sitting in for some technical advice/guidance, and sometimes we're learning together. Either way, my involvement is usually only momentary, getting the engineers un-stuck so they can carry on being awesome.

Mentoring

Probably my favourite part of this job has been the mentoring. I've worked with some really brilliant people at various stages in their careers. With 23 years behind me, I get to play the "Elder Nerd" and talk about "that one time where I worked at a company where X happened".

The key thing here for me is that you have to put the interests of the person you're mentoring over the interests of the company as a whole. If you don't, they'll know it and they won't trust you. I think I've managed to cultivate a reputation where people know that I'll always be straight with them, and that's allowed me to have some really great conversations about personal and career development. I've also made some great friends.

Technical Direction

Much of what I do as a technical lead is not technical direction at all.

Imagining the Future

This was the most daunting part of the job when I applied for the role. I figured I was a pretty good coder, but could I actually lead a team? Why the hell would anyone follow me? I decided to look back on all of the tech leads I'd had over the years and apply the good stuff (obviously), but also look deeply at the truly terrible bosses I'd had and deliberately do the opposite.

To that end, I didn't direct the team at all. Instead, I spent months just getting to know the team, the context, and the various codebases we were responsible for. Over time I started to sketch out a diagram of where I thought we should be and updated it daily through various conversations.

After all that time, I had a pretty good idea of where I figured we should be going, but critically I never tried to impose that vision on the team. Instead, I used it to inform my conversations with them and slowly nudge us in the direction I wanted. The idea was to make sure that the team as a whole decided to go in a direction collectively with some guidance, rather than just slapping a diagram on the screen with "Ok kids, here's where we're going!"

Some concessions were made of course, but they were never treated like battles won or lost because there was never a battle at all. We decided, as a team, to build things this way. I've been really happy with the result, and I believe, so has the rest of the team.

Code Review

I don't write a lot of code these days, but I review tonnes of it. With a team of 5 other engineers each churning out multiple PRs a day, I'm usually the one going through that code.

For the most part, I'm not looking for bugs. Instead, I'm trying to make sure that the code is:

  • Safe: Have we made any decisions that could leak data or pose a security risk?
  • Boring:
    • Does it conform to standards?
    • Is it needlessly clever?
    • Can someone who's never seen this code before understand what it does easily?
    • Does it violate the principle of least surprise?
    • Is it self-documenting, or do I need a probably-out-of-date document to understand it?
  • Tested: I mandate 100% test coverage, allowing for explicit exceptions that must be defended during review. This may sound extreme, but the result is code that can be regularly and easily updated. On our projects, a complete Django update takes about an hour of developer time, while at previous companies it was weeks or even months of work combined with a lot of fear & uncertainty.
  • Performant: This is where I get to say things like: "We did this at Y company back in the day and it didn't go well, maybe try Z instead."
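
Those "explicit exceptions" can be made visible right in the diff so they're easy to defend during review. A minimal sketch, assuming coverage.py as the coverage tool (the function and failure path here are made up for illustration):

```python
import json


def load_config(path: str) -> dict:
    """Read a JSON config file, falling back to an empty config."""
    try:
        with open(path) as handle:
            return json.load(handle)
    except FileNotFoundError:  # pragma: no cover - only hit on misconfigured hosts
        return {}
```

The `# pragma: no cover` comment tells coverage.py to exempt that branch from the 100% requirement, and because it sits on the excluded line itself, a reviewer sees exactly what's being skipped and can ask why.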

I also try to give some time to questions around broader architecture. Should we be storing this code here, or should we instead be moving it into a different folder or even an external library or service? Sometimes these questions are more meant for later conversations though.

The pattern we usually follow is that unless there are "show stopper" bugs, security flaws, or violations of any team standards, I usually mark the PR as "Approved" and let the engineer decide if they're going to implement any of my suggested changes. It's a collaborative effort, and engineers shouldn't feel like their tech lead is writing their code for them.

Compromise

However, every once in a while someone writes something that I just think is a Bad Idea. It's not that the code is bad, but rather that it takes the wider codebase in a direction I'm not comfortable with.

This is a Hard Problem for me. In these instances I struggle with balancing what I think is the right direction for the project and making someone on my team feel like they've wasted their time, or worse, that I think they're a bad engineer. What follows is usually a dance of egos and an attempt to find some middle ground, which is not always possible.

This sometimes is a battle, and in the end, someone will have to give a little. I like to think that I've been reasonably conciliatory, but I guess I'll leave it up to my colleagues to be the judge there.

Cheerleading

Humans aren't ants. We need a reason to keep going, so if you work at a job that feels soul-crushing, you won't work there very long if you know what's good for you.

Collective Ownership

I can't take credit for this idea, as I'm pretty sure that Rob, our perpetual team sunshine, inspired this, but I'm a big proponent of it:

If you write the code, and I review it, it's not your code anymore. It's our code.

A lot of companies talk about "no fault retros" or a "culture of shared responsibility", but in 23 years I've never seen it done as well as we've managed in our team. Somehow, we've managed to foster this culture of collective ownership to the point where we carefully choose our pronouns when talking about our work.

  • "The server fell over when it received X"
  • "We made a change last week to call Y when X was received"
  • "Alright no problem, let's make a ticket for this so we can patch it up for tomorrow's release."

If someone tries to claim ownership of a bug or failure, someone always reminds them that they didn't cause this problem, we did. The result is a team that celebrates individual and collective successes and takes on failures as a shared burden.

Morale

Sometimes things suck. Sometimes there's a load of work ahead, or a colleague has left, or a project was killed. Whatever the cause, as the lead it's at least partially my job to try to keep spirits up, to make what needs to be done feel achievable.

Honestly, this is one of the harder parts for me as it always feels forced. I mean, I'm usually a rather emotional and animated person, but it's hard to step out of myself and try to elicit a particular feeling in others, especially if it's for the benefit of a company rather than a person.

The same goes for good news though. Pitching a subsidised night out to management when a project is delivered on time, or even just to acknowledge the efforts of individuals is a pretty great part of the job.

Actual Code

I used to have a tech lead that would regularly lament: "I didn't even get to write any code today!". As one of his engineers at the time, I thought that this was a pretty weird thing to get worked up about. After all, I was writing code all the time and it wasn't that great.

You start missing it though. If you're in a job where you're only ever looking at other people's code and not writing any of your own, you get... itchy.

Ticketed Work

I'm in stand-up every day, and I try to regularly take a ticket and hack away on it throughout the week. This work generally takes a back seat to everything above though, so I try to avoid taking any work that might require a lot of time or upon which other tickets depend so I don't end up blocking anyone.

Usually, I try to sharpshoot tickets whose work will inform future development, so that I can establish what I think are good patterns for what's coming down the pipe, but it doesn't always work out that way.

Gardening

Finally, if everything else is accounted for, I try to do a little of what I affectionately call "gardening": the process of looking at existing code and creating a pull request to make it a little more stable, performant, or just developer-friendly. Most of the team is focused on churning through tickets, and sometimes improvements are overlooked: typos in comments, missing type hints, somewhat kludgy ways of doing things that could be a lot cleaner and simpler if afforded the time and attention. On days when I need a break, I do some gardening, create a PR, and ask that someone review it when they've got a moment.


And that's it. That's my job. It's a whole lot more than I expected it to be when I answered the recruiter 2½ years ago, but to be honest, I really like it. It's kind of the perfect middle ground between hands-off management and hands-on trench work, and I wish more companies followed a model like this.

At my next job, my title will be "engineering manager", but as I understand it, the role won't be all that different, just with added line-management responsibilities. This probably means I'll have even less direct code access, but that's fine by me. I can satisfy the "itch" with some Free software projects. 😆

January 03, 2021 21:16 +0000  |  Economy Employment Free Software Health Politics Software

This year sucked. That line is probably enough to remember the nightmare that is 2020 when I'm (hopefully) looking back on this post in 10 years, but as it's my tradition to go into depth on the past year at the start of a new one, let's go a bit deeper into why this year sucked so much.

The Pandemic

This was the year that the COVID-19 pandemic took off. Lockdowns all over the world started around March, and for the more civilised countries (New Zealand, Taiwan, a few others) that was the end of it. The rest of the world, however, could not get our shit together.

From the talks of "natural herd immunity" to the politicising of the virus and its prevention as a left-wing conspiracy, nearly every country failed to do the right thing in the most calamitous way possible.

It's left the people with a sense of reason exhausted. I mean, we have experts in this field. Those experts told us what we needed to do to stem the spread. Our leaders overwhelmingly did not heed that advice and chose instead to let 1.8 million people die (so far).

Even while mass graves were being dug in New York, leaders in nearly every nation were refusing to even close the schools. Here in the UK, (home of the famous "take it on the chin" comment by our fearless leader) we had policies that actually encouraged people to eat out at local pubs, and no mask mandate. Now the UK holds the dubious distinction of being the source of a much more virulent strain of the virus. Other countries have closed their borders to us, but nearly all continue with anti-science policy that inevitably leads to more death.

Vaccine Development

There's some good news though: 3 promising vaccines have made their way through a (very rushed) development & testing process to be cleared for emergency use in Europe and North America (and presumably elsewhere). The rollout has (unsurprisingly) been a mess here in the UK, and now there's talk of actually mixing-and-matching the vaccines, which sounds insane to me, but again, unsurprising given the kind of leadership this country has.

From my (admittedly ignorant) read of the science behind this though, I'm currently on-board with getting a vaccine (or a "jab" as they call it here) when it's made available to me. As I understand the risks of so-called "Long COVID" vs. the nature of an mRNA vaccine, it's still a smart move in my mind.

Radicalised

Was 2020 a “bad year” or are we simply approaching the inevitable conclusion of living under an economic system that is fundamentally incompatible with human dignity and happiness?

Throughout all of this, I've become more "radicalised". My contempt for capitalism is more palpable, and I'm angrier every day.

All of this, all of this is a direct result of capitalism. From the Chinese government refusing to crack down on wild/exotic animal wet markets, to the world's pandering to their carelessness, to their covering up of the outbreak until it was too late, to the world's reluctance to close the borders, to anti-science policies in nearly every nation treating the working public like expendable peasants. All of it is driven by capitalism:

China

We've continued to trade with China and support their economy because it's profitable for the rest of us. It doesn't matter that they commit genocide or are among the worst polluters on the planet. We pretend that this is only their problem when logically we know that it isn't. The same is true for their public health regulations.

We knew that China's public health policy was a breeding ground for pandemics. We've seen it before. But isolating them? Punishing them for being a threat to world health? That would affect our profits.

And so we did nothing and China acted exactly as everyone knew they would.

Management once the pandemic started

The science was clear on all of this:

  • Close the borders
  • Close the schools, the churches, the markets, and the malls
  • Limit travel
  • Limit the spread by keeping people at home
  • Track and trace infected cases

But we all had rent and mortgages to pay. Around 300 million of us (the Americans) couldn't even have medical care if unemployed. How could anyone possibly do the right thing and follow the science?

Our governments could have stepped in. They could have put a moratorium on rent and mortgages. They could have mandated the expansion of grocery store delivery networks and required that no one be permitted to go to work if that work is not directly involved in a key industry like the food supply, public health, utilities, or the military.

The right thing would have been to do this for just a month or two and get a handle on the virus. Limit its spread and understand its behaviour. It could have been financed through a wealth tax or some other fiscal tool levied against those profiting from the pandemic.

We didn't do this though, because capitalism demands that we all go to work doing jobs that don't really matter so that the very rich few continue to accumulate wealth. It's a given that millions will die, but it's also understood we're all replaceable.

Disaster Capitalism

All of this is what Naomi Klein calls "disaster capitalism": the idea that disasters are leveraged (if not also created) by people who profit from them.

There are absolutely winners in all of this: Amazon and Tesco for example both posted record profits while exploiting their workforce. As The Guardian pointed out:

Bezos has accumulated so much added wealth over the last nine months that he could give every Amazon employee $105,000 and still be as rich as he was before the pandemic.

None of this is to say that there's some sort of illuminati cadre of rich assholes running the world. Only that the world is as it is because these sorts of people profit from it the way things are rather than how we all know they should be.

We don't need 2¢ USB sticks from China or next-day delivery of slippers from Amazon. We need a universal basic income, nationalised health care, and a government that understands the economy as a system of land, water, and people rather than currency.

This pandemic has happened entirely because we have prioritised personal wealth over humanity.

It's not just a bad year

Towards the end of the year, it became fashionable to refer to how we'll all be glad that 2020 is over, because somehow everything was going to be better in 2021. Nothing has changed though, and so even if the vaccine is rolled out smoothly and the pandemic subsides, all of this — in one form or another — will happen again because that is what this system was designed to do.

The worst is yet to come. Next up we're looking down the barrel of a crippling depression and the appallingly inevitable climate catastrophe. The skies above California literally turned red this year, and yet that nation still has no salient climate plan. The world community has done little more than talk about how we should probably do something, but fossil fuels are still subsidised by nearly every industrialised nation.

There's a reason you feel like things have only been getting worse: they have. Disaster capitalism is as much about profiting off of disaster as it is about demoralising the peasantry and keeping us fearful. We've been "holding on" for so long, hoping for things to get better when they absolutely will only get worse so long as we live under this system.

In Other World News

Despite the pandemic, a lot of things worth noting happened this year:

Black Lives Matter

George Floyd was murdered by a police officer and the country, the world was (finally) enraged. From what I've been hearing, very little has come of the rage though, as the pandemic has made mobilisations difficult. Still, calls for defunding or abolishing the police are finally being taken seriously, so that's a start.

Trump

Trump made it through all four years and got clobbered in an attempt at re-election. I maintain that if this pandemic hadn't happened, he would have won a second term (I have that little faith in the US), but with more than 350,000 dead so far and millions losing their jobs, there was no way he was going to win in a fair fight.

The question then was how much would the Republicans have to cheat to win this one, and they did their best: everything from gerrymandering, to restricting access to voting places, to sabotaging the postal system. None of it was enough to give Trump a win, though it may well have been enough to hold onto the Senate. We'll know in a few days with the Georgia run-off vote.

Oh, and there are widespread claims that the election was somehow fraudulent, and that Trump was actually the winner. This has led to Trump-devotees holding (maskless, of course) rallies calling for the arrest of Joe Biden.

And one more thing: Q-Anon is a thing now. There's a lot of overlap between these nuts and the nuts claiming that Trump actually won.

My Life, Directly

In comparison to any of the above, my life doesn't exactly feel significant, but this is my blog, so I'm going to cover that too.

Lockdown

The (limited) lockdown we had here in the UK was rough. I was just holding onto my sanity, being able to send my 1 year old away to the childminder during the work-week, but when that was all cancelled, Christina and I became full-time babysitters while also being full-time employees.

We "managed" this by working in shifts. I would work 4 hours while Christina looked after Anna, then I'd take care of Anna for four hours while Christina worked. When Anna napped midday, we'd both work, and when dinner came around, one of us would cook while the other took care of the kid, then she'd go down and both of us would go back to work 'till 11 or midnight at which point we'd go to sleep only to repeat this... for the entire month.

I won't complain though. It was hard, but at least we remained employed through the fortune of having remote-friendly work. I know that a lot of people in this country were looking down the barrel of no income and substantial rent to pay, so I know that we've been very fortunate.

Our childminder was freaking out when she heard the news that she couldn't keep her doors open, since no kids meant that her income was suddenly reduced to £0. Christina and I decided however that so long as our employment situation didn't change, we would continue to pay her as if Anna was in full attendance as usual.

Fear

The worst part of this though — at least for me — has been the looming fear. Yes, the odds of death are low, but they're still very high compared to almost anything you would choose to do on a daily basis. On top of that, the long-term health effects of COVID-19 are almost entirely unknown. There are reports of cramps and migraines lasting months, and permanent heart damage, so this isn't something anyone wants to get.

My parents are both very high-risk, and yet they continue to have regular visits with my brother who flies all over Canada for work. It doesn't help that my brother's attitude toward COVID is more dismissive than anything else.

Personally I've had breathing concerns for years ever since I contracted pertussis in my late teens. Every time I've had a bad flu since then, there have been moments where the coughing and seizing locks up my whole respiratory system and I literally can't breathe. In those moments, I'm taken back to that year where whooping cough was destroying my lungs and I think that maybe this time will be the last... and then it subsides.

...and that's the flu.

I may talk a big game about the macro-level implications of this thing, but I'm honestly — personally — worried.

Christina is less concerned (which doesn't help with my own fears). She's frustrated by the way this year has likely stunted Anna's social development, how we see our friends so rarely (always outside, at a "safe" social distance), and she remains (rightly) concerned about the way the vaccines have been rushed through, and how public health is once again being politicised: you're either happy to give your 2 year-old a vaccine that's never been tested on 2-year-olds, rolled out by a government with a demonstrated lack of interest in public health, or you're an idiot anti-vaxxer who hates Britain.

There's a lot of stress to go around.

Goodbye Workfinder, Hello MoneyMover (again)

On the corporate front, I said goodbye to Founders4Schools/Workfinder back in November, and while I'll miss a lot of the people there, I won't miss working there for a variety of reasons.

For the last 2 months of 2020, I went back to MoneyMover to help move some of their codebase forward. I'd been helping to keep things running in my off-hours for the last 2 years, but there were a lot of things that needed more dedicated attention, so I agreed to come back for a short stint to help out. It's a great place to work, so I've really enjoyed being able to work with everyone again.

Later this month, I'll be moving on to my next full-time job, this time with Limejump. That move warrants an entirely separate post though, so I hope to get to that soon.

Majel

Finally, the best news (for me anyway) this year was the "launching" of my latest side project, Majel. I won't be announcing it to the nerd world for a few days still, but I'm really happy with how it's turned out.

Majel is a front-end for Mycroft, an open-source Alexa replacement. Imagine being able to "install" Alexa on your laptop or a Raspberry Pi and know that it does what you want without eavesdropping on your conversations. Mycroft even sells dedicated devices that do the same thing (just like an Echo), again, all Freely licensed so you can extend it in any way you like.

Majel is one such extension, my add-on to the Mycroft system that allows you to control a web browser with voice commands. Sure, maybe Alexa can control a "smart" TV and play shows from Amazon Prime, but it's unlikely that Amazon will also let Alexa control Netflix, let alone a local library stored in something like Kodi.

So I wrote Majel to do just that. You can say stuff like:

Play The West Wing

and it'll look at your local library and play those files if you have them (remembering where you left off of course). If you don't have them, it'll ask Netflix & Amazon who has the show and then play it with the service that does.

It also does stuff like:

Youtube baby shark

Where it'll look up "baby shark" on YouTube and play the first search result, full-screen and on a loop. Anna was thrilled.

Finally, it plugs into my Firefox bookmarks to do handy things like:

Search my bookmarks for chicken

Where it'll draw up a touch-friendly web page full of chicken recipes from my curated collection.

It's all licensed under the AGPL and regardless of whether or not there's much interest in it, I'll likely continue to develop on it. I want to be able to tell it to do basic web stuff, like do a Google/DuckDuckGo search for something or pull up a Wikipedia page on an arbitrary topic. I also want to get it to a point where I can say:

Call the parents

and have it start a video call, but that'll likely require working with something like PyGUI, so it may be a while before I can figure that out.

Anyway, I'm really happy with it, and it represents the culmination of roughly a year's work, squeezed into my off hours after Anna's gone to bed and when I'm not already expected to do some off-hours contracting. I'm hoping it'll show the Mycroft project a way toward making these digital assistants a more visual experience, but even if it flops, I'm still happy to have it running on my old Surface Pro 3 in the kitchen.

May 18, 2018 09:11 +0000  |  Software

Objects vs. Functions

I volunteer with a few groups of new software developers and a question that keeps coming up is: "why should I use objects?". Typically this is couched in something like: "I just have a lot of functions organised into files, and I'm not sure what reorganising all of my code to be OOP would really do for me".

So I thought I'd write out a detailed explanation with examples & such for those who might find it useful. Here goes:

It's ok not to OOP

I love me some objects. In fact, I'll often use objects for no other reason than to have my code neatly tucked into classes, even when a function will do, but I'm crazy like that. The truth is, a lot of smaller projects & scripts don't need OOP to do what you want, but once you really get a handle on what they can do for you, you might find that you're Classifying All The Things.

Regardless, please don't read this as some sort of "Classes are the One True Path" rant, 'cause it's not.

Foodz

I like food, so the examples I'm going to use are food-based. For our exercise we're going to be writing code for an imaginary kitchen run by robots, an idea that in 2018 really isn't all that crazy.

We've got 50 people to feed at 20 tables. Our code needs to keep track of who ordered what from what table, as well as prepare the food in the kitchen and deliver it to our hungry patrons. That's a lot of code, so we won't be writing it all out here. Instead, I'll break down a rough idea of how this might be done using procedural code (functions in files) vs. object-oriented code.

The Procedural Way (functions)

When you're working with functions, you're effectively pushing data around and modifying it when necessary. For our kitchen example, you might start with a complex array of data for our patrons:

patrons = [
    {"name": "Amber", "table": 1, "order": ["salad", "cake"]},
    {"name": "Brianne", "table": 1, "order": ["steak", "cake"]},
    {"name": "Charlie", "table": 2, "order": ["chicken burger", "pudding"]},
    {"name": "Dianna", "table": 2, "order": ["salad", "pudding"]},
    ...
]

That takes care of who is sitting where and what they ordered. Next we also need instructions for making the food, which uses a branching system of rules:

def make_food(food: str) -> str:
    if food == "salad":
        return make_salad()
    if food == "burger":
        return make_burger()
    if food == "steak":
        return make_steak()
    raise ValueError(f"No recipe for {food}")

Each of those functions in turn might call other functions to do the "making". For example, make_steak() might include something like marinade(), or salad.py could have another function called make_dressing() which is called from inside make_salad(). The key thing to note here is that each of these functions is either very specific, or contains branching code to decide which thing to call next. It can get very messy, very fast as you add more and more types of food.
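To make that concrete, here's a rough sketch of how those helpers might nest (the function bodies are invented for illustration):

```python
def make_dressing(kind: str) -> str:
    return f"{kind} dressing"

def make_salad() -> str:
    # make_salad() reaches down into its own helper...
    return f"salad with {make_dressing('ranch')}"

def marinade(cut: str) -> str:
    return f"marinated {cut}"

def make_steak() -> str:
    # ...and make_steak() has its own, entirely separate helpers.
    return f"grilled {marinade('sirloin')}"
```

Each dish carries its own private chain of function calls, which is exactly what makes the branching grow.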

Finally, we need to serve the stuff. The kitchen will announce to the waitbots that the food is ready (that's an exercise in queues & pub/sub for another day) and the waitbots will bring the food to everyone.

The food plates will have a type, like salad or burger, but we also need to include who the food is for. Doing this procedurally usually involves bundling bits of information together and passing them around, so your business logic might look something like:

def process_order(patron: dict) -> None:
    for food in patron["order"]:
        deliver_to_table(patron["table"], make_food(food))

for patron in patrons:
    process_order(patron)

Since make_food() returns prepared food, we can just pass its result to deliver_to_table() along with the table number and we're good, right?

The thing is, this is really complicated, and we've got a lot of raw data floating around in a rather rigid system. What if a patron changes their mind and wants to order a side of fries? What about all the different types of burgers out there: are we going to write separate functions for make_chicken_burger(), make_veggie_burger(), and make_cow_burger(), or just one larger function with a bunch of if pattie == "chicken": code in it?

At first glance, this looks like something that will work, but in the long run, managing and extending this code is going to be painful.

The Object-oriented Way

Objects have two primary strengths:

  • Encapsulation
  • Extendability

I'm going to explain both before we get to how to run this kitchen the OOP way.

Encapsulation

It's just a fancy way of saying that objects know how to do stuff. Rather than taking raw data and acting upon it, you just create an object and tell it to do a high-level thing -- it will figure out the rest.

For example, assume that cooking a steak requires using a grill and (hopefully) includes a long series of instructions on how to use that grill. Assume also that it involves a marinade or rub and maybe a selection of sauces to apply.

In a procedural system, you'd have a function called make_steak() which would likely have internal calls to marinade_steak() and grill_steak(). These functions would likely differ from the instructions for cooking cow burgers or chicken burgers, so each food type would require its own special function with its own rules. Sure, you can probably have some functions cross-call each other, but the more you do that, the messier your code becomes.

In an OOP system, the interface is as simple as calling steak.prepare() -- the steak object will figure out the rest for you.

Extendability

The "figure out the rest" part is only different from the procedural method because you can do stuff like subclassing in OOP. A rib-eye steak is a lot like a sirloin steak, but there may be slight differences in the preparation. Subclassing means that you can take the standard rules for steak preparation and extend them for your specific case. Suddenly your code is a lot simpler.
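As a quick illustration (these steak classes are invented here, not part of the kitchen example below), a rib-eye might only override the one step that differs:

```python
class Steak:
    def prepare(self) -> str:
        # The standard steak routine: marinade, then grill.
        return f"{self.marinade()}, then grilled"

    def marinade(self) -> str:
        return "standard marinade"


class RibEye(Steak):
    def marinade(self) -> str:
        # Only the marinade differs; the rest is inherited as-is.
        return "peppercorn rub"
```

Everything about grilling stays in one place, and the rib-eye only states what's special about it.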

Have a look at the examples to see what I mean.

Our OOP Kitchen

We need to keep track of our patrons: where they're sitting and what they've ordered. We could create a Patron class for that, but as the patrons aren't really doing anything in our exercise, this would be overkill. You could write a Patron class, but it wouldn't do much more than hold data, which a dictionary or list will do just fine already:

class Patron:
    def __init__(self, name: str, table: int, orders=None) -> None:
        self.name = name
        self.table = table
        self.orders = orders or []

Well, it's neat & clean, so let's keep it for now. I suppose we could later extend Patron to include a .pay() method that would tally the costs for their meal and pay from their bank account, but for now, let's just use this as an example of a really simple class.

Now, as each person comes through the door and they're seated, we create new Patron objects and attach them to our list of patrons:

patrons = [
    Patron(name="Amber", table=1, orders=["salad", "cake"]),
    Patron(name="Brianne", table=1, orders=["steak", "cake"]),
    Patron(name="Charlie", table=2, orders=["burger", "pudding"]),
    Patron(name="Dianna", table=2, orders=["salad", "pudding"]),
    ...
]

So far, not very useful. It's basically the same as our procedural system. However now let's make our code smarter and do away with these strings for the food, replacing them with objects.

We'll start with a Food class:

class Food:

    def __init__(self, name, price) -> None:
        self.name = name
        self.price = price
        self.calories = 0
        self.is_prepared = False
        self.is_served = False

    def prepare(self) -> None:
        self.is_prepared = True

    def serve(self, table: int) -> None:
        # Delivery logic would use the table number.
        self.is_served = True

    def get_price(self) -> int:
        return self.price

We now have a thing (food) that knows how to prepare itself. Of course right now, all preparation does is set .is_prepared = True, but should that ever need to change -- to, say, notify a central server that a particular food was just prepared -- you only need to modify the Food class, and your business logic won't know the difference.
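To see that insulation in action, here's a variant of Food where prepare() grows a new side effect without any caller changing (the notifications list is a stand-in for a real kitchen-display call):

```python
notifications: list[str] = []


class Food:
    def __init__(self, name: str, price: int) -> None:
        self.name = name
        self.price = price
        self.is_prepared = False

    def prepare(self) -> None:
        # New behaviour slotted in here; callers still just say .prepare().
        notifications.append(f"{self.name} is ready")
        self.is_prepared = True


soup = Food(name="Soup", price=400)
soup.prepare()
```

The business logic calling soup.prepare() is identical before and after the change.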

So that's encapsulation, but let's do the extension part. Let's define a series of different foods:

class Salad(Food):

    def __init__(self, name, price, dressing) -> None:
        super().__init__(name, price)
        self.dressing = dressing
        self.calories = 50

    def prepare(self) -> None:
        self._add_dressing()
        super().prepare()

    def _add_dressing(self):
        self.calories += 200


class Burger(Food):

    def __init__(self, name, price) -> None:
        super().__init__(name, price)
        self.temperature = 22
        self.calories = 300
        self.pattie = None  # Defined in the subclasses

    def prepare(self) -> None:
        self._grill()
        self._add_bun()
        super().prepare()

    def _grill(self):
        self.temperature = 100
        self.calories += 20

    def _add_bun(self):
        pass  # Obviously important, but I'm not sure how to code this.


class ChickenBurger(Burger):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.pattie = "chicken"


class CowBurger(Burger):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.pattie = "cow"
        self.calories += 30

This is the magic of extending your classes: you get to be lazy.

We defined Food once and since everything else is a kind of food, we extend that class to further define the food we're talking about:

  • Salads are low on calories, but need dressing
  • Cow burgers are calorific
  • Chicken burgers have more calories than salads, but fewer than cow burgers
  • Both cow & chicken burgers are prepared the same way, so we have the intermediary Burger class that knows how to ._grill() and ._add_bun().
  • Salads need to have an _add_dressing() step in their preparation.

Note that everywhere, we're calling super() to make sure that we get the benefits of the parent class. Not only that, but doing this ensures that if we change the parent class (say we start notifying someone that food has been prepared), then all the child classes automatically pick up that change.
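If the order of those super() calls ever feels mysterious, a stripped-down pair of classes makes it visible (the names here are invented for the demo):

```python
class Food:
    def __init__(self) -> None:
        self.steps: list[str] = []

    def prepare(self) -> None:
        self.steps.append("marked prepared")


class Burger(Food):
    def prepare(self) -> None:
        self.steps.append("grilled")
        self.steps.append("bun added")
        super().prepare()  # the parent's behaviour still runs, last


burger = Burger()
burger.prepare()
```

After the call, burger.steps reads ["grilled", "bun added", "marked prepared"]: the subclass does its own work, then hands off to the parent.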

Now, let's go back to our patron definition and spice that up a bit:

patrons = [
    Patron(
        name="Amber",
        table=1,
        orders=[
            Salad(name="House", price=350, dressing="Ranch"),
            Cake(name="Death by Chocolate", price=250, flavour="Chocolate")
        ]
    ),
    Patron(
        name="Brianne",
        table=1,
        orders=[
            Steak(name="Rib Eye", price=1399),
            Cake(name="Lemontastic", price=200, flavour="lemon")
        ]
    ),
    ...
]

Now your business logic looks like this:

for patron in patrons:
    for food in patron.orders:
        food.prepare()
        food.serve(patron.table)

At this stage, your code is ready to be extended like crazy just by editing the objects themselves:

  • If you've introduced a new buffalo burger that has special preparation instructions like combining with coriander, you can just create a subclass of Burger with special instructions to do just that.
  • If you run a promotion on burgers, you can override .get_price() in your Burger class and put your discounting logic in there to affect all burgers.
  • If you want to trigger a notification of some kind when the food is served, you just update the code in Food.serve().
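The burger promotion, for instance, could be as small as this (the 20% discount figure is invented):

```python
class Food:
    def __init__(self, name: str, price: int) -> None:
        self.name = name
        self.price = price

    def get_price(self) -> int:
        return self.price


class Burger(Food):
    def get_price(self) -> int:
        # Promotion week: 20% off every burger, whatever the pattie.
        return int(super().get_price() * 0.8)
```

Burger(name="Classic", price=1000).get_price() now yields 800, while every other Food is unaffected.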

As a bonus, your code is a lot cleaner because your function calls aren't riddled with food-specific names like prepare_steak(), grill_steak(), etc. Instead, you have one concise interface: .prepare() and .serve(). Let the object figure out what that means to it.

Finally, your objects know more about themselves. Say, for example, we now want to tally a per-patron bill at the end of the night. OOP makes this easy: we just update our Patron class with a .get_bill() method:

class Patron:
    def __init__(self, name: str, table: int, orders=None) -> None:
        self.name = name
        self.table = table
        self.orders = orders or []

    def get_bill(self) -> int:
        for order in self.orders:
            print(f"{order.name}: {order.price}")
        return sum(order.price for order in self.orders)

After that, you need only call .get_bill() on each patron to get what they owe. No looping over complex data sets, or calling special functions for calculations. You could even modify Patron to allow for coupon discounts -- your interface is the same: .get_bill() while the Patron knows what that means.

July 30, 2017 19:23 +0000  |  Software 1

I'm just writing down my thoughts here in the hopes that Someone Smarter Than Me might be able to shed some light on the idea, or perhaps even work with me to make it happen.

I'm reading more and more about how fake news stories are circulating, and how technology has developed to the point where we can literally create images, audio, and video of events that never happened but appear as though they did. The effort so far seems to be in the area of somehow detecting a fake by searching for evidence of tampering, but this to me feels wrong-headed: it's expensive, slow, and will always be a step behind the fakes.

Why instead do we not simply sign each file on a sub-channel so it can be easily proven to be legit from the source?

For example, the BBC does a story about a politician and includes with it a picture of her doing something interesting. This picture is then circulated around the web with a few bits of information hidden inside the EXIF data:

  • The original source organisation (BBC)
  • The signature of the image based on the BBC's private key
  • The original URL of the image (maybe?)

The image is then re-shared onto Facebook, where they've got simple software that:

  • Reads the original file and authenticates its origin against the BBC's public key
  • Resizes the image for its own purposes
  • Appends a second signature using Facebook's private key
  • Posts the image into the user's timeline with a "Verified BBC image, resized original from Facebook" caption

If the image is re-shared onto Twitter, or Google+, or Diaspora, these services will only be able to know that the image came from Facebook, but theoretically this still means more than not knowing the origin at all.

The goal is to create a means of authenticating the original source -- or at least a source more credible than "Jim's computer" -- and perhaps even the chain of modifications to that source. There's also no reason this couldn't be applied to all kinds of media.
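For the flavour of the sign/verify flow, here's a toy sketch using Python's standard library. One big caveat: HMAC is symmetric (signer and verifier share a key), whereas the real scheme needs asymmetric signatures (e.g. Ed25519) so that anyone holding the BBC's public key can verify:

```python
import hashlib
import hmac

# Stand-in for the publisher's signing key; a real system would keep a
# private key and distribute only the public half.
PUBLISHER_KEY = b"not-a-real-key"


def sign_image(image_bytes: bytes) -> str:
    return hmac.new(PUBLISHER_KEY, image_bytes, hashlib.sha256).hexdigest()


def verify_image(image_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_image(image_bytes), signature)


original = b"\x89PNG...image data..."
sig = sign_image(original)                      # embedded in the EXIF data
assert verify_image(original, sig)              # untouched copy verifies
assert not verify_image(original + b"!", sig)   # any tampering fails
```

Each re-sharer (Facebook, in the example above) would repeat the same sign step with its own key after resizing.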

Maybe this technology already exists, though a cursory search didn't turn up anything for me. Anyone have any bright ideas?

January 22, 2014 17:46 +0000  |  Employment Software Web Development 0

Every once in a while I hear people speaking with authority about what exactly agile software development is, and the funny thing is, they usually conflict with other statements with similar authority about agile. Often, this is coupled with negative comments about how agile is impractical because X, which is frustrating, because some of my most productive years were spent in a fully agile office environment.

So I thought that I'd write something about agile as well, if for no other reason than to hopefully point people in the direction of what I know to be a very efficient and practical means of getting stuff done. I don't want to claim that this is the One True Way of agile development though, as I'm not interested in having the kind of conversation where we re-classify everything for the sake of giving it a name. My team lead at the time, Mike Gauthier, called this system agile, and that's good enough for me.

Talk Less, Code More

The goal behind agile is to have developers spend time doing what they love: rolling code, and to keep them out of meetings they want no part of to begin with. Instead, developers have only three responsibilities beyond writing code throughout the sprint. I'll cover these in more detail below:

  • A morning stand-up meeting: every day, 10 minutes
  • Sprint meeting: 1 hour
    • 30min to recap the last sprint
    • 30min to prepare the next one
  • Any additional initiative taken to talk to the client about what they want

Note what isn't in that list:

  • Requirements meetings
  • Proposals
  • Logging hours
  • Documentation

The idea behind agile is essentially: "Here's a task, go!". The key to making this work is to keep the tasks simple and concise, so that the result of the sprint is incremental. Read: easy to deploy, with no surprises.

The rapid pace of an agile project means that the usual slow processes of planning meetings and wiki documentation become an exercise in futility: the job is done before it's planned, and it's changed not long after it's documented.

Stand Up

It sounds like a pointless process, but it's probably the most powerful part of an agile system. The morning "stand up" meeting, or "scrum" is exactly what it sounds like: the entire team stands up in a corner of the room to answer 3 questions each:

  1. What'd you do yesterday?
  2. What're you expecting to do today?
  3. What happened yesterday that prevented you from doing what you needed to do?

Each developer should talk for no more than a few minutes, answering these questions point blank. It's the opportunity for the team lead to address whatever problems were mentioned (after the meeting), and for other developers to find out that their colleagues are waiting for them to finish something.

Note that this meeting is not for design discussions, or gripes etc. Rather, the purpose is to be a quick update on what's going on -- which is why you're supposed to stand up through the whole thing. The minute someone starts to look like they need to sit, that's your cue that the meeting has gone on too long.

Sprints

Think of sprints as a deploy schedule, but short and seemingly insignificant in what they produce. While a typical software deploy schedule may last months or even years, consisting of massive upgrade paths and a long complex list of changes, sprints are typically 1-2 weeks long. You write the code, and it's live in a few days.

The big difference from other methods is that sprints are incremental, so while new features roll out bit by bit, bugs are fixed weekly without having to maintain multiple branches for extended intervals.

Keeping the sprint short ensures 4 things:

  • The tasks are always short-term and easy to comprehend both for developers and clients
  • Clients see progress on a regular, predictable schedule
  • Releases are predictable, and easy to break new features into
  • Your team has a concrete and easy to understand goal to work toward

Code Debt

But what about those elaborate project charts with tasks designated to different developers, all colour coded by week, accounting for availability?

Gone. All of it. Throw it out. You now have a binder full of post-its, or if you're feeling all 21st century about it, a Jira task list. This bundle of tasks is your code debt and shouldn't be rigidly organised, as priorities are expected to change from sprint to sprint. At most, the PM should keep a loose tally of priorities, so as to make the sprint planning meetings go smoother.

Chipping Away at that Debt

At the start of every sprint, you hold a meeting in which the project manager talks to the developers about what's most pressing in terms of bug fixes and new features. Importantly, this is a two-way conversation: the PM representing the needs of the client, and the developers representing their own limitations and the quality/maintainability of the code.

This sprint planning meeting is where you take stuff out of your code debt, break it into bite-sized chunks, and assign it to the current sprint. You need to keep the tasks small and easy to achieve in under 4 hours each. If a task takes longer than that, it needs to be broken down further. This has a few big benefits:

  • Big jobs can be spread around, potentially finishing them faster
  • Knowledge sharing is easier as everyone has the opportunity to work on smaller portions of a greater whole.
  • It's an easy way to make big jobs suddenly feel possible.
  • Finishing a task results in a sense of accomplishment for the developers
  • Incremental change gives the client a sense that something is being done

No Ticket, No Work

Now that your sprint planning meeting has broken up a portion of your code debt into tasks, the team is presented with a whiteboard with a simple grid layout:

+--------+--------------+-----------+------------+---------------+
|  Todo  |  Developers  |  Working  |  Finished  |  QA Complete  |
+--------+--------------+-----------+------------+---------------+
|        |  Daniel      |           |            |               |
|        +--------------+-----------+------------+---------------+
|        |  Aileen      |           |            |               |
|        +--------------+-----------+------------+---------------+
|        |  Charlie     |           |            |               |
|        +--------------+-----------+------------+---------------+
|        |  Aisha       |           |            |               |
+--------+--------------+-----------+------------+---------------+

That Todo column is where you put the amorphous blob of post-it notes, each one representing one of the aforementioned bite-sized tasks for this sprint. Note that while in this column, they aren't actually assigned to anyone; they're simply waiting for someone to grab one and stick it onto their Working column.

Now, say that there are 30 tasks to complete before the end of the sprint. Aileen sits down at her desk and as she has nothing to do yet, she looks at the board and grabs the post-it about fixing a bug in email notifications. She moves the post-it from the Todo column into the Working column on her row, and opens her editor.

When the job's done, she moves it to Finished, at which point the QA team can now take a look, and when they're happy with the job, they move it to QA Complete. If however her change broke something, or if it's simply unsatisfactory, they move the post-it all the way back to the Todo column, where Charlie might grab it later that day, since Aileen has moved onto another ticket about the statistics engine.

In practice, developers will often gravitate toward tasks they're familiar with, and they'll often leave tickets that have been bounced back by QA for the initial developer, and this can be ok. However, if one developer becomes a dominant force on a particular component, they might be forbidden from working on it for a while, to make sure that the other developers have a chance to spend some time learning how that software works.

The most important part of this is that developers aren't supposed to do any work unless there's a ticket for them. This keeps people on-task toward completing the sprint on-time and as expected. If there's other work that deserves attention, this is best brought up at the next sprint planning meeting.

Spikes

It's about at this point where people start with comments like "What if the server goes down? Are we expected to wait until the next sprint to fix it?". Obviously not. Emergencies or "directives from on high" are things that can't wait and by their nature they can't be part of the sprint plan. They're also rare, so breaking a working system to accommodate them is a little absurd.

The solution is what's called a "spike": a task injected into the Todo list, typically flagged to be done as soon as possible. Its presence in a sprint taints the sprint, so that it can be pointed to in the event of an overrun:

The server went down on Friday and Aisha had to burn half her day fixing it. As a result, we only finished 33 of our 36 tickets this sprint.

This is the sort of thing talked about in the post-sprint meeting, and if more action is needed (either to fully correct the problem or to avoid future cases) these tasks are added to the next sprint.

So, How'd it Go?

There's one other meeting of consequence. At the end of every sprint, you meet to talk about how the sprint fared: what went well, what didn't. In those 30 minutes, you talk about how awesome the QA team was, and how much it sucked when that module we thought would save us work turned out to create more than it solved. It's important to use this time to blow off steam and celebrate the accomplishments of the previous sprint and to take some time to figure out what could have gone better. It facilitates knowledge sharing more than anything else, and allows the PM and team lead to make better decisions in the future.

Documentation

The one thing people freak out about most when I talk about this method is the lack of documentation. They conjure up nightmare scenarios where one of the developers is hit by a bus and "no one knows how their stuff works", or point out that new developers won't have anywhere to start. Both of these are non-issues though, so long as you stick to the process and don't write terrible code.

If any member of the team doesn't know how a component works enough to get in there and complete a task, then it's time to get that person working on one of those tasks. Knowledge transfer happens best through doing, which means making sure that every member of the team has her fingerprints on every part. To put it in real terms, if Daniel gets hit by a bus, the project can go on because Aileen, Charlie, and Aisha have all spent some time poking at the payment engine. Not one of them wrote the whole thing, but the understanding is there.

Of course this can only happen if the code is readable and adheres to established standards. Variable names should be whole words in the common language of the team, method names should explain what they do, and class names should make sense as singular objects. If the code can't be understood by someone who's never seen it before, then it's broken by design. Making sure that everyone has an opportunity to interact with this code is the best way to ensure its readability.

Be Rigid

Probably the hardest part of agile software development is sticking to the process. As simple as it is, it's just too easy to fix a bug that someone found that isn't in the sprint, or add a simple feature that the client mentioned earlier that day. If agile is going to work, this can't be allowed to happen, and a lot of people have a hard time with this.

What you have to remember is that while the process feels pointlessly rigid, it's there to protect the team and ensure that the client gets exactly what was promised on the schedule that was promised. Adding in bug fixes can potentially derail the schedule, or introduce bugs that shouldn't have been there in the first place. It teaches the client that she can have whatever she wants whenever she wants, and as it's not part of the agreed sprint, she may try to get away with not paying for it.

From the developer side, it's important to remember that we like lists. If we can look at the list of stuff to do and know that that's all that's ever going to be there for the whole sprint, it introduces a sense of calm and of knowing exactly what's expected.

To this end, it's important to reward a team that manages to complete its sprint ahead of schedule. If they get everything finished by Thursday, let them take Friday off. The project is exactly as far along as you expected, so why not? Similarly, if the team is routinely late in completing the sprint, overtime is justified since the entire team helped write the sprint schedule during the planning meeting.

Conclusions

What makes agile work is having a simple and concise plan to follow, that has been agreed upon by all parties. I've worked at companies that implement this system without involving the developers so the schedule is imposed by people who have no knowledge of what actually needs to be done. I've also worked at companies where the developers run the schedule, which is to say, there's barely any schedule at all and the results are products that "mostly work", according to whatever the developer at the time thought was appropriate. As with so many other things, the key is openness, honesty, and inclusion in the process for all sides.

Agile is a system that everyone understands and agrees to, but doesn't get in the way of actually getting stuff done. It protects all parties involved from undue stress, and unexpected results, and I can honestly say that it was (at least for me) the best system to work with.

November 15, 2010 22:19 +0000  |  Drupal Programming Software 14

I've been doing Drupal development on-and off for nearly three years now and it's always been frustrating. I'm a pretty vocal and animated kind of person too, so my co-workers soon came to know me as the anti-Drupal guy, which can be pretty rough when your employer has chosen to standardise on the platform. Now that I'm finally out of the Drupal world, I wanted to write a little about the platform, specifically speaking to its weaknesses and failures.

My hope here is two fold: (a) that this post serve as a means of communicating to the thousands of frustrated developers out there that they're not alone in their pain, and (b) that perhaps some of this post will help development shops choose Drupal where appropriate and other technologies when it is not.

For the Drupal fan(girl|boy)s, I ask only that you try to read this with an open and constructive mind. While I may rant and curse about Drupal in my Twitter feed, I've tried very hard to make this an unemotional, hopefully useful post about something I've spent a lot of time thinking about and working with.

Drupal Centricism

Drupal Ideology

It seems to be a mantra within the community: "You don't even need to write code". The Drupal ideology is user-centric, choosing ease-of-use over performance at every turn. There's nothing wrong with this of course, so long as your goal is to let unskilled people make websites. However if your priority is a performant application capable of handling a lot of traffic, you're going to have a number of problems.

Some examples of prioritising user-focus over performance:

  • Silent failures are the bane of any developer's existence. It's important to know when a variable isn't defined, or that writing a record to the database failed, or that a file didn't upload properly. Drupal suppresses such messages by default, and as a result nearly every contrib module in the community is so riddled with errors and warnings that development with these messages enabled is near impossible.
  • Views, the de-facto standard way to store and retrieve data from your database, writes queries to the database, so that in order to perform a query against the database, you must first fetch the query from the database. Similar inefficiencies can be found in other "standard" modules like CCK and Panels.
  • Drupal relies almost entirely on caching in order to function at all. Without caching, a technique usually reserved for high-to-extreme traffic situations, Drupal can't handle even a small number of concurrent visitors. Indeed, some projects I've seen have taken more than 10 minutes to load a single page, even in development where there was only one connection in use.

Drupal Magic

It's a term celebrated by many in the community. The idea being that Drupal does a mountain of work for you, so you don't have to worry about it. The only problem is that when you're trying to build a finely-tuned application, most of this magic either gets in the way, or even works against you. You get 80% of the way there with Drupal and its contrib modules, and then spend three months fighting the whole application, undoing the damage it's done, just to get what you need out of your website.

The hook-dependent system requires and fosters this anti-pattern. Re-using code often means unpredictable, site-wide changes. A property is written in module X, overwritten in module Y, and altogether removed in module Z, and there's no way to be certain that these functions will execute in a predictable order.

This problem is notably worse when it comes to new developers on a project, since they will undoubtedly not be privy to the magic that is running under the hood, and will have a difficult time discovering it on their own. To those who will answer this with "the project simply needs better documentation", I respectfully suggest that a good code base is easy to understand, and doesn't require a manual that is usually out of date.

To work with Drupal Magic is to attempt to produce useful code against an unordered, uncontrolled, grep-to-find-what-is-going-on-dependent architecture.

Drupal Community

For all the victories in community engagement Drupal has achieved (a massive, diverse and engaged membership), it's the glaring failures that make the whole project a miserable situation for developers. I've already mentioned the standardising on inefficient modules, but I haven't talked about the mountains of really horribly written code yet. Drupal Core, for what it does, is pretty efficient, but too many contrib modules are written by inexperienced developers, or are simply incapable of scaling to enterprise-level capacity. The result of this is that non-developers (managers, sometimes even clients) will point to the functionality of module X and insist: "don't redesign the wheel, just use that", and you spend the next three weeks trying to work around the poor design of said module, eventually being forced to write garbage that talks to garbage.

Often the perceived strength of the community is Drupal's greatest weakness. Drupal is promoted based on its theoretically infinite feature set, but the reality is that in order to use every one of those contrib modules in your site, the memory footprint will be massive, the stability suspect, and the performance abysmal. And gods help you if you try this on a site with millions of users or a similar number of content nodes.

Drupal Establishment

None of this is a problem however if Drupal is used where its features and shortcomings are both understood and accepted as the nature of the platform. Drupal is a great tool in some situations and a horrible burden in others. Sadly, this has not yet sunk in with many of the decision-makers in the web development community. Drupal is being used and promoted as a solution hammer, with every potential development project, a Drupal-shaped nail.

This has a number of negative outcomes, the most dangerous of which is a lack of skill diversity in developers. Companies that insist on Drupal-centric development are in fact promoting ignorance of alternatives that might do a better job and that hurts everyone. Unless developers at these companies take it upon themselves to spend time outside of their 8-12 hour work day to write code for a different platform or language, this Drupal dependency will force their non-Drupal skills to atrophy, limiting their ability to produce good code in the future.

Conclusion

I'm finally at the end of my admittedly unenthusiastic involvement in the Drupal community. Whether the Drupal shops out there read this isn't really up to me, but I hope that this manages to help some people re-evaluate their devotion to the platform. Comments are welcome, so long as they're constructive (I moderate everything), but I'm not going to get into a shouting match on the Internet. If you think I'm wrong, we can talk about it in 5 years.

October 04, 2010 01:41 +0000  |  Blogger Django Python Software 8

I haz a new site! I've been hacking at this for a few months now in my free time and it's finally in a position where I can replace the old one. Some of the features of the old site aren't here though, in fact this one is rather limited by comparison (no search, no snapshots, etc.) but the underlying code is the usual cleaner, better, faster, more extendable etc. so the site will grow beyond the old one eventually.

So, fun facts about this new version:

  • Written in Python, based on Django.
  • 317,133 lines of code
  • Fun libraries used:
    • Flot (for the résumé skillset charts)
  • Neat stuff I added:
    • A new, hideous design!
    • A hierarchical tagging system
    • A custom image resizing library. None of the existing ones out there did what I needed.
    • The Konami Code. Try it, it's fun :-)
  • Stuff that's coming:
    • Search
    • Mobile image upload (snapshots)
    • The image gallery will be up as soon as the shots are done uploading.

Anyway, if you feel so inclined, please poke around and look for problems. I'll fix them as soon as I can.

January 03, 2010 12:07 +0000  |  Django Facebook Python Software TheChange.com Web Development 2

This is going to be a rather technical post, coupled with a smattering of rants about Facebook so those of you uninterested in such things might just wanna skip this one.

As part of my work on my new company, I'm building a synchroniser for status updates between Twitter, Facebook, and our site. Eventually, it'll probably include additional services like Flickr, but for now, I'm just focusing on these two external systems.

A Special Case

Reading this far, you might think that this isn't really all that difficult for either Twitter or Facebook. After all, both have rather well-documented and heavily used APIs for pushing and pulling data to and from a user's stream, so why bother writing about it? Well for those with my special requirements, I found that Facebook has constructed a tiny, private hell, one in which I was trapped for four days over the Christmas break. In an effort to save others from this pain, I'm posting my experiences here. If you have questions regarding this setup, or feel that I've missed something, feel free to comment here and I'll see what I can do for you.

So, let's start with my special requirements. The first stumbling block was the fact that my project is using Python, something not officially supported by Facebook. Instead, they've left the job to the community, which has produced two separate libraries with different interfaces and feature sets.

Second, I wasn't trying to synchronise the user streams. Instead, I needed push/pull rights for the stream on a Facebook Page, like those created for companies, politicians, famous people, or products. Facebook claims full support for this, but in reality it's quite obvious that these features have been crowbarred into the overall design, leaving gaping holes in the integration path.

What Not to Do

  • Don't expect Facebook to do the right/smart thing. Everything in Facebookland can be done in one of 3 or 4 ways and none of them do exactly what you want. You must accept this.
  • Don't try to hack Facebook into submission. It doesn't work. Facebook isn't doing that thing that makes sense because they forgot or didn't care to do it in the first place. Accept it and deal. If you try to compose elaborate tricks to force Facebook's hand, you'll only burn 8 hours, forget to eat or sleep in the process and it still won't work.

What to Do

Step 1: Your basic Facebook App

If you don't know how to create and setup a basic canvas page in Django, this post is not for you. Go read up on that and come back when you're ready.

You need a simple app, so for starters, get yourself a standard "Hello World" canvas page that requires a login. You can probably do this in minifb, but PyFacebook makes this easy since it comes with handy Django method decorators:

# views.py
from django.http import HttpResponse, HttpResponseRedirect
import facebook

@facebook.djangofb.require_login()
def fbCanvas(request):
    return HttpResponse("Hello World")

Step 2: Ask the User to Grant Permissions

This will force the user to add your application before proceeding, which is all well and good, but it doesn't give you access to much of anything you want, so we'll change the view to use a template that asks the user to click a link to continue:

# views.py
from django.shortcuts import render_to_response
from django.template import RequestContext
import facebook

@facebook.djangofb.require_login()
def fbCanvas(request):
    return render_to_response(
        "social/canvas.fbml",
        {},
        context_instance=RequestContext(request)
    )

Note what I mentioned above: we're asking the user to click on a link rather than issuing a redirect. I fought with Facebook for a good few hours to make this happen without user input, and it worked... sometimes. My advice is to just go with the user-clickable link. That way seems foolproof (so far).

Here's our template:

<!-- canvas.fbml -->
<fb:header>
    <p>To enable the synchronisation, you'll need to grant us permission to read/write to your Facebook stream. To do that, just <a href="http://www.facebook.com/connect/prompt_permissions.php?api_key=de33669a10a4219daecf0436ce829a2e&v=1.0&next=http://apps.facebook.com/myappname/granted/%3fxxRESULTTOKENxx&display=popup&ext_perm=read_stream,publish_stream,offline_access&enable_profile_selector=1">click here</a>.
</fb:header>

See that big URL? It's option #5 (of 6) for granting extended permissions to a Facebook App for a user. It's the easiest to use and hasn't broken for me yet (numbers 1, 2, 3 and 4 all regularly complained about silly things like the app not being installed when this was not the case, but your mileage may vary). Basically, the user will be directed to a page asking her to grant read_stream, publish_stream, and offline_access to your app on whichever pages or users she selects from the list of pages she administers. Details for modifying this URL can be found in the Facebook Developer Wiki.

Step 3: Understanding Facebook's Hackery

So you see how in the previous section, adding enable_profile_selector=1 to the URL tells Facebook to ask the user which pages she'd like to grant these shiny new permissions to? Well that's nifty and all, but they don't tell you which pages the user selected.

When the permission questions are finished, Facebook does a POST to the URL specified in next=. The POST will include a bunch of cool stuff, including the all-important infinite session key and the id of the user doing all of this, but it doesn't tell you anything about the choices made. You don't even know which page ids were in the list, let alone which ones were selected to have what permissions. Nice job there, Facebook.

Step 4: The Workaround

My workaround for this isn't pretty, and worse, it depends on a reasonably intelligent end-user (not always a healthy assumption), but after four days of cursing Facebook for their API crowbarring, I could come up with nothing better. Basically, when the user returns to us from the permissioning steps, we capture that infinite session key, do a lookup for a complete list of pages our user maintains, and then bounce them out of Facebook back to our site to complete the process by asking them to tell us what they just told Facebook. I'll start with the page defined in next=:

# views.py
@facebook.djangofb.require_login()
def fbGranted(request):

    from cPickle import dumps as pickle
    from urllib  import quote as encode

    from myproject.myapp.models import FbGetPageLookup

    return render_to_response(
        "social/granted.fbml",
        {
            "redirect": "http://mysite.com/social/facebook/link/?session=%s&pages=%s" % (
                request.POST.get("fb_sig_session_key"),
                encode(pickle(FbGetPageLookup(request.facebook, request.POST["fb_sig_user"])))
            )
        },
        context_instance=RequestContext(request)
    )

# models.py
def FbGetPageLookup(fb, uid):
    return fb.fql.query("""
        SELECT
            page_id,
            name
        FROM
            page
        WHERE
            page_id IN (
                SELECT
                    page_id
                FROM
                    page_admin
                WHERE
                    uid = %s
            )
    """ % uid)

The above code will fetch a list of page ids from Facebook using FQL and, coupling it with the shiny new infinite session key, bounce the user out of Facebook and back to your site, where you'll use that info to re-ask the user which page(s) to link to Facebook.

Step 5: Capture That page_id

How you capture and store the page id is up to you. In my case, I had to create a list of organisations we're storing locally and let the user compare that list of organisations to the list of Facebook Pages and make the links appropriately. Your process will probably be different. Regardless of how you do it, just make sure that for every page you wish to synchronise with Facebook, you have a session_key and page_id.
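Framework details aside, the invariant is one (page_id, session_key) pair per linked page. Here's a minimal sketch of such a store. On my site this is a Django model tied to our organisations table, but a plain class keeps the example self-contained (all the names here are my own invention, not Facebook's):

```python
class PageLinkStore:
    """One record per page we synchronise: which local organisation it
    belongs to, which Facebook page it is, and the session key that
    authorises us to push/pull."""

    def __init__(self):
        self._links = {}  # local organisation id -> (page_id, session_key)

    def link(self, org_id, page_id, session_key):
        if not page_id or not session_key:
            raise ValueError(
                "every synchronised page needs both a page_id and a session_key"
            )
        self._links[org_id] = (page_id, session_key)

    def credentials(self, org_id):
        # Exactly what the push/pull job in step 6 will need.
        return self._links[org_id]

store = PageLinkStore()
store.link(42, "123456789", "infinite-session-key")
print(store.credentials(42))  # ('123456789', 'infinite-session-key')
```

However you persist it, refusing to save a link with either half missing saves you from mysterious sync failures later.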

Step 6: Push & Pull

Because connectivity with Facebook (and Twitter) is notoriously flaky, I don't recommend doing your synchronisation in real-time unless your use-case demands it. Instead, run the code via cron, or better yet as a daemon operating on a queue, depending on the amount of data you're playing with. However you do it, the calls are the same:

import facebook

# Setup your connection
fb = facebook.Facebook(settings.FACEBOOK_API_KEY, settings.FACEBOOK_SECRET_KEY)
infinitesessionkey = "your infinite session key from facebook"
pageid             = "the page id the user picked"

# To push to Facebook:
fb(
    method="stream_publish",
    args={
        "session_key": infinitesessionkey,
        "message":     message,
        "target_id":   "NULL",
        "uid":         pageid
    }
)

# To pull from Facebook:
fb(
    method="stream_get",
    args={
        "session_key": infinitesessionkey,
        "source_ids": pageid
    }
)["posts"]
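Put together, the cron job just walks the (session_key, page_id) pairs saved in step 5 and makes those two calls for each. Here's a sketch: sync_all() is my own name, and the stub stands in for the pyfacebook connection so the example runs offline:

```python
def sync_all(fb, links, outgoing_message=None):
    """Push an optional message to each linked page, then pull its posts.

    `links` is an iterable of (session_key, page_id) pairs as captured
    in step 5; `fb` is called the same way as the real connection above.
    """
    pulled = {}
    for session_key, page_id in links:
        if outgoing_message:
            fb(
                method="stream_publish",
                args={
                    "session_key": session_key,
                    "message":     outgoing_message,
                    "target_id":   "NULL",
                    "uid":         page_id,
                },
            )
        pulled[page_id] = fb(
            method="stream_get",
            args={"session_key": session_key, "source_ids": page_id},
        )["posts"]
    return pulled

# Offline demonstration with a stub in place of the real connection:
def stub_fb(method, args):
    return {"posts": ["latest post"]} if method == "stream_get" else True

print(sync_all(stub_fb, [("infinite-key", "123")], "hello"))
# {'123': ['latest post']}
```

In production you'd wrap each iteration in a try/except so one flaky page doesn't kill the whole run.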

Conclusion

And that's it. It looks pretty complicated, and... well it is. For the most part, Facebook's documentation is pretty thorough, it's just that certain features like this page_id thing appear to have fallen off their radar. I'm sure that they'll change it in a few months though, which will make my brain hurt again :-(

November 13, 2009 17:51 +0000  |  Programming Python Software 0

I wrote something like this some time ago, but this version is much better, if only because it's in Python. Basically, it's a script that highlights standard input based on arguments passed to it.

But how is that useful? Well imagine that you've dumped the contents of a file to standard output, maybe even piped it through grep, and/or sed etc. Oftentimes you're still left with a lot of text and it's hard to find what you're looking for. If only there was a way to highlight arbitrary portions of the text with some colour...

Here's what you do:

$ cat somefile | highlight.py some strings

You'll be presented with the same body of text, but with the word "some" highlighted everywhere in light blue and "strings" highlighted in light green. The script can support up to nine arguments which will show up in different colours. I hope someone finds it useful.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import re
import sys

colours = [
    "\033[1;34m", # light blue
    "\033[1;32m", # light green
    "\033[1;36m", # light cyan
    "\033[1;31m", # light red
    "\033[1;33m", # yellow
    "\033[0;32m", # green
    "\033[0;36m", # cyan
    "\033[0;33m", # brown
    "\033[1;35m", # pink
    "\033[0m"     # none
]

args = sys.argv[1:]

# Strip out arguments exceeding the maximum
if len(args) > 9:
    print("\n%sWARNING: This script only allows for a maximum of 9 arguments.%s\n\n" % (colours[4], colours[9]), file=sys.stderr)
    args = args[:9]

for line in sys.stdin:
    for colour, arg in enumerate(args):
        line = re.sub(
            r"(%s)" % re.escape(arg),  # escape so arguments match literally
            "%s%s%s" % (colours[colour], r"\g<1>", colours[9]),
            line
        )
    try:
        print(line.rstrip("\n"))
    except BrokenPipeError:
        # The reader (e.g. `head`) closed the pipe; stop quietly.
        break

July 17, 2009 00:01 +0000  |  Programming Software Twitter 0

Wil Wheaton posted a request to Twitter today for an easy way to fetch all of one's tweets and store them locally. Someone might want to do that to keep a personal archive, or to port their data over to a Free implementation like Laconica. Whatever your reasoning, here's a quick and dirty way to do it:

for i in {1..999}; do
  curl -s "http://twitter.com/statuses/user_timeline.xml?screen_name=your_screen_name&count=200&page=$i" | grep '<text>' | sed -e 's/^ *<text>\(.*\)<\/text>/\1/'
  sleep 2
done

Just hit "ctrl-c" when you hit your first post ever.
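And if shell isn't your thing, here's the same idea sketched in Python. It targets the same XML endpoint as the loop above (which has long since been retired by Twitter, so treat this as historical), and it stops by itself at the first empty page instead of needing ctrl-c. extract_texts() just does what the grep/sed pair does:

```python
import re
import time
import urllib.request

def extract_texts(xml):
    """Pull the body of every <text> element out of a timeline page."""
    return re.findall(r"<text>(.*?)</text>", xml, re.DOTALL)

def archive(screen_name):
    tweets = []
    for page in range(1, 1000):
        url = (
            "http://twitter.com/statuses/user_timeline.xml"
            "?screen_name=%s&count=200&page=%d" % (screen_name, page)
        )
        with urllib.request.urlopen(url) as response:
            texts = extract_texts(response.read().decode("utf-8"))
        if not texts:
            # An empty page means we've passed the first post ever.
            break
        tweets.extend(texts)
        time.sleep(2)  # be polite, same as the shell loop
    return tweets
```

Note that, like the shell version, this leaves XML entities (&amp; and friends) unescaped in the output.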