Blog

January 05, 2024 10:52 +0000  |  Django 0

Long-running Django projects can accumulate a lot of migrations. After just a few years, an actively developed project can create thousands of them! This can put a serious dent in your test runs, because (a) Django runs the migrations at test time to set up your database, and (b) you can't test your migrations at all unless you're happy to have 30-minute CI runs!

Migrations can also be a painful source of technical debt: they sometimes import libraries you don't use anymore, but you can't remove those libraries because someone, some day, will try to run manage.py migrate from scratch, only to have it blow up looking for a dependency you don't actually use anymore.

So, looking down the barrel of a performance, tech debt, and stability headache, it's a good idea to pay some attention to your migrations from time to time.

Option 1: squashmigrations

This is the official advice. You run this command and Django effectively squashes all of your existing migrations into one Great Big Migration, so where before you had:

0001_initial.py
0002_something.py
...
0132_something_else.py

you now have:

0001_squashed_0132_something_else.py

This is pretty slick, because it doesn't actually need any database changes. You're just merging the administrative overhead of 132 files into one, clawing back some of the performance you lost to having so many files.

There's not much more happening here though. Any old migrations you might have that depended on old_module_you_dont_use_anymore are still in that Great Big File, including the import, and the compute overhead of processing that migration doesn't really go away (though there are optimisations that Django says can sometimes cause problems). There's also the risk of a CircularDependencyError which is no fun to fix.

Personally, I find this process high-risk, high-effort, low-reward, so I chose a more drastic, simpler path.

Option 2: Collapse migrations

There's nothing magic or automated about this process. It's very manual, but it's also not terribly complicated.

1. Prepare

Make sure your production environment is up-to-date, and freeze any concurrent development that may involve migrations. Theoretically, you can still continue to deploy changes to production while this is happening, but I wouldn't recommend it.

If you have testing and/or staging environments, do the same there too.

On your local machine, switch your environment to master and pull any updates so you definitely have the very same code that's in production. Start up your environment, and if you've got a snapshot of production, you should use that now.

2. Local file changes

Delete all migrations, but not the __init__.py in each migrations folder:

rm */migrations/0*
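
Note that this glob assumes every app sits at the top of your project. If your apps are nested more deeply, a find invocation catches them all while leaving each __init__.py alone. A sketch, demonstrated here against a throwaway directory so nothing real gets deleted (the find line is the part you'd run from your project root):

```shell
# Build a throwaway app layout to demonstrate against
mkdir -p demo/app/migrations
touch demo/app/migrations/__init__.py demo/app/migrations/0001_initial.py

# Delete every migration module, but keep each package's __init__.py
find demo -path "*/migrations/*.py" -not -name "__init__.py" -delete

ls demo/app/migrations   # only __init__.py remains
```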

Next, run manage.py makemigrations on your laptop. This will create a bunch of initial migrations, one for each app (though in some cases where there are foreign keys between apps, there may be two or three).

3. The scary part

The sticking point of all of this is that Django maintains a history of migrations in its django_migrations table, and step 2 above knocked our file structure out of sync with that table. You can't deploy anything until that sync is restored.

On your local environment, hop into your database and delete all migrations:

DELETE FROM django_migrations;

Then hop out of your database and run:

$ python manage.py migrate --fake

This should re-populate your django_migrations table with the "new history". The thing to remember is that you're not actually changing anything here. All of these migrations have already been applied, so you're just rewriting history to throw out the intermediate steps.
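
Nothing here touches your schema; conceptually, steps 2 and 3 only rewrite rows in a bookkeeping table. A toy sqlite sketch of that bookkeeping (app and migration names invented, layout simplified from the real django_migrations table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE django_migrations ("
    "  id INTEGER PRIMARY KEY, app TEXT, name TEXT, applied TIMESTAMP)"
)

# The old, long history: potentially hundreds of rows per app
old = [("shop", "0001_initial"), ("shop", "0002_something"),
       ("shop", "0132_something_else")]
conn.executemany(
    "INSERT INTO django_migrations (app, name, applied) "
    "VALUES (?, ?, CURRENT_TIMESTAMP)", old)

# Step 3: wipe the history...
conn.execute("DELETE FROM django_migrations")

# ...and this is all `migrate --fake` then records: the new initial
# migrations, with no schema changes run at all.
conn.executemany(
    "INSERT INTO django_migrations (app, name, applied) "
    "VALUES (?, ?, CURRENT_TIMESTAMP)", [("shop", "0001_initial")])

rows = conn.execute("SELECT app, name FROM django_migrations").fetchall()
print(rows)  # [('shop', '0001_initial')]
```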

Now test that this all works. Shut down your environment, wipe your local database and spin it back up. Run your test suite and bask in the heroic speed improvement your efforts have won you. Try creating a new migration, running it, and rolling it back. When you're happy with the result, do the same on production.


And that's it! You can remove those old libraries you don't need anymore now, and add that migration testing you've been meaning to include in your CI. Future developers won't know to thank you for saving them the time it initially took to stand everything up, and everyone will get stuff done faster.

June 23, 2017 16:12 +0000  |  Django Python 0

I sunk 4 hours of my life into this problem yesterday so I thought I might post it here for future frustrated nerds like myself.

If you're using django-debreach and Django REST Framework together, you're going to run into all kinds of headaches regarding CSRF. DRF will complain with CSRF Failed: CSRF token missing or incorrect., and if you're like me, you'll be pretty confused: I knew there was nothing wrong with the request. My token was being sent, but it appeared longer than it should have been.

So here's what was happening and how I fixed it. Hopefully it'll be useful to others.

Django-debreach encrypts the csrf token, which is normally just fine because it does so as part of the chain of middleware layers in every request. However, DRF doesn't respect the csrf portion of that chain. Instead it sets csrf_exempt() on all of its views and then relies on SessionAuthentication to explicitly call CSRFCheck().process_view(). Normally this is ok, but with a not-yet-decrypted csrf token, this process will always fail.

So to fix it all, I had to implement my own authentication class and use that in all of my views. Basically all this does is override SessionAuthentication's enforce_csrf() to first decrypt the token:

from types import SimpleNamespace

from debreach.middleware import CSRFCryptMiddleware
from rest_framework.authentication import SessionAuthentication


class DebreachedSessionAuthentication(SessionAuthentication):

    def enforce_csrf(self, request):

        # CSRFCryptMiddleware expects a request object, so hand it a
        # stand-in carrying a mutable copy of the POST data.
        faux_req = SimpleNamespace(POST=request.POST.copy())

        # This decrypts csrfmiddlewaretoken in place on faux_req.POST
        CSRFCryptMiddleware().process_view(faux_req, None, (), {})
        request.POST = faux_req.POST

        SessionAuthentication.enforce_csrf(self, request)
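
To apply this everywhere without touching each view, you can make it the project-wide default via DRF's settings. A sketch, assuming the class above lives in a hypothetical myproject/authentication.py:

```python
# settings.py -- the module path is hypothetical; point it at wherever
# you defined DebreachedSessionAuthentication
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "myproject.authentication.DebreachedSessionAuthentication",
    ),
}
```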

Of course, none of this is necessary if you're running Django 1.10+ and already have Breach attack protection, but if you're stuck on 1.8 (as we are for now) this is the best solution I could find.

April 14, 2017 13:07 +0000  |  Django 0

I love DjangoCon. I've been going to it almost every year since I arrived in Europe back in 2010. Sure, a considerable portion of my career has been based on Django, but it's more than that: the community is stuffed full of amazing people who genuinely want us all to succeed and that just makes the conference all the more exciting.

This year we all converged on Florence for three days of talks in a historic old theatre at the heart of the city, and like every year, the talks at this single-track event were hit-and-miss -- but that's ok! When the talks were less than useful, we could always pop out for gelato or catch up in the hallways with other developers.

The Good

Community

From talks covering gender bias or autism, to the re-labelling of all bathrooms to be unisex, DjangoCon has long been a shining example of how to be inclusive in a software development community and it's something I'm proud to be a part of. This year, they even raised enough money to pay for flights and accommodation for a number of people from Zimbabwe who are trying to grow a local Django community.

It feels good to be part of a group that's so welcoming, and I would argue that IT, while traditionally straight-white-male-dominated, is uniquely suited for the multicultural mantle of tolerance. Every other field has a uniform: a standard by which you're judged as "in" or "out" (just watch London's financial sector at lunch hour: they all wear the same thing). In the software world however, we're all defined as being the odd ones. We are the all-singing, all-dancing nerds of the world: our differences are what make us fabulous. DjangoCon embraces that in a way I've not seen anywhere else and I love it.

Talks

Level up! Rethinking the Web API framework: Tom Christie

Tom Christie is the genius who brought us Django REST Framework and he's now working to improve the whole process by taking advantage of Python 3's type annotations to make your code self-documenting and then use that self-documentation to better build a browseable API. His code samples were beautifully simple and I'm very excited about the future of DRF. He's doing some great work there.

The Art of Interacting with an Autistic Software Developer: Sara Peeters

This was one of those talks that really felt as though it was lifting metaphorical scales from my eyes. Like many software engineers, Peeters is autistic, but unlike too many such people, she's extremely self-aware and articulate about what this means for her own human interactions.

She walked us through an average day for her: how she chooses her route home not based on efficiency, but on how it limits the intensity of crowds on her commute, as well as the chance that she'll encounter rain. It's the sensory overload, you see: the idea of so many raindrops impacting her skin like that is a terrible feeling.

In 20min she helped paint a picture of the limitations and fascinations of dealing with autism in her day-to-day life, and outlined a few ways the rest of us might help communicate and accommodate people in her situation.

After her talk, I found myself thinking back on a few former coworkers. Perhaps if I'd been more understanding, and if they'd been self-aware enough to help me understand their needs, we might have gotten on better.

The OpenHolter Project: Roberto Rosario

This talk blew my frickin' mind.

The guy has a severe heart condition which left him bedridden for 23 hours a day, and he's managed to make his life liveable with $30 worth of equipment and some Free software.

His talk walked us through the process of building your own mobile EKG machine: a device that normally costs thousands of dollars and is typically only used in hospitals, which Rosario built with an Arduino and parts he bought off the internet.

He then showed all of this to his doctor who asked if he could develop a diary: basically a log of his heart rate throughout the day, annotated with explanations as to what he was doing when anomalies appeared in the log.

He managed this by having his little device push daily log data onto his Django stack where it was all neatly logged and charted:

That's 100 samples per minute of biometric data generated by yourself on a desk in your house for $30 plus the cost of cables. This future we're living in is amazing.

Autopsy of a Slow Train Wreck: Russell Keith-Magee

Russell ran a startup from its optimistic start to a brutal, crushing finish years later, and decided to give a talk to teach us all what went wrong.

The talk was broken down into succinct sections, with a lesson in each case. A valuable talk for anyone considering a future in a small business. When it's made available online, I'll be sending it around to a few people I know.

Fighting the Controls: Daniele Procida

Daniele wrapped up the event with a final talk about a plane crash, or maybe it was Icarus -- it's hard to explain. His message was simple though: bad things happen when you don't stop and consider what's happening.

When stuff is exploding, the server is on fire, and everything is falling apart, sometimes the best thing to do is to just sit there and breathe: consider the situation and act when you have a better handle on things.

His talks are always a delight, as he has a unique way of humanising software. Once the videos are live, I recommend this one to anyone in any sort of high-stress job.

The People

Meeting the developer of Mayan EDMS

About a year ago now, I was sitting in a London pub, hacking away at my latest project, Paperless, when I stumbled onto Mayan EDMS: another open source project that did almost exactly the same thing as mine, but prettier and more featureful.

I was crushed. Here I was pouring literally hundreds of hours into this thing, with thousands of people using the code through GitHub, and suddenly, it all felt like it was for nothing because someone else had done it all already.

The guy who wrote that thing? I met him over lunch on the 2nd day of DjangoCon. He's also the same genius who built the mobile EKG machine mentioned above.

It was fun to meet him, talk about what worked for him and what didn't, and what sort of future he has planned for Mayan. He's a pretty smart dude, and it was nice to just sit and chat with a sort of "rival" nerd.

Talking to Paperless contributors

I also ended up talking to Philippe Wagner, one of the Paperless users who's been quite helpful in pushing the project forward. He wants to repurpose Paperless into a sort of markdown-based Evernote clone, and to do that all he needs from me are some minor changes to the project core to make it more pluggable. We'd been talking about it in the GitHub issues queue for a few weeks and he recognised me in the DjangoCon Slack channel, so he sent me a private message asking if we could chat for a bit.

I stepped out of one of the less interesting talks and we worked out a plan to make things work just outside the theatre. He's a cool guy and very driven. It's great to have him working on Paperless.

New Friends

After the first lunch, I sort of fell in with a group of fun people for the rest of the conference. We hung out after hours looking for food or just company for a walk around town. This is uncommon for me as while I'm a relatively friendly person, I generally avoid people save for superficial conversation. This was a nice change.

The Bad

Questions

The event was really squeezed for time, and almost no talk allowed for questions. Instead, we were directed to the Slack channel (which was only useful to people with working wifi, laptops, and fast typing) or to "later, around the conference". Personally, I've always liked the questions, as they let the audience get the speaker to publicly defend an assertion or elaborate on a point. Without them, it felt disconnected, as if I'd just watched the talk on YouTube.

Language

While I think that DjangoCon should be celebrated for its adoption of a code of conduct and for its inclusive attitude, I feel that it's fallen into that ugly trap of adopting a language police. In an effort to be an inclusive community, they're effectively rewriting the dictionary.

Specifically, I'm most annoyed by the policing of the word "guys" in reference to a group of people regardless of gender. I get that our community is composed of men and women, and people who defy gender labels, but I don't believe that that means that we need to strip non-aggressive language to accommodate some people.

In the same way that we don't censure people for talking about hamburgers around vegans, your comfort with my words is not my problem. Of course this isn't a defence of racial slurs, aggressive language, threats or hate speech -- that's totally inappropriate for an open and tolerant community, but I think that this business of reducing language based on the comfort of a few is a threat to the free exchange of ideas, not to mention entirely tone deaf to the fact that at least 70% of the attendees to DjangoCon were non-native English speakers who rightly use this word in reference to any group of people regardless of their position on the gender spectrum.

The worst part of all of this is that by simply discussing my distaste for this practise, especially at the conference, I risk being ejected from the community like some sort of nerd heretic. I maintain that it's dangerous and unhealthy, but I had to wait until now to say anything because I didn't want to be kicked out of the event. This can't be conducive to a Free and Open society, let alone a conference.

Conclusion

So, to wrap up: some good, some bad, but on the whole I'd say it was well into the good column. I'll be back next year, and maybe I'll even try to give a talk on something.

September 17, 2015 18:42 +0000  |  Django Python 0

I ran into something annoying while working on my Tweetpile project the other day and it just happened to me today on Atlas. Sometimes, removing code can cause explosions with migrations -- even when they've already been run.

Example:

  • You've created a new class called MyClass.
  • It subclasses models.Model
  • It makes use of a handy mixin you wrote called MyMixin:

    class MyClass(MyMixin, models.Model):
        # stuff here
    
  • You create a migration for it, run it, commit your code and congratulate yourself on code well done.

  • Months later you come back and realise that the use of MyMixin was a terrible mistake, so you remove it.
  • Now migrations don't work anymore.

Here's what happened:

Creating a migration that depends on non-Django-core stuff to assemble the model (think mixins that add fields, or custom fields, etc.) means the migration has to import those modules to run. This is a problem because every time you run manage.py migrate, Django loads all migration files into memory, and if those files import now-nonexistent modules, everything breaks.
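
You can reproduce the failure mode without Django at all: the migration loader effectively imports every migration module, so a single stale import is fatal to the whole run. A minimal sketch (the module name is obviously invented):

```python
import importlib

# What the migration loader effectively does for each migration file;
# one import of a module that no longer exists aborts `migrate` entirely.
try:
    importlib.import_module("old_module_you_dont_use_anymore")
except ImportError as error:
    print("migrate would die here:", error)
```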

Solution:

It's an ugly one, but so far it's the only option I can figure: manually collapsing the migration stack. Basically you make sure you've run all of the migrations to date, then delete the offending classes, delete all of the migration files, and recreate a new empty migration:

$ cd /project/root/
$ ./manage.py migrate
$ rm -rf myapp/migrations/*
$ touch myapp/migrations/__init__.py
[ modify your code to remove the offending fields/mixins ]
$ ./manage.py makemigrations myapp

Now run this in your database:

DELETE FROM django_migrations WHERE app = 'myapp' AND name <> '0001_initial';
UPDATE django_migrations SET applied = NOW() where app = 'myapp';

The new single migration created won't be importing the removed classes, so everything will be ok, and you have the added benefit of not having so many migrations to import. Note however that this may cause problems with migrations from other apps that may have been created dependent on your now-deleted migrations, so this may start you down a rabbit-hole if you're unlucky.

I hope this helps someone in the future should this sort of thing present itself again.

October 15, 2014 15:45 +0000  |  Django 0

So with version 33, Firefox did something rather annoying: it now uses a more restrictive library that rejects connections to servers running older versions of SSL. On the one hand, this is pretty awesome, because at some point we all need to grow up and start using modern encryption, but on the other, it can make development really difficult when all you really need is an SSL setup -- any SSL setup -- to make your local development environment Just Work.

We've been using django-extensions' runserver_plus feature, which is awesome because it includes a browser-based debugger and other really cool stuff, but importantly, it also supports running the Django runserver in SSL mode. This means that you can do stuff like:

./manage.py runserver_plus --cert=/tmp/temporary.cert

And that's enough for you to be able to access your site over SSL:

https://localhost:8000/

However, now that Firefox has thrown this monkeywrench into things, we spent far too much time today trying to figure out what was wrong and how to fix it, so I'm posting the answer here:

Basically, you just need a better cert than the one django-extensions creates for you automatically.

So, instead of just running --cert=/path/to/file and letting runserver_plus create it for you, you should run openssl yourself to create the cert and then point runserver_plus to it:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/temporary-cert.key -out /tmp/temporary-cert.crt
$ ./manage.py runserver_plus --cert=/tmp/temporary-cert.crt

Of course, you can locate temporary-cert.* wherever you like, but you get the idea.

June 18, 2014 18:01 +0000  |  Django 1

I ran into this problem last night, and since Googling for it didn't help, I thought it prudent to post my solution publicly in case anyone else might have a similar problem.

Tastypie is a nifty REST API server app for Django that does a lot of the work for you when all you want to do is share your model data with the world using standardised methods. It's smart enough to do introspection of your models and then use what it finds to make serialisation decisions further down the line. That's how you get from a series of model field definitions to an easy to parse JSON object.

Django-Polymorphic is an amazing piece of software that lets you effortlessly dump polymorphic attributes into your models in an understandable and performant way. With it you can do things like ask for a list of Items and get back a series of EdibleItems and MetalItems.

While both of these technologies are awesome, they don't appear to play nice together. From what I can tell, this is due to how Tastypie does its introspection: at startup time rather than run time. Introspection is done once, on the initial class, and then never again, so polymorphism just won't work.

To fix this, you have one or two options: (a) Break up your API by item type using something like /api/v1/edible-items/ and /api/v1/metal-items/ rather than just /api/v1/items/, or (b) teach Tastypie to combine them. As I needed the latter, I wrote this:

from django.db.models.fields.related import ForeignKey, OneToOneField
from tastypie.resources import ModelResource
from .models import Item

class ItemResource(ModelResource):

    class Meta:
        queryset = Item.objects.all()
        resource_name = "items"

    def dehydrate(self, bundle):
        """
        Account for django-polymorphic
        """

        unacceptable_field_types = (ForeignKey, OneToOneField)

        for field in bundle.obj._meta.fields:
            if field.name in bundle.data:
                continue
            if not isinstance(field, unacceptable_field_types):
                bundle.data[field.name] = getattr(bundle.obj, field.name)

        # Tastypie's dehydrate() contract is to return the bundle itself
        return bundle

It's not the prettiest of solutions, but it seems to do the trick for me at this stage. If you're reading this and think I've missed something, please feel free to drop me a comment.
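
Stripped of Tastypie and Django, the dehydrate loop above is just "copy any plain attribute that startup-time introspection missed into the output dict". A self-contained sketch with stand-in classes:

```python
class Item:                      # the base class Tastypie introspects at startup
    def __init__(self):
        self.name = "apple"

class EdibleItem(Item):          # the polymorphic subclass actually returned
    def __init__(self):
        super().__init__()
        self.calories = 52       # a field introspection never saw

obj = EdibleItem()
bundle_data = {"name": obj.name}          # startup-time fields only

# Runtime pass: pull in any instance attribute the startup pass missed
for field in vars(obj):
    if field not in bundle_data:
        bundle_data[field] = getattr(obj, field)

print(bundle_data)  # {'name': 'apple', 'calories': 52}
```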

October 04, 2010 01:41 +0000  |  Blogger Django Python Software 8

I haz a new site! I've been hacking at this for a few months now in my free time and it's finally in a position where I can replace the old one. Some of the features of the old site aren't here though, in fact this one is rather limited by comparison (no search, no snapshots, etc.) but the underlying code is the usual cleaner, better, faster, more extendable etc. so the site will grow beyond the old one eventually.

So, fun facts about this new version:

  • Written in Python, based on Django.
  • 317133 lines of code
  • Fun libraries used:
    • Flot (for the résumé skillset charts)
  • Neat stuff I added:
    • A new, hideous design!
    • A hierarchical tagging system
    • A custom image resizing library. I couldn't find a use for the other ones out there.
    • The Konami Code. Try it, it's fun :-)
  • Stuff that's coming:
    • Search
    • Mobile image upload (snapshots)
    • The image gallery will be up as soon as the shots are done uploading.

Anyway, if you feel so inclined, please poke around and look for problems. I'll fix them as soon as I can.

August 10, 2010 12:16 +0000  |  Blogger Django PHP Python 1

For those who have been demanding that I post something, anything, (*cough* Noreen *cough*) I apologise for the delay, but it won't be long now. I've been using all this time to write a new version of my site, done up in Python/Django. The next version will be a watered-down version of this one (on account of the complete rewrite) but will grow with time.

I may also decide to abandon all attempts at making it pretty... 'cause well... I suck at that :-)

January 03, 2010 12:07 +0000  |  Django Facebook Python Software TheChange.com Web Development 2

This is going to be a rather technical post, coupled with a smattering of rants about Facebook so those of you uninterested in such things might just wanna skip this one.

As part of my work on my new company, I'm building a synchroniser for status updates between Twitter, Facebook, and our site. Eventually it'll probably include additional services like Flickr, but for now I'm just focusing on these two external systems.

A Special Case

Reading this far, you might think that this isn't really all that difficult for either Twitter or Facebook. After all, both have rather well-documented and heavily used APIs for pushing and pulling data to and from a user's stream, so why bother writing about it? Well for those with my special requirements, I found that Facebook has constructed a tiny, private hell, one in which I was trapped for four days over the Christmas break. In an effort to save others from this pain, I'm posting my experiences here. If you have questions regarding this setup, or feel that I've missed something, feel free to comment here and I'll see what I can do for you.

So, let's start with my special requirements. The first stumbler was the fact that my project is using Python, something not officially supported by Facebook. Instead, they've left the job to the community, which has produced two separate libraries with different interfaces and feature sets.

Second, I wasn't trying to synchronise the user streams. Instead, I needed push/pull rights for the stream on a Facebook Page, like those created for companies, politicians, famous people, or products. Facebook claims full support for this, but in reality it's quite obvious that these features have been crowbarred into the overall design, leaving gaping holes in the integration path.

What Not to Do

  • Don't expect Facebook to do the right/smart thing. Everything in Facebookland can be done in one of 3 or 4 ways and none of them do exactly what you want. You must accept this.
  • Don't try to hack Facebook into submission. It doesn't work. Facebook isn't doing that thing that makes sense because they forgot or didn't care to do it in the first place. Accept it and deal. If you try to compose elaborate tricks to force Facebook's hand, you'll only burn 8 hours, forget to eat or sleep in the process and it still won't work.

What to Do

Step 1: Your basic Facebook App

If you don't know how to create and setup a basic canvas page in Django, this post is not for you. Go read up on that and come back when you're ready.

You need a simple app, so for starters get yourself a standard "Hello World" canvas page that requires a login. You can probably do this in minifb, but PyFacebook makes this easy since it comes with handy Django method decorators:

# views.py
from django.http import HttpResponse, HttpResponseRedirect
import facebook

@facebook.djangofb.require_login()
def fbCanvas(request):
    return HttpResponse("Hello World")

Step 2: Ask the User to Grant Permissions

This will force the user to add your application before proceeding, which is all fine and good but that doesn't give you access to much of anything you want, so we'll change the view to use a template that asks the user to click on a link to continue:

# views.py
from django.shortcuts import render_to_response
from django.template import RequestContext
import facebook

@facebook.djangofb.require_login()
def fbCanvas(request):
    return render_to_response(
        "social/canvas.fbml",
        {},
        context_instance=RequestContext(request)
    )

Note what I mentioned above, that we're asking the user to click on a link rather than issuing a redirect. I fought with Facebook for a good few hours to get this to happen all without user-input and it worked... sometimes. My advice is to just go with the user-clickable link. That way seems fool-proof (so far).

Here's our template:

<!-- canvas.fbml -->
<fb:header>
    <p>To enable the syncronisation, you'll need to grant us permission to read/write to your Facebook stream.  To do that, just <a href="http://www.facebook.com/connect/prompt_permissions.php?api_key=de33669a10a4219daecf0436ce829a2e&v=1.0&next=http://apps.facebook.com/myappname/granted/%3fxxRESULTTOKENxx&display=popup&ext_perm=read_stream,publish_stream,offline_access&enable_profile_selector=1">click here</a>.
</fb:header>

See that big URL? It's option #5 (of 6) for granting extended permissions to a Facebook App for a user. It's the easiest to use and hasn't broken for me yet (numbers 1, 2, 3, and 4 all regularly complained about silly things like the app not being installed when this was not the case, but your mileage may vary). Basically, the user will be directed to a page asking her to grant read_stream, publish_stream, and offline_access to your app on whichever pages or users she selects from the list of pages she administers. Details for modifying this URL can be found in the Facebook Developer Wiki.

Step 3: Understanding Facebook's Hackery

So you see how, in the previous section, adding enable_profile_selector=1 to the URL tells Facebook to ask the user which pages she'd like to grant these shiny new permissions to? Well, that's nifty and all, but they don't tell you which pages the user selected.

When the permission questions are finished, Facebook does a POST to the URL specified in next=. The post will include a bunch of cool stuff, including the all important infinite session key and the user id doing all of this, but it doesn't tell you anything about the choices made. You don't even know what page ids were in the list, let alone which ones were selected to have what permissions. Nice job there Facebook.

Step 4: The Workaround

My workaround for this isn't pretty, and worse, depends on a reasonably intelligent end-user (not always a healthy assumption), but after four days cursing Facebook for their API crowbarring, I could come up with nothing better. Basically, when the user returns to us from the permissioning steps, we capture that infinite session id, do a lookup for a complete list of pages our user maintains and then bounce them out of Facebook back to our site to complete the process by asking them to tell us what they just told Facebook. I'll start with the page defined in next=:

# views.py
@facebook.djangofb.require_login()
def fbGranted(request):

    from cPickle import dumps as pickle
    from urllib  import quote as encode

    from myproject.myapp.models import FbGetPageLookup

    return render_to_response(
        "social/granted.fbml",
        {
            "redirect": "http://mysite.com/social/facebook/link/?session=%s&pages=%s" % (
                request.POST.get("fb_sig_session_key"),
                encode(pickle(FbGetPageLookup(request.facebook, request.POST["fb_sig_user"])))
            )
        },
        context_instance=RequestContext(request)
    )

# models.py
def FbGetPageLookup(fb, uid):
    return fb.fql.query("""
        SELECT
            page_id,
            name
        FROM
            page
        WHERE
            page_id IN (
                SELECT
                    page_id
                FROM
                    page_admin
                WHERE
                    uid = %s
            )
    """ % uid)

The above code will fetch a list of page ids from Facebook using FQL and, coupling it with the shiny new infinite session key, bounce the user out of Facebook and back to your site, where you'll use that info to re-ask the user about which page(s) you want them to link to Facebook.

Step 5: Capture That page_id

How you capture and store the page id is up to you. For me, I had to create a list of organisations we're storing locally and let the user compare that list of organisations to the list of Facebook Pages and make the links appropriately. Your process will probably be different. Regardless of how you do it, just make sure that for every page you wish to syncronise with Facebook, you have a session_key and page_id.

Step 6: Push & Pull

Because connectivity with Facebook (and Twitter) is notoriously flaky, I don't recommend doing your synchronisation in real time unless your use case demands it. Instead, run the code via cron, or better yet as a daemon operating on a queue, depending on the amount of data you're playing with. However you do it, the calls are the same:

import facebook

# Set up your connection
fb = facebook.Facebook(settings.FACEBOOK_API_KEY, settings.FACEBOOK_SECRET_KEY)
infinitesessionkey = "your infinite session key from facebook"
pageid             = "the page id the user picked"
message            = "the status update you want to push"

# To push to Facebook:
fb(
    method="stream_publish",
    args={
        "session_key": infinitesessionkey,
        "message":     message,
        "target_id":   "NULL",
        "uid":         pageid
    }
)

# To pull from Facebook:
fb(
    method="stream_get",
    args={
        "session_key": infinitesessionkey,
        "source_ids": pageid
    }
)["posts"]
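Given how flaky those calls can be, the cron or daemon job benefits from a little resilience. This retry wrapper is my own sketch, not from the original post — with_retries and its parameters are names I've made up for illustration:

```python
# A small retry helper for the push/pull calls above: re-run a flaky call
# a few times with a simple linear backoff before giving up.
import time


def with_retries(call, attempts=3, delay=1.0):
    """Run call(), retrying on any exception; re-raise on the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)

# Usage with the push call above, e.g.:
# with_retries(lambda: fb(method="stream_publish", args={...}))
```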

Conclusion

And that's it. It looks pretty complicated, and... well it is. For the most part, Facebook's documentation is pretty thorough, it's just that certain features like this page_id thing appear to have fallen off their radar. I'm sure that they'll change it in a few months though, which will make my brain hurt again :-(

December 31, 2008 22:19 +0000  |  Django Family Friends Python 9

It's funny, I've had mountains of "free" time lately and somehow none at all available for the simplest of cumulative tasks. I've not replied to the nineteen emails sitting in my inbox, and keeping this site up to date has clearly not been a priority. However, in an effort to "clean house", so to speak, before the New Year, I'll try to cover everything here. If you'd like to read everything, I suggest taking a moment to procure a beverage.

Carolling: A Reunion

Grandma Nana at Christmas dinner

Way back in October, I received a text message from my old friend Michelle containing a request to recapture some of our better memories by going carolling this year, an annual tradition we once supported but hadn't attempted for nearly a decade. Excited at the thought of it, I agreed to play my role and she recruited Gary (another old friend) and a soprano friend of theirs for the task. I did some digging of my own and managed to coax Merry out as well, and with a group of five very out-of-practice choir folk, we set out on December 19th to bring some Christmas cheer to the suburbs.

The whole thing didn't go off nearly as well as we'd hoped at the start. The first neighbourhood we landed in seemed to be filled with people who didn't like carollers at all. No matter how hard we sang, no one came to the door. We quickly decided that Surrey sucked and that the uber-Christians in Langley were more likely to be receptive. We were right, and then tilted the odds even further in our favour by selectively hitting neighbourhoods filled with Christmas lights and people we knew personally :-) This made the bitter cold somewhat more bearable since we were repeatedly asked in for free drinks and cookies. Had the night been kinder and our start been earlier, we might have hit more houses, but as it worked out, we collected $30 for the food bank and had a really nice time singing with old friends.

My parents at Christmas dinner

I'd also like to take a moment to thank Michelle personally for single-handedly organising the whole thing. Despite my best intentions, I contributed very little to the planning. Michelle is a rock star.

Christmas: Another Reunion

Fighting the odds, I managed to catch my flight out of Vancouver to Kelowna on time, bailing out of the Lower Mainland just before the Storm from Hell ravaged the area. My condolences to those who were booked on flights set to leave only hours after mine -- as I understand it, a whole lot of people spent Christmas in YVR this year.

I arrived here in Kelowna in preparation for two big events: Christmas and my cousin Ashley's wedding. Thanks to the latter, the former was filled with distant relatives whom I see too rarely as it is. Ashley's brother Fraser was here, all the way from London, and he brought his girlfriend and their common friend, both from Spain. My (2nd) cousin Roy was here, as was his mother June and a big chunk of my uncle's family as well. All good people, all with interesting stories I've not heard before.

The happy couple: Ashley and Jared Nelson

In terms of a Christmas "haul", the biggest most impressive gift was a hand-made cookbook from my parents containing family recipes from all the big chefs in the family. My father's pastas, my grandmother's famous soup... it's all in there. A really great gift.

Oh, and Lara, you'll be pleased to know that I got six pairs of socks as well :-)

The Wedding

If you've been following my Twitter feed, you probably already know that Ashley's wedding was outside, in the dark, on a mountain, under the trees, in the snow... with bagpipes. It sounds insane, and it was, but it was also beautiful. Ashley wore a gorgeous gown, and covered it with a pretty white hood to keep her warm during the (mercifully short) service. The bride cried, the groom cried, and I think even the Man of Honour cried. Young love is so cute. The Groom wore a black tux with red pinstripes and a white tie and, along with his groomsmen, bright red skate shoes. They were awesome.

The reception was about as fun and exciting as most receptions usually are. Lots of old people, lots of 80s and 90s music (courtesy of my brother the DJ) and lots of dancing. The bride and groom had a few really great performances on the dance floor and much fun was had by all. Only one blight on the whole thing really: one of the guests, a bridesmaid's date no less, showed up in jeans, a hoodie, a cowboy hat, and plumber's crack. I tried to convince my mother to lecture him on his lack of respect but she didn't go for it. But yes, this is normal out here.

Catching up

My brother the DJ

I decided before I came up here that I'd spend a great deal of time teaching myself a new web framework called Django. It's a real framework (as opposed to Drupal, which is in fact a content-management system) built on a language called Python. So far the experience has been a mixed one for me. On the one hand, Django appears to do a lot for you, so code is smaller and easier to maintain; on the other hand, I feel like a lot of the simplicity and art in coding has disappeared. Where you once saw a long, easy-to-read set of files filled with a series of very short declarative statements, you now have something that reads more like a novel. More compact, yes, but is it art anymore?

I've also promised myself that I'd get through my emails this week -- all nineteen of them. This task, along with fixing up Stephen's site (I haven't forgotten about you!) has proven ridiculously difficult though, since Internet connectivity here is terrible at best. I have to syphon access from a neighbour's flaky router that routinely drops connectivity for hours at a time. At this very moment in fact, I'm writing this post into a file in the hopes that I'll be able to acquire some bandwidth later tomorrow at my father's store.

So that's everything for now. It's 2:30am, but before I go to bed I think I'll put together some good images for this post. I'll try to find some good shots of Christmas and the wedding. Next up is my New Year's recap post -- not sure when I'll have time to write it though.