
December 01, 2022 17:12 +0000  |  Mastodon

Mastodon is Twitter's logical successor. Like Twitter, it's a "microblogging" platform that lets you follow people and ~~retweet~~ "boost" posts you like for your followers to see. The key difference from its predecessor is the platform's "federated" nature, which others have written about extensively, so for the uninitiated I'll just say that "it's distributed, so no central authority controls it."

What we don't talk about nearly as much, though, is Mastodon's painfully limited system for managing that federation at scale. When faced with the reality of unwanted content, the Mastodon community's answer is "the instance moderator handles that". The assumption is not only that the owner of the server you're using has the time, skill, and inclination to sift through reports about your content, but also that this moderator shares your values.

Additionally, instances can be "de-federated" from the greater network by moderators of other (potentially huge) instances for not blocking content those networks consider objectionable. It all sounds like a neat and tidy way to keep the baddies out, but it's also a recipe for an echo chamber.

The reality of living in a society is that there's very little consensus around what content should be permitted, and this is a good thing! Some people share pornographic content daily, while others consider this a mortal sin. For that matter, posting a drawing of a prophet is enough to drive some idiots to violence, while for others, it's considered hate speech.

The hate speech question alone is especially difficult, as the term itself even lacks a consensus-backed definition. It's regularly used in online spaces to shut down debate rather than to inform it.

So with all of this, how can we expect the human moderator model to scale? The word "scale" itself implies expanding a system's capabilities beyond those of individuals. We have to accept the use of algorithms to navigate this space, and we can do so without risking the bias that AI-based systems have demonstrated.

For my money, the answer is community tagging, and leveraging those tags to let clients, rather than (or at least in addition to) instances, filter according to their own values.

The idea is to allow users to tag content (and even other users) with whatever string they want: tree-hugger, fascist, pedo, cute-puppies, porn, nazi, deluded-psychopath, nerd, bootlicker -- whatever. Instances or clients can then choose which tags they want to filter on, as well as the weight they want to give to tags applied by people and posts that themselves bear specific tags.
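To make that concrete, here's a rough sketch of what the data model and a client-side filter configuration could look like. Nothing here is an existing Mastodon API; the names (TagEvent, FilterConfig) and the Python are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TagEvent:
    """One account applying one free-form tag to another account (or a post)."""
    tagger: str  # who applied the tag, e.g. "@alice@example.social"
    target: str  # the account (or post ID) being tagged
    tag: str     # arbitrary string: "nazi", "porn", "cute-puppies", ...

@dataclass
class FilterConfig:
    """A client's (or instance's) filtering preferences."""
    # Tags to filter on, and the weighted score at which content gets hidden.
    thresholds: dict[str, float] = field(default_factory=dict)
    # Tags that, once a *tagger* crosses their threshold, make us discount that tagger.
    distrust_tags: set[str] = field(default_factory=set)

# Example: hide anyone whose weighted "nazi" or "pedo" score reaches 10,
# and ignore tags applied by accounts we already consider nazis.
config = FilterConfig(thresholds={"nazi": 10, "pedo": 10}, distrust_tags={"nazi"})
```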

So, for example, if a user gets a lot of nazi tags from 15 different accounts, your client, configured with a nazi threshold of 10, will filter out that user's content. If, however, a user was tagged as pedo by 15 different users who are themselves already tagged as nazis according to your threshold, the weight of those tags could be diminished or invalidated entirely, and so the "probably not a pedo" user's posts would get through.
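Here's a minimal sketch of that weighting logic, building on the hypothetical TagEvent/FilterConfig model above (a real system would probably want recursive or probabilistic trust; this just does a single discounting pass):

```python
from collections import defaultdict

def weighted_scores(events: list[TagEvent], config: FilterConfig) -> dict[str, dict[str, float]]:
    """Compute target -> tag -> weighted score, discounting distrusted taggers."""
    # First pass: raw tag counts per target; each unique tagger counts once per tag.
    raw: dict[str, dict[str, set[str]]] = defaultdict(lambda: defaultdict(set))
    for e in events:
        raw[e.target][e.tag].add(e.tagger)

    def is_distrusted(account: str) -> bool:
        # A tagger is distrusted if their own raw count for any distrust tag
        # already meets our threshold for that tag (i.e. they're "a nazi" to us).
        for tag in config.distrust_tags:
            threshold = config.thresholds.get(tag)
            if threshold is not None and len(raw.get(account, {}).get(tag, set())) >= threshold:
                return True
        return False

    # Second pass: tags from distrusted accounts contribute nothing;
    # everyone else contributes a weight of 1 per tag application.
    scores: dict[str, dict[str, float]] = defaultdict(dict)
    for target, tags in raw.items():
        for tag, taggers in tags.items():
            scores[target][tag] = sum(0.0 if is_distrusted(t) else 1.0 for t in taggers)
    return scores

def should_hide(target: str, scores: dict[str, dict[str, float]], config: FilterConfig) -> bool:
    """Hide a user or post if any filtered tag's weighted score meets its threshold."""
    return any(scores.get(target, {}).get(tag, 0.0) >= threshold
               for tag, threshold in config.thresholds.items())
```

Plugging in the numbers from the example: 15 untagged accounts applying nazi clears a threshold of 10, so that user's content is hidden; 15 accounts that are themselves over your nazi threshold applying pedo contribute a combined weight of zero, so the target stays visible.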

The idea is to mimic actual human behaviour. If a MAGA nut tells you "that dude's a fascist", you're less likely to care about that statement than if it had come from someone whose opinion you actually value.

Of course this system runs into the usual bootstrapping problem: how does anyone earn a reputation in the first place? Maybe it would have to work much like other reputation systems (eBay comes to mind) and rely on less filter-conscious people to rate users and posts before the more filter-heavy users ever see that content.

I'm curious what other Mastodon users (Mastodonians?) think about such a system: whether they're satisfied with the current model of filtering/defederation, or whether they have different/better ideas for managing the problem with limited bias.