The good news? Twitter last week seemed to finally take at least some responsibility for the content posted on its website when it removed verified status from a group of white nationalist and other accounts. That followed a period where it had ceased all new verifications until it could introduce new guidelines around behavior.

The bad news? Twitter has now signaled it's 100% aware of what happens on the site and has therefore assumed a level of accountability for the content people share.

A central tenet of the defense mounted by Twitter, Facebook and other tech companies when confronted with the hate speech and other offensive material posted on their platforms is that they are just dumb platforms whose sole goal is to enable free speech. That argument has never held up to much scrutiny. Nearly all of these companies rely on algorithms that analyze content to determine what is or isn't shown, and their search features make it clear that virtually anything can be found. They're certainly capable of finding information when they want to, so pleading ignorance has never held much water.

Previously, Twitter's Verified status was meant simply to denote that someone was who they said they were, or was the most prominent and well-known person with that name. It was awarded to politicians and celebrities, in secret, through a process known only to Twitter. Eventually, the waters got muddier and anyone could apply to be Verified.

The primary problem is that it became a badge of honor, signaling importance or prominence. While Twitter has never been great about fighting the kind of harassment and other problems women and people of color have been subjected to, the fact that so much of it seemed to be stemming from or was encouraged by Verified users was especially problematic. It created the impression that those people were too vital to Twitter to sanction.

There were exceptions, of course, but the problem only got worse as the level of vitriol coming from various hate groups only intensified in recent years. People often pointed out the hypocrisy of someone who’s been reported for clear hate speech going unpunished while the people who call them out are suspended or worse.

Now, though, Twitter is stating clearly that certain actions will get you suspended or have your Verified status stripped. Those actions include promoting violence both on and off Twitter, engaging in hate speech, making sexual threats and more.

That has the effect of making Twitter absolutely responsible for what people are posting. It is now on the hook for the behavior of its users and, if it fails to act in accordance with the guidelines it’s laid out, may even find itself culpable to some degree.

As an example, by saying it has the power to prevent content that violates its operating principles from trending, Twitter signals it knows full well what kind of material is bubbling up. So any instance where it acts on that knowledge raises the question of why it failed to act in other situations.

It's good that Twitter is doing *something* to fight back against those who use its platform to share hate speech and other repulsive ideologies. Limiting their ability to spread those messages isn't a move against free speech. Those individuals are free to take their ideas to more hospitable waters. It just means that those on Twitter can participate without being subjected to the sexual harassment that's all too common for women online, the anti-Semitism that's all too common for anyone of Jewish descent, and the abuse aimed at other groups.

A commitment to free speech doesn't by necessity mean an "anything goes" atmosphere that creates a toxic environment for many. Such a commitment is laudable, but it also requires due diligence and a responsibility for fostering discussion, not facilitating abuse. The first duty has to be the protection of users. Without that, all the rest is meaningless.

Twitter's new guidelines, while still flawed, appear to be a move in that direction. They could also expose the company to even more responsibility for the content that makes it past its filters and safeguards, which may have unintended consequences for everyone.

Chris Thilk is a freelance writer and content strategist who lives in the Chicago suburbs.
