Imagine if your boss made up hundreds of petty rules and refused to disclose them, but every week, your pay was docked based on how many of those rules you broke.
Algospeak is a new English dialect that emerged from the desperate attempts of social media users to please the algorithm: that is, to avoid words and phrases that cause social media platforms' algorithms to suppress or block their communication.
Algospeak is practiced by all types of social media users, from individuals addressing their friends to science communicators and activists hoping to reach a broader public. But the most ardent practitioners of algospeak are social media creators, who rely, directly or indirectly, on social media to earn a living.
For these creators, accidentally blundering into an invisible linguistic fence erected by social media companies can mean the difference between paying their rent or not. When you work on a video for days or weeks (or even years) and then the algorithm decides not to show it to anyone (not even the people who explicitly follow you or subscribe to your feed), that has real consequences.
Social media platforms argue that they're entitled to establish their own house rules and declare some subjects or conduct to be off-limits. They also say that by automating recommendations, they're helping their users find the best videos and other posts.
They're not wrong. In the U.S., for example, the First Amendment protects the right of platforms to moderate the content they host. Besides, every conversational space has its own norms and rules. These rules define a community. Part of free speech is the right of a community to freely decide how they'll speak to one another. What's more, social media, like all human systems, has its share of predators and parasites, scammers and trolls and spammers, which is why users want tools to help them filter out the noise so they can get to the good stuff.
But legal issues aside, the argument is a lot less compelling when the tech giants are making it. Their moderation policies aren't community norms; they're a single set of policies that attempts to uniformly regulate the speech of billions of people in more than 100 countries, speaking more than 1,000 languages. Not only is this an absurd task, but the big platforms are also pretty bad at it, falling well short of the mark on speech, transparency, due process, and human rights.
Algospeak is the latest in a long line of tactics created by online service users to avoid the wrath of automated moderation tools. In the early days of online chat, AOL users used creative spellings to get around profanity filters, creating an arms race with a lot of collateral damage. For example, Vietnamese AOL users were unable to talk about friends named Phuc in the company's chat rooms.
But while there have always been creative workarounds to online moderation, Algospeak and the moderation algorithms that spawned it represent a new phase in the conflict over automated moderation: one in which opaque, unaccountable moderation amounts to an attack on the very creators who help these platforms thrive.
The Online Creators' Association (OCA) has called on TikTok to explain its moderation policies. As OCA cofounder Cecelia Gray told the Washington Post's Taylor Lorenz: "People have to dull down their own language to keep from offending these all-seeing, all-knowing TikTok gods."
For TikTok creators, the judgments of the service's recommendation algorithm are hugely important. TikTok users' feeds do not necessarily feature new works by creators they follow. That means that you, as a TikTok user, can't subscribe to a creator and be sure that their new videos will automatically be brought to your attention. Rather, TikTok treats the fact that you've explicitly subscribed to a creator's feed as a mere suggestion, one of many signals incorporated into its ranking system.
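To make the "subscription is just one signal" idea concrete, here is a hypothetical sketch of such a ranker. Every name, signal, and weight below is invented for illustration; TikTok's actual system is undisclosed, which is the point of this article.

```python
# Hypothetical sketch: a follow is just one weighted input among many.
# Signals and weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class VideoSignals:
    predicted_watch_time: float  # 0.0-1.0, model's guess at engagement
    creator_subscribed: bool     # did this user explicitly follow the creator?
    topic_affinity: float        # 0.0-1.0, match with inferred interests
    moderation_penalty: float    # 0.0-1.0, downranking from content filters

def rank_score(s: VideoSignals) -> float:
    """Combine signals into one score; a subscription only nudges it."""
    score = 0.5 * s.predicted_watch_time + 0.3 * s.topic_affinity
    if s.creator_subscribed:
        score += 0.1  # a follow is a hint, not a guarantee of delivery
    # a moderation penalty of 1.0 buries the video entirely
    return score * (1.0 - s.moderation_penalty)

# A subscribed creator's video can still lose to an unsubscribed one:
followed = VideoSignals(0.2, True, 0.3, 0.0)   # score 0.29
viral = VideoSignals(0.9, False, 0.8, 0.0)     # score 0.69
print(rank_score(followed) < rank_score(viral))  # True
```

In a system like this, your explicit follow moves the score less than the platform's own engagement predictions, and any moderation penalty can zero the score out entirely, no matter how many people subscribed.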
For TikTok creators (and creators on other platforms where there's no guarantee that your subscribers will actually be shown your videos), understanding the algorithm is the difference between getting paid for your work or not.
But these platforms will not explain how their algorithms work or which words and phrases trigger downranking. As Lorenz writes, TikTok creators have created shared Google docs with lists of hundreds of words they believe the app's moderation systems deem problematic. Other users keep a running tally of terms they believe have throttled certain videos, trying to reverse-engineer the system (the website "Zuck Got Me For" chronicles innocuous content that Instagram's filters blocked without explanation).
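Those shared documents effectively let creators run their own pre-publication filter. A sketch of what that looks like, using a few substitutions widely reported as algospeak ("unalive", "seggs"); this is an illustration of the practice, not a confirmed list of what any platform actually penalizes:

```python
# Illustrative pre-post check against a crowdsourced list of suspect terms.
# The terms below are examples of reported algospeak workarounds, not a
# verified blocklist from any platform.
import re

SUSPECT_TERMS = {
    "dead": "unalive",
    "kill": "unalive",
    "sex": "seggs",
}

def flag_terms(caption: str) -> list[str]:
    """Return the suspect terms found in a caption, whole words only."""
    return [term for term in SUSPECT_TERMS
            if re.search(rf"\b{re.escape(term)}\b", caption, re.IGNORECASE)]

def to_algospeak(caption: str) -> str:
    """Rewrite flagged terms using the community's workaround spellings."""
    for term, substitute in SUSPECT_TERMS.items():
        caption = re.sub(rf"\b{re.escape(term)}\b", substitute, caption,
                         flags=re.IGNORECASE)
    return caption

print(flag_terms("my grandmother is dead"))    # ['dead']
print(to_algospeak("my grandmother is dead"))  # my grandmother is unalive
```

The absurdity is baked in: because the real list is secret, every entry in the crowdsourced version is a guess, and creators end up self-censoring against rules that may not even exist.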
The people who create the materials that make platforms like YouTube, Facebook, Twitter, Snap, Instagram, and TikTok valuable have dreamed up lots of ways to turn attention into groceries and rent money, and they have convinced billions of platform users to sign up to get their creations when they're uploaded. But those subscribers can only pay attention to those creations if the algorithm decides to include them, which means that creators only get to eat and pay the rent if they please the algorithm.
Unfortunately, the platforms refuse to disclose how their recommendation systems work. They say that revealing the criteria by which the system decides when to promote or bury a work would allow spammers and scammers to abuse the system.
Frankly, this is a weird argument. In information security practice, "security through obscurity" is considered a fool's errand. The gold standard for a security system is one that works even if your adversary understands it. Content moderation is the only major domain where "if I told you how it worked, it would stop working" is considered a reasonable proposition.
This is especially vexing for the creators who won't get compensated for their creative work when an algorithmic misfire buries it: for them, "I can't tell you how the system works or you might cheat" is like your boss saying "I can't tell you what your job is, or you might trick me into thinking you're a good employee."
That's where Tracking Exposed comes in: Tracking Exposed is a small collective of European engineers and designers who systematically probe social media algorithms to replace the folk theories that inform Algospeak with hard data about what the platforms up- and down-rank.
Tracking Exposed asks users to install browser plugins that anonymously analyze the recommendation systems behind Facebook, Amazon, TikTok, YouTube, and Pornhub (because sex work is work). This data is combined with data gleaned from automated testing of these systems, with the goal of understanding how each ranking system tries to match the inferred tastes of users with the materials that creators make, and of making that process legible to all users.
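The core analytical move behind this kind of probing can be sketched simply: collect the recommendation lists served to many sessions and compare how often each item surfaces. A toy version, with stand-in data instead of the real plugin-collected observations:

```python
# Toy version of differential recommender probing: items that appear in
# nearly every session's feed are candidates for platform-side amplification.
# The "sessions" here are stand-in data; the real collection happens in
# Tracking Exposed's browser plugins.
from collections import Counter

def amplification(sessions: list[list[str]]) -> dict[str, float]:
    """Fraction of sessions in which each recommended item appeared."""
    counts = Counter(item for feed in sessions for item in set(feed))
    return {item: n / len(sessions) for item, n in counts.items()}

sessions = [
    ["video_a", "video_b", "video_c"],
    ["video_a", "video_c"],
    ["video_a", "video_d"],
]
rates = amplification(sessions)
print(rates["video_a"])  # 1.0 -- served to every session: likely up-ranked
print(rates["video_d"])  # ~0.33 -- appeared in one session of three
```

With enough sessions (and control over the probe accounts' histories), differences in these rates become evidence about what the ranking system promotes and buries.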
But understanding the way that these recommendation systems work is just for starters. The next stage, letting users alter the recommendation system, is where things get really interesting.
YouChoose is another plug-in from Tracking Exposed: it replaces the YouTube recommendations in your browser with recommendations from many services from across the internet, selected according to criteria that you choose (hence the name).
Tracking Exposed's suite of tools is a great example of contemporary adversarial interoperability (AKA Competitive Compatibility, or "comcom"). Giving users and creators the power to understand and reconfigure the recommendation systems that produce their feed (or feed their families) is a profoundly empowering vision.
The benefits of probing and analyzing recommendation systems don't stop with helping creative workers and their audiences. Tracking Exposed's other high-profile work includes a study of how TikTok promotes pro-war content and demotes anti-war content in Russia, and an analysis quantifying the role that political disinformation on Facebook played in the outcome of the 2021 elections in the Netherlands.
The platforms tell us that they need house rules to make their conversational spaces thrive, and that's absolutely true. But then they hide those rules, and punish users who break them. Remember when OCA cofounder Cecelia Gray said that her members tie themselves in knots to keep from offending "these all-seeing, all-knowing TikTok gods"?
They're not gods, even if they act like them. These corporations should make their policies legible to audiences and creators, adopting the Santa Clara Principles.
But creators and audiences shouldn't have to wait for these corporations that think they're gods to descend from the heavens and deign to explain themselves to the poor mortals who use their platforms. Comcom tools like Tracking Exposed let us demand an explanation from the gods, and extract that explanation ourselves if the gods refuse.