Amid Systemic Censorship of Palestinian Voices, Facebook Owes Users Transparency

Over the past few weeks, as protests in, and in solidarity with, Palestine have grown, so too have violations of the freedom of expression of Palestinians and their allies by major social media companies. From posts incorrectly flagged by Facebook as incitement to violence, to financial censorship of relief payments made on Venmo, to the removal of Instagram Stories (which also heavily affected activists in Colombia, Canada, and Brazil), Palestinians are experiencing an unprecedented level of censorship at a time when digital communications are absolutely critical.

The vital role of social media at a time like this cannot be overstated. Journalistic coverage from the ground is minimal, owing to a number of factors, including restrictions on movement by Israeli authorities, while, as the New York Times reported, misinformation is rife and has been repeated by otherwise reliable media sources. Israeli officials have even been caught spreading misinformation on social media.

Palestinian digital rights organization 7amleh has spent the past few weeks documenting content removals, and a coalition of more than twenty organizations, including EFF, has reached out to social media companies, including Facebook and Twitter. Among the demands are that the companies immediately stop censoring, and reinstate, the accounts and content of Palestinian voices; open an investigation into the takedowns; and transparently and publicly share the results of those investigations.

A brief history

Palestinians face a number of obstacles when it comes to online expression. Depending on where they reside, they may be subject to differing legal regimes and face censorship from both Israeli and Palestinian authorities. Most Silicon Valley tech companies have offices in Israel (but not Palestine), and some, such as Facebook, have struck particular deals with the Israeli government to deal with incitement. While incitement to violence is indeed against the company's community standards, groups like 7amleh say that this agreement results in inconsistent application of the rules, with incitement against Palestinians often allowed to remain on the platform.

Additionally, the presence of Hamas, which is the democratically elected government of Gaza but is also listed as a terrorist organization by the United States and the European Union, complicates things for Palestinians, as any mention of the group (including, at times, something as simple as the group's flag flying in the background of an image) can result in content removals.

And it isn't just Hamas: last week, BuzzFeed documented an instance in which references to Jerusalem's Al Aqsa mosque, one of the holiest sites in Islam, were removed because "Al Aqsa" is also contained in the name of another designated group, the Al Aqsa Martyrs Brigade. Although Facebook apologized for the error, this kind of mistake has become all too common, particularly as reliance on automated moderation has increased amid the pandemic.

Dangerous Individuals and Organizations

Facebook's Community Standard on Dangerous Individuals and Organizations gained a fair bit of attention a few weeks back when the Facebook Oversight Board affirmed that President Trump violated the standard with several of his January 6 posts. But the standard is also regularly used as justification for Facebook's widespread removal of content pertaining to Palestine, as well as to other countries such as Lebanon. And it isn't just Facebook: last fall, Zoom came under scrutiny for banning an academic event at San Francisco State University (SFSU) at which Palestinian figure Leila Khaled, alleged to belong to another US-listed terrorist organization, was to speak.

SFSU fell victim to censorship again in April of this year, when its Arab and Muslim Ethnicities and Diasporas (AMED) Studies Program discovered that its Facebook event "Whose Narratives? What Free Speech for Palestine?," scheduled for April 23, had been taken down for violating Facebook's Community Standards. Shortly thereafter, the program's entire page, AMED STUDIES at SFSU, was deleted, along with years of archival material on classes, syllabi, webinars, and vital discussions not only on Palestine but on Black, Indigenous, Asian, and Latinx liberation; gender and sexual justice; and a variety of Jewish voices and perspectives, including opposition to Zionism. Although no specific violation was noted, Facebook has since confirmed that the post and the page were removed for violating the Dangerous Individuals and Organizations standard. This was in addition to cancellations by other platforms, including Google, Zoom, and Eventbrite.

Given the frequency with which, and the high-profile contexts in which, Facebook's Dangerous Individuals and Organizations standard is applied, the company should take extra care to ensure that the standard reflects freedom of expression and other human rights values. But to the contrary, the standard is vague and lacks clarity, a point that the Oversight Board has emphasized.

Facebook has said that the purpose of this community standard is to "prevent and disrupt real-world harm." In the Trump ruling, the Oversight Board found that President Trump's January 6 posts readily violated the standard: "The user praised and supported people involved in a continuing riot where people died, lawmakers were put at serious risk of harm, and a key democratic process was disrupted. Moreover, at the time when these restrictions were extended on January 7, the situation was fluid and serious safety concerns remained."

But in two previous decisions, the Oversight Board criticized the standard. In a decision overturning Facebook's removal of a post featuring a quotation misattributed to Joseph Goebbels, the Oversight Board admonished Facebook for not including all aspects of its policy on dangerous individuals and organizations in the community standard.

Facebook apparently maintains self-designated lists of individuals and organizations subject to the policy, which it does not share with users, and it treats any quoting of such persons as an expression of support unless the user provides additional context to make their benign intent explicit, a condition also not disclosed to users. Facebook's lists evidently include US-designated foreign terrorist organizations, but they also seem to go beyond that list.

As the Oversight Board concluded, this results in the suppression of speech that poses no risk of harm, and the Board found that the standard fell short of international human rights standards: the policy lacks clear examples that explain the application of "support," "praise," and "representation," making it difficult for users to understand the Community Standard. This adds to concerns around legality and may create a perception of arbitrary enforcement among users. Moreover, the policy fails to explain how Facebook ascertains a user's intent, making it hard for users to foresee how and when the policy will apply and to conduct themselves accordingly.

The Oversight Board recommended that Facebook explain and provide examples of the application of key terms used in the policy, including the meanings of "praise," "support," and "representation." The Board also recommended that the community standard provide clearer guidance to users on making their intent apparent when discussing such groups, and that a public list of dangerous organizations and individuals be provided to users.

The United Nations Special Rapporteur on Freedom of Expression also expressed concern that the standard, and specifically the language of "praise" and "support," was excessively vague.

Recommendations

Policies such as Facebook's that restrict references to designated terrorist organizations may be well-intentioned, but in their blunt application they can have serious consequences for documentation of crimes, including war crimes, as well as for vital expression, including counterspeech, satire, and artistic expression, as we've previously documented. While companies, including Facebook, have regularly claimed that the law requires them to remove such content, it is unclear to what extent this is true; the legal obligations are murky at best. Regardless, Facebook should be transparent about the composition of its "Dangerous Individuals and Organizations" list so that users can make informed decisions about what they post.

But while some content may require removal in certain jurisdictions, it is clear that other decisions are made on the basis of internal policies and external pressure, and they are often not in the best interest of the individuals they claim to serve. This is why it is vital that companies include vulnerable communities, in this case Palestinians, in policy conversations.

Finally, transparency and appropriate notice to users would go a long way toward mitigating the harm of such takedowns, as would ensuring that every user has the opportunity to appeal content decisions in every circumstance. The Santa Clara Principles on Transparency and Accountability in Content Moderation offer a baseline for companies.