Meta’s content moderation policies are systematically censoring pro-Palestinian content on its social media platforms, New York-based NGO Human Rights Watch (HRW) found.
In a recent report, HRW wrote that Meta has increasingly silenced pro-Palestinian voices on Instagram and Facebook following the October 7th attack by Hamas and the beginning of Israel’s war against the militant group.
The NGO accused Meta of furthering the erasure of Palestinians’ pain, effectively limiting their ability to tell the world what is happening in Gaza.
Since the start of Israel’s retaliation for the Hamas attack, about 20,000 people have been killed in the Gaza Strip, according to local authorities. Many of the casualties were women and children. In Israel, approximately 1,200 people were killed in the October 7th massacre, most of them civilians.
Social media users on Meta’s platforms were the first to notice that their posts calling for a ceasefire in Gaza and the protection of the region’s civilians were being suppressed, removed or “shadowbanned”, a practice in which content is made drastically less visible.
According to HRW, the removal of peaceful expressions of support for Gazans is the result of “flawed Meta policies and their inconsistent and erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals”.
Many users have since found ways around the strict content moderation policies. It’s become common on Instagram and Facebook to see pro-Palestinian activists discuss what’s happening in Gaza by writing “G4z4” or using a watermelon emoji in place of the Palestinian flag, so that Meta does not hide the content on its platforms.
Now, HRW is asking Meta to permit protected expression on its platforms, “overhauling policies to make them consistent with human rights standards, equitable, and non-discriminatory.”
“Meta’s censorship of content in support of Palestine adds insult to injury at a time of unspeakable atrocities and repression already stifling Palestinians’ expression,” said Deborah Brown, acting associate technology and human rights director at HRW.
“Social media is an essential platform for people to bear witness and speak out against abuses while Meta’s censorship is furthering the erasure of Palestinians’ suffering.”
Which voices are being silenced?
HRW analysed 1,050 cases of online censorship from more than 60 countries, identifying six common patterns: content removals, suspension or deletion of accounts, inability to engage with content, inability to follow or tag accounts, restrictions on the use of features such as Instagram and Facebook Live, and shadowbanning.
Meta was also found to have censored dozens of posts documenting injuries suffered by Palestinians in Gaza under its policies on violent and graphic content, even though they should have been considered newsworthy under the company’s own policies.
According to HRW, Mark Zuckerberg’s company is aware of the problem: the NGO first informed Meta of it in 2021.
An independent investigation conducted by Business for Social Responsibility and commissioned by Meta found that the company’s content moderation in 2021 “appear[s] to have had an adverse human rights impact on the rights of Palestinian users,” adversely affecting “the ability of Palestinians to share information and insights about their experiences as they occurred.”
Last year, Meta committed to changing its content moderation policies and how they are enforced. But according to HRW, these changes were ultimately not implemented.
“Almost two years later, Meta has not carried out its commitments, and the company has failed to meet its human rights responsibilities,” HRW concluded. “Meta’s broken promises have replicated and amplified past patterns of abuse.”
Asked for comment by Euronews, a spokesperson for Meta shared the following statement: “This report ignores the realities of enforcing our policies globally during a fast-moving, highly polarised and intense conflict, which has led to an increase in content being reported to us.
“Our policies are designed to give everyone a voice while at the same time keeping our platforms safe. We readily acknowledge we make errors that can be frustrating for people, but the implication that we deliberately and systemically suppress a particular voice is false.
“Claiming that 1,000 examples – out of the enormous amount of content posted about the conflict – are proof of ‘systemic censorship’ may make for a good headline, but that doesn’t make the claim any less misleading.”