Human rights NGOs say social media platforms continue to censor pro-Palestine content
One year after the escalation between Israel and Hamas, some NGOs claim little progress has been made to stop the suppression of pro-Palestine content.
Human rights NGOs say little progress has been made to stop the digital censorship of pro-Palestine voices on social media networks, one year into the escalation of the Israel-Hamas conflict.
The war broke out last year when the Palestinian militant group Hamas launched an attack in southern Israel, killing 1,200 people and taking around 250 hostage.
Israel responded with air strikes and a ground offensive in the Gaza Strip, and the war has since killed around 42,000 Palestinians, according to the Gaza health ministry.
Since the October 7 attack, the Palestinian Observatory of Digital Rights Violations has recorded more than 1,350 instances of online censorship by major platforms, collected through an open call on its website up to July 1, 2024. Most of the reports related to Meta, TikTok, X, and YouTube.
The sample includes stories of suspensions, content takedowns, and account restrictions.
The Arab Center for the Advancement of Social Media (7amleh) interpreted these results in a September report as a “deliberate decision” to “aggressively over-moderat(e) Palestine-related content”.
“When online platforms allow hate speech and incitement on their platforms, they could be guilty of helping spread content that dehumanises Palestinians and justifies their collective punishments,” the report reads.
Pro-Israeli groups have, however, criticised what they say are attempts to roll back social media restrictions on antisemitism.
How content or accounts get removed
The NGO Human Rights Watch previously documented how users had their content blocked or removed by Meta in a report released last December.
Users would typically first have a single post, story or comment referencing Palestine reviewed and then removed, with little to no explanation pointing to a specific policy breach, according to Rasha Younes, a senior researcher with Human Rights Watch.
Younes said users then reported having their accounts restricted from commenting on other pro-Palestine content, or disabled outright for anywhere from 24 hours to three months.
Others described being “shadowbanned”, meaning their posts were made less visible to other users on both Instagram and Facebook, Younes continued.
Younes said users who tried to challenge these restrictions found the “we made a mistake?” button disabled, which she believes “violates Meta’s own policies”.
For those who are blocked, Younes said they “might not have any place to go” to express their political activism or lived reality during the conflict.
Both HRW’s and 7amleh’s reports rely on first-hand user accounts, but researchers from both groups want social media companies like Meta to release data on which posts are being blocked by automated moderation so they can carry out more in-depth research.
“What we’re seeing is people who work in these companies, they want these changes … but unfortunately they are not the decision-makers, so they can’t really change anything,” Taysir Mathlouthi, 7amleh’s EU Advocacy Officer, told Euronews Next.
Tech companies ‘refining their approach’ during the conflict
Meta and TikTok declined to answer any direct questions about their content moderation policies and instead referred Euronews Next to recent reports about their responses.
In its September report, Meta said it has been refining its approach to “reflect the changing dynamics” of the humanitarian crisis in Gaza and the hostage-taking by Hamas.
But the company admitted that some of its policy decisions, such as lowering thresholds for automated enforcement, “inadvertently limit(s) discussion of critical world events”.
A Meta spokesperson told Euronews last year, however, that the HRW report “ignores the realities of enforcing our policies globally during a fast-moving, highly polarised and intense conflict,” adding that “the implication that we deliberately and systemically suppress a particular voice is false”.
TikTok, for its part, said in an October 2 report that it had taken down 4.7 million videos and suspended 300,000 livestreams between October 7, 2023 and September 15, 2024 for promoting Hamas, hate speech or misinformation.
Earlier this year, the company said they added “Zionist” content to their hate speech policy “when it is used … [as a] proxy with Jewish or Israeli identity”.
“This policy was implemented early this year after observing a rise in how the word was increasingly being used in a hateful way,” TikTok said.
Euronews Next reached out to YouTube and X but did not receive an immediate reply.
EU urged to ‘pressure’ social media companies
There is also a responsibility the EU needs to take on, even if the conflict is not directly within its borders, according to 7amleh’s Mathlouthi.
The EU’s Digital Services Act (DSA), adopted in 2022, introduced new mechanisms to fight illegal online content, according to a description of the law.
However, Mathlouthi said there is no real definition of what the law considers “incitement or harmful content,” which makes it difficult to put pressure on the big companies through the act.
“We want more regulation, we want more control and we want more transparency and this will never be achieved without pressure,” Mathlouthi said.
Last October, the EU asked X, Meta and TikTok for information about how they were moderating content related to the conflict, the first step in determining whether a full investigation is warranted under the DSA.
In December, the European Commission opened formal proceedings against X to address, among other concerns, “the dissemination of illegal content in the context of Hamas’ terrorist attacks against Israel,” a press release said at the time.
The EU has since launched formal investigations into Meta, TikTok and TikTok Lite over other possible DSA breaches, but did not explicitly cite Israel- or Palestine-related content among its reasons.
Euronews Next asked the European Commission whether the information it received from Meta and TikTok about their moderation of Israel-Hamas war content was satisfactory, but did not receive an immediate reply.