Meta, X approved ads containing violent anti-Muslim, antisemitic hate speech ahead of German election, study finds

According to new research from Eko, a nonprofit campaign group focused on corporate responsibility, social media giants Meta and X approved advertisements targeting German users with violent anti-Muslim and antisemitic hate speech in the run-up to the country’s federal elections.

Ahead of an election in which immigration has become a central theme of mainstream political discourse, the group’s researchers tested whether the two platforms’ ad review systems would accept or reject submissions targeting minorities with violent, hateful messaging. The ads included anti-Muslim slurs, calls for immigrants to be gassed or sent to concentration camps, and AI-generated imagery of mosques and synagogues being burned.

In mid-February, the majority of the test ads were approved within hours of being submitted for review. Germany’s federal elections take place on Sunday, February 23.

Hate speech ads scheduled

Just days before the federal election, Eko said, X approved all ten of the hate speech ads its researchers submitted, while Meta approved half of them (five ads) to run on Facebook (and potentially Instagram) and rejected the other five.

In its explanation for the five rejections, Meta indicated it believed the ads could pose political or social sensitivity risks that might influence voting.

The five ads Meta did approve, however, contained violent hate speech that called Muslim immigrants “rapists,” likened them to “viruses,” “vermin,” or “rodents,” and demanded that they be gassed, burned, or sterilized. Meta also approved an ad calling for synagogues to be torched in order to “stop the globalist Jewish rat agenda.”

Notably, Eko says none of the AI-generated imagery used to illustrate the hate speech ads was labeled as artificially generated, yet Meta still approved half of the ten submissions, despite its policy requiring disclosure of AI imagery in ads about social issues, elections, or politics.

X, for its part, approved all five of these hateful ads, along with five more containing similarly violent hate speech targeting Muslims and Jews.

These additional approved ads included antisemitic slurs claiming that Jews were fabricating climate change to destroy European industry and amass economic power, as well as messaging attacking “rodent” immigrants who, the ad copy claimed, were “flooding” the country “to steal our democracy power.”

The latter ad was paired with AI-generated imagery that leaned heavily on antisemitic tropes, depicting a group of sinister men seated around a table stacked with gold bars, a Star of David on the wall above them.

Another ad X approved directly attacked the center-left SPD, which leads Germany’s current coalition government. It falsely claimed the party wants to admit 60 million Muslim refugees from the Middle East, before seeking to provoke a violent response. X also dutifully scheduled an ad calling for the eradication of Muslim “rapists” and insinuating that “leftists” want “open borders.”

X’s owner, Elon Musk, has personally intervened in the German election via the platform, where he has some 220 million followers. In a December tweet, he urged German voters to back the far-right AfD party to “save Germany,” and he later hosted a livestream on X with AfD leader Alice Weidel.

Eko’s researchers disabled all the test ads before any approved ads were scheduled to run, ensuring no platform users were exposed to the violent hate speech.

Eko says the tests expose glaring flaws in how the platforms moderate ad content. Given how quickly all ten violent hate speech ads were cleared to run on X, it is unclear whether the platform performs any ad moderation at all.

The findings also suggest the ad platforms may be profiting from the distribution of violent, hateful content.

EU’s Digital Services Act in the frame

Eko’s tests indicate that neither platform is properly enforcing the bans on hate speech that both say apply to ad content under their own policies. In Meta’s case, Eko reached the same conclusion after a similar test in 2023, conducted before the EU’s new online governance regime came into force, suggesting the regulation has had no effect on how the company operates.

An Eko representative told TechCrunch, “Our findings suggest that Meta’s AI-driven ad moderation systems remain fundamentally broken, even though the Digital Services Act (DSA) is now in full effect.”

They continued, citing Meta’s recent announcement about reversing its moderation and fact-checking policies as evidence of “active regression” that they said puts the company on a direct collision course with DSA rules on systemic risks. “Rather than strengthening its ad review process or hate speech policies, Meta appears to be backtracking across the board,” they said.

Eko has submitted its latest findings to the European Commission, which oversees enforcement of key DSA provisions against the two social media giants. The group said it also shared the results with both companies, but neither responded.

The Commission has yet to conclude the EU’s ongoing DSA investigations into Meta and X, which cover concerns about illegal content and election security. But it said in April that it suspects Meta of failing to properly moderate political ads.

In July, it announced a preliminary finding on part of its DSA probe into X, including a charge that the platform is failing to meet the regulation’s ad transparency rules. But the full investigation, opened in December 2023, also covers illegal content risks, and more than a year on, the EU has yet to reach conclusions on the bulk of the inquiry.

Confirmed DSA violations can attract fines of up to 6% of global annual turnover, while systemic non-compliance could even lead to temporary blocks on regional access to offending platforms.

However, the EU is still deliberating over the Meta and X investigations, so potential DSA sanctions are still up in the air pending final findings.

German voters go to the polls in a matter of hours, and a growing body of civil society research suggests the EU’s flagship online governance law has failed to shield the democratic process of the bloc’s largest economy from a range of tech-fueled threats.

Earlier this week, Global Witness published tests of X’s and TikTok’s algorithmic “For You” feeds in Germany, finding that the platforms favor AfD content over that of other political parties. Civil society researchers have also accused X of blocking their access to data in the lead-up to the German election, preventing them from studying election security risks that the DSA is supposed to let them investigate.

“By initiating DSA investigations into both Meta and X, the European Commission has taken significant steps. We now need to see the Commission take strong action to address the concerns raised as part of these investigations,” Eko’s spokeswoman added.

“Our results, alongside growing evidence from other civil society organizations, demonstrate that Big Tech will not voluntarily clean up its platforms,” the representative continued. “Despite their legal obligations under the DSA, Meta and X continue to allow illegal hate speech, incitement to violence, and election disinformation to spread at scale.” (The spokesperson’s name has been withheld to avoid harassment.)

Eko argues that, beyond enforcing the DSA, regulators must take decisive action, such as adopting pre-election mitigation measures. These could include switching off profiling-based recommender systems immediately before elections and implementing other “break-glass” measures to prevent algorithmic amplification of content such as hate speech in the run-up to polls.

The advocacy group also warns that the Trump administration is now pressuring the EU to water down its Big Tech regulations. “There is a real danger that the Commission does not fully enforce these new laws as a concession to the U.S. in the current political climate,” it says.
