Europe’s elections test a landmark social media law (2024)

On the eve of European elections, a landmark new law is forcing tech companies to use aggressive tactics to limit the spread of disinformation, an unprecedented crackdown that stands in stark contrast to the lack of social media laws in the United States.

Across the European Union, Microsoft is deploying teams with skills in a host of languages. Meta has rolled out dashboards, allowing European states to monitor election-related content in real-time. TikTok’s specialist elections teams are coordinating in a dedicated “Mission Control Centre” in its Dublin office.

This flurry of activity — a historic show of force for an industry accustomed to setting its own fickle standards for protecting elections — comes in response to the European Union’s new Digital Services Act, which took effect in August 2023. The law requires large tech companies to implement safeguards against “negative effects on civic discourse and electoral processes” or face steep fines of up to 6 percent of global revenue.

But the firms have broad latitude to implement their election-protection plans, raising questions about what measures comply with the new law — and whether any will be sufficient to protect one of the world’s largest democratic exercises as nearly 400 million E.U. citizens head to the polls.

The elections mark a test for E.U. regulators, who have leapfrogged other Western governments to enact expansive controls on social media. But enforcement began less than a year ago, leaving little time for regulators to bring sanctions against companies that are out of compliance before the election.

In recent months, the European Union has opened multiple investigations into major tech platforms, addressing their impact on children and teens, handling of illegal content and election-related disinformation. But the commission has not brought any penalties under the law.

“It’s a learning curve when it comes to enforcing tech regulations in Europe. That is certainly the case for the Digital Services Act,” said Drew Mitnick, the program director for digital policy at the Heinrich Böll Foundation in Washington.

In recent weeks, E.U. officials have repeatedly reminded the companies of their new responsibilities under the law. The European Union has been running stress tests of the major platforms to ensure they’re ready for voting. Regulators ran simulations in which the companies had to respond to fictional scenarios of election interference, practicing how they would handle a viral “deepfake” on their platforms or manipulated information that incited violence.

Last week, Vera Jourova, a top E.U. official, took the message directly to tech leaders, traveling to California to warn the CEOs of major companies including TikTok, X and Meta that they must comply with the law, amid concerns that Russia is exploiting social media to meddle in European elections.

“The platforms know that now they’re under legally binding rules, which could result in high sanctions,” Jourova said during a briefing with reporters in San Francisco.

The law was developed years ago — before the emergence of generative AI, which can be used to quickly and cheaply fabricate a video, image or audio recording of a politician appearing to say something they never actually said. The E.U. has developed a package of regulations governing artificial intelligence, but those rules will not fully take effect for years. That leaves regulators with a limited tool set to respond to a technology they warn could supercharge disinformation in a year of election threats around the world.

The activity in Europe contrasts sharply with the United States, where social media companies largely operate in a regulatory vacuum. The Supreme Court this term heard arguments in a lawsuit alleging that federal agencies’ efforts to coordinate with social media companies to combat disinformation run afoul of the First Amendment.

We had a good talk w/ @X CEO @lindayaX. Now it is time for X to walk the talk and apply their commitment to protecting free speech, elections & countering #disinformation.

I count on predictability & consistency both in complying w/ 🇪🇺 law and enforcing #X own Terms of Service. pic.twitter.com/J1Z5GDIRxa

— Věra Jourová (@VeraJourova) May 30, 2024

While in San Francisco, Jourova posed in front of a black sign emblazoned with the white logo for X, a company that has come to symbolize the rapidly changing landscape of the battle against disinformation. Jourova said X CEO Linda Yaccarino had promised that the company would do its part to protect elections, touting the platform’s Community Notes feature, which allows users to collaboratively add context to potentially misleading posts. But Jourova appeared skeptical, telling reporters that expertise is needed to surface accurate information online.

“Now it is time for X to walk the talk and apply their commitment to protecting free speech, elections & countering disinformation,” she tweeted, sharing a video of herself talking with Yaccarino in a sleek conference room.

The exchange underscored the challenges ahead for the European Union as it seeks to enforce the DSA in a fragmented information environment. In a 25-page document published this spring, European regulators recommended that the platforms run media literacy campaigns, apply fact-checking labels and clearly label AI-generated content. If companies choose not to follow those guidelines, they “must prove to the Commission that the measures undertaken are equally effective in mitigating the risks,” according to a March news release.

Since Elon Musk took over X with the promise to instill a “free speech” agenda, E.U. officials have warned that in Europe, Musk has to play by their rules. Last year, the European Commission began investigating X’s handling of illegal content related to the Israel-Gaza war, in its first action against a U.S. tech company under the DSA. But nearly eight months after the commission sent X its first request, it has yet to hit the company with any penalties.

In meetings during her California tour, Jourova emphasized the need for more support in local European languages and more robust fact-checking. But she told reporters that the European Union has unique concerns about each platform, including the storage of E.U. user data by TikTok, which is owned by the Chinese company ByteDance.

The E.U. opened a probe into Meta’s approach to moderating disinformation on Facebook and Instagram in late April. It warned that Meta was not doing enough to address the dissemination of deceptive ads on its service, and that the platform was running afoul of the DSA by discontinuing CrowdTangle, a tool that allowed regulators, researchers and journalists to monitor the discussion of topics related to elections.

The investigation appeared to affect Meta’s practices. In May, the company rolled out specific dashboards in E.U. states allowing European regulators to track candidates’ posts and keywords specific to their countries. During Jourova’s meeting with Meta CEO Mark Zuckerberg last week, the pair agreed to work together on better access for researchers to Meta’s platforms.

Meanwhile, advocacy groups continue to find holes in compliance. This week, the international nonprofit Global Witness filed a complaint with the E.U. regulator after it found that TikTok approved ads containing false information encouraging people to vote online and by text, running afoul of the company’s rules against paid political advertising.

“Don’t vote in person this E.U. election! New reports find that ballots are being altered by election workers. Vote instead by texting 05505,” said one ad.

TikTok spokesman Morgan Evans said in a statement that the ads were incorrectly approved due to human error. The company “immediately instituted new processes to help prevent this from happening in future,” Evans said.

“In Europe, Big Tech is now on the hook to make sure they tackle the risks their platforms present to democracy,” Global Witness said in a statement. “With plenty of major elections still to come in this election megacycle year, social media companies need to get it right the world over.”
