Elon Musk’s promise to European regulators that Twitter would follow the rules set forth in the European Union’s new Digital Services Act (DSA) will finally be put to the test. The first few weeks of Musk’s reign at the company suggest he could be in for a brawl.
Within hours of Musk taking over, racist language previously blocked on the platform surged. The billionaire praised targeted ads, which are the focus of an EU crackdown meant to better protect users from pervasive online surveillance. He personally retweeted misinformation about the assault and attempted kidnapping that left U.S. House Speaker Nancy Pelosi’s husband gravely injured. And he has now slashed Twitter’s workforce by half and jettisoned the majority of the company’s thousands of contractors, who traditionally handle content moderation.
The DSA, which came into force on Nov. 16, is a sweeping set of regulations on online-content governance and the responsibilities of digital services, and it makes Twitter, Facebook, and other platforms answerable in many ways to the European Commission and national authorities. These bodies will oversee compliance and enforcement and seem determined to make the internet a fairer place by reining in the power of Big Tech. Many expect this flagship EU legislation to become a new gold standard for regulators around the world.
Concerns for Safety and Respect
The DSA’s content-moderation rules could be a real challenge for Musk. While the DSA doesn’t tell social media platforms what speech they can and can’t publish, it sets up new standards for terms of service and regulates the process of content moderation to make it more transparent for both users and regulators. The DSA is as much animated by concerns for safety and the respect of fundamental rights as it is by market interest. It draws attention to the impact of algorithmic decision-making (and there’s a fine line between Twitter’s new pay-to-play subscription model and algorithmic discrimination), and gives stronger teeth to EU codes of conduct, such as the Code of Practice on Disinformation, which Twitter has signed onto.
And it does require platforms to remove illegal content. EU regulators say the DSA will ensure that whatever is illegal offline will also be illegal online. But what is considered illegal varies among EU member states. For example, an offensive statement flagged as illegal in Austria might be perfectly legal in France. It is true that the DSA took some steps to avoid imposing the law of one country on all other countries, but this legal fragmentation across the EU has required, and will continue to require, significant compliance efforts from platforms. There are also new due-diligence obligations, some of which could increase the chance that lawful speech will be swept up as platforms err on the side of removal to avoid fines.
Perhaps working in Musk’s favor, the DSA does not require platforms to remove all harmful content, such as “awful but lawful” hate speech. In fact, the DSA sets some limits on what can be taken down, which means Musk may find it easier to remove as little as possible. But platforms are obliged to act responsibly and work together with trusted flaggers and public authorities to ensure that their services are not misused for illegal activities. And while platform operators have a lot of leeway to set out their own speech standards, those standards must be clear and unambiguous, which is something Musk vowed to do anyway.
Sometimes it’s hard to take Musk at his word. He said in late October that no major content decisions or account reinstatements would occur until a content moderation council comprised of “widely diverse viewpoints” was formed. He has yet to announce that such a council exists, but on Nov. 20 he reinstated several people kicked off the platform for hate speech and misinformation, including former President Donald Trump, who was allowed back on Twitter after Musk polled users, and Kanye West.
Twitter’s trust-and-safety and content-moderation teams have taken huge hits since Musk took over as “Chief Twit,” as he calls himself. Half of Twitter’s 7,500 employees were let go, with trust-and-safety departments hit the hardest. Twitter’s top trust-and-safety, human rights, and compliance officers left the company, as did some EU policy leads. That Musk fired the entire human rights team may have a boomerang effect: that team was tasked with ensuring Twitter adheres to U.N. principles establishing corporate responsibility to protect human rights.
Media outlets reported that content-moderation staffers at the company were blocked from accessing their enforcement tools. Meanwhile, a second wave of global job cuts hit 4,400 of Twitter’s 5,500 outside contractors last week, and many of them had worked as content moderators battling misinformation on the platform in the U.S. and abroad.
Far-Reaching Obligations for Large Platforms
All this raises questions about whether Twitter has the technical and policy muscle to comply with and implement DSA obligations, especially if Twitter is designated as a “very large online platform,” or VLOP (more than 45 million EU users), under the DSA. If it is designated as such, the company will have to comply with far-reaching obligations and responsibly tackle systemic risks and abuse on its platform. It will be held accountable through independent annual compliance audits and public scrutiny, and must provide a public repository of the online ads it displayed in the past year. It’s unlikely that Twitter will be able to respect these commitments if it doesn’t have enough qualified staff to understand the human rights impact of its operations.
What’s more, Musk is a self-described “free speech absolutist,” who said he acquired Twitter because civilization needs “a common digital town square.” He has criticized Twitter’s content-moderation policies and said he opposes “censorship” that “goes far beyond the law.” Yet, after widespread criticism, he also said Twitter can’t become a “free-for-all hellscape” where anything can be said with no consequences.
Sadly, Twitter descended into pretty much that kind of landscape shortly after Musk took over. A surge in racist slurs appeared on the platform in the first few days. Meanwhile, Twitter relaunched its paid-subscription program, which grants users a blue check mark that once denoted a verified account holder, but dropped the verification step. Anyone paying $7.99 could buy a blue check, and many who did set up fake accounts impersonating people and companies and tweeted false information, from an Eli Lilly account announcing its insulin product was now free to multiple accounts impersonating and parodying Musk himself.
Musk paused the rollout, but the damage had been done. Margrethe Vestager, executive vice president of the European Commission, told CNBC that such a practice indicates “your business model is fundamentally flawed.”
If Twitter is considered a “very large online platform,” it will need to assess all sorts of risks stemming from the use of its service, including disinformation, discrimination, and harms to civic discourse, and take mitigating actions to reduce societal harm. If Musk succeeds in his plans to increase Twitter’s user base in the next few years, Twitter will certainly face tougher scrutiny under the DSA and could be required to reverse course and staff up its content moderation department.
This is also the opinion of Thierry Breton. Commenting on the reduction in moderators, the EU’s internal market commissioner warned Musk that “he will have to increase them in Europe.”
“He will have to open his algorithms. We will have control, we will have access, people will no longer be able to say rubbish,” Breton said.
Restricting User Information for Ads
Targeted advertising is another area where Musk’s plans could clash with DSA obligations. Twitter’s ad business is its primary source of revenue, and with the company losing money, Musk is keen to boost ad sales. The DSA bars platforms from targeting advertisements based on sensitive user information, such as ethnicity or sexual orientation.
More broadly, the DSA increases transparency about the ads users see on their feeds: platforms must clearly differentiate content from advertising; ads must be labeled accordingly. It’s hard to see how all of these requirements will square perfectly with Musk’s plans. He’s told advertisers that what’s needed on Twitter are ads that are as relevant as possible to users’ needs. Highly relevant ads, he says, will serve as “actual content.”
One of the most concerning things the DSA does not do is fully protect anonymous speech. Provisions give government authorities alarming powers to flag controversial content and to uncover data about anonymous speakers — and everyone else — without adequate procedural safeguards. Pseudonymity and anonymity are essential to protecting users who may have opinions, identities, or interests that do not align with those in power.
Marginalized groups and human rights defenders may be in grave danger if those in power are able to discover their true identities. Musk has vowed to “authenticate all real humans” on Twitter; unfortunately, the DSA does nothing to protect such users if Musk follows through on his promise.
The DSA is an important tool to make the internet a fairer place, and it’s going to cause turbulence for Twitter as Musk seeks to carry out his vision for the platform. Much will depend on how social media platforms interpret their obligations under the DSA, and how EU authorities enforce the regulation. Breton, the internal market commissioner, vowed that Twitter “will fly by our rules.” For Musk, the seatbelt sign is on. It’s going to be a bumpy ride.