Governments from Russia to Iran have exploited social media’s connectivity, openness, and polarization to influence elections, sow discord, and drown out dissent. While responses have begun to proliferate, more are still needed to reduce democracies’ inherent vulnerability to such tactics. Recent data privacy laws may offer one such answer by limiting how social media uses personal information to microtarget content: Fake news becomes a lot less scary if it can’t choose its readers.
Current efforts to combat online disinformation fall broadly into one of three categories: content control, transparency, or punishment. Content control covers takedowns and algorithmic de-ranking of pages, posts, and user accounts, as well as preventing known purveyors of disinformation from using platforms. Transparency includes fact-checking, ad archives, and media literacy efforts, the last of which fosters transparency more generally by increasing user awareness. Punishment, the rarest category, involves sanctions, doxxing (outing responsible individuals), and other tactics that impose direct consequences on the originators of disinformation. All these initiatives show promise and deserve continued development. Ultimately, online disinformation is like cancer, a family of ills rather than a single disease, and therefore must be met with a similarly diverse host of treatments.
However, none of the above techniques fundamentally alters the most pernicious aspect of online disinformation: the ability to microtarget messaging at the exact audience where it will have the greatest impact. Content control and punishment are reactive; no matter their success in the moment, the bigger picture is a never-ending game of whack-a-mole as new tactics and operations crop up. Transparency doesn’t actively impede online disinformation but just lessens the blow, betting that more aware audiences will engage less with false or inflammatory content.
Data privacy may offer a more precise solution. Data privacy laws like the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are not intended to address harmful speech. Their main goal is giving users greater control over their personal data, allowing people to check what data has been stored, opt out of data sharing, or erase their data entirely. Personal data generally includes information directly or indirectly linking accounts to real-life individuals, like demographic characteristics, political beliefs, or biometric data.
By limiting access to the information that enables personalized ad targeting and polarization loops, data privacy laws can render disinformation a weapon without a target. Absent the detailed data on users’ political beliefs, age, location, and gender that currently guide ads and suggested content, disinformation has a higher chance of being lost in the noise.
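To make that mechanism concrete, here is a deliberately simplified sketch in Python of attribute-based targeting; the user records, field names, and filter criteria are invented for illustration and do not describe any real platform’s systems. The point is structural: the fewer attributes a platform holds about a user, the fewer precision-targeted messages can reach that user.

```python
# Illustrative sketch only: a toy model of attribute-based ad targeting.
# User records, field names, and criteria are hypothetical and do not
# reflect any real platform's data model.

users = [
    {"id": 1, "age": 62, "state": "WI", "politics": "undecided"},
    {"id": 2, "age": 34, "state": "CA", "politics": "left"},
    {"id": 3, "age": 58, "state": "WI", "politics": None},  # opted out of political profiling
    {"id": 4, "age": 65, "state": "WI"},                     # erased demographic details entirely
]

def target_audience(users, **criteria):
    """Return users whose stored attributes match every targeting criterion.

    Users missing an attribute (because they opted out or erased it) can never
    match a filter on that attribute, so broader privacy rights directly shrink
    the audience a microtargeted message can reach.
    """
    return [
        user for user in users
        if all(user.get(field) == value for field, value in criteria.items())
    ]

# A narrowly targeted message reaches only users whose data remains exposed.
audience = target_audience(users, state="WI", politics="undecided")
print([user["id"] for user in audience])  # -> [1]; users 3 and 4 are invisible to this filter
```

In this toy model, the users who opted out of political profiling or erased their data never match the narrow filter; stronger privacy defaults would scale that effect across an entire platform.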
This includes not just the targeted ads that made up a small fraction of Russian efforts to interfere in the 2016 U.S. elections, but the larger mass of both malicious and organic content behind them. Former Facebook CSO Alex Stamos described microtargeted ads as the “tip of the spear,” using engagement metrics to fine-tune messaging that eventually reached a much wider audience through other means. In his words, “the goal of these ads is not actually to push the message. The goal of the ads is to build an audience to which the message can then eventually be delivered.” Therefore, anything that impairs microtargeting and blunts the proverbial spear will ripple out and disrupt other parts of the disinformation food chain unrelated to advertising, as malicious actors find it harder to build and reach audiences. For example, a 2018 study of anti-vaccine disinformation found that a Facebook policy change blocking sharers of disinformation from running ads reduced organic sharing of anti-vaccine fake news stories by 75 percent.
There are important limits to the impact of data privacy laws. At best, they give users a choice about how their data is used rather than imposing categorical prohibitions. Already, Facebook and Google have done their best to soft-pedal GDPR compliance by setting total access to data as the default and hiding mandatory disclosures behind obscure menus.
Also, some argue that data privacy laws can impede responses to disinformation by making it more difficult to gather and share evidence of ongoing campaigns. After all, people who interact with disinformation still have privacy rights that could be violated by sharing the massive datasets needed to understand the problem. But this criticism ignores the broad research-related carveouts for storing and processing data in both the GDPR and CCPA, as well as new techniques that can effectively anonymize personal data and therefore enable lawful dissemination.
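As a rough illustration of what such anonymization can look like, the sketch below (with invented column names and values) pseudonymizes direct identifiers and coarsens quasi-identifiers before a dataset is shared; genuine techniques such as k-anonymity or differential privacy demand far more rigor than this toy example.

```python
# Illustrative sketch only: pseudonymizing and coarsening a toy engagement
# record before sharing it with researchers. Column names and values are
# invented; real anonymization (k-anonymity, differential privacy, etc.)
# requires far more rigor than shown here.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # kept by the data holder, never shared

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:12]

def coarsen(record: dict) -> dict:
    """Generalize quasi-identifiers so individuals are harder to re-identify."""
    return {
        "user": pseudonymize(record["user_id"]),
        "age_band": f"{(record['age'] // 10) * 10}s",          # e.g. 47 -> "40s"
        "region": record["location"].split(",")[-1].strip(),   # keep state, drop city
        "shared_story": record["shared_story"],
    }

raw = {"user_id": "u-1038", "age": 47, "location": "Madison, WI", "shared_story": "story-77"}
print(coarsen(raw))  # the shared record shows what spread without exposing who spread it
```

The design goal is that the shared record still shows what spread and roughly among whom, which is what researchers need, while dropping the directly identifying details that privacy laws protect.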
Existing data privacy laws could nonetheless be improved, with fighting disinformation among the reasons to do so. A U.S. national data privacy law would do well to include a GDPR-esque opt-in model of consent for data collection, processing, and sharing rather than an opt-out model or a simple notice of collection. Such a law could also encourage better-designed interfaces to inform users while avoiding “consent fatigue.”
Additionally, the CCPA defines personal data broadly, including browsing history as well as “inferences drawn…from any [personal] information to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.” That reflects a realistic understanding of how online service providers monetize data, and would be a more useful starting point for future laws than the GDPR’s confusing line between “sensitive” and “non-sensitive” personal data.
In sum, data privacy laws can offer an elegant arrow in the quiver of responses to online disinformation, intervening directly in the machinery of microtargeting essential to disinformation campaigns. Because they are adversary-agnostic, such laws protect against foreign and homegrown trolls alike, while avoiding problems of consistency and censorship that plague reactive approaches. Finally, data privacy laws provide a rare convergence of interests between privacy and national security, at a time when they are often opposed. Fake news is here to stay. But by strengthening data privacy, we may have a new tool to fight it.
(This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC. LLNL-JRNL-786217.)
IMAGE: A protester wearing a model head of Facebook CEO Mark Zuckerberg poses for media outside Portcullis House on Nov. 27, 2018 in London, England. Facebook Vice President Richard Allan appeared before the House of Commons culture committee that day as part of its fake news inquiry. (Photo by Jack Taylor/Getty Images)