After nearly two years of investigation by Congress and the Department of Justice, there’s no longer any serious dispute that Russia and other unfriendly state actors are exploiting social media, hoping to fracture Western democratic institutions and social cohesion. The question is only to what extent it’s working. But whether you believe that Russian exploits shifted the outcomes in the United Kingdom’s Brexit referendum or the 2016 U.S. presidential election is, oddly enough, beside the point. The hard fact is, these disruptive efforts are certain to continue, and state-level adversaries will only up their game and push harder for successful outcomes as time goes on. Neither state-actor trolls nor would-be domestic propagandists will be going away anytime soon.
That’s why a comprehensive response-and-deterrence strategy is critical. One of us (DiResta) is a technical researcher, and the other (Godwin) is a civil-liberties lawyer, but we’re both committed to democracy and to cybersecurity. We also share the conviction that it’s time to come up with a bold, full-spectrum strategy for addressing and mitigating information operations, and that the public needs to participate in this conversation. So we’ve decided to present our plan—Seven Steps for Fighting Disinformation—to proactively prepare for the next wave of social-media and internet-based psychological operations.
The seven steps can be summarized as follows:
- Move past blame, and look ahead to solutions
- Define disinformation as a cybersecurity issue, not a content problem
- Specify protections for the rights to free expression and privacy
- Create multistakeholder mechanisms for sharing threat information effectively
- Establish a fiduciary framework to promote platform ethics and user well-being
- Establish an oversight body (or bodies) to identify disinformation problems and strategic solutions
- Backstop all this with civil and criminal deterrence strategies
Our First Step is simple: move past blame. Yes, the tech platforms screwed up in 2016. Yes, our media institutions got played. Our government institutions failed to recognize the full scope of the disinformation problem. But ultimately, the finger-pointing, denial, and defensiveness about past election outcomes are small-bore debates. Despite their shortfalls in 2016, the tech companies will have to be key partners in any comprehensive plan to handle disruptive social media propaganda and platform-based psychological operations. Addressing disinformation going forward requires unity and cooperation.
Our Second Step: classify disinformation as a cybersecurity problem. The problem with information operations isn’t just that they are unpleasant, disagreeable, or even false—it’s that their intended purpose isn’t to argue or communicate, but instead to destabilize and undermine genuine argument and communication. Even provable falsehood is no good as a litmus test. The most effective propaganda frequently contains seeds of truth; the truthful elements are incorporated precisely to immunize propagandistic communications from criticism. So truth policing and fact-checking can’t be this century’s Maginot Line. The goal of our adversaries is corrosion, not communication: they are working to create confusion, to make the effort of determining what’s true, and whom to trust, feel exhausting.
No single state actor or ideological group owns these tactics. Russia’s operation is the most widely analyzed and publicly discussed, but it’s only one example of a wide-ranging collection of documented information attacks, scaled from the very large to the very small. What distinguishes today’s disinformation operations isn’t where they come from, but how they’re executed: the systematic manipulation and exploitation of social networks’ dissemination capabilities, often leveraging tactics pioneered by spammers and malware creators. The tactics evolve rapidly, and new vulnerabilities emerge as platform features and technology change. That’s why we have to treat disinformation as a cybersecurity problem, learning to detect the dissemination signatures characteristic of deliberate deceptions, distortions, and other intentionally disruptive measures. A cybersecurity mindset doesn’t just enable us to spot problems—it also helps us craft solutions. This includes establishing an ethical, industry-accepted framework for “red-team” methodologies to identify vulnerabilities, which will enable us to be proactive rather than endlessly reactive.
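To make the idea of a “dissemination signature” concrete, here is a minimal illustrative sketch, in Python, of one heuristic a detection pipeline might use: flagging URLs amplified by an unusually dense burst of distinct accounts. The Post structure, field names, and thresholds are invented for illustration; this is not any platform’s actual detection code.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account_id: str
    url: str
    timestamp: float  # seconds since epoch

def flag_coordinated_bursts(posts, window_secs=300, min_accounts=50):
    """Flag URLs pushed by many distinct accounts within a short window,
    a dissemination pattern more typical of coordinated amplification
    than of organic sharing. Thresholds here are hypothetical."""
    by_url = defaultdict(list)
    for post in posts:
        by_url[post.url].append(post)

    flagged = []
    for url, group in by_url.items():
        group.sort(key=lambda p: p.timestamp)
        start = 0
        # Slide a time window over the posts, counting distinct accounts.
        for end in range(len(group)):
            while group[end].timestamp - group[start].timestamp > window_secs:
                start += 1
            accounts = {p.account_id for p in group[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged.append(url)
                break
    return flagged
```

In practice, a heuristic like this would be only one weak signal among many (account age, content similarity, posting cadence), combined and reviewed before any enforcement decision.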
Our Third Step: Any U.S. anti-disinformation strategy has to be designed with preserving both freedom of expression and privacy as a top priority. Disinformation isn’t merely “speech we disapprove of.” We can’t let genuine differences of political opinion (or other kinds of opinion) become collateral damage as we detect and root out disinformation campaigns. At the same time, we also have to remember that bad actors will insist, disingenuously, that whatever we do to combat disinformation is censorship. To defang these narratives, disinformation campaigns must be addressed as transparently as possible: the platform companies must inform the public about what they are doing to combat disinformation. Frameworks similar to the practice of making Digital Millennium Copyright Act takedowns publicly accessible (both individually and in the aggregate) provide a good starting point. This would allow the platforms to demonstrate that any takedown is done with care toward protecting people’s rights to speak freely, to preserve their privacy, and to associate with one another on the internet as well as elsewhere.
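As a sketch of what that transparency could look like in machine-readable form, consider a hypothetical disclosure record for a single enforcement action. The schema below is our own invention, modeled loosely on the kind of metadata that DMCA takedown archives such as Lumen make public; no platform publishes this exact format today.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TakedownDisclosure:
    # All field names are hypothetical, chosen to mirror the metadata
    # that DMCA-style transparency archives make publicly accessible.
    action_id: str          # platform-assigned identifier for the action
    date: str               # ISO 8601 date of enforcement
    category: str           # e.g., "coordinated-inauthentic-behavior"
    accounts_affected: int  # aggregate count, not user identities
    evidence_summary: str   # public rationale, with private data redacted
    appeal_available: bool  # whether affected parties can contest

record = TakedownDisclosure(
    action_id="2018-0042",
    date="2018-10-01",
    category="coordinated-inauthentic-behavior",
    accounts_affected=314,
    evidence_summary="Accounts posting identical content on a shared schedule.",
    appeal_available=True,
)
print(json.dumps(asdict(record), indent=2))
```

Publishing records like this individually, plus aggregate statistics, would let outside researchers audit enforcement decisions without exposing private user data.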
The first three Steps establish a common understanding of the threat. The Fourth Step moves us toward building tangible infrastructure for addressing the problem. We must establish multistakeholder bodies to construct standards and promote clearly defined accountabilities and oversight. We already have good examples of dedicated, formalized multistakeholder structures for cooperation and threat-information sharing between the public and private sectors: successful models include the Cyber Threat Alliance (CTA) and the National Telecommunications and Information Administration’s (NTIA) Multi-stakeholder Collaboration on Vulnerability Research Disclosure. We should build on these models, enabling government agencies and tech companies to share their evolving knowledge of disinformation tactics and defense strategies with one another.
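To illustrate what shared disinformation threat intelligence might look like, here is a hypothetical indicator record and a merge routine of the kind a CTA-style exchange might run. The schema is our own assumption for illustration, not the CTA’s or NTIA’s actual format; for conventional cyber threats, real exchanges build on established standards such as STIX.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DisinfoIndicator:
    # Hypothetical schema for a shared disinformation indicator.
    indicator_type: str  # e.g., "domain", "account-cluster", "ad-buyer"
    value: str           # the observable itself
    campaign: str        # analyst-assigned campaign label
    reported_by: str     # contributing organization

def merge_feeds(*feeds):
    """Combine indicator feeds from multiple stakeholders, deduplicating
    so each organization benefits from every other's visibility."""
    seen = {}
    for feed in feeds:
        for indicator in feed:
            key = (indicator.indicator_type, indicator.value)
            seen.setdefault(key, indicator)
    return list(seen.values())
```

The design point is simple: once indicators share a common format, each participant’s detections become everyone’s defenses, which is exactly what the CTA model demonstrated for malware.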
The evolving nature of information attacks leads to our Fifth Step: platforms must agree to prioritize ethics and user well-being. Godwin has argued that the social-media companies should answer rising concern about disinformation and privacy breaches by adopting a standard, shared professional code of ethics, just as doctors, lawyers, and other professionals are bound by law and ethics in their fields. Obviously, legislators and regulators can and should impose duties on the companies to be proactive in protecting consumer well-being and preventing the rampant spread of disinformation. The companies might reflexively oppose these duties, but we argue they should instead embrace them. The more voluntary the companies’ adoption of “information fiduciary” duties is, the better—not only do we need the companies’ input to strike the right balance (and protect free expression and privacy), but enthusiastic embrace of a fiduciary role could also restore the public trust that the companies’ prior lapses have eroded.
The Sixth Step is oversight—a necessary measure to ensure that platforms remain accountable. By establishing third-party oversight with teeth, we can verify that tech companies are doing their utmost to manage and mitigate pervasive disinformation and manipulation in our privately owned public squares. The integrity of their products is increasingly tied to the information integrity that undergirds our democracy. Because bad actors will develop new tactics and strategies as internet platforms continue to evolve, we must ensure that the tech companies are incentivized to remain proactive in assuming responsibility for addressing disinformation on their platforms.
To establish this oversight, Congress needs to come together in bipartisan consensus and take action. That may be a challenge—recent Congressional investigative hearings have exposed knowledge gaps and devolved into partisan bickering. But, reaching further back into its rich history, Congress also has shown its potential to overcome partisan division and pass targeted solutions. Among current proposals, we endorse the Honest Ads Act from Senators Warner and Klobuchar as a good first step toward regulating political advertising online to reduce the risk of manipulation. Senator Warner’s 20 Policy Proposals point the way to more measures with potential to clean up social media disinformation, protect user data, improve media literacy education, and make bots more visible.
But Congress can’t be expected to do all this alone. To restore integrity to our information ecosystem, we need strategic defense planning and deterrence frameworks in addition to public-private collaboration and platform accountability.
That brings us to our Seventh Step—the need for a criminal-law and civil-liability framework to establish deterrence. There are currently no direct and consistent consequences for running influence operations, and therefore no downside to attempting them. The sheer variety, scope, and range of actors make the case for a whole-of-government cybersecurity doctrine, with deployed tools to detect and respond to malign influence campaigns. We need to better align government resources to address what are currently asymmetric threats. At a minimum, the President should start by immediately reinstating a cybersecurity coordinator on the National Security Council.
We’re not claiming to have gotten everything right with our Seven Step Program for overcoming disinformation, but we hope to spark a conversation about how our new democratic media are being used to corrode democratic societies. Just as the widespread adoption of the automobile forced society to think harder about how to cope with accidents, traffic jams, and safety regulations, we need to plan deliberately, in an ongoing way, for how we will contain this downside to our new social communication infrastructure. And just as the United States and other countries have had to deal with domestic as well as foreign terrorists, we face domestic as well as foreign disinformation campaigns. This means we need to put aside partisan divisions and commit to a shared program of anticipating, responding to, and deterring the threat of deliberate manipulation campaigns.
We believe that we can build an effective response to disinformation without eroding or undermining fundamental values such as freedom of expression online. But first we must resolve—as experts, company leaders, legislators, judges, regulators, and individual citizens—to build it.