On Thursday, March 25th, two subcommittees of the House Energy and Commerce Committee will hold a joint hearing on “the misinformation and disinformation plaguing online platforms.” Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will testify and respond to questions from lawmakers. And Senator Chris Coons (D-DE) has now said that the Senate Judiciary subcommittee he chairs is “very likely” to call Zuckerberg and Dorsey to testify as well, with a focus on how algorithms amplify misinformation and the spread of extremism.
Next Thursday will be the first time the tech CEOs face Congress since the January 6th siege on the U.S. Capitol, in which groups incited by disinformation campaigns led by former President Donald Trump and his allies sought to prevent the certification of the presidential election. Questions about the role of the tech platforms in contributing to radicalization and extremism and in propagating disinformation related to the election are expected, according to a press release from the Committee. Lawmakers are also interested in the spread of disinformation about the coronavirus pandemic.
“This hearing will continue the Committee’s work of holding online platforms accountable for the growing rise of misinformation and disinformation,” Energy and Commerce Committee Chair Frank Pallone, Jr. (D-NJ) and the chairs of the Communications and Technology and Consumer Protection and Commerce subcommittees, Mike Doyle (D-PA) and Jan Schakowsky (D-IL), said in a statement. “For far too long, big tech has failed to acknowledge the role they’ve played in fomenting and elevating blatantly false information to its online audiences. Industry self-regulation has failed. We must begin the work of changing incentives driving social media companies to allow and even promote misinformation and disinformation.”
Unfortunately, prior Congressional hearings have occasionally highlighted the gap in understanding of tech issues on Capitol Hill, such as Senator Orrin Hatch’s much-pilloried query about how Facebook makes money, since its service is free for users (Zuckerberg’s response, “Senator, we run ads,” is now a meme). The general understanding of these issues has certainly improved, but after four years of such hearings, getting the questions right has never been more urgent, nor the circumstances more severe.
There are signs this hearing may go into a substantial level of detail. Last month, for instance, the Committee sent a letter to Facebook CEO Mark Zuckerberg requesting specific information, including internal documents and presentations, related to “Facebook’s research on divisive content and user behavior, the reported presentations and recommendations made to Facebook executives and their actions in response, and the steps Facebook leadership has taken to reduce polarization on its platform.” A similar letter in early March to Google CEO Sundar Pichai requested details on YouTube’s “policies and practices for combating extremism,” including its recommendation system and content moderation practices. And another letter to Facebook on March 8th requested information about “ads showing gun accessories and protective equipment next to content that amplified election misinformation,” as well as about content related to the siege at the Capitol.
It is important that the American public understand how the design choices and business decisions these companies made helped foment and facilitate the violent attack on democracy on January 6th, and how they continue to contribute to radicalization, extremism, and the spread of disinformation more generally. That’s why we asked a range of experts to provide potential questions for lawmakers to pose. Responses came in from:
● Imran Ahmed, CEO of Center for Countering Digital Hate (@Imi_Ahmed)
● Damian Collins, Member of Parliament for Folkestone and Hythe (@DamianCollins)
● Renée DiResta, Research Manager at the Stanford Internet Observatory (@noUpside)
● Dr. Mary Anne Franks, President of the Cyber Civil Rights Initiative and Professor of Law at the University of Miami (@ma_franks)
● Bryan Jones, Chairman, Tech Policy Press (@dbryanjones)
● Mor Naaman, Professor at Jacobs Technion-Cornell Institute at Cornell Tech and Information Science Department at Cornell University (@informor)
● Katie Paul, Director of the Tech Transparency Project (@TTP_updates)
● Gretchen Peters, Executive Director, Alliance to Counter Crime Online (@GretchenSPeters)
● Erin Shields, National Field Organizer, MediaJustice (@mediajustice)
● Jonathan Zittrain, Professor of Law and Professor of Computer Science, Harvard University (@zittrain)
● Reader submissions are included in an addendum.
We encourage readers to propose additional questions by sending suggestions to lte@justsecurity.org. We will add selected readers’ questions to the list with attribution, but let us know if you prefer anonymity.
Questions for all three Tech CEOs
1. The First Amendment is often erroneously invoked to suggest that your companies cannot or should not restrict content. But as you know, the First Amendment actually gives you, as private businesses, the right to set terms of service as you see fit, including what kind of content and conduct to allow, and to deny the use of your services to those who violate those terms. What specific actions is each of your companies taking to exercise those First Amendment rights and ensure that your platforms and services are not plagued with dangerous misinformation? (Mary Anne Franks)
2. One highly influential piece of misinformation is the claim that the tech industry is biased against conservative figures and conservative content. In fact, conservative figures and content perform very well on social media sites such as Facebook, even though they disproportionately violate companies’ policies against misinformation and other abuse. Will each of you commit, going forward, to enforcing your policies against misinformation and other abuses, regardless of accusations of political bias? (Mary Anne Franks)
3. The principle of collective responsibility is a familiar concept in the physical world. A person can be partly responsible for harm even if he did not intend for it to happen and was not its direct cause. For instance, a hotel can be held accountable for failing to provide adequate security measures against foreseeable criminal activity against its guests. An employer can be liable for failing to address sexual harassment in the workplace. Do you believe that tech companies are exempt from the principle of collective responsibility, and if so, why? (Mary Anne Franks)
4. Do you think your services’ responsibilities to address disinformation vary depending on whether the content is organically posted by users, versus placed as paid advertising? (Jonathan Zittrain)
5. Along with law enforcement agencies, Congress is conducting multiple lines of inquiry into January 6th and there may indeed be a National Commission that will have the role of the tech platforms in its remit. Have you taken proactive steps to preserve evidence that may be relevant to the election disinformation campaign that resulted in the January 6th siege on the Capitol, and to preserve all accounts, groups and exchanges of information on your sites that may be associated with parties that participated in it? (Justin Hendrix)
6. Looking beyond content moderation, can you explain what exactly you have done to ensure your tools — your algorithms, recommendation engines, and targeting tools — are not amplifying conspiracy theories and disinformation, connecting people to dangerous content, or recommending hate groups or purveyors of disinformation and conspiracy theories to people? For example, can you provide detailed answers on some of the Capitol riot suspects’ usage history, including the following:
- Facebook: Which Facebook groups were they members of? Did Facebook recommend those groups, or did the individuals search for the specific groups on their own? What ads were targeted at them based on either the data you gathered or interests you inferred about them? Were they connected to any known conspiracy theorists, QAnon believers, or other known January 6th rioters due to Facebook’s recommendations?
- YouTube: Of the videos these individuals watched containing Stop the Steal content, calls to question the election, white supremacist content, and other hate and conspiracy content, how many did YouTube recommend to the viewer?
- Twitter: Were any of the conspiracy theorists or other purveyors of electoral misinformation and Stop the Steal activity recommended to them as people to follow? Were their feeds curated to show more Stop the Steal and other conspiracy theory tweets than authoritative sources?
- And to all: Will you allow any academics and members of this committee to view the data to answer these questions? (Yael Eisenstat)
7. There is a growing body of research on the disproportionate effects of disinformation and white supremacist extremism on women and people of color. This week saw violence against the Asian American community; the New York Times reports that racist memes and posts about Asian-Americans “have created fear and dehumanization,” setting the stage for real-world violence. Can you describe the specific investments you are making in threat analysis and mitigation for these communities? (Justin Hendrix)
Questions for Facebook CEO Mark Zuckerberg
1. Mr. Zuckerberg, you and other Facebook executives have routinely testified to lawmakers and regulators that your AI finds and removes as much as 99% of some forms of objectionable content, such as terrorist propaganda, human trafficking content and, more recently, child sex exploitation content. That figure is commonly understood to mean that Facebook’s AI and moderators remove 99% of all such content. Can you state clearly whether you mean that your AI removes 99% of what you remove, rather than 99% of the total amount of such content on the platform? Does Facebook have evidence about its overall rate of removal of terror content, human trafficking content, and child sexual abuse material (CSAM) that it can provide to this Committee? Studies by the Alliance to Counter Crime Online indicate you are removing only about 25-30%. Can you explain the discrepancy? (Gretchen Peters)
2. Facebook executives like to claim that Facebook is just a mirror to society. But multiple studies — including, apparently, internal Facebook studies — have shown that Facebook recommendation tools and groups connect bad actors, amplify illegal and objectionable content and amplify conspiracies and misinformation. Why can’t you, or won’t you, shut down these tools, at least for criminal and disinformation content? (Gretchen Peters)
3. (1) Aside from labeling misinformation or outright deleting it, there’s also the possibility of simply making it circulate less. (a) Is an assessment of misinformation taken into account as Facebook decides what to promote or recommend in feeds? (b) Could users be told if such adjustments are to be applied to what they are sharing? (2) Decisions about content moderation often entail obscuring or deleting some information. Would Facebook be willing to automatically document those actions as they happen, perhaps embargoing or escrowing them with independent research libraries, so that decisions might be understood and evaluated by researchers, and trends made known to the public, later on? (Jonathan Zittrain)
4. Why do Facebook and Instagram remove fewer than one in twenty pieces of misinformation about Covid and vaccines that users report to them? (Imran Ahmed)
5. There is clear evidence that Instagram’s algorithm recommends misinformation from well-known anti-vaxxers whose accounts have even been granted verified status. With lives depending on the vaccine rollout, when will Facebook address this problem and fix Instagram’s algorithm? Did Facebook perform safety checks to prevent the algorithmic amplification of Covid-19 misinformation? Why were posts carrying content warnings, for example warnings about Covid-19 misinformation, promoted into Instagram feeds? What is the process for suggesting and promoting posts that have not first been verified? (Imran Ahmed)
6. Former Facebook policy employees came forward to say that “Mark personally didn’t like the punishment, so he changed the rules,” when it came to banning Alex Jones and other extremists like the Oath Keepers. What role do you play in the moderation of misinformation and deciding what harmful content qualifies for removal? (Imran Ahmed)
7. Facebook’s own 2016 research showed that 64% of people who joined Facebook groups promoting extremist content did so at the prompting of Facebook’s recommendation tools. Facebook reportedly changed its policies. You were previously asked in a Senate hearing whether you had seen a reduction in your platform’s facilitation of extremist group recruitment since those policies were changed, to which you responded, “Senator, I’m not familiar with that specific study.” Are you now familiar with that study, and what is your response now: have you seen a reduction in your platform’s facilitation of extremist group recruitment since those policies were changed? (Damian Collins)
8. Did Facebook complete the app audit it promised during the Cambridge Analytica scandal? Have you found evidence of other apps harvesting Facebook user data in a similar way to Alexander Kogan’s app? Will you make public a list of such apps? (Damian Collins)
9. A Washington Post story referenced internal Facebook research that focused on super spreaders of anti-vaccine content. What are the remedies you are considering to balance freedom of expression while recognizing that a committed handful of people are repeatedly responsible for spreading harmful content across Facebook as well as Instagram? (Renée DiResta)
10. A recent report in MIT Technology Review found that there is no single team at Facebook tasked with understanding how to alter Facebook’s “content-ranking models to tamp down misinformation and extremism.” Will you commit today to create a department at Facebook that has dominion over all other departments to solve these problems, even if it hurts Facebook’s short term business interests? (Justin Hendrix)
11. The Technology Review report also found that you have limited efforts to explore this question because of the influence of your policy team, in particular Joel Kaplan, Facebook’s vice president of global public policy. The report said that when deciding whether a model intended to address misinformation is fair with respect to political ideology, the Facebook Responsible AI team determined that “fairness” does not mean the model should affect conservative and liberal users equally. “If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.” But the article says members of Joel Kaplan’s team “followed exactly the opposite approach: they took ‘fairness’ to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless.” In other words, the demanded change left the model with no impact on the actual problem of misinformation. Is it Facebook’s policy to seek political balance even when that means allowing harmful misinformation and disinformation to remain on its platform? (Justin Hendrix)
12. As Facebook continues development of artificial intelligence through its Responsible AI project, how is the company deploying this technology to limit the impact of hate speech, misinformation, and disinformation campaigns on its platform? By recent accounts, Facebook is deploying AI in service of platform growth and not necessarily in the interest of the communities it claims to care about. (Erin Shields)
13. With respect to the January 6 attacks, Sheryl Sandberg said, “I think these events were largely organized on platforms that don’t have our abilities to stop hate and don’t have our standards and don’t have our transparency.” We now know that Facebook was the most cited social media site in charging documents the Justice Department filed against members of the Capitol Hill mob. Can you provide an accurate accounting of how many people discussed these plans in Facebook groups or elsewhere on the platform? Of those groups where discussions of Stop the Steal or the events of January 6 occurred, how many had been flagged for violating your own policies? How many remained up, despite either internal flags or reports from external groups? Why did they remain up? (Yael Eisenstat)
14. After a group is found to be engaged in the propagation of disinformation or planning violence, such as the Stop the Steal group that was terminated shortly after the November 3rd election, what steps do you take? Do you continue to monitor the activities of group organizers? Facebook had hundreds of Stop the Steal pages that were not deactivated, and many remained active into this year, even after the Capitol siege. What steps do you take to limit the propagation of false claims in these groups? Do you monitor how accounts that participate in banned groups later reconnect in new ones? Do you tell the participating accounts why the group was deactivated? (Bryan Jones)
15. In April of last year, the Tech Transparency Project identified 125 Facebook groups devoted to the boogaloo boys, with some sharing tips on tactical organizing and instructions for making gasoline bombs and Molotov cocktails. Just weeks after TTP’s report, multiple boogaloo supporters were arrested by the FBI Joint Terrorism Task Force in Las Vegas on terrorism-related charges. They had met in some of the same Facebook groups identified by TTP. It was not until after these arrests that Facebook said it would stop recommending groups and pages related to the boogaloo movement. But the problem continued. A short time later, authorities arrested alleged boogaloo supporters Steven Carrillo and Robert Justus for the murder of a Santa Cruz County sheriff’s deputy. Both men were members of Facebook boogaloo groups. Facebook finally acted to ban the militant boogaloo movement from its platform on June 30, a month after someone was murdered. We saw a similar failure by Facebook to address these issues in the Kenosha shootings, where BuzzFeed News found that the Kenosha Guard militia’s event listing was reported 455 times and not removed until after the shooting had taken place. Facebook told the media that it had removed the event, which turned out to be false. The militia group that organized the event actually removed it, not Facebook. And just this month, the FBI informant in the foiled militia plot to kidnap Michigan Gov. Gretchen Whitmer said that he joined the militia’s Facebook group because it was recommended by your algorithms; he didn’t even search for it. Facebook vice president for global policy management and counterterrorism Monika Bickert told the Senate Commerce Committee in September 2019 that the company has “a team of more than 350 people who are primarily dedicated in their jobs to countering terrorism and hate.” Why is it that even with your specialized teams and AI tools, outside researchers and journalists continue to easily find this content on your platform? (Katie Paul)
16. Mr. Zuckerberg, in November you told the Senate Judiciary Committee that “we’re also clearly not like a news publisher in that we don’t create the content.” But an SEC whistleblower petition in spring 2019 found that Facebook was actually auto-generating business pages for white supremacist and terrorist groups, and that these pages created by Facebook can serve as a rolodex for extremist recruiters. Just weeks after this revelation made headlines, your VP, Monika Bickert, was asked about this auto-generation of extremist content at a House hearing. One year later, however, little appears to have changed. A May 2020 report from the Tech Transparency Project found that Facebook was still auto-generating business pages for white supremacist groups. How do you expect this Congress, the public, and your investors to trust that your AI can solve these issues when that same AI is not only failing to catch, but actually creating, pages for extremists? (Katie Paul)
17. After the January 6 Capitol insurrection, a report from BuzzFeed News revealed that Facebook’s algorithms were offering up advertisements for armor and weapons accessories to users alongside election disinformation and posts about the Capitol riot. After complaints from lawmakers and Facebook employees, you announced that Facebook would pause these military gear ads through the inauguration, but that does not appear to have happened. In the days following Facebook’s announcement, BuzzFeed reporters and the Tech Transparency Project continued to find ads for military gear in Facebook feeds, often posted alongside algorithmic recommendations for militia groups. This is yet another instance of your company promising something will be dealt with and not following through. Why should Congress, your investors, or the public believe that you will address harmful content? Thus far, profit has prevailed. (Katie Paul)
Questions for Google CEO Sundar Pichai
1. To what extent do you view Google Web search as simply about “relevance,” rather than tweaking for accuracy? For example, if Google search offers a site containing rank disinformation as the first hit on a given search, under what circumstances, if any, would the company think itself responsible for refactoring the search to surface more accurate information? (Jonathan Zittrain)
2. On a recent Atlantic Council webinar, YouTube CEO Susan Wojcicki explained that YouTube did not implement a policy about election misinformation until after the states certified the election, on December 9. She said that starting then, a person could no longer allege that the outcome of the election was due to widespread fraud. First, this raises the obvious question: Why did you wait until December 9? (Yael Eisenstat)
3. She went on to explain that due to a “grace period” after the policy was finally put in place, Donald Trump’s numerous violations didn’t count; he has only one actual strike against him and will be reinstated when YouTube deems there is no longer a threat of violence. How will you make that assessment? How will you, YouTube, decide that there is no longer a threat of violence? And does that mean you will allow Donald Trump, or others with strikes against them, to have their accounts reinstated and continue spreading mis- and disinformation and conspiracy theories? (Yael Eisenstat)
4. At the Atlantic Council, when asked about testifying to Congress, Wojcicki also said: “If asked, I would always attend and be there.” It is my understanding that she has turned down numerous requests to testify. Can you confirm that she will attend the next time she is invited by Congress to testify? YouTube is the second most used social media platform, with 2.3 billion monthly active users who collectively watch one billion hours of video daily. YouTube has largely flown under the radar despite being a huge contributor to the erosion of public trust in information, and it remains reluctant to engage with stakeholders about these policies. The public needs to hear directly from the executives approving and instigating the decisions that shape content moderation on the platform. (Erin Shields)
5. In the months leading up to the election, Google claimed that it would “protect our users from harm and abuse, especially during elections.” But an investigation from the Tech Transparency Project (TTP) found that search terms like “register to vote,” “vote by mail,” and “where is my polling place” generated ads linking to websites that charge bogus fees for voter registration, harvest user data, or plant unwanted software on people’s browsers. When questioned about the malicious scam ads, Google told media outlets it had removed some of the ads that charged large fees to register to vote or sought to harvest user data. But a second investigation by TTP less than four months later found that Google continued allowing some kinds of misleading ads, just weeks before the November election. Voting was the subject of intense national attention and a topic Google had promised to monitor closely, yet these problems persisted after the company said it had addressed them. Is Google unable, or simply unwilling, to fix its advertising apparatus? (Katie Paul)
6. As research from Cornell and the Election Integrity Partnership makes clear, YouTube serves as a library of disinformation content that is often used to populate posts on Twitter and Facebook. Because of your three-strike system, offending content, no matter how popular, can remain available on YouTube and continue to be shared. Is it reasonable to have a policy under which misleading videos can remain intact on YouTube simply because an account has not yet accrued three strikes? (Mor Naaman)
Questions for Twitter CEO Jack Dorsey
1. On several occasions you’ve touted a decentralized version of Twitter, such as “Bluesky.” How do you envision interventions for disinformation taking place on a distributed version of Twitter, if at all, and what business model, if any, would Twitter contemplate for such a version? How far along is it, and how open is it in its conception, whether in code or in participation by other software developers and organizations? (Jonathan Zittrain)
2. Does Twitter believe its labels and other restrictions on tweets from Trump and others who shared election disinformation were effective? Why, or why not? How exactly do you measure that effectiveness? What criteria were used to decide on these sanctions, and who applied them? (Mor Naaman)
3. As the volume and spread of false claims was becoming obvious, when did you first consider taking action on some of the most prominent accounts spreading disinformation? Researchers have identified several top accounts that were most active in spreading these false claims, including in research from the Social Technologies Lab at Cornell Tech, and in a report from the Election Integrity Partnership. Was anyone at Twitter tasked with monitoring or understanding this influencer network as it was evolving? Who was responsible for the decision to continue to allow these accounts to use the platform, or to suspend them, and where and when were these decisions made? (Mor Naaman)
4. Mr. Dorsey, following the deadly siege on the U.S. Capitol on January 6th, you introduced a detailed strike system specifically for violations of Twitter’s civic integrity policy. Has Twitter applied this new policy since its creation? And do you intend to expand the strike system to other problem areas, such as COVID-19 misinformation? (Justin Hendrix)
* * *
“Should social media companies continue their pattern of negligence, governments must use every power – including new legislation, fines and criminal prosecutions – to stop the harms being created,” Imran Ahmed, CEO of Center for Countering Digital Hate wrote in the introduction to his recent report, Malgorithm: How Instagram’s algorithm publishes misinformation and hate to millions during a pandemic. “Lies cost lives.”
Certainly, Congress learned that lesson all too well when it was attacked on January 6th. As the CEOs prepare their responses, they should keep this in mind. We are well past the point where platitudes and evasions are acceptable; Americans deserve complete answers to the questions our elected representatives pose.
ADDENDUM: Reader Submissions
Evan Greer, Fight for the Future:
In March of last year, multiple media reports emerged about Facebook removing large numbers of posts containing legitimate public health information about COVID-19 posted by medical professionals. The company blamed it on “a bug.” When YouTube announced it would remove white nationalist content from its platform, the company also took down videos by anti-racist groups like the Southern Poverty Law Center. Facebook has also incorrectly labeled posts about the U.S. government’s mass surveillance programs as “misleading,” based on a fact-checker who cites a former top NSA lawyer as a source. Over the last several years, researchers found that Big Tech platforms’ automated “anti-terrorism” filters regularly removed content from human rights organizations and activists, disproportionately impacting those from marginalized groups outside the U.S.
Attempts to remove or reduce the reach of harmful or misleading content, whether automated or conducted by human moderators, always come with tradeoffs: they can silence legitimate speech, remove documentation of human rights abuses, and undermine social movements confronting repressive governments and corporate exploitation.
● Does your company have a way to measure and report on the number of legitimate posts that are inadvertently deleted, labeled, or algorithmically suppressed as part of efforts to remove disinformation?
● Does your company maintain demographic data to assess whether the “collateral damage” of efforts to remove or suppress disinformation has a disproportionate impact on the speech of marginalized groups?
● For Facebook: does your company believe that activist groups opposing racism are the same as white nationalist groups? Why did you ban multiple Black liberation activists and organizations during a purge of accounts ahead of Joe Biden’s inauguration? What steps have you taken since to prevent efforts that you claim are intended to address hate and disinformation from silencing anti-racist activists?
● Has your company studied the potential long-term impacts of collateral damage to online freedom of expression and human rights caused by haphazard attempts to address online disinformation?
● Will your company commit to moderation transparency, providing researchers and advocates with a complete data set of all posts that are removed or algorithmically suppressed as part of efforts to stem the spread of disinformation so that the potential harm of these efforts can be studied and addressed?
Neil Turkewitz, CEO, Turkewitz Consulting Group:
There are questions about how the tech industry uses its wealth to influence the media, academia, think tanks and government to suppress or counter criticism. Can you please provide the committee with an exhaustive list of all organizations, academics, think tanks, businesses and other enterprises you fund, either directly or indirectly, together with the amount of such funding? To decrease any potential hardship in terms of workload, feel free to list only funding that exceeds a certain threshold. We leave that to your discretion, but in no event shall you exclude any entity that receives funding that exceeds $50,000.
Dick Reisman, President, Teleshuttle Corp.:
What is your view of adapting Twitter’s “Bluesky” initiative to consider decentralizing the filtering of news feeds from your service to an open market from which users can choose? That effort draws on 2019 proposals from Stephen Wolfram and Mike Masnick, which have also recently been eloquently advocated by the Stanford working group on Platform Scale (and in Foreign Affairs and the WSJ). Isn’t it antithetical to democracy and a free society for users of your services to have their window into this increasingly large portion of our marketplace of ideas selectively controlled by a few private companies? Wouldn’t it reduce the problematic onus on you to moderate the impression you impose on your users by shifting and dispersing that responsibility to services that act as the user’s agents, giving users freedom of impression and minimizing the need for restrictions on their freedom of expression?
Published jointly with Tech Policy Press.