“Today’s teenager is tomorrow’s potential regular customer.”
“The base of our business is the high school student.”
“The fragile, developing image of the young person needs all the support and enhancement it can get.”
Facebook’s recruitment of and marketing to children is no secret. Globally, 8% of active Facebook users are known to be children 13 to 17 years old—possibly more than 170 million in total. And Facebook is pushing products to even younger children: in December 2017 the company announced a new Messenger product aimed at 6- to 12-year-olds. The product is a gateway to get kids onto the platform as early as possible.
Of course, the statements above were not made by Facebook executives; they were made by tobacco company executives (here and here). Like Facebook, the tobacco industry recognized that early adoption of its addictive product was critical to its success. And just as Congress ultimately saw fit to strictly regulate the marketing of tobacco to children, Congress should now take a close look at how Facebook induces children to use its services as well.
Last week the company announced that 87 million people had their data siphoned away by Cambridge Analytica, the firm that provided data services to the Trump campaign and a long list of other Republican campaigns. In a separate disclosure this past week, Facebook admitted that a massive security flaw put most of its 2 billion users at risk of having their identities discovered and their public profiles scraped from the site. Given these systemic failures to protect personal information, can Facebook be trusted to do business with children? These problems are not limited to Facebook, of course, as demonstrated by a complaint filed with the Federal Trade Commission on Monday by more than 20 consumer advocacy groups alleging that Google violated children’s privacy.
The evidence on Facebook is unsettling. “God only knows what it does to our children’s brains,” Facebook founding president Sean Parker recently said. A year ago, an Australian newspaper obtained a confidential Facebook document pitching advertisers the opportunity to target 6.4 million children experiencing psychological vulnerabilities, such as feeling “worthless,” “insecure,” “stressed,” “defeated,” “anxious,” and like a “failure.” Researchers are also increasingly concerned about the use of the platform in schools, and what it means for the privacy of student data. And, as the BBC reported last year, Facebook has a checkered record on managing child exploitation, including the spread of sexualized images of children.
So, lawmakers questioning Mark Zuckerberg should keep Facebook’s youngest users in mind. Here are a dozen questions for Zuckerberg on social media and children:
1. Is Facebook aware of studies that suggest social media may be highly addictive, and what does that mean for children on its platforms? Has Facebook done any internal research on this question, and will it share the results?
2. Other evidence suggests social media causes depression among young people—is Facebook aware of these studies, and do they inform its product development processes or decisions to launch products like Messenger for Kids?
3. What research partners—commercial, academic, or otherwise—has Facebook worked with using children’s data? What are the security and privacy protocols on children’s data?
4. How many children had their information acquired by Cambridge Analytica? How many other breaches of children’s information have occurred on the platform?
5. Has Facebook taken any extra steps to verify the safety of children’s data held by developers who had access to the API that Cambridge Analytica exploited?
6. Can Facebook estimate how many children’s public profiles were scraped by those exploiting security flaws?
7. What percentage of content removed from Facebook is exploitative of children? How are content moderators trained to look for this kind of content? What responsibility do they have to inform victims?
8. Will Facebook automatically extend all of GDPR’s protections for children—up to the age of 18—to American children, without users having to opt in to those protections? If not, why not?
9. Does the company retain data acquired about child users of Messenger for Kids after they turn 13? Is that data used in any way, even indirectly, to provide information to advertisers?
10. Mental health experts and child psychologists have asked you to discontinue Messenger for Kids, citing risks. Why should Facebook be permitted to market services to children as young as six at all?
11. Will Facebook extend the right to be forgotten to all minors?
12. Conspiracy theories targeting minors, such as the Parkland survivors, are particularly troubling. Does Facebook do anything specific to protect minors who are targets of harassment?
It is time for the United States to consider more robust protections for children on social media platforms and on the internet in general. The Federal Trade Commission enforces the Children’s Online Privacy Protection Act (“COPPA”), but those protections extend only to children under the age of 13.
Contrast that with key provisions of Europe’s General Data Protection Regulation (GDPR), such as its Article 8, “Conditions applicable to child’s consent in relation to information society services.” GDPR recognizes children of all ages as “vulnerable individuals” who are worthy of “specific protection.” It sets out rules for parental consent, firm data protections, and other common-sense requirements, such as ensuring that language about data is written in child-friendly terms. We can and should have similar rules in the United States.
In the wake of massive privacy scandals at Facebook and Cambridge Analytica, Equifax, eBay, Anthem, and even Home Depot, many Americans are discouraged about privacy, and polls suggest they want lawmakers to do more to protect them. Perhaps the need to protect our children can galvanize our polarized political representatives to act. We no longer let companies sell our kids cigarettes. Will we let Facebook build addictive apps that collect our kids’ personal data and sell access to anyone anywhere willing to pay, with no rules in place to protect them?