I believe that Elon Musk’s vision for making Twitter a completely free speech-promoting platform could help to solve one of its most egregious problems: the abundance of child sexual exploitation material.
This global issue has been practically ignored by the platform for years. Twitter has done the bare minimum to stay in compliance with government regulations when firmly pressed on the issue, but otherwise the tech giant lags behind its competitors in removing and reporting child pornography at scale. Twitter put a lot of energy into designing its algorithm to control conservative speech while failing to prioritize the removal and reporting of child sexual exploitation material. There is a vast difference between mean words, outrageous ideas, or misinformation and an adult raping a child, documenting the crime, and sharing it with the world.
I don’t blame Twitter for the initial assaults against children. Twitter can’t fix all the evil in the world. I do blame Twitter for receiving reports of child sexual abuse material and not addressing the reports in a timely fashion. Time is of the essence when these reports are received. In some circumstances, it can be a matter of life or death for the children being abused in these images and videos.
Typically when I start speaking about these issues, folks think to themselves, “I’ve never seen it, so it must not be that big a deal.”
Let’s look at the numbers and some examples.
In 2021, the National Center for Missing and Exploited Children (NCMEC) CyberTipline received 29.3 million reports of suspected child sexual exploitation, an increase of 35% from 2020. Over 29.1 million of those 29.3 million reports came from electronic service providers.
In 2021, Twitter reported 86,666 child sexual abuse material images to NCMEC. I believe that number could, and should, be higher.
In Twitter’s latest transparency report, including relevant data from January 1, 2021, through June 30, 2021, the platform says, “In these six months, Twitter permanently suspended 453,754 unique accounts for violations of our child sexual exploitation (CSE) policy — 89% of those accounts were proactively identified and removed by deploying a range of internal tools and/or by utilizing the industry hash sharing (e.g., PhotoDNA) prior to any reports filed via the designated CSE reporting channel.”
Any discussion of this issue on Twitter can't ignore the case of John Doe #1 and John Doe #2. In 2021, Twitter was sued by minor survivors of sexual exploitation. Their sexual exploitation video, which had already racked up over 167,000 views and 2,223 retweets, had been reviewed by Twitter after multiple reports. The platform said that it wouldn't remove the content, claiming that it "didn't find a violation" of the company's "policies." The minors were both 13 years old in the video. It took the Department of Homeland Security stepping in to get Twitter to remove the content.
A Delaware school custodian was recently charged with dealing in child sexual abuse material on Twitter, according to court documents. These types of individuals need to be apprehended quickly so that they don’t have access to children.
Governments have been pushing back against Twitter about this issue. Child exploitation is an easy excuse for tyrannical governments to clip Twitter’s wings (and its potential for supporting free speech) and slow down service or block the platform altogether in the country. Blocking Twitter in any country silences the voices of survivors, activists, political dissidents, journalists, and whistleblowers.
From March to April 2021, Roskomnadzor [the Russian Federal Service for Supervision of Communications, Information Technology and Mass Media] considered banning Twitter and removing its IP addresses from Russia entirely. The government agency was met with denials and a lack of urgency from the social network. Roskomnadzor has the "technical capabilities" to remove Twitter from the Russian domain completely. The situation came to a head in 2021, when the agency detected over 3,000 posts containing child pornography in violation of Community Guidelines and sent them to Twitter's regulatory board for verification. Twitter sent no response to the agency concerning the illegal content and was subsequently charged with failing to uphold its duty to maintain the network's Community Guidelines.
Russia has fined Twitter multiple times over its refusal to remove banned content, threatening to slow down service and/or remove the platform entirely from the country. In 2021, India opened multiple cases against Twitter over child sexual exploitation material and has continuously threatened action if the platform doesn't clean up its act.
Domestically, Apple could, at any point, remove the platform from the App Store for failure to remove child sexual abuse material. It has taken a hard-line stance on this issue in the past. In 2018, Apple removed Tumblr from the App Store because of child sexual abuse material. In February 2018, Apple removed Telegram and Telegram X, encrypted messaging applications made by Telegram Messenger LLP, from the App Store due to content it deemed inappropriate. The company specifically cited instances of child pornography that were made available to users and banned the apps until the situation could be dealt with.
There are ways that Twitter could address child sexual abuse material moving forward without violating digital privacy. The platform recently made the reporting process for this content slightly easier, but it needs to be made easier still for Twitter users and survivors. When a platform designs its process for reporting child sexual exploitation, it needs to picture a very young child in a panic trying to figure out how to report their own exploitation. Reporting should be clear and simple, not a vague labyrinth for the minor to stumble through. When a report is received, the minor should be offered safe, trauma-informed resources, and the platform should prioritize these reports. Innovation will be the key to addressing the reports at scale. As much as I hate to say it, Elon Musk should reach out to Mark Zuckerberg about this.
Meta (Facebook) has implemented proprietary tools to combat the issue of child sexual exploitation material specifically and has amazing proactive detection, removal, and reporting rates. Meta isn’t perfect; it still needs a lot of work, but its latest data collection around this specific issue has been a welcome change.
If a piece of content is flagged on Twitter as child sexual exploitation, the account should be immediately locked with an opportunity for appeal if there was an error. If a user continuously falsely reports child sexual exploitation, the individual making the false reports should have their account locked. We all make mistakes, but false reports can clog up the system, and we need to make sure that the most egregious reports are handled quickly.
I’d also like to see images blurred or grayed out and a warning put on any content that could be child sexual abuse material with an option to appeal the warning. Twitter started offering this for potentially disturbing images of war, and it should expand on that technology.
Twitter flagged all types of content during the pandemic, showing it is capable of locating and identifying text and imagery. If content is flagged as child sexual abuse material, it should not be able to be retweeted, at the very least. This would help keep the content from spreading. Twitter will need to hire more staff to review the child sexual exploitation material that the AI doesn't catch, and that staff should be using the latest technology to reduce the harm of reviewing this content. Terms of service should be clear about content shared by those who mean well but often miss the mark. Sometimes individuals share the content to get others to report it, but that only spreads it further. Individuals who share the content should not be allowed to remain on the platform, regardless of intent. It sounds harsh, but the goal is to minimize the harm to the victims, not perpetuate the crime.
As we move ahead, I'm not really sure whether Elon Musk is the answer to these problems. I know that some love the news of Elon Musk owning Twitter and some hate it, but I believe that we can all agree on one thing: We should be prioritizing the protection of our world's most vulnerable. When it comes to Twitter, those are the minors being sexually exploited on the platform. We don't need government overreach or violations of digital privacy to get there; we just need aggressive innovation and attention on the issue.