https://www.protocol.com/bulletins/twitter-crisis-misinformation-policy

Twitter will begin taking action against misinformation in crisis situations, the company said Thursday. The new policy will be immediately applied to misinformation surrounding the war in Ukraine.
Given the way misinformation and disinformation have been weaponized in that war, it’s an important update. But it’s also a challenging one for Twitter to pull off, and not just because Twitter’s would-be new owner believes the company should let all legal speech stand. It also puts Twitter in a position of defining what’s true — or not true — in often chaotic situations and, perhaps even more challenging, deciding what constitutes a crisis to begin with.

“During periods of crisis like international armed conflict, public health emergencies and large-scale natural disasters, we find misinformation can undermine public trust and cause further harm to already vulnerable communities,” Yoel Roth, Twitter’s head of Safety and Integrity, said on a call with reporters. Roth said the company eventually plans to deploy this policy in “any situation in which there’s a widespread threat to life, physical safety, health or basic subsistence,” but that the company was starting off in Ukraine because of “the unique role that disinformation has played in this conflict.”

To figure out what’s true and what isn’t, Roth said, Twitter is relying on public information from multiple “credible sources,” including humanitarian groups, news organizations, conflict-monitoring services and open-source intelligence investigators. Once Twitter determines that a given post is misinformation, it’ll stop amplifying and recommending it, and will add warning notices that users have to click through in order to view the tweet. Users also won’t be able to retweet, quote tweet or otherwise engage with labeled posts. The company will prioritize acting on tweets with high visibility and tweets from accounts with lots of followers.

Roth said Russian state media accounts on Twitter saw a 30% drop in their reach when the company stopped recommending or amplifying them. “We believe that we’ll see similar effects in this context, but we’re studying it closely and we’re going to share data about this as we learn more,” Roth said.

Twitter will remove content, Roth said, only “in the most severe cases where the potential to cause harm is the greatest.”

Twitter started developing this policy long before the war in Ukraine began. According to Roth, the idea began in 2020 when misinformation began spreading about arsonists starting the wildfires in the West. “That was resulting in first responders being unable to pass through national parks and federal land in order to do their job,” Roth said. The company has been working on developing the scope of the policy since last year.

While the war in Ukraine is an obvious first target, the big question now is how Twitter will define crises in the future. The company is starting with international conflicts, and plans to apply these policies in Ethiopia, Afghanistan and India. But Twitter also expects to use the policy in a wide range of future crises, including mass shootings and natural disasters, Roth said. That will inevitably set Twitter up to get things wrong, and invite public criticism over why the company is or isn’t intervening in a given crisis.

The new policy rollout also suggests that Twitter is forging ahead with new forms of content moderation, even as employees fear Musk’s takeover could send the company back to an earlier, more lawless period of its history.

“This is an area of active investment and development by us, and we’re going to be evolving and expanding this policy over time,” Roth said. “This work remains full steam ahead.”
