Nov 18, 2024
Study: Community Notes on X could be key to curbing misinformation
When it comes to spreading misinformation — about elections, conspiracy theories, the deaths of celebrities who are actually still alive — social media gets a large share of the blame. Fake news often starts on Facebook or X (formerly Twitter) and spreads unchecked, sometimes taking on the authority of actual news, like the tale of JD Vance’s relationship with a couch.
Journalists and professional fact checkers, like the staff of FactCheck.org, have tried to stop the spread of misinformation, but often by the time a story receives sufficient attention to warrant a check, the damage has already been done.
In 2021, Twitter piloted a new crowdsourcing program called Birdwatch, intended to stop social media-based misinformation at its source. Users were encouraged to submit corrections or add context to false or misleading tweets. The program was later renamed Community Notes, and after Elon Musk bought the platform in 2022, it expanded.
“Elon Musk tweeted a lot about this Community Notes system,” said Yang Gao, an assistant professor of business administration at Gies College of Business at the University of Illinois Urbana-Champaign. “He said, ‘This whole idea is fantastic, it will help curb misinformation,’ but without any solid evidence.”
So Gao decided to find that evidence himself. With his PhD student, Maggie Mengqing Zhang of the Institute of Communications Research at the University of Illinois Urbana-Champaign, and Professor Huaxia Rui of the Simon Business School at the University of Rochester, he set out to measure how effective Community Notes is at convincing X users to voluntarily retract false or misleading tweets. They report their findings in a working paper called “Can Crowdchecking Curb Misinformation? Evidence from Community Notes.”
Much to their collective surprise, Community Notes actually works.
But when they started the project, Gao said, they weren’t sure what the results would be. “People tend to be stubborn,” he said. “Even if the note is true – if it's 100% accurate – they can easily just deny what the note is saying.” There was also the question of whether users would accept corrections (or even criticism) from their peers, as opposed to a higher authority like an X staffer or a professional fact checker. Skeptics of Community Notes had also raised the concern that partisan bias in the notes might deepen polarization. Still, Gao could see several advantages in a crowdsourced fact-checking system like Community Notes.
“It's very hard to scale professional fact-checking,” Gao said. “When you have crowdchecking, well, you're using the wisdom of the crowd, so it's very easy to scale up. That's one advantage. The other advantage is now you're introducing diverse perspectives from the audience. That's in the spirit of democracy to my understanding.”
The Community Notes algorithm is also public, which lends transparency to the program and gives users more reason to trust it. It also counters charges of bias: a note is displayed only when it is rated helpful by contributors with a wide variety of political viewpoints.
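To make that concrete, here is a minimal sketch of the bridging idea the public scorer is built around: each rating is modeled as a rater intercept plus a note intercept plus a viewpoint-alignment term, so a note’s intercept captures helpfulness that cannot be explained by shared politics. Everything below – the tiny optimizer, the toy data, the variable names – is illustrative, not the production code (which X has open-sourced at github.com/twitter/communitynotes).

```python
# Illustrative sketch only: the real Community Notes scorer is far more
# elaborate, but it rests on a matrix-factorization model in this spirit.
# A note is surfaced when its intercept -- the helpfulness left over
# after viewpoint alignment is accounted for -- clears a cutoff.
import numpy as np

rng = np.random.default_rng(0)

def note_intercepts(ratings, n_users, n_notes, epochs=2000, lr=0.05, reg=0.03):
    """ratings: iterable of (user_id, note_id, rating in {0.0, 1.0})."""
    mu = 0.0                              # global mean rating
    bu = np.zeros(n_users)                # rater leniency
    bn = np.zeros(n_notes)                # "bridged" helpfulness
    fu = rng.normal(0, 0.1, n_users)      # rater viewpoint (1-D here)
    fn = rng.normal(0, 0.1, n_notes)      # note viewpoint (1-D here)
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + bu[u] + bn[n] + fu[u] * fn[n])
            mu += lr * err
            bu[u] += lr * (err - reg * bu[u])
            bn[n] += lr * (err - reg * bn[n])
            fu[u], fn[n] = (fu[u] + lr * (err * fn[n] - reg * fu[u]),
                            fn[n] + lr * (err * fu[u] - reg * fn[n]))
    return bn

# Toy data: note 0 is rated helpful by all four raters; note 1 only by
# the two raters who share its viewpoint.
ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0), (3, 0, 1.0),
           (0, 1, 1.0), (1, 1, 1.0), (2, 1, 0.0), (3, 1, 0.0)]
print(note_intercepts(ratings, n_users=4, n_notes=2))
# The cross-viewpoint note (note 0) earns the higher intercept, so it,
# not the partisan note, is the one that would be displayed publicly.
```

The intercept-versus-factor split is exactly the bias defense described above: agreement that survives across political camps is what gets rewarded.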
The hardest part of the research was gathering data. X releases a public dataset of Community Notes every day, but there were far too many tweets and notes to monitor manually. At first Gao and his colleagues tried the platform's application programming interface (API), but the cost was prohibitive – $100 a month bought only 10,000 data requests, not nearly enough. Instead, they built their own system: they downloaded the public Community Notes dataset every day, then ran an automated day-to-day comparison to determine which tweets had been retracted. They repeated this every day between June 11 and August 2, 2024.
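The pipeline itself is simple enough to sketch. The snippet below diffs two consecutive daily downloads; the file names, the tab-separated layout, the `tweetId` column, and the idea that a disappearing entry marks a retraction candidate are all assumptions for illustration – the article doesn’t specify exactly what their tool compared.

```python
# Minimal sketch of the daily snapshot-and-diff approach described above.
# File names, the TSV layout, and the retraction test are assumptions; a
# real pipeline would also need to confirm that a flagged tweet was
# deleted by its author rather than removed for some other reason.
import csv
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")  # one saved copy of the dataset per day

def noted_tweet_ids(snapshot: Path) -> set[str]:
    """Collect the tweet IDs referenced by notes in one daily snapshot."""
    with snapshot.open(newline="", encoding="utf-8") as f:
        return {row["tweetId"] for row in csv.DictReader(f, delimiter="\t")}

def retraction_candidates(yesterday: Path, today: Path) -> set[str]:
    """Tweets present in yesterday's snapshot but missing from today's."""
    return noted_tweet_ids(yesterday) - noted_tweet_ids(today)

if __name__ == "__main__":
    flagged = retraction_candidates(
        SNAPSHOT_DIR / "notes-2024-06-11.tsv",
        SNAPSHOT_DIR / "notes-2024-06-12.tsv",
    )
    print(f"{len(flagged)} retraction candidates between the two snapshots")
```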
Once they’d assembled the dataset of 89,076 tweets, they used a regression discontinuity design and an instrumental variable analysis to examine whether a publicly displayed note under a tweet leads to a higher chance of voluntary tweet retraction.
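Regression discontinuity suits this setting because note display flips on sharply once a note’s helpfulness score crosses a cutoff, so tweets whose notes land just below and just above the line are otherwise comparable. The simulation below shows the mechanics on fabricated data; the 0.40 cutoff, the bandwidth, and every variable are stand-ins, not values from the paper.

```python
# Sharp regression discontinuity on simulated data, assuming that a note
# is publicly displayed once its helpfulness score crosses a fixed
# cutoff. All numbers here are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

n = 5000
score = rng.uniform(0.0, 0.8, n)            # running variable
CUTOFF = 0.40
displayed = (score >= CUTOFF).astype(float)  # treatment: note shown
# Built-in "true" effect: display raises retraction odds by 5 points.
p = 0.02 + 0.05 * displayed + 0.03 * score
retracted = rng.binomial(1, p)

# Local linear fit within a bandwidth of the cutoff, with separate
# slopes on each side; the jump at the cutoff is the treatment effect.
bw = 0.10
mask = np.abs(score - CUTOFF) <= bw
centered = score[mask] - CUTOFF
X = sm.add_constant(np.column_stack([
    displayed[mask],              # discontinuity (effect of display)
    centered,                     # slope below the cutoff
    displayed[mask] * centered,   # slope change above the cutoff
]))
fit = sm.OLS(retracted[mask], X).fit(cov_type="HC1")
print(f"estimated effect of note display: {fit.params[1]:.3f}")
```

In the paper this design is paired with an instrumental variable analysis; the sketch covers only the discontinuity half.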
The data showed that X users were more willing to retract their tweets in response to notes.
This finding is strikingly promising for social media platforms because voluntary retraction, in contrast to forcible content removal, may draw less criticism for infringing on freedom of speech, reduce polarization, and eventually “bring down the temperature,” as President Joe Biden recently put it. In other words, crowdchecking systems like Community Notes strike a balance between protecting First Amendment rights and the urgent need to curb misinformation.
But it took some further analysis to figure out why.
There are two forms of influence on social media, Gao explained: observed influence and presumed influence. Observed influence is determined by how many people actually interact with an individual post – for example, a tweet by someone with only a few followers that somehow goes viral. Presumed influence is how influential users think they are based on their follower counts: a tweet by a person with 100,000 followers may get only a few likes, but it will, presumably, be seen by a lot of people.
It turned out that observed influence drove retractions far more than presumed influence. This made Gao realize that to make Community Notes even more effective, X should notify not only the people who interacted with a noted tweet but also all the followers of its author, who may have seen and absorbed the misinformation without interacting with it.
In the future, Gao wants to continue investigating misinformation on social media, particularly the ways generative AI has been used to produce it (e.g., deepfakes). He and his coauthors are still waiting to see the reaction to the current paper, which he believes could have wider implications for platforms beyond X. At the moment, the only other social media platforms with crowdsourced fact-checking are YouTube and Weibo, the Chinese microblogging site.
“Based on our findings,” he said, “we can say this crowdchecking system is working pretty well. The government should consider legislation or provide support to help social media platforms build similar systems. Most importantly, the details of such systems should be transparent to the public, which is the key to fostering trust.”