On Twitter, people spread lies more readily than they retweet the truth

March 8, 2018

Fake news is 70 percent more likely to be retweeted on Twitter than real news, according to new research, and bots may not be the culprit.
In a paper published today in the journal Science, researchers analyzed the spread of every story verified as true or false by six fact-checking organizations from 2006 to 2017. The analysis shows that false political news spread more quickly than any other kind, such as news about natural disasters or terrorism, and, predictably, spiked during events like the 2012 and 2016 US presidential elections. (The researchers deliberately use the term "false news" because "fake news" has become too politicized, they write.) Although the Twitter accounts that broadcast false stories tended to have fewer followers and tweet less than those sharing real news, false news still spread quickly because it seemed novel, the study says.
It's the humans who are responsible.
First, the researchers went to six fact-checking organizations and pulled every story those organizations had verified as true or false. (The six orgs were Snopes, PolitiFact, FactCheck.org, Truth or Fiction, Hoax Slayer, and Urban Legends.) Then the researchers, who had access to the entire Twitter archive, searched for mentions of these stories on the platform. Each time they found a mention, they determined whether it was an original tweet or a reply to or retweet of a different tweet. That way, they could trace each story back to its origin and then track how the information spread through Twitter. Ultimately, their data set included some 126,000 stories tweeted by 3 million people more than 4.5 million times.
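The tracing step described here — linking each retweet or reply back to an original tweet and measuring how far a story travels — can be sketched roughly as follows. This is a minimal, hypothetical reconstruction, not the study's actual code: the tweet records, IDs, and field names are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical tweet records: (tweet_id, parent_id, story_id).
# parent_id is None for an original tweet, otherwise the tweet
# it retweeted or replied to.
tweets = [
    ("t1", None, "story_a"),
    ("t2", "t1", "story_a"),
    ("t3", "t1", "story_a"),
    ("t4", "t3", "story_a"),
    ("t5", None, "story_b"),
]

def build_cascades(tweets):
    """Group tweets into cascades rooted at original tweets,
    measuring each cascade's size (tweets) and depth (hops)."""
    children = defaultdict(list)
    roots = []
    for tid, parent, story in tweets:
        if parent is None:
            roots.append((tid, story))
        else:
            children[parent].append(tid)

    cascades = {}
    for root, story in roots:
        # Walk outward from the root, counting tweets and
        # tracking the longest chain of retweets/replies.
        size, depth = 0, 0
        frontier = [(root, 0)]
        while frontier:
            tid, d = frontier.pop()
            size += 1
            depth = max(depth, d)
            frontier.extend((c, d + 1) for c in children[tid])
        cascades[root] = {"story": story, "size": size, "depth": depth}
    return cascades

print(build_cascades(tweets))
```

With this kind of structure in hand, comparing true and false stories reduces to comparing the size and depth distributions of their cascades.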
Their analysis shows that true news rarely reached more than 1,000 people, but the top 1 percent of false news cascades could reach 100,000. That wasn't because the accounts tweeting false news were particularly influential, but because we are more likely to share news that seems interesting and new. "Novel information is thought to be more valuable than redundant information," says study co-author Sinan Aral, a management professor at the Massachusetts Institute of Technology. "People who spread novel information gain social status because they're thought to be 'in the know' or to have inside information."
To test this hypothesis, Aral's team analyzed the emotional content of these stories and people's responses to them. As expected, false news was rated as more surprising and provoked more disgust. Finally, the scientists ran a bot-detection algorithm and found that bots spread false news and real news at the same rate. "So bots could not explain this huge difference in the diffusion of true and false news that we're finding in our data," Aral says. "It's the humans who are responsible."
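The bot comparison described here amounts to asking whether automated accounts amplify false and true stories at different average rates. A toy sketch of that comparison, with invented numbers chosen only to mirror the reported pattern (bots roughly even across both kinds of news, humans favoring false news), not drawn from the study's data:

```python
# Hypothetical records: (is_false_news, posted_by_bot, retweet_count).
observations = [
    (True, True, 40), (True, False, 300),
    (False, True, 38), (False, False, 90),
]

def mean_retweets(obs, by_bot):
    """Average retweet counts of false vs. true news for one
    account type (bot or human)."""
    rates = {}
    for label in (True, False):
        counts = [rt for is_false, bot, rt in obs
                  if bot == by_bot and is_false == label]
        rates["false" if label else "true"] = sum(counts) / len(counts)
    return rates

print("bots:  ", mean_retweets(observations, by_bot=True))
print("humans:", mean_retweets(observations, by_bot=False))
```

If bot-driven retweet rates come out similar for false and true news while human-driven rates diverge sharply, the gap in spread can't be attributed to the bots — which is the shape of the finding the researchers report.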
Of course, it can be very hard to tell whether a bot is really a bot, says Joan Donovan, a sociologist who studies media manipulation at the Data & Society Research Institute and who was not involved in the study. You need a very high standard of proof to really know there is no human behind an account. Still, she adds, the paper makes it clear that we need to take content moderation very seriously. "Even if bots aren't the problem and it's people and networks that move information, this paper gives us better leverage and a better understanding of what we should be evaluating," Donovan says.
Next, Aral and his team want to better understand the spread of false news and look for possible solutions. He suggests labeling news sources based on how factual they are (similar to the startup NewsGuard's ambitions), or that companies like Twitter and Facebook look more closely at how they can build their platforms or algorithms to discourage the spread of false news. (Facebook is already trying to do this.)
It would also be worthwhile, says Donovan, to learn more about how false news propagates over time. "If you take any individual rumor on a platform, we know that there are windows of disinformation that occur within the first 24 hours," she says. "So one of the things I would like to know is, over time, which parts of the rumors continue to persist, and which groups are likely to cling to the misinformation that becomes a conspiracy theory."

