Why AI isn’t going to solve Facebook’s fake news problem

Facebook has a lot of problems right now, but one that definitely won't go away any time soon is fake news. As the company's user base has grown to include more than a quarter of the world's population, it has (understandably) struggled to control what they all publish and share. For Facebook, unwanted content can be anything from mild nudity to serious violence, but what has proved most sensitive and damaging for the company is hoaxes and misinformation, especially when they have a political bent.
So what is Facebook going to do about it? At the moment, the company doesn't seem to have a clear strategy. Instead, it's throwing a lot at the wall and seeing what sticks. It has hired more human moderators (around 7,500 as of February this year); it gives users more on-site information about news sources; and in a recent interview, Mark Zuckerberg suggested the company might set up some sort of independent body to rule on what content is kosher. (Whether that counts as democratic, an abdication of responsibility, or an admission that Facebook is out of its depth depends on your point of view.) But one thing experts say Facebook needs to be extremely careful about is handing the whole job over to AI.
So far, the company only seems to be experimenting with that approach. In an interview with The New York Times about the Cambridge Analytica scandal, Zuckerberg revealed that for last year's Alabama special election, the company "deployed some new AI tools to identify fake accounts and false news." He specified that these were Macedonian accounts (the country has become a hub of the for-profit fake news business), and the company later clarified that it had deployed machine learning to find "suspicious behavior without assessing the content itself."
This is a smart move because, when it comes to fake news, AI just isn't up to the job.
AI can't understand fake news because AI can't understand writing
The challenges of building an automated fake news filter powered by artificial intelligence are numerous. From a technical perspective, AI falls short on a number of levels because it simply can't understand human writing the way humans do. It can extract certain facts and do crude sentiment analysis (guessing whether a piece of content is "happy" or "angry" based on keywords), but it can't grasp subtleties of tone, consider cultural context, or ring someone up to corroborate information. And even if it could do all that, which would weed out the most obvious misinformation and hoaxes, it would eventually run into edge cases that confuse even humans. If people on the left and the right can't agree on what is and isn't "fake news," there's no way we can teach a machine to make that judgment for us.
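
As a concrete illustration of how shallow that keyword approach is, here is a minimal sketch of crude, keyword-based sentiment analysis; the word lists and the sarcastic test sentence are made-up assumptions, not any real system, and sarcasm is exactly the sort of thing it gets wrong.

```python
# A crude keyword-based sentiment check: count "happy" vs. "angry" words.
# The word lists here are tiny, made-up examples; real systems use trained models.
HAPPY_WORDS = {"great", "love", "wonderful", "amazing", "happy"}
ANGRY_WORDS = {"terrible", "hate", "awful", "disgusting", "angry"}

def crude_sentiment(text: str) -> str:
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    happy = len(words & HAPPY_WORDS)
    angry = len(words & ANGRY_WORDS)
    if happy > angry:
        return "happy"
    if angry > happy:
        return "angry"
    return "neutral"

# Irony defeats keyword counting: this sarcastic sentence comes out "happy".
print(crude_sentiment("Oh great, another wonderful data breach. I just love it."))
```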
Past efforts to tackle fake news with AI have quickly run into problems, as with the Fake News Challenge, a competition to crowdsource machine learning solutions to the problem held last year. Dean Pomerleau of Carnegie Mellon University, who helped organize the challenge, tells The Verge that he and his team soon realized AI couldn't crack this on its own.
"In fact, we started with a more ambitious goal of creating a system that could answer the question" Are these false news, yes or no? "We quickly realized that machine learning was not the height of the task. "
Pomerleau stresses that comprehension was the main problem, and to understand exactly why language can be so nuanced, especially online, look no further than the example set by Tide Pods. As Cornell professor James Grimmelmann explained in a recent essay on fake news and platform moderation, the internet's embrace of irony has made it extremely difficult to judge sincerity and intent. Facebook and YouTube found this out when they tried to delete Tide Pod Challenge videos in January of this year.

A YouTube thumbnail for a video that might be endorsing the Tide Pod Challenge, or warning against it, or some combination of the two. Image: YouTube / Leonard

As Grimmelmann explains, the companies faced a dilemma when deciding which videos to remove. "It's easy to find videos of people holding up Tide Pods, remarking appreciatively on how tasty they look, and then delivering a finger-wagging speech about not eating them because they're dangerous," he says. "Are these sincere anti-pod-eating public service announcements, or are they riding the wave of interest in pod-eating while superficially claiming to denounce it? Both at once?"
Grimmelmann calls this effect "mimetic kayfabe," borrowing the professional wrestling term for the willing suspension of disbelief on the part of both the audience and the wrestlers. He also says this opacity of meaning isn't confined to meme culture, and has been adopted by political partisans, who are often responsible for creating and sharing fake news. Pizzagate is the perfect example, says Grimmelmann, as it is "both a sincerely held conspiracy theory, a trolling parody of a conspiracy theory, and a demeaning meme about conspiracy theories."
So if Facebook had chosen to block any Pizzagate stories during the 2016 election, it would likely have faced not only complaints of censorship, but also protests that such stories were "just a joke." Extremists frequently exploit this ambiguity, as was best shown by the leaked style guide of the neo-Nazi website The Daily Stormer. Founder Andrew Anglin advised would-be writers that "the unindoctrinated should not be able to tell if we are joking or not," before making it clear that they are not: "This is obviously a ploy and I actually do want to gas kikes. But that's neither here nor there."
Given this complexity, it's not surprising that Pomerleau's Fake News Challenge ended up asking teams to complete a simpler task: building an algorithm that can spot articles covering the same topic. That, it turned out, they were pretty good at.
With this tool, a human could flag a story as fake news (a false claim that a certain celebrity has died, for example), and the algorithm would then catch any coverage repeating the lie. "We talked to real-life fact-checkers and realized they were going to be in the loop for quite a while," says Pomerleau. "So the best thing we in the machine learning community could do is help them do their jobs."
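
To give a rough sense of how that kind of topic matching can work, here is a minimal sketch using off-the-shelf TF-IDF cosine similarity from scikit-learn; the headlines and the similarity threshold are assumptions for illustration, not the challenge's actual models or data.

```python
# Match new articles against a human-flagged false claim using TF-IDF
# cosine similarity. Headlines and the 0.3 threshold are made up for the demo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

flagged_claim = "Famous actor dies in car crash, reports say"
new_articles = [
    "Reports say the famous actor died in a car crash last night",
    "New phone launches with a bigger battery and better camera",
]

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([flagged_claim] + new_articles)

# Compare each new article to the human-flagged claim.
scores = cosine_similarity(vectors[0], vectors[1:]).flatten()
for article, score in zip(new_articles, scores):
    label = "same topic" if score > 0.3 else "unrelated"
    print(f"{score:.2f}  {label}: {article}")
```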
Even with human fact-checkers in tow, Facebook relies on algorithms
That seems to be Facebook's preferred approach. For this year's Italian elections, for example, the company hired independent fact-checkers to flag fake news and hoaxes. Problematic links weren't removed, but when a user shared one it was labeled as "Disputed by third-party fact-checkers." Unfortunately, even this approach has problems, with a recent report from the Columbia Journalism Review highlighting fact-checkers' many frustrations with Facebook. The journalists involved said it was often unclear why Facebook's algorithms sent them particular stories to check, while sites known for spreading lies and conspiracy theories (such as InfoWars) never seemed to get checked at all.
Still, there is definitely a role for algorithms in all this. While AI can't do the heavy lifting of killing off fake news, it can filter it the way a spam filter keeps junk out of your inbox. Anything riddled with bad spelling and grammar can be weeded out, for example, as can sites that rely on impersonating legitimate outlets to lure in readers. And as Facebook showed with its targeting of Macedonian accounts "that were trying to spread false news" during the Alabama special election, it can be relatively easy to spot fake news when it comes from known trouble spots.
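
To show what such spam-filter-style heuristics might look like in practice, here is a small sketch that flags domains imitating legitimate outlets using simple string similarity; the domain list, the lookalike example, and the threshold are hypothetical, and nothing here reflects Facebook's actual filters.

```python
# Flag domains that closely imitate well-known outlets using simple string
# similarity. The domain list, example lookalike, and 0.8 threshold are all
# hypothetical; a real filter would combine many more signals.
from difflib import SequenceMatcher

LEGITIMATE_DOMAINS = ["nytimes.com", "washingtonpost.com", "bbc.com"]

def looks_like_imitation(domain: str, threshold: float = 0.8) -> bool:
    """Return True if `domain` closely resembles, but is not, a known outlet."""
    for real in LEGITIMATE_DOMAINS:
        similarity = SequenceMatcher(None, domain, real).ratio()
        if domain != real and similarity >= threshold:
            return True
    return False

print(looks_like_imitation("nytirnes.com"))  # True: a lookalike of nytimes.com
print(looks_like_imitation("example.org"))   # False: resembles nothing on the list
```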
Experts say, though, that this is about the limit of AI's current capabilities. "This kind of whack-a-mole might help filter out get-rich-quick teens from Tbilisi, but it is unlikely to affect coherent but large-scale offenders like InfoWars," Mor Naaman, an associate professor of information science at Cornell Tech, tells The Verge. He adds that even these simpler filters can create problems. "Classification is often based on language patterns and other simple signals, which may 'trap' honest independent and local publishers together with producers of fake news and misinformation," says Naaman.
And even here, there's a potential dilemma for Facebook. To avoid accusations of censorship, the social network needs to be open about the criteria its algorithms use to spot fake news; but if it is too open, people can game the system and work around its filters.
For Amanda Levendowski, a law professor at NYU, this is an example of what she calls the "Valley Fallacy." Speaking to The Verge about Facebook's AI moderation, she suggests it's a common mistake, "where companies start saying: 'We have a problem, we must do something, this is something, therefore we must do this,' without carefully considering whether it might create new or different problems." Levendowski adds that despite these problems, there are plenty of reasons tech firms will keep pursuing AI moderation, ranging from "improving user experiences" to "mitigating the risks of legal liability."
These are no doubt tempting prospects for Zuckerberg, but even so, leaning too heavily on AI to solve its moderation problems looks unwise. And it's not something he'd want to have to explain to Congress next week.
