In the course of roughly 10 hours of hearings spread over two days, Mark Zuckerberg dodged question after question by citing the power of artificial intelligence.
Moderating hate speech? AI will fix it. Terrorist content and recruitment? AI again. Fake accounts? AI. Russian misinformation? AI. Racially discriminatory ads? AI. Security? AI.
It's not even clear what Zuckerberg means by "AI" here. He repeatedly mentioned how Facebook's detection systems automatically take down 99 percent of "terrorist content" before any kind of flagging. In 2017, Facebook announced it was "experimenting" with AI to detect language that "might be advocating for terrorism," presumably a deep learning technique. It's not clear whether deep learning is actually part of Facebook's automated system. (We emailed Facebook for clarification and have yet to receive a response.) But we do know that AI is still in its infancy when it comes to understanding language. As James Vincent of The Verge concluded in his reporting, AI doesn't measure up to the nuances of human language, and that's without even getting into the edge cases where even humans disagree. In fact, it's possible AI will never be able to handle certain categories of content, like fake news.
The invocation of AI is a dodge
Beyond that, the types of content Zuckerberg focused on were images and videos. From what we know about Facebook's automated system, at its core it is a lookup against a shared database of hashes. If you upload a beheading video that has previously been identified as terrorist content and entered into the database, whether by Facebook or one of its partners, it will be automatically recognized and removed. "It's hard to differentiate between that and the early days of the Google search engine, from a technology perspective," says Ryan Calo, a law professor and director of the Tech Policy Lab at the University of Washington. "If that was AI, then this is AI."
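At its simplest, that kind of hash-database lookup can be sketched in a few lines. This is a minimal illustration, not Facebook's actual system: the industry's shared databases use perceptual hashes (such as PhotoDNA) that also match near-duplicates, whereas a plain cryptographic hash like the SHA-256 used here only catches byte-for-byte copies. The file contents and database entries below are made up.

```python
import hashlib

# Hypothetical shared database: hashes of files previously flagged as
# terrorist content by the platform or one of its partners.
KNOWN_FLAGGED_HASHES = {
    hashlib.sha256(b"<bytes of a previously flagged video>").hexdigest(),
}

def should_remove(uploaded_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a previously flagged file."""
    digest = hashlib.sha256(uploaded_bytes).hexdigest()
    return digest in KNOWN_FLAGGED_HASHES

# An exact re-upload is caught; anything not already in the database is not.
print(should_remove(b"<bytes of a previously flagged video>"))  # True
print(should_remove(b"some unrelated home video"))              # False
```

The point of the sketch is Calo's: this is a lookup, closer in spirit to early search indexing than to anything a layperson pictures when they hear "artificial intelligence."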
That's the beauty of AI as an excuse: artificial intelligence is a capacious umbrella term that can cover automation of all varieties, machine learning, or, more specifically, deep learning. It's not necessarily wrong to call Facebook's automated removal system AI. But you know that if you say "artificial intelligence" in front of a body of lawmakers, they'll start imagining AlphaGo, or maybe something more fantastical: Skynet, or C-3PO taking down terrorist beheading videos before anyone sees them. None of them is imagining Google search.
The invocation of AI is a maneuver deployed before a lay audience that, for the most part and unfortunately, swallowed it. The one exception may have been Senator Gary Peters (D-MI), who followed up with a question about AI transparency: "But you also know that artificial intelligence is not without its risk, and that you have to be very transparent about how those algorithms are constructed." Zuckerberg's response was to acknowledge that it was a "really important" question and that Facebook had a whole AI ethics team working on the issue.
"I don't think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems where people don't understand how they're making decisions," Zuckerberg said.
Zuckerberg said over and over again in the hearings that in five to 10 years, he was confident Facebook would have sophisticated AI systems up to the challenge of dealing with even linguistic nuance. Give us five to 10 years, and we'll have all of this solved.
Artificial intelligence can't solve the problem of not knowing what the hell you're doing
But the point is not just that Facebook has failed to scale content moderation. It has failed to detect entire categories of misbehavior it should have accounted for: intentional misinformation campaigns run by nation-states, the spread of fake news (whether by nation-states or mere profiteers), and data leaks like the Cambridge Analytica scandal. It has not been transparent about its moderation decisions even when those decisions are driven by human intelligence. It has failed to reckon with its growing importance in the media ecosystem, failed to safeguard user privacy, failed to anticipate its role in the genocide in Myanmar, and may even have failed to safeguard American democracy.
Artificial intelligence can't solve the problem of not knowing what the hell you're doing and not really caring one way or the other. It is not a solution for lack of vision and lack of transparency. It's an excuse that deflects from the question at hand: whether and how to regulate Facebook.
In fact, advances in artificial intelligence suggest that the law itself should change to keep pace, not that they justify a hands-off approach.
Artificial intelligence is just a new tool, one that can be used for good and for bad purposes, and one that brings new dangers and downsides with it as well. We already know that although machine learning has huge potential, data sets with ingrained biases will produce biased results: garbage in, garbage out. Software used to predict recidivism in defendants produces racially biased results, and more sophisticated AI techniques will only make those kinds of decisions more opaque. That kind of opacity is a big problem when machine learning is deployed with the purest of intentions. It's an even bigger problem when machine learning is deployed to better target consumers with ads, a practice that, even without machine learning, allowed Target to figure out a teenager was pregnant before her parents knew.
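"Garbage in, garbage out" is easy to demonstrate. The toy model below is a deliberate caricature (the zip codes, numbers, and the risk-score-as-base-rate "model" are all invented for illustration), but it captures the mechanism: if the training labels reflect biased enforcement, say, one neighborhood being policed more heavily than another, the model faithfully learns and reproduces that bias as a "risk score."

```python
from collections import defaultdict

# Hypothetical training data: (zip_code, was_rearrested) pairs.
# The labels carry historical bias: residents of 10001 were policed more
# heavily, so more of them were re-arrested regardless of behavior.
train = ([("10001", 1)] * 70 + [("10001", 0)] * 30 +
         [("20002", 1)] * 30 + [("20002", 0)] * 70)

def fit(rows):
    """A toy 'risk model': predicted risk = observed re-arrest rate per zip."""
    counts = defaultdict(lambda: [0, 0])  # zip -> [rearrests, total]
    for zip_code, label in rows:
        counts[zip_code][0] += label
        counts[zip_code][1] += 1
    return {z: pos / total for z, (pos, total) in counts.items()}

model = fit(train)
# The model simply echoes the bias baked into the labels: the over-policed
# zip code gets more than double the predicted risk.
print(model["10001"])  # 0.7
print(model["20002"])  # 0.3
```

A real recidivism model has far more features and a far more complex fit, which is exactly the opacity problem: the same label bias flows through, but it can no longer be read off in two lines.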
"Either it's all hype and we shouldn't overreact, or it represents a legitimate sea change."
"At the same time, the claim is that AI changes everything, it changes the way we do everything, it's a game changer, but nothing should change," says Ryan Calo. "One of these things can't be right. Either it's all hype and we shouldn't overreact, or it represents a legitimate sea change. It's really disingenuous to argue that the reason we should stay out of AI's way is that it's so transformative."
If it still wasn't clear that "wait and see what technological wonders come along" is just a pose, Facebook's approach to privacy makes it obvious the company is more than willing to stall forever. At one point in Wednesday's hearing before the House Energy and Commerce Committee, Zuckerberg said in response to a question about privacy: "I think we'll figure out what the social norms and the rules that we want to put in place are. Then, five years from now, we'll come back and we will have learned more things. And either that'll just be because social norms have evolved and the company's practices have evolved, or we'll put rules in place."
Five years? We'll wait five years to figure out user privacy? It's been 14 years since Facebook was founded. There are people of voting age who don't remember a time before Facebook. Facebook was criticized for a privacy blunder as early as 2006, when it launched News Feed without telling users how it would look or how their privacy settings would affect what their friends saw. In 2007, it launched Beacon, which injected information about users' purchases into News Feed, a decision that resulted in a class action lawsuit settled for $9.5 million. The FTC placed Facebook under a consent decree in 2011 over its privacy failures, a consent decree it may now be in violation of because of the Cambridge Analytica scandal.
Mark Zuckerberg is simply setting himself up to stumble from one ethical quagmire to the next
In citing the AI excuse, Mark Zuckerberg is simply setting himself up to stumble from one ethical quagmire to the next. He didn't know what he was doing when he created Facebook, and to be fair, nobody did. When Facebook launched, it plunged headlong into a brave new world. No one knew that the cost of connecting people around the globe in exchange for ad revenue was going to be Cambridge Analytica.
But the clues were there all along: privacy advocates warned repeatedly against aggressive and indiscriminate data collection, others raised alarms about creepy ad targeting, and experts expressed concern about the effects of social media on elections.
Give Facebook five to 10 years to fix its problems, and in five to 10 years, Mark Zuckerberg will once again be testifying before Congress about the unintended consequences of its use of artificial intelligence.