This Elon Musk-approved AI documentary is mostly bluster and scare tactics


Do You Trust This Computer? is not a particularly subtle watch. The documentary, by filmmaker Chris Paine, is dedicated to the dangers of artificial intelligence, and although it did not cause a sensation in theaters, it was enthusiastically promoted by Elon Musk, who tweeted about the movie and paid for it to stream for free in early April. (Musk also appears in the documentary as a talking head.) It opens by bombarding viewers with quotes and shots of phones and brains. "We have a network of intelligence that observes us, that knows everything about us," says one. "Change is coming and no one can stop it," says another. It feels more like a trailer for a bad science fiction movie than a documentary about artificial intelligence.
That's a pity, since the field of artificial intelligence desperately needs nuanced public discussion. Instead, Do You Trust This Computer? takes viewers on a guided tour of various AI-related topics, including job automation, autonomous weaponry, and driverless cars, all illustrated with CGI robots and quotes from respected researchers. Paine, who previously directed the 2006 documentary Who Killed the Electric Car?, is trying to give an ambitious overview of the threat and potential of artificial intelligence, but it succeeds in the same way satellite imagery provides a "good overview" of where you left your car keys. There simply isn't enough detail to be useful.

Take, for example, the film's discussion of superintelligence, the theory that animates many AI apocalypse scenarios. The idea is that once we build a computer more intelligent than humans, its intelligence will grow exponentially and it will become a serious threat to humanity. If we don't program the AI with the proper morality, the theory goes, it will eventually wipe us out through malice, carelessness, or simple indifference. Musk, who is also the film's leading voice on superintelligence, warns that such a system would become "an immortal dictator from which we could never escape."
This can be tremendously exciting, like any fear-driven action movie premise, but it's also an incomplete and misleading summary of what the AI community believes about the topic. Yes, many experts acknowledge the possible threat of superintelligence, but they're quick to add that the technology we have now isn't capable of creating conscious machines, and that AI may pose many more pressing threats to society, such as algorithmic surveillance.
Superintelligence should not be ruled out, but neither should it overshadow other concerns
And yes, people like Musk argue that the threat of superintelligence still deserves outsized attention because it is existential (that is, it has the potential to annihilate humanity). But this kind of calculation is useful mainly in an academic setting, where superintelligence research has stimulated a lot of useful work in AI safety. In the media, where attention is scarce and fleeting, scare tactics distort the debate and flatten the many nuances of the superintelligence discussion.
It's like ending a documentary about urban violence by saying: "Forget the muggings, your neighbor could be building a nuclear bomb in his garage right now!" That may be technically true, but it's not particularly useful. It's no wonder AI scientists on Twitter have been less than flattering about Do You Trust This Computer?, describing it as "gratuitous fearmongering" and "a very good comedy."
It's also worth noting that Do You Trust This Computer? suggests the solution is augmenting humans with AI so that it doesn't "leave us behind." Musk himself has founded a company based on this very premise. And while there's nothing wrong with Musk promoting something that supports his theories and financial interests, it does suggest his enthusiasm is not exactly impartial.

Do You Trust This Computer? uses plenty of impressive graphics, but it lacks important details. Image: Papercut Films

Do You Trust This Computer? does spend time on important matters. It discusses the possibility that job automation will generate greater inequality, and it alludes to the vast amounts of data that companies like Facebook and Google collect about us. (Although this has very little to do with artificial intelligence and everything to do with monopolies in the tech industry.) There's also a particularly interesting section on autonomous weaponry, which makes the depressing but often overlooked point that despite our discomfort with machines making decisions on the battlefield, the conveniences of war will likely override ethical objections. In the documentary, political scientist P.W. Singer points out that unrestricted submarine warfare against freighters and tankers was considered beyond the pale at the beginning of the 20th century, but had become normalized by the Second World War. We're in the midst of a similar transition with the ethics of drone combat, and autonomous weapons may follow the same path.
These sections suffer from the same shortcomings as the rest of the film: they're too brief and too sensational. But the frustrating thing is what has been omitted entirely. An incredible amount of important work is being done in AI right now exploring the ethical implications of integrating machine learning systems into society. These are pressing issues, such as how biased datasets affect the decision-making algorithms used in criminal sentencing and hiring. And although they're complex, they're not hard to communicate. When an algorithm developed by Google to filter online comments gives the statement "I'm a gay black woman" a toxicity score of 87 percent, even the most timid documentarian should be able to convey why misapplied AI is worrying.
The film ignores too many incredibly important concerns
It's also notable that while much of this important work is being done by women, people like Kate Crawford of AI Now and Joy Buolamwini of the Algorithmic Justice League, the list of talking heads in Do You Trust This Computer? is overwhelmingly male. Of the 26 experts featured in the film, 23 are men. As Microsoft AI researcher Timnit Gebru pointed out on Twitter, this is really a "difficult feat to achieve," given the gender diversity in the field.
The film does nothing to dispel the impression that AI is the domain of important men discussing important ideas, with its persistent use of female members of the public to illustrate general naivety about computers. Throughout the documentary, women and children are interviewed on the street, and their relaxed, informal comments, like "Oh my God, I trust my computer so much," are constantly contrasted with the assured expertise of the male experts. This (presumably unintentional) motif says more about society's prejudices around AI than the documentary itself manages to.
Do You Trust This Computer? is defensible in some ways. It's interesting, imaginative, and easy to watch, and it draws attention to a topic that will have real and important effects on our lives. But it sacrifices too much complexity and detail to achieve this, and it ends up more misleading than informative.
Paine anticipates this criticism. His dramatic opening sequence features a clip from Terminator 2, with a robot treading on a human skull. And then Westworld co-creator Jonathan Nolan says that the media and Hollywood have "cried wolf" often enough to inoculate the public against the fear of AI. This time is different, Nolan says: the fear is real and present. Then the movie begins, and the crying starts. "Wolf, wolf, wolf!"

