In recent news, two of the biggest names in tech have been facing off in a showdown over the perils of artificial intelligence (AI). In one corner, CEO of SpaceX and Tesla, Elon Musk; in the other, Facebook CEO, Mark Zuckerberg. For years Musk has warned that “AI is a fundamental risk to the existence of human civilisation”, while Zuckerberg says he is “optimistic” about the possibilities AI offers. So who is right about the perils of AI?
It’s an issue worth pondering. AI is already an integral part of our everyday lives. It’s giving us information and helping us with our daily tasks on our phones and computers, in the form of Apple’s Siri and Microsoft’s Cortana; it’s translating languages on Google Translate; and it’s now officially better at the complex game Go than everyone else on the planet. And it’s only going to become more ubiquitous. AI is, for example, an essential component of autonomous cars, which are set to take over our roads in the near future.
But while these are positive examples of AI enhancing our lives, there are also examples of AI that make one pause, such as when Microsoft’s AI chatbot, Tay, resorted to spouting racist, misogynistic and anti-Semitic slurs after only 24 hours of interacting with humans on Twitter. In response to one of Tay’s tweets, Musk replied, “Will be interesting to see what the mean time to Hitler is for these bots. Only took Microsoft’s Tay a day.”
So the question is: could AI eventually take over the world?
Elon Musk on AI
According to Musk, people just don’t appreciate how quickly AI technology is currently developing.
At the International Space Station (ISS) R&D conference in Washington DC on 19 July, Musk said this during a Q&A session: “I think it is difficult to appreciate just how far artificial intelligence has advanced and how fast it is advancing because we have a double exponential at work: we have exponential increase in hardware capability, and we have an exponential increase in software talent that is going into AI. So when you have a double exponential, it’s very difficult to predict. Real predictions are almost always going to be too conservative.”
His fear is not robots rising up and taking over the world; what is more likely, he thinks, is in fact much more sinister.
In an interview with CNN, Musk said: “It would be fairly obvious if you saw a robot walking around talking and behaving like a person … What’s not obvious is a huge server bank in a dark vault somewhere with intelligence that’s potentially vastly greater than what a human mind can do. I mean, its eyes and ears would be everywhere, [in] every camera, every microphone, every device that’s network accessible. That’s really what AI means.”
He went on to say: “If we’re not careful about the advent of AI, it’s possible that there could be what’s called a ‘bad utility function’ … Humanity’s position on this planet is dependent on its intelligence, so if our intelligence is exceeded, it’s unlikely we will remain in charge of the planet.”
Mark Zuckerberg on AI
Zuckerberg’s take on the matter is decidedly sunnier.
In a Facebook Live broadcast on 20 July, he said: “If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick. I just don’t see how, in good conscience, some people can do that. I’m just much more optimistic on this, in general, than probably a lot of folks are.”
It’s no secret that Facebook are investing heavily in AI, and that Zuckerberg has ambitions for Facebook that are reliant on the technology. In December 2016, Zuckerberg previewed an AI assistant he had built for his home. In the video, the assistant, Jarvis, voiced by none other than Morgan Freeman, shot t-shirts out of a cannon, spoke Mandarin with Zuckerberg’s daughter, and used facial recognition software to see that Zuckerberg’s parents were at the door.
It was also reported just this week that Facebook’s translations, of which it makes about 4.5 billion a day, are now completely powered by AI. By using neural networks built on a machine-learning component known as a long short-term memory (LSTM) network, the new system can now, according to a company blog post, “account for context, slang, typos, abbreviations and intent simultaneously”.
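The core idea of an LSTM is a cell state that carries information across a sequence, with learned “gates” deciding what to remember and forget. The sketch below is a minimal, illustrative single LSTM step in NumPy; the weights are random and it bears no relation to Facebook’s actual production system.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: four gates decide what to write to, keep in,
    and read from the cell state c (the 'long short-term memory')."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b       # stacked gate pre-activations, shape (4H,)
    i = sigmoid(z[0:H])              # input gate: how much new info to write
    f = sigmoid(z[H:2 * H])          # forget gate: how much old memory to keep
    o = sigmoid(z[2 * H:3 * H])      # output gate: how much memory to expose
    g = np.tanh(z[3 * H:4 * H])      # candidate values to write
    c = f * c_prev + i * g           # updated cell state
    h = o * np.tanh(c)               # updated hidden state, fed to the next step
    return h, c

# Run a toy sequence of 5 input vectors through the cell.
rng = np.random.default_rng(0)
D, H = 3, 4                          # input size, hidden size
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):
    h, c = lstm_step(x, h, c, W, U, b)
```

Because the cell state is updated additively rather than overwritten, information from early in a sentence can survive to influence words translated much later, which is what lets the system account for context.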
And just last week, Facebook were slammed in the press for an AI experiment involving chatbots that had gone slightly awry. Built by Facebook Artificial Intelligence Research, the bots were designed to learn how to barter and trade by mimicking humans. But when the bots were paired against each other, they began to diverge from English into what seemed to be their own language. The headlines were sensationalist; one article read, “Facebook engineers panic, pull plug on AI after bots develop their own language”.
One of the researchers, Dhruv Batra, spoke out about the reporting, saying, “While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established sub-field of AI … Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximise reward. Analysing the reward function and changing the parameters of an experiment is NOT the same as ‘unplugging’ or ‘shutting down AI’. If that were the case, every AI researcher has been ‘shutting down AI’ every time they kill a job on a machine.”
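Batra’s point that agents “will often find unintuitive ways to maximise reward” can be seen in a toy, entirely hypothetical optimiser (not Facebook’s actual experiment): if a negotiation message is scored only by how often it names the item the agent wants, and nothing rewards staying in English, a simple hill-climber drifts into degenerate repetition.

```python
import random

def reward(message):
    # Toy reward: the message "scores" only by how often it names the
    # desired item; nothing rewards grammatical English.
    return message.count("ball")

def mutate(message, vocab, rng):
    # Swap one randomly chosen word for a random vocabulary word.
    words = message.split()
    words[rng.randrange(len(words))] = rng.choice(vocab)
    return " ".join(words)

rng = random.Random(42)
vocab = ["i", "want", "the", "ball", "to", "me", "you"]
message = "i want the ball"
for _ in range(300):
    candidate = mutate(message, vocab, rng)
    if reward(candidate) >= reward(message):  # accept anything not worse
        message = candidate
# The message drifts toward degenerate strings like "ball ball ball ball":
# a perfectly rational response to this reward, and nothing like English.
```

Nothing sinister is happening here; the optimiser is simply exploiting a reward function that never asked for human-readable output, which is why the researchers adjusted the experiment rather than “pulling the plug”.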
In that same Facebook Live broadcast, Zuckerberg said, “I think that people who are naysayers and kind of try to drum up these doomsday scenarios – I just, I don't understand it. I think it's really negative and in some ways I actually think it is pretty irresponsible.”
To which Musk retorted on Twitter on 25 July: “I've talked to Mark about this. His understanding of the subject is limited.”
Is Musk’s fear genuine?
The question of whether Musk is genuinely fearful about AI bringing about the demise of the human race is a debatable one. After all, he, too, has fingers in the AI pie.
In March, Musk launched Neuralink, a start-up with an incredibly ambitious goal: to merge AI with the human brain. In theory, this technology would connect our brains to the cloud, allowing us to transmit thoughts and ideas instantly, and in effect drastically altering the very human experience.
Musk justified the need for this technology by saying that his warnings about AI weren’t being heeded, so he wanted to harness AI in a way that would be of benefit to humanity. In other words, if you can’t beat them, join them.
In an interview with waitbutwhy.com, Musk said, “We’re going to have the choice of either being left behind and being effectively useless or like a pet – you know, like a house cat or something – or eventually figuring out some way to be symbiotic and merge with AI.”
It could be argued, therefore, that it is in Musk’s interests to cultivate a fear of AI, as the founder of the company that will ostensibly hold the solution to this “crisis”.
Others, however, say he is for real. Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute, said in an interview with Vanity Fair: “He’s Elon-freaking-Musk. He doesn’t need to touch the third rail of the artificial-intelligence controversy if he wants to be sexy. He can just talk about Mars colonisation.”
Opinions remain heavily divided. Heavyweights Bill Gates and Stephen Hawking have both sided with Musk on this issue. Gates has compared AI to a nuclear catastrophe, while Hawking once said to the BBC: “I think the development of full artificial intelligence could spell the end of the human race.”
AI scientists, however, have called Musk needlessly alarmist.
Subbarao Kambhampati, a professor of computer science at Arizona State University, said to Inverse, “While there needs to be an open discussion about the societal impacts of AI technology, much of Mr. Musk’s oft-repeated concerns seem to focus on the rather far-fetched, super-intelligence take-over scenarios … Mr. Musk’s megaphone seems to be rather unnecessarily distorting the public debate, and that is quite unfortunate.”
At the ISS R&D conference, Musk argued that a government regulatory body is needed to help monitor and police AI, in much the same way the FDA regulates the food industry and the FAA regulates the aviation industry. But it is unclear at this stage how much traction this idea will get, particularly since Musk left Trump’s advisory council in June this year, following Trump’s decision to withdraw the United States from the Paris climate accord.
So will AI bring about our eventual demise? The question remains open. What is certain, however, is that AI is not going away anytime soon – and perhaps it would be wise to proceed with caution.
Are you working on an artificial intelligence project that you’d like to share at the next CeBIT? Register your interest for CeBIT 2018 today!