31 Oct

Is Elon Musk right about artificial intelligence regulations?

It seems Elon Musk is at it again. At a National Governors Association meeting on 15 July, he told the audience, “I have exposure to the very cutting-edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

According to Musk, the solution is artificial intelligence regulation: “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

Researchers, however, remain sceptical about whether regulations are indeed necessary, arguing that they would merely stifle innovation.

So could Elon Musk be wrong about AI regulations?

Here are three reasons why some researchers claim we don’t need AI regulations.

1. “We would hinder advances that could have hugely positive impacts on society”

While AI could potentially be put to nefarious uses, it could also be used for the benefit of mankind. For example, it could be highly beneficial in the realm of food production, an ever-greater challenge as the world’s population grows.

Enforcing regulations could have the effect of stifling progress in such areas, and could mean we won’t get the benefit of such advances until much later, or perhaps even at all.

There’s also no historical precedent supporting the notion that AI will destroy human civilisation as we know it. Other revolutionary technologies, from the internet to atomic energy, have had doubts cast upon them, and have even brought about some highly negative consequences, but never has the human race been thrown into chaos in the way Musk suggests might happen with AI. As Barack Obama said to Wired on the topic of AI: “I tend to be on the optimistic side—historically we’ve absorbed new technologies, and people find that new jobs are created, they migrate, and our standards of living generally go up.”

2. “We would be less globally competitive as a nation”

At the moment, Australia is lagging well behind China and the US when it comes to the AI race – and this gap will only become wider if regulations hamper the ability of scientists and researchers to keep up. This could have negative impacts on our ability to compete in a global market.

[Chart: AI research papers by country. Source: MIT Technology Review]

There are also other fears when it comes to falling behind in the AI race. As Tristan Greene of The Next Web put it, “If the Trump administration sees fit to place restrictions on AI development that hamper Silicon Valley’s ability to compete with Beijing, it’ll lose more than just market shares. It could lose military superiority over countries like China and Russia.”

To avoid global disparities in AI, international regulations would need to be imposed. But is it reasonable to believe all nations would adhere to these, when so much is at stake?

3. “Governments simply don’t understand AI well enough to know how to regulate it”

Some argue that the technology is at such an early stage that it is impossible to know for sure what the AI landscape will even look like in several years, let alone how to regulate it.

The field is also so complex that some researchers feel policymakers simply lack a fundamental understanding about how AI works, which makes writing effective policy a highly difficult task.

Elon Musk himself has acknowledged that governments need to better educate themselves about AI. Speaking to Fortune, he said:

“It’s just something that I think, anything that represents—that is a risk to the public deserves at least insight from the government because one of the mandates of the government is the public well-being. And that insight is different from oversight, so at least the government should have insight to understand what’s going on, and then decide what rules are appropriate to ensure public safety.

“That is what I’m advocating for. I’m not advocating for that we stop the development of AI, or any of the sort of straw man, hyperbole things that have been written. I do think there are great benefits to AI. We just need to make sure that they’re indeed benefits, and we don’t do something really dumb.”

Some researchers think AI itself might hold the key to appropriate regulation. As Oren Etzioni, University of Washington computer science professor and CEO of the Allen Institute for Artificial Intelligence, said, “Instead of creating a new regulatory body, we need to better educate and inform people on what AI can and cannot do. We need research on how to build ‘AI guardians’—AI systems that monitor and analyze other AI systems to help ensure they obey our laws and values. The world needs AI for its benefits; AI needs regulation like the Pacific Ocean needs global warming.”

Regulation is needed – perhaps sooner than we think

Some think Musk’s vision of “killer robots” is rather alarmist, given that most researchers believe it will take at least 50 years before machines reach the level of super-intelligence Musk invokes in his apocalyptic visions.

But that doesn’t mean AI can’t still have a detrimental effect on society – and, in fact, it may already be happening.

As Toby Walsh, professor of artificial intelligence at the University of New South Wales and author of It’s Alive!: Artificial Intelligence from the Logic Piano to Killer Robots, said, “Elon is right about one thing: We do need government to start regulating AI now. However, it is the stupid AI we have today that we need to start regulating. The biased algorithms. The arms race to develop ‘killer robots’, where stupid AI will be given the ability to make life-or-death decisions. The threat to our privacy as the tech companies get hold of all our personal and medical data. And the distortion of political debate that the internet is enabling.”

But governments move slowly – and Musk is likely well aware of this. As Carlos E. Perez, author of the Deep Learning AI Playbook, wrote in an article:

“Musk is proactively kickstarting the conversation about government regulation with the calculation that when government eventually becomes ready, AI technology will have advanced enough to allow for meaningful regulation. It indeed is placing the cart before the horse.

“Musk’s gamble here is that the negative effects of premature regulation outweighs an existential threat. Musk calculates that it is better to be early but wrong than to be late and correct.”

If harnessed properly, AI – and other smart technologies – can be used to improve the lives of citizens. To learn more about how technology has transformed communities, check out our free ebook: Smart technology, happy citizens: how governments can foster social inclusion. Download it now.
