The 2015 science-fiction thriller Ex Machina explores the dark relationship between an AI creation, her maker and a programmer unwittingly brought in as a test subject in the maker’s experiment.
The maker ultimately wants to establish whether his creation has developed to the point where it can evoke empathy from the programmer. Unfortunately, the experiment works too well. Not only does the programmer become attached to the AI unit, but she also proves to be smarter than both men, playing them off against each other and killing them. It’s a grim and literal variation on the ‘machines will be the death of us’ theme.
While the storyline may feel sensational, the film raises important questions about the ethical and social ramifications of a technology that doesn’t just mimic the actions of people, processes and machines, but can learn, adapt and become superior to the original.
These very ramifications were also explored by the World Economic Forum (WEF) in its Global Risks Report 2017. While the report examines how broader social, political and technological trends are transforming the world, it highlights two technologies that are moving so quickly that governments aren’t able to adequately provision for them: robotics and AI.
The growth of AI in particular is accelerating to the extent that this year’s panellists were surprised and concerned at the speed of development. The report identifies AI as a key risk and dedicates a chapter to exploring the following question:
‘Can we (or should we) build trust in systems that can make decisions beyond human oversight that may have irreversible consequences?’
Data and AI: A ticking time-bomb?
Every step forward in artificial intelligence (AI) challenges assumptions about what machines can do.
WEF Global Risks Report 2017
Governments have always had an abundance of data, but only recently has technology been able to harness this information and turn it into something valuable, so much so that the Department of Finance declared data a national asset in 2013.
There are a number of ways technology can turn data into insights. On this year’s WEF panel on artificial intelligence, IBM CEO and President Ginni Rometty explained how AI transforms this data: ‘There’s a reason that we call AI cognitive. If we were to gather all this information we’d be so overwhelmed, that we wouldn’t be able to internalise it, to use it to what its full value could be.’
As analytic tools and algorithms have become more sophisticated, AI machines are starting to surpass the limits of human capacity in some areas, such as image and speech recognition, chess and, more recently, Go, a game of strategy and intuition.
While it may be amusing when a machine beats a human at a game, when the same logic is applied to warfare, a grave situation arises:
‘Some of AlphaGo’s moves puzzled observers, because they did not fit usual human patterns of play. DeepMind CEO Demis Hassabis explained the reason for this difference as follows: “unlike humans, the AlphaGo program aims to maximize the probability of winning rather than optimising margins”. If this binary logic – in which the only thing that matters is winning while the margin of victory is irrelevant – were built into an autonomous weapons system, it would lead to the violation of the principle of proportionality, because the algorithm would see no difference between victories that required it to kill one adversary or 1,000.’
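The distinction the report draws between the two objectives can be sketched in a few lines of code. This is an invented illustration, not AlphaGo’s actual algorithm: the candidate moves and their (win probability, margin) values are hypothetical, and serve only to show how the two objectives can pick different actions.

```python
# Two hypothetical policies choosing among candidate moves.
# Each move is described by an assumed probability of winning
# and an assumed margin of victory if the win occurs.
candidate_moves = {
    "safe":       {"win_prob": 0.95, "margin": 1},   # near-certain, narrow win
    "aggressive": {"win_prob": 0.70, "margin": 20},  # riskier, wide win
}

def win_prob_policy(moves):
    """Binary objective: only the probability of winning matters."""
    return max(moves, key=lambda m: moves[m]["win_prob"])

def margin_policy(moves):
    """Margin objective: maximise the expected margin of victory."""
    return max(moves, key=lambda m: moves[m]["win_prob"] * moves[m]["margin"])

print(win_prob_policy(candidate_moves))  # picks "safe"
print(margin_policy(candidate_moves))    # picks "aggressive"
```

Under the binary objective, a 95%-likely win by one point beats a 70%-likely win by twenty; the size of the victory never enters the calculation. That indifference to margin is precisely what the report warns would violate proportionality if carried into a weapons system.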
AI and warfare
The international community should tackle this issue with the utmost urgency and seriousness because, once the first fully autonomous weapons are deployed, it will be too late to go back.
WEF Global Risks Report 2017
The potential consequences are so grave that industry and community leaders such as Stephen Hawking, Elon Musk, Noam Chomsky and Steve Wozniak signed an open letter to the UN voicing their concerns over the use of AI in warfare and urging international cooperation on a ‘ban on offensive autonomous weapons.’
The UN responded in December last year with a move to ban the use of autonomous weapons in warfare, with 89 signatory nations indicating that they would like to see international guidelines applied to the use of AI in defence. This reflects the intrinsic problem: how do we establish principles around the use and purpose of this technology?
The issue of governance
‘It's frankly our responsibility, as leaders out there that are putting these technologies out, to guide them in their entry into the world in a safe way.’
Ginni Rometty, CEO and President of IBM
The WEF report outlines that there must be guidelines for every stage of the AI process, from development to use. Rometty goes one step further by suggesting that the guidelines must address the following points:
- What is the purpose of the technology?
- Are your processes engendering transparency?
- What are you doing to ensure that AI learnings are shared with the wider community?
She argues that these criteria are essential to cultivating trust and cooperation, so that instead of fearing the harmful consequences of this technology, we can harness it for the many benefits it can provide.
In Ex Machina, the ending was a tragic one. The maker and the programmer turned on each other and were killed, while the AI unit flew off into freedom, having outsmarted the humans in their claustrophobic, secretive, misanthropic battle. The film also exposed an important truth: the technology could have been used for many beneficial outcomes, had the maker worked from the beginning in a spirit of cooperation and transparency, instead of using it to create fear, uncertainty and chaos.
If you would like to know more about how technology and big data will impact government in the next few years, consider signing up for the Big Data and Analytics 2017 @ CeBIT conference today.