By CEBIT Australia chairperson Stephen Scheeler, former Managing Director of Facebook ANZ and founder of Digital CEO
There’s no doubt that the growing realm of artificial intelligence has the potential to revolutionise our world for the better. Intelligent machine systems are already transforming our lives by conducting research, optimising logistics, writing articles, detecting fraud, and providing translations, wresting much of the drudgery associated with repetitive tasks away from human hands, freeing us up to be more creative and productive. As a result, our world has become more efficient and profitable.
The Australian government recently opened its proposed ethics framework for AI in Australia to public comment. Based on the core issues and impacts of AI, the framework identifies eight principles to support the ethical use and development of AI in Australia: generating net benefits; doing no harm; regulatory and legal compliance; privacy protection; fairness; transparency and explainability; contestability; and accountability.
Should we fear AI?
These are all helpful, basic guidelines, and as with anything in its infancy, we have to figure out how to navigate the territory from here. However, the issues surrounding AI are far-ranging and will most likely necessitate increasingly complex frameworks as time goes on. Unfortunately, much of the talk surrounding AI tends to focus on somewhat fantastical future scenarios, which detracts from the actual issues at hand.
The notion that machines will somehow become more “intelligent” than humans and that we will be taken over by supercomputers such as HAL in 2001: A Space Odyssey, or be fleeing robotic cyborgs like the helpless mortals in Terminator 3: Rise of the Machines, belongs purely in Hollywood.
The rise of AI personal assistants
I believe the next “big thing” in the evolution of AI will be somewhat less dramatic, but it will still be a game changer. It’s much more realistic to envisage that one day in the not-too-distant future, we will each have our own AI “personal assistant” working for us. We already have a range of personal assistant apps such as Amazon Alexa, Siri and Cortana; however, I think we’re headed towards a greater evolution of this, with future AI assistants that will be more encompassing and do a whole lot more. I foresee systems, created and offered by perhaps five major companies, that will manage all your personal data relating to your interests, needs and wants across every system you are connected with. Your assistant will be able to generate and manage areas of interest on your behalf.
For example, if your system knows you are always on the lookout for a great pair of jeans, it may go searching on your behalf, present you with options, and then organise payment, all with minimal input from you. It may also be able to defend you against, or filter out, other forms of intrusive AI, keep an eye on your finances, keep tabs on appointments and handle all that mundane stuff as one principal point of control. This in turn will free up more of your time and actually has the potential to make people less chained to their devices. In a sense, it may free us up to do more “human stuff.”
So, what are the implications of such a scenario? For one, it may lead us to question in greater depth who we really are in the cyber world. If we tell our own personal system that we are similar in looks and interests to a big celebrity such as Kim Kardashian, even if we’re not, and to look out for products that relate to her lifestyle, then is there any harm in that? How true to ourselves do we remain, the more technology advances? This is already becoming a thorny issue given that the majority of what people post on Facebook and Instagram is an idealised view of their life, and often not actual reality.
As systems advance, we also need to be aware of how the algorithmic recommendations behind those feeds will need to change. We have already seen how the Russian government interfered in the 2016 US presidential election with the goal of harming Hillary Clinton’s campaign, boosting the candidacy of Donald Trump and increasing political and social discord. The clandestine influence campaign, run through the Internet Research Agency “troll farm,” created thousands of social media accounts that impersonated Americans, supported radical groups, and planned and promoted pro-Trump and anti-Clinton rallies, reaching millions of social media users between 2013 and 2017. We will therefore need to make it harder for AI to “game the system” and make these platforms more robust.
Thirdly, the ethical rules surrounding AI will need to become more complex and specific to each industry. Say the Australian government runs AI relating to Centrelink and the ATO, which interfaces with your own personal AI: what will be the ground rules for how those two systems interact with each other? Will your AI be allowed to represent you in relation to all aspects of the government AI, or only certain elements of it? What will your own personal rights be when it comes to AI?
How will we deal with the questions?
When it comes to an area as broad as the future of AI, it’s clear that there are still more questions than concrete answers or theories. The World Economic Forum recently identified a series of issues and conversations keeping AI experts up at night, including: What happens after the end of existing jobs? How do we distribute the wealth created by machines? How can we guard against mistakes? And how do we keep AI safe from adversaries?
The fact is, we still don’t have enough systems and processes in place to understand exactly how AI will affect our lives – or how exactly we will manage it, collectively and as individuals. As the Forum points out, “Whilst artificial intelligence has vast potential, its responsible implementation is up to us.”