21 Aug

The onset of autonomous vehicles, and what that means for society

A future with autonomous vehicles (AVs) is often touted as a sort of utopia. Imagine it: no frustrating traffic jams, no driving around in circles trying to find a parking spot, more green cities – in both literal and environmental terms, as areas previously used for transport infrastructure are reclaimed for green spaces, while our carbon footprint shrinks. We’ll have more mobility, more time, and more money in our pockets.

This technology will also save lives. Currently, around 1.2 million people die on the road each year – roughly 140 every hour, the equivalent of a full 737 falling out of the sky – and in 94% of cases the cause is human error. Driverless cars promise to drastically cut the number of road fatalities.

But is the future really as rosy as tech and auto corporations make it out to be? And with 10 million (partially and fully) self-driving cars predicted to be on the road by 2020, are we fully prepared for the repercussions that are right around the corner?

There are some tough questions that have yet to be fully answered.

Are they safe?

With cameras and radar that can scan 360 degrees, the sensing capabilities of AVs already far outstrip those of humans. So if the question is, “Are self-driving cars safer than humans?”, the answer is an unequivocal “Yes”.

However, if the question is, “Are they safe?”, the answer becomes less clear. While AVs don’t drink alcohol, get distracted by phones or fall asleep at the wheel, as a human might, they have shown a more limited ability than humans to cope with novel situations. Volvo, for example, has admitted that its self-driving cars, which can recognise elk and caribou, are confused by kangaroos because of the way they hop. There is also evidence that AVs perform poorly in adverse weather conditions, such as rain, which can obscure cameras, create confusing glare and reflections, and reduce the range and accuracy of sensors. Even graffiti on road signs can confound an AV’s visual recognition software: researchers have shown that simply by applying stickers to a stop sign, they could trick the system into reading it as a speed limit sign. And with all the technology and software required to operate an AV, the vehicles are also vulnerable to hacking and other cyber security attacks.

As these issues get ironed out, it is inevitable that others will arise, causing injuries and even deaths. There has already been one death: in May 2016, Joshua Brown became the first known fatality in a self-driving car when his Tesla Model S collided with a semitrailer after the car’s Autopilot system failed to detect the white truck against the brightly lit sky. After a federal investigation, auto-safety regulators said no defects were found in the system and that Tesla’s Autopilot-enabled cars did not need to be recalled.

The reality is that AVs will likely never be perfectly safe. However, when you weigh the benefits against the costs, it seems impossible to prevent AVs from taking over our roads. As Nikolaus Lang, of the Boston Consulting Group’s Centre for Digital in Automotive, says, “As with any new technology, there will be failures and even fatalities, but the overall benefits – in terms of [estimated] 90% fewer accidents, 40% less congestion, up to 80% less emissions, and 50% of parking space saved – are so substantial that the technological development will prevail.”

Are they ethical?

There will undoubtedly come a time when an AV will have to make a difficult decision about who lives and who dies – should it sacrifice its passenger to save the life of a pedestrian, or vice versa?

Researchers have already started to tackle these sticky moral dilemmas. MIT, for example, has turned to crowdsourcing to gather human perspectives on moral decisions made by artificial intelligence, in an initiative called Moral Machine.

And there is evidence to suggest that human moral behaviour can be adopted by machines. One study by the University of Osnabrück found that human moral behaviour follows a relatively simple ‘value of life’-based model that could, in theory, be described by an algorithm, meaning the way humans react in such situations could be encoded in technology such as AVs.
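To give a sense of what “described by an algorithm” might mean in practice, here is a minimal, purely hypothetical sketch in Python of a ‘value of life’-style decision rule. The outcome categories, harm weights and the simple minimise-total-harm rule are illustrative assumptions only; they are not drawn from the Osnabrück study or from any real vehicle’s software.

# Purely illustrative sketch of a 'value of life'-style decision rule.
# The categories and weights below are hypothetical assumptions for
# illustration only -- not a real or proposed AV control policy.

from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    harmed: dict  # e.g. {"pedestrian": 1, "passenger": 0}

# Hypothetical harm weights per person (or animal) in each role.
HARM_WEIGHTS = {"pedestrian": 1.0, "passenger": 1.0, "animal": 0.1}

def expected_harm(outcome: Outcome) -> float:
    """Sum the weighted harm across everyone affected by this outcome."""
    return sum(HARM_WEIGHTS[role] * count for role, count in outcome.harmed.items())

def choose(outcomes: list) -> Outcome:
    """Pick the outcome with the lowest total expected harm."""
    return min(outcomes, key=expected_harm)

if __name__ == "__main__":
    swerve = Outcome("swerve into barrier", {"passenger": 1})
    brake = Outcome("brake in lane", {"pedestrian": 2})
    print(choose([swerve, brake]).label)  # -> "swerve into barrier"

Even in this toy form, the hard part is obvious: the behaviour is entirely determined by the weights, and deciding what those weights should be is exactly the ethical question that initiatives like Moral Machine are trying to crowdsource.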

But even if we can make machines act like humans, the question remains: should we?

What are the legal implications?

With new technology come new legal issues. Who is deemed at fault, for example, when a driverless car gets a parking fine or commits a traffic violation, like running a stop sign? Who is liable if an AV is involved in an accident and damages property, causes injury, or even kills someone?

Some countries are already looking to the future in terms of their legislation. The UK has recently passed a bill under which insurers are primarily responsible for paying out damages stemming from accidents caused by AVs in self-driving mode, provided the vehicle is insured at the time of the accident. This in effect protects manufacturers from potential lawsuits that might otherwise threaten to stifle the development of AVs.

California has also recently proposed regulations to allow fully autonomous vehicles to drive on public roads without any people in the car (currently AVs must have a backup driver at all times) – a move that has been labelled a “game-changer”. The changes would also allow companies to self-certify their vehicles as safe to operate without a human, though some experts have misgivings about this. Ryan Calo, a law professor at the University of Washington, called it “a very big leap”, adding, “I’m worried by the idea of a company saying, ‘We’re good.’”

While countries like the US, the UK and Sweden already have policies in place to facilitate the development and introduction of AVs onto public roads (for example, in 2015 the UK government published a Code of Practice for testing driverless cars, and the Swedish government launched a Strategic Innovation Program called Drive Sweden), Australia still lags behind in terms of a federal regulatory framework.

Some state governments are starting to get on board, though – in September 2015, South Australia became the first state to introduce legislation permitting on-road testing of driverless cars. The NSW government recently passed similar legislation, allowing AVs to be tested on both city and regional roads across the state. This comes as the first trial of AVs in NSW gets underway, with the government partnering with HMI Technologies, NRMA, Telstra and IAG on a two-year trial of a driverless shuttle bus at Sydney Olympic Park.

Other organisations have also stepped in to help prepare Australia for the inevitable future. The Australian Driverless Vehicle Initiative, for example, aims to “accelerate the safe and successful introduction of driverless vehicles onto Australian roads”.

What societal impact will they have?

While AVs promise us more free time, fewer accidents and markedly reduced healthcare costs, less is said about the disruptive effects they will have, particularly on the labour market. The mining industry may be among the first hit, according to a McKinsey report, as AVs are initially adopted in controlled environments such as mines.

Thousands of taxi, bus, van and truck drivers will eventually lose their jobs, and this will have a flow-on effect, impacting management and support roles, as well as those businesses, particularly in regional areas, that depend on trucking routes, like motels, retail stores and restaurants.

The entire auto industry will also likely get a big shake-up, as car ownership drops and cars run on electricity rather than petrol. This potentially impacts car dealers, auto mechanics and petrol station attendants.

As these people grapple with job losses, it will be up to governments to weigh the merits of introducing this technology against the challenges of supporting, reskilling and redeploying a significant proportion of the labour force. There will also be implications for taxation, particularly for the multinationals that stand to gain the most financially. Some have even suggested a universal basic income could be part of the solution.

A brave new world

Autonomous vehicles hold a lot of promise, but they also come with a swathe of challenges that will need to be carefully considered and regulated in order to get the most societal benefit out of the technology and ensure no citizen is left behind.

Government agencies are already using smart technology to tackle big issues, create transformative change and foster social inclusion. Want to know how? Then download our free ebook Smart technology, happy citizens: how governments can foster social inclusion now.

Register your interest in CeBIT Australia 2019 now