Just as with a pet dog, humans will have to spend quality time with autonomous vehicles, experiencing all they have to offer, before they feel safe enough to let them off the leash.

Humans don’t simply trust new technology because they’re told they should; we need to engage directly with the technology, establishing and fulfilling mutual expectations, before it earns our trust.

This is the view of Dr Max Cappuccio, a philosopher of technology and robot ethicist at UNSW Canberra.

“Trust became a central concept in computer science only recently, with the prospect of building artificial systems that are not only automated but also autonomous,” said Dr Cappuccio, a Senior Researcher in UNSW Canberra’s School of Engineering and IT.

“The paradigm of the second and third industrial revolutions was based on automation.

“Automation means machines do repetitive work, relieving us from its burden.

“We don’t feel the need to trust our washing machine or our car: it is sufficient that they are reliable. But those machines are not autonomous, in the sense that they do not need to make decisions or navigate complex environments.”

“The key concept of the fourth industrial revolution is autonomy, because we aim to design machines that operate adaptively under decreasing levels of human supervision and control.”

However, the idea of machines, such as an autonomous vehicle (AV), moving freely around us, and even transporting us at speed through populated areas, is one we still need to adjust to.

“With autonomous systems it’s more difficult to tell whether they are going to do exactly what you want them to do, because complex, intelligent behaviour is, by definition, unpredictable,” Dr Cappuccio said.

"It transcends standardized protocols and rules. If cars have to be autonomous, they must have their own intelligence.”

Machines will do things their way

Dr Cappuccio says autonomous machines are not just tools. In a sense, they’re companions or co-workers.

Teaming with them involves coordination, established through the fine-tuning of reciprocal expectations. Having trust means managing these expectations correctly, so that they are commensurate with our imperfect understanding of the machines’ implicit processes.

“Machine-learning-based systems learn from experience and develop their own, complex behavioural patterns,” Dr Cappuccio said.

“In a growing number of domains, they are showing a surprising capability to replicate many aspects of human intelligence, but the greater the intelligence they acquire, the harder it is for us to predict whether and how they will fulfil our expectations.”

“Some theorists suggest that the best way to deal with them is the way you might deal with animals. You can’t ask your dog how it will go about a particular task, but you still have a good understanding of your dog. If you have developed a good relationship with your dog, then you have very clear expectations about its behaviour, which means you can trust it.”

Trust emerges from a relationship with technology

Deeply interested in the mechanisms of the human mind as well as human-machine interaction, Dr Cappuccio works in a multidisciplinary environment with engineers, psychologists, roboticists, and experts in human performance.

His work has covered the application of humanoid robots, particularly in the classroom and to help children socialise. He has conducted research into the ethical regulation of autonomous weapon systems and their impact on the attitudes of military personnel.

Dr Cappuccio, who conducted research in nine countries before moving to Australia, is currently working towards a second PhD, in human-computer interaction.

“Between human and machine, trust emerges dynamically from a history of interactions,” Dr Cappuccio said.

“How do you build trust between a human and a horse? Horses cannot promise to faithfully follow a human’s instructions. But if you create the conditions for them to ride together, they will become progressively acquainted and establish the strongest sense of loyalty.”

It will be somewhat similar with self-driving cars and other robustly autonomous systems, when they finally arrive.

“The first step in building trust in a technology is to offer the user a direct, concrete (even if imperfect) experience of it, like the first few times you fly in a plane,” Dr Cappuccio said.

“The subsequent process is tentative and iterative. As people get progressively exposed to AVs, a feedback loop will form. The technology will develop, improve, and be perfected based on users’ feedback. Conversely, trust will grow as people start forming reasonable expectations that are systematically fulfilled by the technology.”
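To make that feedback loop concrete, here is a minimal toy simulation of the dynamic Dr Cappuccio describes: a rider’s trust rises when the vehicle fulfils their expectations and falls when it violates them, while user feedback slowly improves the technology’s reliability. The update rules and all numbers below are illustrative assumptions, not figures from his research.

```python
import random

# Toy model of the trust feedback loop described above. Every rule and
# parameter here is an illustrative assumption, not a research result.

def simulate_rides(rides=200, seed=0):
    rng = random.Random(seed)
    reliability = 0.80   # chance the AV fulfils the rider's expectation
    trust = 0.20         # rider's initial trust in the technology
    for _ in range(rides):
        fulfilled = rng.random() < reliability
        if fulfilled:
            # Fulfilled expectations nudge trust towards 1.
            trust += 0.05 * (1.0 - trust)
        else:
            # Violations erode trust faster than fulfilments build it.
            trust -= 0.25 * trust
        # User feedback gradually improves the technology itself.
        reliability += 0.001 * (1.0 - reliability)
    return trust, reliability

trust, reliability = simulate_rides()
print(f"trust after 200 rides: {trust:.2f}, reliability: {reliability:.3f}")
```

The asymmetric update (violations weighted more heavily than fulfilments) reflects the familiar observation that trust is lost faster than it is gained; with expectations systematically fulfilled, both trust and reliability drift upwards together.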

Find out more

UNSW Canberra’s AV-themed Long Road Ahead series continues on December 6, with an online/in-person seminar on the topic of ‘Smart traffic control for the era of autonomous driving’.