A study by the University of Lincoln claims that humans are more likely to warm to robots that make mistakes than those that are “perfect”
The touching relationship between Dr George Millican, a retired artificial intelligence researcher, and Odi, his outdated “synth”, in the recent Channel 4 series Humans was perhaps not as far-fetched as it first appeared, if new research is to be believed.
According to a study by the University of Lincoln, humans are more likely to form successful working relationships with robots if they are in some way flawed, while those that are programmed to be too “perfect” are considered off-putting.
Most robots in existence today follow a set of well-ordered and structured rules and behaviours, meaning that, for the most part, they do not make errors of judgement or incorrect assumptions, and they cannot express empathy.
However, the study found that a person is much more likely to warm to an interactive robot if it shows human-like “cognitive biases” – deviations in judgement which form the basis of individual characteristics and personalities, complete with errors and imperfections.
To test this theory, PhD researcher Mriganka Biswas, overseen by Dr John Murray from the University of Lincoln’s School of Computer Science, used two robots – Erwin, which can express five basic emotions, and Keepon, which is designed to study social development by interacting with children.
The researchers examined a number of interactions between the robots and human participants. During half of the interactions the robots were not affected by cognitive biases, but during the remainder, Erwin made mistakes when remembering simple facts, and Keepon showed extreme happiness or sadness using various movements and noises.
Participants in the study were then asked to rate their experiences. The results revealed that almost all of those taking part enjoyed a more meaningful interaction with the robots when they made mistakes, or when they expressed human-like emotions such as boredom or over-excitement.
“We monitored how the participants responded to the robots and overwhelmingly found that they paid attention for longer and actually enjoyed the fact that a robot could make common mistakes, forget facts and express more extreme emotions, just as humans can,” said Mr Biswas.
“By developing these cognitive biases in the robots – and in turn making them as imperfect as humans – we have shown that flaws in their ‘characters’ help humans to understand, relate to and interact with the robots more easily.”
Interactive or “companion” robots are increasingly used to support carers for elderly people and for children with autism, Asperger syndrome or attachment disorder. However, many of these robots lack human characteristics, so it can be hard for users to relate to them.
Mr Biswas said that, in order to be effective, a companion robot needs to be friendly and have the ability to recognise users’ emotions and needs, and act accordingly.
“The human perception of robots is often affected by science fiction; however, there is a very real conflict between this perception of superior and distant robots, and the aim of human-robot interaction researchers,” he said.
“As long as a robot can show imperfections which are similar to those of humans during their interactions, we are confident that long-term human-robot relations can be developed.”
The University of Lincoln’s study was presented at the International Conference on Intelligent Robots and Systems in Hamburg this month.
It paves the way for the next phase of Mr Biswas’ research, which will look at whether the appearance of humanoid robots helps people to understand their gestures more intuitively, and whether this familiarity, coupled with human-like faults, will stimulate even more positive reactions from users.