We are facing a care crisis of epic proportions. In less than 20 years' time the number of over-65s in the UK, currently 10 million, will have risen to 17 million. Yet estimates suggest we will not have enough care workers to tend to this ageing population.
Assistive robots remain the anonymous shadow in the background of today's not-fit-for-purpose care system. While governments fund projects that focus on designing care robotics for the future (such as CHIRON), we must tread carefully when designing something to work with humans at their most vulnerable stage in life. The question many are now asking is: what should these assistive robots look like? Should they keep the form of a machine, or should they try to emulate a human?
Science fiction is full of stories of robots usurping humans. If we were to believe Hollywood movies, machines are either out to eliminate us or to trick us into a state of surrender. While attitudes towards robots can vary greatly depending on their application, the care sector can be a very personal and thorny area indeed – especially when people start to consider the well-being of their own parents or grandparents. The word 'care' denotes an action or feeling performed by something capable of emotion. So is it ethical to design a machine that we feel could care for us, and should we risk eliciting an attachment to this machine in the way we might to a human caregiver?
One way to look at this issue is through Bowlby's theory of attachment. Attachment does not have to be reciprocal: one person may have an attachment to an individual which is not shared. The evolutionary theory suggests that children come into the world biologically pre-programmed to form attachments with others, because this will help them to survive, and that the determinant of attachment is not food, but care and responsiveness. While Bowlby's focus is mainly on the bond between primary caregiver and baby, attachment patterns formed in childhood can repeat throughout our adult lives as we attach to new partners or friends. In fact, attaching to a robot might not be so bad in itself, but we may bring all of our previous attachment issues with us. This leads us to experience the robot not as an object or machine, but as a replica of our early caregivers, who might have been angry or rejecting, kind or smothering.
That humans develop emotional attachments to robots is well known and well documented. If humans can form bonds with pets (who can reciprocate on some level and who often rely on us for food and affection), some might ask, why not robots?
It’s surprisingly easy for humans to endow robots with personalities. We are likely to anthropomorphise something if it appears to have many traits similar to those of humans, through human-like movements or physical features such as a face.
According to Rick Nauert, PhD: 'anthropomorphism carries many important implications. For example, thinking of a nonhuman entity in human ways renders it worthy of moral care and consideration. In addition, anthropomorphised entities become responsible for their own actions — that is, they become deserving of punishment and reward.'
So, if care robots are merely designed to do a job, is it healthy to attach to them, and what happens when they need to be replaced? The Channel 4 series Humans covers this issue. One of the characters – Dr. George Millican, a retired artificial intelligence researcher and widower who suffers from an unknown disability – forms a special bond with his outdated care robot, Odi. George refuses to let go even when his GP insists the robot be recycled, and goes to great lengths to conceal Odi (with whom he has formed a father/son-like attachment) from the authorities.
While fictitious, this idea is not entirely far-fetched. In the real world in 2013, researcher Julie Carpenter documented soldiers who developed strong emotional bonds with their robotic helpers, to the point of experiencing frustration, anger, and grief when the robots were destroyed on the battlefield. Some even held funerals for them.
Currently on the market in Japan is a fluffy robot seal companion named Paro. Used mainly by older people, Paro does not help with the dishes, carry heavy items or administer medication. Instead, Paro offers companionship, responds to being stroked and behaves more like a pet. In a BBC article about the robot, Japanese care home resident Kazuo Nashimura said: "Paro is my friend. I like it that he seems to understand human feelings."
Which all begs the question: if we build something that seems to understand us, something that can talk to us and help us with all of our tasks – a robot to share our ups and downs with, who can help us sift through our most poignant memories – is this ethical, and can we still separate what is real from what is not?