My robot Valentine: could you fall in love with a robot?

Can a robot really feel and express emotions such as love? Shutterstock/Charles Taylor

Kate Letheren, Queensland University of Technology and Jonathan Roberts, Queensland University of Technology

Imagine it’s Valentine’s Day and you’re sitting in a restaurant across the table from your significant other, about to start a romantic dinner.

As you gaze into each other’s eyes, you wonder how it can possibly be true that as well as not eating, your sweetheart does not – cannot – love you. Impossible, you think, as you squeeze its synthetic hand.

Could this be the future of Valentine’s Day for some? Recent commentary suggests that yes, one day we might just fall in love with our robot companions.

Already, robots are entering our homes in increasing numbers, with many households now owning a robot vacuum cleaner.

Robotic toys are becoming more affordable and are interacting with our children. Some robots are even helping to rehabilitate children with special needs, or teaching refugee children the language of their new home.

Robot romance

Will these appliances and toys continue to develop into something more sophisticated and more human-like, to the point where we might start to see them as possible romantic partners?

While some may compare this to objectophilia (falling in love with objects), we must ask whether this can truly be the case when the object is a robot that appears and acts like a human.

It is already the norm to love and welcome our pets as family members. This shows us that some varieties of love needn’t be a purely human phenomenon, nor even a sexual one. There is even evidence that some pets, such as dogs, experience emotions very similar to ours, including grief when their owner dies.

Surveys in Japan over the past few years have shown a decline in the number of young people either in a relationship or wanting to enter one. In 2015, for instance, it was reported that 74% of Japanese in their 20s were not in a relationship, and 40% of this age group were not looking for one. Academics in Japan suggest that young people are turning to digital substitutes for relationships, such as falling in love with anime and manga characters.

What is love?

If we are to develop robots that can mirror our feelings and express their digital love for us, we will first need to define love.

Pointing to a set of common markers that define love is difficult, whether the love is human-to-human or human-to-technology. The answer to “what is love?” is something humans have sought for centuries, but as a starting point we can say it involves strong attachment, kindness and mutual understanding.

We already have the immensely popular Pepper, a robot designed to read and respond to emotions and described as a “social companion for humans”.

How close are we to feeling for a robot what we might feel for a human? Recent studies show that we feel a similar amount of empathy for robot pain as we do for human pain.

We also prefer our robots to be relatable by showing their “imperfect” side through boredom or over-excitement.

According to researchers in the US, when we anthropomorphise something – that is, see it as having human characteristics – we start to think of it as worthy of moral care and consideration. We also see it as more responsible for its actions – a freethinking and feeling entity.

There are certainly benefits for those who anthropomorphise the world around them. The same US researchers found that those who are lonely may use anthropomorphism as a way to seek social connection.

Robots are already being programmed to learn our patterns and preferences, making themselves more agreeable to us. So perhaps it will not be long before we are gazing into the eyes of a robot Valentine.
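For the technically curious, the simplest version of this kind of preference learning can be sketched in a few lines of Python. Everything here is invented for illustration (the class, the activities, the suggestion rule); no commercial robot necessarily works this way:

```python
from collections import Counter

class CompanionRobot:
    """Toy model of a robot that adapts to its owner's preferences."""

    def __init__(self):
        self.preferences = Counter()  # tally of observed choices

    def observe(self, activity):
        """Record that the owner chose this activity."""
        self.preferences[activity] += 1

    def suggest(self):
        """Suggest whatever the owner has chosen most often so far."""
        if not self.preferences:
            return "Shall we try something new?"
        favourite, _ = self.preferences.most_common(1)[0]
        return f"Shall we {favourite} again? You seem to enjoy it."

robot = CompanionRobot()
for choice in ["watch a film", "go for a walk", "watch a film"]:
    robot.observe(choice)
print(robot.suggest())  # "Shall we watch a film again? ..."
```

Even this crude tally-and-repeat loop nudges the machine towards agreeableness; real systems do the same thing with far richer data.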

Society’s acceptance

Human-robot relationships could be challenging for society to accept, and there may be repercussions. It would not be the first time in history that people have fallen in love in a way that society at the time deemed “inappropriate”.

The advent of robot Valentines may also have a harmful effect on human relationships. Initially, there is likely to be a heavy stigma attached to robot relationships, perhaps leading to discrimination, or even exclusion from some aspects of society (in some cases, the isolation may even be self-imposed).

Friends and family may react negatively, to say nothing of human husbands or wives who discover their human partner is cheating on them with a robot.

Robot love in return

One question that needs to be answered is whether robots should be programmed to have consciousness and real emotions so that they can truly love us back.

When love is returned by a robot. Shutterstock/KEG

Experts such as the British theoretical physicist Stephen Hawking have warned against the development of full artificial intelligence, noting that robots may evolve autonomously and supersede humanity.

Even if evolution were not an issue, allowing robots to experience pain or emotions raises moral questions for the well-being of robots as well as humans.

So if “real” emotions are out of the question, is it moral to program robots with simulated emotional intelligence? This might have either positive or negative consequences for the mental health of the human partner. Would the simulated social support compensate for knowing that none of the experience was real or requited?

Importantly, digital love may be the catalyst for the granting of human rights to robots. Such rights would fundamentally alter the world we live in – for better or for worse.

But would any of this really matter to you and your robot Valentine, or would love indeed conquer all?

The Conversation

Kate Letheren, Postdoctoral research fellow, Queensland University of Technology and Jonathan Roberts, Professor in Robotics, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

What human emotions do we really want of artificial intelligence?

The challenge in making AI machines appear more human. Flickr/Rene Passet, CC BY-NC-ND

David Lovell, Queensland University of Technology

Forget the Turing and Lovelace tests on artificial intelligence: I want to see a robot pass the Frampton Test.

Let me explain why rock legend Peter Frampton enters the debate on AI.

For many centuries, much thought was given to what distinguishes humans from animals. These days thoughts turn to what distinguishes humans from machines.

The British codebreaker and computing pioneer Alan Turing proposed “the imitation game” (also known as the Turing Test) as a way to evaluate whether a machine can do something we humans love to do: have a good conversation.

If a human judge cannot consistently distinguish a machine from another human by conversation alone, the machine is deemed to have passed the Turing Test.

Initially, Turing proposed to consider whether machines can think, but realised that, thoughtful as we may be, humans don’t really have a clear definition of what thinking is.

Tricking the Turing test

Maybe it says something of another human quality – deviousness – that the Turing Test came to encourage computer programmers to devise machines to trick the human judges, rather than embody sufficient intelligence to hold a realistic conversation.

This trickery climaxed on June 7, 2014, when Eugene Goostman convinced about a third of the judges in the Turing Test competition at the Royal Society that “he” was a 13-year-old Ukrainian schoolboy.

Eugene was a chatbot: a computer program designed to chat with humans – or with other chatbots, to somewhat surreal effect.
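For readers who have never peeked inside one, the core of an early-style chatbot is surprisingly simple. The sketch below is a generic illustration in Python with invented rules; it is not Eugene Goostman’s actual program, which was considerably more elaborate:

```python
import random

# Hand-written rules mapping trigger keywords to canned replies.
RULES = {
    "hello": ["Hi there! How are you today?"],
    "weather": ["I do like talking about the weather. Is it sunny where you are?"],
    "school": ["School is boring. I would rather talk about something fun."],
}
DEFAULTS = ["Interesting. Tell me more.", "Why do you say that?"]

def reply(message):
    """Return a canned reply for the first matching keyword, else deflect."""
    lowered = message.lower()
    for keyword, responses in RULES.items():
        if keyword in lowered:
            return random.choice(responses)
    return random.choice(DEFAULTS)

print(reply("Hello, who are you?"))    # triggers the "hello" rule
print(reply("I had a strange dream"))  # no match, so it deflects
```

Deflection is the crucial trick: whenever the program has no matching rule, it steers the conversation elsewhere, and a persona such as a 13-year-old writing in his second language gives judges a ready excuse for any oddness.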


And critics were quick to point out the artificial setting in which this deception occurred.

The creative mind

Chatbots like Eugene led researchers to throw down a more challenging gauntlet to machines: be creative!

In 2001, researchers Selmer Bringsjord, Paul Bello and David Ferrucci proposed the Lovelace Test – named after the 19th-century mathematician and programmer Ada, Countess of Lovelace – which asks a computer to create something, such as a story or poem.

Computer-generated poems and stories have been around for a while, but to pass the Lovelace Test, the person who designed the program must not be able to account for how it produces its creative works.
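A tiny example shows why most generators fall short of this bar. In the Python sketch below (word lists and template invented for illustration), the designer can account completely for every poem the program will ever produce, which is precisely what the Lovelace Test disallows:

```python
import random

# The designer supplies all the raw material up front, so every output
# is fully explained by these lists plus the template below.
ADJECTIVES = ["silent", "golden", "restless"]
NOUNS = ["river", "engine", "moon"]
VERBS = ["hums", "waits", "dreams"]

def line():
    """One line of 'poetry' from a fixed template."""
    return f"the {random.choice(ADJECTIVES)} {random.choice(NOUNS)} {random.choice(VERBS)}"

def poem(lines=3):
    """A short 'poem' of randomly filled template lines."""
    return "\n".join(line() for _ in range(lines))

print(poem())
```

More sophisticated generators blur this accounting, which is exactly where the test becomes interesting.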

Mark Riedl, from the School of Interactive Computing at Georgia Tech, has since proposed an upgrade (Lovelace 2.0) that scores a computer in a series of progressively more demanding creative challenges.

This is how he describes the test:

In my test, we have a human judge sitting at a computer. They know they’re interacting with an AI, and they give it a task with two components. First, they ask for a creative artifact such as a story, poem, or picture. And secondly, they provide a criterion. For example: “Tell me a story about a cat that saves the day,” or “Draw me a picture of a man holding a penguin.”

But what’s so great about creativity?

Challenging as Lovelace 2.0 may be, it’s argued that we should not place creativity above other human qualities.

This (very creative) insight from Dr Jared Donovan arose in a panel discussion with roboticist Associate Professor Michael Milford and choreographer Professor Kim Vincs at Robotronica 2015 earlier this month.

Amid all the recent warnings that AI could one day lead to the end of humankind, the panel’s aim was to discuss the current state of creativity and robots. Discussion led to questions about the sort of emotions we would want intelligent machines to express.

Empathy – the ability to understand and share the feelings of another – was top of the list of desirable human qualities that day, perhaps because it goes beyond mere recognition (“I see you are angry”) and demands a response that demonstrates an appreciation of emotional impact.

Hence, I propose the Frampton Test, after the critical question posed by rock legend Peter Frampton in the 1973 song “Do You Feel Like We Do?”.

True, this is slightly tongue in cheek, but I imagine that to pass the Frampton Test an artificial system would have to give a convincing and emotionally appropriate response to a situation that would arouse feelings in most humans. I say most because our species has a spread of emotional intelligence levels.
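To make the distinction concrete, here is a deliberately naive Python sketch (all cues and replies invented for illustration) separating recognition (“I see you are angry”) from the response step the Frampton Test would actually probe:

```python
# Toy contrast between recognising an emotion and responding to it.
# Real affective-computing systems use trained classifiers, not lookups.

CUES = {"furious": "anger", "angry": "anger",
        "devastated": "grief", "lost": "grief"}

def recognise(utterance):
    """Crude keyword spotting: recognition without understanding."""
    for cue, emotion in CUES.items():
        if cue in utterance.lower():
            return emotion
    return "neutral"

def respond(utterance):
    """One step further: acknowledge the feeling and its impact."""
    replies = {
        "anger": "That sounds infuriating. It isn't fair to you.",
        "grief": "I'm so sorry. Losing someone leaves a real hole.",
        "neutral": "Thank you for sharing that with me.",
    }
    return replies[recognise(utterance)]

print(recognise("I'm furious about the decision"))  # anger
print(respond("I feel lost since my father died"))  # grief reply
```

Of course, a lookup table only simulates the appropriate response; whether simulation could ever amount to feeling is the very question the test leaves open.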

I second that emotion

Others have explored this territory, and the field of “affective computing” already strives to imbue machines with the ability to simulate empathy. Even so, it is fascinating to contemplate the implications of emotional machines.

This July, AI and robotics researchers released an open letter on the peril of autonomous weapons. If machines could have even a shred of empathy, would we fear these developments in the same way?

This reminds us, too, that human emotions are not all positive: hate, anger, resentment and so on. Perhaps we should be more grateful that the machines in our lives don’t display these feelings. (Can you imagine a grumpy Siri?)

Still, there are contexts where our nobler emotions would be welcome: sympathy and understanding in health care, for instance.

As with all questions worthy of serious consideration, the Robotronica panellists did not resolve whether robots might one day be creative, nor indeed whether we would want them to be.

As for machine emotion, I think the Frampton Test will be even longer in the passing. At the moment the strongest emotions I see around robots are those of their creators.

Acknowledgement: This article was inspired by discussion and debate at the Robotronica 2015 panel session The Lovelace Test: Can Robots be Creative? I gratefully acknowledge the creative insights of panellists Dr Jared Donovan (QUT), Associate Professor Michael Milford (QUT) and Professor Kim Vincs (Deakin).

The Conversation

David Lovell, Head of the School of Electrical Engineering and Computer Science, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.