Robots are increasingly roving our homes and public spaces. Some sweep floors or assemble automobiles; others have a social role, interacting with people in nursing homes, schools and malls. To design robots that act friendly and approachable, engineers work with psychologists, spawning a field known as robot-human relations. This research does more than build better robots: It also sheds light on the workings of the human mind.

Health psychologist Elizabeth Broadbent of the University of Auckland in New Zealand explores this rich topic in “Interactions With Robots: The Truths We Reveal About Ourselves,” in the Annual Review of Psychology. We chatted with Broadbent about her research on healthcare robots, ethical questions in robotics and what robot studies reveal about the human mind.

How did you first get interested in robot-human relations?

It was mainly through reading science fiction when I was growing up, books by Isaac Asimov and Harry Harrison. I just became fascinated with these robots that were helping people. So I did a degree in electrical engineering.

When I graduated, in the mid-’90s, there weren’t really any social robots. The only jobs around were with the more industrial robots, which I wasn’t interested in. I worked for a few years as an engineer, then went back to university and studied health psychology, which examines how the brain interacts with the body. By the time I started working as a lecturer at the University of Auckland, in 2005, robots were getting much more social. I started collaborating with engineers to help design what the robots should do, and to test whether their behavior was acceptable.

Why is it important to understand how robots and humans get along?

If you want them to be useful, they’re going to have to understand what you want, and you’re going to have to understand how to tell them what you want. And you’re not going to want something that behaves in a way that is not socially acceptable. They have to understand the social norms of how people behave. For example, a robot approaching somebody would have to do so in a way that wasn’t aggressive, and it shouldn’t interrupt people who are talking to somebody else.

Eye gaze is critical. Eye contact can tell you whether somebody is listening to you, whether they’re interested in what you’re saying, and whose turn it is to speak next. As humans, we understand these behaviors. Engineers have studied them in detail, and they’ve tried to write down rules for robots to follow.

And what about the look of robots? A classic concept called the “uncanny valley” states that robots become more appealing as they become more humanlike — but that likability dips when they start to look almost, but not exactly, like people. What do modern studies say about this?

It’s not quite as simple as originally thought. A more recent study suggested that uncanniness results from an inconsistent mix of human and artificial facial features, such as computer-generated eyes and mouth set in an otherwise natural-looking face.

What kinds of robots do you study?

We’ve been working with autonomous healthcare robots. One of them is Paro the seal, a pet-type robot. There are also a few different Korean robots that were not originally healthcare robots. They were made for things like working with children in kindergarten or serving drinks in cafés. We’ve taken those basic forms and turned them into healthcare robots, and tested them with older people and people in their own homes.

How can robots assist older people?

Reminding people to do things – in particular, take their medication. Also monitoring their health, like taking blood pressure and sending the results to a remote server where the doctor or nurse can log in. Another of the tasks we identified was detecting falls.

But you can set your mobile phone to remind you to take your meds. You can wear devices that let you call for help if you fall. What’s the advantage of integrating this into an autonomous robot?

That’s a vital question: we have to show that robots offer something that other devices don’t. We’ve done a study where we compared one of the robots, iRobi, with a computer tablet, like an iPad, running the exact same software. The software asked people to do some exercises, such as relaxation exercises and cycling on a bike.

In that study, we found that people were more likely to do the exercises if the robot asked them than if the computer tablet asked them. And people rated the robot differently from the computer tablet. They thought they could trust the robot more. It was rated as more popular and sociable, and as having a better personality.

Is there good evidence that such social robots really do help people?

In healthcare, most studies of social robots are observational and small. Part of that is because the technology is relatively new and it’s still being developed.

But because Paro the seal has been around for several years, there have been a number of randomized controlled trials, and the evidence for its benefits is beginning to come out.

One thing we’ve found is that the seal increases communication between people. Ordinarily, the caregiver might come in and put a cup of tea down for somebody, and then walk out again. Whereas if Paro was there, they would come over and say, “Oh wow, what have you got there? Oh, it’s a robot. Isn’t he cute? He’s got lovely soft fur.” They have something to talk about.

And we’re beginning to find out who the robot is most suitable for. In one of our recent studies, we found that people with more severe dementia don’t benefit as much as people with mild or moderate dementia.

How can psychologists use human-robot interaction studies to not only design better robots, but also to better understand human psychology?

With robots, you can isolate particular features. You can have two robots that are exactly the same on everything except for, say, personality, or the color of their hair, or the shape of their chin. And then you can test how people react to the robots.

For example, in Germany they’ve done quite a few studies comparing robots that are supposedly either made in Germany and given a German name, or made in Turkey and given a Turkish name. It’s exactly the same robot. And it turns out that German participants prefer the German one, and rate it more highly than the Turkish one. It just shows our inherent in-group bias: we favor people like us, who have the same background as we do.

Asimov had three laws of robotics, laying out how robots should not harm people, should follow orders and should protect themselves. What ethical rules or questions come up around today’s robots?

There are quite a few ethicists working in this area now. And they talk about things like, if a robot accidentally hurts someone, whose fault is it? Who do you hold accountable? Is it the robot, or the person who made or bought it?

And is it ethical to make a robot that looks so much like a human that people might mistake it for one?

Another big question is, what do we want robots to do? Do we want them in all the spaces of our lives? Are there any lines that we don’t want to go past?

Along those lines, if engineering could produce any robot you could imagine, what would you, personally, want?

I would want one that could converse very easily. And I wouldn’t want it to look too humanlike. I would want it, perhaps, to be a bit like C-3PO from Star Wars. He’s got a nice, self-deprecating sense of humor. Yeah, if I could have a C-3PO, that would be nice.