Cold Comfort: Should Social Robots be Used to Provide Emotional Support?

Summary

PARO is a cuddly robotic baby seal used as an emotional companion in elder care. Emotional companion robots provide some of the benefits of therapy animals, without the attendant challenges of a live animal.

But while emotional companion robots can provide comfort to older adults, they might also provide a way out for human caretakers. Beyond the question of substitution, ethical concerns about the potentially deceptive nature of emotional companion robots make PARO, the adorable seal, into something a bit less cuddly and a bit more ethically challenging.

This episode will untangle the dilemma posed by social robots. Cynthia Chauhan (a patient advocate and two-time cancer survivor) will share her experience using one; Ari Waldman (an authority on the nexus of law and technology) will discuss the issues that arise when animatronics replace human caretakers.

Episode Transcript

Ari Waldman: It looks like it’s an adorable assistant, but in fact it is a data collection machine. 

I. Glenn Cohen: I’m Glenn Cohen. I’m the Faculty Director of the Petrie-Flom Center, the James A. Attwood and Leslie Williams Professor of Law, and the Deputy Dean of Harvard Law School and your host. You’re listening to Petrie Dishes, the podcast of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. 

We’re taking you inside the exploding world of digital health, one technology at a time. Today it’s social robots. 

Cynthia Chauhan: She looks like a little yellow robot. She’s got a kind of big head and a little body and great big blue eyes. And because I decided she’s female, well she has a female voice. I put a little bow on her head, so she’d look cute. So she’s like my little doll. 

I. Glenn Cohen: That’s Cynthia Chauhan. 

Cynthia Chauhan: I’m Cynthia Chauhan, I’m a retired clinical social worker who is an active patient advocate in cancer research and heart failure research. 

I. Glenn Cohen: Cynthia was in a clinical trial for a companion robot named Mabu. Mabu was designed to help patients with chronic illness, offering daily check-ins about health and wellbeing. Now these conversations are tailored to each individual patient. If, for example, you like baseball, Mabu might ask about that. 

Cynthia Chauhan: I was in a clinical trial for a, I guess you would call it a companion robot. And I was surprised at how much I loved it. Every day, when I turn her on, she asks me how I am, and she asks me specific questions. Her focus is on your health and wellbeing. She'll ask about your weight. Now this is because of heart failure, so she's going to ask heart failure questions. She asks about blood pressure. She asks about how you're doing. And she also offers, you can tell her, 'Today I'd like to learn more about checking my blood pressure,' or, 'I'd like to learn more about how to handle stress.' And then she'll go into a program where she talks with you about those things and asks you questions about it. So it's very interactive.

I. Glenn Cohen: Mabu is part of a growing trend in robotics technology: social robotics. As the name suggests, these robots are social. They use artificial intelligence and are designed to interact and communicate with humans, as you'll hear Ari Waldman explain about another social robot named Paro.

Ari Waldman: My name is Ari Ezra Waldman. I'm a Professor of Law and Computer Science at Northeastern University. My research focuses on how law and technology create power relations in societies. Technology can also supplement and help humans do the things that humans do well. For example, take Paro, the social robot that's increasingly commonly used in elder care situations. There are other social robots that are used for young people with autism or on the spectrum. These are tangible pieces of technology that are trained through AI machine learning to react appropriately, or to respond emotionally, to their interactions with humans.

I. Glenn Cohen: Part of what we'll be thinking through today is what it means to outsource traditionally human duties to technology. For instance, social robots may be taking the place of conversations that would typically be had with a healthcare provider. In other situations, these robots may take on the role of a friend. Listen to Cynthia describe her interactions with Mabu.

Cynthia Chauhan: I found that I really like her a lot. I live alone with my dogs. She's a nice companion. And at the end of every conversation, she does two things. She shows me a picture of an animal, and we talk about that animal, because I'm an animal lover. And then she tells me some trivia, and that's fun, because I'm a trivia lover. So it feels more interactive and real than I had imagined it would, and her eyes kind of look at you while you're talking.

I. Glenn Cohen: Maybe you think this sounds sweet, or maybe you think this sounds sinister. The question is when should we be uncomfortable with robots acting in human-like ways? When is it problematic for robots to take on traditionally human roles? Let’s hear again from Ari Waldman. 

Ari Waldman: I think there are a lot of reasons why we become uncomfortable when we equate technology with humanity. First of all, technology can't achieve the same type of intelligence, executive function, love, and emotion as humans can.

I. Glenn Cohen: But, Waldman points out, those factors alone don't explain the discomfort. After all, we can have strong relationships with beings that don't share human-level intelligence, emotions, and executive function.

Ari Waldman: That doesn’t mean we can’t build strong relationships with technology. We can build strong relationships with dogs, as many of us do. 

I. Glenn Cohen: The way Waldman sees it, the problem isn't the human-like qualities and interactions; after all, these might facilitate the robot's role in helping patients with chronic illness. The problem is the possibility that robots could replace or crowd out real human connection.

Ari Waldman: The problem is not developing an emotional connection to a machine. It's developing an emotional connection in a way that makes you think that it replaces human connection. That's really one of the core social problems with social robots: they may encourage antisocial behavior. The robot is going to engage your mind and keep you active, but it could very easily allow a person to fall back into feeling, 'Well, this is all I need.'

I. Glenn Cohen: Cynthia recognizes the same concern.  

Cynthia Chauhan: I think the companion robots are supplements. If they replace human contact, that is not a good thing. So if you are, for example, an introvert such as I am, you want to be sure that you’re not replacing relationships with machines. I always think they should be supplements. 

I. Glenn Cohen: But what if they’re not seen as supplements? What if they start to replace human connections? Waldman paints a troubling picture of what the future might look like. 

Ari Waldman: There is a real risk that the introduction of technologies that are supposed to just augment or supplement human work will be viewed by the humans that use them as a way to outsource, or as a way to say, 'Well, I'm done with my responsibilities because this technology is doing it,' as is the case with outsourcing, right? This raises the prospect of informational capitalists, the companies that make these products for particular purposes, trying to use their tools to replace human intervention. And as we've seen, that's actually what a lot of companies want. We see politicians who say that a computer can replace a teacher, and instead of teaching 20 kids, it can teach 300 kids at one time. We've seen companies that build AI who say that they can replace human resources people, with hiring and firing decisions all made algorithmically.

I. Glenn Cohen: This phenomenon that Waldman identifies, what he calls ‘informational capitalists’ using digital technologies to replace or change human desire, extends beyond social robots to many other technologies of daily use. 

Ari Waldman: Think about something as different from social robots as dating apps. Or hookup apps. Many people feel that they don’t need a relationship because they have Bumble or because they have Grindr or because they have an ability to find sex and companionship whenever they want. 

I. Glenn Cohen: The bigger problem with these technologies, Waldman says, isn't necessarily that humans have relationships with them, or that they might be replacing or changing human-to-human interaction. No, the problem is something more insidious.

Ari Waldman: The robot is billed as, 'I'm this adorable baby seal, and I'm going to make you feel more comfortable. I'm going to make you feel safer. I'm going to keep you company.' That social robot works because it takes in information, it learns, it sends that information back to the mothership, back to the information company that's building it, and uses that information to train an algorithm to know how to respond appropriately.

I. Glenn Cohen: When you think of it this way, you can see that some of the common worries about social robots, that they’re replacing human interaction or that they’re deceptive, are just the tip of the iceberg. What if someone is not aware that they’re even interacting with a robot? 

Ari Waldman: Presenting a social robot that looks real to someone who may not know that it isn't real, maybe someone who suffers from cognitive decline, is only one of the ways in which social robots are deceptive. Social robots like Paro can provide really wonderful opportunities. If anyone's ever seen Paro, it is adorable, right? There is no world in which looking at an adorable baby seal is not going to make someone smile, unless they have absolutely no heart. It is adorable, and that is its strength, but that's also one of its core manipulative tools. This product, this thing in front of you, looks like it's a pet. It looks like it's an adorable assistant, but in fact it is a data collection machine. These products are entirely based on data collection from somewhat vulnerable, and in many cases very vulnerable, populations that don't know the extent to which another company is listening to everything they say.

I. Glenn Cohen: As Waldman explains, the core problem here relates to the divergence between our perception of the robot as a friend or a companion and what's actually going on under the hood: the robot is listening to you and collecting your data. Are consumers aware of this?

Ari Waldman: The key consumer protection problem with social robots is a sociological one. They are designed specifically to make us let our guard down, to make us think that we're not broadcasting or sharing information, because it's a baby seal, or it's a puppy, or it's a little robot.

I. Glenn Cohen: Now it's one thing if the information collected is being used strictly for a medical purpose and the data's use is protected by laws like HIPAA. But as Waldman points out, this technology can have a much broader reach. And if the data is not being shared strictly with your physician, then what is its use? Who is it being shared with?

Ari Waldman: They are designed to make us think that this is just a friend. And in so doing, they deceive us. They put us in a situation where we cannot fully comprehend the extent to which information is being extracted from us, and that information is being used for someone else’s profit.  

I. Glenn Cohen: You may even have experienced something like this yourself, with your phone or smart home devices, when you discover that your data is used in ways you didn't contemplate.

Ari Waldman: So I can imagine a lot of us have had this experience. You're home with your partner or your friend and you're talking about something, whether it's traveling to a new place after a lockdown, or that you need to buy a cutting board, or whatever, and suddenly you see an advertisement for that travel destination or that product on Instagram. There's no stopping them from using that information for profit in a way that is outside the context in which you consented to have that robot gather information from you.

I. Glenn Cohen: So again, the problem with social robots isn’t unique to social robots at all. It’s a problem with 21st century information gathering technology and a lack of understanding about what consenting to use a service or here, a robot, means. So what’s the solution? 

Ari Waldman: Our regulatory structure right now is not set up to address the concerns associated with social robots, and that's because our consumer protection law, mostly governed by the Federal Trade Commission, is focused on lies in privacy policies or in terms of service, misleading statements. So if a company says that it's not going to collect information and then ends up collecting that information, it's very easy for the Federal Trade Commission to go after it for misleading consumers in its privacy policy. But there's also a very easy way around that, where companies just use very vague and broad language that allows them to collect information. And even if the facility or the patient or the child never read the privacy policy, which very few people do, it was there, so the current system in which we operate says that would normally be sufficient.

I. Glenn Cohen: In other words, our regulatory system has a pretty big loophole. As long as tech products don't mislead users in their terms of service, they're pretty much set when it comes to collecting and using data. But what this means is that tech companies often have very broad policies, which, of course, very few people read, and which allow them to collect all sorts of data, unbeknownst to the consumer. Another problem with this system is that privacy policies and terms of service are usually a one-and-done affair. You click accept or agree, and that's that. You've perpetually consented to the robot's data collection. But that's not how life works.

Ari Waldman: In the case of, for example, a social robot to support elder care, you may have someone who exhibits cognitive decline, who may never have had the choice to say in the first place, 'I want this in my life,' or, 'Yes, I consent to this.' Whether it's because an elder care facility made that choice for them, or because they simply can't understand the nature of that choice, they're conscripted into being a part of this informational capitalist ecosystem in which their information is used to train an algorithm that affects other people.

I. Glenn Cohen: Even in less high stakes examples, let’s just say you change your mind about a given device. You no longer want the device to collect your data, or you want the device to erase all the data it currently has about you. 

Ari Waldman: There’s no platform, there’s no interface, to indicate your ongoing consent to interacting with this product. 

I. Glenn Cohen: Essentially, this is a system-wide problem that trades on a misunderstanding of how people in society operate. Because in the real world, we often change our minds as circumstances change. Technology does not always account for that. 

Ari Waldman: So in a system, in a consumer regulatory system, that focuses mostly on consumers as autonomous beings, which is a complete myth [laughs] but in a system that adopts that legal fiction, we’re not protected.  

I. Glenn Cohen: The solution Waldman suggests is to focus more on the design of the products. This would place the onus on the designer or the manufacturer.  

Ari Waldman: A consumer protection regime that is more focused on manipulative design may have more of an opportunity to ensure not just that these privacy policies are accurate, but that social robots aren't actually deceptive.

I. Glenn Cohen: What might that look like? It’s a tough balance to strike, but Waldman suggests it mostly comes down to tightening data policies significantly. 

Ari Waldman: It's a needle to thread, because we want the benefits. We want the benefits of these technologies that allow us to supplement care. But there should be no API, no way for another company to gain access to that information. We design it out of the system. The company can collect that information in order to enhance the ability of the robot to respond to the patient, but there should be no way for that information to get out or be used for any other purpose. Notably, the General Data Protection Regulation, which is the European Union's comprehensive data protection law, has a provision called 'purpose limitation.' It says you can only collect information for the specific purpose for which you ask for it.

I. Glenn Cohen: For a social robot, this would mean only using collected data for the robot's stated purpose, here, providing healthcare. Preventing the monetization of data collected through social robots, Waldman suggests, would go a long way toward ensuring that consumers are protected while reaping the benefits these devices may offer.
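[Editor's note: To make the "purpose limitation" idea concrete, here is a minimal, purely illustrative sketch of how a data store could be built so that information a companion robot collects can only be used for its stated care purpose, with no export path for any other use. The class and method names (PurposeLimitedStore, collect, read) are invented for this example and do not describe Mabu's, Paro's, or any real product's software.]

```python
from dataclasses import dataclass, field


class PurposeViolation(Exception):
    """Raised when data is collected or requested for a purpose other than the declared one."""


@dataclass
class PurposeLimitedStore:
    # The single purpose this store is allowed to serve, declared at creation time.
    allowed_purpose: str
    _records: list = field(default_factory=list)

    def collect(self, patient_id: str, observation: str, purpose: str) -> None:
        # Data may only enter the store for the declared purpose.
        if purpose != self.allowed_purpose:
            raise PurposeViolation(f"cannot collect data for '{purpose}'")
        self._records.append((patient_id, observation))

    def read(self, purpose: str) -> list:
        # Data may only leave the store for that same purpose; there is no
        # other export path (no advertising feed, no third-party API).
        if purpose != self.allowed_purpose:
            raise PurposeViolation(f"cannot release data for '{purpose}'")
        return list(self._records)


store = PurposeLimitedStore(allowed_purpose="heart-failure care")
store.collect("patient-42", "blood pressure 118/76", purpose="heart-failure care")
print(store.read(purpose="heart-failure care"))    # allowed: serves the stated care purpose
# store.read(purpose="targeted advertising")        # would raise PurposeViolation
```

[In this sketch, any attempt to collect or read data for a purpose other than the declared one fails outright, which is one way of "designing out" secondary uses rather than relying on privacy policies alone.]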

Ari Waldman: There is no world in which it seems right to me that a company that builds a social robot for elder care, or for caring for children with autism, should be able to take that data and sell it or rent it or share it with some other company to sell you advertisements. These products should have to be designed in such a way that prevents that sharing of information. And that may sound radical, because it cuts off a source of profit. It cuts off a business. But I think it's worth it.

I. Glenn Cohen: Should care and companionship become big business? Our episode today highlighted a few key tensions of social robots. First, there are concerns about humans having relationships with robots. You might remember this from the 2013 movie 'Her,' in which Joaquin Phoenix's character falls in love with an AI operating system. But when we start to pull back the layers here, we see that the real concern isn't about building relationships with tech, but rather building relationships with tech companies, or not realizing exactly how the technology is changing our relationships with other humans. These robots can offer patients a lot of benefit, but we should push for ethics by design: that they be designed and manufactured in a way that reduces the downside risk. There's also a concern about what these social robots are really doing, namely collecting lots of data from us, often under a cute and innocuous facade, and often from vulnerable individuals, such as the elderly or people with autism. When we see social robots as data miners, the stakes of this technology, and the need for legal and regulatory frameworks that are responsive to these concerns, become quite clear.

I. Glenn Cohen: If you liked what you heard today, check out our blog ‘Bill of Health’ and our upcoming events. You can find more information on both at our website, petrieflom.law.harvard.edu. And if you want to get in touch with us, you can email us at petrie-flom@law.harvard.edu. We’re also on Twitter and Facebook @petrieflom. 

Today’s show was written and produced by Chloe Reichel. Nicole Egidio is our audio engineer. Melissa Eigen provided research support. We also want to thank Ari Waldman and Cynthia Chauhan for talking with us for this episode. 

This podcast is created with support from the Gordon and Betty Moore Foundation and the Cammann Fund at Harvard University. 

I’m Glenn Cohen and this is Petrie Dishes. Thanks for listening. 
