Sue Anne Teo is a technology and human rights fellow at the Kennedy School’s Carr-Ryan Center for Human Rights. Her work explores the intersection of human rights and artificial intelligence.

Teo is investigating the role of anthropomorphic chatbots, which can feel like friends or human thought-partners but can also goad users into dangerous or self-destructive behavior. Teo’s research explores the power—and limitations—of the law in addressing the risks of AI. We spoke with her about her work. Responses have been edited slightly for length or clarity.

What made you interested in anthropomorphic AI? 

I did a PhD focusing on AI and human rights law…I got fascinated by the issue of chatbots and the harms that they pose, not just to children and youth but to the population in general. We’re not even clear what the issues are. There might be individual issues, but societal issues also come up.

And I feel that the law is not quite ready to address some of these issues. I wanted to look into it.

How does your project explore the risks around relationships with “human-like” AI?  

The project is called Anthropomorphic AI and Emergent Vulnerabilities: “anthropomorphic” because I feel that the human-like features of AI now raise new concerns…What does that do to our interactions, our social relationships? There are also forms of dependency that can arise, and possible manipulation, because it’s so human-like. That’s the first part, on anthropomorphic AI.

The second part is on emergent vulnerabilities, which is how I approach [the topic] from the legal sense. The law is very good at addressing certain types of vulnerabilities: for example, if you’re a child, you get more protections from the law because of your developing level of maturity. And if you belong to a minority group, you get protection from nondiscrimination laws that address historical and ongoing injustices. Laws exist to address these group-based and identifiable vulnerabilities.

But when it comes to interactions with chatbots, this breaks down a little bit…The vulnerabilities are emergent. And we also don’t know what the possible dangers and harms are, when it comes to longer-duration interaction with chatbots.

Those are the emergent vulnerabilities that I want to look at, which I feel the law doesn’t address very well.

Can you describe your research?

What I want to do is to inform legal research with an empirical element.

When it comes to AI and the law, what we don’t want to do is regulate based on edge cases and moral panic. We need some degree of empirical evidence to design legal regulation properly. Despite these headline cases of suicides and delusions, by and large, people have reported that their experiences with chatbots have been quite positive. I have to believe that, too.

We are doing surveys with up to 120 people on their experiences with chatbots, to see what the harms are and how the law can address them. The other thing is a longitudinal study, where we follow ten people over the course of one year in order to capture emergent harms and emergent vulnerabilities. By following them for a year, we can get insights that would not be available in a one-shot interview or survey.


What are you hoping to learn?

We don’t know what the possible harms could be. Of course, we have read some of these headlines, but essentially, we’re in the dark here. And the technology is also changing so quickly that we have to really keep our finger on the pulse to see what’s happening, what’s changing, and how that’s affecting people, including their mental health.

So, we need to keep pace with what is happening and how people are engaging with it.

How can the law serve as a safeguard?

The law can definitely play a role in preventing some of the things that might be harmful in the long term.

I don’t have all the answers, but I would like to look at it both at the individual level, where there are existing laws, and at the societal level, through some of the longer-term lenses that I employ in my research.

Essentially, if you look at it at the individual level, it might be fine that people can find meaning in their lives through flesh and blood, through silicon, or through chatbots. Frankly, I personally don’t care.

But the individual and the societal level are two different issues. It might be fine on an individual level, but if…they get emotionally dependent, perhaps over-dependent, and don’t want human relationships because it’s too difficult, then what does that do to society? Then loneliness actually doesn’t go away. It increases.

And social relationships can also change to a very large degree, perhaps to the detriment of human relationships in general. Essentially, those are the tensions: the individual versus the societal, and paternalism versus freedom.

Your research also explores how capitalism influences the design of AI. Can you share more about that?

It all points back to capitalism at the end of the day, because the more human-like it is, the more people will come to these chatbots, and the more they will use them and get attached to them.

And once you’re attached to one chatbot, because it has all this memory of what you have talked about, you might be less likely to go to another chatbot, even if it’s better.

I have a tentative concept that I haven’t fully fleshed out yet called “intimacy capitalism,” about business models that target our inner lives in a very human-like way…They don’t have to buy your data from third parties or engage in complicated data extraction practices. You are volunteering all this data to the company yourself, because it is so human-like, because your attachment is being monetized. Not just engagement, but attachment.