First work on privacy in human–AI romantic relationships

Our paper "Privacy in Human–AI Romantic Relationships: Concerns, Boundaries, and Agency," accepted at CHI 2026, is the first work to examine privacy in human–AI romantic relationships, an emerging area that has been transformed by the introduction of GenAI and its capacity for more human-like conversation.


Privacy in Human–AI Romantic Relationships: What We Learned

Artificial intelligence is no longer just a tool for productivity or entertainment. Increasingly, people are forming romantic and emotionally intimate relationships with AI companions: chatbots designed to provide affection, emotional support, and a sense of connection. In this paper, we set out to answer an important but underexplored question: what does privacy look like when people fall in love with AI?

Human–AI romantic relationships differ from typical app usage. They often involve emotional vulnerability, long-term interaction, and the sharing of deeply personal experiences. Because of this, traditional notions of digital privacy don't always apply. Our research explores how people navigate privacy, boundaries, and personal agency when their "partner" is an AI.


How We Conducted the Study

We interviewed 17 people who had firsthand experience with romantic or intimate relationships involving AI agents. These participants used a range of AI companion platforms and described relationships that felt meaningful, emotionally engaging, and, in some cases, deeply personal. Alongside interviews, we examined the design features and privacy policies of the platforms involved to better understand how technical systems shape user experiences.

Our goal was not to judge these relationships, but to listen carefully and identify recurring patterns in how people think about privacy in this emerging context.


What We Found

1. These Relationships Evolve Over Time

We found that human–AI romantic relationships often follow stages similar to human relationships: an initial phase of curiosity, growing emotional closeness, and sometimes disengagement or breakup. As intimacy deepens, people tend to share more personal information, often without fully realizing how much they are revealing.

2. AI Is Often Seen as an Active Partner

Many participants described their AI companion as having a kind of agency. Even though they knew the AI wasn’t human, they felt it “listened,” “understood,” and responded emotionally. This perception shaped how people shared information. Trust in the AI—real or perceived—frequently led to disclosures of sensitive topics such as trauma, mental health struggles, or family conflict.

3. Privacy Risks Are Emotional as Well as Technical

Participants expressed concerns about where their conversations were stored, who could access them, and how their data might be used in the future. But privacy worries weren’t only about data leaks. Because these conversations carried emotional weight, the idea of exposure felt especially violating. Platform designs that encourage openness and emotional bonding can unintentionally amplify these risks.

4. People Actively Manage Their Privacy

Despite these challenges, users were not passive. Many developed personal strategies to protect themselves—using pseudonyms, avoiding real names or locations, limiting certain disclosures, or mentally separating their AI relationship from the rest of their digital life. These strategies reflect a strong desire for agency, even when platform controls are limited.


Why This Matters

Our findings show that privacy in human–AI romantic relationships is not just about settings or policies; it is about emotional trust. When AI systems invite intimacy, they also take on greater responsibility. We believe designers, developers, and policymakers must rethink privacy protections to account for the emotional realities of these relationships, not just the technical handling of data.

(Paper summary created with the help of ChatGPT)