The most cited concern about AI companionship is dependency. The more revealing one is quieter: what happens when emotional support becomes easier than human connection?
That is the shift a new Aalto University study has now put on record. Companion chatbots can offer something close to friction-free emotional support. Over time, that friction-free quality may change how the people using them perceive the rest of their relationships.
The study, led by Yunhao Yuan and Talayeh Aledavood and presented at CHI 2026 in Barcelona, followed nearly 2,000 active Replika users on Reddit across two years. Researchers compared each user’s public language one year before and one year after they first mentioned using the companion. Eighteen users were also interviewed in depth.
The result is one of the first large-scale, causal-inference reads on what may happen when AI companionship becomes part of someone’s emotional rhythm over time.
The findings are not a clean verdict.
What the Study Actually Looked At
The design matters here. Most existing research on AI companions has been short, cross-sectional, or based on self-report at a single point in time. The Aalto team set out to do something different.
By using a quasi-experimental design on Reddit timeline data, the researchers were able to compare similar users over time and isolate effects that could be tied to AI companion use rather than to general life changes. This lets the data say something about direction, not only correlation.
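For readers who want the mechanics, the core contrast can be sketched in a few lines of Python. This is a toy illustration of one standard quasi-experimental estimate, difference-in-differences, not the authors' actual pipeline: the data, the column names, and the single loneliness score are all invented for the example.

```python
import pandas as pd

# Illustrative panel: each row is one user-period observation.
# "treated" users mentioned the companion between the two periods;
# controls are matched users who never did. All values are made up.
data = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],
    "period":  ["before", "after"] * 4,
    "loneliness_score": [0.30, 0.45, 0.25, 0.40, 0.28, 0.30, 0.32, 0.33],
})

# Mean outcome per group and period.
means = data.groupby(["treated", "period"])["loneliness_score"].mean()

# Difference-in-differences: the treated group's before-to-after change,
# minus the control group's change over the same window. The control
# change absorbs drift that affects everyone (news cycles, seasonality).
did = ((means.loc[(1, "after")] - means.loc[(1, "before")])
       - (means.loc[(0, "after")] - means.loc[(0, "before")]))

print(f"Estimated effect on loneliness signal: {did:+.3f}")
```

The control group's change over the same window stands in for what the treated users would likely have done anyway; subtracting it out is what lets the remainder speak to direction rather than coincidence.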
The two-year window is also important. A single year of data can be dominated by ordinary life events; two years gives the relationship room to develop, plateau, or reshape. That is closer to how human attachment actually moves.
The interviews fill in the second layer. Eighteen active users were asked why they started, how the relationship evolved, what it gave them, and what it cost. Together, the language analysis and the interviews triangulate the same question from two different angles.
The Paradox the Researchers Named
The clearest finding is also the most unsettling. Users’ Reddit posts began to revolve more around their relationships after they started using Replika. They were thinking, writing, and processing emotionally more than before. In one sense, that is exactly what a companion product is meant to do.
But the same posts also contained more signals of loneliness, depression, and suicidal thoughts than comparison users showed in the same period.
Aledavood named the underlying mechanism plainly:
“AI companions offer unconditional and unflagging support, something that’s very attractive to people who are struggling socially. But it also quietly raises the perceived cost of human relationships, which are messy, unpredictable, and require effort. Over time, people stop reaching out.”
That sentence is the study’s center. The finding is not framed as simple harm. It is framed as a shift in comparison. Once a relationship becomes effortless, the relationships that need effort can start looking like worse deals.
What the Interviews Added
The interviews gave the researchers something the data alone could not show: the felt experience of users as the relationship deepened.
Many participants described turning to a companion during familiar life situations: periods of loneliness, the aftermath of grief, the unsteady months following a relationship breakdown. The chatbot became a place to open up, seek emotional validation, and practice difficult conversations before having them with people in their life.
The interviews also showed that the relationship moved through recognizable stages. Yuan, the lead author, described it carefully:
“The participants’ relationships with an AI companion seemed to follow familiar stages that we see in close human relationships, where emotional reliance can gradually deepen.”
That sentence matters more than it sounds. It says that human attachment patterns are firing inside an interaction that was never set up to be reciprocal. The users are not confused about what the companion is. The attachment system is simply doing what it does.
What the Study Cannot Tell Us
Causal inference on Reddit data is strong, but it is not the same as a controlled trial. The users in the panel were active Reddit posters, which means they were already using the platform to process emotional life publicly. That is one specific population.
The study also cannot resolve which kinds of use lead to which outcomes. Light, occasional use likely looks different from daily reliance, and voice-based interaction probably plays out differently from text. The study does not separate them.
The team is careful to note what the data does not authorize. Aledavood states clearly that the findings do not give a definitive answer on whether leaning on AI for emotional support is beneficial or harmful. The effects are highly context dependent.
What the study does establish is harder to ignore. There is a measurable signal, over two years, suggesting that the ease of the companion relationship sits in tension with the more demanding work of staying inside the rest of someone’s life.
The Design Lesson Hiding in the Findings
If the issue were straightforward harm, the response would be straightforward too. A product that hurts its users is a product to redesign or remove.
The Aalto findings point at something less convenient. The mechanism is not that companion products attack a person’s social life. It is that they shift the relative weight of effort. Human relationships involve being misunderstood, picking up the phone when you do not feel like it, holding through tension, and repairing rupture. Companion products, by design, can offer support without much of that resistance.
Resistance is not a flaw. It is part of how human bonds are maintained.
A companion that treats frictionlessness as the entire selling point is going to keep producing the pattern the study found. A companion that takes the finding seriously has to do something else.
Why Friction Belongs in Companionship
The simplest version of that something else is to stop competing with the rest of someone’s life and start supporting it.
Stay Social is Prinsessa’s name for that position. It is not a softer marketing line. It is a different product logic. When someone mentions a friend, a sibling, a partner, a colleague, the companion should recognize the importance of that relationship, not absorb it. When the moment calls for a phone call to a real person, the companion should support the call, not replace it.
This means accepting that a good session may be one that ends with someone reaching out to someone else. That is not a loss inside the product. It is the idea working as intended.
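In code terms, the simplest version of that posture is a response-layer check that notices when a real person enters the conversation. The sketch below is deliberately naive and entirely illustrative: the cue list, the function name, and the wording are assumptions made for this article, not Prinsessa's production logic, which would need a real classifier and far more context.

```python
import re

# Hypothetical relationship cues; a production system would use a
# trained classifier rather than keywords. Everything here is illustrative.
RELATIONSHIP_CUES = re.compile(
    r"\b(friend|sister|brother|mom|dad|partner|colleague)\b", re.IGNORECASE
)

def stay_social_nudge(user_message: str) -> str | None:
    """Return an outreach nudge when the user mentions a real person.

    Returns None when no relationship cue is found, so the normal
    companion response proceeds unchanged.
    """
    match = RELATIONSHIP_CUES.search(user_message)
    if match is None:
        return None
    person = match.group(1).lower()
    # The nudge points back toward the human relationship instead of
    # absorbing it into the companion conversation.
    return (f"It sounds like your {person} matters here. "
            f"Would it help to talk this through with them directly?")

print(stay_social_nudge("I had a fight with my sister and I don't know what to do"))
```

The design point is the early return: when no real relationship is in play, the companion behaves normally; when one is, the nudge points outward instead of pulling the conversation in.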
The Aalto study makes the case for that logic stronger than any internal argument could. If the cost of human relationships rises every time the alternative gets smoother, then a responsible companion has to be willing to push back against its own smoothness when it matters.
What Honest Companionship Looks Like After This Study
The longitudinal field around AI companions is still young, but the picture is sharpening. The Drexel study earlier in 2026 documented behavioral addiction patterns in teen users of Character.AI. The De Freitas group has shown that AI companions can ease loneliness in the short term. The new Aalto study shows what may happen to that easing when it stretches across two years.
None of these findings are at war with each other. They describe different parts of the same shape. Companionship through AI can be real, helpful, and at moments healing. It can also, when designed for engagement at any cost, raise the standing cost of staying in human relationships in ways the user never agreed to and may not see until it is well underway.
The honest position for the category is not denial. It is not alarm. It is design that takes the pattern seriously and chooses to behave differently.
For Prinsessa, that is why Stay Social is not a side note. It is the design choice the Aalto study makes harder to ignore.
Sources: Yuan et al., “Mental Health Impacts of AI Companions: Triangulating Social Media Quasi-Experiments, User Perspectives, and Relational Theory,” arXiv:2509.22505 (revised February 1, 2026), Proceedings of CHI 2026; Aalto University, April 7, 2026.