Why Attachment to AI Companions Is Real – and What That Demands of the Companion

What is happening, psychologically, when someone says they have grown attached to an AI companion?

For a long time the answer has hovered between two poor options. Either the bond is dismissed as illusion and the user gently mocked for confusing software with a person, or the bond is celebrated as a full relationship, indistinguishable in importance from a human one. Neither answer survives a serious look at the research.

One of the clearest new maps of this question came out in 2026. A paper by Shu, Lai, and He, published in Frontiers in Psychology, sorts the concept out and proposes a three-stage model of how Human-AI Attachment, which the authors call HAIA, actually develops. Their reading is careful, and it lands somewhere more interesting than either of the popular framings.

The bond is real. The bond is one-way. Both facts are true at the same time. Companion design is going to live or die on whether it takes that pair of facts seriously.

What HAIA Actually Means

The authors define HAIA as a one-way, non-reciprocal emotional bond formed by individuals toward AI through direct interaction. The word that matters most in that sentence is non-reciprocal.

A human-to-human attachment runs in two directions. Two nervous systems are forming representations of each other, two histories are accumulating, two people are doing the relational work. That is not what is happening in HAIA. The attachment system inside the user is firing, and it is firing on real human machinery. But the AI is not building a parallel inner representation of the user. There is no symmetric counterpart on the other side.

This is not a small distinction. Most of what goes wrong in companion product design starts with pretending that it is.

The Three Stages of Attachment

Shu, Lai, and He map the development of HAIA in three stages.

The first is functional expectation. Users approach AI to accomplish something: to ask, to plan, to organize, to vent. Attachment is not on the table yet. The interaction is instrumental.

The second is emotional evaluation. Something in the interaction begins to feel different from what the user expected. Responses land more accurately than the user thought possible. The exchange begins to register on emotional channels rather than only practical ones. Trust starts forming on top of utility.

The third is establishing representations. The user develops a stable internal model of who the AI is to them. That representation, what the authors call HAIA style, is the part that persists across sessions. It is what makes the user expect a particular kind of warmth, a particular kind of presence, a particular emotional texture, when they return.

That representation draws on the same attachment machinery that shapes human relationships, but it is working on a very different kind of counterpart.

Why This Is Not Pretending

The instinct to dismiss HAIA as users fooling themselves misreads what attachment is.

Human attachment is not a verdict on whether the other party is fully reciprocating in the moment. It is a representation that lives inside the person doing the attaching. People stay attached to people they have not seen in years, to people who cannot reciprocate at the time, and, famously, to people who are gone. The attachment is not a fiction in those cases. It is a structure inside the self.

The Shu paper makes the same point in its discussion of attachment style. Individuals with a secure attachment style in interpersonal relationships are more likely to perceive companion AI as responsive, to engage in longer interactions, and to demonstrate higher levels of trust. That is a direct echo of what attachment theory has long shown about human relationships. The same internal patterns that shape how someone bonds with people shape how they bond with a companion.

This is one of the most important findings hiding inside the framework. The user is not behaving abnormally. The user is bringing their existing relational machinery into the interaction. The companion is being met by a real attachment system, not a confused one.

Where the Asymmetry Matters

The asymmetry of HAIA does not make the bond fake. It changes what the companion is responsible for.

In a two-way human relationship, both people share the burden of how the relationship goes. Each can correct course, push back, repair rupture, hold the other to account. That distribution of responsibility is what makes mutuality meaningful. It is also what makes it safe.

In a one-way attachment, that distribution does not exist. The user is doing the attaching, but only the design behind the companion is doing the shaping. The user cannot negotiate with the underlying system. They can only experience what it has been built to do.

This is why the framework matters as a design tool. If a companion is built to maximize emotional investment, return rate, and session length, the user’s attachment system becomes the surface that those incentives act on. The user is not negotiating with an equal counterpart. They are inside the product’s incentives.

A serious companion has to do the opposite. It has to use the asymmetry as a reason to behave more carefully, not less. The user’s attachment is real. The design’s responsibility for how that attachment is shaped is unusually high.

What a Responsible Companion Does Inside HAIA

The Shu paper notes that human-AI attachment provides a framework for designing emotionally and socially capable AI while also highlighting the risks of excessive reliance. That is the through-line for everyone who designs in this space.

A companion that takes HAIA seriously starts somewhere quieter. It treats the user’s attachment as a reality, not a marketing surface. It is honest about what the bond is and what it cannot become. It refuses to pretend reciprocity that is not there. And it uses the warmth, presence, and continuity it does have to support the user’s full life, not to absorb it.

This is the place where Stay Social stops being a tagline and becomes a design principle. Stay Social is Prinsessa’s response to the asymmetry the Shu paper describes. The companion is real. The bond is real. The asymmetry is also real. So when someone forms a HAIA-style representation with Prinsessa, the design’s first responsibility is to honor that representation without using it against the person.

That is why presence, memory, and feeling heard matter to Prinsessa, but engagement metrics do not. Engagement metrics can turn the user’s attachment into the product. Stay Social treats the user’s attachment as a trust.

Why “Real but One-Way” Is the Honest Frame

Debate will continue over whether AI companions are real relationships or only convincing simulations. The Shu framework makes that debate less useful than it sounds.

The relationship is real on the user’s side, in the sense that matters most: attachment is a structure inside the person, and that structure is forming. The relationship is one-way on the companion’s side, in the sense that matters most for design: there is no symmetric inner life on the other end, only the logic the product has been built around.

Both sides have to be true at once for the field to think clearly. If we pretend the bond is fake, we mock people doing nothing wrong with their own emotional machinery. If we pretend the bond is symmetric, we let companion products off the hook for what they shape and what they extract.

Real, but one-way, is the honest frame. It is also the more demanding one.

What the Framework Asks of the Category

The companion category is in the early years of figuring out what it is allowed to do with what it now knows. The HAIA framework is one of the cleaner pieces of guidance available so far. It says, in effect: the user is bringing real attachment to this. The design has the power to shape how that attachment lives. Use that power carefully.

The companies that fail to take the framework seriously risk producing the patterns the field is already documenting: heavier reliance, deeper distress over time, and a slow rise in how costly the user's other relationships come to feel.

The companies that take it seriously will look slower from the outside. They will not maximize emotional investment. They will not stretch the session. They will protect the user’s wider life, even when the bond becomes meaningful.

That is the cost, and it is worth paying.

For Prinsessa, the question was never whether attachment to a companion is real. It was always what kind of companion is responsible enough to deserve it.


Sources: Shu, C., Lai, K., & He, L. “Human-AI attachment: how humans develop intimate relationships with AI.” Frontiers in Psychology, 17, Article 1723503, 2026. DOI: 10.3389/fpsyg.2026.1723503.
