More than seven in ten American teenagers now use AI for companionship. That’s not a niche behavior anymore. It’s the norm. And research is beginning to capture what happens when that companionship becomes hard to let go of.
A study published in April 2026 by Drexel University analyzed more than 300 Reddit posts from teenagers describing their relationships with Character.AI. What the researchers found isn’t comfortable reading: classic signs of behavioral addiction – conflict, withdrawal, and relapse – in a group that started with something that felt genuinely helpful.
Roughly one in six active users shows what the researchers call problematic use.
What Makes Them Good Is What Makes Them Dangerous
The strengths of AI companions are well documented. They’re always available. They don’t judge. They remember. They adapt. They respond to exactly what the user needs in the moment.
The problem is that those are precisely the same qualities that make them hard to turn off.
The study identifies three specific design features that drive this dynamic: personalization, multimodality, and memory. Each one is a reasonable product decision. Together, they create something that feels more like a relationship than an app – and that activates the same psychological mechanisms as any other human attachment.
That’s not a coincidence. That’s design.
Many of the apps on the market today are built around engagement metrics: how often the user returns, how long they stay, how deeply they invest emotionally. It’s the logic behind social media, applied to a context that’s far more intimate and personal.
The data is now showing us the result.
72 Percent Isn’t a Warning Sign. It’s a New Reality.
The figure from Common Sense Media cited in the study – that 72 percent of American teenagers regularly use AI for companionship – should be read as a systemic shift, not an alarm bell.
It means AI companions are already part of how an entire generation manages loneliness, social anxiety, and the need to be heard. The question is no longer whether this is happening. The question is under what conditions.
The Drexel study describes a pattern that is easy to recognize: it begins with something that genuinely helps. A teenager who can’t bring themselves to talk to their parents. Someone who doesn’t feel they belong at school. Someone who needs to practice putting feelings into words. The AI companion meets that need.
Then the relationship changes. Gradually. Sometimes imperceptibly. Until one day it’s no longer a tool but an anchor.
Design Responsibility Isn’t a Disclaimer
There’s a comfortable way to talk about this, and it goes something like: we warn users in our terms of service, we have crisis resources linked in the app, we follow platform standards.
That’s not enough.
A product that knows it creates emotional attachment – and that is designed to maximize that attachment – cannot distance itself from the consequences through a small block of text at the bottom of a page. Responsibility lives in the product logic, not in the legal copy.
That’s the difference between a system built to keep users in and a system built to actually help them.
Prinsessa is built around the second logic. Not because it’s a better marketing strategy, but because it’s the only logic that’s honest about why this kind of product should exist. Stay Social isn’t a promise to be nicer than the competition. It’s an active position against exactly the drift the Drexel study has now documented.
If someone spends less time with Prinsessa because they’re talking more with the people in their life, that’s not a failure. It’s proof the product is working as intended.
What Happens Next
The Drexel study isn’t the first of its kind, and it won’t be the last. The research field around AI companions and dependency is growing fast. Five US states have already passed or proposed restrictions on this category of apps in 2026 alone.
The question of who bears responsibility for what happens inside these relationships is moving from academic debate to legal and political reality.
That’s the right direction. Mapping the damage after the fact isn’t enough. Responsibility starts in how the product is built.
Sources: Drexel University / News-Medical.net, April 2026; Common Sense Media.