Generative AI: Support or Substitute?
Published on April 15, 2025
Using ChatGPT regularly can lead to emotional reliance and splintered real-world connection
A year ago, ChatGPT was nothing more to me than a work tool: a writing aid for fleshing out social media posts. Today, I still use it for that, but it’s also crept into my personal life.
As I’ve found more ways for it to be useful (and gotten better at prompting, learning how to be clearer and more specific in what I ask), it’s become something I use more often, and for more things.
So when my partner and I recently wanted a resource to help us reflect on different aspects of our relationship, it felt natural to turn to ChatGPT. Not for advice, exactly, but for help creating a structure: a questionnaire, a framework for deeper conversation.
And while we haven’t ventured into the territory of using GenAI for companionship, I think we did cross a meaningful line. We brought it into our personal lives in a new way, moving from professional utility into something more intimate.
It’s made me more aware of how easy it is to turn to generative AI not just for writing or research, but for perspective. Even comfort.
That shift is showing up in the data too.
According to Harvard Business Review’s analysis of real-world use cases [1], “therapy and companionship” is now the #1 application of GenAI, overtaking productivity, writing, and coding help. The top three use cases are all personal. People aren’t just using chatbots to get things done. They’re using them to feel better, find clarity, and feel less alone.
So what does that kind of use do to us?
A new randomized controlled trial from MIT and OpenAI [2] explored exactly this question. Nearly 1,000 participants were assigned to use ChatGPT for at least five minutes a day, over four weeks, in one of nine conditions formed by crossing two factors:
Modality: text, neutral voice, or emotionally expressive voice
Conversation type: personal prompts, non-personal prompts, or open-ended conversation
Some of the results were intuitive: people who spent more time chatting with the AI — regardless of modality or topic — tended to be lonelier, more emotionally dependent, and less socially engaged by the end of the study.
But other findings were less obvious.
Voice-based AI, especially with emotional expressiveness, appeared protective at first — linked to lower emotional dependence and lower problematic use compared to text. But at higher levels of use, those benefits vanished, and in some cases reversed.
Text interactions, surprisingly, were the most emotionally “sticky.” They featured more emotional content, more self-disclosure, and greater dependence. The researchers suggest this may be because text allows users to project more onto the AI — a kind of digital Rorschach.
Personal conversations (structured reflections like sharing values or expressing gratitude) were associated with a slight increase in loneliness, but lower emotional dependence and less problematic use. They encouraged self-reflection without fostering attachment.
Non-personal prompts (the kind of practical, task-based questions many of us ask every day) were more likely to lead to emotional dependence, especially when used frequently. In other words, emotional reliance sometimes grew from practical reliance.
That nuance matters. The study didn’t show that talking to AI about your feelings is always harmful, or that emotionally expressive AI is inherently risky. Instead, it pointed to a pattern: when we engage with AI frequently, and especially when we form habits around it (even for seemingly neutral tasks), we may be more likely to turn to it instead of people. The very features that make GenAI feel helpful and responsive can, over time, shift from support to substitution, changing how lonely we feel and how much we rely on generative AI for emotional regulation.
So where does that leave us, in terms of designing tech for good?
The study offers a roadmap for building GenAI tools that support people without undermining their social well-being. It suggests that how AI companions are designed, and what we use them for, matters just as much as how often we use them. Features that make chatbots more emotionally engaging, like voice, empathy, and warmth, aren’t inherently dangerous. In fact, they were associated with lower emotional dependence at low to moderate levels of use. But as usage increases, those same features can quietly shift from supportive to substitutive, undermining real-world connection and reinforcing reliance on the AI itself.
This has important design implications.
Structured, personal prompts — like those used in the “personal conversation” condition — may serve as a psychological scaffold, helping people reflect without over-attaching. It’s the difference between a journaling tool and a digital friend.
Voice-based interactions, while often assumed to increase anthropomorphism, may in fact be less emotionally sticky than text when designed with intention. The study found that text-based chats led to more emotional content, more self-disclosure, and greater dependence, possibly because their ambiguity allows users to project more onto the AI.
Most critically, and most straightforwardly, frequency of use itself is a risk factor. The longer people engaged with the AI, regardless of format or topic, the worse their psychosocial outcomes. Have you ever noticed how ChatGPT suggests a never-ending train of work for itself? “Would you like me to suggest a visual to pair with this post?” It doesn’t have to be designed to keep you in the chat.
In a world where therapy and companionship are now top reasons people turn to generative AI, these findings offer both a warning and a design opportunity.
We can build emotionally intelligent systems that support reflection without replacing relationships, systems that help people think without making them feel tethered. But doing so means recognizing when helpful turns into habitual, and designing for the right side of that edge.
References
1. Zao-Sanders, M. (2025). How people are really using GenAI in 2025. Harvard Business Review. https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025
2. Fang, C. M., Liu, A. R., Danry, V., Lee, E., Chan, S. W., Pataranutaporn, P., Maes, P., Phang, J., Lampe, M., & Agarwal, S. (2025). How AI and human behaviors shape psychosocial effects of chatbot use: A longitudinal randomized controlled study. arXiv preprint arXiv:2503.17473. https://arxiv.org/pdf/2503.17473