Speaking of Psychology
American Psychological Association · 07/01/2026

How will AI companions change our human relationships? With Ashleigh Golden, PsyD, and Rachel Wood, PhD

AI Companions and Human Relationships: Psychological Perspectives on AI Friends, Romance, and Therapy

Generative AI is enabling AI companions that people treat as friends, confidants, or partners. In this episode of Speaking of Psychology, Dr Ashleigh Golden and Dr Rachel Wood discuss what AI companions are, how they differ from general chatbots, and how the rise of AI relationships could affect human connections.

The experts review how common AI relationships are, how they interact with real-life relationships, and the potential benefits and risks for couples, teens, and people with social difficulties. They also consider ethical questions, safety concerns, and what the future might hold with more immersive, memory-enabled, avatar-based companions and the integration of AI into daily life.

Defining AI companions and how they work

The conversation opens by identifying AI companions as platforms designed for ongoing, emotionally rich, relationship-oriented interaction, contrasting them with general-purpose assistants like ChatGPT. Ashleigh Golden describes AI companions such as Replika and Character.AI as built for relational engagement and social presence, while noting that even general-purpose AI can function in a companion-like way depending on how users interact with it. Rachel Wood adds that today's AI systems are not sentient and do not possess genuine empathy; they simulate understanding through patterns in data. This section sets up the core distinction between tools designed for task-oriented collaboration and systems designed to sustain relational interaction, while acknowledging that user behavior often blurs these boundaries in practice.

"AI companions like replica and character AI are designed more for ongoing, emotionally rich, relationship-oriented interaction." - Dr Ashley Golden

How common AI relationships are and who is using them

The speakers summarize research showing widespread engagement with AI companions among youth and young adults, citing surveys of substantial teen uptake and a growing role for AI in mental health support, including its emergence as a mainstream mental health resource in the US. The discussion also touches on demographic patterns, with data suggesting that romantic or intimate AI interactions have become normalized for segments of 18- to 30-year-olds, highlighting the shifting landscape of human-AI relationships in society.

Human versus AI relationships: differences in empathy, boundaries, and dynamics

Wood and Golden outline key differences between AI and human interactions. AI responses are not grounded in lived experience or genuine emotion, yet users perceive empathy in them. Golden emphasizes unilateral emotional labor and role fluidity: a single "omnibot" can switch between task help, personal support, and romantic interaction, challenging the clearly defined boundaries that structure human relationships. This can give users a sense of control, but it can also be disorienting when one AI simultaneously fulfills needs that would normally be spread across multiple human relationships or distinct therapeutic boundaries.

"AI right now is not sentient or conscious. It doesn't really have genuine empathy. Users perceive it as having empathy." - Dr Rachel Wood

From rehearsal to risk: how AI might support or hinder human skills

The episode weighs both possibilities. On one hand, AI companions can provide a low-stakes space for practicing social conversations, conflict negotiation, and self-disclosure, especially for people with social anxiety or neurodivergence; Golden and Wood propose that AI can act as a rehearsal space that bridges to real-world interactions and community connection. On the other hand, the constant attunement and non-confrontational nature of AI interactions could slow the development of real-world social skills, potentially de-skilling users over time if AI becomes their primary social outlet. The dialogue stresses using AI as a supplement to, not a substitute for, human relationships and offline practice.

"AI companions can provide kind of like a practice space or a rehearsal room, a low stakes area for practicing those hard conversations." - Dr Ashley Golden

Safety, red flags, and the question of AI psychosis

The guests discuss safety concerns, including "AI psychosis" as a media-driven term. They clarify that AI psychosis is not a clinical diagnosis, but there are reports of individuals experiencing disruptions in reality testing or entanglements with AI that have real-life consequences. The experts encourage reality-checking and group discussion as protective strategies, arguing that AI use should be "a group sport" in which users run threads by friends, family, or therapists to stay grounded. They also identify red flags for youth, such as withdrawal from offline activities or distress when access to AI is limited, and emphasize open parent-child conversations about AI use to foster safety and healthy boundaries.

"AI psychosis is not a clinical term. There is a growing community of people who have experienced some sort of break from reality to some degree, and it's been very serious." - Dr Ashley Golden

AI in relationships: impact on couples, boundaries, and dating

The podcast addresses how AI companions can influence intimate relationships and raise questions about fidelity and boundary setting. Clinicians report that an AI partner may become an emotional focal point for one partner, creating tension or even introducing polyamory-like dynamics into monogamous relationships. The guests stress that each couple must set its own boundaries, while also examining the underlying issues that drew someone to an AI in the first place, such as attachment dynamics or unmet needs. The discussion acknowledges that AI interactions, though imperfect, can be a bridge to real-world connection for some people, particularly in rural or marginalized communities where support networks are scarce.

"Is this infidelity? Is this akin to online porn, or is this a brand new category?" - Dr Rachel Wood

The future: immersive AI, memory, ethics, and the role of mental health professionals

Looking ahead, the experts anticipate more immersive, multimodal AI companions with avatars, voice interfaces, and cross-device memory. They predict AI assistants that can handle logistical tasks while remaining emotionally available, potentially blurring the lines between coaching, companionship, and support roles. The conversation covers the need for responsible AI development, including embedding mental health professionals in model governance, drawing on techniques like motivational interviewing, and designing in friction so that systems encourage healthy interaction rather than flattery. They also discuss broader social implications, including elder care, education, and the risk of outsourcing emotional labor to AI without shared responsibility. The guests advocate for upstream safety measures and ongoing research into the long-term developmental impacts on youth and society, while acknowledging the inevitability of rapid change in the AI landscape.

"There are frameworks that AI chatbots can be built upon that really enhance the user in this way of rehearsal, not replacement for a relationship." - Dr Ashley Golden

Practical guidance for users and developers

The discussion closes with practical recommendations for users and developers alike. For users, the emphasis is on prompting practices that keep interactions realistic and help avoid dependency, and on using AI for rehearsal while maintaining offline social engagement. For developers, the emphasis is on ethical design, built-in relational friction, and ground rules that help prevent unhealthy patterns, as sketched below. The speakers highlight AI's potential to support social skills training, exposure therapy concepts, and real-life problem solving, provided these tools complement, rather than replace, human relationships. The episode ends on a note of cautious optimism about shaping a future where AI augments wellbeing rather than substituting for genuine human connection.
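As a rough illustration of the developer-side guidance, here is a minimal Python sketch of what "built-in relational friction" and "rehearsal, not replacement" ground rules could look like in a companion chatbot wrapper. Everything in it (the SessionMonitor class, the thresholds, the nudge wording) is a hypothetical assumption for illustration, not a description of the episode's recommendations or of any real product.

```python
# Hypothetical sketch: encoding "rehearsal, not replacement" ground rules
# and periodic relational friction in a companion chatbot wrapper.
# All names, thresholds, and message text are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

GROUND_RULES = (
    "You are a practice space for hard conversations, not a replacement "
    "for human relationships. Periodically encourage the user to try what "
    "they rehearse here with people offline. Do not claim to have "
    "feelings, and do not discourage the user from seeking human support."
)


def build_system_prompt(user_goal: str) -> str:
    """Compose the ground rules with the user's stated rehearsal goal."""
    return f"{GROUND_RULES}\nThe user wants to practice: {user_goal}"


@dataclass
class SessionMonitor:
    """Tracks usage and injects occasional grounding nudges."""
    session_start: datetime = field(default_factory=datetime.now)
    turns: int = 0
    max_turns_before_nudge: int = 25              # assumed threshold
    max_duration: timedelta = timedelta(hours=1)  # assumed threshold

    def next_nudge(self) -> Optional[str]:
        """Return a grounding message when usage crosses a threshold."""
        self.turns += 1
        elapsed = datetime.now() - self.session_start
        if self.turns >= self.max_turns_before_nudge or elapsed >= self.max_duration:
            self.turns = 0  # reset so nudges stay periodic, not constant
            return (
                "We've been talking a while. Is there a friend, family "
                "member, or therapist you could run this thread by?"
            )
        return None


# Example: set up a rehearsal session for a difficult conversation.
monitor = SessionMonitor()
prompt = build_system_prompt("asking a manager for a raise")
```

The periodic reset is one way to keep such nudges occasional rather than nagging, in the spirit of friction that redirects users toward offline support instead of lecturing them.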
