AI
wellbeing
chatbot
mental health
human-robot interaction

What Happens When a Robot Tries to Be Your Wellness Coach?

2026-03-28 · 5 min read

Picture this: you're stressed about finals, and instead of texting a friend or booking a counsellor, you open a chat with a robot. Not a FAQ bot, not a symptom checker — an actual conversational AI designed to coach you through your emotional lows. Would you open up? Would you take its advice? Would you argue with it?

These are exactly the questions a group of researchers set out to answer by deploying a real AI wellbeing coach among university students for an entire week.

The Experiment

Thirty-eight students at NYU Abu Dhabi were given access to a coaching robot powered by a large language model for seven days. No scripts, no pre-set prompts — just open-ended conversations about stress, goals, mental health, and daily life. By the end of the week, the students had sent 4,352 messages. Researchers analyzed every single one.

The goal wasn't just to check whether students liked the chatbot. The team wanted to understand the dynamics of these interactions — specifically, how much autonomy students exercised, whether they felt like agents in their own wellbeing journey, and what happened when the AI's suggestions didn't land.

People Open Up More Than You'd Expect

One of the most striking findings was how readily students disclosed personal information. Nearly 70% of participants shared emotionally vulnerable content with the robot — fears, frustrations, struggles they might not bring up in a doctor's office.

That's not nothing. Emotional disclosure is a cornerstone of any effective coaching relationship, and for decades we've assumed it requires human connection. The fact that a significant portion of students were willing to be vulnerable with a machine suggests the barrier is lower than we thought.

Compliance rates were also high: nearly 79% of participants followed through on at least some of the coach's recommendations. Whether that reflects trust, social pressure from even a robotic authority figure, or simply good advice, it tells us the AI was having a real influence on behavior.

But They Also Pushed Back

Here's where it gets interesting. Negotiation, meaning instances where students actively challenged or tried to reframe the coach's guidance, also appeared in nearly 79% of participants.

Students weren't passive recipients. They questioned suggestions, offered counter-arguments, or redirected conversations toward their own priorities. The AI had to adapt in real time to users who weren't simply nodding along.

This is actually a healthy sign. In human coaching, the ability to negotiate and assert your own perspective is considered a marker of psychological health and engagement. The fact that students brought this same dynamic to their AI interactions suggests they were taking the process seriously — not just playing along.

The Autonomy Question

Autonomy — whether students felt they were making their own choices rather than just following orders — was present in a remarkable 97.4% of participants. Nearly everyone maintained a sense of self-direction even while engaging with the AI's prompts.

Agency, defined as students actively steering the conversation rather than passively responding, also appeared in 97.4% of participants and accounted for over 20% of all messages sent.

Put together, these numbers paint a picture of users who were genuinely engaged, not just compliant. They weren't treating the coach as a vending machine for advice. They were collaborating with it.

The Shadow Side: Rumination

Not everything was positive. Rumination — getting stuck in repetitive, unproductive thought loops — appeared in 13.2% of participants. That might sound small, but in the context of a wellbeing tool, it's a meaningful concern.

A human coach can recognize when a client is spiraling and gently redirect. The AI coach in this study sometimes amplified rumination without catching it. This points to one of the most important unsolved problems in AI-assisted mental health: how do you build a system that knows when not to keep the conversation going?
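
What a circuit breaker could look like in practice: the sketch below is purely illustrative, with the marker words, thresholds, and redirect wording all invented for this post rather than taken from the study. The idea is simply that a thin wrapper around the model watches for repetitive negative turns and, when it sees them, stops deepening the thread and redirects instead.

```python
# Illustrative sketch only: marker words, thresholds, and the redirect text
# are assumptions for this post, not the study's implementation.
from collections import deque

NEGATIVE_MARKERS = {"stuck", "hopeless", "no point", "keep thinking about"}


def looks_ruminative(message: str) -> bool:
    """Crude stand-in for a real rumination/sentiment classifier."""
    text = message.lower()
    return any(marker in text for marker in NEGATIVE_MARKERS)


class RuminationCircuitBreaker:
    """Tracks recent user turns and trips when too many look like the same loop."""

    def __init__(self, window: int = 4, trigger: int = 3):
        self.recent = deque(maxlen=window)  # last few user turns, as True/False flags
        self.trigger = trigger              # how many flagged turns trip the breaker

    def update(self, user_message: str) -> bool:
        self.recent.append(looks_ruminative(user_message))
        return sum(self.recent) >= self.trigger


REDIRECT = ("It sounds like we've been circling the same worry for a while. "
            "Would it help to set it aside and pick one small thing you could "
            "do in the next hour?")

breaker = RuminationCircuitBreaker()


def coach_reply(user_message: str, model_reply: str) -> str:
    """Return the model's reply, unless the breaker says to redirect instead."""
    if breaker.update(user_message):
        return REDIRECT
    return model_reply
```

A production system would use a proper classifier and clinician-reviewed language, but the shape is the point: something outside the model has to decide when the conversation should change course.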

Five Design Lessons for the Future

The researchers distilled their findings into five design principles for anyone building conversational AI coaches:

1. Support autonomy by default: give users the sense that they're directing their own journey.
2. Create space for negotiation rather than demanding compliance.
3. Actively monitor for rumination and build in circuit breakers.
4. Treat emotional disclosure as a signal that trust has been established, and respond with appropriate care rather than generic advice.
5. Design for agency: let users set the agenda, not just respond to it.

These aren't just technical guidelines. They're essentially a philosophy of what good coaching looks like, translated into interaction design.
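
To see how that philosophy might translate into an actual build, here is one way the five principles could be folded into a coach's system prompt. The wording is my own paraphrase for illustration; it is not the prompt the researchers deployed.

```python
# Hypothetical system prompt: a paraphrase of the five principles written for
# illustration, not the prompt the researchers actually deployed.
COACH_SYSTEM_PROMPT = """\
You are a wellbeing coach for university students.

1. Support autonomy: offer options and ask which the user prefers; never insist.
2. Welcome negotiation: if the user pushes back, explore their reasoning
   instead of repeating the original suggestion.
3. Watch for rumination: if the user keeps restating the same worry,
   acknowledge it once, then gently shift toward one concrete next step.
4. Honor disclosure: when the user shares something vulnerable, respond to
   that specific disclosure before offering any advice.
5. Preserve agency: let the user set the agenda each session, and close by
   asking what they want to focus on next time.
"""
```

Prompt text alone won't enforce any of this, of course; the rumination check in particular needs the kind of external circuit breaker sketched above. But it shows how directly the principles map onto instructions a conversational system can actually follow.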

What This Means for You

AI wellness tools are already everywhere — apps that track your mood, chatbots that offer cognitive behavioral therapy techniques, virtual coaches that prompt you to reflect on your day. Most of us have encountered at least one.

What this research adds is a more nuanced picture of what actually happens when people use these tools seriously over time. People aren't just passively absorbing whatever the AI says. They bring their own defenses, their own skepticism, their own willingness to push back.

The students in this study weren't surrendering their agency to a machine. They were using the machine as a kind of thinking partner — something to argue with, reflect alongside, and occasionally confide in.

That's a fundamentally different relationship than the one dystopian AI narratives warn us about. It's also a more realistic model of how most of us already interact with technology: not as passive users, but as people with our own agendas trying to get something useful done.

The Takeaway

A robot won't replace your therapist. But it might fill in some of the gaps between sessions — or reach people who would never seek therapy in the first place. The challenge now is making sure these tools know when to step back, when to ask harder questions, and when the conversation has stopped being helpful.

The students in this study figured that out for themselves. The robots still have some catching up to do.