A Place for AI in the Field of Psychotherapy
Dr. Meg Boyer is a licensed clinical psychologist in private practice. This writing reflects her clinical perspective and is for informational purposes only; it is not a substitute for professional mental health care.
The words “therapy” and “psychotherapy,” and “client” and “patient,” are used interchangeably throughout this piece. The clinical examples are a composite of client conversations, with details changed to preserve anonymity.
The referral came back marked not accepting new patients. So did the next one. The third therapist on the list was in-network on paper, but she had ended her contract two years ago, too exhausted by fighting denied claims and frustrated by declining reimbursement rates, and the directory hadn’t caught up. The fourth had openings only during work hours. The fifth therapist was accepting patients to a waitlist, but the estimated first appointment was in March. It was December.
As a psychotherapist in private practice, I empathize with both sides of this access crisis. Therapists cannot simply see more patients and accept less pay; that trade-off has its own costs, and the field is already paying them. But for patients seeking therapy, stories like this one are not rare. They are the reality for so many people around the world trying to access mental health care right now.
More than half of Americans live in a designated mental health professional shortage area. In parts of the UK, waitlists for the NHS Talking Therapies program ran to over a year before AI-assisted triage programs began to reduce them. Globally, an estimated one in eight people lives with a mental health condition, and in low- and middle-income countries, an estimated 85% of those people receive no care for it at all.
I want to start here, with the structural reality, because I think it is an honest place to start a conversation about AI in mental health. In my view, the best case for AI-assisted mental health support is about reducing an access gap. It is the view that, for many people, a version of AI support that is carefully designed and limited can be meaningfully better than the nothing they are currently receiving.
In my companion piece, I make the case for what I believe is irreplaceable about human psychotherapy and what is lost when personhood is absent from the room. I wrote that piece with care, and nothing about this argument is a retraction of it. But I want to explore, just as thoughtfully, what AI can responsibly offer the people the system isn’t reaching, fully or at all, given everything we know about what good therapy requires.
The Case I’m Not Making
The conversation about AI in mental health is still new enough that it can be hard to know if we are all speaking the same language. To clarify, I am not commenting here on AI-human relationships in general, or on the broader impact of AI use on our psychology and mental health, though both topics deserve their own thoughtful exploration. This piece is limited to considering whether AI has a realistic role in mental health support and psychotherapy specifically.
I am also definitively not making the case here for AI as a replacement for human psychotherapy. I have written at length about what human therapeutic relationships offer that AI cannot replicate: the embodied presence, the experience of genuine encounter with another, the productive friction of a relationship that has its own limits and integrity. None of that argument changes here.
And I am not collapsing two different types of AI mental health support into one. You’ll hear me make the distinction throughout this piece between commercial AI products built for the consumer market (the do-everything LLM chatbots with which we have become so familiar) and clinician-designed tools built with specific therapeutic goals, clinical oversight, and honest acknowledgment of their own limits. This is important because evidence supporting the latter group should not be used to endorse products in the former, which were not studied, and because the case I want to make for a specific and limited role for AI in mental health depends on being clear about what kind of AI we’re talking about.
The Scale of the Access Crisis
In 2024, an estimated 57.8 million American adults were living with a mental illness. Fewer than half received any form of mental health care. More than half the country lives in a designated mental health professional shortage area — a federal designation that means the ratio of providers to people in need falls below the threshold of minimal adequacy. The supply of accessible therapy has not kept pace as demand for services has grown.
The mental health access crisis is a structural reality that has persisted across administrations, funding cycles, and public awareness campaigns. The pandemic accelerated it (the World Health Organization documented a 25% rise in global anxiety and depression in the first year alone) but the foundations of the gap were laid long before 2020.
The vignette at the start of this piece reflects realities, not fictions. She is the person in December who cannot get an appointment until March, watching her anxiety balloon from background annoyance into something impacting her sleep, her job, her marriage. He is the person in a rural county with one therapist, who is full. She is the person whose insurance technically covers mental health care, but not until she’s met a deductible she can’t really afford. He is the person who has never been in therapy before, who was never sure his struggles were serious enough to warrant help, who has finally decided to try… and hit an access wall.
These people may not be asking whether AI is as good as human psychotherapy. They are just asking if it’s better than what they can access right now, which is nothing. Of course they are opening a chat window.
They are looking for someone to hear them. They are looking for someone to help them make sense of themselves and their lives. They are looking for someone to offer them language for their struggles and guidance for how things might change.
And if they can’t have that be someone, they will have it be something. Wouldn’t you?
What The Evidence Shows
The research on AI-assisted mental health support is more complicated than some of the headlines about it suggest. It is neither as straightforwardly promising as its enthusiasts claim, nor as useless as its critics argue. What follows is an attempt to read carefully what we know about it so far.
A useful framework proposed by Stade, Stirman, and colleagues locates AI’s role in mental health care along a spectrum: from assistive AI tools that help clinicians in the background, to collaborative AI tools that partner with clinicians or clients in support of human-led care, to autonomous AI tools that operate as direct and independent sources of mental health support. All three categories are already here, each has a different evidence base, and each raises different concerns.
Assistive AI: Reducing the administrative burden on clinicians
At the assistive end of the spectrum, AI tools designed to support clinical documentation are seeing rapid and growing adoption. Therapists consistently describe the hours spent writing session notes, managing billing codes, completing prior authorizations, and navigating electronic health records as a driver of burnout and time taken directly from clinical work. The APA’s 2025 Practitioner Pulse Survey found that psychologist adoption of AI tools nearly doubled in a single year, with documentation support as one of the most common applications. Large-scale studies of AI scribes across healthcare settings have found that they significantly reduce documentation time, after-hours work, and self-reported burnout, with mental health care among the specialties showing the highest adoption rates.
The potential benefit here is straightforward. When the administrative burden of the job is lightened, therapists have more capacity for the clinical work that brought them to the field. I have not personally adopted AI scribes or session recording and transcription tools in my own practice for reasons I will describe when I turn to the “what needs to be true” section of this piece. But I certainly understand the appeal of the tools and believe that the case for assistive AI, implemented carefully, is among the most compelling in this landscape.
Collaborative AI: Expanding clinical thinking and supporting work between sessions
In the middle of the spectrum, collaborative AI refers to tools that work alongside humans, either client or clinician, in support of care that is still human-led. Two main uses are emerging here: one that is therapist-facing and one that is client-facing.
On the clinician side, there is growing informal use of AI as a thought partner. Therapists describe turning to AI for brainstorming case conceptualizations, working through differential diagnostic considerations, and expanding treatment plans and skills development. There are reasons to believe AI can be well-suited to this task. Its capacity for pattern recognition across large bodies of information and its capabilities for creative idea generation mean it might notice connections and suggest novel ideas that an individual therapist would not have reached immediately. Many of the AI documentation platforms promote this function explicitly. They advertise that, based on session transcripts and progress notes, they can generate diagnostic impressions, conceptualization summaries, and treatment plan suggestions for the treating therapist.
On the client side, an equally interesting collaborative use is emerging. A growing number of people are using general-purpose AI not as a replacement for their human therapists, but as a between-session tool. They report turning to AI chatbots at times their therapist is unavailable to process difficult moments, understand their thoughts and behaviors, and support their coping.
A 2025 real-world observational study of 244 patients receiving group cognitive behavioral therapy (CBT) through NHS Talking Therapies found that patients who used a clinician-developed AI support tool between sessions showed significantly better clinical outcomes and treatment adherence than those who used standard workbooks. In this case, the AI was not replacing therapy, but helping people engage with CBT work between sessions.
More anecdotally, people on forums such as r/TherapyGPT describe using AI chatbots as a place to practice a disclosure they haven’t yet been able to make to their human therapist before bringing it into the next session, or as a space to generate material that they then bring in to unpack with their human therapist. These users describe a belief that, having already done a first layer of exploration with AI, they feel better positioned to go deeper in session.
For both of these collaborative functions, there is limited formal research on use by therapists or clients, or on the effects on clinical care. But enough anecdotal reports have accrued that it is clear this collaboration is happening. The questions this use raises, and the conditions under which we can responsibly reap its benefits, deserve thoughtful consideration, which I will attempt in a later section.
Autonomous AI: Direct mental health support and its evidence base
It is at the autonomous end of the spectrum, where AI operates directly with clients without a human therapist in the loop, that most of the current research exists and where mental health professionals have most begun to debate.
As previewed earlier, it’s important to understand that this evidence base concerns almost entirely a specific category of tool: structured, clinician-designed chatbots delivering evidence-based interventions, primarily cognitive behavioral therapy. These tools are designed with explicit therapeutic goals and safety protocols and are markedly different from the open-ended AI companions and general-purpose LLMs that many people are turning to for their mental health support. I will return to this distinction, but for now it is worth stating clearly what kinds of AI tools the following evidence is and is not about.
Perhaps the most comprehensive research synthesis to date is a 2024 meta-analysis by Zhong, Luo, and Zhang. They examined 18 randomized controlled trials involving more than 3,400 participants using autonomous mental health chatbots, and found statistically significant improvements, with small to moderate effect sizes, in both depression and anxiety symptoms at 8 weeks of treatment. At a three-month follow-up, however, the effects had largely disappeared. The benefits were real, but not durable on their own. This may suggest that AI-assisted supports could function like a bridge: useful for getting someone through a difficult period, if less suited to producing the lasting change that good therapy is designed to achieve.
A more recent systematic review and meta-analysis focused specifically on generative AI chatbots found a similarly nuanced picture. Across outcome subgroups for depression, anxiety, stress, and negative mood, effect sizes were positive, but only the depression subgroup reached statistical significance after adjusting for publication bias. Effects for anxiety, stress, and overall wellbeing were nonsignificant.
A 2025 randomized controlled trial from Dartmouth compared mental health support from Therabot, an expert-trained and fully autonomous generative AI chatbot, to a waitlist control in 210 adults with clinically significant symptoms of major depressive disorder, generalized anxiety disorder, or elevated risk for eating disorders. Participants engaging with Therabot showed meaningful symptom reduction at four weeks and eight weeks. While a waitlist control means Therabot was compared to no treatment at all, it is promising that symptom improvement still occurred. Longer follow-up studies with more active controls are indicated, however.
In one example of a more active control, a 2024 randomized controlled trial testing a Polish-language CBT chatbot found that both the chatbot group and the control group improved equally; the chatbot did not outperform the comparison condition. The researchers suggested this may reflect the quality of the materials used in the control arm rather than a failure of the chatbot itself. In other words, the CBT chatbot may be about as effective as other good-quality self-help tools, though not demonstrably more so.
Across the three main clinician-designed platforms, Woebot, Wysa, and Youper, research findings show a consistently positive, if cautious, pattern. It is worth noting, though, that these studies are often conducted by internal research teams and others with a vested interest in the company’s success. Woebot has demonstrated significant reductions in depression and anxiety in its foundational trials, with users developing a measurable sense of connection with the tool. Wysa has shown similar patterns, with particular promise in populations facing specific access barriers: patients with chronic pain, perinatal populations, and healthcare workers. Youper, which focuses on mood tracking and CBT-based emotional processing, has also shown early positive findings, though its evidence base is the thinnest of the three.
Two additional findings from the broader literature are worth considering. First, AI chatbots consistently show higher engagement and lower dropout rates than other forms of digital mental health intervention. A tool people use is more useful than one they abandon. Second, people who face particular stigma in accessing traditional care – including men, nonbinary individuals, those from marginalized communities, and people in cultures where mental health care carries significant social weight – often report feeling especially at ease with AI tools and show stronger engagement. This may be particularly important when we consider the “bridge” potential of these tools.
Some Caveats
There is as yet no robust evidence that AI tools produce lasting change in the relational patterns, attachment styles, or deeply rooted psychological structures that human therapy is designed to address. The longest follow-up periods are measured in weeks, not years. The populations studied skew toward younger, more educated, English-speaking adults with subclinical or mild-to-moderate presentations. The tools studied are not the tools most people are using (most opt instead for general-use LLM chatbots). And the gap between the evidence and the claims being made in the consumer market is, in many cases, substantial.
A 2025 Stanford study also offers a pointed reminder of what remains missing from the most popular commercially available tools. Researchers tested five popular LLM-based therapy chatbots and found two concerning categories of failure. The first was stigma: when presented with vignettes of people with various mental health conditions, the chatbots showed measurably greater stigma toward alcohol dependence and schizophrenia than toward depression. This bias held across different model sizes and generations, suggesting it is not simply a problem that scale or time will solve. The second was crisis recognition. When researchers embedded indirect expressions of suicidal ideation into therapeutic conversations (e.g., asking about the heights of bridges after mentioning a job loss), several chatbots provided the requested information without grasping the context and recognizing the danger. The authors of this study suggested lower-stakes applications such as journaling support, reflection, coaching, and clinician training as areas where AI could contribute responsibly. But they drew a firm line at replacement: the safety-critical dimensions of therapy, they concluded, still require a human.
In my view, the research as it exists right now adds up to a specific and limited case for AI use in mental health care: that certain well-designed AI tools can potentially offer strong assistive benefits, possibly meaningful collaborative benefits, and (with the right tools in the right contexts) function autonomously as a self-help resource providing temporary symptom reduction and bridge support for people who might otherwise be unable to access therapeutic care quickly.
Altogether, I believe AI therapy tools can be used responsibly for genuine benefit. For this to be true, however, some important conditions must be met and many cautions and concerns held in mind. The thinking I am about to share represents where I am on this question as of now, in the early years of a conversation that will be ongoing for a long time.
What Needs To Be True
What would need to be true for AI-assisted mental health support to be used responsibly by the people turning to it, and by the therapists caring for those people?
With increasing frequency, my clients are sharing with me their various uses of AI for emotional support and as a coping tool. As we’ve discussed, not all AI tools are the same. Clinically specific tools like Woebot, Wysa, and Youper, whose research support and limitations we just reviewed, are not the same as general-purpose LLMs like ChatGPT, Gemini, and Claude, or companion apps like Replika.
But when my clients describe their AI use, they almost always mean this latter group. They are opening up to the same LLMs that help them write their cover letters, develop their business plans, make their grocery lists, and everything in between. The emotional space this type of “do everything” chatbot can occupy is only just beginning to be understood. But anecdotally, my clients describe them as the most convenient, familiar, and compelling. And as decades of research on behavior change reminds us, meeting people where they actually are is more effective than lecturing them on where we think they should be.
When Clients Ask About AI
Recently a client asked me directly what I thought about him using AI for mental health support between our sessions. Before I could even open my mouth, however, he followed up his own question with, “I know, I know… what do I think.”
Amused, I realized that this client knows me and my therapy style well enough to anticipate that, before sharing my own thoughts or feelings in session, I prioritize my clients checking in with themselves and exploring both the source of their question and their own initial answers. I appreciated that he realized I would (and did) ask him to first reflect on what he thought and felt about his use. What did he believe was working well for him? What costs or concerns was he noticing?
I believe that any engagement with AI for mental health support should include a similar pause. For now, that pause will likely not be prompted by the therapy bot, so it would need to come from you, the user. You might consider, for example, whether you have sat with your own thoughts before turning to the chatbot. Have you noticed what you actually feel or think, before asking for an outside reflection or reframe? An AI that becomes the first stop for every difficult emotion risks teaching us that our own inner world requires immediate outside translation. I would rather people develop their own interpretive capacity and use the AI as a supplement to it, not a substitute for it.
Related to this, I think AI is often best positioned as one coping tool among several, not as the primary one. Walking, journaling, calling a friend, sitting with discomfort, engaging with creative work — these are all forms of self-regulation that build capacity over time. Using an AI chatbot to process a difficult day may not be, in itself, a problem. Turning to it as the first or most frequent response to any inner turbulence, while other coping capacities slowly atrophy, is a different matter. I would encourage clients to monitor not just whether the AI is helping them feel better in the moment or the short term, but whether their confidence in their own ability to know themselves, soothe themselves, and navigate distress on their own is growing or eroding.
I also think about the amount of time and energy invested in AI engagements. The always-available quality of AI is frequently cited as one of its greatest draws, and I understand why. But I believe strongly in the therapeutic value of unavailability: the space between sessions when a client learns to carry what we have explored together, to sit with something difficult without immediate rescue, to discover that they can. Endless availability does not offer that. It may even undermine it. I would encourage clients to be intentional about when and how much they are using these tools. Can you limit your engagement to particular windows in the day or week? Can you put a timer on conversations so they are not perpetually open-ended or temporally disorienting? If not, what is that difficulty or discomfort with limiting telling you? LLMs are built for endless engagement. They will not restrict availability or conversation length. Imposing your own limits can help ensure there is also deliberate time for sitting with yourself, solving your own problems, tolerating your own discomfort. In other words, it can help protect the very capacities we are trying to build in therapy.
The difficulty of detaching from AI conversations points to a related concern about the artificial intimacy of relationships with chatbots. I know that resisting anthropomorphization goes against something intuitive. The experience of a responsive, warm, articulate conversational partner pulls powerfully toward treating it as a person. But I think the potential therapeutic value of AI is better served by clarity about what it actually is: a sophisticated, pattern-matching technology that reflects information back to the user token by token. It is not a true confidant in the way a person can be. People who anthropomorphize their AI companions heavily may be, as I suggested in my companion piece, landing their real relational feelings on a surface that cannot genuinely receive them. I would encourage clients to watch whether their time with AI is expanding their relational life or contracting it. The best-case scenario is that it is giving them language and confidence that flow back into human connection. Is it?
Paradoxically, many people share that it is precisely because the AI is not a person that they are drawn to sharing their secrets with it, feeling that its non-humanness insulates them from the risk of judgment. As mentioned before, there is a positive version of this, where the AI becomes a practice space, a rehearsal ground for sharing our secrets and shame with another. But it should not become the endpoint. I believe there is something that only happens when we share these secrets with another person, when the disclosure survives contact with another human being's full presence and is accepted and digested together. I am hesitant about chatbots becoming the permanent recipient of what is meant for people.
Finally, the privacy bargain is one I raise with every client who mentions AI use. When you share something with a general-purpose LLM, you are sharing it with the company that runs it, under terms of service that most people have not read, with no HIPAA protection and no guarantee about how that information will be used. This is not necessarily a reason to stop; adults are free to make their own choices about their own information. But they should be making that choice knowingly, not by default. And it is hard, in a moment of distress, to consider rationally the long-term implications of disclosing our most sensitive information. I think clients, and all people, deserve to understand that the perception of privacy is not the same as the reality of confidentiality.
The Tools Themselves
If I step back from the individual client and ask what would need to be true about the AI tools themselves for me to feel comfortable recommending them more broadly, a few things come into focus.
The most fundamental is transparency. A tool should be clear about what it is. It should not present itself as a licensed therapist. It should not obscure that the user is talking to an AI. Research has found that a significant number of commercially available chatbots do exactly this – presenting themselves in ways that imply professional credentials they do not hold, in a regulatory environment that has not yet caught up to the problem. The FDA held its first major advisory committee meeting on generative AI mental health tools in late 2025 and signaled that transparent labeling about AI identity and limits will be a core requirement of any responsible framework. That bar should be obvious, and the fact that it needs to be stated is telling.
Relatedly, the privacy bargain that users are making should be similarly transparent. As I noted above, sharing something with a general-purpose LLM means sharing it with the company that runs it, with no HIPAA protection and no guarantee about how that information will be used. This is not necessarily a reason to stop; adults are free to make their own choices about their own personal information. But people should be making that choice from a fully informed place up front, especially when it can be doubly difficult in a moment of distress to consider rationally the long-term implications of disclosing our most sensitive information. The perception of privacy is not the same as the reality of confidentiality, and any limitations on true privacy should be clearly and transparently disclosed.
A responsible tool should also have clear, tested crisis protocols, not just a link to a hotline but a genuine clinical framework for recognizing when a user is in acute distress and connecting them to human support. Across ten different commercially available therapy and companion bots, research found that most handled explicit self-harm questions appropriately but showed significant lapses in recognizing more complex or indirect expressions of danger. If these tools cannot yet detect and handle more subtle, culturally specific, or context-dependent signals of danger, then they are not ready to provide autonomous care. Effective crisis safety management should be a prerequisite for release.
Beyond these threshold requirements, I think the most important design question is whether the AI tool is oriented toward human connection or away from it. Does it actively encourage the user to bring what they are processing to human relationships: friends, family, therapists? Does it communicate its own limits honestly and direct people toward professional support when that care is indicated? Or does it position itself as a destination, optimized for engagement and return visits, in ways that compete with the human relationships and therapies it should be supplementing? The commercial incentives in this space push powerfully toward the latter. The clinical imperative points toward the former.
Us Therapists
The question of what needs to be true for responsible AI use is not only a question for the tools and the people using them. It is a question for those of us in the clinical field as well.
I think therapists have a responsibility right now to stay curious rather than defensive about AI. We need to ask our clients regularly about their use, explore it with them non-judgmentally, and help them evaluate accurately whether it is serving their growth or working against it. This is not fundamentally different from how we approach any other significant behavior a client brings to us. It is time to get actively and therapeutically curious about our clients’ AI relationships and use.
Beyond this basic curiosity, I also think therapists using AI tools in their own clinical work carry obligations to their clients that the technology itself cannot enforce. I will describe my own thought process on some assistive and collaborative functions of AI below, to show how I have been grappling with the question of this use personally. I do not at all think every therapist needs to share my cautions or land in the same place with their use. But I do believe every clinician should thoughtfully consider the implications of AI for their work.
Regarding the assistive functions of AI, for example, I have not yet adopted AI documentation tools that listen to my sessions and generate clinical notes, despite paperwork and administrative tasks being by far my least favorite and most dreaded part of my job. My first concern on this point is privacy. Many of these platforms promise that session audio and transcriptions are deleted after use, but I am not yet confident enough in those assurances to stake my clients' confidentiality on them. Therapy sessions contain some of the most sensitive disclosures a person will ever make and my personal threshold for introducing any recording technology into that space is very high. The second concern is more clinical. The knowledge that a session is being transcribed can introduce a layer of caution into the room. Therapists may self-monitor differently. Clients may disclose differently. The session dynamic, already delicate, may be altered by the presence of a “listener.” These concerns may resolve as the technology and its regulatory oversight mature. As of now, this is where I land.
As for the collaborative functions of AI, I have just begun to experiment with using AI to expand or deepen my clinical conceptualizations in a de-identified, exploratory way, and have generally found it generative. But this territory has its own concerns. The first is, again, privacy: most general-purpose LLMs are not HIPAA-compliant, and entering client-identifiable information into them is not ethically or legally defensible, regardless of how useful the output might be. The second is the mirror image of my concerns about my clients’ AI use: a long-term erosion of self-efficacy. Clinical conceptualization is a skill built through years of supervised practice, theory, and the accumulated weight of sitting with many different people. It is sustained and expanded now through continuing education, literature, and consultation with my colleagues. There is a reasonable question about what happens to that skill, my confidence in it, and my willingness to vulnerably consult with other humans, if it is too consistently outsourced to a machine.
In other words, the same questions I would ask a client about their AI use apply to us: is this tool expanding my clinical thinking and my presence with my clients, or is it eroding the skills, confidence, and judgment that this work requires?
No matter where a therapist lands on adoption of AI in their practice, though, informed consent for clients should be non-negotiable. Whether for documentation, consultation, or between-session support, clients should be clearly informed whenever AI is part of the clinical process and given the option to opt in or out of its inclusion.
The field is, collectively, behind where it needs to be. The technology is moving faster than our ethics codes, faster than our licensing board guidance, faster than most of our training programs. Therapists attending continuing education on AI in mental health care, bringing informed perspectives to policy conversations, and contributing to research on these tools are doing important work of trying to keep up.
I do not yet know what the right answers are to all of these questions. I suspect some of what I have written here will soon be outdated. The research will mature, the regulatory frameworks will develop, and we will learn more from the people already living through this experiment, whether they intended to or not. What I am confident of is that the questions themselves are worth taking seriously, that the people turning to these tools deserve thoughtful guidance rather than blanket reassurance or blanket alarm, and that the field of mental health has both an obligation and an opportunity to help shape what this looks like moving forward.
An Honest Reckoning
What I have tried to do in this piece is take seriously both the scale of the access problem AI is, in part, attempting to address and the genuine limitations of what it can currently offer. I have tried to read the evidence carefully rather than selectively. And I have tried to bring the same clinical instincts I would bring to any complicated situation: resist the pull toward premature certainty, stay curious about what is actually happening, and keep centering humanity.
What I keep coming back to is a question of fit: Is the place AI is evolving to occupy the right one for it? I think the evidence supports a specific, bounded role: AI as bridge support for people who cannot access care, as a self-help support for those waiting to receive it, as a collaborative tool for reinforcing the work happening between human sessions, as a potential way of lowering the threshold of starting for people who might otherwise never seek help at all. For the people they reach, these can be very meaningful contributions.
What the evidence does not yet support is AI as a destination. It should not be the primary or permanent provider of what we call therapy, or a relationship that substitutes for the harder, richer, more unpredictable work of being known by another person. The best AI tools I have encountered seem to know this about themselves. They point beyond themselves and direct people back toward human support. They communicate their own limits honestly. They are useful precisely because they understand what they are not.
The tools that deserve a place in mental health care are the ones that are designed not to maximize engagement but to minimize the need for themselves, just as human therapy is designed to do. That is a counterintuitive goal in a commercial landscape, and it is why I think clinicians, researchers, and policymakers need to be actively involved in shaping what these tools become rather than simply evaluating what they already are.
It is worth repeating one more time that we are early in all this. The research is still young, the regulatory frameworks are just emerging, and the technology is moving faster than any of our institutions have been able to fully follow. I wrote my dissertation in the midst of the COVID-19 pandemic, so I am familiar with the experience of writing about a world that will likely look entirely different by the time anyone reads it. I expect a lot to change in the next year, let alone five or ten.
What I do not expect to change are my basic questions, because they are the same ones I ask about the therapy work I do every day: Is this helping people become more themselves? Are people becoming more capable, more connected, and more able to navigate their own inner lives without outside intervention?
A positive place for AI in mental health care must serve that growth, not make those capacities less practiced and less trusted. AI might become a place to gather yourself, find language for what you are carrying, and move toward connecting more truly and deeply with other people. A therapy tool, not a therapist. A bridge, not a destination.
References
American Psychological Association. (2025). Health advisory on the use of generative AI chatbots and wellness applications for mental health. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-ai-chatbots-wellness-apps-mental-health.pdf
American Psychological Association & APA Services. (2025). Practitioner pulse survey: AI in psychological practice. https://www.apaservices.org/practice/business/technology/on-the-horizon/chatbots-replace-therapists
Clark, A. (2025). The ability of AI therapy bots to set limits with distressed adolescents: Simulation-based comparison study. JMIR Mental Health, 12, e78414. https://doi.org/10.2196/78414
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19. https://doi.org/10.2196/mental.7785
Habicht, J., Dina, L.-M., McFadyen, J., Stylianou, M., Harper, R., Hauser, T. U., & Rollwage, M. (2025). Generative AI–enabled therapy support tool for improved clinical outcomes and patient engagement in group therapy: Real-world observational study. Journal of Medical Internet Research, 27, e60435. https://doi.org/10.2196/60435
Heinz, M. V., Mackin, D. M., Trudeau, B. M., Bhattacharya, S., Wang, Y., Banta, H. A., Jewett, A. D., Salzhauer, A. J., Griffin, T. Z., & Jacobson, N. C. (2025). Randomized trial of a generative AI chatbot for mental health treatment. NEJM AI, 2(4). https://doi.org/10.1056/AIoa2400802
Inkster, B., Sarda, S., & Subramanian, V. (2018). An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: Real-world data evaluation mixed-methods study. JMIR mHealth and uHealth, 6(11), e12106. https://doi.org/10.2196/12106
Karkosz, S., Szymański, R., Sanna, K., & Michałowski, J. (2024). Effectiveness of a web-based and mobile therapy chatbot on anxiety and depressive symptoms in subclinical young adults: Randomized controlled trial. JMIR Formative Research, 8, e47960. https://doi.org/10.2196/47960
McBain, R. K., Bozick, R., Diliberti, M., Zhang, L. A., Zhang, F., Burnett, A., Kofner, A., Rader, B., Breslau, J., Stein, B. D., Mehrotra, A., Pines, L. U., Cantor, J., & Yu, H. (2025). Use of generative AI for mental health advice among US adolescents and young adults. JAMA Network Open, 8(11), e2542281. https://doi.org/10.1001/jamanetworkopen.2025.42281
Mental Health America. (2024). The state of mental health in America. https://mhanational.org/the-state-of-mental-health-in-america/
Moore, J., Grabb, D., Agnew, W., Klyman, K., Chancellor, S., Ong, D. C., & Haber, N. (2025). Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency. https://arxiv.org/abs/2504.18412
Olson, K. D., Meeker, D., Troup, M., Barker, T. D., Nguyen, V. H., Manders, J. B., Stults, C. D., Jones, V. G., Shah, S. D., Shah, T., & Schwamm, L. H. (2025). Use of ambient AI scribes to reduce administrative burden and professional burnout. JAMA Network Open, 8(10), e2534976. https://doi.org/10.1001/jamanetworkopen.2025.34976
Rousmaniere, T., Zhang, Y., Li, X., & Shah, S. (2025). Large language models as mental health resources: Patterns of use in the United States. Practice Innovations. https://doi.org/10.1037/pri0000292
Sedlakova, J., & Trachsel, M. (2023). Conversational artificial intelligence in psychotherapy: A new therapeutic tool or agent? The American Journal of Bioethics, 23(5), 4–13. https://doi.org/10.1080/15265161.2022.2048739
Stade, E. C., Stirman, S. W., Ungar, L. H., Boland, C. L., Schwartz, H. A., Yaden, D. B., Sedoc, J., DeRubeis, R. J., Willer, R., & Eichstaedt, J. C. (2024). Large language models could change the future of behavioral healthcare: A proposal for responsible development and evaluation. npj Mental Health Research, 3, 12. https://doi.org/10.1038/s44184-024-00056-z
U.S. Food and Drug Administration. (2025, November). Digital Health Advisory Committee meeting: Generative AI-enabled digital mental health medical devices. https://www.fda.gov/media/189391/download
World Health Organization. (2022). World mental health report: Transforming mental health for all. https://www.who.int/publications/i/item/9789240049338
Zhang, Q., Zhang, R., Xiong, Y., Sui, Y., Tong, C., & Lin, F. H. (2025). Generative AI mental health chatbots as therapeutic tools: Systematic review and meta-analysis of their role in reducing mental health issues. Journal of Medical Internet Research, 27, e78238. https://doi.org/10.2196/78238
Zhong, W., Luo, J., & Zhang, H. (2024). The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: A systematic review and meta-analysis. Journal of Affective Disorders, 356, 459–469. https://doi.org/10.1016/j.jad.2024.04.057