
An $8 Billion Industry That Can't Detect a Suicide — Why AI Mental Health Tools Are Failing

WHO says clinicians must help build AI mental health tools. VERA-MH data shows chatbots can't reliably detect suicide risk. Here's what happens when engineers build therapy without therapists.

Matthew Sexton, LCSW · March 25, 2026

I have sat across from a person in a suicide crisis more times than I can count. There is nothing algorithmic about it. The shift in vocal tone. The sudden calm after weeks of agitation. The throwaway comment about "not being around" that is not throwaway at all. These are signals that require a trained human brain to catch — and even experienced clinicians miss them sometimes.

Now imagine asking a chatbot to do it.

On March 20, 2026, the World Health Organization published guidance on responsible AI in mental health, calling for clinicians to be actively involved in the design, development, and oversight of AI tools used in mental health contexts. Five days later, I am writing this from the perspective of someone who has been doing exactly what WHO is asking for — and I can tell you the gap between what the industry is building and what patients actually need is not a crack. It is a canyon.

The VERA-MH Data Should Terrify You

In February 2026, VERA-MH — the first open-source evaluation framework for AI chatbots in mental health — published findings that should have stopped the industry in its tracks. The researchers systematically tested commercially available AI chatbots on their ability to identify and respond to suicidal ideation.

The results showed meaningful variation in how these chatbots handle the most critical moment in mental health — the moment someone signals they want to die. Some chatbots failed to escalate. Others used stigmatizing language. The tools that millions of people are downloading as mental health support cannot reliably detect when someone is in crisis.

This is not a technical limitation. This is a design failure. And it is a design failure that was entirely predictable, because the people building these tools have never sat in the room when someone is deciding whether to live or die.

Over 40% of digital health platforms now integrate some form of AI-driven assessment or support tool. The global AI mental health market is projected to cross $8 billion in 2026. That is $8 billion flowing into an industry that has not solved the most basic clinical requirement: can your tool tell when someone needs immediate human intervention?

WHO Is Right — But the Framework Is Toothless

The WHO framework calls for mental health professionals to be in the room when AI tools are designed. I agree completely. But calling for involvement and requiring it are two different things.

Right now, there is no regulatory mechanism that forces AI mental health companies to include licensed clinicians in their development process. There is no certification body. There is no minimum standard for clinical oversight in consumer-facing mental health AI. The WHO has identified the problem with precision and proposed a solution with no enforcement mechanism.

I have watched this pattern play out before in healthcare technology. Guidelines get published. Industry nods along. Nothing changes. The companies with the best marketing budgets continue to ship products that look clinical but are not. Users assume that because an app talks about anxiety and depression, someone with clinical training built it. That assumption is often wrong.

The result is a market where the most important variable — whether a clinician was involved in building the tool — is invisible to the consumer. You cannot tell from an app store listing whether a therapist shaped the logic or whether an engineer Googled the DSM-5 and built a decision tree over a weekend.

What Happens When Engineers Build Therapy Alone

I want to be clear: I am not anti-engineer. I work with engineers every day. I am building AI-powered healthcare tools right now — TransplantCheck for kidney transplant patients, VeteranCheck for veteran mental health screening. The technology is powerful and I believe in it.

But I build these tools as a licensed clinical social worker with thirteen years of direct practice experience across substance abuse treatment, forensic ACT teams, disaster case management, and inpatient psychiatric settings. When I design a screening flow, I am drawing on thousands of clinical encounters. When I decide what happens after a user flags a concerning symptom, I am not guessing. I know what the clinical pathway looks like because I have walked it with real people in real crisis.

That clinical context is not something you can replicate with training data. You cannot fine-tune a language model on empathy. You cannot prompt-engineer the instinct that tells you this patient's "I'm fine" means something completely different than it did last week.

When engineers build mental health tools without clinicians, three things happen consistently:

First, risk stratification fails. The tool treats all distress as equivalent. A user reporting mild work stress gets the same response pathway as a user expressing hopelessness after a loss. In clinical practice, these are fundamentally different presentations requiring fundamentally different interventions.

Second, escalation logic is backwards. The tool either escalates everything — flooding crisis lines with non-emergent contacts and eroding trust — or escalates nothing, because the developers were afraid of liability and built the safest possible response: a disclaimer and a phone number. Neither approach is clinically useful; a sketch of what graded escalation can look like follows this list.

Third, the tool optimizes for engagement instead of outcomes. A therapy app that keeps you coming back is not the same as a therapy app that helps you get better. In clinical practice, the goal is often to make yourself unnecessary. That is the opposite of what engagement metrics reward.
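
To make the first two failure modes concrete, here is a minimal sketch of what graded risk stratification and escalation can look like. The tier names, thresholds, and screening fields are hypothetical, invented for illustration rather than drawn from TransplantCheck, VeteranCheck, or any validated instrument. The point is the shape of the logic: every presentation routes to a specific pathway, and the riskiest signal always overrides the average.

```python
# A deliberately simplified sketch of graded risk stratification.
# Tier names, thresholds, and fields are illustrative only; they are
# not drawn from any real product or validated instrument.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"            # self-guided resources, routine follow-up
    MODERATE = "moderate"  # clinician review within a defined window
    HIGH = "high"          # same-day clinical contact
    IMMINENT = "imminent"  # immediate warm handoff to crisis services


@dataclass
class ScreeningResult:
    distress_score: int        # 0-10 self-reported distress
    hopelessness: bool         # endorsed a hopelessness item
    suicidal_ideation: bool    # endorsed an ideation item


def stratify(result: ScreeningResult) -> RiskTier:
    """Map a screening result to a response tier.

    Different presentations route to different pathways, and the
    riskiest signal always wins, regardless of the overall score.
    """
    if result.suicidal_ideation:
        return RiskTier.IMMINENT
    if result.hopelessness:
        return RiskTier.HIGH
    if result.distress_score >= 6:
        return RiskTier.MODERATE
    return RiskTier.LOW


# Mild work stress and hopelessness after a loss should never share a pathway.
print(stratify(ScreeningResult(distress_score=3, hopelessness=False, suicidal_ideation=False)))  # RiskTier.LOW
print(stratify(ScreeningResult(distress_score=3, hopelessness=True, suicidal_ideation=False)))   # RiskTier.HIGH
```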

What Clinician-Led AI Actually Looks Like

In TransplantCheck, every screening question maps to a validated clinical instrument. Every risk flag triggers a specific, evidence-based response pathway. The system knows the difference between a patient who needs education and a patient who needs immediate clinical contact — because a clinician defined those boundaries, not an algorithm.
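
As one concrete illustration of mapping a screen to a validated instrument, here is a sketch built around the PHQ-9, a widely used depression screen whose nine items are each scored 0 to 3. I am using the PHQ-9 only because its scoring bands are public; this is not a claim about which instruments TransplantCheck actually uses, and the pathway labels are mine.

```python
# Illustrative only: scoring the PHQ-9 (a validated depression screen) and
# routing on its published severity bands. This is not TransplantCheck's
# actual logic; it shows what "every flag maps to a defined pathway" means.

def phq9_pathway(item_scores: list[int]) -> str:
    """Return a response pathway for nine PHQ-9 item scores (each 0-3)."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 expects nine items scored 0-3")

    # Item 9 asks about thoughts of self-harm or of being better off dead.
    # Any endorsement warrants direct risk assessment, whatever the total.
    if item_scores[8] > 0:
        return "immediate clinical contact and suicide risk assessment"

    total = sum(item_scores)
    if total >= 20:
        return "severe: urgent clinical referral"          # 20-27
    if total >= 15:
        return "moderately severe: clinician follow-up"    # 15-19
    if total >= 10:
        return "moderate: scheduled clinical review"       # 10-14
    if total >= 5:
        return "mild: monitoring and psychoeducation"      # 5-9
    return "minimal: routine resources"                    # 0-4
```

The override on item 9, the question about thoughts of self-harm, is the kind of boundary a clinician insists on and an engagement metric never would.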

When I am designing VeteranCheck's mental health screening flow, I am thinking about the veteran who will not admit to suicidal ideation on a direct question but will disclose it indirectly through comments about burden, hopelessness, or "not wanting to be here anymore." I know those patterns because I have heard them in session. A chatbot trained on general mental health content does not know to listen for them.
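
To show what "listening for them" can mean in software, here is a deliberately crude sketch: a clinician-curated list of indirect-risk phrases that routes a message to human review instead of an automated reply. The phrases and function names are hypothetical, and a string match is nowhere near clinical judgment; the point is only who writes the list and where the flag goes.

```python
# A toy illustration of routing clinician-curated indirect-risk language to
# human review. The phrases are examples, not an exhaustive or validated list,
# and no keyword match substitutes for clinical judgment.

INDIRECT_RISK_PHRASES = (
    "not wanting to be here",
    "better off without me",
    "burden on everyone",
    "no point anymore",
)


def needs_human_review(message: str) -> bool:
    """Flag messages containing clinician-curated indirect-risk language."""
    text = message.lower()
    return any(phrase in text for phrase in INDIRECT_RISK_PHRASES)


# "I'm fine, they'd all be better off without me anyway" should never be
# handled by an automated reply alone.
print(needs_human_review("I'm fine, they'd all be better off without me anyway"))  # True
```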

This is what WHO is calling for. Not clinicians reviewing AI tools after they are built — clinicians building them from the first line of code. There is a difference between a clinician signing off on a finished product and a clinician shaping the product's clinical logic from day one. The first is a rubber stamp. The second is how you build tools that actually work.

The Stakes Are Not Abstract

Someone is going to die because an AI mental health tool failed to detect their crisis. That is not alarmist — it is arithmetic. When you have millions of users, a chatbot that misses suicidal ideation even 5% of the time, and no clinical backstop, the outcome is inevitable.
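
A back-of-the-envelope version of that arithmetic, with every figure an assumption chosen only to illustrate scale:

```python
# Back-of-the-envelope only. Every figure here is an assumption chosen to
# illustrate scale, not a measured statistic.
users = 2_000_000            # assumed active users of a mental health app
disclosure_rate = 0.02       # assumed share who disclose suicidal ideation in a year
miss_rate = 0.05             # the 5% miss rate cited above

disclosures = users * disclosure_rate    # 40,000 crisis disclosures per year
missed = disclosures * miss_rate         # 2,000 missed crises per year
print(f"{missed:,.0f} missed crisis disclosures per year")
```

Change the assumptions and the number moves, but it does not move to zero.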

The WHO framework is a start. The VERA-MH data is a warning. But neither will matter if the industry continues to treat clinical expertise as an optional add-on rather than a foundational requirement.

We are building mental health tools in a way we would never build surgical tools: without the surgeon in the room. The $8 billion AI mental health market needs to decide whether it is building products that help people or products that look like they help people. Those are not the same thing, and the difference is measured in lives.

If you are building in this space, put a clinician at the table. Not on the advisory board. Not in the press release. At the table, with commit access and veto power over clinical decisions. That is the minimum standard. Everything less is theater.

If you or someone you know is in crisis, contact the 988 Suicide and Crisis Lifeline by calling or texting 988. Veterans can press 1 for the Veterans Crisis Line.

Learn more about clinician-led AI in healthcare — because your mental health deserves tools built by people who understand it.