Artificial intelligence is moving into mental health care at both large systems and independent practices, creating a mix of enthusiasm, caution, and resistance among clinicians.
Rapid uptake of AI tools — paired with disturbing reports of people harmed after using general-purpose chatbots — has heightened anxiety in the field. Many providers worry the technology could displace jobs and alter how care is delivered.
Those concerns helped drive a 24-hour strike by about 2,400 mental health workers at Kaiser Permanente sites in Northern California and the Central Valley. Several therapists said recent changes to intake and triage were a trigger: the 10–15 minute screening calls once conducted by licensed clinicians have often been handed off to scripted phone operators or e-visit workflows, according to clinicians who took part in the strike. At one site, a triage team shrank from nine members to three, prompting fears that fewer licensed staff and streamlined digital processes could open the door to further automation or AI-driven replacement.
Kaiser says it is not using AI to replace clinical decision-making. The organization confirmed it is evaluating tools from the U.K. company Limbic to improve access to care, but said the technology has not yet been deployed.
So far, most AI in mental health has been applied to administrative and workflow tasks rather than direct therapy. Experts point out that documentation, billing, and electronic health record updates are time-consuming duties that pull clinicians away from patients, and that AI can streamline these functions. Commercial products now offer session transcription, summary generation, chart updates, and progress tracking; one vendor, for example, markets automatic summaries and record updates as a way to reduce paperwork burden. Other companies build conversational assistants for intake and patient support; some chatbots are trained on cognitive behavioral therapy techniques to offer immediate, low-intensity help through patient portals.
Despite these tools, routine clinical use of AI remains limited. Digital psychiatry specialists say adoption is hampered by the weak evidence base behind many products, high operational costs, technology and data infrastructure requirements, and safety concerns. Small practices and community mental health centers often lack the IT staff and resources needed to deploy and maintain AI systems. With limited regulatory oversight, professional organizations emphasize that clinicians are currently responsible for vetting tools for safety and effectiveness.
Proponents argue AI can enhance care if clinicians are involved in design and rollout. Many observers predict growing adoption over time and stress that providers must be trained to evaluate and use AI safely. Clinicians who protested at Kaiser and leaders in digital mental health both urge employers to include frontline staff in decisions about implementing AI so that technology supports, rather than supplants, clinical judgment.
Looking forward, most observers expect a hybrid model: human therapists continuing to provide psychotherapy while AI assistants support homework, skill practice, session summaries, and real-time patient feedback. Advocates for cautious integration stress that AI should augment clinical work and free clinicians to spend more time with patients, not replace them.
Psychologists and professional groups emphasize the ongoing necessity of human-led care. There is broad agreement that no current digital tool substitutes for therapist expertise, the therapeutic relationship, and the complex clinical decision-making involved in mental health care. Thoughtful deployment, evidence-based evaluation, staff training, and worker protections will be central as AI becomes a more common part of mental health settings.