Artificial intelligence is increasingly present in mental health care, used by large health systems and independent therapists alike to help manage treatment delivery. Rapid adoption, alongside disturbing incidents in which people using general-purpose chatbots were harmed, has heightened concern among clinicians and researchers.
“There is a lot of fear and anxiety about AI,” says psychologist Vaile Wright, senior director of health care innovation at the American Psychological Association. “And in particular fear around AI replacing jobs.” Those worries surfaced during a recent 24-hour strike by about 2,400 mental health providers at Kaiser Permanente in Northern California and the Central Valley.
Therapists who struck, including licensed clinical social worker Ilana Marcucci-Morris, say changes to triage work helped trigger the action. Marcucci-Morris had worked as a triage clinician since 2019 but was reassigned after Kaiser revamped its intake process. Screenings that licensed clinicians once conducted in 10 to 15 minutes, she says, are now often handled by unlicensed telephone operators following a script, or by E-visits. At another Kaiser site, the triage team shrank from nine providers to three, says marriage and family therapist Harimandir Khalsa. Strike leaders protested the erosion of licensed triage, fearing that such staffing changes pave the way for AI to take on clinician roles.
Kaiser Permanente says its use of AI does not replace clinical expertise and that it is assessing tools, including ones from the U.K. company Limbic. A Kaiser statement confirmed it is evaluating Limbic to help members access care but said the tool is not yet in use.
Many experts say AI has mostly been applied so far to administrative tasks rather than direct clinical care. Wright notes AI can improve efficiencies around documentation, billing, and electronic health record updates — time-consuming activities that take clinicians away from patients. Several companies now market transcription and documentation-support products; Blueprint, for example, summarizes sessions, updates records, and tracks patient progress. Limbic builds AI assistants for larger systems, including intake and patient support; its Limbic Care chatbot is trained in cognitive behavioral therapy (CBT) skills to offer immediate help through patient portals.
Despite these developments, clinical use of AI remains limited. “We’re not seeing a lot of clinical use of AI today,” says psychiatrist Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center. He and others point to weak testing, high operational costs, infrastructure needs, and safety concerns. Small practices and community mental health centers often lack the IT teams and resources required to run these systems. And because regulation is limited, professional groups say providers must vet tools themselves to determine whether they are safe and effective.
Proponents argue the technology can improve care if clinicians are involved in its development and deployment. Torous predicts growing adoption and says clinicians must be trained and upskilled to evaluate and use AI safely. He and striking Kaiser clinicians urge employers to involve frontline providers when integrating AI, so clinicians remain part of decisions about patients’ level of care.
Looking ahead, many expect a blended or hybrid model in which human providers continue therapy while AI assistants help patients with homework, skill practice, and real-time feedback. Wright of the APA emphasizes an ongoing role for human therapists, noting that no digital solution replaces human-driven psychotherapy or care.