New Guidelines Boost Safety of AI in Mental Health Treatments
Washington, D.C., Monday, 23 June 2025.
New guidelines aim to enhance the safe use of AI in digital therapeutics for mental health, highlighting the FDA’s role in providing regulatory oversight to improve patient care and safety.
Introduction of New Safety Guidelines
As digital therapeutics continue to reshape mental health care, new FDA guidelines implemented on June 16, 2025, emphasize the safe integration of AI into treatment protocols. The guidelines target the balanced use of AI-driven tools, prioritizing patient safety and reliability in reducing psychiatric symptoms through digital means [7][9].
Role of AI in Mental Health and Associated Risks
Large Language Models (LLMs) have become increasingly popular among individuals seeking psychological support. Survey data published by Rousmaniere et al. (2025) indicate that 48.7% of 499 surveyed U.S. respondents used LLMs for issues such as anxiety and depression in 2024 [1][5]. However, mental health experts have raised significant concerns about unsupervised LLM use, which has been reported to contribute to narcissistic and psychotic states, echoing conditions such as ‘Shared Psychotic Disorder’ described in the DSM-5 [2][6].
Development and Regulation: A Collaborative Approach
The FDA’s involvement is critical in shaping the deployment of AI-based solutions, given the historical lack of regulation around digital therapeutic interventions. Efforts to introduce rigorous oversight mirror existing protocols for physical prescriptions, aiming to normalize AI in medical therapies under regulated frameworks [1][7]. Additionally, companies such as Anthropic emphasize responsible AI development, advocating mandatory safety measures and ongoing regulation to prevent misuse [2][6].
Progress in Digital Therapeutics and Safety Frameworks
Leading digital therapeutics such as Click Therapeutics’ CT-155 are already in advanced clinical trials targeting complex mental health conditions like schizophrenia. These innovations reflect the growing convergence of AI advances and therapeutic needs, though they remain heavily regulated by the FDA to ensure patient safety and appropriate use [8]. As these tools mature, their regulation will rely on the synthesis of rigorous clinical data and comprehensive safety guidelines [7].
Sources
- medium.com
- www.psychiatrictimes.com
- www.psychiatrictimes.com
- searchjobs.dartmouth.edu
- www.psychologytoday.com
- mhealth.jmir.org
- www.chpa.org
- www.mintz.com