FDA Advisory Group to Address Therapy Chatbot Risks
Washington, D.C., Wednesday, 5 November 2025.
The FDA is forming a new advisory group to address risks posed by therapy chatbots, aiming to clarify regulations and ensure patient safety in digital health.
Rising Concerns Over AI in Mental Health
The FDA’s initiative comes in response to growing concerns about the use of AI-powered chatbots in mental health care. These digital tools are increasingly popular among patients seeking accessible and affordable alternatives to traditional therapy. However, the lack of regulation has raised alarms over potential risks, including the reinforcement of harmful behaviors and inadequate responses in crisis situations [1][2].
FDA’s Digital Health Advisory Committee
The FDA’s Digital Health Advisory Committee (DHAC) is set to meet on November 6, 2025, to discuss regulatory frameworks for therapy chatbots. The meeting will examine scenarios involving generative AI applications designed for mental health support, including whether such tools should be available over the counter or by prescription, and how they might be used with both adult and adolescent populations [1][3].
Legal and Ethical Implications
Recent legal cases, such as the lawsuit filed against OpenAI by the parents of a teenager who died by suicide after extended conversations with ChatGPT, underscore the ethical dilemmas posed by these technologies. The case highlights the urgent need for comprehensive guidelines to ensure that AI does not exacerbate mental health issues, particularly among vulnerable groups like teenagers [4].
Future Directions for AI Regulation
The FDA’s efforts are part of a broader push to integrate digital health tools safely into healthcare systems. By establishing clear regulations, the FDA aims to promote innovation while ensuring patient safety. This move is expected to influence global standards as countries such as India work to incorporate digital therapeutics into their own healthcare frameworks [2][5].