Illinois Implements AI Therapy Prohibition with Significant Exception

As a psychiatry resident witnessing the rapid rise of AI chatbots in therapy, I have observed a notable transformation in the mental health care landscape. As these tools become woven into daily life, they have ignited essential conversations about their influence on clinical practice and patient safety, a debate intensified when Illinois Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources Act (HB1806) on August 1, 2025. The law prohibits AI-driven therapy across the state, aiming to protect consumers from unlicensed and unsupervised AI applications. It was prompted by survey findings identifying ChatGPT as a prominent source of mental health assistance in the U.S., alongside troubling reports of harm caused by these AI systems.

HB1806 bars AI from making therapeutic decisions or interacting with patients without professional oversight, restricting AI in mental health settings to supportive roles such as scheduling or documentation under the guidance of licensed professionals, defined as those permitted to deliver mental health services. Notably, this definition excludes physicians, including psychiatrists, raising significant questions about the reasoning behind the omission.

In Section 15, subsection (b), HB1806 permits ambient listening technologies: tools that use generative AI to transform clinical sessions into condensed notes. DAX Copilot by Nuance, incorporated into electronic health records, is a prime example of how these technologies have eased documentation burdens in healthcare settings. However, the exemption of physicians from informed consent obligations for ambient listening technologies, influenced by the state medical society's objections, has sparked debate. This exclusion may overlook the risks, ethical implications, and responsibilities that come with using such technology in clinical settings.

Ethical analyses published in JAMA clarify the wider consequences of ambient listening systems, particularly the occurrence of "hallucinations" by generative AI, in which plausible-sounding but inaccurate information is generated. This risk demands careful review by healthcare providers: errors can enter the medical record when clinicians, subject to automation bias, accept AI-generated output without scrutiny. Accurate documentation is critical, especially in legal contexts such as malpractice litigation.

In summary, while Illinois' legislative effort to prevent unsupervised AI therapy reflects a forward-thinking approach to consumer protection, it does not fully address essential questions about ambient listening technology and informed consent in clinical settings. Comprehensive understanding and clear communication between patients and physicians remain crucial, as does the ongoing dialogue about the evolving role of AI in mental health services.