Title: Recasting AI Hallucinations as Fabrications: An Urgent Appeal for Ethical Responsibility in Medicine and Law
By Muhamad Aly Rifai, MD, FACP, FAPA, FACLP
Artificial intelligence (AI) stands at the cutting edge of innovation, ready to transform medicine, law, and almost every aspect of human activity. However, a concerning and overlooked issue threatens to destabilize the very foundation of these fields: trust.
Central to professions such as psychiatry and law is a core value—verifiable truth. When that foundation is compromised, the repercussions extend beyond mere academic or procedural concerns; they touch upon our humanity. The so-called “hallucinations” of AI—system-generated fabrications that assert false information with confidence—constitute a significant and pressing danger to both ethical standards and public safety.
For a psychiatrist, the term hallucination holds concrete clinical importance. My patients experience intense, distressing perceptual disruptions—voices, visions, or feelings that lack a basis in reality. These involuntary neurological events merit compassion and clinical intervention.
Labeling AI-generated mistakes as “hallucinations” trivializes the serious experiences of my patients. More critically, it obscures the reality of what AI is creating: counterfeit knowledge, lacking intent but not devoid of consequences. These are not “hallucinations.” They are fabrications.
Mislabeling the Issue Undermines Accountability
The euphemistic categorization of AI inaccuracies as “hallucinations” has two perilous effects:
1. It shifts responsibility away from developers and users.
2. It diverts essential examination from the ethical ramifications of such inaccuracies.
Robin Emsley’s editorial in Schizophrenia highlights this risk vividly. When Emsley employed ChatGPT to generate references for antipsychotic research, he received citations that initially seemed credible—but upon checking, most were completely fictitious. This was no isolated incident. Further investigation revealed that in a sample of 115 medical citations produced by AI models like ChatGPT, an astonishing 93% were either wrong or purely fabricated.
Such widespread error is no mere oversight. It is a systemic issue rooted in the probabilistic nature of large language models (LLMs): these models generate text by predicting the most statistically likely next words, not by consulting logic, fact, or ethics.
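To make that mechanism concrete, consider the deliberately simplified sketch below. It is a toy illustration, not any actual model's code; the vocabulary and probabilities are invented for the example. It shows how purely probabilistic next-word selection can produce fluent, citation-shaped text without ever consulting a source of verified facts.

```python
import random

# Toy illustration (not any vendor's actual model): next-word choices are
# drawn from learned probabilities over word sequences, with no step that
# consults a database of verified facts.
next_token_probs = {
    ("randomized", "controlled"): {"trial": 0.92, "study": 0.06, "cohort": 0.02},
    ("et", "al."): {"(2019)": 0.40, "(2021)": 0.35, "(2017)": 0.25},
}

def generate_next(context):
    """Sample the statistically most plausible continuation of a phrase.
    Nothing here asks whether the resulting statement or citation is true."""
    probs = next_token_probs.get(context)
    if not probs:
        return "[no learned continuation]"
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

# Fluent, confident-looking phrases emerge from probability alone.
print("randomized controlled", generate_next(("randomized", "controlled")))
print("Smith et al.", generate_next(("et", "al.")))
```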
Legal Consequences Are Equally Serious
The threats extend beyond medicine. In a now-notorious case, attorney Steven Schwartz unknowingly cited six nonexistent legal cases fabricated by ChatGPT in a court filing. He was sanctioned by the court, yet the more pressing question is: how did such a fundamentally flawed tool infiltrate an arena where accuracy and precedent are paramount?
The fact that a trained lawyer could be so readily deceived underscores how persuasive and professional these AI outputs appear. The problem has escalated to the point where multiple federal judges have issued orders barring generative AI from legal submissions unless its use is disclosed.
A Comparison of Neurological and Algorithmic Processes
Interestingly, clinical psychiatry offers a concerning parallel: musical hallucinations that follow sensory alteration. Patients who receive cochlear implants, for instance, may suddenly experience ongoing musical hallucinations. Deprived of its usual sensory input, the brain compensates by generating sound internally, and these phantom sounds often become persistent and distressing.
AI models behave comparably, through algorithmic mechanisms rather than neuronal ones. When a model lacks the data to answer a query, it fills the informational void with statistically predicted text. The result is not a mere mistake but output that, to the untrained observer, reads as well-crafted, expertly articulated truth.
But that apparent truth is a falsehood.
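A minimal sketch of that void-filling behavior follows. It is an illustration only, under stated assumptions: the knowledge base, author names, and journal titles are invented for the example and do not reflect how any specific product is built.

```python
import random

# Toy illustration of "void filling": when no grounded answer exists, a purely
# generative process can still assemble something that looks authoritative,
# because its only objective is plausible form, not verified content.
AUTHORS = ["Smith", "Chen", "Garcia", "Okafor"]                  # invented for the example
JOURNALS = ["J Clin Psychiatry", "Lancet Psychiatry", "Am J Psychiatry"]

def answer_or_fabricate(query: str, knowledge_base: dict) -> str:
    """Return a stored answer if one exists; otherwise emit a
    well-formed but entirely invented citation."""
    if query in knowledge_base:
        return knowledge_base[query]
    # No data available: fill the informational void with confident fiction.
    return (f"{random.choice(AUTHORS)} et al. ({random.randint(2005, 2023)}). "
            f"{query.capitalize()}: a systematic review. "
            f"{random.choice(JOURNALS)}.")

print(answer_or_fabricate("musical hallucinations after cochlear implantation", {}))
```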
Ethical Protocols Must Evolve
The ramifications of these AI-generated fictions are tangible and quantifiable:
– Medical students might inadvertently reference fabricated information in research papers.
– Physicians could risk making treatment decisions based on imaginary guidelines.
– Legal briefs might be built upon fictitious precedents, eroding trust in the justice system.
This raises existential questions for professions grounded in truthfulness:
– Who bears responsibility when AI misguides?
– How do we regulate instruments that generate information while escaping accountability?
– Can medical or legal practitioners realistically be expected to verify every AI output while also being compelled to provide services promptly and economically?
Call to Action: Immediate Steps Needed
1. Introduce Comprehensive Professional Training
In every field, from medicine to law, we must educate professionals about the risks of AI with the same rigor we apply to other ethical and clinical principles. Specialized training modules should teach users not only how to use AI but also what precautions to take.
2. Mandate Disclosure and Accountability Protocols
Just as we require conflict-of-interest and funding disclosures, all scholarly, clinical, or legal documents must expressly state when and how AI contributed to content generation. Transparency cultivates accountability.
3. Implement Integrated Fact-Checking
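One concrete form this could take is automated citation verification: before an AI-supplied reference enters a manuscript or brief, its identifier is checked against an authoritative registry. The sketch below is only an illustration of that idea; it assumes the public Crossref REST API and the Python requests library, neither of which this article prescribes.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True only if the DOI resolves to a registered Crossref record."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

def audit_citations(dois):
    """Flag any AI-supplied citation whose DOI cannot be verified."""
    return {doi: "verified" if doi_exists(doi) else "UNVERIFIED - do not cite without review"
            for doi in dois}

# Example: one real DOI and one deliberately fabricated identifier.
print(audit_citations([
    "10.1038/nature14539",         # LeCun, Bengio & Hinton, "Deep learning", Nature (2015)
    "10.9999/fabricated.2023.01",  # invented for the example
]))
```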