The Increasing Threat of AI Abuse in Medical School Admissions


The Function of Artificial Intelligence in Medical Education Admissions: Opportunities, Challenges, and Ethical Issues

By Newlyn Joseph, MD

Artificial Intelligence (AI), especially large language models (LLMs), has begun to transform domains long perceived as unsuitable for automation—including the complexities of medicine. As LLMs become more adept at interpreting, analyzing, and producing human-like text, their usage has extended beyond clinical medicine into realms such as medical education and admissions. These innovations promise enhanced efficiency and streamlined processes, yet such swift adoption also raises important questions, particularly when these technologies threaten to undermine fundamental medical principles: empathy, integrity, and access for all.

AI in Clinical Practice: A Measured Advancement

Despite AI’s rapid uptake in numerous sectors, its incorporation into clinical settings has been cautious and intentional. This reluctance stems from valid medicolegal issues: incorrect diagnoses, unclear reasoning frameworks, and the potential perpetuation of biased data. Nevertheless, a majority of specialists concur that AI will solidify its role in healthcare—integrating into electronic medical records (EMRs), aiding in insurance evaluations, and even supporting early diagnostic efforts. AI is seen as an enhancement to physicians’ work rather than a replacement. When applied judiciously, it could boost physician productivity while maintaining high standards of patient care.

Admissions: An Emerging Domain for AI

Medical education administrators are more open to testing AI, particularly within the admissions process. Institutions such as the Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the University of Miami Miller School of Medicine have begun implementing AI for their application screening. Advocates posit that AI provides a vital update to manage the increasing influx of applications, many of which feature exceedingly polished credentials, extensive research, and diverse extracurriculars.

In this scenario, AI is praised for its capability to analyze vast data and quickly identify outstanding candidates. With thousands of applicants competing for a limited number of places, AI could, theoretically, reduce the workload for human reviewers and speed up initial evaluations. Some institutions have even started utilizing language models to aggregate preceptor feedback into comprehensive Medical Student Performance Evaluations (MSPEs)—offering a time-efficient solution with evident administrative benefits.

From Opportunities to Challenges: Ethical and Equity Issues at Hand

Despite these apparent benefits, the unexamined use of AI in such impactful processes presents major ethical and practical dilemmas. The technology inherently possesses limitations that are particularly troubling in areas committed to upholding human values. Chief among them is the fear that AI, if applied inadequately or without oversight, could entrench bias rather than correct it.

Like all machine learning systems, LLMs are only as impartial as the data on which they are trained. In the realm of medical admissions, this means that AI models—trained on historical datasets reflecting previous admissions decisions—might duplicate patterns of exclusion, inadvertently disadvantaging underrepresented and marginalized demographics. Given the well-established history of racial, gender, and socioeconomic bias in medical education, deploying AI without appropriate supervision could exacerbate representation disparities instead of mitigating them.
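The mechanism is easy to demonstrate. The toy sketch below uses entirely hypothetical data and a deliberately simple frequency-based scorer (not any school's actual model): a screener fit to historically skewed decisions assigns different scores to two equally qualified new applicants, purely because of group membership in the training data.

```python
from collections import defaultdict

# Hypothetical historical records: (gpa_band, group, admitted).
# Applicants in group "B" were historically admitted less often
# than group "A" applicants with identical qualifications.
history = [
    ("high", "A", 1), ("high", "A", 1), ("high", "A", 1), ("high", "A", 0),
    ("high", "B", 1), ("high", "B", 0), ("high", "B", 0), ("high", "B", 0),
]

# "Training": estimate the historical admit rate for each
# (gpa_band, group) cell, as a naive screening model might.
counts = defaultdict(lambda: [0, 0])  # cell -> [admits, total]
for gpa, group, admitted in history:
    counts[(gpa, group)][0] += admitted
    counts[(gpa, group)][1] += 1

def screen(gpa_band, group):
    """Predicted 'admit probability' for a new applicant."""
    admits, total = counts[(gpa_band, group)]
    return admits / total

# Two equally qualified applicants, scored differently:
print(screen("high", "A"))  # 0.75
print(screen("high", "B"))  # 0.25
```

Real screening models are far more complex, but the failure mode is the same: without explicit auditing, the model learns the historical disparity as if it were a signal of merit.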

Additionally, the lack of transparency—or “black box” nature—of numerous LLMs intensifies this problem. Their outcomes are not always interpretable, complicating the audit process for decisions that might unjustly rule out qualified candidates. This issue is further exacerbated by the current political landscape, where divisive discussions about diversity, equity, and inclusion (DEI) jeopardize the essential initiatives aimed at monitoring and addressing AI biases.

The AI Recruitment Race: Insights from Business

Examining other sectors, the corporate realm offers a cautionary narrative. Approximately 87% of employers utilize some type of AI in their hiring processes, chiefly to handle application surges. Consequently, job applicants have started modifying their resumes with AI tools to bypass algorithmic barriers, leading to a new genre of resume enhancement services. This ongoing digital competition frequently filters out high-caliber, authentic candidates in favor of those skilled at navigating opaque systems. Certain AI-driven hiring platforms now even assess video submissions based on facial expressions and body language, raising further ethical concerns.

Medicine risks mimicking a similar trend if admissions processes become overly dependent on AI. The stakes involve not merely who gets excluded, but also the deterioration of trust between aspiring physicians and their training institutions.

The Human Touch in Medical Admissions

Healthcare inherently revolves around individuals. While standardized metrics such as GPA and MCAT scores play a vital role, they do not encapsulate the full picture. Traits like empathy, resilience, teamwork, and ethical reasoning frequently stem from personal experiences—facets best conveyed through interviews, personal statements, and genuine narrative exchange. Unlike human evaluators, AI lacks the instinct and contextual comprehension needed to assess these attributes.

By contrast, the existing "in-house" quantitative screening criteria implemented at many schools—such as scoring rubrics for academic or extracurricular involvement—are at least transparent. They can be readily audited and adjusted to conform to an institution's values or respond to emerging challenges.