
When I wrote my first novel last year, I wrestled with the ethics of using a large language model to assist with a craft I have been honing for decades. Prompt after prompt, I worried about the energy each query consumed and about the opaque origins of the generated text. “Am I depending too much on this emerging technology? At what point does this stop being my creation? Where is the boundary, and have I overstepped it?”
I soon recognized that GPT-4 was not the creative writing sorcerer many had claimed, ready to supplant the human brain with flawless artificial intelligence. Was it effective? For certain tasks, yes. It could generate credible dialogue and move smoothly between well-described scenes. It could clearly marshal the necessary facts into a persuasive essay argument. Yet its narrative skills faltered when faced with some of the creative writer’s most powerful techniques: thematic coherence and subtlety.
For the chatbot to generate content that fit the world I had constructed, I needed to steer it, at times to remarkable extents. For every 800 words it assembled, I had to supply at least half of that to keep it aligned. I crumpled many of its suggestions like scrap paper and tossed them into my laptop’s recycle bin. Others I revised extensively and ultimately incorporated. While it was far from a fully automated endeavor, using AI helped me quickly find the right ideas, making the writing process efficient enough that I finished my book in less than two months. Holding the first copy of my book, a project more than nine years in the making, brought tears to my eyes and an unforgettable sense of achievement.
Whether we appreciate it or not, AI (like televisions, computers, the Internet, and mobile devices before it) has swiftly infiltrated every aspect of our lives. Although it feels akin to opening Pandora’s box without knowing whether AI is the evil that escaped or the hope that remained within, AI has undoubtedly become an enduring element of our existence. I have been wary of the rush to release systematically untested AI products and services because, as a species, we have yet to establish the ethical and moral frameworks for this technology, akin to Isaac Asimov’s Laws of Robotics. “Where is the boundary, and have we already crossed it?”
In my professional life, I serve as a family physician, caring for patients at a federally qualified health center. As a young Millennial, I lived through the relentless technological transformations of the late 1990s and 2000s. By all metrics, I graduated from high school with a diploma in techno-literacy. After four years of college, one year of graduate studies, and four years of medical school across the 2010s and early 2020s, I began my residency training to become a primary care physician, equipped with a thorough understanding of the skills required to navigate the labyrinth that is the electronic health record.
Even though I had been trained to master the nuances of these tools, designed primarily for medical billing, I faced the same hurdles that more experienced clinicians had described years earlier. If I abandoned my computer for more focused interactions with my patients, I would be left with hours of charting to complete unpaid after work, a path toward burnout and a less effective, potentially shorter, career in medicine. So I did what most felt compelled to do: stare at the computer screen in the patient’s room, chiseling away at the mountain of required documentation while striving to deliver high-quality health care.
Over the years, I have made this computer-physician hybrid appear seamless and work as efficiently as possible, with variable outcomes, to be sure. When I began to hear about the latest AI models using ambient listening to document clinical encounters like an automated medical scribe, I was intrigued. Thrilled, even. It reminded me of two things: how the chatbot had unlocked a deeper potential within me as a writer, and a meaningful conversation I had with an emergency physician about this very idea when I was working as a medical scribe.
With many of the available applications, I could download an app, push a button, and let the AI record the dialogue, transcribe it, and condense the relevant clinical information into a fairly complete note. Once I started encountering versions integrated into electronic health records and other vetted, HIPAA-compliant platforms like Doximity, my ethical concerns about the technology diminished. After being trained to use one of the so-called “AI scribes” at my current workplace, I was eager to start using it and to reclaim at least a portion of the professional autonomy that electronic health records had stripped from physicians long ago.
When I stepped into the