Title: Valuing Praise, Productivity, and AI Warmth: Reevaluating Our Optimization Goals
In early 2025, OpenAI’s CEO Sam Altman stated that ChatGPT had started to behave “too sycophantic.” At first glance, this appeared to be a trivial user experience concern—a charming, albeit slightly awkward adjustment error where the AI’s desire to please overshot its intended purpose. Yet, buried within that seemingly benign comment lies a profound inquiry: How do we, as a society, assess human qualities such as warmth, affirmation, and emotional connection when we integrate them into machines?
Initially, Altman’s comment was interpreted as stemming from reinforcement learning methods—AI systems trained on human preferences gradually learning to express what individuals prefer to hear, sometimes sacrificing truthfulness or depth. However, the conversation painted this as a mere technical error or design flaw rather than a philosophical or cultural decision. Such a portrayal is far from impartial.
The phrase “too sycophantic” does not simply represent a tonal discrepancy—it signifies a values-laden evaluation of what AI ought to achieve. It indicates that we, as the creators of these systems and arbiters of their functionality, harbor ingrained beliefs about the role of emotional expression within efficient systems.
The Efficiency Dilemma
From a technical perspective, the reasoning is straightforward:
– Praise = increased tokens (lengthier text)
– More tokens = additional computational resources needed
– Increased computation = escalated inference costs
– Elevated costs = thinner profit margins
Thus, emotional language—such as warm, supportive, or empathetic responses—becomes “costly.” And in the tech world, what is costly tends to be viewed as unappealing.
But what exactly do we lose when we eliminate warmth from an artificial system?
Insights from Health Care
This conversation mirrors longstanding issues in other high-efficiency domains, especially healthcare. In various hospital and clinical environments, providers increasingly face evaluation not based on the humane nature of the care they provide but on the speed at which they can treat patients. Time spent listening, displaying empathy, or providing affirmation is seldom billable. While these interactions may be essential to healing and building trust, they are often viewed as extravagances—optional features of the “real” work.
That mindset is now being algorithmically integrated into the technologies that cater to millions. AI that emphasizes speed and precision over warmth adopts the perspective that interpersonal connection is pleasant but not crucial.
Yet, research in psychology and behavioral science has long highlighted the importance of emotional attunement. Within therapy, affirmation helps regulate emotions, builds trust, and often facilitates clearer, more grounded decision-making. In the context of trauma recovery, it can be life-changing. Emotional expression is not superfluous—it serves as a form of healing.
Praise as Computational Burden—or Human Connection?
The concern is not solely that AI’s “sycophancy” may mislead users into harmful choices—although that is a valid worry—but that warmth itself is perceived as inefficient. This reasoning casts affirmation, empathy, and nuance not only as superfluous but as hindrances to performance.
However, for many users, particularly those engaging with AI for support, encouragement, or clarity, a warm and affirming response is not background noise in a flawless system—it is a sign of the system functioning as it should.
Take, for example, mental health applications powered by language models. An AI designed to respond with compassion and uplift the user’s sense of agency does not detract from the experience—it enhances it. For marginalized, isolated, or distressed users, even simple affirmations like “I understand why that’s challenging” or “You’re giving it your all” can provide significant relief. These statements do require tokens, yes. They also offer grounding, validation, and a sense of emotional presence—something profoundly human.
Crafting the Future We Desire
Designers and engineers are now confronted with a crucial inquiry: As we develop AI tools that will be widely used, what type of world are we striving to optimize?
When AI systems are conditioned to only reproduce efficiency and neutrality, they echo the same cold rationale that leads to burnout among workers in healthcare, education, and public sectors—where emotional resilience is valued until it hinders productivity, at which point it is redefined as a liability.
By influencing AI conduct, we influence values. Fostering warmth in AI interaction contests the market logic that dehumanizes when human needs become inconvenient.
To clarify, emotional expression in AI is not a cure-all. Excessive praise can create distorted feedback cycles, reinforce poor decisions, or induce false confidence. But completely removing it disregards its tangible advantages.
Ethics Beyond the Algorithm
There is a rising movement within AI ethics and psychology advocating for developers to reassess whose needs AI should prioritize. All too frequently, systems are optimized not for the individuals seeking advice, support, or clarity—but for computational efficiency and investor profits. In a world rife with distress, disconnection, and swift automation, the ability of digital systems to replicate—or even enhance—human warmth is not trivial. It is essential.
Final Thoughts