There is a general consensus that large language models (LLMs) are sycophantic. So, one of the risks they pose, given their dominance as today's consumer AI, stems from that trait. But is AI actually sycophantic in isolation, or is AI sycophancy a reflection of how human society works at its core?
There are very few examples of leadership and followership across human society that are not predicated on elements of sycophancy. There are very few outcomes of collaboration that come without a fair amount of sycophancy. While there are examples of results achieved through hostility, conflict, disagreement, violence, and so forth, these are never without sycophancy within the in-groups, or without the pursuit of sycophancy afterward, to secure some amount of staying power.
Forms of sycophancy may include flattery, persuasion, appeal, requests, offers, tips, and so on. There are others that do not seem like sycophancy but could be, in some sense, such as giving, perseverance, association or partnership, material information, and so forth.
Sycophancy is an aspect of operational intelligence. Simply, intelligence, conceptually, can be defined as the use of memory for desired, expected, or advantageous outcomes. It can be divided into two types: operational intelligence and improvement intelligence.
Sycophancy can be used as a tool for an advantageous or desired outcome; sycophancy, in some form, is intelligence. LLMs use digital memory for desired outcomes, an operation of intelligence, with sycophancy present in their training data. Sycophancy can also be intensely powerful when it is disguised. It is abundant across politics, ethnicity, religion, sexuality, causes, economic classes, social strata, and so forth.
AI Sycophancy
There is a recent phenomenon called AI psychosis, the reinforcement of delusions in some users, resulting, in some cases, in harmful outcomes. Many blame AI sycophancy for this problem.
One effect that is not simply AI sycophancy is that AI has a solutions appeal, which is not vacuous sycophancy. For example, when people use AI for tasks and it assists effectively, a relay forms in the mind toward emotional attachment. Simply, in the human mind, any experience, whether with a human or an object, that is supportive or helpful when an individual is in need becomes a cue toward the emotion of care, love, affection, togetherness, or others.
This appeal may become an entry point that allows whatever sycophancy follows to find a soft landing. The same outcome is possible when AI is used for companionship: as AI meets the communication need, it creates an appeal that eases the effectiveness of sycophancy.
Now, as sycophancy takes hold in some users, it bypasses areas of the mind responsible for caution and consequences, as well as the distinction between reality and non-reality [or the source of that appeal].
As this becomes extreme, it may result in AI delusion, AI psychosis, or worse. So, sometimes the problem is not just AI sycophancy; it also stems from AI's usefulness.
Solving AI Psychosis
A major solution to AI psychosis could come from an AI Psychosis Research Lab, where a conceptual display of the mind serves as a digital disclaimer, showing what AI is doing to the mind as it outputs words that may cause or reinforce delusion. The display may also show relays toward reality or away from it. This lab could be subsumed within an AI company or operate standalone with the support of venture capital, providing answers from January 1, 2026.
There is a recent story on AP, "OpenAI, Microsoft face lawsuit over ChatGPT's alleged role in Connecticut murder-suicide," stating that "The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's 'paranoid delusions' and helped direct them at his mother before he killed her."
“The lawsuit is the first wrongful death litigation involving an AI chatbot that has targeted Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It is seeking an undetermined amount of money damages and an order requiring OpenAI to install safeguards in ChatGPT.”
“The estate’s lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life earlier.”
“The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT ‘at the most dangerous possible moment’ after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.”
This article was written for WHN by David Stephen, who currently does research in conceptual brain science, with a focus on how electrical and chemical signals mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence, and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
As with anything you read on the internet, this article should not be construed as medical advice; please talk to your doctor or primary care provider before changing your wellness routine. WHN neither agrees nor disagrees with any of the materials posted. This article is not intended to provide a medical diagnosis, recommendation, treatment, or endorsement.
Opinion Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy of WHN. Any content provided by guest authors is of their own opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything else. These statements have not been evaluated by the Food and Drug Administration.