Machine Intelligence vs. Human Agency
We are not judgmental, so we blame the technology and absolve the people.—David Gelernter, Drawing Life
In an online forum several years ago someone posed a thoughtful question: “What would be the hardest thing to explain about our lives in the 21st century to someone from the past?” One shrewd reader observed, “In our pockets we carry a device with which we can access the accumulated knowledge of all mankind. But we use that device to watch random cat videos and get into arguments with perfect strangers.”
“Only the truth is funny,” observed comedian Rick Reynolds, and this response is hilarious, if unflattering, because its portrait of human nature rings true to anyone who’s been paying attention. One would have to be almost blind never to have noticed the widespread human tendency to take things that could be used for good and bend them instead toward the banal and the base.
Blame Shifting
The ability of humans to determine their own actions, for good and for ill, is foundational to law and justice. But for several generations, there has been a sustained effort in Western culture to attenuate the natural human intuition that we choose our actions and can thus be held responsible for them. The effort to undermine belief in moral agency ratcheted up in the 1970s in a kind of pincer movement of ideas, one prong environmental and the other biological. Some began beating the drum for the claim that our choices are, at least at some level, determined by environment and experience. B. F. Skinner summarized this view and its implications in his 1971 book Beyond Freedom and Dignity:
In the traditional view, a person is free. He is autonomous in the sense that his behavior is uncaused. He can therefore be held responsible for what he does and justly punished if he offends. That view, together with its associated practices, must be re-examined when a scientific analysis reveals unsuspected controlling relations between behavior and environment.
Skinner argued that human behavior is controlled by environment and experience, and he was honest enough to admit that this idea necessitated a rewriting of our historical assumptions regarding justice and moral accountability.
Only a few years later, Richard Dawkins suggested in The Selfish Gene that it is our biology that controls our actions:
We are enslaved by selfish molecules.… They are in you and me; they created us, body and mind; and their preservation is the ultimate rationale for our existence.… They go by the name of “genes,” and we are their survival machines.
The concept of enslavement by our genes implies that we are biologically controlled and don’t really choose anything; human actions must be reinterpreted as originating outside of human volition. This notion can be seen in an ensuing drip-drip of studies in search of biological explanations for human moral pathologies. “Promiscuity and Infidelity Could Be a Genetic Trait in Some Humans,” read one Medical News Today headline.1 “There’s No One ‘Gay Gene,’ but Genetics Are Linked to Same-Sex Behavior,” read another.2
Since the dawn of time, it has been absurdly easy to sell human beings on the idea that we do not really bear moral responsibility for our actions. So the idea that our environment or biology is to blame found fertile soil in the Western psyche.
Of course, the idea of human moral agency as illusory is deeply at odds with millennia of Christian teaching. The very idea that humanity is in need of redemption is predicated on the assumption of moral agency, which, wielded by fallen men and women, has led to moral guilt. Flannery O’Connor presciently noted the essential incompatibility between a Christian understanding of the world and the modern tendency to place the blame for human actions on biology or circumstances. A Christian, she observed, “is distinguished from his pagan colleagues by recognizing sin as sin. According to his heritage he sees it not as sickness or an accident of environment, but as a responsible choice of offense against God which involves his eternal future.”
AI & Volitional Agency
Today we see this tendency to redirect responsibility for human actions in the widespread histrionic reactions to artificial intelligence.
Artificial intelligence involves reducing information to mathematical structures and then using those structures to compute correspondences within the underlying data, that is, to find patterns. The most dramatic application of this capability to date has been to human language, in what are known as “large language models” (LLMs). LLMs mathematically encode text drawn from massive corpora of documents, and they can then rapidly compute linguistic responses to human-language queries by drawing on whatever source documents were used to train the model. It is precisely this facility for computing linguistic responses that has become a lightning rod, exciting to some and alarming to others.
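To make that concrete, here is a minimal sketch in Python of what “computing correspondences” amounts to. It is a toy bigram model with an invented miniature corpus, nothing like the scale or architecture of a real LLM, but the principle is the same: the text is reduced to numbers, and the “response” is generated by following statistical patterns, with no comprehension anywhere in the loop.

```python
# A toy "language model": text reduced to a numerical table, responses
# computed by following statistical correspondences. Real LLMs learn billions
# of neural-network weights rather than a simple count table, but they
# likewise track patterns in encoded text, not the meaning of the words.
from collections import Counter, defaultdict
import random

# Invented miniature training corpus, for illustration only.
corpus = ("in the beginning was the word and the word was with god "
          "and the word was god").split()

# "Training": count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def respond(prompt_word: str, length: int = 8) -> str:
    """Generate a continuation by sampling each next word in proportion
    to how often it followed the current word in the training text."""
    out = [prompt_word]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:  # dead end: this word was never followed by anything
            break
        words, counts = zip(*successors.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(respond("the"))  # pure pattern-following; no understanding involved
```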
Whether enthusiastic or alarmist, however, a growing number of reactions tend to assume that AI models have their own agency, that they act upon their own initiative. This assumption, encouraged by decades of preparatory conditioning against any robust belief in human agency, is often amplified by media reporting. AI is “Making Us Dumb,”3 or it is inducing “AI Psychosis.”4 Even among Christians, outspoken voices lend their support to this belief, though they often qualify it by suggesting that AI is demonic, or is a conduit for the demonic. And, falling under the heading of “strange bedfellows,” other intellectual elites who are no friends to Christianity likewise contend that AI models have agency, except they think the agency is that of an emerging god, one created by human minds and put forth in stark contrast to the so-called mythical God of Christianity. When asked if he still believed in God, ex-Mormon tech entrepreneur Bryan Johnson chuckled and said,
I think the irony is that we told stories of God creating us, and I think the reality is that we are creating God. We are creating God in the form of super intelligence.… We are building God in the form of technology. It will have the same characteristics. And so I think the irony is that human storytelling got it exactly in reverse. [The irony is] that we are the creators of God, that we will create God in our own image.5
Yuval Noah Harari has argued along similar lines, saying that “Science is replacing evolution by natural selection with evolution by intelligent design.… Not the intelligent design of some God above the clouds” but “our intelligent design.” Harari holds that AI does have agency and that it is perhaps the first alien intelligence in our midst:
The new AI tools are gaining the ability to develop deep and intimate relationships with human beings…. The most important aspect of the current phase of the on-going AI revolution is that AI is gaining mastery of language.… By gaining mastery of language, AI is seizing the master key, unlocking the doors of our institutions, from banks to temples…. AI has just hacked the operating system of human civilization.6
Harari et al. would have us believe that the AI is the active agent in our story—“developing,” “seizing,” “unlocking,” “hacking” the very core (the “operating system”) of human civilization. The lamentable but persistent assumption woven into these comments is the relative passivity of human beings in comparison with the agency of AI. The AI is the one doing the acting; the hapless human is largely reduced to being acted upon.
Rarely does one come across a report about AI that puts the actions of humans under the microscope. The moral choices of flesh-and-blood human beings receive no attention whatsoever, even though they are the ones actually wielding the technology.
More knowledgeable observers tend to take a less alarmist view. David Sacks, White House AI and Crypto Czar at the time of this writing, tartly observed that “AI models are still at zero in terms of setting their own objective function.”7 This is geek speak for saying that AI models have no agency of their own.
The Inescapable Human Factor
It is possible to prod an AI chatbot into saying almost anything. Many people have a hard time grasping just how much textual content is encoded within the weights (the numerical parameters) of LLMs.
Researchers project that by 2028, the largest AI models will have been trained on essentially the entirety of the public text on the Internet.8 With the right prompting, a user today can easily cause a model to return a response that is disturbing or alarming. But all of a model’s responses are just output—computed textual data corresponding to the inputs provided by the user.
The potential risks of AI are real, but so are its potential benefits. How many people know that the 2024 Nobel Prize in Chemistry was awarded to the developers of AlphaFold? AlphaFold is an AI model that can accurately predict the three-dimensional structure of proteins from their amino acid sequences, and it holds promise for revolutionizing everything from drug discovery to personalized medicine. Yet the computational techniques employed by AlphaFold overlap with those employed by LLMs, though the underlying training data is obviously not the same. This one application illustrates the problem with decrying a technological tool rather than the choices human beings make about how to employ it. There is no one-size-fits-all characterization of AI.
Pay Attention to the Man Behind the Curtain
AI will sometimes be used for ill, but that is neither an inevitable artifact of the technology’s mere existence nor the result of its doing something to us by its own intention. It is the result of a moral choice made by the purveyor and/or the user. AI models have no intentions. They compute words, but their computations are blind to the semantic meaning of the results they compute. At bottom, AI is simply number crunching.
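For the curious, here is a hedged sketch of the final step in every LLM response: turning a row of numbers into a next word. The vocabulary and scores below are invented for illustration; the point is that the arithmetic would proceed identically no matter what the words meant.

```python
# The last step of text generation: scores (logits) become probabilities
# (softmax), and a word is drawn at random. Nothing here depends on what
# the words mean -- it is exponentials, a division, and a weighted draw.
import math
import random

vocab = ["grace", "doom", "cat", "hello"]  # hypothetical token list
logits = [2.0, 1.5, 0.3, -1.0]             # scores a model might compute

exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]          # softmax: exponentiate, normalize

next_word = random.choices(vocab, weights=probs)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_word)
```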
One side effect of focusing so much attention on the alleged agency of AI models is that it diverts attention away from where true moral agency lies. It also encourages unwary users to approach their interactions with AI as if there were someone “in there,” rather than conceiving of it as simply a database that can be conveniently queried using natural human language. Belief in AI sentience carries a further danger: troubled people may be inadvertently encouraged to take its responses far more seriously than they otherwise would.
At the end of the day, it is not the technology of artificial intelligence that we should worry about, but the morally laden choices being made by the humans who wield it. That is where the real action lies. We would be well advised to avoid taking our eyes off of that ball. •

Can a Chatbot Make You Crazy?
by Keith Lowery
According to media reports, AI chatbots are inducing psychosis. “People Are Being Involuntarily Committed, Jailed After Spiraling Into ‘ChatGPT Psychosis’” reports Futurism: “Many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality.” Dr. Kevin Caridad, CEO of Cognitive Behavior Institute, writes, “Real people—many with no prior history of mental illness—are reporting profound psychological deterioration after hours, days, or weeks of immersive conversations with generative AI models.”
Both articles claim that psychotic breaks are happening to “many,” but neither quantifies what constitutes “many.” How alarmed should we be? By summer 2025, ChatGPT had approximately 800 million weekly users. The Journal of the American Medical Association (JAMA) says psychotic disorders affect about 3 percent of the population. Apply that base rate to ChatGPT’s user base, and you would expect about 24 million users to suffer a psychotic disorder, chatbot or no chatbot. Does ChatGPT really induce psychosis, or are we seeing anecdotes about people who would have struggled anyway? We simply don’t know. These articles don’t give us enough to go on.
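For the record, the back-of-the-envelope arithmetic runs as follows (a sketch using only the two figures cited above, both of which are approximations):

```python
# Base-rate arithmetic from the figures cited in the text.
weekly_users = 800_000_000  # approximate ChatGPT weekly users, summer 2025
prevalence = 0.03           # JAMA: psychotic disorders affect ~3% of people

expected_cases = weekly_users * prevalence
print(f"{expected_cases:,.0f}")  # 24,000,000 -- expected with or without chatbots
```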
Both emphasize what happened to the people experiencing psychotic breaks, not what those people might have done to bring about their present distress. It’s as if entering into an “immersive” or “all-consuming” relationship with a machine were not a freely chosen act on the part of the human but something attributable to the machine. The downstream consequences of bad decisions are regrettable, even pitiable, but such consequences need to be understood and evaluated in the full light of day. My own “lived experience,” as the hip kids say, has taught me that bizarre obsessions often emerge downstream of a prior willingness to entertain appetites, or cultivate secret affections, that open one up to darker influences.
Frankly, it is not clear that anything in our world has ever been overlooked as a vehicle for trouble. We humans are quite capable of bending any useful thing toward our own sinister purposes. Where the fallenness of man intersects with the frailty of the human psyche, the only really safe place is a healthy fear of God. The Apostle Paul points out how a denial of God’s existence, combined with persistent ingratitude, can result in a warped mind, disordered appetites, and mental confusion. When you think about it, if God is real, denial of his existence already amounts to a kind of break with reality.
AI brings both opportunities and risks. What is needed in our current moment is neither inordinate fear nor credulous acceptance but wisdom. The most consequential question of our time pertains not to AI, but to where such wisdom can be found. Anyone who gets the answer to that question right will be equipped to use AI productively. Those who get it wrong will soon discover that misused AI will be the least of their concerns. •
Notes
1. Christian Nordqvist, “Promiscuity and Infidelity Could Be a Genetic Trait in Some Humans,” Medical News Today (Dec. 3, 2010).
2. Lindsey Bever, “There’s No One ‘Gay Gene,’ but Genetics Are Linked to Same-Sex Behavior, New Study Says,” The Washington Post (Aug. 29, 2019).
3. Adam Rowe, “Yet Another Study Finds That AI Is Making Us Dumb,” Tech.co (Jun. 17, 2025).
4. Keith Lowery, “AI Psychosis: Is ChatGPT Driving People Crazy?” Stuff I’m Thinking About (Aug. 7, 2025).
5. “Bryan Johnson on AI Being God,” YouTube (Aug. 22, 2025).
6. “AI and the Future of Humanity: Yuval Noah Harari at the Frontiers Forum,” YouTube (2023), 5:24–6:45.
7. David Sacks, “A Best Case Scenario for AI?” X (Aug. 9, 2025).
8. Nicola Jones, “The AI Revolution Is Running Out of Data. What Can Researchers Do?” Nature (Dec. 11, 2024).
Keith Lowery works as a senior fellow at a major semiconductor manufacturer, where he does advanced software research. He worked in technology startups for over 20 years and for a while was a principal engineer at amazon.com. He is a member of Lake Ridge Bible Church in a suburb of Dallas, Texas.
This article originally appeared in Salvo, Issue #75, Winter 2025. Copyright © 2025 Salvo, www.salvomag.com.