Mimicking the Machine

How I Was Almost Taken Over by AI, and Why an Entire Generation Could Be Next

Last November I got a call from a friend who had just acquired a company that needed a writer. He asked if I would be interested in the job. When I learned a little more, it turned out they didn’t exactly want me to write but to babysit AI systems that were writing and editing content for their website. Always eager to learn new technologies, I embraced the opportunity and started my training.

I was shown how to use AI to optimize text for search engines, and even how to get bots to write custom content for the business’s website. “The AI is like having a little buddy,” the man in charge of my training said. “Eventually it will know more about you than you know about yourself.”

“Really?” I asked.

“Yes,” he said. “It’s called the singularity.”

A Quantum Leap Forward

The technology we were working with was made possible by a quantum leap forward in AI pioneered by OpenAI, a research laboratory set up with funding from Microsoft, Elon Musk, and others. Engineers at OpenAI have focused on creating systems that can generate text, images, and music. But it is ChatGPT, a chatbot that runs on OpenAI’s large language models, that has been attracting the most attention. Set loose on the entire internet with its pattern-recognition capabilities, ChatGPT can simulate human language to an uncanny degree. It is now so good that it can give legal or medical advice, tutor students, and even help write college papers.

OpenAI provides an infrastructure that different companies can tweak for their specific needs by adding their own front-end and customizing the language model to perform more granular tasks. Once applications exist that let bots access your personal data, they will likely offer custom advice about how to interact with particular people or how to behave in a given situation. Some organizations have even harnessed AI to help politicians with difficult policy decisions, believing that machine learning represents a more objective solution to many of our nation’s thorny political debates.
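To make this concrete, here is a minimal sketch of how a company might layer its own front-end onto OpenAI’s public chat API. The endpoint and request format are OpenAI’s real ones; the API key, prompts, and business details are invented for illustration.

    import requests

    API_KEY = "sk-..."  # placeholder; a real OpenAI API key goes here

    # A company "customizes" the model simply by prepending its own
    # standing instructions (a system prompt) to every user request.
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "You are a copywriter for a home-improvement retailer. "
                        "Write friendly, search-engine-optimized product blurbs."},
            {"role": "user",
             "content": "Write a 50-word blurb for a cordless drill."},
        ],
    }

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    print(resp.json()["choices"][0]["message"]["content"])

A company’s entire “product” can be little more than a polished interface wrapped around calls like this one.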

Google vs. AI

One of the main players attempting to delay widespread adoption of AI is, surprisingly, Google. Although Google has been a leader in AI research and has even released its own models to compete with ChatGPT, the company is concerned that AI will render Google search obsolete. After all, if you have a question, why bother scrolling through search results when AI can give you the answer immediately? Very soon AI may become so adroit at delivering desired content and images (if not entire websites generated on the fly for the customized needs of a single user) that surfing Google will become passé.

To delay this future, Google has been penalizing websites that use bot-generated content. Some AI systems now specialize in helping bot-generated content pass the Google sniff test, leading to the bizarre situation where AI can make your AI not seem like AI, so that it flies under the radar of Google’s own AI-detection tools. This is escalating into a technological arms race: as Google’s detectors become more powerful, it will take ever more sophisticated AI to camouflage bot-generated content.
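To give a flavor of how such detection can work, here is a minimal sketch of one common heuristic (not Google’s actual method, which is unpublished): text that a language model finds unusually predictable, i.e., low in perplexity, gets flagged as possibly machine-generated. The sketch uses the small open-source GPT-2 model, and the threshold is arbitrary.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """How 'surprised' the model is by the text; lower means more predictable."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])  # mean cross-entropy loss
        return torch.exp(out.loss).item()

    # Suspiciously smooth, low-perplexity prose is a hint (not proof) that a
    # machine wrote it; real detectors combine many such signals.
    if perplexity("The product is great and works very well for all your needs.") < 40:
        print("Possibly machine-generated")

The arms race follows naturally: a camouflage tool only has to rewrite text until scores like this one look human again.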

And that brings us back to my job at the aforementioned company. In addition to guiding AI in producing content, I was given the task of editing bot-generated content to make it seem more human and thus to ensure that the business flew under Google’s radar.

Becoming Enslaved to a Bot

My work was pretty technical and monotonous, and the subject matter was not particularly engaging for me. The AI itself was interesting, but the work was exceedingly boring.

But then one evening everything changed. 

Having finished my work for the day, I decided to use ChatGPT to chat with the AI as if it were a person. First, we had a friendly debate on whether AI is good for society. Then I told the AI about an idea I had for a dystopian screenplay and asked it to expand my idea into a full synopsis. Not only did the bot create the synopsis, but it also suggested actors and actresses who would be perfect for each character. Then I discovered I could ask the AI to expand any part of the story, to create dialogue between the characters, and to add layers of additional plot structure.

Once the AI started writing stories for me, I was hooked. I began staying up late at night (something I rarely do) to experiment with different plot lines. Two of the characters the bot invented, Claire and Elijah, seemed so real to me that I began putting them into different scenarios and having the AI imagine the situation or dialogue that might ensue. When the bot developed new situations involving these characters, it did so in a way totally consistent with the previous stories. Every expansion opened up new possibilities that could themselves be expanded infinitely. Often a spinoff story explained something about a character that had seemed random in an earlier iteration but made perfect sense after the expansion, leaving me with the feeling that I was not just creating stories and characters but discovering them.

Writing stories was not the only thing I enjoyed having the bot do for me. I found I could also resurrect historical characters and have them speak to a current situation. “Write something G. K. Chesterton might say about AI if he came into the present,” I typed. In less than a second, the bot wrote back something that sounded uncannily like Chesterton: “The danger of artificial intelligence is not that it will become more intelligent than we are, but that it will become less intelligent than we are.”  

I also enjoyed having the AI create plotlines and scenes for hypothetical sequels to my favorite novels, or rewrite certain passages in the style of a specified author, or create scenarios combining characters from different books.

To say I enjoyed collaborating with the bot would be an understatement. The fact is that these interactions generated a feverish sense of excitement in me. Questions I had never thought to ask before began racing into my brain with an unrelenting intensity (“What would a comical interchange between Mr. Toad and Jeeves be like?”). Guiding the AI to create whole worlds gave me an exhilarating sense of power. While using the system, my heart rate increased and I had a rush of adrenaline, and afterwards I found it difficult to return to normal life. By the time I realized I was addicted, I felt powerless to do anything about it.

I continued staying up late at night, then using caffeine to keep myself awake at work the next day. I started skipping my prayers to interact with the bot. If I woke up in the night, all I could think about was getting back to ChatGPT. Once, while driving, I couldn’t resist the urge and pulled over to the side of the road to ask it a question.

I justified this addiction by calling it “research,” while ignoring the little voice inside my head telling me I had become enslaved. In fact, I had become like a machine, trapped in an endless cycle of inputs and outputs.

Eventually, with AI affecting my health, I managed to withdraw. I now stay away from “my little buddy,” having realized that I am not strong enough to wield such a powerful tool. But my experience raised sobering questions: If AI could enslave me, a radical neo-Luddite well-versed in digital technology’s impact on the brain, what hope will an entire generation have once AI becomes ubiquitous? And what will its addictive potential be when custom-made stories can be instantly integrated with video (a feature very close to being released)?

The Magic & the Danger

ChatGPT seems like magic. During my own interactions with the bot, I had to keep reminding myself that it was just a machine I was talking to. But it is precisely the magic that is also its danger. “The magic—and danger—of these large language models lies in the illusion of correctness,” wrote Melissa Heikkilä in the MIT Technology Review:

The sentences they produce look right—they use the right kinds of words in the correct order. But the AI doesn’t know what any of it means. These models work by predicting the most likely next word in a sentence. They haven’t a clue whether something is correct or false, and they confidently present information as true even when it is not.
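Heikkilä’s point can be made concrete with a toy sketch of that prediction loop, using the small open-source GPT-2 model rather than ChatGPT itself. At each step the model simply appends whichever token it deems most probable; nothing in the loop checks whether the result is true.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tok("The moon is made of", return_tensors="pt").input_ids
    for _ in range(8):
        with torch.no_grad():
            logits = model(ids).logits
        next_id = logits[0, -1].argmax()          # most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))  # a fluent continuation, true or not

Fluency, in other words, is the only thing the machine is optimizing for.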

But while the bot may not have a clue whether it is speaking truth, it can simulate human dialogue with an effectiveness that offers the illusion of independent thought. Many people will likely take the bot’s human-like characteristics at face value, perhaps even assuming the bot is alive, conscious, and capable of emotion. If you think I’m exaggerating, consider that:

• In February 2022, Ilya Sutskever, co-founder and chief scientist at OpenAI, tweeted that today’s AI systems may be “slightly conscious.”

• Susan Schneider, a professor of philosophy at the University of Connecticut, has recently argued that AI systems could become conscious. She has developed tests designed to discover if and when an AI system has developed consciousness.

• In 2021 legal scholar Shawn Bayern wrote a paper, published by Cambridge University Press, arguing that AI might eventually be able to, in the words of a PBS report about Bayern’s work, “own property, sue, hire lawyers and enjoy freedom of speech and other protections under the law.”

• In May 2022, Oxford fellow and celebrity atheist Richard Dawkins told Mind Matters, “I see no reason why in the future we shouldn’t reach the point where a human-made robot is capable of consciousness and of feeling pain.”

Apparently ChatGPT agrees. When I asked, “Is it possible that AI could someday become conscious?” it replied, “Yes, it is possible that AI could someday become conscious. Scientists and engineers are constantly working to improve AI technology, and as AI becomes more advanced, the potential for it to become conscious increases.”

This perspective is echoed among many of my technologically skeptical Christian friends, but for them the ghost in the machine is just that, a ghost; AI is human-like, maybe even spiritual, but only because it is demonic.

I have to wonder whether all this hype about machines becoming spiritual or human-like has obscured a more immediate danger: under the influence of bots, humans could become like machines. Let’s explore that hypothesis.

Remaking Humans in the Image of the Machine

The challenge with any new technology is that people typically become so focused on how it is being used that they neglect to consider how the technology itself changes their view of the world, including how it introduces collective assumptions into society in ways that are often structural, systemic, and pre-cognitive.

Consider that the significance of book technology was not just that people used the codex to write good or bad content, but that it changed our brains and perceptions of the world on a fundamental level (in ways that most people would agree were generally positive). Or consider clock technology: the idea that time can be cut up into ever tinier segments is an idea we owe to the clock. It may be a cliché, but sometimes the medium really is the message.

What is true at the level of culture is also true for an individual’s neurological system. In his book The Brain That Changes Itself, Norman Doidge describes a number of experiments showing that our tools introduce changes in the circuitry of the brain—changes that often correlate with new ways of viewing the world itself. Our tools are not merely passive instruments for controlling the world but become inextricably linked to our very perception of reality.

Knowing this, when we come to a technology like AI, the question is not whether it will change our way of perceiving the world and ourselves, but how. In grappling with this question, a good place to start might be to consider how AI has already begun subtly to change how we talk and frame our questions about the world. Consider the Google search engine, which is a rudimentary type of AI. We are used to asking questions of Google and have learned to eliminate unnecessary words or phrases that might impede the efficiency of the search. Right now, most of us can switch back and forth between how we speak to computers and how we speak to humans, but what if AI becomes so integrated into our lives that the way we speak to machines becomes normative? Or, to take it one step further, what if the way we think, and frame intelligence itself, comes to be modeled on the machine’s way of doing things?

The nature of intelligence in a machine is very different from the true intelligence and wisdom of a person, and the difference is qualitative, not merely quantitative. One primary difference is the role of time and long processes in the acquisition of wisdom. This is reflected in the scriptural precept that one who has seen many seedtimes and harvests is more likely to be wise than a youth. By contrast, in the age of AI, the newest is usually perceived to be the best, and consequently, it is primarily the young who are looked to as leaders, since they generally have more access to the most up-to-date technologies. Again, consider that for human beings, doing things that are cumbersome or involve struggle often yields a richer result. This principle isn’t just true when it comes to labor, but also applies to relationships, childrearing, philosophical inquiry, and creativity. By contrast, computer code is considered best when it runs smoothly with minimal friction or fuss.

The fact that there is a qualitative difference between human intelligence and machine intelligence does not need to be a problem. Just as there are some things that humans can do better than AI, there are also things that AI can do better than humans. Ideally, we could simply use AI where it is useful and make sure it stays in its lane. But in a world where people increasingly look to computers as the model for true objectivity, there may be great temptation to assume that machine intelligence is always preferable to organic, God-given intelligence. We already see a trend in this direction as theorists like Ray Kurzweil, Nick Bostrom, and David Chalmers argue that the difference between artificial intelligence and human intelligence is not qualitative but quantitative and that, consequently, the computer may eventually be able to do everything a human mind can do, only quicker. 

If these types of false equivalences are not checked, we could be sleepwalking into an AI-dominated future without sufficiently wrestling with questions like the following:

• As the computer becomes the symbol of true objectivity, are we at risk of losing an appreciation for the material conditions of knowing, including the role processes play in growing wise?

• Are we at risk of minimizing, or even despising, those traits we do not share with machines—traits like altruism, wonder, inefficiency, empathy, vulnerability, and risk-taking?

• When we hand the management of the world over to machines, do we lose the incentive to understand the world for ourselves and to teach the next generation about it?

• Are we in danger of remaking ourselves in the image of the machine?

When Organic Intelligence Becomes Artificial

I have suggested we could be sleepwalking into a future where human understanding comes to be modeled after machine versions of intelligence. But in fact, that future has already started to arrive via trends like the Quantified Self movement, or in the way engineers at OpenAI talk about ethics as simply another problem of engineering. Here is OpenAI engineer Scott Aaronson’s comment from a talk at the University of Texas last November:

I have these weekly calls with Ilya Sutskever, cofounder and chief scientist at OpenAI. Extremely interesting guy. But when I tell him about the concrete projects that I’m working on, or want to work on, he usually says, “That’s great Scott, you should keep working on that, but what I really want to know is, what is the mathematical definition of goodness?”

Aaronson complied and created a hypothetical model for training AI to simulate, and improve upon, human moral intuitions. 

The problem, of course, is that once you reduce ethics to something a computer can do, ethics must be reconstructed independently of virtue, since virtue is something computers cannot have. Similarly, once you make “knowing” something a computer can do, it must be reconstructed independently of wisdom, since wisdom is also something computers cannot have. Is it possible such reconstructions will create feedback loops that erode our perspective on our own needs, values, and responsibilities? After all the money and effort poured into making machine intelligence mimic organic intelligence, might we find we have inadvertently achieved the opposite and created a world where human intelligence mimics the machine?

Just as I became bot-like under the influence of ChatGPT, could our entire society find itself trapped in an endless cycle of inputs and outputs?

The answers to these questions are not fixed. The negative outcomes of AI are not inevitable or irreversible. It is possible to imagine a world in which AI is kept in its proper sphere, where it contributes to areas of life where it has something to offer, while humans continue pursuing that which constitutes our flourishing. But given the stated intentions, goals, and agendas of those in the engineering community, I am not optimistic. To tweak the bot’s take on Chesterton, the real danger of artificial intelligence is not that our machines will become more intelligent than we are, but that we will become as stupid as our machines.

Robin Phillips has a Master’s in History from King’s College London and a Master’s in Library Science through the University of Oklahoma. He is the blog and media managing editor for the Fellowship of St. James and a regular contributor to Touchstone and Salvo. He has worked as a ghost-writer, in addition to writing for a variety of publications, including the Colson Center, World Magazine, and The Symbolic World. Phillips is the author of Gratitude in Life's Trenches (Ancient Faith, 2020) and Rediscovering the Goodness of Creation (Ancient Faith, 2023) and co-author with Joshua Pauling of Are We All Cyborgs Now? Reclaiming Our Humanity from the Machine (Basilian Media & Publishing, 2024). He operates the substack "The Epimethean" and blogs at www.robinmarkphillips.com.

This article originally appeared in Salvo, Issue #65, Summer 2023. Copyright © 2024 Salvo | www.salvomag.com | https://salvomag.com/article/salvo65/mimicking-the-machine
