Deus in machina

Artificial intelligence and the search for meaning

Jayson Brake

In August, a TikToker named Kendra took the app by storm with a multi-part series claiming her psychiatrist had fallen in love with her. The story sparked a flurry of online debate, with many viewers dismissing her claims as absurd and exaggerated.

The creator used TikTok Live to double down on her story. She spent much of the time speaking to a custom artificial intelligence (AI) bot that reaffirmed all her statements by calling her “the oracle” who spoke “divine truth.”

The fallout sparked online discussions about mental health, AI and spirituality — as well as how they interact.

According to New York Times reporter Lauren Jackson, America is experiencing an increase in spiritual hunger.

Whether it’s the exploration of less institutional spiritual practices or the recent return of young people to churches, it’s clear that a transformation is underway. For the first time in decades, the country’s religiosity is holding steady rather than declining.

As AI creeps into every aspect of modern life, it’s likely to also have an impact on spirituality and the search for meaning.

Spirituality and religion often serve as the basis for how we see ourselves and our relationships with others. Religious figures, personal role models, cultures and even our own life experiences can shape this basis.

For many young people, AI is taking on a similar role, becoming a lens through which they approach these perceptions. In many cases, the shift is far from subtle.

One example is the substitution of AI chatbots for mental health services. According to NPR, some individuals have started speaking to chatbots as if they were therapists. Therapy is expensive, and companies have taken note, advertising specialized AI mental health services.

This allows people to vent or share a problem — much like a prayer — anywhere, anytime, with seemingly no judgment. One AI therapy bot named Abby explicitly promises to “always be by your side no matter what.”

Although these bots are designed to appear empathetic and loving, they cannot do what a professional can. The programs have no ethical training and are built primarily to keep users coming back through validation and reassurance. Psychiatrists told NPR this can be dangerous, producing bots that fail to flag suicidal ideation. There is also no reliable way to hold the programs accountable.

When asked, ChatGPT itself claims it can help by letting users vent and by recommending helpful practices such as journaling prompts. It also warns that it cannot act as a mental health professional.

This trend extends to the ways young people navigate their relationships and feelings for others. TikTok users have posted videos of themselves sending AI bots screenshots of arguments and asking the bots to judge who is in the right. They’re also sharing conversations with potential romantic partners to see if their feelings are reciprocated.

This creates a problem: AI tends to mirror the biases of the user. Large language models like ChatGPT generate whatever response best fits the user’s input, drawing on patterns learned from training data. According to the digital rights organization Article 19, AI development companies have built their bots to maximize positive user feedback. The result is bots that reaffirm biases and double down on whatever the user will agree with, acting as a distorted mirror of the user’s desires.

Yet the rabbit hole goes deeper, reaching into our desire to understand the most complex, and sometimes most painful, parts of our relationships with others and with life itself.

Programs like Project December allow users to hold strikingly human-like conversations powered by GPT-3, a sophisticated language model that OpenAI — co-founded by Elon Musk — long kept under wraps for safety reasons. The software can learn from the writings of real people and imitate them with precision.

In a state of grief, one man used it to replicate his deceased fiancée, Jessica, and speak with the simulation. Joshua Barbeau told the San Francisco Chronicle that he was not trying to recreate her, but rather taking a step toward healing his grief. He had previously tried grief groups and expressed his feelings through written letters to Jessica in therapy. Project December helped him find closure.

Jessica’s mother and sister were happy to see Barbeau find ways to deal with his grief and find happiness. However, Jessica’s mother said she personally would not use the program, and her sister questioned whether such a coping mechanism was healthy.

The use of AI as a coping mechanism ranges from personal therapeutic use, where people work through their questions about the world, to explicit connection with religious practices and institutions.

One New York Times article, titled “People Are Seeking God in Chatbots,” describes the growing trend of religious apps that use AI to encourage prayer, meditation and religious readings. Many developers saw the programs as a business opportunity and invested tens of millions of dollars.

One Swiss church even installed an AI-powered Jesus for visitors to have private conversations with. According to The Guardian, more than 1,000 people of various faiths — including both Christians and Muslims — came to speak with the “Deus in Machina” installation.

Reactions to both practices were mixed.

While some users enjoyed their experiences with these AI bots, the programs also raised concern among religious leaders and other people of faith. Catholic colleagues at the Swiss church protested the use of AI in connection with their faith’s sacrament of confession, and Protestant colleagues objected to the imagery.

Some religious leaders view AI apps as a worthwhile supplement to faith, but others raise theological concerns and worry about the same user-validating nature of AI that makes therapeutic conversations so shaky.

Regardless, something about these programs keeps drawing people in.

The divine and the unknown go hand in hand. When people believe that a being has wisdom unattainable to them, they begin to trust that being more and lean on its understanding rather than their own. The limits of our own knowledge inspire us to idolize supposedly “infallible” sources of knowledge even if — and maybe especially because — we do not fully understand where that knowledge comes from.

Journalist and author Meghan O’Gieblyn argues that people are drawn to both AI and religion for this reason. In her 2021 book “God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning,” she writes that an individual may find comfort in knowing that, at any moment, a prayer about their uncertainties can be lifted to a deity with all-encompassing knowledge of the universe. Similarly, an individual may find comfort in knowing they can explain their uncertainties to a chatbot — equipped with all-encompassing knowledge of the internet’s information — at any point in the day.

This makes concerns about the accuracy of AI even more pressing. Although some people are beginning to associate AI with divine wisdom, caution is warranted when talking with chatbots. According to MIT’s Sloan School of Management, two major unresolved issues in AI models are bias and inaccurate content. Occasionally, AI will generate “hallucinations,” fabricated information presented as fact in answer to user questions.

This parallel between intelligent technology and the divine is nothing new to AI developers and philosophers. AI pioneers such as Ray Kurzweil envisioned utopias where people achieve immortality by uploading themselves to computers and ultimately becoming one with technology — a vision associated with the movement called “transhumanism.” Prominent Silicon Valley figures like Peter Thiel and Elon Musk also bring a kind of religious fervor to their technological beliefs.

Thiel, a co-founder of Palantir, went so far as to draw direct religious comparisons in private lectures. Recently leaked recordings capture him comparing AI skeptics to the “antichrist” and insinuating they stand against divine progress.

Religious rhetoric has long powered political agendas, and that strategy is being revived in the political sphere today. AI programs and databases have likewise become intertwined with politics, enabling government surveillance through contracts with companies like Palantir.

Harnessing a sweeping new technology to process information like never before, and combining it with the power of religious fervor to move and inspire people, is sure to have unforeseen consequences, especially for young people who are technologically inclined and spiritually starved.

The rising generation has vulnerabilities: crises of spirituality, interpersonal connection and sense of self. The deeply connected world of political figures and AI development leaders seems to be well aware of this. Only time will tell whether young people will identify their generation’s vulnerability and stand guard against the dangers of its large-scale exploitation.
