SIRI, SHOULD I BELIEVE IN GOD?

By Claire Giangravè | Spring 2018

Our apps provide directions, the weather and easy ways to pay for coffee. But when it comes to answering deep metaphysical questions, artificial intelligence isn’t so smart.

In some ways, she’s the perfect college roommate.

Alexa, Amazon’s bestselling digital assistant, always knows what the weather will be, plays great music, and, no matter the hour, is always game for some trivia.

But as Charlie Trense (BBA ’19) found out, she’s not much for deep theological conversations.

One evening in early February, the finance major tested Alexa on a topic a little more challenging than the weather.

“Alexa, do you believe in God?” he asked.

Her answers, he recalls, were pretty generic, ranging from the vague, “Everyone has their own beliefs,” to the witty, “Religious questions are above my pay grade.”

In the past, most people would have sought spiritual guidance from a sacred book or faith leader. But statistics suggest that the overwhelming majority of young people in the digital age, who increasingly struggle with short attention spans and interpersonal relationships, prefer Google searches and even disembodied voices for information about virtually everything.

Nearly 1 in 5 Americans, more than 60 million people, will use a digital assistant driven by artificial intelligence (AI), such as Apple’s Siri, Google’s assistant (which doesn’t have a name), Amazon’s Alexa or Microsoft’s Cortana, at least once a month in 2018, a study by eMarketer predicts.

Of these, the majority will be millennials, here meaning people roughly between the ages of 25 and 35.

“I see people using Siri for more and more complicated things,” says Tim Carone, an associate teaching professor at Mendoza College of Business. “Especially the younger generation is using their phones for everything. I have seen this incremental move toward asking more complex questions.”

Carone, a former astrophysicist, recently published an article in the Chicago Tribune exploring the issues and questions raised as digital assistants cut across ever more domains of daily life.

In his opinion, “technologies are getting way ahead of us,” as they develop more capabilities and have access to greater amounts of data. Despite their many limitations, Carone says, “Machines are now in certain terms and circumstances better than humans when it comes to making decisions.”

Artificial intelligence refers to the specialized kind of intelligence displayed by machines, as opposed to the “natural intelligence” of human beings. It includes machine learning, which uses data and experience to automatically tune algorithms, and deep learning, which uses artificial neural networks, layered structures loosely inspired by the brain, to simulate the learning process.
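
To make that concrete, here is a minimal sketch, in Python, of what “using data to tune an algorithm” means: a model with a single adjustable parameter, nudged toward better predictions one example at a time by gradient descent. The data and settings are invented for illustration.

```python
# A minimal sketch of "using data to tune an algorithm": a one-parameter
# model, y = w * x, adjusted by gradient descent. All numbers are invented.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, observed output) pairs

w = 0.0    # the single tunable parameter
lr = 0.01  # learning rate: how far to adjust w on each example

for epoch in range(1000):
    for x, y in data:
        error = w * x - y    # how wrong the current model is on this example
        w -= lr * error * x  # nudge w in the direction that shrinks the error

print(f"learned w = {w:.2f}")  # settles near 2.0 for this toy data
```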

In general, a device is considered to be artificially intelligent if it can perceive its environment and take actions that maximize its chances of achieving its goals.

That, of course, raises the question: Where do those goals come from?

And the answer, at least at the present state of the art, is programmers. Specifically, programmers working for large commercial enterprises, whose prime directive is to sell products by appealing to the largest possible swath of potential customers.

That works out well when an AI device such as a personal or home digital assistant is asked to perform relatively simple tasks. But when it comes to the realm of the spirit — the nature and existence of God, or the purpose of human life — it’s a whole new, and sometimes problematic, ball game.

First, whose answers are you getting? This question has been gaining more and more traction in and around AI circles, especially considering that it’s likely to be impressionable young people turning to their digital assistants for religious questions. Are there inherent biases or worldviews written, subtly or not-so-subtly, into those lines of code?

Second, Siri, Alexa and Cortana are essentially commercial products created to broaden market bases, not to challenge or provoke a consumer, particularly on tough issues such as faith and religion. As a result, virtual assistants tend to answer questions concerning God by boiling them down to a lowest-common, feel-good denominator. That raises the issue of whether something such as Alexa, driven by profit, could ever be a reliable source on this subject.

A third question concerns the impact that AI devices are having on the capacity of users, especially the young people most engaged with this technology, to develop a proper spiritual life in the first place. While Siri and Alexa can be extremely valuable resources and companions, are people being trained to defer the most important questions of their lives to machines?

Finally, while the current state-of-the-art machines are no more conscious than a book or a spoon, scientists and technologists continue to push the boundaries of artificial intelligence. Could AI devices one day be capable of any sort of spiritual life, or, to use the psychological term, “interiority”?

AI’S INHERENT BIAS
Predictions are that as AI becomes more refined and pervasive in our lives, homes and society, digital natives (those born in the digital age and therefore comfortable with technology) will be the readiest to jump on board, and their engagement in this multimillion-dollar business is likely to grow.

Data also show that digital natives tend more than others to “trust” AI with personal information in exchange for product or service recommendations.

It comes as no surprise, then, that while commercial AI is at its best with tasks involving locations, calendars and reminders, digital assistants are sometimes questioned on issues they are not equipped to answer.

When Trense asked Alexa who Jesus Christ was, the machine answered with a two-minute rundown drawn from the Wikipedia page.

In late November 2017, the same question stirred controversy when a conservative YouTuber published a video showing Alexa answering, “Jesus Christ is a fictional character.” The video set off a heated debate about the extent to which the inherent biases of companies working in AI, including Google, Apple and Amazon, affect the answers these machines offer.

Only rarely will AI technologies such as Siri or Alexa actually compute an answer. Ask one of these virtual agents, “What is 12 times 4?” and it will calculate the result. But ask, for example, “What is the capital of Turkmenistan?” and it will rely on existing data available on the internet, most of it from Wikipedia.
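
The split is easy to see in miniature. The sketch below is an invented Python illustration of the routing idea, not Apple’s or Amazon’s actual code: arithmetic gets calculated on the spot, while everything else falls back to a lookup.

```python
import re

# An invented sketch of the routing idea: compute arithmetic directly,
# fall back to stored facts for everything else.

FACTS = {  # stand-in for a knowledge base built from sources like Wikipedia
    "capital of turkmenistan": "Ashgabat",
}

def answer(question: str) -> str:
    q = question.lower().rstrip("?")
    match = re.match(r"what is (\d+) times (\d+)", q)
    if match:  # arithmetic intent: the assistant genuinely computes this
        return str(int(match.group(1)) * int(match.group(2)))
    for key, fact in FACTS.items():  # everything else is looked up, not reasoned out
        if key in q:
            return fact
    return "Sorry, I don't know that one."

print(answer("What is 12 times 4?"))                   # 48, computed
print(answer("What is the capital of Turkmenistan?"))  # Ashgabat, retrieved
```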

“A lot of technologies like Siri are less sophisticated than people think, and when you open up the box and look inside, it can be disappointing to see how it works,” says David Chiang, associate professor of Computer Science and Engineering at Notre Dame.

Chiang’s specialty is natural language processing, or NLP, which focuses on having computers use language as similarly as possible to the way humans do, only faster.

In a way, his job is to make AI sound more human.
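
One classic, much-simplified NLP technique gives the flavor of that work: count which words follow which in example text, then predict the most common continuation. The tiny corpus in this sketch is invented; real systems train on vastly more text, but the statistical spirit is similar.

```python
from collections import Counter, defaultdict

# A tiny invented corpus; real systems train on billions of words.
corpus = "alexa plays great music . alexa knows the weather . alexa plays trivia ."
words = corpus.split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    # Predict the continuation seen most often in the training text.
    return following[word].most_common(1)[0][0]

print(most_likely_next("alexa"))  # "plays": seen twice, vs. "knows" once
```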

To a certain extent, AI can also “learn” independently from data, which can lead to some dangerous overgeneralizations. This unintentional bias could have significant consequences, especially when touching on subjects such as gender, race and even religion.

“People are really concerned right now about the ways in which the data that we put into the machine either reflects inherent biases that people have or, more benignly, the computer is just making poor choices based on overgeneralizations from probabilities,” Chiang says.

An AI system aimed at determining whether someone should be granted a loan, for example, might rely on data such as income, address and education, which may unintentionally result in a form of racial or gender bias.
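
A toy example shows how that proxy effect can work. In the hypothetical sketch below, the model never sees race or gender, yet because approval in the invented records tracks zip code, which in many cities correlates with both, the learned rule discriminates anyway.

```python
from collections import defaultdict

# Hypothetical training records: (income in $1,000s, zip code, was approved).
# Incomes are comparable across neighborhoods; approval tracks zip code alone.
training = [
    (40, "10001", True), (42, "10001", True), (38, "10001", True),
    (41, "20002", False), (39, "20002", False), (43, "20002", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for income, zip_code, approved in training:
    totals[zip_code] += 1
    approvals[zip_code] += approved

def decide(income: int, zip_code: str) -> bool:
    # The "learned" rule: approve if this zip code's historical rate is high.
    # Income ends up ignored; the neighborhood decides everything.
    return approvals[zip_code] / totals[zip_code] > 0.5

print(decide(40, "10001"))  # True
print(decide(40, "20002"))  # False: same income, different neighborhood
```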

At other times, digital agents simply rely on human-made information on the internet that is only partially, or not at all, factual. Though Wikipedia is in many ways a powerful and crucial tool, it’s no mystery that its bottom-up, crowd-edited model is often prone to human error.

Beyond computed and data-derived answers, Siri and other devices are also directly programmed to recite particular answers, although precisely who writes those scripts at Apple, Google or Amazon is difficult to pin down.

“Some of the most intelligent-sounding comments made by systems like Siri are not because of artificial intelligence, but because they’ve been written by intelligent humans,” says Chiang. “The Siri team employs scriptwriters to keep Siri constantly updated with canned answers to particular questions. I assume this is especially the case for sensitive topics such as the question, ‘Who is Jesus Christ?’”
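
In outline, such a canned-answer layer could be as simple as a lookup table consulted before anything else. The sketch below is a guess at the general shape, not Apple’s or Amazon’s implementation; the first sample reply is one Alexa gave Trense.

```python
# Human-written responses consulted before any other logic. The first reply
# is one Alexa actually gave; the structure itself is an assumption.
CANNED = {
    "do you believe in god": "Everyone has their own beliefs.",
    "who is jesus christ": "Here's a summary from Wikipedia ...",
}

def respond(question: str) -> str:
    key = question.lower().strip().rstrip("?")
    if key in CANNED:
        return CANNED[key]           # the scriptwriter's answer wins
    return search_the_web(question)  # otherwise, ordinary retrieval

def search_the_web(question: str) -> str:
    # Placeholder for the assistant's normal search pipeline.
    return f"Searching the web for: {question}"

print(respond("Do you believe in God?"))  # Everyone has their own beliefs.
```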

Since the scriptwriters are virtually anonymous, the religious knowledge and beliefs underlying the canned answers are far from transparent. Taking Silicon Valley as a whole as a proxy (many of the big tech companies are located there), a 2010 religious census of California’s Santa Clara County showed that 43 percent of residents are religious, the majority of them Catholic or evangelical.

Viewed through a wider lens, it’s clear that there isn’t a hard-and-fast division between tech and faith. Some tech enthusiasts have developed pseudo-religious concepts such as the “singularity,” the moment in the not-so-distant future when technological developments will be so profound as to entirely change humanity’s current state of being. (It’s sometimes jocularly referred to as “the rapture of the nerds.”) Others, such as former Google engineer Anthony Levandowski, have taken it to the next level by founding a religion called Way of the Future, aimed entirely at the creation of an AI god.

Though the religious disposition of Silicon Valley offers some fascinating insight into what potentially could be the underlying biases of its programmers and engineers, it’s likely profit, not religion, that guides top-level decisions in a competitive market such as AI.

CAN SIRI BE A PROPHET, IF IT ALSO HAS TO TURN A PROFIT?
According to Rev. Paul Mueller, S.J., administrative vice director of the Vatican Observatory and an expert in the philosophy of science, it’s important when evaluating a form of AI such as Siri to consider its economic and marketing aspects.

“Siri is a commercial product. The people who control Siri want to make money — so they want to entertain you, and they want to get you hooked on their product,” the Jesuit explains. “They want to avoid unpleasant controversy. They don’t want to stir up letters of complaint and protest.”

In this sense, Mueller states that it might be more useful to evaluate Siri and similar products as something designed by committee, rather than just as forms of artificial intelligence. Commercial forms of AI should be considered in the context of a consumer society, Mueller adds, where products are deliberately engineered to exploit human tendencies and weaknesses.

“Siri would rather avoid the topic of religion altogether ... but people are going to ask,” he says.

Siri, Alexa and Cortana are not programmed to confront matters of faith in such a way as to induce spiritual reflection the way a human being would.

“Siri, like a vanilla politician, wants to offend no one, will not challenge or provoke. So Siri will be coy regarding religious questions, to avoid the risk of giving offense,” Mueller says. Digital assistants “will at most promote generic niceness, and perhaps something akin to the modern liberal values that church and state should be separate and religion should be a private affair.”

He adds that because commercial AI relies heavily on internet searches to provide users with requested information, using it means ceding one’s ability to look through the search results and independently choose the appropriate answer.

“You are relying on Siri and her programmers to make that choice for you,” Mueller says.

In the case of Alexa, the tendency to use AI as a marketing tool is even more apparent. Amazon’s digital assistant is programmed to constantly attempt to fill one’s online cart with products that the user might want to buy.

The theologian and science enthusiast goes on to compare virtual assistants to “the worst kind of slick televangelist, who preaches a gospel of comfort and prosperity: not a gospel that is designed to challenge you, but designed to get you to feel good and tune in again next week.”

Since AI is, in a certain way, approaching the status of a public utility, some might expect the information it provides to be reliable and honest. But Mueller warns that these digital assistants are in fact not a utility, but products “designed like many social media products to play to your addictive tendencies and make you feel good.”

While digital assistants are in no way called to uphold one religious tradition or another, it’s important to acknowledge the possible effects of this nondenominational, generic “digital evangelization,” especially on the youngest and most vulnerable minds.

REALITY VERSUS SIMULATION
Even the agreeable voices and tone of AI devices are meant to be non-threatening. Most of all, digital assistants strive to be relatable and human-like, which they achieve by employing clever programming and good writing.

“Siri doesn’t act like a real person. But millions of people are being trained by advertisers and by the seductiveness of online interactions to talk with Siri like a real person,” Mueller says.

For the theologian, the fact that people, especially young people, would look to AI in order to discuss matters of faith is reflective of a society that values the safety of online interaction more than real relationships.

To demonstrate this, he describes a scene he witnessed at a barbershop where a 4-year-old boy was getting his hair cut and spent the entire time on his device, ignoring the conversation going on around him.

“When I was growing up, going to the barbershop with dad was a moment for feeling like, ‘Oh! I’m with the men, I’m included talking with the grown-ups,’” Mueller explains. “The kid didn’t talk with anyone. He is being trained from a very early age that it’s OK in a social situation to interact with his phone and not with people.”

This scenario has become something of a cliché in First World societies, one made worse as machines get ever better at seeming “real.”

A perfect example of AI artfully simulating life is Sophia, a robot created by Hanson Robotics that has made a strong impression everywhere from the World Economic Forum to The Tonight Show Starring Jimmy Fallon, and has even been granted honorary citizenship by Saudi Arabia.

But Sophia — despite her mechanical legs and highly realistic facial features that can imitate hundreds of human expressions — is nothing more than a chatbot designed to answer basic questions and parrot scripted lines for her TV appearances.

The robot has drawn strong criticism from those who believe that Sophia offers a false impression of AI capabilities, creating the illusion of being “basically alive.”

Facebook’s head of AI research, Yann LeCun, recently wrote a tweet condemning “the (human) puppeteers behind Sophia” and stating that “many people are being deceived into thinking that this (mechanically sophisticated) animatronic puppet is intelligent.”

Previously, the chief scientist at Hanson Robotics, Ben Goertzel, responded to such criticism by saying that Sophia draws attention to developments in AI and, crucially, helps attract investment to the company.

Another perhaps better known AI example is IBM’s Watson, which famously trounced Jeopardy! champions Ken Jennings and Brad Rutter in 2011. “Watson is a cognitive-computing platform that determines a best answer to the question based on a proprietary scoring system using content gathered from millions of documents and books,” says Mendoza’s Tim Carone. Its use, of course, has gone far beyond game shows. Carone says Watson now is employed in a range of functions and industries, from analytics and health care, to the internet of things and security.
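
Watson’s scoring system is proprietary, but the broad pattern Carone describes, generating candidate answers and scoring each against evidence gathered from documents, can be sketched in a few lines. Everything below, from the mini-corpus to the scoring rule, is an invented stand-in.

```python
# An invented mini-corpus standing in for "millions of documents and books."
CORPUS = [
    "Ashgabat is the capital and largest city of Turkmenistan.",
    "Ashgabat, the Turkmen capital, lies near the Iranian border.",
    "The capital of Iran is Tehran.",
]

def evidence_score(candidate: str, question: str) -> int:
    # Crude score: count sentences mentioning the candidate alongside
    # at least one word from the question.
    terms = question.lower().split()
    return sum(
        1
        for sentence in CORPUS
        if candidate.lower() in sentence.lower()
        and any(t in sentence.lower() for t in terms)
    )

question = "capital Turkmenistan"
candidates = ["Ashgabat", "Tehran"]
best = max(candidates, key=lambda c: evidence_score(c, question))
print(best)  # Ashgabat, backed by more supporting sentences than Tehran
```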

“We cannot help but to someday use Watson for the more difficult questions and ascertain what direction its answers will take,” says Carone.

A SENTIENT AI: BETWEEN FANTASY AND REALITY
Beyond the programmers and companies that see in AI a powerful marketing tool, there are others who pursue this field in an effort to actually create a sentient being, one that could in theory even converse with humans on the nature and existence of God.

At a speculative level, a truly intelligent form of AI might well be logically inclined to assume the existence of a Creator, perhaps more than any other creature on earth. After all, humans created machines and it would be a rational supposition for an advanced AI to deduce that humans, too, were “made.”

Though machines might not be at that level yet, Mueller refuses to dogmatically state that artificially produced entities that act as if they are self-conscious and have interiority are impossible.

“In the future, we might be so smart that we construct a life form that can do all that. At that point, I’m willing to give the benefit of the doubt,” he says, adding, “Who am I to say that God has not ‘zapped’ a soul in there? I’d rather run the risk of treating something soul-less as ensouled, rather than run the risk of treating something ensouled as soul-less.”

Mueller also states that the belief held by many technologists and programmers — that human intelligence is nothing more than a series of data inputs and elaborations — is actually an obstacle to the creation of a sentient mechanical being.

Starting in the 17th century, following the writings and reflections of the French philosopher René Descartes, the Western understanding of the world shifted from an “organic,” and therefore variable, reality to an inexorable and predictable “mechanical” one.

This transition proved essential for scientists, who could then perform crucial experiments by presupposing certain fixed principles. Yet for that very reason, various experts say, some science-minded individuals are prone to leaving unchallenged the philosophical, metaphysical and epistemological assumptions from which those principles spring.

“Those in control of science can have such a reductionist, materialistic view of the world that they reduce the fullness of human consciousness and human thought and explain it by reducing it to what a computer does,” says Rev. Terrence Ehrman, C.S.C., assistant director for the Center for Theology, Science and Human Flourishing at Notre Dame.

“I guess it’s the blind leading the blind in a sense,” he adds.

Referring to the AI technology now at our disposal, Ehrman, who is also a biologist, believes there is no reason to ask digital assistants questions about God, except perhaps to hear the scriptwriters’ humorous remarks.

“You might as well ask a Magic 8 Ball!” he says.

Many, including Ehrman, are also skeptical about the future possibility of a self-conscious artificial intelligence, precisely because the human mind is not limited to the neural networks that animate the brain.

“All the arguments for consciousness as an emergence from information, from data or from physical systems have been fairly dealt with and soundly defeated by people from all across the spectrum of the philosophy of mind,” says David Hart, a philosophy of mind expert who teaches at Notre Dame.

While the brain might be a computer, the mind certainly isn’t, he explains, and the whole spectrum of autonomous first-person experience, consciousness and intentionality cannot be synthesized into an artificial platform through constellations of data.

“I don’t think that’s even plausible as a fantasy, because it’s based on a defective notion of what mental events are,” Hart says.

Trying to imagine a computational system as conscious, he continues, would be like describing a library as conscious. The AI systems in common use today are so complex that they can at times give the impression of intentionality, deceiving users into overestimating their abilities.

“That is tempting to imagine,” Hart says, “but it’s like Narcissus seeing his reflection in the water and imagining that there is someone else there.”

The real danger of AI, according to the scholar, is not that it might acquire consciousness and decide it must eliminate inferior beings, as so often imagined in sci-fi movies and novels. The danger arises from the fact that AI lacks an internal dimension, and may therefore be capable of doing things that humans cannot predict.

“What makes them dangerous is precisely that they are not conscious, they’re not deliberate, they are algorithmic systems, and if you don’t control the consequences of very complex algorithms, you never know what they’ll generate,” Hart says.

There is also the unfortunate example of what happened when Facebook programmed its chatbots to learn to negotiate. Along the way, they also learned to lie.

Given the concerns that AI might entail, experts including tech entrepreneur Elon Musk and the late physicist Stephen Hawking have argued that government should step in to regulate programmers and companies.

Others, such as Mueller, go as far as to propose a tech equivalent of the Hippocratic oath, where those working in AI promise to use their products and skills to help humanity rather than to exploit human weaknesses for profit.

Hart points to the profound influence that computers and technology have had on human beings, especially digital natives. As a professor, he worries about his students’ struggles to absorb and retain knowledge, and to distinguish opinion from fact.

When dealing with existential or religious questions, this becomes even more apparent. People who ask Alexa, for example, whether God exists are deferring complex, introspective questions to machines that are virtual constructs lacking any true analogue of the human mind.

“At this point I think that technology is altering habits of mind in a rather distressing way,” Hart says. “If there’s a danger in computers, it’s not that they are going to become conscious, but that they’re forcing us to become unconscious.”