Is Artificial Intelligence Actually Sentient? Here's What An Expert Says

Humanity has a love-hate relationship with the concept of artificial intelligence. On one hand, audiences love to see fictional robots grow beyond their programming. On the other hand, many creatives resent that generative internet algorithms have become so advanced they can create pictures and scripts. But maybe "create" is too strong a word.

There are all sorts of horror stories demonstrating the dangers of AI, but contrary to popular belief, AI programs can't produce anything truly new; they only recombine what already exists and regurgitate it without any real understanding of perspective or pacing. Less than a decade ago, our biggest fear regarding AI was that it would try to destroy its creators if it ever became self-aware. However, since modern AI programs don't show any true creativity, a new question has arisen: Are contemporary artificial intelligence programs even sentient? They're more capable than ever, with new AI iPhone apps and AI Android features coming out every day. But should we be worried?

It's the same question "Star Trek: The Next Generation" posed about Lt. Commander Data in the episode "The Measure of a Man," albeit without Brent Spiner in gold face paint or the philosophical debate over an android's autonomy and rights.

We asked an expert what they think defines sentience and whether contemporary AI meets those criteria. The answer paints a very telling picture of the current state of artificial intelligence.

Who we contacted

The internet has no shortage of AI tools and programs, with the most prominent examples including ChatGPT, CharacterAI, and Bard. But while these AIs are mostly self-sustaining (until they break and create disturbing digital cryptids), someone still had to make them. So we consulted one of those AI architects.

We asked Dev Nag to weigh in on AI programs. Nag graduated from Stanford University with a B.S. in Mathematics and a B.A. in Psychology, and he has spent almost his entire career as a researcher and computer engineer. He also founded several companies, including the cloud application management solution Wavefront Inc. (which was acquired by VMware) and QueryPal.

QueryPal is an AI-powered chatbot that automates the process of answering questions, especially the ones asked over and over in workplace team chats, thereby freeing up users to focus on their work. Like many chatbots, you can train your copy of QueryPal with your company's proprietary documents, reports, and chat histories, and the program customizes its answers based on what users feed it. While no two copies of QueryPal are alike, the AI still only provides answers derived from readily available information; it just makes finding those answers faster.
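
To make that concrete, here's a minimal sketch of the general retrieval pattern such tools are built around: index a set of company documents, then answer a recurring question by surfacing the closest stored snippet. The documents, names, and bag-of-words scoring below are hypothetical illustrations of the technique, not QueryPal's actual implementation or API.

```python
# A toy sketch of the retrieval pattern described above: index company
# documents, then answer a repeated question with the closest snippet.
# The documents and scoring here are hypothetical illustrations,
# not QueryPal's actual implementation or API.
import math
import re
from collections import Counter

COMPANY_DOCS = [
    "Expense reports are due by the 5th of each month.",
    "The VPN password rotates every 90 days; check the IT wiki.",
    "New hires get laptop access after completing security training.",
]

def vectorize(text: str) -> Counter:
    """Turn text into a bag-of-words count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two count vectors (0.0 to 1.0)."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(question: str) -> str:
    """Return the stored snippet most similar to the question."""
    q = vectorize(question)
    return max(COMPANY_DOCS, key=lambda doc: cosine(q, vectorize(doc)))

print(answer("When are expense reports due?"))
# -> Expense reports are due by the 5th of each month.
```

Note that nothing in this loop originates with the program: the answer is always a function of the documents it was fed and the question it was handed, which is exactly the point the article goes on to make.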

AI lacks subjectivity

In "The Measure of a Man," Commander Bruce Maddox (played by Brian Brophy) argues that to be sentient, an artificial intelligence like Data has to be conscious, self-aware, and intelligent. These are weak claims, as demonstrated by Captain Jean-Luc Picard (played by Patrick Stewart). Dev Nag's criteria are significantly stronger.

According to Nag, subjectivity is a "core principle" of sentience. As Nag put it, subjectivity "is having personal experiences and interpretations that go beyond what we sense, those internal feelings and memories that are called up." Subjectivity by definition differs from person to person; examples could include your favorite dog breeds, book genres, and songs. Nag also argued that the power of nostalgia runs on subjectivity, since the emotions you feel when you listen to your favorite song from your childhood are tied to your unique subjective experiences. While you might gush over the opening theme of "Batman: The Animated Series" because you watched it every Saturday morning as a kid, someone else might get nostalgic over the "Mighty Morphin' Power Rangers" theme song because they watched that show instead.

To Nag, subjectivity is rooted in how we "experience and interpret the world in our own unique and personal way." If it weren't for subjectivity, opinions wouldn't exist. In many sci-fi settings, fictional AIs make the leap to sentience when they become capable of subjectivity. Nag used "Avengers: Age of Ultron" as an example: the AI JARVIS doesn't become sentient until it can experience the world and form relationships in the body of Vision. Subjectivity also explains why the robots in "The Matrix" are hostile towards humans; people tried to wipe out the machines, so the machines responded in kind.

AI lacks agency

While "The Measure of a Man" is arguably the most famous "Star Trek" episode revolving around AI and sentience, the alien probes known as exocomps from Season 6, Episode 9 ("The Quality of Life") are another prominent example. The question behind exocomp sentience hinges on their ability to demonstrate problem-solving skills that go beyond their initial programming. Not so coincidentally, this falls into Dev Nag's second criteria for sentience.

In Nag's eyes, agency is the second "core principle" of sentience, and for good reason. He defined agency as "how we have control in our world, pursuing our own goals, indulging in hobbies, and making choices that go beyond stimulus response." This is why authors try to give every character they write agency; it makes them more realistic. Everyone in the real world has some level of agency, and authors wouldn't even be able to write believable stories without it.

You might wonder what it would be like to live without agency. According to Nag, part of the movie "Being John Malkovich" sums it up. In the film, everyone who enters a hidden portal can experience everything John Malkovich experiences, but nobody, not even the main character Craig Schwartz (played by John Cusack), can control him. They just get to watch for a bit before being ejected onto the New Jersey Turnpike. Many sci-fi movies portray AI becoming sentient when it gains agency, a sense of control over its own actions. Going back to Nag's use of "Avengers: Age of Ultron," the titular Ultron becomes sentient when it chooses to abandon its mission as a peacekeeping program and become a threat to humanity.

Why these are important

Now that we've gone over Dev Nag's criteria for sentience, we should explain why they're crucial. While subjectivity and agency are each important to our sentience on their own, together they are greater than the sum of their parts, which might be why sentience is so hard to create in AIs.

According to Nag, subjectivity and agency are two sides of the same coin. "Subjectivity allows us to be aware of the environment and our own beliefs/desires, so that our agency can act on these beliefs/desires," he said.

But subjectivity and agency on their own aren't the only marks of sentience. Nag elaborated that together, they "drive our sense of self and continuity." Without that sense of continuity, if you experienced the exact same stimuli five times, you might have five completely different reactions.

Every AI program currently available fails to meet Nag's criteria. Programs don't have agency because they only respond to our prompts; they can't write a letter or generate an image before we specifically ask them to, let alone in their spare time while waiting for us to type. AIs don't have any subjectivity either, since they lack the capability to experience anything. And of course, we probably don't have to explain that putting the same prompt into an AI writing or art program produces different results every time. You can generate an image with AI, but most programs are incapable of iterating on their previous work, and even the ones that can don't recognize their initial mistakes, let alone catch them on their own. Ergo, AIs are not sentient. Full stop.
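
That same-prompt, different-answer behavior isn't mysterious: generative models sample from a probability distribution over possible outputs rather than consulting a stable inner viewpoint. Here is a toy sketch of that idea; the vocabulary and probabilities are invented purely for demonstration and aren't how any particular product works.

```python
# Toy illustration of why identical prompts yield different outputs:
# generation is random sampling over next tokens, not the recall of a
# stable opinion. Vocabulary and probabilities below are made up.
import random

NEXT_TOKEN_PROBS = {
    "sunny": 0.5,   # most likely continuation
    "rainy": 0.3,
    "stormy": 0.2,  # least likely, but still possible
}

def sample_completion(prompt: str) -> str:
    """Append one token sampled at random from the toy distribution."""
    tokens = list(NEXT_TOKEN_PROBS)
    weights = list(NEXT_TOKEN_PROBS.values())
    return prompt + random.choices(tokens, weights=weights)[0]

# The exact same "stimulus" five times, five potentially different reactions.
for _ in range(5):
    print(sample_completion("The weather today is "))
```

In Nag's terms, that variability is the opposite of a continuous sense of self: the program has no persistent state tying one response to the next.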

Can AI become sentient eventually?

So there you have it: Modern artificial intelligence programs aren't sentient. But that doesn't mean they won't be in the future. Remember: Almost every AI in fiction, from the friendly exocomps to the malicious Skynet of "The Terminator," starts as a non-sentient program. They only become conscious because they advance beyond what their creators imagined, and the same is theoretically possible with real-world AIs given enough time.

Dev Nag ended his chat by reassuring us that not only are modern AIs non-sentient, they are a long way from becoming sentient. According to Nag, "a long list of theoretical and engineering questions" stands between AIs and the subjectivity and agency it takes to be sentient. After all, in 2023, marines reportedly fooled an AI detection system by hiding in cardboard boxes and disguising themselves as trees, all while laughing. Had the program played "Metal Gear Solid" or interacted with trees enough to know they don't giggle, it might have had the agency and sentience to detect the participants.

Should we be worried about a potentially sentient AI? Nag doesn't think so, but he does think we will be the authors of that chapter in digital autonomy. "There's no scientific reason (yet) to think that sentience is physically impossible with machines," he said. How we "manage the development of AI" will be the deciding factor. Companies are funneling billions of dollars into AI development, and some are even using preexisting AIs to aid in this process. Whatever the outcome, Nag certainly isn't losing any sleep over it, and neither should you.
