Sentient AI? Convincing you it’s human is just part of LaMDA’s job

As any great illusionist will tell you, the whole point of a staged illusion is to look utterly convincing, to make whatever is happening on stage seem so thoroughly real that the average audience member would have no way of figuring out how the illusion works.

If this were not the case, it would not be an illusion and the illusionist would essentially be without a job. In this analogy, Google is the illusionist and its LaMDA chatbot – which made headlines a few weeks ago after a top engineer claimed the conversational AI had achieved sentience – is the illusion. That is to say, despite the surge of excitement and speculation on social media and in the media in general, and despite the engineer’s claims, LaMDA is not sentient.

How could AI sentience be proven?

This is, of course, the million-dollar question – to which there is currently no answer.

LaMDA is a language model-based chat agent designed to generate fluid sentences and conversations that look and sound completely natural. That fluidity stands in stark contrast to the awkward, clunky AI chatbots of the past, which often produced frustrating or unintentionally funny “conversations,” and it is perhaps this contrast that, understandably, impressed people so much.

Our normalcy bias tells us that only other sentient human beings can be this “articulate.” Thus, when we witness this level of articulateness from an AI, it is natural to feel that the AI must surely be sentient.

In order for an AI to truly be sentient, it would need to be able to think, perceive, and feel rather than simply use language in a highly natural way. However, scientists are divided on the question of whether it is even feasible for an AI system to be able to achieve these characteristics.

There are scientists, such as Ray Kurzweil, who believe that a human body consists of several thousand programs and that, if we could just figure out all of those programs, we could build a sentient AI system.

But others disagree, on the grounds that 1) human intelligence and functionality cannot be mapped to a finite number of algorithms, and 2) even if a system replicated all of that functionality in some form, it could not be considered truly sentient, because consciousness is not something that can be artificially created.

Aside from this split among scientists, there is as yet no accepted standard for proving the purported sentience of an AI system. The famous Turing Test, currently getting many mentions on social media, is intended only to measure a machine’s ability to display apparently intelligent behavior that is on a par with, or indistinguishable from, that of a human being.

It cannot tell us anything about a machine’s level of consciousness (or lack thereof). Therefore, while it is clear that LaMDA has passed the Turing Test with flying colors, this in itself does not prove the presence of a self-aware consciousness. It proves only that LaMDA can create the illusion of possessing a self-aware consciousness, which is exactly what it has been designed to do.

When, if ever, will AI become sentient?

Currently, we have several applications that demonstrate Artificial Narrow Intelligence. ANI is a type of AI designed to perform a single task very well. Examples of this include facial recognition software, disease mapping tools, content recommendation filters, and software that can play chess.

LaMDA falls under the category of Artificial General Intelligence, or AGI – also called “deep AI.” That is, AI designed to mimic human intelligence and apply that intelligence to a variety of different tasks.

For an AI to be sentient, it would need to go beyond this sort of task intelligence and demonstrate perception, feelings, and even free will. However, depending on how we define these concepts, it’s possible that we may never have a sentient AI.

Even in the best-case scenario, it would take at least another five to ten years – and that is assuming we could define the aforementioned concepts, such as consciousness and free will, in a universally standardized, objectively characterized way.

One AI to rule them all … or not

The LaMDA story reminds me of the AI, aptly named Massive, that filmmaker Peter Jackson’s production team created to put together the epic battle scenes in the Lord of the Rings trilogy.

Massive’s job was to vividly simulate thousands of individual CGI soldiers on the battlefield, each acting as an independent unit rather than simply mimicking the same moves. In the second film, The Two Towers, there is a battle sequence in which the film’s bad guys bring out a unit of giant mammoths to attack the good guys.

As the story goes, while the team was first testing out this sequence, the CGI soldiers playing the good guys, upon seeing the mammoths, ran away in the other direction instead of fighting the enemy. Rumors quickly spread that this was an intelligent response, with the CGI soldiers “deciding” that they couldn’t win this fight and choosing to run for their lives instead.

In actuality, the soldiers were running the other way due to a lack of data, not due to some kind of sentience that they’d suddenly gained. The team made some tweaks and the problem was solved. The seeming demonstration of “intelligence” was a bug, not a feature. But in situations such as these, it is tempting and exciting to assume sentience. We all love a good magic show, after all.
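To see how a gap in the data can masquerade as a decision, consider the purely hypothetical sketch below. It is not Massive’s actual code; the enemy names, threat table, and courage threshold are all invented for illustration. An agent whose lookup table has no entry for a new enemy type falls through to a default “flee” branch – which, to an onlooker, reads like a tactical choice.

```python
# Hypothetical illustration only (not Massive's real implementation): a crowd
# agent whose reaction rule depends on a threat lookup table. When an enemy
# type is missing from the table, the fallback branch sends the agent fleeing,
# which can look like a deliberate judgment that the fight is unwinnable.

from dataclasses import dataclass

# Assumed threat table: enemy type -> estimated threat level (0.0 to 1.0).
# "mammoth" is deliberately absent, standing in for the untested unit type.
THREAT_TABLE = {"orc": 0.4, "uruk-hai": 0.6}

@dataclass
class Soldier:
    courage: float  # threshold above which the soldier will not engage

    def react(self, enemy_type: str) -> str:
        threat = THREAT_TABLE.get(enemy_type)
        if threat is None:
            # No data on this enemy: the default branch makes the whole
            # crowd run, even though no "decision" was ever made.
            return "flee"
        return "fight" if threat <= self.courage else "flee"

if __name__ == "__main__":
    soldier = Soldier(courage=0.7)
    print(soldier.react("orc"))      # fight: known, manageable threat
    print(soldier.react("mammoth"))  # flee: unknown enemy, missing data
```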

Being careful what we wish for

Finally, I believe we should ask ourselves whether we even want AI systems to be sentient. We have been so wrapped up in the hype over AI sentience that we haven’t sufficiently asked ourselves whether this is a goal we should be striving for.

I am not referring to the danger of a sentient AI turning against us, as so many dystopian science fiction movies love to imagine. It is simply that we should have a clear idea of why we want to achieve something, so that technological advancements stay aligned with societal needs.

What good would come out of AI sentience, other than it being “cool” or “exciting”? Why should we do this? Who would it help? Even some of our best-intentioned uses of this technology have been shown to have dangerous side effects when we fail to put proper guardrails around them – such as a language model-based medical Q&A system advising a user to commit suicide.

Whether it’s healthcare or self-driving cars, we are far behind the technology when it comes to understanding, implementing, and using AI responsibly, with societal, legal, and ethical considerations in mind.

Until we have enough discussions and resolutions along these lines, I’m afraid that hype and misconceptions about AI will continue to dominate the popular imagination. We may be entertained by the Wizard of Oz’s theatrics, but given the potential problems that can result from these misconceptions, it is time to lift the curtain and reveal the less fantastic truth behind the show.

Dr. Chirag Shah is an associate professor at the Information School at the University of Washington.
