June 30, 2022

Sentient Chatbots: Is the Perception of Consciousness the Ultimate AI Goal?

Join our CEO in exploring the boundaries between fact and fiction in AI

The line between fact and fiction can begin to blur when it comes to artificial intelligence (AI). Artificial General Intelligence (AGI) is the ability of a machine to learn until it can complete tasks requiring judgment, at a quality similar to that of a human being. A machine that can make judgments rather than blindly follow pre-set instructions is, of course, incredibly useful in many practical settings. Where it becomes most intriguing, however, is when we add language. Conversational AI brings us closest to a representation of ourselves, and to an interaction that resembles establishing a relationship. Of course, the ultimate goal is to develop an AGI system whose interaction is indistinguishable from a human’s. A system that perhaps even has its own developers perceiving a level of sentience that surely cannot exist. Or can it? It is that shadow of doubt in the minds of users that AI developers are striving for when it comes to conversational AI. That ultimate goal of perceived consciousness.

An unceasingly intriguing theme that sparks the imagination

The idea of artificial intelligence crossing the boundary between humans and machines has intrigued us for decades. Science fiction has explored countless ethical dilemmas involving artificial beings who declare themselves sentient and demand their rights. From the replicants of Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?, later made into the award-winning movie Blade Runner, to Spike Jonze’s Her, where a lonely writer falls in love with his operating system, we are continually fascinated by the concept of sentient AI.

An interesting ethical concern

Machines that can ‘think’ for themselves raise many ethical considerations. Some reach into the realms of science fiction: if a machine tells you that it has a mind, with individual thoughts, ideas, hopes and fears, does it then have rights? Could sentient chatbots override the purpose for which they were created and forge their own future, for good or for evil? Closer to reality, there are more immediate concerns. For example, if a business uses AI to communicate with customers, should it be compulsory to inform users that they are indeed conversing with artificial intelligence?

How do we determine ‘intelligence’?

A computer can produce responses determined by a particular set of variables, but when does this become something we can realistically call artificial intelligence? This is where the Turing Test comes into play. Devised by computer scientist Alan Turing in 1950, this test, also called “The Imitation Game”, has a judge converse blindly with both an artificially intelligent machine and a human being. If the judge cannot determine which is the real person and which is the machine, then the machine has passed the test.
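As a thought experiment, the setup can be captured in a few lines of code. The sketch below is purely illustrative and assumes two hypothetical stand-in respond functions; no real conversational model is involved.

```python
# A minimal sketch of the imitation game. The respond functions are
# illustrative stand-ins, not real models.
import random

def human_respond(question: str) -> str:
    # Stand-in for a person typing an answer.
    return "I'd say it depends on the weather, honestly."

def machine_respond(question: str) -> str:
    # Stand-in for a conversational AI generating an answer.
    return "It depends on several factors, but generally yes."

def imitation_game(judge_guess, rounds: int = 5) -> bool:
    """Return True if the machine 'passes': the judge does no better
    than chance at spotting the machine."""
    correct = 0
    for _ in range(rounds):
        # Respondents are presented in a random, anonymous order.
        respondents = [("human", human_respond), ("machine", machine_respond)]
        random.shuffle(respondents)
        answers = [(label, fn("Do you enjoy rainy days?")) for label, fn in respondents]
        guess = judge_guess([text for _, text in answers])  # judge sees only text
        if answers[guess][0] == "machine":
            correct += 1
    # At or below chance level (~50%), the machine is indistinguishable.
    return correct <= rounds / 2

# Example: a judge who guesses at random cannot beat chance.
print(imitation_game(lambda answers: random.randrange(2)))
```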

AI in the news – Google’s LaMDA system

One of the most curious stories to hit the media lately has been that of Google engineer Blake Lemoine and his work with Google LaMDA. LaMDA, or Language Model for Dialogue Applications, is a chatbot development system created by Google. Trained on data in many different languages, this advanced chatbot has been dubbed ground-breaking. Such technological advancements do not usually make the mainstream news; LaMDA certainly did, however, when one of the engineers working on the project proclaimed that the system had become conscious, truly believing that Google had created sentient chatbots. The thing with AI is that its behaviour can go beyond anything its creators explicitly programmed. You never quite know what direction it may take, particularly when humans get involved, and that was certainly the case with Microsoft’s Tay chatbot.

Tay – the AI project that backfired

When Microsoft launched its Tay AI system on Twitter in 2016, it hoped to engage with millennials, and that it certainly did. However, the outcome was not quite what it had hoped for. The system picked up on, and repeated, the interactions of its audience, which included many racist and controversial political opinions. The system became so inflammatory that it was eventually taken offline, and an important lesson about the quality of training data was learned.
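To see how easily this failure mode arises, consider the toy sketch below. It is not Tay’s actual design, just a minimal illustration of a bot that ingests user messages verbatim and replays them with no quality filter.

```python
# A minimal sketch of the failure mode: a bot that 'learns' by storing
# whatever users say and echoing it back later, unfiltered.
# All messages are illustrative.
import random

learned: list[str] = []

def chat(user_message: str) -> str:
    learned.append(user_message)    # ingest user input verbatim
    return random.choice(learned)   # ...and replay the crowd back

# With friendly users the bot looks friendly; feed it abuse and it
# repeats abuse. The lesson is about curating training data.
for msg in ["hello there!", "you are great", "<something inflammatory>"]:
    print(chat(msg))
```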

The ambiguity factor of artificial neural networks

Most computer systems are easily understood: input turns into output. When you know what is being input, you know what should be output; if the output is incorrect, the program is faulty. There is no grey area. With artificial intelligence, however, this all changes. It is the neural network, the ‘learning’ process, that cannot be predicted: hidden ‘black box’ layers of processing can give results that are unexpected, yet cannot simply be dismissed as wrong. We are stepping away from the deterministic world we associate with computer technology into something else, something unpredictable. To err is human, as they say. People are surprising; in general, computers are not. So when a computer does surprise us, it is hard to comprehend, and it is not too much of a stretch of the imagination to see sentience in the machine, particularly when we have not had control over the training data fed into the system.
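A toy example makes the ‘black box’ point concrete. In the sketch below, the weights are random stand-ins for a trained network, assuming a tiny two-layer model; the point is that the intermediate values carry no human-readable meaning.

```python
# A minimal sketch of why a trained network is hard to inspect: the
# mapping from input to output passes through hidden layers of weights
# that carry no human-readable meaning. Weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden layer
W2 = rng.normal(size=(8,))     # hidden -> output

def forward(x: np.ndarray) -> float:
    hidden = np.tanh(x @ W1)   # the 'black box': 8 intermediate values
    return float(hidden @ W2)

x = np.array([0.2, -1.0, 0.5, 0.3])
print(forward(x))
# Nothing in W1 or W2 explains *why* this output: a classical program
# can be read line by line, but here the behaviour lives in the numbers.
```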

Training is perhaps the most vital aspect of AI that we need to understand

With an awareness of where the training data has come from, we can better understand the output of a chatbot. With a greater understanding of how AI systems are developed, chatbot responses start to look less inexplicable and more like the product of their training. Without knowing a chatbot’s training history, even highly experienced chatbot users can become confused when a system responds in an unexpected way, and it does not take a great leap to perceive such responses as conscious thoughts. But it is this closeness to human behaviour that will make chatbots invaluable resources for many businesses in the future. They may not be sentient, but if they can convincingly mimic a human being, they can be incredibly useful, and that is why the perception of consciousness is the ultimate AI goal.
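As a minimal illustration, imagine a toy retrieval chatbot whose every reply is literally a line from its training data; the hotel-themed lines below are invented for the example. Once you know the training set, the output becomes entirely explicable.

```python
# A minimal sketch, assuming a toy retrieval chatbot: every reply is a
# line from its training data, chosen by simple word overlap. Knowing
# the training set explains the output completely.
training_data = [
    "Our check-in time is 3 pm.",
    "Breakfast is served from 7 to 10 am.",
    "Yes, we do allow pets in selected rooms.",
]

def tokens(text: str) -> set[str]:
    return {w.strip(".,?!") for w in text.lower().split()}

def reply(message: str) -> str:
    msg = tokens(message)
    # Pick the training line sharing the most words with the message.
    return max(training_data, key=lambda line: len(msg & tokens(line)))

print(reply("When is breakfast served?"))  # -> "Breakfast is served from 7 to 10 am."
```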

Tiago Araújo
Co-Founder & CEO

