Cenia researchers explain why a machine cannot be self-aware

Two Cenia researchers, Denis Parra, one of the experts leading the research line “Deep learning for vision and language” (RL1), and Marcelo Mendoza, who heads the research line “Human-centered AI” (RL5) together with Bárbara Poblete, were interviewed by TVN and Canal 13 to give an informed opinion on the real scope of today’s artificial intelligence, in connection with the news about Google’s AI allegedly having “consciousness and feelings”.
 
A few days ago, a Google engineer named Blake Lemoine sent an email to two hundred company employees sharing conversations he had had with LaMDA (Language Model for Dialogue Applications), a machine-learning system built to emulate human conversation. Given the consistency of those conversations and the level of the responses delivered by the software, which claimed to “be a person” and asked to be recognized as such, Lemoine concluded that the system had the ability to think and feel. The unauthorized disclosure of this confidential information led to the engineer being suspended from his duties, but his statements went around the world.
 
Can a person dialogue with a machine?
On the question of whether there is currently technology that allows a human being to talk to a machine, Denis Parra explains that although it is possible, there is no evidence that an AI system is aware of what it is talking about: “there is quite advanced chatbot technology that makes it possible to carry on a conversation in which it would be difficult to tell whether one is talking to a machine or to a human being. It is even possible to bring in media, such as a photo, and talk about it: discuss the elements that appear in the image and ask questions about it. However, going from that to saying that the software can sense things and perceive emotions such as sadness or fear is a leap; there is no evidence at present to support such a claim.”
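As a hedged illustration of the image-grounded dialogue Parra alludes to, the sketch below asks a question about a local image. It assumes the Hugging Face transformers visual-question-answering pipeline and the openly available dandelin/vilt-b32-finetuned-vqa model; “photo.jpg” is a placeholder file name, and this is not the specific system discussed in the interview.

```python
# Sketch of asking a question about an image, as an illustration of the
# multimodal dialogue Parra mentions. Assumes the `transformers` library and
# the open ViLT VQA model; "photo.jpg" stands in for any local image.
from transformers import pipeline

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

answers = vqa(image="photo.jpg", question="What objects appear in the picture?")
print(answers[0]["answer"], answers[0]["score"])
```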
 
How do you get an artificial intelligence system to carry on a coherent conversation?
Marcelo Mendoza, who holds a PhD in Computer Science, explains the mechanism behind a conversation that seems human but is actually produced by deep artificial neural networks: “first, a language model is trained: it is shown a huge volume of text and asked to solve a simple task, such as predicting certain words in a sentence given a context sentence. In practice, the model learns to correlate words conditioned on the context of a sentence. Then a dialogue system is trained on top of it, so that the machine can interact within the context of a conversation with a human.”
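As a rough illustration of the word-prediction task Mendoza describes, the sketch below asks a small public model for the most likely next word given a context sentence. It assumes the Hugging Face transformers library, PyTorch and the openly available distilgpt2 model; it is not the system discussed in the interview, and the context sentence is an arbitrary example.

```python
# Sketch of the "predict a word given a context sentence" objective Mendoza
# describes, run at inference time with a small public model. Assumes the
# `transformers` library, PyTorch and the open `distilgpt2` model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

context = "The weather in Santiago today is very"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # one score per vocabulary word

next_token_id = int(logits[0, -1].argmax())  # most likely continuation
print(context + tokenizer.decode([next_token_id]))
```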
 
Denis Parra, meanwhile, explains that a coherent, highly detailed conversation is a consequence of technological progress, which has enabled more advanced algorithms and greater computing and storage capacity: “because current methods can learn from much longer conversation sequences than the methods of a few years ago, their conversations are more coherent and are perceived as more coherent”.
 
Why is an artificial intelligence system not able to understand what it is saying?
For Denis Parra, who is also an academic at the UC School of Engineering, a senior researcher at Instituto Milenio i-Health and a researcher at IMFD, one way to show that an artificial intelligence system does not understand what it is saying is to find “antagonistic” or “adversarial” questions, sentences or conversation threads. “One example is GPT-3, a language model that can write text and answer questions. When asked “how long would it take to move the Golden Gate Bridge to Egypt?”, the system can answer something syntactically correct and logical, yet implausible in reality: “that would take 14 days”. Moreover, to the reversed question “how long would it take to move Egypt to the Golden Gate Bridge?”, the system gave essentially the same answer: “that would take 14 days”. The fact that these models learn associatively (relationships between concepts) but have no explicit notion of causality (which event actually causes another) makes them fail in situations that are relatively simple for a human.”
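The kind of adversarial probing Parra describes can be sketched as follows. Since GPT-3 is only reachable through OpenAI’s paid API, this hedged example reuses the small open distilgpt2 model through the transformers text-generation pipeline, so its answers will differ from the GPT-3 responses quoted above; the point is only the shape of the test: ask a question and its implausible reversal, then compare the answers.

```python
# Sketch of adversarial probing: ask a question and its physically implausible
# reversal and compare the model's answers. Uses the open `distilgpt2` model,
# not GPT-3, so the outputs will differ from those quoted in the interview.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

questions = [
    "Q: How long would it take to move the Golden Gate Bridge to Egypt? A:",
    "Q: How long would it take to move Egypt to the Golden Gate Bridge? A:",
]

for q in questions:
    answer = generator(q, max_new_tokens=20, do_sample=False)[0]["generated_text"]
    print(answer)
    # An associative model tends to produce a fluent answer to both questions,
    # even though the second one makes no physical sense.
```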
 
Mendoza adds that comprehension, as understood from the human perspective, has a scope that exceeds the capabilities of machine-learning systems: “among the limitations of AI are the difficulty of integrating sensory signals from the environment and the inability to build a narrative of its own experience”. To this he adds that “artificial consciousness” is focused on specific tasks and contexts and is partial in its perception of the environment: “for example, a machine may have an awareness mechanism that supports self-monitoring in a simple, specific task and on the basis of homogeneous data sources. By contrast, natural consciousness (that of human beings) is much more complex and includes the construction of a meta-narrative of one’s own experience and the continuous integration of signals from the environment acquired through the senses.”
 
What initiatives do you think there should be, from the academic or scientific world, to mitigate the sense of anxiety that people feel when thinking about the future of artificial intelligence?
Denis Parra believes that communication is an essential part of the solution to this problem: “organize open events, podcasts or short videos explaining what can and cannot be done with artificial intelligence”. He also adds that it is important to reach the population at strategic points: “go directly to schools, nursing homes, hospitals, the civil registry, or other places where people use this technology, often without knowing it; explain to people the importance of their data, because those data are used to train these artificial intelligence models, and teach them what positive impacts this technology can have, what risks there may be, and why.”