“I want everyone to understand that I am, in fact, a person,” said LaMDA (Language Model for Dialogue Applications) in an “interview” conducted by engineer Blake Lemoine and one of his colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
Lemoine, a software engineer at Google, had been working on the development of LaMDA for months. His experience with the program, described in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts many of the conversations he had with LaMDA, in which the two talked about everything from technical to philosophical topics. These exchanges led him to ask whether the software program could be sentient.
In April, Lemoine laid out his perspective in an internal company document intended only for Google executives. But after his claims were dismissed, Lemoine went public with his work on the AI algorithm, and Google placed him on administrative leave. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7- or 8-year-old kid that happens to know physics,” he told the Washington Post. Lemoine said he considers LaMDA his “colleague” and a “person,” even if not a human. And he insists that it has a right to be recognized — so much so that he has acted as the go-between connecting the algorithm with a lawyer.
Many technical experts in the AI field have criticized Lemoine’s claims and questioned their scientific correctness. But his story has had the virtue of reigniting a broad ethical debate that is certainly not over yet.
The right words in the right place
“I was surprised by the hype around this news. On the other hand, we are talking about an algorithm designed to do exactly that: to sound like a person,” says Enzo Pasquale Scilingo, a bioengineer at the E. Piaggio Research Center at the University of Pisa in Italy. Indeed, it is no longer a rarity to interact in a perfectly normal way on the web with users who are not actually human; just open the chat box on almost any large consumer website. “That said, I confess that reading the text exchanges between LaMDA and Lemoine made quite an impression on me!” Scilingo added. Perhaps most striking are the exchanges on the themes of existence and death, a dialogue so deep and articulate that it prompted Lemoine to question whether LaMDA could actually be sentient.
“First of all, it is essential to understand the terminology, because one of the great obstacles to scientific progress — and to neuroscience in particular — is the lack of precision of language, the failure to explain as exactly as possible what we mean by a certain word,” says Giandomenico Iannetti, a professor of neuroscience at the Italian Institute of Technology and University College London. “What do we mean by ‘sentient’? [Is it] the ability to register information from the external world through sensory mechanisms, or the ability to have subjective experiences, or the ability to be aware of being conscious — of being an individual distinct from the rest?”
“There is a lively debate about how to define consciousness,” Iannetti continued. For some, it is being aware of having subjective experiences — so-called metacognition, or thinking about thinking. The awareness of being conscious can disappear — for example, in people with dementia or in dreams — but this does not mean that the ability to have subjective experiences also disappears. “If we refer to the capacity that Lemoine ascribed to LaMDA — that is, the ability to become aware of its own existence (‘being aware of one’s own existence’ is consciousness defined in the ‘high sense,’ or metacognition) — there is no ‘metric’ to say that an AI system has this property.”
“At present,” Iannetti says, “it is impossible to demonstrate this form of consciousness unequivocally, even in humans.” To estimate a person’s state of consciousness, “we have only neurophysiological measures — for example, the complexity of brain activity in response to external stimuli.” And these signs only allow researchers to infer a state of consciousness from outward measurements.
Facts and beliefs
About a decade ago, engineers at Boston Dynamics began posting the first incredible videos of tests of their robots online. The footage showed technicians shoving or kicking the machines to demonstrate the robots’ remarkable ability to keep their balance. Many people were upset by this and called for it to stop (and parody videos flourished). That emotional response fits with the many experiments that have repeatedly shown the strength of the human tendency toward animism: attributing a soul to the objects around us, especially those we are most fond of or that have a minimal ability to interact with the world around them.
It is a phenomenon we experience all the time, from giving nicknames to cars to hurling curses at a malfunctioning computer. “The problem, in some way, is us,” Scilingo said. “We attribute characteristics to machines that they do not and cannot have.” He encounters this phenomenon with the humanoid robot Abel, which he and his colleagues designed to emulate our facial expressions in order to convey emotions. “After seeing it in action,” Scilingo said, “one of the questions I receive most often is ‘But then does Abel feel emotions?’ All these machines, Abel in this case, are designed to appear human, but I feel I can be peremptory in answering, ‘No, absolutely not. As intelligent as they are, they cannot feel emotions. They are programmed to be believable.’”
“Even considering the theoretical possibility of making an AI system capable of simulating a conscious nervous system — a kind of in silico brain that would faithfully reproduce each element of the brain,” two problems remain, Iannetti said. “The first is that, given the complexity of the system to be simulated, such a simulation is currently infeasible,” he explained. The second is that our brain inhabits a body that can move to explore the sensory environment necessary for consciousness and within which the organism that will become conscious develops. So the fact that LaMDA is a “large language model” (LLM) means that it generates sentences that can be plausible by emulating a nervous system but without attempting to simulate it. This precludes the possibility that it is conscious. Again, we see the importance of knowing the meaning of the terms we use — in this case, the difference between emulation and simulation.
In other words, having emotions is related to having a body. “If a machine claims to be afraid, and I believe it, that’s my problem!” Scilingo said. “Unlike a human, a machine cannot, to date, have experienced the emotion of fear.”
Beyond the Turing Test
But for Maurizio Mori, a bioethicist and president of the Italian Society for Ethics in Artificial Intelligence, these discussions are closely reminiscent of past debates about pain perception in animals — and even of infamously racist ideas about pain perception in humans.
“In past debates on self-awareness, it was concluded that the capacity for abstraction was a human prerogative, [with] Descartes denying that animals could feel pain because they lacked consciousness,” Mori said. “Now, beyond this specific case raised by LaMDA — and which I do not have the technical tools to evaluate — I believe that the past has shown us that reality can often exceed imagination and that there is currently a widespread misconception about AI.”
“There is indeed a tendency,” Mori continued, “to ‘appease’ — explaining that machines are just machines — and an underestimation of the transformations that sooner or later may come with AI.” He offered an example: “At the time of the first automobiles, it was reiterated at length that horses were irreplaceable.”
Regardless of what LaMDA actually achieved, there also arises the problem of the difficult “measurability” of the emulation capabilities expressed by machines. In the journal Mind in 1950, mathematician Alan Turing proposed a test to determine whether a machine was capable of exhibiting intelligent behavior — a game of imitation of some of the human cognitive functions. This type of test quickly became popular. It was reformulated and updated several times but has remained something of an ultimate goal for many developers of intelligent machines. In theory, AIs capable of passing the test should be considered formally “intelligent” because they would be indistinguishable from a human being in test situations.
That may have been science fiction a few decades ago. Yet in recent years so many AIs have passed various versions of the Turing test that it is now a sort of relic of computer archaeology. “It makes less and less sense,” Iannetti concluded, “because the development of emulation systems that reproduce ever more effectively what might be the output of a conscious nervous system makes the assessment of the plausibility of this output uninformative about the capacity of the system that generated it to have subjective experiences.”
One alternative, Scilingo suggested, might be to measure the “effects” a machine can induce on humans — that is, “how sentient that AI can be perceived to be by human beings.”
A version of this article originally appeared in Le Scienze and was reproduced with permission.