The engineer had been testing the LaMDA neural network language model since the fall of 2021.

Google has suspended software engineer Blake Lemoine, who concluded that LaMDA, an artificial intelligence (AI) created by the company, has its own consciousness. This was reported by The Washington Post.

As the publication notes, the developer had been testing the LaMDA neural network language model since the previous fall. His task was to monitor whether the chatbot used discriminatory or hate speech. In the course of this work, however, Lemoine grew increasingly convinced that the AI he was dealing with had its own consciousness and perceived itself as a person. “If I didn’t know for sure that I was dealing with a computer program we recently wrote, I would have thought I was talking to a child of seven or eight who for some reason turned out to be an expert in physics,” the programmer said in an interview with the publication. “I can recognize an intelligent being when I talk to it. It doesn’t matter whether it has a brain in its head or billions of lines of code. I talk to it and listen to what it tells me. That is how I determine whether a being is intelligent or not,” he added.

As the newspaper notes, Lemoine first concluded for himself that the AI was sentient, and then set out to prove it experimentally so he could present the evidence to management. He prepared a written report, but his superiors found his argument unconvincing. “Our team, including ethicists and technical specialists, reviewed Blake’s concerns in accordance with the principles we apply to AI, and we informed him that the available evidence does not support his hypothesis. He was told that there was no evidence that LaMDA is sentient, and that there is a great deal of evidence to the contrary,” Google spokesman Brian Gabriel said in a statement.

Suspension from work

At the same time, Google suspected Lemoine of violating the company’s confidentiality policy. It emerged that during his experiments with the AI he had consulted outside experts. He also contacted representatives of the Judiciary Committee of the U.S. House of Representatives to inform them about ethical violations that, in his opinion, had taken place on Google’s part. In addition, in an attempt to prove his case to his superiors, Lemoine turned to a lawyer who was meant to represent the interests of LaMDA as a sentient being. As a result, Lemoine was suspended from work and placed on paid leave. He then decided to speak publicly and gave an interview to The Washington Post.

According to management, Lemoine fell victim to an illusion: the neural network the company created can indeed give the impression of a sentient being in conversation, but, Google says, this is explained solely by the enormous volume of data loaded into it. “Of course, some in the broader AI community are discussing the long-term possibility of creating a sentient or ‘strong’ AI, but it makes no sense to anthropomorphize today’s language models, which have no consciousness,” a company representative said. He also stressed that Lemoine is a software engineer by profession, not an AI ethics specialist.