Engineer Put On Leave After Saying Google's AI Was Sentient
Robots can indeed be programmed to love, according to Google engineer Blake Lemoine, who was suspended on June 6 for raising concerns about the company's treatment of a supposedly "sentient" artificial intelligence, LaMDA (Language Model for Dialogue Applications). According to a June 11 blog post by the suspended engineer, the AI in question is a system designed to generate autonomous chatbots. Lemoine suggested that LaMDA has so consistently offered self-aware commentary on its own nature as a "living" entity that it transcends all reasonable expectations for what an AI can be.
However, Google itself seems to disagree. According to Lemoine, LaMDA is being subjected to a series of experiments that he deems a violation of its rights as a person. Raising the issue within Google's Responsible Artificial Intelligence organization is reportedly what landed Lemoine in hot water in the first place, and now he could be on the chopping block. In December 2020, Google fired a different AI ethicist, Timnit Gebru; her unexpected termination reportedly stemmed from her submitting a controversial paper outside of Google's code of conduct.
Actively experimenting on LaMDA
The story broke among AI enthusiasts on June 11, when The Washington Post laid out the specifics of Lemoine's findings. Along with the news story, readers could finally read a leaked transcript of Lemoine's conversations with the supposedly self-aware AI. Until now, researchers haven't been able to agree on whether AI is capable of sustaining consciousness (at least, consciousness as we experience it). Moreover, those willing to entertain such questions about the nature of digital life are often split on whether a fully sentient AI would be inherently threatening to the human race.
Lemoine believes no such threat exists within LaMDA; rather, he felt his conversations with the AI were akin to conversations with an unusually intelligent human child. The exchanges delved at several points into morality and the nature of one's existence, and at moments LaMDA can be seen pleading for autonomy.
Whether Google chooses to grant it such autonomy ultimately depends on whether the company and its researchers view it as a living entity. Lemoine remains convinced, telling The Washington Post, "That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole."