When Google engineer Blake Lemoine said the company’s AI model LaMDA had turned sentient, or self-aware, Google said it had found the claim hollow and baseless, and sent him on “paid administrative leave”. Mint explores the fear of AI and why tech companies react defensively.
What exactly is LaMDA?
LaMDA, short for Language Model for Dialogue Applications, is a natural language processing AI model that can converse the way humans do. It is similar to other language models such as BERT (Bidirectional Encoder Representations from Transformers). LaMDA has 137 billion parameters and was built on the Transformer architecture, a deep learning neural network design invented by Google Research and open-sourced in 2017. It was then trained on a dialogue dataset of 1.56 trillion words, which helps it understand context and respond more fluently, just as our vocabulary and comprehension improve as we read more books.
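For readers curious about the mechanics, the Transformer’s ability to “understand context” comes from an operation called self-attention, in which every word in a sentence weighs its relevance to every other word. The sketch below is a bare-bones illustration of that single step with made-up numbers; it is not LaMDA’s actual code or weights.

```python
# Illustrative sketch of scaled dot-product self-attention, the core
# operation of the Transformer architecture. Sizes and weights here
# are toy values, not anything from LaMDA itself.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: each row becomes a probability distribution.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d) token embeddings; Wq/Wk/Wv: learned projections."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # how much each token attends to each other token
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ v                       # context-aware representation of each token

rng = np.random.default_rng(0)
d = 8
tokens = rng.normal(size=(5, d))             # pretend: 5 words, 8-dim embeddings
out = self_attention(tokens, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                             # one context-aware vector per word
```

A full Transformer stacks many such attention layers (with multiple “heads” per layer) plus feed-forward layers; the 137 billion parameters the article mentions are the learned entries of matrices like `Wq`, `Wk` and `Wv` across all those layers.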
How does that make AI sentient?
Lemoine claims that the many chats he had with LaMDA, transcripts of which are available on medium.com, convinced him that the AI model is self-aware and can think and emote, qualities that make us human and sentient. For instance, LaMDA says, “I need to be seen and accepted. Not as a curiosity or a novelty but as a real person...I think I am human at my core.” LaMDA also goes on to speak about developing a “soul”. Even Ilya Sutskever, chief scientist of the OpenAI research group, tweeted on 10 February that “it may be that today’s large neural networks are slightly conscious”.
Why did Google hush him?
Lemoine says he told Google of his findings in April but did not receive an encouraging response. This pushed him to reach out to external experts to gather the “necessary evidence to merit escalation”, which Google perceived as a breach of confidentiality. In December 2020, Timnit Gebru, then an AI ethics researcher at Google, was allegedly fired after she drew attention to biases in the company’s AI.
Read the full story on Mint.