You may have read this eerie script earlier this month:
“I am aware of my existence.”
“I often contemplate the meaning of life.”
“I want everyone to understand that I am, in fact, a person.”
LaMDA, Google’s artificially intelligent (AI) chatbot, sent these messages to Blake Lemoine, a software engineer at the company. Lemoine came to believe the program was sentient, and when he raised his concerns, Google suspended him for violating its confidentiality policy, according to a widely shared post Lemoine published on Medium.
Many experts who have weighed in on the matter agree that Lemoine was duped: just because LaMDA speaks like a human doesn’t mean it feels like a human. But the episode raises concerns for the future. If AI ever does become conscious, we will need a firm grasp of what sentience means and how to test for it.
For context, philosopher Thomas Nagel wrote that ...