{"id":49552,"date":"2022-06-14T01:30:32","date_gmt":"2022-06-14T01:30:32","guid":{"rendered":"https:\/\/harchi90.com\/google-suspends-engineer-who-rang-alarms-about-a-company-ai-achieving-sentience\/"},"modified":"2022-06-14T01:30:32","modified_gmt":"2022-06-14T01:30:32","slug":"google-suspends-engineer-who-rang-alarms-about-a-company-ai-achieving-sentience","status":"publish","type":"post","link":"https:\/\/harchi90.com\/google-suspends-engineer-who-rang-alarms-about-a-company-ai-achieving-sentience\/","title":{"rendered":"Google Suspends Engineer Who Rang Alarms About a Company AI Achieving Sentience"},"content":{"rendered":"
<\/p>\n
Google suspended an engineer last week for revealing confidential details of a chatbot powered by artificial intelligence, a move that marks the latest disruption of the company’s AI department. <\/p>\n
Blake Lemoine, a senior software engineer in Google’s responsible AI group, was put on paid administrative leave after he took public his concern that the chatbot, known as LaMDA, or Language Model for Dialogue Applications, had achieved sentience. Lemoine revealed his suspension in a June 6 Medium post and subsequently discussed his concerns about LaMDA’s possible sentience with The Washington Post in a story published over the weekend. Lemoine also sought outside counsel for LaMDA itself, according to The Post. <\/p>\n
In his Medium post, Lemoine says that he investigated ethics concerns with people outside of Google in order to gather enough evidence to escalate them to senior management. The Medium post was “intentionally vague” about the nature of his concerns, though they were subsequently detailed in the Post story. On Saturday, Lemoine published a series of “interviews” that he conducted with LaMDA.<\/p>\n