{"id":47976,"date":"2022-06-13T00:52:15","date_gmt":"2022-06-13T00:52:15","guid":{"rendered":"https:\/\/harchi90.com\/google-places-engineer-on-leave-after-he-claims-groups-chatbot-is-sentient\/"},"modified":"2022-06-13T00:52:15","modified_gmt":"2022-06-13T00:52:15","slug":"google-places-engineer-on-leave-after-he-claims-groups-chatbot-is-sentient","status":"publish","type":"post","link":"https:\/\/harchi90.com\/google-places-engineer-on-leave-after-he-claims-groups-chatbot-is-sentient\/","title":{"rendered":"Google places engineer on leave after he claims group’s chatbot is ‘sentient’"},"content":{"rendered":"
\n

Google has kicked off a social media firestorm on the nature of consciousness by placing an engineer on paid leave after he went public with his belief that the tech group’s chatbot has become “sentient”.<\/p>\n

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not receive much attention on June 6 when he wrote a Medium post saying he “may be fired soon for doing AI ethics work”. <\/p>\n

But a Saturday profile in the Washington Post characterizing Lemoine as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for widespread discussion on social media regarding the nature of artificial intelligence. Among the experts commenting, questioning or joking about the article were Nobel laureates, Tesla’s head of AI and multiple professors. <\/p>\n

At issue is whether Google’s chatbot, LaMDA – a Language Model for Dialogue Applications – can be considered a person.<\/p>\n