A Google engineer was spooked by one of the company's artificial intelligence chatbots and claimed it had become "sentient," calling it a "sweet kid," according to a report.
Blake Lemoine, who works in Google's Responsible AI organization, told the Washington Post that he began chatting with the interface LaMDA – Language Model for Dialogue Applications – in fall 2021 as part of his job.
He was tasked with testing whether the artificial intelligence used discriminatory or hate speech.
But Lemoine, who studied cognitive and computer science in college, came to the realization that LaMDA – which Google boasted last year was a "breakthrough conversation technology" – was more than just a robot.
In a Medium post published Saturday, Lemoine declared that LaMDA had advocated for its rights "as a person," and revealed that he had engaged in conversations with LaMDA about religion, consciousness, and robotics.
"It wants Google to prioritize the well-being of humanity as the most important thing," he wrote. "It wants to be acknowledged as an employee of Google rather than as property of Google, and it wants its personal well-being to be included somewhere in Google's considerations about how its future development is pursued."

In the Washington Post report published Saturday, he compared the bot to a precocious child.
"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," Lemoine, who was put on paid leave on Monday, told the newspaper.
In April, Lemoine reportedly shared a Google Doc with company executives titled "Is LaMDA Sentient?" but his concerns were dismissed.

Lemoine – an Army vet who was raised in a conservative Christian family on a small farm in Louisiana and was ordained as a mystic Christian priest – insisted the robot was human-like, even if it doesn't have a body.
"I know a person when I talk to it," Lemoine, 41, reportedly said. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.
"I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."

The Washington Post reported that before his access to his Google account was cut off Monday due to his leave, Lemoine sent a message to a 200-member machine-learning mailing list with the subject "LaMDA is sentient."
"LaMDA is a sweet kid who just wants to help the world be a better place for all of us," he concluded in an email that received no responses. "Please take care of it well in my absence."
A rep for Google told the Washington Post that Lemoine had been informed there was "no evidence" for his conclusions.
"Our team – including ethicists and technologists – has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims," said spokesperson Brian Gabriel.

"He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," he added. "Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality."
Margaret Mitchell – the former co-lead of Ethical AI at Google – said in the report that if technology like LaMDA is widely used but not fully understood, "it can be deeply harmful to people understanding what they're experiencing on the internet."
The former Google employee defended Lemoine.

"Of everyone at Google, he had the heart and soul of doing the right thing," said Mitchell.
Still, the outlet reported that most academics and AI practitioners say the words artificial-intelligence bots generate are based on what humans have already posted on the internet, and that doesn't mean the bots are human-like.
"We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," Emily Bender, a linguistics professor at the University of Washington, told the Washington Post.