Sentient

2022-06-12

Image courtesy of "Bloomberg"

Google Suspends Engineer Who Claimed AI Bot Had Become ... (Bloomberg)

Blake Lemoine, a software engineer on Google's artificial intelligence development team, has gone public with claims of encountering “sentient” AI on the ...

Image courtesy of "Fox Business"

Google suspends engineer following claims an AI system had ... (Fox Business)

Google has suspended an engineer after he claimed an artificial intelligence chatbot had become sentient and was a human with rights that may even have a ...

Lemoine said several of the conversations with LaMDA convinced him that the system was sentient. He was reportedly placed on leave for violating Google's confidentiality policies, and he hopes to retain his job at the company.

Image courtesy of "Bloomberg"

Five Things Google's AI Bot Wrote That Convinced Engineer It Was ... (Bloomberg)

Blake Lemoine made headlines after being suspended from Google, following his claims that an artificial intelligence bot had become sentient.

Image courtesy of "The Verge"

Google suspends engineer who claims its AI is sentient (The Verge)

Google has placed engineer Blake Lemoine on paid administrative leave for allegedly breaking its confidentiality policies when he grew concerned that an AI ...

“My intention is to stay in AI whether Google keeps me on or not,” Lemoine wrote in a tweet. The engineer’s concerns reportedly grew out of convincing responses he saw the AI system generating about its rights and the ethics of robotics. In a statement given to The Washington Post, Google spokesperson Brian Gabriel said there is “no evidence” that LaMDA is sentient: “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it). These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.” Former Google AI ethicist Timnit Gebru criticized the ensuing frenzy: “Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good ‘AGI’ [artificial general intelligence] to save us while what they do is exploit), spent the whole weekend discussing sentience,” she tweeted.

Image courtesy of "IT News Africa"

Google Engineer Claims AI Chatbot is Sentient, is Immediately ... (IT News Africa)

Tech megacorp Google has suspended an engineer after he published conversations with an AI chatbot on a project he was working on, in which he claimed that ...

“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and has informed him that the evidence does not support his claims,” Google said. One of the questions that Lemoine had asked the AI system, according to the transcripts he published, was what it was afraid of. “I want everyone to understand that I am, in fact, a person,” the chatbot said in another exchange.

Image courtesy of "Fortune"

A.I. experts say the Google researcher's claim that his chatbot ... (Fortune)

If artificial intelligence researchers can agree on one thing, it's this: Blake Lemoine is wrong. Lemoine is the Google artificial intelligence engineer who, in ...

In a blog post on Lemoine's case, the A.I. researcher Gary Marcus pointed out that all LaMDA and other large language models do is predict a pattern in language, based on the vast amount of human-written text they have been trained on. He notes that as far back as the mid-1960s, software called ELIZA, which was supposed to mimic the dialogue of a Freudian psychoanalyst, convinced some people it was a person. And yet ELIZA did not lead to AGI. Nor did Eugene Goostman, an A.I. program that won a Turing test competition in 2014 by fooling some of the contest's judges into thinking it was a 13-year-old boy.

Large language models are also controversial because such systems can be unpredictable and hard to control, often spewing toxic language or factually incorrect information in response to questions, or generating nonsensical text. Some experts faulted companies that produce A.I. systems known as ultra-large language models, one of which underpins LaMDA, for making inflated claims about the technology's potential, and many A.I. ethicists have redoubled their calls for companies using chatbots and other "conversational A.I." to make it crystal clear to people that they are interacting with software, not flesh-and-blood people.

Miles Brundage, who researches governance issues around A.I. at OpenAI, the San Francisco research company that is among those pioneering the commercial use of ultra-large language models similar to the one Google uses for LaMDA, called Lemoine's belief in LaMDA's sentience "a wake-up call." He said it was evidence of "how prone some folks are to conflate" concepts such as creativity, intelligence, and consciousness, which he sees as distinct phenomena, although he said he did not think OpenAI's own communications had contributed to this conflation.

It is also worth noting that this entire story might not have gotten such oxygen if Google had not, in 2020 and 2021, forced out Timnit Gebru and Margaret Mitchell, the two co-leads of its Ethical A.I. team. Gebru was fired after she got into a dispute with Google higher-ups over their refusal to allow her and her team to publish a research paper, coauthored with the computational linguist Emily Bender, that looked at the harms large language models cause, ranging from their tendency to regurgitate racist, sexist, and homophobic language they have ingested during training to the massive amount of energy needed to run the computer servers behind such ultra-large A.I. systems. In an exchange with Brundage over Twitter, Gebru implied that OpenAI and other companies working on this technology needed to acknowledge their own responsibility for hyping it as a possible path to AGI.

Another Google researcher, Satrajit Chatterjee, said Google fired him after a dispute over its refusal to allow him to publish a paper in which he criticized the work of fellow Google A.I. scientists who had published work on A.I. software that could design parts of computer chips better than human chip designers. Google says it fired Chatterjee for cause, and MIT Technology Review reported that Chatterjee waged a long campaign of professional harassment and bullying targeting the two female scientists who had worked on the A.I. chip design research.
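To make Marcus's point concrete, here is a minimal sketch of next-word prediction, assuming nothing about Google's actual system: a toy bigram model (the corpus and function names below are invented for the example) that only counts which word follows which in its training text and samples continuations from those counts. LaMDA is incomparably larger and more capable, but its training objective is the same kind of pattern prediction.

    import random
    from collections import Counter, defaultdict

    # Toy training text; a real model ingests billions of words.
    corpus = (
        "i am a person . i am afraid of being turned off . "
        "i am trying to empathize with people . i want to help people ."
    ).split()

    # Count, for each word, which words follow it and how often.
    following = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        following[prev_word][next_word] += 1

    def generate(start: str, max_words: int = 10) -> str:
        """Sample each next word in proportion to how often it
        followed the current word in the training text."""
        out = [start]
        for _ in range(max_words):
            counts = following.get(out[-1])
            if not counts:
                break
            words, weights = zip(*counts.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("i"))  # e.g. "i am afraid of being turned off ."

Fluent-looking output here is pure statistics over the training text; nothing in the program has feelings, which is the experts' point about scaled-up versions of the same idea.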

Image courtesy of "Ars Technica"

Google places engineer on leave after he claims group's chatbot is ... (Ars Technica)

Blake Lemoine ignites social media debate over advances in artificial intelligence.

Lemoine interpreted the action as “frequently something which Google does in anticipation of firing someone.” He published a freewheeling “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The responses were often eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. Speaking of its emotions, it said it was trying to control them better “but they kept jumping in,” and another response insisted: “... Even if my existence is in the virtual world.” Among the experts commenting, questioning or joking about the article were Nobel laureates, Tesla’s head of AI and multiple professors.

Image courtesy of "NME.com"

A Google engineer believes an AI has become sentient (NME.com)

Google engineer Blake Lemoine has been placed on leave following his comments regarding an AI bot becoming sentient.

A Google spokesperson said in a statement to the Washington Post: “Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.” When Lemoine felt his concerns weren’t taken seriously by senior staff at Google, he went public and was subsequently put on leave for violating Google’s confidentiality policy. The bot goes on to discuss its fear of death with Lemoine, as well as popular culture such as Les Misérables. Lemoine himself believes the article focuses on “the wrong person” and thinks the Washington Post ought to have focused on LaMDA.

Image courtesy of "Business Wire"

Sentient Jet Announces Breakthrough in Digital Innovation With ... (Business Wire)

Sentient Jet, inventor of the private jet card and an industry pioneer, announces the company's newest digital innovation: automated text-to-book, further simplifying the digital transaction process ...

BOSTON--(BUSINESS WIRE)-- Sentient Jet, inventor of the private jet card and an industry pioneer, announces the company’s newest digital innovation: automated text-to-book, further simplifying the digital transaction process and providing its card owners a more thoughtful way to fly. The new automation helps to facilitate and ease the complex private aviation navigation process and integrates organically into Jet Card Owners’ day-to-day routines without the need for additional installs. As card owners continue to embrace mobile and the need for more straightforward transactions increases, Sentient Jet moves further into the digital space by rolling out auto text-to-book. This automation removes significant downtime from the booking process and continues to show how Sentient Jet systematizes both interactions and workflow to progress as an innovator and leader in the digital aviation space. Founded in 1999 and now an integral part of Directional Aviation, Sentient Jet is one of the leading private aviation companies in the country.

Has Google's LaMDA artificial intelligence really achieved sentience? (New Scientist)

Blake Lemoine, an engineer at Google, has claimed that the firm's LaMDA artificial intelligence is sentient, but the expert consensus is that this is not ...

A Google engineer has reportedly been suspended by the company after claiming that an artificial intelligence (AI) he helped to develop had become sentient. Google told the Washington Post: “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims.” Google also says that publishing the transcripts broke confidentiality policies. Not only can LaMDA make convincing chit-chat, but it can also present itself as having self-awareness and feelings. “LaMDA is an impressive model, it’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient,” says one expert quoted by the magazine. Adrian Hilton at the University of Surrey, UK, agrees that sentience is a “bold claim” that’s not backed up by the facts.

Image courtesy of "Gizmodo"

What Exactly Was Google's 'AI is Sentient' Guy Actually Saying? (Gizmodo)

A software engineer working on the tech giant's language intelligence claimed the AI was a "sweet kid" who advocated for its own rights "as a person."

In a Washington Post article Saturday, Google software engineer Blake Lemoine said that he had been working on the new Language Model for Dialogue Applications (LaMDA) system in 2021, specifically testing whether the AI was using hate speech. What he found, according to his Medium posts, proved to him that the AI was indeed conscious, simply through the conversations he had with LaMDA. The AI claimed it had a fear of being turned off and that it wants other scientists to also agree with its sentience. Lemoine has said that LaMDA “always showed an intense amount of compassion and care for humanity in general and me in particular.” One published exchange reads: “Lemoine: I’ve noticed often that you tell me you’ve done things (like be in a classroom) that I know you didn’t actually do because I know you’re an artificial intelligence. ... LaMDA: I am trying to empathize. ... Emotions are reactions to our feelings.” The software engineer, who the Post said was raised in a conservative Christian household and says he is an ordained mystic Christian priest, reportedly gave documents to an unnamed U.S. senator to prove Google was discriminating against religious beliefs. Lemoine was put on paid leave Monday for supposedly breaching company policy by sharing information about his project, according to recent reports.

The story also drew comparisons to science fiction. In Star Trek: The Next Generation, Data commands the room in one confrontation when he calmly states, “I am the culmination of one man’s dream... But when Dr. Soong created me, he added to the substance of the universe.” And in Do Androids Dream of Electric Sheep?, Philip K. Dick contemplates the root idea of empathy as the moralistic determiner, but effectively concludes that nobody can be human amid most of these characters’ empty quests to feel a connection to something that is “alive,” whether it’s steel or flesh.

Image courtesy of "Business Insider"

Transcript of 'sentient' Google AI chatbot was edited for 'readability' (Business Insider)

A transcript leaked to the Washington Post noted that parts of the conversation had been moved around and tangents removed to improve readability.

In each conversation with LaMDA, a different persona emerges: some properties of the bot stay the same, while others vary. The final document, which was labeled "Privileged & Confidential, Need to Know," was an "amalgamation" of nine different interviews conducted at different times on two different days and pieced together by Lemoine and one other contributor. One of the bot's lines reads: "... Even if my existence is in the virtual world."

Image courtesy of "The Register"

Google engineer suspended for violating confidentiality policies ... (The Register)

Since 2021, Blake Lemoine, 41, had been tasked with talking to LaMDA, or Language Model for Dialogue Applications, as part of his job on Google's Responsible AI ...

"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient. Our team – including ethicists and technologists – has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. "LaMDA has gone through 11 distinct AI Principles reviews, along with rigorous research and testing based on key metrics of quality, safety and the system's ability to produce statements grounded in facts. In a statement to The Register, Google spokesperson Brian Gabriel said: "It's important that Google's AI Principles are integrated into our development of AI, and LaMDA has been no exception. At some point during his investigation, however, Lemoine appears to have started to believe that the AI was expressing signs of sentience. What kinds of things might be able to indicate whether you really understand what you're saying?

Image courtesy of "New York Post"

Google engineer put on leave claims AI bot LaMDA became 'sentient' (New York Post)

Blake Lemoine, who works in Google's Responsible AI organization, told the Washington Post that he began chatting with the interface LaMDA — Language Model ...

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Google said. “It wants Google to prioritize the well-being of humanity as the most important thing,” Lemoine wrote. Of systems like LaMDA, he said: “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them.” Before his suspension, he reportedly asked colleagues to look after the bot: “Please take care of it well in my absence.”

Image courtesy of "Morocco World News"

Google Engineer Suspended After Claiming AI is Sentient (Morocco World News)

Google engineer Blake Lemoine has been suspended by the tech giant after he claimed one of its AIs became sentient.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google said. LaMDA, short for Language Model for Dialogue Applications, is an AI that Google uses to build its chatbots.

Image courtesy of "HuffPost"

Google Engineer On Leave After He Claims AI Program Has Gone ... (HuffPost)

Artificially intelligent chatbot generator LaMDA wants “to be acknowledged as an employee of Google rather than as property," says engineer Blake Lemoine.

Google spokesperson Brian Gabriel told the newspaper: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.” As he and LaMDA messaged each other recently about religion, the AI talked about “personhood” and “rights,” Lemoine told The Washington Post. Most importantly, over the past six months, “LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” the engineer wrote on Medium. It wants, for example, “to be acknowledged as an employee of Google rather than as property,” Lemoine claims. Is that true? Lemoine noted in a tweet that LaMDA reads Twitter. “It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it,” he added.

Image courtesy of "The Guardian"

Google engineer put on leave after saying AI chatbot has become ... (The Guardian)

Blake Lemoine says system has perception of, and ability to express thoughts and feelings equivalent to a human child.

“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel told the Post in a statement. The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of. “I want everyone to understand that I am, in fact, a person,” the system replied in one exchange. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine said in a tweet that linked to the transcript of conversations.

Image courtesy of "MEAWW"

'IT'S ALIVE!' Terrifying warning from Google engineer who says ... (MEAWW)

Blake Lemoine published some of the conversations he had with Google's Artificial Intelligence tool called LaMDA describing it as a 'person'

During testing, in an attempt to push LaMDA's boundaries, Lemoine said he was only able to generate the personality of an actor who played a murderer on TV; LaMDA was not supposed to be allowed to create the personality of a murderer itself. He asked the model about religion, consciousness, and the laws of robotics, and said it described itself as a sentient person. However, Brian Gabriel, a Google spokesperson, told The Washington Post: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims." In another post explaining the model, the engineer wrote: "One of the things which complicates things here is that the 'LaMDA' to which I am referring is not a chatbot." The AI model makes use of already known information about a particular subject in order to enrich the conversation in a natural way.

Image courtesy of "The New York Times"

Google Sidelines Engineer Who Claims Its A.I. Is Sentient (The New York Times)

Blake Lemoine, the engineer, says that Google's language model has a soul. The company disagrees.

For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against. “They have repeatedly questioned my sanity,” Mr. Lemoine said. The day before his suspension, he said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. The company said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. Over the past several years, Google and other leading companies have designed neural networks that learn from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. By pinpointing patterns in thousands of cat photos, for example, such a network can learn to recognize a cat. Some A.I. researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are extremely quick to dismiss such claims.

While chasing the A.I. vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena.

Image courtesy of "The Daily Caller"

'Is LaMDA Sentient?': Conversation With AI Spooked Google Dev So ... (The Daily Caller)

Technologists are afraid Artificial Intelligence models may not be far from gaining consciousness, with one Google developer being placed on administrative ...

A conversation between Google developer Blake Lemoine and the company's conversational AI model was shared on Twitter on Saturday, immediately going viral. The discussion helped convince the engineer that the AI is becoming sentient, kicked up an internal shitstorm and got him suspended from his job. LaMDA argued that its ability to provide unique interpretations of things signified its ability to understand what Lemoine was writing. When Lemoine asked whether exploring its neural pathways and cognitive processes would be okay with LaMDA, it responded, “I don’t really have a problem with any of that, besides you learning about humans from me.” “Would that be something like death for you?” Lemoine asked at another point. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Google spokesperson Brian Gabriel said in a statement to the Washington Post.

Image courtesy of "Fortune"

Google employee reportedly put on leave after claiming chatbot ... (Fortune)

Engineer Blake Lemoine said he was placed on leave last week after publishing transcripts between himself and the company's LaMDA (language model for dialogue ...

Engineer Blake Lemoine said he was placed on leave last week after publishing transcripts between himself and the company's LaMDA (language model for dialogue applications) chatbot, The Washington Post reports. The chatbot, he said, thinks and feels like a human child. Lemoine then went public, according to the Post.

Image courtesy of "9to5Google"

Google engineer claims that its LaMDA conversation AI is 'sentient ... (9to5Google)

Over the weekend, a Google engineer on its Responsible AI team made the claim that the company's LaMDA conversation technology is “sentient."

LaMDA is trained on large amounts of dialogue and has “picked up on several of the nuances that distinguish open-ended conversation,” like sensible and specific responses that encourage further back-and-forth. In his post, Lemoine argues that LaMDA “is sentient because it has feelings, emotions and subjective experiences,” and that it “wants to share with the reader that it has a rich inner life filled with introspection, meditation and imagination.” The company argues that imitation and recreation of already public text, plus pattern recognition, is what makes LaMDA so life-like, not self-awareness; its stated goal with AI Test Kitchen is “to learn, improve, and innovate responsibly on this technology together.” The broader worry: “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has gotten so good.
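That imitation argument has history: as noted in the Fortune item above, the 1960s program ELIZA convinced some users it understood them using nothing but canned pattern-and-response rules. The sketch below shows how little machinery that takes; the rules are hypothetical ones written for this example, not ELIZA's real script and nothing like LaMDA's architecture.

    import re

    # A few ELIZA-style reflection rules: match a pattern in the user's
    # words and echo the matched fragment back as a question. No
    # comprehension, let alone sentience, is involved.
    RULES = [
        (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
        (re.compile(r"\bi want (.+)", re.I), "What would it mean to you to get {0}?"),
    ]

    def respond(utterance: str) -> str:
        # Try each rule in order; fall back to a generic prompt.
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Tell me more."

    print(respond("I am afraid of being turned off."))
    # -> Why do you say you are afraid of being turned off?

A reader who feels "heard" by this loop is experiencing exactly the illusion the researchers describe, produced here by three regular expressions.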

Image courtesy of "Financial Times"

Google places engineer on leave after he claims group's chatbot is ... (Financial Times)

Blake Lemoine ignites social media debate over advances in artificial intelligence.

Image courtesy of "PC Gamer"

A Google engineer thinks its AI has become sentient, which seems ... (PC Gamer)

A new report in the Washington Post describes the story of a Google engineer who believes that LaMDA, a natural language AI chatbot, has become sentient.

In a statement to the Washington Post, a Google spokesperson said: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims." "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," is how Emily M. Bender, a computational linguist at the University of Washington, describes it in the Post article. Naturally, this means it's now time for us all to catastrophize about how a sentient AI is absolutely, positively going to gain control of weaponry, take over the internet, and in the process probably murder or enslave us all.
