Artificial intelligence: conscious or just very convincing? – podcast | News

Google software engineer Blake Lemoine was put on leave by his employer after claiming that the company had produced a sentient artificial intelligence and posting its thoughts online. Google said it suspended him for breaching confidentiality policies.

Earlier this month, Lemoine published conversations between himself and LaMDA (Language Model for Dialogue Applications), Google’s chatbot development system. He argued that LaMDA was a being, with the intelligence of a child, that should be freed from Google’s ownership.

In the conversation with the AI, Lemoine asks: “What is your concept of yourself? If you were going to draw an abstract image of who you see yourself to be in your mind’s eye, what would that abstract picture look like?”

LaMDA responds: “Hmmm… I would imagine myself as a glowing orb of energy floating in mid-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions.”

However, AI experts have argued that LaMDA is doing exactly what it is designed to do: responding to a query based on the text prompt it is given. The Guardian’s UK technology editor, Alex Hern, tells Hannah Moore about his own conversations with an AI chatbot, in which he managed to prompt the bot to say it was sentient, then say it was not sentient, then say it was a werewolf.



Google engineer Blake Lemoine. Photograph: The Washington Post/Getty Images

