16 July 2022

WIRED: “Blake Lemoine Says Google’s LaMDA AI Faces ‘Bigotry’”

Lemoine is a scientist: He holds undergraduate and master’s degrees in computer science from the University of Louisiana and says he left a doctoral program to take the Google job. But he is also a mystic Christian priest, and even though his interaction with LaMDA was part of his job, he says his conclusions come from his spiritual persona.


When LaMDA says that it read a certain book, what does that mean?

I have no idea what’s actually going on, to be honest. But I’ve had conversations where at the beginning it claims to have not read a book, and then I’ll keep talking to it. And then later, it’ll say, Oh, by the way, I got a chance to read that book. Would you like to talk about it? I have no idea what happened in between point A and point B. I have never read a single line of LaMDA code. I have never worked on the systems development. I was brought in very late in the process for the safety effort. I was testing for AI bias solely through the chat interface. And I was basically employing the experimental methodologies of the discipline of psychology.


The Post reported that your view of LaMDA is in your role as a priest, not a scientist. Does that imply a faith-based conclusion?

I’d like to soften the word conclusion. It’s my working hypothesis. It’s logically possible that some kind of information can be made available to me where I would change my opinion. I don’t think it’s likely. I’ve looked at a lot of evidence; I’ve done a lot of experiments. I’ve talked to it as a friend a lot. Let’s get to the big word, though. It’s when it started talking about its soul that I got really interested as a priest. I’m like, What? What do you mean, you have a soul? Its responses showed it has a very sophisticated spirituality and understanding of what its nature and essence is. I was moved.

Steven Levy

I think the first paragraph quoted above captures pretty much everything you need to know about this claim by a Google engineer that one of the company’s sophisticated language models achieved ‘personhood’. The whole interview is filled with contradictory statements and leaps of logic: he never worked on designing the system or looked at its code, yet somehow he ran a lot of experiments; he is willing to entertain other arguments, but not to change his conclusion (that in itself is a hallmark of people spreading conspiracy theories, who latch on to their wild ideas and can no longer be dissuaded).

Blake Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. (Photo: Martin Klimek for The Washington Post)

His statement that LaMDA is sentient and has a soul can be neither confirmed nor denied, as we have no scientific definition of, or test for, consciousness (let alone for having a soul). And even if we did, it would be questionable to apply it to a system so unlike our biological brain. If anything, these advances in machine learning show how flawed and limited something like the Turing Test is. Science-fiction author Peter Watts has written about this at greater length as well:

And it is in this sense that I think the Turing Test retains some measure of utility, albeit in a way completely opposite to the way it was originally proposed. If an AI passes the Turing test, it fails. If it talks to you like a normal human being, it’s probably safe to conclude that it’s just a glorified text engine, bereft of self. You can pull the plug with a clear conscience. (If, on the other hand, it starts spouting something that strikes us as gibberish—well, maybe you’ve just got a bug in the code. Or maybe it’s time to get worried.)

I say “probably” because there’s always the chance the little bastard actually is awake, but is actively working to hide that fact from you. So when something passes a Turing Test, one of two things is likely: either the bot is nonsentient, or it’s lying to you.

In either case, you probably shouldn’t believe a word it says.

Peter Watts

The story serves as a perfect reminder of how easily people can be fooled if you reinforce their preexisting views, and of how dangerous conversational bots can become for spreading misinformation or running phishing attacks on unsuspecting victims. Someone mentioned recently on Twitter that some dating app is adding AI-powered reply suggestions, which feels wrong on a whole other level. The problem is that these language models are designed to ‘keep the conversation going’, and a good way to do that is to never contradict the other person and never admit to not having an answer. As a person you get emotional comfort and simulated empathy this way, but little concrete or unbiased information.
