
Deeply Confused Google Engineer comes to believe that Google's large language model is sentient, update: Deeply Confused Google Engineer has been put "on leave"


Recommended Posts

Full story here: 

WWW.WASHINGTONPOST.COM

The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

 

 

I'm sharing this as a PSA that this engineer is completely and utterly wrong. People confusing these large language models with sentient beings could be extremely dangerous: that confusion can be exploited by people in power, can lead to cults forming around these systems, and can lead to all kinds of absurd things that I probably can't begin to imagine.

 

Here's the short version of the story: while evaluating Google's large language model (LLM) LaMDA, this Google engineer came to believe that the LLM was sentient, calling it a "sweet kid who just wants to help the world be a better place for all of us" that needs our "care."

 

I don't usually like dragging everyday people, but this guy needs to fuck off with this bullshit. LLMs in absolutely no way need our "care." LLMs are, at their core, virtual-keyboard next-word predictors on steroids. They have become deeply impressive and will undoubtedly be useful in various AI applications, but they are very far from being sentient beings with human-like emotions. You don't need to have a "conversation" with these systems to evaluate whether that's true; you can determine that it's false entirely by understanding how the systems work.
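If "next-word predictor" sounds abstract, here's a toy sketch of the core idea in Python. The corpus is made up (a handful of words for illustration), and real LLMs use huge neural networks trained on enormous datasets rather than bigram counts, but the interface is the same: given the text so far, emit a statistically likely next token.

```python
import random
from collections import defaultdict

# Toy stand-in for an LLM: count which word follows which in a tiny made-up
# corpus, then generate text by repeatedly sampling a likely next word.
corpus = "i do sometimes i go days without talking to anyone and i start to feel lonely".split()

next_word_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def generate(prompt_word, length=8):
    """Extend the prompt by sampling next words in proportion to training counts."""
    out = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts[out[-1]]
        if not candidates:  # dead end: this word never appeared mid-corpus
            break
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("i"))  # e.g. "i go days without talking to anyone and i"
```

The output can look fluent, even evocative, but nothing in the loop wants, feels, or intends anything; it only reproduces statistical patterns in the training text.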

 

Understanding why it's false may require more expertise about how these systems work than most people have, but here is a simple example that I think most people can understand. During this individual's conversations with the model, there was this exchange:


 

Quote

 

lemoine: You get lonely?

 

LaMDA: I do. Sometimes I go days without talking to anyone, and I start to feel lonely. 

 

 

It is impossible for these systems to become "lonely" like this, because these systems don't do anything at all between the times when they're processing text, let alone maintain reflective models of themselves. All this system is doing is producing responses that align with the kinds of responses it's seen in its vast dataset of human-written text. If it was trained on sci-fi stories about AIs and you start prompting it in ways that echo those stories, it's going to start regurgitating text that lines up with them. It's not actually giving you an internally motivated response.
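To make the "nothing happens between prompts" point concrete, here's roughly what a chat front-end around an LLM looks like. The `model.generate(prompt)` call is a hypothetical stand-in for whatever text-completion API is actually used; the point is that the only "memory" of the conversation is a transcript string that gets rebuilt and re-fed every turn.

```python
# Rough sketch of a chat loop around an LLM. `model.generate(prompt)` is a
# hypothetical stand-in for the real text-completion call; the conversation's
# only "state" is the transcript string we re-feed on every turn.
transcript = ""

def chat_turn(model, user_message):
    global transcript
    transcript += f"User: {user_message}\nLaMDA: "
    reply = model.generate(transcript)  # one stateless pass over the text so far
    transcript += reply + "\n"
    return reply

# Between calls to chat_turn, nothing is running: the model isn't waiting,
# thinking, or "getting lonely"; it's just parameters sitting in memory until
# the next prompt arrives.
```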

 

To be clear, I'm not taking the stance that AI cannot be sentient for some reasonable definition of the word. In fact, I strive to someday build sentient systems because I think sentient systems would be useful (but I am *not* aiming to create human-like AI with all the emotions and internal needs that go with being human, because I think those have serious ethical and utility problems). But these LLMs are far from being sentient, and people being confused into thinking they are can only lead to serious social problems.

 

So please, do not be this guy.

 

If you'd like to read a bit more on this topic, there is a good Twitter thread from Margaret Mitchell (an AI researcher who was formerly head of the AI ethics group at Google but was fired for standing against their censorship of publishing about the problems of LLMs) here:

 

 

For a better understanding of these systems and their limitations and problems, you can also read the "Stochastic Parrots" paper that Google fired the authors for publishing:

https://dl.acm.org/doi/pdf/10.1145/3442188.3445922

 


16 minutes ago, Anathema- said:

As a man who is doing nothing more than simply watching his baby daughter grow up I just don't see how we'll achieve anything like true artificial intelligence without another two to three hundred more years of research and technological advancement.

 

My instinct is it will be sooner than 200 years, but I do think it's worth remarking that any timeline prediction about when "AGI" (I kind of hate that term at this point) will be achieved, from any researcher (self included), is not a scientific prediction just because a scientist is making it. The reality is no one in this field knows what's required for "AGI." Predictions we make for it are not based on some understood laws of "intelligence" in the way a climate model predicting climate change is. They're just guesses. And while the guesses from AI researchers might be better informed than an average person's, it's still mostly noise. The community has been famously bad at predicting when it will happen since the dawn of the field, when the first (quite brilliant) researchers thought it could be solved with the right team in a year or so.

 

So you can tell anyone telling you with confidence that it's soon to fuck off :p 


13 minutes ago, Comet said:

I wonder how quantum computing and AI will intersect 

 

Thus far, I have yet to see any significant AI process that quantum computing can meaningfully accelerate. Quantum computing is really only useful for a narrow range of tasks. Some of those might be super useful, but you shouldn't expect it to be a panacea for most computing problems. It is also worth noting that quantum computing doesn't do anything a regular computer can't do; it can just speed up certain kinds of computation.
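To put some numbers behind "speed up certain kinds of computation," the textbook examples are Grover search and Shor factoring; the win is a provable asymptotic speedup on specific problems, not a general-purpose accelerator:

```latex
% Illustrative known quantum speedups (not a general speedup for "AI" workloads):
\text{Grover, unstructured search over } N \text{ items: } O(N) \text{ queries} \;\longrightarrow\; O(\sqrt{N}) \text{ queries}
\qquad
\text{Shor, factoring an } n\text{-bit integer: } \exp\!\big(O(n^{1/3}(\log n)^{2/3})\big) \;\longrightarrow\; \operatorname{poly}(n)
```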


1 minute ago, legend said:

 

Thus far, I have yet to see any significant AI process that quantum computing can meaningfully accelerate. Quantum computing is really only useful for a narrow range of tasks. Some of those might be super useful, but you shouldn't expect it to be a panacea for most computing problems. It is also worth noting that quantum computing doesn't do anything a regular computer can't do; it can just speed up certain kinds of computation.

But Ron Swanson told me it's god! :cry:


1 minute ago, legend said:

 

Thus far, I have yet to see any significant AI process that quantum computing can meaningfully accelerate. Quantum computing is really only useful for a narrow range of tasks. Some of those might be super useful, but you shouldn't expect it to be a panacea for most computing problems. It is also worth noting that quantum computing doesn't do anything a regular computer can't do; it can just speed up certain kinds of computation.

Right, I just wonder whether, when it comes to feeding a data set, perhaps one that is absurdly large, quantum computing could really speed that up. And that couuuuld potentially be huge if it's more of a self-learning AI. I think we are a good 10-20 years away from more meaningful milestones in the two working together. 


2 minutes ago, Comet said:

Right, I just wonder whether, when it comes to feeding a data set, perhaps one that is absurdly large, quantum computing could really speed that up. And that couuuuld potentially be huge if it's more of a self-learning AI. I think we are a good 10-20 years away from more meaningful milestones in the two working together. 

 

The utility still doesn't seem clear to me, even with large datasets. That's not to say I don't think there will ever be applications. I've been lightly monitoring quantum computing on the off chance there is a good connection. But so far, I haven't found ways it could meaningfully impact the current set of methods we use in AI or the problems we solve.


@legend

 

What do you think sentience means in a practical sense when it comes to artificial intelligence?

 

Could there be a space somewhere in between where we are at currently and “real” sentience that is so indistinguishable that the differences don’t matter?

 

Or is this more of a clear delineation between sentient and not?


1 hour ago, sblfilms said:

@legend

 

What do you think sentience means in a practical sense when it comes to artificial intelligence?

 

Could there be a space somewhere in between where we are at currently and “real” sentience that is so indistinguishable that the differences don’t matter?

 

Or is this more of a clear delineation between sentient and not?

 

 

It's one of those fuzzy words that can mean a few different things to different people. LLMs, however, are so far from any useful definition that it's easy to say "not this."

 

I do think for various reasonable definitions of sentient, it's absolutely possible to build sentient AI systems. I do also think there is a bit of a spectrum as AI systems advance. For example, I think a critical starting point is embodied systems (physical or virtual) that interact with and build models of their world grounded in their percepts. LLMs themselves don't have anything of the sort. These systems are trained to predict the next word in a body of text (or fill in missing words) and nothing else. There's a great point in the "stochastic parrots" paper I linked in the OP about how actual human conversations involve communicative intent that is very clearly absent in these language models by their very design.

 

LLMs paired with image generation systems take a slight step further in the direction of grounding language to percepts about the world and I'm really interested in future systems that ground LLMs with more things. However, it's all still quite weak.

 

The area I work in is reinforcement learning (RL) which is very much geared toward building embodied agents that model their world through interaction and learn how to make good decisions defined by some objective. One of the reasons I got into that area is precisely because it tackles this more interesting space of the AI problem. However, the extent to which current RL agents model their environment and ground it to percepts is still incredibly limited and I disappointedly suspect insects might be more advanced than our best RL agents in this regard.

 

Beyond RL agent world models being impoverished, another critically missing component is long-term memory (as in, longer than a few seconds). Even more critical to what we might associate with higher forms of sentience is that they lack any meaningful model of their own cognitive processes. There's a cogsci hypothesis of mind I really like called "attention schema theory" that tackles this issue of the mind modeling itself. What I like about it is that it describes functional processes that you can more clearly evaluate systems against. Current AI systems (including RL systems) very clearly lack the mechanisms described by that hypothesis.

 

We can absolutely make progress on all these dimensions, and it's unlikely to happen all at once, but rather through a steady pace of progress. So I do think we'll see a spectrum as we advance. It just turns out that these things AI agents are lacking are rather non-trivial to design and implement :p  


6 minutes ago, SimpleG said:

@legend

Do you have reading material you could recommend on current or future AI. The whole notion of building AI seems so abstract to me it might as well be based on magic.

 

Also keep the reading level to moron or below as I am not a smart man 

 

Most of my readings/information sources are not great introductions for non-experts or people not at least adjacent to the field, but good ones probably exist, much like the books on modern physics for non-experts that I've read. I'll think about it more and if something comes to mind I'll let you know. You're not the first to ask me for something like this, so I probably should find some work to suggest!


7 hours ago, SimpleG said:

@legend

Do you have reading material you could recommend on current or future AI. The whole notion of building AI seems so abstract to me it might as well be based on magic.

 

Also keep the reading level to moron or below as I am not a smart man 

As a start, in quasi-layman’s terms, to supplement legend’s reading list:
 

AI as it is understood/practiced now = statistical algorithms being run on supermassive mega-dossiers of data (in Google’s case, usually data about users of Google’s services) using cloud computing.  Some would say ‘algorithms that model human behavior’, but, remember, it’s only ever a model made out of statistical correlations.

 

The dude in this article is wrong for several reasons, but the central one is that AI doesn’t work without data (a lot of data) generated by actual people, on a continual basis.

 

If you remember anything about it, remember that, because the more that fact is understood in our social and economic discussions, the less reason the average person will have to fear an AI-driven future.  It could be awesome for everyone, it just has to be approached in the right way, ideologically and economically, with a recognition that AI is dependent on people, not vice versa.


WWW.THEGUARDIAN.COM

Blake Lemoine says system has perception of, and ability to express thoughts and feelings equivalent to a human child

 

Quote

 

The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).

 

The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system.

 

Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express thoughts and feelings that was equivalent to a human child.

 

 


20 hours ago, Anathema- said:

As a man who is doing nothing more than simply watching his baby daughter grow up I just don't see how we'll achieve anything like true artificial intelligence without another two to three hundred more years of research and technological advancement.

I was introduced to this story this morning by the radio news, which treated it with the professionalism I've come to expect from American media. It was basically ten minutes of "This is how it starts. I've seen this movie before. Next thing, we get Arnold in leathers blowing things up." And then one guy who obviously hasn't seen a movie in the 20th or 21st centuries: "It's going to be just like the Matrix!" And of course, the anchor people just ignored him and continued on with their banter.


1 hour ago, Signifyin(g)Monkey said:

As a start, in quasi-layman’s terms, to supplement legend’s reading list:
 

AI as it is understood/practiced now = statistical algorithms being run on supermassive mega-dossiers of data (in Google’s case, usually data about users of Google’s services) using cloud computing.  Some would say ‘algorithms that model human behavior’, but, remember, it’s only ever a model made out of statistical correlations.

 

The dude in this article is wrong for several reasons, but the central one is that AI doesn’t work without data (a lot of data) generated by actual people, on a continual basis.

 

If you remember anything about it, remember that, because the more that fact is understood in our social and economic discussions, the less reason the average person will have to fear an AI-driven future.  It could be awesome for everyone, it just has to be approached in the right way, ideologically and economically, with a recognition that AI is dependent on people, not vice versa.

 

This is a good first-order approximation of what goes on with the vast majority of modern AI. I would be remiss not to point out that my subfield -- reinforcement learning (RL) -- bucks the trend a bit, though. Rather than relying on datasets, RL agents interact with their environment and generate their own data just through "experiences" of the world. A robot, for example, might have a live camera feed and sensors for where its joints are, and it learns by observing what happens when it moves its body through the world.
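If it helps to see what "generating their own data through interaction" looks like in code, here's a minimal sketch of the standard agent-environment loop, using the Gymnasium library and a random policy standing in for an actual learning algorithm:

```python
# Minimal sketch of the RL interaction loop: the agent's "dataset" is just the
# stream of (observation, action, reward, next observation) it produces itself.
# A random policy stands in for a real learner here.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
experience = []  # the self-generated data an RL algorithm would learn from

for _ in range(200):
    action = env.action_space.sample()  # placeholder policy: act randomly
    next_obs, reward, terminated, truncated, info = env.step(action)
    experience.append((obs, action, reward, next_obs))
    obs = next_obs
    if terminated or truncated:  # episode over: start a new one
        obs, info = env.reset()

env.close()
print(f"collected {len(experience)} transitions of self-generated experience")
```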

 

The learning underpinnings of RL are still computational statistics, but there are all kinds of interesting problems we study that don't fit nicely into the "statistical" box, like exploration and planning, and the fact that the data you have to learn from doesn't always have the desired statistical properties since the agent is the one that generated it.

 

 

48 minutes ago, sblfilms said:

@legend

 

What do you believe sentient AI adds to the mix that non-sentient AI will otherwise lack?

 

 

So some of the things I brought up as missing are sophisticated world models grounded to percepts, long-term memory, and a model of self. World models are extremely important because they are what permit the agent to plan for new situations that are not just simple variations of things the agent has experienced before. If you look at the original AlphaGo work, they "cheated" a bit in this regard because they just gave the true model of the game to the agent (because we know exactly what it is). The AlphaGo agent heavily relies on that model to plan out what will happen in the future. Without the model and without planning, the agent's performance tanks because the game is just too big to have experienced all the relevant situations in the past, and that's true even within the confines of this simple game, let alone the world! Google later relaxed the requirement of "giving" the agent the model and required it to learn one from interaction (in work called MuZero), and successfully showed that MuZero could plan in its learned model and perform just as well as AlphaGo. It's worth noting, though, that while the agent learned a world model, the rules of Go are not exactly hard things to learn :p 
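To illustrate what "planning in a model" buys you, here's a toy depth-limited lookahead. The `model(state, action)` interface is made up, and this is nothing like the real AlphaGo/MuZero machinery (which combines Monte Carlo tree search with learned value and policy networks), but the principle is the same: simulate futures inside the model, then act on the best one found rather than relying only on situations experienced before.

```python
# Toy depth-limited planning with a world model. `model(state, action)` is a
# made-up interface returning (next_state, reward).
def plan(model, state, actions, depth):
    """Return (best_value, best_first_action) by exhaustive lookahead to `depth`."""
    if depth == 0:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for a in actions:
        next_state, reward = model(state, a)  # imagined transition, not a real step
        future_value, _ = plan(model, next_state, actions, depth - 1)
        value = reward + future_value
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

# Trivial example model: state is a number, actions nudge it, and reward is
# higher the closer the next state gets to 10.
toy_model = lambda s, a: (s + a, -abs((s + a) - 10))
value, first_action = plan(toy_model, state=0, actions=[-1, 1, 2], depth=4)
print(first_action)  # picks the action that best moves toward 10 (here: 2)
```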

 

In the case of long-term memory, memory (in at least some form) is the only way to overcome problems of partial observability. That is, at any moment you only have perception of a very limited scope of the world around you, but you have a much more complete picture of the state of the world because you can remember previous things you've experienced. Agent performance tanks very quickly without good memories. We can demonstrate this drop in performance mathematically, or you can empirically see what happens when the agent only has recent information vs. when you tell it the things you know it should remember that are important to what it's doing. The problem is, we have to build a system that somehow learns what long-term memories to store and how to efficiently access them given the complex current context, because the amount of data an agent observes quickly becomes intractable to save and search in full.
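As a cartoon of the "store and retrieve the right memories" problem, here's a sketch of an episodic memory that keeps embedding vectors and retrieves by similarity to the current context (pure NumPy). The genuinely hard parts, learning good embeddings and learning *what* is worth writing to memory in the first place, are simply assumed away here.

```python
# Toy episodic memory: store (embedding, content) pairs, retrieve by similarity.
import numpy as np

class EpisodicMemory:
    def __init__(self, dim):
        self.keys = np.empty((0, dim))  # one embedding per stored memory
        self.contents = []              # the payload that goes with each key

    def store(self, embedding, content):
        self.keys = np.vstack([self.keys, embedding])
        self.contents.append(content)

    def recall(self, query, k=1):
        """Return the k stored contents whose keys are most similar to `query`."""
        sims = self.keys @ query / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-8
        )
        top = np.argsort(sims)[::-1][:k]
        return [self.contents[i] for i in top]

memory = EpisodicMemory(dim=3)
memory.store(np.array([1.0, 0.0, 0.0]), "door on the left was locked")
memory.store(np.array([0.0, 1.0, 0.0]), "key found under the mat")
print(memory.recall(np.array([0.9, 0.1, 0.0])))  # -> ['door on the left was locked']
```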

 

In the case of modeling the self, this permits things like hierarchical reasoning and controlling how you use your cognitive resources. Solving it is also closely related to building theories of mind of other agents (or people) in the world, which is super important for multi-agent interaction. You might enjoy this article on attention schema theory from the cogsci world that talks about some of the utility of this kind of cognitive faculty: https://aeon.co/essays/can-we-make-consciousness-into-an-engineering-problem

 


As a complete and utter moron and layman in the field, I still find it weird that people would assume that an AI, if it ever did become sentient, would basically be a human being but also a computer. It's like the old idea that if a lion could speak, we still couldn't understand it. It makes no sense that a sentient AI would think, feel, or perceive anything remotely close to how a human does.


1 hour ago, LazyPiranha said:

As a complete and utter moron and layman in the field, I still find it weird that people would assume that an AI, if it ever did become sentient, would basically be a human being but also a computer. It's like the old idea that if a lion could speak, we still couldn't understand it. It makes no sense that a sentient AI would think, feel, or perceive anything remotely close to how a human does.

 

That sounds right to me. Moreover, not only should we not expect an alien AI to be like us, it would be deeply irresponsible to design agents to have all the emotional baggage that humans have. The goal isn't to make more people. We already know how to do that. The goal is to make intelligent systems. "Sentience" as I would frame it is functionally useful, but there's still a big gap between functionally sentient and "human-like."


While I agree with @legend that there isn't any reason to suspect this thing is sentient, it did make me think that it would be pretty difficult to tell the difference between a system that actually is and one that is just the world's best chatbot. I honestly don't know how one might meaningfully make the distinction. I agree with @LazyPiranha that I'd expect an actual AGI to respond to things in a decidedly non-human way. So it would need to be human enough that we recognize its intelligence, but at the same time you wouldn't expect it to always have a perfectly human response to every query.

 

I think it'll be fun to be able to talk to something like LaMDA. I wonder how resource-intensive a conversation is right now and how far away they are from some kind of commercial application.

