SFLUFAN

The Morality of Killing Video Game NPCs (Vox article)

I have some issues with this article. First, the fact that an NPC might do things that resemble human thought processes doesn't by itself give that NPC moral value (except maybe sentimental value to the humans it resonates with, much as a teddy bear has). For instance, if an NPC's goals aren't met, it might not 'suffer' in the way that humans suffer. Our minds are set up such that we experience all sorts of genuine suffering when we don't meet our goals, but that doesn't have to be true of artificially intelligent agents. For such an agent, it could simply be a matter of acting out probabilistic equations; suffering needn't enter into it at all, because even if the agent 'dies' in the game, that 'death' is, at best, something merely analogous to death. So basically I think the following quote is bunk, or at least doesn't have to be true, because we are creating artificial intelligence from scratch:

We care much more about sophisticated and algorithmically powerful systems than about simple ones. But the difference is one of degree, not kind, so the simple ones seem to matter a tiny amount.

Moving on to the end:

Dylan: At some point all of this comes down to when you think certain beings can or cannot have wants, needs, and desires, or feel pain and pleasure, and whatnot, and a lot of people just have the intuition that, while the mental states of complex animals are less intricate but somewhat analogous to the mental states of human beings, NPCs and insects just don't have mental states at all. It may be "like" something to be a bat, but it isn't like anything to be a mosquito or a Quake bot. Why do you think that's wrong?

Brian: We have a powerful intuition that it's a factual question whether there's something it's like to be a human, a bat, or an NPC. Are insects conscious or not? This is usually regarded as a matter for science to settle in an objective manner. And while science does play an important role, I claim that it's fundamentally a moral question — one for our hearts to decide.

On my view, there is only physics, and "consciousness" must be an emergent phenomenon within physics — a larger, more complex process built from smaller components. Life, planets, skyscrapers, and nation-states are also emergent phenomena. Thus, for something to be conscious means that it has mental traits, exhibits behavior, and uses cognitive algorithms that we decide qualify it as being "conscious."

We tend to think that certain mental processes — like attention, sensory integration, self-reflection, imagination, and so on — when put together constitute our conscious mind. Hence, when we see these kinds of processes in other places, it's reasonable to call those other processes "conscious" as well, to a greater or lesser degree. This is actually what scientists do when they assess the sentience of animals; they just aren't always explicit about the fact that these questions are judgment calls based on what criteria we feel ought to be relevant to what degree.

Now, you might object and maintain that your being conscious is not a matter of opinion, but is something you can be certain of based on introspection. I share this intuition, and it's very powerful. But think about what's going on in your brain when you have that thought. What neurons are firing in what pattern, and what algorithms are they playing out? Perhaps there are inputs from your perception combining with a mental representation of "yourself," which work together with cognitive centers to form the thought that "I'm conscious!", which is then verbalized by the muscles in your mouth, and so on. Whatever the specific steps of the process are, those steps constitute an instance of consciousness in your mind. And presumably you're also conscious when you have perceptions even if you don't explicitly think about that fact. In order to extend the range of things we call conscious beyond references to your personal introspection, we generalize what sorts of cognitive algorithms are at play here.

We could define consciousness narrowly, to only encompass very human-like brains. But this seems parochial, especially if similar behavioral and intellectual functionality can be achieved by somewhat different algorithms or cognitive architectures. If we define consciousness to be too specific, we may also rule out humans with certain cognitive disabilities, even if most of us would still call them conscious. If we think about the broader, fundamental principles that lead us to care about a mind — e.g., having a sense of what's better or worse for it, having goals and plans, broadcasting information throughout its brain about events that take place, being able to act successfully in the world, and so on — we can see rudimentary traces of these traits in many places, including insects and, to a much smaller degree, computational processes like some NPCs in video games.

We might think there should be a threshold of cognitive complexity below which computational agents stop mattering, but it's not clear where to set such a threshold. Each step in increasing an artificial agent's mental abilities by itself looks rather trivial; it generally means just adding some extra lines of code. It's when all of these seemingly trivial steps are added together that the agent traces out patterned behavior that looks more meaningful to us. If we insist on self-reflection as a clear cutoff for moral concern, we run into the problem of specifying where "self-reflection" begins and ends. Even a trivial computational agent may monitor its own state variables, assess its performance, generate statements about itself, store and retrieve memory logs of its past experiences, and so on. Without a clear dividing line here, I think a computational agent with these trivial abilities ought to be called marginally self-aware, and an agent with more powerful and advanced self-reflection ought to be called more self-aware.

Unless we adopt a parochial view, the traits of an organism that we consider to constitute its sentience and welfare come in degrees. Hence, really simple agents can be said to have really small but nonzero amounts of sentience.

All of this seems like a merely definitional argument at best, and the claim that it's "fundamentally a moral question, one for our hearts to decide" seems totally out of place in this article. I can't imagine a way that he could substantiate it (and if he did, he would by definition be refuting it, because he wouldn't be substantiating it 'with his heart,' whatever that means).

Interesting article. Though I share some common ground with his positions (e.g., continuums of ethical concern based on cognitive abilities), I don't think I agree that we should have any concern for game NPCs. "Avoiding" getting shot isn't typically something an NPC reasons about; it's a mere pre-programmed behavior, no more cogent or capable of suffering than my soda fizzing up when I open the bottle.

That said, he does focus on goal-directed agents and reinforcement learning ( <3 ), but I still think there are a few problems with his reasoning that prevent me from caring in the slightest about video game NPCs.

For one thing, the goal-directed behavior you find in video game AI isn't really goal-directed in any meaningful sense. Goals typically arise transiently from pre-defined behavior rather than from fundamental values of the NPC. For instance, hard-coded scripting will dictate that an NPC suddenly has a pathfinding problem to reach some location; planning is performed, and then the agent follows the plan fairly blindly. The agent isn't acting because it values that goal location; it's acting because some transient system found a path to it and the agent followed it. If something gets in the agent's way and it blindly keeps walking into the obstacle, it doesn't care that it's been "thwarted", because getting to that location isn't an actual value of the agent. The "goal" exists only temporarily, in a disjoint planning system, and it came into existence for predefined reasons rather than because it is a goal the agent fundamentally values.
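To make that concrete, here's a toy sketch of the pattern I mean (my own construction; no real engine works exactly like this, and the grid, coordinates, and "door" are made up for illustration). The destination exists only as an argument to one pathfinding call, and the follower that executes the plan carries no representation of it at all:

```python
from collections import deque

FREE, WALL = ".", "#"

def bfs_path(grid, start, goal):
    """One-shot breadth-first search; returns a list of cells, or None."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                 # reconstruct and return the plan
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if grid.get(nxt) == FREE and nxt not in prev:
                prev[nxt] = cell
                frontier.append(nxt)
    return None

# A five-cell corridor; scripting decrees the NPC at (0, 0) must reach (4, 0).
grid = {(x, 0): FREE for x in range(5)}
plan = bfs_path(grid, (0, 0), (4, 0))    # the "goal" exists only inside this call

pos = (0, 0)
grid[(2, 0)] = WALL                      # a door slams shut after planning
for step in plan[1:]:
    if grid[step] == WALL:
        # No re-planning, no frustration, no goal left anywhere in the system:
        # the walker just stalls (or, in many games, keeps walking into the door).
        print("blocked at", step, "- plan abandoned")
        break
    pos = step
print("final position:", pos)            # (1, 0): short of the scripted target
```

Once `bfs_path` returns, nothing in the running system "wants" (4, 0) anymore; there is only a stale list of cells being consumed.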

I also think a bigger problem is that the existence of values isn't sufficient for an entity to be of any moral concern. In addition to values, I contend that an agent needs to be aware of itself and the world in which it resides. First it's probably important to clarify what it means to be aware of yourself and your world. For sure, an RL agent must be able to observe a "state" of the world of some sort in order to learn how to behave. But these states are typically handed to the agent, and the agent does nothing to think about what that state is or what it might actually represent. It simply accepts it as computational input and uses it. Model-based RL gets a little closer, because it involves learning how states transition between each other, but the agent is still being given the states, and the way it models transitions between states is typically limited to a very specific model class, even in advanced academic AI. What seems critical to awareness is that we recognize that our mental states are just that—not truth about the world—and that we seek to find models of how the world actually is so that we may better act in it. Furthermore, what is really unique about human life is that our modeling ability is Turing-complete, which makes us a very unusual kind of system, categorically different from any other kind of physical system in the universe.
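For reference, this is roughly what I mean by the agent being "handed" its states. Below is a bare-bones tabular Q-learning loop (a standard textbook sketch in my own code; the corridor environment and all the constants are made up). The state is consumed as an opaque table index; nothing in the agent asks what it denotes:

```python
import random

random.seed(0)
N_STATES, N_ACTIONS = 5, 2      # opaque labels; the agent never asks what they mean
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def env_step(s, a):
    """Toy corridor: action 1 moves right, action 0 moves left.
    Entering the last state pays 1.0 and ends the episode."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

s = 0
for _ in range(20000):
    if random.random() < EPS:   # epsilon-greedy exploration
        a = random.randrange(N_ACTIONS)
    else:
        a = max(range(N_ACTIONS), key=lambda i: Q[s][i])
    s2, r, done = env_step(s, a)
    target = r if done else r + GAMMA * max(Q[s2])
    # This update is the agent's entire contact with its "world": an integer in,
    # a number back. Nothing here represents, models, or questions what the
    # state actually is -- it's just a row in a table.
    Q[s][a] += ALPHA * (target - Q[s][a])
    s = 0 if done else s2

# Learned greedy action for each non-terminal state: 1 ("right") everywhere.
print([max(range(N_ACTIONS), key=lambda i: Q[st][i]) for st in range(N_STATES - 1)])
```

The agent learns to behave perfectly well without ever holding anything you could call a belief about its world, which is exactly why I don't think observing states is the same thing as being aware of a world.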

So if our current best AI tends not to go this far, you can be damn sure that video game NPCs aren't coming anywhere near close enough to be of any moral concern. They are, for all intents and purposes, lifeless physical systems. Worrying about an NPC's well-being would be like worrying about a pool of water: sure, it changes state, but it's not of moral concern.

I'd also like to address this bit of it:

How to determine whether this feedback is positive or negative is a tricky issue. In reinforcement learning, the rewards are numbers that can be either positive or negative, so a natural assumption is that positive numbers are (in an extremely crude sense) "pleasurable" and negative numbers are "painful," but this discrimination isn't necessarily accurate, because there are cases where we can shift the set of possible rewards/punishments the agent receives up or down (i.e., add or subtract a constant from all of them) without changing behavior. So we may need to think more about exactly what separates reward from punishment in these agents, perhaps by appealing to behavioral seeking vs. avoiding tendencies, or to other characteristics.

He seems to be getting hung up on behavior and isn't really appreciating the nuances of these properties of MDPs. It is true that for a fully defined Markov Decision Process (MDP) without termination states, you can shift all the rewards by a constant amount and the optimal policy will remain the same. But termination states (which by definition return 0 reward once reached) change things. Also, the relevant definition of suffering (assuming you have an otherwise aware agent, as discussed above) seems to me to be a state of being that is worse than non-existence. Negative rewards entail this, because if you gave an agent that was experiencing negative rewards (and had no way to shift to positive rewards) an action that would let it kill itself (transition to a terminal state, in MDP terms), it would always take it. For example, if I have a simple grid world and specify a single goal terminal state that returns a reward of 1, with zero reward everywhere else, this will produce the same behavior as an agent that receives a negative reward everywhere: go to the goal position in the grid world. But if you then added an action the agent could take from any state to terminate early, the former agent would not take that action, whereas the latter, negative-reward agent would take it.
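A quick way to check this is to run value iteration under both reward schemes, with and without the extra terminate action. This is my own sketch, not anything from the article: the six-cell corridor, γ = 0.95, and the zero-reward "quit" action are all assumptions chosen for illustration:

```python
GAMMA = 0.95
N = 6                 # corridor cells 0..5; cell 5 is the terminal goal
GOAL = N - 1

def action_values(s, V, step_r, goal_r, allow_quit):
    """Q-values for non-terminal state s under a given reward scheme."""
    q = {}
    for name, s2 in (("left", max(s - 1, 0)), ("right", min(s + 1, GOAL))):
        # Entering the goal terminates, so there is no bootstrapped future value.
        q[name] = goal_r if s2 == GOAL else step_r + GAMMA * V[s2]
    if allow_quit:
        q["quit"] = 0.0   # immediate termination with zero reward
    return q

def solve(step_r, goal_r, allow_quit, sweeps=500):
    """Value iteration, then greedy policy extraction."""
    V = [0.0] * N
    for _ in range(sweeps):
        for s in range(GOAL):
            V[s] = max(action_values(s, V, step_r, goal_r, allow_quit).values())
    policy = {}
    for s in range(GOAL):
        q = action_values(s, V, step_r, goal_r, allow_quit)
        policy[s] = max(q, key=q.get)
    return policy

# No quit action: both schemes induce the same behavior (walk right to the goal).
print(solve(step_r=0.0,  goal_r=1.0,  allow_quit=False))   # 'right' everywhere
print(solve(step_r=-1.0, goal_r=-1.0, allow_quit=False))   # 'right' everywhere

# Add the quit action and the schemes come apart: only the agent that is
# punished on every step prefers non-existence to carrying on.
print(solve(step_r=0.0,  goal_r=1.0,  allow_quit=True))    # still 'right'
print(solve(step_r=-1.0, goal_r=-1.0, allow_quit=True))    # 'quit' everywhere
```

The first pair behaves identically, so behavior alone can't tell the schemes apart; it's the added option to terminate that reveals which agent is in a state worse than non-existence.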

It's interesting, but I'm not even slightly convinced. Even if NPCs had the same moral worth as insects, the moral worth of an insect is extremely low compared to mine, so my own preferences outweigh an insect's so strongly that it is rarely wrong to kill an insect if you are doing it for a good purpose (such as keeping them out of your house).

I also don't like the idea of ascribing moral value to neural activity alone. I ascribe moral value to potential experiential suffering and well-being, and I see no reason to believe that neural activity alone always entails that to any degree at all.

I don't know a lot about insects, but I know their brains and nerves aren't even close to the complexity of mammals'. I have a hard time believing that they suffer in a way that is meaningfully similar to human beings. At the same time, I do ascribe them some form of moral worth, but it is very low. For instance, when I'm outside on my porch and I see moths or other bugs/insects that aren't really inconveniencing me, I don't go out of my way to kill or hurt them. I would even go as far as to say that doing so would be wrong. All bets are off if a wasp tries to sting me or a scorpion is in my house. Do you agree that it is wrong on some level to needlessly kill insects? If not, at what point does it become wrong to needlessly kill life?

These are separate issues from NPCs. NPCs are not life yet.

Well, there are two separate issues here, I think. Above I was talking about ascribing moral value in an objective sense, and it seems that you're talking about my intuitive values for insects. On an intuitive level, if I kill an insect, I feel like it's an at least partially malevolent act, but I'm not really sure objectively how much value to ascribe to insects. My guess is that we shouldn't ascribe much, if any, and that the answer would hinge on whether or not there is anything that it's like to be an insect.

There's another issue, too, I think, and that is that people ascribe sentimental value to agents as well and I think that value is relevant to morality because it's a value held by humans. So even if insects don't suffer at all and there is nothing that it's like to be an insect, if enough humans value insects, then it would represent at least some loss of global well-being to kill them (though that loss might not be a net loss depending on what's gained).

I agree with the second issue you brought up. It is a separate issue and important.

I am talking objectively though. What kind of objective moral value does an insect deserve? I am saying I think it deserves a very small amount.

I think it's objectively unclear, so I think we should give them some tiny amount: first, to be safe, since there might actually be something that it's like to be an insect and they might suffer; and second, because some people do intuitively value insects. If a bee is likely to sting you and you have the power to kill it, though, I think you should do it without question. Or wipe out a whole hive if it's near your BBQ. :P

I agree. What about bacteria? :P

Lol, I don't know nearly enough about bacteria to speculate, but I don't ascribe them much value from an intuitive POV.

I feel pretty confident in saying that I ascribe absolutely zero value to bacteria myself. They really are just complex chemical reactions with no awareness. I'd almost say the same for insects, but I have some non-negligible uncertainty regarding their intelligence that could give them at least a tiny positive value :P

I don't give value to bacteria either; I was just having some fun. Maybe a little instrumental value for good bacteria. It's just neat to think about the lines we're drawing, and I think we are on the same page.
