Showing results for tags 'AI'.

Found 8 results

  1. @legend Electronic Arts has begun testing self-teaching bots in Battlefield 1 in an attempt to find the next big leap forward in gaming AI. Partly inspired by Google's DeepMind AI, the project involved EA's SEED division showing the 'agents' 30 minutes of human play (which they could imitate later on) before setting them loose in the game for six days of self-teaching. Interview with EA-SEED Technical Director
  2. The original story in The Atlantic about the original purpose of the chatbots. Follow-up story about what the "language" looks like.
  3. “There has been a lot of controversy lately on the subject of Artificial Intelligence (AI). However, some important elements appear to be missing in the debate so far. Professor Edward Frenkel will talk about the issues surrounding AI from the perspective of a mathematician, and a human. To what extent is it possible to represent reality by numbers and algorithms?” In his own words: (source) Interesting talk although I think Kurzweil et al. wouldn’t care all that much (if they accepted his argument) that they cannot REALLY transfer our minds to computers. I think they just might go “Eh, close enough, it’s not me but something very similar to me” and plow on ahead.
  4. IMO, it will be tough for society to accept at first, but eventually the answer will be yes.
  5. Introduction

Humans can obviously suffer, and in our developmental history the fact that we can suffer has been important. Let's suppose we've suddenly cracked strong AI and can make robots that are more intelligent and capable than humans. Like humans, should we make it possible for them to suffer? I hypothesize that no, suffering actually isn't useful to an ideal intelligent agent, and it would be unethical to make them suffer. However, I'm not entirely convinced of that either, and would like to throw the question out to this board to see what you all think.

Before I explain why I don't think suffering is necessary, we first have to define what we mean by it. I define suffering as a state of being that is worse than being dead or unconscious. That does not mean that anyone experiencing suffering should consider suicide, because if the suffering can be removed in the future, the future well-being will make it worthwhile to tough it out. However, in that moment, non-experience would be preferable.

Formalisms

To ground the idea a bit more (and provide the background for my argument), we can frame suffering in terms of reward functions in Markov Decision Processes (MDPs). In an MDP there is a set of possible states of the world, and the agent receives a numerically represented reward, defined by a reward function, for each transition between two states. The decision problem is to find the actions that maximize the received reward over time. However, saying the reward must be maximized over time isn't quite sufficient for defining the problem; we must also define how reward is accumulated over time. Two common methods are to maximize the sum of rewards over a finite horizon, and to maximize the discounted sum of rewards over an infinite horizon. In the former case, the agent adds up all the rewards it would receive up to some time horizon h and ignores any possible reward it would receive after that horizon.
In the latter, the agent adds up all the rewards it would receive into the infinite future, but discounts the value of each reward as it appears further in the future by a discount factor gamma in [0, 1). If all rewards were simply summed equally to the infinite future, all you'd be left with is positive infinity, 0, or negative infinity, which doesn't convey a meaningful set of values. Exponential discounting ensures that the value of any infinite behavior sequence is always finite. Another important concept in MDPs is the terminal state: a state that, once reached, causes all action to cease and prevents the agent from receiving any future reward, so the reward received from that state onward is zero. A terminal state is in many ways analogous to death. Since the reward received from a terminal state is zero and analogous to death, we can now say that suffering can be mathematically represented as negative reward.

The Argument for No Suffering

With it established that we can formally define suffering as negative reward signals, we can now examine the question "should intelligent robots be able to suffer?" More precisely, should we define the reward function an intelligent robot maximizes to have a range that spans negative values? I claim no. However, let us examine why suffering, or negative reward, might be found in humans. I think the answer is that humans, by default, reason rather myopically. That is, we don't typically plan for the distant future. This is in many ways understandable. Planning for the distant future is really hard, and if we rewind the evolutionary clock to earlier, less intelligent life forms, such life was especially unable to plan for the future. Without being able to plan for the future, it would be impossible for an agent to tell that some sequence of actions led to something bad, like death, or away from something good, like reproduction.
Instead, evolution would have to encode signals that on average tended to steer life toward those outcomes: hence suffering. When a myopic life form experiences suffering, it doesn't need to know why a state is bad; it only needs to reason over the very short term about how to avoid such states. But now let us consider a highly intelligent agent that can and does reason about the more distant future. If an agent uses a discounted-sum infinite-horizon metric, it doesn't need to experience pain, because the higher value of good states will propagate back and draw it toward them; similarly, it will naturally avoid states that unnecessarily terminate its life.

To help illustrate what I mean, I created an example MDP. In this MDP the agent exists in an 8x8 2D world in which it can move north, south, east, or west. The agent has a health level, and when it drops to zero the agent dies. If the agent moves into a food source area, its health goes up by one (to a maximum of 15). If it moves into a thorny region of the world, it gets hurt and loses 3 health points. If it moves into a neutral area of the world, it loses one health from hunger. Finally, there are also predators who randomly appear behind the agent at various times. If the agent moves directly away from the predator, it has a chance to escape that is a function of how healthy it is. If it moves in any other direction, or is unlucky, the predator will land on it. If a predator is at the same location as the agent, the agent has to fight. The chance to win is a function of how healthy the agent is, and even if it wins, it still pays a health cost of 3 for the battle. A visualization of the world is shown below. The grey circle is the agent (shown in the bottom left corner). The red squares indicate thorny regions of the world, and the blue squares are food sources.
Let us now consider a reward function in which the agent does not suffer: for being alive, it receives a reward of 1, and when it dies, it gets no reward. We could imagine a richer reward function without suffering too (such as one with a higher reward for being healthy), but I'm keeping it very simple to make the point. After letting the agent plan a solution (using my planning and learning library that I've developed), I visualized the value and policy (how the agent would behave) for each place in the world, assuming the best possible shape the agent could be in when reaching that location, having started in the bottom left corner. The more red a cell is, the lower its value; the more blue, the higher. Arrows indicate which direction the agent would go from that location. As you will note, despite the fact that the agent never suffers, the fact that it does not reason myopically enables it to do the right thing: going toward the food source, avoiding thorny areas, and then staying at the food source.

To contrast that, I also ran it with a reward function in which the agent suffers (receives negative reward) when its health is low, suffers for getting hurt by thorns or combat with a predator, and gets pleasure from improving its health. You can see the visualization of that below. It looks mostly the same, but in actuality, it's worse! If you pay close attention, you'll see that the agent does things like briefly moving away from the food source and then going back, so that it can collect the pleasure reward for regaining the health it deliberately lost. So here we have an example where, despite its inability to suffer, the first agent actually performs better than the one that can suffer, because its ability to plan for the future enables it to do so. If this holds universally, then it seems to be a strong argument that we shouldn't make robots that can suffer: since it wouldn't gain us anything, it seems wildly unethical to do so.
So, that said, I'd like to ask what you all think. Are there scenarios in which the ability to suffer would have some utility? Furthermore, what does this mean for humans? Once we can augment ourselves, should we strive to remove our ability to suffer altogether?
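The gridworld experiment in this post can be illustrated in miniature. The sketch below is my own minimal stand-in (not the author's planning library): value iteration on a tiny deterministic 4x4 grid with one deadly cell and a purely positive reward of +1 per step alive. Discounting alone is enough to make the greedy policy steer around death, with no negative reward anywhere.

```python
# Minimal value-iteration sketch (illustrative, not the author's library):
# a purely positive "reward for being alive" plus discounting is enough
# for the agent to avoid death without any negative reward.
GAMMA = 0.95
SIZE = 4
PIT = (1, 1)          # stepping here is death: a terminal state, value 0
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # N, S, W, E

def step(state, action):
    """Deterministic move; walking off the edge leaves the agent in place."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    return (r, c)

def value_iteration(iters=200):
    V = {(r, c): 0.0 for r in range(SIZE) for c in range(SIZE)}
    for _ in range(iters):
        for s in V:
            if s == PIT:
                continue  # terminal: all action ceases, no future reward
            # Reward is +1 for being alive this step; never negative.
            V[s] = max(1.0 + GAMMA * V[step(s, a)] for a in ACTIONS)
    return V

def greedy_action(V, s):
    """Pick the action leading to the highest-valued next state."""
    return max(ACTIONS, key=lambda a: V[step(s, a)])

V = value_iteration()
assert V[PIT] == 0.0
# The greedy policy never chooses the action that leads into the pit.
assert all(step(s, greedy_action(V, s)) != PIT for s in V if s != PIT)
```

The cells next to the pit are never punished in reward terms, but the zero value of the terminal state propagates back through the discounted sum, so the greedy policy simply never walks in; this is the "no suffering needed" effect the post describes.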
  6. RP discussion guide: formal-inquiry

I've argued for consciousness being a result of an intelligent system, which in turn can be defined as a system capable of a certain kind of computation. In this post, I'd like to expand on that notion and start driving toward a metric we could use to identify consciousness, and levels of consciousness, in other beings. Specifically, I believe a metric for consciousness could be established by examining the computational ability of any given being in terms of Turing completeness, computational resources, and the ability to search for explanatory models.

To begin the argument, it's important to first establish what computation, of any sort, is. Computation can be described as a well-defined process of interactions within a closed system. By itself, this has the interesting implication that any physical system is performing computation. For instance, we can say that a star is performing computation, and the computation it is performing is specifically the star function. Of course, we would not label a star as intelligent. So what makes the computation it does non-intelligent, whereas the human brain's computation is intelligent? The first obvious thing to note is that a star is extremely specific in what it computes. We cannot, for instance, simply rearrange inputs to a star and have it compute something else. A star will always compute the star function and nothing more. This is not the case for a human brain, however. A human can compute any number of things, which suggests that the number of computable functions is perhaps important to intelligence and consciousness. If so, then the concepts of particular importance are Turing completeness and the Church-Turing Thesis. To explain these concepts, one must first understand what a Turing Machine is.
Specifically, a Turing Machine is a hypothetical computer that consists of (1) a tape divided into cells, each of which may have a single symbol from a defined finite alphabet written on it; (2) a head that can read and write symbols to any cell on the tape and move left or right on the tape one cell at a time; (3) a state register which specifies the internal state of the machine (one of a defined finite number of states); and (4) a table that defines what the head should do given the internal state of the machine and the symbol on the cell over which the head is located (e.g., whether to write a new symbol, whether to change its position on the tape, and whether to change its internal state).

There is also a specific kind of Turing Machine called the Universal Turing Machine: a computer that can simulate any other Turing Machine if it takes the description of that Turing Machine as input (represented as information on the Universal Turing Machine's tape). Given the definition of a Turing Machine, the Church-Turing Thesis states that any computable function can be computed on a Turing Machine, which in turn means that a Universal Turing Machine can compute any computable function. Thus far, the thesis has held up; no one has ever been able to come up with a physically plausible machine that can compute a function that a Turing Machine cannot. Of course, a Universal Turing Machine is not the only machine that can compute any function, should the Church-Turing Thesis be true. A machine is said to be Turing complete if it can compute any function that a Turing Machine can compute. There are many machines that, setting aside resource limits (e.g., memory), are proven to be Turing complete; the computers we use and the various programming languages that run on them are excellent examples. Let us now return to the human brain and its computational ability.
Specifically, we earlier noted that a serious computational difference between the brain and non-intelligent systems is the number of functions the brain can compute. With the notion of Turing completeness and the Church-Turing Thesis in mind, if the brain is Turing complete, then that gives it an especially strong distinction over non-intelligent systems. Can we demonstrate that the human brain is Turing complete, barring resources? Quite trivially, yes! The fact that we can not only simulate, but write, programs in a Turing complete programming language demonstrates that our mind is Turing complete. The only restrictions on our computational ability are the time and memory we have available.

Of course, computability cannot be the whole picture. Our computers are obviously Turing complete as well, but they are neither conscious nor intelligent like we are, and simply adding more RAM and faster CPUs isn't suddenly going to make them conscious. I posit that the missing piece for intelligence and consciousness is not just having a Turing complete system, but having that system utilized to model the world it senses and to plan behavior to meet goals. That is, an intelligent system will sense its world and use the input from its senses to model the world, where models are developed by effectively searching for programs it can run internally that predict its senses well. Additionally, it can search for programs that use these models to find courses of action that maximize the agent's goals or values. Because the system is Turing complete, it could in theory use any good model. If the mechanisms that create programs are capable of eventually (given enough time and memory) finding any possible program, then the system is in a sense universally intelligent.
We now have a number of important properties for assessing intelligence and consciousness that build on top of each other: (1) the number of functions the system can compute in theory; (2) the number of functions it can compute in practice, due to resource restrictions; (3) the speed at which functions can be computed; (4) the space of programs that can be searched in theory; (5) the space of programs that can be searched in practice, due to resource restrictions; and (6) the speed at which good programs can be found. We then have multiple levels of universality: universality when the system is Turing complete; universality when the space of programs it can search is complete; and another I would propose, universality when the agent is intelligent enough to gather additional resources to facilitate its computational ability.

Humans are especially unique in at least two of those ways. We are Turing complete in theory, and we are smart enough to utilize additional resources outside our brain in our computations. Even something as simple as writing on paper can be seen as resource augmentation, and our development of computers emphasizes this ability even further. The one element which is less clear is whether our brain, given enough time and resources, would be able to find every conceivable program to model the world or search for behavior.

Given these properties, we also now have some informed ways to analyze the intelligence and consciousness of various entities. For instance, we can say that single-celled organisms are neither intelligent nor conscious because the number of functions they can compute is profoundly limited even in the theoretical sense (they badly fail property 1). For more developed animals, like dogs, it is currently less clear. It is possible that their brain is Turing complete, but they may have insufficient resources to compute complex functions, or may not be able to search for programs nearly as well as humans, being restricted to a subset.
We can also now definitively say why our current AI systems are not truly intelligent nor conscious: most are seriously limited in what they compute. Perhaps, however, these properties will serve as a method to guide our research in AI and give us greater understanding of the other life that surrounds us.
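To make the Turing Machine definition above concrete, here is a minimal simulator of my own construction (the transition table and names are illustrative, not from the post). It has the four parts listed: a tape of cells, a read/write head that moves one cell at a time, a state register, and a table keyed on (state, symbol). The example table flips each bit of a binary input and halts on the trailing blank.

```python
# Minimal Turing Machine sketch: tape, head, state register, and a
# transition table mapping (state, symbol) -> (write, move, next state).
def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: cell index -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)          # read the cell under the head
        write, move, state = table[(state, symbol)]
        tape[head] = write                      # write, then move one cell
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example table: in state "start", flip the bit and move right;
# on the blank that ends the input, halt.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(FLIP, "10110"))  # -> 01001
```

A machine is Turing complete exactly when it can emulate a loop like this for any finite table and alphabet, given unbounded tape; that is the sense in which the post claims our everyday computers (and, barring resources, our brains) qualify.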
  7. So I've been thinking about the nature of consciousness and its implications lately and thought I'd toss my thoughts out on this board. I fully expect many people to look at this truly massive wall of text and just back out or leave a "tl;dr", and I also expect that this topic will simply be of no interest to many more. Regardless, I'm posting it anyway in hopes that some of you will be interested in reading or discussing it (I know at least a few of you will be) and because it helps me organize my own thoughts on the matter. To help make it a bit more digestible, I will also add sections to the essay.

Formal Definitions of Consciousness

Before really beginning, it's probably important to first give an intuitive definition of consciousness, or at least of what it is I'm trying to discuss! In this case I'm referring to what it means for us to have that sense of continuing self; why you feel like you throughout your life, versus anything else, and why you are aware of yourself at all. To discuss consciousness as defined above, and its implications, it is important to first define it in a formal sense, beyond just our intuitive understanding: how we can model and hypothesize its existence. To that end, I'll first need to make a conjecture about how consciousness comes to be. Being a conjecture, it's not something I can outright prove, but I think it's a fairly safe one. I would propose that consciousness is a result of an intelligent entity existing. To me, it seems pretty clear that our conscious self is entirely wrapped up in being intelligent, and it seems perfectly reasonable to conclude that consciousness, at a minimum, requires intelligence, because I don't think anyone would claim that a dumb system or organism like a bacterium is conscious.
Now, that doesn't mean the implication goes the other way; the requirement of intelligence doesn't mean that intelligence always begets consciousness (that's why this is a conjecture), but the observation of that relationship does lend it some credence. Moreover, the hypothesis that intelligence causes consciousness also seems the conclusion most harmonious with evolution. Intelligence has a very tangible survival benefit, so it would be selected for in the evolutionary process. But if consciousness isn't a result of intelligence, then why would it be selected for in our evolution? Why would something so unique but unimportant for our survival just happen to evolve? It makes the most sense if consciousness is purely an outcome of the right kind and level of intelligence in a being. I suppose we could also sit here and argue about what it means for something to be intelligent, but I'll avoid that for now and leave it at there being some form of intelligence that entails consciousness.

Using that conjecture, we can now think about what it means for an intelligent entity to exist instead of what it means for a conscious entity to exist, and this seems a bit more tractable. Again, while we could still get into lengthy discussions about intelligence, I don't think we need to get that deep. All we need to observe about intelligence is that it is a dynamic system that takes information, manipulates it, and causes behavior or output. What that means is that intelligence, the process of information manipulation and output, can be functionally described. If we understood intelligence perfectly, then we could describe the system perfectly. But merely describing the system isn't sufficient to make an intelligent agent exist. I can describe anything in the world that already exists, but my describing it isn't what makes it exist. What makes it exist is that functional system actually being enacted in reality; only then does it actually exist rather than just being a description.
What that means, then, is that intelligence exists in the components that make it work. By proxy, consciousness exists in the components that carry out the functioning of the intelligence. In other words, consciousness is tied to the existence of a computer! Now, a computer in the theoretical sense can mean any number of systems, not just the kind of computer we use on a daily basis. The more formal definition, and some example systems, can be found on Wikipedia's page on the topic if anyone is interested. Basically, the core idea is that a computer is a contained system that can carry out a functional process or algorithm. So under this more theoretical definition of a computer, a brain can easily be described as a theoretical computer; it just may be a very specific kind that can only run certain kinds of algorithms.

Necessary Hardware

If you're a proponent of a soul, that technically could work into this framework as well, provided it is integral to the computation of our intelligence. That said, I think postulating the existence of a soul is completely unnecessary: it adds no explanatory power and also doesn't fit well with evolution (how can the physical, on which evolution acts, suddenly cause a connection to a spiritual world?). Specifically, the only way in which a soul could be of any use beyond what could be done in the physical is if it provides computational power that cannot be replicated by the brain. I see no reason to suspect this, and beyond that, there has been no evidence of any mechanisms in the brain that are affected in ways not caused by physical phenomena. For those reasons, the existence of a soul, a computational system operating outside the physical world, is very unlikely.
One of the interesting ramifications of all this is that a conscious being can exist on a typical computer system that we have today (a von Neumann-like architecture), provided it is fast enough to carry out the necessary computation of intelligence. The fact that the computer architecture is different from the brain is irrelevant. All that is relevant is that a physical system exists that carries out the computation of intelligence. One might argue that a typical computer is unable to perform the computation that intelligence requires, but this is misguided. Beyond problems of having the necessary resources (i.e., enough RAM, disk space, and perhaps computational speed to make it efficient), our typical von Neumann-like computers are Turing complete, or computationally universal. That is, they can compute any function another Turing complete computer can compute. This of course might lead one to ask if there are functions that are not computable by a Turing machine, which could potentially cross off a von Neumann-like system as being able to run a conscious being. The answer is yes, with a but. We can conceive of functions that are not computable by a Turing machine; however, these are functions that would require infinite computation time and are therefore not possible for any physical computer, including the brain, to compute. A soul proponent might come back and say "Aha! This is why we need a soul, to compute infinite-time functions!", but that seems more like wishful thinking to inject a soul into the equation. There is no reason to believe that intelligence requires infinite-time functions, and if the argument for a soul is contingent on that, then there is a hefty burden of evidence that must be supplied first; it also still suffers from the other problems I previously discussed about postulating a soul.
And of course, considering how many functions do not need infinite computation time, it's really not worth arguing that intelligence requires it without at least some evidence. I would be remiss if I didn't mention another caveat to whether a von Neumann-like machine could be used to produce consciousness: whether the brain is a quantum computer and whether intelligence is a quantum function. If so, then while a von Neumann-like machine could theoretically run consciousness, it couldn't do it in any practical sense. That's because, while theoretically *any* Turing complete machine can run the algorithms a quantum machine does, doing so would require resources well beyond anything even remotely practical in reality; that is, it's practically impossible for a non-quantum computer to run a quantum function. That said, I've never seen a very compelling reason to suspect that the brain is a quantum computer. So far, all of the functional mechanics of the brain have been found to be non-quantum. Further, the hypothesis that the brain is a quantum computer doesn't seem to fit well with evolution. If we examine the brains of simpler life forms (which we can analyze more directly on a neuronal level), nothing seems to indicate anything quantum going on, and the progression from there seems to be more a matter of progressively complex integration of neurons than of wholly new functional units with wildly different behavior being introduced to the brain's overall system. Granted, while we don't have very explicit means to analyze the neuronal structure of long-gone evolutionary ancestors, comparing the genetic tree of life with the simple life forms currently alive gives us a good approximation of what happened. At any rate, even if it is true that the brain is a quantum computer, we would still have hope for the future of developing and manipulating consciousness, because we can make quantum computers in the physical world.
Despite those two unlikely caveats, it seems we have good grounds to conclude that our more typical von Neumann-like computers can indeed house a conscious entity by virtue of the fact that they can run the computation for intelligence. But before continuing on, it's probably worth addressing one of the more famous objections to this computational theory of mind, Searle's Chinese Room argument. If you are unfamiliar with it, here is Wikipedia's summary:

The flaw in Searle's argument is that he's not attacking the core notion of consciousness existing due to the *computation* of the intelligence function. Instead, he takes a single functional unit and calls that the computer. While the human in the room is a wildly important functional unit, this is still clearly a false equivalence, because the person inside the Chinese Room cannot do what the Chinese Room does. Therefore the person is not the entire computational system; he is merely one element that carries out part of the computation. Also important are the book and the writing (memory) that take place in the room, and everything else that goes on in the room relevant to the computation. Saying the Chinese Room is not conscious because the person inside doesn't understand Chinese is like saying a person isn't conscious because none of that person's neurons understand any of the things the whole person does. Again, what matters is the whole functional system being implemented and carried out, not a single component of it. With that objection out of the way, we can move on to considering what this model and its hardware requirements for consciousness imply about ourselves and what we can do with ourselves.

Maintaining our Sense of Self and Consciousness Downloading

There is an interesting question raised by viewing consciousness as the product of a given computational system: if we duplicate that system, is it the same self and the same consciousness?
I believe this question can be readily answered from the previous analysis: no, it is not the same, because you have now instantiated a new computational system. What was determined above is that the existence of consciousness is tied to the system that computes it, not to the description of a function or algorithm. Even if the information and mechanics of another system are identical, it is a different computing system; ergo it produces a new consciousness.

That answer, however, raises an even more important question. Our brain is in constant physical flux. Neurons die and grow, and atoms are shed and replaced. So if the physical makeup of our brain (the computational system to which our consciousness is tied) is frequently changing, and different computational systems have different consciousnesses, are we effectively dying all the time, producing a new consciousness to replace the old? How is it that our sense of self is maintained across time? The answer is provided by observing the key difference between creating a new system and what happens to our brain. For a new system, a completely disjoint computer with no functional relation to the former is created. In the latter case, the computer is gradually changed. With a gradual change, the computer that was is in constant functional contact with the changes. It is not outright replaced, but adapts to the changes and integrates them into the whole system. So just as the key to a consciousness and intelligence existing is having a contained system that provides the necessary computation, so too is it the key to maintaining that consciousness. So long as the system functionally integrates its changes, it is the same system and the same consciousness and intelligence. Our brain's design is especially good at this and is very adaptive to changes.
In fact, the very nature of the algorithms running in the brain is designed to handle the noise entailed in every physical mechanism of the brain. Ergo, changes in our brain do not cause our self to die; they're just part of the functional system. It should be noted, though, that if serious damage is done to the brain such that it cannot be naturally integrated into the whole, then the consciousness that existed before would be gone, because the system must be able to integrate the changes. Damage or remove too much, and the system cannot integrate the changes or loss.

A concept that frequently comes up in science fiction is consciousness downloading, where a person has their whole self moved to another physical system (such as a complex computer or another body and brain). If we someday understand how the brain works in full, it is interesting to ponder whether this can ever be done. Based on the conclusions I've drawn, I think the answer is yes, but perhaps not as it often appears in movies or books. In fiction, the consciousness download is often performed as basically an information download: copy the information and start it running on the new system. Except if that is all it is, then you have just killed the original person and created a new consciousness, as I described above. The solution, then, is a progressive and interactive download. Just as our brain can incorporate new elements and discard old ones as long as the changes functionally interact and can be integrated, so too can we swap our physical substrate. The key would be to start up functional elements on the new system that *functionally interact* with the previous elements so that they are integrated into the system. Previous elements in the brain that once covered this functionality can then be removed, so that part of the brain is now operating across these eventually disjoint systems. This process is then continued until all the functional elements exist on the new target system.
At this point any link between them can be safely severed, and consciousness has been shifted to a new physical location.

The Future of Humanity

With this model of consciousness and consciousness downloading, it's fun to think about the future of humanity. Consciousness downloading means that we could slowly replace our brain with better, computer-like components, or download outright into a computer-like system. Effectively, humans would become AI without loss of consciousness and could become far smarter than we are now. That is, there would be no real difference between humans and created AI; we'd all use whatever physical architecture best suited our needs.

It also means that death becomes less of a problem. If your body is dying, it's a simple matter of downloading to a new location. Of course, dying would still be possible. As described above, consciousness downloading couldn't be performed as a mere information download; it would have to be a functional process. So if a being's physical makeup is being destroyed and there is no time to make the functional transition, that being is dead and their consciousness gone. At best, you could make a duplicate; a twin.

There is still hope, though, that being destroyed will be very difficult. As we move to new computational hardware, we might find ways to make the functional transition to a new system fast enough to escape many dangerous scenarios. We might even run ourselves across multiple systems at all times, so that the destruction of any one never means the end. Living to the end of the universe may become a reasonable enough likelihood.

Beyond keeping ourselves alive far longer and making ourselves smarter, better, and so on, there is another interesting possible outcome for humanity under this model of consciousness.
If consciousness is a computational system that computes an intelligence function, it is possible that society will become so highly integrated that its members form the functional elements of a far greater computer, creating a meta-consciousness. This would be unlike anything that has happened before, because it would be the first instance of consciousness and intelligence existing on top of consciousness and intelligence. What this consciousness would be, as an integration of separate intelligent agents, is probably beyond anything we've known, so I won't speculate on specifics; I'll only offer that this model of consciousness may mean it is in humanity's future to become part of a far bigger consciousness with more capability than anything before. Even if death still occurred to individuals under such a system, I'm not entirely sure what that would mean, because the larger consciousness would live on, just as we do despite our neurons dying. This would surely entail new philosophies of life.

Of course, our existence, even with all this advancement, is still contingent on the universe. If we exist in the universe and the universe dies, we go with it. To achieve true immortality, then, there are only two possibilities. One is that we become capable enough to bend the state of the universe so that it never suffers entropic death. The other is that we escape the universe altogether. The former might be easier (and I use that word loosely), but the possibility of the latter is more interesting to consider. Part of me feels like taking the perspective that, because the universe is effectively self-contained from other universes (or seems to be), escape is impossible. An analogy might be running a virtual world on a computer that contained sentient beings inside it.
These beings would seemingly be stuck in the virtual world forever. On further examination of this metaphor, though, I decided that is not *necessarily* true. People can manipulate programs in lots of ways to cause effects outside of their supposedly self-contained environment; that is essentially the point of hacking. So, if there are fundamental properties of reality shared by all universes, and our universe is just one of many running on those properties, the situation is analogous to an operating system running a program (or, in our case, a virtual world). If we come to understand these fundamental properties, as well as our universe and its relation to them, it might be possible to "hack" reality: to do things in our universe that affect the rest of the fabric of reality. From there, it may be possible to build an information bridge to somewhere outside our universe. Once an information bridge is available, we can transfer our consciousness across it in the same way described earlier.

That's a lot of ifs, and it certainly may not be possible, but the idea that we could not only live to the end of the universe but actually escape it is a pretty fun one (at least to me). Not only could we exist eternally (something that might be possible on its own if we can figure out how to avoid the entropic death of the universe), we could completely circumvent all bounds placed upon us. If you and I are super lucky, maybe we'll even live long enough to reach consciousness transfer and find out for ourselves!

The End

Okay, that concludes my thoughts on these matters. I hope some of you read at least part of it and found it at least somewhat interesting. I'd love to discuss any of these thoughts further. If you have questions about anything I've said or what it might imply, or if you disagree with some of my arguments, I'd love to hear it.
I by no means think I have the answers here; this is one of the great mysteries of our time, so I'm open to being completely wrong. But I'd love to work toward finding a model of what consciousness is, and this is my best stab at it currently!
  8. http://www.forbes.co...-become-brains/ Sounds like some cool work. One thing that bugs me about this article, however, is this line: Erm... There is at least 30 years' worth of work on learning algorithms in conventional computing. This project's goal is to "hardwire" such learning algorithms so that they run faster (much like having dedicated hardware for video decoding rather than running the decoder in software), and I'd bet their architecture is heavily informed by those last 30 years of research.
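For a flavor of that software-side work: here is a minimal perceptron, one of the oldest learning algorithms in conventional computing, whose weight-update rule is exactly the kind of thing neuromorphic hardware aims to bake into circuitry instead of executing as instructions. The data and parameters are my own toy illustration, not anything from the article.

```python
# Classic perceptron learning rule, run entirely in ordinary software.
# Dedicated hardware would implement the same update (w += lr * err * x)
# in circuitry, trading flexibility for speed -- analogous to a video
# decoder chip versus decoding in software.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights w and bias b for binary labels in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                               # 0 when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
assert [predict(x) for x, _ in data] == [0, 0, 0, 1]
```

The algorithm is from the late 1950s, which is rather the point: the learning rules themselves are old and well understood; the new work is in the physical substrate that runs them.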