I dislike intro-level philosophy classes. Invariably, a student asks whether this world is merely a ‘matrix’. The question usually elicits an eye-roll from me—not because it’s a bad one, but because it’s so tired and clichéd. (And the question is made all the more obnoxious by the fact that the student thinks he or she is volunteering something novel.) So I’ve come to dismiss the thought of living in a matrix as an amateurish musing popularized by a silly movie. But recently, I’ve encountered a more sophisticated articulation of the issue.
In a debate last month, atheist author Sam Harris made an interesting secular argument for the afterlife. It builds on the simulation argument, first formulated by Swedish philosopher Nick Bostrom. Bostrom contends that it’s likely we live in a simulated world, and this conclusion rests on pretty sound assumptions. I’ll let him explain the logic in his own words:
The formal version of the argument requires some probability theory, but the underlying idea can be grasped without mathematics. It starts with the assumption that future civilizations will have enough computing power and programming skills to be able to create what I call “ancestor simulations”. These would be detailed simulations of the simulators’ predecessors—detailed enough for the simulated minds to be conscious and have the same kinds of experiences we have. Think of an ancestor simulation as a very realistic virtual reality environment, but one where the brains inhabiting the world are themselves part of the simulation.
The simulation argument makes no assumption about how long it will take to develop this capacity. Some futurologists think it will happen within the next 50 years. But even if it takes 10 million years, it makes no difference to the argument.
Let me state what the conclusion of the argument is. The conclusion is that at least one of the following three propositions must be true:
1. Almost all civilizations at our level of development become extinct before becoming technologically mature.
2. The fraction of technologically mature civilizations that are interested in creating ancestor simulations is almost zero.
3. You are almost certainly living in a computer simulation.
How do we reach this conclusion? Suppose first that the first proposition is false. Then a significant fraction of civilizations at our level of development eventually become technologically mature. Suppose, too, that the second proposition is false. Then a significant fraction of these civilizations run ancestor simulations. Therefore, if both one and two are false, there will be simulated minds like ours.
If we work out the numbers, we find that there would be vastly many more simulated minds than nonsimulated minds. We assume that technologically mature civilizations would have access to enormous amounts of computing power.
So enormous, in fact, that by devoting even a tiny fraction to ancestor simulations, they would be able to implement billions of simulations, each containing as many people as have ever existed. In other words, almost all minds like yours would be simulated. Therefore, by a very weak principle of indifference, you would have to assume that you are probably one of these simulated minds rather than one of the ones that are not simulated. (New Scientist, 2006)
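To make the counting concrete, here is a toy version of that arithmetic in Python. The specific numbers are mine, invented purely for illustration; Bostrom commits to none of them.

    # Toy illustration of the indifference reasoning above.
    # Every number here is invented purely for the example.
    real_minds = 1e11            # roughly, humans who have ever lived
    simulations = 1e6            # hypothetical count of ancestor simulations
    minds_per_simulation = 1e11  # "as many people as have ever existed"

    simulated_minds = simulations * minds_per_simulation
    p_simulated = simulated_minds / (simulated_minds + real_minds)
    print(f"P(you are simulated) = {p_simulated:.8f}")  # ~0.99999900

The point of the sketch is only that once simulated minds vastly outnumber unsimulated ones, indifference pushes the probability arbitrarily close to 1.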
It’s important to note that Bostrom’s trilemma does not necessarily show that we are living in a simulated reality, only that it’s one of three possibilities. “In reality,” he writes, “we don’t have much specific information to tell us which of the three propositions might be true. In this situation, it might be reasonable to distribute our credence roughly evenly between them.”
Still, I get the impression that he leans toward the third proposition, that of simulated reality, given his discussion of the other options.
Proposition one is straightforward. For example, maybe there is some technology that every advanced civilization eventually develops and which then destroys them. Let us hope this is not the case.
Proposition two requires that there is a strong convergence among all advanced civilizations, such that almost none of them are interested in running ancestor simulations. One can imagine various reasons that may lead civilizations to make this choice. Yet for proposition two to be true, virtually all civilizations would have to refrain. If this were true, it would be an interesting constraint on the future evolution of intelligent life.
Building on Bostrom’s argument, Sam Harris suggests that some of these simulated worlds will include an afterlife, because these simulations would likely reflect the religious beliefs of their creators. A Mormon would then simulate a reality in which Mormonism is true, a Hindu would simulate a reality in which Hinduism is true, and so on.
What do you think of these arguments, and, if correct, what would their implications for our everyday lives be?
Some people came onto my blog pressing that argument…they used it as part of something called the “New God Argument.”
And then I got some Mormon Transhumanists.
It was a weird week for blogging…
Alan Watts brought up this idea in a talk about Brahman, who is so absorbed in playing himself that he has forgotten about the button he pressed to get to the here and now.
“So if our technology were to succeed completely, and everything were to be under our control, we should eventually say,
‘We need a new button.’
With all these control buttons, we always have to have a button labeled SURPRISE, and just so it doesn’t become too dangerous, we’ll put a time limit on it – surprise for 15 minutes, for an hour, for a day, for a month, a year, a lifetime. Then, in the end, when the surprise circuit is finished, we’ll be back in control and we’ll all know where we are. And we’ll heave a sigh of relief, but, after a while, we’ll press the button labeled SURPRISE once more.
“During the manvantara when the world is manifested, Brahma is asleep, dreaming that he is all of us and everything that’s going on, and during the pralaya, which is his day, he’s awake, and knows himself, or itself (because it’s beyond sex), for who and what he/she/it is. And then, once again, presses the button—surprise!”
It is nonetheless an interesting notion to think about.
This argument presupposes that it is possible to create a “perfect” simulation. This has not been proven to be even theoretically possible.
What if you need all the matter in the Universe to simulate the Universe?
There are definitely more than 3 answers.
Interestingly, this universe is NOT perfect. Down at the subatomic level, things are no longer precise; they become “fuzzy,” almost as if a simulation weren’t bothering to position each and every electron individually, but were instead just averaging out the math to save on computing resources…
Also, if it takes all the matter in the universe to simulate the universe, then you don’t simulate it all at once, you simulate it in smaller, discrete chunks, save them, then run a different part of the universe for a while, and keep paging them in and out of memory. The beings inside the simulation won’t know when they are “saved to disk” while another section of the simulation runs, they’ll never see any time pass at all.
Which is a great explanation for why there’s a speed of light limit.
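For what it’s worth, the paging scheme is easy to sketch in Python. This is a toy model under my own assumptions, not a claim about how a real simulator would work:

    # Toy model of running the universe in paged chunks. A chunk's
    # inhabitants only ever see their internal tick count; the wall-clock
    # time spent while a chunk sits "saved to disk" is invisible to them.
    import pickle

    chunks = [{"region": i, "tick": 0} for i in range(4)]
    storage = {}

    for cycle in range(1000):
        for i, chunk in enumerate(chunks):
            chunk["tick"] += 1                  # advance this region
            storage[i] = pickle.dumps(chunk)    # "save to disk"
            # ...any amount of real time may pass before the next load;
            # nothing inside the chunk can detect the pause.
            chunks[i] = pickle.loads(storage[i])

    print(chunks[0]["tick"])  # 1000 internal ticks, regardless of wall time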
I’m surely missing something. I can’t see why the very same argument doesn’t equally well prove the following conclusions:
- that there are flying saucers buzzing the Grand Canyon every hour on the hour;
- that each of us has alien microchips implanted in us directing our appetites;
- that we have been selectively bred by aliens to produce comic books;
- and on and on, but it’s becoming tedious.
I’m guessing that the diagnosis of all these arguments is the same: that logical/theoretical possibility isn’t the best guide to actuality.
Uh, yeah, you’re missing something.
The blog post argument is weak, but you’re not even following the argument.
“…logical/theoretical possibility isn’t the best guide to actuality”
Ha ha, granted. But neither is intuition (and aren’t you just appealing to intuition in your objection?).
Sorry, I didn’t connect all the dots in my comment. The general form of the argument is as follows. The universe is so mind-bogglingly Vast that it contains a huge variety of possible developments, each of which is Vanishingly small relative to the whole, but Vast relative to little buggers like us. One of these possible developments is X. Given the Vastness of X relative to us/our experience, it’s probable that X has actually been developed. So, X. Throw in the sex appeal of external world skepticism and the aphrodisiac of futurism and you have the original argument. Throw in the relevant goofball claims, and you have the alternatives I proposed.
I’m a computer science grad student, I work in IT, and I’m pretty familiar with Bostrom’s attempts to envision the future of computing. With that out of the way, let me start highlighting the technical problems with his supposition…
Why would they want to do that? It would be an awful lot of time and energy to try and recreate a totally artificial world based on their ideas of what the past was like.
So this future civilization would figure out AGI and use it to play their version of The Sims? Again, what benefit do they possibly gain? There are thousands of other uses for an AGI-like tool of the far future and the assumption itself is based on the idea that such a technology could ever be built. We don’t know if that’s possible and thus, this is an assumption we can’t actually prove in any way, shape or form. We might as well say that we’re all God’s playthings because we have just as much proof for that assumption as we do for this one.
That wouldn’t actually be an afterlife at all. It would just be a continuation of the life cycle of the data object. And it wouldn’t work as an argument for an afterlife, because all simulations eventually reach an end of some sort, and all objects associated with the simulation would be purged from memory or saved to a persistent state (such as a database or a file) and released. If you run the simulation again with your saved objects, we could make some interesting arguments. If you start over, the object itself is erased out of existence.
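In code terms, the life cycle I’m describing would look roughly like this (a hypothetical sketch, obviously not anyone’s actual simulation engine):

    # Hypothetical sketch of the simulated object's life cycle.
    import json

    population = [{"name": "alice", "memories": ["x", "y"]}]

    def end_simulation(population, persist):
        if persist:
            # Saved to a persistent state (a file here) -- the object's
            # data survives and could be loaded into a later run.
            with open("saved_state.json", "w") as f:
                json.dump(population, f)
        # Either way the live objects are released. Without the save,
        # the "person" is simply erased -- deallocation, not an afterlife.
        population.clear()

    end_simulation(population, persist=True)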
Now, let me go back to something you said for a moment…
The only sophistication in this argument is throwing around the word “computing” with about the same fervor as Deepak Chopra throws around the word “quantum.” If anything, even Ray Kurzweil’s utopianism is more believable than that. But not by all that much…
Why would they want to do that? Ever played “The Sims”?
I see a potential problem with that idea: people living in our current reality (the past, from the perspective of the future virtual-reality participants) who do not have any children would leave no descendants to become virtual-reality participants in our present reality as a simulation. If that idea is true (that we are currently in a futuristic ancestral virtual-reality experience), then everyone around us would have to have offspring, or someone as a dependent, who would come to think of them as an ancestor whose life they could live. Unless, of course, enough information is being recorded in the present day to recreate with exactness all of the interactions with people who left no descendants in the reality being simulated. I think that is unlikely, however.
The idea also suggests that the past reality being simulated actually did happen at some point, so being able to distinguish the actual events from the simulated ones would be nearly impossible.
Besides that, would people dying around us in this present reality represent people that wanted to leave the simulation (or had to leave for some outside reason – occurring in their future, present reality – for them)? If so, would the ability to leave the simulation not be an option until the life had been fully lived out (unless the plug was pulled by some authority not involved in the simulation)?
Also, how long are people in the future living, that they could enjoy more than one ancestor simulation? I guess the possibilities are that the VR experience doesn’t take as much time as their non-virtual existence, or that the future real world is so dismal that our VR represents a life much more meaningful than their present reality. Or they are all beings able to live very long lives thanks to advances in medical technology. When I think about all of that, the likelihood of us existing as participants in a VR simulation just kind of goes out the window. I also wonder: if that ability existed today (to go back and live as one of your ancestors in a full VR experience), would I take advantage of it? I don’t know; I guess it would depend on how long it took compared to my lifespan. If it took most of my life, what would be the point? Who would have that kind of time to devote to entertainment when we could be exploring the known universe?
I think it is more plausible that we are trans-dimensional beings who occupy life forms in this dimension either as an escape or for the experience, and that our present reality is the only way to do so. Of course, that opens up a whole world of possibilities as it relates to God and an afterlife…but it still does not make sense that we are here being tested. If anything, we are simply here enjoying the experience because we can, and maybe as a diversion from an otherwise boring dimensional existence.
I love this argument. Here are some possible questions.
First, computing power. James Patton would be the one to talk to about this, but the computing power needed to accurately simulate our reality would be immense; if we are talking about the atomic level, the computer would need to be roughly the size of the thing being simulated. If we say they only filled the Earth in, it would need to be a planet-sized computer. I am thinking Douglas Adams sized here. Descartes’s point that this dream of ours seems very vivid may be important, because the more vivid it becomes, the more processing power it requires. The more simulations we ourselves run, the more the system has to be able to handle. Further, for the probability part to factor in, they must be able to run many of these ancestor programs, requiring even more resources for no apparent reason. Honestly, I doubt our ability to simulate something so vast and detailed at any point in our history (although this limitation might itself be an argument in favor: these limits might represent the limits of computing power inside the simulation).
Second, I question the drive for an ancestor simulation. What would we learn from it? It seems pointless. All those resources for a bit of history? I think of Civilization, or of flight simulators, where the creator has an active role in the “game.” Have we seen anything that might represent a player? Or consider simulations meant to predict the future. Why create a huge computer that tells us nothing?
There was an article I skimmed not too long ago that said we have this need to place “easter eggs” in, and sign, our own work. So maybe there’s some message at the very end of pi, who knows? Also, no simulations are perfect; are there any glitches that would suggest it’s a simulation? Matrix déjà vu?
Nobody says the simulation has to be run real-time. Perhaps every second of our universe takes twenty years of their computing power, we’d never notice. We simulate the interactions of proton beam collisions suuuuuuuper slowly. They happen in a bazillionth of a second, yet we simulate them over the course of days and days.
Perhaps it’s me. That would at least help explain the flashing 1UP I see in the corner of my vision.
The point is that running the simulation slower reduces the possible total number of simulations, and thus weakens his probability argument. One simulation running 20x slower means we are still more likely to be in the “real” version of reality; you would need 20 simulations running 20x slower to make the probability 50/50.
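Spelled out with toy numbers (the 20x figure is just the example above):

    # Toy version of the speed/count trade-off. A fixed compute budget
    # that could run 20 simulations at full speed completes the
    # equivalent of just one if each is slowed 20x.
    slowdown = 20
    sims_running = 20
    effective_sims = sims_running / slowdown        # = 1.0
    p_simulated = effective_sims / (effective_sims + 1)
    print(p_simulated)                              # 0.5 -- the 50/50 case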
You don’t present Bostrom’s argument properly. He doesn’t “contend that it’s almost certain we live in a simulated world.” Bostrom contends that at least one of three propositions is true, and only one of those propositions is that we are in a computer simulation. To quote Bostrom’s actual conclusion:
His conclusion is tripartite; he does NOT conclude that we’re in a simulation. It’s really a disservice to his phenomenal paper to misrepresent his ideas.
Fair enough. I understand that his conclusion is tripartite, but Bostrom clearly favors proposition 3. And given proposition 3, a simulated reality is, in Bostrom’s words, “almost certain.”
I invented this philosophy when I was 7.
Why? Also, what measure of computing power is being used? Are we talking about the energy consumed, or about the maximum number of floating point operations per second? The fastest system built so far clocks in at 2.6 petaflops, or 2.6 x 10^15 floating point operations per second, and consumes about 4 MW of electricity. The fastest computers we could theoretically build using photonics would max out at around 10^21 flops (a thousand exaflops). How much power they would consume I wouldn’t even try to guess.
Is that enough to simulate the details of an entire universe? I’m not sure. But I doubt it would be big enough to be planet sized. Earth, which is actually kind of a puny planet, weighs around 6 x 10^24 kg whereas a high end supercomputer weighs something like a few tons (say ~2 x 10^3 kg) which means that to be planet sized, we’d need to bundle about 3 x 10^21 computers together into a single machine. And how long that would take to build is another thing I don’t even want to venture to guess. But running concurrently, they could have a processing speed of 3 x 10^42 flops on paper, though I’d venture to guess that a few hundred exaflops would be devoted just to running the OS.
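Here’s that back-of-the-envelope arithmetic in one place, using the same rough figures:

    # Rough figures from the paragraph above.
    earth_mass_kg     = 6e24   # mass of the Earth, roughly
    computer_mass_kg  = 2e3    # a few tons per supercomputer
    flops_per_machine = 1e21   # the theoretical photonic ceiling cited above

    machines    = earth_mass_kg / computer_mass_kg   # 3e21 machines
    total_flops = machines * flops_per_machine       # 3e42 flops, on paper
    print(f"{machines:.0e} machines -> {total_flops:.0e} flops")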
I see where you’re going with this, but here’s a quibble to consider. Having all those vivid dreams and environments in a simulation may be wasted on us, because we know that our brains very efficiently filter out most of the input they receive and keep only the key details. So if I were an architect of an app like that, I would use the maximum threshold of the brain as the peak of system activity, rather than go all out to make everything seem vivid beyond what the system’s target components could handle.
Not necessarily. The resources for actually creating the many simulation participants would be just a tiny percentage of the system’s overhead. Assuming that future computing is similar enough to today’s, we’re talking about creating new objects in a heap and a reference to them in a stack for each processor engaged in running the simulations for that particular cycle. Compared to what it’s already running, this isn’t even going to register for those CPUs.
- “Is that enough to simulate the details of an entire universe?”
Only the observed portions need be simulated, and then only at the level at which they can be comprehended. Modern day video games already use this technique to save on RAM and load times.
In fact, in our own universe, we know that observation of an event has a deterministic effect on the outcome. Drastically less computing power would be needed to create self-driven actors that act more or less randomly within a pre-programmed set of behaviors unless/until interacted with – which is what we’ve got, essentially.
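As a sketch, that observation-driven level-of-detail trick might look like this (function names are hypothetical; real game engines are far more elaborate):

    # Hypothetical sketch of observation-driven level of detail:
    # observed regions get the expensive fine-grained update, while
    # unobserved regions get a cheap averaged ("fuzzy") one.
    def fine_grained_update(region):
        region["detail"] = "every particle positioned"   # expensive stand-in

    def averaged_update(region):
        region["detail"] = "statistics only"             # cheap stand-in

    def tick(regions, observed_ids):
        for region in regions:
            if region["id"] in observed_ids:
                fine_grained_update(region)
            else:
                averaged_update(region)

    regions = [{"id": i} for i in range(5)]
    tick(regions, observed_ids={2})        # only region 2 is being watched
    print([r["detail"] for r in regions])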
As to motive, perhaps the “ancestor” program is not directly ancestral – in fact, the only real reason anyone would use such a program would be if it had something different (or, more likely, better) to offer. Circumstantial conditions could be one factor. What seems like a more likely factor to me is the state of being: when you die, you wake up as a slug-like methane monster who wanted the experience of a life with fingers and toes.
“Why? Also, what measure of computing power is being used?”
It doesn’t matter; current computers get nowhere near simulating a mind or complex molecular systems like a body, and as our processors reach their theoretical limits, the only option is parallel processing, which gets very bulky very quickly. As far as I understand the computer science, to really simulate the universe at the atomic level would require at least the same amount of physical matter, running at the fastest speeds theoretically possible. Anything less than equal size, and you must sacrifice either detail or speed. Several people have mentioned ways to avoid calculating everything, i.e., only compute what we see, and only at a level of detail we can theoretically see with our eyes. These are both great ideas and cut down on system requirements significantly, but when we are talking about planet-sized computers, I am fine saying, sure, you are 1,000 times more efficient. We are still talking about a huge system and all the problems that come along with it: power requirements, coolant systems, etc. Again, it is doubtful that we could reach this level, never mind several versions of it.
http://wiki.answers.com/Q/What_is_the_maximum_theoretical_processor_speed_for_a_computer
“I would want to use the maximum threshold of the brain as the peak of system activity, not just go all out to make everything seem really vivid beyond what the system’s target components would handle.” Fair enough, even then, the amount of computing power would still require immense amounts of resources.
Sure, our computers now are nothing compared to this future computer, but my point is that as our technology progresses and performs more and more computation, we will never be able to reach a point where we can run an ancestor simulation inside our own ancestor simulation. The simulators would need to be able to run not only their simulation, but also all of our simulations, and so on; so they would either have to pull the plug before we reached this level, or program internal limits on what we can run.
The point of the last quote was that if one simulation takes that many resources, running it at half speed, as the commenter below states, reduces the probability by half. For his argument to work under (3), there have to be a ton of ancestor simulations; each simulation increases our chances of being in one. And if one simulation requires that many resources, we either build multiple supercomputers or run them one after another; either way, this significantly reduces the possible number of simulations.
To Libertang,
I am not sure that only the observed portions need to be simulated. Yes, only the observed portions would need data sent to us as users, but a lot still goes on even when we aren’t looking. Quantum mechanics may hold that the unobserved follows different rules, but that doesn’t mean the system can stop computing. For example, I close the closet door; while it’s closed I don’t need the sensory data, but the system still has to remember where all my clothes were and in what order, so that when I look back in, they are the same. Similarly, if I put a clock in the closet and shut the door, the system still has to calculate the movement of the second hand, even though I don’t see it, so that when I reopen the door it shows the correct time. So even taking quantum mechanics into consideration, the system still needs to compute the unobserved areas. Further, my understanding is that the unobserved exists as possibilities, so a system would need to do even more work for the unobserved, tabulating all the unobserved possibilities.
slug-like methane monster
Yeah, this is along the lines of what I was thinking; an “ancestor” simulation seems silly. I wonder why it has to be this way? The argument could work just as well with any complex sentient being. Maybe just because it’s easier for us to swallow the argument? Oh, the programmers are just like us, but way in the future; that is nice :).
@ libertang,
Actually, no, it doesn’t. Invasive measurement does, but the idea that merely observing an event has a deterministic effect on the outcome is a misunderstanding of quantum physics. The system behaves as it will behave whether you’re watching it or not. However, if the tools with which you measure its behavior are invasive, they can disrupt the system and change its behavior. The same applies to psychology: people will behave as they normally do if they don’t know you’re watching them, and if they know you are, they’ll change their behavior to what they want to present.
Actually we already have systems like this. They’re employed by intelligence agencies to try and model social situations in other nations. They’re not really good at it, but the idea is there. At any rate, we don’t even have to create an agent until there’s an interaction, just reserve a pointer to it. This is where I was going with the whole notion of simulating things only as vividly as an agent’s brain will be able to interpret because it’s a lost effort to do otherwise. We could even argue that all we’d have to actively simulate is the distance between the Earth and the Moon and even then, not too much would happen beyond geosynchronous orbit so we don’t need to put a lot of resources into it. As for stars and galaxies, we can portray how they behave already so they’re not going to be much of a computing challenge to generate around our simulated world and its moon.
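A minimal sketch of that “reserve a pointer, instantiate on interaction” idea (all names hypothetical):

    # Hypothetical sketch: agents exist only as identifiers until
    # something actually interacts with them.
    class World:
        def __init__(self, population_ids):
            self.known_ids = set(population_ids)   # "pointers" only
            self.live_agents = {}                  # instantiated on demand

        def interact(self, agent_id):
            if agent_id not in self.live_agents and agent_id in self.known_ids:
                self.live_agents[agent_id] = {"id": agent_id, "state": "new"}
            return self.live_agents.get(agent_id)

    world = World(range(10**6))     # a million potential agents
    world.interact(42)              # only now does agent 42 exist in memory
    print(len(world.live_agents))   # 1, not a million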
@James,
Why is that? You’re creating virtual objects, not physical ones, so you hardly need the 10^55 kg of matter in the visible universe to do it. As for the fastest theoretical speeds, we could approach 99.99% of the speed of light in some very advanced computing applications using photons, but the electrons we use now are already very close to those speeds. It’s just that silicon can’t handle being pushed much farther than it already is; it will start to melt.
I came across this argument a few years ago and found it pretty compelling. I’ve also read a bit about how quantum mechanics point (somewhat) towards it being possible. Here’s something I found that talks about it, if anyone is interested. http://www.bottomlayer.com/bottom/argument/Argument4.html
Those differ in that they are not statistically likely. It is very likely that computing power will explode, and very likely that people will be interested in simulating human experience.
A lot of the arguments against this theory seem to rest on the computational limitations of the computers that might be needed to run the simulation. What I am thinking is: how can we assume / know / understand the computational power of the “higher being” that is running the simulations? What if the technology they are using is incomprehensible to us? What if the universe we know is nowhere near the “actual” universe?
Similarly, why should the simulation be that of their ancestors? What if it’s all just happening within a supercomputer, with the actual user not at all aware of us? As Isaac Asimov said, what if we are “just random pieces of code”? Or maybe it’s a simulation game, and maybe it ran for only a minute of their time, but it is an eternity for us?
Again, I am not the expert here, but the assumptions in the model are that it runs in “our” universe, or a similar universe upon which ours is modeled, and there is always going to be a theoretical limit to processing power. Each atom can only do a certain amount of computation, and there are other physical limits, such as heat and the speed of light, which bound what is theoretically possible, even for an “other type” of being.
Well, first you would have to believe in statistics. Statistics and statistical phenomena can be useful without statistics telling us anything about the underlying basis of reality. The argument assumes that statistics is a valuable tool when arguing about metaphysics.
Hi Jon.
You may be interested to know that Richard Sherlock, about whom you were recently blogging, once identified as a Mormon Transhumanist and perhaps still identifies as a Transhumanist. Below are some links to material, referenced previously, that I think will interest you.
New God Argument
https://docs.google.com/View?id=dfzwxpjb_329f2pfxmcz
Trust in Posthumanity and the New God Argument
https://docs.google.com/View?id=dfzwxpjb_350f4mpz6f3
Theological Implications of the New God Argument
https://docs.google.com/View?id=dfzwxpjb_348dvtrkjdg
Mormon Transhumanist Association
http://transfigurism.org
Jon, given your misrepresentation of the original work, as pointed out by Bri in his post of 2011.03.09 20:05, you should revise your article. No argument is made favouring (3) over (1) or (2). Therefore your article is misleading.
I do think Bostrom argues (albeit somewhat hesitantly) for the third proposition, but I did edit my post to prevent further confusion. The original post, though, was clear that Bostrom poses three propositions, and that no single one is necessarily true.
Now that’s what I’d call a DEEP THOUGHT!
My biggest objection to this is that I think there’s a clear and obvious 4:
4. No society has yet reached the level of technology required to simulate a reality.
I think this has to be the default position. With no evidence for or against the claim that other, more advanced societies exist, I don’t see a reason to believe they do, so the statistical argument holds no water.
Of course, if we are in a simulation, we wouldn’t have evidence of the numerous societies on the “higher” level of existence. But without that evidence, I don’t see how you can avoid defaulting to the #4 I proposed above. The other options do make for interesting “what if” scenarios, even if they don’t have any kind of truth value.
I must admit that if we did enter this world as a sort of super-immersive video game, then half the fun right after we come out will be discussing the speculation about afterlives and such that took place within the game.
Somebody forgot to connect more than just the dots… There’s an old rhetorically pertinent joke that goes something like: “…Engage cerebral process before putting mouth in gear…!” Or maybe it should be more precisely stated, “…Shut your mouth before you fall in and drown in your own cerebral vacuum…!”
What has this to do with any of the aforementioned schlock…? Probably more than anything else discussed above…