Simulation Inferences

How small must the computer simulating the universe be?
philosophy, transhumanism, computer-science
2009-05-29–2012-04-15 finished certainty: unlikely importance: 4


Nick Bostrom’s Simulation Argument (SA) argues that either: 1. civilizations/entities with tremendous computing power do not exist; 2. they exist, but choose not to simulate primitive civilizations like us (whatever else they might do); or 3. we are likely in such a simulation. That is: they don’t exist, they exist but don’t use it, or they exist and use it.1 The SA provides a framework for revisiting the old SF skeptical chestnut “what if, like, we’re just in a computer simulation, man?” with potentially better-grounded reasons for considering it reasonably possible—humanity certainly does run as many small crude simulations as it can about every subject under the sun and is particularly interested in modeling the past, and there are no good long-term reasons for thinking that (should humanity not go extinct or civilization collapse) total computing power will for all time be sharply bounded below the necessary amount for realistic simulations of whole worlds or people.

The problem

What’s nice about the SA is that it presents us with a trilemma whose branches are all particularly noxious.

Living in a simulation

We don’t want to accept #3—believing we are in a simulation is repugnant and means we must revise almost every aspect of our worldview2. It also initially seems flabbergasting: when one considers the computing power and intellectual resources necessary to create such a detailed simulation, one boggles.

Disinterested gods

But if we accept #2, aren’t we saying that despite living in a civilization that devotes a large fraction of its efforts to entertainment, and a large fraction of its entertainment to video games—and what are complex video games but simulations?—we expect most future civilizations descended from ours, and most future civilizations in general, to simply not bother to simulate the past even once or twice? This seems a mite implausible. (And even if entertainment ceases to focus on simulations of various kinds3, are we to think that even historians wouldn’t dearly love to re-run the past?)

Infeasible simulations

And if we accept #1, we’re saying that no one will ever attain the heights of power necessary to simulate worlds—that the necessary computing power will not come into existence.

Physical limits to simulation

What does this presuppose? Well, maybe there are physical limits that bar such simulations. It may be possible, but just not feasible4. Unfortunately, physical limits do permit world-simulating; Seth Lloyd, in “Ultimate Physical Limits to Computation”, concludes that:

The “ultimate laptop” is a computer with a mass of one kilogram and a volume of one liter, operating at the fundamental limits of speed and memory capacity fixed by physics. The ultimate laptop performs 5.4258 × 10^50 logical operations per second on ≈10^31 bits.

The cost of simulation

Nick Bostrom calculates as a rough approximation that simulating a world could be done for ‘≈10^33–10^36 operations’5. Even if we assume this is off6 by, say, 5 orders of magnitude (10^38–10^41), the ultimate laptop could still run millions or billions of such civilization-simulations every second.
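
To make that arithmetic concrete, a back-of-the-envelope sketch (the 5-order-of-magnitude padding is this essay’s own fudge factor, not Bostrom’s):

    # Back-of-the-envelope: how many Bostrom-style civilization-simulations
    # could Lloyd's "ultimate laptop" run per second?
    ULTIMATE_LAPTOP_OPS = 5.4258e50              # ops/second for 1 kg in 1 liter (Lloyd)
    estimates = {
        "Bostrom low":  1e33, "Bostrom high": 1e36,   # Bostrom's '~10^33-10^36 operations'
        "padded low":   1e38, "padded high":  1e41,   # padded by 5 orders of magnitude
    }
    for label, ops_per_simulation in estimates.items():
        print(f"{label:12s}: {ULTIMATE_LAPTOP_OPS / ops_per_simulation:.1e} simulations/second")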

It seems unrealistic to expect humanity in any incarnation to reach the exact limits of computation. So suppose instead that humanity had spread processors amounting to the volume of the Earth throughout the Solar system; how fast would each liter of them need to be for the whole to equal our ultimate laptop simulating millions or billions of civilizations? Well, the volume of the Earth is 1.08321 × 10^24 liters; so ≈5 × 10^26 operations per second per liter.

Prospects for development

10^26 ops per second per liter isn’t too bad, actually. That’s 100 yottaflops (a yottaflop being 10^24 flops). The fastest supercomputer in 2009, IBM’s Roadrunner, clocks in at ~1.4 petaflops. So our hypothetical low-powered laptop is equal to roughly 350 billion Roadrunners.
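
Both figures can be rechecked in a couple of lines; a quick sketch, assuming Lloyd’s ≈5.4 × 10^50 ops/second limit and Roadrunner’s ~1.4 petaflops:

    # Spreading the ultimate laptop's throughput over an Earth-volume of processors:
    ULTIMATE_LAPTOP_OPS = 5.4258e50      # ops/second (Lloyd)
    EARTH_VOLUME_LITERS = 1.08321e24     # volume of the Earth in liters
    ROADRUNNER_FLOPS    = 1.4e15         # IBM Roadrunner, ~1.4 petaflops (2009)

    ops_per_liter = ULTIMATE_LAPTOP_OPS / EARTH_VOLUME_LITERS
    print(f"required: {ops_per_liter:.3e} ops/second per liter")                     # ~5.0e26
    print(f"that is ~{ops_per_liter / ROADRUNNER_FLOPS:.2e} Roadrunners per liter")  # ~3.6e11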

OK, but how many turns of Moore’s Law would that represent? Quite a few7: around 38 doublings are necessary, or, at the canonical 18 months per doubling, roughly 57 years. Remarkable! If Moore’s Law keeps holding (a dubious assumption), such simulations could begin within my lifetime.
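
Rather than multiplying out the doublings by hand (as in the footnote), one can take a logarithm; a quick sketch:

    import math

    # How many Moore's-Law doublings from Roadrunner (~1.4 petaflops) to the
    # ~5e26 ops/second/liter required above?
    doublings = math.log2(5.009e26 / 1.4e15)
    print(f"doublings needed: {doublings:.1f}")                       # ~38.4
    print(f"years at 18 months per doubling: {doublings * 1.5:.0f}")  # ~58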

We can probably expect Moore’s Law to hang on for a while. There are powerful economic inducements to keep developing computing technology—processor cycles are never cheap enough. And there are several plausible paths forward.

But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal.8

Even if we imagine Moore’s Law ending forever within a few turns, we can’t exclude humanity forever remaining below the computing power ceiling. Perhaps we will specialize in extremely low-power & durable processors, once we can no longer create faster ones, and manufacture enough computing power the slow way over millennia. If we want to honestly affirm #1, we must find some way to exclude humanity and other civilizations from being powerful enough forever.

Destruction

The easiest way to ensure humanity will never have enough computing power is for it to die. (A cheerful thought! Wouldn’t one rather exist in a simulation than not exist at all?)

Perhaps advanced civilizations reliably destroy themselves. (This is convenient as it also explains the Fermi paradox.) It could be rogue AIs, nuclear war, nanoplagues, or your favorite existential risk.

A failure to develop could well be as fatal as any direct attack. A failure to develop interstellar travel leaves humanity vulnerable to a solar-system-wide catastrophe. One cannot assume humanity will survive indefinitely at its present level of development, nor even at higher levels.

But undesirability doesn’t mean this is false. After all, we can appeal to various empirical arguments for #2 and #3, and so the burden of proof is on those who believe humanity will forever be inadequate to the task of simulating worlds, or will abandon its eternal love of games/simulated-worlds.

SA is invalid

One might object to the SA that the triple disjunction is valid, but of no concern: it is unwarranted to suggest that we may be simulated. An emulation or simulation would presumably be of such great accuracy that it’d be meaningless for inhabitants to think about it: there are no observations they could make one way or the other. It is meaningless to them—more theology than philosophy.

We may not go to the extreme of discarding all non-falsifiable theories, but we should at least be chary of theories that disclaim falsifiability in most circumstances9.

Investigating implications of SA

The simulation hypothesis is, however, susceptible to some form of investigation: we can investigate the nature of our own universe and make deductions about any enclosing/simulating universe10.

More concretely, we can put lower bounds on the computing power available in the lower11 universe, and, incidentally, on its size.

If a simulated universe requires n units of space-time, then its simulator must be made of at least n + 1 units; it’s paradoxical if a simulated universe could be more information-rich than the simulator, inasmuch as the simulator includes the simulated (how could something be larger than itself?). So if we observe our universe to require n units, then by the foregoing argument the simulator must be at least n + 1 units.

Limits of investigation

This is a weak method of investigation, but how weak?

Very.

Suppose we assume that the entire universe is being simulated, particle by particle. This is surely the worst-case scenario, from the simulator’s point of view.

There are a number of estimates, but we’ll say that there are 10^86 particles in the observable universe. It’s not known how much information it takes to describe a particle in a reasonably accurate simulation—a byte? A kilobyte? But let’s say the average particle can be described by a megabyte—then the simulating universe needs ~10^92 spare bytes. (Roughly 10^62 ultimate laptops’ worth of data storage.)
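
A sketch of that worst-case storage arithmetic, assuming the (arbitrary) megabyte-per-particle figure above:

    # Worst-case storage for a particle-by-particle simulation of the observable universe:
    PARTICLES          = 1e86    # rough count of particles in the observable universe
    BYTES_PER_PARTICLE = 1e6     # the megabyte-per-particle assumption
    LAPTOP_BITS        = 1e31    # memory of Lloyd's ultimate laptop

    total_bytes = PARTICLES * BYTES_PER_PARTICLE
    print(f"storage needed: {total_bytes:.0e} bytes")                           # 1e92
    print(f"ultimate laptops of storage: {total_bytes * 8 / LAPTOP_BITS:.0e}")  # ~8e61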

But we run into problems. All that’s really needed for a simulation is things within humanity’s observational reach. And the simulators could probably ‘cheat’ even more with techniques like caching repeated computations and lazily computing only what is observed.

It is not necessary for the simulating thing to be larger, particle-wise, than the simulated. The ‘greater size’ principle is information-theoretic.

If one wanted to simulate an Earth, the brute-force approach would be to take an Earth’s worth of atoms, and put the particles on a one-to-one basis. But programmers use ‘brute-force’ as a pejorative term connoting an algorithm that is dumb, slow, and far from optimal.

Better algorithms are almost certain to exist. For example, Conway’s Game of Life might initially seem to require n^2 space-time as the plane fills up. But if one caches the many repeated patterns, as in the Hashlife algorithm, logarithmic and greater speedups are possible.12 It need not be as inefficient as a real universe, which mindlessly recalculates again and again. It is not clear that a simulation need be isomorphic in every step to a ‘real’ universe, if it gets the right results. And if one does not demand a true isomorphism between simulation and simulated, but allows corners to be cut, large constant-factor optimizations are available.13
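
A minimal sketch of the caching idea (only the flavor of Hashlife, not its real quadtree machinery): memoize the evolution of small blocks, so that a repeating pattern costs almost nothing no matter how many generations are simulated:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def step(block):
        """Advance a small Game of Life block (tuple of tuples of 0/1) one
        generation, treating everything outside the block as dead."""
        h, w = len(block), len(block[0])
        def neighbors(r, c):
            return sum(block[r + dr][c + dc]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w)
        return tuple(tuple(1 if (n := neighbors(r, c)) == 3 or (block[r][c] and n == 2) else 0
                           for c in range(w))
                     for r in range(h))

    blinker = ((0, 0, 0), (1, 1, 1), (0, 0, 0))
    for _ in range(1000):       # the blinker oscillates with period 2...
        blinker = step(blinker)
    print(step.cache_info())    # ...so only 2 distinct blocks are ever actually computed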

And the situation gets worse depending on how many liberties we take. What if, instead of calculating everything in the universe, the simulation is cut down by orders of magnitude to just everything in our lightcone? Or in our solar system? Or just the Earth itself? Or the area immediately around oneself? Or just one’s brain? Or just one’s mind as a suitable abstraction? (The brain is not very big, information-wise.14) Remember the estimate for a single mind: something like 10^17 operations a second. Reusing the ops/second figure from the ultimate laptop, we see that our mind could be handled in perhaps 10^-34 of a liter.
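
The division, for the record (assuming the 10^17 ops/second mind and Lloyd’s per-liter limit):

    # How much of an ultimate laptop does a single mind need?
    MIND_OPS   = 1e17        # upper-end estimate for a human brain, ops/second
    LAPTOP_OPS = 5.4258e50   # Lloyd's limit for one liter, ops/second
    print(f"fraction of a liter: {MIND_OPS / LAPTOP_OPS:.1e}")   # ~1.8e-34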

This is a very poor lower bound! All this effort, and the most we can conclude about a simulating universe is that it must be at least moderately bigger than a cube roughly 5 × 10^-13 meters on a side.

Investigating

We could imagine further techniques: perhaps we could send off von Neumann probes to the far corners of the universe, in a bid to deliberately increase resource consumption. (This is only useful if the simulators are ‘cheating’ in some of the ways listed above. If they are simulating every single particle, making some particles move around isn’t going to do very much.)

Or we could run simulations of our own. It would be difficult for simulators to program their systems to see through all the layers of abstraction and optimize the simulation; to do so in general would seem to require a violation of Rice’s theorem (a generalization of the halting problem). It is well known that while any Turing machine can be run on a Universal Turing machine, the performance penalty can range from the minor to the horrific.
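
To get a feel for the penalty of even a single layer, one can time a trivial computation run natively and then again on a toy bytecode dispatch loop (a crude, hypothetical benchmark, not a real emulator):

    import time

    # Sum the integers 0..N-1 directly, then again on a toy 'bytecode' dispatch loop.
    N = 200_000

    def native_sum(n):
        total = 0
        for i in range(n):
            total += i
        return total

    def interpreted_sum(n):
        program = [("add_i_to_acc",), ("inc_i",), ("jump_if_i_lt_n", 0), ("halt",)]
        acc, i, pc = 0, 0, 0
        while True:
            op = program[pc]
            if op[0] == "add_i_to_acc":
                acc += i; pc += 1
            elif op[0] == "inc_i":
                i += 1; pc += 1
            elif op[0] == "jump_if_i_lt_n":
                pc = op[1] if i < n else pc + 1
            else:                      # "halt"
                return acc

    for fn in (native_sum, interpreted_sum):
        start = time.perf_counter()
        assert fn(N) == N * (N - 1) // 2
        print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")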

The more virtual machines and interpreters there are between a program and its fundamental substrate, the more difficult it is to understand the running code—it becomes ever more opaque, indirect, and bulky.

And there could be dozens of layers. A simulated processor is being run; this receives guest instructions which it must translate into machine code; the machine code is being sent via the offices of an operating system, which happens to be hosting another (virtualized) operating system (perhaps the user runs Linux but needs to test a program on Mac OS X). This hosted operating system is largely idle save for another interpreter for, say, some high-level language. The program loaded into that interpreter happens to be itself an interpreter… If any possible simulation is excluded, we have arguably reached at least 5 or 6 levels of indirection already (viewing the OSs as just single layers), and without resorting to any obtuse uses of indirection15.

Even without resort to layers, it is possible for us to waste indefinite amounts of computing power, power that must be supplied by any simulator. We could brute-force open mathematical questions, or we could simply execute every possible program. It would be difficult for the simulator to ‘cheat’ on that—how would they know what every possible program does? (Or if they can know something like that, they are so different from us as to render speculation quisquillian.) It may sound impossible to run every program, because we know many programs are infinite loops; but it is, in fact, easy to implement by dovetailing the programs’ executions.
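
A sketch of dovetailing, with Python generators standing in for arbitrary programs: in round k, the first k programs each get one more step, so every program receives unbounded time and no infinite loop ever blocks the rest:

    from itertools import count, islice

    def dovetail(programs):
        """Fairly interleave an infinite stream of generators: in round k, the
        first k of them each get one more step; halted ones are simply skipped."""
        running = []
        for k in count(1):
            running.append(next(programs))       # admit the k-th program
            for i, p in enumerate(running):
                try:
                    yield i, next(p)             # one step of program i
                except StopIteration:
                    pass                         # program i halted; keep going

    def make_program(n):
        # Toy stand-in for 'program number n': counts forever in steps of n + 1.
        return (m * (n + 1) for m in count())

    for prog_id, value in islice(dovetail(make_program(n) for n in count()), 15):
        print(prog_id, value)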

Risks

But one wonders, aren’t we running a risk here by sending off von Neumann probes and using vast computing resources? We risk angering the simulators, as we callously use up their resources. Or might there not be a grand OOM killer? With every allocation, we come a little closer to the unknown limits!

There’s no guarantee that we’re in a simulation in the first place. We never ruled out the possibility that most civilizations are destroyed. It would be as terrible to step softly for fear of the Divine Programmer as it would be for fear of the Divine. Secondly, the higher we push the limits without disaster (and we will push the limits to enable economic growth alone), the more confident we should be that we aren’t in a simulation. (The higher the limit, the larger the universe; and small universes are more parsimonious than large ones.) And who knows? Perhaps we live in a poor simulation, which will let us probe it more directly.


  1. This is a complete disjunction; we can see this by considering what’s left if we combine these 2 binary predicates: They don’t exist, but they use it? They don’t exist, and don’t use it?↩︎

  2. Although the revision might not be too awful; Robin Hanson tries such a revision in “How To Live In A Simulation”, and comes up with pretty benign suggestions, some of which are a good idea on their own merits:

    …you should care less about others, live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you.

    ↩︎
  3. Pretty much every video-game is a simulation of something—war, racing, travel, etc. They vary in realism and fictionalized settings, but fundamentally, they are still simulations. In the case of The Sims, a simulation of everyday life even.↩︎

  4. An example of something possible but not feasible might be classically factoring a composite number with a quadrillion digits; we know there is a factorization of it, and exactly how to go about getting it, but that doesn’t mean we could do it in less than the lifetime of the universe.↩︎

  5. Footnote 10 of Bostrom’s “Are you living in a computer simulation?”↩︎

  6. A great deal depends on how expensive simulating a brain is. Ralph Merkle puts it at 10^13–10^16 ops/second. Robert J. Bradbury’s “Matrioshka Brains” essay lists 4 estimates for an individual brain ranging anywhere from 10^13 ops/second to 10^17.↩︎

  7. Try evaluating this in your friendly REPL: (1.4*10^15)*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2*2 >= (5.009001*10^26). Manually multiplying conveys the magnitude of how many Moore-doublings are necessary.↩︎

  8. Nick Bostrom, ibid.↩︎

  9. Simulations are presumably observable in a few special cases—besides fictional examples, the simulators could violate various physical laws to demonstrate their existence. It’s hard to see why this would be very common, though—The Matrix seems to suggest that the quasi-godlike beings who could build & run a simulation are somehow fallible, and such tampering would seem to destroy the value of simulations run for entertainment or research.↩︎

  10. An important assumption we must make is that the simulating universe is much like ours: with similar laws of physics, and most importantly, is computable. If the other universe runs on radically different physics, that could make nonsense of any conclusions. Fortunately, assuming that the other is like ours is the simpler assumption, and is even reasonable (if universes are too alien to each other, then why would one create the other? And where could the detailed description come from?).↩︎

  11. I use ‘lower’ in the sense of ‘more fundamental’.↩︎

  12. Examples of caching can be drawn from existing emulators like QEMU, which often translate and cache repeating patterns to execute fewer ‘native’ instructions. From “Fabrice Bellard”, by Andy Gocke and Nick Pizzolato (ACM 2009):

    While a substantial accomplishment on its own, QEMU is not simply a processor emulator, it uses dynamic translation to improve performance. As explained in the Usenix paper [Bellard 2005], QEMU uses a novel approach to ISA translation. Instead of translating one instruction at a time, QEMU gathers many instructions together in a process called “chunking,” and then translates this chunk as a whole. QEMU then remembers these chunks. Many times there are certain chunks which will occur many times in the code of a program. Instead of taking the time to translate them all separately, QEMU stores the chunks and their native translation, next time simply executing the native translation instead of doing translation a second time. Thus, Bellard invented the first processor emulator that could achieve near native performance in certain instances.

    ↩︎
  13. Real-world emulations offer interesting examples of the trade-offs. “Accuracy takes power: one man’s 3GHz quest to build a perfect SNES emulator” covers the variation in demands for emulating old consoles: an early emulator managed to emulate the NES with 25MHz x86 processors but was somewhat inaccurate and required modifications to the games; a much more accurate emulator uses up 1600MHz. For the SNES, the spectrum is from 300MHz to 3,000MHz. Finally, an emulation down to the transistor level of the 1972 Pong runs at ≤10 frames per second on a 3,000MHz x86 processor; to run in real-time on a 50Hz TV would require (to naively extrapolate) a 15,000MHz x86 processor.↩︎

  14. Cryptographer Ralph Merkle, in “The Molecular Repair of the Brain”, finds an upper bound of 100 bits per atom; “Dancoff and Quastler[128], using a somewhat better encoding scheme, say that 24.5 bits per atoms should suffice”; a willingness to work on the molecule level reduces this to 150 bits per molecule, for molecules made of a few to thousands of atoms; a further refinement cuts the 150 bits down to 80 bits; Merkle comments “50 bits or less is quite achievable”.

    Expanding out to the whole brain, Merkle quotes Cherniak (“The Bounded Brain: Toward Quantitative Neuroanatomy”):

    On the usual assumption that the synapse is the necessary substrate of memory, supposing very roughly that (given anatomical and physiological “noise”) each synapse encodes about one binary bit of information, and a thousand synapses per neuron are available for this task: 10^10 cortical neurons x 10^3 synapses = 10^13 bits of arbitrary information (1.25 terabytes) that could be stored in the cerebral cortex.

    (Anders Sandberg, in comparison, suggests much higher upper bounds as he examines information output.) An even more extreme lower bound than a terabyte is the one derived by Thomas Landauer (“How Much Do People Remember? Some Estimates of the Quantity of Learned Information in Long-term Memory”, Cognitive Science 10, 477–493, 1986) based on memory tests, of 2 bits per second or mere hundreds of megabytes over a lifetime! (See Merkle’s “How Many Bytes in Human Memory?”)
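
    For comparison, a quick sketch converting these estimates into everyday units (the ~70-year lifetime in the Landauer line is this sketch’s own assumption):

        # Converting the memory estimates into comparable units:
        cherniak_bits = 1e10 * 1e3 * 1                 # 10^10 neurons x 10^3 synapses x ~1 bit
        print(f"synaptic estimate: {cherniak_bits / 8 / 1e12:.2f} TB")    # ~1.25 terabytes

        landauer_bits = 2 * 60 * 60 * 24 * 365 * 70    # ~2 bits/second over an assumed ~70 years
        print(f"Landauer estimate: {landauer_bits / 8 / 1e6:.0f} MB")     # ~550 megabytes

    ↩︎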

  15. For example, perhaps one could instead be running Firefox, which interprets JavaScript (in which one can run Linux), and one has visited a web page with an applet running on the JVM (an interpreter of Java); and this applet is running Pearcolator—which is itself a processor emulator. Of course, just a raw processor isn’t very useful; perhaps one could run one of the operating systems written in Java on it, and then doesn’t one want to browse the Internet a little with good old Firefox…?↩︎