Simulation Inferences

How small could the computer simulating the universe be?
philosophy, transhumanism, computer-science
2009-05-29–2012-04-15 finished certainty: unlikely importance: 4


Nick Bostrom’s Simulation Argument (SA) argues that at least one of the following is true: 1. civilizations/entities with tremendous computing power do not exist; 2. they exist, but choose not to simulate primitive civilizations like us (whatever else they might do); 3. or we are likely in such a simulation. That is: they don’t exist, they exist but don’t use it, or they exist and use it.1 The SA provides a framework for revisiting the old SF skeptical chestnut “what if, like, we’re just in a computer simulation, man?” with potentially better-grounded reasons for considering it reasonably possible—humanity certainly does run as many small crude simulations as it can about every subject under the sun and is particularly interested in modeling the past, and there are no good long-term reasons for thinking that (should humanity not go extinct or civilization collapse) total computing power will for all time be sharply bounded below the necessary amount for realistic simulations of whole worlds or people.

The problem

What’s nice about the SA is that it presents us with a trilemma whose branches are all particularly noxious.

Living in a simulation

We don’t want to accept #3—believing we are in a simulation is repugnant and means we must revise almost every aspect of our worldview2. It also initially seems flabbergasting: when one considers the computing power and intellectual resources necessary to create such a detailed simulation, one boggles.

Disinterested gods

But if we accept #2, aren’t we saying that despite living in a civilization that devotes a large fraction of its efforts to entertainment, and a large fraction of its entertainment to video games—and what are complex video games but simulations?—we expect most future civilizations descended from ours, and most future civilizations in general, to simply not bother to simulate the past even once or twice? This seems a mite implausible. (And even if entertainment ceases to focus on simulations of various kinds3, are we to think that even historians wouldn’t dearly love to re-run the past?)

Infeasible simulations

And if we accept #1, we’re saying that no one will ever attain the heights of power necessary to simulate worlds—that the necessary computing power will not come into existence.

Physical limits to simulation

What does this presuppose? Well, maybe there are physical limits that bar such simulations. It may be possible, but just not feasible4. Unfortunately, the physical limits permit world-simulating. Seth Lloyd, in “Ultimate Physical Limits to Computation”, concludes that:

The “ultimate laptop” is a computer with a mass of one kilogram and a volume of one liter, operating at the fundamental limits of speed and memory capacity fixed by physics. The ultimate laptop performs ≈10^51 logical operations per second on ≈10^31 bits.

The cost of simulation

Nick Bostrom calculates as a rough approximation that simulating a world could be done for ‘≈10^33–10^36 operations’5. Even if we assume this is off6 by, say, 5 orders of magnitude (10^38–10^41), the ultimate laptop could run millions or billions of civilizations. Every second.
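As a sanity check, the arithmetic can be redone in a few lines of Python, taking ≈10^51 ops/second for Lloyd’s ultimate laptop and Bostrom’s high-end per-world cost, plus the padded variant:

```python
ULTIMATE_LAPTOP_OPS = 1e51  # Lloyd's ultimate laptop, ops/second
BOSTROM_WORLD = 1e36        # Bostrom's high-end cost to simulate a world, ops
PADDED_WORLD = 1e41         # the same, padded by 5 orders of magnitude

print(ULTIMATE_LAPTOP_OPS / BOSTROM_WORLD)  # ≈1e15 worlds per second
print(ULTIMATE_LAPTOP_OPS / PADDED_WORLD)   # ≈1e10: still billions per second
```

Even under the pessimistic padding, the laptop runs tens of billions of world-simulations every second.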

It seems unrealistic to expect humanity in any incarnation to reach the exact limits of computation. So suppose instead that humanity had spread throughout the Solar System processors amounting to the volume of the Earth. How fast would each liter have to be, for the ensemble to equal our ultimate laptop simulating millions or billions of civilizations? Well, the volume of the Earth is 1.08321×10^24 liters; so ≈5×10^26 operations per second per liter.
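A quick check, using 5.4258×10^50 ops/second for the ultimate laptop (the exact value behind Lloyd’s ≈10^51):

```python
EARTH_VOLUME_LITERS = 1.08321e24  # volume of the Earth, in liters
ULTIMATE_LAPTOP_OPS = 5.4258e50   # Lloyd's limit for 1 kg / 1 liter, ops/second

# dividing the laptop's throughput evenly over an Earth's volume of processors
ops_per_liter = ULTIMATE_LAPTOP_OPS / EARTH_VOLUME_LITERS
print(ops_per_liter)  # ≈5.009e26 ops/second needed per liter
```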

Prospects for development

10^26 ops/second per liter isn’t too bad, actually. That’s 100 yottaflops (a yottaflop being 10^24 FLOPS). The fastest supercomputer in 2009, IBM’s Roadrunner, clocks in at 1.4 petaflops. So our hypothetical low-powered laptop is equal to some 350 billion Roadrunners (5×10^26 / 1.4×10^15 ≈ 3.6×10^11).

OK, but how many turns of Moore’s Law would that represent? Quite a few7: 39 doublings are necessary. Or, at the canonical 18 months per doubling, about 58 years. Remarkable! If Moore’s Law keeps holding (a dubious assumption), such simulations could begin within my lifetime.
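The doubling count can be verified directly (1.4×10^15 FLOPS being Roadrunner, and 5.009×10^26 the per-liter target):

```python
import math

roadrunner_flops = 1.4e15  # IBM Roadrunner, 2009
target_ops = 5.009001e26   # required ops/second per liter

# how many times must Roadrunner double to reach the target?
doublings = math.log2(target_ops / roadrunner_flops)
print(doublings)                   # ≈38.4, so 39 full doublings are needed
print(math.ceil(doublings) * 1.5)  # years at 18 months per doubling: 58.5
```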

We can probably expect Moore’s Law to hang on for a while. There are powerful economic inducements to keep developing computing technology—processor cycles are never cheap enough. And there are several plausible paths forward.

But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal.8

Even if we imagine Moore’s Law ending forever within a few turns, we can’t exclude humanity forever remaining below the computing-power ceiling. Perhaps we will specialize in extremely low-power & durable processors, once we can no longer create faster ones, and accumulate enough computing power the slow way over millennia. If we want to honestly affirm #1, we must find some way to exclude humanity and other civilizations from ever being powerful enough.

Destruction

The easiest way to ensure humanity will never have enough computing power is for it to die. (A cheerful thought! Wouldn’t one rather exist in a simulation than not exist at all?)

Perhaps advanced civilizations reliably destroy themselves. (This is convenient as it also explains the Fermi paradox.) It could be rogue AIs, nuclear war, nanoplagues, or your favorite existential risk.

A failure to develop could well be as fatal as any direct attack. A failure to develop interstellar travel leaves humanity vulnerable to a solar-system-wide catastrophe. One cannot assume humanity will survive indefinitely at its present level of development, nor even at higher levels.

But undesirability doesn’t mean this is false. After all, we can appeal to various empirical arguments for #2 and #3, and so the burden of proof is on those who believe humanity will forever be inadequate to the task of simulating worlds, or will abandon its eternal love of games/simulated-worlds.

SA is invalid

One might object to the SA that the triple disjunction is valid, but of no concern: it is unwarranted to suggest that we may be simulated. An emulation or simulation would presumably be of such great accuracy that it’d be meaningless for inhabitants to think about it: there are no observations they could make one way or the other. It is meaningless to them—more theology than philosophy.

We may not go to the extreme of discarding all non-falsifiable theories, but we should at least be chary of theories that disclaim falsifiability in most circumstances9.

Investigating implications of SA

The simulation hypothesis is susceptible to some form of investigation, however. We can investigate the nature of our own universe and make deductions about any enclosing/simulating universe10.

More concretely, we can put lower bounds on the computing power available in the lower11 universe, and incidentally on its size.

If a simulated universe requires n units of space-time, then its simulator must be made of at least n + 1 units; it would be paradoxical if a simulated universe could be more information-rich than the simulator, inasmuch as the simulator includes the simulated (how could something be larger than itself?). So if we observe our universe to require n units, then by the pigeonhole principle, the simulator must be at least n + 1 units.

Limits of investigation

This is a weak method of investigation, but how weak?

Very.

Suppose we assume that the entire universe is being simulated, particle by particle. This is surely the worst-case scenario, from the simulator’s point of view.

There are a number of estimates, but we’ll say that there are 10^86 particles in the observable universe. It’s not known how much information it takes to describe a particle in a reasonably accurate simulation—a byte? A kilobyte? But let’s say the average particle can be described by a megabyte—then the simulating universe needs 10^92 spare bytes. (About 10^62 ultimate laptops’ worth of data storage.)
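The storage arithmetic, redone in Python (the megabyte-per-particle figure is, as above, a generous guess):

```python
PARTICLES = 1e86          # rough count of particles in the observable universe
BYTES_PER_PARTICLE = 1e6  # a generous 1 megabyte each
LAPTOP_BITS = 1e31        # memory capacity of one ultimate laptop, in bits

total_bytes = PARTICLES * BYTES_PER_PARTICLE
laptops = total_bytes * 8 / LAPTOP_BITS  # convert bytes to bits, then divide
print(total_bytes)  # ≈1e92 bytes
print(laptops)      # ≈8e61, i.e. ~10^62 ultimate laptops
```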

But we run into problems. All that’s really needed for a simulation is whatever falls within humanity’s observation. And the simulators could probably ‘cheat’ even more with techniques like lazy evaluation and memoization.

It is not necessary for the simulating thing to be larger, particle-wise, than the simulated. The ‘greater size’ principle is information-theoretic.

If one wanted to simulate an Earth, the brute-force approach would be to take an Earth’s worth of atoms, and put the particles on a one-to-one basis. But programmers use ‘brute-force’ as a pejorative term connoting an algorithm that is dumb, slow, and far from optimal.

Better algorithms are almost certain to exist. For example, Conway’s Game of Life might initially seem to require n^2 space-time as the plane fills up. But if one caches the many repeated patterns, as in Bill Gosper’s HashLife algorithm, logarithmic and greater speedups are possible.12 It need not be as inefficient as a real universe, which mindlessly recalculates again and again. It is not clear that a simulation need be isomorphic in every step to a ‘real’ universe, if it gets the right results. And if one does not demand a true isomorphism between simulation and simulated, but allows corners to be cut, large constant-factor optimizations are available.13
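A miniature of the caching idea (a toy sketch, not HashLife itself, which memoizes recursively over power-of-two quadtrees; here we merely memoize the Life rule over 3×3 neighborhoods, so each distinct local pattern is computed only once however often it recurs):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def next_cell(neighborhood):
    """Conway's Life rule for a 3x3 neighborhood, given as a 9-tuple of 0/1
    (index 4 is the center). The cache means each distinct pattern is only
    ever computed once, no matter how often it recurs on the grid."""
    center = neighborhood[4]
    live = sum(neighborhood) - center
    return 1 if live == 3 or (center and live == 2) else 0

def step(grid):
    """One generation on a sparse grid: a dict {(x, y): 1} of live cells."""
    candidates = {(x + dx, y + dy) for (x, y) in grid
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    return {(x, y): 1 for (x, y) in candidates
            if next_cell(tuple(grid.get((x + dx, y + dy), 0)
                               for dx in (-1, 0, 1) for dy in (-1, 0, 1)))}

# A blinker oscillates with period 2: two steps return it to itself.
blinker = {(0, 0): 1, (1, 0): 1, (2, 0): 1}
```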

And the situation gets worse depending on how many liberties we take. What if instead of calculating everything in the universe, the simulation is cut down by orders of magnitude to just everything in our lightcone? Or in our solar system? Or just the Earth itself? Or the area immediately around oneself? Or just one’s brain? Or just one’s mind as a suitable abstraction? (The brain is not very big, information-wise.14) Remember the estimate for a single mind: something like 10^17 operations a second. Reusing the ops/second figure from the ultimate laptop, we see that our mind could be handled in perhaps 10^-34 of a liter.
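Using the 10^17 ops/second mind estimate against the laptop’s ≈10^51 ops/second in one liter:

```python
MIND_OPS = 1e17    # high-end estimate for one human mind, ops/second
LAPTOP_OPS = 1e51  # ultimate laptop: ops/second from one liter of matter

fraction = MIND_OPS / LAPTOP_OPS
print(fraction)    # ≈1e-34 of a liter suffices for a mind
```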

This is a very poor lower bound! All this effort, and the most we can conclude about a simulating universe is that it must be at least moderately bigger than the Planck length (≈1.6×10^-35 m) cubed.

Investigating

We could imagine further techniques: perhaps we could send off Von Neumann probes to the far corners of the universe, in a bid to deliberately increase resource consumption. (This is only useful if the simulators are ‘cheating’ in some of the ways listed above. If they are simulating every single particle, making some particles move around isn’t going to do very much.)

Or we could run simulations of our own. It would be difficult for simulators to program their systems to see through all the layers of abstraction and optimize the simulation. To do so in general would seem to be a violation of Rice’s theorem (a generalization of the halting problem). It is well known that while any program can be run on a Universal Turing machine, the performance penalty can range from the minor to the horrific.

The more interpreters and virtual machines there are between a program and its fundamental substrate, the more difficult it is to understand the running code—it becomes ever more opaque, indirect, and bulky.

And there could be dozens of layers. A simulated processor is being run; this receives bytecode which it must translate into machine code; the machine code is being sent via the offices of an operating system, which happens to be hosting another (virtualized) operating system (perhaps the user runs Linux but needs to test a program on Mac OS X). This hosted operating system is largely idle save for an interpreter for some high-level language; the program loaded into the interpreter happens to be itself an interpreter… Even excluding any further simulations, we have arguably reached at least 5 or 6 levels of indirection already (viewing the OSs as just single layers), and without resorting to any obtuse uses of indirection15.

Even without resort to layers, it is possible for us to waste indefinite amounts of computing power, power that must be supplied by any simulator. We could brute-force open mathematical questions, or we could simply execute every possible program. It would be difficult for the simulator to ‘cheat’ on that—how would they know what every possible program does? (Or if they can know something like that, they are so different from us as to render speculation quisquilian.) It may sound impossible to run every program, because we know many programs are infinite loops; but it is, in fact, easy to implement the technique of dovetailing.
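Dovetailing interleaves execution fairly: at stage n, start program n and give one more step to every program already started, so every halting program eventually runs to completion even though others loop forever. A sketch, assuming some enumeration `programs(n)` of runnable programs, here modeled as Python generators (a real enumeration would interpret the n-th Turing machine):

```python
from itertools import count

_DONE = object()  # sentinel marking an exhausted (halted) program

def dovetail(programs, stages=None):
    """Interleave infinitely many programs: at stage n, start program n,
    then give one step to every program started so far. Pass `stages`
    to cut the (otherwise endless) run short; returns which programs
    halted, in order of completion."""
    started, finished = [], []
    stage_iter = count() if stages is None else range(stages)
    for n in stage_iter:
        started.append((n, programs(n)))
        for i, prog in started:
            if next(prog, _DONE) is _DONE and i not in finished:
                finished.append(i)
    return finished

# Toy enumeration: program n takes n steps, then halts.
halts_after_n = lambda n: iter(range(n))
```

Program n, started at stage n and stepped once per stage, halts at stage 2n; so a 10-stage run sees exactly programs 0 through 4 finish.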

Risks

But one wonders: aren’t we running a risk here by sending off Von Neumann probes and using vast computing resources? We risk angering the simulators, as we callously use up their resources. Or might there not be a grand OOM Killer? With every allocation, we come a little closer to the unknown limits!

There’s no guarantee that we’re in a simulation in the first place. We never ruled out the possibility that most civilizations are destroyed. It would be as terrible to step softly for fear of the Divine Programmer as it would be for fear of the Divine. Secondly, the higher we push the limits without disaster (and we will push the limits to enable economic growth alone), the more confident we should be that we aren’t in a simulation. (The higher the limit, the larger the simulating universe; and small universes are more parsimonious than large ones.) And who knows? Perhaps we live in a poor simulation, which will let us probe it more directly.


  1. This is a complete disjunction; we can see this by considering what’s left if we combine these 2 binary predicates: ‘they don’t exist, but they use it’ is incoherent, and ‘they don’t exist, and don’t use it’ is already covered by #1.↩︎

  2. Although the revision might not be too awful; Robin Hanson tries such a revision in “How To Live In A Simulation”, and comes up with pretty benign suggestions, some of which are a good idea on their own merits:

    …you should care less about others, live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you.

    ↩︎
  3. Pretty much every video-game is a simulation of something—war, racing, travel, etc. They vary in realism and fictionalized settings, but fundamentally, they are still simulations. In the case of The Sims, a simulation of everyday life, even.↩︎

  4. An example of something possible but not feasible might be classically factoring a composite number with a quadrillion digits; we know there is a factorization of it, and exactly how to go about getting it, but that doesn’t mean we could do it in less than the lifetime of the universe.↩︎

  5. Footnote 10 of Bostrom’s “Are You Living in a Computer Simulation?”↩︎

  6. A great deal depends on how expensive simulating a brain is. Ralph Merkle puts it at 10^13–10^16 ops/second. Robert J. Bradbury’s “Matrioshka Brains” essay lists 4 estimates for an individual brain ranging anywhere from 10^13 ops/second to 10^17.↩︎

  7. Try evaluating this in your friendly REPL: (1.4*10^15) * 2^39 >= (5.009001*10^26). Writing out the 39 doublings by hand, multiplying by 2 again and again, conveys the magnitude of how many Moore-doublings are necessary.↩︎

  8. Nick Bostrom, ibid.↩︎

  9. Simulations are presumably observable in a few special cases—besides fictional examples, the simulators could violate various physical laws to demonstrate their existence. It’s hard to see why this would be very common, though—The Matrix seems to suggest that the quasi-godlike beings who could build & run a simulation are somehow fallible, and such tampering would seem to destroy the value of simulations run for entertainment or research.↩︎

  10. An important assumption we must make is that the simulating universe is much like ours: with similar laws of physics, and most importantly, computable. If the other universe runs on radically different physics, that could make nonsense of any conclusions. Fortunately, assuming that the other universe is like ours is the simpler assumption, and is even reasonable (if universes are too alien to each other, then why would one create the other? And from where could the detailed description come?).↩︎

  11. I use ‘lower’ in the sense of ‘more fun­da­men­tal’.↩︎

  12. Examples of caching can be drawn from existing emulators like QEMU, which often dynamically translate and cache repeating patterns so as to execute fewer ‘native’ instructions. From “Fabrice Bellard”, by Andy Gocke and Nick Pizzolato (ACM 2009):

    While a substantial accomplishment on its own, QEMU is not simply a processor emulator, it uses dynamic translation to improve performance. As explained in the Usenix paper [Bellard 2005], QEMU uses a novel approach to ISA translation. Instead of translating one instruction at a time, QEMU gathers many instructions together in a process called “chunking,” and then translates this chunk as a whole. QEMU then remembers these chunks. Many times there are certain chunks which will occur many times in the code of a program. Instead of taking the time to translate them all separately, QEMU stores the chunks and their native translation, next time simply executing the native translation instead of doing translation a second time. Thus, Bellard invented the first processor emulator that could achieve near native performance in certain instances.

    ↩︎
  13. Real-world emulations offer interesting examples of the trade-offs. “Accuracy takes power: one man’s 3GHz quest to build a perfect SNES emulator” covers the variation in demands for emulating old consoles: an early emulator managed to emulate the NES with 25MHz x86 processors but was somewhat inaccurate and required modifications to the games; a much more accurate emulator uses up 1600MHz. For the SNES, the spectrum is from 300MHz to 3,000MHz. Finally, an emulation down to the transistor level of the 1972 Pong runs at <=10 frames per second on a 3,000MHz x86 processor; to run in real-time on a 50Hz TV would require (to naively extrapolate) a 15,000MHz x86 processor.↩︎

  14. Cryptographer Ralph Merkle, in “The Molecular Repair of the Brain”, finds an upper bound of 100 bits per atom; “Dancoff and Quastler[128], using a somewhat better encoding scheme, say that 24.5 bits per atom should suffice”; a willingness to work at the molecular level reduces this to 150 bits per molecule, for molecules made of a few to thousands of atoms; a better encoding cuts the 150 bits down to 80 bits; Merkle comments “50 bits or less is quite achievable”.

    Expanding out to the whole brain, Merkle quotes Cherniak (“The Bounded Brain: Toward Quantitative Neuroanatomy”):

    On the usual assumption that the synapse is the necessary substrate of memory, supposing very roughly that (given anatomical and physiological “noise”) each synapse encodes about one binary bit of information, and a thousand synapses per neuron are available for this task: 10^10 cortical neurons x 10^3 synapses = 10^13 bits of arbitrary information (1.25 terabytes) that could be stored in the cerebral cortex.

    (Anders Sandberg, in comparison, suggests much higher upper bounds as he examines information output.) An even more extreme lower bound than a terabyte is the one derived by Thomas Landauer (“How Much Do People Remember? Some Estimates of the Quantity of Learned Information in Long-term Memory”, Cognitive Science 10, 477–493, 1986) based on memory tests, of 2 bits per second or mere hundreds of megabytes over a lifetime! (See Merkle’s “How Many Bytes in Human Memory?”)↩︎

  15. For example, perhaps one could instead be running Firefox, which interprets JavaScript (in which one can run Linux), and one has visited a web page with a Java applet (an interpreter of Java); and this applet is running Pearcolator—which is a PowerPC emulator. Of course, just a raw processor isn’t very useful; perhaps one could run one of the operating systems written in Java on it, and then doesn’t one want to browse the Internet a little with good old Firefox…?↩︎