A catalogue of software constructs, languages, or APIs which are unexpectedly Turing-complete; implications for security and reliability
9 Dec 2012–15 June 2019 finished certainty: highly likely importance: 6
‘Computers’, in the sense of being Turing-complete, are extremely common. Almost any system of sufficient complexity—unless carefully engineered otherwise—may be found to ‘accidentally’ support Turing-complete computation somewhere inside it, even systems which would appear to have not the slightest thing to do with computation. Software systems are especially susceptible to this, which often leads to serious security problems, as the Turing-complete components can be used to run attacks on the rest of the system.
I provide a running catalogue of systems which have been, surprisingly, demonstrated to be Turing-complete.
Turing-completeness (TC) is (avoiding the rigorous formal definition) the property of a system being able to, under some simple representation of input & output, compute any program of interest, including another computer in some form.
TC, besides being foundational to computer science and understanding many key issues like “why a perfect antivirus program is impossible”, is also weirdly common: one might think that such universality—a system being powerful enough to run any program—might be difficult or rare to achieve, but it turns out to be the opposite: it is difficult to write a useful system which does not immediately tip over into TC. “Surprising” examples of this behavior remind us that TC lurks everywhere, and security is extremely difficult.
I like demonstrations of TC lurking in surprising places because they are often a display of considerable ingenuity, and feel like they are making a profound philosophical point about the nature of computation: computation is not something esoteric which can exist only in programming languages or computers carefully set up, but is something so universal to any reasonably complex system that TC will almost inevitably pop up unless actively prevented.
“Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.”
They are probably best considered as a subset of “discovered” or “found” esoteric programming languages (esolangs). So FRACTRAN, as extraordinarily minimalist as it is, does not count; nor would a deliberately obfuscated language like Malbolge (where it took years to write a trivial program) count because it was designed to be an esolang; but neither would Conway’s Game of Life count because questions about whether it was TC appeared almost immediately upon publication and being able to program Tetris in it is not surprising, and given the complexity of packet-switching networks & routers it’s not necessarily too surprising if one can build a cellular automaton into them or encode logical circuits, or if airplane ticket planning/validation is not just NP-hard or EXPSPACE-hard but undecidable (because of the complex rules airlines require). Many configuration or special-purpose languages or tools or complicated games turn out to violate the Rule of least power & be “accidentally Turing-complete”, like Sendmail1, MediaWiki templates,
sed or repeated regexp/find-replace commands in an editor (any form of string substitution or templating or compile-time computation is highly likely to be TC on its own or when iterated, since they often turn out to support a lambda calculus or a term-rewriting language or tag system, eg the esolangs “///” or Thue), XSLT, Infinite Minesweeper, Dwarf Fortress2, Starcraft, Minecraft, Ant, Transport Tycoon & Cities: Skylines, C++ templates & Java generics, DNA computing etc are TC, but these are not surprising either: many games support scripting (ie TC-ness) to make their development easier and enable fan modifications, so games’ TC may be as simple as including syntax for calling out to a better-known language like Perl, or it may just be an obscure part of a standard format (most people these days are probably unaware that TrueType & many fonts are PostScript programs based on stack machines, similar to DWARF debugging and ELF metadata, or that some music formats go beyond MIDI in providing scripting capabilities and must be interpreted to be displayed; once one knows this, fonts being TC is no more surprising than TeX documents being TC, leading, of course, to many severe & fascinating font or media security vulnerabilities such as the BLEND vulnerability or SNES & NES code exploiting Linux systems. Other formats, like PDF, are simply appalling.3) Similarly, such feats as creating a small Turing machine using Legos or dominos4 would not count, since we already know that mechanical computers work. On the other hand, “weird machines” are a fertile ground of “that’s TC?” reactions.
Many cases of discovering TC seem to consist of simply noticing that a primitive in a system is a little too powerful/flexible. For example, if Boolean logic can be implemented, that’s a sign that more may be possible, and that those Boolean gates may compose into the full-blown circuit logic of a Turing machine. Substitutions, definitions/abbreviations, regular expressions (especially with any extensions or custom features), or any other kind of ‘search and replace’ functionality is another red flag, as they suggest that a cellular automaton or tag system is lurking. This applies to anything which can change state based on ‘neighbors’, like a spreadsheet cell or a pixel. Any sort of scripting interface or API, even if locked down, may not be locked down quite enough. An actual scripting language or VM is so blatant as to be boring when (not if) someone finds a vulnerability or escape from the sandbox. Operations which take variable lengths of time or whose completion can’t easily be predicted from the start are another source of primitives, as they may ‘depend’ on the data they are operating over in some way, implementing different operations on different data, which may mean that they can be made equivalent to Boolean conditionals based on a careful encoding of data.
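The ‘tag system lurking in search-and-replace’ red flag can be made concrete with a minimal sketch: a 2-tag system’s entire update rule is a delete-prefix plus an append-suffix, exactly what iterated find-and-replace provides. (The production rules below are the classic Collatz-style ones often cited in the tag-system literature; the specific variable names are illustrative.)

```python
# A 2-tag system: the kind of machine that iterated find-and-replace
# rules can smuggle in. Each step reads the first symbol, deletes the
# first two symbols, and appends that symbol's production string.
PRODUCTIONS = {"a": "bc", "b": "a", "c": "aaa"}  # classic Collatz-like rules

def tag_step(word: str) -> str:
    head = word[0]
    return word[2:] + PRODUCTIONS[head]

def run(word: str, steps: int) -> str:
    for _ in range(steps):
        if len(word) < 2:
            break  # the tag system halts on words shorter than the deletion number
        word = tag_step(word)
    return word
```

Starting from `"aaa"`, four steps of this pure delete-and-append loop already produce `"aaaaa"`; nothing in the mechanism is more than string surgery, yet tag systems of this form are Turing-complete.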
What is “surprising” may differ from person to person. Here is a list of accidentally-Turing-complete systems that I found surprising:
- Wang tiles: multi-colored squares, whose placement is governed by the rule that adjacent colors must be the same (historically, not surprising to Wang, but it was surprising to me and I think to a lot of other people)
- MMU shuffle computer RAM around to make programming easier; if a program sets up its share of memory properly, it can execute arbitrary computations via MMU page-faults (comments; paper) without ever running code itself by turning the MMU faulting mechanism into a one-instruction set computer.
- “mov is Turing-complete”: the apparently innocuous x86 assembler instruction mov, which copies data between the CPU & RAM, can be used to implement a transport-triggered-architecture one-instruction set computer, allowing for playing Doom (and for bonus points, it can be done using xor too—there are many such TC one-instruction set computers, such as ByteByteJump)
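The core trick behind mov-only computation (a sketch, not the actual mov compiler) is that a memory load indexed by a computed flag acts as a conditional without any branch instruction; picking between two values is just addressing:

```python
# Branchless selection: the primitive a transport-triggered machine
# builds on. The comparison is encoded arithmetically and the
# "decision" is a plain indexed load, with no compare-and-jump.
def branchless_max(x: int, y: int) -> int:
    # 1 if x < y, else 0 (assumes |x - y| < 2**63, mimicking 64-bit math)
    flag = ((x - y) >> 63) & 1
    table = [x, y]     # 'memory'; the load below is the conditional
    return table[flag]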
- “x86 is Turing-complete with no registers”
- “return-into-libc attacks”: software libraries provide pre-packaged functions, each of which is intended to do one useful thing; a fully TC ‘language’ can be cobbled out of just calls to these functions and nothing else, which enables evasion of security mechanisms since the attacker is not running any recognizable code of his own. See, among many others, “The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86)” & “On the Expressiveness of Return-into-libc Attacks”.
- Pokemon Yellow: “Pokemon Yellow Total Control Hack” outlines an exploit of a memory corruption attack which allows one to write arbitrary Game Boy assembler programs by repeated in-game walking and item purchasing. (There are similar feats which have been developed by speedrun aficionados, but I tend to ignore most of them as they are ‘impure’: for example, one can turn the SNES Super Mario World into an arbitrary game like Snake or Pong but you need the new programs loaded up into extra hardware, so in my opinion, it’s not really showing SMW to be unexpectedly TC and is different from the other examples. Similarly, one can go from Super Game Boy to SNES to arbitrary code like IRC. This distinction is debatable.)
- a similar memory corruption issue surfaces in POSIX printf’s %n option, among other C library functions (Carlini et al 2015); hence, printbf—a Brainfuck interpreter in printf
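The %n directive really is a write primitive: it stores the number of characters printed so far through a pointer argument, which is what makes format-string attacks (and printbf) possible. A minimal demonstration via ctypes (assumes a Unix libc; hardened platforms may restrict %n):

```python
import ctypes

libc = ctypes.CDLL(None)   # load the process's own libc (Unix)
count = ctypes.c_int(-1)
# %n writes into *count the number of bytes printed before it appeared:
libc.printf(b"hack%n\n", ctypes.byref(count))
# count.value should now be 4 ("hack" is 4 bytes)
```

A ‘print’ function that silently writes attacker-chosen values to attacker-chosen addresses is all the memory model a weird machine needs.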
- A StarCraft buffer overflow was used by the SC community to implement complicated maps, tower defense games, Mario, and Mario level editors; emulating the hack to avoid breaking the mods in updated SC versions caused Blizzard quite a bit of trouble.
- a 3D version of chess with check rules can apparently be made TC: Dempsey et al 2019
- heart cells: they interact in a way allowing logic gates and hence TC (perhaps not too surprising, since cellular automata were biologically motivated)
- SVG: PostScript is TC by design, but what about the more modern vector graphics image format, SVG, which is written as XML, a (usually) not-TC document language? SVG also allows a slow encoding of Rule 110 (SVG files can be made arbitrarily large) and so is as TC as anything else.
If that’s not enough, the SVG standard is large and occasionally horrifying: the (failed) SVG 1.2 standard tried to add to SVG images the ability to open raw network sockets.
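Rule 110, the minimal universal cellular automaton that SVG (among many other hosts) gets made to emulate, fits in a few lines; the number 110 is literally the update lookup table. A sketch with wrap-around boundaries:

```python
# Rule 110: each cell's next state is read off from the bits of the
# number 110 (0b01101110), indexed by the 3-cell neighborhood
# (left*4 + center*2 + right). This tiny rule is Turing-complete.
def rule110_step(cells: list[int]) -> list[int]:
    n = len(cells)
    return [
        (110 >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]
```

A single live cell `[0,0,0,1,0]` steps to `[0,0,1,1,0]`: any system whose elements update like this, pixels, tiles, or SVG filter outputs, has smuggled in a universal computer.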
one category of weird machines doesn’t quite count, since its examples require an assumption along the lines of the user mechanically clicking or making the only possible choice in order to drive the system into its next step; while the user provides no logical or computational power in the process, these aren’t as satisfying examples for this reason:
- Magic: the Gathering: not just TC5, but above TC in the arithmetic hierarchy
- CSS: was designed to be a declarative markup language for tweaking the visual appearance of HTML pages, but CSS declarations interact just enough to allow an encoding of the cellular automaton Rule 110, under the assumption of mechanical mouse clicks on the web browser to advance state (CSS hacks honorable mention: Kevin Kuchta’s “CSS-Only Chat”, which uses no JS by outsourcing computation to the server)
- Microsoft PowerPoint animations (excluding macros, VBScript etc) can implement a Turing machine when linked appropriately (Wildenhain 2017; video; PPT), under the assumption of a user clicking on the only active animation triggers
Possibly accidentally or surprisingly Turing-complete systems:
- CSS without the assumption of a driving mouse click (perhaps some sort of Wang tile using reflections and conditionals?)
- Unicode (!): Nicolas Seriot suggests that Unicode’s bidirectional algorithms (intended for displaying scripts like Arabic or Hebrew which go right-to-left rather than left-to-right like English) may be complex enough to support a tag system via case folding rules (eg Turkish). Fonts themselves also support glyph substitution rules which are suspiciously close to tag systems and allow for tricks like solving Fizz Buzz or rendering sparklines.
- Human visual illusions: Changizi 2008 presents ambiguous images in a circuit-like format, whose depth perception ‘flips’ based on the ‘input’ or top of the circuit, which are analogous to OR/AND/NOT/XOR computations; the existence of these ‘visual circuits’ hints at the possibility of ‘TC’ images (although many pieces, like a working memory, are missing)
It turns out that given even a little control over input into something which transforms input to output, one can typically leverage that control into full-blown TC. This matters because, if one is clever, it provides an escape hatch from a system which is small, predictable, controllable, and secure, to one which could do anything. It’s hard enough to make a program do what it’s supposed to do without giving anyone in the world the ability to insert another program into your program. Even if there is no way to outright ‘escape’ the sandbox, such hidden programs can be dangerous, by extracting information about the surrounding program (eg JS embedded in a web page which can extract your passwords by using Row hammer to attack your hardware directly, even if it can’t actually escape your web browser), or can take the host into strange & uncharted (and untested) territories.
That we find these demonstrations surprising is itself a demonstration of our lack of imagination and understanding of computers, computer security, and AI. We pretend that we are running programs on these simple abstract machines which do what we intuitively expect, but they run on computers which are bizarre, and our programs themselves turn out to be computers which are even more bizarre. Secure systems have to be built by construction; once the genie has been let out of the lamp, it’s difficult to patch the lamp.
An active area of research is into languages & systems carefully designed and proven to not be TC (eg total functional programming). Why this effort to make a language in which many programs can’t be written? Because TC is intimately tied to Gödel’s incompleteness theorems & Rice’s theorem, allowing TC means that one is forfeiting all sorts of provability properties: in a non-TC language, one may be able to easily prove all sorts of useful things to know; for example, that programs terminate, that they are type-safe or not, that they can be easily converted into a logical theorem, that they consume a bounded amount of resources, that one implementation of a protocol is correct or equivalent to another implementation, that they are free of side-effects and so a program can be transformed into a logically-equivalent but faster version (particularly important for declarative languages like SQL, where the query optimizer being able to transform queries is key to acceptable performance; but of course one can do a surprising amount in SQL, like 3D raytracing or gradient descent for fitting machine learning models, and some SQL extensions make it TC anyway, by allowing either a cyclic tag system to be encoded, the MODEL clause DSL, or calls out to PL/SQL), etc.
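The termination guarantee that total languages buy can be sketched in ordinary code: any function assembled from primitive recursion alone provably halts, because each call only loops a number of times fixed in advance by its argument. (An illustrative combinator, not any particular total language’s construct.)

```python
# Primitive recursion: build functions only from a base case plus a
# bounded step, so every function constructed this way must terminate.
# This is the guarantee total languages enforce at the type level.
def primrec(base, step):
    def f(n, *args):
        acc = base(*args)
        for i in range(n):         # exactly n iterations: no general loops
            acc = step(i, acc, *args)
        return acc
    return f

add = primrec(lambda y: y, lambda i, acc, y: acc + 1)   # add(n, y) = n + y
mul = primrec(lambda y: 0, lambda i, acc, y: add(y, acc))  # mul(n, y) = n * y
```

Addition, multiplication, and much more fit inside this straitjacket, yet the halting problem for such programs is trivial: they all halt, which is exactly why one can prove resource bounds and equivalences about them.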
Languages or systems which unintentionally cross over the line into being TC can be amusing or useful (although usually not), but they also have some serious implications: such systems, because they were never expected to be programmable, can be harmful, or extremely insecure & a cracker’s delight, as exemplified by the “language-theoretic security” paradigm, based on exploiting “weird machines”; some of the literature:
- “Exploit Programming: From Buffer Overflows to ‘Weird Machines’ and Theory of Computation”, Bratus et al 2011
- “The Halting Problems of Network Stack Insecurity”, Sassaman et al 2011
- “The Page-Fault Weird Machine: Lessons in Instruction-less Computation”, Bangert et al 2013
- “‘Weird Machines’ in ELF: A Spotlight on the Underappreciated Metadata”, Shapiro et al 2013
- “Interrupt-oriented Bugdoor Programming: A minimalist approach to bugdooring embedded systems firmware”, Tan et al 2014
- “The Weird Machines in Proof-Carrying Code”, Vanegue 2014
- “Framing Signals—A Return to Portable Shellcode”, Bosman & Bos 2014
- “Weird machines, exploitability, and provable unexploitability”, Dullien 2017
- “Psychic Paper”, Siguza 2020
Most recently, Spectre & generalizations (McIlroy et al 2019) can be interpreted as providing a whole ‘shadow computer’ in the CPU via speculative execution, which can be programmed to do things like run malware without visibly executing any of the malware instructions, while having side-effects in the real computer. Spectre is interesting in being a class of vulnerabilities which have existed for decades in CPU architectures that were closely scrutinized for security problems, but just sort of fell into a collective human blind spot. Nobody thought of controllable speculative execution as being a ‘computer’ which could be ‘programmed’. Once someone noticed, because it was a powerful computer and of course TC, it could be used in many ways to attack stuff.
“Too powerful” languages can also manifest as nasty DoS attacks; the fuzz tester afl found that it could create an infinite loop in OpenBSD’s roff document formatting tool (whose first version shipped 43 years prior) by abusing some of the string substitution rules.
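That failure mode is easy to reproduce in miniature: a macro whose definition mentions itself makes naive expansion diverge, which is the sort of substitution loop afl tripped over. (A hypothetical one-rule macro table, not roff’s actual syntax.)

```python
# A self-referential macro: expanding "X" reintroduces "X", so naive
# repeated substitution never reaches a fixed point -- a DoS in any
# formatter that expands macros until nothing changes.
DEFS = {"X": "aXb"}  # X expands to something containing X

def expand(text: str, max_steps: int = 10) -> str:
    for _ in range(max_steps):
        new = text.replace("X", DEFS["X"])
        if new == text:
            return text          # fixed point: expansion finished
        text = new
    raise RuntimeError("expansion did not terminate (DoS)")
```

Input without the macro expands instantly; input containing a single `X` blows through any step budget, and a formatter without such a budget simply hangs.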
“Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail. I replied: ‘What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.’”
“Every drop of blood has great talent; the original cellule seems identical in all animals, and only varied in its growth by the varying circumstance which opens now this kind of cell and now that, causing in the remote effect now horns, now wings, now scales, now hair; and the same numerical atom, it would seem, was equally ready to be a particle of the eye or brain of man, or of the claw of a tiger…The man truly conversant with life knows, against all appearances, that there is a remedy for every wrong, and that every wall is a gate.”
“The question is”, said Alice, “whether you can make words mean so many different things.”
“The question is”, said Humpty Dumpty, “which is to be master—that’s all.”
Lewis Carroll, Through the Looking-Glass, and What Alice Found There (1872)7
“I don’t even see the code. All I see is blonde, brunette, redhead.”
Cypher, The Matrix
To draw some parallels here and expand Dullien 2017, I think unexpected Turing-complete systems and weird machines have something in common with heist movies or cons or stage magic: they all share a specific paradigm we might call the security mindset or hacker mindset.
What all of these (security, speedrunning, hacking, social engineering, cons, stage magic) have in common is that they show that the much-ballyhooed ‘hacker mindset’ is, fundamentally, a sort of reductionism run amok, where one ‘sees through’ abstractions to a manipulable reality.8 Like Neo in the Matrix—a deeply cliche analogy for hacking, but cliche because it resonates—one achieves enlightenment by seeing through the surface illusions of objects and can now see the endless lines of green code which make up the Matrix (and vice-versa).
In each case, the fundamental principle is that the hacker asks: “here I have a system W, which pretends to be made out of a few Xs; however, it is really made out of many Y, which form an entirely different system, Z; I will now proceed to ignore the X and understand how Z works, so I may use the Y to thereby change W however I like”.
Abstractions are vital, but abstractions also always leak. (“You’re very clever, young man, but it’s reductionism all the way down!”) This is in some sense the opposite of a mathematician: a mathematician tries to ‘see through’ a complex system’s accidental complexity up to a simpler more-abstract more-true version which can be understood & manipulated—but for the hacker, all complexity is essential, and they are instead trying to unsee the simple abstract system down to the more-complex less-abstract (but also more true) version. (A mathematician might try to transform a program up into successively more abstract representations to eventually show it is trivially correct; a hacker would prefer to compile a program down into its most concrete representation to brute force all execution paths & find an exploit trivially proving it incorrect.) Ordinary users ask only that all their everyday examples of Ys transform into Zs correctly; they forget to ask whether all and only correct examples of Ys transform into correct Zs, and whether only correct Zs can be constructed to become Ys.
It’s all ‘atoms and void’9:
In hacking, a computer pretends to be made out of things like ‘buffers’ and ‘lists’ and ‘objects’ with rich meaningful semantics, but really, it’s just made out of bits which mean nothing and only accidentally can be interpreted as things like ‘web browsers’ or ‘passwords’, and if you move some bits around and rewrite these other bits in a particular order and read one string of bits in a different way, now you have bypassed the password.
In speedrunning (particularly TASes), a video game pretends to be made out of things like ‘walls’ and ‘speed limits’ and ‘levels which must be completed in a particular order’, but it’s really again just made out of bits and memory locations, and messing with them in particular ways, such as deliberately overloading the RAM to cause memory allocation errors, can give you infinite ‘velocity’ or shift you into alternate coordinate systems in the true physics, allowing enormous movements in the supposed map, giving shortcuts to the ‘end’10 of the game.
In robbing a hotel room, people see ‘doors’ and ‘locks’ and ‘walls’, but really, they are just made out of atoms arranged in a particular order, and you can move some atoms around more easily than others, and instead of going through a ‘door’ you can just cut a hole in the wall11 (or ceiling) and obtain access to a space. At Los Alamos, Richard Feynman, among other tactics, obtained classified papers by reaching in underneath drawers, ignoring the locks entirely.
One analysis of the movie Die Hard, “Nakatomi space”, highlights how it & the Israel military’s mouse-holing in the Battle of Nablus treat buildings as kinds of machines, which can be manipulated in weird ways to move around to attack their enemies.
That example reminds me of the Carr & Adey anatomy of locked room murder mysteries, laying out a taxonomy of all the possible solutions which—like a magician’s trick—violate one’s assumptions about the locked room: whether it was always locked, locked at the right time, the murder done while in the room, murder rather than suicide, the room with locked-doors having a ceiling etc. (These tricks inspired Umineko’s mysteries (review), although in it a lot of them turn out to just involve conspirators/lying.)
In lockpicking, copying a key or reverse-engineering its cuts are some of the most difficult ways to pick a lock. One can instead simply use a bump key to brute-force the positions of the pins in a lock, or kick the door in, or among other door lock bypasses, wiggle the bolt, or reach through a crack to open from the inside, or drill the lock. (How do you know someone hasn’t already? You assume it’s the same lock as yesterday?)
Locks & safes have many other interesting vulnerabilities; I particularly like Matt Blaze‘s master-key vulnerability (Blaze 2003/Blaze 2004a/Blaze 2004b), which uses the fact that a master-key lock is actually opening for any combination of master+ordinary key cuts (ie ’master OR ordinary’ rather than ‘master XOR ordinary’), and so it is like a password which one can guess one letter at a time.
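Blaze’s attack can be simulated in a few lines: because each pin independently accepts either the master cut or the change-key cut, probing one position at a time recovers the whole master bitting in at most depths × pins tries instead of depths^pins. (A hypothetical 5-pin, 10-depth lock; the attacker only ever queries the lock as an oracle.)

```python
# Sketch of Blaze's adaptive master-key attack. The lock opens on
# (master OR ordinary) at each pin *independently*, so the master
# bitting leaks one pin at a time, like a password guessed letter
# by letter.
MASTER = [3, 1, 4, 1, 5]   # secret master-key bitting
CHANGE = [2, 7, 1, 8, 2]   # the attacker's own legitimate key

def lock_opens(key):
    # Physical behavior of a master-keyed pin-tumbler lock:
    # each pin accepts its master cut or its change-key cut.
    return all(k in (m, c) for k, m, c in zip(key, MASTER, CHANGE))

def recover_master(change_key, depths=range(10)):
    master = []
    for i in range(len(change_key)):
        for d in depths:
            if d == change_key[i]:
                continue
            probe = list(change_key)
            probe[i] = d            # one filed-down test key per probe
            if lock_opens(probe):
                master.append(d)    # the alternative cut = master cut
                break
        else:
            master.append(change_key[i])  # master shares this cut
    return master
```

With 5 pins and 10 depths, at most 45 test keys recover the master key a 100,000-key brute force would otherwise demand; the ‘OR’ in the lock is the whole vulnerability.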
In stage magic (especially close-up/card/coin/pickpocketing), one believes one is continuously seeing single whole objects which must move from one place to another continuously; in reality, one is only seeing, occasionally, surfaces of many (possibly duplicate) objects, which may be moving only when you are not looking, in the opposite direction, or not moving at all. By hacking object permanence and limited attentional resources, the stage magician shows the ‘impossible’ (Macknik et al 2008’s Table 1 lists folk assumptions which can be hacked). Stage magic works by exploiting our implicit beliefs that no adversary would take the trouble to so precisely exploit our heuristics and shortcuts.1213
In weird machines, you have a ‘protocol’ like SSL or x86 machine code which appears to do simple things like ‘check a cryptographic signature’ or ‘add one number in a register to another register’, but in reality, it’s a layer over far more complex realities like processor states & optimizations like speculative execution reading other parts of memory and then quickly erasing it, and these can be pasted together to execute operations and reveal secrets without ever running ‘code’ (see again McIlroy et al 2019).
Similarly, in finding hidden examples of Turing completeness, one says, ‘this system appears to be a bunch of dominoes or whatever, but actually, each one is a computational element which has unusual inputs/outputs; I will now proceed to wire a large number of them together to form a Turing machine so I can play Tetris in Conway’s Game of Life or use heart muscle cells to implement Boolean logic or run arbitrary computations in a game of Magic: The Gathering’.
Or in side channels, you go below bits and say, ‘these bits are only approximations to the actual flow of electricity and heat in a system; I will now proceed to measure the physical system’ etc.
In social engineering/pen testing, people see social norms and imaginary things like ‘permission’ and ‘authority’ and ‘managers’ which ‘forbid access to facilities’, but in reality, all there is, is a piece of laminated plastic or a clipboard or certain magic words spoken; the people are merely non-computerized ways of implementing rules like ‘if laminated plastic, allow in’, and if you pin a blue piece of plastic to your shirt and you incant certain words at certain times, you can walk right past the guards.14
Many financial or economic strategies have a certain flavor of this; Alice Maz’s Minecraft economics exploitations strongly remind me of ‘seeing through’, as do many clever financial trades based on careful reading of contractual minutiae or taking seriously what are usually abstracted details like ‘taking delivery’ of futures, etc.
And while we’re at it, why are puns so irresistible to hackers? (Consider how omnipresent they are in Gödel, Escher, Bach or the Jargon File or text adventures or…) Computers are nothing but puns on bits, and languages are nothing but puns on letters. Puns force one to drop from the abstract semantic level to the raw syntactic level of sub-words or characters, and back up again to achieve some semantic twist—they are literally hacking language.
And so on. These sorts of things can seem magical (‘how‽’), shocking (‘but—but—that’s cheating!’), or hilarious (in the ‘violation of expectations followed by understanding’ theory of humor) because the abstract system W & our verbalizations are so familiar and useful that we quickly get trapped in our dreams of abstractions, and forget that it is merely a map and not the territory, while inevitably the map has made gross simplifications and it fails to document various paths from one point to another point which we don’t want to exist.
The fool can walk right in because he was too ignorant to know that’s impossible. This is why atheoretical optimization processes like animals (eg cats engaged in fuzz testing) or SMT solvers or evolutionary AI are so dumb to begin with, but in the long run can be so good at surprising us and finding ‘unreasonable’ inputs or reward hacks (analogous to the bias-variance tradeoff): being unable to understand the map, they can’t benefit from it like we do, but they also can’t overvalue it, and, forced to explore the territory directly to get what they want, discover new things.
To escape our semantic illusions can require a determined effort to unsee them, and use of techniques to defamiliarize things. For example, you can’t find typos in your own writing without a great deal of effort because you know what it’s supposed to say; so copyediting advice runs like ‘read it out loud’ or ‘print it out and read it’ or ‘wait a week’ or even ‘read it upside down’ (easier than it sounds). That’s the sort of thing it takes to force you to read what you actually wrote, and not what you thought you wrote. Similar tricks are used for learning drawing: a face is too familiar, so instead you can flip it in a mirror and try to copy it.
Table 1: Types of conjuring effects15 (magic effects, examples, and methodological strategies)
Appearance: an object appears ‘as if by magic’. Examples: pulling a rabbit out of a hat; the Miser’s Dream (in which hundreds of coins seem to appear where previously there were none); Mac King’s giant rock in a shoe trick.
- The object was already there but was concealed (for example, the magician might conceal a coin in his or her hand prior to its production)
- The object was secretly put into position (for example, in the Cups and Balls routine, various objects are secretly loaded under the cups during the routine)
- The object is not there but seems to be (for example, a ‘medium’ can simulate the presence of a spirit at a seance by secretly touching a spectator)
Vanish: an object disappears ‘as if by magic’. Examples: vanishing of a coin; Penn and Teller’s underwater vanishing of a naval submarine; David Copperfield’s vanishing of the Statue of Liberty.
- The object was not really where it appeared to be to begin with (for example, the magician fakes a transfer of a coin from the left hand to the right hand, then shows that the coin ‘disappeared’ from the right)
- The object has been secretly removed (for example, the magician uses a secret device, called a gimmick, to pull an object into his sleeve)
- The object is still there but is concealed (a coin can seem to vanish from the magician’s hand although in reality it is merely concealed)
Transposition: an object changes position in space from position A to position B. Examples: Houdini’s Metamorphosis (in which two people change places between locked boxes); Penn and Teller’s Hanging Man trick (in which Penn is apparently hanged to death, only to be found safe and sound in the audience)
- The object seemed to be at A, but actually was already at B (for example, the magician fakes the transfer of a coin from the right to the left hand, then pretends to transfer the coin magically from left to right)
- The object is still at A but seems to be at B (for example, the magician fakes a coin transfer from the left hand to the right and then, when revealing the coin by dropping it, uses sleight of hand to give the impression that it was dropped from the right hand)
- The object was secretly moved from A to B (for example, a coin in the left hand is secretly transferred to the right hand and then is revealed there)
- A duplicate object is used (for example, both hands hold identical coins that are revealed at different times to simulate a transfer)
Restoration: an object is damaged and then restored to its original condition. Examples: cutting and restoring a rope; sawing an assistant in half; tearing and restoring a newspaper; breaking and restoring rubber bands
- The object was not really damaged
- The object was not really restored
- A duplicate is used
Penetration: matter seems to magically move through matter. Examples: Chinese Linking Rings (metal rings that link and unlink magically); Houdini’s Walking Through A Wall trick; Coins Through The Table
- Penetrations combine the techniques used in the transposition and restoration categories
Transformation: an object changes form (size, colour, shape, weight, etc.). Examples: Colour-Changing Card Trick; Spellbound (in which a coin turns into a different coin); the Professor’s Nightmare (in which three ropes of different length are made equal in length)
Transformations can be seen as the vanishing of object A combined with the appearance of object B:
- Object A was secretly switched with object B
- Object B was always present but was initially disguised as object A
- Object A is disguised as object B at the point of ‘transformation’
Extraordinary feats (including mental and physical feats). Examples: extraordinary memory (remembering the names of all the audience members); extraordinary calculation (reporting the result of multiplying randomly selected 4-digit numbers); extraordinary strength; invulnerability (specific examples: walking on hot coals; Penn and Teller’s bullet-catching trick)
- Might rely on relatively obscure scientific knowledge (such as mathematical or physiological knowledge). For example, walking on hot coals is harmless when performed correctly
Telekinesis: ‘magical’ levitation or animation of an object. Levitation; spoon bending
- The action is caused by an external force (for example, an invisible thread)
- The action is caused by an internal force (elasticity, chemical reaction, magnetism, etc.)
- The action did not actually occur (for example, a spoon bender can convince a spectator that a stationary spoon is still bending)
Extrasensory perception (ESP; including clairvoyance, telepathy, precognition, mental control, etc.). Clairvoyance (acquiring information that is not known to others through ESP); telepathy (acquiring information that is known to others through ESP); precognition (acquiring information from the future); mental control (the performer influences the selection process of another person)
- Controlling a spectator’s choices to give the illusion of free will
- Discovering hidden information (for example, reading information that has been sealed in an envelope, fishing for or pumping information from a spectator, cold reading, etc.)
- Revealing apparent proof that information announced by the spectator was previously known by the magician (for example, by writing the announcement on paper and using sleight of hand to make the paper seem to come out of an envelope that was sealed before the announcement)
Why are there so many places for backdoors and weird machines in your “computer”? Because your computer is in fact scores or hundreds, perhaps even thousands, of computer chips, many of which host weird machines and are explicitly or implicitly capable of Turing-complete computations (many more powerful than desktops of bygone eras), working together to create the illusion of a single computer. Backdoors, bugs, weird machines, and security do not care about what you think—only where resources can be found and orchestrated into a computation.
If ‘the network is now the computer’, it is equally true that ‘the computer is now a network’. While perhaps the earliest personal computers like the Altair 8800 really did have just one ‘computer’ in them, one place through which all Turing-complete tasks had to pass, that era is long over, and a ‘computer’ is composed of countless components, many of which previously could have served as a useful computer in their own right. The phrase “computer” is one of convenience, rather than a hard-and-fast distinction. What is important are the inputs & outputs: how capable is the system as a whole, what resources does it require, and how can its components be reprogrammed?
No one cares if Google is implemented using 50 supercomputers, 50,000 mainframes, 5 million servers, or 50 million embedded/mobile processors, or a mix of any of the above exploiting a wide variety of chips from custom “tensor processing units” to custom on-die silicon (implemented by Intel on Xeon chips for a number of its biggest customers) to FPGAs to GPUs to CPUs to still more exotic hardware like prototype D-Wave quantum computers—as long as it is competitive with other tech corporations and can deliver its services at a reasonable cost. (A “supercomputer” these days mostly looks like a large number of rack-mounted servers with unusual numbers of GPUs & connected by unusually high-speed InfiniBand connections and is not that different from a datacenter.) Any of these pieces of hardware could support multiple weird machines on many different levels of computation depending on their internal dynamics & connectivity. The system can be seen on many levels, each equally invalid but useful for different purposes. Are you a ‘single biological intelligence’ or a community/ecosystem of human cells/neurons/bacteria/yeast/viruses/parasites? And does it matter in the slightest bit to anyone you might wrong?
Here is an example of the ill-defined nature of the question: on your desk or in your pocket, how many computers do you currently have? How many computers are in your “computer”? Did you think just one? Let’s take a closer look—it’s computers all the way down. You might think you have just the one large CPU occupying pride of place on your motherboard, and perhaps the GPU too, but the computational power available goes far beyond just the CPU/GPU, for a variety of reasons: transistors and processor cores are so cheap now that it often makes sense to use a separate core for realtime or higher performance, for security guarantees, to avoid having to burden the main OS with a task, for compatibility with an older architecture or existing software package, because a DSP or core can be programmed faster than a more specialized ASIC can be created, or because it was the quickest possible solution to slap down a small CPU and they couldn’t be bothered to shave some pennies16… Whenever a peripheral or device is made, the Wheel of Reincarnation begins to turn. Further, many of these components can be used as computational elements even if they were not intended to be or try to hide that functionality. (For example, I believe I’ve read that the Commodore 64’s floppy drive’s CPU running Commodore DOS was used as a source of spare compute power & for defeating copy-protection schemes.)
A common AMD/Intel x86 CPU has billions of transistors, devoted to many discrete components:
Each of the 2+ CPU cores can run independently, shutting on or off as necessary, has its own private L1–L2 caches (the L3 is typically shared among cores; these caches are often bigger than desktop computers’ entire RAM a few decades ago17, and likely physically bigger than their CPUs were to boot), and each must be regarded as an individual computer. Further, the CPU as a whole is reprogrammable through microcode, such as to work around errors in the chip design, so what those CPUs compute changes.
CPUs sport increasingly opaque features, such as management engines/secure enclaves, like the Intel Management Engine (with a JVM for programmability; Ruan 2014 & SGX), AMD’s Platform Security Processor (PSP), Android’s TEEs, or Titan chips; these hardware modules typically are full computers in their own right, running independently of the host and able to tamper with it (and have contributed to terminological confusion, as the formerly simple protection ring levels of 0–3 have had to be augmented with “ring −1”, “ring −2”, and “ring −3”). Intel’s ME runs a proprietary unauditable fork of MINIX (an OS better known for its role in the creation of Linux), whose security implications concerned Google enough to launch a project to remove MINIX from its CPUs and to deploy its Titan chip to cryptographically verify firmware on boot.
DSP units, custom silicon: ASICs for video formats like h.264 probably are not Turing-complete (despite their support for complicated deltas and compression techniques which might allow something like Wang tiles); but, for example, Apple’s A9 mobile system-on-a-chip goes far beyond simply a dual-core ARM CPU and GPU: like Intel/AMD desktop CPUs, it includes a secure enclave (a physically separate dedicated CPU core), but it also includes an image co-processor, a motion/voice-recognition coprocessor (partially to support Siri), and apparently a few other cores. These ASICs are sometimes there to support AI tasks, and presumably specialize in matrix multiplications for neural networks; and as recurrent neural networks are Turing-complete… Other companies have rushed to expand their systems-on-chips as well, like Motorola or Qualcomm.
motherboard BIOS & management engines (on top of the CPU equivalents), typically with network access
These management or debugging chips may be ‘accidentally’ left enabled on shipping devices, like the Via C3 CPUs’ embedded ARM CPUs.
GPUs have hundreds or thousands of simple cores, which can run neural networks well (which are highly expressive, or Turing-complete), or do general-purpose computation (albeit more slowly than the CPU)19
smartphones: in addition to all the other units mentioned, there is an independent baseband processor running a proprietary realtime OS for handling radio communications with the cellular towers/GPS/other things, or possibly more than one virtualized using something like L4. Baseband processors have been found with backdoors, in addition to all their vulnerabilities.
Given ARM CPUs are used in most of these embedded applications, it’s no surprise ARM likes to boast that “a modern smartphone will contain somewhere between 8 and 14 ARM processors, one of which will be the application processor (running Android or iOS or whatever), while another will be the processor for the baseband stack”.
SIM cards for smartphones are much more than simple memory cards recording your subscription information, as they are smart cards which can independently run Java Card applications (apparently NFC chips may be like this as well), somewhat like the JVM in the Intel Management Engine. Naturally, SIM cards can be hacked too and used for surveillance etc.
the IO controllers for tape drives, hard drives, flash drives, SSDs, etc. typically all have ARM processors to run the on-disk firmware for tasks like hiding bad sectors from the operating system, accelerating disk encryption, or doing wear leveling; these can be hacked.
USB/Thunderbolt cables/devices, or motherboard-attached devices: an embedded processor on the device is needed to negotiate data/power protocols, at the least for cables/batteries/chargers20, and devices like WiFi adapters, keyboards, mice, or SD cards may be even more heavy-duty, with multiple additional specialized processors of their own.
monitor-embedded CPU for control of displaying whatever the GPU sends it (part of a tradition going back to smart teletypes)
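The remark above that recurrent neural networks are Turing-complete (the classic Siegelmann & Sontag results use rational-weight RNNs) can be given a toy flavor in a few lines. The following is a hand-wired illustrative sketch of mine, not any real framework: a single feedback loop over threshold units already implements a finite-state machine, here the parity automaton.

```python
# Toy sketch (hand-wired, illustrative only): a "recurrent network" of
# threshold units whose one bit of hidden state tracks the parity of a
# bit stream -- recurrence turns a fixed-size network into a state machine,
# the first step on the road to the RNN Turing-completeness results.

def rnn_step(state, x):
    """One recurrent step: compute XOR(state, x) out of threshold units."""
    a = 1 if (state + x) >= 1 else 0   # OR unit
    b = 1 if (state + x) >= 2 else 0   # AND unit
    return 1 if (a - b) >= 1 else 0    # output unit: OR-and-not-AND = XOR

def parity(bits):
    """Run the recurrence over an input stream; returns 1 iff an odd number of 1s."""
    state = 0
    for x in bits:
        state = rnn_step(state, x)
    return state

print(parity([1, 0, 1, 1]))  # 3 ones -> 1
print(parity([1, 1, 0]))     # 2 ones -> 0
```

Scaling this up to full Turing-completeness requires unbounded precision in the weights or an external memory, but the state-machine core of the argument is the same.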
So a desktop or smartphone can reasonably be expected to have anywhere from 15 to several thousand “computers” (in the sense of a Turing-complete device which can be programmed in a usefully general fashion with little or no code running on the ‘official’ computer), each computationally powerful enough to run many programs from throughout computing history, and each exploitable by an adversary for surveillance, exfiltration, or attacks against the rest of the system.
None of this is unusual historically, as even the earliest mainframes tended to be multiple computers, with the main computer doing batch processing while additional smaller computers took care of high-speed I/O operations that would otherwise choke the main computers with interrupts.
In practice, aside from the computer security community (as all these computers are insecure and thus useful attack routes), users don’t care that our computers, under the hood, are insanely complex and more accurately seen as a motley menagerie of hundreds of computers awkwardly yoked together; as long as it is working correctly, they perceive & use it as a single powerful computer.
Until it stops working as expected.
The baroque complexity of Sendmail possibly contributed to its infamous reputation for insecurity—it was one of the exploit vectors of the Morris worm, and for years shipped with a remote root backdoor (the WIZ command).
Dwarf Fortress provides clockwork mechanisms, so TC is unsurprising; but the water is implemented as a simple cellular automaton, so there might be more ways of getting TC in DF! The DF wiki currently lists 4 potential ways of creating logic gates: the fluids, the clockwork mechanisms, mine-carts, and creature/animal logic gates involving doors+pressure-sensors.↩︎
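To give a concrete flavor of how simple a cellular-automaton rule can be while still supporting universal computation, here is a plain-Python sketch of mine (nothing DF-specific) of Rule 110, the 1D elementary cellular automaton Matthew Cook proved Turing-complete:

```python
# Sketch: one synchronous update of a 1D binary cellular automaton under
# Rule 110, whose update table is simply the bits of the number 110 --
# about as simple as a "fluid" rule gets, yet proven Turing-complete.

RULE = 110  # new cell = bit number (left<<2 | center<<1 | right) of 110

def step(cells):
    """Apply Rule 110 once, with fixed all-zero boundary cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        idx = (left << 2) | (cells[i] << 1) | right
        out.append((RULE >> idx) & 1)
    return out

# A single live cell grows the characteristic left-leaning Rule 110 pattern:
row = [0] * 15 + [1]
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

DF’s water rules are of course different in detail, but the point stands: any update rule of comparable locality and simplicity is a candidate substrate for logic gates.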
Earlier versions of the construction required players to voluntarily take all possible actions, but the 2019 paper claims to remove this assumption by making every action forced, rendering the construction fully mechanical.↩︎
‘Thinking outside the box’ can be this, but often isn’t. This is a specific pattern of reductionism, and many instances of ‘thinking outside the box’ are other patterns, like putting on another layer, or eliminating the systems in question entirely.↩︎
Democritus: “By convention sweet is sweet, bitter is bitter, hot is hot, cold is cold, color is color; but in truth there are only atoms and the void.”↩︎
A fictional example from Ender’s Game is worth noting: if victory in Battle School is defined by 4 soldiers at the corner of the enemy gate & someone passing through, then why not—shades of Eurisko—skip fighting entirely & go straight for the gate?↩︎
pg356 of A Burglar’s Guide to the City, Geoff Manaugh 2016:
Schatz’s exhortation to players to move against the architecture, not with it, to uncover a scene’s possible crimes, is useful not only in the world of games. Ignoring the paths laid out by architects and even remaking a space from within are some of the most fundamental ways in which burglars misuse the built environment…In one of the most interesting moments in Bill Mason’s memoir, he sees that architecture can be made to do what he wants it to do; it’s like watching a character in Star Wars learn to use the Force.
…he explains that his intended prize was locked inside a room whose door was too closely guarded for him to slip through. Then he realizes the obvious: he has been thinking the way the hotel wanted him to think—the way the architects had hoped he would behave—looking for doors and hallways when he could simply carve a new route where he wanted it. The ensuing realization delights him. “Elated at the idea that I could cut my own door right where I needed one,” he writes, Mason simply breaks into the hotel suite adjacent to the main office. There, he flings open the closet, pushes aside the hangers, and cuts his way from one room into the other using a drywall knife. In no time at all, he has cut his “own door” through to the manager’s office, where he takes whatever he wants—departing right back through the very “door” he himself made. It is architectural surgery, pure and simple.
Later, Mason actually mocks the idea that a person would remain reliant on doors, making fun of anyone who thinks burglars, in particular, would respect the limitations of architecture. “Surely if someone were to rob the place,” he writes in all italics, barbed with sarcasm, “they’d come in as respectable people would, through the door provided for the purpose. Maybe that explains why people will have 4 heavy-duty locks on a solid oak door that’s right next to a glass window.” People seem to think they should lock-pick or kick their way through solid doors rather than just take a $12 drywall knife and carve whole new hallways into the world. Those people are mere slaves to architecture, spatial captives in a world someone else has designed for them.
Something about this is almost unsettlingly brilliant, as if it is nonburglars who have been misusing the built environment this whole time; as if it is nonburglars who have been unwilling to question the world’s most basic spatial assumptions, too scared to think past the tyranny of architecture’s long-held behavioral expectations….Because doors are often the sturdiest and most fortified parts of the wall in front of you, they are a distraction and a trap. By comparison, the wall itself is often more like tissue paper, just drywall and some 2×4s, without a lock or a chain in sight. Like clouds, apartment walls are mostly air; seen through a burglar’s eyes, they aren’t even there. Cut a hole through one and you’re in the next room in seconds.
Stage magician Teller, of Penn & Teller, puts this well in interviews: what makes stage magic work is hard work. From Teller’s 2012 essay, “Teller Reveals His Secrets: The smaller, quieter half of the magician duo Penn & Teller writes about how magicians manipulate the human mind”:
I think you’ll see what I mean if I teach you a few principles magicians employ when they want to alter your perceptions…Make the secret a lot more trouble than the trick seems worth. You will be fooled by a trick if it involves more time, money and practice than you (or any other sane onlooker) would be willing to invest. My partner, Penn, and I once produced 500 live cockroaches from a top hat on the desk of talk-show host David Letterman. To prepare this took weeks. We hired an entomologist who provided slow-moving, camera-friendly cockroaches (the kind from under your stove don’t hang around for close-ups) and taught us to pick the bugs up without screaming like preadolescent girls. Then we built a secret compartment out of foam-core (one of the few materials cockroaches can’t cling to) and worked out a devious routine for sneaking the compartment into the hat. More trouble than the trick was worth? To you, probably. But not to magicians.
Or in his Huttson 2015 interview:
Matt: “So why don’t you explain all your tricks?”
Teller: “Because the short explanation—the explanation that you’d have to do during a theatrical or TV performance—is dull and no fun. The greatest secret to making a deceptive piece of magic is you do it by the ugliest possible means. It’s complex, it’s unromantic, it’s unclever. Because there are no big secrets. There is no safe full of magic secrets somewhere. Jim Steinmeyer said he thinks most of the public believes there’s a big safe that contains all the magic secrets. The biggest job for a magician, he says, is to conceal the fact that that safe is empty. Because every magic secret is just a minor modification of something that you fully understand in everyday life. Take suspending something with a thread, for example. Everybody’s not been able to see a piece of thread when they were trying to put it through a needle. What makes it difficult to find is lighting and background. If a magician’s using a thread on stage, say, to levitate a ball, he must use lighting and background to conceal the thread. There’s no obscure secret in that. You learned that playing in your grandmother’s sewing box. Every magic ‘secret’ is hiding in plain sight in the everyday world. It’s not news, and eminently drab.”
Houdini’s trick on Sir Arthur Conan Doyle exemplifies these strategies. No one would expect Houdini to renovate a room just for a trick, to have learned a steganographic code to communicate the phrase Doyle wrote on a piece of paper to his assistant without Doyle noticing, or the assistant to manipulate a magnetic pole behind a small suspended slate board, hiding it in the viewer’s precise blind spot, in order to make it appear as if the chalk were hovering in mid-air & writing by itself. Doyle did not, and disbelieved Houdini’s statement that it was merely a trick. But Doyle should have remembered Hume’s dictum: which is more likely—witnessing the paranormal, or that somewhere in the world there was a man as cunning, careful, & compulsive as Houdini? The latter!↩︎
Speaking of ‘social engineering’, why was Facebook’s success in spreading from a niche of college students to much of the world by offering such superficial social networking so surprising to so many? Perhaps its success is a hint that the underlying logic of social interactions is much more abstractable than, and not as rich & subtle as, we’d prefer to think.↩︎
We adopt Lamont and Wiseman’s classification7 of conjuring or magic effects into 9 main categories.↩︎
An example of this kind of mentality is noted by pkaye on HN:
Reminds of when I was doing firmware development and the ASIC team would ask if they could toss in a extra Cortex-M3 core to solve specific control problems. Those cores would be used as programmable state machines. For the ASIC team tossing in a extra core was free compared to custom logic design. However for the firmware team it would be another job to write and test that firmware. We had designs with upwards of 10 Cortex-M3 cores. I’ve heard from a friend at another employer had something like 32 such cores and it was a pain to debug.
Or might be running apparently harmless FP operations which still encode a (deep) NN by exploiting FP settings enabling nonlinearities. See also “Adversarial Reprogramming of Neural Networks”, Elsayed et al 2018 & Neekhara et al 2018.↩︎
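A minimal sketch of the kind of trick involved (my own illustrative constant & function name, not the papers’ actual construction): in finite-precision floating point, even pure addition is effectively nonlinear, since adding a huge constant rounds small inputs away—yielding a step-like function from two ‘harmless’ linear operations:

```python
# Sketch: a nonlinearity built from nothing but float64 addition/subtraction.
# Adding 2^53 forces rounding at that magnitude, where the spacing between
# representable float64 values is 2.0, so inputs are snapped to the nearest
# even integer -- a step function from "linear" ops, the germ of hiding a
# neural network's nonlinearities inside FP rounding behavior.

BIG = 2.0 ** 53  # in [2^53, 2^54), float64 can only represent even integers

def fp_round_step(x):
    """Step-like nonlinearity using only + and -, via FP rounding."""
    return (x + BIG) - BIG

print(fp_round_step(0.3))  # rounded away to 0.0
print(fp_round_step(1.7))  # snapped up to 2.0
```

The Elsayed et al & Neekhara et al papers cited above are about adversarially reprogramming real networks; this sketch only illustrates the narrower point that FP settings alone can smuggle in a nonlinearity.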
Arrigo Triulzi in 2008 demoed an exploit system which combined a hack of the (computationally limited) NIC and GPU to run independently of the host OS/CPU: “Project Maux Mk.II: ‘I Own the NIC, Now I Want A Shell!’”/“The Jedi Packet Trick takes over the Deathstar (or: ‘taking NIC backdoors to the next level’)”.↩︎
Ken Shirriff amusingly notes in a Macbook charger analysis: “One unexpected component is a tiny circuit board with a microcontroller, which can be seen above. This 16-bit processor constantly monitors the charger’s voltage and current. It enables the output when the charger is connected to a Macbook, disables the output when the charger is disconnected, and shuts the charger off if there is a problem. This processor is a Texas Instruments MSP430 microcontroller, roughly as powerful as the processor inside the original Macintosh. [The processor in the charger is a MSP430F2003 ultra low power microcontroller with 1kB of flash and just 128 bytes of RAM. It includes a high-precision 16-bit analog to digital converter. More information is here.]”↩︎