A catalogue of software constructs, languages, or APIs which are unexpectedly Turing-complete; implications for security and reliability
2012-12-09–2019-06-15 finished certainty: highly likely importance: 6
‘Computers’, in the sense of being Turing-complete, are extremely common. Almost any system of sufficient complexity—unless carefully engineered otherwise—may be found to ‘accidentally’ support Turing-completeness somewhere inside it, through ‘weird machines’ which can be rebuilt out of the original system’s parts, even in systems which would appear to have not the slightest thing to do with computation. Software systems are especially susceptible to this, which often leads to serious security problems, as the Turing-complete components can be used to run attacks on the rest of the system.
I provide a running catalogue of systems which have been, surprisingly, demonstrated to be Turing-complete. These examples may help one see past the surface of systems to the weird machines and Turing-completeness lurking within.
Turing-completeness (TC) is (avoiding pedantically rigorous formal definitions) the property of a system being able to, under some simple representation of input & output, compute any program of interest, including another computer in some form.
TC, besides being foundational to computer science and to understanding many key issues like “why a perfect antivirus program is impossible”, is also weirdly common: one might think that such universality—a system being powerful enough to run any program—would be difficult to achieve, but it turns out to be the opposite: it is difficult to write a useful system which does not immediately tip over into TC. “Surprising” examples of this behavior remind us that TC lurks everywhere, and security is extremely difficult.
I like demonstrations of TC lurking in surprising places because they are often a display of considerable ingenuity, and feel like they are making a profound philosophical point about the nature of computation: computation is not something esoteric which can exist only in programming languages or computers carefully set up, but is something so universal to any reasonably complex system that TC will almost inevitably pop up unless actively prevented.
“Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.” —Philip Greenspun (“Greenspun’s tenth rule”)
They are probably best considered as a subset of “discovered” or “found” esoteric programming languages (esolangs). So FRACTRAN, as extraordinarily minimalist as it is, does not count; nor would a deliberately obfuscated language like Malbolge (where it took years to write a trivial program) count, because it was designed to be an esolang. Neither would Conway’s Game of Life count, because questions about whether it was TC appeared almost immediately upon publication, and being able to program Tetris in it is not surprising. Likewise, given the complexity of packet-switching Ethernet networks & routers, it’s not necessarily too surprising if one can build a cellular automaton into them or encode logical circuits, or if airplane ticket planning/validation turns out to be computationally hard; nor that sed or repeated regexp/find-and-replace substitutions can compute, since string rewriting is a classic model of computation.
Many cases of discovering TC seem to consist of simply noticing that a primitive in a system is a little too powerful/flexible.
What is “surprising” may differ from person to person. Here is a list of accidentally-Turing-complete systems that I found surprising:
- Wang tiles: multi-colored squares, whose placement is governed by the rule that adjacent colors must be the same (historically, not surprising to Wang, but was surprising to me and I think to a lot of other people)
- MMUs shuffle computer RAM around to make programming easier; if a program sets up its share of memory properly, it can execute arbitrary computations via MMU page-faults (comments; paper) without ever running code itself, by turning the MMU faulting mechanism into a one-instruction set computer.
- “mov is Turing-complete”: the apparently innocuous x86 assembler instruction mov, which copies data between the CPU & RAM, can be used to implement a transport-triggered-architecture one-instruction set computer, allowing for playing Doom (and for bonus points, it can be done using xor too—there are many such TC one-instruction set computers, such as ByteByteJump)
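These one-instruction set computers are easy to make concrete. Below is a minimal sketch (not any particular hardware design) of subleq, the classic single instruction—subtract and branch if the result is ≤ 0—which suffices for universal computation given unbounded memory:

```python
def subleq(mem, pc=0, max_steps=10_000):
    """Run a subleq ('SUBtract and branch if Less-than-or-EQual to zero')
    program: the classic one-instruction set computer. Each instruction is
    three cells (a, b, c): mem[b] -= mem[a]; jump to c if the result <= 0,
    else fall through to the next triple. A negative jump target halts."""
    while 0 <= pc and max_steps > 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        max_steps -= 1
    return mem

# Addition built from nothing but subtraction: with Z a zeroed scratch cell,
# 'subleq A,Z; subleq Z,B; subleq Z,Z (halt)' computes B += A.
#             program (cells 0-8)          A  B  Z
mem = subleq([9, 11, 3, 11, 10, 6, 11, 11, -1, 5, 7, 0])
```

After running, `mem[10]` holds 12 (= 5 + 7); in principle any program whatsoever compiles down to such triples.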
- “x86 is Turing-complete with no registers”
- “return-into-libc attacks”: software libraries provide pre-packaged functions, each of which is intended to do one useful thing; a fully TC ‘language’ can be cobbled out of just calls to these functions and nothing else, which enables evasion of security mechanisms since the attacker is not running any recognizable code of his own. See, among many others, “The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86)” & “On the Expressiveness of Return-into-libc Attacks”.
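A toy model of the idea (hypothetical addresses and a made-up ‘libc’, eliding all real exploitation mechanics): the attacker supplies only data—a list of addresses of functions the victim program already contains—and the ordinary call/return machinery strings them together into a program:

```python
# A pretend 'libc' of innocent functions the victim program already ships;
# the addresses are of course invented for illustration.
libc = {
    0x1000: lambda state: state.__setitem__("x", state["x"] + 1),  # increment
    0x2000: lambda state: state.__setitem__("x", state["x"] * 2),  # double
}

def run_chain(stack, state):
    """Model the CPU's return behavior: pop each 'return address' off the
    attacker-controlled stack and execute whatever function lives there.
    No attacker-supplied code runs—only victim-provided functions."""
    while stack:
        addr = stack.pop(0)
        libc[addr](state)
    return state

# The 'exploit' is pure data: this address list computes ((0 + 1) + 1) * 2.
state = run_chain([0x1000, 0x1000, 0x2000], {"x": 0})
```

Real chains compose gadgets at the machine-code level, but the principle is the same: enough composable pieces, and data becomes a program.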
- Pokemon Yellow: underflows/overflows of various kinds and limited reprogramming of game RAM are primitives used in many speedruns, but they can be taken much further than simply warping forward in the game (as in Super Mario Bros 3)—the “Pokemon Yellow Total Control Hack” outlines a memory corruption attack which allows one to write arbitrary Game Boy assembler programs by repeated in-game walking and item purchasing. (There are similar feats developed by speedrun aficionados, but I tend to ignore most of them as ‘impure’: for example, one can turn the SNES Super Mario World into an arbitrary game like Snake or Pong, but the new programs must be loaded up into extra hardware, so in my opinion it’s not really showing SMW to be unexpectedly TC, and is different from the other examples. Similarly, one can go from Super Game Boy to SNES to arbitrary code like IRC. This distinction is debatable.)
- a similar memory corruption issue surfaces in the POSIX printf %n option, among other C library functions (Carlini et al 2015); hence, “printbf”—a Brainfuck interpreter in printf
- A StarCraft buffer overflow was used by the SC community to implement complicated maps, tower defense games, Mario, and Mario level editors; emulating the hack to avoid breaking the mods in updated SC versions caused Blizzard quite a bit of trouble.
- a 3D version of chess with check rules can apparently be made TC: Dempsey et al 2019
- heart cells: interact in a way allowing logic gates and hence TC (perhaps not too surprising, since cellular automatons were biologically motivated)
- SVG: PostScript is TC by design, but what about the more modern vector graphics image format, SVG, which is written as XML, a (usually) not-TC document language? SVG also allows a slow encoding of Rule 110 (SVG files can be made arbitrarily large) and so is as TC as anything else.
If that’s not enough, the SVG standard is large and occasionally horrifying: the (failed) SVG 1.2 standard tried to add to SVG images the ability to open raw network sockets.
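For reference, Rule 110—the elementary cellular automaton which keeps reappearing in these constructions, proven universal by Matthew Cook—fits in a few lines. A direct simulator as a sketch (the SVG encoding itself works quite differently):

```python
def rule110(cells, steps):
    """Evolve a ring of binary cells under elementary CA Rule 110.
    A cell's next state depends only on (left, self, right); the 8
    possible neighborhoods index the bits of the number 110 = 0b01101110."""
    n = len(cells)
    for _ in range(steps):
        cells = [(110 >> ((cells[i - 1] << 2)      # left neighbor (wraps)
                          | (cells[i] << 1)        # the cell itself
                          | cells[(i + 1) % n]))   # right neighbor (wraps)
                 & 1
                 for i in range(n)]
    return cells

# A lone 1 grows leftward, seeding Rule 110's characteristic triangles.
row = rule110([0, 0, 0, 0, 1], 1)
```

That this trivial local rule supports universal computation is exactly why nothing more than “arbitrarily large files plus a repeatable transformation” is needed to smuggle it into a format like SVG.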
One category of weird machines doesn’t quite count, since they require an assumption along the lines of the user mechanically clicking, or making the only possible choice, in order to drive the system into its next step; while the user provides no logical or computational power in the process, these aren’t as satisfying examples for that reason:
- Magic: the Gathering: not just TC, but above arithmetic in the hierarchy
- CSS: was designed to be a declarative markup language for tweaking the visual appearance of HTML pages, but CSS declarations interact just enough to allow an encoding of the cellular automaton Rule 110, under the assumption of mechanical mouse clicks on the web browser to advance state (CSS hacks honorable mention: Kevin Kuchta’s “CSS-Only Chat”, which uses no JS by outsourcing computation to the server)
- Microsoft PowerPoint animations (excluding macros, VBScript etc) can implement a Turing machine when linked appropriately (Wildenhain 2017; video; PPT), under the assumption of a user clicking on the only active animation triggers
- JPEG XL includes powerful compression features which can implement fractals or cellular automatons, like Rule 110 in 50 bytes; unfortunately, to enable parallel decoding, JPEG XL appears to chunk images into squares which have a maximum size of 1024px, so what can be computed by any CA is severely limited.
Possibly accidentally or surprisingly Turing-complete systems:
- CSS without the assumption of a driving mouse click (perhaps some sort of Wang tile using reflections and conditionals?)
- Unicode (!): Nicolas Seriot suggests that Unicode’s bidirectional algorithms (intended for displaying scripts like Arabic or Hebrew which go right-to-left rather than left-to-right like English) may be complex enough to support a tag system via case folding rules (eg Turkish).
- Fonts themselves also support glyph substitution rules which are suspiciously close to tag systems and allow for tricks like solving Fizz Buzz or rendering sparklines, although some implementations may not allow recursion and others have hardcoded depth limits. The most impressive use so far is “Fontemon: World’s first video game in a font!” (a Choose-Your-Own-Adventure text game).
- Human visual illusions: Changizi 2008 presents ambiguous images in a circuit-like format, whose depth perception ‘flips’ based on the ‘input’ or top of the circuit, analogous to OR/AND/NOT/XOR computations; the existence of these ‘visual circuits’ hints at the possibility of ‘TC’ images (although many pieces, like a working memory, are missing)
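The tag systems suspected in the Unicode and font entries above are simple to state—which is exactly what makes any rule-driven substitution mechanism worth a suspicious look. A minimal sketch of a Post tag system (with made-up illustrative rules, not actual bidi or glyph-substitution rules):

```python
def tag_system(word, rules, deletion=2, max_steps=1_000):
    """Run a Post tag system: read the first symbol, delete the first
    `deletion` symbols, append that symbol's production, and repeat
    until the word is too short (or a symbol has no rule).
    2-tag systems are known to be Turing-complete."""
    while len(word) >= deletion and word[0] in rules and max_steps > 0:
        word = word[deletion:] + rules[word[0]]
        max_steps -= 1
    return word

# A toy 2-tag system: 'aaa' -> 'ab' -> 'b', then it halts because the
# word has become shorter than the deletion number.
final = tag_system("aaa", {"a": "b", "b": "a"})
```

Any mechanism that can repeatedly “consume a prefix and append a production”—case folding, glyph substitution, macro expansion—is at risk of hosting one of these.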
It turns out that given even a little control over input into something which transforms input to output, one can typically leverage that control into full-blown TC. This matters because, if one is clever, it provides an escape hatch from a system which is small, predictable, controllable, and secure, to one which could do anything. It’s hard enough to make a program do what it’s supposed to do without giving anyone in the world the ability to insert another program into your program. Even if there is no way to outright ‘escape’ the sandbox, such hidden programs can be dangerous, by extracting information about the surrounding program (eg JS embedded in a web page which can extract your passwords by using Row hammer to attack your hardware directly, even if it can’t actually escape your web browser), or can take the host into strange & uncharted (and untested) territories.
That we find these demonstrations surprising is itself a demonstration of our lack of imagination and understanding of computers, computer security, and AI. We pretend that we are running programs on these simple abstract machines which do what we intuitively expect, but they run on computers which are bizarre, and our programs themselves turn out to be computers which are even more bizarre. Secure systems have to be built by construction; once the genie has been let out of the lamp, it’s difficult to patch the lamp.
An active area of research is into languages & systems carefully designed and proven to not be TC (eg. total functional programming). Why this effort to make a language in which many programs can’t be written? Because TC is intimately tied to Gödel’s incompleteness theorems & Rice’s theorem; allowing TC means that one is forfeiting all sorts of provability properties. In a non-TC language, one may be able to easily prove all sorts of useful things to know: for example, that programs terminate, that they are type-safe or not, that they can be easily converted into a logical theorem, that they consume a bounded amount of resources, that one implementation of a protocol is correct or equivalent to another implementation, or that a program has no side-effects and can be transformed into a logically-equivalent but faster version. (The last is particularly important for declarative languages like SQL, where the query optimizer being able to transform queries is key to acceptable performance—though of course one can do a surprising amount in SQL, like 3D raytracing or gradient descent for fitting machine learning models, and some SQL extensions make it TC anyway, by allowing a cyclic tag system to be encoded, the MODEL DSL, or calls out to PL/SQL.)
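As a small illustration of why extended SQL resists the easy termination guarantees above, SQLite’s recursive CTEs (reachable from Python’s standard library) will happily iterate a step function—here the Collatz map—whose halting behavior in general is an open problem:

```python
import sqlite3

# WITH RECURSIVE iterates the Collatz map starting from 27. That this
# particular query terminates is an empirical fact about 27; the general
# question for all starting values is the (unsolved) Collatz conjecture,
# so no query optimizer could prove termination for the family of queries.
con = sqlite3.connect(":memory:")
(steps,) = con.execute("""
    WITH RECURSIVE collatz(n, step) AS (
        SELECT 27, 0
        UNION ALL
        SELECT CASE WHEN n % 2 = 0 THEN n / 2 ELSE 3 * n + 1 END, step + 1
        FROM collatz WHERE n > 1
    )
    SELECT MAX(step) FROM collatz
""").fetchone()
con.close()
```

Starting from 27, the recursion takes 111 steps to reach 1—a bound no static analysis of the query text could have supplied.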
Languages or systems which unintentionally cross over the line into being TC can be amusing or useful (although usually not), but they also have some serious implications: such systems, because they were never expected to be programmable, can be harmful, or extremely insecure & a cracker’s delight, as exemplified by the “language-theoretic security” paradigm, based on exploiting “weird machines”; some of the literature:
- “Exploit Programming: From Buffer Overflows to ‘Weird Machines’ and Theory of Computation”, Bratus et al 2011
- “The Halting Problems of Network Stack Insecurity”, Sassaman et al 2011
- “The Page-Fault Weird Machine: Lessons in Instruction-less Computation”, Bangert et al 2013
- “‘Weird Machines’ in ELF: A Spotlight on the Underappreciated Metadata”, Shapiro et al 2013
- “Interrupt-oriented Bugdoor Programming: A minimalist approach to bugdooring embedded systems firmware”, Tan et al 2014
- “The Weird Machines in Proof-Carrying Code”, Vanegue 2014
- “Framing Signals—A Return to Portable Shellcode”, Bosman & Bos 2014
- “Weird machines, exploitability, and provable unexploitability”, Dullien 2017
- “Psychic Paper”, Siguza 2020
Most recently, Spectre & generalizations (McIlroy et al 2019) can be interpreted as providing a whole ‘shadow computer’ in the CPU via speculative execution, which can be programmed to do things like run malware without visibly executing any of the malware instructions, while still having side-effects in the real computer. Spectre is interesting in being a class of vulnerabilities which had existed for decades in CPU architectures closely scrutinized for security problems, but which just sort of fell into a collective human blind spot. Nobody thought of controllable speculative execution as being a ‘computer’ which could be ‘programmed’. Once someone noticed, because it was a powerful computer and of course TC, it could be used in many ways to attack the rest of the system.
“Too powerful” languages can also manifest as nasty DoS attacks; the fuzz tester afl found that it could create an infinite loop in OpenBSD’s roff document formatting tool (whose first version shipped 43 years prior) by abusing some of its string substitution rules.
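The failure mode generalizes to any string-substitution facility: a rule whose output reintroduces its own trigger loops forever. A sketch with made-up rules (not afl’s actual roff input):

```python
def rewrite(text, rules, max_steps=100):
    """Repeatedly apply the first matching (lhs -> rhs) substitution rule.
    Returns (final_text, halted): halted is False if the step budget was
    exhausted, the symptom of a substitution-driven infinite loop / DoS."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in text:
                text = text.replace(lhs, rhs, 1)
                break
        else:
            return text, True   # no rule applies: rewriting terminated
    return text, False          # budget exhausted: likely non-terminating

# A shrinking rule terminates; a self-reproducing rule ('a' -> 'aa') never
# stops producing new 'a's to rewrite.
_, ok = rewrite("aab", [("a", "b")])
_, bad = rewrite("a", [("a", "aa")])
```

Document formatters rarely impose such a step budget, which is why a crafted input can pin a CPU indefinitely.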
Why are there so many places for backdoors and weird machines in your “computer”? Because your computer is in fact scores or hundreds, perhaps even thousands, of computer chips, many of which host weird machines and are explicitly or implicitly capable of Turing-complete computations (many more powerful than desktops of bygone eras), working together to create the illusion of a single computer. Backdoors, bugs, weird machines, and security do not care about what you think—only where resources can be found and orchestrated into a computation.
Moved to “How Many Computers Are In Your Computer?”.
To draw some parallels here and expand Dullien 2017, I think unexpected Turing-complete systems and weird machines have something in common with heist movies or cons or stage magic: they all share a specific paradigm we might call the security mindset or hacker mindset.
What OP/security/speedrunning/hacking/social-engineering all have in common is that they show that the much-ballyhooed ‘hacker mindset’ is, fundamentally, a sort of reductionism run amok, where one ‘sees through’ abstractions to a manipulable reality.6 Like Neo in the Matrix—a deeply cliche analogy for hacking, but cliche because it resonates—one achieves enlightenment by seeing through the surface illusions of objects and can now see the endless lines of green code which make up the Matrix (and vice-versa).
In each case, the fundamental principle is that the hacker asks: “here I have a system W, which pretends to be made out of a few Xs; however, it is really made out of many Y, which form an entirely different system, Z; I will now proceed to ignore the X and understand how Z works, so I may use the Y to thereby change W however I like”.
The baroque complexity of Sendmail possibly contributed to its infamous reputation for insecurity—it was one of the exploit vectors of the Morris worm, and for years shipped with a remote root backdoor.↩︎
Dwarf Fortress provides clockwork mechanisms, so TC is unsurprising; but the water is implemented as a simple cellular automaton, so there might be more ways of getting TC in DF! The DF wiki currently lists 4 potential ways of creating logic gates: fluids, clockwork mechanisms, mine-carts, and creature/animal logic gates involving doors + pressure-sensors.↩︎
See Think Math’s domino logic gates & 2014 demonstration of a 4-bit adder implemented using domino logic.↩︎
Earlier versions required players to take all possible actions, but the 2019 paper claims to remove this assumption and force all actions, rendering the construction fully mechanical.↩︎
‘Thinking outside the box’ can be this, but often isn’t. This is a specific pattern of reductionism, and many instances of ‘thinking outside the box’ are other patterns, like putting on another layer, or eliminating the systems in question entirely.↩︎