Surprisingly Turing-Complete

A catalogue of software constructs, languages, or APIs which are unexpectedly Turing-complete; implications for security and reliability (computer science)
created: 9 Dec 2012; modified: 13 Dec 2018; status: finished; confidence: highly likely; importance: 6

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. (Greenspun's Tenth Rule)

Turing-completeness (TC) is the property of a system being able to, under some simple representation of input & output, compute any program.

TC, besides being foundational to computer science and understanding many key issues like why a perfect antivirus program is impossible, is also weirdly common: one might think that such universality, a system being powerful enough to run any program, would be difficult or rare to achieve, but it turns out to be the opposite, and it is difficult to write a useful system which does not immediately tip over into TC. It turns out that given even a little control over input into something which transforms input to output, one can typically leverage that control into full-blown TC. This can be amusing, useful (although usually not), harmful, or extremely insecure & a cracker's delight (see language-theoretic security, based on exploiting weird machines1). Surprising examples of this behavior remind us that TC lurks everywhere, and security is extremely difficult.
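
As a concrete illustration of how little machinery universality requires, here is a minimal sketch (in Python; the code and names like rule110_step are illustrative, not drawn from any system discussed here) of the elementary cellular automaton Rule 110, whose one-line neighborhood-update rule was proven Turing-complete by Matthew Cook:

    def rule110_step(cells):
        """One synchronous update of Rule 110 on a list of 0/1 cells (fixed 0 boundary)."""
        padded = [0] + cells + [0]
        # Rule 110's lookup table: the new value of a cell given its (left, center, right) neighborhood.
        table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
                 (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
        return [table[(padded[i - 1], padded[i], padded[i + 1])] for i in range(1, len(padded) - 1)]

    row = [0] * 40 + [1]          # a single live cell on the right edge
    for _ in range(20):
        print("".join("#" if c else "." for c in row))
        row = rule110_step(row)

An 8-entry lookup table is the entire "instruction set", yet with a suitable (admittedly elaborate) encoding of input and output it can simulate any Turing machine.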

Overly powerful languages can also manifest as nasty DoS attacks; the fuzz tester afl found that it could create an infinite loop in OpenBSD's roff by abusing some of its string-substitution rules.
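
To make the failure mode concrete, here is a minimal sketch (in Python; the two rules are hypothetical, not the actual roff string definitions afl triggered) of mutually re-triggering substitution rules, where each expansion pass re-introduces the patterns it just rewrote, so expansion never reaches a fixed point and the text grows without bound:

    # Expanding "A" yields two "B"s and expanding "B" yields an "A", so every
    # full pass over the rules doubles the amount of work left to do.
    rules = [("A", "B B"), ("B", "A")]

    text = "A"
    for step in range(10):            # capped here; uncapped, this loops (and grows) forever
        for pattern, replacement in rules:
            text = text.replace(pattern, replacement)
        print(step, len(text))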

They are probably best considered as a subset of discovered or found esoteric programming languages (esolangs). So FRACTRAN, as extraordinarily minimalist as it is, does not count2; nor would a deliberately obfuscated language like Malbolge (where it took years to write a trivial program), because it was designed to be an esolang; nor would Conway's Game of Life, because questions about whether it was TC appeared almost immediately upon publication, so it turning out to be TC is not surprising. Likewise, given the complexity of packet-switching networks & routers, it's not too surprising if one can build a cellular automaton into them or encode logical circuits, or if airplane ticket planning/validation is not just NP-hard or EXPSPACE-hard but undecidable (because of the complex rules airlines require). Many configuration or special-purpose languages, tools, or complicated games turn out to violate the Rule of least power & be accidentally Turing-complete: MediaWiki templates, sed or repeated regexp/find-replace commands in an editor (any form of string substitution, templating, or compile-time computation is highly likely to be TC on its own or when iterated, since such systems often turn out to support a lambda calculus, a term-rewriting language, or a tag system, eg the esolangs /// or Thue), XSLT, Infinite Minesweeper, Dwarf Fortress3, Starcraft, Minecraft, Ant, Transport Tycoon, C++ templates & Java generics, DNA computing, etc. But these are not surprising either: many games support scripting (ie TC-ness) to make their development easier and enable fan modifications, so games' TC may be as simple as including syntax for calling out to a better-known language like Perl, or it may just be an obscure part of a standard format (most people these days are probably unaware that TrueType & many other fonts are effectively programs for stack machines, much like PostScript, similar to DWARF debugging and ELF metadata, or that some music formats go beyond MIDI in providing scripting capabilities and must be interpreted to be displayed); once one knows this, fonts being TC is no more surprising than TeX documents being TC, leading, of course, to many severe & fascinating font or media security vulnerabilities such as the BLEND vulnerability or SNES & NES code exploiting Linux systems. Other formats, like PDF, are simply appalling.4 Similarly, such feats as creating a small Turing machine using Legos or dominos5 would not count, since we already know that mechanical computers work.
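
To see how easily iterated find-and-replace crosses the line, here is a minimal sketch (in Python; tag_system and collatz_rules are illustrative names, not taken from any of the systems above) of a 2-tag system, one of the classic string-rewriting formalisms just mentioned; the particular rules are De Mol's well-known 2-tag system computing a Collatz-like function on unary inputs:

    def tag_system(word, rules, deletion=2, max_steps=10_000):
        """Run a tag system: read the first symbol, delete `deletion` symbols from
        the front, append that symbol's production; halt when the word gets too short."""
        trace = [word]
        for _ in range(max_steps):
            if len(word) < deletion:
                break
            word = word[deletion:] + rules[word[0]]
            trace.append(word)
        return trace

    # De Mol's rules: a -> bc, b -> a, c -> aaa, deleting 2 symbols per step.
    collatz_rules = {"a": "bc", "b": "a", "c": "aaa"}
    trace = tag_system("a" * 3, collatz_rules)
    print([len(w) for w in trace if set(w) == {"a"}])   # [3, 5, 8, 4, 2, 1]

Three substitution rules already carry out nontrivial arithmetic, and 2-tag systems in general are Turing-complete, which is why templating and find-replace features keep stumbling into TC.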

On the other hand, the vein of computer security research called weird machines is a fertile ground for "that's TC?" reactions. What is surprising may differ from person to person.

Possibly accidentally Turing-complete systems:

  • CSS without the assumption of a driving mouse click
  • SVG: PostScript is TC by design, but what about the more modern vector graphics image format, SVG, which is written as XML, a (usually) not-TC document language? It seems like in conjunction with XSLT it may be, but I haven’t found any proofs or demonstrations of this in a normal web browser context. The SVG standard is large and occasionally horrifying: the (failed) SVG 1.2 standard tried to add to SVG images the ability to open raw network sockets.
  • Unicode: Nicolas Seriot suggests that Unicode's bidirectional algorithm (intended for displaying scripts like Arabic or Hebrew which go right-to-left rather than left-to-right like English) may be complex enough to support a tag system via case folding rules (eg Turkish).

See also

Appendix

How many computers are in your computer?

Some people seem to get caught up in discussions about weird machines or how big an AI agent must be and whether there will be one, two, 10, or millions; this is not an important issue as it is merely an internal organizational one. What is important are the inputs and outputs: how capable is the system as a whole and what resources does it require? No one cares if Google is implemented using 50 supercomputers, 50,000 mainframes, 5 million servers, or 50 million embedded/mobile processors, or a mix of any of the above exploiting a wide variety of chips from custom tensor processing units to custom on-die silicon (implemented by Intel on Xeon chips for a number of its biggest customers) to FPGAs to GPUs to CPUs to still more exotic hardware like prototype D-Wave quantum computers - as long as it is competitive with other tech corporations and can deliver its services at a reasonable cost. (A supercomputer these days mostly looks like a large number of rack-mounted servers with unusual numbers of GPUs & connected by unusually high-speed InfiniBand connections and is not that different from a datacenter.) Any of these pieces of hardware could support multiple weird machines on many different levels of computation depending on their internal dynamics & connectivity. Similarly, it is foolish to insist on defining whether an AI is one AI or many: any AI system might be implemented as a single giant neural network, or as a sharded NN running asynchronously, or as a heterogeneous set of micro-services, or as a society of mind etc - but it doesn’t especially matter, from a complexity or risk perspective, how exactly it’s organized internally as long as the totality works; the question for AI risk is how dangerous is the totality. The system can be seen on many levels, each equally invalid but useful for different purposes.

Here is an example of the ill-defined nature of the question: on your desk or in your pocket, how many computers do you currently have? How many computers are in your computer? Did you think just one? Let’s take a closer look - it’s computers all the way down.

You might think you have just the one large CPU occupying pride of place on your motherboard, and perhaps the GPU too, but the computational power available goes far beyond just the CPU/GPU, for a variety of reasons: transistors and processor cores are so cheap now that it often makes sense to use a separate core for realtime or higher performance, for security guarantees, to avoid having to burden the main OS with a task, for compatibility with an older architecture or existing software package, because a DSP or core can be programmed faster than a more specialized ASIC can be created, or because it was the quickest possible solution to slap down a small CPU and they couldn't be bothered to shave some pennies. Further, many of these components can be used as computational elements even if they were not intended to be, or even hide that functionality. (For example, I believe I've read that the Commodore 64's floppy drive's CPU running Commodore DOS was used as a source of spare compute power & for defeating copy-protection schemes.)

Thus:

  • A common AMD/Intel CPU has billions of transistors, devoted to a large number of tasks:

    • Each of the 2-8 main CPU cores can run independently, shutting on or off as necessary, and has its own private caches L1-L3 (often bigger than desktop computers’ entire RAM a few decades ago6, and likely physically bigger than their CPUs were to boot), and must be regarded as individuals.
    • The CPU as a whole is reprogrammable through microcode, such as to work around errors in the chip design, and sports increasingly opaque features like the Intel Management Engine (with a JVM for programmability; Ruan 2014 & SGX), AMD's Platform Security Processor (PSP), or Android's TEEs or Titan chips; these hardware modules typically are full computers in their own right, running independently of the host and able to tamper with it.
    • any floating-point unit may be Turing-complete through encoding computation into floating-point operations in the spirit of FRACTRAN7 (see the FRACTRAN sketch after this list)
    • the MMU can be programmed into a page-fault weird machine driven by a CPU stub, as previously mentioned
  • DSP units, custom silicon: ASICs for video formats like h.264 probably are not Turing-complete (despite their support for complicated deltas and compression techniques which might allow something like Wang tiles), but, for example, Apple's A9 mobile system-on-a-chip goes far beyond simply a dual-core ARM CPU and GPU: like Intel/AMD desktop CPUs, it includes a secure enclave (a physically separate dedicated CPU core), but it also includes an image co-processor, a motion/voice-recognition coprocessor (partially to support Siri), and apparently a few other cores. These ASICs are sometimes there to support AI tasks, and presumably specialize in matrix multiplications for neural networks; as recurrent neural networks are Turing-complete… Other companies have rushed to expand their systems-on-chip as well, like Motorola or Qualcomm.
  • motherboard BIOS and/or management chips with network access

    • Mark Ermolov notes that

      It’s amazing how many heterogeneous CPU cores were integrated in Intel Silvermont’s Moorefield SoC (ANN): x86, ARC, LMT, 8051, Audio DSP, each running own firmware and supporting JTAG interface

    These management or debugging chips may be accidentally left enabled on shipping devices, like the VIA C3 CPUs' embedded ARM CPUs.
  • GPUs have several hundred or thousand simple cores, each of which can run neural networks well (which are highly expressive or Turing-complete), or do general-purpose computation (albeit slower than the CPU)8
  • the controllers for tape drives, hard drives, flash drives, or SSD drives typically all have ARM processors to run the on-disk firmware for tasks like hiding bad sectors from the operating system; these can be hacked. (Given ARM CPUs are used in most of these embedded applications, it's no surprise ARM likes to boast that a modern smartphone will contain somewhere between 8 and 14 ARM processors, one of which will be the application processor (running Android or iOS or whatever), while another will be the processor for the baseband stack.)
  • network chips do independent processing for DMA. (This sort of independence is why features like Wake-on-LAN for netboot work.)
  • smartphones: in addition to all the other units mentioned, there is an independent baseband processor running a proprietary realtime OS for handling radio communications with the cellular towers/GPS/other things, or possibly more than one virtualized using something like L4. Baseband processors have been found with backdoors, in addition to all their vulnerabilities.
  • SIM cards for smartphones are much more than simple memory cards recording your subscription information, as they are smart cards which can independently run Java Card applications (apparently NFC chips may be like this as well), somewhat like the JVM in the IME. Naturally, SIM cards can be hacked too and used for surveillance etc.
  • USB or Thunderbolt cables or devices, or motherboard-attached devices: an embedded processor on the device is needed to negotiate the data/power protocols, at the least for cables/batteries/chargers9, and devices like WiFi adapters, keyboards, mice, or SD cards may be even more heavy-duty, containing multiple additional specialized processors of their own. In theory, most of these are separate and are at least prevented from directly subverting the host via DMA by in-between IOMMU units, but the devil is in the details…
  • monitor-embedded CPU (part of a tradition going back to smart teletypes)
  • random weird chips like the MacBook Touch Bar running watchOS
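
Since the FPU bullet above leans on FRACTRAN-style arithmetic encodings (as does footnote 7), here is a minimal sketch of a FRACTRAN interpreter (in Python; fractran and adder are illustrative names, and the one-fraction program shown is the textbook adder rather than anything running on real hardware):

    from fractions import Fraction

    def fractran(n, program, max_steps=10_000):
        """Run a FRACTRAN program: at each step, multiply the current integer by the
        first fraction that yields an integer; halt when no fraction applies."""
        trace = [n]
        for _ in range(max_steps):
            for f in program:
                if n * f.numerator % f.denominator == 0:
                    n = n * f.numerator // f.denominator
                    trace.append(n)
                    break
            else:
                break                 # no fraction applies: the program halts
        return trace

    # The classic one-fraction adder: started on 2^a * 3^b, it halts on 3^(a+b).
    adder = [Fraction(3, 2)]
    print(fractran(2**4 * 3**3, adder)[-1] == 3**7)   # True: 4 + 3 = 7

The only primitive is "multiply by a fraction if the result is an integer", that is, ordinary multiplication plus a divisibility test, yet Conway's 14-fraction PRIMEGAME enumerates the primes this way and FRACTRAN as a whole is Turing-complete; an FPU that can be coaxed into performing such multiply-and-test sequences is doing general computation.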

So a desktop or smartphone can reasonably be expected to have anywhere from 15 to several thousand computers in the sense of a Turing-complete device which can be programmed in a usefully general fashion with little or no code running on the official computer, which is computationally powerful enough to run many programs from throughout computing history and which can be exploited by an adversary for surveillance, exfiltration, or attacks against the rest of the system.

None of this is unusual historically, as even the earliest mainframes tended to be multiple computers, with the main computer doing batch processing while additional smaller computers took care of high-speed I/O operations that would otherwise choke the main computers with interrupts.

In practice, aside from the computer security community (as all these computers are insecure and thus useful hidey-holes for the NSA & VXers), users don't care that our computers, under the hood, are insanely complex and more accurately seen as a motley menagerie of hundreds of computers awkwardly yoked together (was it "the network is the computer" or "the computer is the network"…?); they perceive and use the machine as a single computer.


  1. An active area of research is into languages & systems carefully designed and proven to not be TC (eg. total functional programming). Why this effort to make a language in which many programs can't be written? Because TC is intimately tied to Gödel's incompleteness theorems & Rice's theorem, allowing TC means that one is forfeiting all sorts of provability properties: in a non-TC language, one may be able to easily prove all sorts of useful things; for example, that programs terminate, that they are type-safe or not, that they can be easily converted into a logical theorem, that they consume a bounded amount of resources, that one implementation of a protocol is correct or equivalent to another implementation, that they lack side-effects and so a program can be transformed into a logically-equivalent but faster version (particularly important for declarative languages like SQL, where the query optimizer being able to transform queries is key to acceptable performance; of course, one can do a surprising amount in SQL, like gradient descent for fitting machine learning models, and some SQL extensions make it TC anyway, whether by allowing a cyclic tag system to be encoded, by the model DSL, or by calling out to PL/SQL) etc. Some of the literature on weird machines:
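
     For contrast, a minimal sketch (in Python; run and add are illustrative names, not drawn from the cited literature) of what a deliberately non-TC language looks like: a tiny LOOP-style interpreter whose only loop construct repeats its body a number of times fixed before the body runs, so every program provably terminates (such languages compute exactly the primitive recursive functions):

         def run(program, env):
             """Interpret a tiny LOOP-style language: increments, resets, and loops whose
             iteration count is read once, up front - so termination is guaranteed."""
             for instr in program:
                 if instr[0] == "inc":                    # ("inc", x): x := x + 1
                     env[instr[1]] = env.get(instr[1], 0) + 1
                 elif instr[0] == "zero":                 # ("zero", x): x := 0
                     env[instr[1]] = 0
                 elif instr[0] == "loop":                 # ("loop", x, body): repeat body x times
                     for _ in range(env.get(instr[1], 0)):
                         run(instr[2], env)
             return env

         # add: c := 0; repeat a times (c := c+1); repeat b times (c := c+1)
         add = [("zero", "c"), ("loop", "a", [("inc", "c")]), ("loop", "b", [("inc", "c")])]
         print(run(add, {"a": 3, "b": 4})["c"])           # 7

     Unbounded while-loops are exactly the feature such a language omits, and exactly the feature the accidentally-TC systems on this page keep smuggling back in.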

  2. Although linear NNs exploiting round-to-zero floating point mode in order to encode complex, potentially Turing-complete (for RNNs) behavior, which is invisible when run normally, would be both accidentally Turing-complete and a good example of langsec.

  3. Dwarf Fortress provides clockwork mechanisms, so TC is unsurprising; but the water is implemented as a simple cellular automaton, so there might be more ways of getting TC in DF! The DF wiki currently lists 4 potential ways of creating logic gates: the fluids, the clockwork mechanisms, mine-carts, and creature/animal logic gates involving doors+pressure-sensors.

  4. The full PDF specification is notoriously bloated. For example, in a PDF viewer which supports enough of the PDF spec, like the Google Chrome Browser, one can play Breakout (because PDF includes its own weird subset of JavaScript). The Adobe PDF viewer includes functionality as far afield as 3D CAD support.

  5. See Think Math’s domino logic gates & 2014 demonstration of a 4-bit adder implemented using domino logic.

  6. Eg a 2017 Intel Core i9-7900X CPU has ~24MB cache total, considerably larger than, say, a 1996 desktop Power Macintosh 5200 LC’s 16MB RAM (with 16KB CPU cache!).

  7. Or might be running apparently harmless FP operations which still encode a (deep) NN by exploiting FP settings enabling nonlinearities.

  8. Arrigo Triulzi in 2008 demoed an exploit system which combined a hack of the (computationally limited) NIC and GPU to run independently of the host OS/CPU: Project Maux Mk.II: I Own the NIC, Now I Want A Shell!/The Jedi Packet Trick takes over the Deathstar (or: taking NIC backdoors to the next level).

  9. Ken Shirriff amusingly notes in a Macbook charger analysis: One unexpected component is a tiny circuit board with a microcontroller, which can be seen above. This 16-bit processor constantly monitors the charger’s voltage and current. It enables the output when the charger is connected to a Macbook, disables the output when the charger is disconnected, and shuts the charger off if there is a problem. This processor is a Texas Instruments MSP430 microcontroller, roughly as powerful as the processor inside the original Macintosh. [The processor in the charger is a MSP430F2003 ultra low power microcontroller with 1kB of flash and just 128 bytes of RAM. It includes a high-precision 16-bit analog to digital converter. More information is here.]