Surprisingly Turing-Complete

A catalogue of software constructs, languages, or APIs which are unexpectedly Turing-complete; implications for security and reliability
topics: computer science, philosophy
created: 9 Dec 2012; modified: 15 June 2019; status: finished; confidence: highly likely; importance: 6


‘Computers’, in the sense of being Turing-complete, are extremely common. Almost any system of sufficient complexity—unless carefully engineered otherwise—may be found to ‘accidentally’ support Turing-complete computation somewhere inside it, even systems which would appear to have not the slightest thing to do with computation. Software systems are especially susceptible to this, which often leads to serious security problems, as the Turing-complete components can be used to run attacks on the rest of the system.

I provide a running catalogue of systems which have been, surprisingly, demonstrated to be Turing-complete.

Turing-completeness (TC) is (avoiding the rigorous formal definition) the property of a system being able to, under some simple representation of input & output, compute any program of interest, including another computer in some form.

TC, besides being foundational to computer science and to understanding many key issues like “why a perfect antivirus program is impossible”, is also weirdly common: one might think that such universality (a system being powerful enough to run any program) would be difficult to achieve, but it turns out to be the opposite: it is difficult to write a useful system which does not immediately tip over into TC. “Surprising” examples of this behavior remind us that TC lurks everywhere, and security is extremely difficult.
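
To make concrete just how low the bar for universality is, here is a minimal sketch (my illustration, not anything from the catalogue below) of a complete interpreter for Brainfuck, a deliberately minimalist esolang whose 8 one-character commands are nonetheless TC; any system whose primitives can be abused to emulate these few operations is already a computer.

```python
# A complete interpreter for Brainfuck, a Turing-complete language with only
# 8 single-character commands; all other characters in the program are ignored.
def brainfuck(program: str, input_bytes: bytes = b"") -> bytes:
    tape = [0] * 30000        # the "infinite" tape, bounded here for practicality
    ptr = pc = in_pos = 0     # data pointer, program counter, input position
    out = bytearray()
    jump, stack = {}, []      # precompute matching brackets for O(1) loop jumps
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == '>':   ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(tape[ptr])
        elif c == ',':
            tape[ptr] = input_bytes[in_pos] if in_pos < len(input_bytes) else 0
            in_pos += 1
        elif c == '[' and tape[ptr] == 0: pc = jump[pc]   # skip loop body
        elif c == ']' and tape[ptr] != 0: pc = jump[pc]   # repeat loop body
        pc += 1
    return bytes(out)

# the classic "Hello World!"; in principle, any computable function can be
# expressed this way
print(brainfuck("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.++"
                "+++++..+++.>>.<-.<.+++.------.--------.>>+.>++.").decode())
```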

I like demonstrations of TC lurking in surprising places because they are often a display of considerable ingenuity, and feel like they are making a profound philosophical point about the nature of computation: computation is not something esoteric which can exist only in programming languages or carefully set-up computers, but is something so universal to any reasonably complex system that TC will almost inevitably pop up unless actively prevented.

Accidentally Turing-complete

“Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.”

Philip Greenspun (Greenspun’s Tenth Rule)

They are probably best considered as a subset of “discovered” or “found” esoteric programming languages (esolangs). So FRACTRAN, as extraordinarily minimalist as it is, does not count; nor would a deliberately obfuscated language like Malbolge (where it took years to write a trivial program) count, because it was designed to be an esolang; but neither would Conway’s Game of Life count, because questions about whether it was TC appeared almost immediately upon publication, and being able to build a computer in it is not surprising. And given the complexity of packet-switching networks & routers, it’s not necessarily too surprising if one can build a cellular automaton into them, or if airplane ticket planning/validation is NP-hard or worse (because of the complex rules airlines require). Many configuration or special-purpose languages or tools or complicated games turn out to violate the rule of least power & be accidentally TC, like MediaWiki templates, or regexp find-and-replace in an editor (any form of templating or compile-time computation is highly likely to be TC, on its own or when iterated, since such systems often turn out to support a cellular automaton, a rewriting language, or a tag system), XSLT, Dwarf Fortress1, Starcraft, Minecraft, or DNA computing; but these are not surprising either: many games support scripting (ie TC-ness) to make their development easier and to enable fan modifications, so a game’s TC may be as simple as including syntax for calling out to a better-known language like Perl, or it may just be an obscure part of a standard format (most people these days are probably unaware that TrueType & many other fonts are programs based on stack machines, similar to PostScript, or that some formats go beyond static data in providing scripting capabilities and must be interpreted to be displayed; once one knows this, fonts being TC are no more surprising than TeX documents being TC, leading, of course, to many severe & fascinating font or media security vulnerabilities. Other formats, like PDF, are simply appalling.2). Similarly, such feats as creating a small Turing machine out of Legos or dominoes3 would not count, since we already know that mechanical computers work. On the other hand, “weird machines” are a fertile ground of “that’s TC?” reactions.

Surprisingly Turing-complete

Many cases of discovering TC seem to consist of simply noticing that a primitive in a system is a little too powerful/flexible. For example, if Boolean logic can be implemented, that’s a sign that more may be possible, and the Boolean gates may be composable into the full circuit logic of a computer. Substitutions, definitions/abbreviations, regular expressions (especially with any extensions or custom features), or any other kind of ‘search and replace’ functionality is another red flag, as they suggest that a cellular automaton or tag system is lurking. The same goes for anything which can change state based on its ‘neighbors’, like a spreadsheet cell or a pixel. Any sort of scripting interface or API, even if locked down, may not be locked down quite enough. An actual scripting language or VM is so blatant as to be boring when (not if) someone finds a vulnerability or escape from the sandbox. Operations which take variable lengths of time, or whose completion can’t easily be predicted from the start, are another source of primitives, as they may ‘depend’ on the data they are operating over in some way, implementing different operations on different data, which may mean that they can be made equivalent to Boolean conditionals given a careful encoding of the data.
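
For a concrete taste of how a ‘neighbors’ primitive tips over into universality, here is a minimal sketch of Rule 110, the elementary cellular automaton which Matthew Cook proved Turing-complete: each cell’s next state is a fixed function of only itself and its two immediate neighbors, exactly the kind of too-powerful local rule described above.

```python
# Rule 110: a 1-dimensional cellular automaton, proven Turing-complete, in
# which each cell updates based only on itself and its two neighbors.
RULE = 110  # the 8-bit truth table: bit i gives the next state for pattern i

def step(cells):
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 63 + [1]   # start from a single live cell
for _ in range(32):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)  # complex, computation-capable structures emerge
```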

What is “surprising” may differ from person to person. Here is a list of accidentally-Turing-complete systems that I found surprising:

Possibly accidentally or surprisingly Turing-complete systems:

  • CSS without the assumption of a driving mouse click (perhaps some sort of Wang tile approach, using selectors and conditionals?)
  • Unicode (!): it has been suggested that Unicode’s text-processing rules, such as the bidirectional algorithm (intended for displaying scripts like Arabic or Hebrew, which go right-to-left rather than left-to-right like English) and locale-dependent rules (eg Turkish), may be complex enough to support a tag system. Fonts themselves also support glyph-substitution rules which raise the same question.
  • Human visual illusions: one can present ambiguous images in a circuit-like format, whose perceived depth ‘flips’ based on the ‘input’ at the top of the circuit, analogous to OR/AND/NOT/XOR computations; the existence of these ‘visual circuits’ hints at the possibility of Turing-complete images (although many pieces, like a working memory, are missing)

Security implications

It turns out that given even a little control over input into something which transforms input to output, one can typically leverage that control into full-blown TC. This matters because, if one is clever, it provides an escape hatch from a system which is small, predictable, controllable, and secure, to one which could do anything. It’s hard enough to make a program do what it’s supposed to do without giving anyone in the world the ability to insert another program into your program, which can then interfere with or take over its host. Even if there is no way to outright ‘escape’ the sandbox, such hidden programs can be dangerous, by extracting information about the surrounding program (eg JS embedded in a web page which can extract your passwords by using RowHammer to attack your hardware directly, even if it can’t actually escape your web browser), or by taking the host into strange & uncharted (and untested) territories. That we find these demonstrations surprising is itself a demonstration of our lack of imagination and understanding of computers, computer security, and AI. We pretend that we are running programs on simple abstract machines which do what we intuitively expect, but they run on computers which are bizarre, and our programs themselves turn out to be computers which are even more bizarre. Secure systems have to be secure by construction; once the genie of TC has been let out of the lamp, it’s difficult to patch the lamp.

An active area of research is into languages & systems carefully designed and proven to not be TC (eg total functional programming). Why this effort to make a language in which many programs can’t be written? Because TC is intimately tied to the halting problem & Rice’s theorem, allowing TC means that one is forfeiting all sorts of provability properties: in a non-TC language, one may be able to easily prove all sorts of useful things to know; for example, that programs terminate, that they are type-safe or not, that they can be easily converted into a logical theorem, that they consume a bounded amount of resources, that one implementation of a protocol is correct or equivalent to another implementation, that a program lacks side-effects and can be transformed into a logically-equivalent but faster version (particularly important for declarative languages like SQL, where the query optimizer being able to transform queries is key to acceptable performance, but of course one can do a surprising amount in SQL anyway, by allowing either a cyclic tag system to be encoded, the use of recursive common table expressions, or calls out to another language), etc.
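
To make the tradeoff concrete, here is a sketch of a toy interpreter in the spirit of Meyer & Ritchie’s LOOP language, whose only iteration construct reads its bound once at loop entry: every such program provably terminates (LOOP programs compute exactly the primitive-recursive functions), which is what buys the provability properties above, at the price of making unbounded search inexpressible.

```python
# A toy total (non-TC) language in the spirit of Meyer & Ritchie's LOOP:
# the only loop runs a number of times fixed at loop entry, so *every*
# program terminates, and termination/resource bounds become provable.
def run(prog, env):
    for stmt in prog:
        if stmt[0] == "zero":           # ("zero", x):  x := 0
            env[stmt[1]] = 0
        elif stmt[0] == "inc":          # ("inc", x):   x := x + 1
            env[stmt[1]] = env.get(stmt[1], 0) + 1
        elif stmt[0] == "loop":         # ("loop", x, body): run body env[x]
            for _ in range(env.get(stmt[1], 0)):   # times; the bound is read
                run(stmt[2], env)                  # once, so no divergence
    return env

# multiplication as doubly-bounded iteration: z = a * b
env = run([("zero", "z"),
           ("loop", "a", [("loop", "b", [("inc", "z")])])],
          {"a": 6, "b": 7})
print(env["z"])  # 42: there is no `while`, so there is no way to loop forever
```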

Languages or systems which unintentionally cross over the line into being TC can be amusing or useful, but they also have some serious implications: such systems, because they were never expected to be programmable, can be harmful, or extremely insecure & a cracker’s delight, as exemplified by the language-theoretic security (‘langsec’) literature on exploiting ‘weird machines’.

Most recently, Spectre & its generalizations can be interpreted as providing a whole ‘shadow computer’ in the CPU, via speculative execution, which can be programmed to run computations invisibly while still having side-effects in the real computer. Spectre is interesting in being a class of vulnerabilities which had existed for decades in CPU architectures that were closely scrutinized for security problems, but just sort of fell into a collective human blind spot. Nobody thought of controllable speculative execution as being a ‘computer’ which could be ‘programmed’. Once someone did, because it was a powerful computer (and of course TC), it could be used in many ways to attack the rest of the system.

“Too powerful” languages can also manifest as nasty DoS attacks; one security review found that it could create denials-of-service in a venerable document-formatting tool (first version released 43 years prior) by abusing some of its string-substitution rules.
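
Nothing exotic is needed for such a DoS; here is a sketch (with hypothetical rules, not the ones actually found) of how a handful of innocuous-looking substitution rules compound into exponential blowup:

```python
# Each rule merely doubles its input, but iterating n doubling rules yields
# 2**n copies: the "billion laughs" pattern behind many macro/entity DoS bugs.
rules = {"a": "bb", "b": "cc", "c": "dd", "d": "ee"}   # hypothetical rule set

text = "a"
for _ in rules:                  # apply one round of substitution per rule
    text = "".join(rules.get(ch, ch) for ch in text)
print(len(text))                 # 16; with ~40 such rules, ~1 trillion chars
```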

On Seeing Through and Unseeing

“Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail. I replied: ‘What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.’”

Bruce Schneier, “The Security Mindset” (2008)

“The question is”, said Alice, “whether you can make words mean so many different things.”
“The question is”, said Humpty Dumpty, “which is to be master—that’s all.”

Lewis Carroll, Through the Looking-Glass (1872)5

To draw some parallels here and expand on the point, I think unexpectedly Turing-complete systems and weird machines have something in common with heist movies or cons or stage magic: they all share a specific paradigm we might call the security mindset or hacker mindset.

What they all have in common is that they show that the much-ballyhooed ‘hacker mindset’ is, fundamentally, a sort of reductionism run amok, where one ‘sees through’ abstractions to a manipulable reality.6 Like Neo in the Matrix—a deeply clichéd analogy for hacking, but clichéd because it resonates—one achieves enlightenment by seeing through the surface illusions of objects to the endless lines of green code which make up the Matrix.

In each case, the fundamental principle is that the hacker asks: “here I have a system W, which pretends to be made out of a few X; however, it is really made out of many Y, which form an entirely different system, Z; I will now proceed to ignore the X and understand how Z works, so I may use the Y to thereby change W however I like”.

Abstractions are vital, but also leaky. (“You’re very clever, young man, but it’s reductionism all the way down!”) This is in some sense the opposite of a mathematician: a mathematician tries to ‘see through’ a complex system’s accidental complexity up to a simpler, more-abstract, more-true version which can be understood & manipulated—but for the hacker, all complexity is essential, and they are instead trying to unsee the simple abstract system down to the more-complex, less-abstract (but also more true) version. (A mathematician might try to transform a program up into successively more abstract representations to eventually show it is trivially correct; a hacker would prefer to compile a program down into its most concrete representation to find an exploit trivially proving it incorrect.) Ordinary users ask only that all their everyday examples of Ys transform into Zs correctly; they forget to ask whether all and only correct examples of Ys transform into correct Zs, and whether only correct Zs can be constructed to become Ys.

It’s all ‘atoms and void’7:

  • In hacking, a computer pretends to be made out of things like ‘buffers’ and ‘lists’ and ‘objects’ with rich meaningful semantics, but really, it’s just made out of bits which mean nothing and only accidentally can be interpreted as things like ‘web browsers’ or ‘passwords’, and if you move some bits around and rewrite these other bits in a particular order and read one string of bits in a different way, now you have bypassed the password.

  • In speed running (particularly TASes), a video game pretends to be made out of things like ‘walls’ and ‘speed limits’ and ‘levels which must be completed in a particular order’, but it’s really again just made out of bits and memory locations, and messing with them in particular ways, such as deliberately overloading the RAM to cause errors, can give you infinite ‘velocity’ or shift you through supposedly solid obstacles, allowing enormous movements across the supposed map and shortcuts to the ‘end’8 of the game.

  • In robbing a hotel room, people see ‘doors’ and ‘locks’ and ‘walls’, but really, they are just made out of atoms arranged in a particular order, and you can move some atoms around more easily than others; instead of going through a ‘door’, you can just go through the ‘wall’ (or ceiling) and obtain access to a space. At Los Alamos, Richard Feynman, among other tactics, exploited the physical construction of filing cabinets and ignored the locks entirely.

    • One analysis of the movie Die Hard highlights how it & the Israeli military’s ‘walking through walls’ tactics in urban combat treat buildings as kinds of machines, which can be manipulated in weird ways to move around and attack one’s enemies.

    • That example reminds me of the classic anatomy of the locked-room mystery, laying out a taxonomy of all the possible solutions which—like a magician’s trick—violate one’s assumptions about the locked room: whether it was always locked, locked at the right time, the murder done while in the room, murder rather than suicide, the room having a ceiling, etc. (These tricks inspired later authors’ locked-room mysteries, although in practice a lot of the solutions turn out to involve much more mundane expedients.)

    • In lockpicking, copying a key or reverse-engineering its cuts are some of the most difficult ways to pick a lock. One can instead simply use a bump key to brute-force the positions of the pins in a lock, or kick the door in, or drill the lock. (If you are concerned about detection, replace the lock with a new one afterwards!)

      Locks & safes have many other interesting vulnerabilities; I particularly like Matt Blaze’s master-key vulnerability, which uses the fact that a master-keyed lock actually opens for any combination of master+ordinary key cuts (ie ‘master OR ordinary’ rather than ‘master XOR ordinary’), and so the master key is like a password which one can guess one letter at a time (see the sketch after this list).

  • In stage magic (especially close-up/card/coin magic or pickpocketing), one believes one is continuously seeing single whole objects which must move from one place to another continuously; in reality, one is only seeing, occasionally, surfaces of many (possibly duplicate) objects, which may be moving only when you are not looking, in the opposite direction, or not moving at all. By hacking perception and limited attention, the stage magician shows the ‘impossible’.

  • In weird machines, you have a ‘protocol’ like SSL or x86 machine code which appears to do simple things like ‘check a cryptographic signature’ or ‘add one number in a register to another register’, but in reality, it’s a layer over far more complex machinery: processor states & optimizations like speculative execution reading other parts of memory and then quickly erasing them, and these can be pasted together to execute operations and reveal secrets without ever running ‘code’ (see again Mcilroy et al 2019).

    Similarly, in finding hidden examples of Turing completeness, one says, ‘this system appears to be a bunch of dominoes or whatever, but actually, each one is a computational element which has unusual inputs/outputs; I will now proceed to wire a large number of them together to form a Turing machine so I can play Tetris in Conway’s Game of Life or use heart muscle cells to implement Boolean logic or run arbitrary computations in a game of Magic: The Gathering’.

    Or in side channels, you go below bits and say, ‘these bits are only approximations to the actual flow of electricity and heat in a system; I will now proceed to measure the physical system’ etc.

  • In social engineering/pen testing, people see social norms and imaginary things like ‘permission’ and ‘authority’ and ‘managers’ which ‘forbid access to facilities’, but in reality, all there is, is a piece of laminated plastic or a clipboard or certain magic words spoken; the people are merely non-computerized ways of implementing rules like ‘if laminated plastic, allow in’, and if you clip a blue piece of plastic to your shirt and incant certain words at certain times, you can walk right past the guards.

  • And while we’re at it, why are puns so compelling? (Consider how omnipresent they are in hacker culture.) Computers are nothing but puns on bits, and languages are nothing but puns on letters. Puns force one to drop from the abstract semantic level to the raw syntactic level of sub-words or characters, and back up again, to achieve some semantic twist.
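
As promised above, here is a toy model (mine, simplified from Blaze’s paper) of the master-key attack: because each pin accepts either of two cuts, a borrowed ordinary key plus a few filed test keys recover the master key one pin at a time, turning an exponential search into a linear one.

```python
# A toy model of master-key "rights amplification": each pin opens for EITHER
# the ordinary cut OR the master cut, so the master key can be guessed one pin
# at a time, instead of searching the whole keyspace at once.
import random

PINS, DEPTHS = 5, 10
master   = [random.randrange(DEPTHS) for _ in range(PINS)]
ordinary = [random.randrange(DEPTHS) for _ in range(PINS)]

def opens(key):                      # the lock itself: per-pin OR of the two cuts
    return all(k == m or k == o for k, m, o in zip(key, master, ordinary))

# the attacker holds a legitimate ordinary key and files one-pin test keys:
recovered, tries = [], 0
for i in range(PINS):
    for depth in range(DEPTHS):
        if depth == ordinary[i]:     # skip the cut we already know works
            continue
        test = ordinary[:i] + [depth] + ordinary[i + 1:]
        tries += 1
        if opens(test):              # pin i also accepts `depth`: a master cut
            recovered.append(depth)
            break
    else:
        recovered.append(ordinary[i])  # master cut equals the ordinary cut here
print(tries, recovered == master)      # <=45 tries, versus 10**5 brute force
```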

And so on. These sorts of things can seem magical (‘how‽’), shocking (‘but—but—that’s cheating!’), or hilarious (in the ‘violation of expectations followed by understanding’ theory of humor), because the abstract system W & our verbalizations are so familiar and useful that we quickly get trapped in our dreams of abstractions, and forget that the map is not the territory: inevitably, the map has made gross simplifications, and it fails to document various paths from one point to another which we would prefer not to exist.

Perversely, the more educated you are, and the more of the map you know, the worse this effect can be, because you have more to unsee. One must always maintain a certain contempt for one’s maps & abstractions.

This is why atheoretical optimization processes, like animals or evolution or neural networks, are so dumb to begin with, but in the long run can be so good at surprising us and finding ‘unreasonable’ inputs or solutions: being unable to understand the map, they can’t benefit from it like we do, but they also can’t overvalue it, and, forced to explore the territory directly to get what they want, they discover new things.

To escape our semantic illusions can require a determined effort to unsee them, and the use of techniques to defamiliarize things. For example, you can’t find typos in your own writing without a great deal of effort, because you know what it’s supposed to say; so copyediting advice runs like ‘read it out loud’ or ‘print it out and read it’ or ‘wait a week’ or even ‘read it upside down’ (easier than it sounds). That’s the sort of thing it takes to force you to read what you actually wrote, and not what you thought you wrote. Similar tricks are used in learning to draw: a face is too familiar, so instead you can flip it in a mirror and try to copy it.

See also

Appendix

How many computers are in your computer?

Why are there so many places for backdoors and weird machines in your “computer”? Because your computer is in fact scores or hundreds, perhaps even thousands, of computer chips, many of which are explicitly or implicitly capable of Turing-complete computations (many more powerful than desktops of bygone eras), working together to create the illusion of a single computer. Backdoors, bugs, weird machines, and security do not care about what you think—only where resources can be found and orchestrated into a computation.

A curious fallacy circulating in discussion of AI or AI risk or especially the possibility of a “hard takeoff” is a pseudo-debate about whether an AI will be “one” AI or “a community/ecosystem” of AIs, with some concluding that risk is minimal or hard takeoffs impossible because AIs will inevitably be implemented as a ‘community’ or ‘network’—as if this made any difference or reflected any kind of principled distinction.

What is important are the inputs and outputs: how capable is the system as a whole, and what resources does it require? No one cares if Google is implemented using 50 supercomputers, 50,000 mainframes, 5 million servers, or 50 million embedded/mobile processors, or by exploiting a wide variety of chips from custom “tensor processing units” to custom on-die silicon (implemented by Intel on Xeon chips for a number of its biggest customers) to FPGAs to GPUs to CPUs to still more exotic hardware like prototype quantum computers—as long as it is competitive with other tech corporations and can deliver its services at a reasonable cost. (A “supercomputer” these days mostly looks like a large number of rack-mounted servers with unusual numbers of GPUs, connected by unusually high-speed connections, and is not that different from a datacenter.) Any of these pieces of hardware could support multiple weird machines on many different levels of computation, depending on their internal dynamics & connectivity. Similarly, it is foolish to insist on defining whether an AI is ‘one’ AI or ‘many’: any AI system might be implemented as a single giant neural network, or as a sharded NN running asynchronously, or as a heterogeneous set of micro-services, or as a “society of mind” etc—but it doesn’t especially matter, from a complexity or risk perspective, how exactly it’s organized internally, as long as the totality works; the question for AI risk is how dangerous the totality is. (Is AlphaZero one AI, or many instances of an AI repeatedly manifested inside an additional tree-search AI? Is a self-play agent 2 AIs playing together, or 1 AI happening to run as 2 copies? Is a team of game-playing agents 5 different AIs each controlling a player on its team, or, because of shared embedding layers, 1 AI controlling the whole team?) The system can be seen on many levels, each equally invalid but useful for different purposes. Are you a ‘single biological intelligence’ or a community/ecosystem of human cells/neurons/bacteria/yeast/viruses/parasites? And does it matter in the slightest bit to anyone you might wrong?

Here is an example of the ill-defined nature of the question: on your desk or in your pocket, how many computers do you currently have? How many computers are in your “computer”? Did you think just one? Let’s take a closer look—it’s computers all the way down.

You might think you have just the one large CPU occupying pride of place on your motherboard, and perhaps the GPU too, but the computational power available goes far beyond the CPU/GPU, for a variety of reasons: transistors and processor cores are so cheap now that it often makes sense to use a separate core for realtime or higher performance, for security guarantees, to avoid having to burden the main OS with a task, for compatibility with an older architecture or existing software package, because a DSP or small core can be programmed faster than a more specialized ASIC can be created, or because it was the quickest possible solution to slap down a small CPU and they couldn’t be bothered to shave some pennies9. Further, many of these components can be used as computational elements even if they were not intended to be, or they hide that functionality. (For example, I believe I’ve read that the Commodore 64’s floppy disk drive’s CPU, running Commodore DOS, was used as a source of spare compute power & for defeating copy-protection schemes.)

Thus:

  • A common AMD/Intel CPU has billions of transistors, devoted to a large number of tasks:

    • Each of the 2-8 main CPU cores can run independently, switching on or off as necessary, and has its own private L1-L3 caches (often bigger than desktop computers’ entire RAM a few decades ago10, and likely physically bigger than their CPUs were, to boot), and so must be regarded as an individual.
    • The CPU as a whole is reprogrammable through microcode, such as to work around errors in the chip design, and sports increasingly opaque features like the Intel Management Engine (which contains its own JVM) or AMD’s Platform Security Processor (PSP); these hardware modules typically are full computers in their own right, running independently of the host and able to tamper with it.
      • any floating point unit may be Turing-complete through clever encoding of computation into floating-point operations11 (for a taste of this, see the Boolean-logic sketch after this list)
  • the MMU can be programmed into a page-fault weird machine driven by a CPU stub, as previously mentioned

  • specialized units & custom silicon: ASICs for video formats like h.264 probably are not Turing-complete (despite their support for complicated deltas and compression techniques which might allow something like Wang tiles), but, for example, Apple’s mobile system-on-a-chip goes far beyond simply a dual-core ARM CPU and GPU: like Intel/AMD desktop CPUs, it includes a secure enclave (a separate dedicated core), but it also includes an image co-processor, a motion/voice-recognition coprocessor (partially to support Siri), and apparently a few other cores. These ASICs are sometimes there to support AI tasks, and presumably specialize in matrix multiplications for neural networks; as recurrent neural networks are Turing-complete… Other companies have rushed to expand their system-on-chips as well, like Motorola or Qualcomm.

  • motherboard BIOS and/or management chips with network access

    • one firmware researcher notes that

      It’s amazing how many heterogeneous CPU cores were integrated in Intel Silvermont’s Moorefield SoC (ANN): x86, ARC, LMT, 8051, Audio DSP, each running own firmware and supporting JTAG interface

    These management or debugging chips may be ‘accidentally’ left enabled on shipping devices, embedded ARM CPUs and all.

  • GPUs have several hundred or thousand simple cores, each of which can run neural networks well (which are highly expressive or Turing-complete), or do general-purpose computation (albeit slower than the CPU)12

  • the controllers for tape drives, hard drives, flash drives, or SSD drives typically all have ARM processors to run the on-disk firmware for tasks like hiding bad sectors from the operating system; these controllers can be hacked as well. (Given that ARM CPUs are used in most of these embedded applications, it’s no surprise ARM likes to boast about how many ARM cores ship in a typical device.)

  • network chips do independent processing to offload work from the CPU. (This sort of independence is why features like Wake-on-LAN for remotely powering up a machine work.)

  • smartphones: in addition to all the other units mentioned, there is an independent baseband processor running a proprietary realtime OS for handling radio communications with the cellular towers/GPS/other things, or possibly more than one, virtualized under a hypervisor. Baseband processors have been found with backdoors, in addition to all their vulnerabilities.

  • SIM cards for smartphones are much more than simple memory cards recording your subscription information, as they are smartcards which can independently run applications (apparently NFC chips may be like this as well), somewhat like the JVM in the IME. Naturally, SIM cards can be hacked too and used for surveillance etc.

  • USB or Thunderbolt cables or devices, or motherboard-attached devices: an embedded processor on the device is needed for negotiation of data/power protocols, at the least for cables/batteries/chargers13, and devices like WiFi adapters or keyboards or mice may be even more heavy-duty, with multiple additional specialized processors of their own. In theory, most of these are separate and are at least prevented from directly subverting the host via DMA by in-between IOMMU units; in practice, such protections are often absent or imperfect

  • monitor-embedded CPUs (part of a long lineage going back to smart teletypes)

  • random weird chips, like the Macbook Touch Bar running its own ARM chip with a watchOS-derived OS
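
As a small illustration of the floating-point item above (my sketch, not a published exploit): once bits are encoded as the floats 0.0/1.0, ordinary FP arithmetic already yields a complete set of Boolean gates, and gates compose into arbitrary circuits; the FPU is then ‘just doing math’ while computing whatever the circuit computes.

```python
# Boolean logic from nothing but floating-point arithmetic: encode bits as
# 0.0/1.0 and build gates out of multiplies and subtracts.
AND = lambda x, y: x * y
NOT = lambda x: 1.0 - x
OR  = lambda x, y: x + y - x * y
XOR = lambda x, y: x + y - 2.0 * x * y

# a 1-bit full adder, computed entirely by "harmless" FP operations;
# chain 64 of these and the FPU is doing 64-bit arithmetic as a circuit
def full_adder(a, b, carry_in):
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

print(full_adder(1.0, 1.0, 0.0))  # (0.0, 1.0): 1+1 = binary 10
```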

So a desktop or smartphone can reasonably be expected to have anywhere from 15 to several thousand “computers” in the sense of a Turing-complete device which can be programmed in a usefully general fashion with little or no code running on the ‘official’ computer, which is computationally powerful enough to run many programs from throughout computing history and which can be exploited by an adversary for surveillance, exfiltration, or attacks against the rest of the system.

None of this is unusual historically, as even the earliest mainframes tended to be multiple computers, with the main computer doing batch processing while additional smaller computers took care of high-speed I/O operations that would otherwise choke the main computers with interrupts.

In practice, aside from the computer security community (as all these computers are insecure and thus useful hidey-holes for the NSA & VXers), users don’t care that our computers, under the hood, are insanely complex and more accurately seen as a motley menagerie of hundreds of computers awkwardly yoked together (was it “the network is the computer” or “the computer is the network”…?); as long as it all works correctly, they perceive & use it as a single powerful computer.


  1. Dwarf Fortress provides clockwork mechanisms, so TC is unsurprising; but the water is implemented as a simple cellular automaton, so there might be more ways of getting TC in DF! The DF wiki currently lists 4 ways of creating logic gates: fluids, clockwork mechanisms, mine-carts, and creature/animal logic gates involving doors+pressure-sensors.↩︎

  2. The full PDF specification is notoriously bloated. For example, in a PDF viewer which supports enough of the PDF spec, like the Google Chrome browser, one can play games like Breakout (because PDF includes its own weird subset of JavaScript). The Adobe PDF viewer includes functionality as far afield as 3D CAD support.↩︎

  3. See Think Math’s demonstrations of domino logic gates & domino computers.↩︎

  4. Earlier versions required players to choose to take certain actions; the 2019 paper claims to remove this assumption by forcing all actions, rendering the construction fully mechanical.↩︎

  5. “48. The best book on programming for the layman is Alice in Wonderland; but that’s because it’s the best book on anything for the layman.”, Perlis 1982.↩︎

  6. ‘Thinking outside the box’ can be this, but often isn’t. This is a specific pattern of reductionism, and many instances of ‘thinking outside the box’ are other patterns, like putting on another layer, or eliminating the systems in question entirely.↩︎

  7. Democritus: “By convention sweet is sweet, bitter is bitter, hot is hot, cold is cold, color is color; but in truth there are only atoms and the void.”↩︎

  8. A fictional example from Ender’s Game is worth noting: if victory in Battle School is defined by 4 soldiers at the corners of the enemy gate & someone passing through, then why not—in true hacker fashion—skip fighting entirely & go straight for the gate?↩︎

  9. An example of this kind of mentality is noted by a firmware developer:

    Reminds me of when I was doing firmware development and the ASIC team would ask if they could toss in an extra Cortex-M3 core to solve specific control problems. Those cores would be used as programmable state machines. For the ASIC team, tossing in an extra core was free compared to custom logic design. However, for the firmware team it would be another job to write and test that firmware. We had designs with upwards of 10 Cortex-M3 cores. I’ve heard from a friend that another employer had something like 32 such cores, and it was a pain to debug.

    ↩︎
  10. Eg a 2017 high-end desktop CPU has ~24MB cache total, considerably larger than, say, a 1996 desktop’s 16MB RAM (with 16KB CPU cache!).↩︎

  11. Or one might be running apparently harmless FP operations which still encode a (deep) NN. See also Elsayed et al 2018.↩︎

  12. Arrigo Triulzi in 2008 demoed an exploit system which combined a hack of the (computationally limited) NIC and GPU to run independently of the host OS/CPU.↩︎

  13. Ken Shirriff amusingly notes in a Macbook charger analysis: “One unexpected component is a tiny circuit board with a microcontroller, which can be seen above. This 16-bit processor constantly monitors the charger’s voltage and current. It enables the output when the charger is connected to a Macbook, disables the output when the charger is disconnected, and shuts the charger off if there is a problem. This processor is a Texas Instruments MSP430 microcontroller, roughly as powerful as the processor inside the original Macintosh. [The processor in the charger is a MSP430F2003 ultra low power microcontroller with 1kB of flash and just 128 bytes of RAM. It includes a high-precision 16-bit analog to digital converter. More information is here.]”↩︎