Computers (Link Bibliography)

“Computers” links:

  1. Turing-complete#security-implications

  2. https://fgiesen.wordpress.com/2014/03/23/networks-all-the-way-down/

  3. https://herbsutter.com/welcome-to-the-jungle/

  4. 2021-ranganathan.pdf#google: ⁠, Parthasarathy Ranganathan, Daniel Stodolsky, Jeff Calow, Jeremy Dorfman, Marisabel Guevara, Clinton Wills Smullen IV, Aki Kuusela, Raghu Balasubramanian, Sandeep Bhatia, Prakash Chauhan, Anna Cheung, In Suk Chong, Niranjani Dasharathi, Jia Feng, Brian Fosco, Samuel Foss, Ben Gelb, Sara J. Gwin, Yoshiaki Hase, Da-ke He, C. Richard Ho, Roy W. Huffman Jr., Elisha Indupalli, Indira Jayaram, Poonacha Kongetira, Cho Mon Kyaw, Aaron Laursen, Yuan Li, Fong Lou, Kyle A. Lucke, JP Maaninen, Ramon Macias, Maire Mahony, David Alexander Munday, Srikanth Muroor, Narayana Penukonda, Eric Perkins-Argueta, Devin Persaud, Alex Ramirez, Ville-Mikko Rautio, Yolanda Ripley, Amir Salek, Sathish Sekar, Sergey N. Sokolov, Rob Springer, Don Stark, Mercedes Tan, Mark S. Wachsler, Andrew C. Walton, David A. Wickeraad, Alvin Wijaya, Hon Kwan Wu (2021-02-27; cs):

    Video sharing (eg. YouTube, Vimeo, Facebook, TikTok) accounts for the majority of internet traffic, and video processing is also foundational to several other key workloads (video conferencing, virtual/augmented reality, cloud gaming, video in Internet-of-Things devices, etc.). The importance of these workloads motivates larger video processing infrastructures and—with the slowing of Moore’s law—specialized hardware accelerators to deliver more computing at higher efficiencies.

    This paper describes the design and deployment, at scale, of a new accelerator targeted at warehouse-scale video transcoding. We present our hardware design, including a new accelerator building block—the video coding unit (VCU)—and discuss key design trade-offs for balanced systems at data center scale and for co-designing accelerators with large-scale distributed software systems. We evaluate these accelerators “in the wild” serving live data center jobs, demonstrating 20–33× improved efficiency over our well-tuned non-accelerated baseline. Our design also enables effective adaptation to changing bottlenecks, improved failure management, and new workload capabilities not otherwise possible with prior systems.

    To the best of our knowledge, this is the first work to discuss video acceleration at scale in large warehouse-scale environments.

    [Keywords: video transcoding, warehouse-scale computing, domain-specific accelerators, hardware-software co-design]

    Table 1: Offline 2-pass single output (SOT) throughput in VCU vs. CPU and GPU systems. · Encoding Throughput: Table 1 shows throughput and perf/TCO (performance per total cost of ownership) for the 4 systems and is normalized to the perf/TCO of the CPU system. The performance is shown for offline 2-pass SOT encoding for H.264 and VP9. For H.264, the GPU has 3.5× higher throughput, and the 8×VCU and 20×VCU provide 8.4× and 20.9× more throughput, respectively. For VP9, the 20×VCU system has 99.4× the throughput of the CPU baseline. The 2 orders of magnitude increase in performance clearly demonstrates the benefits of our VCU system.

    The VCU package is a full-length PCI-E card and looks a lot like a graphics card. A board has 2 Argos ASIC chips buried under a gigantic, passively cooled aluminum heat sink. There’s even what looks like an 8-pin power connector on the end because PCI-E just isn’t enough power.

    Google provided a lovely chip diagram that lists 10 “encoder cores” on each chip, with Google’s white paper adding that “all other elements are off-the-shelf IP blocks.” Google says that “each encoder core can encode 2160p in realtime, up to 60 FPS (frames per second) using 3 reference frames.”

    The cards are specifically designed to slot into Google’s warehouse-scale computing system. Each compute cluster in YouTube’s system will house a section of dedicated “VCU machines” loaded with the new cards, saving Google from having to crack open every server and load it with a new card. Google says the cards resemble GPUs because they are what fit in its existing accelerator trays. CNET reports that “thousands of the chips are running in Google data centers right now”, and thanks to the cards, individual video workloads like 4K video “can be available to watch in hours instead of the days it previously took.”

    Factoring in the research and development on the chips, Google says this VCU plan will save the company a ton of money, as shown in the below benchmark comparing the TCO (total cost of ownership) of the setup against running its algorithm on Intel Skylake chips and Nvidia T4 Tensor core GPUs.

  5. https://xkcd.com/2166/

  6. https://news.ycombinator.com/item?id=19348141

  7. ⁠, Halvar Flake (2018-05-29):

    CyCon Tallinn 2018, Keynote: Security, Moore’s law, and the anomaly of cheap complexity

    I was invited to keynote CyCon, and my talk was scheduled right before Bruce Schneier’s talk. I tried hard to make a talk that would be accessible to people with a non-technical, non-engineering background, but which nonetheless summarized the important things I had learnt about security. The core points are:

    1. CPUs are much more complex than they were 20 years ago; the feeling of being overwhelmed by complexity is not an illusion.
    2. We are sprinkling chips into objects like we are putting salt on food.
    3. We do this because complexity is cheaper than simplicity. We often use a cheap but complex computer to simulate a much simpler device for cost and convenience.
    4. The inherent complexity/power of the underlying computer has a tendency to break to the surface as soon as something goes wrong.
    5. Discrete Dynamical Systems and computers share many properties, and tiny changes have a tendency to cause large changes quickly.
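
    The last point is easy to see numerically; a minimal illustration (not from the talk) uses the logistic map, a textbook discrete dynamical system, where a perturbation in the 15th decimal place of the starting point yields a completely different trajectory within a few dozen iterations:

    ```python
    # Illustration of point 5: in the logistic map (a standard discrete dynamical
    # system), two starting points that differ only in the 15th decimal place end
    # up in completely different places within ~60 iterations, the same kind of
    # sensitivity that lets a one-bit change send a computation down a different path.
    def logistic(x, r=4.0, steps=60):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    print(logistic(0.123456789012345))   # the two outputs bear no resemblance
    print(logistic(0.123456789012346))   # to each other
    ```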

    This may be the most polished talk I have ever given—I did multiple dry-runs with different audiences, and bothered everybody and his dog with the slides.

    I am particularly proud that Bruce Schneier seemed to have liked it⁠; this is a big thing for me because reading “Applied Cryptography” and “A self-study course in block-cipher cryptanalysis” had a pretty substantial impact on my life.

  8. ⁠, Norman Hardy (2002):

    [2002?] Short technology essay based on Myer & Sutherland 1968 (!), discussing a perennial pattern in computing history dubbed the ‘Wheel of Reincarnation’ for how old approaches inevitably reincarnate as the exciting new thing: shifts between ‘local’ and ‘remote’ computing resources, exemplified by repeated cycles in graphical display technology, from dumb ‘terminals’ which display only raw pixels to smart devices which interpret more complicated inputs like text or vectors or programming languages (eg ). These cycles are driven by cost, latency, architectural simplicity, and available computing power.

    The Wheel of Reincarnation paradigm has played out for computers as well, in shifts from local terminals attached to mainframes, to PCs, to smartphones, to ‘cloud computing’.

  9. https://en.wikichip.org/wiki/intel/core_i9/i9-7900x

  10. http://www.slideshare.net/codeblue_jp/igor-skochinsky-enpub

  11. https://link.springer.com/book/10.1007/978-1-4302-6572-6

  12. https://googleprojectzero.blogspot.com/2017/07/trust-issues-exploiting-trustzone-tees.html

  13. https://cloud.google.com/blog/products/gcp/titan-in-depth-security-in-plaintext

  14. https://www.tomshardware.com/news/google-removing-minix-management-engine-intel,35876.html

  15. https://openai.com/blog/nonlinear-computation-in-linear-networks/

  16. ⁠, Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Sohl-Dickstein (2018-06-28):

    Deep neural networks are susceptible to adversarial attacks. In computer vision, well-crafted perturbations to images can cause neural networks to make mistakes such as confusing a cat with a computer. Previous adversarial attacks have been designed to degrade performance of models or cause machine learning models to produce specific outputs chosen ahead of time by the attacker. We introduce attacks that instead reprogram the target model to perform a task chosen by the attacker—without the attacker needing to specify or compute the desired output for each test-time input. This attack finds a single adversarial perturbation that can be added to all test-time inputs to a machine learning model in order to cause the model to perform a task chosen by the adversary—even if the model was not trained to do this task. These perturbations can thus be considered a program for the new task. We demonstrate adversarial reprogramming on six classification models, repurposing these models to perform a counting task, as well as classification tasks: classification of MNIST and CIFAR-10 examples presented as inputs to the ImageNet model.
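
    A minimal PyTorch sketch of the idea (not the authors’ code; the model choice, digit placement, and label mapping are illustrative): a single learnable perturbation is wrapped around every MNIST digit so that a frozen ImageNet classifier’s first 10 output classes get repurposed as digit labels.

    ```python
    # Sketch of adversarial reprogramming: learn one shared perturbation (the
    # "adversarial program") added around every input so that a frozen ImageNet
    # classifier performs MNIST classification. Model, digit placement, and the
    # label mapping (first 10 ImageNet classes = digits 0-9) are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models, datasets, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"
    net = models.resnet50(pretrained=True).to(device).eval()
    for p in net.parameters():
        p.requires_grad_(False)                       # the target model stays frozen

    H = 224                                           # ImageNet input size
    program = nn.Parameter(torch.zeros(1, 3, H, H, device=device))   # single shared perturbation
    mask = torch.ones(1, 3, H, H, device=device)
    mask[:, :, 98:126, 98:126] = 0                    # hole where the 28x28 MNIST digit is placed

    def embed(x):
        # place the digit in the center of an ImageNet-sized frame, add the program around it
        frame = torch.zeros(x.size(0), 3, H, H, device=device)
        frame[:, :, 98:126, 98:126] = x.repeat(1, 3, 1, 1)
        return torch.clamp(frame + mask * torch.tanh(program), 0, 1)

    opt = torch.optim.Adam([program], lr=0.05)
    loader = torch.utils.data.DataLoader(
        datasets.MNIST(".", train=True, download=True, transform=transforms.ToTensor()),
        batch_size=64, shuffle=True)

    for x, y in loader:                               # one epoch; the paper trains much longer
        x, y = x.to(device), y.to(device)
        logits = net(embed(x))[:, :10]                # reuse the first 10 ImageNet outputs as digit labels
        loss = F.cross_entropy(logits, y)
        opt.zero_grad(); loss.backward(); opt.step()
    ```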

  17. ⁠, Paarth Neekhara, Shehzeen Hussain, Shlomo Dubnov, Farinaz Koushanfar (2018-09-06):

    Adversarial Reprogramming has demonstrated success in utilizing pre-trained neural network classifiers for alternative classification tasks without modification to the original network. An adversary in such an attack scenario trains an additive contribution to the inputs to repurpose the neural network for the new classification task. While this reprogramming approach works for neural networks with a continuous input space such as that of images, it is not directly applicable to neural networks trained for tasks such as text classification, where the input space is discrete. Repurposing such classification networks would require the attacker to learn an adversarial program that maps inputs from one discrete space to the other. In this work, we introduce a context-based vocabulary remapping model to reprogram neural networks trained on a specific sequence classification task, for a new sequence classification task desired by the adversary. We propose training procedures for this adversarial program in both white-box and black-box settings. We demonstrate the application of our model by adversarially repurposing various text-classification models including LSTM, bi-directional LSTM and CNN for alternate classification tasks.
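
    A simplified sketch of the corresponding attack on a discrete input space (the paper learns a context-based remapping against real LSTM/CNN classifiers; this context-free version with a Gumbel-softmax relaxation, a stand-in victim classifier, and made-up sizes only shows the shape of the idea):

    ```python
    # Simplified sketch of an adversarial program for a text classifier: a learnable
    # remapping from the adversary's vocabulary (V_src) into the victim model's
    # vocabulary (V_tgt), relaxed with Gumbel-softmax so it can be trained by
    # gradient descent. The victim below is a stand-in, not a real text model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    V_src, V_tgt, n_victim_classes, n_new_classes = 1000, 5000, 20, 4

    victim = nn.Sequential(nn.Linear(V_tgt, 64), nn.ReLU(), nn.Linear(64, n_victim_classes))
    for p in victim.parameters():
        p.requires_grad_(False)                        # frozen victim classifier over one-hot tokens

    theta = nn.Parameter(torch.zeros(V_src, V_tgt))    # the adversarial program: remapping logits

    def reprogram(src_tokens, tau=1.0):
        # src_tokens: (batch, seq) token ids from the adversary's task
        onehot_tgt = F.gumbel_softmax(theta[src_tokens], tau=tau, hard=True)   # (B, T, V_tgt)
        logits = victim(onehot_tgt).mean(dim=1)        # run the frozen victim, pool over the sequence
        return logits[:, :n_new_classes]               # reuse a subset of its output classes

    opt = torch.optim.Adam([theta], lr=0.1)
    x = torch.randint(0, V_src, (8, 16))               # a batch from the adversary's task
    y = torch.randint(0, n_new_classes, (8,))
    loss = F.cross_entropy(reprogram(x), y)            # in practice, trained over many batches
    opt.zero_grad(); loss.backward(); opt.step()
    ```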

  18. GPT-3#prompt-programming

  19. ⁠, Anirudh Goyal, Yoshua Bengio (2020-11-30):

    A fascinating hypothesis is that human and animal intelligence could be explained by a few principles (rather than an encyclopedic list of heuristics). If that hypothesis was correct, we could more easily both understand our own intelligence and build intelligent machines. Just like in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human-like intelligence. This hypothesis would suggest that studying the kind of inductive biases that humans and animals exploit could help both clarify these principles and provide inspiration for AI research and neuroscience theories. Deep learning already exploits several key inductive biases, and this work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing. The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans’ abilities in terms of flexible out-of-distribution and systematic generalization, which is currently an area where a large gap exists between state-of-the-art machine learning and human intelligence.

  20. ⁠, Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch (2021-03-09):

    We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning—in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language can improve performance and compute efficiency on non-language downstream tasks. Additionally, we perform an analysis of the architecture, comparing the performance of a randomly initialized transformer to a random LSTM. Combining the two insights, we find language-pretrained transformers can obtain strong performance on a variety of non-language tasks.
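
    A minimal sketch of the recipe using plain torch.nn modules standing in for GPT-2 (dimensions, the toy task, and the pooling are assumptions): the pretrained self-attention and feedforward weights are frozen, while the layer norms plus new task-specific input/output layers remain trainable.

    ```python
    # Sketch of the Frozen Pretrained Transformer recipe with plain torch.nn modules
    # standing in for GPT-2: the self-attention and feedforward weights stay frozen
    # (in the paper they carry the language-pretrained values); only the layer norms
    # plus new input/output layers are trained. Task, sizes, and pooling are assumed.
    import torch
    import torch.nn as nn

    d_model, n_layers, n_cls = 768, 12, 10
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=12, dim_feedforward=3072, batch_first=True),
        num_layers=n_layers)
    # ... pretrained weights would be loaded into `encoder` here ...

    for name, p in encoder.named_parameters():
        p.requires_grad_("norm" in name)          # freeze everything except the LayerNorm parameters

    embed_in = nn.Linear(16, d_model)             # new trainable input projection (task feature dim = 16)
    head = nn.Linear(d_model, n_cls)              # new trainable output head

    params = [p for p in encoder.parameters() if p.requires_grad] \
             + list(embed_in.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    x = torch.randn(8, 32, 16)                    # (batch, sequence length, task features)
    logits = head(encoder(embed_in(x)).mean(dim=1))
    ```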

  21. ⁠, Xiaojin Zhu (2013-06-20):

    What if there is a teacher who knows the learning goal and wants to design good training data for a machine learner? We propose an optimal teaching framework aimed at learners who employ Bayesian models. Our framework is expressed as an optimization problem over teaching examples that balance the future loss of the learner and the effort of the teacher. This optimization problem is in general hard. In the case where the learner employs conjugate exponential family models, we present an approximate algorithm for finding the optimal teaching set. Our algorithm optimizes the aggregate sufficient statistics, then unpacks them into actual teaching examples. We give several examples to illustrate our framework.
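
    A toy instance of this setup (illustrative numbers; effort is simply the number of examples): a learner with a conjugate Normal prior over a Gaussian mean, where the teacher solves for the aggregate sufficient statistic needed to pull the posterior mean onto a target, then unpacks it into concrete examples.

    ```python
    # Toy teaching problem: the learner does Bayesian inference on a Gaussian mean
    # (known variance, conjugate Normal prior); the teacher wants the posterior mean
    # to land exactly on a target theta_star with as few examples as possible.
    # Following the aggregate-sufficient-statistics idea, solve for the required
    # sample mean, then unpack it into n identical examples. Numbers are illustrative.
    import numpy as np

    mu0, tau0_sq = 0.0, 1.0            # learner's prior over the unknown mean
    sigma_sq = 1.0                     # known observation variance
    theta_star = 3.0                   # the teacher's target posterior mean

    def posterior_mean(n, xbar):
        precision = 1 / tau0_sq + n / sigma_sq
        return (mu0 / tau0_sq + n * xbar / sigma_sq) / precision

    def teaching_set(n):
        # solve posterior_mean(n, xbar) == theta_star for the sufficient statistic xbar,
        # then unpack it into n equal teaching examples
        xbar = (theta_star * (1 / tau0_sq + n / sigma_sq) - mu0 / tau0_sq) * sigma_sq / n
        return np.full(n, xbar)

    for n in (1, 2, 5):                # larger teaching sets need less extreme examples
        D = teaching_set(n)
        print(n, D, posterior_mean(n, D.mean()))    # posterior mean hits theta_star every time
    ```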

  22. 2015-zhu.pdf: ⁠, Xiaojin Zhu (2015-01-01; ai):

    I draw the reader’s attention to machine teaching, the problem of finding an optimal training set given a machine learning algorithm and a target model. In addition to generating fascinating mathematical questions for computer scientists to ponder, machine teaching holds the promise of enhancing education and personnel training. The Socratic dialogue style aims to stimulate critical thinking.

    [See also Cakmak & Lopes 2012⁠; ⁠; ⁠.]

  23. ⁠, Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros (2018-11-27):

    Model distillation aims to distill the knowledge of a complex model into a simpler one. In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed and instead attempt to distill the knowledge from a large training dataset into a small one. The idea is to synthesize a small number of data points that do not need to come from the correct data distribution, but will, when given to the learning algorithm as training data, approximate the model trained on the original data. For example, we show that it is possible to compress 60,000 MNIST training images into just 10 synthetic distilled images (one per class) and achieve close to original performance with only a few gradient descent steps, given a fixed network initialization. We evaluate our method in various initialization settings and with different learning objectives. Experiments on multiple datasets show the advantage of our approach compared to alternative methods.
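
    A minimal sketch of the underlying bilevel objective (a linear classifier and a single unrolled gradient step instead of the paper’s convnets; data and hyperparameters are placeholders): the synthetic examples are themselves the parameters being optimized, by differentiating through a training step taken on them.

    ```python
    # Sketch of the dataset-distillation objective: learn a handful of synthetic
    # examples such that one gradient step on them, from a fixed initialization,
    # yields a model that does well on real data. A linear classifier and a single
    # unrolled step keep the bilevel optimization easy to differentiate through;
    # the paper uses small convnets, more steps, and real MNIST batches.
    import torch
    import torch.nn.functional as F

    n_classes, dim = 10, 784
    x_syn = torch.randn(n_classes, dim, requires_grad=True)   # 1 synthetic "image" per class
    y_syn = torch.arange(n_classes)
    lr_inner = torch.tensor(0.1, requires_grad=True)          # learned inner-loop step size
    opt = torch.optim.Adam([x_syn, lr_inner], lr=0.01)

    w0 = torch.zeros(dim, n_classes, requires_grad=True)      # fixed network initialization

    def real_batch():
        # stand-in for a batch of real training data
        return torch.randn(64, dim), torch.randint(0, n_classes, (64,))

    for step in range(100):
        inner_loss = F.cross_entropy(x_syn @ w0, y_syn)       # inner step: fit the synthetic data
        (grad_w,) = torch.autograd.grad(inner_loss, w0, create_graph=True)
        w1 = w0 - lr_inner * grad_w                            # one unrolled SGD step
        x_real, y_real = real_batch()
        outer_loss = F.cross_entropy(x_real @ w1, y_real)      # outer loss: evaluate on real data
        opt.zero_grad(); outer_loss.backward(); opt.step()     # update the synthetic data themselves
    ```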

  24. ⁠, Ilia Sucholutsky, Matthias Schonlau (2020-09-17):

    Deep neural networks require large training sets but suffer from high computational cost and long training times. Training on much smaller training sets while maintaining nearly the same accuracy would be very beneficial. In the few-shot learning setting, a model must learn a new class given only a small number of samples from that class. One-shot learning is an extreme form of few-shot learning where the model must learn a new class from a single example. We propose the ‘less than one’-shot learning task where models must learn N new classes given only M<N examples and we show that this is achievable with the help of soft labels. We use a soft-label generalization of the k-Nearest Neighbors classifier to explore the intricate decision landscapes that can be created in the ‘less than one’-shot learning setting. We analyze these decision landscapes to derive theoretical lower bounds for separating N classes using M<N soft-label samples and investigate the robustness of the resulting systems.
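
    A sketch of a soft-label kNN classifier of the kind studied here (a distance-weighted variant; the 2-prototype, 3-class example is illustrative): each prototype carries a distribution over classes, so M = 2 prototypes can carve out N = 3 decision regions on a line.

    ```python
    # Sketch of a soft-label kNN classifier (a distance-weighted variant): each
    # prototype carries a probability distribution over classes, and a query is
    # assigned the class with the largest weighted mass over its k nearest
    # prototypes. With M = 2 prototypes we get N = 3 decision regions on a line.
    import numpy as np

    def soft_knn_predict(queries, prototypes, soft_labels, k=2, eps=1e-9):
        # queries: (Q, d); prototypes: (M, d); soft_labels: (M, C), rows summing to 1
        dists = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=-1)
        nearest = np.argsort(dists, axis=1)[:, :k]
        w = 1.0 / (np.take_along_axis(dists, nearest, axis=1) + eps)    # inverse-distance weights
        mass = (w[:, :, None] * soft_labels[nearest]).sum(axis=1)       # (Q, C) class mass
        return mass.argmax(axis=1)

    prototypes = np.array([[0.0], [1.0]])                # M = 2 points carrying 3 classes of information
    soft_labels = np.array([[0.6, 0.4, 0.0],             # mostly class 0, some class 1
                            [0.0, 0.4, 0.6]])            # mostly class 2, some class 1
    queries = np.array([[-0.2], [0.5], [1.2]])
    print(soft_knn_predict(queries, prototypes, soft_labels))   # -> [0 1 2]
    ```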

  25. http://www.arm.com/products/security-on-arm/trustzone

  26. https://nitter.hu/_markel___/status/982364102449393668

  27. https://arstechnica.com/information-technology/2020/10/apples-t2-security-chip-has-an-unfixable-flaw/

  28. https://nitter.hu/qwertyoruiopz/status/1238606353645666308

  29. https://github.com/xoreaxeaxeax/rosenbridge

  30. https://www.alchemistowl.org/arrigo/Papers/Arrigo-Triulzi-PACSEC08-Project-Maux-II.pdf

  31. https://www.alchemistowl.org/arrigo/Papers/Arrigo-Triulzi-CANSEC10-Project-Maux-III.pdf

  32. http://boston.conman.org/2013/01/22.2

  33. https://www.theregister.co.uk/2013/03/07/baseband_processor_mobile_hack_threat/

  34. https://spritesmods.com/?art=hddhack&page=1

  35. http://www.righto.com/2015/11/macbook-charger-teardown-surprising.html#ref8

  36. http://www.ti.com/product/msp430f2003

  37. https://www.bunniestudios.com/blog/?p=3554

  38. https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html

  39. ⁠, T. H. Myer, I. E. Sutherland (1968-06):

    The flexibility and power needed in the channel for a computer display are considered. To work efficiently, such a channel must have a sufficient number of instructions that it is best understood as a small processor rather than a powerful channel. It was found that successive improvements to display processor design lie on a circular path: by making improvements, one can return to the original simple design, plus one new general-purpose computer for each trip around. The degree of physical separation between display and parent computer is a key factor in display processor design.

    [Keywords: display processor design, display system, computer graphics, graphic terminal, displays, graphics, display generator, display channel, display programming, graphical interaction, remote displays, Wheel of Reincarnation]

  40. https://news.ycombinator.com/item?id=18413757

  41. https://habr.com/post/429602/

  42. https://media.ccc.de/v/35c3-9778-open_source_firmware