
AI/​fully-connected directory

Links

“MLP-ASR: Sequence-length Agnostic All-MLP Architectures for Speech Recognition”, Sakuma et al 2022

“MLP-ASR: Sequence-length agnostic all-MLP architectures for speech recognition”⁠, Jin Sakuma, Tatsuya Komatsu, Robin Scheibler (2022-02-17; backlinks; similar):

We propose multi-layer perceptron (MLP)-based architectures suitable for variable length input.

MLP-based architectures, recently proposed for image classification, can only be used for inputs of a fixed, pre-defined size. However, many types of data are naturally variable in length, for example, acoustic signals.

We propose three approaches to extend MLP-based architectures for use with sequences of arbitrary length. The first one uses a circular convolution applied in the Fourier domain⁠, the second applies a depthwise convolution, and the final relies on a shift operation.

We evaluate the proposed architectures on an automatic speech recognition task with the Librispeech and Tedlium2 corpora.

The best proposed MLP-based architecture improves WER by 1.0/0.9% and 0.9/0.5% on the Librispeech dev-clean/dev-other and test-clean/test-other sets, and by 0.8/1.1% on the Tedlium2 dev/test set, using 86.4% of the size of the self-attention-based architecture.
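The first of the three variants can be read as a fixed-length learned kernel per channel, zero-padded to the current sequence length and applied by circular convolution in the Fourier domain, which makes the mixing independent of sequence length. A minimal PyTorch sketch of that reading (class name, kernel size, and other details are illustrative assumptions, not the authors' code):

```python
# Minimal sketch (not the authors' code) of sequence-length-agnostic token mixing
# via circular convolution computed in the Fourier domain.
import torch
import torch.nn as nn


class FourierCircularMixing(nn.Module):
    def __init__(self, dim: int, kernel_size: int = 65):
        super().__init__()
        # One fixed-length learned kernel per channel; hyperparameters are illustrative.
        self.kernel = nn.Parameter(torch.randn(dim, kernel_size) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim); works for any value of `time`.
        b, t, d = x.shape
        k = torch.zeros(d, t, device=x.device, dtype=x.dtype)
        n = min(t, self.kernel.shape[1])
        k[:, :n] = self.kernel[:, :n]                    # zero-pad (or crop) the kernel to length t
        # Circular convolution = pointwise product in the Fourier domain.
        x_f = torch.fft.rfft(x, dim=1)                   # (batch, t//2+1, dim)
        k_f = torch.fft.rfft(k, dim=1).transpose(0, 1)   # (t//2+1, dim)
        return torch.fft.irfft(x_f * k_f, n=t, dim=1)    # (batch, time, dim)


tokens = torch.randn(2, 137, 256)                        # arbitrary sequence length
print(FourierCircularMixing(256)(tokens).shape)          # torch.Size([2, 137, 256])
```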

“Mixing and Shifting: Exploiting Global and Local Dependencies in Vision MLPs”, Zheng et al 2022

“Mixing and Shifting: Exploiting Global and Local Dependencies in Vision MLPs”⁠, Huangjie Zheng, Pengcheng He, Weizhu Chen, Mingyuan Zhou (2022-02-14; backlinks; similar):

Token-mixing multi-layer perceptron (MLP) models have shown competitive performance in computer vision tasks with a simple architecture and relatively small computational cost. Their success in maintaining computation efficiency is mainly attributed to avoiding the use of self-attention that is often computationally heavy, yet this is at the expense of not being able to mix tokens both globally and locally.

In this paper, to exploit both global and local dependencies without self-attention, we present Mix-Shift-MLP (MS-MLP) which makes the size of the local receptive field used for mixing increase with respect to the amount of spatial shifting. In addition to conventional mixing and shifting techniques, MS-MLP mixes both neighboring and distant tokens from fine-grained to coarse-grained levels and then gathers them via a shifting operation. This directly contributes to the interactions between global and local tokens.

Being simple to implement, MS-MLP achieves competitive performance in multiple vision benchmarks. For example, an MS-MLP with 85 million parameters achieves 83.8% top-1 classification accuracy on ImageNet-1K. Moreover, by combining MS-MLP with state-of-the-art Vision Transformers such as the Swin Transformer, we show MS-MLP achieves further improvements on three different model scales, eg. by 0.5% on ImageNet-1K classification with Swin-B.

The code is available at: Github⁠.
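A rough PyTorch sketch of the mix-and-shift idea as described above (the grouping, shift distances, and block layout are assumptions for illustration, not the released MS-MLP code): channel groups are shifted by increasing spatial distances, so a subsequent channel MLP gathers both neighboring and distant tokens.

```python
# Illustrative sketch only: shift channel groups by progressively larger distances,
# then let a channel MLP mix the shifted groups.
import torch
import torch.nn as nn


class MixShiftBlock(nn.Module):
    def __init__(self, dim: int, num_groups: int = 4):
        super().__init__()
        self.num_groups = num_groups
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, height, width, dim)
        groups = torch.chunk(x, self.num_groups, dim=-1)
        shifted = [
            torch.roll(g, shifts=(i, i), dims=(1, 2))     # group i is shifted i patches
            for i, g in enumerate(groups)
        ]
        y = torch.cat(shifted, dim=-1)
        return x + self.mlp(self.norm(y))                 # channel MLP gathers local and distant tokens


print(MixShiftBlock(64)(torch.randn(2, 14, 14, 64)).shape)  # torch.Size([2, 14, 14, 64])
```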

“PNLP-Mixer: an Efficient All-MLP Architecture for Language”, Fusco et al 2022

“pNLP-Mixer: an Efficient all-MLP Architecture for Language”⁠, Francesco Fusco, Damian Pascual, Peter Staar (2022-02-09; backlinks; similar):

Large pre-trained language models drastically changed the natural language processing (NLP) landscape. Nowadays, they represent the go-to framework to tackle diverse NLP tasks, even with a limited number of annotations. However, using those models in production, either in the cloud or at the edge, remains a challenge due to the memory footprint and/or inference costs. As an alternative, recent work on efficient NLP has shown that small weight-efficient models can reach competitive performance at a fraction of the cost.

Here, we introduce pNLP-Mixer, an embedding-free model based on the MLP-Mixer architecture that achieves high weight-efficiency thanks to a novel linguistically informed projection layer.

We evaluate our model on two multi-lingual semantic parsing datasets, MTOP and multiATIS. On MTOP our pNLP-Mixer almost matches the performance of mBERT, which has 38× more parameters, and outperforms the state-of-the-art of tiny models (pQRNN) with 3× fewer parameters. On a long-sequence classification task (Hyperpartisan) our pNLP-Mixer without pretraining outperforms RoBERTa⁠, which has 100× more parameters, demonstrating the potential of this architecture.

“Data-driven Emergence of Convolutional Structure in Neural Networks”, Ingrosso & Goldt 2022

“Data-driven emergence of convolutional structure in neural networks”⁠, Alessandro Ingrosso, Sebastian Goldt (2022-02-01; ; backlinks; similar):

Exploiting data invariances is crucial for efficient learning in both artificial and biological neural circuits. Understanding how neural networks can discover appropriate representations capable of harnessing the underlying symmetries of their inputs is thus crucial in machine learning and neuroscience. Convolutional neural networks, for example, were designed to exploit translation symmetry and their capabilities triggered the first wave of deep learning successes. However, learning convolutions directly from translation-invariant data with a fully-connected network has so far proven elusive.

Here, we show how initially fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs, resulting in localized, space-tiling receptive fields. These receptive fields match the filters of a convolutional network trained on the same task.

By carefully designing data models for the visual scene, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs, which has long been recognised as the hallmark of natural images. We provide an analytical and numerical characterisation of the pattern-formation mechanism responsible for this phenomenon in a simple model, which results in an unexpected link between receptive field formation and the tensor decomposition of higher-order input correlations.

These results provide a new perspective on the development of low-level feature detectors in various sensory modalities, and pave the way for studying the impact of higher-order statistics on learning in neural networks.

“MLP Architectures for Vision-and-Language Modeling: An Empirical Study”, Nie et al 2021

“MLP Architectures for Vision-and-Language Modeling: An Empirical Study”⁠, Yixin Nie, Linjie Li, Zhe Gan, Shuohang Wang, Chenguang Zhu, Michael Zeng, Zicheng Liu, Mohit Bansal et al (2021-12-08; ; backlinks; similar):

We initiate the first empirical study on the use of MLP architectures for vision-and-language (VL) fusion.

Through extensive experiments on 5 VL tasks and 5 robust VQA benchmarks, we find that:

  1. Without pre-training, using MLPs for multimodal fusion has a noticeable performance gap compared to Transformers⁠;
  2. However, VL pre-training can help close the performance gap;
  3. Instead of heavy multi-head attention, adding tiny one-head attention to MLPs is sufficient to achieve comparable performance to Transformers.
  4. Moreover, we also find that the performance gap between MLPs and Transformers is not widened when evaluated on the harder robust VQA benchmarks, suggesting that MLPs for VL fusion generalize to roughly the same degree as Transformers.

These results hint that MLPs can effectively learn to align vision and text features extracted from lower-level encoders without heavy reliance on self-attention.

Based on this, we ask an even bolder question: can we have an all-MLP architecture for VL modeling, where both VL fusion and the vision encoder are replaced with MLPs?

Our result shows that an all-MLP VL model is sub-optimal compared to state-of-the-art full-featured VL models when both of them get pre-trained. However, pre-training an all-MLP can surprisingly achieve a better average score than full-featured Transformer models without pre-training.

This indicates the potential of large-scale pre-training of MLP-like architectures for VL modeling and inspires the future research direction on simplifying well-established VL modeling with less inductive design bias. Our code is publicly available⁠.

Figure 2: Results of models pre-trained on downsampled data.
Figure 3: Scaling effect of model parameters. Zero-Shot VQA Accuracy is the performance of the pre-trained model, without fine-tuning, on VQA data only.
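An illustrative PyTorch sketch of finding 3 above, adding a single tiny attention head to an otherwise MLP-based fusion block over concatenated vision and text tokens (the block structure and dimensions are assumptions for illustration, not the paper's model):

```python
# Illustrative sketch (not the paper's model): an MLP fusion block over concatenated
# vision and text tokens, augmented with one cheap, low-dimensional attention head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLPFusionWithOneHead(nn.Module):
    def __init__(self, dim: int = 512, head_dim: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        # "Tiny one-head attention": a single head with a small inner dimension.
        self.q = nn.Linear(dim, head_dim)
        self.k = nn.Linear(dim, head_dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, vision_tokens, text_tokens):
        x = torch.cat([vision_tokens, text_tokens], dim=1)   # (batch, n_v + n_t, dim)
        h = self.norm(x)
        attn = F.softmax(self.q(h) @ self.k(h).transpose(1, 2) / self.q.out_features ** 0.5, dim=-1)
        x = x + attn @ self.v(h)                              # one cheap attention head
        x = x + self.mlp(self.norm(x))                        # the MLP does most of the fusion work
        return x


out = MLPFusionWithOneHead()(torch.randn(2, 50, 512), torch.randn(2, 20, 512))
print(out.shape)  # torch.Size([2, 70, 512])
```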

“MorphMLP: A Self-Attention Free, MLP-Like Backbone for Image and Video”, Zhang et al 2021

“MorphMLP: A Self-Attention Free, MLP-Like Backbone for Image and Video”⁠, David Junhao Zhang, Kunchang Li, Yunpeng Chen, Yali Wang, Shashwat Chandra, Yu Qiao, Luoqi Liu, Mike Zheng Shou et al (2021-11-24; ; backlinks; similar):

Self-attention has become an integral component of the recent network architectures, eg. Transformer, that dominate major image and video benchmarks. This is because self-attention can flexibly model long-range information. For the same reason, researchers have recently made attempts to revive the Multi-Layer Perceptron (MLP) and propose a few MLP-Like architectures, showing great potential. However, the current MLP-Like architectures are not good at capturing local details and lack progressive understanding of core details in the images and/or videos.

To overcome this issue, we propose a novel MorphMLP architecture that focuses on capturing local details at the low-level layers, while gradually changing to focus on long-term modeling at the high-level layers. Specifically, we design a Fully-Connected-Like layer, dubbed MorphFC, with two morphable filters that gradually grow their receptive field along the height and width dimensions. More interestingly, we propose to flexibly adapt our MorphFC layer in the video domain. To the best of our knowledge, we are the first to create an MLP-Like backbone for learning video representation. Finally, we conduct extensive experiments on image classification, semantic segmentation and video classification. Our MorphMLP, a self-attention-free backbone, can be as powerful as, and even outperform, self-attention-based models.

“Deep Learning without Shortcuts: Shaping the Kernel With Tailored Rectifiers”, Zhang et al 2021

“Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers”⁠, Guodong Zhang, Aleksandar Botev, James Martens (2021-11-18; similar):

Training very deep neural networks is still an extremely challenging task. The common solution to this is to add shortcut connections and normalization layers, which are both crucial ingredients in the ResNet architecture. However, there is strong evidence to suggest that ResNets behave more like ensembles of shallower networks than truly deep ones. Recently, it was shown that deep vanilla networks (ie. without normalization layers or shortcut connections) can be trained as fast as ResNets by applying certain transformations to their activation functions. However, this method (called Deep Kernel Shaping) isn't fully compatible with ReLUs, and produces networks that exhibit statistically-significantly more overfitting than ResNets of similar size on ImageNet. In this work, we rectify this situation by developing a new type of transformation which is perfectly compatible with a variant of ReLUs: Leaky ReLUs. We show in experiments that our method, which introduces negligible extra computational cost, achieves test accuracies with vanilla deep networks that are competitive with ResNets (of the same width/depth), and significantly higher than those obtained with the Edge of Chaos (EOC) method. And unlike with EOC, the test accuracies we obtain do not get worse with depth.

[Keywords: Neural Network Training, Kernel Approximation for Neural Networks, Neural Network Initialization, Generalization]

“ADOP: Approximate Differentiable One-Pixel Point Rendering”, Rückert et al 2021

“ADOP: Approximate Differentiable One-Pixel Point Rendering”⁠, Darius Rückert, Linus Franke, Marc Stamminger (2021-10-13; ):

In this paper we present ADOP, a novel point-based, differentiable neural rendering pipeline. Like other neural renderers, our system takes as input calibrated camera images and a proxy geometry of the scene, in our case a point cloud. To generate a novel view, the point cloud is rasterized with learned feature vectors as colors and a deep neural network fills the remaining holes and shades each output pixel. The rasterizer renders points as one-pixel splats, which makes it very fast and allows us to compute gradients with respect to all relevant input parameters efficiently. Furthermore, our pipeline contains a fully differentiable physically-based photometric camera model, including exposure, white balance, and a camera response function.

Following the idea of inverse rendering, we use our renderer to refine its input in order to reduce inconsistencies and optimize the quality of its output. In particular, we can optimize structural parameters like the camera pose, lens distortions, point positions and features, and a neural environment map, but also photometric parameters like camera response function, vignetting, and per-image exposure and white balance. Because our pipeline includes photometric parameters, eg. exposure and camera response function, our system can smoothly handle input images with varying exposure and white balance, and generates high-dynamic range output.

We show that due to the improved input, we can achieve high render quality, even for difficult input, eg. with imperfect camera calibrations, inaccurate proxy geometry, or varying exposure. As a result, a simpler and thus faster deep neural network is sufficient for reconstruction. In combination with the fast point rasterization, ADOP achieves real-time rendering rates even for models with well over 100M points. https://github.com/darglein/ADOP

“Sparse-MLP: A Fully-MLP Architecture With Conditional Computation”, Lou et al 2021

“Sparse-MLP: A Fully-MLP Architecture with Conditional Computation”⁠, Yuxuan Lou, Fuzhao Xue, Zangwei Zheng, Yang You (2021-09-05; ; backlinks; similar):

Mixture of Experts (MoE) with sparse conditional computation has been proved an effective architecture for scaling attention-based models to more parameters with comparable computation cost. In this paper, we propose Sparse-MLP, scaling the recent MLP-Mixer model with sparse MoE layers, to achieve a more computation-efficient architecture. We replace a subset of dense MLP blocks in the MLP-Mixer model with Sparse blocks. In each Sparse block, we apply two stages of MoE layers: one with MLP experts mixing information within channels along the image patch dimension, one with MLP experts mixing information within patches along the channel dimension. Besides, to reduce the computational cost of routing and improve expert capacity, we design Re-represent layers in each Sparse block. These layers re-scale image representations via two simple but effective linear transformations.

By pre-training on ImageNet-1k with MoCo v3 algorithm, our models can outperform dense MLP models with comparable parameters and less computational cost on several downstream image classification tasks.

“S2-MLPv2: Improved Spatial-Shift MLP Architecture for Vision”, Yu et al 2021

“S2-MLPv2: Improved Spatial-Shift MLP Architecture for Vision”⁠, Tan Yu, Xu Li, Yunfeng Cai, Mingming Sun, Ping Li (2021-08-02; backlinks; similar):

Recently, MLP-based vision backbones have emerged. MLP-based vision architectures with less inductive bias achieve competitive performance in image recognition compared with CNNs and vision Transformers. Among them, spatial-shift MLP (S2-MLP), adopting the straightforward spatial-shift operation, achieves better performance than the pioneering works including MLP-Mixer and ResMLP⁠. More recently, using smaller patches with a pyramid structure, Vision Permutator (ViP) and Global Filter Network (GFNet) achieve better performance than S2-MLP.

In this paper, we improve the S2-MLP vision backbone. We expand the feature map along the channel dimension and split the expanded feature map into several parts. We conduct different spatial-shift operations on split parts.

Meanwhile, we exploit the split-attention operation to fuse these split parts. Moreover, like the counterparts, we adopt smaller-scale patches and use a pyramid structure for boosting the image recognition accuracy. We term the improved spatial-shift MLP vision backbone S2-MLPv2. Using 55M parameters, our medium-scale model, S2-MLPv2-Medium, achieves an 83.6% top-1 accuracy on the ImageNet-1K benchmark using 224×224px images without self-attention and external training data.

“CycleMLP: A MLP-like Architecture for Dense Prediction”, Chen et al 2021

“CycleMLP: A MLP-like Architecture for Dense Prediction”⁠, Shoufa Chen, Enze Xie, Chongjian Ge, Runjian Chen, Ding Liang, Ping Luo (2021-07-21; backlinks; similar):

This paper presents a simple MLP-like architecture, CycleMLP, which is a versatile backbone for visual recognition and dense predictions.

As compared to modern MLP architectures, eg. MLP-Mixer⁠, ResMLP⁠, and gMLP⁠, whose architectures are correlated with image size and thus are infeasible for object detection and segmentation, CycleMLP has two advantages. (1) It can cope with various image sizes. (2) It achieves linear computational complexity in image size by using local windows. In contrast, previous MLPs have 𝒪(n²) computations due to fully spatial connections.

We build a family of models which surpass existing MLPs and even state-of-the-art Transformer-based models, eg. Swin Transformer, while using fewer parameters and FLOPs. We expand the MLP-like models’ applicability, making them a versatile backbone for dense prediction tasks.

CycleMLP achieves competitive results on object detection, instance segmentation, and semantic segmentation. In particular, CycleMLP-Tiny outperforms Swin-Tiny by 1.3% mIoU on ADE20K dataset with fewer FLOPs. Moreover, CycleMLP also shows excellent zero-shot robustness on ImageNet-C dataset.

Code is available at Github⁠.
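A heavily simplified sketch of the underlying idea, a channel projection in which each channel is read from a different spatial offset along a small cycle, so the receptive field grows without any dependence on image size (the offset pattern and module layout are assumptions for illustration, not the released CycleMLP code):

```python
# Rough sketch (heavily simplified): each channel is taken from a different spatial
# offset before a pointwise projection mixes the channels.
import torch
import torch.nn as nn


class CycleChannelFC(nn.Module):
    def __init__(self, dim: int, step: int = 3):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        # Offsets cycle through [-step//2, ..., step//2] across channels.
        self.offsets = [(c % step) - step // 2 for c in range(dim)]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, height, width, dim)
        shifted = torch.stack(
            [torch.roll(x[..., c], shifts=self.offsets[c], dims=1) for c in range(x.shape[-1])],
            dim=-1,
        )
        return self.proj(shifted)  # pointwise FC over the shifted channels


print(CycleChannelFC(32)(torch.randn(2, 14, 14, 32)).shape)  # torch.Size([2, 14, 14, 32])
```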

“Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition”, Hou et al 2021

“Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition”⁠, Qibin Hou, Zihang Jiang, Li Yuan, Ming-Ming Cheng, Shuicheng Yan, Jiashi Feng (2021-06-23; backlinks; similar):

In this paper, we present Vision Permutator, a conceptually simple and data-efficient MLP-like architecture for visual recognition. By realizing the importance of the positional information carried by 2D feature representations, unlike recent MLP-like models that encode the spatial information along the flattened spatial dimensions, Vision Permutator separately encodes the feature representations along the height and width dimensions with linear projections. This allows Vision Permutator to capture long-range dependencies along one spatial direction and meanwhile preserve precise positional information along the other direction. The resulting position-sensitive outputs are then aggregated in a mutually complementing manner to form expressive representations of the objects of interest.

We show that our Vision Permutators are formidable competitors to convolutional neural networks (CNNs) and vision transformers. Without the dependence on spatial convolutions or attention mechanisms, Vision Permutator achieves 81.5% top-1 accuracy on ImageNet without extra large-scale training data (eg. ImageNet-22k) using only 25M learnable parameters, which is much better than most CNNs and vision transformers under the same model size constraint. When scaling up to 88M, it attains 83.2% top-1 accuracy. We hope this work could encourage research on rethinking the way of encoding spatial information and facilitate the development of MLP-like models. Code is available at Github⁠.
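A simplified PyTorch sketch of the separate height/width/channel encoding (the real model also divides channels into segments when permuting; this and the exact fusion are simplifying assumptions, not the released code):

```python
# Simplified sketch: encode feature maps separately along height, width, and channel
# with linear projections, then sum and fuse the three branches.
import torch
import torch.nn as nn


class PermuteMLP(nn.Module):
    def __init__(self, dim: int, height: int, width: int):
        super().__init__()
        self.proj_h = nn.Linear(height, height)   # mixes along the height axis
        self.proj_w = nn.Linear(width, width)     # mixes along the width axis
        self.proj_c = nn.Linear(dim, dim)         # mixes along the channel axis
        self.fuse = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, height, width, dim)
        h = self.proj_h(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)  # linear over H
        w = self.proj_w(x.permute(0, 1, 3, 2)).permute(0, 1, 3, 2)  # linear over W
        c = self.proj_c(x)                                          # linear over C
        return self.fuse(h + w + c)


print(PermuteMLP(64, 14, 14)(torch.randn(2, 14, 14, 64)).shape)  # torch.Size([2, 14, 14, 64])
```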

“MLP Singer: Towards Rapid Parallel Korean Singing Voice Synthesis”, Tae et al 2021

“MLP Singer: Towards Rapid Parallel Korean Singing Voice Synthesis”⁠, Jaesung Tae, Hyeongju Kim, Younggun Lee (2021-06-15; ⁠, ; backlinks; similar):

Recent developments in deep learning have significantly improved the quality of synthesized singing voice audio. However, prominent neural singing voice synthesis systems suffer from slow inference speed due to their autoregressive design. Inspired by MLP-Mixer, a novel architecture introduced in the vision literature for attention-free image classification, we propose MLP Singer, a parallel Korean singing voice synthesis system. To the best of our knowledge, this is the first work that uses an entirely MLP-based architecture for voice synthesis. Listening tests demonstrate that MLP Singer outperforms a larger autoregressive GAN-based system, both in terms of audio quality and synthesis speed. In particular, MLP Singer achieves a real-time factor of up to 200 and 3400 on CPUs and GPUs respectively, enabling order-of-magnitude faster generation in both environments.

“S2-MLP: Spatial-Shift MLP Architecture for Vision”, Yu et al 2021

“S2-MLP: Spatial-Shift MLP Architecture for Vision”⁠, Tan Yu, Xu Li, Yunfeng Cai, Mingming Sun, Ping Li (2021-06-14; backlinks; similar):

Recently, visual Transformer (ViT) and its following works abandon the convolution and exploit the self-attention operation, attaining a comparable or even higher accuracy than CNN⁠. More recently, MLP-Mixer abandons both the convolution and the self-attention operation, proposing an architecture containing only MLP layers⁠. To achieve cross-patch communications, it devises an additional token-mixing MLP besides the channel-mixing MLP. It achieves promising results when training on an extremely large-scale dataset. But it cannot match the outstanding performance of its CNN and ViT counterparts when training on medium-scale datasets such as ImageNet-1K and ImageNet-21K. The performance drop of MLP-Mixer motivates us to rethink the token-mixing MLP.

We discover that the token-mixing operation in MLP-Mixer is a variant of depthwise convolution with a global receptive field and spatial-specific configuration. But the global receptive field and the spatial-specific property make the token-mixing MLP prone to over-fitting. In this paper, we propose a novel pure MLP architecture, spatial-shift MLP (S2-MLP). Different from MLP-Mixer, our S2-MLP only contains channel-mixing MLPs. We devise a spatial-shift operation to achieve communication between patches. It has a local receptive field and is spatial-agnostic. Meanwhile, it is parameter-free and efficient for computation.

The proposed S2-MLP attains higher recognition accuracy than MLP-Mixer when training on the ImageNet-1K dataset. Meanwhile, S2-MLP matches the excellent performance of ViT on the ImageNet-1K dataset with a considerably simpler architecture and fewer FLOPs and parameters.
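A minimal PyTorch sketch of a spatial-shift block, assuming the commonly described grouping of channels into four directional shifts (the exact group order and block layout are illustrative, not the authors' code):

```python
# Minimal sketch: split channels into 4 groups, shift each group by one patch in a
# different direction (a parameter-free operation), then mix channels with an MLP.
import torch
import torch.nn as nn


def spatial_shift(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, height, width, dim); shift 4 channel groups up/down/left/right.
    b, h, w, d = x.shape
    g = d // 4
    out = x.clone()
    out[:, 1:, :, 0 * g:1 * g] = x[:, :-1, :, 0 * g:1 * g]   # shift down
    out[:, :-1, :, 1 * g:2 * g] = x[:, 1:, :, 1 * g:2 * g]   # shift up
    out[:, :, 1:, 2 * g:3 * g] = x[:, :, :-1, 2 * g:3 * g]   # shift right
    out[:, :, :-1, 3 * g:4 * g] = x[:, :, 1:, 3 * g:4 * g]   # shift left
    return out


class S2MLPBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.mlp(self.norm(spatial_shift(x)))     # channel-mixing MLP only


print(S2MLPBlock(64)(torch.randn(2, 14, 14, 64)).shape)      # torch.Size([2, 14, 14, 64])
```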

“Container: Context Aggregation Network”, Gao et al 2021

“Container: Context Aggregation Network”⁠, Peng Gao, Jiasen Lu, Hongsheng Li, Roozbeh Mottaghi, Aniruddha Kembhavi (2021-06-02; backlinks; similar):

Convolutional neural networks (CNNs) are ubiquitous in computer vision, with a myriad of effective and efficient variations. Recently, Transformers—originally introduced in natural language processing—have been increasingly adopted in computer vision. While early adopters continue to employ CNN backbones, the latest networks are end-to-end CNN-free Transformer solutions. A recent surprising finding shows that a simple MLP-based solution without any traditional convolutional or Transformer components can produce effective visual representations.

While CNNs, Transformers and MLP-Mixers may be considered as completely disparate architectures, we provide a unified view showing that they are in fact special cases of a more general method to aggregate spatial context in a neural network stack. We present the CONTAINER (CONText AggregatIon NEtwoRk), a general-purpose building block for multi-head context aggregation that can exploit long-range interactions a la Transformers while still exploiting the inductive bias of the local convolution operation, leading to faster convergence speeds, often seen in CNNs.

In contrast to Transformer-based methods that do not scale well to downstream tasks that rely on larger input image resolutions, our efficient network, named CONTAINER-LIGHT, can be employed in object detection and instance segmentation networks such as DETR⁠, RetinaNet and Mask-RCNN to obtain an impressive detection mAP of 38.9, 43.8, 45.1 and mask mAP of 41.3, providing large improvements of 6.6, 7.3, 6.9 and 6.6 points respectively, compared to a ResNet-50 backbone with a comparable compute and parameter size.

Our method also achieves promising results on self-supervised learning compared to DeiT on the DINO framework.

“MixerGAN: An MLP-Based Architecture for Unpaired Image-to-Image Translation”, Cazenavette & Guevara 2021

“MixerGAN: An MLP-Based Architecture for Unpaired Image-to-Image Translation”⁠, George Cazenavette, Manuel Ladron De Guevara (2021-05-28; ; backlinks; similar):

While attention-based transformer networks achieve unparalleled success in nearly all language tasks, the large number of tokens coupled with the quadratic activation memory usage makes them prohibitive for visual tasks. As such, while language-to-language translation has been revolutionized by the transformer model, convolutional networks remain the de facto solution for image-to-image translation. The recently proposed MLP-Mixer architecture alleviates some of the speed and memory issues associated with attention-based networks while still retaining the long-range connections that make transformer models desirable. Leveraging this efficient alternative to self-attention, we propose a new unpaired image-to-image translation model called MixerGAN: a simpler MLP-based architecture that considers long-distance relationships between pixels without the need for expensive attention mechanisms. Quantitative and qualitative analysis shows that MixerGAN achieves competitive results when compared to prior convolutional-based methods.

“Pay Attention to MLPs”, Liu et al 2021

“Pay Attention to MLPs”⁠, Hanxiao Liu, Zihang Dai, David R. So, Quoc V. Le (2021-05-17; backlinks; similar):

Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years.

Here we propose a simple network architecture, gMLP, based on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications.

Our comparisons show that self-attention is not critical for Vision Transformers, as gMLP can achieve the same accuracy. For BERT⁠, our model achieves parity with Transformers on pretraining perplexity and is better on some downstream NLP tasks. On finetuning tasks where gMLP performs worse, making the gMLP model substantially larger can close the gap with Transformers.

In general, our experiments show that gMLP can scale as well as Transformers over increased data and compute.
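A minimal PyTorch sketch of one gMLP block with its spatial gating unit, following the published description (the spatial projection is initialized near identity; sizes are illustrative):

```python
# Minimal sketch of a gMLP block: channel expansion, a spatial gating unit that mixes
# tokens with a single linear projection over the sequence axis, and a residual.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialGatingUnit(nn.Module):
    def __init__(self, dim_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim_ffn // 2)
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        nn.init.zeros_(self.spatial_proj.weight)   # near-identity gate at init
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x):
        u, v = x.chunk(2, dim=-1)                  # split channels into two halves
        v = self.spatial_proj(self.norm(v).transpose(1, 2)).transpose(1, 2)
        return u * v                               # gating: elementwise product


class GMLPBlock(nn.Module):
    def __init__(self, dim: int, dim_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.proj_in = nn.Linear(dim, dim_ffn)
        self.sgu = SpatialGatingUnit(dim_ffn, seq_len)
        self.proj_out = nn.Linear(dim_ffn // 2, dim)

    def forward(self, x):
        y = F.gelu(self.proj_in(self.norm(x)))
        return x + self.proj_out(self.sgu(y))


print(GMLPBlock(128, 512, 196)(torch.randn(2, 196, 128)).shape)  # torch.Size([2, 196, 128])
```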

“ResMLP: Feedforward Networks for Image Classification With Data-efficient Training”, Touvron et al 2021

“ResMLP: Feedforward networks for image classification with data-efficient training”⁠, Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Gautier Izacard et al (2021-05-07; backlinks; similar):

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification.

It is a simple residual network that alternates (1) a linear layer in which image patches interact, independently and identically across channels, and (2) a two-layer feed-forward network in which channels interact independently per patch.

When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/​complexity trade-offs on ImageNet. We also train ResMLP models in a self-supervised setup, to further remove priors from employing a labeled dataset.

Finally, by adapting our model to machine translation we achieve surprisingly good results.

We share pre-trained models and our code based on the Timm library.
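A minimal PyTorch sketch of a ResMLP-style block, simplified relative to the released models (sizes are illustrative): an affine normalization, a linear layer that lets patches interact, and a per-patch two-layer feed-forward network, each with a residual connection.

```python
# Minimal sketch of a ResMLP-style block (simplified; not the released Timm code).
import torch
import torch.nn as nn


class Affine(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        return self.alpha * x + self.beta          # element-wise affine, no statistics


class ResMLPBlock(nn.Module):
    def __init__(self, dim: int, num_patches: int):
        super().__init__()
        self.affine1, self.affine2 = Affine(dim), Affine(dim)
        self.patch_mix = nn.Linear(num_patches, num_patches)   # patches interact
        self.channel_mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        # x: (batch, num_patches, dim)
        x = x + self.patch_mix(self.affine1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mlp(self.affine2(x))               # channels interact per patch
        return x


print(ResMLPBlock(384, 196)(torch.randn(2, 196, 384)).shape)   # torch.Size([2, 196, 384])
```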

“MLP-Mixer: An All-MLP Architecture for Vision”, Tolstikhin et al 2021

“MLP-Mixer: An all-MLP Architecture for Vision”⁠, Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung et al (2021-05-04; ; backlinks; similar):

[blog] Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer⁠, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them are necessary.

We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (ie. “mixing” the per-location features), and one with MLPs applied across patches (ie. “mixing” spatial information). When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models.

We hope that these results spark further research beyond the realms of well established CNNs and Transformers.
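A minimal PyTorch sketch of one Mixer block as described: a token-mixing MLP applied across patches and a channel-mixing MLP applied per patch, both pre-normed and residual (hidden sizes are illustrative):

```python
# Minimal sketch of an MLP-Mixer block (not the official JAX implementation).
import torch
import torch.nn as nn


def mlp(in_dim: int, hidden_dim: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, in_dim))


class MixerBlock(nn.Module):
    def __init__(self, num_patches: int, dim: int, token_hidden: int = 256, channel_hidden: int = 2048):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.token_mlp = mlp(num_patches, token_hidden)      # mixes spatial information
        self.channel_mlp = mlp(dim, channel_hidden)          # mixes per-location features

    def forward(self, x):
        # x: (batch, num_patches, dim)
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))
        return x


print(MixerBlock(196, 512)(torch.randn(2, 196, 512)).shape)  # torch.Size([2, 196, 512])
```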

“Revisiting Simple Neural Probabilistic Language Models”, Sun & Iyyer 2021

“Revisiting Simple Neural Probabilistic Language Models”⁠, Simeng Sun, Mohit Iyyer (2021-04-08; backlinks; similar):

Recent progress in language modeling has been driven not only by advances in neural architectures, but also through hardware and optimization improvements.

In this paper, we revisit the neural probabilistic language model (NPLM) of Bengio et al 2003⁠, which simply concatenates word embeddings within a fixed window and passes the result through a feed-forward network to predict the next word.

When scaled up to modern hardware, this model (despite its many limitations) performs much better than expected on word-level language model benchmarks.

Our analysis reveals that the NPLM achieves lower perplexity than a baseline Transformer with short input contexts but struggles to handle long-term dependencies. Inspired by this result, we modify the Transformer by replacing its first self-attention layer with the NPLM’s local concatenation layer, which results in small but consistent perplexity decreases across three word-level language modeling datasets.
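A minimal PyTorch sketch of the NPLM as described, concatenating the embeddings of a fixed window of preceding tokens and passing them through a feed-forward network to predict the next token (sizes are illustrative; the scaled-up models in the paper are much larger):

```python
# Minimal sketch of a neural probabilistic language model (fixed context window).
import torch
import torch.nn as nn


class NPLM(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, window: int = 8, hidden: int = 1024):
        super().__init__()
        self.window = window
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.ff = nn.Sequential(
            nn.Linear(window * embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, vocab_size),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, window) token ids -> logits over the next token
        e = self.embed(context).flatten(1)      # concatenate the window's embeddings
        return self.ff(e)


logits = NPLM(vocab_size=10000)(torch.randint(0, 10000, (4, 8)))
print(logits.shape)  # torch.Size([4, 10000])
```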

“An Attention Free Transformer”, Anonymous 2020

“An Attention Free Transformer”⁠, Anonymous (2020-09-28; ; backlinks):

We propose an efficient Transformer that eliminates attention.

We introduce Attention Free Transformer (AFT), an efficient variant of Transformers that eliminates the need for spatial attention. AFT offers great simplicity compared with standard Transformers, where the multi-head attention operation is replaced with the composition of element-wise multiplications/divisions and global/local pooling. We provide several variants of AFT along with simple yet efficient implementations that are supported by mainstream deep learning libraries. We show that, surprisingly, we are able to train AFT effectively on challenging benchmarks, and also to match or surpass the standard Transformer counterparts.

[Keywords: Transformers, attention, efficient]
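A minimal PyTorch sketch of the simplest variant described in the paper (often referred to as AFT-simple), where attention becomes an elementwise gate times a softmax-weighted global pooling over the sequence (dimensions are illustrative):

```python
# Sketch of an AFT-simple layer: no n x n attention map, only elementwise operations
# and a global pooling along the sequence.
import torch
import torch.nn as nn


class AFTSimple(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        weights = torch.softmax(k, dim=1)                  # per-channel softmax over positions
        context = (weights * v).sum(dim=1, keepdim=True)   # global pooling: (batch, 1, dim)
        return torch.sigmoid(q) * context                  # elementwise gating


print(AFTSimple(256)(torch.randn(2, 100, 256)).shape)      # torch.Size([2, 100, 256])
```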

“Towards Learning Convolutions from Scratch”, Neyshabur 2020

“Towards Learning Convolutions from Scratch”⁠, Behnam Neyshabur (2020-07-27; backlinks; similar):

Convolution is one of the most essential components of architectures used in computer vision. As machine learning moves towards reducing the expert bias and learning it from data, a natural next step seems to be learning convolution-like structures from scratch. This, however, has proven elusive. For example, current state-of-the-art architecture search algorithms use convolution as one of the existing modules rather than learning it from data.

In an attempt to understand the inductive bias that gives rise to convolutions, we investigate minimum description length as a guiding principle and show that in some settings, it can indeed be indicative of the performance of architectures.

To find architectures with small description length, we propose β-LASSO⁠, a simple variant of the LASSO algorithm that, when applied on fully-connected networks for image classification tasks, learns architectures with local connections and achieves state-of-the-art accuracies for training fully-connected nets on CIFAR-10 (85.19%), CIFAR-100 (59.56%) and SVHN (94.07%), bridging the gap between fully-connected and convolutional nets.

“Efficient Attention: Breaking The Quadratic Transformer Bottleneck”, Branwen 2020

“Efficient Attention: Breaking The Quadratic Transformer Bottleneck”⁠, Gwern Branwen (2020-07-25; ⁠, ⁠, ⁠, ⁠, ; backlinks; similar):

Discussion of removing a major architectural limitation in Transformer neural networks: the length of the input it can look at. Beyond a few thousand inputs, the resource requirements explode quadratically, rendering it infeasible to encode raw text at the character level, much less use entire books, images, or many other kinds of data which could be useful. Even for text, this inability also forces limitations like the use of BPE text encoding (responsible for sabotaging GPT-3’s rhyming, among other things), forgetfulness, limits to prompt programming, and inability to write coherent long texts.

Possibilities for fixing this generally fall into

  1. adding state, through recurrence (a memory) or creating a compressed history/​state as an explicit summary
  2. tinkering with matrix algebra to remove the quadratic explosion while still keeping more or less the same self-attention mechanism
  3. approximating self-attention: using attention on only a small subset of tokens at any time (dodging the quadratic limit), or using a mix of local and global attention (local attentions to do most of the work, and global attention on top of the local attentions, each one avoiding the quadratic by considering only a few inputs at a time)
  4. miscellaneous tricks: removing parts, using only randomized untrainable components (with no need to compute gradients over them), etc.

“Implicit Neural Representations With Periodic Activation Functions”, Sitzmann et al 2020

“Implicit Neural Representations with Periodic Activation Functions”⁠, Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein (2020-06-17; ; backlinks; similar):

Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal’s spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations.

We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions.
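A minimal PyTorch sketch of a sine layer with the initialization scheme proposed in the paper (frequency scaling ω₀ = 30 as in the paper; the small image-fitting network at the end is an illustrative usage example):

```python
# Minimal sketch of a Siren layer: a linear map followed by sin(omega_0 * .),
# with the paper's uniform initialization bounds.
import math
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, is_first: bool = False, omega_0: float = 30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_dim, out_dim)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_dim                       # first layer: U(-1/n, 1/n)
            else:
                bound = math.sqrt(6.0 / in_dim) / omega_0  # hidden layers: U(-sqrt(6/n)/w0, ...)
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


# Map 2D pixel coordinates to RGB, as in the image-fitting experiments.
siren = nn.Sequential(SineLayer(2, 256, is_first=True), SineLayer(256, 256), nn.Linear(256, 3))
print(siren(torch.rand(1024, 2)).shape)  # torch.Size([1024, 3])
```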

“Linformer: Self-Attention With Linear Complexity”, Wang et al 2020

“Linformer: Self-Attention with Linear Complexity”⁠, Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma (2020-06-08; backlinks; similar):

Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses 𝒪(n²) time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from 𝒪(n²) to 𝒪(n) in both time and space. The resulting linear transformer, the Linformer, performs on par with standard Transformer models, while being much more memory-efficient and time-efficient.
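A single-head PyTorch sketch of the low-rank projection idea: keys and values are projected from length n down to a fixed k before attention, so the attention map is n × k instead of n × n (the projected dimension, head count, and other details are illustrative, not the reference implementation):

```python
# Sketch of single-head Linformer-style attention with sequence-length projections.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinformerAttention(nn.Module):
    def __init__(self, dim: int, seq_len: int, k: int = 64):
        super().__init__()
        self.q, self.key, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.e = nn.Linear(seq_len, k, bias=False)   # projects keys along the sequence axis
        self.f = nn.Linear(seq_len, k, bias=False)   # projects values along the sequence axis
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q = self.q(x)
        k = self.e(self.key(x).transpose(1, 2)).transpose(1, 2)       # (batch, k, dim)
        v = self.f(self.v(x).transpose(1, 2)).transpose(1, 2)         # (batch, k, dim)
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (batch, seq_len, k)
        return attn @ v                                               # (batch, seq_len, dim)


print(LinformerAttention(256, seq_len=512)(torch.randn(2, 512, 256)).shape)  # torch.Size([2, 512, 256])
```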

“Synthesizer: Rethinking Self-Attention in Transformer Models”, Tay et al 2020

“Synthesizer: Rethinking Self-Attention in Transformer Models”⁠, Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, Che Zheng (2020-05-02; backlinks; similar):

The dot product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models.

Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is useful but not that important after all. To this end, we propose Synthesizer, a model that learns synthetic attention weights without token-token interactions.

In our experiments, we first show that simple Synthesizers achieve highly competitive performance when compared against vanilla Transformer models across a range of tasks, including machine translation, language modeling, text generation and GLUE⁠/SuperGLUE benchmarks. When composed with dot product attention, we find that Synthesizers consistently outperform Transformers. Moreover, we conduct additional comparisons of Synthesizers against Dynamic Convolutions, showing that the simple Random Synthesizer is not only 60% faster but also improves perplexity by a relative 3.5%. Finally, we show that simple factorized Synthesizers can outperform Linformers on encoding-only tasks.
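A PyTorch sketch of a single Dense Synthesizer head as described, where attention weights are synthesized from each token alone by a small MLP rather than from query-key dot products (sizes are illustrative):

```python
# Sketch of a Dense Synthesizer head: attention scores come from a per-token MLP,
# with no token-token interaction when computing them.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseSynthesizer(nn.Module):
    def __init__(self, dim: int, seq_len: int, hidden: int = 256):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, seq_len))
        self.value = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        attn = F.softmax(self.score(x), dim=-1)   # (batch, seq_len, seq_len), synthesized per token
        return attn @ self.value(x)


print(DenseSynthesizer(256, seq_len=128)(torch.randn(2, 128, 256)).shape)  # torch.Size([2, 128, 256])
```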

“Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems”, Naumov et al 2020

“Deep Learning Training in Facebook Data Centers: Design of Scale-up and Scale-out Systems”⁠, Maxim Naumov, John Kim, Dheevatsa Mudigere, Srinivas Sridharan, Xiaodong Wang, Whitney Zhao, Serhat Yilmaz et al (2020-03-20; ; backlinks; similar):

Large-scale training is important to ensure high performance and accuracy of machine-learning models. At Facebook we use many different models, including computer vision, video and language models. However, in this paper we focus on the deep learning recommendation models (DLRMs), which are responsible for more than 50% of the training demand in our data centers. Recommendation models present unique challenges in training because they exercise not only compute but also memory capacity as well as memory and network bandwidth. As model size and complexity increase, efficiently scaling training becomes a challenge. To address it we design Zion—Facebook’s next-generation large-memory training platform that consists of both CPUs and accelerators. Also, we discuss the design requirements of future scale-out training systems.

“NeRF: Representing Scenes As Neural Radiance Fields for View Synthesis”, Mildenhall et al 2020

“NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”⁠, Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng (2020-03-19; ; backlinks; similar):

We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image.

Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
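A much-simplified PyTorch sketch of the scene representation itself, a positional encoding of the 5D input followed by a fully-connected network that outputs color and density (this omits the view-dependence split, hierarchical sampling, and volume rendering; sizes and names are illustrative):

```python
# Toy sketch of a NeRF-style coordinate network: sinusoidal positional encoding
# followed by a plain MLP that outputs (r, g, b, sigma).
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
    # Map each coordinate to [sin(2^k * x), cos(2^k * x)] for k = 0..num_freqs-1.
    freqs = 2.0 ** torch.arange(num_freqs, dtype=torch.float32, device=x.device)
    angles = x[..., None] * freqs                         # (..., in_dim, num_freqs)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1).flatten(-2)


class TinyNeRF(nn.Module):
    def __init__(self, num_freqs: int = 10, hidden: int = 256):
        super().__init__()
        in_dim = 5 * 2 * num_freqs                        # (x, y, z, theta, phi) encoded
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                         # (r, g, b, sigma)
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.mlp(positional_encoding(coords))


print(TinyNeRF()(torch.rand(4096, 5)).shape)  # torch.Size([4096, 4])
```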

“Gesticulator: A Framework for Semantically-aware Speech-driven Gesture Generation”, Kucherenko et al 2020

“Gesticulator: A framework for semantically-aware speech-driven gesture generation”⁠, Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexanderson, Iolanda Leite et al (2020-01-25; ; backlinks; similar):

During speech, people spontaneously gesticulate, which plays a key role in conveying information. Similarly, realistic co-speech gestures are crucial to enable natural and smooth interactions with social agents. Current end-to-end co-speech gesture generation systems use a single modality for representing speech: either audio or text. These systems are therefore confined to producing either acoustically-linked beat gestures or semantically-linked gesticulation (eg. raising a hand when saying “high”): they cannot appropriately learn to generate both gesture types. We present a model designed to produce arbitrary beat and semantic gestures together. Our deep-learning based model takes both acoustic and semantic representations of speech as input, and generates gestures as a sequence of joint angle rotations as output. The resulting gestures can be applied to both virtual agents and humanoid robots. Subjective and objective evaluations confirm the success of our approach. The code and video are available at the project page https://svito-zar.github.io/gesticulator⁠.

“MoGlow: Probabilistic and Controllable Motion Synthesis Using Normalizing Flows”, Henter et al 2019

“MoGlow: Probabilistic and controllable motion synthesis using normalizing flows”⁠, Gustav Eje Henter, Simon Alexanderson, Jonas Beskow (2019-05-16; ; backlinks; similar):

Data-driven modelling and synthesis of motion is an active research area with applications that include animation, games, and social robotics. This paper introduces a new class of probabilistic, generative, and controllable motion-data models based on normalizing flows. Models of this kind can describe highly complex distributions, yet can be trained efficiently using exact maximum likelihood⁠, unlike GANs or VAEs. Our proposed model is autoregressive and uses LSTMs to enable arbitrarily long time-dependencies. Importantly, it is also causal, meaning that each pose in the output sequence is generated without access to poses or control inputs from future time steps; this absence of algorithmic latency is important for interactive applications with real-time motion control. The approach can in principle be applied to any type of motion since it does not make restrictive, task-specific assumptions regarding the motion or the character morphology. We evaluate the models on motion-capture datasets of human and quadruped locomotion. Objective and subjective results show that randomly-sampled motion from the proposed method outperforms task-agnostic baselines and attains a motion quality close to recorded motion capture.

“Scalable Training of Artificial Neural Networks With Adaptive Sparse Connectivity Inspired by Network Science”, Mocanu et al 2018

“Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science”⁠, Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, Madeleine Gibescu, Antonio Liotta et al (2018-06-19; backlinks; similar):

Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (eg. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers.

Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of 2 consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks’ fully-connected layers with sparse ones before training, quadratically reducing the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets.

Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
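A rough sketch of one topology-evolution step on a single weight matrix, as I read the procedure (magnitude-based pruning of a fraction ζ of connections followed by random regrowth is the described ingredient, but the exact schedule and details here are assumptions):

```python
# Rough sketch of a sparse-evolutionary-training step: prune the smallest-magnitude
# active connections, then regrow the same number at random inactive positions.
import torch


def evolve_mask(weight: torch.Tensor, mask: torch.Tensor, zeta: float = 0.3) -> torch.Tensor:
    n_active = int(mask.sum())
    n_remove = int(zeta * n_active)
    # Prune the fraction zeta of active connections with the smallest magnitude.
    magnitudes = weight[mask.bool()].abs()
    threshold = magnitudes.sort().values[n_remove]
    new_mask = mask.clone()
    new_mask[(weight.abs() < threshold) & mask.bool()] = 0
    # Regrow the same number of connections at random, currently inactive positions.
    inactive = (new_mask == 0).nonzero(as_tuple=False)
    n_regrow = int(mask.sum() - new_mask.sum())
    regrow = inactive[torch.randperm(inactive.shape[0])[:n_regrow]]
    new_mask[regrow[:, 0], regrow[:, 1]] = 1
    return new_mask


w = torch.randn(100, 100)
m = (torch.rand(100, 100) < 0.05).float()    # Erdős-Rényi-style sparse initial mask
m = evolve_mask(w, m)
print(m.sum())                               # the number of connections is preserved
```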

“NAIS-Net: Stable Deep Networks from Non-Autonomous Differential Equations”, Ciccone et al 2018

“NAIS-Net: Stable Deep Networks from Non-Autonomous Differential Equations”⁠, Marco Ciccone, Marco Gallieri, Jonathan Masci, Christian Osendorfer, Faustino Gomez (2018-04-19; backlinks; similar):

This paper introduces Non-Autonomous Input-Output Stable Network (NAIS-Net), a very deep architecture where each stacked processing block is derived from a time-invariant non-autonomous dynamical system.

Non-autonomy is implemented by skip connections from the block input to each of the unrolled processing stages and allows stability to be enforced so that blocks can be unrolled adaptively to a pattern-dependent processing depth. NAIS-Net induces non-trivial, Lipschitz input-output maps, even for an infinite unroll length.

We prove that the network is globally asymptotically stable so that for every initial condition there is exactly one input-dependent equilibrium assuming tanh units, and incrementally stable for ReLU units. An efficient implementation that enforces the stability under derived conditions for both fully-connected and convolutional layers is also presented. Experimental results show how NAIS-Net exhibits stability in practice, yielding a substantial reduction in generalization gap compared to ResNets.

“Sharp Models on Dull Hardware: Fast and Accurate Neural Machine Translation Decoding on the CPU”, Devlin 2017

“Sharp Models on Dull Hardware: Fast and Accurate Neural Machine Translation Decoding on the CPU”⁠, Jacob Devlin (2017-05-04; ; backlinks; similar):

Attentional sequence-to-sequence models have become the new standard for machine translation, but one challenge of such models is a substantial increase in training and decoding cost compared to phrase-based systems. Here, we focus on efficient decoding, with a goal of achieving accuracy close to the state-of-the-art in neural machine translation (NMT), while achieving CPU decoding speed/throughput close to that of a phrasal decoder.

We approach this problem from two angles: First, we describe several techniques for speeding up an NMT beam search decoder, which obtain a 4.4× speedup over a very efficient baseline decoder without changing the decoder output. Second, we propose a simple but powerful network architecture which uses an RNN (GRU/LSTM) layer at the bottom, followed by a series of stacked fully-connected layers applied at every timestep. This architecture achieves similar accuracy to a deep recurrent model, at a small fraction of the training and decoding cost. By combining these techniques, our best system achieves a very competitive accuracy of 38.3 BLEU on WMT English-French NewsTest2014, while decoding at 100 words/sec on a single-threaded CPU. We believe this is the best published accuracy/speed trade-off of an NMT system.

“Do Deep Convolutional Nets Really Need to Be Deep and Convolutional?”, Urban et al 2016

“Do Deep Convolutional Nets Really Need to be Deep and Convolutional?”⁠, Gregor Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Rich Caruana, Abdelrahman Mohamed et al (2016-03-17; ⁠, ; backlinks; similar):

Figure 1: Accuracy of student models with different architectures trained to mimic the CIFAR10 ensemble. The average performance of the 5 best models of each hyperparameter-optimization experiment is shown, together with dashed lines indicating the accuracy of the best and the fifth best model from each setting. The short horizontal lines at 10M parameters are the accuracy of models trained without compression on the original 0/​1 hard targets.

Yes, they do. This paper provides the first empirical demonstration that deep convolutional models really need to be both deep and convolutional, even when trained with methods such as distillation that allow small or shallow models of high accuracy to be trained.

Although previous research showed that shallow feed-forward nets sometimes can learn the complex functions previously learned by deep nets while using the same number of parameters as the deep models they mimic, in this paper we demonstrate that the same methods cannot be used to train accurate models on CIFAR-10 unless the student models contain multiple layers of convolution. Although the student models do not have to be as deep as the teacher model they mimic, the students need multiple convolutional layers to learn functions of comparable accuracy as the deep convolutional teacher.

Figure 1 summarizes the results in Table 2 for student models of different depth, number of convolutional layers, and number of parameters when trained to mimic the ensemble teacher model. Student models trained on the ensemble logits are able to achieve accuracies previously unseen on CIFAR-10 for models with so few layers. Also, it is clear that there is a huge gap between the convolutional student models at the top of the figure, and the non-convolutional student models at the bottom of the figure: the most accurate student MLP has accuracy less than 75%, while the least accurate convolutional student model with the same number of parameters but only one convolutional layer has accuracy above 87%. And the accuracy of the convolutional student models increases further as more layers of convolution are added. Interestingly, the most accurate student MLPs with no convolutional layers have only 2 or 3 hidden layers; the student MLPs with 4 or 5 hidden layers are not as accurate.

Comparing the student MLP with only one hidden layer (bottom of the graph) to the student CNN with 1 convolutional layer clearly suggests that convolution is critical for this problem even when models are trained via distillation, and that it is very unlikely that a shallow non-convolutional model with 100 million parameters or less could ever achieve accuracy comparable to a convolutional model. It appears that if convolution is critical for teacher models trained on the original 0/​1 hard targets, it is likely to be critical for student models trained to mimic these teacher models. Adding depth to the student MLPs without adding convolution does not substantially close this “convolutional gap”.

“How Far Can We Go without Convolution: Improving Fully-connected Networks”, Lin et al 2015

“How far can we go without convolution: Improving fully-connected networks”⁠, Zhouhan Lin, Roland Memisevic, Kishore Konda (2015-11-09; backlinks; similar):

We propose ways to improve the performance of fully connected networks. We found that two approaches in particular have a strong effect on performance: linear bottleneck layers and unsupervised pre-training using autoencoders without hidden unit biases. We show how both approaches can be related to improving gradient flow and reducing sparsity in the network.

We show that a fully connected network can yield ~70% classification accuracy on the permutation-invariant CIFAR-10 task, which is much higher than the current state-of-the-art. By adding deformations to the training data, the fully connected network achieves 78% accuracy, which is just 10% short of a decent convolutional network.

“Tensorizing Neural Networks”, Novikov et al 2015

“Tensorizing Neural Networks”⁠, Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, Dmitry Vetrov (2015-09-22; ; backlinks; similar):

Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size.

In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format such that the number of parameters is reduced by a huge factor and at the same time the expressive power of the layer is preserved.

In particular, for the Very Deep VGG networks we report a compression factor of the dense weight matrix of a fully-connected layer of up to 200,000×, leading to a compression factor for the whole network of up to 7×.

“Deep Neural Networks for Large Vocabulary Handwritten Text Recognition”, Bluche 2015

2015-bluche.pdf: “Deep Neural Networks for Large Vocabulary Handwritten Text Recognition”⁠, Théodore Bluche (2015-05-13; ; backlinks; similar):

The automatic transcription of text in handwritten documents has many applications, from automatic document processing, to indexing and document understanding.

One of the most popular approaches nowadays consists in scanning the text line image with a sliding window, from which features are extracted, and modeled by Hidden Markov Models (HMMs). Associated with neural networks, such as Multi-Layer Perceptrons (MLPs) or Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs), and with a language model, these models yield good transcriptions. On the other hand, in many machine learning applications, including speech recognition and computer vision, deep neural networks consisting of several hidden layers recently produced a large reduction of error rates.

In this thesis, we have conducted a thorough study of different aspects of optical models based on deep neural networks in the hybrid neural network / HMM scheme, in order to better understand and evaluate their relative importance.

  1. First, we show that deep neural networks produce consistent and large improvements over networks with one or 2 hidden layers, independently of the kind of neural network, MLP or RNN, and of input, handcrafted features or pixels.
  2. Then, we show that deep neural networks with pixel inputs compete with those using handcrafted features, and that depth plays an important role in the reduction of the performance gap between the 2 kinds of inputs, supporting the idea that deep neural networks effectively build hierarchical and relevant representations of their inputs, and that features are automatically learnt on the way.
  3. Despite the dominance of LSTM-RNNs in the recent literature of handwriting recognition, we show that deep MLPs achieve comparable results. Moreover, we evaluated different training criteria. With sequence-discriminative training, we report similar improvements for MLP/​HMMs as those observed in speech recognition.
  4. We also show how the Connectionist Temporal Classification framework is especially suited to RNNs.
  5. Finally, the novel dropout technique to regularize neural networks was recently applied to LSTM-RNNs. We tested its effect at different positions in LSTM-RNNs, thus extending previous works, and we show that its relative position to the recurrent connections is important.

We conducted the experiments on 3 public databases, representing 2 languages (English and French) and 2 epochs, using different kinds of neural network inputs: handcrafted features and pixels. We validated our approach by taking part in the HTRtS contest in 2014.

The results of the final systems presented in this thesis, namely MLPs and RNNs, with handcrafted feature or pixel inputs, are comparable to the state-of-the-art on Rimes and IAM. Moreover, the combination of these systems outperformed all published results on the considered databases.

[Keywords: pattern recognition, Hidden Markov Models, neural networks, handwriting recognition]
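Point 4 above concerns the Connectionist Temporal Classification (CTC) criterion; the following is a minimal sketch, assuming PyTorch's nn.CTCLoss and invented tensor sizes rather than the thesis's actual setup, of CTC training for a frame-synchronous bidirectional LSTM transcriber.

```python
# A sketch of CTC training for a frame-synchronous handwriting transcriber
# (illustrative sizes; not the thesis's models).
import torch
import torch.nn as nn

T, N, F, H, C = 100, 4, 40, 128, 80     # frames, batch, features, hidden, charset incl. blank
rnn = nn.LSTM(F, H, bidirectional=True)
head = nn.Linear(2 * H, C)              # index 0 is reserved for the CTC blank
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

x = torch.randn(T, N, F)                              # sliding-window frame features
targets = torch.randint(1, C, (N, 20))                # label indices, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 20, dtype=torch.long)

h, _ = rnn(x)                                         # (T, N, 2H)
log_probs = head(h).log_softmax(dim=-1)               # (T, N, C)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```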

“Do Deep Nets Really Need to Be Deep?”, Ba & Caruana 2013

“Do Deep Nets Really Need to be Deep?”⁠, Lei Jimmy Ba, Rich Caruana (2013-12-21; backlinks; similar):

Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, in some cases the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on the TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.
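A minimal sketch of the mimic-training idea (assuming PyTorch and toy layer sizes; not the authors' code): a shallow, wide student regresses the pre-softmax logits of a trained deep teacher with an L2 loss, rather than training on the original hard labels.

```python
# A sketch of mimic training: a shallow student is fit to the teacher's
# logits with an L2 loss (illustrative sizes; not the authors' setup).
import torch
import torch.nn as nn

teacher = nn.Sequential(                 # stand-in for an already-trained deep net
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
).eval()

student = nn.Sequential(                 # one wide hidden layer, no depth
    nn.Linear(784, 8000), nn.ReLU(),
    nn.Linear(8000, 10),
)

opt = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
mse = nn.MSELoss()

x = torch.randn(256, 784)                # a batch of (transfer-set) inputs
with torch.no_grad():
    target_logits = teacher(x)           # mimic targets: logits, not softmax outputs or labels
loss = mse(student(x), target_logits)
opt.zero_grad(); loss.backward(); opt.step()
```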

“Network In Network”, Lin et al 2013

“Network In Network”⁠, Min Lin, Qiang Chen, Shuicheng Yan (2013-12-16; backlinks; similar):

We propose a novel deep network structure called “Network In Network” (NIN) to enhance model discriminability for local patches within the receptive field. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to abstract the data within the receptive field. We instantiate the micro neural network with a multilayer perceptron, which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner to a CNN; they are then fed into the next layer. Deep NIN can be implemented by stacking multiple such structures. With enhanced local modeling via the micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers. We demonstrate state-of-the-art classification performance with NIN on CIFAR-10 and CIFAR-100, and reasonable performance on SVHN and MNIST.
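A minimal sketch of the idea (assuming PyTorch; layer sizes are illustrative, not the paper's exact configuration): each “mlpconv” block is a conventional convolution followed by 1×1 convolutions, i.e. a small MLP applied at every spatial location, and global average pooling replaces the fully connected classifier.

```python
# A sketch of an NIN-style network: "mlpconv" blocks (a convolution followed
# by 1x1 convolutions) and global average pooling instead of a fully
# connected classifier (illustrative sizes).
import torch
import torch.nn as nn

def mlpconv(in_ch, out_ch, k, stride=1, pad=0):
    # a conventional conv filter followed by two 1x1 convs = a per-patch MLP
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, stride, pad), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 1), nn.ReLU(),
    )

net = nn.Sequential(
    mlpconv(3, 192, 5, pad=2), nn.MaxPool2d(3, 2, 1),
    mlpconv(192, 192, 5, pad=2), nn.MaxPool2d(3, 2, 1),
    mlpconv(192, 10, 3, pad=1),          # last block emits one feature map per class
    nn.AdaptiveAvgPool2d(1),             # global average pooling
    nn.Flatten(),                        # (batch, 10) class scores
)

logits = net(torch.randn(8, 3, 32, 32))  # CIFAR-10-sized input
```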

“Deep Big Multilayer Perceptrons for Digit Recognition”, Cireşan et al 2012

2012-ciresan.pdf: “Deep Big Multilayer Perceptrons for Digit Recognition”⁠, Dan Claudiu Cireşan, Ueli Meier, Luca Maria Gambardella, Jürgen Schmidhuber (2012; backlinks; similar):

The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent advancement by others dates back 8 years (error rate 0.4%).

Good old on-line backpropagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark with a single MLP, and 0.31% with a committee of 7 MLPs.

All we need to achieve this result (the best until 2011) are many hidden layers, many neurons per layer, many deformed training images to avoid overfitting, and graphics cards to greatly speed up learning.

[Keywords: neural network, multilayer perceptron, GPU⁠, training set deformations, MNIST, committee, backpropagation]

Note: This work combines 3 previously published papers [1,2,3].

…In recent decades the amount of raw computing power per Euro has grown by a factor of 100–1000 per decade. Our results show that this ongoing hardware progress may be more important than advances in algorithms and software (although the future will belong to methods combining the best of both worlds). Current graphics cards (GPUs) are already more than 50× faster than standard microprocessors when it comes to training big and deep neural networks by the ancient algorithm, online backpropagation (weight update rate up to 7.5×10⁹/s, and more than 10¹⁵ weight updates per trained network). On the competitive MNIST handwriting benchmark, single-precision floating-point GPU-based neural nets surpass all previously reported results, including those obtained by much more complex methods involving specialized architectures, unsupervised pre-training, combinations of machine learning classifiers etc. Training sets of sufficient size to avoid overfitting are obtained by appropriately deforming images.

Of course, the approach is not limited to handwriting, and obviously holds great promise for many visual and other pattern recognition problems.
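A minimal sketch of the recipe (assuming PyTorch; the layer widths echo the paper's 2500-2000-1500-1000-500 architecture, but everything else, including the deformation pipeline, is stubbed out): a plain deep MLP trained by online backpropagation on one deformed digit image at a time.

```python
# A sketch of a big, deep, plain MLP for 28x28 digits trained by online
# backpropagation; the deformed-image generation is stubbed out.
import torch
import torch.nn as nn

widths = [784, 2500, 2000, 1500, 1000, 500, 10]
layers = []
for i in range(len(widths) - 2):
    layers += [nn.Linear(widths[i], widths[i + 1]), nn.Tanh()]
layers += [nn.Linear(widths[-2], widths[-1])]
mlp = nn.Sequential(*layers)

opt = torch.optim.SGD(mlp.parameters(), lr=1e-3)   # plain online SGD, batch size 1
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 784)                             # one (deformed) digit per update
y = torch.randint(0, 10, (1,))
opt.zero_grad()
loss_fn(mlp(x), y).backward()
opt.step()
```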

“Extraction De Séquences Numériques Dans Des Documents Manuscrits Quelconques”, Chatelain 2006

2006-chatelain.pdf: “Extraction de séquences numériques dans des documents manuscrits quelconques”⁠, Clément Chatelain (2006-12-05; backlinks; similar):

Within the framework of the automatic processing of incoming mail documents, we present in this thesis the design and development of a system for extracting numerical fields from weakly constrained handwritten documents.

Although the recognition of isolated handwritten entities can be considered a partially solved problem, the extraction of information from images of complex, free-layout documents is still a challenge. This problem requires combining handwriting recognition methods with information extraction methods inspired by approaches developed for electronic documents.

Our contribution consists of the design and implementation of 2 different strategies: the first extends classical handwriting recognition methods, while the second is inspired by approaches used for information extraction in electronic documents.

The results obtained on a real handwritten mail database show that our second approach is substantially better.

Finally, a complete, generic, and efficient system is produced, addressing one of the emerging challenges in the automatic reading of handwritten documents: the extraction of complex information from document images. [Text of thesis is in French.]

“Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis”, Simard et al 2003

2003-simard.pdf#microsoft: “Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis”⁠, Patrice Y. Simard, Dave Steinkraus, John C. Platt (2003; backlinks; similar):

Neural networks are a powerful technology for classification of visual inputs arising from documents. However, there is a confusing plethora of different neural network methods that are used in the literature and in industry.

This paper describes a set of concrete best practices that document analysis researchers can use to get good results with neural networks.

The most important practice is getting a training set as large as possible: we expand the training set by adding a new form of distorted data⁠.

The next most important practice is that convolutional neural networks are better suited for visual document tasks than fully connected networks. We propose that a simple “do-it-yourself” implementation of convolution with a flexible architecture is suitable for many visual document problems. This simple convolutional neural network does not require complex methods, such as momentum, weight decay, structure-dependent learning rates, averaging layers, tangent prop, or even fine-tuning of the architecture.

The end result is a very simple yet general architecture which can yield state-of-the-art performance for document analysis.

We illustrate our claims on the MNIST set of English digit images.
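A minimal sketch of the elastic-distortion augmentation the paper recommends (assuming SciPy; alpha and sigma are illustrative values, not necessarily the paper's): a random per-pixel displacement field is smoothed by a Gaussian of width sigma, scaled by alpha, and used to warp each training image.

```python
# A sketch of elastic distortion: smooth a random displacement field with a
# Gaussian (width sigma), scale it by alpha, and warp the image with it.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_distort(image, alpha=36.0, sigma=6.0, seed=None):
    """image: 2-D array, e.g. a 28x28 digit; returns a warped copy."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    coords = np.vstack([(y + dy).ravel(), (x + dx).ravel()])
    return map_coordinates(image, coords, order=1).reshape(h, w)

augmented = elastic_distort(np.random.rand(28, 28))
```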

Miscellaneous