## Directories

## Files

`2020-talebi.pdf#google`

: “Rank-Smoothed Pairwise Learning In Perceptual Quality Assessment” (2020-09-30): Conducting pairwise comparisons is a widely used approach in curating human perceptual preference data. Typically raters are instructed to make their choices according to a specific set of rules that address certain dimensions of image quality and aesthetics. The outcome of this process is a dataset of sampled image pairs with their associated empirical preference probabilities. Training a model on these pairwise preferences is a common deep learning approach. However, optimizing by gradient descent through mini-batch learning means that the “global” ranking of the images is not explicitly taken into account. In other words, each step of the gradient descent relies only on a limited number of pairwise comparisons. In this work, we demonstrate that regularizing the pairwise empirical probabilities with aggregated rankwise probabilities leads to a more reliable training loss. We show that training a deep image quality assessment model with our rank-smoothed loss consistently improves the accuracy of predicting human preferences.
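Not the paper’s exact formulation, but a minimal NumPy sketch of the general idea: blend each pair’s empirical preference probability with a probability implied by global (rankwise) scores before taking the pairwise cross-entropy. The mixing weight `alpha` and the Bradley–Terry form of the rankwise probability are illustrative assumptions here.

```python
import numpy as np

def rank_smoothed_targets(p_emp, s_i, s_j, alpha=0.5):
    """Blend empirical pairwise preference probabilities with
    probabilities implied by global (rankwise) quality scores.

    p_emp      : empirical P(i preferred over j) for each pair
    s_i, s_j   : global quality scores of the two images in each pair
    alpha      : mixing weight (hypothetical; the paper's scheme differs)
    """
    # Bradley-Terry / logistic probability implied by the global scores:
    p_rank = 1.0 / (1.0 + np.exp(-(s_i - s_j)))
    return alpha * p_emp + (1.0 - alpha) * p_rank

def pairwise_loss(logits_i, logits_j, targets):
    """Cross-entropy between the model's pairwise preference and the
    smoothed targets; this is the quantity minibatch SGD would minimize."""
    p_model = 1.0 / (1.0 + np.exp(-(logits_i - logits_j)))
    eps = 1e-12
    return -np.mean(targets * np.log(p_model + eps)
                    + (1 - targets) * np.log(1 - p_model + eps))
```

With `alpha=1` the targets reduce to the raw empirical probabilities, so the smoothing term is a pure regularizer toward the global ranking.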

`2019-aldridge.pdf`

: “Group Testing: An Information Theory Perspective”, Matthew Aldridge, Oliver Johnson, Jonathan Scarlett

`2010-martino.pdf`

: “Case studies in Bayesian computation using INLA” (2010): Latent Gaussian models are a common construct in statistical applications where a latent Gaussian field, indirectly observed through data, is used to model, for instance, time and space dependence or the smooth effect of covariates. Many well-known statistical models, such as smoothing-spline models, space time models, semiparametric regression, spatial and spatio-temporal models, log-Gaussian Cox models, and geostatistical models are latent Gaussian models.

Integrated Nested Laplace approximation (INLA) is a new approach to implement Bayesian inference for such models. It provides approximations of the posterior marginals of the latent variables which are both very accurate and extremely fast to compute. Moreover, INLA treats latent Gaussian models in a general way, thus allowing for a great deal of automation in the inferential procedure. The `inla` programme, bundled in the R library `INLA`, is a prototype of such a black-box for inference on latent Gaussian models which is both flexible and user-friendly. It is meant to, hopefully, make latent Gaussian models applicable, useful and appealing for a larger class of users.

[**Keywords**: approximate Bayesian inference, latent Gaussian model, Laplace approximations, structured additive regression models]

`2008-ailon.pdf`

: “Aggregating inconsistent information: Ranking and clustering” (2008-11): We address optimization problems in which we are given contradictory pieces of input information and the goal is to find a globally consistent solution that minimizes the extent of disagreement with the respective inputs. Specifically, the problems we address are rank aggregation, the feedback arc set problem on tournaments, and correlation and consensus clustering. We show that for all these problems (and various weighted versions of them), we can obtain improved approximation factors using essentially the same remarkably simple algorithm. Additionally, we almost settle a long-standing conjecture of Bang-Jensen and Thomassen and show that unless NP⊆BPP, there is no polynomial time algorithm for the problem of minimum feedback arc set in tournaments.
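The “remarkably simple algorithm” of the abstract is a quicksort-style random-pivot procedure: pick a random pivot, split the remaining items into those preferred to the pivot and those not, and recurse. A minimal sketch, where the `prefer` majority oracle and all weighting details are simplified assumptions:

```python
import random

def pivot_rank_aggregate(items, prefer, rng=None):
    """Quicksort-style pivoting for rank aggregation: choose a random
    pivot, place every other item before or after it according to the
    pairwise oracle, and recurse on the two halves.

    prefer(a, b) -> True if a should precede b (e.g. a majority of the
    input rankings put a before b)."""
    rng = rng or random.Random(0)
    if len(items) <= 1:
        return list(items)
    pivot = rng.choice(items)
    before = [x for x in items if x != pivot and prefer(x, pivot)]
    after = [x for x in items if x != pivot and not prefer(x, pivot)]
    return (pivot_rank_aggregate(before, prefer, rng)
            + [pivot]
            + pivot_rank_aggregate(after, prefer, rng))
```

A natural `prefer` for rank aggregation is a majority vote over the input rankings; on an inconsistent (cyclic) tournament the output depends on the pivot choices, which is where the approximation analysis comes in.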

`1985-wolf.pdf`

: “Born again group testing: Multiaccess communications”, J. Wolf

`1978-elo-theratingofchessplayerspastandpresent.pdf`

: “*The Rating of Chessplayers, Past and Present (Second Edition)*” (1978): One of the most extraordinary books ever written about chess and chessplayers, this authoritative study goes well beyond a lucid explanation of how today’s chessmasters and tournament players are rated. Twenty years’ research and practice produce a wealth of thought-provoking and hitherto unpublished material on the nature and development of high-level talent:

Just what constitutes an “exceptional performance” at the chessboard? Can you really profit from chess lessons? What is the lifetime pattern of Grandmaster development? Where are the masters born? Does your child have master potential?

The step-by-step rating system exposition should enable any reader to become an expert on it. For some it may suggest fresh approaches to performance measurement and handicapping in bowling, bridge, golf and elsewhere. 43 charts, diagrams and maps supplement the text.

How and why are chessmasters statistically remarkable? How much will your rating rise if you work with the devotion of a Steinitz? At what age should study begin? What toll does age take, and when does it begin?

Development of the performance data, covering hundreds of years and thousands of players, has revealed a fresh and exciting version of chess history. One of the many tables identifies 500 all-time chess greats, with personal data and top lifetime performance ratings.

Just what does government assistance do for chess? What is the Soviet secret? What can we learn from the Icelanders? Why did the small city of Plovdiv produce three Grandmasters in only ten years? Who are the untitled dead? Did Euwe take the championship from Alekhine on a fluke? How would Fischer fare against Morphy in a ten-wins match?

“It was inevitable that this fascinating story be written”, asserts FIDE President Max Euwe, who introduces the book and recognizes the major part played by ratings in today’s burgeoning international activity. Although this is the definitive ratings work, with statistics alone sufficient to place it in every reference library, it was written by a gentle scientist for pleasurable reading—for the enjoyment of the truths, the questions, and the opportunities it reveals.
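The rating system the book expounds centers on an expected-score formula on the 400-point logistic scale and a linear post-game update; a minimal sketch (the K-factor of 32 is a common illustrative choice, not the book’s prescription for every player class):

```python
def elo_expected(r_a, r_b):
    """Expected score of player A against player B under the Elo
    logistic model on the standard 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """Update both ratings after one game. score_a is 1 for an A win,
    0.5 for a draw, 0 for a loss. k (the K-factor) controls volatility."""
    e_a = elo_expected(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b
```

Note that the two updates are equal and opposite, so rating points are conserved within a game; this zero-sum property is what lets pooled ratings track relative strength over time.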

`2020-04-03-florianloitsch-tenkilogramsofchocolatetournament-data.ods`
