AB-testing (Link Bibliography)

“AB-testing” links:

  1. http://www.psychology.sunysb.edu/attachment/measures/content/maccallum_on_dichotomizing.pdf

  2. Regression

  3. http://www.psy.lmu.de/allg2/download/schoenbrodt/pub/stable_correlations.pdf

  4. https://github.com/danmaz74/ABalytics

  5. https://opinionator.blogs.nytimes.com/2012/08/08/hear-all-ye-people-hearken-o-earth/

  6. Google-shutdowns

  7. http://ianstormtaylor.com/design-tip-never-use-black/

  8. http://www.beelinereader.com/

  9. https://news.ycombinator.com/item?id=6335784

  10. https://stackoverflow.com/questions/1197575/can-scripts-be-inserted-with-innerhtml

  11. https://news.ycombinator.com/item?id=7539390

  12. http://ignorethecode.net/blog/2010/04/20/footnotes/

  13. About#anonymous-feedback

  14. https://programmablesearchengine.google.com/about/cse/publicurl?cx=009114923999563836576:1eorkzz2gp4

  15. https://programmablesearchengine.google.com/about/cse/publicurl?cx=009114923999563836576:dv0a4ndtmly

  16. Ads

  17. “The Unreasonable Effectiveness of Recurrent Neural Networks”, Andrej Karpathy (2015-05-21):

    [Exploration of char-RNN neural nets for generating text. Karpathy codes a simple recurrent NN which generates character-by-character, and discovers that it is able to generate remarkably plausible text (at the syntactic level) for Paul Graham, Shakespeare, Wikipedia, LaTeX, Linux C code, and baby names—all using the same generic architecture. Visualizing the internal activity of the char-RNNs, they seem to be genuinely understanding some of the recursive syntactic structure of the text in a way that other text-generation methods like n-grams cannot. Inspired by this post, I began tinkering with char-RNNs for poetry myself; as of 2019, char-RNNs have been largely obsoleted by the new Transformer architecture⁠, but recurrency will make a comeback and Karpathy’s post is still a valuable and fun read.]

    There’s something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training, my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I’ve in fact reached the opposite conclusion). Fast forward about a year: I’m training RNNs all the time and I’ve witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you. We’ll train RNNs to generate text character by character and ponder the question “how is that even possible?”
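    The “character by character” setup is easy to see in a toy baseline: a character-bigram model trained on raw text and sampled one character at a time. This is the n-gram stand-in the annotation contrasts with char-RNNs, not Karpathy’s LSTM; the `corpus` is a made-up example.

    ```python
    import random
    from collections import defaultdict

    def train_char_bigrams(text):
        """Count character-bigram transitions: how often each character follows another."""
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
        return counts

    def sample(counts, seed_char, length, rng):
        """Generate text one character at a time from the bigram counts."""
        out = [seed_char]
        for _ in range(length):
            nxt = counts.get(out[-1])
            if not nxt:
                break
            chars, weights = zip(*nxt.items())
            out.append(rng.choices(chars, weights=weights)[0])
        return "".join(out)

    corpus = "the quick brown fox jumps over the lazy dog. " * 50
    model = train_char_bigrams(corpus)
    print(sample(model, "t", 40, random.Random(0)))
    ```

    A bigram model only conditions on the previous character, which is exactly the limitation (no recursive syntactic structure) that char-RNNs overcome.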

  18. “Learning to Execute”, Wojciech Zaremba, Ilya Sutskever (2014-10-17):

    Recurrent Neural Networks (RNNs) with Long Short-Term Memory units (LSTMs) are widely used because they are expressive and are easy to train. Our interest lies in empirically evaluating the expressiveness and the learnability of LSTMs in the sequence-to-sequence regime by training them to evaluate short computer programs, a domain that has traditionally been seen as too complex for neural networks. We consider a simple class of programs that can be evaluated with a single left-to-right pass using constant memory. Our main result is that LSTMs can learn to map the character-level representations of such programs to their correct outputs. Notably, it was necessary to use curriculum learning, and while conventional curriculum learning proved ineffective, we developed a new variant of curriculum learning that improved our networks’ performance in all experimental conditions. The improved curriculum had a dramatic impact on an addition problem, making it possible to train an LSTM to add two 9-digit numbers with 99% accuracy.
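    The data side of the addition task is simple to sketch. The hypothetical `mixed_curriculum` helper below mixes problem difficulties rather than strictly escalating them, a rough simplification of the paper’s improved curriculum variant; the LSTM itself is omitted.

    ```python
    import random

    def addition_example(n_digits, rng):
        """One character-level training pair for the addition task."""
        a = rng.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        b = rng.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        return f"{a}+{b}", str(a + b)

    def mixed_curriculum(max_digits, n_examples, rng):
        """Mix easy and hard problem lengths in every batch, instead of the
        conventional strictly-increasing-difficulty schedule (a simplified
        sketch of the paper's 'combined' strategy)."""
        return [addition_example(rng.randint(1, max_digits), rng)
                for _ in range(n_examples)]

    rng = random.Random(0)
    batch = mixed_curriculum(9, 5, rng)
    ```

    The model never sees the integers as numbers, only as character strings like `"123+456"`, which is what makes the 99%-accuracy 9-digit result notable.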

  19. “Neural Turing Machines”, Alex Graves, Greg Wayne, Ivo Danihelka (2014-10-20):

    We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
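    One piece of the attentional read mechanism, content-based addressing, can be sketched in isolation: the controller emits a key, and the read is a softmax-weighted mix of memory rows by similarity to that key. The shift and sharpening steps of the full NTM head are omitted, and `beta` is a made-up sharpness value.

    ```python
    import math

    def content_read(key, memory, beta=10.0):
        """Differentiable content-based read: soft-attend over memory rows by
        cosine similarity to the key, then return the weighted mix of rows.
        (One component of NTM addressing; location-based steps omitted.)"""
        def cos(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            nu = math.sqrt(sum(a * a for a in u)) or 1e-9
            nv = math.sqrt(sum(b * b for b in v)) or 1e-9
            return dot / (nu * nv)
        scores = [beta * cos(key, row) for row in memory]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [x / z for x in w]
        # Read vector: attention-weighted combination of memory rows.
        return [sum(wi * row[j] for wi, row in zip(w, memory))
                for j in range(len(memory[0]))]
    ```

    Because the read is a smooth weighted sum rather than a hard lookup, gradients flow through the memory access, which is what lets the whole system train end-to-end with gradient descent.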

  20. “Pointer Networks”, Oriol Vinyals, Meire Fortunato, Navdeep Jaitly (2015-06-09):

    We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existent approaches such as sequence-to-sequence and Neural Turing Machines, because the number of target classes in each step of the output depends on the length of the input, which is variable. Problems such as sorting variable sized sequences, and various combinatorial optimization problems belong to this class. Our model solves the problem of variable size output dictionaries using a recently proposed mechanism of neural attention. It differs from the previous attention attempts in that, instead of using attention to blend hidden units of an encoder to a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems—finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem—using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems.
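    The “attention as a pointer” idea reduces to scoring each input position against the decoder state and treating the softmax over *positions* as the output distribution, so the output dictionary automatically tracks the input length. The sketch below uses dot-product scoring for brevity (the paper uses an additive attention scorer), with toy 2-d states.

    ```python
    import math

    def pointer_attention(query, encoder_states):
        """Attention-as-pointer: score each input position against the decoder
        query and return (distribution over positions, argmax position).
        The 'vocabulary' is the input itself, so its size varies with input length."""
        scores = [sum(q * e for q, e in zip(query, state))
                  for state in encoder_states]
        m = max(scores)                      # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        probs = [e / z for e in exps]
        return probs, max(range(len(probs)), key=probs.__getitem__)

    # The pointer selects the input position most aligned with the query,
    # however many positions the input happens to have.
    states = [(0.1, 0.9), (0.9, 0.1), (0.5, 0.5)]
    probs, idx = pointer_attention((1.0, 0.0), states)
    ```

    A conventional attention decoder would instead blend the encoder states into a context vector and emit a token from a fixed vocabulary, which is exactly what fails when the target classes are input positions.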

  21. https://jigsaw.w3.org/css-validator/

  22. http://csstidy.sourceforge.net/

  23. http://incompleteideas.net/sutton/book/the-book.html

  24. http://umichrl.pbworks.com/w/page/7597597/Successes%20of%20Reinforcement%20Learning

  25. http://neuralnetworksanddeeplearning.com/

  26. http://repositorium.uni-osnabrueck.de/bitstream/urn:nbn:de:gbv:700-2008112111/2/E-Diss839_thesis.pdf

  27. http://diyhpl.us/~nmz787/pdf/Human-level_control_through_deep_reinforcement_learning.pdf

  28. https://sites.google.com/a/deepmind.com/dqn/

  29. https://github.com/soumith/cvpr2015/blob/master/DQN%20Training%20iTorch.ipynb

  30. “Massively Parallel Methods for Deep Reinforcement Learning”, Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, David Silver (2015-07-15):

    We present the first massively distributed architecture for deep reinforcement learning. This architecture uses four main components: parallel actors that generate new behaviour; parallel learners that are trained from stored experience; a distributed neural network to represent the value function or behaviour policy; and a distributed store of experience. We used our architecture to implement the Deep Q-Network algorithm (DQN). Our distributed algorithm was applied to 49 Atari 2600 games from the Arcade Learning Environment, using identical hyperparameters. Our performance surpassed non-distributed DQN in 41 of the 49 games and also reduced the wall-time required to achieve these results by an order of magnitude on most games.
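    The glue between the parallel actors and learners is the experience store. A single-process stand-in for it is just a bounded buffer that actors append to and learners sample uniform minibatches from; the class name and capacity below are illustrative, not from the paper.

    ```python
    import random
    from collections import deque

    class ReplayStore:
        """Minimal experience store: actors append (state, action, reward,
        next_state) transitions, learners sample uniform minibatches.
        A single-process sketch of the paper's distributed store."""
        def __init__(self, capacity):
            self.buf = deque(maxlen=capacity)  # oldest transitions evicted first

        def add(self, transition):
            self.buf.append(transition)

        def sample(self, k, rng):
            return rng.sample(list(self.buf), k)
    ```

    Decoupling data generation from learning through such a store is what lets actors and learners scale independently; the distributed version shards the buffer across machines rather than changing this interface.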

  31. http://citeseerx.ist.psu.edu/viewdoc/download?doi=

  32. http://www.jair.org/media/301/live-301-1561-jair.ps

  33. 1990-barto.pdf

  34. 1989-sutton.pdf

  35. http://videolectures.net/rldm2015_silver_reinforcement_learning/

  36. 1957-bellman-dynamicprogramming.pdf

  37. http://www.deeplearningbook.org/contents/rnn.html

  38. #deep-reinforcement-learning

  39. https://github.com/karpathy/char-rnn

  40. #karpathy-2015

  41. http://torch.ch/

  42. http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-linux/index.html#ubuntu-installation

  43. https://developer.nvidia.com/cuda-downloads?sid=907142

  44. http://torch.ch/docs/getting-started.html

  45. https://github.com/torch/torch7/wiki/Cheatsheet

  46. Notes#efficient-natural-language

  47. “Generating Sequences With Recurrent Neural Networks”, Alex Graves (2013-08-04):

    This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.

  48. http://prize.hutter1.net/

  49. https://www.amazon.com/Acer-Aspire-Edition-VN7-791G-792A-17-3-Inch/dp/B00WJSQRN0

  50. https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#LaunchInstanceWizard:ami=ami-b36981d8

  51. https://web.archive.org/web/20170908094102/http://timdettmers.com/2015/03/09/deep-learning-hardware-guide/

  52. https://aws.amazon.com/ec2/pricing/

  53. Archiving-URLs

  54. Sort

  55. https://jigsaw.w3.org/css-validator/#validate-by-input

  56. http://www.catb.org/jargon/html/B/banana-problem.html

  57. https://github.com/karpathy/char-rnn/issues/138#issuecomment-162763435

  58. http://unminify.com

  59. https://news.ycombinator.com/item?id=10013720

  60. http://nicolasgallagher.com/micro-clearfix-hack/

  61. https://www.dropbox.com/s/5719eisknt0u9fi/lm_css_epoch24.00_0.7660.t7.xz

  62. https://www.dropbox.com/s/bo137n58e8wm0rk/best.txt

  63. https://news.ycombinator.com/item?id=10012625

  64. https://old.reddit.com/r/MachineLearning/comments/3fzau7/training_a_neural_network_to_generate_css/

  65. https://old.reddit.com/r/compsci/comments/3fy9b0/training_a_neural_net_to_generate_css/

  66. https://github.com/sytelus/HackerNewsData

  67. http://www.csszengarden.com/

  68. https://github.com/giakki/uncss

  69. http://www.w3.org/TR/2011/REC-CSS2-20110607/cascade.html#cascade

  70. https://css-tricks.com/poll-results-how-do-you-order-your-css-properties/

  71. “On the Near Impossibility of Measuring the Returns to Advertising”, Randall A. Lewis, Justin M. Rao (2013-04-23; economics, advertising, statistics/decision):

    Classical theories of the firm assume access to reliable signals to measure the causal impact of choice variables on profit. For advertising expenditure we show, using 25 online field experiments (representing $2.8 million in 2013 dollars) with major U.S. retailers and brokerages, that this assumption typically does not hold. Statistical evidence from the randomized trials is very weak because individual-level sales are incredibly volatile relative to the per capita cost of a campaign—a “small” impact on a noisy dependent variable can generate positive returns. A concise statistical argument shows that the required sample size for an experiment to generate sufficiently informative confidence intervals is typically in excess of ten million person-weeks. This also implies that heterogeneity bias (or model misspecification) unaccounted for by observational methods only needs to explain a tiny fraction of the variation in sales to severely bias estimates. The weak informational feedback means most firms cannot even approach profit maximization.
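    The core sample-size argument can be reproduced with a standard two-arm power calculation: sales variance is enormous relative to the per-person lift a campaign needs to break even. The σ and δ values below are illustrative magnitudes in the spirit of the paper, not its exact figures.

    ```python
    import math

    def required_n_per_arm(sigma, delta, z_alpha=1.96, z_beta=0.84):
        """Sample size per arm to detect a mean lift `delta` on an outcome
        with standard deviation `sigma` (95% two-sided test, 80% power)."""
        return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

    # Illustrative numbers: weekly per-customer sales with sigma around $75,
    # and a campaign costing roughly $0.35/person, so break-even requires
    # detecting a lift of about that size.
    n = required_n_per_arm(sigma=75.0, delta=0.35)
    ```

    The required n scales with (σ/δ)², so a noisy outcome combined with a tiny break-even effect pushes the experiment into the hundreds of thousands of subjects per arm even under these mild assumptions, consistent with the paper’s ten-million-person-week figure for tighter intervals.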

  72. http://www.bulletproofexec.com/