
Calculating in R The Expected Maximum of a Gaussian Sample using Order Statistics

In generating a sample of _n_ datapoints drawn from a normal/Gaussian distribution, how big on average the biggest datapoint is will depend on how large _n_ is. I implement & compare, in the R programming language, several approaches to estimating this expected maximum. (statistics, computer science, R)
created: 22 Jan 2016; modified: 18 Oct 2018; status: finished; confidence: highly likely; importance: 5

In generating a sample of n datapoints drawn from a normal/Gaussian distribution with a particular mean/SD, how big on average the biggest datapoint is will depend on how large n is. Knowing this average is useful in a number of areas like sports or breeding or manufacturing, as it defines how bad/good the worst/best datapoint will be (eg the score of the winner in a multi-player game).

The mean/average/expectation of the maximum order statistic of a draw of n samples from a normal distribution has no exact closed-form formula, unfortunately, and is generally not built into any programming language’s libraries.

I implement & compare some of the approaches to estimating this order statistic in the R programming language, for both the maximum and the general order statistic. The overall best approach is to calculate the exact order statistics for the n range of interest using numerical integration via lmomco and cache them in a lookup table, rescaling the mean/SD as necessary for arbitrary normal distributions; next best is a polynomial regression approximation; finally, the Elfving correction to the Blom 1958 approximation is fast, easily implemented, and accurate for reasonably large n such as n>100.

Visualizing maxima/minima in order statistics with increasing n in each sample (1-100).


Monte Carlo

Most simply and directly, we can estimate it using a Monte Carlo simulation with hundreds of thousands of iterations:

scores  <- function(n, sd) { rnorm(n, mean=0, sd=sd); }
gain    <- function(n, sd) { scores <- scores(n, sd)
                             return(max(scores)); }
simGain <- function(n, sd=1, iters=500000) {
                             mean(replicate(iters, gain(n, sd))); }

But in R this can take seconds for small n and gets worse as n increases into the hundreds, as we need to calculate over increasingly large samples of random normals (so one could consider this $\mathcal{O}(n)$); this makes use of the simulation difficult when nested in higher-level procedures such as anything involving resampling or simulation. In R, calling functions many times is slower than calling a function once in a vectorized way where all the values can be processed in a single batch. We can try to vectorize this simulation by generating $n \cdot i$ random normals, grouping them into a large matrix with n columns (each row being one n-sized batch of samples), then computing the maximum of the i rows, and the mean of the maximums. This is about twice as fast for small n; implemented using rowMaxs from the R package matrixStats, it is anywhere from 25% to 500% faster (at the expense of likely much higher memory usage, as the R interpreter is unlikely to apply any optimizations such as Haskell’s stream fusion):

simGain2 <- function(n, sd=1, iters=500000) {
    mean(apply(matrix(ncol=n, data=rnorm(n*iters, mean=0, sd=sd)), 1, max)) }

simGain3 <- function(n, sd=1, iters=500000) {
    mean(rowMaxs(matrix(ncol=n, data=rnorm(n*iters, mean=0, sd=sd)))) }

Each simulation is too cheap to be worth parallelizing on its own, but there are so many iterations that they can be usefully split up and run as fractions in separate processes; something like

library(parallel)
simGainP <- function(n, sd=1, iters=500000, n.parallel=4) {
   mean(unlist(mclapply(1:n.parallel, function(i) {
    mean(replicate(iters/n.parallel, gain(n, sd))); }))) }
We can treat the simulation estimates as exact and use memoisation such as that provided by the R package memoise to cache results & never recompute them, but it will still be slow on the first calculation. So it would be good to have either an exact algorithm or an accurate approximation: for one of my analyses, I want accuracy to ±0.0006 SDs, which requires large Monte Carlo samples.
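A minimal sketch of such caching (assuming the memoise package is installed; the first call still pays the full simulation cost):

```r
library(memoise)
## Wrap the Monte Carlo estimator so repeated calls with the same
## arguments return the cached result instead of re-simulating:
simGainCached <- memoise(simGain)
simGainCached(100) ## slow: runs the full 500,000-iteration simulation
simGainCached(100) ## fast: served from the cache
```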

Upper bounds

To summarize the Cross Validated discussion: the simplest upper bound is $E[Z] \leq \sigma \cdot \sqrt{2 \cdot \log(n)}$, which makes the diminishing returns clear. Implementation:

upperBoundMax <- function(n, sd=1) { sd * sqrt(2 * log(n)) }

Most of the approximations are sufficiently fast as they are effectively $\mathcal{O}(1)$ with small constant factors (if we ignore that functions like $\Phi^{-1}$/qnorm themselves may technically be $\mathcal{O}(\log(n))$ or $\mathcal{O}(n)$ for large n). However, accuracy becomes the problem: this upper bound is hopelessly inaccurate in small samples when we compare to the Monte Carlo simulation. Others (also inaccurate) include $\frac{n-1}{\sqrt{2 \cdot n - 1}} \cdot \sigma$ and $-\Phi^{-1}(\frac{1}{n+1}) \cdot \sigma$:

upperBoundMax2 <- function(n, sd=1) { ((n-1) / sqrt(2*n - 1)) * sd }
upperBoundMax3 <- function(n, sd=1) { -qnorm(1/(n+1), sd=sd) }
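For a sense of scale, evaluating the three at n=100, where the exact expected maximum is ~2.51 (per the numerical-integration results later), shows how loose they are (values approximate):

```r
upperBoundMax(100)  # ~3.03: overshoots moderately
upperBoundMax2(100) # ~7.02: overshoots wildly
upperBoundMax3(100) # ~2.33: actually undershoots slightly
```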


Blom 1958, _Statistical Estimates and Transformed Beta-Variables_, provides a general approximation formula $E(r,n)$, which specialized to the max ($E(n,n)$) is $\Phi^{-1}(\frac{n-\alpha}{n - 2\alpha + 1}) \cdot \sigma; \; \alpha=0.375$, and is better than the upper bounds:

blom1958 <- function(n, sd=1) { alpha <- 0.375; qnorm((n-alpha)/(n-2*alpha+1)) * sd }

Elfving 1947, apparently by way of Wilks 1962’s Mathematical Statistics, demonstrates that Blom 1958’s approximation is imperfect because actually $\alpha=\frac{\pi}{8}$, so:

elfving1947 <- function(n, sd=1) { alpha <- pi/8; qnorm((n-alpha)/(n-2*alpha+1)) * sd }

(Blom 1958 appears to be more accurate for n<48 and then Elfving’s correction dominates.)
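This crossover can be checked directly against exact values (a sketch, assuming the exact exactMax function defined later is available):

```r
## Absolute error of each approximation vs the exact order statistic:
approxError <- function(f, n) { abs(f(n) - exactMax(n)) }
approxError(blom1958, 10)  < approxError(elfving1947, 10)  # Blom wins at small n
approxError(blom1958, 300) > approxError(elfving1947, 300) # Elfving wins at large n
```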

Harter 1961 elaborated this by giving different values for $\alpha$, and Royston 1982 provides computer algorithms; I have not attempted to provide an R implementation of these.

probabilityislogic offers a 2015 derivation via the beta-F compound distribution of $E[x_{i}] \approx \mu + \sigma \cdot \Phi^{-1}\left(\frac{i}{N+1}\right)\left[1+\frac{\frac{i}{N+1}\left(1-\frac{i}{N+1}\right)}{2(N+2)\left[\phi\left(\Phi^{-1}\left(\frac{i}{N+1}\right)\right)\right]^{2}}\right]$, and an approximate (but highly accurate) numerical integration as well:

pil2015 <- function(n, sd=1) { sd * qnorm(n/(n+1)) * { 1 +
    ((n/(n+1)) * (1 - (n/(n+1)))) /
    (2*(n+2) * (pnorm(qnorm(n/(n+1))))^2) }}
pil2015Integrate <- function(n) { mean(qnorm(qbeta(((1:10000) - 0.5 ) / 10000, n, 1))) }

Another approximation comes from Chen & Tyler 1999: $\Phi^{-1}(0.5264^{\frac{1}{n}})$. Unfortunately, while accurate enough for most purposes, it is still off by as much as 1 IQ point and has an average mean error of -0.32 IQ points compared to the simulation:

chen1999 <- function(n, sd=1){ qnorm(0.5264^(1/n), sd=sd) }

approximationError <- sapply(1:1000, function(n) { (chen1999(n) - simGain(n)) * 15 } )
#       Min.    1st Qu.     Median       Mean    3rd Qu.       Max.
# -0.3801803 -0.3263603 -0.3126665 -0.2999775 -0.2923680  0.9445290
plot(1:1000, approximationError,  xlab="Number of samples taking the max", ylab="Error in 15*SD (IQ points)")
Error in using the Chen & Tyler 1999 approximation to estimate the expected value (gain) from taking the maximum of n normal samples

Polynomial regression

From a less mathematical perspective, any regression or machine learning model could be used to try to develop a cheap but highly accurate approximation by simply predicting the extreme from the relevant range of n=2-300 - the goal being less to make good predictions out of sample than to overfit as much as possible in-sample.

Plotting the extremes, they form a smooth almost logarithmic curve:

df <- data.frame(N=2:300, Max=sapply(2:300, exactMax)) ## exact values via lmomco's numerical integration, defined below
l <- lm(Max ~ log(N), data=df); summary(l)
# Residuals:
#         Min          1Q      Median          3Q         Max
# -0.36893483 -0.02058671  0.00244294  0.02747659  0.04238113
# Coefficients:
#                Estimate  Std. Error   t value   Pr(>|t|)
# (Intercept) 0.658802439 0.011885532  55.42894 < 2.22e-16
# log(N)      0.395762956 0.002464912 160.55866 < 2.22e-16
# Residual standard error: 0.03947098 on 297 degrees of freedom
# Multiple R-squared:  0.9886103,   Adjusted R-squared:  0.9885719
# F-statistic: 25779.08 on 1 and 297 DF,  p-value: < 2.2204e-16
plot(df); lines(predict(l))

This has the merit of utter simplicity (function(n) {0.658802439 + 0.395762956*log(n)}), but while the $R^2$ is quite high by most standards, the residuals are too large to make a good approximation - the log curve overshoots initially, then undershoots, then overshoots. We can try to find a better log curve by using polynomial or spline regression, which broadens the family of possible curves. A 4th-order polynomial turns out to fit as beautifully as we could wish, $R^2 = 0.9999998$:

lp <- lm(Max ~ log(N) + I(log(N)^2) + I(log(N)^3) + I(log(N)^4), data=df); summary(lp)
# Residuals:
#           Min            1Q        Median            3Q           Max
# -1.220430e-03 -1.074138e-04 -1.655586e-05  1.125596e-04  9.690842e-04
# Coefficients:
#                  Estimate    Std. Error    t value   Pr(>|t|)
# (Intercept)  1.586366e-02  4.550132e-04   34.86418 < 2.22e-16
# log(N)       8.652822e-01  6.627358e-04 1305.62159 < 2.22e-16
# I(log(N)^2) -1.122682e-01  3.256415e-04 -344.76027 < 2.22e-16
# I(log(N)^3)  1.153201e-02  6.540518e-05  176.31640 < 2.22e-16
# I(log(N)^4) -5.302189e-04  4.622731e-06 -114.69820 < 2.22e-16
# Residual standard error: 0.0001756982 on 294 degrees of freedom
# Multiple R-squared:  0.9999998,   Adjusted R-squared:  0.9999998
# F-statistic: 3.290056e+08 on 4 and 294 DF,  p-value: < 2.2204e-16

## If we want to call the fitted objects:
linearApprox <- function (n) { predict(l, data.frame(N=n)); }
polynomialApprox <- function (n) { predict(lp, data.frame(N=n)); }
## Or simply code it by hand:
la <- function(n, sd=1) { (0.658802439 + 0.395762956*log(n)) * sd; }
pa <- function(n, sd=1) { N <- log(n);
    (1.586366e-02 + 8.652822e-01*N^1 + -1.122682e-01*N^2 + 1.153201e-02*N^3 + -5.302189e-04*N^4) * sd; }

This has the virtue of speed & simplicity (a few arithmetic operations) and high accuracy, and although it was not intended to perform well past the largest datapoint of n=300, it turns out to be a good approximation up to n=800 (after which it consistently overestimates by ~0.01); if one needed more range, one could simply generate the additional datapoints and refit, adding more polynomial terms if necessary:

heldout <- sapply(301:1000, exactMax)
test <- sapply(301:1000, pa)
mean((heldout - test)^2)
# [1] 3.820988144e-05
plot(301:1000, heldout); lines(test)

So this method, while lacking any kind of mathematical pedigree or derivation, provides the best approximation so far.

Numerical integration (lmomco)

The R package lmomco (Asquith 2011) calculates a wide variety of order statistics using numerical integration & other methods. It is fast, unbiased, and generally correct (for small values of n1) - it is close to the Monte Carlo estimates even for the smallest n where the approximations tend to do badly, so it does what it claims to and provides what we want (a fast exact estimate of the mean gain from selecting the maximum from n samples from a normal distribution). The results can be memoised for a further moderate speedup (eg calculated over n=1-1000, 0.45s vs 3.9s for a speedup of ~8.7x):

library(lmomco)
exactMax_unmemoized <- function(n, mean=0, sd=1) {
    expect.max.ostat(n, para=vec2par(c(mean, sd), type="nor"), cdf=cdfnor, pdf=pdfnor) }
## Comparison to MC:
# ...         Min.       1st Qu.        Median          Mean       3rd Qu.          Max.
#    -0.0523499300 -0.0128622900 -0.0003641315 -0.0007935236  0.0108748800  0.0645207000

library(memoise)
exactMax_memoised <- memoise(exactMax_unmemoized)
Error in using Asquith 2011’s L-moment Statistics numerical integration package to estimate the expected value (gain) from taking the maximum of n normal samples


With lmomco providing exact values, we can visually compare the presented methods for accuracy; there are considerable differences but the best methods are in close agreement:

Comparison of estimates of the maximum for n=2-300 for 12 methods, showing Chen 1999/polynomial/Monte Carlo/lmomco are the most accurate and Blom 1958 and the upper bounds highly inaccurate.

And micro-benchmarking them quickly (excluding Monte Carlo) to get an idea of time consumption shows the expected results (aside from Pil 2015’s numerical integration taking longer than expected, suggesting either memoising or changing the fineness):

library(microbenchmark)
f <- function() { sample(2:1000, 1); }
microbenchmark(times=10000, upperBoundMax(f()), upperBoundMax2(f()), upperBoundMax3(f()),
    blom1958(f()), elfving1947(f()), pil2015(f()), pil2015Integrate(f()), chen1999(f()),
    exactMax_memoised(f()), la(f()), pa(f()))
# Unit: microseconds
#                    expr       min         lq          mean     median         uq       max neval
#                     f()     2.437     2.9610     4.8928136     3.2530     3.8310  1324.276 10000
#      upperBoundMax(f())     3.029     4.0020     6.6270124     4.9920     6.3595  1218.010 10000
#     upperBoundMax2(f())     2.886     3.8970     6.5326593     4.7235     5.8420  1029.148 10000
#     upperBoundMax3(f())     3.678     4.8290     7.4714030     5.8660     7.2945   892.594 10000
#           blom1958(f())     3.734     4.7325     7.3521356     5.6200     7.0590  1050.915 10000
#        elfving1947(f())     3.757     4.8330     7.7927493     5.7850     7.2800  1045.616 10000
#            pil2015(f())     5.518     6.9330    10.8501286     9.2065    11.5280   867.332 10000
#   pil2015Integrate(f()) 14088.659 20499.6835 21516.4141399 21032.5725 22151.4150 53977.498 10000
#           chen1999(f())     3.788     4.9260     7.7456654     6.0370     7.5600  1415.992 10000
#  exactMax_memoised(f())   106.222   126.1050   211.4051056   162.7605   221.2050  4009.048 10000
#                 la(f())     2.882     3.8000     5.7257008     4.4980     5.6845  1287.379 10000
#                 pa(f())     3.397     4.4860     7.0406035     5.4785     6.9090  1818.558 10000

Rescaling for generality

The memoised function has three arguments, so memoising on the fly would seem to be the best one could do, since one cannot precompute all possible combinations of the n/mean/SD. But actually, we only need to compute the results for various n!

We can default to assuming the standard normal distribution ($\mathcal{N}(0,1)$) without loss of generality because it’s easy to rescale any normal to another normal: to scale to a different mean $\mu$, one simply adds $\mu$ to the expected extreme, so one can assume $\mu=0$; and to scale to a different standard deviation, we simply multiply appropriately. So if we wanted the extreme for n=5 for $\mathcal{N}(10,2)$, we can calculate it simply by taking the estimate for n=5 for $\mathcal{N}(0,1)$, multiplying by $\frac{2}{1}=2$, and then adding $10-0=10$:

(exactMax(5, mean=0, sd=1)*2 + 10) ; exactMax(5, mean=10, sd=2)
# [1] 12.32592895
# [1] 12.32592895

So in other words, it is unnecessary to memoise all possible combinations of n, mean, and SD - in reality, we only need to compute each n and then rescale it as necessary for each caller. And in practice, we only care about n=2-200, which is few enough that we can define a lookup table using the lmomco results and use that instead (with a fallback to lmomco for $n>200$, and a fallback to Chen & Tyler 1999 for $n>2000$ to work around lmomco’s bug with large n):

exactMax <- function (n, mean=0, sd=1) {
if (n>2000) {
    chen1999 <- function(n,mean=0,sd=1){ mean + qnorm(0.5264^(1/n), sd=sd) }
    chen1999(n,mean=mean,sd=sd) } else {
    if(n>200) { library(lmomco)
        exactMax_unmemoized <- function(n, mean=0, sd=1) {
            expect.max.ostat(n, para=vec2par(c(mean, sd), type="nor"), cdf=cdfnor, pdf=pdfnor) }
        exactMax_unmemoized(n,mean=mean,sd=sd) } else {

 lookup <- c(0,0,0.5641895835,0.8462843753,1.0293753730,1.1629644736,1.2672063606,1.3521783756,1.4236003060,
             ## ... (remaining precomputed lmomco values for n=9-200 elided) ...
             )

 return(mean + sd*lookup[n+1]) }}}

This gives us exact computation at $\mathcal{O}(1)$ (with an amortized $\mathcal{O}(1)$ when $n>200$) with an extremely small constant factor (a conditional, vector index, multiplication, and addition, which is overall ~10x faster than a memoised lookup), giving us all our desiderata simultaneously & resolving the problem.

General order statistics for the normal distribution

One might also be interested in computing the general order statistic.

Some available implementations in R:

  • numerical integration:

    • lmomco, with j of n (warning: remember lmomco’s bug with n>2000):

      j = 9; n=10
      expect.max.ostat(n, j=j, para=vec2par(c(0, 1), type="nor"), cdf=cdfnor, pdf=pdfnor)
      # [1] 1.001357045
    • evNormOrdStats in EnvStats (version >=2.3.0), using Royston 1982:

      evNormOrdStatsScalar(10^10, 10^10)
      # [1] 6.446676405
      ## Warning message: In evNormOrdStatsScalar(10^10, 10^10) :
      ## The 'royston' method has not been validated for sample sizes greater than 2000 using
      ## the default value of inc = 0.025. You may want to make the value of 'inc' less than 0.025.
      evNormOrdStatsScalar(10^10,10^10, inc=0.000001)
      # [1] 6.446676817
  • Monte Carlo: the simple approach of averaging over i iterations of generating n random normal deviates, sorting, and selecting the jth order statistic does not scale well due to being $\mathcal{O}(n)$ in both time & space for generation & $\mathcal{O}(n \cdot \log(n))$ for a comparison sort or another $\mathcal{O}(n)$ if one is more careful to use a lazy sort or selection algorithm, and coding up an online selection algorithm is not a one-liner. Better solutions typically use a beta transformation to efficiently generate a single sample from the expected order statistic:

    • order_rnorm in orderstats, with k of n:

      library(orderstats)
      mean(replicate(100000, order_rnorm(k=10^10, n=10^10)))
      # [1] 6.446370373
    • order in evd, with j of n:

      library(evd)
      mean(rorder(100000, distn="norm", j=10^10, mlen=10^10, largest=FALSE))
      # [1] 6.447222051
  • Blom & other approximations:

    • evNormOrdStats in EnvStats’s provides as an option the Blom approximation:2

      When method="blom", the following approximation to $E(r,n)$, proposed by Blom (1958, pp. 68-75), is used:

      $E(r, n) \approx \Phi^{-1}\left(\frac{r - \alpha}{n - 2\alpha + 1}\right) \;\;\;\;\;\; (5)$

      By default, $\alpha = \frac{3}{8} = 0.375$. This approximation is quite accurate. For example, for $n > 2$, the approximation is accurate to the first decimal place, and for $n > 9$ it is accurate to the second decimal place.

      Blom’s approximation is also quoted as:

      E(r,n)μ+Φ1(rαn2α+1)σ;α=0.375E(r, n) \approx \mu + \Phi^{-1} (\frac{r - \alpha}{n - 2\alpha +1})\sigma; \alpha = 0.375
    • Elfving’s correction to Blom takes the same form but with $\alpha = \frac{\pi}{8}$, yielding:

      elfving1947E <- function(r,n) { alpha=pi/8; qnorm( (r - alpha) / (n - 2*alpha + 1) )  }
      elfving1947E(10^10, 10^10)
      # [1] 6.437496713
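Both packages rely on the same beta transformation, which can also be written directly: the jth of n uniform order statistics is distributed as Beta(j, n−j+1), so mapping Beta draws through the normal quantile function samples the jth normal order statistic in time independent of n (a sketch; orderStatMC is a made-up name):

```r
orderStatMC <- function(iters, j, n) {
    ## U_(j) ~ Beta(j, n-j+1), so qnorm(U_(j)) is distributed as the
    ## jth order statistic of n standard normals:
    mean(qnorm(rbeta(iters, j, n - j + 1))) }
orderStatMC(100000, 10^10, 10^10) ## comparable to the order_rnorm/rorder estimates above
```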

  1. lmomco is accurate for all values I checked via Monte Carlo for n<1000, but appears to have some bugs for n>2000: there are occasional deviations from the quasi-logarithmic curve, such as n=2225-2236 (which are off by 1.02SD compared to the Monte Carlo estimates and the surrounding lmomco estimates), another cluster of errors around n≈40,000, and then for n>60,000 the estimates are totally incorrect. The maintainer has been notified & has verified the bug.

  2. A previous version of EnvStats described the approximation thus:

    The function evNormOrdStatsScalar computes the value of $E(r,n)$ for user-specified values of r and n. The function evNormOrdStats computes the values of $E(r,n)$ for all values of r for a user-specified value of n. For large values of n, the function evNormOrdStats with approximate=FALSE may take a long time to execute. When approximate=TRUE, evNormOrdStats and evNormOrdStatsScalar use the following approximation to $E(r,n)$, which was proposed by Blom (1958, pp. 68-75, [6.9 An approximate mean value formula & formula 6.10.3-6.10.5]):

    E(r,n)Φ1(r38n+14)E(r,n) \approx \Phi^{-1} (\frac{r - \frac{3}{8}}{n + \frac{1}{4}})

    ## General Blom 1958 approximation:
    blom1958E <- function(r,n) { qnorm((r - 3/8) / (n + 1/4)) }
    blom1958E(10^10, 10^10)
    # [1] 6.433133208

    This approximation is quite accurate. For example, for $n \ge 2$, the approximation is accurate to the first decimal place, and for $n \ge 9$ it is accurate to the second decimal place.