“It's the Latency, Stupid”, (2001):
[Seminal essay explaining why the rollout of “broadband” home connections to replace 56k dialups had not improved regular WWW browsing as much as people expected: while broadband had greater throughput, it had similar (or worse) latency.
Because much of the wallclock time of any Internet connection is spent setting up and negotiating with the other end, and not that much is spent on the raw transfer of large numbers of bytes, the speedup is far smaller than one would expect by dividing the respective bandwidths.
Further, while bandwidth/throughput is easy to improve by adding more or higher-quality connections, and can be patched elsewhere in the system by adding parallelism, upgrading parts, or investing in data compression, the latency-afflicted steps are stubbornly serial: any time lost is physically impossible to retrieve, and many steps are inherently limited by the speed of light. More capacious connections quickly run into Amdahl’s law, where the difficult-to-improve serial latency-bound steps dominate the overall task. As Cheshire summarizes it:]
- Fact One: Making more bandwidth is easy.
- Fact Two: Once you have bad latency you’re stuck with it.
- Fact Three: Current consumer devices have appallingly bad latency.
- Fact Four: Making limited bandwidth go further is easy.
…That’s the problem with communications devices today. Manufacturers say “speed” when they mean “capacity”. The other problem is that as far as the end-user is concerned, the thing they want to do is transfer large files quicker. It may seem to make sense that a high-capacity slow link might be the best thing for the job. What the end-user doesn’t see is that in order to manage that file transfer, their computer is sending dozens of little control messages back and forth. The thing that makes computer communication different from television is interactivity, and interactivity depends on all those little back-and-forth messages.
The phrase “high-capacity slow link” that I used above probably looked very odd to you. Even to me it looks odd. We’ve been used to wrong thinking for so long that correct thinking looks odd now. How can a high-capacity link be a slow link? High-capacity means fast, right? It’s odd how that’s not true in other areas. If someone talks about a “high-capacity” oil tanker, do you immediately assume it’s a very fast ship? I doubt it. If someone talks about a “large-capacity” truck, do you immediately assume it’s faster than a small sports car?
We have to start making that distinction again in communications. When someone tells us that a modem has a speed of 28.8 kbit/sec we have to remember that 28.8 kbit/sec is its capacity, not its speed. Speed is a measure of distance divided by time, and ‘bits’ is not a measure of distance.
I don’t know how communications came to be this way. Everyone knows that when you buy a hard disk you should check what its seek time is. The maximum transfer rate is something you might also be concerned with, but the seek time is definitely more important. Why does no one think to ask what a modem’s ‘seek time’ is? The latency is exactly the same thing. It’s the minimum time between asking for a piece of data and getting it, just like the seek time of a disk, and it’s just as important.
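Cheshire’s argument can be put as a back-of-the-envelope calculation (all numbers below are hypothetical, chosen only to illustrate the shape of the problem): total fetch time is the serial round-trip latency plus the raw transfer time, so a huge bandwidth increase yields only a modest speedup for small, chatty transfers.

```python
def fetch_time(round_trips, rtt_s, size_bytes, bandwidth_bps):
    """Total wall-clock time = serial round-trips + raw transfer time."""
    return round_trips * rtt_s + (size_bytes * 8) / bandwidth_bps

# A 40 KB page needing ~6 serial round-trips (DNS, TCP setup, HTTP requests):
dialup    = fetch_time(6, 0.15, 40_000, 56_000)      # 56k modem, 150 ms RTT
broadband = fetch_time(6, 0.10, 40_000, 5_000_000)   # 5 Mbit/s link, 100 ms RTT

# Bandwidth improved ~90x, but the page loads only about 10x faster,
# because the latency-bound round-trips now dominate (Amdahl's law).
print(f"dialup: {dialup:.2f}s, broadband: {broadband:.2f}s")
```

As the bandwidth term shrinks toward zero, the speedup saturates at the ratio set by the irreducible round-trip term, no matter how much capacity is added.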
“Search for the Wreckage of Air France Flight AF 447”, (2014-05-19):
In the early morning hours of June 1, 2009, during a flight from Rio de Janeiro to Paris, Air France Flight AF 447 disappeared during stormy weather over a remote part of the Atlantic carrying 228 passengers and crew to their deaths. After two years of unsuccessful search, the authors were asked by the French Bureau d’Enquêtes et d’Analyses pour la sécurité de l’aviation to develop a probability distribution for the location of the wreckage that accounted for all information about the crash location as well as for previous search efforts. We used a Bayesian procedure developed for search planning to produce the posterior target location distribution. This distribution was used to guide the search in the third year, and the wreckage was found within one week of undersea search. In this paper we discuss why Bayesian analysis is ideally suited to solving this problem, review previous non-Bayesian efforts, and describe the methodology used to produce the posterior probability distribution for the location of the wreck.
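The core update of Bayesian search theory that the abstract alludes to (“accounted for … previous search efforts”) can be sketched in a few lines; this is a minimal illustration, not the authors’ code, with invented cell probabilities: after an unsuccessful search, each cell’s probability is multiplied by the chance the target would have been *missed* there, then the grid is renormalized.

```python
def update_after_failed_search(prior, detection_prob):
    """prior[i]: P(wreck in cell i); detection_prob[i]: P(found | wreck in cell i and cell i searched)."""
    posterior = [p * (1 - d) for p, d in zip(prior, detection_prob)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Three cells; cell 0 was searched thoroughly (90% detection) and nothing found:
prior = [0.5, 0.3, 0.2]
post = update_after_failed_search(prior, [0.9, 0.0, 0.0])
# Probability mass shifts away from the searched cell toward unsearched ones.
```

This is why a failed search is itself informative: each pass over a cell drains probability from it, and the posterior naturally redirects future effort, as it did in the third-year search that found the wreck.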
Industry-sponsored clinical drug studies are associated with publication of outcomes that favor the sponsor, even when controlling for potential bias in the methods used. However, the influence of sponsorship bias has not been examined in preclinical animal studies.
We performed a meta-analysis of preclinical statin studies to determine whether industry sponsorship is associated with either increased effect sizes of efficacy outcomes and/or risks of bias in a cohort of published preclinical statin studies. We searched Medline (January 1966–April 2012) and identified 63 studies evaluating the effects of statins on atherosclerosis outcomes in animals. Two coders independently extracted study design criteria aimed at reducing bias, results for all relevant outcomes, sponsorship source, and investigator financial ties. The I2 statistic was used to examine heterogeneity. We calculated the standardized mean difference (SMD) for each outcome and pooled data across studies to estimate the pooled average SMD using random effects models. In a priori subgroup analyses, we assessed statin efficacy by outcome measured, sponsorship source, presence or absence of financial conflict information, use of an optimal time window for outcome assessment, accounting for all animals, inclusion criteria, blinding, and randomization.
The effect of statins was statistically-significantly larger for studies sponsored by nonindustry sources (−1.99; 95% CI −2.68, −1.31) versus studies sponsored by industry (−0.73; 95% −1.00, −0.47) (p < 0.001). Statin efficacy did not differ by disclosure of financial conflict information, use of an optimal time window for outcome assessment, accounting for all animals, inclusion criteria, blinding, and randomization. Possible reasons for the differences between nonindustry-sponsored and industry-sponsored studies, such as selective reporting of outcomes, require further study.
Author Summary: Industry-sponsored clinical drug studies are associated with publication of outcomes that favor the sponsor, even when controlling for potential bias in the methods used. However, the influence of sponsorship bias has not been examined in preclinical animal studies. We performed a meta-analysis to identify whether industry sponsorship is associated with increased risks of bias or of outcomes in a cohort of published preclinical studies of the effects of statins on outcomes related to atherosclerosis. We found that in contrast to clinical studies, the effect of statins was statistically-significantly larger for studies sponsored by nonindustry sources versus studies sponsored by industry. Furthermore, statin efficacy did not differ with respect to disclosure of financial conflict information, use of an optimal time window for outcome assessment, accounting for all animals, inclusion criteria, blinding, and randomization. Possible reasons for the differences between nonindustry-sponsored and industry-sponsored studies, such as selective outcome reporting, require further study. Overall, our findings provide empirical evidence regarding the impact of funding and other methodological criteria on research outcomes.
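The pooling machinery the abstract names (SMDs combined under a random-effects model, heterogeneity via I²) can be sketched with the standard DerSimonian-Laird estimator; this is a generic illustration with invented numbers, not the authors’ code or data.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effects (e.g. SMDs) under a random-effects model."""
    k = len(effects)
    w = [1.0 / v for v in variances]                            # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                          # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0    # I^2 heterogeneity (%)
    w_star = [1.0 / (v + tau2) for v in variances]              # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Two hypothetical studies with SMDs of -1.0 and -2.0, equal variances:
pooled, ci95, i2 = dersimonian_laird([-1.0, -2.0], [0.5, 0.5])
```

Subgroup comparisons like the paper’s industry-vs-nonindustry contrast amount to running such a pooling separately within each subgroup and testing whether the pooled SMDs differ.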
2013-clark.pdf: “Whatever next? Predictive brains, situated agents, and the future of cognitive science”, Andy Clark
2014-flyvbjerg.pdf: “What You Should Know About Megaprojects and Why: An Overview”, (2014-04-07):
This paper takes stock of megaproject management, an emerging and hugely costly field of study. First, it answers the question of how large megaprojects are by measuring them in the units mega, giga, and tera, concluding we are presently entering a new “tera era” of trillion-dollar projects. Second, total global megaproject spending is assessed, at USD 6–9 trillion annually, or 8 percent of total global GDP, which denotes the biggest investment boom in human history. Third, four “sublimes” —political, technological, economic, and aesthetic—are identified to explain the increased size and frequency of megaprojects. Fourth, the “iron law of megaprojects” is laid out and documented: Over budget, over time, over and over again. Moreover, the “break-fix model” of megaproject management is introduced as an explanation of the iron law. Fifth, Albert O. Hirschman’s theory of the Hiding Hand is revisited and critiqued as unfounded and corrupting for megaproject thinking in both the academy and policy. Sixth, it is shown how megaprojects are systematically subject to “survival of the unfittest”, explaining why the worst projects get built instead of the best. Finally, it is argued that the conventional way of managing megaprojects has reached a “tension point”, where tradition is challenged and reform is emerging.
2009-jones.pdf: “Hit or Miss? The Effect of Assassinations on Institutions and War”, (2009-07-01):
Assassinations are a persistent feature of the political landscape. Using a new dataset of assassination attempts on all world leaders from 1875 to 2004, we exploit inherent randomness in the success or failure of assassination attempts to identify the effects of assassination. We find that, on average, successful assassinations of autocrats produce sustained moves toward democracy. We also find that assassinations affect the intensity of small-scale conflicts. The results document a contemporary source of institutional change, inform theories of conflict, and show that small sources of randomness can have a pronounced effect on history.
…To implement this approach, we collected data on all publicly-reported assassination attempts for all national leaders since 1875. This produced 298 assassination attempts, of which 59 resulted in the leader’s death. We show that, conditional on an attempt taking place, whether the attack succeeds or fails in killing the leader appears uncorrelated with observable economic and political features of the national environment, suggesting that our basic identification strategy may be plausible.
We find that assassinations of autocrats produce substantial changes in the country’s institutions, while assassinations of democrats do not. In particular, transitions to democracy, as measured using the Polity IV dataset (Marshall & Jaggers 2004), are 13% more likely following the assassination of an autocrat than following a failed attempt on an autocrat. Similarly, using data on leadership transitions from the Archigos dataset (Goemans et al. 2006), we find that the probability that subsequent leadership transitions occur through institutional means is 19% higher following the assassination of an autocrat than following the failed assassination of an autocrat. The effects on institutions extend over [long] periods, with evidence that the impacts are sustained at least 10 years later.