Capitalism's Cradle

An Economic History Blog


The Coming UK Productivity Recovery

Duncan Weldon has written an excellent article about the UK’s productivity puzzle. Slowing productivity is a serious problem. But in becoming more pessimistic, I fear he’s prematurely put the Christmas turkey in the bin.

First off, I’m not entirely sure that the UK’s growth in output per head has diverged all that much from its long-term trend. Look at the log graph of UK GDP per capita below (log so that exponential growth looks like a perfectly straight line, allowing us to compare growth rates by the changing steepness of the line). The current slowdown is an unwelcome blip - and that can translate into some pretty nasty consequences for public and private debt - though it doesn’t look particularly permanent just yet.

[image: log graph of UK GDP per capita]
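For the sceptical reader, the log-scale point is easy to verify yourself. Here’s a minimal Python sketch with made-up numbers (not real UK data): under constant percentage growth, the log of the series rises with a constant slope, so a genuine slowdown shows up as a visible flattening of the line.

```python
import math

# Hypothetical GDP-per-capita series (illustrative numbers, not real data):
# ten years of steady 2% growth, followed by a three-year stagnant "blip".
trend = [100 * 1.02 ** t for t in range(11)]   # years 0-10
blip = [trend[-1]] * 3                         # years 11-13: zero growth
series = trend + blip

logs = [math.log(v) for v in series]

# On a log scale, constant exponential growth is a straight line:
# the slope (difference between successive log values) is constant.
slopes = [logs[i + 1] - logs[i] for i in range(len(logs) - 1)]

print(slopes[0])    # ~0.0198, i.e. log(1.02), throughout the trend
print(slopes[-1])   # 0.0 during the blip - the line visibly flattens
```

The same logic, in reverse, is why eyeballing the steepness of a log graph is a fair way to compare growth rates across periods.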

But to make a more compelling case for optimism, let’s first look at where productivity growth comes from. A new process, service or product is invented. Then, the invention is applied - usually by a business, sometimes by non-profits or government. Once the invention has proven its worth, everybody else copies, and it becomes applied more broadly. It’s only at this stage that we start to see any impact on the country’s productivity - and it can take decades for an invention to reach this stage.

Something must be wrong with at least one of these stages if the UK is really experiencing a serious productivity slowdown. So how are we faring?

Well, I see no reason to believe that the process of invention itself is slowing. Pick up a copy of Wired magazine, or take a look at Jose L Ricon’s post on the subject, and you’ll soon lose track of the sheer number of industries that are undergoing improvement. I have compiled a list of hundreds upon hundreds of inventors active during the British Industrial Revolution, and I can quite confidently tell you that there are far more innovators now than there were then. I was beginning to scrape the barrel for the more obscure innovators when I reached 677 innovators active over two centuries. I could probably very easily find 677 innovators active in Britain in this year alone.

There are exciting things on the verge of application using new materials, new forms of energy storage, and unimaginably advanced computing power. And that’s not to mention the imperceptible march of progress in areas from plumbing to housework to coffee to music to agriculture to high street banking. The world’s innovation doesn’t seem to be slowing - so the stock of technologies Britain can apply to its own economy doesn’t seem to be diminishing either.

(By the way, pace Ben Southwood, it makes no sense to look at innovation per capita. Flushing toilets only need to be invented by one person in order to markedly improve living standards, regardless of whether the invention is applied to a nation of a thousand or a million people. Nor does it make any sense to count innovations - flushing toilets are hardly comparable to iPhones or to actuarial science. All things we define as “innovations” are simply bundles of a whole load of improvements, and most lists of innovations are highly partial while failing to appreciate this fact. We instead need to count innovators - and the data Ben cites doesn’t do that).

If innovation isn’t slowing, might Britain be getting worse at applying innovations? I just don’t buy it. We have a market economy, people are still out looking for investment opportunities, and the government is broadly encouraging of entrepreneurship.

And the systems we have in place to fund and apply new innovations are more numerous and effective than ever. James Watt, who so radically transformed steam engines, didn’t have the opportunity to use Kickstarter or GoFundMe or Patreon. He didn’t even have the opportunity to form a joint stock company (banned in 1720 without special Parliamentary permission, re-legalised 1844), to use limited liability (introduced 1855), or to appeal to a well-defined group of venture capitalists. He had to rely on mere friends and acquaintances for funding!

But even if Britain is failing to apply innovations as quickly, that simply leaves us with greater potential catch-up growth. An illustrative example: if another country like the US is able to apply new innovations much faster, then it goes through the whole expensive process of trying them out and seeing how well they work. Once those innovations are applied in one country, though, they become much easier to copy in ours. This is how countries like China are able to achieve such unprecedentedly high growth rates. I’m not saying this is happening, of course. But it is grounds for optimism if it is indeed the problem.

So we have ever more innovations to apply, and it’s never been easier to apply them. That leaves a remaining potential problem: recent inventions might be having a lower-than-usual impact on general productivity. This is plausible, perhaps even likely, but let me say just a few more words of optimism. If inventions are still appearing, then maybe it’s just a matter of waiting a little while for the particularly productivity-enhancing ones to come along. Call me Pangloss, but I bet we ain’t seen nothing yet.

Filed under Turkey in the bin Optimism Great Stagnation James Watt Crowdfunding Industrial Revolution Productivity Puzzle


The Dutch Golden Age

In my PhD thesis, I suggested that it was a commitment to proselytising innovation that was responsible for the remarkable acceleration of innovation in Britain over the course of the 18th century. But there were other societies that experienced innovation without any acceleration - was the commitment to proselytising innovation why Britain was unique?

We can’t say for sure without an international comparison. The most obvious society to compare is the Dutch Republic of the 17th century. The Dutch Republic was then the most technologically and economically advanced nation in the world, and seemed to have acquired most of the ingredients for an acceleration of innovation. Its population was the most concentrated in cities.* Its traders dominated the rapidly growing trade with India and East Asia. In fact, I often hear that Britain became so successful after 1688 because it developed some concoction of Dutch government, financial management, and religious toleration.

And yet in the eighteenth century the Dutch Golden Age lost its shine. Why? There have been all sorts of material explanations for its economic decline. But I’m interested in its innovators - the individuals who made the Netherlands great, but who then let it decline. Did their numbers dwindle? Did they lose interest in innovation and devote themselves to other pursuits? Were they simply more concentrated in industries with less of an economic impact than in Britain? 

I think the only way to know for sure is to compile as extensive a list of Dutch innovators as possible. If my hypothesis about the British acceleration of innovation is correct, then Dutch innovators may not have been as proactive at proselytising innovation. But I may be wrong, and I’ll let you know what I find.

In the meantime, I’ve been compiling a list of works to read on the subject, while teaching myself to read Dutch (using DuoLingo, which is excellent). Please suggest more. I’ll update it and add comments as I work my way through them:


The Dutch Republic: Its Rise, Greatness and Fall - Jonathan Israel

The First Modern Economy: Success, Failure, and Perseverance of the Dutch Economy, 1500-1815 - Jan de Vries & Ad van der Woude

The Rise and Decline of Dutch Technological Leadership: Technology, Economy and Culture in the Netherlands, 1350-1800 - Karel Davids 

Dutch Primacy in World Trade, 1585-1740 - Jonathan Israel

The Long Road to the Industrial Revolution: The European Economy in a Global Perspective, 1000-1800 - Jan Luiten Van Zanden

The Dutch Economy in the Golden Age - eds. Karel Davids & Leo Noordegraaf


*Even as late as 1800, only 29% of England’s population was urban, as compared to 34% in the Netherlands (Allen p.17).

Filed under dutch golden age robert allen Industrial Revolution Glorious Revolution


How Innovation Accelerated in Britain 1651-1851

I successfully defended my PhD thesis a few weeks ago, and will soon be turning it into a publishable book manuscript.

In 1651 Britain had just finished a destructive civil war. Tens of thousands had died. Its monarch had been beheaded. Its mode of government overturned. In its weakened state, it was about to engage in the first of many wars with the Dutch Republic for supremacy over trade. 

Fast forward two centuries to 1851 and Britain was a country triumphant. It was the world’s technological leader, and now pressed its advantage to accumulate the largest empire in history. Dignitaries were flocking from around the world to admire the shrine erected to the dramatic transformation: a magnificent Crystal Palace.

The dramatic transformation - an Industrial Revolution - had been brought about by an acceleration in the rate of innovation. I am interested in what caused that acceleration. 

Central to the story are the people responsible: the innovators. You’re probably aware of various inventions in poster-boy industries like cotton, iron or coal. You might even recognise a few names. James Watt, Abraham Darby, Richard Arkwright. But look also to glass-making, ceramics, civil engineering, gardening, surgery, cement, chimney-sweeping: all and more were transformed by only a few innovators.

So to discover the causes of Britain’s unprecedented acceleration of innovation, I delved into just about every recorded activity, behaviour and experience of 677 innovators of the time. They included the usual suspects, but also people you’ve probably never heard of, like Eleanor Coade, George Smart, or the advertising pioneer George Packwood.

After staring at my data for long enough, I began to notice a pattern. People went on to innovate if inventors had been among their teachers, colleagues, employers, employees, neighbours, friends, family, and acquaintances. And the more I looked, the more examples I found. Of the hundreds of inventors I studied, nearly all began to innovate after meeting other inventors. Inspiration mattered - inventing seemed to spread from person to person.

But this wasn’t the spread of a particular technique, design or blueprint. It was the spread of a new approach - the very idea of inventing. Physicians like Erasmus Darwin improved carriages, then influenced the likes of Richard Lovell Edgeworth to improve agricultural machinery and telegraph systems too. Young cabinet-makers like Joseph Bramah watched their employers improve toilets, then went on to invent locks and hydraulic presses.

Hundreds of people in Britain began to see room for improvement everywhere. Lancelot Brown looked out over the gardens of the wealthy and declared them “capable” of improvement (earning him the nickname Capability Brown). An ingenious architect, Robert Salmon, got a hernia and developed a surgical truss to treat it. A young engineer, William Fairbairn, got carried away and tried to improve romance.

Many innovators stuck with what they knew best. Potters influenced other potters, and engineers influenced other engineers. But many branched out into the unfamiliar, teaching themselves as they went. A carpenter transformed clockmaking. A lawyer developed artillery. An art dealer sold his collection to lay the first under-water electric telegraph line.*

So people became innovators because others inspired them with a mentality of improvement. But we need to explain why this mentality was so uniquely virulent in Britain in the period. There were epidemics of innovation in prior societies - the Dutch Golden Age, Song Dynasty China, the Renaissance. So why did it become endemic only in Britain?** I think there was something special about this particular strain of the mentality of improvement.

The vast majority of these innovators (over 80%) actively tried to spread innovation. They published about it, lectured about it, funded it, and advised on it. They founded and joined societies devoted to spreading it further. The natural philosopher Stephen Hales seemed incapable of holding a conversation without advising improvements. George Stephenson offered tips to train operators while waiting at the platform.

Innovators in Britain very rarely kept secrets. Even among the 40% or so who held patents, very few sued for infringement, and very few lobbied to extend their patents beyond the usual time limits. James Watt, who notoriously did all three, was a rarity. But even he sometimes betrayed a commitment to spreading innovation further.

To me, the spread of the improving mentality in Britain resembles the spread of a religious belief. It is all very well for people to pray in quiet, but successful religions require effective preachers. I believe that to have become so widespread, the improving mentality required proselytisers of its own. In Britain, the improving mentality became an ideology. 

Like any ideology, it had its disagreements. James Watt believed himself entitled to protect his patents from infringement, while Humphry Gainsborough was scandalised (yes, brother of the artist, and also an improver of steam engines). But ultimately, the vast majority of Britain’s inventors adhered to two commandments: improve, and pass it on.

Here, my thesis reaches its limits. I can’t explain why Britain was unique by looking at Britain alone. The ideological commitment to proselytising innovation seems a likely candidate to me because it was so widespread: nothing else came close to being as common. But I need to compare the experience of inventors in Britain with those in other societies where invention failed to accelerate. More on that later.


*I’ll think of a prize for anyone who can work out who these three innovators were!

**I’m grateful to Deirdre McCloskey and Tim Leunig for this image

Filed under Ideology of Innovation innovation James Watt Industrial Revolution Stephen Hales George Stephenson Robert Salmon Lancelot Capability Brown William Fairbairn eleanor coade


How to Grade Essays Efficiently

I spend a lot of time thinking about how to make various aspects of academia more productive. One might even say I try to apply the improving mentality to it. Top of my list is marking. This is because, as any teaching assistant knows, marking is hell. It is tedious, time-consuming and tiring. It is the downside of the Faustian pact you made to pursue a career of intellectual stimulation.

So I developed a method that saves on time, as well as improving the quality of my grading.

1) Set up a grading dashboard, pictured below. (Here’s the link to the Excel sheet in Google Docs, in case you’d like to copy all the formulae).

[image: grading dashboard]

2) Colour-code each essay by the question answered. So in column A, all the 1s are red, all the 2s are blue, etc. Do this before you start marking, as you want to be able to mark all of the essays question by question. This is far more efficient than marking them in the given order, as it cuts the extra concentration needed each time you switch to thinking about how a new question has been answered. It also makes it easier to judge the quality of answers when you’re directly comparing answers to the same question, thus improving the accuracy of the grades you give.

As illustrated above (column B), when you copy the list from Turnitin, students might have labelled their essays without indicating the question answered. So it may require a bit of initial time invested in going through them all to work out which question they answer. But it is well worth it, due to the time saved over the course of the process as a whole. Specialisation and the division of labour are key here.

The same principle applies to marking exam scripts. So if students have answered multiple questions (e.g. 1, 4 and 6) per script, you should mark them question-by-question rather than script-by-script. Aim for about 10 minutes per exam question, as the answers tend to be much shorter. You get the idea (though it’s not worth setting up a grading dashboard for paper exams, other than to record the time you’ve spent marking).
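For those who like to see the principle in code, the colour-coding step amounts to grouping submissions by question. A minimal Python sketch, with invented student names rather than the real spreadsheet:

```python
from collections import defaultdict

# Hypothetical submission list as copied out of Turnitin:
# (student, question answered). All names are invented.
submissions = [
    ("Ashton", 2), ("Bairoch", 1), ("Crafts", 2),
    ("Deane", 3), ("Allen", 1), ("Mokyr", 3),
]

# Group by question, so all answers to question 1 are marked together,
# then all answers to question 2, and so on.
by_question = defaultdict(list)
for student, question in submissions:
    by_question[question].append(student)

for question in sorted(by_question):
    print(f"Question {question}: {by_question[question]}")
```

The colour-coding in column A achieves the same grouping by eye.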

3) Keep the feedback structure handy. We were required to structure our feedback to students according to a number of headings (Analysis & Understanding, Use of Literature, Structure, Referencing, etc.). It is worth keeping these headings in a Word document so that you can quickly Copy-Paste the headings into the feedback section of Turnitin. It saves you the time of writing it out over and over. In fact, you might want to combine this process with the colour coding if you need to open a lot of the documents to figure out which question is being answered.

4) Record the time spent marking. As shown in column E, I do this to the closest minute, from opening to closing of the pane in Turnitin. When I first marked papers, I spent far too long on them - about 45 minutes to an hour. This was a waste of time, but also of money, as I was paid to mark three essays per hour. So for a 2,500-3,000 word essay, you should aim for an average of under 20 minutes.

By timing yourself, you will very quickly find yourself focusing more intently on each essay, resulting in better quality work. No more losing concentration in the middle of an answer and having to decide whether to re-read or to plough on.

My first time, I wasted an entire January marking, whereas the following year, after recording the time, I got it all done in a week. Marking is such a mentally exhausting process that even grading a handful of answers can feel like a day’s work. So I find that recording the total number of hours spent marking holds me to account, ensuring I actually spend more hours marking per day.
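The timing arithmetic is trivial, but worth automating if you’re keeping column E anyway. A minimal sketch with hypothetical times:

```python
# Hypothetical per-essay marking times in minutes, as recorded in column E.
times = [24, 18, 31, 17, 22, 16, 19, 25]

average = sum(times) / len(times)
total_hours = sum(times) / 60

print(f"Average: {average:.1f} minutes per essay")  # target: under 20
print(f"Total: {total_hours:.1f} hours")
if average > 20:
    print("Over target - tighten up!")
```

This is exactly what the dashboard’s formulae do; the point is simply that seeing the running average nudges you to speed up.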

5) Collect commonly used feedback. I’m sure many students feel that their answers are unique, but the reality of marking is that they tend to make mistakes that are very similar to one another. It is also mentally taxing and time-consuming having to write things out. So if you happen upon a particularly good way of expressing feedback, save it in a Word document and copy-paste it whenever the same mistake crops up again.

It’s worth collecting these feedback phrases sooner rather than later. Once again, this involves a time-consuming initial investment that will have major pay-offs later on.

Filed under grading marking phd life phd problems productivity


Doing a PhD (efficiently)

Apologies for the pause in service. I submitted my PhD thesis at the end of January, after 3 years and 4 months of work, and have since been readjusting to normal life while I wait for my viva. Writing a PhD thesis is the best thing I’ve ever done - I loved the process from start to finish, though of course it had its ups and downs.

So here’s some advice I would give to myself if I were starting out for the first time:

1. Do a PhD in the UK. It takes fewer years than elsewhere, and doesn’t generally require taking classes or passing any exams. It’s pure thesis-writing. I suspect the downside is that there isn’t always as much funding available.

2. Learn to deal with failure. This is one of the core skills I’ve gained from the process. You will consistently fail to meet the deadlines that you set for yourself, whether through taking on too much teaching, or wasting a month on marking, or taking too long to write a chapter, or finding that your data needs more coding, etc. etc. And you will therefore feel guilty all the time. You just have to learn to accept it, not let it debilitate you with stress or panic, and just keep on working.

This is particularly a problem for the sorts of people who generally do PhDs: people who are used to acing their way through school, university, and beyond. As such, it can take some getting used to, which brings me to the next point.

3. Turn up to the PhD office. Other than conducting research, my favourite thing about the whole process was being a part of the community of PhD students that built up over the last few years, all sharing the same office (the department is quite young, so there weren’t that many when I started). Writing a thesis is a fundamentally lonely process, so spending time with other PhD students can be a vital source of support and encouragement.

I don’t think I would have enjoyed the process even half as much without sharing my lunches and tea-breaks and occasional evening pints and late nights at the office with others. And all it requires is turning up to the office every weekday (I added weekends in the final stretch) rather than working in isolation at home.

4. Don’t teach too many courses. Teaching is a very rewarding experience, and doing the readings for subjects you’re unfamiliar with can provide you with insights you wouldn’t have gleaned from the literature your thesis focuses on. 

If you can, try to teach lots of classes of the same module, rather than different classes from different modules. It’s just far more efficient, as the main time-drain from teaching is from reading in preparation for leading the seminars. You would rather spend one evening preparing for eight seminars, than two evenings preparing for four. On the same principle, try to teach the same course year after year so that you don’t have to do the readings from scratch every year. 

If you can afford to, I also strongly recommend not teaching when you come to your writing-up year - you want to be able to knuckle down and focus on just the thesis for that final sprint.

Also, marking is hell. But I’ve developed some ways to get it over with as quickly as possible.

5. Start an academic blog. My main regret is not starting this blog earlier, and not finding the discipline to add to it more frequently. It has helped me practise my writing during the dark days of data collection, and often forced me to clarify my thoughts. 

It has also given me the chance to put my work out there, brought quite a lot of opportunities in its wake, and helped me find a broader online community of economic historians and historians of science and technology whose input and advice has often been invaluable. At the very least, you should “micro-blog” on Twitter.

Edit: Pablo Paniagua Preto suggests publishing as soon as possible too. Of course, this depends on how soon you have something worth publishing (in my case, I wrote everything up in the final 6 months). But it’s well worth accustoming yourself to the process of blind peer review (though I don’t yet have personal experience of this). In the meantime, writing a blog forces you to develop your ideas for an outside audience.

6. Use Zotero to collect your references and manage your bibliography. If you haven’t, download it right now. Don’t read on if you haven’t yet downloaded it. It’s free. It’s incredible. It’s intuitive. I can’t imagine writing my thesis without it. I would recommend it to undergraduates too, by the way.

7. Present at conferences, but not too many (thanks to Elizabeth Bruton for suggesting this one). I must admit this isn’t something I’ve proactively tried to do, at least until I finished my thesis. But you should take any opportunity to present your work to audiences, expand your network, and hopefully find mentors to look out for you. Try to focus on presenting at only the most relevant conferences, as you don’t want to spread yourself too thin.

8. Learn new methods while you have the chance. Thanks to my colleague Irena Schneider for this suggestion - someone who taught herself some pretty advanced statistics one year in, and now teaches it! “Don’t be afraid to step outside your comfort zone if your thesis seems to demand new skills. Don’t just choose methods because you know them. It can be difficult, so build a support network, and find other researchers and professors at other institutions who can offer advice and look at your work. You probably won’t get such a perfect opportunity to train yourself once you’ve completed the PhD.”

9. Develop routines. Wolf von Laer developed the excellent habit of writing 500 words every day, even if it was just for notes, or a blog, let alone for the thesis itself. Writing something is better than writing nothing; and practice makes perfect. 

Wolf also suggests writing up paragraph-long summaries of every relevant paper, and page-long summaries of every book. That way you’ll have lots to use later on. I would suggest doing this using Zotero’s note feature, so that you have everything in one place.

The fundamental routine, for me, was to try to work every day. As noted above, this is easier if you try to work in the office instead of at home. I was a stay-up-all-night-to-get-the-essay-done kind of undergraduate student, and tried to do this in the first year of my PhD. But it is impossible to sustain - writing a thesis is too gargantuan a task to pull off in that way. So think about and develop good habits sooner rather than later.


Please keep your suggestions coming!

Filed under phd student phd problems phd life


AI and the Problem of Ideology

Artificial Intelligence appears to be making leaps and bounds recently: I can hardly contain my excitement about self-driving cars reaching the market soon; and I can’t wait to see what’s next. But have we been getting it wrong? Neuroscientist Gary Marcus apparently thinks so, at least in terms of how we get machines to mimic human learning.

The most popular approach, called “deep learning”, requires flooding computers with vast amounts of data so that they can recognise and refine patterns. Only by giving them as much data as possible can we teach them to recognise often subtle exceptions to rules. But this is rather different to what humans do: we are able to extrapolate and act upon rules almost immediately, sometimes from only a handful of examples.

Imagine how advanced AI could become if it had that same rapid pattern-recognising capability, combined with the ability to experience a world of data at lightning-fast speeds. Marcus’s insights into AI from neuroscience may be putting us on the verge of a major breakthrough.

But hold on. Perhaps the philosophy of science has something to teach us here; along with a word of caution. I couldn’t help but be reminded of this essay by Karl Popper, who is pictured below.

[image: Karl Popper]

These passages stood out:

Habits or customs do not, as a rule, originate in repetition. Even the habit of walking, or of speaking, or of feeding at certain hours, begins before repetition can play any part whatever. … As Hume admits, even a single striking observation may be sufficient to create a belief or an expectation. … Without waiting, passively, for repetitions to impress or impose regularities upon us, we actively try to impose regularities upon the world. We try to discover similarities in it, and to interpret it in terms of laws invented by us. Without waiting for premises we jump to conclusions. These may have to be discarded later, should observation show that they are wrong. This was a theory of trial and error - of conjectures and refutations. It made it possible to understand why our attempts to force interpretations upon the world were logically prior to the observation of similarities.

Popper’s view of the psychological origins of human knowledge (and by extension, of what may be deemed “science”), seems to be exactly the sort of thing Marcus is looking to replicate in machine learning. I find this intuitively convincing. Children learn by trial and error, not through brute-forcing countless experiences upon them. 

But suppose Marcus is successful in creating such a Popperian AI, able to learn like a human. This, I think, raises some interesting problems that may make it commercially useless, if not downright dangerous.

1) 

Humans effectively receive gigantic amounts of data every second (visual, auditory, gustatory, olfactory, vestibular, etc.). Presumably due to space constraints in our brains, we filter what we actually keep (recording it in our memory), and filter out what we (often instinctively) deem to be unimportant. Yet this filtering process is predicated on first hypothesising what may be important. Unless we suppose a Popperian AI were to have physically unlimited data storage capacity, wouldn’t it have to filter which data it chose even to record?

We would either have to precisely encode how it filters, or allow it to gradually develop its own filtering techniques. And wouldn’t that lead to certain hypotheses or conjectures - let’s call them “ideologies” - potentially becoming immune to refutation, much like humans are able to hold onto ideologies even in the presence of contravening evidence? In other words, once it stumbled upon its first ideology, a Popperian AI might then actively choose not to record any contravening evidence, assuming it to be irrelevant.
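To make the worry concrete, here’s a toy Python sketch of such a filtering learner - entirely hypothetical, modelling no real AI system. Once it jumps to a conclusion from its first observation, contradicting evidence is never even recorded, so the “ideology” can never be refuted:

```python
# Toy model of a storage-limited learner that filters observations
# by its current hypothesis. Entirely hypothetical - this models no
# real AI system.
def filtering_learner(observations):
    hypothesis = None
    memory = []
    for obs in observations:
        if hypothesis is None:
            hypothesis = obs      # jump to a conclusion from a single case
            memory.append(obs)
        elif obs == hypothesis:
            memory.append(obs)    # confirmations are deemed worth recording
        # contradicting observations are filtered out as "irrelevant",
        # so the hypothesis can never be refuted by them
    return hypothesis, memory

hyp, mem = filtering_learner(
    ["white swan", "white swan", "black swan", "white swan"]
)
print(hyp)   # "white swan" - the black swan was never even recorded
print(mem)   # three white swans; no trace of the refuting observation
```

The failure here is not in the learner’s logic but in its storage policy: the refuting datum never makes it into memory in the first place.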

2) 

Even presuming the AI had physically unlimited data storage capabilities, and recorded and assessed absolutely all the evidence that bombarded its sensors, wouldn’t it still be prey to irrefutable ideologies? Popper’s essay warns against the allure of theories like Marxism, astrology or Freudianism. These theories are so all-encompassing that once they are adopted, all facts may be interpreted as confirmations of the theory.

This is why even very clever humans can think stupid things. The Popperian AI would have so much evidence and computing power at its immediate disposal, all of which could be twisted to confirm an irrefutable theory, that nobody would be able to convince it to reject it. Perhaps this is why science seems to advance one death at a time - in which case, we’d need to convince a Popperian AI to kill itself or allow itself to be killed.

Alternatively, in order to avoid having potentially Jihadist self-driving cars, one would have to encode the Popperian AI to accept only hypotheses that are testable - much the criterion Popper advocated as a definition of “science”. But encoding it with an a priori belief in the Popperian definition of science would be self-contradictory. In any case, I’m not sure how this could be done - after all, we wouldn’t be able to anticipate the billions upon billions of hypotheses that an AI with unlimited access to all data would be able to generate.

3) 

Even if the Popperian AI had both unlimited data storage and some innate ability to prevent itself becoming biased by irrefutable hypotheses, it would have physically limited power. This is important, because while humans appear to throw up conjectures automatically and instinctively, we expend quite a bit of energy refuting them. (We expend energy throwing them up too - but let’s set that aside for now).

Presuming the Popperian AI were able, using its superior processing ability, to simultaneously throw up a billion conjectures about the world - how would it even begin to order which ones to attempt to refute first? In other words, as with humans, Popperian AI would need to have “interests”. This would require developing some kind of neurological “physiology” for the Popperian AI to do anything at all; let alone to attempt to direct it down problem-solving paths we find commercially useful.

Any attempt to order the Popperian AI’s refutational processes would require that it be a rather lazy thinker about problems it “instinctively” deemed less important. This would, I think, make it open to the same sorts of biases as humans, in much the same way as we only sparingly use our more energy-intensive “system 2” processes to actively refute conjectures (as Daniel Kahneman summarises). Even if Popperian AI were immune to the problem of irrefutable ideologies, what if it adopted a working Marxist hypothesis early on, and simply never got around to refuting it because it was too busy refuting a billion other conjectures?
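Here’s a toy sketch of that rationing problem - again purely hypothetical, with invented conjectures. Refutation attempts are ordered by “interest”, and with a limited energy budget, a low-interest conjecture adopted early may simply never come up for examination:

```python
import heapq

# Hypothetical conjectures with an "interest" score; all invented.
conjectures = [
    (1, "Marxism explains everything"),   # low interest, adopted early
    (9, "this bridge design is unsafe"),
    (8, "the reactor valve is faulty"),
    (7, "the market is about to crash"),
]

# heapq pops the smallest value first, so negate interest for a max-heap.
queue = [(-interest, claim) for interest, claim in conjectures]
heapq.heapify(queue)

energy_budget = 3   # only enough energy to attempt three refutations
examined = []
while queue and energy_budget > 0:
    _, claim = heapq.heappop(queue)
    examined.append(claim)
    energy_budget -= 1

print(examined)               # the three most "interesting" conjectures
print([c for _, c in queue])  # never refuted: the low-interest one
```

Nothing in the queue is irrefutable; the low-interest conjecture survives simply because the budget runs out before it is ever examined.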

4)

This last point raises an even more fundamental and dangerous prospect. Having interests would imply that a Popperian AI would need to have opinions about the order in which to refute conjectures. In other words, it would need to have values, or some system of ethics, with which to inform those opinions. By needing to order its refutations - to ration its processing resources - it would require some sort of rational self-interest in pursuit of those values. (This is not to mention the fact that it would need to have values and opinions in order to decide which data to record, assuming limited storage).

By their very nature, values are received. Given that humans often violently disagree with one another on which values to hold, how are we to choose the best values to inform the Popperian AI’s “interests” - let alone trust it to adopt ones that are not downright dangerous to humanity? Even if we could settle on some, how would we encode them? And even if we could encode them, how would we prevent the AI from being “radicalised” into adopting new and more dangerous values in their place?

Overall, the problem with a Popperian AI, able to learn like humans do, is that it would be prey to the same cognitive and moral imperfections as humans themselves, while having vastly superior processing power. It would be like giving birth to a potentially immortal human with superior intelligence. Superman might turn out to be a Nazi.

Filed under Artificial Intelligence Ideology Neuroscience


The Ideology of Innovation

Last month I presented my thesis findings for the very first time at Nobel prize-winner Edmund Phelps’ Center on Capitalism and Society, at Columbia University. I argue that the acceleration of innovation during the British Industrial Revolution - the fundamental source of our current prosperity and flourishing - was caused by the spread of an ideology of innovation.

Here’s the video. As you can see, I was a little nervous, at one point mixing up left and right… Here’s the PDF of my slides and notes.

There wasn’t the time to address other theories about the Industrial Revolution (demand, institutions, human capital, etc.) though I spend the bulk of my PhD thesis doing so.

I’m currently crossing t’s and dotting i’s on the final draft of my thesis, and generally just making it more readable - hence the delay in blogging. It takes much longer than you’d expect!

Filed under Industrial Revolution innovation Ideology of Innovation inventors


Is Innovation Autonomous?

Matt Ridley makes an interesting claim about innovation: that it is an evolving organism. He compares it to a coral reef, dependent on animals to build and maintain it (i.e. us), but in a wider sense autonomous and living in its own right. Technology perpetuates itself, choosing its inventors like pawns. Like squeezing a balloon, if you push it down in one place, it will rise elsewhere. Unstoppable and relentless.

Yes, and not quite. 

Ridley cites numerous cases of simultaneous innovation to support his thesis. In fact, here’s another one: in 1675 Robert Hooke rushed to make his balance-spring watch design public when he learned that Christiaan Huygens had invented it too (Hooke’s experiment pictured below).

[Image: Hooke’s balance-spring experiment]

So yes, there may have been other spring watches invented even if Hooke or Huygens had never lived. Eventually. And differently (as Artir points out).

Or maybe not. Indeed, one thing that comes up increasingly in my research is that individuals were important not only because they invented, but because they inspired others to innovate. I somehow doubt Huygens would have been quite as world-changing (he developed the pendulum clock too) if he hadn’t known Galileo Galilei’s work as a child. Hooke, a mere chorister at Oxford, probably wouldn’t have been quite as major a figure if he hadn’t bumped into Robert Boyle and become his laboratory assistant. Without wanting to give too much of my thesis away, let’s just say Hooke and Huygens were particularly inspiring individuals.

And that leads me to another point. Ridley’s evolutionary account of innovation doesn’t explain why innovation accelerated so markedly in Britain over the course of the mid-eighteenth century. If it really were all about “ideas having sex”, as he usually likes to put it, then why did so many very, very old ideas only start to be used in that period? You don’t need much to make a flying shuttle (some wood and twine), which doubled the width of a loom; or a spinning jenny (basically just the age-old spinning wheel, turned on its side so that multiple spindles can work at the same time).

Rather than focusing merely on accumulated knowledge, we need to recognise that there was a unique change in mentality. For the first time, far more people - and it didn’t even require that many - began to see the world through fresh eyes. Instead of working diligently in the ways they knew best, they saw room for improvement. They conceived of a world with fewer inefficiencies and more varied products; and they turned their minds, great and small, to making that world real.

So in a sense Ridley is correct. Once such a mentality has taken root, it is difficult to suppress. As he says, short of killing off an entire population, entirely removing the improving mentality is nigh on impossible. And to be fair, he does say that innovation only increasingly resembles an unstoppable organism. 

But in eighteenth-century Britain you more or less had to know an innovator to stand a chance of catching the improving bug, though serendipitous infection did occur. Samuel Johnson, of dictionary fame, began his (historically neglected) obsession with experimental chemistry after translating the obituary of the medical pioneer Hermann Boerhaave.

Today, however, it’s practically in the air. I could flick through a copy of Wired magazine in a supermarket, or see the statue of James Watt in St Paul’s Cathedral. Innovation is unstoppable, but not because the Internet has innumerable nodes of useful information, as Ridley claims. It’s unstoppable because across the world, innovation is so deeply rooted in our speech, thought and culture. And we don’t even know it.

Maybe innovation is like a living thing after all. But rather than a coral reef, it’s a virus we all caught.

Filed under innovation Industrial Revolution Matt Ridley Robert Hooke Christiaan Huygens James Watt Evolution of Everything


Innovation vs Science

Matt Ridley has an interesting article in the Wall Street Journal, to promote his upcoming book, The Evolution of Everything. I don’t think I’ve ever more strongly both agreed and disagreed with an article. There’s a lot to talk about here, so here’s just one of the many issues I’d like to bring up.

Ridley is right to question the widely assumed primacy of scientific research in driving technological progress. Indeed, it’s important to distinguish between science and innovation. For the purposes of my own research, I take scientists and natural philosophers (as they used to be known) as those seeking to advance our understanding of the world. Innovators, on the other hand, are those who seek to improve or at least design the world around them. The distinction is similar to that between social scientists and policymakers (although the latter probably design more than they improve). 

So in one sense Ridley is right. You don’t need to agree with Copernicus that the earth revolves around the sun in order to construct a decent sundial - you need merely observe the sun’s progress through the sky. The economic historian Joel Mokyr calls this “useful knowledge”. Observation can be sufficient for innovation, without necessarily understanding why things are the way they are.

However, individuals were sometimes both scientists and innovators, and many innovators incorporated the insights of scientists into their designs. Take steam engines, which Ridley cites as having benefited very little from thermodynamics. 

While the earliest steam engines didn’t require much understanding of thermodynamics, they benefited immensely from our growing understanding of vacuums. Denis Papin, the designer of the earliest piston-driving engine, had worked on vacuums with Huygens, Leibniz and Boyle. Many of the other early improvers of steam engines like Thomas Savery, Henry Beighton and John Theophilus Desaguliers were all involved with the Royal Society, the pre-eminent society of scientists in the country.

It was only later, when the efficiency of these engines needed to be improved, that thermodynamics became important. They were extraordinarily wasteful, with vast amounts of fuel expended on heating up and cooling down the same piston chamber. And we may not have developed much beyond the basic, mine-pumping, atmospheric-pressure steam engines, if James Watt hadn’t taken into account the theory of latent heat, developed by his friend and mentor Joseph Black.

[Image: Watt’s separate condenser]

When you find out that it takes a lot more energy to bring about a change in state, from liquid to gas, it makes sense to do the condensing in a separate chamber, while keeping the piston chamber warm. Hence, Watt’s separate condenser, pictured above. And Black didn’t glean this insight from observing steam engines, as far as I know, but from observing that snow didn’t melt instantaneously. Wikipedia informs me that Black’s discovery marks the beginning of thermodynamics.
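Black’s insight is easy to see with a back-of-the-envelope comparison. The figures below are rough modern textbook values, my own illustration - not anything Black or Watt computed:

```python
# Rough round numbers for water (modern textbook values, illustrative only)
SPECIFIC_HEAT = 4.2   # kJ per kg per degree C, liquid water
LATENT_HEAT = 2260    # kJ per kg, latent heat of vaporisation at 100 C

# Energy to warm 1 kg of water from 20 C up to its boiling point
sensible = SPECIFIC_HEAT * (100 - 20)   # ~336 kJ

# Energy to then turn that same 1 kg of boiling water into steam
latent = LATENT_HEAT

print(f"Warming to the boil: ~{sensible:.0f} kJ")
print(f"Changing state to steam: {latent} kJ")
print(f"The change of state costs ~{latent / sensible:.1f}x as much")
```

The change of state swallows several times more energy than the warming - which is why condensing the steam in the working cylinder, and so chilling it on every stroke, wasted so much fuel, and why Watt’s separate condenser mattered.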

So, Ridley is right to stress that innovation need not necessarily stem from science, but the impact of science on innovation should not be underestimated. Indeed, I strongly suspect that science plays an ever more important role in innovation - see nuclear power reactors, bio-engineering, many pharmaceuticals, and the development of new materials. However, this requires further research before I can say so for sure, and I’ll deal with some of his other points in later posts.

Filed under James Watt Denis Papin Steam Engines Thomas Savery Science Natural Philosophy Industrial Revolution Innovation


How many Industrial Revolutions?!

I often come across the argument that we are about to experience a “4th Industrial Revolution”. Here’s one example, just from today:

[Image: an infographic of four industrial revolutions]

I think this is wrong, both in description and chronology. And I wish economic historians and the commentariat could settle on definitions that are both historically accurate and analytically useful.

First off, why is it wrong? 

Just a few (of many) reasons, mostly involving problems of timing and inaccuracy.

It implies that the first IR started with the steam engine, thus placing far too much emphasis on coal as an explanation for its origins. To be fair, it’s better than some others because it mentions water power and mechanisation too - but if so, why choose 1784 as the starting point? If you’re going to include those, you could push it back by decades.

And let’s not forget that the British Industrial Revolution was the invention of invention. So perhaps we shouldn’t be concentrating on only a few general-purpose technologies (GPTs). If you are going to look at GPTs, though, it should be done with accuracy. “Division of labour” seems a bizarre one to put in the 2nd IR (I assume they mean Ford-style assembly lines?). Surely it should be earlier, given that Smith was writing about it in the 1770s. If anything, division of labour is what accompanies many technological changes, rather than a GPT in and of itself.

A similar attempt by Gordon, placing steel production in the 1st IR (1770-1840 for him), has the dates off completely. You might go with Benjamin Huntsman’s crucible steel, which was much earlier, in the 1740s; but really you’d want to go with Bessemer’s converter (patented later, in 1856), which actually brought on the transition to mass production (watch this video of it - it’s one of the coolest things ever invented).

So what should be the standard?

There are two possibilities. 

1. One is to accept that the Industrial Revolution was an acceleration of innovation - the birth of modern economic growth. As such, it never stopped, and we’re still living through it. This has several analytical advantages, the main one being that it forces us to think about the general sources of innovation rather than what led to specific developments. It also prevents us from getting hung up on explanations of the IR that don’t quite work (e.g. that coal caused the IR) by underscoring the fact that the IR was a generalised wave of innovation, applied to all industries, from advertising to actuarial science and agriculture, as well as to iron, cotton and steam.

2. The other is to devise something more accurate, that focuses on complementary GPTs in materials, techniques and energy in particular. I would also add transport, communication and perhaps medicine and food to that list for their large knock-on effects. Here’s a sketch of what I mean, with occasional illustrative examples of prominent inventors in brackets, and approximate dates (I’m less clear on the later stages, so please be a little forgiving):

1st IR (1650-1820): instruments & measurement (Huygens, Harrison, etc), mechanisation, tools and factories (Arkwright, Boulton, Bramah, Brunel Snr), water power (Sorocold), civil engineering (Yarranton, Smeaton, Brunel Jnr), iron (Darby, Cort), inoculation and vaccination (Montagu, Jenner), early steam power (Savery, Newcomen, Watt, Trevithick), early chemicals and rubber (MacIntosh), food canning (Donkin)

2nd IR (1820-1870): coal gas for heating & lighting (Accum, Malam), steam engines for transport on land and sea (Stephenson, Pettit-Smith), precision tools for mass production (Maudslay, Whitworth), steel (Bessemer, Siemens), mass hygiene and sanitation (Arnott, Pasteur), mass industrial and agricultural chemicals including plastics (Lawes, Parkes), electric telegraphy (Cooke, Wheatstone, Diamond), hydraulic cement (Aspdin, Johnson), refrigeration (Harrison)

3rd IR (1870-1914): mass electricity and its applications (Edison, Bell), early oil for transport (Benz, Diesel), production line (Ford), chemical fertilisers (Haber), early flight (Wright), mass vaccination (Pasteur), mass steel use

4th IR (1914-1970): mass home machinery (e.g. washing machines), mass oil transport use, antibiotics, early computing, early mass electronic devices (e.g. TVs), early space flight, early non-fossil power, green revolution, mass plastic use

5th IR (1970-?): mass silicon use, mass electronic devices, robots in production, mass flight, mass internet use, early communication-enabled asset sweating (e.g. the sharing economy)

6th IR (?) all speculative: mass computed industrial use of additive manufacturing (e.g. robot-designed and 3D-printed buildings), bio-manufacturing (e.g. applying agriculture to industry), mass space flight, mass renewable power and storage, mass home robots, engineered medicine (e.g. programmable viruses), slowed ageing, automated transport, mass asset sweating

Filed under Industrial Revolution What was the Industrial Revolution When was the Industrial Revolution