Against Copyright

Copyright considered paradoxical, incoherent, and harmful from an information theory and compression perspective as there is no natural kind corresponding to ‘works’, merely longer or shorter strings for stupider or smarter algorithms.
philosophy, politics, computer-science
2008-09-26–2014-05-04 finished certainty: possible importance: 2


One of the most troublesome aspects of copyright law as applied to technology is how the latter makes it possible - and even encourages - doing things that expose the intellectual incoherence of the former; copyright is merely an ad hoc set of rules and customs evolved for bygone economic conditions to accomplish certain socially-desirable ends (and can be criticized or abolished for its failures). If we cannot get the correct understanding of copyright, then the discussion is foredoomed1. Many people suffer from the delusion that it is something more than that, that copyright is somehow objective, or even some sort of actual moral human right (consider the French “droit d’auteur”, one of the moral rights)2 with the same properties as other rights such as being perpetual3. This is quite wrong. Information has a history, but it carries with it no intrinsic copyright.

This has been articulated in ways both serious and humorous, but we can approach it in an interesting way from the direction of information theory.

Lossless compression

One of the more elegant ideas in computer science is the proof that lossless compression does not compress all files. That is, while an algorithm like ZIP will compress a great many files - perhaps to tiny fractions of the original file size - it will necessarily fail to compress many other files, and indeed for every file it shrinks, it will expand some other file. The general principle here is TANSTAAFL:

“There ain’t no such thing as a free lunch.”

There is no free lunch in compression. The normal proof of this invokes the Pigeonhole Principle; the proof goes that each file must map onto a single unique shorter file, and that shorter file must uniquely map back to the longer file (if the shorter file did not, you would have devised a singularly useless compression algorithm - one that did not admit of decompression).

But the problem is, a long string simply has more ‘room’ (possibilities) than a shorter string. Consider a simple case: we have a number between 0 and 1000, and we wish to compress it. Our compressed output is between 0 and 10 - shorter, yes? But suppose we compress 1000 into 10. Which numbers do 900-999 get compressed to? Do they all go to 9? But then given a 9, we have absolutely no idea what it is supposed to expand into. Perhaps 999 goes to 9, 998 to 8, 997 to 7 and so on - but just a few numbers later we run out of single-digit numbers, and we face the problem again.
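
To make the counting argument concrete, here is a minimal Python sketch (the particular mapping chosen is arbitrary and purely illustrative): any scheme that sends the 1001 numbers 0-1000 down to the 11 numbers 0-10 must send many inputs to the same output, and so cannot be losslessly reversed.

    from collections import defaultdict

    inputs = range(1001)             # the numbers 0..1000
    compress = lambda n: n // 100    # one arbitrary way to "shorten" them into 0..10

    buckets = defaultdict(list)
    for n in inputs:
        buckets[compress(n)].append(n)

    collisions = {out: ins for out, ins in buckets.items() if len(ins) > 1}
    print(len(collisions))  # 10 of the 11 outputs are shared by ~100 inputs apiece,
                            # so given a '9' alone we cannot tell which number it stood for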

Fundamentally, you cannot pack 10kg of stuff into a 5kg bag. TANSTAAFL.

You keep using that word…

The fore­go­ing may seem to have proven loss­less com­pres­sion im­pos­si­ble, but we know that we do it rou­tine­ly; so how does that work? Well, we have proven that loss­lessly map­ping a set of long strings onto a set of shorter sub­strings is im­pos­si­ble. The an­swer is to re­lax the shorter re­quire­ment: we can have our al­go­rithm ‘com­press’ a string into a longer one. Now the Pi­geon­hole Prin­ci­ple works for us - there is plenty of space in the longer strings for all our to-be-com­pressed strings. And as it hap­pens, one can de­vise ‘patho­log­i­cal’ in­put to some com­pres­sion al­go­rithms in which a short in­put de­com­presses4 into a much larger out­put - there are avail­able which em­pir­i­cally demon­strate the pos­si­bil­i­ty.
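
An empirical illustration of both directions, using Python’s zlib (a DEFLATE/gzip-family implementation); the exact byte counts will vary, but the pattern will not:

    import os, zlib

    repetitive = b"\x00" * 10**6        # a highly regular input
    random_ish = os.urandom(10**6)      # an incompressible input

    small = zlib.compress(repetitive)
    big   = zlib.compress(random_ish)

    print(len(small))   # roughly a kilobyte: huge savings on regular data
    print(len(big))     # slightly *more* than 1,000,000 bytes: the price of generality

    # And the reverse direction: a tiny compressed string can 'decompress' into
    # something enormously larger - the zip-bomb phenomenon mentioned above.
    print(len(zlib.decompress(small)))  # back to 1,000,000 bytes from ~1 KB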

What makes lossless compression any more than a mathematical curiosity is that we can choose which sets of strings will wind up usefully smaller, and what sets will be relegated to the outer darkness of obesity.

Compress Them All—The User Will Know His Own

TANSTAAFL is king, but the universe will often accept payment in trash. We humans do not actually want to compress all possible strings but only ones we actually make. This is analogous to static typing in programming languages; type checking may result in rejecting many correct programs, but we do not really care, as those are not programs we actually want to run. Or, high-level programming languages insulate us from the machine and make it impossible to do various tricks one could do if one were programming in assembler; but most of us do not actually want to do those tricks, and are happy to sell that ability for conveniences like portability. Or we happily barter away manual memory management (with all the power it endows) to gain convenience and correctness.

This is a powerful concept which is applicable to many tradeoffs in computer science and engineering, but we can view the matter in a different way - one which casts doubt on simplistic views of knowledge and creativity such as we see in copyright law. For starters, we can look at the space-time tradeoff: a fast algorithm simply treats the input data (e.g. a WAV file) as essentially the output, while a smaller input with basic redundancy eliminated will take additional processing and require more time to run. A common phenomenon, with some extreme examples5, but we can look at a more meaningful tradeoff.
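
As a minimal sketch of that tradeoff (run-length encoding standing in for the WAV/FLAC contrast; the numbers are illustrative): the raw form is ready to use as-is, while the smaller encoded form must first be expanded by extra computation.

    raw     = [0] * 44100 + [1] * 44100    # e.g. one second of silence, then one of signal
    encoded = [(0, 44100), (1, 44100)]     # the same information in two pairs instead of 88,200 samples

    def decode(runs):
        out = []
        for value, count in runs:          # the 'additional processing' the smaller file demands
            out.extend([value] * count)
        return out

    assert decode(encoded) == raw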

Cast your mind back to a lossless algorithm. It is somehow choosing to compress ‘interesting’ strings and letting uninteresting strings blow up. How is it doing so? It can’t be doing it at random, and doing easy things like cutting down on repetition certainly won’t let you write an algorithm that can cut 30-megabyte files down to 5 megabytes.

Work smarter, not harder

“An efficient program is an exercise in logical brinkmanship.”

Butler Lampson, paraphrasing Dijkstra6

The answer is that the algorithms are smart about a kind of file. Someone put a lot of effort into thinking about the kinds of regularity that one might find only in a WAV file and how one could predict & exploit them. As we programmers like to say, the algorithms embody domain-specific knowledge. A GZIP algorithm, say, operates while implicitly making an entire constellation of assumptions about its input - that repetition in it comes in chunks, that it is low-entropy, that it is globally similar, that it is probably text (which entails another batch of regularities gzip can exploit) in an English-like language or else binary, and so on. These assumptions baked into the algorithm collectively constitute a sort of rudimentary intelligence.
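
One way to see those baked-in assumptions at work (a small experiment, not anything from gzip’s documentation): feed DEFLATE the same bytes twice, once with their chunky repetition intact and once shuffled. The symbol counts are identical, but destroying the structure the algorithm expects ruins the compression.

    import random, zlib

    text = b"the quick brown fox jumps over the lazy dog. " * 2000
    shuffled = bytearray(text)
    random.shuffle(shuffled)     # same bytes, same letter frequencies, structure destroyed

    print(len(zlib.compress(text)))             # a few hundred bytes: repetition comes in chunks
    print(len(zlib.compress(bytes(shuffled))))  # tens of kilobytes: only the skewed letter
                                                # frequencies are left for it to exploit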

The algorithm compresses smartly because it operates over a small domain of all possible strings. If the domain shrank even more, even more assumptions could be made; consider the compression ratios attained by top-scorers in the Marcus Hutter compression challenge. The subject area is very narrow: highly stylized and regularly formatted, human-generated English text in MediaWiki markup on encyclopedic subjects. Here the algorithms quite literally exhibit intelligence in order to eke out more space savings, drawing on AI techniques such as neural nets.
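
The underlying arithmetic is Shannon’s: a symbol the model assigns probability p can be coded in about -log2(p) bits, so the better the model predicts the text, the shorter the ideal encoding. A toy order-0 illustration (real Hutter-challenge entries use vastly richer models):

    import math
    from collections import Counter

    text = "encyclopedic english text is highly predictable " * 50

    def bits_needed(s, prob):
        # ideal code length under a model: -log2 P(symbol), summed over the text
        return sum(-math.log2(prob(ch)) for ch in s)

    uniform = lambda ch: 1 / 256                   # a model that knows nothing about the input
    counts  = Counter(text)
    fitted  = lambda ch: counts[ch] / len(text)    # a model fitted to this text's letter frequencies

    print(bits_needed(text, uniform) / 8)   # = len(text) bytes: no savings at all
    print(bits_needed(text, fitted) / 8)    # far fewer bytes: the model's knowledge is the compression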

If the foregoing is unconvincing, then go and consider a wider range of lossless algorithms. Video codecs think in terms of wavelets and frames; they will not even deign to look at arbitrary streams. Audio codecs strive to emulate the human ear. Image compression schemes work on uniquely-human assumptions like trichromaticity.

So then, we are agreed that a good compression algorithm embodies knowledge or information about the subject matter. This proposition seems indubitable. The foregoing has all been informal, but I believe there is nothing in it which could not be turned into a rigorous mathematical proof at need.

Compression requires interpretation

But this proposition should subliminally worry copyright-minded persons. Content depends on interpretation? The latter is inextricably bound up with the former? This feels dangerous. But it gets worse.

Code is data, data code

Suppose we have a copyrighted movie, Titanic. And we are watching it on our DVD player or computer, and the MPEG4 file is being played, and everything is fine. Now, it seems clear that the player could not play the file if it did not interpret the file through a special MPEG4-only algorithm. After all, any other algorithm would just make a mess of things; so the algorithm contains necessary information. And it seems equally clear that the file itself contains necessary information, for we cannot simply tell the DVD player to play Titanic without having that quite specific multi-gigabyte file - feeding the MPEG4 algorithm a random file or no file at all will likewise just make a mess of things. So therefore both algorithm and file contain information necessary to the final copyrighted experience of actual audio-visuals.

There is nothing stopping us from fusing the algorithm and file. It is a truism of computer science that ‘code is data, and data is code’. Every computer relies on this truth. We could easily come up with a 4 or 5 gigabyte executable file which, when run all by its lonesome, yields Titanic. The general approach of storing required data within a program is a common programming technique, as it makes installation simpler.
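
A minimal sketch of such a fusion (the file name and helper below are hypothetical, and this is of course not how DVD players or MPEG4 actually work): given any data file, emit a standalone Python script whose source text embeds the data and reproduces it when run.

    import base64

    def make_self_contained_player(data: bytes, out_path: str = "player.py") -> None:
        payload = base64.b64encode(data).decode("ascii")
        source = (
            "import base64, sys\n"
            f"PAYLOAD = {payload!r}\n"
            "sys.stdout.buffer.write(base64.b64decode(PAYLOAD))\n"
        )
        with open(out_path, "w") as f:
            f.write(source)

    # make_self_contained_player(open("titanic.mp4", "rb").read())   # hypothetical input file
    # 'python player.py > copy.mp4' would then reproduce the movie with no separate data file.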

The point of the foregoing is that the information which constitutes Titanic is mobile; it could at any point shift from being in the file to the algorithm - and vice versa.

We could squeeze most of the information from the algorithm to the file, if we wanted to; an example would be a WAV audio file, which is basically the raw input for the speaker - next to no interpretation is required (as opposed to the much smaller FLAC file, which requires much algorithmic work). A similar thing could be done for the Titanic MPEG4 files. A television is n pixels by n pixels, rendering n times a second, so a completely raw and nearly uninterpreted file would simply be the finite sum of n³ pixels per second of film. That demonstrates one end of the spectrum is possible, where the file contains almost all the information and the algorithm very little.
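
To put rough numbers on that end of the spectrum (the dimensions, frame rate, and runtime below are illustrative assumptions, not figures from the essay):

    width, height = 1920, 1080    # pixels per frame
    fps           = 24            # frames per second
    runtime       = 195 * 60      # roughly Titanic's 3h15m, in seconds
    bytes_per_px  = 3             # 8-bit R, G, B

    raw_bytes = width * height * fps * runtime * bytes_per_px
    print(raw_bytes / 1e12)       # ~1.7 terabytes of raw pixels, versus a few GB of MPEG4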

Gedankenexperiment

The other end of the spectrum is where the algorithm possesses most of the information and the file very little. How do we construct such a situation?

Well, consider the most degenerate example: an algorithm which contains in it an array of frames which are Titanic. This executable would require 0 bits of input. A less extreme example: the executable holds each frame of Titanic, numbered from 1 to (say) 1 million. The input would then merely consist of [1,2,3,4..1000000]. And when we ran the algorithm on appropriate input, lo and behold - Titanic!

Absurd, you say. That demonstrates nothing. Well, would you be satisfied if instead it had 4 million quarter-frames? No? Perhaps 16 million sixteenth-frames will be sufficiently un-Titanic-y. If that does not satisfy you, I am willing to go down to the pixel level. It is worth noting that even a pixel-level version of the algorithm can still encode all other movies! It may be awkward to have to express the other movies pixel by pixel (a frame would be a nearly arbitrary sequence like 456788,67,89,189999,1001..20000), but it can be done.
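
A toy rendering of the thought experiment (the table contents are stand-in data; a real version would hold Titanic’s actual pixel values): the decoder’s built-in table is where nearly all the information lives, so Titanic’s ‘compressed file’ is just the trivial index list, while any other movie is still expressible as a longer, arbitrary-looking one.

    PIXELS = list(range(1_000_000))    # stand-in for every pixel value of Titanic, in order

    def decode(indices):
        """Expand a list of table indices back into raw pixel data."""
        return [PIXELS[i] for i in indices]

    titanic = decode(range(len(PIXELS)))           # the input is just 0,1,2,...: nearly free
    other   = decode([456788, 67, 89, 189999])     # another movie: a long, arbitrary index list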

The attentive reader will have already noticed this, but this hypothetical Titanic algorithm precisely demonstrates what I meant about TANSTAAFL and algorithms knowing things; this algorithm knows an incredible amount about Titanic, and so Titanic is favored with an extremely short compressed output; but it nevertheless lets us encode every other movie - just with longer output.

In Which All Is Made Clear

There is no meaningful & non-arbitrary line to draw. Copyright is an unanalyzable ghost in the machine, which has held up thus far based on fiat and limited human (machine) capabilities. This rarely came up before as humans are not free to vary how much interpretation our eyes or ears do, and we lack the mental ability to freely choose between extremely spelled-out and detailed text, and extremely crabbed elliptical allusive language; we may try, but the math outdoes us and can produce program/data pairs which differ by orders of magnitude, and our machines can handle the math.

But now that we have a better understanding of concepts such as information, it is apparent that copyright is no longer sustainable on its logical or moral merits. There are only practical economic reasons for maintaining copyright, and the current copyright regime clearly fails to achieve such aims.


  1. For example, if you grant that copyright exists as a moral right, then you have immediately accepted the abrogation of another moral right, to free speech; this is intrinsically built into the concept of copyright, which is nothing but imposing restrictions on other people’s speech.↩︎

  2. From Justice Breyer’s dissent in Golan v. Holder (2012):

    Jefferson eventually came to agree with Madison, supporting a limited conferral of monopoly rights but only “as an encouragement to men to pursue ideas which may produce utility.” Letter from Thomas Jefferson to Isaac McPherson (Aug. 13, 1813), in 6 Papers of Thomas Jefferson, at 379, 383 (J. Looney ed. 2009) (emphasis added).

    This utilitarian view of copyrights and patents, embraced by Jefferson and Madison, stands in contrast to the “natural rights” view underlying much of continental European copyright law - a view that the English booksellers promoted in an effort to limit their losses following the enactment of the Statute of Anne and that in part motivated the enactment of some of the colonial statutes. Patterson 158-179, 183-192. Premised on the idea that an author or inventor has an inherent right to the fruits of his labor, it mythically stems from a legendary 6th-century statement of King Diarmed “‘to every cow her calf, and accordingly to every book its copy.’” A. Birrell, Seven Lectures on the Law and History of Copyright in Books 42 (1899). That view, though perhaps reflected in the Court’s opinion, ante, at 30, runs contrary to the more utilitarian views that influenced the writing of our own Constitution’s Copyright Clause. See S. Ricketson, The Berne Convention for the Protection of Literary and Artistic Works: 1886-1986, pp. 5-6 (1987) (The first French copyright laws “placed authors’ rights on a more elevated basis than the Act of Anne had done,” on the understanding that they were “simply according formal recognition to what was already inherent in the ‘very nature of things’”); S. Stewart, International Copyright and Neighbouring Rights 6-7 (2d ed. 1989) (describing the European system of droit d’auteur).

    ↩︎
  3. Mark Helprin’s editorial “A Great Idea Lives Forever. Shouldn’t Its Copyright?” (see Lessig’s review) is one of the standard examples of this sort of view. He begins by claiming that intellectual property is morally equivalent to other forms of property, and hence is as morally protected as the others:

    What if, after you had paid the taxes on earnings with which you built a house, sales taxes on the materials, real estate taxes during your life, and inheritance taxes at your death, the government would eventually commandeer it entirely? This does not happen in our society … to houses. Or to businesses. Were you to have ushered through the many gates of taxation a flour mill, travel agency or newspaper, they would not suffer total confiscation.

    Of course, an author can possess his work in perpetuity - the physical artifact. The government does not inflict ‘total confiscation’ on the author’s manuscript or computer. Rather, what is being confiscated is the ability to call upon the government to enforce, with physical coercion and violence, anywhere within its borders, some practices regarding information. This is a little different from property. Helprin then addresses the economic rationale of copyright, only to immediately appeal to ethical concerns transcending any mere laws or economic gains:

    It is, then, for the public good. But it might also be for the public good were Congress to allow the enslavement of foreign captives and their descendants (this was tried); the seizure of Bill Gates’s bankbook; or the ruthless suppression of Alec Baldwin. You can always make a case for the public interest if you are willing to exclude from common equity those whose rights you seek to abridge. But we don’t operate that way, mostly… Congress is free to extend at will the term of copyright. It last did so in 1998, and should do so again, as far as it can throw. Would it not be just and fair for those who try to extract a living from the uncertain arts of writing and composing to be freed from a form of confiscation not visited upon anyone else? The answer is obvious, and transcends even justice. No good case exists for the inequality of real and intellectual property, because no good case can exist for treating with special disfavor the work of the spirit and the mind.

    From Helprin’s “In Defense of The Book: A reply to the critics of Digital Barbarism”:

    Copyright was the sometimes sparkling, controversial fuse I used as an armature for a much expanded argument in regard to the relationship of man and machine, in which I attacked directly into the assault of modernism, collectivism, militant atheism, utilitarianism, mass conformity, and like things that are poison to the natural pace and requirements of the soul, that reify what those who say there is no soul believe is left, and that, in a headlong rush to fashion man solely after his own conceptions, are succeeding. The greater the success of this tendency, however, the unhappier are its adherents and the more they seek after their unavailing addictions, which, like the explanation for nymphomania, is not surprising. It is especially true in regard to the belief that happiness and salvation can be found in gadgets: i.e., toy worship. I addressed this. I defended property as a moral necessity of liberty. I attempted to reclaim Jefferson from the presumptuous embrace of the copyleft. (Even the docents at Monticello misinterpret Jefferson, failing to recognize his deep and abiding love of the divine order.) And I advanced a proposition to which the critics of copyright have been made allergic by their collectivist education - that the great achievement of Western civilization is the evolution from corporate to individual right, from man defined, privileged, or oppressed by caste, clan, guild, ethnicity, race, sex, and creed, to the individual’s rights and privileges that once were the province only of kings

    But this is its concluding passage:

    ’The new, digital barbarism is, in its language, comportment, thoughtlessness, and obeisance to force and power, very much like the old. And like the old, and every form of tyranny, hard or soft, it is most vulnerable to a bright light shone upon it. To call it for what it is, to examine it while paying no heed to its rich bribes and powerful coercions, to contrast it to what it presumes to replace, is to begin the long fight against it.

    Very clearly, the choice is between the preeminence of the individual or of the collective, of improvisation or of routine, of the soul or of the machine. It is a choice that perhaps you have already made, without knowing it. Or perhaps it has been made for you. But it is always possible to opt in or out, because your affirmations are your own, the court of judgement your mind and heart. These are free, and you are the sovereign, always. Choose.’

    ↩︎
  4. This should be true only of decompression - only with an incompetent compression algorithm will one ever be able to compress a small input to a large output, because the algorithm’s format can start with a single bit indicating whether the following is compressed or not compressed, and if the output is larger than the input, not bother compressing at all. That additional bit makes the file larger and so doesn’t raise any issues with the Pigeonhole Principle.↩︎
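
    A minimal sketch of that escape hatch (using a whole flag byte rather than a single bit, and zlib as a stand-in compressor): the output is never more than one byte larger than the input.

        import zlib

        def pack(data: bytes) -> bytes:
            compressed = zlib.compress(data)
            if len(compressed) < len(data):
                return b"\x01" + compressed    # flag: the rest is compressed
            return b"\x00" + data              # flag: stored raw, compression would have expanded it

        def unpack(blob: bytes) -> bytes:
            return zlib.decompress(blob[1:]) if blob[:1] == b"\x01" else blob[1:]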

  5. The most extreme space-time tradeoff I have ever run into, far beyond any everyday examples, is a purely theoretical one; from Bennett 1988:

    The tradeoff between conciseness of representation and ease of decoding is illustrated in an extreme form by the information required to solve the halting problem. One standard representation of this information is as an infinite binary sequence K0 (the characteristic sequence of the halting set) whose i’th bit is 0 or 1 according to whether the i’th program halts. This sequence is clearly redundant, because many instances of the halting problem are easily solvable or reducible to other instances. Indeed, K0 is far more redundant than this superficial evidence might indicate. Barzdin [2] showed that this information can be compressed to the logarithm of its original bulk, but no concisely encoded representation of it can be decoded in recursively bounded time.

    Problems fairly high up the complexity hierarchy are bywords for intractability & futility among programmers. But this decoding of an extremely space-efficient representation of K0 is a process so slow that it doesn’t even fit in any normal complexity class!↩︎

  6. A paraphrase in Lampson 1983’s “Hints for computer system design” of a Dijkstra comment in EWD 709, “My hopes of computing science”:

    The hope is that transformations from a modest library will provide a path from a naïve, inefficient but obviously correct program to a sophisticated efficient solution. I have seen how via program transformations striking gains in efficiency have been obtained by avoiding recomputations of the same intermediate results, even in situations in which this possibility—note that the intermediate results are never part of the original problem statement!—was, at first sight, surprising…I am afraid that great hopes of program transformations can only be based on what seems to me an underestimation of the logical brinkmanship that is required for the justification of really efficient algorithms. It is certainly true, that each program transformation embodies a theorem, but are these the theorems that could contribute significantly to the body of knowledge and understanding that would give us maturity? I doubt, for many of them are too trivial and too much tied to program notation.

    ↩︎