Against Copyright

Copyright considered paradoxical, incoherent, and harmful from an information theory and compression perspective as there is no natural kind corresponding to ‘works’, merely longer or shorter strings for stupider or smarter algorithms.
philosophy, politics, computer-science
2008-09-26–2014-05-04 finished certainty: possible importance: 2


One of the most troublesome aspects of copyright law as applied to technology is how the latter makes it possible - and even encourages - doing things that expose the intellectual incoherence of the former; copyright is merely an ad hoc set of rules and customs evolved for bygone economic conditions to accomplish certain socially-desirable ends (and can be criticized or abolished for its failures). If we cannot get the correct conception of copyright, then the discussion is foredoomed1. Many people suffer from the delusion that it is something more than that, that copyright is somehow objective, or even some sort of actual moral human right (consider the French “droit d’auteur”)2 with the same properties as other rights such as being perpetual3. This is quite wrong. Information has a history, but it carries with it no intrinsic copyright.

This has been articulated in ways both serious and humorous, but we can approach it in an interesting way from the direction of information theory and compression.

Lossless compression

One of the more elegant ideas in computer science is the proof that lossless compression cannot compress all files. That is, while a lossless compression algorithm like ZIP will compress a great many files - perhaps to tiny fractions of the original file size - it will necessarily fail to compress many other files, and indeed for every file it shrinks, it will expand some other file. The general principle here is TANSTAAFL:

“There ain’t no such thing as a free lunch.”

There is no free lunch in compression. The normal proof of this invokes the Pigeonhole Principle; the proof goes that each file must map onto a single unique shorter file, and that shorter file must uniquely map back to the longer file (if it did not, we would have devised a singularly useless compression algorithm - one that did not admit of decompression).

But the problem is that a long string simply has more ‘room’ (possibilities) than a shorter string. Consider a simple case: we have a number between 0 and 1000, and we wish to compress it. Our compressed output is a number between 0 and 10 - shorter, yes? But suppose we compress 1000 into 10. Which numbers do 900-999 get compressed to? Do they all go to 9? But then, given a 9, we have absolutely no idea what it is supposed to expand into. Perhaps 999 goes to 9, 998 to 8, 997 to 7, and so on - but just a few numbers later we run out of single-digit numbers, and we face the problem again.

Fundamentally, you cannot pack 10kg of stuff into a 5kg bag. TANSTAAFL.
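To make the counting argument concrete, here is a minimal Python sketch (my own illustration, not drawn from any particular source): there are 2^n bit-strings of length n, but only 2^n - 1 strings strictly shorter, so any lossless compressor that shrinks even one input must expand some other.

```python
# Counting argument: there are more bit-strings of length n than there are
# strictly shorter bit-strings, so no lossless compressor can shrink every input.
def count_shorter_strings(n_bits: int) -> int:
    """Number of distinct bit-strings of length < n_bits (including the empty string)."""
    return sum(2 ** k for k in range(n_bits))  # 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1

n = 16
inputs = 2 ** n                      # all 16-bit strings
outputs = count_shorter_strings(n)   # every string shorter than 16 bits
print(inputs, outputs)               # 65536 vs 65535: one pigeon is always left over
assert outputs < inputs
```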

You keep using that word…

The foregoing may seem to have proven lossless compression impossible, but we know that we do it routinely; so how does that work? Well, we have proven that losslessly mapping a set of long strings onto a set of shorter strings is impossible. The answer is to relax the ‘shorter’ requirement: we can let our algorithm ‘compress’ some strings into longer ones. Now the Pigeonhole Principle works for us - there is plenty of space in the longer strings for all our to-be-compressed strings. And as it happens, one can devise ‘pathological’ input to some compression algorithms in which a short input decompresses4 into a much larger output - ‘zip bombs’ are available which empirically demonstrate the possibility.

What makes lossless compression any more than a mathematical curiosity is that we can choose which sets of strings will wind up usefully smaller, and which sets will be relegated to the outer darkness of obesity.
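We can watch a real compressor make exactly this trade. A short sketch using Python’s standard zlib module (the same DEFLATE family as ZIP and gzip); the repetitive, human-made string is the kind we choose to favor, while random bytes stand in for the vast majority of possible strings:

```python
import os
import zlib

redundant = b"the quick brown fox jumps over the lazy dog " * 500  # the kind of string we make
random_junk = os.urandom(10_000)                                   # a 'typical' arbitrary string

for label, data in [("redundant text", redundant), ("random bytes", random_junk)]:
    out = zlib.compress(data, 9)
    print(f"{label}: {len(data)} -> {len(out)} bytes")
# Typical result: the repetitive text shrinks to a small fraction of its size,
# while the random bytes come out slightly *larger* than they went in.
```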

Compress Them All—The User Will Know His Own

TANSTAAFL is king, but the universe will often accept payment in trash. We humans do not actually want to compress all possible strings, but only the ones we actually make. This is analogous to static typing in programming languages; type checking may result in rejecting many correct programs, but we do not really care, as those are not programs we actually want to run. Or consider high-level programming languages: they insulate us from the machine and make it impossible to do various tricks one could do if one were programming in assembler; but most of us do not actually want to do those tricks, and are happy to sell that ability for conveniences like portability. Or we happily barter away manual memory management (with all the power it confers) to gain convenience and correctness.

This is a powerful concept which is applicable to many tradeoffs in computer science and engineering, but we can view the matter in a different way - one which casts doubt on simplistic views of knowledge and creativity such as we see in copyright law. For starters, we can look at the space-time tradeoff: a fast algorithm simply treats the input data (eg. a WAV file) as essentially the output, while a smaller input with basic redundancy eliminated will take additional processing and require more time to run. This is a common phenomenon, with some extreme examples5, but we can look at a more meaningful tradeoff.
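Before moving on to that, here is a toy illustration of the basic space-time tradeoff - a sketch of my own, using a deliberately expensive stand-in computation rather than any real codec. We can either precompute and store every answer, or store almost nothing and redo the work on each request.

```python
import hashlib

def slow_decode(i: int, rounds: int = 10_000) -> bytes:
    """A deliberately expensive computation standing in for heavyweight decoding."""
    h = i.to_bytes(8, "big")
    for _ in range(rounds):
        h = hashlib.sha256(h).digest()
    return h

# One end of the tradeoff: precompute and store every answer (a big table, instant reads),
# like shipping the raw WAV.
TABLE = {i: slow_decode(i) for i in range(1_000)}

def lookup_fast(i: int) -> bytes:
    return TABLE[i]

# The other end: store nothing and redo the work on every request (tiny footprint, slow reads).
def lookup_small(i: int) -> bytes:
    return slow_decode(i)
```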

Cast your mind back to a lossless algorithm. It is somehow choosing to compress ‘interesting’ strings and letting uninteresting strings blow up. How is it doing so? It can’t be doing it at random, and doing easy things like cutting down on repetition certainly won’t let you write an algorithm that can cut 30-megabyte files down to 5 megabytes.

Work smarter, not harder

“An efficient program is an exercise in logical brinkmanship.”

Edsger W. Dijkstra, as paraphrased by Butler Lampson6

The answer is that the algorithms are smart about a particular kind of file. Someone put a lot of effort into thinking about the kinds of regularity one might find in a WAV file specifically, and how one could predict & exploit them. As we programmers like to say, the algorithms embody domain-specific knowledge. A GZIP algorithm, say, operates while implicitly making an entire constellation of assumptions about its input - that repetition in it comes in chunks, that it is low-entropy, that it is globally similar, that it is probably text (which entails another batch of regularities gzip can exploit) in an English-like language or else binary, and so on. These assumptions baked into the algorithm collectively constitute a sort of rudimentary intelligence.

The algorithm compresses smartly because it operates over a small domain of all possible strings. If the domain shrank even more, even more assumptions could be made; consider the compression ratios attained by the top scorers in the Marcus Hutter compression challenge. The subject area is very narrow: highly stylized and regularly formatted, human-generated English text in MediaWiki markup on encyclopedic subjects. Here the algorithms quite literally exhibit intelligence in order to eke out more space savings, drawing on AI techniques such as neural nets.
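A small sketch of how domain knowledge buys compression (the ‘sorted timestamps’ domain and the delta-encoding trick here are my own illustration, not anything used by the Hutter-prize entries): if we know the input is a sorted list of integers, we can store only the gaps between neighboring values, and a generic compressor then does considerably better than it can on the raw representation.

```python
import random
import struct
import zlib

# Hypothetical domain: a sorted list of event timestamps - large absolute values, small gaps.
random.seed(0)
timestamps = sorted(random.randrange(0, 10_000_000) for _ in range(100_000))

# Generic view: just a blob of 64-bit integers.
raw = struct.pack(f"{len(timestamps)}Q", *timestamps)

# Domain-aware view: store the first value and then only the gaps between neighbors.
deltas = [timestamps[0]] + [b - a for a, b in zip(timestamps, timestamps[1:])]
delta_enc = struct.pack(f"{len(deltas)}Q", *deltas)

print("generic view:     ", len(zlib.compress(raw, 9)), "bytes")
print("domain-aware view:", len(zlib.compress(delta_enc, 9)), "bytes")
# The delta-encoded form compresses considerably better: the gaps are small,
# low-entropy numbers, a regularity the generic byte-level view mostly hides.
```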

If the foregoing is unconvincing, then go and consider a wider range of lossless algorithms. Video codecs think in terms of wavelets and frames; they will not even deign to look at arbitrary streams. Audio codecs strive to emulate the human ear. Image compression schemes work on uniquely-human assumptions like trichromaticity.

So then, we are agreed that a good compression algorithm embodies knowledge or information about the subject matter. This proposition seems indubitable. The foregoing has all been informal, but I believe there is nothing in it which could not be turned into a rigorous mathematical proof at need.

Compression requires interpretation

But this proposition should be subliminally worrying copyright-minded persons. Content depends on interpretation? The latter is inextricably bound up with the former? This feels dangerous. But it gets worse.

Code is data, data code

Suppose we have a copyrighted movie, Titanic. And we are watching it on our DVD player or computer, and the MPEG4 file is being played, and everything is fine. Now, it seems clear that the player could not play the file if it did not interpret the file through a special MPEG4-only algorithm. After all, any other algorithm would just make a mess of things; so the algorithm contains necessary information. And it seems equally clear that the file itself contains necessary information, for we cannot simply tell the DVD player to play Titanic without having that quite specific multi-gigabyte file - feeding the MPEG4 algorithm a random file or no file at all will likewise just make a mess of things. So both algorithm and file contain information necessary to the final copyrighted experience of actual audio-visuals.

There is nothing stopping us from fusing the algorithm and the file. It is a truism of computer science that ‘code is data, and data is code’. Every computer relies on this truth. We could easily come up with a 4 or 5 gigabyte executable file which, when run all by its lonesome, yields Titanic. The general approach of storing required data within a program is a common programming technique, as it makes installation simpler.
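As a toy demonstration of that fusing - a sketch with a placeholder payload standing in for the multi-gigabyte file - here is Python which generates a standalone self-extracting script; the output is simultaneously program and data, and running it regenerates the file:

```python
import base64

def make_self_extractor(payload: bytes, out_name: str) -> str:
    """Return the source of a standalone script that recreates `payload` when run."""
    blob = base64.b64encode(payload).decode("ascii")
    return (
        "import base64\n"
        f"data = base64.b64decode({blob!r})\n"
        f"open({out_name!r}, 'wb').write(data)\n"
    )

# The generated script is code and data fused: running it regenerates the original file.
script = make_self_extractor(b"stand-in bytes for the Titanic MPEG4 stream", "Titanic.mp4")
print(script)
```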

The point of the foregoing is that the information which constitutes Titanic is mobile; it could at any point shift from being in the file to being in the algorithm - and vice versa.

We could squeeze most of the information from the algorithm into the file, if we wanted to; an example would be a WAV audio file, which is basically the raw input for the speaker - next to no interpretation is required (as opposed to the much smaller FLAC file, which requires much algorithmic work). A similar thing could be done for the Titanic MPEG4 files. A television displays a fixed grid of pixels, refreshed a fixed number of times a second, so a completely raw and nearly uninterpreted file would simply list every pixel of every frame - on the order of width × height × frame-rate × running-time pixels in total. That demonstrates that one end of the spectrum is possible, where the file contains almost all the information and the algorithm very little.
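To put rough numbers on that raw end of the spectrum (the parameters below are assumptions for illustration - 1080p, 24 frames per second, 8-bit RGB, a roughly 194-minute runtime - not a claim about any particular release):

```python
# Back-of-the-envelope size of a completely 'uninterpreted' movie file.
width, height = 1920, 1080     # assumed pixels per frame
fps           = 24             # assumed frames per second
runtime_s     = 194 * 60       # assumed ~194-minute runtime
bytes_per_px  = 3              # 8-bit RGB, no chroma subsampling

raw_bytes = width * height * bytes_per_px * fps * runtime_s
print(f"{raw_bytes / 1e12:.2f} TB of raw pixels")  # ~1.74 TB, versus a few GB of MPEG4
```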

Gedankenexperiment

The other end of the spectrum is where the algorithm possesses most of the information and the file very little. How do we construct such a situation?

Well, consider the most degenerate example: an algorithm which contains within it an array of frames which constitute Titanic. This executable would require 0 bits of input. A less extreme example: the executable holds each frame of Titanic, numbered from 1 to (say) 1 million. The input would then merely consist of 1, 2, 3, 4 … 1000000. And when we ran the algorithm on the appropriate input, lo and behold - Titanic!

Absurd, you say. That demonstrates nothing. Well, would you be satisfied if instead it held 4 million quarter-frames? No? Perhaps 16 million sixteenth-frames will be sufficiently un-Titanic-y. If that does not satisfy you, I am willing to go down to the pixel level. It is worth noting that even a pixel-level version of the algorithm can still encode all other movies! It may be awkward to have to express the other movies pixel by pixel (a frame would be a nearly arbitrary sequence like 456788, 67, 89, 189999, 1001 … 20000), but it can be done.

The attentive reader will have already noticed this, but this hypothetical Titanic algorithm precisely demonstrates what I meant about TANSTAAFL and algorithms knowing things; this algorithm knows an incredible amount about Titanic, and so Titanic is favored with an extremely short compressed output - but it nevertheless lets us encode every other movie, just with longer output.
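A toy version of this gedankenexperiment makes the TANSTAAFL accounting explicit (the ‘movies’ are placeholder strings, and the whole thing is only a sketch): inputs the algorithm already knows about compress to two bytes, while every other input passes through at the cost of a single extra byte.

```python
# A 'compressor' whose algorithm contains the movies themselves.
KNOWN_MOVIES = {
    b"<the full bytes of Titanic would go here>": 0,
    b"<the full bytes of some other film>": 1,
}
INDEX_TO_MOVIE = {i: m for m, i in KNOWN_MOVIES.items()}

def compress(data: bytes) -> bytes:
    if data in KNOWN_MOVIES:                      # the favored inputs: two bytes of output
        return b"\x01" + bytes([KNOWN_MOVIES[data]])
    return b"\x00" + data                         # everything else passes through, one byte longer

def decompress(blob: bytes) -> bytes:
    if blob[0] == 1:
        return INDEX_TO_MOVIE[blob[1]]
    return blob[1:]

assert decompress(compress(b"<the full bytes of Titanic would go here>")).startswith(b"<the full")
assert decompress(compress(b"any other movie, pixel by pixel")) == b"any other movie, pixel by pixel"
```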

In Which All Is Made Clear

There is no meaningful & non-arbitrary line to draw. Copyright is an unanalyzable ghost in the machine, which has held up thus far based on fiat and limited human (and machine) capabilities. This rarely came up before because humans are not free to vary how much interpretation our eyes or ears do, and we lack the mental ability to freely choose between extremely spelled-out, detailed text on the one hand and extremely crabbed, elliptical, allusive language on the other; we may try, but the math outdoes us - it can produce program/data pairs which differ by orders of magnitude - and our machines can handle the math.

But now that we have a better understanding of concepts such as information, it is apparent that copyright is no longer sustainable on its logical or moral merits. There are only practical economic reasons for maintaining copyright, and the current copyright regime clearly fails to achieve such aims.



  1. For example, if you grant that copyright exists as a moral right, then you have immediately accepted the abrogation of another moral right, to free speech; this is intrinsically built into the concept of copyright, which is nothing but imposing restrictions on other people’s speech.↩︎

  2. From the dissent in Golan v. Holder (2012):

    Jefferson eventually came to agree with Madison, supporting a limited conferral of monopoly rights but only “as an encouragement to men to pursue ideas which may produce utility.” Letter from Thomas Jefferson to Isaac McPherson (Aug. 13, 1813), in 6 Papers of Thomas Jefferson, at 379, 383 (J. Looney ed. 2009) (emphasis added).

    This utilitarian view of copyrights and patents, embraced by Jefferson and Madison, stands in contrast to the “natural rights” view underlying much of continental European copyright law - a view that the English booksellers promoted in an effort to limit their losses following the enactment of the Statute of Anne and that in part motivated the enactment of some of the colonial statutes. Patterson 158-179, 183-192. Premised on the idea that an author or inventor has an inherent right to the fruits of his labor, it mythically stems from a legendary 6th-century statement of King Diarmed “‘to every cow her calf, and accordingly to every book its copy.’” A. Birrell, Seven Lectures on the Law and History of Copyright in Books 42 (1899). That view, though perhaps reflected in the Court’s opinion, ante, at 30, runs contrary to the more utilitarian views that influenced the writing of our own Constitution’s Copyright Clause. See S. Ricketson, The Berne Convention for the Protection of Literary and Artistic Works: 1886-1986, pp. 5-6 (1987) (The first French copyright laws “placed authors’ rights on a more elevated basis than the Act of Anne had done,” on the understanding that they were “simply according formal recognition to what was already inherent in the ‘very nature of things’”); S. Stewart, International Copyright and Neighbouring Rights 6-7 (2d ed. 1989) (describing the European system of droit d’auteur).

    ↩︎
  3. Mark Helprin’s New York Times editorial “A Great Idea Lives Forever. Shouldn’t Its Copyright?” (Lessig’s review) is one of the standard examples of this sort of view. He begins by claiming that intellectual property is morally equivalent to other forms of property, and hence is as morally protected as the others:

    What if, after you had paid the taxes on earnings with which you built a house, sales taxes on the materials, real estate taxes during your life, and inheritance taxes at your death, the government would eventually commandeer it entirely? This does not happen in our society … to houses. Or to businesses. Were you to have ushered through the many gates of taxation a flour mill, travel agency or newspaper, they would not suffer total confiscation.

    Of course, an author can possess his work in perpetuity - the physical artifact. The government does not inflict ‘total confiscation’ on the author’s manuscript or computer. Rather, what is being confiscated is the ability to call upon the government to enforce, with physical coercion and violence, anywhere within its borders, some practices regarding information. This is a little different from property. Helprin then addresses the economic rationale of copyright, only to immediately appeal to ethical concerns transcending any mere laws or economic gains:

    It is, then, for the public good. But it might also be for the public good were Congress to allow the enslavement of foreign captives and their descendants (this was tried); the seizure of Bill Gates’s bankbook; or the ruthless suppression of Alec Baldwin. You can always make a case for the public interest if you are willing to exclude from common equity those whose rights you seek to abridge. But we don’t operate that way, mostly…Congress is free to extend at will the term of copyright. It last did so in 1998, and should do so again, as far as it can throw. Would it not be just and fair for those who try to extract a living from the uncertain arts of writing and composing to be freed from a form of confiscation not visited upon anyone else? The answer is obvious, and transcends even justice. No good case exists for the inequality of real and intellectual property, because no good case can exist for treating with special disfavor the work of the spirit and the mind.

    From Helprin’s “In Defense of The Book: A reply to the critics of Digital Barbarism”:

    Copyright was the sometimes sparkling, controversial fuse I used as an armature for a much expanded argument in regard to the relationship of man and machine, in which I attacked directly into the assault of modernism, collectivism, militant atheism, utilitarianism, mass conformity, and like things that are poison to the natural pace and requirements of the soul, that reify what those who say there is no soul believe is left, and that, in a headlong rush to fashion man solely after his own conceptions, are succeeding. The greater the success of this tendency, however, the unhappier are its adherents and the more they seek after their unavailing addictions, which, like the explanation for nymphomania, is not surprising. It is especially true in regard to the belief that happiness and salvation can be found in gadgets: i.e., toy worship. I addressed this. I defended property as a moral necessity of liberty. I attempted to reclaim Jefferson from the presumptuous embrace of the copyleft. (Even the docents at Monticello misinterpret Jefferson, failing to recognize his deep and abiding love of the divine order.) And I advanced a proposition to which the critics of copyright have been made allergic by their collectivist education - that the great achievement of Western civilization is the evolution from corporate to individual right, from man defined, privileged, or oppressed by caste, clan, guild, ethnicity, race, sex, and creed, to the individual’s rights and privileges that once were the province only of kings

    But this is its concluding passage:

    ’The new, digital barbarism is, in its language, comportment, thoughtlessness, and obeisance to force and power, very much like the old. And like the old, and every form of tyranny, hard or soft, it is most vulnerable to a bright light shone upon it. To call it for what it is, to examine it while paying no heed to its rich bribes and powerful coercions, to contrast it to what it presumes to replace, is to begin the long fight against it.

    Very clearly, the choice is between the preeminence of the individual or of the collective, of improvisation or of routine, of the soul or of the machine. It is a choice that perhaps you have already made, without knowing it. Or perhaps it has been made for you. But it is always possible to opt in or out, because your affirmations are your own, the court of judgement your mind and heart. These are free, and you are the sovereign, always. Choose.’

    ↩︎
  4. This should be true only of decompression - only with an incompetent compression algorithm will one ever be able to compress a small input to a larger output, because the algorithm’s format can start with a single bit indicating whether the following is compressed or not compressed, and if the output would be larger than the input, not bother compressing at all. That additional bit makes the file larger and so doesn’t raise any issues with the Pigeonhole Principle.↩︎

  5. The most extreme space-time tradeoff I have ever run into, far beyond any practical examples, is a purely theoretical one; from Bennett 1988:

    The tradeoff between conciseness of representation and ease of decoding is illustrated in an extreme form by the information required to solve the halting problem. One standard representation of this information is as an infinite binary sequence K0 (the characteristic sequence of the halting set) whose i’th bit is 0 or 1 according to whether the i’th program halts. This sequence is clearly redundant, because many instances of the halting problem are easily solvable or reducible to other instances. Indeed, K0 is far more redundant than this superficial evidence might indicate. Barzdin [2] showed that this information can be compressed to the logarithm of its original bulk, but no concisely encoded representation of it can be decoded in recursively bounded time.

    Problems fairly high up the complexity hierarchy are already bywords for intractability & futility among programmers. But this decoding of an extremely space-efficient representation of K0 is a process so slow that it doesn’t even fit in any normal complexity class!↩︎

  6. A paraphrase in Lampson 1983’s “Hints for computer system design” of a Dijkstra comment in EWD 709, “My hopes of computing science”:

    The hope is that transformations from a modest library will provide a path from a naïve, inefficient but obviously correct program to a sophisticated efficient solution. I have seen how via program transformations striking gains in efficiency have been obtained by avoiding recomputations of the same intermediate results, even in situations in which this possibility—note that the intermediate results are never part of the original problem statement!—was, at first sight, surprising…I am afraid that great hopes of program transformations can only be based on what seems to me an underestimation of the logical brinkmanship that is required for the justification of really efficient algorithms. It is certainly true, that each program transformation embodies a theorem, but are these the theorems that could contribute significantly to the body of knowledge and understanding that would give us maturity? I doubt, for many of them are too trivial and too much tied to program notation.

    ↩︎