Who wrote the 'Death Note' script?

Internal, external, and stylometric evidence point to the live-action leak being real
anime, statistics, predictions, Haskell, R, Bayes
2009-11-02–2016-04-27 · finished · certainty: likely · importance: 1


I give a history of the 2009 leaked script, discuss internal & external evidence for its realness including stylometrics; and then give a simple step-by-step Bayesian analysis of each point. We finish with high confidence in the script being real, discussion of how this analysis was surprisingly enlightening, and what followup work the analysis suggests would be most valuable.

Beginning in May 20091 and up to October 2009, there appeared online a PDF file (original MediaFire download) claiming to be a script for the live-action American film adaptation of the anime (see Wikipedia or my own little essay for a general description). Such a leak inevitably raises the question: is it genuine? Of course the studio had “no comment”.

I was skeptical at first - how many unproduced screenplays get leaked? I thought it rare even in this Internet age - so I downloaded a copy and read it.

Plot summary

FADE UP: EXT. QUEENS - NYC
A working class neighborhood in the heart of Far Rockaway. Broken down stoops adorn each home while CAR ALARMS and SHOUTING can be heard in the distance as the hard SQUABBLE [sic] LOCALS go about their morning routine.
INT. BEDROOM - ROW HOUSE
LUKE MURRAY, 2, lies in bed, dead to the world, even as the late morning sun fights its way in. Suddenly his SIDEKICK vibrates to life.
He slowly starts to stir as the sidekick works its way off the desk and CRASHES to the floor with a THUNK

The plot is curious. Ryuk and the other shinigami are entirely omitted, as is Misa Amane (the latter might be expected: it’s just one movie). Light Yagami is renamed “Luke Murray”, and now lives in New York City, already in college. The plot is generally simplified.

What is more interesting is the changed emphases. Luke has been given a murdered mother, and much of his efforts go to tracking down the murderer (who, of course, escaped conviction for that murder). The Death Note is unambiguously depicted as a tool for evil, and a malign influence in its own right. There is minimal interest in the idea that Kira might be good. The Japanese aspects are minimized and treated as exotic curios, in the worst Hollywood tradition (Luke goes to a Japanese acquaintance for a translation of the kanji for ‘shinigami’, who, being a primitive native, shudders in fear and flees the sahib… oh, sorry, wrong era. But the description is still accurate.) Cellphones are also mentioned and used a lot (6 times by my count).

The ending shows Luke using the memory-wiping gambit to elude L (who from the script seems much the same, although things not covered by the script, such as casting, will be critically important to making L, L), and finding the hidden message from his old self - but destroying the message before he learns where he had hidden the Death Note. It is implied that Luke has redeemed himself, and L is letting him go. So the ending is classic Hollywood pap.

(A more detailed plot summary can be found on FanFiction.Net.)

The ending indicates someone who doesn’t love DN for its shades-of-gray mentality, its constant ambiguity and complexity. Any DN fan feels deep sympathy for Light, even if they root for L and company. I suspect that if such a fan were to pen a script, the ending would be of the “Light wins everything” variety, and not this hackneyed sop. I know I couldn’t bring myself to write such a thing, even as a parody of Hollywood.

In general, the dialogue is short and cliche. There are no excellent megalomaniac speeches about creating a new world; one can expect a dearth of ominous choral chanting in the movie. Even the veriest tyro of fanfiction could write more DN-like dialogue than this script did. (After looking through many DN fanfictions for the stylometric analysis, I’ve realized this claim is unfair to the script.)

Further, the complexities of ratiocination are largely absent, remaining only in the Lind L. Taylor TV trick of L and the famous eating-chips scene of Light. The tricks are even written incompetently - as written, on the bus, the crucial ID is seen by accident, whereas in DN, Light had specifically written in the revelation of the ID. The moral subtlety of DN is gone; you cannot argue that Luke is a new god like Light. He is only an angry boy with a good heart lashing out, but by the end he has returned to the straight and narrow of conventional morality.

Of this plot summary, Justin Sevakis of Anime News Network comments:

It’s important to keep expectations in check, whenever a film project emerges, because the vast majority of film projects do end up kind of sucking. When an early script of the as-yet unmade American Death Note movie leaked a few years back, I told a close friend of mine about it, and that it was hard to tell if it was actually real or an internet hoax. This friend of mine had directed a feature at Fox, written and doctored many scripts for several studios. He asked me, “Is it any good?” “No,” I replied, “it’s atrocious.” He grinned. “Then it’s real.”

Evidence

The question of realness falls under the honorable rubric of textual criticism, which offers the handy distinction of internal vs. external evidence.

Internal

The first thing I noticed was that the 2 authors claimed on the PDF, “Charley and Vlas Parlapanides”, were correct: they were the 2 brothers of whom it had been quietly announced on 2009-04-30 that they were hired to write it, confirming the rumors of their June 2008 hiring. (And “Charley”? He was born “Charles”, and much coverage uses that name; similarly for “Vlas” vs “Vlasis”. On the other hand, there are some media pieces using the diminutive, most prominently their IMDb entries.)

Another interesting detail is the corporate address quietly listed at the bottom of the page: “WARNER BROS. / 4000 Warner Boulevard / Burbank, California 91522”. That address is widely available on Google if you want to search for it, but one has to know to look for it in the first place, and it would be easier for a faker to simply leave it out.

PDF Metadata

(The exact PDF I used has the SHA-256 hash: 3d0d66be9587018082b41f8a676c90041fa2ee0455571551d266e4ef8613b08a2.)

The second thing I did was take a look at the metadata3:

  • The creator tool checks out: “DynamicPDF v5.0.2 for .NET” is part of a commercial suite, and it was pirated well before April 2009, although I could not figure out when the commercial release was.

  • The date, though, is “Thu 2009-04-09 09:32:47 PM EDT”. Keep in mind, this leak was in May–October 2009, and the original Variety announcement was dated 2009-04-30.

    If one were faking such a script, wouldn’t one through either sheer carelessness & omission or by natural assumption (the Parlapanides signed a contract, the press release went out, and they started work) set the date well after the announcement? Why would you set it close to a month before? Wouldn’t you take pains to show everything is exactly as an outsider would expect it to be? As Jorge Luis Borges writes in “The Argentine Writer and Tradition”:

    Gibbon observes [in the Decline and Fall] that in the Arab book par excellence, the Koran, there are no camels; I believe that if there were ever any doubt as to the authenticity of the Koran, this lack of camels would suffice to prove it Arab. It was written by Mohammed, and Mohammed as an Arab had no reason to know that camels were particularly Arab; they were for him a part of reality, and he had no reason to single them out, while the first thing a forger or tourist or Arab nationalist would do is to bring on the camels - whole caravans of camels on every page; but Mohammed, as an Arab, was unconcerned. He knew he could be Arab without camels.

    Another small point is that the date is in the “EDT” timezone, or Eastern Daylight-savings Time: the Parlapanides have long been based out of New Jersey, which is indeed in EDT. Would a counterfeiter have looked this up and set the timezone exactly right?

Writing/formatting

What of the actual play? Well, it is written like a screenplay, properly formatted, and the scene descriptions are brief but occasionally detailed like the other screenplays I’ve read (such as the Star Wars trilogy’s scripts). It is quite long and detailed. I could easily see a 2-hour movie being filmed from it. There are no red flags: the spelling is uniformly correct, the grammar without issue, there are few or no common amateur errors like confusing “it’s”/“its”, and in general I see nothing in it - speaking as someone who has been paid on occasion to write - which would suggest to me that the author(s) were not of professional caliber or at least unusually skilled amateurs.

The time commitment for a faker is substantial: the script is ~22,000 words, well-edited and formatted, and reasonably polished. For comparison, NaNoWriMo tasks writers with producing 50,000 words of pre-planned, unedited, low-quality content in one month, with a second month (NaNoEdMo) devoted to editing. So the script represents at a minimum a month’s work - and then there’s the editing, reviewing, and formatting (and most amateur writers are not familiar with screenwriting conventions in the first place).

So much for the low-hanging fruit of internal evidence: all suggestive, none damning. A faker could have randomly changed Charles to “Charley”, looked up an appropriate address, edited the metadata, come up with all the Hollywood touches, written the whole damn thing (quite an endeavour since relatively little material is borrowed from DN), and put it online.

Stylometrics

The next step in assessing internal evidence is hardcore: we start running tools on the leaked script to see whether the style is consistent with the Parlapanides as authors. The PDF is 112 images with no text provided; I do not care to transcribe it by hand. So I split the PDF with pdftk to upload both halves to Google Docs (which has an upload size limit) to download its text; and then ran the PDF through a second OCR program to compare - the Google Docs transcript was clearly superior even before I spellchecked it. (In a nasty surprise halfway through the process, I found that for some reason, Google Docs would only OCR the first 10 pages or so of an upload - so I wound up actually uploading 12 split PDFs and recombining them!)

Samples of the Parlapanides’ writing are hard to obtain; the only produced movies from their scripts are the 2000 Everything For A Reason and the 2011 Immortals (so any analysis in 2009 would’ve been difficult). I could not find the script for either available anywhere for download, so I settled for OpenSubtitles.org’s subtitles in SRT format and stripped the timings: grep -v [0-9] Immortals.2011.DVDscr.Xvid-SceneLovers.srt > 2011-parlapanides-immortals.txt (There are no subtitles available for the other movie, it seems.)
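Incidentally, `grep -v [0-9]` drops every line containing a digit, which also throws away any dialogue lines that happen to mention numbers. A more careful sketch of the stripping step, in Python (the sample cue is invented; this is not the command actually used above):

```python
import re

def strip_srt(text: str) -> str:
    """Strip SubRip cue numbers and timing lines, keeping only dialogue.

    Unlike `grep -v [0-9]`, this keeps dialogue lines that happen to
    contain digits (years, counts, etc.)."""
    kept = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # blank separator between cues
        if line.isdigit():
            continue  # cue index, e.g. "42"
        if re.match(r"\d\d:\d\d:\d\d,\d\d\d --> ", line):
            continue  # timing line
        kept.append(line)
    return "\n".join(kept)

# Hypothetical two-cue sample to demonstrate the difference:
sample = """1
00:00:01,000 --> 00:00:03,500
My name is Luke.

2
00:00:04,000 --> 00:00:06,000
I was born in 1990."""
print(strip_srt(sample))
```

Note that the second dialogue line, containing “1990”, survives here but would have been deleted by the `grep` one-liner.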

Samples of fanfiction are easy to acquire. FanFiction.Net’s Death Note section (24,246 fanfics): sort by number of favoriting users, completed, in English, and >5000 words. This yields 2,028 results but offers no way to filter by fanfictions written in a screenplay or script style, and no entry in the first 5 pages mentions “script” or “screenplay”, so it is a dead end. The dedicated play/musical section lists nothing for “Death Note”. Googling "Death Note" (script OR screenplay OR teleplay) -skit site:fanfiction.net/s/ offers 8,990 hits; unfortunately, the overwhelming majority are either irrelevant (eg. using “script” in the sense of cursive writing) or too short or too low-quality to make a plausible comparison. (I also submitted a Reddit request, which yielded no suggestions.) The final selection:

As a control-control, I selected some fanfictions that I knew to be of higher quality:

The fanfictions were converted to text using the now-defunct Web version of FanFictionDownloader.

With 10 fanfictions, it makes sense to compare with 10 real movie scripts; if we didn’t include real movie scripts, one would wonder whether all the stylometrics was doing was putting one script-formatted text together with another. So in total, this worry is diluted by 3 factors (in descending order):

  1. the use of 10 real movie scripts (as just discussed)
  2. the use of 10 fanfictions resembling movie scripts to various degrees (previous)
  3. the known Parlapanides work (the Immortals subtitles) being pure dialogue, including no action or scene description which the stylometrics could “pick up on”

The scripts, drawn from a collection (grabbing one I knew of, and then selecting the remaining 9 from the first movies alphabetically to have working .txt links as a quasi-random sample):

For the actual analysis, we use the stylo computational-stylistics package for R; after downloading stylo, the analysis is pretty easy:

install.packages("tcltk2")   # GUI dependency used by stylo's dialog boxes
source("stylo_0-4-6_utf.r")  # load the stylo script & launch the analysis

The settings4 are to: run a cluster analysis which uses the entire corpus, assumes English, and looks at the difference between files in their use of “most popular words” (starting at 1 word & maxing out at 1,000 different words, because the entire Immortals subs are only ~4,000 words of dialogue), where difference is a simple Euclidean distance.
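To make the “most popular words + Euclidean distance” measure concrete, here is a toy sketch of the idea in Python (the three-text mini-corpus is invented, and stylo’s real implementation differs in many details - this only illustrates the underlying arithmetic):

```python
from collections import Counter
import math

def mfw_vector(text, vocab):
    """Relative frequency of each vocabulary word in `text`."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in vocab]

def euclidean(u, v):
    """Plain Euclidean distance between two frequency vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Invented mini-corpus: two script-like texts and one prose-like text.
corpus = {
    "script_A": "the note the death the god of death wrote the name",
    "script_B": "the death the note the shinigami of death the name",
    "fanfic_C": "light smiled and ryuk laughed and light wrote and wrote",
}

# "Most popular words": the top words pooled over the whole corpus.
pooled = Counter(" ".join(corpus.values()).lower().split())
vocab = [w for w, _ in pooled.most_common(5)]

vecs = {name: mfw_vector(text, vocab) for name, text in corpus.items()}
d_ab = euclidean(vecs["script_A"], vecs["script_B"])
d_ac = euclidean(vecs["script_A"], vecs["fanfic_C"])
print(d_ab < d_ac)  # stylistically similar texts sit closer in MFW space
```

A clustering algorithm then simply groups together the files whose vectors are nearest, which is what the dendrogram below displays.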

The script PDF, full corpus, intermediate files, and stylo source code are available as a tarball.

The cluster analysis of the 30-strong corpus.

The graphed results are unsurprising:

  1. The movies cluster together in the top third

  2. The DN fanfics are also a very distinct cluster at the bottom

  3. In the middle, splitting the difference (which actually makes sense if they are indeed more competently or “professionally” written), are the “good” fanfics I selected. In particular, the fanfics by Eliezer Yudkowsky are generally close together - vindicating the basic idea of inferring authorship through similar word choice.

  4. Exactly as expected, the Immortals subs and the leaked DN script are as closely joined as possible, and they practically form their own little cluster within the movie scripts.

    This is important because it’s evidence for 2 different questions: whether the known Parlapanides work is similar to the leaked script, and whether the leaked script is similar to any fanfictions rather than movies. We can answer the latter question by noting that it is grouped far away from any fanfiction (the only fanfiction in the cluster, the “Three Characters” fanfiction, is very short and formalized), even though Eliezer Yudkowsky (himself a published author) wrote several of the fanfictions and one of them (Harry Potter and the Methods of Rationality) is intended for publication and perhaps even a Hugo award.

That the analysis spat out the files together is evidence: there were 30 files in the corpus, so if we generated 15 pairs of files at random, there’s just a 1⁄15 (~6.6%) chance of those two winding up together. The tree does not generate purely pairs of files, so the actual chance is much lower than 6.6% and so the evidence is stronger than it looks; but we’ll stick with it in the spirit of conservatism and weakening our arguments.
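The pairing probability can also be checked by simulation (a Python sketch): randomly shuffling 30 files into 15 consecutive pairs, two specific files end up together only about 3-4% of the time (exactly 1⁄29), consistent with the remark above that the true chance is lower than the conservative 6.6% figure:

```python
import random

random.seed(0)  # for reproducibility

def same_pair_rate(n_items: int, trials: int = 50_000) -> float:
    """Empirical chance that items 0 and 1 land in the same pair when
    n_items are shuffled and grouped into consecutive pairs."""
    hits = 0
    items = list(range(n_items))
    for _ in range(trials):
        random.shuffle(items)
        pos = {item: i for i, item in enumerate(items)}
        if pos[0] // 2 == pos[1] // 2:  # slots (0,1), (2,3), ... are the pairs
            hits += 1
    return hits / trials

p = same_pair_rate(30)
print(round(p, 3))  # ~0.034, i.e. about 1/29
```

Since 1⁄29 < 1⁄15, using 6.6% only weakens the argument further, as intended.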

External

Dating

But is there any external evidence? Well, the timeline is right: hired around June 2008, delivered a script in early April 2009, official announcement in late April 2009. How long should delivery take? The interval seems plausible: figure about 2 months for both brothers to read through the DN manga or watch the anime twice, clear up their other commitments, a month to brainstorm, 3 months to write the first draft, a month to edit it up and run it by the studio, and we’re at 7 months or around February 2009. That leaves a good 6 months for it to float around offices and get leaked, and then come to the wider attention of the Internet.

Credit

Given this effort and the mild news coverage of it, one might expect a faker to take considerable pride in his work and want to claim credit at some point for a successful hoax. But as of January 2013, I am unaware of anyone even alluding or hinting that they did it.

Official statements

Additional evidence comes from the January 2011 announcement by Warner Bros that the new director was one Shane Black, and the script was now being written by Anthony Bagarozzi and Charles Mondry (with, presumably, the previous script tossed):

“It’s my favorite manga, I was just struck by its unique and brilliant sensibility,” Black said. “What we want to do is take it back to that manga, and make it closer to what is so complex and truthful about the spirituality of the story, versus taking the concept and trying to copy it as an American thriller. Jeff Robinov and Greg Silverman liked that.” Black’s repped by WME and GreenLit Creative.

ANN quoted Black at a convention panel:

However, Black added that the project was in jeopardy because the studio initially wanted to lose “the demon [Ryuk]. [They] don’t want the kid to be evil… They just kept qualifying it until it ceased to exist.” Black said that “the creation of a villain, the downward spiral” of the main character Light has been restored in the script, and added that this is what the film should be about.

…According to the director of the upcoming film, the studios initially wanted to give the main character Light Yagami a new background story to explain his “downward spiral” as a villain. The new background would have had a friend of Light murdered when he was young. When Light obtains the Death Note - a notebook with which he can put people to death by writing their names - he uses it to seek vengeance. However, Black emphasized that he opposed this background change and the suggested removal of the Shinigami (Gods of Death), and added that neither change is in his planned version.

Black’s comments line up well with the leaked script: Ryuk is indeed omitted entirely, Light is indeed mostly good and redeemed, Light does have a backstory justifying his vengeance, and so on. The only discordant detail is that in the leaked script, it was his mother murdered and not “a friend”.

Analysis

We could leave matters there with a bald statement that the evidence is “compelling”, but Richard Carrier recently offered in Proving History: Bayes’s Theorem and the Quest for the Historical Jesus (2012; 2008 handout, LW review) a defense of how matters of history and authorship could be more rigorously investigated with some simple statistical thinking, and there’s no reason we cannot try to give some rough numbers to each previous piece of evidence. Even if we can only agree on whether a piece of evidence is for or against the hypothesis of the Parlapanides’ authorship, and not how strong a piece of evidence it is, the analysis will be useful in demonstrating how converging weak lines of reasoning can yield a strong conclusion.

We’ll principally use Bayes’s theorem, no math more advanced than multiplication or division, common sense/Fermi estimates, the Internet, and the strong assumption of conditional independence (see the appendix). Despite these severe restrictions (what, no fancier statistical machinery at all? You call this statistics‽), we’ll get some answers anyway.

Priors

The first piece of evidence is that the leak exists in the first place.

Extraordinary claims require extraordinary evidence, but ordinary claims require only ordinary evidence: a claim to have uncovered Hitler’s diaries 40 years after his death is a remarkable discovery, and so it will take more evidence before we believe we have the private thoughts of the Fuhrer than if one finds what purports to be one’s sister’s diary in the attic. The former is a unique historic event, as most diaries are found quickly, few world leaders keep diaries (as they are busy world-leading), and there is a large financial incentive (9 million Deutschmarks or ~$13.6m 2012 dollars) to fake such diaries (even in 60 volumes). The latter is not terribly unusual, as many girls keep diaries and then lose track of them as adults, with fakes being almost unheard of.

How many leaked scripts end up being hoaxes or fakes? What is the base rate?

Leaks seem to be common in general. Just googling “leaked script”, I see recent incidents for Robocop, Teenage Mutant Ninja Turtles, Mass Effect 3 (confirmed by BioWare to have been real), Les Misérables, Jurassic Park IV (concept art), Batman5, and Halo 4. A blog post makes itself useful by rounding up 10 old leaks and assessing how they panned out: 4 turned out to be fakes, 5 real, and 1 (for The Master) unsure. Assuming the worst, this gives us 5⁄10 real, or 50% odds that a randomly selected leak would be real. Given the number of “draft” scripts on IMSDb, 50% may be low. But we will go with it.

Internal evidence

Authorship

How would we estimate the evidence of “Charley Parlapanides”? The names of the writers could either be:

  1. present and wrong

    Very strong evidence it is fake: who puts their own name down wrong? This would be overwhelming evidence, but we don’t have it, so we will drop this possibility from consideration and consider the remaining possibilities:

  2. present and right

    Evidence it is real. Of the 10 scripts used in the stylometric analysis, 9⁄10 included right authorship information.

  3. not present

    Of the 4 known fake scripts mentioned previously, only 2 included authorship information.

Given this information, how does the presence of right authorship influence our prior belief of 50%?

Let a be “is real” and b be “has correct authorship”. We want to know the probability of a given the observation “correct authorship”. A version of Bayes’s theorem (stolen from “An Intuitive Explanation of Bayes’s Theorem”; you can see other applications in my modafinil essay; a nice visualization is given by Oscar Bonilla, or one could watch distributions be updated):

P(a|b) = (P(b|a) × P(a)) ⁄ (P(b|a) × P(a) + P(b|¬a) × P(¬a))

If you look, the right-hand side of that equation has exactly 4 pieces in its puzzle:

  1. P(a): This is something we already know, “probability of being real”. This is the base rate we already estimated at 50% or 0.5.

  2. P(¬a): This is the negation of the previous. What is the negation of 50%, its contrary? 50%.

  3. P(b|a): Remember, we read the pipe notation backwards, so this is ‘the probability that a real script (a) will include authorship (b)’. We said that 9⁄10 of good scripts include authorship, so this is 90% or 0.9. (One way to compensate for the small sample size of 10 scripts would be to use Laplace’s rule of succession, (9+1)⁄(10+2), which would yield ~0.83.)

  4. P(b|¬a): Finally, we have “the probability that a fake script will include authorship”. We looked at 4 fake scripts and 2 included authorship, which is another 50% or 0.5.

To put all these definitions in a list:

  1. a = is real
  2. b = has authorship
  3. P(a) = probability of being real = 50% = 0.50
  4. P(¬a) = probability of being not real = 50% = 0.50
  5. P(b|a) = probability a real script will include authorship = 90% = 0.9
  6. P(b|¬a) = probability a fake script will include authorship = 50% = 0.5

We substitute into the original equation:

P(a|b) = (0.9 × 0.5) ⁄ (0.9 × 0.5 + 0.5 × 0.5) = 0.45 ⁄ 0.70 ≈ 0.643

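The substitution is mechanical enough to script; a minimal Python helper (the function name is mine) that will cover every update of this form:

```python
def posterior(prior: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """Bayes's theorem for a binary hypothesis a and observed evidence b:
    P(a|b) = P(b|a)P(a) / (P(b|a)P(a) + P(b|not-a)P(not-a))."""
    numerator = p_b_given_a * prior
    return numerator / (numerator + p_b_given_not_a * (1 - prior))

# Authorship evidence: prior 50%, P(b|a) = 0.9, P(b|not-a) = 0.5.
print(round(posterior(0.5, 0.9, 0.5), 3))  # 0.643
```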
San­ity checks:

  1. Authorship is evidence for it being real; did we increase our confidence that the script is real?

    Yes, because 64.3% > 50%. So we moved in the right direction.

  2. Did we move the right amount?

    Well, the fake scripts have a 50% rate and the real scripts have 90%; since this is the only evidence we’ve taken into account so far, our first calculation shouldn’t move us “very far”, whatever that means, since not all real scripts have authorship and plenty of fake ones are careful to include them. (Imagine a world where 80% of fakes include authorship: authorship would become even weaker evidence; and when fakes hit 90% inclusion, authorship would be so weak as to be no evidence at all, since the fakes and reals look exactly the same.) The inclusion of authorship does not seem like tremendous evidence, so after taking authorship into account, we should be closer to our original prior of 50% than to any extreme certainty like 90%.

    Are we? Our posterior of 64% doesn’t strike me as a big shift from 50%, so we conclude that this second sanity check is satisfied. Good!

A final calculation: the probability that “a test gives a true positive” divided by “the probability that a test gives a false positive” (P(b|a) ⁄ P(b|¬a)) is the “likelihood ratio” of that test (see also the odds ratio). A likelihood ratio of 1 indicates that our test is useless, as it is equally likely for real scripts and fake scripts alike; <1 indicates it is evidence against being real, and >1 evidence for being real. Likelihood ratios will be useful later, so we’ll calculate them too as we go along. So:

P(b|a) ⁄ P(b|¬a) = 0.9 ⁄ 0.5 = 1.8

(As expected of evidence for the script being real, the likelihood ratio > 1.)

Author spelling

I also remarked that the use of “Charley” was interesting, since there were multiple ways to spell his name. Does this spelling serve as evidence for being real? It turns out: no! It is either irrelevant or evidence against.

To use “Charley” as evidence, we need to know what the real man would be more or less likely to write, and what fakes would be more or less likely to write. I have been unable to find out the “ground truth” here; all 3 variants are used in Google:

  • “Charles”: 11,800 hits
  • “Charley”: 182,000 hits
  • “Charlie”: 1,440 hits

I suspect the truth is likely “Charles”, since his Twitter account uses “Charles” (and likewise, Vlas is under Vlasis); his IMDb page lists 5 credits “as Charles Parlapanides” (but nevertheless calls him “Charley”).

What question would we ask here? We could put it as: if we make the assumption that the real man has an even chance of using either “Charles” or “Charlie”/“Charley”, while a fake would choose based on the Google hits (unaware of the variants), how would we change our belief upon observing the script’s use of “Charley”?

  1. a = is real
  2. b = name is spelled “Charley”
  3. P(a) = probability of being real = 64% = 0.64
  4. P(¬a) = probability of being not real = 1 - 0.64 = 0.36
  5. P(b|a) = probability a real script will include “Charley” = 50% (“even chance”) = 0.5
  6. P(b|¬a) = probability a fake script will include “Charley” = 182,000 ⁄ (182,000 + 11,800 + 1,440) = 0.93

Substitute:

P(a|b) = (0.5 × 0.64) ⁄ (0.5 × 0.64 + 0.93 × 0.36) = 0.32 ⁄ 0.655 ≈ 0.489

That really hurt the probability, since by assumption using the popular spelling is so heavily correlated with a fake.

Likelihood ratio:

P(b|a) ⁄ P(b|¬a) = 0.5 ⁄ 0.93 ≈ 0.54

(We realized the name variant was evidence against, and accordingly, the likelihood ratio is < 1.)
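An equivalent way to chain these first two updates is to work in odds form and multiply likelihood ratios; a Python sketch using the numbers so far (the helper name is mine):

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Apply a likelihood ratio to a prior probability via odds form:
    posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = 0.5                    # base rate for a leaked script being real
p = update(p, 0.9 / 0.5)   # correct authorship present: LR = 1.8
p = update(p, 0.5 / 0.93)  # "Charley" spelling: LR ~ 0.54
print(round(p, 2))  # 0.49 - matching the step-by-step calculation
```

Multiplying likelihood ratios this way is exactly why they will be worth tracking for each piece of evidence: the order of the updates doesn’t matter, only the product.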

Corporate address

Googling “Warner Brothers address” turns up the address used in the PDF as the second hit (it seems to be the official address of all Warner Bros. operations), so we can assume that any faker could find it - if they thought to include it. This question is simply: is a corporate address included? Checking, we see addresses are rare: of the real scripts, 1⁄10; of the fakes, 0⁄4.

  1. a = is real
  2. b = has address
  3. P(a) = probability of being real = 0.49
  4. P(¬a) = probability of being not real = 1 - 0.49 = 0.51
  5. P(b|a) = probability a real script will include an address = 1⁄10; we apply Laplace’s rule of succession to get (1+1)⁄(10+2) = 1⁄6 ≈ 0.16
  6. P(b|¬a) = probability a fake script will include an address = 0⁄4; we apply Laplace (as before) to get (0+1)⁄(4+2) = 1⁄6 ≈ 0.16

Substitute:

P(a|b) = ((1⁄6) × 0.49) ⁄ ((1⁄6) × 0.49 + (1⁄6) × 0.51) = 0.49

0.49? But that was what we started with! It turns out that we are working with such a small sample that when we correct with Laplace’s law, we learn that there are so few instances of screenplays floating around with corporate addresses in them that we can’t actually infer much of anything from it. Does the likelihood ratio agree?

P(b|a) ⁄ P(b|¬a) = (1⁄6) ⁄ (1⁄6) = 1

(Here we see the final category of likelihood ratios: neither greater than nor less than 1, but equal to 1 - thus neither evidence for nor against.)
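Laplace’s rule of succession used above is just (k+1)⁄(n+2); in Python with exact fractions (variable names mine), which makes the wash obvious:

```python
from fractions import Fraction

def laplace(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: estimate (k+1)/(n+2)."""
    return Fraction(successes + 1, trials + 2)

p_real = laplace(1, 10)  # 1/10 real scripts had an address -> 2/12 = 1/6
p_fake = laplace(0, 4)   # 0/4 fake scripts had an address  -> 1/6
print(p_real, p_fake, p_real / p_fake)  # 1/6 1/6 1
```

With a likelihood ratio of exactly 1, the corporate address moves the posterior nowhere, just as the hand calculation found.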

PDF date

We noted the curious fact that while the Parlapanides’ work on the script was announced on 30 April, the PDF claims a date of 9 April.

I did not expect this inversion, but thinking about it in retrospect, this seems consistent with the script being real: the studio commissioned them to write a script, they turned in material, the studio liked it, and the official word went out. (Presumably had the studio disliked it, they would’ve been quietly paid a small sum and a new writer tried.) An ordinary person like me, however, would date any fake version to after the announcement, reasoning that it would be “safe” to date any script to after the announcement.

So we want to express that this inversion is evidence for the script being real, and that frauds would be dated as one would normally expect. If I were to set out to make a fraud, I don’t think I would tinker that way with the PDF date even once out of 20 times, but let’s be very conservative and say a mere 75% of fake scripts would have a normal date (that is: 25% of the time, the faker would be clever enough to invert the dates); and let’s say there was a 50% chance that the real script would be inverted (since we don’t know the real frequency of inversion). The core assumption here is that inversion is more likely for real scripts than fake scripts, an assumption I feel is highly likely (what faker would dare such a blatant inconsistency? It’s Gibbon & the camels again, but in a stronger form.) We know how to run the numbers now:

  1. a = is real
  2. b = the date is inverted
  3. P(a) = probability of being real = 0.49
  4. P(¬a) = probability of being not real = 1 - 0.49 = 0.51
  5. P(b|a) = probability a real script will be inverted = 50% = 0.5
  6. P(b|¬a) = probability a fake script will be inverted = 25% = 0.25

Substitute:

P(a|b) = (0.5 × 0.49) ⁄ (0.5 × 0.49 + 0.25 × 0.51) = 0.245 ⁄ 0.3725 ≈ 0.658

A jump from 49% to 65.8% is a respectable jump for such a weird date. Then the likelihood ratio is:

P(b|a) ⁄ P(b|¬a) = 0.5 ⁄ 0.25 = 2

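Checked in Python with the same two-line pattern as before (a sketch; the helper is mine):

```python
def posterior(prior: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """P(a|b) for binary hypothesis a and evidence b, by Bayes's theorem."""
    numerator = p_b_given_a * prior
    return numerator / (numerator + p_b_given_not_a * (1 - prior))

p = posterior(0.49, 0.50, 0.25)  # inverted-date evidence
lr = 0.50 / 0.25                 # likelihood ratio of the date test
print(round(p, 3), lr)  # 0.658 2.0
```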
PDF creator tool

The creator tool listed in the metadata was released and pirated before the creation date. It may not seem informative - how could the PDF be created before the PDF generator was written? - but it actually is: it tells us that this was not a careless fraud where the person installed the latest & greatest PDF generator, wrote a script, edited the date, and didn’t realize that the creating generator & version number was included as well. If the version number had been of a program released anywhere between April and October6 2009, then this would be a glaring red flag warning that the PDF was fake! In all real PDFs, the generator tool would be before the file creation date; but in many fake PDFs, this would be inverted. The case of interest is where the fake author installs a new program between April and October, and then fails to notice the revealing metadata (a conjunction).

  1. a = is real

  2. b = date is not inverted

  3. P(a) = probability of being real = 0.658

  4. P(¬a) = probability of being not real = 1 - 0.658 = 0.342

  5. P(b|a) = probability a real script will include a non-inverted date = 0.99 (why not 100%? Well, shit happens.)

  6. P(b|¬a) = probability a fake script will include a non-inverted date = 1 - 0.0415 = 0.9585

    This is a hard estimate. Let’s think about the opposite: what is the chance that a faker will invert the date? What leads to that happening? Suppose everyone replaces their computer every 5 years; what is the chance this replacement (and ensuing upgrade of all software) happens in the roughly 6-month window between April and October 2009? Well, it’s about 6.2⁄60 ≈ 10.4%. What’s the chance they then fail to notice? Unless they’re really skilled I’d expect them to usually miss it, but let’s be conservative and say they usually notice it and fix it, and have only a 40% chance of missing it. An inversion requires both the upgrade (~10.4%) and then a miss (40%) for a final chance of ~4.15%! This is so small that we know in advance that it’s not going to make a big difference, and may not have been worth thinking about.

And indeed, 0.665 is not very much larger than 0.658.

Likelihood ratio: 0.99 ⁄ 0.9585 ≈ 1.03

(As expected of such weak evi­dence, it’s hardly dif­fer­ent from 1.)
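Every test in this essay applies the same update rule, so it may help to see it once as code. A minimal sketch in Haskell (the function name `bayes` is mine, not anything from the original analysis):

```haskell
-- Posterior probability via Bayes's theorem, as used in each test:
-- prior = P(a), pba = P(b|a), pbna = P(b|not-a).
bayes :: Double -> Double -> Double -> Double
bayes prior pba pbna = (pba * prior) / (pba * prior + pbna * (1 - prior))
```

Plugging in the PDF-creator numbers, `bayes 0.658 0.99 0.9585` gives ≈0.665, agreeing with the update above.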

PDF timezone

The metadata date being set in the right timezone is another piece of evidence: a fraud could live pretty much anywhere in the world, in which case his computer would stamp the PDF with the wrong timezone and he'd have to remember to manually set it to the "right" timezone, while the Parlapanides live in New Jersey and would likely have their PDF timezone set appropriately (even if they travel, as they must, their computers may not go with them; or if the computers go with them, they may not change their timezone settings; or if the computers go with them and change their timezone, they may not create the PDF during the trip). So this definitely seems like at least weak evidence.

How to estimate the chance that the fake author would live in a different timezone? If the fraud lived in the US (as is overwhelmingly likely, and as I'll assume for the sake of conservatism), the US spans something like 6 distinct timezones. Timezones split roughly along state lines, so one can estimate the population per timezone; stealing one such estimate:

  1. CST: 85385031

  2. MST: 18715536

  3. PST: 48739504

  4. thus, non-EST: 152840071

  5. EST: 141631478

  6. thus, total pop­u­la­tion: 152840071+141631478=294471549

    The US pop­u­la­tion is more like 312 mil­lion than 294 mil­lion but the dif­fer­ence isn’t impor­tant: what is impor­tant is the size of EST com­pared to the rest of the pop­u­la­tion.

So, the prob­lem setup becomes:

  1. a = is real
  2. b = is EDT
  3. P(a) = probability of being real = 0.665
  4. P(¬a) = probability of being not real = 1 − 0.665 = 0.335
  5. P(b|a) = probability a real script will be in EDT = 99% (shit happens) = 0.99
  6. P(b|¬a) = probability a fake script will be in EDT or else the faker will remember to edit the timezone = 141631478⁄294471549 + 0.4 (we assume a 0.4 chance of editing, comparable to the miss probability in the PDF creator test) = 0.481 + 0.4 = 0.881

Substitute: P(a|b) = (0.99 × 0.665) ⁄ (0.99 × 0.665 + 0.881 × 0.335) = 0.658 ⁄ 0.953 ≈ 0.691

This would have been a much bigger update than 2.6% (from 66.5% to 69.1%) if the evidence of the timezone hadn't been neutered by our assumption that most fakers would be clever enough to edit it. But anyway, the likelihood ratio: 0.99 ⁄ 0.881 ≈ 1.12

One complicating factor I noticed after writing this section is that Charley Parlapanides's Twitter page states he lives in Los Angeles, California - not New Jersey. Could they have been living in Los Angeles 2008-2009, and the PDF timezone actually be strong evidence against being real? Maybe. My best evidence indicates the move didn't happen until after 2011.7 If the effect of a <2009 move to Los Angeles were simply to render this argument useless - a likelihood ratio equal to 1 - it would not bother me too much, because the likelihood ratio is 'just' 1.12, and an error here is small compared to errors elsewhere, like in the stylometrics analysis. But more realistically, if this argument were wrong, the right argument would likely flip the likelihood ratio to something more like 0.5, and the difference between 1.12 and 0.5 is worth worrying about.

So far so good? No! Vincent Yu points out something interesting: my PDF viewer, Evince, may display timestamps in the user's timezone, not the actual timezone of creation. Is this true? Is Evince misleading me when it gives the timezone as EDT (the timezone I live in)? We appeal to pdftk again: the exact raw date was "D:20090409213247Z". The PDF spec docs explain the datestamp, particularly the puzzling final character 'Z':

CreationDate - string, optional, the date and time the document was created, in the following form: "D:YYYYMMDDHHmmSSOHH'mm'", where: YYYY is the year. MM is the month. DD is the day (01-31)… The apostrophe character (') after HH and mm is part of the syntax. All fields after the year are optional. (The prefix D:, although also optional, is strongly recommended.) The default values for MM and DD are both 01; all other numerical fields default to zero values. A plus sign (+) as the value of the O field signifies that local time is later than UT, a minus sign (−) that local time is earlier than UT, and the letter Z that local time is equal to UT. If no UT information is specified, the relationship of the specified time to UT is considered to be unknown. Whether or not the time zone is known, the rest of the date should be specified in local time.

The "Z" says the input date was in UT, and UT is for practical purposes a synonym for GMT - so this PDF was created in Europe/England? No; a little more sleuthing turns up that the PDF creator software, DynamicPDF, has an API in which the CreationDate is defined to be a java.util.Date object, which doesn't deal with timezones but instead defaults to UT/GMT. So the timezone doesn't exist in the metadata; it never existed; and it never could exist in data produced by this PDF creator software.

We could try to res­cue the time­zone argu­ment by shift­ing the argu­ment to point­ing out that the PDF cre­ator soft­ware could have been a type which cor­rectly stored the orig­i­nal time­zone in the meta­data, which could then pro­vide evi­dence against being real if the time­zone were not EDT, so we could regard this as a very weak piece of evi­dence in favor of being real - a pos­si­ble coun­ter­point turned out to not exist - but this is now so ten­u­ous it is bet­ter to drop the argu­ment entire­ly.

Writing/formatting

We could iso­late mul­ti­ple tests here from my freeform obser­va­tions:

  1. length

    Some of the fake scripts are very long and com­plete; I remarked in an ear­lier foot­note that the fake Bat­man script is actu­ally too long for a movie. One of the fake scripts was a sin­gle leaked page, mak­ing for a 3⁄4 rate.

  2. for­mat­ting

    The sam­ple of real scripts has been refor­mat­ted for Inter­net dis­tri­b­u­tion and does­n’t include the “orig­i­nal” PDFs or rep­re­sen­ta­tions there­of; worse, the 4 or 5 fake scripts are all prop­erly for­mat­ted. With the exist­ing cor­pus, this test turns out to be use­less!

    With the dubi­ous ben­e­fit of hind­sight, we might claim this is not a sur­prise: after all, any script with­out for­mat­ting would be “obvi­ously” a fake and one would never hear about it. One only hears about plau­si­ble fakes which pos­sess at least the basic sur­face fea­tures of a real script.

  3. writ­ing qual­ity (spelling & gram­mar)

    In addi­tion, the fake scripts are well-writ­ten. Like for­mat­ting, this turns out to be a bad indi­ca­tor; some­one writ­ing a movie-length script seems to also be the sort of per­son who can write well. The descrip­tion of one of the fakes is inter­est­ing in this regard:

    This is prob­a­bly one of the most elab­o­rate ruses on the list. The script was writ­ten by 27-year-old Los Ange­les writer Justin Beck­er, and as far as we can tell, he did it for laughs. Becker trav­eled across the West Coast, plant­ing his scripts all over book­stores, hop­ing they would get dis­cov­ered. He basi­cally thought, “it would be funny to find out that a movie had been writ­ten, and it was very seri­ous and pre­ten­tious and polit­i­cal, and it had been shelved because of 9/11” (SF Weekly), which is explained in the pref­ace of the script and by the fact that the screen­play was sup­pos­edly writ­ten one day before Sep­tem­ber 11th, 2001 and con­tained George W. Bush in the sto­ry.

This leaves just length as a test:

  1. a = is real
  2. b = is full-length
  3. P(a) = probability of being real = 0.666 (resuming from the PDF creator posterior, since the timezone argument was dropped)
  4. P(¬a) = probability of being not real = 1 − 0.666 = 0.334
  5. P(b|a) = probability a real script will be full-length = 99% (shit happens) = 0.99
  6. P(b|¬a) = probability a fake script will be full-length = 3⁄4, by Laplace, = 0.66

Substitute: P(a|b) = (0.99 × 0.666) ⁄ (0.99 × 0.666 + 0.66 × 0.334) = 0.659 ⁄ 0.880 ≈ 0.749

Likelihood ratio: 0.99 ⁄ 0.66 = 1.5
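The "by Laplace" figure above is Laplace's rule of succession, which can be sketched as a one-liner (a hypothetical helper, not code from the essay):

```haskell
-- Laplace's rule of succession: after s successes in n trials, estimate the
-- probability of success as (s+1)/(n+2) instead of the raw frequency s/n.
laplace :: Double -> Double -> Double
laplace s n = (s + 1) / (n + 2)
```

`laplace 3 4` is 4⁄6 ≈ 0.66, the figure used for P(b|¬a) above; with no data at all, `laplace 0 0` gives the default 50%.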

Plot

The earlier plot summary conveyed the "Hollywood" feel of the plot, but unfortunately it's hard to judge from localization: a DN fan attempting to imitate a Hollywood-targeted script might rename Light to "Luke", might simplify the plot considerably (there is precedent in the Japanese live-action movie adaptations), might set it in NYC (Tokyo is out of the question, as Hollywood movies are never set overseas unless the plot calls for it specifically, and NYC seems to be the default location of crime-related movies & TV shows), and so on.

Some of the plot changes make more sense after reading the biography of the Parlapanides brothers: they are Greek and live in New Jersey. Changing "Light" to "Luke" is a very clever touch in localizing the character: besides the visual resemblance of being short one-syllable names starting with "L", apparently "Luke" is a form of "Lucius", better known as "Lucifer", and the Latin root lux literally meant "light"! (And indeed, Luke seems to still be a common Greek name, perhaps thanks to the Gospel of Luke.) NYC is the default location, but it's even more natural when you are 2 screenwriters who grew up and live in New Jersey. (I grew up on Long Island, and for me too, NYC is simply "the city".)

More impor­tant­ly, the plot includes sev­eral idiot-ball-re­lated changes that I think any DN fan com­pe­tent enough to write this fake would never have made, even in the name of local­iza­tion and Hol­ly­wood­iza­tion: the incom­pe­tent bus ID trick comes to mind.

Unfor­tu­nate­ly, in both respects, I can’t assign defen­si­ble num­bers to my inter­pre­ta­tion for the sim­ple rea­son that any rea­son­able dif­fer­ences in prob­a­bil­i­ties leads to a ridicu­lously strong con­clu­sion!

For example, if I gave 90% (fakes) vs 95% (real) for the individual localization points (for each of name, simplification, location), and then 25% (fakes) vs 50% (real) for the 2 instances of incompetence, this gives us a likelihood ratio of:

(0.95 ⁄ 0.90)^3 × (0.50 ⁄ 0.25)^2 ≈ 1.18 × 4 ≈ 4.7

(Here we see an advan­tage of like­li­hood ratios: they’re easy to cal­cu­late and give us an indi­ca­tor of argu­ment strength with­out hav­ing to run through 5 dif­fer­ent iter­a­tions of Bayes’s the­o­rem! This is some­thing one learns to appre­ci­ate after a few cal­cu­la­tion­s.)

A like­li­hood ratio of 4.7 would be the sin­gle strongest set of argu­ments we have seen yet, and even stronger than the sty­lo­met­ric like­li­hood ratio in the next sec­tion. If we used this result, it would be solely respon­si­ble for a very large amount of the con­clu­sion. A critic of the final con­clu­sion would be right to won­der if the con­clu­sion rested solely on this dubi­ous and unusu­ally sub­jec­tive sec­tion, so we will omit it (with the under­stand­ing that as usu­al, we are being con­ser­v­a­tive and essen­tially try­ing to cal­cu­late a lower bound to com­pen­sate for arro­gance or overly favor­able assump­tions else­where).

Stylometrics

The stylometric result is straightforward: if a fake script gets paired up randomly, then it had just a 1⁄15 chance of pairing up with Immortals. Even if we restrict the matches to the other movie scripts, there were 10 movie scripts and 2 oddballs for 12 total, or 6 pairings, giving a 1⁄6 chance of randomly pairing up with Immortals. The real question is: if the script is real, what chance does it have of pairing up with something else by the same authors? I included 4 fanfictions by the same author (Eliezer Yudkowsky), and 2 wound up pairing (with the other 2 in the same overall cluster but more distant from the pair and each other), giving a rough guess of 50%; this is convenient since our default "I have no idea at all" guess for any binary question is 50%, and even if we apply Laplace, we still get 50% ((2 + 1) ⁄ (4 + 2) = 50%). So as usual, we will make the most conservative assumption for the fake, and keep our pessimistic assumption about the real.

  1. a = is real
  2. b = is paired with Immortals
  3. P(a) = probability of being real = 0.749
  4. P(¬a) = probability of being not real = 1 − 0.749 = 0.251
  5. P(b|a) = probability a real script will be paired with Immortals = 50% = 0.50
  6. P(b|¬a) = probability a fake script will be paired with Immortals = 1⁄6 = 0.1667

Substitute: P(a|b) = (0.50 × 0.749) ⁄ (0.50 × 0.749 + 0.1667 × 0.251) = 0.3745 ⁄ 0.4163 ≈ 0.899; likelihood ratio: 0.50 ⁄ 0.1667 ≈ 3.

As expect­ed, the sty­lo­met­rics was pow­er­ful evi­dence.

External evidence

Dating

The argument here seems to be of the form that a PDF dated April 2009 is consistent with the estimated timeline for the true script. But what would be inconsistent? Well, a PDF dated after April 2009: such a PDF would raise the question of what exactly the brothers were doing from June 2008 all the way to this counterfactual post-April 2009 date.

But it turns out we already used this argu­ment! We used it as the PDF date inver­sion test. Can we use the April date as evi­dence again and dou­ble-­count it? I don’t think we should since it’s just another way of say­ing “April and ear­lier is evi­dence for it being real, post-April is evi­dence against”, regard­less of whether we jus­tify pre-April dates as being dur­ing the writ­ing period or as being some­thing a faker would­n’t dare do. This argu­ment turns out to be redun­dant with the pre­vi­ous inter­nal evi­dence (which in hind­sight, starts to sound like we ought to have clas­si­fied it as exter­nal evi­dence).

What we might be jus­ti­fied in doing is going back to the PDF date inver­sion test and strength­en­ing it since now we have 2 rea­sons to expect pre-April dates. But as usu­al, we will be con­ser­v­a­tive and leave out this strength­en­ing.

Credit

This is an interesting external argument as it's the only one dependent purely on the passage of time. It's a sort of argument from silence, or more specifically, a hope-function argument.

Hope function

The hope function is simple but exhibits some deeply counterintuitive properties (the focus of the psychologists writing the previously linked paper). Our case is the straightforward part, though. We can best visualize the hope function as a person searching a set of n boxes or drawers or books for something which may not even be there (p). If he finds the item, he now knows p = 1 (it was there after all), and once he has searched all n boxes without finding the thing, he knows p = 0 (it wasn't there after all). Logically, the more boxes he searches without finding it, the more pessimistic he becomes (p shrinks towards 0). How much, exactly? Falk et al 1994 give a general formula for n boxes of which you've searched i boxes when your prior probability of the thing being there is L0:

P = (L0 × (1 − i⁄n)) ⁄ (1 − L0 × (i⁄n))

So for example: if there are n = 10 boxes, we searched i = 5 without finding the thing, and we were only L0 = 50% sure the thing was there in the first place, our new guess about whether the thing was there: (0.5 × (1 − 5⁄10)) ⁄ (1 − 0.5 × (5⁄10)) = 0.25 ⁄ 0.75 ≈ 0.33

In this example, 33% seems like a reasonable answer (and interestingly, it's not simply L0 × (1 − i⁄n) = 25%).
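Falk et al 1994's formula translates directly into code; a sketch (the name `hope` and the argument order are my choices):

```haskell
-- Hope function: probability the item is present after fruitlessly searching
-- i of n boxes, given prior probability l0 that it was there at all.
hope :: Double -> Double -> Double -> Double
hope n i l0 = (l0 * (1 - i/n)) / (1 - l0 * (i/n))
```

`hope 10 5 0.5` reproduces the ≈33% worked example; `hope 20 3 0.5` ≈ 0.46 is the value the credit argument below turns on.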

Credit & hope function

In the case of "taking credit", we can imagine the boxes as years, and each year passed is a box opened. As of October 2012, we have opened 3 boxes since the May/October 2009 leak. How many boxes total should there be? I think 20 boxes is more than generous: after 2 decades, the DN franchise highly likely won't even be active - if anyone was going to claim credit, they likely would've done so by then. What's our prior probability that they will do so at all? Well, of the 4 faked scripts, the author of the Mr. Peepers script took credit but the other 3 seem to be unknown - but it's early days yet, so we'll punt with a 50%. And of course, if the script is real, very few people are going to falsely claim authorship (thereby claiming it's fake?). So our setup looks like this:

  1. a = is real
  2. b = no one has claimed authorship
  3. P(a) = probability of being real = 0.899
  4. P(¬a) = probability of being not real = 1 − 0.899 = 0.101
  5. P(b|a) = probability a real script will have no ownership claim = 99% (shit happens) = 0.99
  6. P(b|¬a) = probability a fake script will have no ownership claim; the probability someone will claim it is the hope function with n = 20, i = 3, L0 = 50%: (0.5 × (1 − 3⁄20)) ⁄ (1 − 0.5 × (3⁄20)) = 0.425 ⁄ 0.925 ≈ 0.459, so the probability someone will not is 1 − 0.459 = 0.541

Then Bayes: P(a|b) = (0.99 × 0.899) ⁄ (0.99 × 0.899 + 0.541 × 0.101) = 0.890 ⁄ 0.945 ≈ 0.942

Likelihood ratio: 0.99 ⁄ 0.541 ≈ 1.83

Official statements

The 2011 descrip­tions of the plot of the real script match the leaked script in sev­eral ways:

  1. no Ryuk or shinigamis

    This is an inter­est­ing change. I don’t think it’s likely a faker would remove them: with­out them, there’s no expla­na­tion of how a Death Note can exist, there’s no comic relief, some plot mechan­ics change (like deal­ing with the hid­den cam­eras), etc. Cer­tainly there’s no rea­son to remove them because they’re hard to film - that’s what CGI is for, and who in the world does SFX or CGI bet­ter than Hol­ly­wood?

  2. Light ends the story good and not evil

  3. Light seeks vengeance

    Items 2 & 3 seem like they would often be con­nect­ed: if Light is to be a good char­ac­ter, what rea­son does he have to use a Death Note? Vengeance is one of the few socially per­mis­si­ble uses. Of course, Light could start as a good char­ac­ter using the Death Note for vengeance and slide down to an evil end­ing, but it’s not as like­ly.

  4. Light seek­ing vengeance for a friend rather than his mother

    This item is con­tra­dic­to­ry, but only weakly so: a switch between mother and friend is an easy change to make, one which does­n’t much affect the rest of the plot.

On net, these 4 items clearly favor the hypoth­e­sis of the script being real. But how much? How much would we expect the fan or faker to avoid Hol­ly­wood-style changes com­pared to actual Hol­ly­wood screen­writ­ers like the Par­la­panides?

This is the exact same ques­tion we already con­sid­ered in the plot sec­tion of inter­nal evi­dence! Now that we have exter­nal attes­ta­tion that some of the plot changes I iden­ti­fied back in 2009 as being Hol­ly­wood-style are in the real script, can we do cal­cu­la­tions?

I don't think we can. The external attestation proves I was right in fingering those plot changes as Hollywood-style, but this is essentially a massive increase in P(b|a) (the chance a real script will have Hollywood-style changes is now ~100%)… but what we didn't know before, and still do not know now, is the other half of the problem, P(b|¬a) (the chance a fake script will have similar Hollywood-style changes).

We could assume that a fake script has a 50% chance of making each change and that item 4 negates one of the others (even though it's really weaker), for a total likelihood ratio of (1 ⁄ 0.5)^(3−1) = 4, but like before, we have no real ground to defend the 50% guess and so we will be conservative and drop this argument like its sibling argument.

Results

To review and sum­ma­rize each argu­ment we con­sid­ered:

Argument/test     P(a)    P(¬a)   P(b|a)   P(b|¬a)   P(a|b)   Likelihood ratio
authorship        0.5     0.5     0.83     0.5       0.64     1.8
name spelling     0.64    0.36    0.5      0.93      0.49     0.54
address           0.49    0.51    0.16     0.16      0.49     1
PDF date          0.49    0.51    0.5      0.25      0.66     2
PDF creator       0.66    0.34    0.99     0.96      0.67     1.03
PDF timezone      (dropped; see above)
script length     0.666   0.333   0.99     0.66      0.749    1.5
Hollywood plot    0.749   0.251   ~1.0     ?         ?        ? (>1)
stylometrics      0.749   0.251   0.5      0.167     0.899    2.99
dating            0.899   0.101   ?        ?         ?        ? (>1)
credit            0.899   0.101   0.99     0.541     0.942    1.83
official plot     0.942   0.058   ~1.0     ?         ?        ? (>1)
legal takedown    0.942   0.058   0.5      0.10      0.988    5

The final posterior speaks for itself: 98%. By taking into account 9 different arguments and thinking about how consistent each one is with the script being real, we've gone from considerable uncertainty to a surprisingly high value, even after bending over backwards to omit 3 particularly disputable arguments.

(One interesting point here is that it's unlikely that any one script, either fake or real, would satisfy all of these features. Isn't that evidence against it being real, certainly with p < 0.05 however we might calculate such a number? Not really. We have this data, however we have it, and so the question is only "which theory is more consistent with our observed data?" After all, any one piece of data is extremely unlikely if you look at it right. Consider a coin-flipping sequence like "HTTTHT"; it looks "fair" with no pattern or bias, and yet what is the probability you will get this sequence by flipping a fair coin 6 times? Exactly the same as "HHHHHH"! Both outcomes have the identical probability (1⁄2)^6 = 1⁄64; some sequence had to win our coin-flipping lottery, even if it's very unlikely any particular sequence would win.)

Likelihood ratio tweaking

Is 98% the correct posterior? Well, that depends both on whether one accepts each individual analysis and also on the original prior of 50%. Suppose one accepted the analysis as presented but believed that actually only 10% of leaked scripts are real? Would such a person wind up believing that the leak is real >50%? How can we answer this question without redoing 9 chained applications of Bayes's theorem? At last we will see the benefit of computing likelihood ratios all along: since likelihood ratios omit the prior P(a), they express something independent of it, and that turns out to be how much we should increase our prior (whatever it is).

To update using a likelihood ratio, we express our probability P(a) as odds, P(a) ⁄ (1 − P(a)), instead; multiply by the likelihood ratio; and convert back! So for our table: we start with 0.5 ⁄ (1 − 0.5) = 1, and multiply by 1.8, 0.54, 1 … 5: 1 × 1.8 × 0.54 × 1 × 2 × 1.03 × 1.5 × 2.99 × 1.83 × 5 ≈ 82.2

And we convert back as 82.2 ⁄ (1 + 82.2) ≈ 0.988 - like magic, our final posterior reappears. Knowing the product of our likelihood ratios is the factor to multiply by, we can easily run other examples. What of the person starting with a 10% prior? Well:

0.1 ⁄ (1 − 0.1) × 82.2 ≈ 9.1, and 9.1 ⁄ (1 + 9.1) ≈ 0.90

And a 1% person is 0.01 ⁄ 0.99 × 82.2 ≈ 0.83, and 0.83 ⁄ 1.83 ≈ 0.45. Ooh, almost to 50%, so we know anyone with a prior of 2% who accepts the analysis may be moved all the way to thinking the script more likely to be true than not (specifically, 0.62).
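The whole odds-form bookkeeping in this section can be sketched in a few lines of Haskell (the ratios are copied from the results table; the omitted timezone and unquantified rows are left out, and the names are mine):

```haskell
-- Likelihood ratios from the results table (quantified rows only).
lrs :: [Double]
lrs = [1.8, 0.54, 1, 2, 1.03, 1.5, 2.99, 1.83, 5]

toOdds, fromOdds :: Double -> Double
toOdds p   = p / (1 - p)   -- probability -> odds
fromOdds o = o / (1 + o)   -- odds -> probability

-- Convert the prior to odds, multiply through all the ratios, convert back.
posterior :: Double -> Double
posterior prior = fromOdds (toOdds prior * product lrs)
```

`posterior 0.5` ≈ 0.988, `posterior 0.1` ≈ 0.90, and `posterior 0.01` ≈ 0.45, matching the conversions above.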

What if we thought we had the right prior of 50% but we terribly messed up each analysis, and each likelihood ratio was twice as large/small as it should be? If we cut each likelihood ratio's strength by half, then we get a new total likelihood ratio of 3.9, and our new posterior is:

0.5 ⁄ 0.5 × 3.9 = 3.9; 3.9 ⁄ (1 + 3.9) ≈ 0.80

What if instead we ignored the 2 arguments with a likelihood ratio greater than 2? Then we get a multiplied likelihood ratio of 3.087, and from 50% we will go to:

0.5 ⁄ 0.5 × 3.087 = 3.087; 3.087 ⁄ (1 + 3.087) ≈ 0.76

Chal­lenges for advanced read­ers:

  1. Redo the calculations, but instead of being restricted to point estimates, work on intervals: give what you feel are the endpoints of 95% credence intervals for P(b|a) & P(b|¬a), and run Bayes on the endpoints to get worst-case and best-case posteriors, to feed into the next argument evaluation
  2. Start­ing with a uni­form prior over 0-1, treat each argu­ment as input to a Bernoulli (be­ta) dis­tri­b­u­tion: a like­li­hood ratio of >1 counts as “suc­cess” while a like­li­hood ratio <=1 counts as a “fail­ure”. How does the pos­te­rior prob­a­bil­ity dis­tri­b­u­tion change after each argu­ment?
  3. Start with the uni­form pri­or, but now treat each argu­ment as a sam­ple from a new nor­mal dis­tri­b­u­tion with a known mean (the best-guess like­li­hood ratio) but unknown vari­ance (how likely each best-guess is to be over­turned by unknown infor­ma­tion). Update on each argu­ment, show the pos­te­rior prob­a­bil­ity dis­tri­b­u­tions as of each argu­ment, and list the final 95% cred­i­ble inter­val.
  4. Do the above, but with an unknown mean as well as unknown vari­ance.

Benefits

With the final result in hand - and as promised, no math beyond arith­metic was nec­es­sary - and after the con­sid­er­a­tion of how strong the result is, it’s worth dis­cussing just what all that work bought us. (How­ever long it took you to read it, it took much longer to write it!) I don’t know about you, but I found it fas­ci­nat­ing going through my old infor­mal argu­ments and see­ing how they stood up to the chal­lenge:

  1. I was sur­prised to real­ize that the “Charley” obser­va­tion was evi­dence against
  2. the cor­po­rate address seemed like good evi­dence for
  3. I did­n’t appre­ci­ate that the inter­nal evi­dence of PDF date and exter­nal evi­dence of dat­ing was dou­ble-­count­ing evi­dence and hence exag­ger­ated the strength of the case
  4. Nor did I real­ize that the key ques­tion about the plot changes was not how clearly Hol­ly­wood they were, but how well a faker could or would imi­tate Hol­ly­wood
  5. Hence, I did­n’t appre­ci­ate that the 2011 descrip­tions of the plot were not the con­clu­sive break­through I took them for, but closer to a minor foot­note cor­rob­o­rat­ing my view of the plot changes as being Hol­ly­wood
  6. Since I hadn't looked into the details, I didn't realize the filesharing links going dead was more dubious evidence than it initially seemed

If any­one else were inter­ested in the issue, the frame­work of the 12 tests pro­vides a fan­tas­tic way of struc­tur­ing dis­agree­ment. By putting num­bers on each item, we can focus dis­agree­ment to the exact issue of con­tention, and the for­mal struc­ture lets us tar­get any future research by focus­ing on the largest (or small­est) like­li­hood ratios:

  • What data could we find on legal takedowns of scripts or files in general to firm up our estimate of P(b|¬a)?
  • How accu­rate is sty­lo­met­rics exact­ly? Could I just have got­ten lucky? If we get a script for Every­thing For A Rea­son or Immor­tals, are the results rein­forced or does the clus­ter­ing go hay­wire and the leaked script no longer resem­ble their known writ­ing?
  • Can we find offi­cial mate­ri­al, writ­ten by Charles Par­la­panides, which uses “Charley” instead?
  • Given the French site report­ing script mate­r­ial in May, should we throw out the PDF date entirely by say­ing the gap between April and May is too short to be worth includ­ing in the analy­sis? Or does that just make us shift the like­li­hood ratio of 2 to the other dat­ing argu­ment?
  • If we assem­bled a larger cor­pus of leaked and gen­uine scripts, will the like­li­hood ratio for the inclu­sion of author­ship (1.8) shrink, since that was derived from a small cor­pus?

This would be the sort of dis­cus­sion even bit­ter foes could engage in pro­duc­tive­ly, by col­lab­o­rat­ing on com­pil­ing scripts or search­ing inde­pen­dently for mate­r­ial - and pro­duc­tive dis­cus­sions are the best kind of dis­cus­sion.

The truth?

In textual criticism, usually the ground truth is unobtainable: all parties are dead & new discoveries of definitive texts are rare. Many questions are "not beyond all conjecture" (pace Thomas Browne) but are beyond resolution.

Our case is hap­pier: we can just ask one of the Par­la­panides. A Twit­ter account was already linked, so ask­ing is easy. Will they reply? 2009 was a long time ago, but 2011 (when they were replaced) was not so long ago. Since the script was scrapped, one would hope they would feel free to reply or reply hon­est­ly, but we can’t know.

I sus­pect he will, but I’m not so san­guine he will give a clear yes or no. If he does, I have ~85% con­fi­dence that he will con­firm they did write it.

Why this pes­simism of only 85%?

  1. I have not done this sort of analy­sis before, either the Bayesian or sty­lo­met­ric aspects
  2. one argu­ment turned out to be an argu­ment against being real
  3. sev­eral argu­ments turned out to be use­less or unquan­tifi­able
  4. sev­eral argu­ments rest on weak enough data that they could also turn out use­less or neg­a­tive; eg. the PDF time­zone argu­ment
  5. our applications of Bayes assume, as mentioned previously, "conditional independence": that each argument is "independent" and can be taken at face-value. This is false: several of the arguments are plausibly correlated (eg. a skilled forger might be expected to look up addresses and names and timezones), and so the true conclusion will be weaker, perhaps much weaker. Hopefully making conservative choices partially offsets this overestimating tendency - but how much did it?
  6. I made more mis­takes than I care to admit work­ing out each prob­lem.
  7. And finally, I haven't been able to come up with multiple good arguments why the script is a fake, which suggests I am now personally invested in it being real and so my final 98% calculation is a substantial overestimate. One shouldn't be foolishly confident in one's statistics.

No comment

I mes­saged Par­la­panides on Twit­ter on 2012-10-27; after some back and forth, he spec­i­fied that his “no” answer was an infer­ence based on what was then the first line of the plot sec­tion: the men­tion that Ryuk did not appear in the script, but that they loved Ryuk and so it was not their script. I tried get­ting a more direct answer by men­tion­ing the ANN arti­cle about Shane and name-­drop­ping “Luke Mur­ray” to see if he would object or elab­o­rate, but he repeated that the stu­dio hated how Ryuk appeared in the manga and he could­n’t say much more. I thanked him for his time and dropped the con­ver­sa­tion.

Unfor­tu­nate­ly, this is not the clear open-and-shut denial or affir­ma­tion I was hop­ing for. (I do not hold it against him, since I’m grate­ful and a lit­tle sur­prised he took the time to answer me at all: there is no pos­si­ble ben­e­fit for him to answer my ques­tions, poten­tial harm to his rela­tion­ships with stu­dios, and he is a busy guy from every­thing I read about him & his brother while research­ing this essay.)

There are at least two ways to interpret this curious sort of non-denial/non-affirmation: the script has nothing to do with the Parlapanides or the studios and is a fake which merely happens to match the studio's desires in omitting Ryuk entirely; or it is somehow a descendant or relative of the Parlapanides script which they are disowning or regard as not their script (Ryuk is a major character in most versions of DN).

If Par­la­panides had affirmed the script, then clearly that would be strong evi­dence for the scrip­t’s real­ness. If he had denied the script, that would be strong evi­dence against the script. And the in-­be­tween cas­es? If there had been a clear hint on his part - per­haps some­thing like “of course I can­not offi­cially con­firm that that script is real” - then we might want to con­strue it as evi­dence for being real, but he gave a spe­cific way in which the leaked script did not match his script, and this must be evi­dence against.

How much evidence against? I specified my best guess that he would reply clearly at 40%, and that he would affirm the script, conditional on replying clearly, at 85%; so roughly, I was expecting a clear affirmation only 40% × 85% = 34% of the time. That is, I did not expect to get a clear affirmation despite having high confidence in the script, and this suggests that the lack of a clear affirmation cannot be very strong evidence for me. I don't think I would be happy with a likelihood ratio stronger (smaller) than 0.25, so I would update thus, reusing our previous likelihood ratios:

82.2 × 0.25 ≈ 20.5, and then we have a new posterior: 20.5 ⁄ (1 + 20.5) ≈ 0.95

Conclusion

How should we regard this? I'm moderately disturbed: it feels like Parlapanides's non-answer should matter more. But all the previous points seem roughly right. This represents an interesting question of bullet-biting & "Confidence levels inside and outside an argument", or perhaps of one man's modus ponens being another man's modus tollens: does the conclusion discredit the arguments & calculations, or do the arguments & calculations discredit the conclusion?

Overall, I feel inclined to bite the bullet. Now that I have laid out the multiple lines of converging evidence and rigorously specified why I found them convincing arguments, I simply don't see how to escape the conclusion. Even assuming large errors in the strengths - in the likelihood-ratio section, we looked at halving the strength of each ratio and also discarding the 2 strongest - we still wind up increasing our confidence.

So: I believe the script is real, if not exactly what the Parlapanides brothers wrote.

See Also

Appendix

Conditional independence

The phrase “conditional independence” is just the assumption that each argument is separate and lives or dies on its own. This is not strictly true: if someone were deliberately faking a script, a good faker would be much more likely to cut no corners and carefully fake each observation, while a careless faker would be much more likely to be lazy and miss many. Making this assumption means that our final estimate will probably overstate the probability, but in exchange, it makes life much easier: not only is it hard to even think about what conditional dependencies there might be between arguments, modeling them makes the math too hard for me to do right now!

Alex Schell offers some helpful comments on this topic.

The odds form of Bayes’ theorem is this:

    P(h | e) / P(¬h | e) = [P(h) / P(¬h)] × [P(e | h) / P(e | ¬h)]

In English, the ratio of the posterior probabilities (the “posterior odds” of h) equals the product of the ratio of the prior probabilities and the likelihood ratio.

What we are interested in is the likelihood ratio P(e | real) / P(e | ¬real), where e is all the external and internal evidence we have about the DN script.

e is equivalent to the conjunction of each of the 13 individual pieces of evidence, which I’ll refer to as e1 through e13:

    e = e1 ∧ e2 ∧ … ∧ e13
So the likelihood ratio we’re after can be written like this:

    P(e1 ∧ … ∧ e13 | real) / P(e1 ∧ … ∧ e13 | ¬real)
I abbreviate P(e | real) / P(e | ¬real) as LR(e), and P(ei | real) / P(ei | ¬real) as LR(ei).

Now, it follows from probability theory (the chain rule) that the above is equivalent to

    LR(e) = [P(e1 | real) / P(e1 | ¬real)] × [P(e2 | real ∧ e1) / P(e2 | ¬real ∧ e1)] × … × [P(e13 | real ∧ e1 ∧ … ∧ e12) / P(e13 | ¬real ∧ e1 ∧ … ∧ e12)]
(The ordering is arbitrary.) Now comes the point where the assumption of conditional independence simplifies things greatly. The assumption is that the “impact” of each piece of evidence (i.e. the likelihood ratio associated with it) does not vary based on what other evidence we already have. That is, for any evidence ei, its likelihood ratio is the same no matter what other evidence you add to the right-hand side:

    P(ei | real ∧ c) / P(ei | ¬real ∧ c) = P(ei | real) / P(ei | ¬real)

for any conjunction c of other pieces of evidence.

Assuming conditional independence simplifies the expression for LR(e) greatly:

    LR(e) = LR(e1) × LR(e2) × … × LR(e13)
On the other hand, the conditional independence assumption is likely to have a substantial impact on what value LR(e) takes. This is because most pieces of evidence are expected to correlate positively with one another instead of being independent. For example, if you know that the script is 20,000 words of Hollywood plot and that the stylometric analysis seems to check out, then if you are dealing with a fake script (“is not real”) it is an extremely elaborate fake, and (e.g.) the PDF metadata are almost certain to “check out” as well, and so provide much weaker evidence for “is real” than the calculation assuming conditional independence suggests. On the other hand, the evidence of legal takedowns seems unaffected by this concern, as even a competent faker would hardly be expected to create the evidence of takedowns.
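Concretely, the independence assumption is what licenses collapsing the entire analysis into a single product, as done in the main text; in Haskell, with the per-argument ratios listed in the footnotes:

```haskell
-- multiplying the per-argument likelihood ratios is valid only under
-- conditional independence; positively-correlated evidence would make
-- the true combined ratio smaller than this product
main :: IO ()
main = print (product [1.8, 0.538, 1, 2, 1.033, 1.5, 2.999, 1.831, 5])  -- ~82.4
```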


  1. The earliest mention I’ve been able to find is a French site which posted on 2009-05-17 a translation of the beginning of the leaked script; no source is given, and it’s not clear who did the translation, what script was used, or where the script was obtained. So while the script was clearly circulating by mid-May, I can’t date the leak any earlier than that date.↩︎

  2. SHA-512: 954082c8cde2ccee1383196fe7c420bd444b5b9e5d676b01b3eb9676fa40427983fb27ad8458a784ea765d66be93567bac97aa173ab561cd7231d8c017a4fa70↩︎

  3. The raw metadata can be extracted using pdftk thus: pdftk 2009-parlapanides-deathnotemovie.pdf dump_data:

    InfoKey: Producer
    InfoValue: DynamicPDF v5.0.2 for .NET
    InfoKey: CreationDate
    InfoValue: D:20090409213247Z
    PdfID0: 9234e3f3316974458188a09a7ad849e3
    PdfID1: 9234e3f3316974458188a09a7ad849e3
    NumberOfPages: 112
    ↩︎
  4. Specifically, config.txt reads:

    corpus.format="plain"
    corpus.lang="English.all"
    analyzed.features="w"
    ngram.size=1
    mfw.min=1
    mfw.max=1000
    mfw.incr=1
    start.at=1
    culling.min=0
    culling.max=0
    culling.incr=20
    mfw.list.cutoff=5000
    delete.pronouns=FALSE
    analysis.type="CA"
    use.existing.freq.tables=FALSE
    use.existing.wordlist=FALSE
    consensus.strength=0.5
    distance.measure="EU"
    display.on.screen=TRUE
    write.pdf.file=FALSE
    write.jpg.file=FALSE
    write.emf.file=FALSE
    write.png.file=FALSE
    use.color.graphs=TRUE
    titles.on.graphs=TRUE
    dendrogram.layout.horizontal=TRUE
    pca.visual.flavour="classic"
    sampling="no.sampling"
    sample.size=10000
    length.of.random.sample=10000
    sampling.with.replacement=FALSE
    ↩︎
  5. The fake Batman script is pretty weird; it starts off interesting and has many good parts, but then flounders in opacity and concludes even more weirdly, with far too much material in it for a single film to plausibly include. If it were supposed to be by anyone but Christopher Nolan, you’d comment “this can’t be real - the plot is too flabby and confusing, and the dialogue veers into non sequiturs and half-baked philosophy” (which of course it is). But one almost expects that of Nolan, and for the filmed movie to be better than the script, so paradoxically, the worsening quality may have lent it some credibility.↩︎

  6. Modulo the previously discussed issue that the leaked script seems to have been circulating in May 2009, which would drastically cut down the window to a month or less.↩︎

  7. The earliest Tweet I can find using SnapBird tying him to LA is 2011-06-10 (other searches like “moving”, “move”, “relocating”, “California”, “CA”, “New Jersey”, “NJ” etc do not turn up anything useful). This is probably because his tweets do not go further back than April 2011, where there is mention of some sort of hacking of his account. The next step is a Google search for Charley Parlapanides ("New Jersey" OR "Los Angeles" OR California) with a date range of 6/1/2009-6/9/2011 (to pick up any locations given from when they started on the script to just before that 2011-06-10 tweet). Results were equivocal: a 2011-02-12 blog comment about “this town” might indicate residence in LA/Hollywood; a 2010-12-19 mention of walking into a director’s production office of sets & costumes might indicate residence as well. Beyond that, I can’t find anything.↩︎

  8. Quick, of the anime aired 20 years ago in 1992, how many are active franchises? Of the 48 on the first page, maybe 3 or 4 seem active.↩︎

  9. Or more precisely, sometimes people do falsely claim authorship and even sue studios over it; but if you picked 100 random scripts, would you expect to find more than 1 such instance? Keeping in mind most scripts never turn into movies but die in development hell!↩︎

  10. 1 link was dead because “File Belongs to Non-Validated Account” and another link was dead because “The file you attempted to download is an archive that is part of a set of archives. MediaFire does not support unlimited downloads of split archives and the limit for this file has been reached. MediaFire understands the need for users to transfer very large or split archives, up to 10GB per file, and we offer this service starting at $1.50 per month.” Neither reason would necessarily be applicable to a 3MB PDF script.↩︎

  11. The gory details: halving the strength of each ratio means halving its distance from 1, so in either direction x becomes 1 + (x-1)/2:

    map (\x -> 1 + ((x-1)/2))
        [1.8, 0.538,1,2,1.033,1.5,2.999,1.831,5]
    
    [1.4,0.769,1.0,1.5,1.0165,1.25,1.9995,1.4155,3.0]
    
    product [1.4,0.769,1.0,1.5,1.0165,1.25,1.9995,1.4155,3.0]
    
    17.4
    ↩︎
  12. Easy enough:

    product (filter (<2) [1.8, 0.538,1,2,1.033,1.5,2.999,1.831,5])
    
    2.74
    ↩︎
  13. Sir Thomas Browne, Hydriotaphia, Urn Burial (chapter 5):

    What Song the Syrens sang, or what name Achilles assumed when he hid himself among women, though puzzling Questions are not beyond all conjecture. What time the persons of these Ossuaries entred the famous Nations of the dead, and slept with Princes and Counsellours, might admit a wide resolution. But who were the proprietaries of these bones, or what bodies these ashes made up, were a question above Antiquarism. Not to be resolved by man, nor easily perhaps by spirits, except we consult the Provinciall Guardians, or tutellary Observators. Had they made as good provision for their names, as they have done for their Reliques, they had not so grossly erred in the art of perpetuation. But to subsist in bones, and be but Pyramidally extant, is a fallacy in duration. Vain ashes, which in the oblivion of names, persons, times, and sexes, have found unto themselves, a fruitlesse continuation, and only arise unto late posterity, as Emblemes of mortall vanities; Antidotes against pride, vain-glory, and madding vices. Pagan vain-glories which thought the world might last for ever, had encouragement for ambition, and finding no Atropos unto the immortality of their Names, were never dampt with the necessity of oblivion. Even old ambitions had the advantage of ours, in the attempts of their vain-glories, who acting early, and before the probable Meridian of time, have by this time found great accomplishment of their designes, whereby the ancient Heroes have already out-lasted their Monuments, and Mechanicall preservations. But in this latter Scene of time we cannot expect such Mummies unto our memories, when ambition may fear the Prophecy of Elias, and Charles the fifth can never hope to live within two Methusela’s of Hector.

    ↩︎