# Death Note: L, Anonymity & Eluding Entropy

Applied Computer Science: On Murder Considered as STEM Field; using information theory to quantify the magnitude of Light Yagami’s mistakes in Death Note and considering fixes
anime, criticism, computer-science, cryptography, Bayes, insight-porn
2011-05-04–2017-12-15 · finished · certainty: highly likely

In the manga Death Note, the protagonist Light Yagami is given the supernatural weapon “Death Note” which can kill anyone on demand, and begins using it to reshape the world. The genius detective L attempts to track him down with analysis and trickery, and ultimately succeeds. Death Note is almost a thought-experiment: given the perfect murder weapon, how can you screw up anyway? I consider the various steps of L’s process from the perspective of computer security, cryptography, and information theory, to quantify Light’s initial anonymity and how L gradually de-anonymizes him, and consider which mistake was the largest as follows:

1. Light’s fundamental mistake is to kill in ways unrelated to his goal.

Killing through heart attacks does not just make him visible early on, but the deaths reveal that his assassination method is impossibly precise and something profoundly anomalous is going on. L has been tipped off that Kira exists. Whatever the bogus justification may be, this is a major victory for his opponents. (To deter criminals and villains, it is not necessary for there to be a globally-known single anomalous or supernatural killer, when it would be equally effective to arrange for all the killings to be done naturalistically by ordinary mechanisms such as third parties/police/judiciary or used indirectly as parallel construction to crack cases.)

2. Worse, the deaths are non-random in other ways—they tend to occur at particular times!

Just the scheduling of deaths cost Light 6 bits of anonymity.

3. Light’s third mistake was reacting to the blatant provocation of Lind L. Tailor.

Taking the bait let L narrow his target down to 1⁄3 the original Japanese population, for a gain of ~1.6 bits.

4. Light’s fourth mistake was to use confidential police information stolen using his policeman father’s credentials.

This mistake was the largest in bits lost: it cost him 11 bits of anonymity; in other words, twice what his scheduling cost him and almost 8 times the murder of Tailor!

5. Killing Ray Penbar and the FBI team.

If we assume Penbar was tasked 200 leads out of the 10,000, then murdering him and the fiancee dropped Light just 6 bits or a little over half the fourth mistake and comparable to the original scheduling mistake.

6. Endgame: At this point in the plot, L resorts to direct measures and enters Light’s life directly, enrolling at the university, with Light unable to perfectly play the role of innocent under intense in-person surveillance.

From that point on, Light is screwed as he is now playing a deadly game of “Mafia” with L & the investigative team. He frittered away >25 bits of anonymity and then L intuited the rest and suspected him all along.

Finally, I suggest how Light could have most effectively employed the Death Note and limited his loss of anonymity. In an appendix, I discuss the maximum amount of information leakage possible from using a Death Note as a communication device.

(Note: This essay assumes familiarity with the early plot of Death Note. If you are unfamiliar with DN, consult a plot summary or read the DN rules.)

I have called the protagonist of Death Note, Light Yagami, “hubristic” and said he made big mistakes. So I ought to explain what he did wrong and how he could do better.

While Light starts scheming and taking serious risks as early as the arrival of the FBI team in Japan, he has fundamentally already screwed up. L should never have gotten that close to Light. The Death Note kills flawlessly, without forensic trace and over arbitrary distances; Death Note is almost a thought-experiment: given the perfect murder weapon, how can you screw up anyway?

Some of the other Death Note users highlight the problem. The user in the Yotsuba arc carries out the normal executions, but also kills a number of prominent competitors. The killings directly point to the Yotsuba Group and eventually the user’s death. The moral of the story is that indirect relationships can be fatal in narrowing down the possibilities from ‘everyone’ to ‘these 8 men’.

# Detective stories as optimization problems

In Light’s case, L starts with the world’s entire population of 7 billion people and needs to narrow it down to 1 person. It’s a search problem. It maps fairly directly onto basic information theory, in fact. (There are also many published case studies in applied deanonymization.) To uniquely specify one item out of 7 billion, you need 33 bits of information, because log2(7,000,000,000) ≈ 32.7; to use an analogy, your 32-bit computer can only address one unique location in memory out of 4 billion locations, and adding another bit doubles the capacity to >8 billion. Is 33 bits of information a lot?
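The arithmetic is easy to check; a minimal sketch in plain Python (the variable names are mine, not the essay’s):

```python
import math

def bits_to_specify(n: int) -> float:
    """Bits needed to single out one item from n equally likely possibilities."""
    return math.log2(n)

world_population = 7_000_000_000
print(round(bits_to_specify(world_population), 1))  # 32.7, i.e. 33 bits in round numbers

# The 32-bit analogy: 2**32 addresses cover ~4.3 billion locations,
# and one more bit doubles that to ~8.6 billion -- enough for everyone.
print(2**32 < world_population < 2**33)  # True
```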

Not really. L could get one bit just by looking at history or crime statistics, and noting that mass murderers are, to an astonishing degree, male1, thereby ruling out half the world population and actually starting L off with a requirement to obtain only 32 bits to break Light’s anonymity.2 If Death Note users were sufficiently rational & knowledgeable, they could draw on concepts like superrationality to acausally cooperate3 to avoid this information leakage… by arranging to pass on Death Notes to females4 to restore a 50:50 gender ratio—for example, if for every female who obtained a Death Note there were 3 males with Death Notes, then all users could roll a 1d3 die and if 1, keep it, and if 2 or 3, pass it on to someone of the opposite gender.

We should first point out that Light is always going to leak some bits. The only way he could remain perfectly hidden is to not use the Death Note at all. If you change the world in even the slightest way, then you have leaked information about yourself in principle. Everything is connected in some sense; you cannot magically wave away the existence of fire without creating a cascade of consequences that result in every living thing dying. For example, the fundamental point of Light executing criminals is to shorten their lifespan—there’s no way to hide that. You can’t both shorten their lives and not shorten their lives. He is going to reveal himself this way, at the least, to the actuaries and statisticians.

More historically, this has been a challenge for cryptographers, as in WWII: how did the Allies exploit the Enigma & other communications without revealing they had done so? Their solution was misdirection, like search planes that ‘just happened’ to find German submarines or leaks about there being undiscovered spies. (However, the famous story that Winston Churchill allowed the town of Coventry to be bombed rather than risk the secret of Ultra has been debunked.) This worked in part because of German overconfidence, because the war did not last too long, and in part because each cover story was plausible on its own and no one was, in the chaos of war, able to see the whole picture and realize that there were too many lucky search planes and too many undiscoverable moles; eventually, however, someone would realize, and apparently some Germans did conclude that Enigma had to have been broken (but much too late). It’s not clear to me what would be the best misdirection for Light to mask his normal killings—use the Death Note’s control features to invent an anti-criminal terrorist organization?

So there is a real challenge here: one party is trying to infer as much as possible from observed effects, and the other is trying to minimize how much the former can observe while not stopping entirely. How well does Light balance the competing demands?

# Mistakes

## Mistake 1

However, he can try to reduce the leakage and make his anonymity set as large as possible. For example, killing every criminal with a heart attack is a dead give-away. Criminals do not die of heart attacks that often. (The point is more dramatic if you replace ‘heart attack’ with ‘lupus’; as we all know, in real life it’s never lupus.) Heart attacks are a subset of all deaths, and by restricting himself, Light makes it easier to detect his activities. 1000 deaths of lupus are a blaring red alarm; 1000 deaths of heart attacks are an oddity; and 1000 deaths distributed over the statistically likely suspects of cancer and heart disease etc. are almost invisible (but still noticeable in principle).

So, Light’s fundamental mistake is to kill in ways unrelated to his goal. Killing through heart attacks does not just make him visible early on, but the deaths reveal that his assassination method is supernaturally precise. L has been tipped off that Kira exists. Whatever the bogus justification may be, this is a major victory for his opponents.

First mistake, and a classic one of serial killers (eg the BTK killer’s vaunting communications were less anonymous than he believed): delusions of grandeur and the desire to taunt, play with, and control their victims and demonstrate their power over the general population. From a literary perspective, this similarity is clearly not an accident, as we are meant to read Light as the Sociopath Hero archetype: his ultimate downfall is the consequence of his hubris and sadism, particularly in the original sadistic sense. Light cannot help but self-sabotage like this.

(This is also deeply problematic from the point of carrying out Light’s theory of deterrence: to deter criminals and villains, it is not necessary for there to be a globally-known single supernatural killer, when it would be equally effective to arrange for all the killings to be done naturalistically by third parties/police/judiciary or used indirectly to crack cases. Arguably the deterrence would be more effective the more diffused it’s believed to be—since a single killer has a finite lifespan, finite knowledge, fallibility, and idiosyncratic preferences which reduce the threat and connection to criminality, while if all the deaths were ascribed to unusually effective police or detectives, this would be inferred as a general increase in all kinds of police competence, one which will not instantly disappear when one person gets bored or hit by a bus.)

## Mistake 2

Worse, the deaths are non-random in other ways—they tend to occur at particular times! Graphed, daily patterns jump out.

L was able to narrow down the active times of the presumable student or worker to a particular range of longitude, say 125–150° out of 180°; and what country is most prominent in that range? Japan. So that cut down the 7 billion people to around 0.128 billion; 0.128 billion requires 27 bits (log2(128,000,000) ≈ 26.9), so just the scheduling of deaths cost Light 6 bits of anonymity!
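The bit cost of the scheduling leak can be checked directly (a sketch using the essay’s rounded population figures):

```python
import math

world, japan = 7_000_000_000, 128_000_000  # candidate pool before/after the timing inference
anonymity_before = math.log2(world)   # ~32.7 bits
anonymity_after = math.log2(japan)    # ~26.9 bits
print(round(anonymity_before - anonymity_after, 1))  # 5.8 bits lost -- the "6 bits"
```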

### De-anonymization

On a side-note, some might be skeptical that one can infer much of anything from the graph and that Death Note was just glossing over this part. “How can anyone infer that it was someone living in Japan just from 2 clumpy lines at morning and evening in Japan?” But actually, such a graph is surprisingly precise. I learned this years before I watched Death Note, when I was heavily active on Wikipedia; often I would wonder if two editors were the same person or roughly where an editor lived. If their edits or user page did not reveal anything useful, I would go to “Kate’s edit counter” and examine the times of day all their hundreds or thousands of edits were made at. Typically, what one would see was ~4 hours where there were no edits whatsoever, then ~4 hours with moderate to high activity, a trough, then another gradual rise to 8 hours later and a further decline down to the first 4 hours of no activity. These periods quite clearly corresponded to sleep (pretty much everyone is asleep at 4 AM), morning, lunch & work hours, evening, and then night with people occasionally staying up late and editing5. There was noise, of course, from people staying up especially late or getting in a bunch of editing during their workday or occasionally traveling, but the overall patterns were clear—never did I discover that someone was actually a nightwatchman and my guess was an entire hemisphere off. (Academic estimates based on user editing patterns correlate well with what is predicted on the basis of the geography of IP edits.6)
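The edit-counter trick can be sketched in a few lines: find the quietest 8-hour stretch of someone’s activity histogram, assume its midpoint is ~4 AM local time, and read off the UTC offset. (A toy model with made-up timestamps, not Wikipedia’s actual counter.)

```python
from collections import Counter

def likely_utc_offset(edit_hours_utc, sleep_mid_local=4):
    """Guess a user's UTC offset from the hours (0-23, UTC) of their edits."""
    counts = Counter(edit_hours_utc)

    # total activity in the 8-hour window starting at `start`, wrapping past midnight
    def window_load(start):
        return sum(counts.get((start + i) % 24, 0) for i in range(8))

    sleep_start = min(range(24), key=window_load)  # quietest 8-hour stretch
    sleep_mid_utc = (sleep_start + 4) % 24         # its midpoint ~ 4 AM local
    return (sleep_mid_local - sleep_mid_utc) % 24

# Someone editing 09:00-23:00 Japan time shows up as 00:00-14:00 UTC:
edits = [h for h in range(0, 15) for _ in range(30)]
print(likely_utc_offset(edits))  # 9, i.e. UTC+9 -- Japan
```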

Computer security research offers more scary results. There are an amazing number of ways to break someone’s privacy and de-anonymize them (and there is also financial incentive to do so, in order to advertise and price-discriminate):

1. small errors in their computer’s clock’s time (even over Tor)

2. Web browsing history7 or just the version and plugins8; and this is when random Firefox or Google Docs or Facebook bugs don’t leak your identity

3. based on how slow pages load9 (timing attacks can also be used to learn website usernames or # of private photos)

4. Knowledge of what ‘groups’ a person was in could uniquely identify 42%10 of people on one social networking site, and possibly Facebook & 6 others

5. Similarly, what someone has watched11, popular or obscure, often grants access to the rest of their profile if it was included in the Netflix Prize dataset. (This was more dramatic than the AOL search-log release because AOL searches had a great deal of personal information embedded in the search queries, but in contrast, the Netflix data seems impossibly impoverished—there’s nothing obviously identifying about what anime one has watched unless one watches obscure ones.)

6. Researchers have developed algorithms to find isomorphisms between arbitrary graphs12 (such as social networks stripped of any and all data except for the graph structure), for example matching accounts across two social networks, and give many examples of public datasets that could be de-anonymized13—such as your Amazon purchases (Calandrino et al 2011). These attacks are on just the data that is left after attempts to anonymize data; they don’t exploit the observation that the choice of what data to remove is as interesting as what is left, what has been called “The Redactor’s Dilemma”.

7. Usernames hardly bear discussing

8. Your hospital records can be de-anonymized just by looking at public voting rolls.14 That researcher later went on to run “experiments on the identifiability of de-identified survey data [cite], pharmacy data [cite], clinical trial data [cite], criminal data [State of Delaware v. Gannett Publishing], DNA [cite, cite, cite], tax data, public health registries [cite (sealed by court), etc.], web logs, and partial Social Security numbers [cite].” (Whew.)

9. Your typing rhythm is surprisingly unique, and the sounds of typing and arm movements can identify you or be used to snoop on input

10. Knowing your morning commute as loosely as to the individual blocks (or less granular) uniquely identifies you (Golle & Partridge 2009); knowing your commute to the zip code/census tract uniquely identifies 5% of people

11. Your handwriting is fairly unique, sure—but so is how you fill in bubbles on tests15

12. Speaking of handwriting, your writing style can be pretty unique too

13. the unnoticeable background electrical hum may uniquely date audio recordings. Unnoticeable sounds can also be used to persistently track devices/people, exfiltrate information across air gaps, monitor room presence/activity, and even recover input from tapping noises

14. you may have heard of laser microphones for eavesdropping… but what about eavesdropping via video recordings of potato chip bags or candy wrappers, or cellphone gyroscopes? Lasers are good for detecting your heartbeat as well, which is—of course—uniquely identifying.

15. steering & driving patterns are sufficiently unique as to allow identification of drivers from as little as 1 turn in some cases. These attacks also work on smartphones, using time zone, barometric pressure, public transportation timing, IP address, & pattern of connecting to WiFi or cellular networks (Mosenia et al 2017)

16. smartphones can be IDed by the pattern of pixel noise, due to sensor noise such as small imperfections in the CCD sensors and lenses (and Facebook has even patented this)

17. smartphone usage patterns, such as app preferences, app switching rates, consistency of commute patterns, overall geographic mobility, and slower or less driving have been correlated with Alzheimer’s disease (Kourtis et al 2019) and personality.16

Eye tracking is similarly identifying.

18. voices correlate with not just age/gender/ethnicity, but much else besides

(The only surprising thing about DNA-related privacy breaks is how long they have taken to show up.)

To summarize: anonymity is almost impossible17 and privacy is dead18. (See also “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization”.)

## Mistake 3

Light’s third mistake was reacting to the provocation of Lind L. Tailor. It was a blatant attempt to provoke a reaction from an unprepared Light, and that alone should have been sufficient reason to simply ignore it (even if Light could not have reasonably known exactly how it was a trap): one should never do what an enemy wants one to do on ground & terms & timing prepared by the enemy. Running the broadcast in 1 region was also a gamble & a potential mistake on L’s part; he had no real reason to think Light was in Kanto (or if he did already have priors/information to that effect, he should’ve been bisecting Kanto) and should have arranged for it to be broadcast to exactly half of Japan’s population, obtaining an expected maximum of 1 bit. But it was one that paid off; he narrowed his target down to 1⁄3 the original Japanese population, for a gain of ~1.6 bits. (You can see it was a gamble by considering if Light had been outside Kanto; since he would not see it live, he would not have reacted, and all L would learn is that his suspect was in that other 2⁄3 of the population, for a gain of only ~0.3 bits.)
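The ~1.6-bit and maximum-1-bit figures fall out of basic surprisal/entropy arithmetic (a sketch; note that an ideal non-reaction would be worth up to log2(3/2) ≈ 0.58 bits, and the essay’s ~0.3 figure presumably discounts this because a non-reaction is weaker evidence):

```python
import math

def surprisal(p):
    """Bits learned when an event of probability p is observed."""
    return -math.log2(p)

p_kanto = 1 / 3
print(round(surprisal(p_kanto), 2))      # 1.58 bits gained when Kira reacts (he's in the 1/3)
print(round(surprisal(1 - p_kanto), 2))  # 0.58 bits at most had he been in the other 2/3

# Expected gain is the entropy of the split -- maximized by broadcasting to half of Japan:
def entropy(p):
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(round(entropy(1 / 3), 2), round(entropy(1 / 2), 2))  # 0.92 1.0
```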

But even this wasn’t a huge mistake. He lost 6 bits to his schedule of killing, and lost another 1.6 bits to temperamentally killing Lind L. Tailor, but since the male population of Kanto is 21.5 million (43 million total), he still has ~24 bits of anonymity left (log2(21,500,000) ≈ 24.4). That’s not too terrible, and the loss is mitigated even further by other details of this mistake, as pointed out by Zmflavius; specifically, that unlike “being male” or “being Japanese”, the information about being in Kanto is subject to decay, since people move around all the time for all sorts of reasons.

(One could still run the inference “backwards” on any particular person to verify they were in Kanto in the right time period, but as time passes, it becomes less possible to run the inference “forwards” and only examine people in Kanto.)

This mistake also shows us that the important thing that information theory buys us, really, is not the bit (we could be using the natural logarithm rather than the base-2 logarithm, and compare “nats” rather than “bits”) so much as comparing events in the plot on a logarithmic scale. If we simply looked at the absolute number of people ruled out at each step, we’d conclude that the first mistake by Light was a debacle without compare, since it let L rule out >6 billion people, approximately 60× more people than all the other mistakes put together would let L rule out. Mistakes are relative to each other, not absolutes.

## Mistake 4

Light’s fourth mistake was to use confidential police information stolen using his policeman father’s credentials. This was unnecessary, as there were countless criminals he could still execute using public information (face+name is not typically difficult to get), and if for some reason he needed a specific criminal, he could either restrict use of secret information to a few high-priority victims—if only to avoid suspicions of hacking & subsequent security upgrades costing him access!—or manufacture, using the Death Note’s coercive powers or Kira’s public support, a way to release information, such as a ‘leak’ or passing public transparency laws.

This mistake was the largest in bits lost. But interestingly, many or even most Death Note fans do not seem to regard this as his largest mistake, instead pointing to his killing Lind L. Tailor or perhaps relying too much on Mikami. The information-theoretical perspective strongly disagrees, and lets us quantify how large this mistake was.

When he acts on the secret police information, he instantly cuts down his possible identity to one out of a few thousand people connected to the police. Let’s be generous and say 10,000. It takes 14 bits to specify 1 person out of 10,000 (log2(10,000) ≈ 13.3)—as compared to the 24–25 bits to specify a Kanto dweller.

This mistake cost him 11 bits of anonymity; in other words, this mistake cost him twice what his scheduling cost him and almost 8 times the murder of Tailor!
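In the same units as before (a sketch using the essay’s round numbers):

```python
import math

kanto_males = 21_500_000  # candidate pool before the police-data slip
police_circle = 10_000    # generous count of people with access to the stolen data
cost = math.log2(kanto_males) - math.log2(police_circle)
print(round(cost, 1))  # 11.1 bits -- roughly double the ~6-bit scheduling mistake
```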

## Mistake 5

In comparison, the fifth mistake, murdering Ray Penbar’s fiancee and focusing L’s suspicion on Penbar’s assigned targets, was positively cheap. If we assume Penbar was tasked 200 leads out of the 10,000, then murdering him and the fiancee dropped Light from 14 bits to 8 bits (log2(200) ≈ 7.6), or just 6 bits—a little over half the fourth mistake and comparable to the original scheduling mistake.

## Endgame

At this point in the plot, L resorts to direct measures and enters Light’s life directly, enrolling at the university. From this point on, Light is screwed, as he is now playing a deadly game of “Mafia” with L & the investigative team. He frittered away >25 bits of anonymity, and then L intuited the rest and suspected him all along. (We could justify L skipping over the remaining 8 bits by pointing out that L can analyze the deaths and infer psychological characteristics like arrogance, puzzle-solving, and great intelligence, which, combined with heuristically searching the remaining candidates, could lead him to zero in on Light.)
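Tallying the per-mistake estimates derived above confirms the “>25 bits” figure and the ~8 bits left for L’s intuition to bridge (a sketch; the labels are mine):

```python
import math

bits_lost = {
    "being male (prior)":            1.0,
    "death schedule -> Japan":       math.log2(7e9) - math.log2(128e6),      # ~5.8
    "reacting to Tailor -> Kanto":   math.log2(3),                           # ~1.6
    "stolen police data -> ~10,000": math.log2(21.5e6) - math.log2(10_000),  # ~11.1
    "Penbar -> his ~200 leads":      math.log2(10_000) - math.log2(200),     # ~5.6
}
total = sum(bits_lost.values())
print(round(total, 1))       # 25.1 bits frittered away -- ">25 bits"
print(round(33 - total, 1))  # ~7.9 bits remaining for L to close heuristically
```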

From the theoretical point of view, the game was over at that point. The challenge for L then became proving it to L’s satisfaction under his self-imposed moral constraints.19

# Security is Hard (Let’s Go Shopping)

What should Light have done? That’s easy to answer, but tricky to implement.

One could try to manufacture disinformation. Terence Tao rehearses many of the above points about information theory & anonymity, and the possible benefits of deliberate disinformation:

…one additional way to gain more anonymity is through deliberate disinformation. For instance, suppose that one reveals 100 independent bits of information about oneself. Ordinarily, this would cost 100 bits of anonymity (assuming that each bit was a priori equally likely to be true or false), by cutting the number of possibilities down by a factor of 2^100; but if 5 of these 100 bits (chosen randomly and not revealed in advance) are deliberately falsified, then the number of possibilities increases again by a factor of (100 choose 5) ~ 2^26, recovering about 26 bits of anonymity. In practice one gains even more anonymity than this, because to dispel the disinformation one needs to solve a satisfiability problem, which can be notoriously intractable computationally, although this additional protection may dissipate with time as algorithms improve (e.g. by incorporating ideas from compressed sensing).
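The combinatorial arithmetic in the quote checks out (a sketch):

```python
import math

revealed, falsified = 100, 5
possibilities_regained = math.comb(revealed, falsified)  # (100 choose 5)
print(possibilities_regained)                            # 75287520, ~ 2**26
print(round(math.log2(possibilities_regained), 1))       # 26.2 bits of anonymity recovered
```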

## Randomizing

The difficulty with suggesting that Light should—or could—have used disinformation on the timing of deaths is that we are, in effect, engaging in a sort of hindsight bias. How exactly is Light or anyone supposed to know that L could deduce his timezone from his killings? I mentioned an example of using Wikipedia edits to localize editors, but that technique was unique to me among WP editors20, and no doubt there are many other forms of information leakage I have never heard of despite compiling a list; if I were Light, even if I remembered my Wikipedia technique, I might not bother evenly distributing my killing over the clock or adopting a deceptive pattern (eg suggesting I was in Europe rather than Japan). If Light had known he was leaking timing information but didn’t know that someone out there was clever enough to use it (a “known unknown”), then we might blame him; but how is Light supposed to know these “unknown unknowns”?

Randomization is the answer. Randomization and encryption scramble the correlations between input and output, and they would serve as well in Death Note as they do in cryptography & statistics in the real world, at the cost of some efficiency. The point of randomization, both in cryptography and in statistical experiments, is to guard against not just the leaks or confounders (respectively) you do know about, but also the ones you do not yet know about.

To steal & paraphrase an example from Uncontrolled: you’re running a weight-loss experiment. You know that the effectiveness might vary with each subject’s pre-existing weight, but you don’t believe in randomization (you’re a practical man! only prissy statisticians worry about randomization!); so you split the subjects by weight, and for convenience you allocate them by when they show up to your experiment—in the end, there are exactly 10 experimental subjects over 150 pounds and 10 controls over 150 pounds, and so on and so forth. Unfortunately, it turns out that unbeknownst to you, a genetic variant controls weight gain and a whole extended family showed up at your experiment early on and they all got allocated to ‘experimental’ and none of them to ‘control’ (since you didn’t need to randomize, right? you were making sure the groups were matched on weight!). Your experiment is now bogus and misleading. Of course, you could run a second experiment where you make sure the experimental and control groups are matched on weight and also now matched on that genetic variant… but now there’s the potential for some third confounder to hit you. If only you had used randomization—then you would probably have put some of the variants into the other group as well and your results wouldn’t’ve been bogus!
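The parable in miniature (a toy simulation; the “variant” values and group sizes are made up for illustration):

```python
import random
random.seed(0)  # reproducible toy run

# 20 subjects; the first 10 to arrive are one extended family, all carrying
# the hidden weight-gain variant (1); the rest do not (0).
arrivals = [1] * 10 + [0] * 10

def convenience_split(subjects):
    """First arrivals fill the experimental group -- no randomization."""
    return subjects[:10], subjects[10:]

def randomized_split(subjects):
    shuffled = random.sample(subjects, len(subjects))
    return shuffled[:10], shuffled[10:]

exp, ctl = convenience_split(arrivals)
print(sum(exp), sum(ctl))  # 10 0 -- the unknown confounder is perfectly imbalanced
exp, ctl = randomized_split(arrivals)
print(sum(exp), sum(ctl))  # typically near 5 5 -- gross imbalance is now unlikely
```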

So to deal with Light’s first mistake: simply scheduling every death on the hour will not work, because the wake-sleep cycle is still present. If he set up a list and wrote down n criminals for each hour to eliminate the peak-troughs rather than randomizing, could that still go wrong? Maybe: we don’t know what information might be left in the data which an L or Turing could decipher. I can speculate about one possibility—the allocation of each kind of criminal to each hour. If one were to draw up lists and go in order (hey, one doesn’t need randomization, right?), then the order might go ‘criminals in the morning newspaper, criminals on TV, criminals whose details were not immediately given but were available online, criminals from years ago, historical criminals, etc’; if the morning-newspaper-criminals start at, say, 6 AM Japan time… And allocating evenly might be hard, since there are naturally going to be shortfalls when there just aren’t many criminals that day or the newspapers aren’t publishing (holidays?) etc., so the shortfall periods will pinpoint what the Kira considers ‘end of the day’.

A much safer procedure is thoroughgoing randomization applied to timing, subjects, and manner of death. Even if we assume that Light was bound and determined to reveal the existence of Kira and gain publicity and international notoriety (a major character flaw in its own right; accomplishing things, taking credit—choose one), he still did not have to reduce his anonymity much past 32 bits.

1. Each execution’s time could be determined by a random dice roll (say, a 24-sided die for hours and a 60-sided die for minutes).
2. Selecting the method of death could be done similarly, based on easily researched demographic data, although this is perhaps irrelevant (serving mostly to conceal that a killing has taken place).
3. Selecting criminals could be based on internationally accessible periodicals that plausibly every human has access to, such as the New York Times, and deaths could be delayed by months or years to broaden the possibilities as to where the Kira learned of the victim (TV? books? the Internet?) and avoid issues like killing a criminal only publicized on one obscure Japanese public television channel. And so on.
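The first two steps amount to a few lines of code (a hedged sketch; the victim names and the one-year window are placeholders, not anything from the series):

```python
import random
from datetime import datetime, timedelta

def randomized_schedule(victims, start, window_days=365):
    """Assign each victim an independent uniform-random minute inside the window,
    erasing any circadian or weekly signature from the killings."""
    schedule = []
    for victim in victims:
        when = start + timedelta(days=random.randrange(window_days),
                                 hours=random.randrange(24),    # the 24-sided die
                                 minutes=random.randrange(60))  # the 60-sided die
        schedule.append((when, victim))
    return sorted(schedule)

for when, victim in randomized_schedule(["victim A", "victim B"], datetime(2004, 1, 1)):
    print(when, victim)
```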

Let’s remember that all this is predicated on anonymity, and on Light using low-tech strategies; as one person asked me, “why doesn’t Light set up a cryptographic assassination market or just take over the world? He would win without all this cleverness.” Well, then it would not be Death Note.

# Appendices

## Communicating with a Death Note

One might wonder how much information one could send intentionally with a Death Note, as opposed to inadvertently leaking bits about one’s identity. As deaths are by and large publicly known information, we’ll assume the sender and recipient have some sort of pre-arranged key or one-time pad (although one would wonder why they’d use such an immoral and clumsy system as opposed to steganography or messages online).

A death inflicted by a Death Note has 3 main distinguishing traits which one can control—who, when, and how:

1. the person

The ‘who?’ is already calculated for us: if it takes 33 bits to specify a unique human, then a particular human can convey 33 bits. Concerns about learnability (how would you learn of an Amazon tribesman’s death?) imply that it’s really <33 bits.

If you try some scheme to encode more bits into the choice of assassination, you either wind up with 33 bits or you wind up unable to convey certain combinations of bits—and effectively 33 bits anyway: your scheme will tell you that to convey your desperately important message X of 50 bits, telling all about L’s true identity and how you discovered it, you need to kill an Olafur Jacobs of Tanzania who weighs more than 200 pounds and is from Taiwan, but alas! Jacobs doesn’t exist for you to kill.

2. the time

The ‘when’ is handled by similar reasoning. There is a certain granularity to Death Note kills: even if it is capable of timing deaths down to the nanosecond, one can’t actually witness this or receive records of this. Doctors may note time of death down to the minute, but no finer (and how do you get such precise medical records anyway?). News reports may be even less accurate, noting merely that it happened in the morning or in the late evening. In rare cases like live broadcasts, one may be able to do a little better, but even they tend to be delayed by a few seconds or minutes to allow for buffering, technical glitches to be fixed, the stenographers to produce the closed captioning, or simply to guard against embarrassing events (like Janet Jackson’s nipple-slip). So we’ll not assume the timing can be more accurate than the minute. But which minutes does a Death Note user have to choose from? Inasmuch as the Death Note is apparently incapable of influencing the past or causing Pratchettian21 superluminal effects, the past is off-limits; but messages also have to be sent in time for whatever they are supposed to influence, so one cannot afford to have a window of a century. If the message needs to affect something within the day, then the user has a window of only 1,440 minutes, which is log2(1440) ≈ 10.5 bits; if the user has a window of a year, that’s slightly better, as a death’s timing down to the minute could embody as much as log2(525,600) ≈ 19 bits. (Over a decade, then, is 22.3 bits, etc.) If we allow timing down to the second, then a year would be 24.9 bits. In any case, it’s clear that we’re not going to get more than 33 bits from the date. On the plus side, an ‘IP over Death’ protocol would be superior to IP over Avian Carriers—here, the worse your latency, the more bits you could extract from the packet’s timestamp!

3. the circumstances (such as the place)

The ‘how’… has many more degrees of freedom. The circumstances are much more difficult to calculate. We can subdivide them in a lot of ways; here's one:

1. Location (eg. latitude/longitude)

Earth has ~510,072,000,000 square meters of surface area; most of it is entirely useless from our perspective—if someone is in an airplane and dies, how on earth does one figure out the exact square meter he was above? Or on the oceans? Earth has ~148,940,000,000 square meters of land, which is more usable: the usual calculation gives us log₂(148,940,000,000) ≈ 37.1 bits. (Surprised at how similar to the ‘who?’ bit calculation this is? But log₂(7,000,000,000) ≈ 32.7 and log₂(148,940,000,000) ≈ 37.1. The SF classic Stand on Zanzibar drew its name from the observation that the 7 billion people alive in 2010 would fit in Zanzibar only if they stood shoulder to shoulder—spread them out, and multiply that area by ~18…) This raises an issue that affects all 3: how much can the Death Note control? Can it move victims to arbitrary points in, say, Siberia? Or is it limited to within driving distance? etc. Any of those issues could shrink the 37 bits by a great deal.
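The location figure is again just a binary log, here over the land-area number quoted above (a sketch; the area figure is the essay's, taken at face value):

```python
from math import log2

land_area_m2 = 148_940_000_000  # land-surface figure used in the text, in square meters

# Specifying one square meter of land out of all of them:
print(f"{log2(land_area_m2):.1f} bits")  # ~37.1 bits
```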

2. Cause Of Death

The International Classification of Diseases lists upwards of 20,000 diseases, and we can imagine thousands of possible accidental or deliberate deaths. But what matters is what gets communicated: if there are 500 distinct brain cancers but the death is only reported as ‘brain cancer’, the 500 count as 1 for our purposes. But we'll be generous and go with 20,000 for reported diseases plus accidents, which is log₂(20,000) ≈ 14.3 bits.

3. Action Prior To Death

Actions prior to death overlap with accidental causes; here the series doesn't help us. Light's early experiments, culminating in the “L, do you know death gods love apples?” message, seem to imply that actions are limited in entropy, as each word took a death (assuming the ordinary English vocabulary of 50,000 words, 16 bits), but other plot events imply that humans can undertake long complex plans on the orders of a Death Note (like Mikami bringing the fake Death Note to the final confrontation with Near). Actions before death could be reported in great detail, or they could be hidden under official secrecy like the aforementioned death-god message was (Light being uniquely privileged in learning it succeeded, as part of L testing him). I can't begin to guess how many distinct narratives would survive transmission or what limits the Note would set. We must leave this one undefined: it's almost surely more than 10 bits, but how many?

Summing, we get 37.1 + 14.3 + 10 ≈ 61 bits per death.
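Pulling the three circumstance components together (using the lower bound of 10 bits for actions, since that component was left undefined):

```python
from math import log2

location_bits = log2(148_940_000_000)  # ~37.1: a square meter of land
cause_bits = log2(20_000)              # ~14.3: reported disease or accident
action_bits = 10                       # lower bound on actions prior to death

total_bits = location_bits + cause_bits + action_bits
print(f"~{total_bits:.0f} bits per death")  # ~61 bits
```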

## “Bayesian Jurisprudence”

E. T. Jaynes, in his posthumous Probability Theory: The Logic of Science, includes a chapter 5 on “Queer Uses For Probability Theory”, discussing such topics as ESP; miracles; heuristics & biases; how visual perception is theory-laden; philosophy of science with regard to Newtonian mechanics and the famed discovery of Neptune; horse-racing & weather forecasting; and finally—section 5.8, “Bayesian jurisprudence”. Jaynes's analysis is somewhat similar in spirit to my above analysis, although mine is not explicitly Bayesian except perhaps in the discussion of gender as eliminating one necessary bit.

It is interesting to apply probability theory in various situations in which we can't always reduce it to numbers very well, but still it shows automatically what kind of information would be relevant to help us do plausible reasoning. Suppose someone in New York City has committed a murder, and you don't know at first who it is, but you know that there are 10 million people in New York City. On the basis of no knowledge but this, e(Guilty|X) = −70 db is the plausibility that any particular person is the guilty one.

How much positive evidence for guilt is necessary before we decide that some man should be put away? Perhaps +40 db, although your reaction may be that this is not safe enough, and the number ought to be higher. If we raise this number we give increased protection to the innocent, but at the cost of making it more difficult to convict the guilty; and at some point the interests of society as a whole cannot be ignored.

For example, if 1000 guilty men are set free, we know from only too much experience that 200 or 300 of them will proceed immediately to inflict still more crimes upon society, and their escaping justice will encourage 100 more to take up crime. So it is clear that the damage to society as a whole caused by allowing 1000 guilty men to go free, is far greater than that caused by falsely convicting one innocent man.

If you have an emotional reaction against this statement, I ask you to think: if you were a judge, would you rather face one man whom you had convicted falsely; or 100 victims of crimes that you could have prevented? Setting the threshold at +40 db will mean, crudely, that on the average not more than one conviction in 10,000 will be in error; a judge who required juries to follow this rule would probably not make one false conviction in a working lifetime on the bench.

In any event, if we took +40 db starting out from −70 db, this means that in order to ensure a conviction you would have to produce about 110 db of evidence for the guilt of this particular person. Suppose now we learn that this person had a motive. What does that do to the plausibility for his guilt? Probability theory says

e(Guilty|Motive) = e(Guilty|X) + 10 log₁₀ [P(Motive|Guilty) / P(Motive|Not Guilty)] ≈ −70 + 10 log₁₀ [1 / P(Motive|Not Guilty)]    (5-38)

since P(Motive|Guilty) ≈ 1, i.e. we consider it quite unlikely that the crime had no motive at all. Thus, the [importance] of learning that the person had a motive depends almost entirely on the probability that an innocent person would also have a motive.

This evidently agrees with our common sense, if we ponder it for a moment. If the deceased were kind and loved by all, hardly anyone would have a motive to do him in. Learning that, nevertheless, our suspect did have a motive, would then be very [important] information. If the victim had been an unsavory character, who took great delight in all sorts of foul deeds, then a great many people would have a motive, and learning that our suspect was one of them is not so [important]. The point of this is that we don't know what to make of the information that our suspect had a motive, unless we also know something about the character of the deceased. But how many members of juries would realize that, unless it was pointed out to them?

Suppose that a very enlightened judge, with powers not given to judges under present law, had perceived this fact and, when testimony about the motive was introduced, he directed his assistants to determine for the jury the number of people in New York City who had a motive. If this number is N_m, then

P(Motive|Not Guilty) = (N_m − 1) / (10⁷ − 1) ≈ 10⁻⁷ (N_m − 1)

and equation (5-38) reduces, for all practical purposes, to

e(Guilty|Motive) ≈ −10 log₁₀ (N_m − 1)    (5-39)

You see that the population of New York has canceled out of the equation; as soon as we know the number of people who had a motive, then it doesn't matter any more how large the city was. Note that (5-39) continues to say the right thing even when N_m is only 1 or 2.
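Jaynes's decibel bookkeeping is easy to sketch in code (the function names here are mine, not Jaynes's): the −70 db prior over a city of 10 million, and the (5-39) update once the number of people with a motive is known:

```python
from math import log10

def evidence_db(odds):
    """Jaynes's evidence scale: 10 * log10 of the odds, in decibels."""
    return 10 * log10(odds)

population = 10_000_000
# Prior: any particular resident is guilty with odds of about 1 in 10^7.
prior = evidence_db(1 / population)
print(round(prior))  # -70 db

def guilt_after_motive(n_motive):
    """Eq. (5-39): e(Guilty|Motive) ~ -10*log10(N_m - 1).
    The city's population has canceled out; requires N_m >= 2."""
    return -10 * log10(n_motive - 1)

# If 11 people in the city had a motive, the suspect's plausibility is -10 db,
# a 60 db gain over the -70 db prior.
print(round(guilt_after_motive(11)))  # -10 db
```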

You can go on this way for a long time, and we think you will find it both enlightening and entertaining to do so. For example, we now learn that the suspect was seen near the scene of the crime shortly before. From Bayes' theorem, the [importance] of this depends almost entirely on how many innocent persons were also in the vicinity. If you have ever been told not to trust Bayes' theorem, you should follow a few examples like this a good deal further, and see how infallibly it tells you what information would be relevant, what irrelevant, in plausible reasoning.22

In recent years there has grown up a considerable literature on Bayesian jurisprudence; for a review with many references, see Vignaux and Robertson (1996) [This is apparently Interpreting Evidence: Evaluating Forensic Science in the Courtroom –Editor].

Even in situations where we would be quite unable to say that numerical values should be used, Bayes' theorem still reproduces qualitatively just what your common sense (after perhaps some meditation) tells you. This is the fact that George Polya demonstrated in such exhaustive detail that the present writer was convinced that the connection must be more than qualitative.

1. In fact, every single person mentioned in my Terrorism is not Effective is male, and this seems to be true of the full list as well.↩︎

2. This reasoning would be wrong in the case of Misa Amane, but Misa is an absurd character—a Gothic lolita pop star who falls in love with Light through an extraordinary coincidence and doesn't flinch at anything, even sacrificing 75% of her lifespan or her memories; hence it's not surprising to learn on Wikipedia from the author that the motivation for her character was to avoid a “boring” all-male cast and be “a cute female”. (Death Note is not immune to the Rule of Cool or Rule of Sexy!)↩︎

3. Acausality is an odd sort of new concept in decision theory, primarily discussed in Gary Drescher's Good and Real, chapters 5–7, and on LessWrong.com.↩︎

4. My first solution involved sex reassignment surgery, but that makes the situation worse, as transsexuals are so rare that an L intelligent enough to anticipate these ultra-rational Death Note users would instantly gain a huge clue: just check everyone on the surgery lists. Anyway, most Death Note users would probably prefer the passing-it-on solution.↩︎

5. This applies to many other activities like Twitter posts or Google searches; eg. blogger muflax observed the same clear circadian rhythms in his Google searches by hour.↩︎

6. You can steal information through JS or CSS, and analyzing browser history to infer demographics is already patented.↩︎

7. You can try your own browser live at the EFF's Panopticlick.↩︎

8. Felten & Schneider 2000, “Timing Attacks on Web Privacy”↩︎

10. Coverage of this de-anonymization algorithm generally linked it to IMDb ratings, but the authors are clear—you could have those ratings from any source; there's nothing special about IMDb aside from it being public and online.↩︎

11. This sounds like something that ought to be NP-hard, and while the graph isomorphism problem is known to be in NP, it is almost unique in being like integer factorization—it may be easy or hard, there is no proof either way. In practice, large real-world graphs tend to be efficiently solvable.↩︎

12. From the paper's abstract:

[we] develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy “sybil” nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small.

↩︎
13. eg. 97% of the Cambridge, Massachusetts voters could be identified with birth-date and zip code, and 29% by birth-date and just gender.↩︎

14. See “Bubble Trouble: Off-Line De-Anonymization of Bubble Forms”, USENIX Security Symposium 2011; from “New Research Result: Bubble Forms Not So Anonymous”:

If bubble marking patterns were completely random, a classifier could do no better than randomly guessing a test set's creator, with an expected accuracy of ~1%. Our classifier achieves over 51% accuracy. The classifier is rarely far off: the correct answer falls in the classifier's top three guesses 75% of the time (vs. 3% for random guessing) and its top ten guesses more than 92% of the time (vs. 11% for random guessing).

↩︎
15. See also Cutler & Kulis 2018, or “Social media-predicted personality traits and values can help match people to their ideal jobs”, Kern et al 2019, for examples of what ordinary use of social media or media consumption can leak.↩︎

16. Arvind Narayanan and Vitaly Shmatikov brusquely summarize the implications of their de-anonymization:

So, what’s the solu­tion?

We do not believe that there exists a technical solution to the problem of anonymity in social networks. Specifically, we do not believe that any graph transformation can (a) satisfy a robust definition of privacy, (b) withstand de-anonymization attacks described in our paper, and (c) preserve the utility of the graph for common data-mining and advertising purposes. Therefore, we advocate non-technical solutions.

So, the de-anonymiz­ing just hap­pens behind closed doors:

…researchers don't have the incentive for deanonymization anymore. On the other hand, if malicious entities do it, naturally they won't talk about it in public, so there will be no PR fallout. Regulators have not been very aggressive in investigating anonymized data releases in the absence of a public outcry, so that may be a negligible risk. Some have questioned whether deanonymization in the wild is actually happening. I think it's a bit silly to assume that it isn't, given the economic incentives. Of course, I can't prove this and probably never can. No company doing it will publicly talk about it, and the privacy harms are so indirect that tying them to a specific data release is next to impossible. I can only offer anecdotes to explain my position: I have been approached multiple times by organizations who wanted me to deanonymize a database they'd acquired, and I've had friends in different industries mention casually that what they do on a daily basis to combine different databases together is essentially deanonymization.

In general, there's no clear distinction between ‘useful’ and ‘useless’ information from the perspective of identifying/breaking privacy/reversing anonymization (emphasis added):

‘Quasi-identifier’ is a notion that arises from attempting to see some attributes (such as ZIP code) but not others (such as tastes and behavior) as contributing to re-identifiability. However, the major lesson from the re-identification papers of the last few years has been that any information at all about a person can be potentially used to aid re-identification.

↩︎
17. But hey, at least the lack of privacy is two-way and the public can watch malefactors like the government, as David Brin argues is the best outcome.

But wait, Wikileaks has revealed the massive expansion of American government secrecy due to the War on Terror, and even the supposed friend of transparency, President Obama, has presided over an expansion of President George W. Bush's secrecy programs and crackdowns on whistleblowers of all stripes? Oh. Too bad about that, I guess.↩︎

18. Given the extremely high global stakes and apparent impossibility of the murders indicating that L is deeply ignorant of extremely important information about what is going on, a more pragmatic L would have simply kidnapped & tortured or assassinated Light as soon as L began to seriously suspect Light.↩︎

19. I have since seen examples of attempting to correlate activity times with location on the darknet markets and elsewhere, such as trying to infer the timezones of Dread Pirate Roberts (USA) and Satoshi Nakamoto (?).↩︎

20. The only thing known to go faster than ordinary light is monarchy, according to the philosopher Ly Tin Weedle. He reasoned like this: you can't have more than one king, and tradition demands that there is no gap between kings, so when a king dies the succession must therefore pass to the heir instantaneously. Presumably, he said, there must be some elementary particles—kingons, or possibly queons—that do this job, but of course succession sometimes fails if, in mid-flight, they strike an anti-particle, or republicon. His ambitious plans to use his discovery to send messages, involving the careful torturing of a small king in order to modulate the signal, were never fully expanded because, at that point, the bar closed.

↩︎
21. “Note that in these cases we are trying to decide, from scraps of incomplete information, on the truth of an Aristotelian proposition; whether the defendant did or did not commit some well-defined action. This is the situation, an issue of fact, for which probability theory as logic is designed. But there are other legal situations quite different; for example, in a medical malpractice suit it may be that all parties are agreed on the facts as to what the defendant actually did; the issue is whether he did or did not exercise reasonable judgment. Since there is no official, precise definition of ‘reasonable judgment’, the issue is not the truth of an Aristotelian proposition (however, if it were established that he willfully violated one of our Chapter 1 desiderata of rationality, we think that most juries would convict him). It has been claimed that probability theory is basically inapplicable to such situations, and we are concerned with the partial truth of a non-Aristotelian proposition. We suggest, however, that in such cases we are not concerned with an issue of truth at all; rather, what is wanted is a value judgment. We shall return to this topic later (Chapters 13, 18).”↩︎