created: 04 May 2011; modified: 28 Mar 2017; status: finished; confidence: highly likely; importance: 7
I have elsewhere called Light hubristic and said he made mistakes. So I am obliged to explain what he did wrong and how he could do better.
While Light starts scheming and taking serious risks as early as the arrival of the FBI team in Japan, he has fundamentally already screwed up. L should never have gotten that close to Light. The Death Note kills flawlessly without forensic trace and over arbitrary distances; Death Note is almost a thought-experiment - given the perfect murder weapon, how can you screw up anyway?
Some of the other Death Note users highlight the problem. The user in the Yotsuba Group carries out the normal executions, but also kills a number of prominent competitors. The killings directly point to the Yotsuba Group and eventually to the user’s death. The moral of the story is that indirect relationships can be fatal in narrowing down the possibilities to these 8 men.
Detective stories as optimization problems
In Light’s case, L starts with the world’s entire population of 7 billion people and needs to narrow it down to 1 person. It’s a search problem. It maps fairly directly onto basic information theory, in fact. (See also Copyright, Simulation inferences, and The 3 Grenades.) To uniquely specify one item out of 7 billion, you need 33 bits of information, because log2(7,000,000,000) ≈ 32.7; to use an analogy, your 32-bit computer can only address one unique location in memory out of ~4.3 billion locations, and adding another bit doubles the capacity to >8 billion. Is 33 bits of information a lot?
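The arithmetic here is just a base-2 logarithm; a minimal sketch (the function name is mine, not the essay’s):

```python
import math

def bits_to_identify(n):
    """Bits of information needed to uniquely specify 1 item out of n equally likely ones."""
    return math.log2(n)

print(round(bits_to_identify(7_000_000_000), 1))  # → 32.7, i.e. 33 whole bits
print(bits_to_identify(2**32))                    # → 32.0 (a 32-bit address space)
```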
Not really. L could get one bit just by looking at history or crime statistics, and noting that mass murderers are, to an astonishing degree, male1, thereby ruling out half the world population and actually starting L off with a requirement to obtain only 32 bits to break Light’s anonymity.2 If Death Note users were sufficiently rational & knowledgeable, they could draw on concepts like superrationality to acausally cooperate3 to avoid this information leakage… by arranging to pass on Death Notes to females4 to restore a 50:50 gender ratio - for example, if for every female who obtained a Death Note there were 3 males with Death Notes, then all users could roll a 1d3 die and keep it on a 1, passing it on to someone of the opposite gender on a 2 or 3.
We should first point out that Light is always going to leak some bits. The only way he could remain perfectly hidden is to not use the Death Note at all. If you change the world in even the slightest way, then you have leaked information about yourself in principle. Everything is connected in some sense; you cannot magically wave away the existence of fire without creating a cascade of consequences that result in every living thing dying. For example, the fundamental point of Light executing criminals is to shorten their lifespan - there’s no way to hide that. You can’t both shorten their lives and not shorten their lives. He is going to reveal himself this way, at the very least to the actuaries and statisticians.
More historically, this has been a challenge for cryptographers, like in WWII: how did they exploit the Enigma & other communications without revealing they had done so? Their solution was misdirection: constantly arranging for plausible alternatives, like search planes that just happened to find a German ship or submarine. (However, the famous story that Winston Churchill allowed the town of Coventry to be bombed rather than risk the secret of Ultra has since been put into question.) It’s not clear to me what would be the best misdirection for Light to mask his normal killings - use the Death Note’s control features to invent an anti-criminal terrorist organization?
So there is a real challenge here: one party is trying to infer as much as possible from observed effects, and the other is trying to minimize how much the former can observe while not stopping entirely. How well does Light balance the competing demands?
However, he can try to reduce the leakage and make his anonymity set as large as possible. For example, killing every criminal with a heart attack is a dead give-away. Criminals do not die of heart attacks that often. (The point is more dramatic if you replace “heart attack” with “lupus”; as we all know, in real life it’s never lupus.) Heart attacks are a subset of all deaths, and by restricting himself, Light makes it easier to detect his activities. 1000 deaths of lupus are a blaring red alarm; 1000 deaths of heart attacks are an oddity; and 1000 deaths distributed over the statistically likely suspects of cancer and heart disease etc. are almost invisible (but still noticeable in principle).
So, Light’s fundamental mistake is to kill in ways unrelated to his goal. Killing through heart attacks does not just make him visible early on; the deaths also reveal that his assassination method is supernaturally precise. L has been tipped off that Kira exists. Whatever the bogus justification may be, this is a major victory for his opponents.
First mistake, and a classic one with serial killers (eg the BTK killer’s vaunting was less anonymous than he believed): delusions of grandeur and the desire to taunt, play with, and control their victims and demonstrate their power over the general population. From a literary perspective, this similarity is clearly not an accident, as we are meant to read Light as the Sociopath Hero archetype: his ultimate downfall is the consequence of his fatal personality flaw, hubris, particularly in the original sadistic sense. Light cannot help but self-sabotage like this.
(This is also deeply problematic from the point of carrying out Light’s theory of deterrence: to deter criminals and villains, it is not necessary for there to be a globally-known single supernatural killer, when it would be equally effective to arrange for all the killings to be done naturalistically by third parties/police/judiciary or used indirectly to crack cases. Arguably the deterrence would be more effective the more diffused it’s believed to be - since a single killer has a finite lifespan, finite knowledge, fallibility, and idiosyncratic preferences which reduce the threat and connection to criminality, while if all the deaths were ascribed to unusually effective police or detectives, this would be inferred as a general increase in all kinds of police competence, one which will not instantly disappear when one person gets bored or hit by a bus.)
Worse, the deaths are non-random in other ways - they tend to occur at particular times! Graphed, daily patterns jump out.
L was able to narrow down the active times of the presumable student or worker to a particular range of longitude, say 125-150° out of 180°; and what country is most prominent in that range? Japan. So that cut down the 7 billion people to around 0.128 billion; 0.128 billion requires 27 bits (log2(128,000,000) ≈ 26.9), so just the scheduling of deaths cost Light 6 bits of anonymity!
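As a quick check of that figure, using the essay’s round population numbers:

```python
import math

world = 7_000_000_000
japan = 128_000_000

# Anonymity before and after the killings' timing gave away Japan:
before = math.log2(world)  # ~32.7 bits
after = math.log2(japan)   # ~26.9 bits
print(round(before - after, 1))  # → 5.8, the "6 bits" lost to scheduling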
On a side-note, some might be skeptical that one can infer much of anything from the graph and that Death Note was just glossing over this part.
How can anyone infer that it was someone living in Japan just from 2 clumpy lines at morning and evening in Japan? But actually, such a graph is surprisingly precise. I learned this years before I watched Death Note, when I was heavily active on Wikipedia; often I would wonder if two editors were the same person or roughly where an editor lived. If their edits or user page did not reveal anything useful, I would go to Kate’s edit counter and examine the times of day all their hundreds or thousands of edits were made at. Typically, what one would see was ~4 hours where there were no edits whatsoever, then ~4 hours with moderate to high activity, a trough, then another gradual rise to 8 hours later and a further decline down to the first 4 hours of no activity. These periods quite clearly corresponded to sleep (pretty much everyone is asleep at 4 AM), morning, lunch & work hours, evening, and then night, with people occasionally staying up late and editing5. There was noise, of course, from people staying up especially late or getting in a bunch of editing during their workday or occasionally traveling, but the overall patterns were clear - never did I discover that someone was actually a nightwatchman and my guess was an entire hemisphere off. (Academic estimates based on user editing patterns correlate well with what is predicted on the basis of the geography of IP edits.6)
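This circadian fingerprinting can be sketched in a few lines. The function and toy data here are hypothetical (not from the essay), but the principle is the one described above: shift the activity histogram until the quiet stretch lines up with plausible sleeping hours.

```python
from collections import Counter

def likely_utc_offset(edit_hours_utc):
    """Guess a user's UTC offset by finding the shift that puts the
    quietest part of their activity at roughly 2-6 AM local time."""
    counts = Counter(h % 24 for h in edit_hours_utc)
    best_offset, best_quiet = None, None
    for offset in range(-12, 13):
        # Edits falling in the 2-6 AM local window under this offset
        # (local = UTC + offset, so UTC = local - offset):
        quiet = sum(counts[(local - offset) % 24] for local in (2, 3, 4, 5))
        if best_quiet is None or quiet < best_quiet:
            best_offset, best_quiet = offset, quiet
    return best_offset

# Toy data: an editor active every hour except 17:00-20:00 UTC,
# which is the small hours of the morning in Japan (UTC+9).
edits = [h for h in range(24) if h not in (17, 18, 19, 20)] * 5
print(likely_utc_offset(edits))  # → 9
```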
Computer security research offers more scary results. There are an amazing number of ways to break someone’s privacy and de-anonymize them (background; there is also financial incentive to do so in order to advertise & price discriminate):
- small errors in their computer’s clock’s time (even over Tor)
- Web browsing history7 or just the version and plugins8; and this is when random Firefox or Google Docs or Facebook bugs don’t leak your identity
- Timing attacks based on how slow pages load9 (how many cache misses there are; timing attacks can also be used to learn website usernames or # of private photos)
- Knowledge of what groups a person was in could uniquely identify 42%10 of people on social networking site XING, and possibly Facebook & 6 others
- Similarly, knowing just a few movies someone has watched11, popular or obscure, through Netflix often grants access to the rest of their profile if it was included in the Netflix Prize. (This was more dramatic than the AOL search data scandal because AOL searches had a great deal of personal information embedded in the search queries, but in contrast, the Netflix data seems impossibly impoverished - there’s nothing obviously identifying about what anime one has watched unless one watches very obscure ones.)
- The researchers generalized their Netflix work to find isomorphisms between arbitrary graphs12 (such as social networks stripped of any and all data except for the graph structure), for example Flickr and Twitter, and give many examples of public datasets that could be de-anonymized13 - such as your Amazon purchases (Calandrino et al 2011; blog). These attacks are on just the data that is left after attempts to anonymize data; they don’t exploit the observation that the choice of what data to remove is as interesting as what is left, what Julian Sanchez calls the Redactor’s Dilemma.
- Usernames hardly bear discussing
- Your hospital records can be de-anonymized just by looking at public voting rolls14. That researcher later went on to run experiments on the identifiability of de-identified survey data [cite], pharmacy data [cite], clinical trial data [cite], criminal data [State of Delaware v. Gannett Publishing], DNA [cite, cite, cite], tax data, public health registries [cite (sealed by court), etc.], web logs, and partial Social Security numbers [cite]. (Whew.)
- Your typing is surprisingly unique, and the sounds of typing and arm movements can identify you or be used to snoop on input & steal passwords
- Knowing your morning commute as loosely as to the individual blocks (or less granular) uniquely identifies you (Golle & Partridge 2009); knowing your commute to the zip code/census tract uniquely identifies 5% of people
- Your handwriting is fairly unique, sure - but so is how you fill in bubbles on tests15
- Speaking of handwriting, your writing style can be pretty unique too
- the unnoticeable background electrical hum may uniquely date audio recordings
- you may have heard of laser microphones for eavesdropping… but what about eavesdropping via video recording of potato chip bags (press release), or cellphone gyroscopes?
(The only surprising thing about DNA-related privacy breaks is how long they have taken to show up.)
To summarize: differential privacy is almost impossible16 and privacy is dead17. (See also Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization.)
Light’s third mistake was reacting to the provocation of Lind L. Tailor. Running the broadcast in 1 region was a gamble on L’s part; he had no real reason to think Light was in Kanto, and should have arranged for it to be broadcast to exactly half of Japan’s population. But it was one that paid off; he narrowed his target down from the Japanese population to the Kanto region, for a gain of ~1.6 bits. (You can see it was a gamble by considering if Light had been outside Kanto; since he would not see it live, he would not have reacted, and all L would learn is that his suspect was in the other ⅔ of the population, for a gain of only ~0.6 bits.)
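By my arithmetic (using Kanto ≈ 43 million out of Japan’s ≈ 128 million), the two branches of the gamble work out as follows; note how an even 50:50 split would have guaranteed exactly 1 bit either way:

```python
import math

japan = 128_000_000
kanto = 43_000_000

# Bits L gains depending on the outcome of the Kanto-only broadcast:
if_reacts = math.log2(japan / kanto)         # suspect shown to be in Kanto
if_not = math.log2(japan / (japan - kanto))  # suspect shown to be elsewhere
print(round(if_reacts, 2))  # → 1.57
print(round(if_not, 2))     # → 0.59

# Expected gain under a uniform prior over Japan:
p = kanto / japan
print(round(p * if_reacts + (1 - p) * if_not, 2))  # → 0.92
```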
But even this wasn’t a huge mistake. He lost 6 bits to his schedule of killing, and lost another 1.6 bits to temperamentally killing Lind L. Tailor, but since the male population of Kanto is 21.5 million (43 million total), he still has ~24 bits of anonymity left (log2(21,500,000) ≈ 24.4). That’s not too terrible, and the loss is mitigated even further by other details of this mistake, as pointed out by Zmflavius; specifically, that unlike being male or being Japanese, the information about being in Kanto is subject to decay, since people move around all the time for all sorts of reasons:
…quite possibly Light’s biggest mistake was inadvertently revealing his connection to the police hierarchy by hacking his dad’s computer. Whereas even the Lind L. Taylor debacle only revealed his killing mechanics and narrowed him down to someone in the Kanto region (which is, while an impressive accomplishment based on the information he had, entirely meaningless for actually finding a suspect), there were perhaps a few hundred people who had access to the information Light’s dad had. There’s also the fact that L knew that Light was probably someone in their late teens, meaning that there was an extremely high chance that at the end of the school year, even that coup of his would expire, thanks to students heading off to university all over Japan (of course, Light went to Toudai, and a student of his caliber not attending such a university would be suspicious, but L had no way of knowing that then). I mean, perhaps L had hoped that Kira would reveal himself by suddenly moving away from the Kanto region, but come the next May, he would have no way of monitoring unusual movements among late teenagers, because a large percentage of them would be moving for legitimate reasons.
(One could still run the inference backwards on any particular person to verify they were in Kanto in the right time period, but as time passes, it becomes less possible to run the inference forwards and only examine people in Kanto.)
This mistake also shows us that the important thing that information theory buys us, really, is not the bit (we could be using log10 rather than log2, and compare dits rather than bits) so much as comparing events in the plot on a logarithmic scale. If we simply looked at the absolute number of people ruled out at each step, we’d conclude that the very first mistake by Light was a debacle without compare in human history, since it let L rule out >6 billion people - approximately 60x more people than all the other mistakes put together would let L rule out. Mistakes are relative to each other, not absolutes.
Light’s fourth mistake was to use confidential police information stolen using his policeman father’s credentials. This mistake was the largest in bits lost. But interestingly, many or even most Death Note fans do not seem to regard this as his largest mistake, instead pointing to his killing Lind L. Tailor or perhaps relying too much on Mikami. The information theoretical perspective strongly disagrees, and lets us quantify how large this mistake was.
When he acts on the secret police information, he instantly cuts down his possible identity to one out of a few thousand people connected to the police. Let’s be generous and say 10,000. It takes 14 bits to specify 1 person out of 10,000 (log2(10,000) ≈ 13.3) - as compared to the 24-25 bits to specify a Kanto dweller.
This mistake cost him 11 bits of anonymity; in other words, this mistake cost him twice what his scheduling cost him and roughly 7 times the murder of Tailor!
In comparison, the fifth mistake - murdering Ray Penbar’s fiancée and focusing L’s suspicion on Penbar’s assigned targets - was positively cheap. If we assume Penbar was tasked with 200 leads out of the 10,000, then murdering him and his fiancée dropped Light from 14 bits to 8 bits (log2(200) ≈ 7.6): a loss of just 6 bits, a little over half the fourth mistake and comparable to the original scheduling mistake.
At this point in the plot, L resorts to direct measures and enters Light’s life directly, enrolling at the university. From this point on, Light is screwed as he is now playing a deadly game of Mafia with L & the investigative team. He frittered away >25 bits of anonymity and then L intuited the rest and suspected him all along. (We could justify L skipping over the remaining 8 bits by pointing out that L can analyze the deaths and infer psychological characteristics like arrogance, puzzle-solving, and great intelligence, which combined with heuristically searching the remaining candidates, could lead him to zero in on Light.)
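The whole running tally can be reproduced with a short script; the population figures are the rough ones used in this essay, and treating the 10,000 police-connected people as a clean subset of the prior sets is of course a simplification:

```python
import math

world = 7_000_000_000
# (label, candidates remaining) after each narrowing:
stages = [
    ("male", world // 2),
    ("Japanese male (death timing)", 128_000_000 // 2),
    ("Kanto male (Lind L. Tailor)", 43_000_000 // 2),
    ("police connection (hacked files)", 10_000),
    ("one of Penbar's 200 leads", 200),
]

bits = math.log2(world)
print(f"start: {bits:.1f} bits of anonymity")
for label, remaining in stages:
    new = math.log2(remaining)
    print(f"{label}: -{bits - new:.1f} bits -> {new:.1f} left")
    bits = new
```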
From the theoretical point of view, the game was over at that point. The challenge for L then became proving it to L’s satisfaction under his self-imposed moral constraints.18
Security is Hard (Let’s Go Shopping)
What should Light have done? That’s easy to answer, but tricky to implement.
One could try to manufacture disinformation. Terence Tao rehearses many of the above points about information theory & anonymity, and goes on to loosely discuss the possible benefits of faking information:
…one additional way to gain more anonymity is through deliberate disinformation. For instance, suppose that one reveals 100 independent bits of information about oneself. Ordinarily, this would cost 100 bits of anonymity (assuming that each bit was a priori equally likely to be true or false), by cutting the number of possibilities down by a factor of 2^100; but if 5 of these 100 bits (chosen randomly and not revealed in advance) are deliberately falsified, then the number of possibilities increases again by a factor of (100 choose 5) ≈ 2^26, recovering about 26 bits of anonymity. In practice one gains even more anonymity than this, because to dispel the disinformation one needs to solve a satisfiability problem, which can be notoriously intractable computationally, although this additional protection may dissipate with time as algorithms improve (e.g. by incorporating ideas from compressed sensing).
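Tao’s binomial figure is easy to verify:

```python
import math

# 100 revealed bits, 5 of them (unspecified which) deliberately false:
recovered = math.log2(math.comb(100, 5))
print(round(recovered, 1))  # → 26.2 bits of anonymity bought back
```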
The difficulty with suggesting that Light should - or could - have used disinformation on the timing of deaths is that we are, in effect, engaging in a sort of hindsight bias. How exactly is Light or anyone supposed to know that L could deduce his timezone from his killings? I mentioned an example of using Wikipedia edits to localize editors, but that technique was (as far as I know) unique to me, and no doubt there are many other forms of information leakage I have never heard of despite compiling a list; if I were Light, even if I remembered my Wikipedia technique, I might not bother evenly distributing my killing over the clock or adopting a deceptive pattern (eg suggesting I was in Europe rather than Japan). If Light had known he was leaking timing information but didn’t know that someone out there was clever enough to use it (a known unknown), then we might blame him; but how is Light supposed to know about these unknown unknowns?
Randomization is the answer. Randomization and encryption scramble the correlations between input and output, and they would serve as well in Death Note as they do in cryptography & statistics in the real world, at the cost of some efficiency. The point of randomization, both in cryptography and in statistical experiments, is to not just prevent the leaked information or confounders (respectively) you do know about but also the ones you do not yet know about.
To steal & paraphrase an example from Jim Manzi’s Uncontrolled: you’re running a weight-loss experiment. You know that the effectiveness might vary with each subject’s pre-existing weight, but you don’t believe in randomization (you’re a practical man! only prissy statisticians worry about randomization!); so you split the subjects by weight, and for convenience you allocate them by when they show up to your experiment - in the end, there are exactly 10 experimental subjects over 150 pounds and 10 controls over 150 pounds, and so on and so forth. Unfortunately, it turns out that unbeknownst to you, a genetic variant controls weight gain, and a whole extended family showed up at your experiment early on and they all got allocated to ‘experimental’ and none of them to ‘control’ (since you didn’t need to randomize, right? you were making sure the groups were matched on weight!). Your experiment is now bogus and misleading. Of course, you could run a second experiment where you make sure the experimental and control groups are matched on weight and also now matched on that genetic variant… but now there’s the potential for some third confounder to hit you. If only you had used randomization - then you would probably have put some of the variants into the other group as well and your results wouldn’t’ve been bogus!
So to deal with Light’s first mistake, simply scheduling every death on the hour will not work, because the wake-sleep cycle is still present. If he set up a list and wrote down n criminals for each hour to eliminate the peak-troughs rather than randomizing, could that still go wrong? Maybe: we don’t know what information might be left in the data which an L or Turing could decipher. I can speculate about one possibility - the allocation of each kind of criminal to each hour. If one were to draw up lists and go in order (hey, one doesn’t need randomization, right?), then the order might go: criminals in the morning newspaper, criminals on TV, criminals whose details were not immediately given but were available online, criminals from years ago, historical criminals, etc.; if the morning-newspaper-criminals start at say 6 AM Japan time… And allocating evenly might be hard, since there are naturally going to be shortfalls when there just aren’t many criminals that day or the newspapers aren’t publishing (holidays?) etc., so the shortfall periods will pinpoint what the Kira considers the end of the day.
A much safer procedure is thorough-going randomization applied to timing, subjects, and manner of death. Even if we assume that Light was bound and determined to reveal the existence of Kira and gain publicity and international notoriety (a major character flaw in its own right; accomplishing things, taking credit - choose one), he still did not have to reduce his anonymity much past 32 bits.
- Each execution’s time could be determined by random dice rolls (say, a 24-sided die for the hour and a 60-sided die for the minute).
- Selecting method of death could be done similarly based on easily researched demographic data, although perhaps irrelevant (serving mostly to conceal that a killing has taken place).
- Selecting criminals could be based on internationally accessible periodicals that plausibly every human has access to, such as the New York Times, and deaths could be delayed by months or years to broaden the possibilities as to where the Kira learned of the victim (TV? books? the Internet?) and avoiding issues like killing a criminal only publicized on one obscure Japanese public television channel. And so on.
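The first of these points is almost trivial to mechanize; a hypothetical sketch (names and interface are mine, not canon):

```python
import random

# Each execution gets a uniformly random minute of the day, so the
# timing carries no wake/sleep or timezone signal whatsoever.
def random_schedule(victims, rng=None):
    rng = rng or random.Random()
    return [(v, rng.randrange(24), rng.randrange(60)) for v in victims]

for name, hour, minute in random_schedule(["criminal A", "criminal B"]):
    print(f"{name}: dies at {hour:02d}:{minute:02d}")
```

Because the draws are uniform and independent, even an adversary who knows the scheme learns nothing about the user’s timezone from the timestamps.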
Let’s remember that all this is predicated on anonymity, and on Light using low-tech strategies; as one person asked me, why doesn’t Light set up a cryptographic assassination market or just take over the world? He would win without all this cleverness. Well, then it would not be Death Note.
Communicating with a Death Note
One might wonder how much information one could send intentionally with a Death Note, as opposed to inadvertently leak bits about one’s identity. As deaths are by and large publicly known information, we’ll assume the sender and recipient have some sort of pre-arranged key or one-time pad (although one would wonder why they’d use such an immoral and clumsy system as opposed to steganography or messages online).
A death inflicted by a Death Note has 3 main distinguishing traits which one can control:

- The who? is already calculated for us: if it takes 33 bits to specify a unique human, then a particular human can convey 33 bits. Concerns about learnability (how would you learn of an Amazon tribesman’s death?) imply that it’s really <33 bits. (If you try some scheme to encode more bits into the choice of assassination, you either wind up with 33 bits or you wind up unable to convey certain combinations of bits - and effectively 33 bits anyway: your scheme will tell you that to convey your desperately important message X of 50 bits telling all about L’s true identity and how you discovered it, you need to kill an Olafur Jacobs of Tanzania who weighs more than 200 pounds and is from Taiwan, but alas! Jacobs doesn’t exist for you to kill.)
- The time is handled by similar reasoning. There is a certain granularity to Death Note kills: even if it is capable of timing deaths down to the nanosecond, one can’t actually witness this or receive records of this. Doctors may note time of death down to the minute, but no finer (and how do you get such precise medical records anyway?). News reports may be even less accurate, noting merely that it happened in the morning or in the late evening. In rare cases like live broadcasts, one may be able to do a little better, but even they tend to be delayed by a few seconds or minutes to allow for buffering, technical glitches to be fixed, the stenographers to produce the closed captioning, or simply to guard against embarrassing events (like Janet Jackson’s nipple-slip). So we’ll not assume the timing can be more accurate than the minute. But which minutes does a Death Note user have to choose from? Inasmuch as the Death Note is apparently incapable of influencing the past or causing Pratchettian19 superluminal effects, the past is off-limits; but messages also have to be sent in time for whatever they are supposed to influence, so one cannot afford to have a window of a century. If the message needs to affect something within the day, then the user has a window of only 1,440 minutes, which is log2(1440) ≈ 10.5 bits; if the user has a window of a year, that’s slightly better, as a death’s timing down to the minute could embody as much as log2(525,600) ≈ 19 bits. (Over a decade, then, it is 22.3 bits, etc.) If we allow timing down to the second, then a year would be 24.9 bits. In any case, it’s clear that we’re not going to get more than 33 bits from the date. On the plus side, an “IP over Death” protocol would be superior to some other protocols - here, the worse your latency, the more bits you could extract from the packet’s timestamp! (Compare Dinosaur Comics on compression schemes.)
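The timing-channel capacities quoted above all follow from the same one-liner:

```python
import math

minutes_per_day = 24 * 60  # 1,440
minutes_per_year = minutes_per_day * 365
print(round(math.log2(minutes_per_day), 1))        # → 10.5 bits (1-day window)
print(round(math.log2(minutes_per_year), 1))       # → 19.0 bits (1-year window)
print(round(math.log2(minutes_per_year * 10), 1))  # → 22.3 bits (a decade)
print(round(math.log2(minutes_per_year * 60), 1))  # → 24.9 bits (seconds, 1 year)
```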
- The circumstances (such as the place) are much more difficult to calculate. We can subdivide them in a lot of ways; here’s one:
    - location (eg. latitude/longitude): Earth has ~510 million square kilometers of surface area; most of it is entirely useless from our perspective - if someone is in an airplane and dies, how on earth does one figure out the exact square meter he was above? Or on the oceans? Earth has ~149 million square kilometers of land, which is more usable: if we localize a death to within a ~1,000 square meter patch, that is ~148,940,000,000 possible locations, and the usual calculation gives us log2(148,940,000,000) ≈ 37.1 bits. (Surprised at how similar to the who? bit calculation this is? But 2^33 ≈ 8.6 billion while 2^37 ≈ 137 billion. The old SF classic Stand on Zanzibar drew its name from the observation that the 7 billion people alive in 2010 would fit in Zanzibar only if they stood shoulder to shoulder - spread them out, and multiply that area by ~18…) This raises an issue that affects all 3: how much can the Death Note control? Can it move victims to arbitrary points in, say, Siberia? Or is it limited to within driving distance? etc. Any of those issues could shrink the 37 bits by a great deal.
    - cause of death: The International Classification of Diseases lists upwards of 20,000 diseases, and we can imagine thousands of possible accidental or deliberate deaths. But what matters is what gets communicated: if there are 500 distinct brain cancers but the death is only reported as “brain cancer”, the 500 count as 1 for our purposes. But we’ll be generous and go with 20,000 for reported diseases plus accidents, which is log2(20,000) ≈ 14.3 bits.
    - action prior to death: This overlaps with accidental causes; here the series doesn’t help us. Light’s early experiments, culminating in the “L, do you know death gods love apples?” message, seem to imply that actions are very limited in entropy, as each word took a death (assuming the ordinary English vocabulary of 50,000 words, log2(50,000) ≈ 16 bits per word), but other plot events imply that humans can undertake long complex plans at the orders of Death Notes (like Mikami bringing the fake Death Note to the final confrontation with Near). Actions before death could be reported in great detail, or they could be hidden under official secrecy like the aforementioned death-god message (Light being uniquely privileged in learning it succeeded as part of L testing him). I can’t begin to guess how many distinct narratives would survive transmission or what limits the Note would set. We must leave this one undefined: it’s almost surely more than 10 bits, but how many?
Summing, we get ~33 + 19 + 37 + 14 + 10 ≈ 113 bits per death.
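Putting the per-channel estimates together (the action term is just the ≥10-bit lower bound argued above):

```python
import math

who = math.log2(7_000_000_000)      # ~32.7 bits (choice of victim)
when = math.log2(24 * 60 * 365)     # ~19.0 bits (minute precision, 1-year window)
where = math.log2(148_940_000_000)  # ~37.1 bits (location on land)
cause = math.log2(20_000)           # ~14.3 bits (reported cause of death)
action = 10                         # lower bound on pre-death actions

print(round(who + when + where + cause + action, 1))  # → 113.1 bits per death
```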
E.T. Jaynes, in his posthumous Probability Theory: The Logic of Science (on Bayesian statistics), includes in chapter 5 (“Queer Uses For Probability Theory”) discussions of such topics as ESP; miracles; heuristics & biases; how visual perception is theory-laden; philosophy of science with regard to Newtonian mechanics and the famed discovery of Neptune; horse-racing & weather forecasting; and finally - section 5.8, “Bayesian jurisprudence”. Jaynes’s analysis is somewhat similar in spirit to my above analysis, although mine is not explicitly Bayesian except perhaps in the discussion of gender as eliminating one necessary bit. The following is an excerpt:
It is interesting to apply probability theory in various situations in which we can’t always reduce it to numbers very well, but still it shows automatically what kind of information would be relevant to help us do plausible reasoning. Suppose someone in New York City has committed a murder, and you don’t know at first who it is, but you know that there are 10 million people in New York City. On the basis of no knowledge but this, e(Guilty) = -70 db is the plausibility that any particular person is the guilty one.
How much positive evidence for guilt is necessary before we decide that some man should be put away? Perhaps +40 db, although your reaction may be that this is not safe enough, and the number ought to be higher. If we raise this number we give increased protection to the innocent, but at the cost of making it more difficult to convict the guilty; and at some point the interests of society as a whole cannot be ignored.
For example, if 1000 guilty men are set free, we know from only too much experience that 200 or 300 of them will proceed immediately to inflict still more crimes upon society, and their escaping justice will encourage 100 more to take up crime. So it is clear that the damage to society as a whole caused by allowing 1000 guilty men to go free, is far greater than that caused by falsely convicting one innocent man.
If you have an emotional reaction against this statement, I ask you to think: if you were a judge, would you rather face one man whom you had convicted falsely; or 100 victims of crimes that you could have prevented? Setting the threshold at +40 db will mean, crudely, that on the average not more than one conviction in 10,000 will be in error; a judge who required juries to follow this rule would probably not make one false conviction in a working lifetime on the bench.
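Jaynes’s decibel scale of evidence (e = 10·log10 of the odds) makes these numbers easy to check:

```python
import math

def evidence_db(odds):
    """Jaynes's evidence scale: e = 10 * log10(odds), in decibels."""
    return 10 * math.log10(odds)

# Prior plausibility that one person out of 10 million is the murderer:
print(round(evidence_db(1 / 10_000_000)))  # → -70 db

# A +40 db conviction threshold is odds of 10,000:1 -
# crudely, at most ~1 conviction in 10,000 in error:
print(10 ** (40 / 10))  # → 10000.0

# Evidence required to move from the -70 db prior to +40 db:
print(40 - (-70))  # → 110 db
```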
In any event, if we took +40 db starting out from -70 db, this means that in order to ensure a conviction you would have to produce about 110 db of evidence for the guilt of this particular person. Suppose now we learn that this person had a motive. What does that do to the plausibility for his guilt? Probability theory says
e(Guilty|Motive) = e(Guilty|X) + 10 log₁₀ [P(Motive|Guilty) / P(Motive|Not Guilty)] ≈ −70 − 10 log₁₀ P(Motive|Not Guilty)    (5-38)

since P(Motive|Guilty) ≈ 1, i.e. we consider it quite unlikely that the crime had no motive at all. Thus, the [importance] of learning that the person had a motive depends almost entirely on the probability P(Motive|Not Guilty) that an innocent person would also have a motive.
This evidently agrees with our common sense, if we ponder it for a moment. If the deceased were kind and loved by all, hardly anyone would have a motive to do him in. Learning that, nevertheless, our suspect did have a motive, would then be very [important] information. If the victim had been an unsavory character, who took great delight in all sorts of foul deeds, then a great many people would have a motive, and learning that our suspect was one of them is not so [important]. The point of this is that we don’t know what to make of the information that our suspect had a motive, unless we also know something about the character of the deceased. But how many members of juries would realize that, unless it was pointed out to them?
Suppose that a very enlightened judge, with powers not given to judges under present law, had perceived this fact and, when testimony about the motive was introduced, he directed his assistants to determine for the jury the number of people in New York City who had a motive. If this number is N_m, then

P(Motive|Not Guilty) = (N_m − 1) / (10^7 − 1) ≈ N_m × 10^−7

and equation (5-38) reduces, for all practical purposes, to

e(Guilty|Motive) ≈ −10 log₁₀ (N_m − 1)    (5-39)
You see that the population of New York has canceled out of the equation; as soon as we know the number of people who had a motive, then it doesn’t matter any more how large the city was. Note that (5-39) continues to say the right thing even when N_m is only 1 or 2.
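The cancellation can be checked numerically. A minimal sketch, using Jaynes’ numbers and taking P(Motive|Guilty) = 1: the posterior evidence depends only on the number of people with a motive, not on the city’s size:

```python
import math

def db(odds):
    """Jaynes' evidence measure: 10 * log10(odds)."""
    return 10 * math.log10(odds)

def posterior_db(population, n_motive):
    # Full Bayesian update: prior odds that a given resident is guilty,
    # times the likelihood ratio P(Motive|Guilty) / P(Motive|Not Guilty),
    # with P(Motive|Guilty) taken to be 1.
    prior_odds = 1 / (population - 1)
    likelihood_ratio = 1 / ((n_motive - 1) / (population - 1))
    return db(prior_odds * likelihood_ratio)

for city in (10_000_000, 1_000_000_000):  # New York vs a hypothetical mega-city
    print(round(posterior_db(city, n_motive=50), 4))  # same answer either way

print(round(-db(50 - 1), 4))  # Jaynes' shortcut (5-39): -10*log10(Nm - 1)
```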
You can go on this way for a long time, and we think you will find it both enlightening and entertaining to do so. For example, we now learn that the suspect was seen near the scene of the crime shortly before. From Bayes’ theorem, the [importance] of this depends almost entirely on how many innocent persons were also in the vicinity. If you have ever been told not to trust Bayes’ theorem, you should follow a few examples like this a good deal further, and see how infallibly it tells you what information would be relevant, what irrelevant, in plausible reasoning.20
In recent years there has grown up a considerable literature on Bayesian jurisprudence; for a review with many references, see Vignaux and Robertson (1996) [This is apparently Interpreting Evidence: Evaluating Forensic Science in the Courtroom –Editor].
Even in situations where we would be quite unable to say that numerical values should be used, Bayes’ theorem still reproduces qualitatively just what your common sense (after perhaps some meditation) tells you. This is the fact that George Polya demonstrated in such exhaustive detail that the present writer was convinced that the connection must be more than qualitative.
This reasoning would be wrong in the case of Misa Amane, but Misa is an absurd character - a Gothic lolita pop star who falls in love with Light through an extraordinary coincidence and doesn’t flinch at anything, even sacrificing 75% of her lifespan or her memories; hence it’s not surprising to learn on Wikipedia from the author that the motivation for her character was to avoid a boring “all-male cast” and be “a cute female”. (Death Note is not immune to the Rule of Cool or Sexy.)↩
My first solution involved sex reassignment surgery, but that makes the situation worse, as transsexuals are so rare that an L intelligent enough to anticipate these ultra-rational Death Note users would instantly gain a huge clue: just check everyone on the surgery lists. Anyway, most Death Note users would probably prefer the passing-it-on solution.↩
See the 2011 paper, “Circadian patterns of Wikipedia editorial activity: A demographic analysis”.↩
Coverage of this de-anonymization algorithm generally linked it to IMDb ratings, but the authors are clear - you could have those ratings from any source, there’s nothing special about IMDb aside from it being public and online.↩
This sounds like something that ought to be NP-complete, but while the graph isomorphism problem is known to be in NP, it is almost unique in being like integer factorization - neither known to be in P nor known to be NP-complete, and it may turn out to be very easy or very hard; there is no proof either way. In practice, graph isomorphism on large real-world graphs tends to be solvable very efficiently.↩
From the paper’s abstract:
[we] develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy “sybil” nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary’s auxiliary information is small.
eg. 97% of the Cambridge, Massachusetts voters could be identified with birth-date and zip code, and 29% by birth-date and just gender.↩
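In the bit-counting terms used earlier in this essay, results like this are unsurprising: birth-date, gender, and ZIP code together can carry more bits than are needed to single out one person. A rough, illustrative tally (the anonymity-set sizes below are assumptions chosen for the sketch, not census figures):

```python
import math

bits = lambda n: math.log2(n)  # bits needed to pick 1 item out of n equally likely

# Assumed set sizes, purely for illustration:
birthdate = bits(365 * 80)     # day-of-year x ~80 plausible birth years, ~14.8 bits
gender    = bits(2)            # 1 bit
zipcode   = bits(40_000)       # rough count of US ZIP codes, ~15.3 bits

available = birthdate + gender + zipcode
needed    = bits(300_000_000)  # to single out 1 person among ~300M Americans

print(round(available, 1))     # ~31 bits disclosed
print(round(needed, 1))        # ~28 bits required
```

So under these assumptions the three fields over-determine the answer by a few bits, which is why near-total re-identification is possible.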
“Bubble Trouble: Off-Line De-Anonymization of Bubble Forms”, USENIX Security Symposium 2011; from “New Research Result: Bubble Forms Not So Anonymous”:
If bubble marking patterns were completely random, a classifier could do no better than randomly guessing a test set’s creator, with an expected accuracy of 1/92 ≈ 1%. Our classifier achieves over 51% accuracy. The classifier is rarely far off: the correct answer falls in the classifier’s top three guesses 75% of the time (vs. 3% for random guessing) and its top ten guesses more than 92% of the time (vs. 11% for random guessing).
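The quoted baselines are simply k/92 for random top-k guessing over the study’s 92 writers; a one-liner check:

```python
n_writers = 92  # number of distinct form-fillers in the study
for k in (1, 3, 10):
    baseline = 100 * k / n_writers
    print(f"top-{k}: {baseline:.1f}% by random guessing")
# top-1 ~1.1%, top-3 ~3.3%, top-10 ~10.9% -- matching the quoted ~1%, 3%, 11%
```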
Arvind Narayanan and Vitaly Shmatikov brusquely summarize the implications of their de-anonymization:
So, what’s the solution?
We do not believe that there exists a technical solution to the problem of anonymity in social networks. Specifically, we do not believe that any graph transformation can (a) satisfy a robust definition of privacy, (b) withstand de-anonymization attacks described in our paper, and (c) preserve the utility of the graph for common data-mining and advertising purposes. Therefore, we advocate non-technical solutions.
So, the de-anonymizing just happens behind closed doors:
researchers don’t have the incentive for deanonymization anymore. On the other hand, if malicious entities do it, naturally they won’t talk about it in public, so there will be no PR fallout. Regulators have not been very aggressive in investigating anonymized data releases in the absence of a public outcry, so that may be a negligible risk. Some have questioned whether deanonymization in the wild is actually happening. I think it’s a bit silly to assume that it isn’t, given the economic incentives. Of course, I can’t prove this and probably never can. No company doing it will publicly talk about it, and the privacy harms are so indirect that tying them to a specific data release is next to impossible. I can only offer anecdotes to explain my position: I have been approached multiple times by organizations who wanted me to deanonymize a database they’d acquired, and I’ve had friends in different industries mention casually that what they do on a daily basis to combine different databases together is essentially deanonymization.
In general, there’s no clear distinction between identifying and “useless” information from the perspective of breaking privacy/reversing anonymization (emphasis added):
“Quasi-identifier” is a notion that arises from attempting to see some attributes (such as ZIP code) but not others (such as tastes and behavior) as contributing to re-identifiability. However, the major lesson from the re-identification papers of the last few years has been that any information at all about a person can be potentially used to aid re-identification.
But wait - hasn’t Wikileaks revealed the massive expansion of American government secrecy due to the War on Terror, and hasn’t even the supposed friend of transparency, President Obama, presided over an expansion of President George W. Bush’s secrecy programs and crackdowns on whistle-blowers of all stripes? Oh. Too bad about that, I guess.↩
Given the extremely high global stakes and apparent impossibility of the murders indicating that L is deeply ignorant of extremely important information about what is going on, a more pragmatic L would have simply kidnapped & tortured or assassinated Light as soon as L began to seriously suspect Light.↩
The only thing known to go faster than ordinary light is monarchy, according to the philosopher Ly Tin Wheedle. He reasoned like this: you can’t have more than one king, and tradition demands that there is no gap between kings, so when a king dies the succession must therefore pass to the heir instantaneously. Presumably, he said, there must be some elementary particles – kingons, or possibly queons – that do this job, but of course succession sometimes fails if, in mid-flight, they strike an anti-particle, or republicon. His ambitious plans to use his discovery to send messages, involving the careful torturing of a small king in order to modulate the signal, were never fully expanded because, at that point, the bar closed.
Note that in these cases we are trying to decide, from scraps of incomplete information, on the truth of an Aristotelian proposition: whether the defendant did or did not commit some well-defined action. This is the situation, an issue of fact, for which probability theory as logic is designed. But there are other legal situations quite different; for example, in a medical malpractice suit it may be that all parties are agreed on the facts as to what the defendant actually did; the issue is whether he did or did not exercise reasonable judgment. Since there is no official, precise definition of “reasonable judgment”, the issue is not the truth of an Aristotelian proposition (however, if it were established that he willfully violated one of our Chapter 1 desiderata of rationality, we think that most juries would convict him). It has been claimed that probability theory is basically inapplicable to such situations, and we are concerned with the partial truth of a non-Aristotelian proposition. We suggest, however, that in such cases we are not concerned with an issue of truth at all; rather, what is wanted is a value judgment. We shall return to this topic later (Chapters 13, 18).↩