Hacker News
Time-lock encryption (gwern.net)
117 points by kiba on Oct 7, 2013 | 93 comments



There are also real-world solutions. I spent some time brainstorming this a while back.

1) The safest solution, albeit somewhat expensive: launch a Voyager-like probe. It constantly generates private and public key pairs, broadcasting the public keys immediately and the private ones on a schedule. That way, you can pick a known timeframe for decryption. Launching more than one probe, for redundancy, is probably a good idea: you can encrypt with multiple keys.

2) Cheaper but less reliable solutions involve balloon-dropping transmitters that wake up and divulge decryption keys across a huge geographic area where it would be infeasible to find them. A variation is sinking modules very deep in the ocean that wake up and float to the surface.


What about launching a probe that echoes any signals it gets? If it were sufficiently far away, one could send it the private key and destroy the local copy. Nobody's going to leave from Earth (or wherever) and catch up to your private key!


Since you're launching the probe well in advance of sending out the key, a sufficiently well-funded and paranoid party could send a probe that just trails yours, intercepting your signal and beaming it back to the party in question (encrypted with their own key, of course).


They already have the info; the problem (well, their problem in this scenario) is preventing it from spreading, so all that's needed is to block the signal on its way back.


And even if they did, barring FTL, they couldn't really get it down here faster than the probe itself.


If the probe is destroyed, so is the possibility of decryption. Same for the second option.

In fact the problem is not really about elapsed time but about processing time, and it's hard for two reasons. First, the math has to be so incredibly good and error-free that going the physics route is cheaper, and by cheaper I mean the only way (there is nothing more expensive than something that does not exist). Second, there are the assumptions related to physics: which part do you assume is "hardcoded"? gwern decided on cache speed (I see a problem there with specialized hardware; AMD builds CPUs to spec, which is back to physics). You could instead pick the speed of light, or the amount of energy it takes to perform a job, or the resistance of circuits near absolute zero and therefore the energy required to perform the job. If you can assume non-parallelizable tasks, and therefore a scalar CPU, then given a minimal circuit design you get the amount of time required under perfect conditions. This works unless Reality(tm) comes into play: how much time will it take humanity to reach those perfect conditions, and which conditions or values do you consider appropriate for measuring the time (for releasing the information once the conditions are met)?

What if you just hide (for example by burying them in the sand in the Sahara) 100 devices, of which 10 are the original ones and the other 90 are backups of the originals, and you need ten of them to decrypt? (Scale by 10^x for paranoia level.)


That's why you launch a few of them. Also, the odds of the probe getting destroyed, e.g. by an impact, are really low. They get lower once you leave Earth orbit, and really low once you hit interstellar space. And you can probably get clear of impact hazards quicker if you go perpendicular to the plane of the ecliptic rather than outward along it.

Your bigger issue is component failure.

So launch enough of them to deal with component failure, wait long enough for them to be out of harm's way, and then you have a time lock.

Sure it's expensive, but time locks with a known time to unlock or even a decent estimate are not really possible algorithmically.


Excellent, but the point is to find the most secure/cheap solution. He already has a mountain; he could build a room with gigantic 100 m steel walls, with a piece of paper with the pass on it, recorded by 100 webcams on 100 different connections, or send 10,000 probes. The math is such a good target to build an idea around precisely because the ideas and the intelligence needed to break it are so expensive that the other parts of the problem look more feasible to hack/fix/try.


Also, it's cheaper than your options, and your options are not that "real world" feasible without some serious money.


Third thought: what if we take something people will already do a lot of work for, something they desire... money. What if we replicate the Bitcoin scheme and make all of those hashes required to decrypt the file? You already have real-world timings for calculations like that.


Still playing the "hiding" card: put random players in online games with the answer and have them release it at a certain time.


That would be a cool use of a CubeSat (as long as it never becomes worthwhile for a wealthy entity to go snatch it out of orbit...)


Oops: s/balloon/parachute/


I like the parallelized hash chain construction idea; I've never seen that before.

One could improve a chunk of the chain by having checkpoints along the way. As the author mentioned, it would suck to be 2 years into mining a 35-year computation, only to make a mistake that you can't detect until the very end.

To add checkpoints, one could release both the original seed of the chain A, and a number of pairs of hashes (x0,y0) (x1,y1) ...

Let's say you wanted to do 1-month chains. Hash the seed A for a week; call the current chain value B and let x0 = H(B) (you know B, since you've been computing the chain). Pick another random value y0, and continue the chain from H(B^y0). Write (x0,y0) in the output, and hash for another week. Do the same for (x1,y1), (x2,y2) and (x3,y3). Each chain then has a seed value and 4 pairs of 'checkpoints'.

When unlocking the crypto puzzle, these checkpoints can't be used to jump ahead in the computation, but they can tell you that you're on the right track.

I think that you could even use a secondary hash chain for the y_n values, so y_n+1=H(y_n). If you also derived y0 from A (e.g. y0=H(A^const) ), you would just need to publish the seed value A and each checkpoint hash x_n in order to have a fully checkpointed crypto puzzle.
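A minimal sketch of that construction (Python, SHA-256 standing in for the slow hash, toy segment lengths; the function names are just for illustration):

    import hashlib, secrets

    def H(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def build_chain(seed: bytes, segments: int, steps: int):
        # Creator's side: hash in segments, emitting an (x_i, y_i) checkpoint
        # pair at the end of each segment, as described above.
        state, checkpoints = seed, []
        for _ in range(segments):
            for _ in range(steps):
                state = H(state)
            x_i = H(state)                 # published check value for this segment
            y_i = secrets.token_bytes(32)  # published random value
            checkpoints.append((x_i, y_i))
            state = H(xor(state, y_i))     # continue the chain from H(B ^ y_i)
        return state, checkpoints          # final state becomes the encryption key

    def solve_chain(seed: bytes, checkpoints, steps: int):
        # Solver's side: redo all the work, verifying each checkpoint on the way.
        state = seed
        for x_i, y_i in checkpoints:
            for _ in range(steps):
                state = H(state)
            assert H(state) == x_i, "went off the rails somewhere in this segment"
            state = H(xor(state, y_i))
        return state

    seed = secrets.token_bytes(32)
    key, cps = build_chain(seed, segments=4, steps=100_000)
    assert solve_chain(seed, cps, steps=100_000) == key

As described above, the (x_i, y_i) pairs let the solver detect a mistake a week in rather than at the very end, but they don't let anyone skip ahead.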


One problem I can see with that is you have to be very careful that your hashes (x_i, y_i) don't make it more attractive to, say, solve the n-1th computation (similar to brute-forcing a password hash) than to do the first 1..n-1 computations properly.


Good point! Still, I think the best hash inversion I've seen is slightly under 1/2 the hash space (2/5ths?). Using a 512-bit hash like SHA3, even at an ungodly hash rate (1 TH/s), you still get approximately 2^141 seconds = 2^133 years maximum chain length before it becomes more efficient to invert the hash.


This is a pretty well understood construct, called hash stretching. The correct implementation is that on each (or every 5th, etc.) iteration you test whether the key is valid by trying to decrypt, and continue if unsuccessful. That way the attacker has no idea what n is.
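A rough sketch of that loop, assuming Python and using an HMAC check as a stand-in for "trying to decrypt" with an authenticated cipher (the toy XOR cipher and all names are illustrative):

    import hashlib, hmac, secrets

    def lock(seed: bytes, iterations: int, plaintext: bytes):
        # Creator: derive the key by iterated hashing; publish seed, ciphertext
        # and tag, but NOT the iteration count.
        k = seed
        for _ in range(iterations):
            k = hashlib.sha256(k).digest()
        stream = hashlib.sha256(k + b"stream").digest()   # toy cipher, <= 32-byte messages
        ct = bytes(p ^ s for p, s in zip(plaintext, stream))
        tag = hmac.new(k, ct, hashlib.sha256).digest()    # lets the solver test a candidate key
        return ct, tag

    def unlock(seed: bytes, ct: bytes, tag: bytes) -> bytes:
        # Solver: keep hashing, testing each candidate key by "trying to decrypt"
        # (here every iteration; every 5th also works if the creator picks an
        # iteration count that is a multiple of 5).
        k = seed
        while True:
            k = hashlib.sha256(k).digest()
            if hmac.compare_digest(hmac.new(k, ct, hashlib.sha256).digest(), tag):
                stream = hashlib.sha256(k + b"stream").digest()
                return bytes(c ^ s for c, s in zip(ct, stream))

    seed = secrets.token_bytes(32)
    ct, tag = lock(seed, 250_000, b"the secret")
    assert unlock(seed, ct, tag) == b"the secret"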


> I like the parallelized hash chain construction idea; I've never seen that before.

Which, given that it's cryptography we're talking about, probably just means that it's completely insecure. :)


If only there were a series of mirrors, each N light-years away, you could blast a one-time pad out to the mirror of your choice and then announce the date the reflection is expected to arrive.


Nice idea IF we had such mirrors. Afaik the farthest away is on the Moon.[0] Also, a one-time pad needs to be as large as or larger than the data it encrypts.[1] So your plan cannot be used for the several-gigabyte WikiLeaks insurance file. But perhaps you can "pad" the key used to encrypt it? Probably.

[0] http://en.wikipedia.org/wiki/Lunar_Laser_Ranging_experiment [1] http://en.wikipedia.org/wiki/One-time_pad


It'd probably be easier to design an ultra-long delay line ASIC that internally maintains a state for a certain count of clock cycles, and spits it out after a while. It might still be possible to do a cryogenic attack, but that wouldn't be very easy to do.

Or maybe some sort of superconductor/magnetic state device.


It's a cute idea, but you have 1/r^4 intensity scaling, which is prohibitive even at lunar distance. The lunar ranging experiments receive "one photon every few seconds" [1]. You could probably do better with bigger equipment, higher power, and perhaps an orbiting base station to skip the atmosphere, but the numbers are going to win before too long.

An active repeater would give you 1/r^2, though. For example, you could ping Voyager I for a 36-hour delay. However, that's likely not feasible at light-year distances, let alone the problem of getting the devices there. :)

[1] http://www.jpl.nasa.gov/news/news.php?feature=605


The use of black holes here could work, if you had an accurate enough representation and/or a large enough receiver dish. Fire your one-time pad just in front of a moving black hole and have it circle back through the gravity well before the black hole moves on.


I think I have a better scheme. Say you have a 10-bit keyspace or something, and then encrypt a very large number of times with random keys. You don't have to perform as much computation as your adversary. By the law of large numbers, the probability of solving all of the puzzles in a much shorter than expected time is low. And it is much less parallelizable than just one encryption with a random key.
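If I understand the scheme right, a toy sketch might look like this (a SHA-256-based XOR stream stands in for a real cipher; the marker, salts, and parameters are all made up for illustration):

    import hashlib, secrets

    KEY_BITS = 10
    MARKER = b"MAGICOK!"   # lets the cracker recognise when a layer key is right

    def layer_key(k: int, salt: bytes) -> bytes:
        # per-layer salt so a single precomputed key table doesn't help (see the
        # sibling objection about shared work)
        return hashlib.sha256(salt + k.to_bytes(2, "big")).digest()

    def xor_stream(data: bytes, key: bytes) -> bytes:
        out, ctr = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return bytes(d ^ s for d, s in zip(data, out))

    def lock(plaintext: bytes, layers: int):
        blob, salts = MARKER + plaintext, []
        for _ in range(layers):
            salt = secrets.token_bytes(8)
            k = secrets.randbelow(1 << KEY_BITS)       # random key, then discarded
            blob = MARKER + xor_stream(blob, layer_key(k, salt))
            salts.append(salt)
        return blob, salts                             # publish both

    def unlock(blob: bytes, salts):
        # Peel layers in reverse; each layer needs a fresh brute force of 2^10
        # keys (parallelizable within a layer, but layers must be done in order).
        for salt in reversed(salts):
            for k in range(1 << KEY_BITS):
                inner = xor_stream(blob[len(MARKER):], layer_key(k, salt))
                if inner.startswith(MARKER):
                    blob = inner
                    break
        return blob[len(MARKER):]

    blob, salts = lock(b"attack at dawn", layers=30)
    assert unlock(blob, salts) == b"attack at dawn"

With 10-bit keys the creator does one encryption per layer while the unlocker does ~512 trial decryptions per layer on average; as the replies note, though, each layer's search parallelizes, and only the ordering of layers is sequential.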


Interesting idea, I like the "as much certainty as you want" with the probability.

Why can't this be parallelized? Sure you have to work on one key at a time because they are sequential, but 1000 machines can be cracking that one key. Every key cracked would be distributed and the cluster would start in on the next.

It's a little more complicated, but I'm not seeing how it's really any less parallelizable.


I think, but am not certain, that with such a small keyspace the IO cost would dominate if you tried to parallelize it. (Assuming each decryption attempt is rather quick).


I/O is inexpensive for even non-state level actors. I have 40,000 spindles on a research cluster that is hardly used for anything...


Yeah, everybody has $4 million in hard drives just sitting around. At 1 TB each that is a 40-petabyte array, which is still huge in this day and age.


That sounds risky. You're using a cryptographic primitive in a non-standard way, and in danger of having created a weakness.

For example, are you sure that your cipher doesn't have the property that for any text.Encrypt(key1).Encrypt(key2) there's an equivalent text.Encrypt(key3)? Some ciphers have that property (e.g. the one time pad). That would reduce your security from 10^N to... 10.
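For the XOR/one-time-pad case the collapse is easy to demonstrate (toy sketch):

    import secrets

    def xor_encrypt(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, key))

    msg = b"attack at dawn"
    k1, k2 = secrets.token_bytes(len(msg)), secrets.token_bytes(len(msg))

    # Two layers of XOR encryption are equivalent to one layer under k1 XOR k2,
    # so stacking layers adds no work for an attacker against this kind of cipher.
    k3 = bytes(a ^ b for a, b in zip(k1, k2))
    assert xor_encrypt(xor_encrypt(msg, k1), k2) == xor_encrypt(msg, k3)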


The article is about research on time lock systems, not about someone actually building one. Thus I think a speculative, "risky", novel idea is appropriate.


I had the same thought initially, but the problem I see is that breaking several keys in the same keyspace shares work. e.g. if you want to invert a one-way function, that's a tough problem. But if it's a 10-bit function, you can just create a full table by brute force.


This is interesting in its own right, but the Assange use case doesn't really make sense to me. Wikileaks doesn't want the encryption to be broken after a certain amount of time, they want it broken based on the condition of assassination.


The primitive for this is a dead man's switch. I wonder what the cryptographic equivalent would be.

Some sort of computational network that will always make progress towards decrypting the data unless the soon to be dead man injects something using his private key that sets the network back preventing completion?

The network can't identify that the soon to be dead man is preventing progress?

Sounds like a fun research project. Maybe tie it to some coin mining network.


The problem with using a secure protocol is that you need to trust the parties to not just instantiate a second version of the protocol without you. If you can trust them to do that, you can just give them shares of the secret and trust them not to recombine the shares unless you die.


I think what you would do is make it computationally infeasible for them to create a separate network. The network exists independent of your individual secret and may be working towards the release of many soon to be dead men's secrets in a peer to peer fashion.

Sure they could avoid the dead man injecting something, but then they would get hit with the full workload. If you coupled this with an economic incentive like bitcoin mining you could get the miners to allow you to drastically increase the amount of work required and make it keep up with the state of the art in technology.

I couldn't say whether you can string enough primitives together to build something like this but it would be really cool if you could. Maybe I should go file a patent or something :-P


"I think what you would do is make it computationally infeasible for them to create a separate network."

If the goal of the network is to release my secret when I fail to participate then there must be a way for the network to operate without my participation. What stops the parties from ignoring the messages I send, thus recovering my secret by simply pretending I died?


I think that is one of the novel pieces of functionality that needs to be created. The soon to be dead man looks like any other miner to the network and submits what appears to be valid work, but he is secretly poisoning the work going towards releasing his secret, and of course he has to be the only one capable of doing that. Maybe later on the network realizes the work it was doing was poisoned, rolls back the poisoned change, and resumes processing. Every time a poison pill is introduced the network must do some amount of work to determine that the pill must be discarded, and that amount is tunable by the soon to be dead man.

"What stops the parties from ignoring the messages I send, thus recovering my secret by simply pretending I died?" They would have to control enough of the network and know who you are and how you are interfacing with the network to stop you. If the network has an incentive like bitcoin mining this could be infeasible for many adversaries.

Tampering is a problem with real dead man's switches as well such as a script that you have to ping or an associate.

I think big computational networks with incentives unlock some interesting doors. If you can assume that the computational majority of the network is playing ball you can ask it to do some interesting things if you have the right private key.


""What stops the parties from ignoring the messages I send, thus recovering my secret by simply pretending I died?" They would have to control enough of the network and know who you are and how you are interfacing with the network to stop you."

OK, but if the assumption is that out of the n parties in the network no more than k parties are malicious, why not just use a k+1 out of n secret sharing scheme? You broadcast a signed message once per month, and if the message does not arrive for some number of months the parties all broadcast their shares and recover the secret.

At best, the role of "proof of work" systems here is in combating sybil attacks, which is only relevant if you want to remove the requirement that I know the people I am issuing shares to. If that is truly advantageous, the system might look like this: first, I broadcast a public key for some non-malleable encryption scheme. Each party willing to participate will then use that key to encrypt a randomly generated ID string that they keep secret. Once I have received the IDs, I broadcast a random string, and each party will use their chosen ID and the random string as a "starting point" in a proof-of-work scheme. The output of the proof of work is then used as the seed to generate a keypair for a symmetric cipher (using an appropriate key derivation function). The parties encrypt the proof-of-work outputs and send the ciphertext to me; I check the proofs and generate the keys locally. Then I encrypt each party's share using the party's symmetric key and send the encrypted share. Then I proceed as before, sending a periodic message.

I suspect, though, that such a construction is overkill; also I have not really evaluated the security of it.

"I think big computational networks with incentives unlock some interesting doors"

Maybe so, but right now I see a solution in search of a problem.

"If you can assume that the computational majority"

Why should I need to assume anything about the computational resources of the participants? We can have threshold secret sharing with unconditional security, and we only need to trust one of the parties for the switch to be secure regardless of the computing power of the rest of the parties.


"At best, the role of "proof of work" systems here is in combating sybil attacks, which is only relevant if you want to remove the requirement that I know the people I am issuing shares to."

That seems pretty fundamental to making the mechanism accessible. If we are talking about switches as a service, and there is a "fixed" pool of switches and an exploit is found that allows you to compromise each switch component, you are out of luck, because you didn't actually make materializing the secret difficult.

By requiring actual work to be done and allowing the difficulty of the work to be tuned based on the capacity of the network you make an adversary go up against the math instead of against the people.


"If are talking about switches as a service if there is a "fixed" pool of switches and an exploit is found that allows you to compromise each switch component you are out of luck because you didn't actually make materializing the secret difficult."

If an exploit is found that allows you to compromise each component, then the adversary can just have the components ignore your messages and open your secret. It makes no difference how the system is structured at that point.

"By requiring actual work to be done and allowing the difficulty of the work to be tuned based on the capacity of the network you make an adversary go up against the math instead of against the people."

By using a threshold secret sharing scheme, you ensure that the adversary cannot get the secret regardless of their own computing resources. You also avoid wasting electricity for the sake of your switch. You also have the advantage of having a well-defined security model that can actually be analyzed formally.

The only reason you would ever want to burn through some CPU cycles is to thwart sybil attacks. Unlike Bitcoin, you do not need to keep doing proofs of work after that, because once the shares are distributed, there is nothing more to do. If the adversary increases his computing power after that, he gains nothing by it, because he will not be given any more shares. Hence the suggestion in my previous post: have the proof of work be coupled to the generation of a public key, and just have the public keys be generated when someone needs to set up a switch.


Your proposal is no better than the 3 letter server farm working alone. If the poisoner is not inside the wealthy attacker, then the attacker knowing this scheme is better off not cooperating with anyone, including you.

Even if that were not the case, your scheme relies on your identity being a secret, which essentially is "security through obscurity".

We have to assume that a dedicated attacker knows no greater incentive than breaking the encryption. Anything else is foolish, IMHO.

Edit: clarification


"Your proposal is no better than the 3 letter server farm working alone. If the poisoner is not inside the wealthy attacker, then the attacker knowing this scheme is better off not cooperating with anyone, including you."

Having the attacker be better off alone is the point.

First of all, you're assuming that the 3 letter server farm is your adversary. This approach has utility even if some adversaries (which don't happen to be yours) might have enough compute power. The approach has utility if it can ratchet up the difficulty enough that the adversary you care about can't trigger early release. It also means you don't have to maintain any infrastructure that is traceable to you. If an attacker can find and disable the switch before it triggers, what good does it do you?

Another advantage of this approach is that you can ratchet up the difficulty to the point that the 3 letter server farm can't complete the task in a reasonable amount of time. I suppose there is hard upper bound in the sense that the network is only going to be a few orders of magnitude more powerful than the 3 letter server farm.

Another assumption is that an adversary finding out the secret and releasing it is the worst case. It really depends which adversary right? One thing you are trying to influence is public distribution, and the people with the incentive to apply the compute power may not be the people with an incentive to act on the information by distributing it publicly. The secret is just going to be an encryption key for decrypting the payload you have put out through other means. It's not black and white whether the utility is zero.

In summation, it's a dead man's switch, and the important part is that it fires if you don't do your thing. There are a lot of other desirable attributes, but real world dead man's switches share many of the same flaws.

"Even if that were not the case, your scheme relies on your identity being a secret, which essentially is "security through obscurity"

I don't actually follow why anonymity is necessary. Dead man's switches are usually because you fear you are not or may not remain anonymous. Anonymity is of course a desirable attribute for a lot of reasons, but if it is blown this approach would still work. Only the person possessing the secret key can generate the poison pills that delay the release of the secret.

Tor is an example of ways to attempt anonymity while participating in a network. I view that as a separate problem space.

I think the real competitor would be a switch that actively monitors a piece of state that has plausible deniability, but that runs afoul of the whole running infrastructure traceable to you thing.


I'd argue that Wikileaks doesn't want it released solely on his death but more like 'Assange's assassination or 10 years, whichever comes first', so it does get released eventually.

And I'm surprised no one sees a use-case in recent events: if I were DPR, I'd be happy if usable time-lock crypto existed so I could, say, lock a wallet of 100k btc for 10 years, and give it to a friend as a backup; I don't have to worry about the friend deciding that they'd like to retire to Patagonia, but if I'm 'shot resisting arrest', at least I left my friend a fortune. Or, even if he hands a time-locked wallet over to the FBI, he's delayed them spending it for 10 years and given the Bitcoin economy that much more time to grow and be able to absorb a sudden infusion of 1% of the money supply.

As it is, he's basically in an all-or-nothing position.


I believe your DPR use-case is possible with Bitcoin, using "lock time" contracts. https://en.bitcoin.it/wiki/Contracts

Every transaction can have a lock time associated with it. This allows the transaction to be pending and replaceable until an agreed-upon future time, specified either as a block index or as a timestamp (the same field is used for both, but values less than 500 million are interpreted as a block index). If a transaction's lock time has been reached, we say it is final.
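A tiny sketch of that dual interpretation, based only on the 500-million rule quoted above (the helper is illustrative, not Bitcoin source code):

    LOCKTIME_THRESHOLD = 500_000_000  # below this, the lock time is a block height

    def locktime_meaning(n_lock_time: int) -> str:
        # 0 conventionally means the transaction is final immediately.
        if n_lock_time == 0:
            return "final immediately (no lock)"
        if n_lock_time < LOCKTIME_THRESHOLD:
            return f"replaceable until block height {n_lock_time}"
        return f"replaceable until unix time {n_lock_time}"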


Yes, the specific case of a fallback is possible, but not the general time-lock case. I think. I was discussing this a little with the Bitcoin developer mentioned there, Mike Hearn, but I quickly got out of my depth and the conversation petered out.


Prepare ordered sterile Petri dishes with nutrient solution. Expose some to bacteria, forming bacterial cultures, but not others. Ones and zeroes.

At first, they will be almost impossible to distinguish (you cannot hurry a bacterial culture much, even if you're willing to spend a lot, right?). After a while, the cultures become obvious.

The timing is heavily dependent on the bacteria's life cycle, but I guess if you strictly control the number of individuals initially put into the dishes, and keep temperature and lighting optimal, you can predict the time-to-observability to within around 40%?


Not directly related to this article, but gwern.net seems to be coming up on the front page quite often. Is there an RSS feed for the site? I was unable to find one.


Gwern posts articles, but not in blog format. If you look on the sidebar for this page, for example, you'll see it was originally published in 2011, and just recently modified.

This is the closest thing to what you're looking for: http://gwern.net/Changelog


Ha, close enough I suppose. Thank you!


There is an RSS feed, although your browser may not be displaying it to you.


In reality the easiest solution for this would be to build yourself a time-release cellphone or something. A simple time-release power switch turns it on, and then the phone tweets the key to a strong encryption scheme. Build the phone into the walls of a cafe or something, maybe with a bit of redundancy (multiple cafes) and photosensors to detect discovery and disable/destruct the phone.

If you split the key redundantly between the phones, you should be fine.


And now you don't have to trust the math, but all kinds of things in the physical world. A photo sensor for example is disabled by darkness. Digging up the phone by night, anyone? Bad idea, IMHO.


IR LED for illumination, use the camera to detect movement where there should be none.

It would burn a ton of power though.


That doesn't sound very easy.


Maybe there's still a way to use homomorphic encryption. gwern rightly suggests that it causes a big problem if the recipient must decrypt the result of an encrypted computation. However, what if the decrypted result of the computation is never known to the recipient and instead the recipient must use the still-encrypted result of the encrypted homomorphic computation?

It would work like this:

The secret sharer creates some random string 'x' and some computable function 'f'. The secret sharer also creates an encryption function 'e', and a homomorphic equivalent of 'f', called 'F', so that the following commutes: e(f(x)) = F(e(x)). F acts on encrypted data and gives encrypted results, but is much slower than f.

The secret sharer can comparatively quickly compute e(f(x)), which he or she uses as a key to encrypt a message. However the recipient is only given the values e(x) and F and must use exponentially more computational time to go the more laborious route, computing F(e(x)).
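To make the commuting relation concrete, here is a toy sketch using textbook RSA's multiplicative homomorphism; the tiny numbers and the choice of f are made up, and this only illustrates e(f(x)) = F(e(x)), not the work asymmetry the scheme would also need:

    n, e_pub = 3233, 17            # toy RSA modulus (61*53) and public exponent
    k = 7

    def e(x):                      # "encrypt": textbook RSA
        return pow(x, e_pub, n)

    def f(x):                      # plaintext computation: multiply by k mod n
        return (x * k) % n

    def F(c):                      # homomorphic equivalent, acting on ciphertexts
        return (c * e(k)) % n

    x = 42
    assert e(f(x)) == F(e(x))      # the two routes commute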


That is effectively a proof of work scheme, which is what most time lock systems are based on. The fundamental problem is that people with more resources can decrypt faster than people with less resources, and you're generally wanting to share with someone who has less not more.


You make a good point, but that's also a shortcoming of all of the purely computational techniques. I think the most you can hope for is 'democratizing' the computational methods, i.e. making them non-parallelizable.

Taking gwern's idea, what if the homomorphic computation is just incrementing your input repeatedly inside of a loop? It seems like that might be hard to parallelize. Or at least, I don't know enough to assume otherwise.


gwern, if you're reading, this section is misleading if not wrong ...:

"But that doesn’t seem very true any more. Devices can differ dramatically now even in the same computers; to take the example of Bitcoin mining, my laptop’s CPU can search for hashes at 4k/sec, or its GPU can search at 54m/second."

This is an example of parallelism and parallelism only.


Are you implying that GPUs execute each hash as slowly as a CPU and are better at hashing simply because they have more processing elements? I knew GPUs had a lot of small cores, but I was unaware that mine had 54000000 / 4000 = 13500 cores.


More or less, yes, that is my implication. Luckily, my sibling comment provides some extra information.

For the example of SHA-1 computation, you mention using FPGAs that finish in 400 clock cycles, which is at most an order of magnitude away from a naive CPU implementation of around 4000 clock cycles. I'm not as familiar with SHA-256.


I think you were using a bad algorithm on your CPU. Assuming any half-recent x86 processor you should be looking at a number like 4 Mhash/second, not 4K.


Under the third section "hashing" it says:

>For example, one could take a hash like bcrypt, give it a random input, and hash it for a month. Each hash depends on the previous hash, and there’s no way to skip from the first hash to the trillionth hash. After a month, you use the final hash as the encryption key, and then release the encrypted file and the random input to all the world. The first person who wants to decrypt the file has no choice but to redo the trillion hashes in order to get the same encryption key you used.

Then it lists this downside:

>"This is pretty clever. If one has a thousand CPUs handy, one can store up 3 years’ of computation-resistance in just a day. This satisfies a number of needs. But what about people who only have a normal computer? Fundamentally, this repeated hashing requires you to put in as much computation as you want your public to expend reproducing the computation, which is not enough. We want to force the public to expend more computation - potentially much more - than we put in. How can we do this?

>It’s hard to see. At least, I haven’t thought of anything clever"

I have! Rather than hashing n as your seed, after finding n (which you will still release) hash (n+m) instead, where m is a random number in the range (for example) 0 to 100. Discard m; do not retain it after you've started the hashes. Release only n. Now the unlocker still has to start at n, and when they find that m wasn't 0, they have to try all over again, hashing n+1 a trillion times; then when they find that's not a good key either, they have to try hashing n+2 a trillion times, and so on, until they've brute-forced n+m as the correct initial seed for the hash process. I.e., you make them brute-force what m must have been.

If m's range is ten, someone would have to repeat your process up to ten times (an average of 5 times) before they found the seed.

Likewise, if m's range is large, like 1,000,000, the force multiplier is 1,000,000x as much work as you had to do in the worst case, and 500,000x as much work on average.

I've emailed the author this suggestion.
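A quick sketch of the suggestion (toy chain length and offset range; in the real thing the chain would be the trillion hashes and the check would be an actual decryption attempt):

    import hashlib, secrets

    CHAIN_LEN = 50_000     # stands in for the "trillion hashes"
    M_RANGE = 20           # range of the secret offset m

    def chain(start: int, length: int) -> bytes:
        h = start.to_bytes(16, "big")
        for _ in range(length):
            h = hashlib.sha256(h).digest()
        return h

    def lock():
        n = secrets.randbits(64)
        m = secrets.randbelow(M_RANGE)       # discarded after use
        key = chain(n + m, CHAIN_LEN)        # encrypt the file with this
        return n, key                        # release n (and the ciphertext), never m

    def unlock(n: int, looks_right) -> bytes:
        # Each candidate chain is strictly sequential, but different candidate
        # offsets can be searched in parallel, which is the weakness noted in
        # the replies below.
        for m in range(M_RANGE):
            candidate = chain(n + m, CHAIN_LEN)
            if looks_right(candidate):
                return candidate
        raise ValueError("offset range exhausted")

    n, key = lock()
    assert unlock(n, lambda k: k == key) == key   # stand-in for "does it decrypt?"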


This has the same drawbacks as using a small random key, which the article discusses later on:

- They might find the key earlier or later in their search (it adds volatility to the amount of time required).

- Searching for the right value of m is highly parallelizable.


Interesting counter-argument. But you can still use the "trillion hashes" as a limit on how parallelizable it is, and then use m to increase the average amount of work done (which, however, can be done in parallel). You are right that this increases unpredictability. You can trade off between the figure of a "trillion" hashes and the range of m, to strike a balance between how much work you have to do to compute it and how predictable and non-parallelizable the unlocking work will be.

E.g. you can work for an hour and increase the unlock work by a factor of 10,000, but the ten-thousandfold part will be parallelizable and slightly unpredictable; or you can work for 1,000 hours (41 days) and increase the work by a factor of just 100 in the worst case, but that increase will be parallelizable and the unlocker might get lucky.

So you can really balance how much work you're doing against the parallelizability/predictability of the reversal.


Make your value of m different each time in the hash, and then choose a distribution of m's such that you get your desired output time at least probabilistically. You can balance parallelizability and the level of probabilistic trust you need by setting the distribution of m and the number of hashes.


[deleted]


Isn't that exactly what the article said? or did I misunderstand?


(oops, yes, I didn't read it very carefully)


Correct me if I am wrong, but this assumes that processor speed will not increase dramatically over the lock period, no?

Seems to me that if one was to be storing information over an extended period of time, say 20 years, as time goes on, it becomes more and more likely that the encryption can still be broken sooner than desired.


Rivest's puzzle (1999) http://people.csail.mit.edu/rivest/lcs35-puzzle-description.... assumes that we're all using 10 GHz processors in 2012.

> The value of t was chosen to take into consideration the growth in computational power due to "Moore's Law". Based on the SEMATECH National Technology Roadmap for Semiconductors (1997 edition), we can expect internal chip speeds to increase by a factor of approximately 13 overall up to 2012, when the clock rates reach about 10GHz. After that improvements seem more difficult, but we estimate that another factor of five might be achievable by 2034. Thus, the overall rate of computation should go through approximately six doublings by 2034.

I asked about progress on this puzzle at stack exchange here http://crypto.stackexchange.com/questions/5831/what-is-the-p... and got some nice answers.


Disclaimer: I have no idea what factors are relevant for cryptographic functions.

In 1999 they had the single-core Coppermine 1133 @ 1.1 GHz (~2 GFLOPS); in 2012 we had the quad-core Sandy Bridge 3970 @ 3.4 GHz (>100 GFLOPS). So at least according to one measure the factor of increase is more like 50 than 13.


Maybe it's just because it's Monday, but besides the Julian Assange example and the time capsule, I can't think of another use case for a time-lock crypto puzzle. Anyone?


Court proceedings that would be sealed for 100 years. Secret material, say military records, that need to be secure in the present, but are important historically at some point. You could provide a declassification schedule this way.

You could do a delayed form of historical whistleblowing or confession, so that it doesn't cause you problems today, but history can know what really happened.

Perhaps providing the equivalent of a 'sealed envelope' to prove that something was known or happened on a given date, without having to be present or active to prove it.

It's fun to think of a puzzle, but presuming it was truly delivering on the safety promise, I can see quite a few uses. The real schemes though depend on a lot of varying factors, so in cases where the secrecy was critical, you can see why people wouldn't use it.


According to FutureCrypt.com, time-release encryption can be used for Sealed Bids, Competitive Proposals, and Press Releases. So things like sealed bids for government RFPs could be transmitted electronically rather than "sealed" in a FedEx envelope. Full disclosure: I own FutureCrypt.


Even Assange's example is suspect. The goal is not to release the documents eventually but to release the documents if he dies under suspicious circumstances. That is not really something that can be done with cryptography.

Here is one possible use case: imagine an offline digital cash system, i.e. one where the bank will not accept the same token twice. To protect against an unscrupulous seller, the buyer pays by giving the seller a time-lock puzzle containing the tokens; if the goods are not delivered by some deadline, the buyer deposits the tokens at the bank, thus preventing the seller from doing so. Otherwise the seller solves the puzzle and makes the deposit. This is basically an anonymity-preserving escrow service, though in practice there are probably simpler approaches.


I am sure you read about the early-access scandal at Reuters [0], which the quants found especially spooky as they (by definition) weren't first. So releasing high-value financial information this way is my first idea for a use case. Clarification: it seems Reuters intentionally sold the info to high value customers early. But this is an even better reason for the information source to release their info this way.

[0] http://qz.com/92786/more-evidence-that-data-thomson-reuters-...


A trust for your children that you don't want them to be able to access until they reach a certain age.


If you want to leave something that you are sure won't be accessed before your death (and you don't want to have to trust some third party). Although I guess it might fit the definition of "time capsule".



It's a really beautiful, easy-to-understand, ingenious algorithm, and it perfectly meets its design goals. It works very analogously to the following:

Pretend your secret is an integer. You want to distribute clues as to your secret integer to N of your friends such that any K of them can collude to figure it out, but K-1 of them can't figure out anything about it at all.

So you construct a (K-1)-degree polynomial of one variable, f(x). All of the coefficients of f(x) are random, except the y-intercept (i.e. the constant coefficient), which you choose to be your secret. Then calculate and distribute the numbers f(1), f(2), ..., f(N) to your N friends.

K points on the 2D co-ordinate plane uniquely identify your single (K-1)-degree polynomial, which will have your specific secret y-intercept. However (K-1) points will pick out an entire family of potential (K-1)-degree polynomials. And, in fact, for every single possible secret you could have chosen, there's a (K-1)-degree polynomial that goes through (K-1) points and the possible secret value. So, (K-1) of your friends colluding really don't have any additional information at all about your secret.

Shamir's secret sharing works just like that, but done with integers modulo a prime. (And the prime has to be larger than your secret.)
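For concreteness, a minimal sketch of exactly that, evaluating the polynomial modulo a prime and recovering the secret by Lagrange interpolation at x = 0 (the prime and parameters are arbitrary example choices):

    import secrets

    PRIME = 2**127 - 1    # a prime larger than the secret, as required

    def make_shares(secret: int, k: int, n: int):
        # f is a random (k-1)-degree polynomial with f(0) = secret.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the constant term, i.e. the secret.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = make_shares(secret=123456789, k=3, n=5)
    assert recover(shares[:3]) == 123456789    # any 3 of the 5 shares work
    assert recover(shares[2:]) == 123456789
    # (and any 2 shares are consistent with every possible secret)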


This sounds like the ideal opportunity to have a map tattooed to our scalps like old fashioned pirates. Or to the back of some kid (like on Waterworld).


Possible variant on this scheme to make it harder to estimate the amount of time required: Don't use a fixed number of hash iterations. Instead, use a bitcoin-ish scheme like: "the key to this file is given by hashing 'xxxx' until the hash's bottom 8 bits are 0"


The problem here is that the amount of work needed to generate the key is not known beforehand either.

I think a better approach (that would also not leak bits of the key) would be to say "hash until the hash of the hash is xxx". So basically you hash until you get xxx and then take the previous iteration's result as a key.
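That refinement might look something like this sketch (SHA-256, toy iteration count):

    import hashlib, secrets

    def H(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def lock(seed: bytes, iterations: int):
        key = seed
        for _ in range(iterations):
            key = H(key)
        return key, H(key)    # encrypt with `key`; publish seed and the stop value H(key)

    def unlock(seed: bytes, stop: bytes) -> bytes:
        # Hash until the *next* value equals the published stop value; the current
        # value is then the key. The stop value is one-way, so it doesn't leak the key.
        cur, nxt = seed, H(seed)
        while nxt != stop:
            cur, nxt = nxt, H(nxt)
        return cur

    seed = secrets.token_bytes(32)
    key, stop = lock(seed, 100_000)
    assert unlock(seed, stop) == key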


Given a sequence of N hashes, you could select any hash A as the initial hash, any hash B as the key, and the successor of B (C) as the key to the key.

After choosing B and C, choose an A whose distance to B and C is proportional to the amount of work you wish to require.


Your "given a sequence of N hashes" is a bit of a deus ex machina here :)


Create it using 50% of your lifetime. Or create one for your children.


This should still be fairly parallelizable, shouldn't it?

You go ahead and take a number of crackers. Each cracker starts with a random hash and re-hashes it a fixed number of times, or until the bottom 8 bits of the hash are 0. Once it is done hashing, it goes to a central database and stores its hash sequence. After that, it picks another hash which is not contained in an existing hash sequence and repeats the process.

Finally, you'd probably have one node just doing the boring hash iteration while checking whether it can skip ahead according to the generated hash sequences in the database. I haven't done the math on how much of an improvement this would be, but I would suspect a pretty decent one given enough nodes.


I don't see how this would work. What key is the encryptor supposed to use to encrypt?


1. Generate a random number.
2. s- or bcrypt it.
3. Encrypt the data with the result of step 2.
4. Give people most of the random number, minus N bits at the end. (And tell them how many bits they are missing.)

They will have to run s/bcrypt up to 2^N times (2^(N-1) on average) to get your data. Am I missing anything?
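As a sketch of the scheme, with a cheap iterated SHA-256 standing in for scrypt/bcrypt and toy parameters (all names made up):

    import hashlib, secrets

    N_HIDDEN = 12                          # bits withheld from the random number

    def slow_hash(x: int) -> bytes:
        # stand-in for scrypt/bcrypt
        h = x.to_bytes(16, "big")
        for _ in range(200):
            h = hashlib.sha256(h).digest()
        return h

    def lock():
        r = secrets.randbits(64)
        key = slow_hash(r)                 # steps 1-3: derive the encryption key
        partial = r >> N_HIDDEN            # step 4: reveal all but the low N bits
        return partial, key

    def unlock(partial: int, looks_right) -> bytes:
        # Brute-force the withheld bits: about 2^(N-1) slow hashes on average,
        # and (as the reply notes) trivially parallelizable.
        for low in range(1 << N_HIDDEN):
            candidate = slow_hash((partial << N_HIDDEN) | low)
            if looks_right(candidate):
                return candidate
        raise ValueError("search space exhausted")

    partial, key = lock()
    assert unlock(partial, lambda k: k == key) == key   # stand-in for "does it decrypt?"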


The issue is that you can parallelize the "unlocking" operation. So, if one person is attempting to unlock, they solve in 10 years. If two people attempt, it's solved in 5 years. If the NSA puts all their computers on it, it takes 1 second.

The author is proposing methods that are forcibly sequential.


Schedule a google calendar event in the future.

Set up an email alert so it sends the key to an email.

This assumes google isn't going to discontinue its calendar service.


Might be an unfortunate assumption.


You're also trusting Google here, as your key must be accessible by google calendar's event in order to send.


So? To google it looks like a random character sequence. They don't know what it is. It can even be encrypted, if you're that paranoid.



