Time-Lock Encryption (2011) (gwern.net)
79 points by jeffreyrogers on Oct 13, 2014 | 28 comments



I'm curious why some people think that something like this would even be possible in principle.

My first intuition was that this is impossible (at least with math alone). I have no (proper) proof, however.

Ultimately anything could be infinitely parallelized by brute-forcing the answer itself. If we put the answer safely out of reach of brute forcing, then all that remains is some number of methods of actually calculating it. Those methods can either be arduous but applicable by everyone, or fast but requiring some pre-known secret.

For a puzzle solvable by more than one arduous method, it is to be expected that ultimately everyone would use the fastest of those - the author and the public alike, since the methods would be applicable by everyone.

For a puzzle solvable by a fast method requiring some secret, the solution would always be brute-forceable on that secret, and would thus have a wide range of times in which it could be solved, depending on the level of parallelism.

The only way out I see here is to construct a puzzle with more than one arduous method - one of them obviously faster than the other - but to publish only one of the methods and use the other yourself to calculate the time-locked data. This may or may not be possible, but I don't think it's something that can be automated, since it requires creative input: a new kind of puzzle would need to be created for every new piece (or at least every new author) of time-locked data, since otherwise the less-arduous method would become public knowledge too. (And of course there's always the risk of other smart people searching for, and eventually discovering, that less-arduous method before the desired timespan has elapsed.)


> My first intuition was that this is impossible

Isn't that the case for so many problems in cryptography, though? DH key exchange, zero-knowledge proofs, secure multi-party computation, fully homomorphic encryption - all achieve things you could be forgiven for assuming were impossible.


Of course it's possible to try all possible keys, and by definition that's parallelizable, but the idea is that doing so should be computationally infeasible. Let me give an example that should be impossible according to your reasoning: imagine I (magically) had a method that allowed one to disclose information that would let someone figure out an RSA private key after two years. That would clearly satisfy gwern's original requirements - it's easy to encrypt my data, publish the public key, and publish the "magic hint". Yes, an attacker could just try all possible RSA private keys in parallel - but that's not a realistic attack.


Did you read the part of the article about forcing serialisation of a parallel task by using hash chaining?

That seems to provide a good way of making the encryption process fast but decryption very slow.
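
For concreteness, here is a minimal sketch of that idea in Python (the parameters are illustrative, not taken from the essay): the encryption key is derived by iterating SHA-256 from a published seed, so anyone can recover it, but only by doing the hashes one after another.

    import hashlib
    import os

    def timelock_key(seed: bytes, n: int) -> bytes:
        # Each hash depends on the previous output, so the n steps
        # cannot be parallelized: more machines don't help, only a
        # faster single core does.
        h = seed
        for _ in range(n):
            h = hashlib.sha256(h).digest()
        return h

    seed = os.urandom(32)
    key = timelock_key(seed, 50_000_000)  # choose n to match the target delay
    # Encrypt the data under `key`, then publish (seed, n, ciphertext).

Note that in this naive form the encryptor pays the same serial cost as the decryptor; the essay discusses refinements along the lines of building many short chains in parallel and linking them (each chain's output encrypting the next chain's seed), so that encryption parallelizes while decryption stays sequential.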


On a related note, are there public time servers out there that sign their timestamps cryptographically?

i.e. are there sources of temporal truth which cannot be mocked or imitated, such that, so long as you have their public key, you can ping them for the time and confirm that the timestamp they sent you was signed by them?


This is still susceptible to a man-in-the-middle attack, however. A malicious man-in-the-middle could simply replay an old timestamp which was already validly signed. The better alternative would be to require the time server to return a signed timestamp plus a client-supplied challenge, to prove that the response was freshly produced by the actual server. Unfortunately, this would incur computational cost on the part of the time server, which may make such a scheme impractical.
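
A sketch of that challenge-response, using Ed25519 via the Python "cryptography" package (the function and variable names here are made up for illustration): the client supplies a random nonce, and the server signs timestamp || nonce, so a recorded old response fails verification against a fresh nonce.

    import os, struct, time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    server_key = ed25519.Ed25519PrivateKey.generate()
    server_pub = server_key.public_key()  # known to clients in advance

    def server_respond(nonce: bytes):
        # Sign the current time together with the client's nonce, so a
        # replayed old response won't verify against a fresh nonce.
        ts = struct.pack(">Q", int(time.time()))
        return ts, server_key.sign(ts + nonce)

    def client_verify(nonce: bytes, ts: bytes, sig: bytes) -> int:
        server_pub.verify(sig, ts + nonce)  # raises InvalidSignature if forged/replayed
        return struct.unpack(">Q", ts)[0]

    nonce = os.urandom(16)
    ts, sig = server_respond(nonce)
    print(client_verify(nonce, ts, sig))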


Some TSAs (e.g. GeoTrust) support TLS, which solves this problem. Surprisingly (given that they sell a product for exactly this), not many others seem to...


Pretty much.

It's used extensively in code signing and document signing. If a code-signed file has a signed timestamp, it will continue to be treated as valid after the certificate's expiration, so long as the CA doesn't mark your certificate for lifetime signing (really only StartCom Class 2).

If you open Adobe Reader, you can sign PDF files: just click the second tab on the right, then Certificates, then Timestamp. You can use http://timestamp.digicert.com/ as the TSA (time stamping authority) for free.


NTP does take integrity checking into account [0], though due to export restrictions and regulations the methods have varied over the years and are non-ideal. Specifically, the risk of MITM attacks/abuse is acknowledged as is the risk of DoS via the increased computational overhead of crypto [1].

[0] http://www.ntp.org/ntpfaq/NTP-s-algo-crypt.htm

[1] http://en.m.wikipedia.org/wiki/Network_Time_Protocol#Securit...


Not plain timestamps, but there is such a thing as a trusted Time Stamping Authority, which will sign a document hash + timestamp to prove that information existed before a certain date. http://en.wikipedia.org/wiki/Trusted_timestamping


Previous discussion from a little more than a year ago: https://news.ycombinator.com/item?id=6508179


Indeed. As others have noted, one of Gwern's many works reaches the front page of HN every few months - usually for its second or third time. Rather than finding this an annoyance, I enjoy these repeat postings; the ideas often represent ongoing community conversations (or spark new ones).

If this topic is of interest to you, then you might also enjoy this thread [0] from June, in addition to its home on GitHub [1].

[0] https://news.ycombinator.com/item?id=7847687

[1] https://github.com/petertodd/timelock


I wasn't aware of the previous discussion; however, it's not surprising to me that this has shown up before.

Part of the reason I posted this is that a somewhat related article from the New York Times on decentralized computing made the front page today. I figured that HN readers interested in that might be interested in this essay as well. Plus, they might know of further research that can help me learn more about the topic :)


I see that several suggestions involve using the Bitcoin blockchain to provide a fixed time delay (since blocks are assigned a varying difficulty so that only 'x' coins are produced per hour).

However, IMO a much more interesting idea is the reverse of this: use time-lock encryption in Bitcoin itself. If we could settle on a mechanism that is neither CPU-bound nor memory-bound, then suddenly Bitcoin could be billions of times more efficient. We wouldn't need thousands of miners constantly hashing the blockchain to solve an arbitrarily difficult problem. Instead, the time lock could be used to ensure only 'n' bitcoins are mined per hour, just like now.

This would save all the wasted CPU time (and considerable power costs) that Bitcoin currently relies upon.


You may want to reread how Bitcoin actually works, especially about transactions and the proof-of-work concept.

The "wasted CPU time" is Bitcoin is not about mining, but about signing transactions. Sure, it makes it constantly harder for miners, but it also makes it constantly harder for attackers to insert fake transactions (that could make real transactions appear to have never happened, opening the possibility for all types of fraud).

This is what all that "wasted" CPU power is about. Increasing the "efficiency" of Bitcoin would destroy this security mechanism, making it almost trivial for attackers to game the system.


Not at all. An artificial difficulty is added to the hashing and signing to ensure that only a certain number of bitcoins are mined per hour.

The security in the protocol comes from the distributed effort whereby everyone verifies other people's hashing. The difficulty level does not make it more or less secure.
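
For reference, Bitcoin's retarget rule works roughly like this (a simplification; the real code operates on compact targets, but the 2016-block period, ten-minute spacing, and 4x clamp are accurate):

    TARGET_SPACING = 10 * 60                     # desired seconds per block
    RETARGET_BLOCKS = 2016                       # blocks between adjustments
    EXPECTED = TARGET_SPACING * RETARGET_BLOCKS  # two weeks

    def retarget(old_difficulty: float, actual_seconds: int) -> float:
        # Clamp so difficulty never moves more than 4x per period.
        actual = min(max(actual_seconds, EXPECTED // 4), EXPECTED * 4)
        # Blocks found too quickly => actual < EXPECTED => difficulty rises.
        return old_difficulty * EXPECTED / actual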


Having all that hashing going on is what makes the "distributed verification" meaningful. An attacker can compromise the bitcoin network if they control 51% of the overall hashrate. So if you lower that hashrate, the network becomes more vulnerable.

The ratio of hashrate to mining reward is arbitrary - a matter of monetary policy (one reason I believe Dogecoin is a much more viable long-term currency) - and we could (if there were a network consensus) adjust it to give people more or fewer bitcoins for the same amount of hashing work. But there's no way to "save all the wasted CPU time (and considerable power costs)". Security relies on the network doing more hashing than any attacker can.


It still doesn't make it more vulnerable.

If the hashing is made more efficient, that doesn't make it simpler for an attacker to grab a bigger % of the total hashing. After all, it's easier for everyone, so everyone can do more.

(In fact, if you had a bitcoin system whereby mining didn't use 100% cpu, I'd guess that more people would mine, since you could use miners for other purposes and not be forced to buy a dedicated miner box.)

I'm not talking about the mining reward either; that's irrelevant. If the reward for a mined block is 10 bitcoins, and the difficulty level is such that each block takes 5 minutes to mine (n.b. numbers made up, no idea what the current values are), you could drop the difficulty so that a block takes 5 seconds to mine, but is time-locked so that the next block can still only begin to be mined after 5 minutes.

End result: Miners race to mine a block, then are sitting idle until the next block time-unlocks. We still have the same problem for the attacker; they still have to take over 51% of the hashrate but as they can't defeat the timelock they have no way to improve their position.
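
A sketch of what that hypothetical mining loop might look like (the time-lock primitive is the unsolved part; it is stubbed here with the serial hash chain sketched upthread, which a real scheme could not rely on):

    import hashlib

    def mine_block(prev: bytes, txs: bytes, difficulty_bits: int) -> bytes:
        # Deliberately cheap proof of work: findable in seconds.
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            h = hashlib.sha256(prev + txs + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(h, "big") < target:
                return h
            nonce += 1

    def solve_timelock(block: bytes, iters: int) -> bytes:
        # Stand-in only: a serial hash chain. A real scheme would need a
        # delay that can't be shortened by faster or more parallel
        # hardware, which is the open problem this thread is about.
        h = block
        for _ in range(iters):
            h = hashlib.sha256(h).digest()
        return h

    def chain_step(prev: bytes, txs: bytes):
        block = mine_block(prev, txs, difficulty_bits=20)  # seconds of hashing
        token = solve_timelock(block, iters=10_000_000)    # minutes of pacing
        return block, token  # the next block must reference `token`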


> If the hashing is made more efficient, that doesn't make it simpler for an attacker to grab a bigger % of the total hashing. After all, it's easier for everyone, so everyone can do more.

Sure, but "more efficient" doesn't mean anything when the whole point is to burn CPU cycles. If you figured out a clever algorithm to do hashing 2x as fast, that would just mean everyone did 2x as many hashes, burning just as much power as before.

> End result: Miners race to mine a block, then are sitting idle until the next block time-unlocks. We still have the same problem for the attacker; they still have to take over 51% of the hashrate but as they can't defeat the timelock they have no way to improve their position.

I guess that works, but you're assuming a magical timelock that can be synchronized to the millisecond across the distributed bitcoin network, and can't be sped up by applying more computing power. That kind of timelock seems very implausible; I can't imagine how one would even begin to go about implementing such a thing (given that the whole point of bitcoin is not to rely on any centralized authority).


> Sure, but "more efficient" doesn't mean anything when the whole point is to burn CPU cycles. If you figured out a clever algorithm to do hashing 2x as fast, that would just mean everyone did 2x as many hashes, burning just as much power as before.

By more efficient, I'm talking about being able to force all clients to not work 100% of the time by way of the timelock, not about hashing 2x or 3x as fast.

> That kind of timelock seems very implausible; I can't imagine how one would even begin to go about implementing such a thing

Nor can I; however, that was the subject of the original article we're all commenting on, which discusses possible solutions (none, unfortunately, up to scratch). If such a timelock could exist, then you could reduce the CPU load of Bitcoin down to minimal levels without affecting the currency.


> By more efficient, I'm talking about being able to force all clients to not work 100% of the time by way of the timelock, not about hashing 2x or 3x as fast.

You can't expect an attacker to play by your rules. There's nothing preventing an attacker from putting all their CPU power against the network.

> Nor can I; however that was the subject of the original article that we're all commenting on, it discusses possible solutions to this (none unfortunately are up to scratch). If such a timelock could exist, then you could reduce the CPU load of bitcoin down to minimal levels without affecting the currency.

If such a timelock could exist, you wouldn't need Bitcoin at all. You could create an entirely new type of network that doesn't need any CPU load at all.

But as of now, the only secure timestamping that doesn't rely on trusting some central authority ... well, that's the proof-of-work concept of Bitcoin. And it works by replacing "physical items" with CPU power (or more exactly, by using energy rather than matter). If you found a third way, that would be great, but it would still have to depend on some physical resource that is limited (and is thus "wasted" for the network's purpose).


So, I'm not entirely sure what the point of the (2011) tag is, but my understanding is that these essays are consistently and regularly updated and kept current. How much of a regularly maintained scholarly work needs to change before the year changes?


Well, for one, the first sentence references a file that has since been decrypted (IIRC), so it is useful for the reader to understand at what point in the timeline of events this piece was initially written, even if the technical aspects are consistently updated.


Wikipedia (not the greatest source, I'll admit) says none of the three insurance files have been cracked.


Maybe (2011, 2014) then, or something similar?


I gave a lightning talk at ToorCamp 2014 on an approach I thought up. Need to properly blogify it, but slides are here: https://docs.google.com/presentation/d/1sP88HzQIaezyQMOkQf6X...

The idea is that the security rests inside a smart card: you have a key protected by the card, and in order to release it, you prove that a sequence of bitcoin blocks has passed.
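
A sketch of the verification such a card could perform, assuming it is fed standard 80-byte Bitcoin block headers (the fixed minimum target here is my simplification; modeling difficulty over time is the talk's open question, as discussed below):

    import hashlib

    def block_hash(header: bytes) -> bytes:
        # Bitcoin hashes the 80-byte header with double SHA-256.
        return hashlib.sha256(hashlib.sha256(header).digest()).digest()

    def verify_chain(headers: list, start: bytes, min_target: int) -> bool:
        # Each header must extend the previous block and carry enough
        # proof of work that forging the sequence is expensive.
        prev = start
        for h in headers:
            if len(h) != 80 or h[4:36] != prev:  # prev-block hash field
                return False
            digest = block_hash(h)
            if int.from_bytes(digest, "little") > min_target:
                return False  # not enough work
            prev = digest
        return True

    # If verify_chain succeeds for, say, 52560 headers past the card's
    # stored checkpoint (~1 year of blocks), the card releases the key.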


Wouldn't it be possible to fool this by forking the blockchain immediately after the first difficulty adjustment after the last remembered block, and then forging timestamps to force the difficulty to go down? You'd need a pretty substantial amount of hashing power to generate the next 2016 blocks, but it doesn't need to be a 51% attack. Over a long enough time scale it ought to be feasible to fool this.

Of course, if you're syncing back up on a reasonably regular basis this won't be practical. But for 'time capsule' applications this ought to work.


I sort of touch on that in the slides. If the smart card didn't take difficulty into account, you could bypass it simply by generating a bunch of low-difficulty blocks, with minimal effort.

You would also need to model how much difficulty each proof of work should have. It's a risk: some major bitcoin collapse could leave you with effectively no ability to decrypt your saved package. The more accurately you can model how much difficulty there will be over the time period, the more secure the system will be; the more lenient you are, the less power an attacker needs.

Even 5% of the bitcoin network's hashing power, sustained for the life of the secret, would represent a substantial amount of computing power.



