Time-lock encryption

How do you encrypt a file such that it can be decrypted after a date, but not before? Use serial computations for proof-of-work using successive squaring, chained hashes, or witness encryption on blockchains.
computer-science, cryptography, Bitcoin, shell
2011-05-24–2019-05-06 · finished · certainty: highly likely · importance: 8

In cryp­tog­ra­phy, it is easy to adjust encryp­tion of data so that one, some, or all peo­ple can decrypt it, or some com­bi­na­tion there­of. It is not so easy to achieve adjustable decrypt­abil­ity over time, a “time-lock crypto”: for some uses (data escrow, leak­ing, insur­ance, last-re­sort Bit­coin back­ups etc), one wants data which is dis­trib­uted only after a cer­tain point in time.

I sur­vey tech­niques for time-lock cryp­to. Pro­pos­als often resort to trust­ed-third-par­ties, which are vul­ner­a­bil­i­ties. A bet­ter time-lock crypto pro­posal replaces trust­ed-third-par­ties with forcibly ser­ial proof-of-work using num­ber squar­ing and guar­an­tee­ing unlock­ing not after a cer­tain point in time but after suffi­cient com­pu­ta­tion-time has been spent; it’s unclear how well num­ber-squar­ing resists opti­miza­tion or short­cuts. I sug­gest a new time-lock crypto based on chained hash­es; hashes have been heav­ily attacked for other pur­pos­es, and may be safer than num­ber-squar­ing. Final­ly, I cover obfus­ca­tion & wit­ness-en­cryp­tion which, com­bined with proof-of-work, can be said to solve time-lock crypto but cur­rently remain infea­si­ble.

Julian Assange & Wikileaks made headlines in 2010 when they released an “insurance file”, a 1.4GB encrypted file available through BitTorrent. It’s generally assumed that copies of the encryption key have been left with Wikileaks supporters who will, in the appropriate contingency like Assange being assassinated, leak the key online to the thousands of downloaders of the insurance file, who will then read and publicize whatever contents are in it (speculated to be additional US documents Manning gave Wikileaks); this way, if the worst happens and Wikileaks cannot afford to keep digesting the files to eventually release them at its leisure in the way it calculates will have the most impact, the files will still be released and someone will be very unhappy.

Of course, this is an all-or-noth­ing strat­e­gy. Wik­ileaks has no guar­an­tees that the file will not be released pre­ma­ture­ly, nor guar­an­tees that it will even­tu­ally be released. Any one of those Wik­ileaks sup­port­ers could become dis­affected and leak the key at any time—or if there’s only 1 sup­port­er, they might lose the key to a glitch or become dis­affected in the oppo­site direc­tion and refuse to trans­mit the key to any­one. (Hope Wik­ileaks kept back­ups of the key!) If one trusts the per­son with the key absolutely, that’s fine. But would­n’t it be nice if one did­n’t have to trust another per­son like that? Cryp­tog­ra­phy does really well at elim­i­nat­ing the need to trust oth­ers, so maybe there’re bet­ter schemes.

Now, it’s hard to imag­ine how some abstract math could observe an assas­si­na­tion and decrypt embar­rass­ing files. Per­haps a differ­ent ques­tion could be answered—­can you design an encryp­tion scheme which requires no trusted par­ties but can only be bro­ken after a cer­tain date?


This sort of cryp­tog­ra­phy would be use­ful for many things; from “Time-Lock Puz­zles in the Ran­dom Ora­cle Model” (Mah­moody et al 2011):

In addition to the basic use of ‘sending messages to the future’, there are many other potential uses of timed-release crypto. Rivest, Shamir and Wagner 1996 suggest, among other uses, delayed payments, sealed-bid auctions and key escrow. Boneh & Naor 2000 define timed commitments and timed signatures and show that they can be used for fair contract signing, honesty-preserving auctions and more.

Censorship resistance in cryptocurrencies (self-executing hash-then-commit), document embargoes (eg. legal or classified documents or confessions or coercible diaries, or security researchers who are often threatened), receipt-free voting, and other applications; a cute (albeit useless) application is “Offline Submission with RSA Time-Lock Puzzles”, Jerschow & Mauve 2010:

Our main con­tri­bu­tion is an offline sub­mis­sion pro­to­col which enables an author being cur­rently offline to com­mit to his doc­u­ment before the dead­line by con­tin­u­ously solv­ing an RSA puz­zle based on that doc­u­ment. When regain­ing Inter­net con­nec­tiv­i­ty, he sub­mits his doc­u­ment along with the puz­zle solu­tion which is a proof for the timely com­ple­tion of the doc­u­ment.

One could also lock away files from oneself: seal one’s diary for a long time, or use it to enforce computer time-outs (change the password and require X minutes to decrypt it). When Ross Ulbricht/Dread Pirate Roberts of Silk Road was arrested in October 2013 and his computer with his Bitcoin hoard seized, I thought of another use: conditional transfers of wealth. Ulbricht could create a time-locked copy of his bitcoins and give it to a friend. Because bitcoins can be transferred, he can at any time use his own unlocked copy to render the time-locked copy useless, and so doesn’t have to worry about the friend stealing all the bitcoins from him; but if he is, say, “shot resisting arrest”, his friend can still, eventually, recover the coins. This specific Bitcoin scenario may be possible with the Bitcoin protocol using nLockTime, in which case one has a nice offline digital cash system, as betterunix suggests:

Here is one possible use case: imagine an offline digital cash system, ie. one where the bank will not accept the same token twice. To protect against an unscrupulous seller, the buyer pays by giving the seller a time-lock puzzle with the tokens; if the goods are not delivered by some deadline, the buyer will deposit the tokens at the bank, thus preventing the seller from doing so. Otherwise the seller solves the puzzle and makes the deposit. This is basically an anonymity-preserving escrow service, though in practice there are probably simpler approaches.

This use generalizes beyond darknet markets to all services or entities holding Bitcoins: the service can provide users with a time-locked version of the private keys, and regularly move its funds; if the service ever goes down, the users can retrieve their funds. One not-so-cute use is in defeating antivirus scanners. “Anti-Emulation Through Time-Lock Puzzles”, Ebringer 2008 outlines it: one starts a program with a small time-lock puzzle which must be solved before the program does anything evil, in the hopes that the antivirus scanner will give up or stop watching before the puzzle has been solved and the program decrypts the evil payload; the puzzle’s math backing means no antivirus software can analyze or solve the puzzle first. The basic functionality cannot be blacklisted as it is used by legitimate cryptography software, which would be expensive collateral damage.

No trusted third-parties

Note that this bars a lot of the usual suggestions for cryptography schemes. For example, the general approach of secret sharing (eg. Bellare & Goldwasser 1996): if you trust some people, you can just adopt a protocol where they XOR together their keys to get the master key for the publicly distributed encrypted file. Or if you only trust some of those people (but are unsure which will try to betray you and either release early or late), you can adopt a k-of-n threshold scheme where k of the n people suffice to reconstruct the master key, like Rabin & Thorpe 2006. (And you can connect multiple groups, so each decrypts some necessary keys for the next group; but this gives each group a consecutive veto on release…) Or perhaps something could be devised based on trusted timestamping like Crescenzo et al 1999 or Blake & Chan 2004; but then don’t you need the server to survive on the network? A variant on trusted timestamping, Intel would like to suggest, might be outsourcing trust to a “Proof of Elapsed Time” mechanism implemented using Intel’s tamper-proof hardware module SGX; too bad that SGX has long had security issues. Or secure multi-party computation (but don’t you need everyone to stay on the network, or risk all the parties saying ‘screw it, we’re too impatient, let’s just pool our secrets and decrypt the file now’?), or you could exploit physics and use the speed of light to communicate with a remote computer on a spacecraft (except now we’re trusting the spacecraft as our third party, hoping no one stole its onboard private key & is able to decrypt our transmissions instantaneously)…

One approach is to focus on creating problems which can be solved with a large but precise amount of work, reasoning that if the problem can’t be solved in less than a month, then you can use that as a way to guarantee the file can’t be decrypted within a month’s time. (This would be a proof-of-work scheme.) This has its own problems, but it at least delivers what it promises.

One could encrypt the file against information that will be known in the future, like stock prices; except wait, how can you find out what stock prices will be a year from now? You can’t use anything that is public knowledge now because that’d let the file be decrypted immediately, and by definition you don’t have access to information currently unknown but which will be known in the future; and if you generate the information yourself, planning to release it, now you have problems: you can’t even trust yourself (what if you are abruptly assassinated?), much less your confederates.

Successive squaring

At this point, let’s see what the crypto experts have to say. Googling to see what the existing literature was (after I’d thought of the above schemes), I found that the relevant term is “time-lock puzzles” (by analogy with the time-lock on a bank vault). In particular, Rivest, Shamir & Wagner published a 1996 paper on the topic, “Time-lock puzzles and timed-release crypto”. Apparently the question was first raised by Timothy C. May on the cypherpunks mailing list in an email from 1993-02-10. May also discusses the topic briefly in his Cyphernomicon, ch. 14.5; unfortunately, May’s solution (14.5.1) is essentially to punt to the legal system and rely on legal privilege and economic incentives to keep keys private.

Rivest et al agree with us that

There are 2 nat­ural approaches to imple­ment­ing timed-re­lease cryp­to:

  • Use ‘time-lock puz­zles’—com­pu­ta­tional prob­lems that can not be solved with­out run­ning a com­puter con­tin­u­ously for at least a cer­tain amount of time.
  • Use trusted agents who promise not to reveal cer­tain infor­ma­tion until a spec­i­fied date.

And that for time-lock puz­zles:

Our goal is thus to design time-lock puzzles that, to the greatest extent possible, are ‘intrinsically sequential’ in nature, and cannot be solved substantially faster with large investments in hardware. In particular, we want our puzzles to have the property that putting computers to work together in parallel doesn’t speed up finding the solution. (Solving the puzzle should be like having a baby: two women can’t have a baby in 4.5 months.)

Rivest et al then point out that the most obvious approach (encrypt the file to a random short key, short enough that brute-forcing takes only a few months/years as opposed to eons) is flawed because brute-forcing a key is very parallelizable and amenable to special-purpose hardware. (As well, the randomness of searching a key space means that the key might be found very early or very late; any estimate of how long it will take to brute force is just a guess.) One cute application of the same brute-forcing idea is Merkle’s Puzzles, where the time-lock puzzle is used to hide a key for a second party to communicate with the first-party creator, but it has the same drawback: it has the creator make many time-lock puzzles (any of which could be used by the second party) and raises the cost to the attacker (who might have to crack each puzzle), but can be defeated by a feasibly wealthy attacker, and offers only probabilistic guarantees (what if the attacker cracks the same puzzle the second party happens to choose?).

Rivest et al propose a scheme in which one encrypts the file with a very strong key as usual, but then one encrypts the key in such a way that one must calculate 2^(2^t) mod n, where t is the adjustable difficulty factor. Without the factorization of n, there is no known way around performing the t squarings in sequence; with the original primes p and q, one can easily avoid doing the successive squarings (by first reducing the exponent 2^t modulo φ(n)). This has the nice property that the puzzle constructor invests only O(log t) computing power, but the solver has to spend O(t). (This scheme works in the random oracle model, but Barak & Mahmoody-Ghidary 2009 proves that is the best you can do.) Further refinements to the modular exponentiation proposal have been made, such as Kuppusamy & Rangasamy 2015.
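As a concrete sketch of the scheme (with toy numbers small enough for 64-bit shell arithmetic; p, q, and t here are illustrative only, where a real puzzle would use a 2048-bit modulus), the following contrasts the solver’s forced serial squarings with the constructor’s φ(n) shortcut:

```shell
#!/usr/bin/env bash
# Toy Rivest-Shamir-Wagner time-lock puzzle (illustrative values only).
p=1009; q=2003
n=$(( p * q ))                 # public modulus
phi=$(( (p - 1) * (q - 1) ))   # constructor's secret, phi(n)
t=25                           # difficulty: number of serial squarings

# Solver: must perform t successive squarings, 2 -> 2^(2^t) mod n.
slow_unlock () {
    local x=2 i
    for (( i = 0; i < t; i++ )); do
        x=$(( x * x % n ))
    done
    echo "$x"
}

# Square-and-multiply: base^exp mod m in O(log exp) steps.
modpow () {
    local base=$1 exp=$2 m=$3 result=1
    base=$(( base % m ))
    while (( exp > 0 )); do
        (( exp % 2 == 1 )) && result=$(( result * base % m ))
        base=$(( base * base % m ))
        exp=$(( exp / 2 ))
    done
    echo "$result"
}

# Constructor: knowing phi(n), first reduce the exponent 2^t mod phi(n),
# then do one cheap modular exponentiation -- no t serial squarings.
fast_unlock () {
    local e
    e=$(modpow 2 "$t" "$phi")
    modpow 2 "$e" "$n"
}
```

Both functions print the same key material (by Euler’s theorem, since gcd(2, n) = 1); only the holder of p and q can take the fast path.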

LCS35: Rivest’s Time-lock Experiment

Rivest has actually used this scheme for a 1999 time capsule commemorating the 35th anniversary of MIT’s Laboratory for Computer Science; he expected his puzzle to take ~35 years. As of 14 years later, 2013, minimal progress seemed to have been made; if anything, the breakdown of CPU clockspeed scaling seemed to imply that it would take far more than 35 years. Rivest offers some advice for anyone attempting to unlock this time-lock puzzle (which may or may not be related to Mao’s 2000 paper “Time-Lock Puzzle with Examinable Evidence of Unlocking Time”):

An interesting question is how to protect such a computation from errors. If you have an error in year 3 that goes undetected, you may waste the next 32 years of computing. Adi Shamir has proposed a slick means of checking your computation as you go, as follows. Pick a small (50-bit) prime c, and perform the computation modulo c·n rather than just modulo n. You can check the result modulo c whenever you like; this should be an extremely effective check on the computation modulo n as well.
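A sketch of Shamir’s check with toy numbers (a hypothetical 8-bit check prime c standing in for the 50-bit one, and the same toy modulus as before; real puzzles use a 2048-bit n): since c is prime and we know we are computing 2^(2^i), the expected residue mod c can be recomputed cheaply at every step via Fermat’s little theorem, catching a corrupted intermediate value with probability ≈ 1 − 1/c.

```shell
#!/usr/bin/env bash
c=251          # small check prime (toy stand-in for a 50-bit prime)
n=2021027      # toy puzzle modulus (1009 * 2003)
cn=$(( c * n ))
t=25           # number of squarings

# base^exp mod m by square-and-multiply
modpow () {
    local base=$1 exp=$2 m=$3 result=1
    base=$(( base % m ))
    while (( exp > 0 )); do
        (( exp % 2 == 1 )) && result=$(( result * base % m ))
        base=$(( base * base % m ))
        exp=$(( exp / 2 ))
    done
    echo "$result"
}

checked_timelock () {
    local x=2 i e expected
    for (( i = 1; i <= t; i++ )); do
        x=$(( x * x % cn ))              # main computation, modulo c*n
        # cheap independent check of x mod c, via Fermat (c is prime):
        # 2^(2^i) mod c  =  2^(2^i mod (c-1)) mod c
        e=$(modpow 2 "$i" $(( c - 1 )))
        expected=$(modpow 2 "$e" "$c")
        if [ $(( x % c )) -ne "$expected" ]; then
            echo "computation error at squaring $i" >&2
            return 1
        fi
    done
    echo $(( x % n ))                    # final answer, reduced mod n
}
```

Reducing the final value mod n recovers exactly what the unchecked computation mod n would have produced, so the checkpoints cost almost nothing.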

Constant factors

How well does this work? The com­plex­ity seems cor­rect, but I worry about the con­stant fac­tors. Back in 1996, com­put­ers were fairly homo­ge­neous, and Rivest et al could rea­son­ably write

We know of no obvi­ous way to par­al­lelize it to any large degree. (A small amount of par­al­leliza­tion may be pos­si­ble within each squar­ing.) The degree of vari­a­tion in how long it might take to solve the puz­zle depends on the vari­a­tion in the speed of sin­gle com­put­ers and not on one’s total bud­get. Since the speed of hard­ware avail­able to indi­vid­ual con­sumers is within a small con­stant fac­tor of what is avail­able to large intel­li­gence orga­ni­za­tions, the differ­ence in time to solu­tion is rea­son­ably con­trol­lable.

But I wonder how true this is. Successive squaring does not seem to be a very complex algorithm to implement in hardware. There are more exotic technologies than GPUs we might worry about, like ASICs which may be specialized for successive squaring; if other problems can be baked directly into hardware, why not multiplication? Or, since squaring seems simple, is it relevant to forecasts of serial speed that there are transistor prototypes going as high as 100GHz? Offhand, I don’t know of any compelling argument to the effect that there are no large constant-factor speedups possible for multiplication/successive-squaring. Indeed, the general approach of exponentiation and factoring has to worry about the fact that the complexity of factoring has never been proven (and could still be very fast) and that there are speedups with quantum techniques like Shor’s algorithm.

One suggestion (December 2016 FHI AI/blockchain workshop) is that the uncertainty about the efficiency of squaring (or hashing) could be reduced by a honeypot/“moneypot” strategy, similar to some longstanding Bitcoin rewards for breaking hash functions: release several time-locks (perhaps on a blockchain like Ethereum) with the private key controlling large monetary rewards; this incentivizes people to unlock them as efficiently as possible to claim the reward, and by moving the reward at particular times, reveals an upper bound on how quickly a given time-lock can be solved. (This wouldn’t necessarily measure the speed of opening for state-level actors, who are insensitive to monetary rewards, but it would still help, assuming people cared enough to set up large bounties.)

2019 Unlocking of LCS35

In April 2019, it was announced that not one but two solutions of Rivest’s puzzle had been made: the first was by Belgian programmer Bernard Fabrot as a hobby, who skipped Rivest’s Java implementation to run a squaring routine for ~3.5 years on an ordinary Intel Core i7-6700 CPU core; the second was done in 2 months by the Cryptophage research project using new squaring algorithms optimized to run on an FPGA, with an ASIC implementation possibly running in ~6 days. (The Cryptophage project targeted Rivest’s puzzle to investigate the robustness of time-lock crypto, and the potential of “Verifiable Delay Functions”, as dubbed by Boneh et al 2018, for cryptocurrency.) Rivest commented:

“There have been hard­ware and soft­ware advances beyond what I pre­dicted in 1999,” says MIT pro­fes­sor Ron Rivest, who first announced the puz­zle in April 1999 tied to a cel­e­bra­tion of 35 years of research at MIT’s Lab­o­ra­tory for Com­puter Sci­ence (now CSAIL). “The puz­zle’s fun­da­men­tal chal­lenge of doing roughly 80 tril­lion squar­ings remains unbro­ken, but the resources required to do a sin­gle squar­ing have been reduced by much more than I pre­dict­ed.”

This resolution to Rivest’s experiment suggests that constructing reliable time-locks, which are truly serial and reasonably tight in best-case unlocking time, may be quite difficult.

Weak keys

Let’s think about alter­na­tives to suc­ces­sive squar­ing.

The first approach that jumps to mind, and seems to be crypto folklore, is to encrypt the file with a relatively short or weak key, one which will take years to bruteforce. Instead of a symmetric key of 256 bits, perhaps one of 50 bits or 70 bits?

This fails for two major rea­sons:

  1. variance: we can guarantee on average how much work it will take to bruteforce the key, but we cannot guarantee how much time it will take. A weak key could be cracked at any time given good luck, and across many uses of time-lock crypto, there will be instances where a weak key is cracked almost immediately, to the potentially extreme harm of the encrypter.

  2. differ­ing amounts of com­put­ing resources, par­tic­u­larly par­al­lel com­put­ing: It may take a CPU years to brute­force a cho­sen key-length, but take a clus­ter of CPUs just months.

    Or worse than that, without invoking clusters or supercomputers: devices can differ dramatically now even in the same computer. To take the example of Bitcoin mining, my laptop’s 2GHz CPU can search for hashes at 4k/second, while its single outdated GPU can search at 54M/second. This 1200x speedup is not because the GPU’s clockspeed is 2400GHz or anything like that; it is because the GPU has hundreds of small specialized processors which are able to compute hashes, and the particular application does not require the processors to coordinate or anything like that which might slow them down. Incidentally, this imbalance between CPUs and highly-parallel specialized chips has had the negative effect of centralizing Bitcoin mining power, reducing the security of the network. Many have moved to clusters of GPUs because they offer such great speedups, as have a number of cryptographic applications such as generating rainbow tables.

    Some­one who tries to time-lock a file using a par­al­leliz­able form of work ren­ders them­selves vul­ner­a­ble to any attack­ers like the NSA or bot­nets with large num­bers of com­put­ers, but may also ren­der them­selves vul­ner­a­ble to an ordi­nary com­put­er-gamer with 2 new GPUs: it would not be very use­ful to have a time-lock which guar­an­tees the file will be locked between a year and a mil­len­ni­um, depend­ing on how many & what kind of peo­ple bother to attack it and whether Moore’s law con­tin­ues to increase the par­al­lel-pro­cess­ing power avail­able.

    There does­n’t seem to be any obvi­ous way to solve this with a key mech­a­nism. Even if one is using 100 lay­ers of encryp­tion, each short key can be brute-forced almost arbi­trar­ily fast by a well-re­sourced or even state-level actor.

Many weak keys

We can solve the variance problem by requiring multiple short keys to decrypt; 1 short key will be cracked ‘quickly’ in an unacceptable fraction of instances, but if 100 short keys are all required to decrypt, then the binomial probability of all 100 being cracked ‘quickly’ by chance is acceptably low. (Requiring multiple keys can be done by multiple layers of encryption on the file, onion-style, or by defining the decryption key as all the weak keys XORed together.)
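The XOR combination is trivial to express; here is a toy sketch with hypothetical 32-bit “weak keys” (real ones would be ~50-70 bits each, each found only by brute force): no subset of cracked keys reveals anything about the master key until the last one falls.

```shell
#!/usr/bin/env bash
# Master key = XOR of all weak keys; leaks nothing until every key is cracked.
combine_keys () {
    local master=0 k
    for k in "$@"; do
        master=$(( master ^ 0x$k ))
    done
    printf '%08x\n' "$master"
}

combine_keys 1a2b3c4d 00ffee11 deadbeef 0badcafe   # prints cfd4a64d
```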

This does­n’t solve the serial/computing-power objec­tion: some actors will be able to crack each short key much faster by using highly par­al­lel resources, and it makes lit­tle differ­ence whether it’s one weak key or many—they will still open it ear­lier than other actors.

So pub­lic or sym­met­ri­cal encryp­tion may not be directly usable as the time-lock prim­i­tive. We want an approach which is inher­ently ser­ial and unpar­al­leliz­able.



For example, one could take a cryptographic hash like SHA-256, give it a random input, and hash it for a month. Each hash depends on the previous hash, and there’s no way to skip from the first hash to the trillionth hash. After a month, you use the final hash as the encryption key, and then release the encrypted file and the random input to all the world. The first person who wants to decrypt the file has no choice but to redo the trillion hashes in serial order to reach the same encryption key you used.

Nor can the general public (or the NSA) exploit any parallelism they have available, because each hash depends sensitively on the hash before it; this avalanche effect is a key property of cryptographic hashes.

This is simple to implement: take a seed, hash it, and feed the resulting hash into the algorithm repeatedly until enough time or iterations have passed; then encrypt the file with it as a passphrase. Here is a straightforward shell implementation using SHA-512 & a specified number of iterations:

timelock () {
  local -i I="$1"
  local HASH_SEED="$2"

  while [ "$I" -gt 0 ]; do
    HASH_SEED=$(echo "$HASH_SEED" | sha512sum | cut --delimiter=' ' --fields=1)
    I=$((I - 1))
  done

  echo "$HASH_SEED"
}

timelock_encrypt () {
  local KEY=$(timelock "$1" "$2")
  local PLAINTEXT_FILE="$3"
  echo "$KEY" | gpg --passphrase-fd 0 --symmetric --output "$PLAINTEXT_FILE.gpg" "$PLAINTEXT_FILE"
}

# same as timelock_encrypt except the GPG command changes to do decryption
timelock_decrypt () {
  local KEY=$(timelock "$1" "$2")
  local ENCRYPTED_FILE="$3"
  echo "$KEY" | gpg --passphrase-fd 0 --decrypt "$ENCRYPTED_FILE"
}

# Example usage:
# $ rm foo.*
# $ echo "this is a test" > foo.txt
# $ timelock_encrypt 100 random_seed foo.txt
# Reading passphrase from file descriptor 0
# $ rm foo.txt
# $ timelock_decrypt 100 random_seed foo.txt.gpg
# Reading passphrase from file descriptor 0
# gpg: CAST5 encrypted data
# gpg: encrypted with 1 passphrase
# this is a test

Another sim­ple imple­men­ta­tion of ser­ial hash­ing (SHA-256) for time-lock­ing has been writ­ten in Python.

Chained hashes

On the other hand, there seems to be a way that the original person running this algorithm can run it in parallel: one generates n random inputs (for n CPUs, presumably), and sets them hashing as before for m iterations (say, a month). Then, one sets up a chain between the n results: the final hash of seed 1 is used as a key to encrypt seed 2, the final hash of which is the key encrypting seed 3, and so on. Then one releases the encrypted file, the first seed, and the encrypted seeds. Now the public has to hash the first seed for a month, and only then can it decrypt the second seed, and start hashing that for a month, and so on. Since the encryption for each seed is uncrackable, and there’s no way to “jump forward” in successive hashing, there is no way to reach the final result faster than a single processor/circuit computing the n·m hashes in serial. Karl Gluck suggests that, like repeated squaring, it’s even possible to add error-detection to the procedure. (A somewhat similar scheme, “A Guided Tour Puzzle for Denial of Service Prevention”, uses network latency rather than hash outputs as the chained data, with clients bouncing from resource to resource; but this obviously requires an online server and is unsuitable for our purposes, among other problems.)
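A toy sketch of the parallel construction (tiny iteration counts, and the real version would encrypt each next seed under the previous segment’s final hash rather than just writing the finals out):

```shell
#!/usr/bin/env bash
# hash_n SEED COUNT: iterate SHA-256 COUNT times starting from SEED.
hash_n () {
    local h=$1 i
    for (( i = 0; i < $2; i++ )); do
        h=$(printf '%s' "$h" | sha256sum | cut -d' ' -f1)
    done
    echo "$h"
}

m=1000                      # iterations per segment (toy value)
seeds=(seedA seedB seedC)   # one random seed per CPU

# Constructor: all n segments are hashed concurrently, so n months of
# solver work costs the constructor only one month of wall-clock time.
for i in "${!seeds[@]}"; do
    hash_n "${seeds[$i]}" "$m" > "final_$i.txt" &
done
wait

# The solver, given only seedA (plus each next seed encrypted under the
# previous segment's final hash), must grind through all n*m hashes serially.
cat final_0.txt final_1.txt final_2.txt
```

The serial equivalence is easy to see: hashing a seed 400 times and then hashing that result 600 more times lands on the same value as 1000 straight iterations, so segments compose but cannot be skipped.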

This is pretty clever. If one has a thou­sand CPUs handy, one can store up 3 years’ of com­pu­ta­tion-re­sis­tance in just a day. This sat­is­fies a num­ber of needs. (May­be; please do not try to use this for a real appli­ca­tion before get­ting a proof from a cryp­tog­ra­pher that chained hash­ing is secure.)

Specifi­cal­ly, you could use a hash for which highly opti­mized imple­men­ta­tions have already been cre­ated and put into mass pro­duc­tion as ASICs, which have been hand-op­ti­mized down to the min­i­mal num­ber of gates pos­si­ble (prob­a­bly a few thou­sand). Because of Bit­coin min­ing, SHA-256 is a plau­si­ble can­di­date here. (There is one pro­posal for slightly faster cir­cuits, but it would be unus­able for time-locks on its own because it would deter­min­is­ti­cally fail on some hash­es, which would likely be encoun­tered while fol­low­ing a hash-chain.) Ser­ial speedups would lead to pro­por­tional reduc­tions in time to unlock.

To allow for increases in computing power, it’s probably not necessary to adjust for Moore’s law: serial speeds and clock frequencies have stagnated considerably in recent years, with most of the gains going to parallelism or energy efficiency, neither of which helps with a hash-chain. More concerning is exotic technology like graphene for implementing circuits in the gigahertz or terahertz range. (Such an exotic chip might cost a lot, but only a few would be necessary to compute hash-chains far faster than expected.)

Imple­ment­ing chained hash­ing is not as easy as sin­gle-threaded hash­ing, but there are cur­rently 2 imple­men­ta­tions:

  1. Peter Todd & Amir Taaki: imple­men­ta­tion in Python (announce­ment with boun­ties on exam­ple time-locked files; Red­dit & HN dis­cus­sion), with some clever Bit­coin inte­gra­tion
  2. Dorian John­son, imple­men­ta­tion in Go

It is not impossible to combine the decrypter programs with a specified time-locked file, creating a self-decrypting file: one which stores data safely encrypted, cannot be decrypted before a certain time period has passed, and requires no third parties whatsoever. In other words, chained hashes satisfy our original desire for a self-decrypting file.


But what about peo­ple who only have a nor­mal com­put­er? Fun­da­men­tal­ly, this repeated hash­ing requires you to put in as much com­pu­ta­tion as you want your pub­lic to expend repro­duc­ing the com­pu­ta­tion, which may not be enough even with a GPU imple­men­ta­tion. One may not want ‘merely’ a ratio of 1:100 like that a stan­dard desk­top GPU might be able to provide, but much high­er—1:10000 or more. (Offhand, I can’t think of any appli­ca­tion for time-lock crypto which requires such high ratios, but it may be nec­es­sary for doing time-locks at any kind of indus­trial scale or for exotic appli­ca­tion­s.)

We want to force the pub­lic to expend more com­pu­ta­tion—po­ten­tially much more—than we put in. How can we do this?

It’s hard to see. At least, I haven’t thought of anything clever. Homomorphic encryption promises to let us encode arbitrary computations into an encrypted file, so one could imagine implementing the above hash chains inside the homomorphic computation, or perhaps just encoding a loop counting up to a large number. There are two problems with trying to apply homomorphic encryption:

  1. I don’t know how one would let the pub­lic decrypt the result of the homo­mor­phic encryp­tion with­out also let­ting them tam­per with the loop.

    Sup­pose one spec­i­fies that the final out­put is encrypted to a weak key, so one sim­ply has to run the homo­mor­phic sys­tem to com­ple­tion and then put in a lit­tle effort to break the homo­mor­phic encryp­tion to get the key which unlocks a file; what stops some­one from break­ing the homo­mor­phic encryp­tion at the very start, man­u­ally exam­in­ing the run­ning pro­gram, and short­cut­ting to the end result? Of course, one could pos­tu­late that what was under the homo­mor­phic encryp­tion was some­thing like a hash-chain where pre­dict­ing the result is impos­si­ble—but then why bother with the homo­mor­phic encryp­tion? Just use that instead!

  2. and in any case, homo­mor­phic encryp­tion as of 2013 is a net com­pu­ta­tional loss: it takes as much or more time to cre­ate such a pro­gram as it would take to run the pro­gram, and is no good.

Distributed time-lock systems

Pooling key-cracking

Gre­gory Maxwell has a Feb­ru­ary 2013 sketch for an alt­coin which allows coop­er­a­tive dis­trib­uted solv­ing of time-locks (as opposed to solo efforts to tick through a hash-chain):

POW which turns the dis­trib­uted com­pu­ta­tion into tick­ing for time­lock encryp­tion

  • An infi­nite sequence of noth­ing-up-my-sleeve num­bers are taken as an infi­nite sequence of ECC pub­lic keys. Search­ing the pow involves find­ing dis­tin­guished points along a Pol­lard’s rho DLP solu­tion try­ing to crack the key. When the key is cracked the prob­lem is advanced to the next key.
  • Peo­ple can then encrypt mes­sages with all of the keys between now and some­time in the future and net­work will crack them, achiev­ing a time­lock.
  • Prob­a­bly incom­pat­i­ble with merged min­ing and other POW schemes.
  • Mak­ing the diffi­culty adap­tive either makes far in the future mes­sages impos­si­ble (be­cause the prob­lem size would­n’t be known long in advance), or requires increas­ingly big head­ers as the diffi­culty would require work­ing on mul­ti­ple prob­lems con­cur­rent­ly.
  • The obvi­ous con­struc­tions using ECDLP as the asym­met­ric prob­lem are not progress free.

To explain this idea in a little less jargony form, as best as I understand it: take some standard PRNG; come up with a seed in some publicly acceptable way, such as using the binary expansion of the constant π; define every, say, 100 bits of the output as a public key (with an as-yet-unknown corresponding private key); anyone who wants to timelock a file encrypts the file using one of these “public keys” and releases the encrypted file to the public as normal; now, people interested in unlocking them (“miners”) work on brute-forcing each chunk of 100 bits one by one, and then the messages for that key can be read by anyone. Depending on how much computing power is collectively being spent by the miners, you specify intervals by picking a public key X keys in the future (eg. if each 100-bit key takes 1 month to crack and you want to delay opening by a year/12 months, then you encrypt your file to the 12th uncracked key). Or, if you are worried about people ‘skipping ahead’ and targeting a future key specifically, the protocol could be that each file is encrypted to all successive keys (so it would be an onion of encryption layers: #1(#2(#3(…(#12(plaintext file))))), and can only be directly attacked by a loner once the first 11 keys have been cracked).
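To illustrate the onion ordering, here is a toy sketch using a hypothetical invertible keyed scrambler as a stand-in for real encryption (a real implementation would use an actual cipher under each cracked key); the point is only that layers must be peeled outermost-first, in the order the keys are cracked:

```shell
#!/usr/bin/env bash
M=4294967296   # 2^32, toy block size

# Toy keyed permutation and its inverse (NOT real encryption).
enc () { echo $(( (($1 ^ $2) + $2) % M )); }
dec () { local d=$(( ($1 - $2) % M )); (( d < 0 )) && d=$(( d + M )); echo $(( d ^ $2 )); }

keys=(111 222 333)   # stand-ins for keys #1..#3, cracked in this order
secret=42            # stand-in for the file's decryption key

# Lock: apply layers inside-out, so key #1 ends up as the outermost layer.
lock () {
    local v=$secret i
    for (( i = ${#keys[@]} - 1; i >= 0; i-- )); do
        v=$(enc "$v" "${keys[$i]}")
    done
    echo "$v"
}

# Unlock: peel one layer as each key is cracked, in order #1, #2, #3.
unlock () {
    local v=$1 k
    for k in "${keys[@]}"; do
        v=$(dec "$v" "$k")
    done
    echo "$v"
}
```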

A time­lock alt­coin could incen­tivize par­tic­i­pa­tion in unlock­ing files and be more pre­dictable than solo hash-chain files.

(A sim­i­lar idea may or may not have been pro­posed in the Chi­ne­se-lan­guage pub­li­ca­tion Ke et al 2014.)

Bitcoin as a clock


If one could cre­ate a pro­gram which is a black­box such that run­ning the pro­gram reveals noth­ing about its con­tents besides its out­put, then one has the start of a time-lock: cre­ate a nor­mal encrypted archive, cre­ate a black­box which only emits its con­tent after cer­tain con­di­tions are met, stick the key inside the black­box, and dis­trib­ute both the black­box and the archive. The encrypted archive is uncrack­able on its own, and if the black­box really does obfus­cate its con­tents beyond recog­ni­tion, the key is safe too. But once the proper con­di­tions are met, any­one with a copy of both can reveal the data.

What are proper conditions and how does a program tell the true time? The local computer’s clock is a lie, and trusting any timestamp servers reintroduces trusted third-parties. But this is the same problem as digital currency faces, and the solution can be the same: rely on proof-of-work as embodied in the Bitcoin blockchain! The blackbox can ask for a copy of the blockchain and check that n blocks have passed and thus an ungodly number of hashes have been spent to create it; to the extent that no entity has as much hashpower as the Bitcoin blockchain’s miners collectively do, this is a reliable and trustworthy ‘clock’. (Amusingly, anyone who wanted to spend hashpower to try to open the blackbox early would be best off becoming a Bitcoin miner, to both defray their expenses and to cause a short speedup of blocks until the mining difficulty ratchets up to compensate.) With time redefined as blocks, which happen every ~10 minutes, someone who wants to release data in a year could include the current block’s hash and specify that the blackbox wants at least, say, 52596 more valid blocks before it will unlock & print the decryption key9.
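A sketch of what such a blackbox clock-check might look like, under simplifying assumptions (headers reduced to a (prev-hash, nonce) pair with double-SHA-256 proof-of-work; real Bitcoin headers are 80 bytes with a compact difficulty target):

```python
import hashlib

BLOCKS_PER_YEAR = int(365.25 * 24 * 6)   # ~10 min/block -> 52,596, the figure above

def dsha(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def valid_successor(prev_hash: bytes, header: tuple, difficulty_bits: int) -> bool:
    claimed_prev, nonce = header
    if claimed_prev != prev_hash:
        return False
    # proof-of-work: double-SHA-256 must start with difficulty_bits zero bits
    return int.from_bytes(dsha(claimed_prev + nonce), "big") >> (256 - difficulty_bits) == 0

def clock_has_ticked(anchor_hash: bytes, chain: list, required: int, bits: int) -> bool:
    # the blackbox's condition: `required` valid blocks extending the block
    # whose hash was baked in at lock time
    prev = anchor_hash
    for header in chain[:required]:
        if not valid_successor(prev, header, bits):
            return False
        prev = dsha(header[0] + header[1])
    return len(chain) >= required

# toy demo: "mine" 3 blocks at a trivial 8-bit difficulty
anchor = hashlib.sha256(b"block at lock time").digest()
chain, prev = [], anchor
for _ in range(3):
    nonce = 0
    while int.from_bytes(dsha(prev + nonce.to_bytes(8, "big")), "big") >> 248 != 0:
        nonce += 1
    chain.append((prev, nonce.to_bytes(8, "big")))
    prev = dsha(prev + chain[-1][1])
assert clock_has_ticked(anchor, chain, 3, 8)
```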

Such a blackbox could be created by “indistinguishability obfuscation” (recently developed & related to homomorphic encryption), as pointed out by Liu et al 2015 & Liu et al 2018.

This scheme gains several benefits over chained-hashes/multiple-weak-keys/successive-squaring/Maxwell’s-distributed-key-cracking: no one needs to invest computations to create the time-lock (aside from creating the blackbox), no one (much less redundant decrypters) needs to invest computations to unlock it on schedule (as long as the Bitcoin blockchain keeps ticking, it will unlock ‘on its own’), and the timing can be made much more precise (likely down to the day, since the uncertainty from the stochastic nature of finding blocks is much less than the uncertainty in how much computing power decrypters would invest or how optimized their implementations of squaring etc would be). The user wouldn’t even need to install crypto software or download the full Bitcoin blockchain, since the blackbox could be combined with the encrypted archive and an Internet API to the blockchain to create a secure, one-click convenient, self-decrypting self-extracting file. The blackbox approach is the best of all worlds and could be said to entirely solve time-lock crypto.

Unfor­tu­nate­ly, obfus­ca­tion (like homo­mor­phic encryp­tion) is

  1. so cut­ting-edge that it may still be inse­cure
  2. so computationally impractical as of 2017 that no one could successfully execute such a blackbox

So it works in prin­ci­ple but not in prac­tice.

Random encodings

Bitan­sky et al 2015 show that given obfus­ca­tion, one can con­struct a nor­mal proof-of-work time-lock with­out using an exter­nal clock, where the black­box stalls for many time-steps before spit­ting out the key, but with­out the poten­tial­ly-ex­pen­sive setup of chained-hash­es.


For­tu­nate­ly, indis­tin­guisha­bil­ity obfus­ca­tion may be overkill. A weaker but more com­putable cryp­to, wit­ness encryp­tion, has also recently popped up: where an obfus­cated black­box can make its computation/requirement arbi­trar­ily com­pli­cated and is Tur­ing-com­plete, wit­ness encryp­tion instead lets one encrypt a file such that the decryp­tion key is the solu­tion to a (pos­si­bly unsolved) prob­lem like 3SAT.

This raises the ques­tion: is it pos­si­ble to encode a check of future Bit­coin blockchain length as a NP prob­lem and thus cre­ate an encrypted file which can only be decrypted after enough ticks of the Bit­coin clock?

Liu et al 2015 answer: yes! They show that a check of the blockchain (specifically, that the SHA-256 hash included in each block represents adequate proof-of-work) can be encoded, so the construction is at least possible (although feasibility remains unclear); Liu et al 2015 provide a C program to check proof-of-work which can be compiled into a formula whose satisfying assignment is the NP witness, and hence is usable as a witness encryption.
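The flavor of predicate being encoded can be sketched as follows; this is an illustrative stand-in for Liu et al’s actual C program, with toy headers (a 32-byte prev-hash followed by a nonce) and a fixed target. Under witness encryption, a satisfying witness for this NP relation (a sufficiently long valid chain) would itself function as the decryption key:

```python
import hashlib

def dsha(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def pow_predicate(witness: list, anchor: bytes, target: int) -> bool:
    """True iff `witness` is a chain of toy headers extending `anchor`,
    each meeting the proof-of-work target. A witness-encrypted file would
    be decryptable exactly when such a witness exists and is supplied."""
    prev = anchor
    for header in witness:
        if header[:32] != prev:          # must chain from the previous block
            return False
        digest = dsha(header)
        if int.from_bytes(digest, "big") >= target:   # PoW check
            return False
        prev = digest
    return True

# chaining demo with a trivial (always-satisfied) target
anchor = hashlib.sha256(b"genesis").digest()
h1 = anchor + (0).to_bytes(8, "big")
h2 = dsha(h1) + (1).to_bytes(8, "big")
assert pow_predicate([h1, h2], anchor, 1 << 256)
assert not pow_predicate([h2, h1], anchor, 1 << 256)   # broken chain
```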

Thus, Liu et al indi­cate that wit­ness encryp­tion could help pro­vide the best of both worlds: the tick­ing clock is still done by proof-of-work as in suc­ces­sive squar­ing, but the PoW is a now-hy­per­-op­ti­mized hash so there is lit­tle risk of future opti­miza­tion open­ing the time-lock early like in chained hash­es, and since the Bit­coin blockchain will con­tinue to be built regard­less of any inter­est in time-locks, the clock keeps tick­ing with­out the risk of no-one car­ing enough to try to open the time-lock as the time-lock is pig­gy­back­ing on the brute­forc­ing already being done.

The two main down­sides are that the hashes are not nec­es­sar­ily inher­ently seri­al­ized or unique, and the need for exotic wit­ness encryp­tion.

The first means that the security offered by a Bitcoin blockchain time-lock is going to be, like PoW on transactions, an economic kind of security: hypothetically, a powerful actor could, if it really wanted to, construct an entire fake ‘chain’ (ie a set of hashes with sufficient leading zeroes); if the time-lock encodes the consensus rules, they can exploit the difficulty adjustment to ratchet difficulty down as fast as possible to eventually generate hashes for free. As of 2018, to create valid Bitcoin PoW hashes at all requires vast mining farms, and for almost all intents & purposes this is adequate security, much as it is for the financial transactions on the Bitcoin blockchain. One could probably include additional conditions like “difficulty cannot decrease more than 50% total for the blocks” (equivalent to specifying a minimum number of leading zeroes for all hashes involved), although this then risks opening too late (what if Bitcoin difficulty crashes for years to come because of a bubble?), and the time-locker is forced to make a tradeoff. Depending on the exact encoding, there may still be other ways to cheat—could one use hashes from other blockchains or past hashes? So there are details which need to be worked out.

Sec­ond­ly, unfor­tu­nate­ly, as is com­mon with new & exotic cryp­tog­ra­phy, the cur­rent imple­men­ta­tions of wit­ness encryp­tion seem to be slow enough that like obfus­ca­tion, wit­nesses are not yet a viable solu­tion. How­ev­er, wit­ness encryp­tion seems likely to be opti­mized much fur­ther and it is pos­si­ble that there are more spe­cial­ized solu­tions for blockchains, so it is an approach well worth watch­ing.

Distributed secret-sharing with smart contracts

A sim­ple way to try to improve on the stan­dard third-party secret-shar­ing method for time-lock­ing would be to imple­ment them on smart contracts/blockchains like Ethereum in order to sup­port easy decen­tral­ized oper­a­tion by many par­tic­i­pants and to allow cryp­to-e­co­nomic meth­ods. An Ethereum imple­men­ta­tion (Bit­coin smart con­tracts might be too lim­ited to sup­port the nec­es­sary func­tion­al­i­ty) might go like this:

  1. a locker sends an ETH fee to a time-lock­ing smart con­tract for encrypt­ing a spe­cific file until Ethereum block t (us­ing the blockchain as the clock)

    One could also try to make a time-lock token on top of Ethereum under the logic that an exchange rate serves as an addi­tional way to penal­ize attack­ers, but of course this adds com­plex­ity and poten­tially addi­tional vul­ner­a­bil­i­ty—an attacker could buy in, run attacks while col­lect­ing fees, exploit decrypted data for profit, sell their stake, set up shorts, and then delib­er­ately reveal suc­cess­ful attacks to crash the token exchange rates and profit 3 differ­ent ways from the attack.

  2. a quorum of n participants volunteer to be secret-sharers, and each one provides a public key plus a large deposit of ETH

  3. when a quorum forms, the public keys are combined into a k-of-m secret-sharing scheme, which provides a new public key the locker can encrypt their file to; presumably they do so and distribute it off-blockchain somehow

  4. if one of the secret keys is sent to the con­tract before t arrives, the sender receives a frac­tion of the respec­tive deposit, and the rest is destroyed

    • if any­one obtains a secret key from a secret-sharer (eg by hack­ing them or buy­ing it from them), they have an incen­tive to betray them by reveal­ing the secret key early and claim­ing the frac­tion of their deposit. This pro­vides cryp­to-e­co­nomic secu­rity against early rev­e­la­tion of secret keys by avoid­ing “noth­ing at stake”.
  5. after t arrives, within a few blocks, each secret-sharer pro­vides their pri­vate key on blockchain to the smart con­tract, which ver­i­fies that they are valid pri­vate keys and sends back to the secret-sharer their deposit + a frac­tion of the fee; now any­one can read off k shares and decrypt the file after time t

    • if a secret-sharer fails to pro­vide their share, their large deposit is destroyed to penal­ize them
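The k-of-m combination in step 3 could be built on Shamir secret sharing; a minimal stdlib-only sketch over a prime field (a deployed contract would need verifiable secret sharing so that steps 4–5 can check shares on-chain):

```python
import random

P = 2**127 - 1  # a Mersenne prime as the field modulus

def split(secret: int, k: int, m: int) -> list:
    # random degree-(k-1) polynomial with f(0) = secret; shares are (x, f(x))
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, m + 1)]

def reconstruct(shares: list) -> int:
    # Lagrange interpolation at x = 0; any k shares suffice
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, k=3, m=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares recover it
assert reconstruct(shares[1:4]) == 123456789
```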

This contract seems like it would indeed ensure that keys are released after time t, which is one half of a working time-lock (reliable global decrypting after a certain point without further participation by the original locker). But the other half, reliable non-decryption before time t, appears to be flawed: despite the deposit-precommitment+defection-incentive, there is still “nothing at stake” for a participant, because there’s no way to prove on-blockchain that the file hasn’t been decrypted before time t. On-blockchain visibility is adequate for Proof-of-Work and Proof-of-Stake because participants in PoW can observe any double-spending and punish the PoW owners (who own ASICs and large capital investments) via social mechanisms like forks, and for Casper-style slashing PoS because double-validating can be proven on-chain and the stakes slashed, but the invisibility of decryption means that a defrauded locker can never prove that any or all of the secret-sharers have defrauded them by colluding & decrypting early.

Because of this, the contract is vulnerable to Sybil attacks: if the contract is profitable and the ETH fees cover the cost to participants of making deposits/key storage/uptime, then a large entity can by definition profitably participate while pretending to be most/all participants, and easily decrypt many files even if lockers require a large k to decrypt. (The required capital might be quite small; JoinMarket only required $32k Bitcoin capital to de-anonymize transactions, well within the capability of many individuals, never mind attackers like blockchain analysis companies.) A large entity also benefits from some degree of economies of scale compared to a truly distributed quorum, as it only needs 1 blockchain copy, and can more easily invest in 100% uptime to avoid accidentally losing deposits to downtime, allowing it to outcompete; it can further increase profits potentially enormously by systematically decrypting & exploiting anything locked (as it is likely that any file anyone is going to the trouble of time-locking is a valuable file to someone, otherwise why bother?). So such a contract is vulnerable to a self-funding, undetectable, unpreventable, permanent Sybil attack.

The vulnerability may extend to actual distributed quorums as well: individual sharers can be bribed and attacked, and it’s not clear that the deposit mechanism can prevent all counter-measures or computations. (They probably aren’t as efficient as the main Sybil attack, but would allow attacking prior time-locks or attacking just a few time-locks.) For example, an attacker can try to defuse the slashing condition and buy shares from k secret-sharers simply by publishing Ethereum contracts with exactly the same mechanism but providing an additional deposit+fee-share from the attacker; this insures the secret-sharers against defection—they can sell their share to the attacker and if the attacker ‘defects’, they merely claim the second deposit and are made whole; thus, either way they get back their deposit and their share of the fee, and since the attacker no longer benefits on net, the attacker won’t defect. Counter-contracts may not be stable (the sharer could defect in turn), but an attacker has additional options: like secret-sharing itself, an attacker could take advantage of multi-party computation to enable bribed sharers to reconstruct the full key in secret without risking revealing their individual shares & thus their deposits. More attacks may be possible, and the smart contract must be robust to all possible attacks and counter-contracts bribing the secret-sharers.

Vulnerability of one-way functions

As it turns out, “Time-Lock Puzzles in the Random Oracle Model” (Mahmoody, Moran, and Vadhan 2011; slides) directly & formally analyzes the general power of one-way functions used for time-lock puzzles assuming a random oracle. Unfortunately, they find an opponent can exploit the oracle to gain speedups. Fortunately, the cruder scheme where one ‘stores up’ computation (repeatedly asking the oracle at inputs based on its previous output) still works under their assumptions:

A time-lock puz­zle with a lin­ear gap in par­al­lel time. Although our neg­a­tive results rule out ‘strong’ time-lock puz­zles, they still leave open the pos­si­bil­ity for a weaker ver­sion: one that can be gen­er­ated with n par­al­lel queries to the ora­cle but requires n rounds of adap­tive queries to solve. In a pos­i­tive result, we show that such a puz­zle can indeed be con­struct­ed…Although this work rules out black­-box con­struc­tions (with a super-con­stant gap) from one-way per­mu­ta­tions and col­li­sion-re­sis­tant hash func­tions, we have no rea­son to believe that time-lock puz­zles based on other con­crete prob­lems (e.g., lat­tice-based prob­lems) do not exist. Extend­ing our approach to other gen­eral assump­tions (e.g., trap­door per­mu­ta­tions) is also an inter­est­ing open prob­lem.

That is, the puz­zle con­struc­tor can con­struct the puz­zle in par­al­lel, and the solver has to solve it seri­al­ly.
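The cruder ‘storing up computation’ scheme is just a hash chain; a minimal sketch (note that this naive version costs the constructor the same n serial steps, whereas Mahmoody et al’s positive result is a puzzle the constructor can generate with n parallel queries):

```python
import hashlib

def make_puzzle(seed: bytes, n: int) -> bytes:
    # derive the key by hashing n times; this naive setup is itself serial
    x = seed
    for _ in range(n):
        x = hashlib.sha256(x).digest()
    return x   # use x as the encryption key, publish (seed, n)

def solve_puzzle(seed: bytes, n: int) -> bytes:
    # each query's input is the previous output, so the n rounds cannot be
    # parallelized: wall-clock time ~ n sequential hash evaluations
    x = seed
    for _ in range(n):
        x = hashlib.sha256(x).digest()
    return x

assert solve_puzzle(b"public seed", 100_000) == make_puzzle(b"public seed", 100_000)
```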

Memory-bound hashes

Of course, one could ask the same ques­tion of my orig­i­nal pro­pos­al—what makes you think that hash­ing can’t be sped up? You already sup­plied an exam­ple where cryp­to­graphic hashes were sped up aston­ish­ingly by a GPU, Bit­coin min­ing.

The difference is that hashing can be made to stress the weakest part of any modern computer system, the terrible bandwidth and latency10; the hash can blow the fast die-level caches and force constant fetches from the main RAM. Memory-bound hashes were devised for anti-spam proof-of-work systems that wouldn’t unfairly penalize cellphones & PDAs while still being costly on desktops & workstations (which rules out the usual functions that stress the CPU). For example, the 2003 paper “On Memory-Bound Functions for Fighting Spam”; from the abstract:

Bur­rows sug­gested that, since mem­ory access speeds vary across machines much less than do CPU speeds, mem­o­ry-bound func­tions may behave more equi­tably than CPU-bound func­tions; this approach was first explored by Abadi, Bur­rows, Man­asse, and Wob­ber [8]. We fur­ther inves­ti­gate this intrigu­ing pro­pos­al. Specifi­cal­ly, we…

2. Provide an abstract function and prove an asymptotically tight amortized lower bound on the number of memory accesses required to compute an acceptable proof of effort; specifically, we prove that, on average, the sender of a message must perform many unrelated accesses to memory, while the receiver, in order to verify the work, has to perform significantly fewer accesses;
3. Propose a concrete instantiation of our abstract function, inspired by the RC4 stream cipher;
4. Describe techniques to permit the receiver to verify the computation with no memory accesses;
5. Give experimental results showing that our concrete memory-bound function is only about four times slower on a 233 MHz settop box than on a 3.06 GHz workstation, and that speedup of the function is limited even if an adversary knows the access sequence and uses optimal off-line cache replacement.

Abadi 2005, “Moderately hard, memory-bound functions”, develops more memory-bound functions and benchmarks them (partially replicated by Das & Doshi 2004):

…we give experimental results for five modern machines that were bought within a two-year period in 2000–2002, and which cover a range of performance characteristics. All of these machines are sometimes used to send e-mail—even the settop box, which is employed as a quiet machine in a home…None of the machines have huge caches—the largest was on the server machine, which has a 512KB cache. Although the clock speeds of the machines vary by a factor of 12, the memory read times vary by a factor of only 4.2. This measurement confirms our premise that memory read latencies vary much less than CPU speeds.

…At the high end, the server has lower per­for­mance than one might expect, because of a com­plex pipeline that penal­izes branch­ing code. In gen­er­al, higher clock speeds cor­re­late with higher per­for­mance, but the cor­re­la­tion is far from per­fec­t…Sec­ond, the desk­top machine is the most cost-effec­tive one for both CPU-bound and mem­o­ry-bound com­pu­ta­tions; in both cas­es, attack­ers are best served by buy­ing the same type of machines as ordi­nary users. Final­ly, the mem­o­ry-bound func­tions suc­ceed in main­tain­ing a per­for­mance ratio between the slow­est and fastest machines that is not much greater than the ratio of mem­ory read times.

Colin Per­ci­val con­tin­ues the gen­eral trend in the con­text of find­ing pass­words schemes which are resis­tant to cheap brute-forc­ing, invent­ing scrypt in the 2009 paper “Stronger Key Deriva­tion via Sequen­tial Mem­o­ry-Hard Func­tions”. Per­ci­val notes that design­ing a really good mem­o­ry-bound func­tion requires not overly rely­ing on latency since his proofs do not incor­po­rate laten­cy, although in prac­tice this might not be so bad:

Exist­ing widely used hash func­tions pro­duce out­puts of up to 512 bits (64 bytes), closely match­ing the cache line sizes of mod­ern CPUs (typ­i­cally 32–128 bytes), and the com­put­ing time required to hash even a very small amount of data (typ­i­cally 200–2000 clock cycles on mod­ern CPUs, depend­ing on the hash used) is suffi­cient that the mem­ory latency cost (typ­i­cally 100–500 clock cycles) does not dom­i­nate the run­ning time of ROMix.

How­ev­er, as semi­con­duc­tor tech­nol­ogy advances, it is likely that nei­ther of these facts will remain true. Mem­ory laten­cies, mea­sured in com­par­i­son to CPU per­for­mance or mem­ory band­width, have been steadily increas­ing for decades, and there is no rea­son to expect that this will cease — to the con­trary, switch­ing delays impose a lower bound of Ω(log N) on the latency of access­ing a word in an N-byte RAM, while the speed of light imposes a lower bound of Ω( √N) for 2-di­men­sional cir­cuits. Fur­ther­more, since most appli­ca­tions exhibit sig­nifi­cant local­ity of ref­er­ence, it is rea­son­able to expect cache design­ers to con­tinue to increase cache line sizes in an attempt to trade mem­ory band­width for (avoid­ed) mem­ory laten­cy.

In order to avoid hav­ing ROMix become laten­cy-lim­ited in the future, it is nec­es­sary to apply it to larger hash func­tions. While we have only proved that ROMix is sequen­tial mem­o­ry-hard under the Ran­dom Ora­cle mod­el, by con­sid­er­ing the struc­ture of the proof we note that the full strength of this model does not appear to be nec­es­sary.
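A toy version of the ROMix idea, assuming SHA-256 as the underlying hash: fill a table with a serial hash chain, then make data-dependent pseudorandom reads into it, so that an attacker must either store the table (paying memory) or recompute entries on demand (paying serial time):

```python
import hashlib

def romix(password: bytes, n: int) -> bytes:
    # Step 1: V[i] = H^i(password) -- forces ~n hash-blocks of memory
    v, x = [], hashlib.sha256(password).digest()
    for _ in range(n):
        v.append(x)
        x = hashlib.sha256(x).digest()
    # Step 2: n data-dependent reads; the next index is unknown until the
    # current hash is computed, defeating caches and prefetching
    for _ in range(n):
        j = int.from_bytes(x[:8], "big") % n
        x = hashlib.sha256(bytes(a ^ b for a, b in zip(x, v[j]))).digest()
    return x

digest = romix(b"hunter2", 1 << 12)
assert len(digest) == 32
```

Real scrypt parameterizes the block size and mixing function as well; this sketch only shows why the access pattern makes the computation memory-latency-bound.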

Percival constructs a password algorithm on his new hash function and then calculates costs using 2002 circuit prices:

When used for inter­ac­tive logins, it is 35 times more expen­sive than bcrypt and 260 times more expen­sive than PBKDF2; and when used for file encryp­tion — where, unlike bcrypt and PBKDF2, scrypt uses not only more CPU time but also increases the die area required — scrypt increases its lead to a fac­tor of 4000 over bcrypt and 20000 over PBKDF2.

That is quite a difference between the hashes, especially considering that bcrypt and PBKDF2 were already engineered to have adjustable difficulty for similar reasons to our time-lock crypto puzzles.

  1. I don’t see any dis­cus­sions of how SGX would imple­ment time-lock, exact­ly. But one sys­tem might be to have an SGX enclave pro­duce a pub­lic key, encrypt the file to it with a tar­get date, then the enclave can be queried for a decrypted ver­sion at any time; it makes a request to an Intel server or to a SGX-based blockchain for the cur­rent time, and if the time is cor­rectly signed and is after the tar­get date, it decrypts, oth­er­wise it does noth­ing.↩︎

  2. Ebringer 2008, applying time-lock puzzles to enhancing the ability of computer viruses & trojans to defeat anti-virus scanners, describes Rivest’s original successive-squaring solution somewhat sarcastically:

    Even in the orig­i­nal paper, the authors strug­gled to find a plau­si­ble use for it. To actu­ally use the con­struc­tion as a “time-lock” requires pre­dict­ing the speed of CPUs in the future, result­ing, at best, in a fuzzy release-date. This assumes that some­one cares enough to want what is allegedly wrapped up in the puz­zle to bother to com­pute the puz­zle in the first place. It is not obvi­ous that in the major­ity of sit­u­a­tions, this would have a clear advan­tage over, say, leav­ing the infor­ma­tion with a legal firm with instruc­tions to release it on a par­tic­u­lar date. Although this paper pro­poses a prac­ti­cal use for time-lock puz­zles, the orig­i­nal authors would prob­a­bly be dis­mayed that there is still not a wide­spread usage that appears to be of net ben­e­fit to human­i­ty.

    On the other hand, a similar criticism could be and has been made about Bitcoin (supporters/users must expend massive computing power constantly just to keep it working, with no computational advantage over attackers), and that system has worked well in practice.↩︎

  3. Colin Percival’s “Insecurity in the Jungle (disk)” presents a table giving times for brute-forcing hashes given various hardware; most dramatically, <$1m of custom hardware could bruteforce a random 10 character string in 2 hours. (Hardware reaps extreme performance gains mostly when few memory accesses are required and a few fast operations are applied to small amounts of data; this is because flexibility imposes overhead, and when the overhead is incurred just to run fast instructions, the overhead dominates the entire operation. For example, graphics chips do just a relative handful of math to a frame, again and again, and so they gain orders of magnitude speedups by being specialized chips—as does any other program which is like that, which includes cryptographic hashes designed for speed like the ones Bitcoin uses.)

    An older example uses FPGAs (also being used for Bitcoin hashing):

    An impor­tant con­sid­er­a­tion to be made is that CPU-bound hash func­tions are still vul­ner­a­ble to hard­ware imple­men­ta­tions. For exam­ple, the lit­er­a­ture pro­vides effi­cient hard­ware imple­men­ta­tions of SHA-1 in as low as 5000 gates, and able to pro­duce a result in less than 400 clock cycles2. Since mul­ti­-mil­lion gate FPGAs can be pur­chased at less than $129 price points3, it fol­lows that an attacker can build a fully unrolled hard­ware cracker for about $6,455. Such a design, if clocked at 100MHz can try about 300,000 keys/second for the algo­rithm pro­posed above.

  4. As of 2013, the high­est-fre­quency con­sumer CPU avail­able, the AMD FX-9000, tops out at 5GHz with few prospects for sub­stan­tial increases in fre­quen­cy; Rivest pro­jected 10GHz and increas­ing:

    Based on the SEMATECH National Tech­nol­ogy Roadmap for Semi­con­duc­tors (1997 edi­tion), we can expect inter­nal chip speeds to increase by a fac­tor of approx­i­mately 13 over­all up to 2012, when the clock rates reach about 10GHz. After that improve­ments seem more diffi­cult, but we esti­mate that another fac­tor of five might be achiev­able by 2034. Thus, the over­all rate of com­pu­ta­tion should go through approx­i­mately six dou­blings by 2034.

    Moore’s law, in the orig­i­nal for­mu­la­tion of tran­sis­tors per dol­lar, may have con­tin­ued post-1997, but the gains increas­ingly came by par­al­lelism, which is not use­ful for repeated squar­ing.↩︎

  5. Actual num­bers; the differ­ence really is that large.↩︎

  6. While there are tens or hundreds of thousands of nodes in the Bitcoin P2P network, only a few of them are actual miners because CPU mining has become useless—the big miners, who have large server farms of GPUs or ASICs, collectively control much of the hash power. This has not yet been a problem, but it may become one. Using a (partially) memory-bound hash function is one of the selling points of a competing cryptocurrency, Litecoin.↩︎

  7. eg. the 2008 Graves the­sis, “High per­for­mance pass­word crack­ing by imple­ment­ing rain­bow tables on nVidia graph­ics cards (Ise­Crack)” claims a 100x speedup over CPU gen­er­a­tion of rain­bow tables, or the actively devel­oped util­i­ty, Rain­bow­Crack (which you can even buy the gen­er­ated rain­bow tables from).↩︎

  8. On Hacker News

    To add check­points, one could release both the orig­i­nal seed of the chain A, and a num­ber of pairs of hashes … Let’s say you wanted to do 1-month chains. Hash the seed A for a week, then take the cur­rent value x0 such that H(B) = x0. You know the value of B, since you’ve been com­put­ing the chain. Pick another ran­dom value y0, and con­tinue the chain with . Write in the out­put, and hash for another week. Do the same for and . Each chain then has a seed value and 4 pairs of ‘check­points’. When unlock­ing the crypto puz­zle, these check­points can’t be used to jump ahead in the com­pu­ta­tion, but they can tell you that you’re on the right track. I think that you could even use a sec­ondary hash chain for the yn val­ues, so . If you also derived y0 from A (e.g.), you would just need to pub­lish the seed value A and each check­point hash xn in order to have a fully check­pointed crypto puz­zle.

  9. Or one could instead define time as difficulty/hashes invested, to avoid a possible attack where a fake blockchain is created where each block drops in difficulty as much as possible to make creating long blockchains easy. The downside is that while # of blocks over time is fairly tightly predictable, hashpower will change over time based on the Bitcoin economy, and the error in your forecast accumulates. Probably the best approach would be setting a minimum number of blocks and hashpower, to get the benefit of precise timing while only needing a loose estimate of hashpower to avoid giving an attacker much of an advantage. The tradeoff a user chooses will depend on whether it is more important to ensure unlocking by a particular time, or ensure it remains locked until a particular time. The further out the unlock date, the harder accurate unlocking becomes, and users may need to become highly conservative—as illustrated by the real-world outcome of Rivest’s contest!↩︎

  10. From Abadi 2005:

    Fast CPUs run much faster than slow CPUs—consider a 2.5GHz PC ver­sus a 33MHz Palm PDA. More­over, in addi­tion to high clock rates, high­er-end com­puter sys­tems also have sophis­ti­cated pipelines and other advan­ta­geous fea­tures. If a com­pu­ta­tion takes a few sec­onds on a new PC, it may take a minute on an old PC, and sev­eral min­utes on a PDA. That seems unfor­tu­nate for users of old PCs, and prob­a­bly unac­cept­able for users of PDAs…we are con­cerned with find­ing mod­er­ately hard func­tions that most com­puter sys­tems will eval­u­ate at about the same speed. We envi­sion that high­-end sys­tems might eval­u­ate these func­tions some­what faster than low-end sys­tems, per­haps even 2–10 times faster (but not 10–100 faster, as CPU dis­par­i­ties might imply). More­over, the best achiev­able price-per­for­mance should not be sig­nifi­cantly bet­ter than that of a typ­i­cal legit­i­mate clien­t…A mem­o­ry-bound func­tion is one whose com­pu­ta­tion time is dom­i­nated by the time spent access­ing mem­o­ry. The ratios of mem­ory laten­cies of machines built in the last five years is typ­i­cally no greater than two, and almost always less than four. (Mem­ory through­put tends to be less uni­form, so we focus on laten­cy.) A mem­o­ry-bound func­tion should access loca­tions in a large region of mem­ory in an unpre­dictable way, in such a way that caches are ineffec­tu­al…Other pos­si­ble appli­ca­tions include estab­lish­ing shared secrets over inse­cure chan­nels and the timed release of infor­ma­tion, using mem­o­ry-bound vari­ants of Merkle puz­zles [Merkle 1978] and of time-lock puz­zles [May 1993; Rivest et al. 1996], respec­tive­ly. We dis­cuss these also in sec­tion 4.

    And here I thought I was being orig­i­nal in sug­gest­ing mem­o­ry-bound func­tions for time-lock puz­zles! Tru­ly, “there is noth­ing new under the sun”.↩︎