Time-lock encryption

How do you encrypt a file so that it can be decrypted after a date, but not before? Use intrinsically serial proof-of-work computations—successive squaring or chained hashes—or witness encryption on blockchains.
computer-science, cryptography, Bitcoin, shell
2011-05-24–2019-05-06 finished certainty: highly likely importance: 8

In cryptography, it is easy to adjust encryption of data so that one, some, or all people can decrypt it, or some combination thereof. It is not so easy to achieve adjustable decryptability over time, a “time-lock crypto”: for some uses (data escrow, leaking, insurance, last-resort Bitcoin backups etc), one wants data which is distributed only after a certain point in time.

I survey techniques for time-lock crypto. Proposals often resort to trusted third parties, which are vulnerabilities. A better time-lock crypto proposal replaces trusted third parties with forcibly serial proof-of-work using number squaring, guaranteeing unlocking not after a certain point in time but after sufficient computation-time has been spent; it’s unclear how well number-squaring resists optimization or shortcuts. I suggest a new time-lock crypto based on chained hashes; hashes have been heavily attacked for other purposes, and may be safer than number-squaring. Finally, I cover obfuscation & witness-encryption which, combined with proof-of-work, can be said to solve time-lock crypto but currently remain infeasible.

Julian Assange & Wikileaks made headlines in 2010 when they released an “insurance file”, a 1.4GB AES-encrypted file available through BitTorrent. It’s generally assumed that copies of the encryption key have been left with Wikileaks supporters who will, in the appropriate contingency like Assange being assassinated, leak the key online to the thousands of downloaders of the insurance file, who will then read and publicize whatever contents are in it (speculated to be additional US documents Manning gave Wikileaks); this way, if the worst happens and Wikileaks cannot afford to keep digesting the files to eventually release them at its leisure in the way it calculates will have the most impact, the files will still be released and someone will be very unhappy.

Of course, this is an all-or-nothing strategy. Wikileaks has no guarantees that the file will not be released prematurely, nor guarantees that it will eventually be released. Any one of those Wikileaks supporters could become disaffected and leak the key at any time—or if there’s only 1 supporter, they might lose the key to a glitch or become disaffected in the opposite direction and refuse to transmit the key to anyone. (Hope Wikileaks kept backups of the key!) If one trusts the person with the key absolutely, that’s fine. But wouldn’t it be nice if one didn’t have to trust another person like that? Cryptography does really well at eliminating the need to trust others, so maybe there’re better schemes.

Now, it’s hard to imagine how some abstract math could observe an assassination and decrypt embarrassing files. Perhaps a different question could be answered—can you design an encryption scheme which requires no trusted parties but can only be broken after a certain date?


This sort of cryptography would be useful for many things; from “Time-Lock Puzzles in the Random Oracle Model” (Mahmoody et al 2011):

In addition to the basic use of ‘sending messages to the future’, there are many other potential uses of timed-release crypto. Rivest, Shamir and Wagner 1996 suggest, among other uses, delayed payments, sealed-bid auctions and key escrow. Boneh & Naor 2000 define timed commitments and timed signatures and show that they can be used for fair contract signing, honesty-preserving auctions and more.

Censorship resistance in cryptocurrencies (self-executing hash-then-commit), document embargoes (eg. legal or classified documents or confessions or coercible diaries, or security researchers who are often threatened), receipt-free voting, and other applications; a cute (albeit useless) application is “Offline Submission with RSA Time-Lock Puzzles”, Jerschow & Mauve 2010:

Our main contribution is an offline submission protocol which enables an author being currently offline to commit to his document before the deadline by continuously solving an RSA puzzle based on that document. When regaining Internet connectivity, he submits his document along with the puzzle solution which is a proof for the timely completion of the document.

One could also lock away files from oneself: seal one’s diary for a long time, or use it to enforce computer time-outs (change the password and require X minutes to decrypt it). When Ross Ulbricht/Dread Pirate Roberts of Silk Road was arrested in October 2013 and his computer with his Bitcoin hoard seized, I thought of another use: conditional transfers of wealth. Ulbricht could create a time-locked copy of his bitcoins and give it to a friend. Because bitcoins can be transferred, he can at any time use his own unlocked copy to render the copy useless and doesn’t have to worry about the friend stealing all the bitcoins from him, but if he is, say, “shot resisting arrest”, his friend can still—eventually—recover the coins. This specific Bitcoin scenario may be possible with the Bitcoin protocol using nLockTime, in which case one has a nice offline digital cash system, as betterunix suggests:

Here is one possible use case: imagine an offline digital cash system, i.e. the bank will not accept the same token twice. To protect against an unscrupulous seller, the buyer pays by giving the seller a time-lock puzzle with the tokens; if the goods are not delivered by some deadline, the buyer will deposit the tokens at the bank, thus preventing the seller from doing so. Otherwise the seller solves the puzzle and makes the deposit. This is basically an anonymity-preserving escrow service, though in practice there are probably simpler approaches.

This use generalizes beyond darknet markets to all services or entities holding Bitcoins: the service can provide users with a time-locked version of the private keys, and regularly move its funds—if the service ever goes down, the users can retrieve their funds. One not-so-cute use is in defeating antivirus emulation. “Anti-Emulation Through Time-Lock Puzzles”, Ebringer 2008 outlines it: one starts a program with a small time-lock puzzle which must be solved before the program does anything evil, in the hopes that the antivirus scanner will give up or stop watching before the puzzle has been solved and the program decrypts the evil payload; the puzzle’s math backing means no antivirus software can analyze or solve the puzzle first. The basic functionality cannot be blacklisted, as it is used by legitimate cryptography software, which would be expensive collateral damage.

No trusted third-parties

Note that this bars a lot of the usual suggestions for cryptography schemes. For example, the general approach of trusted escrow agents (eg. Bellare & Goldwasser 1996): if you trust some people, you can just adopt a protocol where they XOR together their keys to get the master key for the publicly distributed encrypted file. Or if you only trust some of those people (but are unsure which will try to betray you and either release early or late), you can adopt k-of-n secret sharing, where k of the n people suffice to reconstruct the master key, like Rabin & Thorpe 2006. (And you can connect multiple groups, so each decrypts some necessary keys for the next group; but this gives each group a consecutive veto on release…) Or perhaps something could be devised based on timed-release encryption servers like Crescenzo et al 1999 or Blake & Chan 2004; but then don’t you need the servers to survive on the network? A variant on trusted timestamping, Intel would like to suggest, might be outsourcing trust to a “Proof of Elapsed Time” mechanism implemented using Intel’s tamper-proof hardware module SGX; too bad that SGX has had security issues since well before release1. Or secure multi-party computation (but don’t you need the parties to stay on the network, or risk them all saying ‘screw it, we’re too impatient, let’s just pool our secrets and decrypt the file now’?), or you could exploit physics and use the speed of light to communicate with a remote computer on a spacecraft (except now we’re trusting the spacecraft as our third party, hoping no one stole its onboard private key & is able to decrypt our transmissions instantaneously)…

One approach is to focus on creating problems which can be solved with a large but precise amount of work, reasoning that if the problem can’t be solved in less than a month, then you can use that as a way to guarantee the file can’t be decrypted within a month’s time. (This would be a proof-of-work scheme.) This has its own problems2, but it at least delivers what it promises.

One could encrypt the file against information that will be known in the future, like stock prices—except wait, how can you find out what stock prices will be a year from now? You can’t use anything that is public knowledge now because that’d let the file be decrypted immediately, and by definition you don’t have access to information currently unknown but which will be known in the future; and if you generate the information yourself, planning to release it, now you have problems—you can’t even trust yourself (what if you are abruptly assassinated?), much less your confederates.

Successive squaring

At this point, let’s see what the crypto experts have to say. Googling to see what the existing literature was (after I’d thought of the above schemes), I found that the relevant term is “time-lock puzzles” (by analogy with the bank vault time lock). In particular, Rivest, Shamir & Wagner published a 1996 paper on the topic, “Time-lock puzzles and timed-release crypto”. Apparently the question was first raised by Timothy C. May on the Cypherpunks mailing list in an email from 1993-02-10. May also discusses the topic briefly in his Cyphernomicon, ch14.5; unfortunately, May’s solution (14.5.1) is essentially to punt to the legal system and rely on legal privilege and economic incentives to keep keys private.

Rivest et al agree with us that

There are 2 natural approaches to implementing timed-release crypto:

  • Use ‘time-lock puzzles’—computational problems that can not be solved without running a computer continuously for at least a certain amount of time.
  • Use trusted agents who promise not to reveal certain information until a specified date.

And that for time-lock puz­zles:

Our goal is thus to design time-lock puzzles that, to the greatest extent possible, are ‘intrinsically sequential’ in nature, and can not be solved substantially faster with large investments in hardware. In particular, we want our puzzles to have the property that putting computers to work together in parallel doesn’t speed up finding the solution. (Solving the puzzle should be like having a baby: two women can’t have a baby in 4.5 months.)

Rivest et al then point out that the most obvious approach—encrypt the file to a random short key, short enough that brute-forcing takes only a few months/years as opposed to eons—is flawed because brute-forcing a key is very parallelizable and amenable to speedups3. (And as well, the randomness of searching a key space means that the key might be found very early or very late; any estimate of how long it will take to brute force is just a guess.) One cute application of the same brute-forcing idea is Merkle’s Puzzles, where the time-lock puzzle is used to hide a key for a second party to communicate with the first-party creator, but it has the same drawback: it has the creator make many time-lock puzzles (any of which could be used by the second party) and raises the cost to the attacker (who might have to crack each puzzle), but can be defeated by a feasibly wealthy attacker, and offers only probabilistic guarantees (what if the attacker cracks the same puzzle the second party happens to choose?).

Rivest et al propose a scheme in which one encrypts the file with a very strong key as usual, but then one encrypts the key in such a way that one must calculate 2^(2^t) mod n, where t is the adjustable difficulty factor. Knowing the original prime factors of n, one can easily avoid doing the successive squarings by reducing the exponent modulo φ(n). This has the nice property that the puzzle constructor invests only 𝒪(log n) computing power, but the solver has to spend 𝒪(n²) computing power. (This scheme works in the random oracle model, and Barak & Mahmoody-Ghidary 2009 proves that this is the best you can do.) Further refinements to the modular exponentiation proposal have been made, such as Kuppusamy & Rangasamy 2015.
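The asymmetry between constructor and solver can be sketched in a few lines of Python (a toy illustration with small assumed primes, not a secure implementation—a real puzzle needs a large random RSA modulus, and would couple the result to an actual file-encryption key):

```python
# Toy sketch of the Rivest-Shamir-Wagner time-lock puzzle.
# Assumptions: p & q are tiny illustrative primes; in practice they must be
# large, random, and destroyed after puzzle construction.

def make_puzzle(t, p=1000003, q=1000033):
    """Constructor: knows p & q, so can shortcut the t squarings."""
    n = p * q
    phi = (p - 1) * (q - 1)
    e = pow(2, t, phi)        # reduce the exponent mod phi(n): cheap
    key = pow(2, e, n)        # equals 2^(2^t) mod n, by Euler's theorem
    return n, key

def solve_puzzle(n, t):
    """Solver: lacks phi(n), so must do t sequential squarings mod n."""
    x = 2
    for _ in range(t):
        x = pow(x, 2, n)      # inherently serial: each step needs the last
    return x

n, key = make_puzzle(10000)
assert solve_puzzle(n, 10000) == key
```

The trapdoor is exactly the knowledge of φ(n): without the factorization, no known method beats doing the t squarings one after another.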

LCS35: Rivest’s Time-lock Experiment

Rivest has actually used this scheme for a 1999 time capsule commemorating the MIT Laboratory for Computer Science; he expected his puzzle to take ~35 years. As of 14 years later, 2013, minimal progress seemed to have been made; if anything, the stagnation of CPU clockspeeds seemed to imply that it would take far more than 35 years.4 Rivest offers some advice for anyone attempting to unlock this time-lock puzzle (may or may not be related to Mao’s 2000 paper “Time-Lock Puzzle with Examinable Evidence of Unlocking Time”):

An interesting question is how to protect such a computation from errors. If you have an error in year 3 that goes undetected, you may waste the next 32 years of computing. Adi Shamir has proposed a slick means of checking your computation as you go, as follows. Pick a small (50-bit) prime c, and perform the computation modulo c·n rather than just modulo n. You can check the result modulo c whenever you like; this should be an extremely effective check on the computation modulo n as well.
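Shamir’s trick can be demonstrated concretely (a sketch with toy numbers I chose for illustration; Rivest suggests a ~50-bit prime for c, while a smaller Mersenne prime is used here):

```python
# Sketch of Shamir's error-check for the serial squaring computation.
# Work modulo c*n instead of n; the running value reduced mod c can be
# spot-checked at any time against a direct computation of 2^(2^i) mod c.
n = 1000003 * 1000033        # toy modulus (assumption: real n is huge)
c = 2147483647               # small checking prime (2^31 - 1)
t = 5000

x = 2
for i in range(t):
    x = pow(x, 2, c * n)     # same squarings, just over the larger modulus

# Cheap check: since c is prime, 2^(2^t) mod c is computable directly
# via Fermat's little theorem (reduce the exponent mod c-1).
assert x % c == pow(2, pow(2, t, c - 1), c)

answer = x % n               # the actual puzzle value, 2^(2^t) mod n
```

A hardware glitch that corrupts x at any point would, with overwhelming probability, make the mod-c check fail, so the solver loses at most the work since the last checkpoint rather than decades.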

Constant factors

How well does this work? The complexity seems correct, but I worry about the constant factors. Back in 1996, computers were fairly homogeneous, and Rivest et al could reasonably write:

We know of no obvious way to parallelize it to any large degree. (A small amount of parallelization may be possible within each squaring.) The degree of variation in how long it might take to solve the puzzle depends on the variation in the speed of single computers and not on one’s total budget. Since the speed of hardware available to individual consumers is within a small constant factor of what is available to large intelligence organizations, the difference in time to solution is reasonably controllable.

But I wonder how true this is. Successive squaring does not seem to be a very complex algorithm to implement in hardware. There are more exotic technologies than GPUs we might worry about, like FPGAs or ASICs which may be specialized for successive squaring. Or, since squaring seems simple, is it relevant to forecasts of serial speed that there are transistor prototypes going as high as 100GHz? Offhand, I don’t know of any compelling argument to the effect that there are no large constant-factor speedups possible for multiplication/successive-squaring. Indeed, the general approach of exponentiation and factoring has to worry about the fact that the complexity of factoring has never been proven (and could still be very fast) and that there are speedups with quantum techniques like Shor’s algorithm.

One suggestion (December 2016 FHI AI/blockchain workshop) is that the uncertainty of the efficiency of squaring (or hashing) could be reduced by the honeypot/“moneypot” strategy, similar to some longstanding Bitcoin rewards for breaking hash functions: release several time-locks (perhaps on a blockchain like Ethereum) with the private key controlling large monetary rewards; this incentivizes people to unlock them as efficiently as possible to claim the reward, and by moving the reward at particular times, reveal an upper bound on how quickly a given time-lock can be solved. (This wouldn’t necessarily measure the speed of opening for state-level actors, who are insensitive to monetary rewards, but it would still help—assuming people cared enough to set up large bounties.)

2019 Unlocking of LCS35

In April 2019, it was announced that not one but two solutions of Rivest’s puzzle had been made: the first solution of the puzzle was made by Belgian programmer Bernard Fabrot as a hobby, who skipped Rivest’s Java implementation to run a squaring routine for ~3.5 years on an ordinary Intel Core i7-6700 CPU core; the second was done in 2 months by the Cryptophage research project using new squaring algorithms optimized to run on an FPGA, with an ASIC implementation possibly running in ~6 days. (The Cryptophage project targeted Rivest’s puzzle to investigate the robustness of time-lock crypto, and the potential of “Verifiable Delay Functions”, as dubbed by Boneh et al 2018, for cryptocurrency.) Rivest commented:

“There have been hardware and software advances beyond what I predicted in 1999,” says MIT professor Ron Rivest, who first announced the puzzle in April 1999 tied to a celebration of 35 years of research at MIT’s Laboratory for Computer Science (now CSAIL). “The puzzle’s fundamental challenge of doing roughly 80 trillion squarings remains unbroken, but the resources required to do a single squaring have been reduced by much more than I predicted.”

This resolution to Rivest’s experiment suggests that constructing reliable time-locks, which are truly serial and reasonably tight in best-case unlocking time, may be quite difficult.

Weak keys

Let’s think about alternatives to successive squaring.

The first approach that jumps to mind, and seems to be crypto folklore, is to encrypt the file with a relatively short or weak key, one which will take years to bruteforce. Instead of a symmetric key of 256 bits, perhaps one of 50 bits or 70 bits?

This fails for two major reasons:

  1. variance:

    we realize that we can guarantee on average how much work it will take to bruteforce the key, but we cannot guarantee how much time it will take. A weak key could be cracked at any time given good luck, and across many uses of time-lock crypto, there will be instances where a weak key is cracked almost immediately, to the potentially extreme harm of the encrypter.

  2. differing amounts of computing resources, particularly parallel computing: It may take a CPU years to bruteforce a chosen key-length, but take a cluster of CPUs just months.

    Or worse than that, without invoking clusters or supercomputers—devices can differ dramatically now even in the same computers; to take the example of Bitcoin mining, my laptop’s 2GHz CPU can search for hashes at 4k/sec, while its single outdated GPU can search at 54M/second5. This >10,000x speedup is not because the GPU’s clockspeed is 2400GHz or anything like that, it is because the GPU has hundreds of small specialized processors which are able to compute hashes, and the particular application does not require the processors to coordinate or anything like that which might slow them down. Incidentally, this imbalance between CPUs and highly-parallel specialized chips has had the negative effect of centralizing Bitcoin mining power, reducing the security of the network.6 Many miners have moved to clusters of GPUs because they offer such great speedups; as have a number of cryptographic applications such as generating rainbow tables7.

    Someone who tries to time-lock a file using a parallelizable form of work renders themselves vulnerable to any attackers like the NSA or botnets with large numbers of computers, but may also render themselves vulnerable to an ordinary computer-gamer with 2 new GPUs: it would not be very useful to have a time-lock which guarantees the file will be locked between a year and a millennium, depending on how many & what kind of people bother to attack it and whether Moore’s law continues to increase the parallel-processing power available.

    There doesn’t seem to be any obvious way to solve this with a key mechanism. Even if one is using 100 layers of encryption, each short key can be brute-forced almost arbitrarily fast by a well-resourced or even state-level actor.
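The variance problem above can be made vivid with a toy simulation (an idealized model I am assuming here: a brute-forcer tries keys in random order, so the time-to-hit is uniform over the keyspace):

```python
# Toy model of the variance problem with weak keys: across many
# time-locked files, some will fall almost immediately by sheer luck.
# (Illustrative assumption: uniform hit times; not a real attack model.)
import random

random.seed(0)
keyspace = 10**6                  # stand-in for a small 'weak key' space
hit_times = [random.randrange(keyspace) for _ in range(1000)]

average = sum(hit_times) / len(hit_times)   # clusters near keyspace/2
luckiest = min(hit_times)                   # some file is cracked very fast

# With 1000 independent time-locks, the unluckiest encrypter's file
# falls in a tiny fraction of the average cracking time.
assert luckiest < average / 10
```

The average tells the encrypter little: what matters for safety is the lower tail, and with many independent uses that tail reaches nearly to zero.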

Many weak keys

We can solve the variance problem by requiring multiple short keys to decrypt; 1 short key will be cracked ‘quickly’ in an unacceptable fraction of instances, but if 100 short keys are all required to decrypt, then the binomial probability of all 100 being cracked ‘quickly’ by chance is acceptably low. (Requiring multiple keys can be done by multiple layers of encryption on the file, onion-style, or by defining the decryption key as all the weak keys XORed together.)

This doesn’t solve the serial/computing-power objection: some actors will be able to crack each short key much faster by using highly parallel resources, and it makes little difference whether it’s one weak key or many—they will still open it earlier than other actors.

So public-key or symmetric encryption may not be directly usable as the time-lock primitive. We want an approach which is inherently serial and unparallelizable.



For example, one could take a cryptographic hash function like SHA-256, give it a random input, and hash it for a month. Each hash depends on the previous hash, and there’s no way to skip from the first hash to the trillionth hash. After a month, you use the final hash as the encryption key, and then release the encrypted file and the random input to all the world. The first person who wants to decrypt the file has no choice but to redo the trillion hashes in serial order to reach the same encryption key you used.

Nor can the general public (or the NSA) exploit any parallelism they have available, because each hash depends sensitively on the hash before it—the avalanche effect is a key property of cryptographic hashes.

This is simple to implement: take a seed, hash it, and feed the resulting hash back into the algorithm repeatedly until enough time or iterations have passed; then encrypt the file with it as a passphrase. Here is a straightforward shell implementation using SHA-512 & a number of iterations:

timelock () {
  local -i I="$1"
  local HASH_SEED="$2"

  while [ "$I" -gt 0 ]; do
    HASH_SEED=$(echo "$HASH_SEED" | sha512sum | cut --delimiter=' ' --fields=1)
    I=$((I - 1))
  done

  echo "$HASH_SEED"
}

timelock_encrypt () {
  local KEY=$(timelock "$1" "$2")
  local PLAINTEXT_FILE="$3"
  echo "$KEY" | gpg --passphrase-fd 0 --symmetric --output "$PLAINTEXT_FILE.gpg" "$PLAINTEXT_FILE"
}
# same as timelock_encrypt except the GPG command changes to do decryption
timelock_decrypt () {
  local KEY=$(timelock "$1" "$2")
  local ENCRYPTED_FILE="$3"
  echo "$KEY" | gpg --passphrase-fd 0 --decrypt "$ENCRYPTED_FILE"
}

# Example usage:
# $ rm foo.*
# $ echo "this is a test" > foo.txt
# $ timelock_encrypt 100 random_seed foo.txt
# Reading passphrase from file descriptor 0
# $ rm foo.txt
# $ timelock_decrypt 100 random_seed foo.txt.gpg
# Reading passphrase from file descriptor 0
# gpg: CAST5 encrypted data
# gpg: encrypted with 1 passphrase
# this is a test

Another simple implementation of serial hashing (SHA-256) for time-locking has been written in Python.

Chained hashes

On the other hand, there seems to be a way that the original person running this algorithm can run it in parallel: one generates n random inputs (for n CPUs, presumably), and sets them hashing as before for m iterations (say, a month). Then, one sets up a chain between the n results—the final hash of seed 1 is used as a key to encrypt seed 2, the final hash of which is the key encrypting seed 3, and so on. Then one releases the encrypted file, first seed, and the encrypted seeds. Now the public has to hash the first seed for a month, and only then can it decrypt the second seed, and start hashing that for a month, and so on. Since the encryption for each seed is uncrackable, and there’s no way to “jump forward” in successive hashing, there is no way to reach the final result faster than a single hash can be computed the n*m times in serial by a single processor/circuit. Karl Gluck suggests that, like repeated squaring, it’s even possible to add error-detection to the procedure8. (A somewhat similar scheme, “A Guided Tour Puzzle for Denial of Service Prevention”, uses network latency rather than hash outputs as the chained data—clients bounce from resource to resource—but this obviously requires an online server and is unsuitable for our purposes, among other problems.)
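The parallel-construction/serial-solving scheme can be sketched as follows (a minimal illustration; I use XOR-with-the-final-hash as a stand-in for encrypting each seed, whereas a real implementation would use an authenticated cipher):

```python
# Sketch of the chained-hash time-lock: build n chains in parallel,
# force the solver to walk all n*m hashes in strict sequence.
import hashlib
import os

def hash_chain(seed: bytes, m: int) -> bytes:
    for _ in range(m):
        seed = hashlib.sha256(seed).digest()
    return seed

def build(n: int, m: int):
    seeds = [os.urandom(32) for _ in range(n)]
    finals = [hash_chain(s, m) for s in seeds]   # n chains: parallelizable
    # 'Encrypt' seed i+1 under the final hash of chain i (toy XOR cipher):
    blobs = [bytes(a ^ b for a, b in zip(finals[i], seeds[i + 1]))
             for i in range(n - 1)]
    # Publish seeds[0] & blobs; keep finals[-1] as the file-encryption key.
    return seeds[0], blobs, finals[-1]

def solve(seed0: bytes, blobs, m: int) -> bytes:
    h = hash_chain(seed0, m)                 # no shortcut: serial hashing
    for blob in blobs:
        next_seed = bytes(a ^ b for a, b in zip(h, blob))
        h = hash_chain(next_seed, m)         # n*m hashes, all in sequence
    return h

seed0, blobs, key = build(4, 10000)
assert solve(seed0, blobs, 10000) == key
```

The builder spends n*m hashes of work spread over n processors (wall-clock m), while any solver must spend the same n*m hashes on a single serial chain: a wall-clock amplification of roughly n.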

This is pretty clever. If one has a thousand CPUs handy, one can store up 3 years’ worth of computation-resistance in just a day. This satisfies a number of needs. (Maybe; please do not try to use this for a real application before getting a proof from a cryptographer that chained hashing is secure.)

Specifically, you could use a hash for which highly optimized implementations have already been created and put into mass production as ASICs, which have been hand-optimized down to the minimal number of gates possible (probably a few thousand). Because of Bitcoin mining, SHA-256 is a plausible candidate here. (There is one proposal for slightly faster circuits, but it would be unusable for time-locks on its own because it would deterministically fail on some hashes, which would likely be encountered while following a hash-chain.) Serial speedups would lead to proportional reductions in time to unlock.

To allow for increases in computing power, it’s probably not necessary to adjust for Moore’s law: serial speeds and clock frequencies have stagnated considerably in recent years, with most of the gains going to parallelism or energy efficiency, neither of which helps with a hash-chain. More concerning is exotic technology like graphene for implementing circuits in the gigahertz or terahertz range. (Such an exotic chip might cost a lot, but only a few would be necessary to compute hash-chains far faster than expected.)

Implementing chained hashing is not as easy as single-threaded hashing, but there are currently 2 implementations:

  1. Peter Todd & Amir Taaki: implementation in Python (announcement with bounties on example time-locked files; Reddit & HN discussion), with some clever Bitcoin integration
  2. Dorian Johnson, implementation in Go

It is possible to combine the decrypter programs with a specified time-locked file, creating a self-decrypting file—one which stores data safely encrypted; cannot be decrypted before a certain time period; and requires no third parties whatsoever. In other words, chained hashes satisfy our original desire for a self-decrypting file.


But what about people who only have a normal computer? Fundamentally, this repeated hashing requires you to put in as much computation as you want your public to expend reproducing the computation, which may not be enough even with a GPU implementation. One may not want ‘merely’ a ratio of 1:100 like a standard desktop GPU might be able to provide, but much higher—1:10000 or more. (Offhand, I can’t think of any application for time-lock crypto which requires such high ratios, but it may be necessary for doing time-locks at any kind of industrial scale or for exotic applications.)

We want to force the public to expend more computation—potentially much more—than we put in. How can we do this?

It’s hard to see how. At least, I haven’t thought of anything clever. Homomorphic encryption promises to let us encode arbitrary computations into an encrypted file, so one could imagine implementing the above hash chains inside the homomorphic computation, or perhaps just encoding a loop counting up to a large number. There are two problems with trying to apply homomorphic encryption:

  1. I don’t know how one would let the public decrypt the result of the homomorphic encryption without also letting them tamper with the loop.

    Suppose one specifies that the final output is encrypted to a weak key, so one simply has to run the homomorphic system to completion and then put in a little effort to break the homomorphic encryption to get the key which unlocks a file; what stops someone from breaking the homomorphic encryption at the very start, manually examining the running program, and shortcutting to the end result? Of course, one could postulate that what was under the homomorphic encryption was something like a hash-chain where predicting the result is impossible—but then why bother with the homomorphic encryption? Just use that instead!

  2. and in any case, homomorphic encryption as of 2013 is a net computational loss: it takes as much or more time to create such a program as it would take to run the program, and so is no good.

Distributed time-lock systems

Pooling key-cracking

Gregory Maxwell has a February 2013 sketch for an altcoin which allows cooperative distributed solving of time-locks (as opposed to solo efforts to tick through a hash-chain):

POW which turns the distributed computation into ticking for timelock encryption

  • An infinite sequence of nothing-up-my-sleeve numbers are taken as an infinite sequence of ECC public keys. Searching the POW involves finding distinguished points along a Pollard’s rho DLP solution trying to crack the key. When the key is cracked the problem is advanced to the next key.
  • People can then encrypt messages with all of the keys between now and sometime in the future and the network will crack them, achieving a timelock.
  • Probably incompatible with merged mining and other POW schemes.
  • Making the difficulty adaptive either makes far-in-the-future messages impossible (because the problem size wouldn’t be known long in advance), or requires increasingly big headers as the difficulty would require working on multiple problems concurrently.
  • The obvious constructions using ECDLP as the asymmetric problem are not progress-free.

To explain this idea in a little less jargony form, as best as I understand it: take some standard PRNG; come up with a seed in some publicly acceptable way, such as using the binary expansion of the constant π; define every, say, 100 bits of the output as a public key (with an as-yet-unknown corresponding private key); anyone who wants to timelock a file encrypts the file using one of these “public keys” and releases the encrypted file to the public as normal; now, people interested in unlocking them (“miners”) work on brute-forcing each chunk of 100 bits one by one, and then the messages for that key can be read by anyone; depending on how much computing power is collectively being spent by the miners, you specify intervals by picking a public key X keys in the future (eg. if each 100-bit key takes 1 month to crack and you want to delay opening by a year/12 months, then you encrypt your file to the 12th uncracked key). Or if you are worried about people ‘skipping ahead’ and targeting a future key specifically, the protocol could be that each file is encrypted to all successive keys (so it would be an onion of encryption layers: #1(#2(#3(…(#12(plaintext file))))), so it can only be directly attacked by a loner when the first 11 keys have been cracked).

A timelock altcoin could incentivize participation in unlocking files and be more predictable than solo hash-chain files.

(A similar idea may or may not have been proposed in the Chinese-language publication Ke et al 2014.)

Bitcoin as a clock


If one could create a program which is a blackbox such that running the program reveals nothing about its contents besides its output, then one has the start of a time-lock: create a normal encrypted archive, create a blackbox which only emits its content after certain conditions are met, stick the key inside the blackbox, and distribute both the blackbox and the archive. The encrypted archive is uncrackable on its own, and if the blackbox really does obfuscate its contents beyond recognition, the key is safe too. But once the proper conditions are met, anyone with a copy of both can reveal the data.

What are proper conditions, and how does a program tell the true time? The local computer’s clock is a lie, and trusting any timestamp servers reintroduces trusted third parties. But this is the same problem as digital currency faces, and the solution can be the same: rely on proof-of-work as embodied in the Bitcoin blockchain! The blackbox can ask for a copy of the blockchain and check that n blocks have passed, and thus that an ungodly number of hashes have been spent to create it; to the extent that no entity has as much hashpower as the Bitcoin blockchain’s miners collectively do, this is a reliable and trustworthy ‘clock’. (Amusingly, anyone who wanted to spend hashpower to try to open the blackbox early would be best off becoming a Bitcoin miner, to both defray their expenses and cause a short speedup of blocks until the mining difficulty ratchets up to compensate.) With time redefined as blocks, which happen every ~10 minutes, someone who wants to release data in a year could include the current block’s hash and specify that the blackbox wants at least, say, 52596 more valid blocks before it will unlock & print the decryption key9.
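
A minimal sketch of the clock check such a blackbox would perform, using a toy chain of simplified headers (the 32-byte-predecessor-hash + nonce layout and the easy TARGET are assumptions for illustration; real Bitcoin headers are 80 bytes with a compact-encoded target, and 52596 blocks is roughly a year at 6 blocks/hour × 24 × 365.25):

```python
import hashlib

TARGET = 1 << 248  # toy difficulty: 8 leading zero bits (Bitcoin's real target is far harder)

def block_hash(header: bytes) -> bytes:
    """Double SHA-256, as Bitcoin hashes its block headers."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def blocks_since(chain, anchor_hash: bytes, required: int) -> bool:
    """Check that `chain` is a linked sequence of headers with valid
    proof-of-work extending `anchor_hash`, at least `required` long."""
    prev = anchor_hash
    for header in chain:
        if header[:32] != prev:  # each header must commit to its predecessor
            return False
        h = block_hash(header)
        if int.from_bytes(h, "big") >= TARGET:  # insufficient proof-of-work
            return False
        prev = h
    return len(chain) >= required

def mine(prev_hash: bytes) -> bytes:
    """Toy miner: grind nonces until the header hash meets TARGET."""
    nonce = 0
    while True:
        header = prev_hash + nonce.to_bytes(8, "big")
        if int.from_bytes(block_hash(header), "big") < TARGET:
            return header
        nonce += 1
```

A real blackbox would hardcode the anchor hash and `required = 52596`, and release the key only once `blocks_since` returns true.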

Such a blackbox could be created by “indistinguishability obfuscation” (recently developed), as pointed out by Liu et al 2015 & Liu et al 2018.

This scheme gains several benefits over chained-hashes/multiple-weak-keys/successive-squaring/Maxwell’s-distributed-key-cracking: no one needs to invest computations to create the time-lock (aside from creating the blackbox), no one (much less redundant decrypters) needs to invest computations to unlock it on schedule (as long as the Bitcoin blockchain keeps ticking, it will unlock ‘on its own’), and the timing can be made much more precise (likely down to the day, given that the stochastic uncertainty in finding blocks is much less than the uncertainty in how much computing power decrypters would invest or how optimized their implementations of squaring etc would be). The user wouldn’t even need to install crypto software or download the full Bitcoin blockchain, since the blackbox could be combined with the encrypted archive and an Internet API to the blockchain to create a secure, one-click-convenient, self-decrypting self-extracting file. The blackbox approach is the best of all worlds and could be said to entirely solve time-lock crypto.

Unfortunately, obfuscation (like homomorphic encryption) is

  1. so cutting-edge that it may still be insecure
  2. computationally impractical as of 2017, such that no one could ever successfully execute such a blackbox

So it works in principle but not in practice.

Random encodings

Bitansky et al 2015 show that, given obfuscation, one can construct a normal proof-of-work time-lock without using an external clock, where the blackbox stalls for many time-steps before spitting out the key, but without the potentially-expensive setup of chained-hashes.


Fortunately, indistinguishability obfuscation may be overkill. A weaker but more computable crypto, witness encryption, has also recently popped up: where an obfuscated blackbox can make its computation/requirement arbitrarily complicated and is Turing-complete, witness encryption instead lets one encrypt a file such that the decryption key is the solution to a (possibly unsolved) problem like 3SAT.

This raises the question: is it possible to encode a check of future Bitcoin blockchain length as an NP problem, and thus create an encrypted file which can only be decrypted after enough ticks of the Bitcoin clock?

Liu et al 2015 answer: yes! They show that a check of the blockchain (specifically, that the SHA-256 hash included in each block represents adequate proof-of-work) can be encoded, so the construction is at least possible (although feasibility remains unclear); Liu et al 2015 provide a C program to check proof-of-work which can be compiled into a formula whose satisfying solution serves as the key, and which is hence usable as a witness encryption.
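
The check being compiled is, in essence, an NP relation between a public statement and a witness; a sketch of the kind of predicate involved (the statement format and toy header layout are assumptions for illustration, not Liu et al’s actual encoding):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def pow_relation(statement, witness) -> bool:
    """NP relation for a blockchain-clock witness encryption: the witness
    is a list of toy headers (32-byte predecessor hash + nonce) forming a
    valid-PoW chain of the required length extending the anchor block.
    A key witness-encrypted under this relation could be recovered by
    anyone holding a sufficiently long real chain."""
    anchor_hash, n_blocks, target = statement
    if len(witness) < n_blocks:
        return False
    prev = anchor_hash
    for header in witness[:n_blocks]:
        if header[:32] != prev:   # chain linkage
            return False
        h = double_sha256(header)
        if int.from_bytes(h, "big") >= target:  # proof-of-work check
            return False
        prev = h
    return True
```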

Thus, Liu et al indicate that witness encryption could help provide the best of both worlds: the ticking clock is still done by proof-of-work as in successive squaring, but the PoW is a now-hyper-optimized hash, so there is little risk of future optimization opening the time-lock early like in chained hashes; and since the Bitcoin blockchain will continue to be built regardless of any interest in time-locks, the clock keeps ticking without the risk of no one caring enough to try to open the time-lock, as the time-lock is piggybacking on the bruteforcing already being done.

The two main downsides are that the hashes are not necessarily inherently serialized or unique, and that exotic witness encryption is needed.

The first means that the security offered by a Bitcoin blockchain time-lock is going to be, like PoW on transactions, an economic kind of security: hypothetically, a powerful actor could, if it really wanted to, construct an entire fake ‘chain’ (ie. a set of hashes with sufficient leading zeroes); if the time-lock encodes the consensus rules, they can exploit the difficulty adjustment to ratchet difficulty down as fast as possible, to eventually generate hashes for free. As of 2018, to create valid Bitcoin PoW hashes at all requires vast mining farms, and for almost all intents & purposes this is adequate security, much as it is for the financial transactions on the Bitcoin blockchain. One could probably include additional conditions like “difficulty cannot decrease more than 50% total for the blocks” (equivalent to specifying a minimum number of leading zeroes for all hashes involved), although this then risks opening too late (what if Bitcoin difficulty crashes for years to come because of a bubble?), and the time-locker is forced to make a tradeoff. Depending on the exact encoding, there may still be other ways to cheat—could one use hashes from other blockchains, or past hashes? So there are details which need to be worked out.

Secondly, unfortunately, as is common with new & exotic cryptography, the current implementations of witness encryption seem to be slow enough that, like obfuscation, witnesses are not yet a viable solution. However, witness encryption seems likely to be optimized much further, and it is possible that there are more specialized solutions for blockchains, so it is an approach well worth watching.

Distributed secret-sharing with smart contracts

A simple way to try to improve on the standard third-party secret-sharing method of time-locking would be to implement it on smart-contract blockchains like Ethereum, in order to support easy decentralized operation by many participants and to allow crypto-economic methods. An Ethereum implementation (Bitcoin smart contracts might be too limited to support the necessary functionality) might go like this:

  1. a locker sends an ETH fee to a time-locking smart contract for encrypting a specific file until Ethereum block t (using the blockchain as the clock)

    One could also try to make a time-lock token on top of Ethereum, under the logic that an exchange rate serves as an additional way to penalize attackers, but of course this adds complexity and potentially additional vulnerability—an attacker could buy in, run attacks while collecting fees, exploit decrypted data for profit, sell their stake, set up shorts, and then deliberately reveal successful attacks to crash the token exchange rates and profit 3 different ways from the attack.

  2. a quorum of m participants volunteer to be secret-sharers, and each one provides a public key plus a large deposit of ETH

  3. when a quorum forms, the public keys are combined into a k-of-m secret-sharing public key; this provides a new public key which the locker can encrypt their file to; presumably they do so and distribute it off-blockchain somehow

  4. if one of the secret keys is sent to the contract before t arrives, the sender receives a fraction of the respective deposit, and the rest is destroyed

    • if anyone obtains a secret key from a secret-sharer (eg. by hacking them or buying it from them), they have an incentive to betray them by revealing the secret key early and claiming the fraction of their deposit. This provides crypto-economic security against early revelation of secret keys by avoiding “nothing at stake”.
  5. after t arrives, within a few blocks, each secret-sharer provides their private key on-blockchain to the smart contract, which verifies that they are valid private keys and sends back to the secret-sharer their deposit + a fraction of the fee; now anyone can read off k shares and decrypt the file after time t

    • if a secret-sharer fails to provide their share, their large deposit is destroyed to penalize them

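The k-of-m arithmetic in step 3 is ordinary Shamir secret-sharing; a sketch of just that piece (a real deployment would combine the participants’ public keys into a threshold encryption key rather than split a raw secret, so this illustrates only the reconstruction property):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than a 16-byte secret

def make_shares(secret: int, k: int, m: int):
    """Shamir: evaluate a random degree-(k-1) polynomial with constant
    term `secret` at m points; any k points determine the polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at x=0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any k of the m shares reconstruct the secret; fewer than k reveal nothing about it.
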
This contract seems like it would indeed ensure that keys are released after time t, which is one half of a working time-lock (reliable global decrypting after a certain point without further participation by the original locker). But the other half, reliable non-decryption before time t, appears to be flawed: despite the deposit-precommitment + defection-incentive, there is still “nothing at stake” for a participant, because there’s no way to prove on-blockchain that the file hasn’t been decrypted before time t. On-blockchain visibility is adequate for Proof-of-Work and Proof-of-Stake—participants in PoW can observe any double-spending and punish the PoW owners (who own ASICs and large capital investments) via social mechanisms like forks, and in Casper-style slashing PoS, double-validating can be proven on-chain and the stakes slashed—but the invisibility of decryption means that a defrauded locker can never prove that any or all of the secret-sharers have defrauded them by colluding & decrypting early.

Because of this, the contract is vulnerable to Sybil attacks: if the contract is profitable and the ETH fees cover the cost to participants of making deposits/key storage/uptime, then a large entity can by definition profitably participate while pretending to be most/all participants, and easily decrypt many files even if lockers require a large k to decrypt. (The required capital might be quite small; JoinMarket only required $32k of Bitcoin capital to de-anonymize transactions, well within the capability of many individuals, never mind attackers like blockchain-analysis companies.) A large entity also benefits from some degree of economies of scale compared to a truly distributed quorum, as it only needs 1 blockchain copy, and can more easily invest in 100% uptime to avoid accidentally losing deposits to downtime, allowing it to outcompete; it can further increase profits, potentially enormously, by systematically decrypting & exploiting anything locked (as it is likely that any file anyone is going to the trouble of time-locking is a valuable file to someone, otherwise why bother?). So such a contract is vulnerable to a self-funding, undetectable, unpreventable, permanent Sybil attack.

The vulnerability may extend to actual distributed quorums as well: individual shares can be bribed and attacked, and it’s not clear that the deposit mechanism can prevent all counter-measures or computations. (These probably aren’t as efficient as the main Sybil attack, but would allow attacking prior time-locks, or attacking just a few time-locks.) For example, an attacker can try to defuse the slashing condition and buy shares from k secret-sharers simply by publishing Ethereum contracts with exactly the same mechanism but providing an additional deposit + fee-share from the attacker; this insures the secret-sharers against defection—they can sell their share to the attacker, and if the attacker ‘defects’, they merely claim the second deposit and are made whole; thus, either way they get back their deposit and their share of the fee, and since the attacker no longer benefits on net, the attacker won’t defect. Counter-contracts may not be stable (the sharer could defect in turn), but an attacker has additional options: like secret-sharing itself, an attacker could take advantage of multi-party computation to enable bribed sharers to reconstruct the full key in secret, without risking revealing their individual shares & thus their deposits. More attacks may be possible, and the smart contract must be robust to all possible attacks and counter-contracts bribing the secret-sharers.

Vulnerability of one-way functions

As it turns out, “Time-Lock Puzzles in the Random Oracle Model” (Mahmoody, Moran, and Vadhan 2011; slides) directly & formally analyzes the general power of one-way functions used for time-lock puzzles, assuming a random oracle. Unfortunately, they find an opponent can exploit the oracle to gain speedups. Fortunately, the cruder scheme where one ‘stores up’ computation (repeatedly querying the oracle at inputs based on its previous output) still works under their assumptions:

A time-lock puzzle with a linear gap in parallel time. Although our negative results rule out ‘strong’ time-lock puzzles, they still leave open the possibility for a weaker version: one that can be generated with n parallel queries to the oracle but requires n rounds of adaptive queries to solve. In a positive result, we show that such a puzzle can indeed be constructed…Although this work rules out black-box constructions (with a super-constant gap) from one-way permutations and collision-resistant hash functions, we have no reason to believe that time-lock puzzles based on other concrete problems (e.g., lattice-based problems) do not exist. Extending our approach to other general assumptions (e.g., trapdoor permutations) is also an interesting open problem.

That is, the puzzle constructor can construct the puzzle in parallel, and the solver has to solve it serially.
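
The weak puzzle they prove possible is essentially the chained-hash idea with a parallelizable setup. A minimal sketch (the XOR-masking construction here is one simple way to realize the parallel/serial gap, not necessarily their exact scheme):

```python
import hashlib, os

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def generate(n: int):
    """Build the puzzle with n mutually independent (hence parallelizable)
    hash queries: H(xs[0]) ... H(xs[n-1]) can all be computed at once."""
    xs = [os.urandom(32) for _ in range(n + 1)]
    masks = [bytes(a ^ b for a, b in zip(xs[i + 1], H(xs[i])))
             for i in range(n)]
    return (xs[0], masks), xs[n]  # (public puzzle, secret solution)

def solve(puzzle) -> bytes:
    """Solving forces n adaptive queries: each hash input is only known
    once the previous hash has been computed, so the work is serial."""
    x, masks = puzzle
    for m in masks:
        x = bytes(a ^ b for a, b in zip(m, H(x)))  # unmask the next value
    return x
```

The constructor’s n queries don’t depend on each other, but the solver cannot learn the (i+1)-th input without first computing the i-th hash.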

Memory-bound hashes

Of course, one could ask the same question of my original proposal—what makes you think that hashing can’t be sped up? I already supplied an example where cryptographic hashes were sped up astonishingly by a GPU: Bitcoin mining.

The difference is that hashing can be made to stress the weakest part of any modern computer system, its terrible memory bandwidth and latency10; the hash can blow the fast die-level caches and force constant fetches from the main RAM. Such memory-bound functions were devised for anti-spam proof-of-work systems that wouldn’t unfairly penalize cellphones & PDAs while still being costly on desktops & workstations (which rules out the usual hash functions that stress the CPU). For example, the 2003 paper “On Memory-Bound Functions for Fighting Spam”; from the abstract:

Burrows suggested that, since memory access speeds vary across machines much less than do CPU speeds, memory-bound functions may behave more equitably than CPU-bound functions; this approach was first explored by Abadi, Burrows, Manasse, and Wobber [8]. We further investigate this intriguing proposal. Specifically, we…

2. Provide an abstract function, and prove an asymptotically tight amortized lower bound on the number of memory accesses required to compute an acceptable proof of effort; specifically, we prove that, on average, the sender of a message must perform many unrelated accesses to memory, while the receiver, in order to verify the work, has to perform significantly fewer accesses; 3. Propose a concrete instantiation of our abstract function, inspired by the RC4 stream cipher; 4. Describe techniques to permit the receiver to verify the computation with no memory accesses; 5. Give experimental results showing that our concrete memory-bound function is only about four times slower on a 233 MHz settop box than on a 3.06 GHz workstation, and that speedup of the function is limited even if an adversary knows the access sequence and uses optimal off-line cache replacement.
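
The core trick can be sketched as a toy in the spirit of these memory-bound proposals (not any paper’s actual function): fill a table too large for cache, then perform a walk of data-dependent reads, each index depending on the value just read:

```python
import hashlib

def memory_bound_pow(challenge: bytes, table_size: int = 1 << 16,
                     steps: int = 1 << 12) -> bytes:
    """Toy memory-bound proof-of-work: the access pattern is unpredictable
    and serially dependent, so the cost is dominated by memory latency
    once the table exceeds the CPU caches (~2MB here at 32 bytes/entry)."""
    table = [hashlib.sha256(challenge + i.to_bytes(4, "big")).digest()
             for i in range(table_size)]
    acc = hashlib.sha256(challenge).digest()
    for _ in range(steps):
        i = int.from_bytes(acc[:4], "big") % table_size  # unpredictable index
        acc = hashlib.sha256(acc + table[i]).digest()    # next step needs this read
    return acc
```

Because each index is derived from the previous hash, the reads cannot be prefetched or parallelized, and a fast CPU gains little over a slow one.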

Abadi 2005, “Moderately hard, memory-bound functions”, develops more memory-bound functions and benchmarks them (partially replicated by Das & Doshi 2004):

…we give experimental results for five modern machines that were bought within a two-year period in 2000–2002, and which cover a range of performance characteristics. All of these machines are sometimes used to send e-mail—even the settop box, which is employed as a quiet machine in a home…None of the machines have huge caches—the largest was on the server machine, which has a 512KB cache. Although the clock speeds of the machines vary by a factor of 12, the memory read times vary by a factor of only 4.2. This measurement confirms our premise that memory read latencies vary much less than CPU speeds.

…At the high end, the server has lower performance than one might expect, because of a complex pipeline that penalizes branching code. In general, higher clock speeds correlate with higher performance, but the correlation is far from perfect…Second, the desktop machine is the most cost-effective one for both CPU-bound and memory-bound computations; in both cases, attackers are best served by buying the same type of machines as ordinary users. Finally, the memory-bound functions succeed in maintaining a performance ratio between the slowest and fastest machines that is not much greater than the ratio of memory read times.

Colin Percival continues the general trend in the context of finding password schemes resistant to cheap brute-forcing, inventing scrypt in the 2009 paper “Stronger Key Derivation via Sequential Memory-Hard Functions”. Percival notes that designing a really good memory-bound function requires not relying too heavily on latency, since his proofs do not incorporate latency, although in practice this might not be so bad:

Existing widely used hash functions produce outputs of up to 512 bits (64 bytes), closely matching the cache line sizes of modern CPUs (typically 32–128 bytes), and the computing time required to hash even a very small amount of data (typically 200–2000 clock cycles on modern CPUs, depending on the hash used) is sufficient that the memory latency cost (typically 100–500 clock cycles) does not dominate the running time of ROMix.

However, as semiconductor technology advances, it is likely that neither of these facts will remain true. Memory latencies, measured in comparison to CPU performance or memory bandwidth, have been steadily increasing for decades, and there is no reason to expect that this will cease—to the contrary, switching delays impose a lower bound of Ω(log N) on the latency of accessing a word in an N-byte RAM, while the speed of light imposes a lower bound of Ω(√N) for 2-dimensional circuits. Furthermore, since most applications exhibit significant locality of reference, it is reasonable to expect cache designers to continue to increase cache line sizes in an attempt to trade memory bandwidth for (avoided) memory latency.

In order to avoid having ROMix become latency-limited in the future, it is necessary to apply it to larger hash functions. While we have only proved that ROMix is sequential memory-hard under the Random Oracle model, by considering the structure of the proof we note that the full strength of this model does not appear to be necessary.
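
ROMix itself is simple enough to sketch; here scrypt’s BlockMix is replaced by plain SHA-256, so this shows only the structure and is not interoperable with real scrypt:

```python
import hashlib

def romix(block: bytes, n: int = 1 << 10) -> bytes:
    """Simplified ROMix: phase 1 fills memory with an iterated hash chain;
    phase 2 makes n data-dependent reads into it, so the whole table must
    be kept around, or else expensively recomputed (the memory/time tradeoff
    that makes the function 'sequential memory-hard')."""
    H = lambda x: hashlib.sha256(x).digest()
    x = H(block)
    v = []
    for _ in range(n):        # phase 1: V[i] = H^i(H(block))
        v.append(x)
        x = H(x)
    for _ in range(n):        # phase 2: "Integerify" picks the next read
        j = int.from_bytes(x[:8], "big") % n
        x = H(bytes(a ^ b for a, b in zip(x, v[j])))
    return x
```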

Percival constructs a password algorithm on top of his new hash function and then calculates costs using 2002 circuit prices:

When used for interactive logins, it is 35 times more expensive than bcrypt and 260 times more expensive than PBKDF2; and when used for file encryption—where, unlike bcrypt and PBKDF2, scrypt uses not only more CPU time but also increases the die area required—scrypt increases its lead to a factor of 4000 over bcrypt and 20000 over PBKDF2.

That is quite a difference between the hashes, especially considering that bcrypt and PBKDF2 were already engineered to have adjustable difficulty, for reasons similar to our time-lock crypto puzzles.

  1. I don’t see any discussions of how SGX would implement time-locks, exactly. But one system might be to have an SGX enclave produce a public key, encrypt the file to it with a target date, and then the enclave can be queried for a decrypted version at any time; it makes a request to an Intel server or to an SGX-based blockchain for the current time, and if the time is correctly signed and is after the target date, it decrypts; otherwise it does nothing.↩︎

  2. Ebringer 2008, applying time-lock puzzles to enhancing the ability of computer viruses & trojans to defeat anti-virus scanners, describes Rivest’s original successive-squaring solution somewhat sarcastically:

    Even in the original paper, the authors struggled to find a plausible use for it. To actually use the construction as a “time-lock” requires predicting the speed of CPUs in the future, resulting, at best, in a fuzzy release-date. This assumes that someone cares enough to want what is allegedly wrapped up in the puzzle to bother to compute the puzzle in the first place. It is not obvious that in the majority of situations, this would have a clear advantage over, say, leaving the information with a legal firm with instructions to release it on a particular date. Although this paper proposes a practical use for time-lock puzzles, the original authors would probably be dismayed that there is still not a widespread usage that appears to be of net benefit to humanity.

    On the other hand, a similar criticism could and has been made about Bitcoin (supporters/users must expend massive computing power constantly just to keep it working, with no computational advantage over attackers), and that system has worked well in practice.↩︎

  3. Colin Percival’s “Insecurity in the Jungle (disk)” presents a table giving times for brute-forcing hashes on various hardware; most dramatically, <$1m (2011) of custom hardware could bruteforce a random 10-character string in 2 hours. (Hardware reaps extreme performance gains mostly when few memory accesses are required and a few fast operations are applied to small amounts of data; this is because flexibility imposes overhead, and when the overhead is incurred just to run fast instructions, the overhead dominates the entire operation. For example, graphics chips do just a relative handful of math to a frame, again and again, and so they gain orders of magnitude speedups by being specialized chips—as does any other program which is like that, which includes cryptographic hashes designed for speed like the ones Bitcoin uses.)

    gives an older example using FPGAs (also being used for Bitcoin hashing):

    An important consideration to be made is that CPU-bound hash functions are still vulnerable to hardware implementations. For example, the literature provides efficient hardware implementations of SHA-1 in as low as 5,000 gates, able to produce a result in less than 400 clock cycles. Since multi-million-gate FPGAs can be purchased at less than $100 (2011) price points, it follows that an attacker can build a fully unrolled hardware cracker for about $5,000 (2011). Such a design, clocked at 100MHz, can try about 300,000 keys/second for the algorithm proposed above.

  4. As of 2013, the highest-frequency consumer CPU available, the AMD FX-9590, tops out at 5GHz, with few prospects for substantial increases in frequency; Rivest projected 10GHz and increasing:

    Based on the SEMATECH National Technology Roadmap for Semiconductors (1997 edition), we can expect internal chip speeds to increase by a factor of approximately 13 overall up to 2012, when the clock rates reach about 10GHz. After that improvements seem more difficult, but we estimate that another factor of five might be achievable by 2034. Thus, the overall rate of computation should go through approximately six doublings by 2034.

    Moore’s law, in the original formulation of transistors per dollar, may have continued post-1997, but the gains increasingly came from parallelism, which is not useful for repeated squaring.↩︎

  5. Actual numbers; the difference really is that large.↩︎

  6. While there are tens or hundreds of thousands of nodes in the Bitcoin P2P network, only a few of them are actual miners, because CPU mining has become useless—the big miners, who have large server farms of GPUs or ASICs, collectively control much of the hash power. This has not yet been a problem, but may become one. Using a (partially) memory-bound hash function is one of the selling points of a competing currency, Litecoin.↩︎

  7. eg. the 2008 Graves thesis, “High performance password cracking by implementing rainbow tables on nVidia graphics cards (IseCrack)”, claims a 100× speedup over CPU generation of rainbow tables; or see the actively developed utility RainbowCrack (from which you can even buy pre-generated rainbow tables).↩︎

  8. On Hacker News, one commenter suggested:

    To add checkpoints, one could release both the original seed of the chain A, and a number of pairs of hashes… Let’s say you wanted to do 1-month chains. Hash the seed A for a week, then take the current value x0 such that H(B) = x0. You know the value of B, since you’ve been computing the chain. Pick another random value y0, and continue the chain with it. Write the pair ⟨x0, y0⟩ in the output, and hash for another week. Do the same for ⟨x1, y1⟩ and so on. Each chain then has a seed value and 4 pairs of ‘checkpoints’. When unlocking the crypto puzzle, these checkpoints can’t be used to jump ahead in the computation, but they can tell you that you’re on the right track. I think that you could even use a secondary hash chain for the yn values. If you also derived y0 from A, you would just need to publish the seed value A and each checkpoint hash xn in order to have a fully checkpointed crypto puzzle.
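
    One concrete reading of this checkpoint scheme (the exact splicing operation is not specified in the comment, so the hash-concatenation and seed-derived y-values used here are assumptions):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def iterate(x: bytes, n: int) -> bytes:
    for _ in range(n):
        x = H(x)
    return x

def build(seed: bytes, segments: int, seg_len: int):
    """Publish one checkpoint per segment; derive the y-values from the
    seed so only the seed and checkpoints need releasing."""
    cur, checkpoints = seed, []
    for i in range(segments):
        cur = iterate(cur, seg_len)
        checkpoints.append(H(cur))           # x_i: confirms segment i is done
        cur = H(cur + H(seed + bytes([i])))  # splice in y_i to continue
    return checkpoints, cur                  # cur is the final unlocking key

def solve(seed: bytes, checkpoints, seg_len: int) -> bytes:
    cur = seed
    for i, x_i in enumerate(checkpoints):
        cur = iterate(cur, seg_len)
        assert H(cur) == x_i, "off track"    # verifiable, but no shortcut
        cur = H(cur + H(seed + bytes([i])))
    return cur
```

    As in the comment, a checkpoint only becomes checkable once the solver has actually computed its segment, so it confirms progress without allowing anyone to jump ahead.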

  9. Or one could instead define time as difficulty/hashes invested, to avoid a possible attack where a fake blockchain is created in which each block drops in difficulty as much as possible, making long blockchains easy to create. The downside is that while the number of blocks over time is fairly tightly predictable, hashpower will change over time based on the Bitcoin economy, and the error in your forecast accumulates. Probably the best approach would be setting a minimum number of blocks and a minimum hashpower, to get the benefit of precise timing while only needing a loose estimate of hashpower to avoid giving an attacker much of an advantage. The tradeoff a user chooses will depend on whether it is more important to ensure unlocking by a particular time, or to ensure it remains locked until a particular time. The further out the unlock date, the harder accurate unlocking becomes, and users may need to become highly conservative—as illustrated by the real-world outcome of Rivest’s contest!↩︎

  10. From Abadi 2005:

    Fast CPUs run much faster than slow CPUs—consider a 2.5GHz PC versus a 33MHz Palm PDA. Moreover, in addition to high clock rates, higher-end computer systems also have sophisticated pipelines and other advantageous features. If a computation takes a few seconds on a new PC, it may take a minute on an old PC, and several minutes on a PDA. That seems unfortunate for users of old PCs, and probably unacceptable for users of PDAs…we are concerned with finding moderately hard functions that most computer systems will evaluate at about the same speed. We envision that high-end systems might evaluate these functions somewhat faster than low-end systems, perhaps even 2–10 times faster (but not 10–100 times faster, as CPU disparities might imply). Moreover, the best achievable price-performance should not be significantly better than that of a typical legitimate client…A memory-bound function is one whose computation time is dominated by the time spent accessing memory. The ratios of memory latencies of machines built in the last five years is typically no greater than two, and almost always less than four. (Memory throughput tends to be less uniform, so we focus on latency.) A memory-bound function should access locations in a large region of memory in an unpredictable way, in such a way that caches are ineffectual…Other possible applications include establishing shared secrets over insecure channels and the timed release of information, using memory-bound variants of Merkle puzzles [Merkle 1978] and of time-lock puzzles [May 1993; Rivest et al. 1996], respectively. We discuss these also in section 4.

    And here I thought I was being original in suggesting memory-bound functions for time-lock puzzles! Truly, “there is nothing new under the sun”.↩︎