WANG Tiantian, YU Shuangyuan, XU Baomin
Abstract: To address the mining dilemma caused by block withholding attacks under the Proof of Work (PoW) consensus mechanism in blockchain, the game between mining pools was modeled as an Iterated Prisoner's Dilemma (IPD), and the policy gradient algorithm of deep reinforcement learning was used to study strategy selection in the IPD. Each mining pool was regarded as an independent agent (Agent), and the miners' infiltration rate was quantified as an action distribution in reinforcement learning. The policy network of the policy gradient algorithm was used to predict and optimize the agents' behavior so as to maximize the average revenue per miner, and the effectiveness of the algorithm was verified by simulation experiments. The experiments show that at the beginning the mining pools attack one another and the average revenue stays below 1, i.e., the game settles into a Nash equilibrium; after self-adjustment by the policy gradient algorithm, the pools shift from mutual attack to mutual cooperation, the infiltration rate of every pool tends to 0, and the average revenue per miner tends to 1. The results demonstrate that the policy gradient algorithm can resolve the Nash equilibrium of the mining dilemma and maximize the miners' average revenue.
Key words: blockchain; Proof of Work (PoW); game theory; deep reinforcement learning; policy gradient algorithm
CLC number: TP183
Document code: A
0 Introduction
Blockchain is the underlying technology of cryptocurrencies such as Bitcoin [1]. Bitcoin, the most successful application of blockchain, settles its transactions under the Proof of Work (PoW) consensus mechanism. In the Bitcoin system, every node can take part in producing blocks and provides a certain amount of proof of work; the node that produces a block first receives a reward in bitcoins. This process is called "mining", and the participating nodes are called "miners". By design, the Bitcoin system produces roughly one block every 10 minutes, which means most miners never mine a block themselves; to obtain a relatively stable income, miners therefore choose to join mining pools and mine cooperatively. A mining pool consists of a pool manager and a number of miners: the miners continually submit partial or full proofs of work to the manager, and the manager distributes the pool's revenue to the members in proportion to their contributed work.
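As a concrete illustration of this proportional payout, the following is a minimal sketch; the function name, the share counts, and the unit block reward are assumptions made for the example rather than details from the paper.

```python
# Minimal sketch of the proportional payout described above; the names and the
# unit block reward are illustrative assumptions, not details from the paper.
def distribute_reward(shares, block_reward=1.0):
    """Split one block reward among pool members in proportion to the number of
    (partial or full) proofs of work each miner submitted to the manager."""
    total = sum(shares.values())
    return {miner: block_reward * n / total for miner, n in shares.items()}

# Example: miners submitting 5, 3 and 2 shares receive 0.5, 0.3 and 0.2 of the reward.
print(distribute_reward({"miner_1": 5, "miner_2": 3, "miner_3": 2}))
```

Under such a scheme, a miner who keeps submitting shares but withholds full solutions still collects its proportional payout, which is exactly the loophole exploited by the attack discussed next.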
However, some miners submit only partial proofs of work to the manager and discard any full proof of work they find, thereby collecting part of the pool's revenue without contributing effective computing power; this behavior is called a block withholding attack [2]. A pool can send its own miners to infiltrate other pools and launch block withholding attacks against them to increase its revenue, but when all pools attack one another, each earns less than if none of them attacked. This is the mining dilemma created by the PoW consensus vulnerability, and it can be viewed as the prisoner's dilemma of game theory: it has a Nash equilibrium in which no pool can increase its own revenue by unilaterally changing its strategy [3]. The core question of this paper is how to optimize the behavior choices of mining pools under the PoW consensus mechanism so as to increase the average revenue per miner and thus resolve the mining dilemma caused by block withholding attacks.
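To make the dilemma concrete, the sketch below encodes a hypothetical two-pool payoff table whose numbers are chosen only to satisfy the prisoner's-dilemma ordering (unilateral attack > mutual cooperation > mutual attack > being the lone victim); the values are illustrative and are not derived from the revenue model in [3].

```python
# Hypothetical two-pool payoff table for the mining dilemma; the numbers are
# illustrative only and satisfy the prisoner's-dilemma ordering 1.2 > 1.0 > 0.8 > 0.6.
# Actions: "C" = mine honestly (cooperate), "A" = block withholding attack.
PAYOFF = {
    ("C", "C"): (1.0, 1.0),  # neither pool attacks: both keep the baseline revenue
    ("C", "A"): (0.6, 1.2),  # only pool 2 infiltrates and gains at pool 1's expense
    ("A", "C"): (1.2, 0.6),
    ("A", "A"): (0.8, 0.8),  # mutual attack: both earn less than 1 (the Nash equilibrium)
}
print(PAYOFF[("A", "A")])    # (0.8, 0.8) -- worse for both than mutual cooperation
```

In a table with this ordering, mutual attack is the Nash equilibrium even though mutual cooperation pays both pools more, which is precisely the dilemma this paper seeks to escape.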
1 Related work
Many researchers have proposed different game-theoretic models for the mining problems in the PoW-based Bitcoin system.
In 2014, Eyal et al. [4] pointed out that the Bitcoin system is in fact vulnerable: during mining there exists a strategy called selfish mining, in which a miner keeps extending a private chain without publishing it and releases it only once it is longer than the public chain, invalidating the public chain and wasting the computing power of "honest" miners; this is the familiar blockchain "fork" problem. In response to this vulnerability, Kiayias et al. [5] simplified the Bitcoin mining system into a stochastic game with complete information in which the length of the main chain is controlled through the timing of block releases; they proposed the Frontier strategy (a miner publishes a block as soon as it is mined and appends it to the longest main chain) and analyzed, under different experimental settings, how much computing power a miner must hold for the Frontier strategy to be optimal. Lewenberg et al. [6] analyzed miners' choice of pool from the perspective of cooperative game theory, treating the members of one pool as a coalition in which miners increase their revenue by switching pools. Liu et al. [7] proposed an evolutionary game model in which a miner first computes the revenue of joining each candidate pool and then decides which pool to join.
The studies above model the Bitcoin mining process from different game-theoretic perspectives, but they do not consider pools attacking one another, that is, the mining dilemma that arises under the PoW consensus mechanism. In 2015, Eyal [3] studied the mining dilemma caused by block withholding attacks: starting from the two cases of mutual attacks between two pools and among multiple pools, he qualitatively analyzed the game between pools, modeled it as an Iterated Prisoner's Dilemma (IPD), and used Nash equilibrium theory to show that the pools' revenues decrease when they attack one another, which drives pools toward a closed and stable state. Building on this, Tang et al. [8] further studied pure-strategy and mixed-strategy equilibria of the game dilemma and used the Zero Determinant (ZD) strategy to optimize the block withholding attack game.
Building on [3], this paper establishes a game model between mining pools and uses the policy gradient algorithm [9-10] of deep reinforcement learning to optimize the pools' game behavior, thereby increasing the average revenue per miner.
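The following is a minimal REINFORCE-style policy gradient sketch under strong simplifying assumptions: each pool is a stateless agent with a single-parameter Bernoulli policy over {cooperate, attack}, the illustrative payoff table from the introduction is reused, and the training signal is taken to be the average revenue of the two pools (one possible reading of the objective of maximizing the average revenue per miner). The paper's policy network, state representation, and revenue model are not reproduced here.

```python
import numpy as np

# Minimal REINFORCE sketch (assumptions: stateless one-parameter Bernoulli
# policies, the illustrative payoff table below, and the *average* revenue of
# both pools as the shared training signal; not the paper's full setup).
PAYOFF = {("C", "C"): (1.0, 1.0), ("C", "A"): (0.6, 1.2),
          ("A", "C"): (1.2, 0.6), ("A", "A"): (0.8, 0.8)}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
theta = np.zeros(2)          # one logit per pool; sigmoid(theta) = attack probability
alpha, baseline = 0.5, 1.0   # learning rate and a constant reward baseline

for step in range(5000):
    p_attack = sigmoid(theta)                  # current infiltration tendencies
    acts = rng.random(2) < p_attack            # True = attack, False = cooperate
    key = tuple("A" if a else "C" for a in acts)
    avg_revenue = sum(PAYOFF[key]) / 2.0       # shared training signal
    # REINFORCE update: the gradient of the log Bernoulli probability is (action - p)
    theta += alpha * (acts.astype(float) - p_attack) * (avg_revenue - baseline)

print(np.round(sigmoid(theta), 3))             # attack probabilities drift toward 0
```

With this shared average-revenue signal both attack probabilities decay toward zero, i.e., the agents settle into mutual cooperation; if each agent instead maximized only its own payoff from the table, the same stateless update would tend toward the mutual-attack equilibrium, which is the dilemma described in the introduction.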
References
[1] NAKAMOTO S. Bitcoin: a peer-to-peer electronic cash system [EB/OL]. [2017-10-10]. https://bitcoin.org/bitcoin.pdf.
[2] COURTOIS N T, BAHACK L. On subversive miner strategies and block withholding attack in bitcoin digital currency[J/OL]. arXiv Preprint, 2014, 2014: arXiv:1402.1718 (2014-01-28) [2014-12-02]. https://arxiv.org/abs/1402.1718.
[3] EYAL I. The miner's dilemma[C]// Proceedings of the 2015 IEEE Symposium on Security and Privacy. Piscataway, NJ: IEEE, 2015: 89-103.
[4] EYAL I, SIRER E G. Majority is not enough: bitcoin mining is vulnerable[C]// FC 2014: International Conference on Financial Cryptography and Data Security. Berlin: Springer, 2014: 436-454.
[5] KIAYIAS A, KOUTSOUPIAS E, KYROPOULOU M, et al. Blockchain mining games[C]// Proceedings of the 2016 ACM Conference on Economics and Computation. New York: ACM, 2016: 365-382.
[6] LEWENBERG Y, BACHRACH Y, SOMPOLINSKY Y, et al. Bitcoin mining pools: a cooperative game theoretic analysis[C]// Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems, 2015: 919-927.
[7] LIU X, WANG W, NIYATO D, et al. Evolutionary game for mining pool selection in blockchain networks[J]. IEEE Wireless Communications Letters, 2017, 7(5): 760-763.
[8] 唐长兵, 杨珍, 郑忠龙, 等. PoW共识算法中的博弈困境分析与优化[J]. 自动化学报, 2017, 43(9): 1520-1531. (TANG C B, YANG Z, ZHENG Z L, et al. Game dilemma analysis and optimization of PoW consensus algorithm[J]. Acta Automatica Sinica, 2017, 43(9): 1520-1531.)
[9] SUTTON R S, McALLESTER D, SINGH S, et al. Policy gradient methods for reinforcement learning with function approximation[C]// NIPS 2000: Neural Information Processing Systems. Boston: MIT Press, 2000: 1057-1063.
[10] WILLIAMS R J. Simple statistical gradient-following algorithms for connectionist reinforcement learning[J]. Machine Learning, 1992, 8(3/4): 229-256.
[11] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529-533.
[12] TAMPUU A, MATIISEN T, KODELJA D, et al. Multiagent cooperation and competition with deep reinforcement learning[J]. PLoS One, 2017, 12(4): e0172395.
[13] LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous control with deep reinforcement learning[J/OL]. arXiv Preprint, 2015, 2015: arXiv:1509.02971 [2015-09-09]. https://arxiv.org/abs/1509.02971.
[14] 王兵团, 张作泉, 赵平福. 数值分析简明教程(大学数学系列丛书)[M]. 北京: 清华大学出版社, 2012: 50-60. (WANG B T, ZHANG Z Q, ZHAO P F. Numerical Analysis Concise Tutorial (University Mathematics Series)[M]. Beijing: Tsinghua University Press, 2012: 50-60.)