By Tim Wu
Paid internet trolls already number in the tens of thousands, and now robots can be put to the same work: posing as real people to hand out five-star ratings, post fake reviews and disrupt online order. Harnessing the power of social media, those with ulterior motives deploy "bot armies" that spread false information through fabricated accounts, seriously threatening the normal conduct of democratic voting, elections and commerce. Faced with a threat in which humans and machines are so hard to tell apart, how should we respond?
When science fiction writers first imagined robot invasions, the idea was that bots would become smart and powerful enough to take over the world by force, whether on their own or as directed by some evildoer. In reality, something only slightly less scary is happening. Robots are getting better, every day, at impersonating1 humans. When directed by opportunists, malefactors and sometimes even nation-states,2 they pose a particular threat to democratic societies, which are premised on being open to the people.
Robots posing as people have become a menace3. For popular Broadway shows (need we say Hamilton 4?), it is actually bots, not humans, who do much and maybe most of the ticket buying. Shows sell out immediately, and the middlemen (quite literally, evil robot masters) reap millions in ill-gotten gains.
Philip Howard, who runs the Computational Propaganda Research Project at Oxford, studied the deployment of propaganda bots during voting on Brexit5, and the recent American and French presidential elections. Twitter is particularly distorted by its millions of robot accounts; during the French election, it was principally Twitter robots who were trying to make #MacronLeaks into a scandal. Facebook has admitted it was essentially hacked during the American election in November last year. In Michigan, Mr. Howard notes, “junk news was shared just as widely as professional news in the days leading up to the election.”
Robots are also being used to attack the democratic features of the administrative state. This spring, the Federal Communications Commission put its proposed revocation of net neutrality up for public comment.6 In previous years such proceedings attracted millions of (human) commentators. This time, someone with an agenda but no actual public support unleashed robots who impersonated (via stolen identities) hundreds of thousands of people,7 flooding the system with fake comments against federal net neutrality rules.
To be sure, today's impersonation bots are different from the robots imagined in science fiction: They aren't sentient8, don't carry weapons and don't have physical bodies. Instead, fake humans just have whatever is necessary to make them seem human enough to “pass”: a name, perhaps a virtual appearance, a credit-card number and, if necessary, a profession, birthday and home address. They are brought to life by programs or scripts that give one person the power to imitate thousands.
The problem is almost certain to get worse, spreading to even more areas of life as bots are trained to become better at mimicking humans. Given the degree to which product reviews have been swamped by robots (which tend to hand out five stars with abandon), commercial sabotage in the form of negative bot reviews is not hard to predict.9 In coming years, campaign finance limits will be (and maybe already are) evaded10 by robot armies posing as “small” donors. And actual voting is another obvious target—perhaps the ultimate target.
So far, we've been content to leave the problem to the tech industry, where the focus has been on building defenses, usually in the form of Captchas (“completely automated public Turing test to tell computers and humans apart”), those annoying “type this” tests to prove you are not a robot. But leaving it all to industry is not a long-term solution. For one thing, the defenses don't actually deter impersonation bots, but perversely reward whoever can beat them.11 And perhaps the greatest problem for a democracy is that companies like Facebook and Twitter lack a serious financial incentive to do anything about matters of public concern, like the millions of fake users who are corrupting the democratic process. Twitter estimates that at least 27 million of its accounts are probably fake; researchers suggest the real number is closer to 48 million, yet the company does little about the problem.
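For readers curious about the mechanism, the sketch below shows, in Python, a toy version of such a “type this” challenge. It is purely illustrative: the function names are my own, and real Captchas rely on distorted images, audio or behavioral signals rather than plain text.

import secrets
import string

def make_challenge(length: int = 6) -> str:
    # Generate a random string that the visitor must retype.
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def verify(challenge: str, response: str) -> bool:
    # Accept only an exact (case-insensitive) match, as a crude proxy for "human".
    return response.strip().upper() == challenge

if __name__ == "__main__":
    challenge = make_challenge()
    answer = input("Type this to prove you are not a robot: " + challenge + "\n> ")
    print("Passed." if verify(challenge, answer) else "Failed.")

The weakness the article points to is visible even in this toy: any program that can read the challenge can also answer it, so the test rewards whoever automates a way past it.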
The problem is a public as well as private one, and impersonation robots should be considered what the law calls “hostis humani generis”: enemies of mankind, like pirates and other outlaws.12 That would allow for a better offensive strategy: bringing the power of the state to bear on13 the people deploying the robot armies to attack commerce or democracy.
The ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing14 private parties to hunt down bad robots. A simple legal remedy would be a “Blade Runner” law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, “I am a robot.” When dealing with a fake human, it would be nice to know.
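To make the proposed disclosure rule concrete, here is a small, hypothetical Python sketch of an automated client that openly labels its own traffic. The header name, bot name and contact address are invented for illustration; the article proposes the principle, not any existing legal or technical standard.

from urllib import request

DISCLOSURE = "I am a robot."  # the self-identification the article proposes

def fetch(url: str) -> bytes:
    # Fetch a page while openly declaring that the request is automated.
    req = request.Request(
        url,
        headers={
            # Hypothetical, self-identifying values; not an existing standard.
            "User-Agent": "ExampleDisclosureBot/1.0 (automated; operator@example.org)",
            "X-Automated-Agent": DISCLOSURE,
        },
    )
    with request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    print(fetch("https://example.org")[:200])

A site receiving such a request could then choose to accept, rate-limit or refuse it, which is the kind of openness the proposal is after.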
Using robots to fake support, steal tickets or crash democracy really is the kind of evil that science fiction writers were warning about. The use of robots takes advantage of the fact that political campaigns, elections and even open markets make humanistic assumptions, trusting that there is wisdom or at least legitimacy15 in crowds and value in public debate. But when support and opinion can be manufactured, bad or unpopular arguments can win not by logic but by a novel,16 dangerous form of force—the ultimate threat to every democracy.
1. impersonate: to pretend to be (someone); to pose as.
2. opportunist: a person who exploits circumstances for personal gain; malefactor: a wrongdoer, a criminal; nation-state: a form of state and ideology in which the state is not only a single political and geographical entity but also a complete community in culture and ethnicity.
3. menace: a threat; something dangerous.
4. Hamilton: a Broadway musical based on the life of American founding father Alexander Hamilton; a runaway hit that set the box-office record in Broadway musical history.
5. Brexit: Britain's exit from the European Union (Britain + exit).
6. revocation: the official cancellation (of a law, decree, etc.); net neutrality: the principle that internet service providers and governments should treat all data on the internet equally, neither discriminating nor charging differently by user, content, website, platform, application, type of access device or mode of communication.
7. agenda: a hidden plan or motive; unleash: to release, to set loose.
8. sentient: able to perceive or feel things.
9. swamp: to flood, overwhelm; with abandon: without restraint; sabotage: deliberate damage or obstruction.
10. evade: to escape or avoid, especially by trickery.
11. deter: to stop, prevent; perversely: contrary to what is expected or intended.
12. hostis humani generis: (Latin) an enemy of mankind; outlaw: a person who has broken the law, a bandit.
13. bear on: to exert pressure on.
14. deputize: to authorize (someone) to act as one's representative or agent.
15. legitimacy: the quality of being reasonable and justifiable; lawfulness.
16. manufacture: to fabricate, to invent; novel: new and unusual.