TED-Ed
Since the wave of AI commercialization began in 2017, and especially since the release of ChatGPT, there has been a great deal of discussion both inside and outside the industry. The following article, excerpted from a TED talk, may help us look at artificial intelligence more rationally.
In the coming years, artificial intelligence (AI) is probably going to change your life, and likely the entire world. But people have a hard time agreeing on exactly how.
There’s a big difference between asking a human to do something and giving that as the objective to an AI system. When you ask a human to get you a cup of coffee, you don’t mean this should be their life’s mission, and nothing else in the universe matters.
And the problem with the way we build AI systems now is that we give them a fixed objective. The algorithms require us to specify everything in the objective. And if you say, “Can we fix the acidification of the oceans?” the AI system may answer, “Yeah, you could have a catalytic reaction that does that extremely efficiently, but it consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.”
So, how do we avoid this problem? You might say, okay, well, just be more careful about specifying the objective: don’t forget the atmospheric oxygen. And then, of course, some side effect of the reaction in the ocean poisons all the fish. Okay, well, I mean, don’t kill the fish either. And then, well, what about the seaweed? Don’t do anything that’s going to cause all the seaweed to die. And on and on and on.
And the reason that we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about. For example, if you ask a human to get you a cup of coffee, and you happen to be in the Hotel George Sand in Paris, where the coffee is 13 euros a cup, it’s entirely reasonable to come back and say, “Well, it’s 13 euros; are you sure you want it? Or I could go next door and get one.” And it’s a perfectly normal thing for a person to do. Another example is asking, “I’m going to repaint your house. Is it okay if I take off the drainpipes and then put them back?” We don’t think of this as a terribly sophisticated capability, but AI systems don’t have it, because the way we build them now, they have to know the full objective. If we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviors, like asking permission before getting rid of all the oxygen in the atmosphere.
In all these senses, control over the AI system comes from the machine’s uncertainty about what the true objective is. It’s when you build machines that believe with certainty that they have the objective that you get this sort of psychopathic behavior. And I think we see the same thing in humans.
There’s an interesting story that E.M. Forster wrote (“The Machine Stops”), where everyone is entirely machine-dependent. The story is really about the fact that if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it. You can see “WALL-E” actually as a modern version, where everyone is enfeebled and infantilized by the machine, and that hasn’t been possible up to now.
We put a lot of our civilization into books, but the books can’t run it for us. And so we always have to teach the next generation. If you work it out, it’s about a trillion person-years of teaching and learning, an unbroken chain that goes back tens of thousands of generations. What happens if that chain breaks?
I think that’s something we have to understand as AI moves forward. The actual date of arrival of general-purpose AI is not something you’re going to be able to pinpoint; it isn’t a single day. It’s also not the case that it’s all or nothing. The impact is going to be increasing. So with every advance in AI, the range of tasks machines can do expands significantly.
So in that sense, I think most experts say that by the end of the century, we’re very, very likely to have general-purpose AI. The median estimate is somewhere around 2045. I’m a little more on the conservative side; I think the problem is harder than we think.
I like what John McCarthy, one of the founders of AI, said when he was asked this question: somewhere between five and 500 years. And we’re going to need, I think, several Einsteins to make it happen.