

Will Artificial Intelligence Understand Common Sense?

英語學(xué)習(xí) (English Language Learning), 2019, Issue 8
Keywords: common sense, gap, objects

By Melanie Mitchell

Picture yourself driving down a city street. You go around a curve, and suddenly see something in the middle of the road ahead. What should you do?

Of course, the answer depends on what that “something” is. A torn paper bag, a lost shoe, or a tumbleweed1? You can drive right over it without a second thought, but you'll definitely swerve2 around a pile of broken glass. You'll probably stop for a dog standing in the road but move straight into a flock of pigeons, knowing that the birds will fly out of the way. You might plough right through a pile of snow, but veer around a carefully constructed snowman.3 In short, you'll quickly determine the actions that best fit the situation—what humans call having “common sense.”

Human drivers aren't the only ones who need common sense; its lack in artificial intelligence (AI) systems will likely be the major obstacle to the wide deployment4 of fully autonomous cars. Even the best of today's self-driving cars are challenged by the object-in-the-road problem. Perceiving “obstacles” that no human would ever stop for, these vehicles are liable to slam on the brakes5 unexpectedly, catching other motorists off-guard. Rear-ending6 by human drivers is the most common accident involving self-driving cars.

The challenges for autonomous vehicles probably won't be solved by giving cars more training data or explicit rules for what to do in unusual situations. To be trustworthy, these cars need common sense: broad knowledge about the world and an ability to adapt that knowledge in novel7 circumstances. While today's AI systems have made impressive strides in domains ranging from image recognition to language processing, their lack of a robust foundation of common sense makes them susceptible to unpredictable and unhumanlike errors.8
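
To see why a catalogue of explicit rules falls short, consider a deliberately naive policy that maps a perceived object class to an action. Everything in this sketch is hypothetical (the labels, the actions, the fallback) and is not taken from any real driving system; the point is only that an enumerated rule table can never cover every “something” a car might meet on the road.

```python
# A naive, hand-written obstacle policy (illustrative only).
RULES = {
    "paper_bag": "drive_over",
    "tumbleweed": "drive_over",
    "broken_glass": "swerve",
    "dog": "stop",
    "pigeons": "keep_going",      # the birds will fly out of the way
    "snow_pile": "drive_through",
    "snowman": "veer_around",
}

def choose_action(perceived_object: str) -> str:
    # Anything outside the catalogue falls back to hard braking, which is
    # exactly the over-cautious behaviour described above.
    return RULES.get(perceived_object, "slam_on_brakes")

print(choose_action("broken_glass"))  # swerve
print(choose_action("lost_shoe"))     # slam_on_brakes: no rule, no common sense
```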

In many specific test subjects, such as arithmetic and reading, computers can already score above the human average. But does that mean artificial intelligence now has human-level intelligence? Perhaps harder than any exam is the everyday common sense, great and small, that we use in life. Such common sense plays a crucial role in our judgments, yet it is vast and miscellaneous and very hard to acquire. The gap between artificial intelligence and the human brain may lie precisely here.

Common sense is multifaceted, but one essential aspect is the mostly tacit “core knowledge” that humans share9—knowledge we are born with or learn by living in the world. That includes vast knowledge about the properties10 of objects, animals, other people and society in general, and the ability to flexibly apply this knowledge in new situations. You can predict, for example, that while a pile of glass on the road won't fly away as you approach, a flock of birds likely will. If you see a ball bounce in front of your car, you know that it might be followed by a child or a dog running to retrieve11 it. From this perspective, the term “common sense” seems to capture exactly what current AI cannot do: use general knowledge about the world to act outside prior training or pre-programmed rules.

Today's most successful AI systems use deep neural networks. These are algorithms trained to spot patterns, based on statistics gleaned from extensive collections of human-labelled examples.12 This process is very different from how humans learn. We seem to come into the world equipped with innate knowledge of certain basic concepts that help to bootstrap our way to understanding—including the notions of discrete objects and events, the three-dimensional nature of space, and the very idea of causality itself.13 Humans also seem to be born with nascent concepts of sociality: Babies can recognise simple facial expressions, they have inklings about language and its role in communication, and rudimentary strategies to entice adults into communication.14 Such knowledge is so elemental and immediate that we aren't even conscious we have it, or that it forms the basis for all future learning. A big lesson from decades of AI research is how hard it is to teach such concepts to machines.
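
As a point of contrast with infant learning, the statistical pattern-spotting described above can be illustrated with a toy sketch. It uses plain logistic regression rather than a deep network, and the labelled data are synthetic, but the basic workflow (fit parameters to human-labelled examples, then classify) is the same idea.

```python
import numpy as np

# Toy supervised learner: logistic regression fitted by gradient descent to a
# small set of human-labelled 2-D examples (synthetic data, for illustration).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),   # examples labelled 0
               rng.normal(3.0, 1.0, size=(50, 2))])  # examples labelled 1
y = np.array([0] * 50 + [1] * 50)                    # the human-provided labels

w, b = np.zeros(2), 0.0
for _ in range(500):                                   # gradient descent on log loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))             # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# The model has "spotted the pattern" separating the two labelled clusters,
# but it knows nothing beyond the statistics of these examples.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```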

On top of their innate knowledge, children also exhibit innate drives to actively explore the world, figure out the causes and effects of events, make predictions, and enlist15 adults to teach them what they want to know. The formation of concepts is tightly linked to children developing motor skills and awareness of their own bodies—for example, it appears that babies start to reason about why other people reach for objects at the same time that they can do such reaching for themselves. Today's state-of-the-art machine-learning systems start out as blank slates and function as passive, bodiless learners of statistical patterns; by contrast, common sense in babies grows via innate knowledge combined with learning that's embodied, social, active and geared towards creating and testing theories of the world.16

The history of implanting common sense in AI systems has largely focused on cataloguing human knowledge: manually programming, crowdsourcing, or web-mining commonsense “assertions” or computational representations of stereotyped situations.17 But all such attempts face a major, possibly fatal obstacle: Much of our core intuitive knowledge is unwritten, unspoken, and not even in our conscious awareness.
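
The cataloguing approach can be pictured as a store of (subject, relation, object) triples, in the spirit of knowledge bases such as Cyc or ConceptNet. The triples and the lookup helper below are invented for illustration; the brittleness is the point, since anything nobody thought to write down simply isn't there.

```python
# A toy catalogue of commonsense assertions as (subject, relation, object)
# triples. The entries and the helper are illustrative only.
ASSERTIONS = {
    ("bird", "CapableOf", "fly"),
    ("dog", "IsA", "animal"),
    ("snowman", "MadeOf", "snow"),
    ("broken_glass", "CapableOf", "puncturing_tires"),
}

def is_known(subject: str, relation: str, obj: str) -> bool:
    return (subject, relation, obj) in ASSERTIONS

print(is_known("bird", "CapableOf", "fly"))    # True: someone wrote it down
print(is_known("pigeon", "CapableOf", "fly"))  # False: nobody catalogued pigeons
```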

The US Defense Advanced Research Projects Agency (DARPA)18, a major funder of AI research, recently launched a four-year programme on “Foundations of Human Common Sense” that takes a different approach. It challenges researchers to create an AI system that learns from “experience” in order to attain the cognitive abilities of an 18-month-old baby. It might seem strange that matching a baby is considered a grand challenge for AI, but this reflects the gulf19 between AI's success in specific, narrow domains and more general, robust intelligence.

Core knowledge in infants develops along a predictable timescale, according to developmental psychologists. For example, around the age of two to five months, babies exhibit knowledge of “object permanence”20: If an object is blocked by another object, the first object still exists, even though the baby can't see it. At this time babies also exhibit awareness that when objects collide, they don't pass through one another, but their motion changes; they also know that “agents”—entities with intentions, such as humans or animals—can change objects' motion. Between nine and 15 months, infants come to have a basic “theory of mind”: they understand what another person can or cannot see and, by 18 months, can recognise when another person displays the need for help.

Since babies under 18 months can't tell us what they're thinking, some cognitive milestones have to be inferred indirectly. This usually involves experiments that test “violation of expectation.” Here, a baby watches one of two staged scenarios, only one of which conforms to commonsense expectations. The theory is that a baby will look for a longer time at the scenario that violates her expectations, and indeed, babies tested in this way look longer when the scenario does not make sense.

In DARPA's Foundations of Human Common Sense challenge, each team of researchers is charged with developing a computer program—a simulated “commonsense agent”—that learns from videos or virtual reality. DARPA's plan is to evaluate these agents by performing experiments similar to those that have been carried out on infants and measuring the agents' “violation of expectation” signals.
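
One simple way to turn such an evaluation into a number, sketched here purely as an illustration of the general idea rather than DARPA's actual protocol, is to treat surprise as prediction error: if the agent assigns a probability to the outcome it is about to observe, its “violation of expectation” signal can be scored as the negative log of that probability.

```python
import math

# Surprise as prediction error: the lower the probability the agent assigned
# to the observed outcome, the larger its violation-of-expectation signal.
# The probabilities below are made-up numbers for illustration.
def surprise(prob_assigned_to_outcome: float) -> float:
    return -math.log(max(prob_assigned_to_outcome, 1e-9))

# Scene A: a hidden object is still there when the screen drops (expected).
# Scene B: the hidden object has vanished (violates object permanence).
print("expected scene:  ", round(surprise(0.95), 3))  # small signal
print("impossible scene:", round(surprise(0.05), 3))  # large signal
```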

This won't be the first time that AI systems are evaluated on tests designed to gauge21 human intelligence. In 2015, one group showed that an AI system could match a four-year-old's performance on an IQ test, resulting in the BBC reporting that “AI had IQ of four-year-old child.” More recently, researchers at Stanford University created a “reading” test that became the basis for the New York Post reporting that “AI systems are beating humans in reading comprehension.” These claims are misleading, however. Unlike humans who do well on the same test, each of these AI systems was specifically trained in a narrow domain and didn't possess any of the general abilities the test was designed to measure. As the computer scientist Ernest Davis at New York University warned: “The public can easily jump to the conclusion that, since an AI program can pass a test, it has the intelligence of a human that passes the same test.”

I think it's possible—even likely—that something similar will happen with DARPA's initiative. It could produce an AI program specifically trained to pass DARPA's tests for cognitive milestones, yet possess none of the general intelligence that gives rise to these milestones in humans. I suspect there's no shortcut to actual common sense, whether one uses an encyclopaedia22, training videos or virtual environments. To develop an understanding of the world, an agent needs the right kind of innate knowledge, the right kind of learning architecture, and the opportunity to actively grow up in the world. It should experience not just physical reality, but also all of the social and emotional aspects of human intelligence that can't really be separated from our “cognitive” capabilities.

While we've made remarkable progress, the machine intelligence of our current age remains narrow and unreliable. To create more general and trustworthy AI, we might need to take a radical step backward: to design our machines to learn more like babies, instead of training them specifically for success against particular benchmarks23. After all, parents don't directly train their kids to exhibit “violation of expectation” signals; how infants behave in psychology experiments is simply a side effect of their general intelligence. If we can figure out how to get our machines to learn like children, perhaps after some years of curiosity-driven, physical and social learning, these young “commonsense agents” will finally become teenagers—ones who are sufficiently sensible to be entrusted with the car keys.

1. tumbleweed: a plant found mainly in North America and Australia; when it withers it breaks off at the ground and rolls about in the wind like a ball.

2. swerve: to change direction suddenly; to make a sharp turn.

3. plough through: to crash straight through; veer: to change direction, to turn aside.

4. deployment: use, putting into service.

5. slam on the brakes: to brake hard and suddenly.

6. rear-ending: crashing into the back of another vehicle.

7. novel: new, unusual.

8. robust: solid, strong; susceptible: easily affected or harmed.

9. multifaceted: having many aspects; tacit: understood without being put into words.

10. property: attribute, characteristic.

11. retrieve: to fetch, to bring back.

12. algorithm: a computational procedure; glean: to gather (information) slowly and with difficulty.

13. innate: inborn, present from birth; bootstrap: to achieve through one's own effort; discrete: separate, distinct; causality: the relation of cause and effect.

14. nascent: newly formed, emerging; inkling: a slight idea, a vague impression; rudimentary: basic, elementary; entice: to tempt, to attract.

15. enlist: to secure (help or support).

16. state-of-the-art: the most advanced, cutting-edge; blank slate: a surface with nothing yet written on it; gear towards: to prepare or adapt for.

17. implant: to instil, to fix firmly; catalogue: to enter into a list; crowdsourcing: obtaining ideas, services or content from a large group of people, especially an online community.

18. DARPA: the US Defense Advanced Research Projects Agency, an agency of the US Department of Defense responsible for developing advanced technologies for military use.

19. gulf: a wide gap, a great difference.

20. object permanence: the understanding that objects exist as independent entities and continue to exist even when they cannot be perceived.

21. gauge: to measure.

22. encyclopaedia: a reference work covering all branches of knowledge.

23. benchmark: a standard or reference point.
