By Karen Hao (卡倫·郝文); translated by Zhou Xiaoling (周曉玲)
Algorithms are increasingly shaping children's lives, but new guardrails could prevent them from getting hurt. 算法不斷影響孩子們的生活,新的防護措施能保護他們免受傷害。
Algorithms can change the course of children's lives. Kids are interacting with Alexas1 that can record their voice data and influence their speech and social development. They're binging on videos on TikTok and YouTube pushed to them by recommendation systems that end up shaping their worldviews.
Algorithms are also increasingly used to determine what their education is like, whether they'll receive health care, and even whether their parents are deemed fit to care for them. Sometimes this can have devastating effects: this past summer, for example, thousands of students lost their university admissions after algorithms—used in lieu of pandemic-canceled standardized tests—inaccurately predicted their academic performance2.
Children, in other words, are often at the forefront when it comes to using and being used by AI, and that can leave them in a position to get hurt. “Because they are developing intellectually and emotionally and physically, they are very shapeable,” says Steve Vosloo, a policy specialist for digital connectivity at UNICEF, the United Nations Children's Fund.
Vosloo led the drafting of a new set of guidelines from UNICEF designed to help governments and companies develop AI policies that consider children's needs. Released on September 16, the nine new guidelines are the culmination of several consultations held with policymakers, child development researchers, AI practitioners, and kids around the world. They also take into consideration the UN Convention on the Rights of the Child, a human rights treaty adopted in 1989.
The guidelines aren't meant to be yet another set of AI principles, many of which already say the same things. In January of this year, a Harvard Berkman Klein Center review of 36 of the most prominent documents guiding national and company AI strategies found eight common themes—among them privacy, safety, fairness, and explainability.
Rather, the UNICEF guidelines are meant to complement these existing themes and tailor them to children. For example, AI systems shouldn't just be explainable—they should be explainable to kids. They should also consider children's unique developmental needs. “Children have additional rights to adults,” Vosloo says. They're also estimated to account for at least one-third of online users. “We're not talking about a minority group here,” he points out.
In addition to mitigating AI harms, the goal of the principles is to encourage the development of AI systems that could improve children's growth and well-being. If they're designed well, for example, AI-based learning tools have been shown to improve children's critical-thinking and problem-solving skills, and they can be useful for kids with learning disabilities. Emotional AI assistants, though relatively nascent, could provide mental-health support and have been demonstrated to improve the social skills of autistic children. Face recognition, used with careful limitations, could help identify children who've been kidnapped or trafficked.
Children should also be educated about AI and encouraged to participate in its development. It isn't just about protecting them, Vosloo says. It's about empowering them and giving them the agency to shape their future.
UNICEF isn't the only one thinking about the issue. The day before those draft guidelines came out, the Beijing Academy of Artificial Intelligence (BAAI), an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, released a set of AI principles for children too.
The announcement came a year after BAAI released the Beijing AI principles, understood to be the guiding values for China's national AI development. The new principles outlined specifically for children are meant to be “a concrete implementation” of the more general ones, says Yi Zeng, the director of the AI Ethics and Sustainable Development Research Center at BAAI, who led their drafting. They closely align with UNICEF's guidelines, also touching on privacy, fairness, explainability, and child well-being, though some of the details are more specific to China's concerns. A guideline to improve children's physical health, for example, includes using AI to help tackle environmental pollution.
While the two efforts are not formally related, the timing is also not coincidental. After a flood of AI principles in the last few years, both lead drafters say creating more tailored guidelines for children was a logical next step. “Talking about disadvantaged groups, of course children are the most disadvantaged ones,” Zeng says. “This is why we really need [to give] special care to this group of people.” The teams conferred with one another as they drafted their respective documents. When UNICEF held a consultation workshop in East Asia, Zeng attended as a speaker.
UNICEF now plans to run a series of pilot programs with various partner countries to observe how practical and effective their guidelines are in different contexts. BAAI has formed a working group with representatives from some of the largest companies driving the country's national AI strategy, including education technology company TAL, consumer electronics company Xiaomi, computer vision company Megvii, and internet giant Baidu. The hope is to get them to start heeding the principles in their products and influence other companies and organizations to do the same.
Both Vosloo and Zeng hope that by articulating the unique concerns AI poses for children, the guidelines will raise awareness of these issues. “We come into this with eyes wide open,” Vosloo says. “We understand this is kind of new territory for many governments and companies. So if over time we see more examples of children being included in the AI or policy development cycle, more care around how their data is collected and analyzed—if we see AI made more explainable to children or to their caregivers—that would be a win for us.”
算法能改變孩子們的人生軌跡?,F(xiàn)在,孩子們和亞馬遜的智能語音助手Alexa互動,Alexa能記錄他們的聲音數(shù)據(jù),影響他們的言語發(fā)展和社會性發(fā)展。他們沉溺于在抖音國際版和優(yōu)兔上觀看推薦系統(tǒng)推送給他們的視頻,這些視頻最終會影響他們的世界觀。
算法也越來越多地用于決定兒童所受的教育,以及他們能否獲得醫(yī)療服務,甚至判定父母是否適合撫養(yǎng)他們。有時這會導致極壞的結(jié)果:比如,2020年夏季,成千上萬名學生因為算法——用于代替因疫情而取消的標準化測試——對其學習成績作出不準確的預測而失去大學入學資格。
換言之,兒童常處于使用人工智能(AI)與被人工智能利用的前列,他們可能因而受到傷害。聯(lián)合國兒童基金會(UNICEF)數(shù)字連接政策專家史蒂夫·沃斯盧認為:“由于兒童正處于智力、情感和身體的成長發(fā)育期,他們的可塑性很強?!?/p>
沃斯盧在UNICEF牽頭起草了一套新的指導原則,旨在幫助政府和企業(yè)制定考慮到兒童需求的AI政策。這9條新原則于2020年9月16日發(fā)布,是多次征詢決策者、兒童發(fā)展研究者、AI從業(yè)者和世界各地兒童代表意見的成果。這9條也參照了1989年聯(lián)合國通過的人權(quán)公約《兒童權(quán)利公約》。
很多AI原則內(nèi)容雷同,這套指導原則并不是要重復一遍。今年1月,哈佛大學伯克曼·克萊因互聯(lián)網(wǎng)與社會中心發(fā)表了一篇報告,他們調(diào)查了36份指導國家和企業(yè)AI策略的最重要文件,發(fā)現(xiàn)了8個共同主題,包括隱私、安全、公正和可解釋性。
其實,UNICEF制定的這套原則,是為了對已有的主題進行補充,使它們更適合兒童。例如,AI系統(tǒng)不應該僅僅是可解釋的,還應是兒童可以理解的。AI系統(tǒng)也應考慮兒童特有的發(fā)展需求。沃斯盧說:“兒童擁有除了成人權(quán)利以外的其他一些權(quán)利?!倍?,據(jù)估計,兒童至少占網(wǎng)絡用戶的三分之一。他指出:“我們正在談論的并非是少數(shù)群體。”
除了減輕AI帶來的危害,這些原則也是為了鼓勵開發(fā)可促進兒童成長和福祉的AI體系。例如,經(jīng)證實,如果設計良好,基于人工智能的學習工具能提升兒童的批判性思維和解決問題的技能,并且能為有學習障礙的孩子提供幫助。盡管處于較為初級的階段,情感AI助手能提供心理健康支持,現(xiàn)已證明還能改善自閉癥兒童的社交技能。面部識別技術(shù)如果謹慎使用,能幫助識別被綁架或拐賣的兒童。
還應該教授兒童AI相關(guān)知識,鼓勵他們參與AI開發(fā)。沃斯盧說,這不僅是為了保護他們,也是為了賦予他們能動性,使其具備塑造自己未來的能力。
UNICEF不是唯一考慮這個問題的機構(gòu)。在它發(fā)布這些原則草案的前一天,北京智源人工智能研究院(BAAI),這個由科學技術(shù)部和北京市政府支持的機構(gòu),也發(fā)布了一套兒童AI原則。
BAAI曾發(fā)布被視作中國國家AI發(fā)展指導性原則的《人工智能北京共識》,一年后,該機構(gòu)又發(fā)布了這套由其人工智能倫理與可持續(xù)發(fā)展研究中心主任曾毅帶頭起草的新原則。曾毅說,這套專為兒童制定的新原則,意在“落實”那些比較籠統(tǒng)的原則。它們與UNICEF的原則很接近,也涉及隱私、公正、可解釋性和兒童福祉,不過有些細節(jié)更針對中國關(guān)注的問題。比如,其中一條原則是促進兒童身體健康,包括使用AI技術(shù)幫助應對環(huán)境污染問題。
盡管這兩項行動并無正式關(guān)聯(lián),可它們的發(fā)布時機卻并非巧合。過去幾年,大批AI原則涌現(xiàn),兩位首席起草人都認為,為兒童制定更具針對性的原則自然是下一步。曾毅說:“談及弱勢群體,兒童自然是最弱勢的,這就是為什么我們切實需要特別關(guān)懷這類群體?!眱蓚€小組在起草各自文件時進行了協(xié)商。UNICEF在東亞舉辦咨詢研討會時,曾毅曾到會發(fā)言。
UNICEF現(xiàn)計劃在各個合作國家開展一系列試點項目,觀察這套原則在不同背景下的實用性和有效性。BAAI和幾家大型企業(yè)的代表組建了一個工作小組,這些企業(yè)驅(qū)動著中國國家AI戰(zhàn)略,包括科技教育公司好未來、消費類電子產(chǎn)品公司小米、計算機視覺公司曠視科技,以及互聯(lián)網(wǎng)巨頭百度。他們希望此舉能使這些公司開始在產(chǎn)品中重視這些原則,帶動其他企業(yè)和機構(gòu)也這樣做。
沃斯盧和曾毅都希望,通過闡明AI對兒童構(gòu)成的獨特挑戰(zhàn),這些原則將提升公眾對這些問題的意識。沃斯盧說:“我們參與這項工作時便心中有數(shù)了。我們知道,對很多政府和企業(yè)來說,這是一片全新的領(lǐng)域。如果假以時日,企業(yè)在AI產(chǎn)品開發(fā)周期,或者政府在相關(guān)政策制定周期里,我們看到讓兒童參與其中的更多案例,對兒童的數(shù)據(jù)采集和分析更謹慎;如果我們看到AI越來越能被孩子或他們的監(jiān)護人所理解,就說明我們成功了?!?/p>
(譯者為“《英語世界》杯”翻譯大賽獲獎者)
1亞馬遜基于云計算所開發(fā)的智能語音助手,最初搭載在亞馬遜智能音箱Echo上。與蘋果的Siri和微軟的Cortana一樣,Alexa的設計宗旨是響應各種命令,甚至可以與用戶對話。
2 2020年夏天,英國政府因為疫情封鎖,采用軟件預測學生成績,使大約40%的學生分數(shù)低于預期,沒有被預想的大學錄取。