薩姆·瓊斯/文 袁峰/譯
A performance with living and mechanical partners can teach researchers how to design more relatable1 bots.
人機搭檔表演能教會研究人員如何設(shè)計更讓人認同的機器人。
A dancer shrouded in shades of blue rises to her feet and steps forward on stage. Under a spotlight, she gazes at her partner: a tall, sleek2 robotic arm. As they dance together, the machine’s fluid movements make it seem less stereotypically3 robotic—and, researchers hope, more trustworthy.
舞者從深淺不一的藍色光暈中站起身來,走上舞臺。聚光燈下,她凝視著舞伴——一架頎長、優(yōu)美的機械臂。人機共舞時,機械臂動作流暢,看起來并不刻板機械,研究人員希望這也會讓它看上去更可靠。
“When a human moves one joint, it isn’t the only thing that moves. The rest of our body follows along,” says Amit Rogel, a music technology graduate researcher at Georgia Institute of Technology. “There’s this slight continuity that almost all animals have, and this is really what makes us feel human in our movements.” Rogel programmed this subtle follow-through4 into robotic arms to help create FOREST, a performance collaboration between researchers at Georgia Tech, dancers at Kennesaw State University and a group of robots.
“當(dāng)人活動某一個關(guān)節(jié)時,動的并不只是那個關(guān)節(jié),身體其他部位也會順勢而動?!弊糁蝸喞砉W(xué)院音樂技術(shù)研究生研究員阿米特·羅杰爾說,“幾乎所有動物都具有這種細微的動作連貫性,正是它讓我們的動作顯得富有人的味道?!绷_杰爾將這種微妙的順勢動作編入機械臂的程序,助力創(chuàng)作“福雷斯特”——佐治亞理工學(xué)院研究人員、肯尼索州立大學(xué)舞者和一組機器人三方協(xié)作的表演項目。
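To make the follow-through idea concrete, below is a minimal Python sketch of one way a slight whole-arm continuity could be layered onto joint commands: the commanded joint responds fastest, and joints farther along the arm respond with smaller, lagged steps. The joint count, gain values, and the follow_through function are illustrative assumptions, not the team's actual control code.

```python
import numpy as np

def follow_through(target, current, primary_joint, gain=0.4, decay=0.6):
    """One step of a lagged blend toward a target pose (angles in radians).

    The commanded (primary) joint moves fastest; joints farther from it take
    smaller steps this tick, so motion appears to ripple through the arm
    instead of one joint moving in isolation. Purely illustrative.
    """
    updated = current.copy()
    for j in range(len(current)):
        responsiveness = gain * decay ** abs(j - primary_joint)
        updated[j] += responsiveness * (target[j] - current[j])
    return updated

# Hypothetical 6-joint arm: command joint 0 toward a new pose and let the
# remaining joints trail behind over successive control ticks.
pose = np.zeros(6)
goal = np.array([0.8, 0.1, 0.1, 0.1, 0.1, 0.1])
for _ in range(50):
    pose = follow_through(goal, pose, primary_joint=0)
```

A real controller would tune such a blend against the arm's dynamics and safety limits; the sketch only shows the qualitative shape of the motion.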
The goal is not only to create a memorable performance, but to put into practice what the researchers have learned about building trust between humans and robots. Robotics is already widely used, and the number of collaborative robots—which work with humans on tasks such as tending factory machines and inspecting manufacturing equipment—is expected to climb significantly in the coming years. But although they are becoming more common, trust in them is still low—and this makes humans more reluctant to work with them. “People may not understand how the robot operates, nor what it wants to accomplish,” says Harold Soh, a computer scientist at the National University of Singapore. He was not involved in the project, but his work focuses on human-robot interaction and developing more trustworthy collaborative robots.
該項目的目標不僅是創(chuàng)作令人難忘的表演,而且還要將研究人員對建立人類與機器人互信的認識付諸實踐。機器人技術(shù)已得到廣泛應(yīng)用,與人類協(xié)同執(zhí)行照看工廠機器、檢查製造設(shè)備等任務(wù)的協(xié)作機器人,其數(shù)量預(yù)計在未來幾年將大幅攀升。但儘管機器人日益常見,人們對它們的信任度卻仍很低,因而越加不願意與其協(xié)作?!叭藗兛赡懿涣私鈾C器人如何運作,也不明白它想要完成什麼任務(wù)?!毙录悠聡⒋髮W(xué)計算機科學(xué)家蘇順鴻表示。他雖未參與該項目,但其工作側(cè)重於人類與機器人交互和開發(fā)更值得信賴的協(xié)作機器人。
Although humans love cute fictional machines like R2-D2 or WALL-E, the best real-world robot for a given task may not have the friendliest looks, or move in the most appealing way. “Calibrating5 trust can be difficult when the robot’s appearance and behavior are markedly different from humans,” Soh says. However, he adds, even a disembodied6 robot arm can be designed to act in a way that makes it more relatable7. “Conveying emotion and social messages via a combination of sound and motion is a compelling approach that can make interactions more fluent and natural,” he explains.
人類喜愛R2-D2或WALL-E之類可愛的科幻機器人,但現(xiàn)實世界中執(zhí)行特定任務(wù)的最佳機器人,未必外表最友善或動作最迷人?!爱?dāng)機器人的外表和行為與人類迥然不同時,就很難把對它的信任調(diào)適到恰當(dāng)?shù)某潭取!碧K順鴻說。但他又指出,即便是無軀體的機械臂,也可設(shè)計得行為舉止更讓人認同。他解釋說:“通過聲音與動作相結(jié)合的方式表達情感、傳達社交信息,是一種頗具吸引力的做法,能使交互更加順暢、自然?!?
That’s why the Georgia Tech team decided to program nonhumanoid8 machines to appear to convey emotion, through both motion and sound. Rogel’s latest work in this area builds off years of research. For instance, to figure out which sounds best convey specific emotions, Georgia Tech researchers asked singers and guitarists to look at a diagram called an “emotion wheel,” pick an emotion, and then sing or play notes to match that feeling. The researchers then trained a machine learning model—one they planned to embed in the robots—on the resulting data set. They wanted to allow the robots to produce a vast range of sounds, some more complex than others. “You could say, ‘I want it to be a little bit happy, a little excited and a little bit calm,’” says project collaborator Gil Weinberg, director of Georgia Tech’s Center for Music Technology.
正因如此,佐治亞理工學(xué)院團隊決定為非人形機器編制程序,使其看似能通過動作和聲音表達情感。羅杰爾在這一領(lǐng)域的最新工作建立在多年研究的基礎(chǔ)上。例如,為了弄清哪些聲音最能表達特定情感,佐治亞理工學(xué)院研究人員讓多名歌手和吉他手觀看名為“情感輪盤”的示意圖,挑選一種情感,然後詠唱或演奏與之匹配的樂音。隨後,研究人員用由此獲得的數(shù)據(jù)集訓(xùn)練了一個機器學(xué)習(xí)模型——他們計劃將該模型嵌入機器人。他們想讓機器人發(fā)出各種各樣的聲音,其中一些聲音比其他聲音更復(fù)雜。佐治亞理工學(xué)院音樂技術(shù)中心主任、項目協(xié)作者吉爾·溫伯格說:“你可以說,‘我希望它有些許快樂、些許興奮、些許平靜?!?
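As a purely illustrative sketch of that blending idea (not the team's trained model), one could place each emotion from an emotion wheel at assumed valence-arousal coordinates, average them by the requested weights, and map the result to coarse sound parameters. Every name and number below is a placeholder.

```python
# Toy stand-in for the learned sound model described above. The coordinates,
# parameter mappings, and weights are invented for illustration only.
EMOTION_WHEEL = {
    "happy":   (0.8, 0.5),    # (valence, arousal)
    "excited": (0.6, 0.9),
    "calm":    (0.4, -0.6),
    "sad":     (-0.7, -0.4),
}

def blend_emotions(weights):
    """Weighted average of emotion coordinates, e.g. 'a little bit happy,
    a little excited and a little bit calm'."""
    total = sum(weights.values())
    valence = sum(w * EMOTION_WHEEL[e][0] for e, w in weights.items()) / total
    arousal = sum(w * EMOTION_WHEEL[e][1] for e, w in weights.items()) / total
    return valence, arousal

def sound_parameters(valence, arousal):
    """Map the blend to coarse audio controls: higher arousal -> faster and
    louder, higher valence -> brighter pitch. A learned model would replace
    these hand-written rules."""
    return {
        "tempo_bpm": 90 + 60 * arousal,
        "pitch_offset_semitones": 4 * valence,
        "loudness": 0.5 + 0.4 * arousal,
    }

v, a = blend_emotions({"happy": 0.4, "excited": 0.3, "calm": 0.3})
print(sound_parameters(v, a))
```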
Next, the team worked to tie those sounds to movement. In 2020, the researchers had demonstrated that combining movement with emotion-based sound improved trust in robotic arms in a virtual setting (a requirement fostered by the pandemic). But that experiment only needed the robots to perform four different gestures to convey four different emotions. To broaden a machine’s emotional-movement options for his new study, which has been conditionally accepted for publication in Frontiers in Robotics and AI, Rogel waded through9 research related to human body language. “For each one of those body language [elements], I looked at how to adapt that to a robotic movement,” he says. Then, dancers affiliated with Kennesaw State University helped the scientists refine those movements. As the performers moved in ways intended to convey emotion, Rogel and fellow researchers recorded them with cameras and motion-capture suits, and subsequently generated algorithms so that the robots could match those movements. “I would ask [Rogel], ‘can you make the robots breathe?’ And the next week, the arms would be kind of ‘inhaling’ and ‘exhaling,’” says Kennesaw State University dance professor Ivan Pulinkala.
接下來,研究團隊將這些聲音與動作結(jié)合起來。2020年,研究人員證明,將動作與基於情感的聲音相結(jié)合,可增進人們在虛擬環(huán)境中對機械臂的信任(採用虛擬環(huán)境是疫情造成的要求)。但那項實驗只需機器人做出四種不同姿態(tài)來表達四種不同情感。為了在自己的新研究中拓寬機器的情感動作選項,羅杰爾費力研讀了與人類肢體語言相關(guān)的研究;這項新研究已被《機器人與人工智能前沿》雜志擬錄用。他表示:“對於其中每一種肢體語言[元素],我都研究了如何將其轉(zhuǎn)化為機器人的動作。”隨後,肯尼索州立大學(xué)的舞者們協(xié)助科研人員優(yōu)化了這些動作。表演者以旨在傳達情感的方式舞動時,羅杰爾與其他研究人員用相機和動作捕捉服加以記錄,然後生成算法,讓機器人能夠復(fù)現(xiàn)這些動作。“我會問(羅杰爾),‘你能讓機器人呼吸嗎?’到了下一周,機械臂就會做出類似‘吸氣’‘呼氣’的動作?!笨夏崴髦萘⒋髮W(xué)舞蹈學(xué)教授伊萬·普林卡拉說。
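For the "breathing" behavior the dancers asked for, one simple way to picture it is a slow sinusoidal offset layered onto whatever pose the arm is holding. The joint indices, amplitude, and streaming rate in this sketch are assumptions for illustration, not the FOREST implementation.

```python
import math

def breathing_pose(base_pose, t, period_s=4.0, amplitude_rad=0.05):
    """Overlay a slow 'inhale/exhale' oscillation on a held pose: one joint
    rises while a neighboring joint compensates slightly, which reads as
    breathing. Joint indices and amplitudes are illustrative assumptions."""
    phase = 2 * math.pi * t / period_s
    pose = list(base_pose)
    pose[1] += amplitude_rad * math.sin(phase)        # "shoulder" lifts
    pose[2] -= 0.5 * amplitude_rad * math.sin(phase)  # "elbow" eases back
    return pose

# Stream poses at an assumed 50 Hz to whatever controller drives the arm.
base = [0.0, -0.6, 1.2, 0.0, 0.5, 0.0]
frames = [breathing_pose(base, tick / 50.0) for tick in range(200)]
```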
Pulinkala choreographed10 the FOREST performance, which put into practice what the researcher-dancer team learned about creating and deploying emotion-based sounds and movements. “My approach was to kind of breathe a sense of life into the robots and have the dancers [appear] more ‘mechanized,’” Pulinkala says, reflecting on the start of the collaboration. “I asked, ‘How can the robots have more emotional physicality11? And how does a dancer then respond to that?’”
普林卡拉為這場“福雷斯特”表演編排了舞蹈,將研究人員與舞者聯(lián)合團隊在創(chuàng)作和運用基於情感的聲音和動作方面的認識付諸實踐?!拔业淖龇ㄊ墙o機器人注入一種生命感,同時讓舞者[顯得]更加‘機械化’?!逼樟挚ɡ仡櫤献髦鯐r說,“我當(dāng)時自問:‘機器人如何才能有更富情感的肢體表現(xiàn)?舞者又該如何回應(yīng)?’”
According to the dancers, this resulted in machines that seemed a little more like people. Christina Massad, a freelance professional dancer and an alumna of Kennesaw State University, recalls going into the project thinking she would be dancing around the robots—not with them. But she says her mindset shifted as soon as she saw the fluidity of the robots’ movements, and she quickly started viewing them as more than machines. “In one of the first rehearsals, I accidentally bumped into one, and I immediately told it, ‘Oh my gosh, I’m so sorry,’” she says. “Amit laughed and told me, ‘It’s okay, it’s just a robot.’ But it felt like more than a robot.”
舞者們表示,這樣一來,這些機器看上去多了幾分人的味道??死锼沟倌取ゑR薩德是一名自由職業(yè)舞者,也是肯尼索州立大學(xué)校友。她記得加入這個項目時,還以為自己會圍著機器人跳舞,而不是與其共舞。但她說,一看到機器人動作如此流暢,她的想法頓時改變,很快便不再將它們視為單純的機器?!霸谧畛醯囊淮闻啪氈校也恍⌒淖驳搅艘粋€機器人,立馬就對它說,‘天哪,真對不起?!彼f,“阿米特笑著對我說,‘沒事,它只是個機器人。’可它給人的感覺卻不僅僅是個機器人。”
Soh says he finds the performance fascinating and thinks it could bring value to the field of human-robot relationships. “The formation and dynamics of trust in human-robot teams is not well-understood,” he says, “and this work may shed light on the evolution of trust in teams.”
蘇順鴻說,他覺得這場表演引人入勝,認為它能為人機關(guān)係研究領(lǐng)域帶來價值?!叭藗兩形闯浞至私馊藱C團隊中信任的形成和動態(tài)變化?!彼f,“這項工作或有助於揭示團隊中信任的演化。”
(譯者為“《英語世界》杯”翻譯大賽獲獎?wù)撸?/p>
1 relatable 能讓人認同的,能讓人產(chǎn)生共鳴的。
2 sleek 線條流暢的,造型優(yōu)美的。
3 stereotypically 模式化地,刻板地。
4 follow-through 順勢動作。
5 calibrate 調(diào)諧,調(diào)適。
6 disembodied 脫離軀體的;由看不見的人發(fā)出的。
7 relatable 可明白的,可理解的。
8 nonhumanoid 非人形的,非類人的。
9 wade through 艱難地處理,費力地閱讀。
10 choreograph 設(shè)計舞蹈動作,編舞。
11 physicality 身體特征,肉體性。