Lisa Fazio
Why humans stink at finding falsehoods.
Here's a quick quiz for you:
· In the biblical story, what was Jonah swallowed by?
· How many animals of each kind did Moses take on the Ark?
Did you answer "whale" to the first question and "two" to the second? Most people do… even though they're well aware that it was Noah, not Moses, who built the ark in the biblical story.
Psychologists like me call this phenomenon the Moses Illusion. It's just one example of how people are very bad at picking up on factual errors in the world around them. Even when people know the correct information, they often fail to notice errors and will even go on to use that incorrect information in other situations.
Research from cognitive psychology shows that people are naturally poor fact-checkers, and it is very difficult for us to compare things we read or hear to what we already know about a topic. In what's been called an era of "fake news," this reality has important implications for how people consume journalism, social media, and other public information.
Failing to notice what you know is wrong
The Moses Illusion has been studied repeatedly since the 1980s. It occurs with a variety of questions, and the key finding is that—even though people know the correct information—they don't notice the error and proceed to answer the question.
In the original study, 80 percent of the participants failed to notice the error in the question despite later correctly answering the question “Who was it that took the animals on the Ark?”
The Moses Illusion demonstrates what psychologists call knowledge neglect—people have relevant knowledge, but they fail to use it.
One way my colleagues and I have studied this knowledge neglect is by having people read fictional stories that contain true and false information about the world. For example, one story is about a character's summer job at a planetarium. Some information in the story is correct: "Lucky me, I had to wear some huge old space suit. I don't know if I was supposed to be anyone in particular—maybe I was supposed to be Neil Armstrong, the first man on the moon." Other information is incorrect: "First I had to go through all the regular astronomical facts, starting with how our solar system works, that Saturn is the largest planet, etc."
Later, we give participants a trivia test with some new questions (Which precious gem is red?) and some questions that relate to the information from the story (What is the largest planet in the solar system?). We reliably find positive effects of reading the correct information within the story—participants are more likely to answer "Who was the first person to step foot on the moon?" correctly. We also see negative effects of reading the misinformation—participants are less likely to recall that Jupiter is the largest planet and more likely to answer with Saturn.
These negative effects of reading false information occur even when the incorrect information directly contradicts people's prior knowledge. In one study, my colleagues and I had people take a trivia test two weeks before reading the stories. Thus, we knew what information each person did and did not know. Participants still learned false information from the stories they later read. In fact, they were equally likely to pick up false information from the stories whether or not it contradicted their prior knowledge.
Can you improve at noticing incorrect info?
So people often fail to notice errors in what they read and will use those errors in later situations. But what can we do to prevent this influence of misinformation?
Expertise or greater knowledge seems to help, but it doesn't solve the problem. Even biology graduate students will attempt to answer distorted questions such as "Water contains two atoms of helium and how many atoms of oxygen?"
Many of the interventions my colleagues and I have implemented to try to reduce people's reliance on the misinformation have failed or even backfired. One initial thought was that participants would be more likely to notice the errors if they had more time to process the information. So we presented the stories in a book-on-tape format and slowed down the presentation rate. But instead of using the extra time to detect and avoid the errors, participants were even more likely to produce the misinformation from the stories on a later trivia test.
Next, we tried highlighting the critical information in a red font. We told readers to pay particular attention to the information presented in red with the hope that paying special attention to the incorrect information would help them notice and avoid the errors. Instead, they paid additional attention to the errors and were thus more likely to repeat them on the later test.
The one thing that does seem to help is to act like a professional fact-checker. When participants are instructed to edit the story and highlight any inaccurate statements, they are less likely to learn misinformation from the story. Similar results occur when participants read the stories sentence by sentence and decide whether each sentence contains an error.
It's important to note that even these "fact-checking" readers miss many of the errors and still learn false information from the stories. For example, in the sentence-by-sentence detection task, participants caught about 30 percent of the errors. But given their prior knowledge, they should have been able to detect at least 70 percent.
Quirks of psychology make us miss mistakes
Why are human beings so bad at noticing errors and misinformation? Psychologists believe that there are at least two forces at work.
First, people have a general bias to believe that things are true. (After all, most things that we read or hear are true.) In fact, there's some evidence that we initially process all statements as true and that it then takes cognitive effort to mentally mark them as false.
Second, people tend to accept information as long as its close enough to the correct information. Natural speech often includes errors, pauses, and repeats. (“She was wearing a blue—um, I mean, a black, a black dress.”) One idea is that to maintain conversations we need to go with the flow—accept information that is “good enough” and just move on.
And people don't fall for these illusions when the incorrect information is obviously wrong. For example, people don't try to answer the question "How many animals of each kind did Nixon take on the Ark?" and people don't believe that Pluto is the largest planet after reading it in a fictional story.
Detecting and correcting false information is difficult work and requires fighting against the ways our brains like to process information. Critical thinking alone wont save us. Our psychological quirks put us at risk of falling for misinformation, disinformation and propaganda. Professional fact-checkers provide an essential service in hunting out incorrect information in the public view. As such, they are one of our best hopes for zeroing in on errors and correcting them, before the rest of us read or hear the false information and incorporate it into what we know of the world.