Talk MP3 + Transcript: How Does Misinformation on Social Media Disrupt Democracy, the Economy and the Public?

    The TingClass TED audio column offers MP3 recordings of TED talks together with transcripts for English learners. This article presents the talk "How does misinformation on social media disrupt democracy, the economy and the public?" We hope you enjoy it!

    [Speaker and Introduction] Sinan Aral

    Data scientist, entrepreneur and investor Sinan Aral reveals how misinformation on social media disrupts our democracy, our economy and the public.

    [Topic] With misinformation running rampant, how do we defend the truth?

    [Transcript]

    Translator: Ivana Korom    Reviewer: Krystian Aparta

    00:14

    So, on April 23 of 2013, the Associated Press put out the following tweet on Twitter. It said, "Breaking news: Two explosions at the White House and Barack Obama has been injured." This tweet was retweeted 4,000 times in less than five minutes, and it went viral thereafter.


    00:41

    Now, this tweet wasn't real news put out by the Associated Press. In fact it was false news, or fake news, that was propagated by Syrian hackers who had infiltrated the Associated Press Twitter handle. Their purpose was to disrupt society, but they disrupted much more. Because automated trading algorithms immediately seized on the sentiment of this tweet, and began trading based on the potential that the president of the United States had been injured or killed in this explosion. And as they started trading, they immediately sent the stock market crashing, wiping out 140 billion dollars in equity value in a single day.
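
    The chain Aral describes, a headline's sentiment feeding directly into automated trading, can be sketched roughly like this. Everything here is a hypothetical illustration: the lexicon, the weights and the sell threshold are invented, and no real trading system is this simple.

```python
# Hypothetical sketch of a sentiment-triggered trading signal.
# The alarm-word weights and the threshold are invented for illustration.
ALARM_TERMS = {
    "explosion": -3.0, "explosions": -3.0,
    "injured": -2.0, "killed": -4.0, "attack": -3.0,
}

def headline_sentiment(text: str) -> float:
    """Naive lexicon score: sum the weights of alarm words in a headline."""
    return sum(
        ALARM_TERMS.get(word.strip(".,:;!?\"'"), 0.0)
        for word in text.lower().split()
    )

def trading_signal(text: str, sell_threshold: float = -4.0) -> str:
    return "SELL" if headline_sentiment(text) <= sell_threshold else "HOLD"

tweet = ("Breaking news: Two explosions at the White House "
         "and Barack Obama has been injured.")
print(trading_signal(tweet))  # SELL -- one fake tweet moves real money
```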


    01:25

    Robert Mueller, special counsel prosecutor in the United States, issued indictments against three Russian companies and 13 Russian individuals for a conspiracy to defraud the United States by meddling in the 2016 presidential election. And the story this indictment tells is the story of the Internet Research Agency, the shadowy arm of the Kremlin on social media. During the presidential election alone, the Internet Research Agency's efforts reached 126 million people on Facebook in the United States, issued three million individual tweets and 43 hours' worth of YouTube content. All of which was fake -- misinformation designed to sow discord in the US presidential election.


    02:21

    A recent study by Oxford University showed that in the recent Swedish elections, one third of all of the information spreading on social media about the election was fake or misinformation.


    02:35

    In addition, these types of social-media misinformation campaigns can spread what has been called "genocidal propaganda," for instance against the Rohingya in Burma, or the rumors that triggered mob killings in India.


    02:50

    We studied fake news and began studying it before it was a popular term. And we recently published the largest-ever longitudinal study of the spread of fake news online on the cover of "Science" in March of this year. We studied all of the verified true and false news stories that ever spread on Twitter, from its inception in 2006 to 2017. The stories had been verified by six independent fact-checking organizations, so we knew which stories were true and which stories were false. We could measure their diffusion, the speed of their diffusion, the depth and breadth of their diffusion, how many people became entangled in the information cascade and so on. And what we did in this paper was we compared the spread of true news to the spread of false news. And here's what we found.
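
    As a rough illustration of the diffusion measures the study describes, here is a minimal sketch computing the size, depth and breadth of a retweet cascade from a parent-child edge list. The toy data and function names are assumptions for the example, not the paper's actual code.

```python
# Minimal sketch of cascade metrics: size, depth and breadth of a retweet tree.
from collections import defaultdict, deque

def cascade_metrics(edges, root):
    """edges: (parent, child) retweet pairs; root: the original tweet's author."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    # Breadth-first walk to find each user's depth in the cascade.
    depth_of = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in children[node]:
            depth_of[child] = depth_of[node] + 1
            queue.append(child)

    per_level = defaultdict(int)
    for d in depth_of.values():
        per_level[d] += 1

    return {
        "size": len(depth_of),               # users entangled in the cascade
        "depth": max(depth_of.values()),     # longest retweet chain
        "breadth": max(per_level.values()),  # widest single level
    }

edges = [("A", "B"), ("A", "C"), ("B", "D"), ("D", "E")]
print(cascade_metrics(edges, "A"))  # {'size': 5, 'depth': 3, 'breadth': 2}
```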


    03:48

    We found that false news diffused further, faster, deeper and more broadly than the truth in every category of information that we studied, sometimes by an order of magnitude. And in fact, false political news was the most viral. It diffused further, faster, deeper and more broadly than any other type of false news. When we saw this, we were at once worried but also curious. Why? Why does false news travel so much further, faster, deeper and more broadly than the truth?


    04:21

    The first hypothesis that we came up with was, "Well, maybe people who spread false news have more followers or follow more people, or tweet more often, or maybe they're more often 'verified' users of Twitter, with more credibility, or maybe they've been on Twitter longer." So we checked each one of these in turn. And what we found was exactly the opposite. False-news spreaders had fewer followers, followed fewer people, were less active, less often "verified" and had been on Twitter for a shorter period of time. And yet, false news was 70 percent more likely to be retweeted than the truth, controlling for all of these and many other factors.
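
    A figure like "70 percent more likely to be retweeted, controlling for all of these factors" is the kind of number a logistic regression yields: an odds ratio on a falsity indicator with user covariates held in the model. Here is a hedged sketch on synthetic data; it is not the paper's actual model, data or variable names.

```python
# Sketch of a "controlling for these factors" analysis: logistic regression of
# retweet outcomes on veracity plus user covariates, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "is_false": rng.integers(0, 2, n),
    "log_followers": rng.normal(5, 2, n),
    "verified": rng.integers(0, 2, n),
    "account_age_days": rng.integers(30, 4000, n),
})
# Synthetic ground truth: falsity raises the retweet odds even though its
# spreaders get no structural advantage from followers or verification.
logit = -1 + 0.53 * df.is_false + 0.10 * df.log_followers + 0.2 * df.verified
df["retweeted"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit(
    "retweeted ~ is_false + log_followers + verified + account_age_days",
    data=df,
).fit(disp=False)
print(np.exp(model.params["is_false"]))  # odds ratio on falsity, about 1.7
```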


    05:01

    So we had to come up with other explanations. And we devised what we called a "novelty hypothesis." So if you read the literature, it is well known that human attention is drawn to novelty, things that are new in the environment. And if you read the sociology literature, you know that we like to share novel information. It makes us seem like we have access to inside information, and we gain in status by spreading this kind of information.


    05:30

    So what we did was we measured the novelty of an incoming true or false tweet, compared to the corpus of what that individual had seen in the 60 days prior on Twitter. But that wasn't enough, because we thought to ourselves, "Well, maybe false news is more novel in an information-theoretic sense, but maybe people don't perceive it as more novel."
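
    One way to make "novel in an information-theoretic sense" concrete is to compare the word distribution of an incoming tweet against the user's prior 60-day corpus with a KL divergence, as in this sketch. The paper worked with topic-model distances; plain smoothed unigram distributions stand in for them here.

```python
# Hedged sketch of tweet novelty as KL divergence between the incoming tweet's
# word distribution and the user's prior corpus (smoothed to avoid zeros).
import math
from collections import Counter

def distribution(text, vocab, alpha=0.01):
    counts = Counter(text.lower().split())
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def novelty(tweet, prior_corpus):
    vocab = set(tweet.lower().split()) | set(prior_corpus.lower().split())
    p = distribution(tweet, vocab)          # the incoming tweet
    q = distribution(prior_corpus, vocab)   # what the user already saw
    return sum(p[w] * math.log(p[w] / q[w]) for w in vocab)  # KL(p || q)

seen = "election results polling places candidates debate economy"
print(novelty("white house explosion injures president", seen))  # high
print(novelty("candidates debate the economy", seen))            # low
```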


    05:54

    So to understand people's perceptions of false news, we looked at the information and the sentiment contained in the replies to true and false tweets. And what we found was that across a bunch of different measures of sentiment -- surprise, disgust, fear, sadness, anticipation, joy and trust -- false news exhibited significantly more surprise and disgust in the replies to false tweets. And true news exhibited significantly more anticipation, joy and trust in reply to true tweets. The surprise corroborates our novelty hypothesis. This is new and surprising, and so we're more likely to share it.
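
    A miniature version of that reply-emotion measurement might look like the following. The real study used a full emotion lexicon; the three-emotion, nine-word lexicon here is invented purely for illustration.

```python
# Sketch of measuring emotion in replies: count lexicon hits per emotion.
from collections import Counter

EMOTION_LEXICON = {
    "surprise": {"unbelievable", "wow", "shocking"},
    "disgust": {"disgusting", "gross", "awful"},
    "trust": {"confirmed", "reliable", "official"},
}

def emotion_profile(replies):
    counts, total = Counter(), 0
    for reply in replies:
        for word in reply.lower().split():
            for emotion, words in EMOTION_LEXICON.items():
                if word.strip(".,!?") in words:
                    counts[emotion] += 1
                    total += 1
    return {e: counts[e] / total for e in counts} if total else {}

replies_to_false = ["Wow, unbelievable!", "This is shocking", "Gross if true"]
print(emotion_profile(replies_to_false))  # dominated by surprise and disgust
```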


    06:43

    At the same time, there was congressional testimony in front of both houses of Congress in the United States, looking at the role of bots in the spread of misinformation. So we looked at this too -- we used multiple sophisticated bot-detection algorithms to find the bots in our data and to pull them out. So we pulled them out, we put them back in and we compared what happens to our measurement. And what we found was that, yes indeed, bots were accelerating the spread of false news online, but they were accelerating the spread of true news at approximately the same rate. Which means bots are not responsible for the differential diffusion of truth and falsity online. We can't abdicate that responsibility, because we, humans, are responsible for that spread.
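
    The robustness check described here, pull the bots out, put them back in, and compare, can be sketched as a simple filter on a bot-likelihood score. The toy data and the `bot_score` column are assumptions standing in for the output of a real detector such as Botometer.

```python
# Sketch of the bot robustness check: drop likely bots and see whether the
# true/false spread gap survives.
import pandas as pd

def spread_gap(tweets: pd.DataFrame) -> float:
    """Mean retweets of false news minus mean retweets of true news."""
    false_rt = tweets.loc[tweets.is_false, "retweets"].mean()
    true_rt = tweets.loc[~tweets.is_false, "retweets"].mean()
    return false_rt - true_rt

tweets = pd.DataFrame({
    "is_false": [True, True, False, False, True, False],
    "retweets": [900, 700, 300, 250, 850, 400],
    "bot_score": [0.9, 0.1, 0.2, 0.8, 0.15, 0.1],  # hypothetical detector output
})

humans_only = tweets[tweets.bot_score < 0.5]
print(spread_gap(tweets), spread_gap(humans_only))
# If the gap barely changes without bots, humans drive the difference.
```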


    07:35

    Now, everything that I have told you so far, unfortunately for all of us, is the good news.


    07:43

    The reason is that it's about to get a whole lot worse. And two specific technologies are going to make it worse. We are going to see the rise of a tremendous wave of synthetic media: fake video and fake audio that is very convincing to the human eye. And this will be powered by two technologies.


    08:06

    The first of these is known as "generative adversarial networks." This is a machine-learning model with two networks: a discriminator, whose job it is to determine whether something is true or false, and a generator, whose job it is to generate synthetic media. So the synthetic generator generates synthetic video or audio, and the discriminator tries to tell, "Is this real or is this fake?" And in fact, it is the job of the generator to maximize the likelihood that it will fool the discriminator into thinking the synthetic video and audio that it is creating is actually true. Imagine a machine in a hyperloop, trying to get better and better at fooling us.
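
    The generator-discriminator loop can be made concrete with a minimal sketch. This toy example, written with PyTorch, trains on one-dimensional numbers (real samples drawn near 4.0) rather than video or audio; it illustrates the adversarial setup, not a deepfake system.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator on toy data.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0   # "authentic media"
    fake = G(torch.randn(64, 8))      # "synthetic media"

    # Discriminator's job: label real as 1 and fake as 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator's job: maximize the chance the discriminator calls fake real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward 4.0 as G improves
```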


    08:51

    This is combined with the second technology, which is essentially the democratization of artificial intelligence: the ability for anyone, without any background in artificial intelligence or machine learning, to deploy these kinds of algorithms to generate synthetic media. Together, the two make it ultimately so much easier to create fake videos.


    09:15

    The White House issued a false, doctored video of a journalist interacting with an intern who was trying to take his microphone. They removed frames from this video in order to make his actions seem more punchy. And when videographers and stuntmen and women were interviewed about this type of technique, they said, "Yes, we use this in the movies all the time to make our punches and kicks look more choppy and more aggressive." They then put out this video and partly used it as justification to revoke the White House press pass of Jim Acosta, the reporter in the video. And CNN had to sue to have that press pass reinstated.


    10:01

    There are about five different paths that I can think of that we can follow to try and address some of these very difficult problems today. Each one of them has promise, but each one of them has its own challenges. The first one is labeling. Think about it this way: when you go to the grocery store to buy food to consume, it's extensively labeled. You know how many calories it has, how much fat it contains -- and yet when we consume information, we have no labels whatsoever. What is contained in this information? Is the source credible? Where is this information gathered from? We have none of that information when we are consuming information. That is a potential avenue, but it comes with its challenges. For instance, who gets to decide, in society, what's true and what's false? Is it the governments? Is it Facebook? Is it an independent consortium of fact-checkers? And who's checking the fact-checkers?
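
    As a concrete, and entirely hypothetical, sketch of what a "nutrition label" for information might hold, here is a minimal schema. Every field name and value is invented for illustration; no such standard exists.

```python
# Hypothetical "nutrition label" schema for a news item.
from dataclasses import dataclass, field

@dataclass
class InformationLabel:
    source: str
    source_credibility: float        # e.g. a 0-1 score from fact-check history
    original_reporting: bool         # firsthand reporting vs. aggregation
    corrections_issued: int
    verified_by: list[str] = field(default_factory=list)

label = InformationLabel(
    source="example-news.com",
    source_credibility=0.35,
    original_reporting=False,
    corrections_issued=4,
    verified_by=[],                  # no independent fact-checker signed off
)
print(label)
```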


    11:03

    Another potential avenue is incentives. We know that during the US presidential election there was a wave of misinformation that came from Macedonia that didn't have any political motive but instead had an economic motive. And this economic motive existed, because false news travels so much farther, faster and more deeply than the truth, and you can earn advertising dollars as you garner eyeballs and attention with this type of information. But if we can depress the spread of this information, perhaps it would reduce the economic incentive to produce it at all in the first place.


    11:41

    Third, we can think about regulation, and certainly, we should think about this option. In the United States, currently, we are exploring what might happen if Facebook and others are regulated. While we should consider things like regulating political speech, labeling the fact that it's political speech, making sure foreign actors can't fund political speech, it also has its own dangers. For instance, Malaysia just instituted a six-year prison sentence for anyone found spreading misinformation. And in authoritarian regimes, these kinds of policies can be used to suppress minority opinions and to continue to extend repression.


    12:25

    The fourth possible option is transparency. We want to know how Facebook's algorithms work: how does the data combine with the algorithms to produce the outcomes that we see? We want them to open the kimono and show us exactly the inner workings of how Facebook is working. And if we want to know social media's effect on society, we need scientists, researchers and others to have access to this kind of information. But at the same time, we are asking Facebook to lock everything down, to keep all of the data secure.


    13:01

    So, Facebook and the other social media platforms are facing what I call a transparency paradox. We are asking them, at the same time, to be open and transparent and, simultaneously, secure. This is a very difficult needle to thread, but they will need to thread this needle if we are to achieve the promise of social technologies while avoiding their peril.


    13:25

    The final thing that we could think about is algorithms and machine learning: technology devised to root out and understand fake news, how it spreads, and to try to dampen its flow. Humans have to be in the loop of this technology, because we can never escape that underlying any technological solution or approach is a fundamental ethical and philosophical question about how we define truth and falsity, to whom we give the power to define truth and falsity, which opinions are legitimate, which type of speech should be allowed and so on. Technology is not a solution for that. Ethics and philosophy are a solution for that.
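
    As a sketch of this last avenue, here is a tiny fake-headline classifier using TF-IDF features and logistic regression with scikit-learn. The four training headlines and their labels are invented; a real system would need large labeled corpora and, as the talk argues, humans in the loop.

```python
# Toy sketch of an ML fake-news filter: TF-IDF text features + logistic
# regression. Training data here is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Central bank announces quarter-point rate change",
    "Parliament passes budget after lengthy debate",
    "SHOCKING: miracle cure the government is hiding",
    "You won't believe what this politician secretly did",
]
labels = [0, 0, 1, 1]  # 0 = credible, 1 = fake (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(headlines, labels)
print(clf.predict_proba(["SHOCKING secret cure discovered"])[0][1])
```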


    14:11

    Nearly every theory of human decision making, human cooperation and human coordination has some sense of the truth at its core. But with the rise of fake news, the rise of fake video, the rise of fake audio, we are teetering on the brink of the end of reality, where we cannot tell what is real from what is fake. And that's potentially incredibly dangerous.


    14:39

    We have to be vigilant in defending the truth against misinformation. With our technologies, with our policies and, perhaps most importantly, with our own individual responsibilities, decisions, behaviors and actions.


    14:58

    Thank you very much.


    14:59

    (Applause)

