Financial Times: The ultimate fake news scenario?

    The ultimate fake news scenario?

    Researchers at Stanford University have developed Face2Face, a technology that makes it easy to forge video and audio of a particular person. The technique has raised fears that, in the near future, someone could fabricate a convincing video of the US president declaring war, and the White House might struggle to respond in time. Will this technology become the ultimate weapon of fake news?

    Vocabulary and background you may encounter in the test:

    snippet ['snɪpɪt] a small piece or fragment

    febrile ['fiːbraɪl] feverish; feverishly tense

    incendiary [ɪn'sendiəri] inflammatory, provocative

    nefarious [nɪ'feəriəs] wicked, villainous

    disconcert [ˌdɪskən'sɜːt] to unsettle, to disturb

    bogus ['bəʊɡəs] fake, counterfeit

    The ultimate fake news scenario (704 words)

    By Anjana Ahuja

    Imagine looking in a mirror and seeing not your own reflection but that of Donald Trump. Each time you contort your face, you simultaneously contort his. You smile, he smiles. You scowl, he scowls. You control, in real time, the face of the president of the US.

    That is the sinister potential of Face2Face, a technology developed by researchers at Stanford University in California that allows someone to transpose their facial gestures on to the video of someone else.

    Now imagine marrying that “facial re-enactment” technology to artfully snipped audio clips of the president's previous public pronouncements. You post your creation on YouTube: a convincing snippet of Mr Trump declaring nuclear war against North Korea. In the current febrile climate, the incendiary video might well go viral before the White House can scramble a denial.

    It is the ultimate fake news scenario but not an inconceivable one: scientists have already demonstrated the concept by altering YouTube videos of George HW Bush, Barack Obama and Vladimir Putin.

    Now Darpa, the Defense Advanced Research Projects Agency in the US, has embarked on a research programme called MediFor (short for media forensics). Darpa says its programme is about levelling a field that “currently favours the manipulator”, a nefarious advantage that becomes a national security concern if the goal of forgery is propaganda or misinformation.

    The five-year programme is intended to turn out a system capable of analysing hundreds of thousands of images a day and immediately assessing if they have been tampered with. Professor Hany Farid, a computer scientist at Dartmouth College, New Hampshire, is among the academics involved. He specialises in detecting the manipulation of images, and his work includes assignments for law enforcement agencies and media organisations.

    “I've now seen the technology get good enough that I'm very concerned,” Prof Farid told Nature last week. “At some point, we will reach a stage when we can generate realistic video, with audio, of a world leader, and that's going to be very disconcerting.” He describes the attempt to keep up with the manipulators as a technological arms race.

    At the moment, spotting fakery takes time and expert knowledge, meaning that the bulk of bogus pictures slip into existence unchallenged. The first step with a questionable picture is to feed it into a reverse image search, such as Google Image Search, which will retrieve the picture if it has appeared elsewhere (this has proven surprisingly useful in uncovering scientific fraud, in instances when authors have plagiarised graphs).

    Photographs can be scrutinised for unusual edges or disturbances in colour. A colour image is composed of single, one-colour pixels. The lone dots are combined in particular ways to create the many hues and shading in a photograph. Inserting another image, or airbrushing something out, disrupts that characteristic pixellation. Shadows are another giveaway. Professor Farid cites a 2012 viral video of an eagle snatching a child: his speedy analysis revealed inconsistent shadows, exposing the film as a computer-generated concoction.
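    Out of curiosity, a pixel-level check of the kind described above can be sketched in a few lines. Below is a minimal example of one widely used technique, error level analysis, which resaves a JPEG and highlights regions that recompress differently from their surroundings; the choice of technique and the file names are illustrative assumptions, not the specific tools the article mentions.

```python
# A minimal error-level-analysis (ELA) sketch: resave a JPEG at a known
# quality and look at where the recompression error is unusually strong.
# Spliced-in regions often recompress differently from the rest of the image.
# "photo.jpg" is a placeholder path, not a file from the article.
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90, scale=15):
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")
    # Per-pixel absolute difference between the original and the resaved copy.
    diff = ImageChops.difference(original, resaved)
    # Amplify the (normally faint) differences so they are visible by eye.
    return diff.point(lambda px: min(255, px * scale))

if __name__ == "__main__":
    error_level_analysis("photo.jpg").save("ela_map.jpg")
    # Bright patches in ela_map.jpg are candidates for closer inspection,
    # not proof of tampering on their own.
```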

    Researchers at Massachusetts Institute of Technology have also developed an ingenious method of determining whether the people in video clips are real or animated. By magnifying video clips and checking colour differences in a person's face, they can deduce whether the person has a pulse. Interestingly, some legal experts have argued that computer-generated child pornography should be covered by the First Amendment, which protects free speech. Cases have turned on experts being able to detect whether offending material contains live victims.
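    As a rough illustration of the MIT idea, the sketch below averages the green channel of a face region frame by frame and looks for a dominant frequency in the human heart-rate band. It is a bare-bones approximation: the real method magnifies tiny colour changes spatially, and the video path and fixed crop here are placeholders.

```python
# A toy version of pulse detection from video: track the mean green-channel
# intensity of a face region over time and check for a dominant frequency
# in the human heart-rate band (~0.7-4 Hz, i.e. 42-240 beats per minute).
# "face_clip.mp4" and the hard-coded crop are illustrative placeholders.
import cv2
import numpy as np

def estimate_pulse_hz(video_path, crop=(100, 100, 200, 200)):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    x, y, w, h = crop
    signal = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        face = frame[y:y + h, x:x + w]
        signal.append(face[:, :, 1].mean())  # channel 1 = green in BGR
    cap.release()

    signal = np.asarray(signal) - np.mean(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    # Keep only frequencies that could plausibly be a human pulse; an
    # animated face should show no clear peak in this band.
    band = (freqs > 0.7) & (freqs < 4.0)
    return freqs[band][np.argmax(power[band])]

print(f"Dominant frequency: {estimate_pulse_hz('face_clip.mp4'):.2f} Hz")
```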

    Machine learning is aiding the fraudsters: determined fakers can build “generative adversarial networks”. A GAN is a sort of Jekyll-and-Hyde network that, on the one hand, generates images and, on the other, rejects those that do not measure up authentically against a library of images. The result is a machine with its own inbuilt devil's advocate, able to teach itself how to generate hard-to-spot fakes.
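    To make the Jekyll-and-Hyde structure concrete, here is a toy GAN in PyTorch: a generator learns to imitate a simple Gaussian distribution while a discriminator learns to reject samples that do not look authentic. Every size and hyperparameter below is an arbitrary toy choice, vastly simpler than the networks used to forge images.

```python
# Minimal GAN: a generator learns to imitate samples from N(3, 1) while a
# discriminator learns to tell real samples from generated ones. Each network
# is the other's "devil's advocate", the adversarial loop the article
# describes (sizes and learning rates are arbitrary toy choices).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0   # "library" of authentic samples
    fake = G(torch.randn(64, 8))      # generator's forgeries

    # Discriminator: accept real samples, reject forgeries.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: produce forgeries the discriminator accepts as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean = {samples.mean().item():.2f} (target 3.0)")
```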

    Not all artifice, however, is malevolent: two students built a program capable of producing art that looks like …art. Their source was the WikiArt database of 100,000 paintings: the program, GANGogh, has since generated creations that would not look out of place on a millionaire's wall.

    Such is the epic reach of digital duplicity: it threatens not only to disrupt politics and destabilise the world order, but also to reframe our ideas about art.

    Based on the article you have just read, complete the following self-test questions:

    1.What is the purpose of the first paragraph in the passage?

    A.To indicate the danger of facial re-enactment technology.

    B.To raise awareness of the new fake news scenario.

    C.To explain how the facial re-enactment technology works.

    D.To demonstrate the link between technology and politicians.

    Answer (1)

    2.Which of the following statements about Face2Face is true?

    A.It has caused widespread worry about the ultimate fake news scenario.

    B.It was developed by researchers at Massachusetts Institute of Technology.

    C.It has gone viral on social media, especially YouTube, since its release.

    D.It is widely used for generating realistic video of the president's pronouncements.

    Answer (2)

    3.What is MediFor according to the article?

    A.A new technology which can identify video forgery effectively.

    B.A research programme which develops image forgery detection techniques.

    C.A YouTube channel which alters videos of famous world leaders.

    D.An agency which is responsible for the development of emerging technologies.

    Answer (3)

    4.Which of the following methods cannot be used for spotting fake pictures?

    A.Feeding pictures into a reverse image search.

    B.Scrutinising pictures for unusual edges or disturbances in colour.

    C.Searching for inconsistent shadows in questionable pictures.

    D.Inserting another image or airbrushing something out.

    Answer (4)

    * * *

    (1) Answer: C.To explain how the facial re-enactment technology works.

    Explanation: In the first paragraph the author asks the reader to imagine forging Trump's image, which illustrates what the Face2Face technology does.

    (2) Answer: A.It has caused widespread worry about the ultimate fake news scenario.

    Explanation: People have begun to worry that this new technology could enable a perfected form of fake news.

    (3) Answer: B.A research programme which develops image forgery detection techniques.

    Explanation: MediFor is a five-year research programme aimed at producing systems that can quickly examine images and identify those that have been tampered with.

    (4) Answer: D.Inserting another image or airbrushing something out.

    Explanation: To spot a forged photo, we can currently use reverse image search, look for unusual edges and disturbed colours in the picture, or check the shadows for inconsistencies. Inserting another image or airbrushing something out is how a fake is made, not how it is detected.
