The latest report on the potential malicious uses of artificial intelligence reads like an advertisement for the next season of the dystopian TV series Black Mirror.
Drones using facial recognition technology to hunt down and kill victims. Information being manipulated to distort the social media feeds of targeted individuals. Cleaning robots being hacked to bomb VIPs. The potentially harmful uses of AI are as vast as the human imagination.
One of the big questions of our age is: how can we maximise the undoubted benefits of AI while limiting its downsides? It is a tough challenge. All technologies are dual-use, and AI particularly so, given that it can significantly increase the scale and potency of malicious acts while lowering their costs.
The report, written by 26 researchers from several organisations including OpenAI, Oxford and Cambridge universities, and the Electronic Frontier Foundation, performs a valuable, if scary, service in flagging the threats from the abuse of powerful technology by rogue states, criminals and terrorists. Where it is less compelling is in coming up with possible solutions.
Much of the public concern about AI focuses on the threat of an emergent superintelligence and the mass extinction of our species. There is no doubt that the issue of how to “control” artificial general intelligence, as it is known, is a fascinating and worthwhile debate. But in the words of one AI expert, it is probably “a second half of the 21st century problem”.
The latest report highlights why we should already be worrying about the abuse of relatively narrow AI. Human evil, incompetence and poor design will remain a bigger threat for the foreseeable future than some omnipotent and omniscient Terminator-style Skynet.
AI academics have led a commendable campaign to highlight the dangers of so-called lethal autonomous weapons systems. The United Nations is now trying to turn that initiative into workable international protocols.
Some interested philanthropists, including Elon Musk and Sam Altman, have also sunk money into research institutes focusing on AI safety, including one that co-wrote the report. Normally, researchers who call for more money to be spent on research should be treated with some scepticism. But there are estimated to be just 100 researchers in the western world grappling with the issue. That seems far too few, given the scale of the challenge.
Governments need to deepen their understanding of this area. In the US, the creation of a federal robotics commission to develop relevant governmental expertise would be a good idea. The British government is sensibly expanding the remit of the Alan Turing Institute to encompass AI.
Some tech companies have already engaged the public on ethical issues concerning AI, and the rest should be encouraged to do so. Arguably, they should also be held liable for the misuse of their AI-enabled products in the same way that pharmaceutical firms are responsible for the harmful side-effects of their drugs.
Companies should be deterred from rushing AI-enabled products to market before they have been adequately tested. Just as the potential flaws of cyber security systems are sometimes explored by co-operative hackers, so AI services should be stress-tested by other expert users before their release.
Ultimately, we should be realistic that only so much can ever be done to limit the abuse of AI. Rogue regimes will inevitably use it for bad ends. We cannot uninvent scientific discovery. But we should, at least, do everything possible to restrain its most immediate and obvious downsides.