
GMIC Beijing 2017: Professor Hawking's Speech (7)

2017-04-30 14:53:57


But artificial intelligence could also mean the end of human civilisation, unless we learn how to avoid the risks. I have said before that the full development of AI could spell the end of the human race, for instance through the maximised use of intelligent autonomous weapons. Earlier this year, along with scientists from around the world, I supported the United Nations convention to ban nuclear weapons. Those negotiations began last week, and we are anxiously awaiting the outcome. At present, nine nuclear powers control roughly 14,000 nuclear weapons, any one of which could raze a city to the ground. Radioactive waste would contaminate farmland over wide areas, and the most terrible hazard is a nuclear winter, in which fire and smoke would trigger a global mini ice age. Such an outcome would cause the global food system to collapse and bring apocalyptic unrest, very likely killing most of humanity. As scientists, we bear a special responsibility for nuclear weapons, because it was scientists who invented them, and who then discovered that their effects are even more terrible than first imagined.


At this stage, I may have possibly frightened you all here today, with talk of doom. I apologize. But it is important that you, as attendees to today's conference, recognize the position you hold in influencing future research and development of today's technology. I believe that we should join together, to call for support of international treaties, or the signing of open letters presented to individual governments. Technology leaders and scientists are doing what they can, to obviate the rise of uncontrollable AI.


In October last year, I opened a new center in Cambridge, England, which will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. The Leverhulme Centre for the Future of Intelligence is a multi-disciplinary institute, dedicated to researching the future of intelligence, as crucial to the future of our civilisation and our species. We spend a great deal of time studying history, which, let's face it, is mostly the history of stupidity. So it's a welcome change that people are studying instead the future of intelligence. We are aware of the potential dangers, but I am at heart an optimist, and believe that the potential benefits of creating intelligence are huge. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by industrialisation.
