
GMIC Beijing 2017: Professor Hawking's Speech (5)

2017-04-30 14:53:57


As these areas develop, a virtuous cycle forms, running from laboratory research to economically valuable technology. Even small improvements in performance bring enormous economic benefits, which in turn encourage longer-term, more ambitious investment and research. It is now widely agreed that AI research is progressing steadily and that its impact on society is likely to grow. The potential benefits are huge: everything that civilisation has produced is a product of human intelligence, and we cannot predict what we might achieve when that intelligence is magnified by the tools AI provides. But, as I have said, eradicating disease and poverty is not entirely impossible; given AI's enormous potential, researching how to reap its benefits while avoiding its risks is very important.


Artificial intelligence research is now progressing rapidly. And this research can be discussed as short-term and long-term. Some short-term concerns relate to autonomous vehicles, from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other concerns relate to lethal intelligent autonomous weapons. Should they be banned? If so, how should autonomy be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned? Other issues include privacy concerns, as AI becomes increasingly able to interpret large surveillance datasets, and how best to manage the economic impact of jobs displaced by AI.


Long-term concerns comprise primarily the potential loss of control of AI systems, via the rise of super-intelligences that do not act in accordance with human wishes, and the threat such powerful systems would pose to humanity. Are such dystopic outcomes possible? If so, how might these situations arise? What kind of investments in research should be made, to better understand and address the possibility of the rise of a dangerous super-intelligence, or the occurrence of an intelligence explosion?
