The dawn of artificial intelligence

Original Text

I. Surveillance and dislocations are not, though, what worries Messrs Hawking, Musk and Gates, or what inspires a phalanx of futuristic AI films that Hollywood has recently unleashed onto cinema screens. Their concern is altogether more distant and more apocalyptic: the threat of autonomous machines with superhuman cognitive capacity and interests that conflict with those of Homo sapiens.

II. Such artificially intelligent beings are still a very long way off; indeed, it may never be possible to create them. Despite a century of poking and prodding at the brain, psychologists, neurologists, sociologists and philosophers are still a long way from an understanding of how a mind might be made—or what one is. And the business case for even limited intelligence of the general sort—the sort that has interests and autonomy—is far from clear. A car that drives itself better than its owner sounds like a boon; a car with its own ideas about where to go, less so.

III. But even if the prospect of what Mr Hawking calls “full” AI is still distant, it is prudent for societies to plan for how to cope. That is easier than it seems, not least because humans have been creating autonomous entities with superhuman capacities and unaligned interests for some time. Government bureaucracies, markets and armies: all can do things which unaided, unorganized humans cannot. All need autonomy to function, all can take on a life of their own and all can do great harm if not set up in a just manner and governed by laws and regulations.

IV. These parallels should comfort the fearful; they also suggest concrete ways for societies to develop AI safely. Just as armies need civilian oversight, markets are regulated and bureaucracies must be transparent and accountable, so AI systems must be open to scrutiny. Because systems designers cannot foresee every set of circumstances, there must also be an off-switch. These constraints can be put in place without compromising progress. From the nuclear bomb to traffic rules, mankind has used technical ingenuity and legal strictures to constrain other powerful innovations.

V. The spectre of eventually creating an autonomous non-human intelligence is so extraordinary that it risks overshadowing the debate. Yes, there are perils. But they should not obscure the huge benefits from the dawn of AI.

Vocabulary and Phrases

1*. surveillance [sə'veɪl(ə)ns] n. surveillance; close observation or monitoring

2*. dislocation [,dɪslə(ʊ)'keɪʃ(ə)n] n. displacement; disruption; (medicine) dislocation of a joint

3*. phalanx ['fælæŋks] n. phalanx; a compact, massed body or group

4*. apocalyptic [ə'pɑkə'lɪptɪk] adj. apocalyptic; foretelling disaster or the end of the world

5. cognitive ['kɒɡnɪtɪv] adj. cognitive; relating to cognition

6. Homo sapiens (the scientific name for modern humans)

7. neurologist [,njʊə'rɒlədʒɪst] n. neurologist

8. unaligned [,ʌnə'laɪnd] adj. non-aligned; not in agreement

9. bureaucracy [,bjʊ(ə)'rɒkrəsɪ] n. bureaucracy; government administration

10. scrutiny ['skruːtɪnɪ] n. scrutiny; close examination

11. off-switch n. a switch for shutting something down

12. compromising ['kɒmprəmaɪzɪŋ] v. (from compromise) undermining; impairing

13. ingenuity [,ɪndʒɪ'njuːɪtɪ] n. ingenuity; inventiveness

14*. stricture ['strɪktʃə] n. restriction; constraint

15*. spectre ['spektə] n. spectre; ghost; a haunting fear

16*. peril ['perɪl] n. peril; danger

Translation and Commentary

I. Surveillance and dislocations are not, though, what worries Messrs Hawking, Musk and Gates, or what inspires a phalanx of futuristic AI films that Hollywood has recently unleashed onto cinema screens. Their concern is altogether more distant and more apocalyptic: the threat of autonomous machines with superhuman cognitive capacity and interests that conflict with those of Homo sapiens.

Translation: 霍金、马斯克和盖茨担心的并非监视和错位,好莱坞最近上映的一系列关于未来人工智能的电影也并非由此找到的灵感。他们有更加长远和重大的担忧:具有超出人类认知能力而利益与人类相冲突的自动化机器会给人类带来巨大的威胁。

II. Such artificially intelligent beings are still a very long way off; indeed, it may never be possible to create them. Despite a century of poking and prodding at the brain, psychologists, neurologists, sociologists and philosophers are still a long way from an understanding of how a mind might be made—or what one is. And the business case for even limited intelligence of the general sort—the sort that has interests and autonomy—is far from clear. A car that drives itself better than its owner sounds like a boon; a car with its own ideas about where to go, less so.

Translation: 这种人工智能距离我们还很遥远。事实上,这种人工智能有可能永远无法创造出来。尽管对大脑的研究已经持续了一个世纪,心理学家、神经学家、社会学家和哲学家还远远无法了解思想是如何产生的,或者说思想是什么。即便对于一般类型的有限的智能,即有兴趣和自主性的智能类型,其运作情况也很不明确。一辆车比它的车主驾驶技术更好,听起来是好事,但是一辆车对去哪儿有自己的想法,那可就不是好事了。

III. But even if the prospect of what Mr Hawking calls “full” AI is still distant, it is prudent for societies to plan for how to cope. That is easier than it seems, not least because humans have been creating autonomous entities with superhuman capacities and unaligned interests for some time. Government bureaucracies, markets and armies: all can do things which unaided, unorganized humans cannot. All need autonomy to function, all can take on a life of their own and all can do great harm if not set up in a just manner and governed by laws and regulations.

Translation: 即使被霍金称为“全面”人工智能化的前景还很遥远,社会仍然需要谨慎对待,计划好应对方案。这其实不难,人类创造具有超人能力并且利益不一致的自主实体已经有一段时间了。政府机关、市场和军队,都可以做到孤立无组织的个人无法做到的事,都需要独立自主的运作,都可以主宰自己的生活,而如果不是以正当的方式建立并受法律法规管控,都会造成巨大的损失。

IV. These parallels should comfort the fearful; they also suggest concrete ways for societies to develop AI safely. Just as armies need civilian oversight, markets are regulated and bureaucracies must be transparent and accountable, so AI systems must be open to scrutiny. Because systems designers cannot foresee every set of circumstances, there must also be an off-switch. These constraints can be put in place without compromising progress. From the nuclear bomb to traffic rules, mankind has used technical ingenuity and legal strictures to constrain other powerful innovations.

Translation: 这些类似产物应该能够宽慰害怕人工智能的人,也为社会安全地发展人工智能指出具体路径。正如军队需要人民监督,市场需要规范,政府部门要透明负责一样,人工智能系统也要接受公开监管。由于系统设计者无法预测所有情况,所以必须还要设立停机开关。这些限制可以在不妨碍人工智能发展的前提下进行。无论是核弹还是交通规则,人类都已经运用过科技创新和法律约束来限制一些强大的发明。

V. The spectre of eventually creating an autonomous non-human intelligence is so extraordinary that it risks overshadowing the debate. Yes, there are perils. But they should not obscure the huge benefits from the dawn of AI.

Translation: 人们始终担心最终创造出自主的非人类智能,这种担忧如此强烈,几乎盖过了争论。是的,人工智能具有风险。但是这不应掩盖人工智能技术发展初期带来的巨大利益。

Source: http://kuaibao.qq.com/s/20171211G0W1AG00?refer=cp_1026
