
The Age of Artificial Intelligence: How to Coexist Safely with AI

Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That's yesterday's news; what's next?

True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that are capable of human-level performance on the full range of tasks that we ourselves can tackle.

Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines?

On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more we ask it to do for us, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, who didn’t really want his breakfast to turn to gold as he put it to his lips.
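
To make the specification problem concrete, here is a toy sketch of our own (not from the article, and every name in it is hypothetical): an optimizer that pursues the objective it was literally given rather than the one we meant.

```python
# The 'King Midas' problem in miniature: the optimizer maximizes the
# stated objective, with no notion of what the asker actually wanted.

def midas_objective(world):
    """Stated goal: reward equals the amount of gold."""
    return sum(1 for material in world.values() if material == "gold")

def naive_optimizer(world, objective):
    """Greedily rewrite the world wherever doing so raises the objective."""
    for thing in list(world):
        candidate = dict(world, **{thing: "gold"})
        if objective(candidate) > objective(world):
            world = candidate
    return world

world = {"crown": "iron", "breakfast": "bread", "daughter": "flesh"}
print(naive_optimizer(world, midas_objective))
# {'crown': 'gold', 'breakfast': 'gold', 'daughter': 'gold'}
# The objective never said what must NOT be turned to gold.
```

Nothing in the sketch is clever, and that is the point: the disaster comes not from the optimizer's power but from the gap between the goal we wrote down and the goal we had in mind.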

So we need to make sure that powerful AI machines are ‘human-friendly’ – that they have goals reliably aligned with our own values. One thing that makes this task difficult is that by the standards we want the machines to aim for, we ourselves do rather poorly. Humans are far from reliably human-friendly. We do many terrible things to each other and to many other sentient creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.

For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll have the smarts for the job. If there are routes to the uplands, they’ll be better than us at finding them, and steering us in the right direction. They might be our guides to a much better world.

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity and precision that we can be confident that they will find it – whatever ‘it’ actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about the ideals ourselves, and different communities might have different views.
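
As a crude sketch of our own (the policies, values, and weights below are all invented for illustration), even a perfectly explicit specification can pick out different answers once different communities weight the underlying values differently:

```python
# Two communities score the same candidate policies against the same
# named values, but with different weights -- and disagree on the winner.

policies = {
    "policy_x": {"liberty": 0.9, "equality": 0.3},
    "policy_y": {"liberty": 0.4, "equality": 0.8},
}
communities = {
    "community_a": {"liberty": 0.7, "equality": 0.3},
    "community_b": {"liberty": 0.3, "equality": 0.7},
}

def score(policy, weights):
    """Weighted sum of how well a policy realises each value."""
    return sum(policy[value] * w for value, w in weights.items())

for name, weights in communities.items():
    best = max(policies, key=lambda p: score(policies[p], weights))
    print(name, "prefers", best)
# community_a prefers policy_x
# community_b prefers policy_y
```

Clarity and precision alone do not settle whose weights the machine should use; that question is inherited straight from our own disagreements.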

The ‘destination’ problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.

"目的地" 的问题是, 在把自己掌握在这些道德指导和看门人手中时, 我们可能牺牲了自己的自主权——这是使我们成为人类的一个重要部分。

Just to focus on one aspect of these difficulties, we are deeply tribal creatures. We find it very easy to ignore the suffering of strangers, and even to contribute to it, at least indirectly. For our own sakes, we should hope that AI will do better. It is not just that we might find ourselves at the mercy of some other tribe’s AI, but that we could not trust our own, if we had taught it that not all suffering matters. This means that as tribal and morally fallible creatures, we need to point the machines in the direction of something better. How do we do that? That’s the getting started problem.

As for the destination problem, suppose that we succeed. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own tribes, for example.

Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to keep slaves, or to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical overlords – sanctimonious silicon curtailing our options? They might be so good at doing it that we don’t notice the fences; but is this the future we want, a life in a well-curated moral zoo?

These issues might seem far-fetched, but they are already on our doorsteps. Imagine we want an AI to handle resource allocation decisions in our health system, for example. It might do so much more fairly and efficiently than humans can manage, with benefits for patients and taxpayers. But we’d need to specify its goals correctly (e.g. to avoid discriminatory practices), and we’d be depriving some humans (e.g. senior doctors) of some of the discretion they presently enjoy. So we already face the getting started and destination problems. And they are only going to get harder.
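
A deliberately crude sketch of our own (invented data and thresholds throughout, not anything the article proposes) shows how both problems surface at once: the benefit measure and the anti-discrimination constraint must be encoded as explicit parameters before the machine can allocate anything.

```python
from itertools import combinations

# Hypothetical data: (patient, expected benefit of treatment, group).
patients = [
    ("A", 9.0, "g1"), ("B", 8.5, "g1"), ("C", 8.0, "g1"),
    ("D", 6.0, "g2"), ("E", 5.5, "g2"), ("F", 5.0, "g2"),
]

SLOTS = 3    # treatment slots available
MAX_GAP = 1  # allowed difference in slots between groups -- a value judgement

def fair(selection):
    """Crude parity constraint on the protected attribute."""
    n1 = sum(1 for _, _, group in selection if group == "g1")
    return abs(n1 - (len(selection) - n1)) <= MAX_GAP

best = max(
    (s for s in combinations(patients, SLOTS) if fair(s)),
    key=lambda s: sum(benefit for _, benefit, _ in s),
)
print([name for name, _, _ in best])
# ['A', 'B', 'D'] -- not simply the three highest-benefit patients
```

Choosing MAX_GAP is itself a moral decision, made before the machine runs: set it and the senior doctors lose that bit of discretion; omit it and the optimizer happily picks the top three from a single group.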

This isn’t the first time that a powerful new technology has had moral implications. Speaking about the dangers of thermonuclear weapons in 1954, Bertrand Russell argued that to avoid wiping ourselves out “we have to learn to think in a new way”. He urged his listener to set aside tribal allegiances and “consider yourself only as a member of a biological species... whose disappearance none of us can desire.”

We have survived the nuclear risk so far, but now we have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if so it will require the same cooperative spirit, the same willingness to set aside tribalism, that Russell had in mind.

But that’s where the parallel stops. Avoiding nuclear war means business as usual. Getting the long-term future of life with AI right means a very different world. Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. That means a radical end to human exceptionalism. All the more reason to think about the destination now, and to be careful about what we wish for.

Professor Huw Price and Dr Karina Vold are at the Faculty of Philosophy and the Leverhulme Centre for the Future of Intelligence, where they work on 'Agents and persons'. This theme explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person.

This translation is an original piece of the 牛津剑桥校友会 (Oxbridge Alumni Association) WeChat public account, and the translation copyright belongs to the association. To repost, please contact us and credit the source

(WeChat name: 牛津剑桥校友会; WeChat ID: Oxbridge-alumini; reposted here with permission). Thank you for your interest and support.

▲ Original text and images from the University of Cambridge website

▲ Translated and edited by: Snow

  • Original link: http://kuaibao.qq.com/s/20180421G1E8FH00?refer=cp_1026
