Can robots learn to be bad?

Norman is an algorithm trained to understand pictures but, like its namesake Hitchcock's Norman Bates, it does not have an optimistic view of the world.

When a "normal" algorithm generated by artificial intelligence is asked what it sees in an abstract shape it chooses something cheery: "A group of birds sitting on top of a tree branch."

Norman sees a man being electrocuted.

And where "normal" AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.

The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from "the dark corners of the net" would do to its world view.

The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit.

Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.

These abstract images are traditionally used by psychologists to help assess the state of a patient's mind, in particular whether they perceive the world in a negative or positive light.

Norman's view was unremittingly bleak - it saw dead bodies, blood and destruction in every image.

Alongside Norman, another AI was trained on more normal images of cats, birds and people.

It saw far more cheerful images in the same abstract blots.

The fact that Norman's responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT's Media Lab which developed Norman.

"Data matters more than the algorithm.

"It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."

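The point can be illustrated with a toy sketch. This is not the MIT team's actual model, which was a deep image-captioning network; the captions and the "inkblot" below are made-up stand-ins. The same trivial describe-the-inkblot algorithm, fed two different caption sets, produces very different descriptions:

```python
def describe(inkblot_words, training_captions):
    # Toy "image captioning": return the training caption that shares the
    # most words with an ambiguous description of the inkblot. The
    # algorithm is identical for both models; only the data differs.
    def overlap(caption):
        return len(set(caption.split()) & set(inkblot_words))
    return max(training_captions, key=overlap)

# Hypothetical training captions standing in for the two datasets.
normal_captions = [
    "a group of birds sitting on top of a tree branch",
    "a couple of people standing next to each other",
]
dark_captions = [
    "a man being electrocuted",
    "a man jumping from a window",
]

# An ambiguous "inkblot", described only by a few neutral words.
inkblot = ["a", "figure", "standing", "near", "a", "window"]

print(describe(inkblot, normal_captions))  # the cheerful interpretation
print(describe(inkblot, dark_captions))    # the bleak interpretation
```

Swap the training captions and the same code "sees" something else in the same blot, which is the experiment's point in miniature: identical algorithm, different data, different worldview.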
Artificial intelligence is all around us these days - Google recently showed off AI making a phone call with a voice virtually indistinguishable from a human one, while fellow Alphabet firm DeepMind has made algorithms that can teach themselves to play complex games.

And AI is already being deployed across a wide variety of industries, from personal digital assistants and email filtering to search, fraud prevention, voice and facial recognition, and content classification.

It can generate news, create new levels in video games, act as a customer service agent, analyse financial and medical reports and offer insights into how data centres can save energy.

But if the experiment with Norman proves anything it is that AI trained on bad data can itself turn bad.

  • Original link: https://kuaibao.qq.com/s/20180602G0US4900?refer=cp_1026
