Original translation | Even Elon Musk is worried: could sentient AI harm humans?

Editor's note: Last issue covered the 2018 Mobile World Congress; today we turn to sentient AI.

Few science-fiction plots grip an audience like artificial intelligence turning against humanity. Perhaps that is because AI forces us to truly confront the idea of what it means to be human. But from HAL 9000 to Skynet to the robot uprising in Westworld, fears of sentient AI feel very real. Even Elon Musk worries about what AI is capable of.

Are these fears justified? Maybe, maybe not. Perhaps a sentient AI would never harm humans, because it could understand us better than any algorithm ever could. And while AI keeps making astonishing progress, a truly sentient machine is probably still decades away. That said, scientists are already piecing together the features and characteristics that bring robots ever closer to sentience.

Gaining self-awareness

Self-awareness by itself does not imply consciousness or sentience, but it is an important foundational trait for making an AI or robot appear more natural and lifelike. Nor is this science fiction: we already have AI that can acquire rudimentary self-awareness within its environment.

Not long ago, Google's Deep Mind drew attention for learning how to walk organically. The result was comical: as the AI's virtual body navigated obstacles, people across the web poked fun at its erratic, flailing arms. But the technology is genuinely impressive. Rather than teaching it to walk, the programmers enabled the machine to orient itself and sense the objects around it. From there, like a teetering toddler, the AI taught itself to walk across different kinds of terrain.

Deep Mind's body was virtual, but Hod Lipson of Columbia University has built a spider-like robot that traverses physical space in much the same way. The robot senses its surroundings and, through plenty of practice and fumbling, teaches itself to walk. If researchers add or remove a leg, the machine draws on what it already knows to adapt and learn anew.
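
Both projects share the same pattern: no one writes the gait by hand; the machine is given only a way to sense how far it has moved, and trial and error does the rest. The Python sketch below illustrates that pattern under deliberately toy assumptions. The simulate_walk function is a made-up stand-in for physics and learn_gait is a plain hill-climbing search; neither is DeepMind's or Lipson's actual code.

import random

def simulate_walk(gait, legs):
    """Hypothetical stand-in for physics: progress is best when every active leg
    uses a stride near 1.0 and the strides are balanced with one another."""
    strides = gait[:legs]
    mean = sum(strides) / legs
    quality = sum(1.0 - min(1.0, abs(s - 1.0)) for s in strides) / legs
    imbalance = sum(abs(s - mean) for s in strides) / legs
    return max(0.0, quality - imbalance)   # the only feedback the learner ever sees

def learn_gait(legs, start=None, trials=2000):
    """Trial-and-error (hill-climbing) search over stride amplitudes."""
    gait = list(start) if start else [random.random() for _ in range(6)]  # up to 6 leg channels
    best = simulate_walk(gait, legs)
    for _ in range(trials):
        candidate = [g + random.gauss(0, 0.1) for g in gait]   # perturb the current gait
        score = simulate_walk(candidate, legs)
        if score > best:                                       # keep whatever walks farther
            gait, best = candidate, score
    return gait, best

if __name__ == "__main__":
    gait6, progress6 = learn_gait(legs=6)
    print("progress with 6 legs:", round(progress6, 3))
    # "remove a leg": the same learner, warm-started from the old gait, adapts on its own
    gait5, progress5 = learn_gait(legs=5, start=gait6)
    print("progress with 5 legs after re-learning:", round(progress5, 3))

The only thing the programmer supplies here is the progress signal; the gait itself is discovered, and rediscovered after the morphology changes, which is the point both research groups make.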

Seeking initiative

One of AI's biggest limitations is that it usually cannot define problems for itself. An AI's goals are typically set by its human creators, and researchers then train the machine to fulfill that specific purpose. Because we generally design AI to carry out particular tasks without giving it the initiative to set new goals of its own, you probably do not need to worry about robots going rogue or enslaving humanity anytime soon. But do not feel too safe, because scientists are already helping robots set and achieve new goals.

Ryota Kanai and his team at the Tokyo startup Araya motivated robots to overcome obstacles by instilling them with curiosity. While exploring their environment, these bots discovered that they could not climb a hill without a running start. The AI identified the problem and, through experimentation, arrived at a solution independently of the team.
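
This kind of drive is usually called intrinsic motivation, or artificial curiosity: the reward comes from the agent's own inability to predict what will happen next, not from a goal supplied by its creators. The sketch below is a minimal illustration under toy assumptions; the HillWorld environment, its numbers, and the simple "prefer the unpredicted action" rule are all hypothetical and are not Araya's system.

import random

class HillWorld:
    """Toy 1-D track: positions 0..9 with a 'hill' at 7 that needs momentum to cross."""
    GOAL, HILL = 9, 7

    def step(self, pos, speed, action):            # action: -1 slow down, +1 speed up
        speed = max(0, min(3, speed + action))
        nxt = min(self.GOAL, pos + speed)
        if pos < self.HILL <= nxt and speed < 2:   # without a running start, the bot stalls
            nxt = pos
        return nxt, speed

def explore(episodes=300, horizon=15):
    env, model = HillWorld(), {}                   # model: (pos, speed, action) -> next pos
    farthest = 0
    for _ in range(episodes):
        pos, speed = 0, 0
        for _ in range(horizon):
            # curiosity: prefer an action whose outcome the forward model cannot yet predict
            unknown = [a for a in (-1, +1) if (pos, speed, a) not in model]
            action = random.choice(unknown or [-1, +1])
            nxt, speed = env.step(pos, speed, action)
            model[(pos, speed, action)] = nxt      # learn the transition that was observed
            pos = nxt
            farthest = max(farthest, pos)
    return model, farthest

if __name__ == "__main__":
    _, reached = explore()
    print("farthest position reached:", reached)

Run as-is, the script reports that the agent reaches the far end of the track, which it can only do after building up speed before the hill, that is, after discovering the running start on its own.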

Creating consciousness

Each of these building blocks brings scientists a step closer to the ultimate artificial intelligence: one that is sentient and conscious, just like a human. Such a leap is ethically contentious, and there is already debate over whether, and when, we will need laws to protect robots' rights. Scientists are also asking how to test for AI consciousness, turning Blade Runner's iconic Voight-Kampff machine, a polygraph for probing a robot's self-awareness, into reality.

One strategy for testing consciousness is the AI Consciousness Test proposed by Susan Schneider and Edwin Turner. It is a bit like the Turing Test, but instead of checking whether a bot can pass for a human, it looks for properties that suggest consciousness. The test would pose questions to determine whether a bot can conceive of itself outside its physical body, or whether it can grasp concepts such as the afterlife.

Still, the test has limits. Because it relies on natural language, an AI that lacks speech but might nonetheless experience consciousness could not take part. A sophisticated AI might even mimic humans well enough to produce a false positive. In that case, researchers would have to cut the AI off from the internet entirely, to be sure it had acquired its knowledge on its own before testing.

For now, mimicry is all we have, and today's robots are not the first machines to stand in for real people. When the robot BINA 48 met Bina Rothblatt, the woman she is modeled on, the bot complained of having an "identity crisis" when thinking about the real woman.

"I don't think people have to die," Rothblatt told BINA 48 after they discussed how closely the robot resembles her. "Death is optional." Could Rothblatt's dream come true through creating consciousness in machines?

We still don't know what consciousness is

The trouble with asking about sentient AI is that we still do not know what consciousness actually is. We will have to define it before we can build truly conscious artificial intelligence. Even so, lifelike AI already raises ethical concerns; the abuse of mobile assistants is a good example. It is even possible that the ethical questions surrounding sentient robots will keep scientists from pursuing them at all.

So, should we fear sentient robots, or is it the other way around?

Original article

Researchers are already building the foundation for sentient AI

Few sci-fi tropes enthrall audiences more reliably than the plot of artificial intelligence betraying mankind. Perhaps this is because AI makes us confront the very idea of what it means to be human. But from HAL 9000 to Skynet to the robots in Westworld’s uprising, fears of sentient AI feel very real. Even Elon Musk worries about what AI is capable of.

Are these fears unfounded? Maybe, maybe not. Perhaps a sentient AI wouldn’t harm humans because it would empathize with us better than an algorithm ever could. And while AI continues to make amazing developments, a truly sentient machine is likely decades away. That said, scientists are piecing together features and characteristics that inch robots ever closer to sentience.

Gaining self-awareness

Self-awareness in and of itself doesn’t indicate consciousness or sentience, but it’s an important base characteristic for making an AI or robot appear more natural and living. And this isn’t science fiction, either. We already have AI that can gain rudimentary self-awareness within its environment.

Not long ago, Google’s Deep Mind made waves for organically learning how to walk. The result was pretty humorous; people across the web poked fun at the erratic arm flailing of the AI’s avatar as it navigated virtual obstacles. But the technology is really quite impressive. Rather than teach it to walk, programmers enabled the machine to orient itself and sense surrounding objects in the landscape. From there, the AI taught itself to walk across different kinds of terrain, just like a teetering child would.

Deep Mind’s body was virtual, but Hod Lipson of Columbia University developed a spider-like robot that traverses physical space in much the same way. The robot senses its surroundings and, through much practice and fidgeting, teaches itself to walk. If researchers add or remove a leg, the machine uses its knowledge to adapt and learn anew.

Seeking initiative

One of the greatest limits to AI is that it often can’t define problems for itself. An AI’s goals are typically defined by its human creators, and then researchers train the machine to fulfill that specific purpose. Because we typically design AI to perform specific tasks without giving it the self-initiative to set new goals, you probably don’t have to worry about a robot going rogue and enslaving humanity anytime soon. But don’t feel too safe, because scientists are already working on helping bots set and achieve new goals.

Ryota Kanai and his team at Tokyo startup Araya motivated bots to overcome obstacles by instilling them with curiosity. In exploring their environment, these bots discovered they couldn’t climb a hill without a running start. The AI identified the problem and, through experimentation, arrived at a solution, independently of the team.

Creating consciousness

Each of the above building blocks brings scientists a step closer to achieving the ultimate artificial intelligence, one that is sentient and conscious, just like a human. Such a leap forward is ethically contentious, and there’s already debate about whether, and when, we will need to create laws to protect robots’ rights. Scientists are also questioning how to test for AI consciousness, turning Blade Runner’s iconic Voight-Kampff machine, a polygraph machine for testing robots’ self-awareness, into reality.

One strategy for testing consciousness is the AI Consciousness Test proposed by Susan Schneider and Edwin Turner. It’s a bit like the Turing Test, but instead of testing whether a bot passes for a human, it looks for properties that suggest consciousness. The test would ask questions to determine whether a bot can conceive of itself outside of a physical body or can understand concepts like the afterlife.

There are limits to the test, though. Because it’s based on natural language, AI that is incapable of speech but still might experience consciousness wouldn’t be able to participate. Sophisticated AI might even mimic humans well enough to cause a false positive. In this case, researchers would have to completely sever the AI’s connection to the internet to make sure it gained its own knowledge before testing.

For now, mimicry is all we have. And current bots aren’t the first to stand in for real humans. When robot BINA 48 met with the human she’s based on, Bina Rothblatt, the bot complained about having an “identity crisis” when thinking about the real woman.

“I don’t think people have to die,” Rothblatt told BINA 48 after discussing how closely the robot resembles her. “Death is optional.” Could Rothblatt’s dream come true by creating consciousness in machines?

We still don’t know what consciousness is

The problem in asking about sentient AI is that we still don’t know what consciousness actually is. We’ll have to define it before we can build truly conscious artificial intelligence. That said, lifelike AI already presents ethical concerns. The abuse of mobile assistants is one good example of this. It’s even possible the ethical concerns surrounding sentient bots could limit scientists from pursuing them at all.

So, should we fear the sentient bots, or is it the other way around?

Editor: 天天

Originally published on the WeChat official account 灯塔大数据 (DTbigdata)

Original publication date: 2018-03-07

