
Original Translation | Even Elon Musk Is Worried: Could Sentient AI Harm Humans?

By 灯塔大数据
Published 2018-04-03 15:47:34

Introduction: Last issue covered the 2018 Mobile World Congress; this time we look at sentient AI.

Few sci-fi tropes enthrall audiences more reliably than the plot of artificial intelligence betraying mankind. Perhaps this is because AI makes us confront the very idea of what it means to be human. But from HAL 9000 to Skynet to the robots of Westworld's uprising, fears of sentient AI feel very real. Even Elon Musk worries about what AI is capable of.

Are these fears unfounded? Maybe, maybe not. Perhaps a sentient AI wouldn't harm humans, because it would empathize with us better than an algorithm ever could. And while AI continues to make amazing advances, a truly sentient machine is likely decades away. That said, scientists are piecing together the features and characteristics that inch robots ever closer to sentience.

Gaining self-awareness

Self-awareness in and of itself doesn't indicate consciousness or sentience, but it's an important base characteristic for making an AI or robot appear more natural and lifelike. And this isn't science fiction, either: we already have AI that can gain rudimentary self-awareness of its environment.

Not long ago, Google's DeepMind made waves for organically learning how to walk. The result was pretty humorous: people across the web poked fun at the erratic arm-flailing of the AI's avatar as it navigated virtual obstacles. But the technology is genuinely impressive. Rather than teach the machine to walk, the programmers enabled it to orient itself and sense surrounding objects in the landscape. From there, the AI taught itself to walk across different kinds of terrain, just like a teetering child would.
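To make the idea concrete, here is a minimal sketch of learning to walk purely from a progress signal, in the spirit of (but not identical to) DeepMind's setup. The toy "walker" dynamics, the linear policy, and the random-search loop below are all invented for illustration:

```python
# A minimal sketch (not DeepMind's actual code): trial-and-error locomotion
# where the only feedback is how far the agent got. Everything here is a
# toy stand-in invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def rollout(w, steps=100):
    """One episode of a toy walker.

    State: [lean, stride phase]. Action: a single torque value.
    Distance accrues only while the walker stays roughly upright,
    so the learner must discover balance before speed.
    """
    lean, phase, distance = 0.0, 0.0, 0.0
    for _ in range(steps):
        state = np.array([lean, np.sin(phase)])
        torque = np.tanh(w @ state)                 # linear policy, squashed
        lean += 0.1 * torque + rng.normal(0, 0.02)  # torque shifts balance
        phase += 0.3
        if abs(lean) < 0.5:                         # upright: stride counts
            distance += 0.05 * (1 + np.sin(phase))
        else:                                       # fell over: episode ends
            break
    return distance

# Trial and error: keep weight perturbations that walk farther.
w = np.zeros(2)
best = rollout(w)
for _ in range(300):
    candidate = w + rng.normal(0, 0.1, size=2)
    score = rollout(candidate)
    if score > best:
        w, best = candidate, score

print(f"distance after 300 trials: {best:.2f}")
```

The only supervision is "how far did you get"; nothing tells the agent *how* to move, which is the sense in which it teaches itself.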

DeepMind's body was virtual, but Hod Lipson of Columbia University developed a spider-like robot that traverses physical space in much the same way. The robot senses its surroundings and, through much practice and fidgeting, teaches itself to walk. If researchers add or remove a leg, the machine uses what it has already learned to adapt and learn anew.
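The relearning step can be sketched too. Below, a policy tuned for one body is reused as the starting point when the body changes; the "robot" is a made-up fitness function, not Lipson's method, but it shows why a warm start tends to recover in fewer trials than starting over:

```python
# A minimal sketch of adapt-and-relearn (not Lipson's actual method):
# when the body changes, search resumes from the old policy instead of
# from scratch. The "walking score" is a toy function invented here.
import numpy as np

rng = np.random.default_rng(2)

def walking_score(w, n_legs):
    """Toy stand-in for 'how far the robot walks': the best weights
    depend on the body, so removing a leg shifts the optimum."""
    target = np.linspace(0.5, 1.5, n_legs)      # body-specific optimum
    return -np.sum((w[:n_legs] - target) ** 2)

def random_search(w, n_legs, max_trials=20000):
    """Hill-climb until the robot walks 'well enough' (score near 0)."""
    best, trials = walking_score(w, n_legs), 0
    while best < -0.05 and trials < max_trials:
        cand = w + rng.normal(0, 0.1, size=w.size)
        score = walking_score(cand, n_legs)
        if score > best:
            w, best = cand, score
        trials += 1
    return w, trials

w0 = np.zeros(4)
w4, _ = random_search(w0, n_legs=4)              # learn with four legs
_, warm = random_search(w4, n_legs=3)            # relearn after losing one
_, cold = random_search(np.zeros(4), n_legs=3)   # versus starting over
print(f"trials to walk again: warm start {warm} vs from scratch {cold}")
```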

Seeking initiative

One of the greatest limits of AI is that it often can't define problems for itself. An AI's goals are typically set by its human creators, and researchers then train the machine to fulfill that specific purpose. Because we typically design AI to perform specific tasks without giving it the initiative to set new goals, you probably don't need to worry about a robot going rogue and enslaving humanity anytime soon. But don't feel too safe, because scientists are already working on helping bots set and achieve new goals.

Ryota Kanai and his team at the Tokyo startup Araya motivated bots to overcome obstacles by instilling them with curiosity. While exploring their environment, these bots discovered they couldn't climb a hill without a running start. The AI identified the problem and, through experimentation, arrived at a solution independently of the team.
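"Instilling curiosity" is usually implemented as an intrinsic reward for prediction error: the agent keeps a model of what its actions do and is drawn toward states where that model is wrong. The sketch below illustrates that general idea with a made-up one-dimensional world; it is not Araya's actual system:

```python
# A minimal sketch of curiosity as intrinsic reward (the general idea,
# not Araya's system): reward equals the error of the agent's own
# forward model, so unfamiliar states look attractive.
import numpy as np

rng = np.random.default_rng(1)

def true_dynamics(state, action):
    """Hidden environment: nonlinear where the agent's model is not."""
    return state + np.sin(3 * state) * action

# The agent's learned forward model: next = state + w * action.
# Being linear, it stays wrong in the curvy regions of the world.
w = 0.0

state = 0.0
surprise_log = []
for _ in range(200):
    action = rng.choice([-0.1, 0.1])
    predicted = state + w * action
    actual = true_dynamics(state, action)
    surprise = (actual - predicted) ** 2        # intrinsic reward: model error
    surprise_log.append((state, surprise))
    w += 0.5 * (actual - predicted) * action    # crude online fit of the model
    state = actual

# A curious agent would steer toward states where surprise stays high.
state_max, surprise_max = max(surprise_log, key=lambda pair: pair[1])
print(f"most surprising state: {state_max:.2f} (error {surprise_max:.4f})")
```

In Kanai's hill example, pressure of this kind is what keeps the bot experimenting until it discovers the running start.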

Creating consciousness

Each of the building blocks above brings scientists a step closer to the ultimate artificial intelligence, one that is sentient and conscious just like a human. Such a leap forward is ethically contentious, and there is already debate about whether, and when, we will need laws to protect robots' rights. Scientists are also asking how to test for AI consciousness, turning Blade Runner's iconic Voight-Kampff machine, a polygraph-style device for testing a robot's self-awareness, into reality.

One strategy for testing consciousness is the AI Consciousness Test proposed by Susan Schneider and Edwin Turner. It's a bit like the Turing Test, but instead of checking whether a bot passes for human, it looks for properties that suggest consciousness. The test would ask questions to determine whether a bot can conceive of itself outside of a physical body, or whether it can understand concepts like the afterlife.

There are limits to the test, though. Because it's based on natural language, an AI that is incapable of speech but might still experience consciousness couldn't participate. A sophisticated AI might even mimic humans well enough to cause a false positive. In that case, researchers would have to completely sever the AI's connection to the internet to make sure it acquired its knowledge on its own before testing.

For now, mimicry is all we have, and current bots aren't the first machines to stand in for real humans. When the robot BINA 48 met the human she's based on, Bina Rothblatt, the bot complained of having an "identity crisis" when thinking about the real woman.

"I don't think people have to die," Rothblatt told BINA 48 after discussing how closely the robot resembles her. "Death is optional." Could Rothblatt's dream come true through creating consciousness in machines?

We still don't know what consciousness is

The problem with asking about sentient AI is that we still don't know what consciousness actually is, and we'll have to define it before we can build truly conscious artificial intelligence. Even so, lifelike AI already raises ethical concerns; the abuse of mobile assistants is one good example. It's even possible that the ethical concerns surrounding sentient bots could keep scientists from pursuing them at all.

So, should we fear sentient bots, or is it the other way around?

Original article: Researchers are already building the foundation for sentient AI

Editor: 天天

Originally published 2018-03-07 on the 灯塔大数据 WeChat official account.
