
Without Humans, Artificial Intelligence Is Still Pretty Stupid

Christopher Mims

If you want to understand the limitations of the algorithms that control what we see and hear, take a look at Facebook Inc.'s experimental remedy for revenge porn.

To stop an ex from sharing nude pictures of you, you have to share nudes with Facebook itself. Not uncomfortable enough? Facebook also says a real live human will have to check them out.

Without that human review, it would be too easy to exploit Facebook's anti-revenge-porn service to take down legitimate images. Artificial intelligence, it turns out, has a hard time telling the difference between your naked body and a nude by Titian.

The internet giants that tout their AI bona fides have tried to make their algorithms as human-free as possible, and that's been a problem. It has become increasingly apparent over the past year that building systems without humans "in the loop" can lead to disastrous outcomes, as actual human brains figure out how to exploit them.

Whether it's winning at games like Go or keeping watch for Russian influence operations, the best AI-powered systems require humans to play an active role in their creation, tending and operation.

Almost every big company using AI to automate processes has a need for humans as a part of that AI, says Panos Ipeirotis, a professor at New York University's Stern School of Business.

AI's constant hunger for human brains is based on our increasing demand for services. The more we ask for, the less likely a computer algorithm can go it alone—while the combination can be more effective and efficient. For example, bank workers who previously read every email in search of fraud now make better use of their time investigating emails the AI flags as suspicious, says Dr. Ipeirotis.
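The triage pattern Dr. Ipeirotis describes can be sketched in a few lines. This is a minimal illustration, not any bank's actual system: the scoring function is a made-up stand-in for a trained fraud model, and the keyword list and threshold are invented for the example.

```python
# Sketch of AI-assisted triage: a model scores each email, and only those
# above a threshold are routed to human reviewers; the rest are auto-cleared.
def triage(emails, score, threshold=0.8):
    """Split emails into a human-review queue and an auto-cleared pile."""
    review_queue, cleared = [], []
    for email in emails:
        (review_queue if score(email) >= threshold else cleared).append(email)
    return review_queue, cleared

# Toy scoring function standing in for a trained fraud classifier.
suspicious_words = {"wire", "urgent", "password"}

def toy_score(email):
    words = set(email.lower().split())
    return len(words & suspicious_words) / len(suspicious_words)

queue, cleared = triage(
    ["urgent wire the password now", "lunch at noon?"],
    toy_score,
    threshold=0.3,
)
```

The point of the pattern is economic: humans never see the bulk of routine traffic, only the small fraction the model is unsure or suspicious about.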

A machine-learning-based AI system is a piece of software that learns, almost like a primitive insect. That means that it can't be programmed—it must be taught.

To teach them, humans feed these systems examples, and they need truckloads. To build an AI filter to identify extremist content on YouTube, humans at Google manually reviewed over a million videos to flag qualifying examples, says a Google spokeswoman.
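The "taught, not programmed" distinction can be made concrete with a deliberately tiny classifier. Nothing below is a real production technique; the labels and training set are invented, and the model is just word counting, but it shows the shape of the idea: nobody writes rules for what counts as spam, the behavior comes entirely from labeled examples.

```python
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs. Learns per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the label whose training examples share the most words with text."""
    words = text.lower().split()
    return max(counts, key=lambda label: sum(counts[label][w] for w in words))

model = train([
    ("free prize click now", "spam"),
    ("win a free prize", "spam"),
    ("meeting moved to friday", "ham"),
    ("see you at the meeting", "ham"),
])
verdict = predict(model, "claim your free prize")
```

Change the examples and the behavior changes with them, with no code edited; that is why training sets need "truckloads" of examples to cover the variety the system will meet in the wild.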

Even when an AI has been trained, its judgment is never perfect. Human oversight is still needed, especially with material in which context matters, such as those extremist YouTube posts. While AI can take down 83% before a single human flags them, says Google, the remaining 17% needs humans. But this serves as further training: This data can then be fed back into the algorithm to improve it.
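The feedback cycle described above can be sketched as one review pass. This is a hypothetical outline, not Google's pipeline: the flagging and review functions are toy stand-ins, and in practice the new labels would go into periodic retraining.

```python
# One pass of a human-in-the-loop moderation cycle: the model handles what it
# can, humans label what it misses, and those labels become new training data.
def review_pass(model_flags, human_review, items):
    """Return (auto_flagged, new_training_data) after one review cycle."""
    auto_flagged, new_training_data = [], []
    for item in items:
        if model_flags(item):
            auto_flagged.append(item)                # what the AI catches alone
        else:
            label = human_review(item)               # humans handle the rest...
            new_training_data.append((item, label))  # ...and it feeds the model
    return auto_flagged, new_training_data

# Toy stand-ins for the trained filter and the human reviewer.
flags = lambda text: "propaganda" in text
review = lambda text: "remove" if "violence" in text else "keep"

auto, new_data = review_pass(
    flags, review, ["propaganda clip", "graphic violence", "cooking show"]
)
```

Each cycle shrinks the share of items reaching humans, which is why the article can describe the 17% the AI misses as "further training" rather than pure overhead.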

There are many cases when AI can barely perform a task at all, as in the case of Facebook's nude pic filter.

Systems at risk of being gamed by fraudsters also require constant human attention, says Dr. Ipeirotis. AIs, once trained, are inexhaustible, but this is a curse as much as a blessing: People who outsmart the algorithm can multiply their results a millionfold.

Humans, on the other hand, are slower than AI, but can identify patterns based on very little information. Any time a system must deal with bad actors—like when an entity posing as an American on Twitter is actually a Russian agent—there is no replacement for live staffers.

(Translated by Li Fengqin from a Nov. 12 article on The Wall Street Journal's website.)

Original link: http://kuaibao.qq.com/s/20171214A0I47S00?refer=cp_1026