
Does artificial intelligence really need to be like a human?

Artificial Intelligence doesn’t always need to be more human

There’s no shortage of ethical, moral, and even legal debates raging right now over artificial intelligence’s mimicry of humanity. As technology advances, companies continue to push the boundaries with virtual assistants and conversational AI, striving in most cases to more closely approximate real-life person-to-person interactions. The implication is that “more human” is better.


But that’s not necessarily the case.


AI doesn’t need to be more human to serve human needs. It’s time for companies to stop obsessing over how closely their AI approximates real people and start focusing on the real strengths that this transformative technology can bring to consumers, businesses, and society.


Our compulsion to personify


The desire to strive for more humanity within technology is understandable. As a species, we’ve long taken pleasure in the personification of animals and inanimate objects, whether it’s chuckling when you see a dog wearing a tiny top hat or doodling a smiley face on a steamy bathroom mirror. Such small modifications can cause people to instinctively react more warmly to an otherwise non-human entity. In fact, a team of researchers in the UK found that simply attaching an image of eyeballs to a supermarket donation bucket prompted a 48 percent increase in contributions.


On the AI side, consider Magic Leap’s Mica, a shockingly lifelike and responsive virtual assistant who makes eye contact, smiles, and even yawns. A company spokesperson says Mica represents Magic Leap’s effort “to see how far we could push systems to create digital human representations.” But to what end? Just because people might toss more spare change into a donation bucket with eyes doesn’t mean personification of lifeless objects or concepts is always a good idea. In fact, it’s more likely to backfire on companies than you might think.


The perils of humanising AI


Already companies that employ automation to replace human interactions are having to contend with legal questions around how these technologies present themselves. In California, Governor Jerry Brown has signed a new law that, when it goes into effect this summer, will require companies to disclose whether they are using automation to communicate with the public. While the intent of the law is to clamp down on bots that are designed to deceive rather than assist, the law’s effects could be far-reaching. But there are far more practical reasons why companies should rethink just how hard they’re trying to make their AI seem human. Consider:


False expectations. In the race to showcase AI innovation, the market has been flooded with single-task, low-utility chatbots with limited capabilities. While it’s OK to employ such technology for basic tasks, humanising such applications can set false expectations in users. If a chatbot presents itself as a human, shouldn’t it be able to do the things that a human can do? This would be the implication. So when customers reach the limitations of an application — say, a chatbot’s basic ability to tell the customer whether there’s an internet outage reported in their area — and seek to do more, the experience immediately becomes frustrating.
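
To make the failure mode concrete, here is a minimal sketch (in Python; not from the original article) of the kind of single-task, low-utility chatbot described above. The intent keywords and the check_outage() lookup are hypothetical stand-ins: the bot handles exactly one request, an area outage check, and when it hits that limit it says so and discloses that it is automated rather than presenting itself as a human agent.

```python
# Minimal illustrative sketch, not from the article: a single-intent support
# bot that discloses it is automated and fails transparently outside its one
# task. The keywords and outage data below are hypothetical placeholders.

OUTAGE_KEYWORDS = {"outage", "down", "no internet", "connection"}

# Hypothetical stand-in for a real service-status lookup, keyed by area code.
KNOWN_OUTAGES = {"94105": True, "10001": False}


def check_outage(area: str) -> bool:
    """Return True if an outage is reported for the given area (stub data)."""
    return KNOWN_OUTAGES.get(area, False)


def reply(message: str, area: str) -> str:
    """Answer outage questions; outside that single task, state the limit."""
    text = message.lower()
    if any(keyword in text for keyword in OUTAGE_KEYWORDS):
        if check_outage(area):
            return "Automated assistant: yes, an outage is reported in your area."
        return "Automated assistant: no outage is currently reported in your area."
    # Beyond the one supported task: be explicit about the limitation
    # instead of imitating a human agent and frustrating the user.
    return ("Automated assistant: I can only check for reported internet "
            "outages. For anything else, I'll connect you with a person.")


if __name__ == "__main__":
    print(reply("Is the internet down in my area?", "94105"))
    print(reply("Can you lower my bill?", "94105"))
```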


Likewise, humanising virtual assistants can quickly spark very human outrage if the assistant offers little real utility. Just think about Microsoft’s Clippy, the much-reviled eyeballed paperclip who annoyed (but rarely assisted) a generation of Word users.


Inviting challenges. Similarly, over-humanising a piece of technology can incite users to challenge the technology in the quest to expose its weaknesses. Just think about how people today like to test the limits of assistants like Alexa, asking “her” questions about where she’s from and her likes and dislikes. These challenges are often all in good fun, but that’s not always the case when a person encounters an automated customer service experience that tries to pass itself off as a real agent.


Introducing human flaws to AI. Finally — and perhaps most importantly — why are companies seeking to make AI more human-like when its capabilities can, for many functions, far surpass those of humans? The concept of customer service teams emerged more than 250 years ago, alongside the Industrial Revolution, and people have been complaining about very human customer service failures and inefficiencies ever since. Why would we try to replicate that with machines? Take the basic customer contact center, for example. Companies spend $1.2 trillion on these centers globally, yet many consumers dread the customer service interactions they foster. Slow responses, inaccurate information, transfers, confusing journeys, privacy breaches: These are the limitations that arise when you employ humans to reach across complex, multifaceted organisations. Advanced, transactional, enterprise-grade conversational AI can manage such processes better, and companies should be taking the opportunity to reset customer expectations around these solutions.


Embracing AI’s non-human strengths


Instead of spending so much energy trying to humanise AI interactions — and risking the alienation of customers in the process — let’s focus our energy on building the best possible automated technology to help with specific tasks. AI is exceptionally useful when it comes to parsing complex information and enabling seamless transactions — far more efficient and effective, in many cases, than human agents. So let’s elevate and celebrate those enhanced capabilities, not mask them with cutesy names and uncanny avatars by default.


About 60 percent of consumers say their go-to channel for simple customer support inquiries is a digital self-service tool. These people aren’t turning to these tools for chit-chat or their adorable personalities. They’re turning to them for real solutions to their problems, and they’re grateful for the efficiencies when they actually work. That’s not to say these technologies can’t be customised in a way that conveys brand character or creates enjoyable, even playful, customer experiences. But such endeavours should be managed carefully, lest they backfire by setting overly ambitious expectations or alienating audiences through the use of a particular gender or demographic persona.


Enterprises today need to set fair expectations with their automation and avoid any personification that might distract or confuse users with regard to what the system is designed to do. AI has the ability to transform interactions for humans, and even humanity itself. But that doesn’t mean it needs to become more human itself.


Originally published at https://kuaibao.qq.com/s/20190612A0M8B300?refer=cp_1026
