Keras author François Chollet: Facebook's algorithms will "eat us all"

Author: Bot

Source: 论智

Editor's note: The fallout from Facebook's privacy-leak scandal has kept growing in recent days. In the early hours of this morning, Google deep learning expert and Keras author François Chollet posted a series of tweets laying out his views on the affair in detail.

Below is 论智's translation of the thread:

Facebook has long been running a very "successful" series of experiments that use the NewsFeed to influence users' moods and opinions and to predict their future decisions (such as breakups or new relationships). If there is only one AI risk worth worrying about today, it is this: extremely effective, large-scale population control. Facebook has been trying to become a leader in AI; look at the full extent of its reach now, and that is what is genuinely terrifying.

As a member of the AI research community, the next time you have to participate in Facebook's ecosystem in any form, think about what you are enabling. You are feeding a monster that may end up eating us all. Sorry if that sounds overly dramatic, but given the threat we now face, the word is no exaggeration.

The problem with Facebook is not just that it leaks users' personal data and feeds it all into a totalitarian panopticon. In my view, the more worrying issue is its use of digital information consumption as a vector for psychological control. Times are changing, and our world will be shaped for a long time by two trends: first, our lives are increasingly dematerialized; at work and at home alike, we have entrusted both our consumption and our generation of information to the network. Second, AI is getting ever smarter.

These two trends overlap at the level of the algorithms that shape our digital content consumption. Social media algorithms are opaque: the content we browse, the friends we keep in touch with, the opinions we read, the feedback we receive, all of it is increasingly decided by algorithms. After years of exposure to the internet, the algorithmic curation of the information we consume has given the systems in charge considerable power over our lives and over who we become. And as people move the center of their lives into the digital realm, we fall under the deep influence of that realm's ruler: AI algorithms.

If, over many years, the news we read (real or fake) is arranged by Facebook, the political status updates we see are decided by Facebook, and even our own audience is allocated by Facebook, then Facebook is in effect in control of our political beliefs and worldview. You may not know it, but Facebook started long ago to run experiments that control and influence unwitting users by tuning their news feeds and predicting their decisions; from what is publicly known, this series of experiments dates back to at least 2013.

In short, Facebook can measure everything about us in real time and control the content we consume. When an algorithm handles only perception and action, you are looking at an AI problem; but once you use algorithms to build an optimization system for human behavior, it becomes a reinforcement learning loop.

It is a loop in which you observe the current state of your target and keep tuning the information you feed it, until you start seeing the opinions and behaviors you want.
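
To make the shape of that loop concrete, here is a minimal toy sketch in Python. It is purely illustrative and describes no real system: the scalar "opinion" state, the greedy content picker, and all the numbers are invented for this example.

```python
# A toy version of the closed loop described above. Everything here is
# hypothetical; real feed-ranking systems are vastly more complex.

def observe_state(user):
    """Perception: measure the target's current opinion as a score in [-1, 1]."""
    return user["opinion"]

def select_content(opinion, target, feed):
    """Action: greedily pick the item expected to nudge the opinion toward
    the desired value (a stand-in for a learned policy)."""
    return max(feed, key=lambda item: item["slant"] * (target - opinion))

def user_reacts(user, item):
    """Environment: exposure shifts the opinion slightly toward the item's slant."""
    user["opinion"] += 0.1 * (item["slant"] - user["opinion"])

user = {"opinion": -0.5}   # current state of the target
target = 0.8               # opinion the operator wants to observe
feed = [{"slant": s} for s in (-1.0, -0.3, 0.2, 0.9)]

for step in range(100):
    opinion = observe_state(user)        # observe the current state
    if abs(opinion - target) < 0.05:     # desired behavior observed: stop
        print(f"converged after {step} steps")
        break
    item = select_content(opinion, target, feed)  # tune the information fed
    user_reacts(user, item)
```

Replace the greedy pick with a learned policy and the toy reaction with real user behavior, and the structure is exactly a reinforcement learning setup: state, action, and a reward defined by how close the target's behavior is to what the operator wants.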

A promising direction in current AI research (and one Facebook has focused on in particular) is developing algorithms that solve such optimization problems as efficiently as possible: closing the loop and achieving full control of the target. In this data-leak affair, we are the guinea pigs.

This also lays bare a reality: the human mind is highly vulnerable to simple patterns of social manipulation. While thinking about these issues, I compiled a short list of psychological attack patterns that would be devastatingly effective. Some of them, such as positive and negative social reinforcement, have long been used in internet advertising, though only in a very weak, untargeted form. From an information security perspective, you could call these "vulnerabilities": known exploits that can be used to take over an entire system.

As long as we have human minds, these vulnerabilities will never be patched, because they are simply how we exist: they are our DNA, they are our psychology. On a personal level, none of us has any practical way to defend against them. The mind is a static, vulnerable system, and it will come under ever more frequent attack from smart AI algorithms that observe everything we do and believe, in full, while holding the content we consume in their hands.

More importantly, the AI algorithms needed for mass population control, and political control in particular, are not advanced at all; they only need to be placed in charge of our daily information diet. No self-awareness, no superintelligence, and yet they already constitute a dire threat.

So if mass population control is possible, or at least theoretically feasible, why hasn't the world ended yet? In short, I think it is because we are still really bad at AI. But that may be about to change; it is not that there is no room left to climb, we are simply stuck at a technical bottleneck.

Until 2015, ad targeting algorithms across the entire industry ran on nothing more than logistic regression, and in fact that has not changed much even today: only a tiny handful of the biggest players have moved to more advanced models. This is why we so often see ads online that have nothing to do with us; the algorithms are simply not mature. Likewise, the social media bots used by hostile state actors to steer public opinion are not really AI. All of these technologies are extremely primitive right now, but only right now.
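
For context, "mere logistic regression" means a model like the sketch below: a linear score squashed through a sigmoid, trained on click labels and used to rank candidate ads. The features, data, and training details are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features for each (user, ad) pair, e.g. age bucket,
# past click rate, topic match. Entirely synthetic data.
X = rng.normal(size=(1000, 3))
true_w = np.array([0.5, 2.0, 1.5])
y = (rng.random(1000) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)  # click labels

# Fit the weights by gradient descent on the logistic (cross-entropy) loss.
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))        # predicted click probability
    w -= 0.1 * (X.T @ (p - y)) / len(y)   # average gradient of the loss

# "Targeting": rank candidate ads for one user by predicted click probability.
candidates = rng.normal(size=(5, 3))
scores = 1 / (1 + np.exp(-(candidates @ w)))
print("show ad:", int(np.argmax(scores)))
```

A model this simple can only capture linear feature effects, which is consistent with the thread's point about irrelevant ads; the "more advanced models" the biggest players moved to are the deep networks discussed next.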

AI has made rapid progress in recent years, and that progress is only beginning to be deployed in targeting algorithms and social bots. Deep learning only started making its way into news feeds and ad networks around 2016, yet Facebook has already poured enormous sums into it. So who knows what comes next? The money is spent, and the stated goal is clear: to become the leader in AI. What does that tell you? The next time the NewsFeed pushes us a story, Facebook, tell me: what exactly are you using those RL/AI algorithms for?

We are looking at a powerful entity that builds fine-grained psychological profiles of two billion people, that runs large-scale behavior-manipulation experiments on them, and that aims to develop the most advanced AI technology the world has ever seen.

Personally, that description scares me.

If you work in AI, please don't help them. Don't play their game; don't participate in their research ecosystem. Please show some human conscience.

Summary

It is understandable that François Chollet, as a practitioner in the AI field, is pained to see the technology abused, and his criticism of Facebook is incisive and thought-provoking. Still, reading the whole thread, one has to wonder whether this expert, who has contributed so much to ML/DL, got a little too worked up this time.

"If you work in AI, please don't help them. Don't play their game; don't participate in their research ecosystem." It is hard to believe such words came from a deep learning expert. Without data contributed by the public, much research would struggle to advance, and most AI technology is moving toward the goal of serving people. Isn't a call like this an overcorrection?

Then again, the company with the largest share of the US online advertising market today is Google. As one overseas commenter put it: Facebook is a weird way to spell "Google". If Facebook's record is this tainted, can Google really be the industry's innocent?

This editor dares not weigh in, and would rather just watch Bezos walk his dog.

English original

Facebook has been running for a long time a very "successful" series of experiments about using the newsfeed to control users' moods and opinions, as well as predicting users' future decisions (e.g. predicting breakups and new relationships).

If there's one AI risk you should be worried about, this is it: extremely effective, scalable population control. And the fact that Facebook has been attempting to become a leader in AI is genuinely terrifying when you can see the full extent of the implications.

As a member of the AI research community: next time you participate in Facebook's AI ecosystem in any way, think about what you are enabling. You are feeding a monster that may end up eating us all. Sorry if I sound dramatic, but the threat level we face deserves that vocabulary.

The problem with Facebook is not just the loss of your privacy and the fact that it can be used as a totalitarian panopticon. The more worrying issue, in my opinion, is its use of digital information consumption as a psychological control vector. Time for a thread.

The world is being shaped in large part by two long-term trends: first, our lives are increasingly dematerialized, consisting of consuming and generating information online, both at work and at home. Second, AI is getting ever smarter. These two trends overlap at the level of the algorithms that shape our digital content consumption. Opaque social media algorithms get to decide, to an ever-increasing extent, which articles we read, who we keep in touch with, whose opinions we read, whose feedback we get.

Integrated over many years of exposure, the algorithmic curation of the information we consume gives the systems in charge considerable power over our lives, over who we become. By moving our lives to the digital realm, we become vulnerable to that which rules it -- AI algorithms. If Facebook gets to decide, over the span of many years, which news you will see (real or fake), whose political status updates you’ll see, and who will see yours, then Facebook is in effect in control of your political beliefs and your worldview. This is not quite news, as Facebook has been known to run since at least 2013 a series of experiments in which they were able to successfully control the moods and decisions of unwitting users by tuning their newsfeeds’ contents, as well as predicting users’ future decisions.

In short, Facebook can simultaneously measure everything about us, and control the information we consume. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior. An RL loop. A loop in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see. A good chunk of the field of AI research (especially the bits that Facebook has been investing in) is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the phenomenon at hand. In this case, us.

This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. While thinking about these issues, I have compiled a short list of psychological attack patterns that would be devastatingly effective. Some of them have been used for a long time in advertising (e.g. positive/negative social reinforcement), but in a very weak, un-targeted form. From an information security perspective, you would call these "vulnerabilities": known exploits that can be used to take over a system.

In the case of the human mind, these vulnerabilities never get patched, they are just the way we work. They’re in our DNA. They're our psychology. On a personal level, we have no practical way to defend ourselves against them. The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume. Importantly, mass population control -- in particular political control -- arising from placing AI algorithms in charge of our information diet does not necessarily require very advanced AI. You don’t need self-aware, superintelligent AI for this to be a dire threat.

So, if mass population control is already possible today -- in theory -- why hasn’t the world ended yet? In short, I think it’s because we’re really bad at AI. But that may be about to change. You see, our technical capabilities are the bottleneck here. Until 2015, all ad targeting algorithms across the industry were running on mere logistic regression. In fact, that’s still true to a large extent today -- only the biggest players have switched to more advanced models. It is the reason why so many of the ads you see online seem desperately irrelevant. They aren't that sophisticated. Likewise, the social media bots used by hostile state actors to sway public opinion have little to no AI in them. They’re all extremely primitive. For now.

AI has been making fast progress in recent years, and that progress is only beginning to get deployed in targeting algorithms and social media bots. Deep learning has only started to make its way into newsfeeds and ad networks around 2016. Facebook has invested massively in it. Who knows what will be next. It is quite striking that Facebook has been investing enormous amounts in AI research and development, with the explicit goal of becoming a leader in the field. What does that tell you? What do you use AI/RL for when your product is a newsfeed?

We’re looking at a powerful entity that builds fine-grained psychological profiles of over two billion humans, that runs large-scale behavior manipulation experiments, and that aims at developing the best AI technology the world has ever seen. Personally, it really scares me.

If you work in AI, please don't help them. Don't play their game. Don't participate in their research ecosystem. Please show some conscience.

——François Chollet

Original link: http://kuaibao.qq.com/s/20180322G1P5UQ00?refer=cp_1026
