Science: Artificial intelligence is evolving all by itself

Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.

“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”

Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.

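The idea that a network "learns from training data by altering the strength of connections between artificial neurons" can be made concrete with a tiny example. The sketch below is an illustration, not code from the article: it trains a single artificial neuron on made-up data by repeatedly nudging its connection weights toward fewer errors.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # toy training data: 200 examples, 4 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy labels the neuron should learn

w = rng.normal(size=4)   # "connection strengths" between the inputs and the neuron
b = 0.0
lr = 0.1                 # how strongly each error nudges the connections

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # the neuron's output for every example
    grad_w = X.T @ (p - y) / len(y)         # how much each connection contributed to the error
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w                        # strengthen or weaken each connection
    b -= lr * grad_b

print("training accuracy:", float(np.mean((p > 0.5) == y)))
```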

In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases.

So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says.

The program discovers algorithms using a loose approximation of evolution. It starts by creating a population of 100 candidate algorithms by randomly combining mathematical operations. It then tests them on a simple task, such as an image recognition problem where it has to decide whether a picture shows a cat or a truck.

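A hedged sketch of that first step: build a population of 100 candidate "algorithms" by randomly chaining basic mathematical operations, then score each one on a simple two-class problem standing in for the cat-versus-truck image task. The particular op set, program encoding, and fitness measure below are illustrative assumptions, not the paper's exact setup.

```python
import random
import numpy as np

OPS = [np.add, np.subtract, np.multiply, np.maximum, np.minimum]  # "high school" math operations

def random_algorithm(n_ops=5, n_features=8):
    """A candidate algorithm: a random sequence of (operation, weight-vector) steps."""
    return [(random.choice(OPS), np.random.randn(n_features)) for _ in range(n_ops)]

def predict(algorithm, X):
    """Run the op sequence on the input features and threshold the final value."""
    acc = np.zeros(X.shape[0])
    for op, w in algorithm:
        acc = op(acc, X @ w)      # combine the running value with a weighted input
    return (acc > 0).astype(int)

def fitness(algorithm, X, y):
    return float(np.mean(predict(algorithm, X) == y))  # accuracy on the toy task

# Toy stand-in for the image-recognition problem.
X = np.random.randn(200, 8)
y = (X[:, 0] - X[:, 3] > 0).astype(int)

population = [random_algorithm() for _ in range(100)]
scores = [fitness(a, X, y) for a in population]
print("best random candidate accuracy:", max(scores))
```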

In each cycle, the program compares the algorithms’ performance against hand-designed algorithms. Copies of the top performers are “mutated” by randomly replacing, editing, or deleting some of their code to create slight variations of the best algorithms. These “children” get added to the population, while older programs get culled. The cycle repeats.

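One evolutionary cycle might look like the sketch below: copy a top performer, mutate the copy by replacing, editing, or deleting a step, add the "child" to the population, and cull the oldest program. The candidate encoding (a short list of numbers standing in for instructions), the toy fitness function, and the choice of parent as the best of a small random sample are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import deque

def fitness(candidate):
    # Toy stand-in for "performance": prefer candidates whose numbers sum close to a target.
    return -abs(sum(candidate) - 10.0)

def mutate(parent):
    """Create a slightly varied copy by replacing, editing, or deleting one step."""
    child = list(parent)
    i = random.randrange(len(child))
    kind = random.choice(["replace", "edit", "delete"])
    if kind == "replace":
        child[i] = random.uniform(-5.0, 5.0)  # swap a step for a random new one
    elif kind == "edit":
        child[i] += random.gauss(0.0, 0.5)    # tweak an existing step
    elif len(child) > 1:
        del child[i]                          # drop a step entirely
    return child

# Oldest programs sit at the left end of the deque, newest at the right.
population = deque([[random.uniform(-5.0, 5.0) for _ in range(4)]
                    for _ in range(100)])

for _ in range(2000):
    contenders = random.sample(list(population), 10)  # compare a handful of programs
    parent = max(contenders, key=fitness)             # copy one of the top performers
    population.append(mutate(parent))                 # the "child" joins the population
    population.popleft()                              # the oldest program is culled

print("best fitness found:", fitness(max(population, key=fitness)))
```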

The system creates thousands of these populations at once, which lets it churn through tens of thousands of algorithms a second until it finds a good solution. The program also uses tricks to speed up the search, like occasionally exchanging algorithms between populations to prevent any evolutionary dead ends, and automatically weeding out duplicate algorithms.

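The two speed-ups mentioned here can also be sketched in a hedged way: occasionally exchanging candidates between independently evolving populations ("migration"), and weeding out functional duplicates by comparing what candidates compute on a few fixed probe inputs. The probe-based fingerprint and the migration scheme below are illustrative assumptions, not the paper's implementation.

```python
import random

PROBES = [[1.0, 0.0, -1.0, 0.5], [0.2, 0.3, 0.4, 0.5]]  # fixed probe inputs

def behaviour_signature(candidate):
    """Fingerprint a candidate by what it computes on the probe inputs;
    candidates with identical signatures are treated as functional duplicates."""
    return tuple(round(sum(c * x for c, x in zip(candidate, probe)), 6)
                 for probe in PROBES)

def deduplicate(population):
    seen, unique = set(), []
    for cand in population:
        sig = behaviour_signature(cand)
        if sig not in seen:
            seen.add(sig)
            unique.append(cand)
    return unique

def migrate(populations, k=2):
    """Swap a few random candidates between two randomly chosen populations."""
    a, b = random.sample(range(len(populations)), 2)
    for _ in range(k):
        i = random.randrange(len(populations[a]))
        j = random.randrange(len(populations[b]))
        populations[a][i], populations[b][j] = populations[b][j], populations[a][i]

# Many populations evolving in parallel (here: lists of toy 4-number candidates).
populations = [[[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(50)]
               for _ in range(8)]

migrate(populations)                                   # occasional exchange between populations
populations = [deduplicate(pop) for pop in populations]  # remove functional duplicates
print("population sizes after deduplication:", [len(pop) for pop in populations])
```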

In a preprint paper published last month on arXiv, the researchers show the approach can stumble on a number of classic machine learning techniques, including neural networks. The solutions are simple compared with today’s most advanced algorithms, admits Le, but he says the work is a proof of principle and he’s optimistic it can be scaled up to create much more complex AIs.

Still, Joaquin Vanschoren, a computer scientist at the Eindhoven University of Technology, thinks it will be a while before the approach can compete with the state-of-the-art. One thing that could improve the program, he says, is not asking it to start from scratch, but instead seeding it with some of the tricks and techniques humans have discovered. “We can prime the pump with learned machine learning concepts.”

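One way to read the "prime the pump" suggestion is to seed part of the initial population with hand-written candidates that already encode known machine learning ideas, rather than starting from purely random programs. The sketch below illustrates that idea under made-up encodings; the seeded candidate and the seed fraction are assumptions, not something either paper specifies.

```python
import random

def random_candidate(n=4):
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def seeded_candidate(n=4):
    # A hand-chosen starting point standing in for a known technique,
    # e.g. equal positive weights as a crude linear model.
    return [1.0] * n

def initial_population(size=100, seed_fraction=0.2):
    """Mix hand-seeded candidates with random ones instead of starting from scratch."""
    n_seeded = int(size * seed_fraction)
    return ([seeded_candidate() for _ in range(n_seeded)] +
            [random_candidate() for _ in range(size - n_seeded)])

population = initial_population()
n_seeded = sum(cand == [1.0] * 4 for cand in population)
print(len(population), "candidates in total,", n_seeded, "of them seeded")
```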

That’s something Le plans to work on. Focusing on smaller problems rather than entire algorithms also holds promise, he adds. His group published another paper on arXiv on 6 April that used a similar approach to redesign a popular ready-made component used in many neural networks.

But Le also believes boosting the number of mathematical operations in the library and dedicating even more computing resources to the program could let it discover entirely new AI capabilities. “That’s a direction we’re really passionate about,” he says. “To discover something really fundamental that will take a long time for humans to figure out.”

https://www.sciencemag.org/news/2020/04/artificial-intelligence-evolving-all-itself#
