
[DeepNash Agent] 34 DeepMind Authors Jointly Publish a New Benchmark for "Model-Free Multi-Agent Reinforcement Learning in Strategy Games"

Author: 深度强化学习实验室 (Deep Reinforcement Learning Lab)
Published: 2022-09-23 14:58:16

Paper source: DeepMind

Layout: OpenDeepRL

We introduce DeepNash, an autonomous agent capable of learning to play the imperfect information game Stratego from scratch, up to a human expert level. Stratego is one of the few iconic board games that Artificial Intelligence (AI) has not yet mastered. This popular game has an enormous game tree on the order of 10^535 nodes, i.e., 10^175 times larger than that of Go. It has the additional complexity of requiring decision-making under imperfect information, similar to Texas hold'em poker, which has a significantly smaller game tree (on the order of 10^164 nodes). Decisions in Stratego are made over a large number of discrete actions with no obvious link between action and outcome. Episodes are long, with often hundreds of moves before a player wins, and situations in Stratego cannot easily be broken down into manageably sized sub-problems as in poker. For these reasons, Stratego has been a grand challenge for the field of AI for decades, and existing AI methods barely reach an amateur level of play. DeepNash uses a game-theoretic, model-free deep reinforcement learning method, without search, that learns to master Stratego via self-play. The Regularised Nash Dynamics (R-NaD) algorithm, a key component of DeepNash, converges to an approximate Nash equilibrium, instead of 'cycling' around it, by directly modifying the underlying multi-agent learning dynamics. DeepNash beats existing state-of-the-art AI methods in Stratego and achieved a yearly (2022) and all-time top-3 rank on the Gravon games platform, competing with human expert players.
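To give some intuition for the "cycling" problem that R-NaD addresses, below is a minimal, self-contained sketch (our own illustration, not DeepMind's implementation) on rock-paper-scissors, a zero-sum matrix game where unregularized learning dynamics famously orbit the Nash equilibrium instead of converging to it. The KL-style pull toward a reference policy and the outer-loop reference update mirror the reward transformation and fixed-point iteration described in the paper; the step sizes, the regularization strength eta, and the multiplicative-weights inner loop are illustrative choices.

import numpy as np

# Rock-paper-scissors payoff matrix for the row player (zero-sum game).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
eta = 0.2   # regularization strength (illustrative value)
lr  = 0.1   # inner-loop step size (illustrative value)

x = softmax(rng.standard_normal(3))  # row player's mixed strategy
y = softmax(rng.standard_normal(3))  # column player's mixed strategy
x_ref, y_ref = x.copy(), y.copy()    # regularization ("reference") policies

for _ in range(50):        # outer loop: move the reference to the fixed point
    for _ in range(500):   # inner loop: dynamics on the regularized game
        # Payoff gradients with a KL-style pull toward the reference policy;
        # this reward transformation is what stops the cycling.
        gx =  A @ y   - eta * (np.log(x) - np.log(x_ref))
        gy = -A.T @ x - eta * (np.log(y) - np.log(y_ref))
        # Multiplicative-weights (mirror-descent) update in log space.
        x = softmax(np.log(x) + lr * gx)
        y = softmax(np.log(y) + lr * gy)
    x_ref, y_ref = x.copy(), y.copy()

print(np.round(x, 3), np.round(y, 3))  # both approach uniform (1/3, 1/3, 1/3)

Without the eta term the same update orbits the equilibrium indefinitely; with it, each inner loop has a unique fixed point, and repeatedly moving the reference to that fixed point drives both strategies toward the Nash equilibrium. DeepNash applies this same idea with deep networks trained on Stratego self-play trajectories rather than closed-form matrix-game updates.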


Citation

@misc{2206.15378,
  author = {Julien Perolat and Bart de Vylder and Daniel Hennes and Eugene Tarassov and Florian Strub and Vincent de Boer and Paul Muller and Jerome T. Connor and Neil Burch and Thomas Anthony and Stephen McAleer and Romuald Elie and Sarah H. Cen and Zhe Wang and Audrunas Gruslys and Aleksandra Malysheva and Mina Khan and Sherjil Ozair and Finbarr Timbers and Toby Pohlen and Tom Eccles and Mark Rowland and Marc Lanctot and Jean-Baptiste Lespiau and Bilal Piot and Shayegan Omidshafiei and Edward Lockhart and Laurent Sifre and Nathalie Beauguerlange and Remi Munos and David Silver and Satinder Singh and Demis Hassabis and Karl Tuyls},
  title  = {Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning},
  year   = {2022},
  eprint = {arXiv:2206.15378},
}

Originally published 2022-07-27 on the 深度强化学习实验室 WeChat public account.
