Hung Le, Truyen Tran, Svetha Venkatesh
(Submitted on 5 Jan 2019 (v1), last revised 20 Mar 2019 (this version, v2))
Memory-augmented neural networks, which consist of a neural controller and an external memory, have shown potential in long-term sequential learning. Current RAM-like memory models access memory at every timestep and thus fail to effectively leverage the short-term memory held in the controller. We hypothesize that this writing scheme is suboptimal in memory utilization and introduces redundant computation. To validate our hypothesis, we derive a theoretical bound on the amount of information stored in a RAM-like system and formulate an optimization problem that maximizes this bound. The proposed solution, dubbed Uniform Writing, is proved optimal under the assumption that all timesteps contribute equally. To relax this assumption, we modify the original solution, yielding a method termed Cached Uniform Writing. This method balances maximizing memorization against forgetting via an overwriting mechanism. Through an extensive set of experiments, we empirically demonstrate the advantages of our solutions over other recurrent architectures, achieving state-of-the-art results on various sequential modeling tasks.
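The scheduling idea in the abstract can be illustrated concretely. Below is a minimal Python sketch of a uniform-writing schedule: a fixed budget of write operations is spread evenly over the sequence instead of writing at every timestep. The function name, its arguments, and the ceiling-based interval formula are illustrative assumptions, not the paper's exact derivation.

```python
import math

def uniform_write_schedule(seq_len, num_writes):
    """Timesteps at which the external memory is written.

    Minimal sketch of the Uniform Writing idea: spread a fixed
    budget of write operations evenly across the sequence rather
    than writing at every timestep. The ceiling-based interval is
    an illustrative assumption, not the paper's exact formula.
    """
    interval = math.ceil(seq_len / num_writes)
    return [t for t in range(seq_len) if (t + 1) % interval == 0]

# A 12-step sequence with a budget of 4 writes is written only at
# every 3rd timestep; the controller's recurrent state carries the
# information accumulated between writes.
print(uniform_write_schedule(12, 4))  # [2, 5, 8, 11]
```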