
AI Academic Digest [7.28]

WeChat official account: arXiv每日学术速递 (arXiv Daily Academic Digest)
Published 2021-07-29 14:21:40

Visit www.arxivdaily.com for digests with abstracts covering CS, Physics, Mathematics, Economics, Statistics, Finance, Biology, and Electrical Engineering, plus search, favorites, and posting features. Click "Read the original" to visit.

cs.AI (Artificial Intelligence): 46 papers in total

【1】 Predictive Coding: a Theoretical and Experimental Review

Authors: Beren Millidge, Anil Seth, Christopher L Buckley
Affiliations: School of Informatics, University of Edinburgh; Sackler Center for Consciousness Science, School of Engineering and Informatics, University of Sussex; CIFAR Program on Brain, Mind, and Consciousness; Evolutionary and Adaptive Systems Research Group
Note: 27/07/21 initial upload
Link: https://arxiv.org/abs/2107.12979
Abstract: Predictive coding offers a potentially unifying account of cortical function -- postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the world. The theory is closely related to the Bayesian brain framework and, over the last two decades, has gained substantial influence in the fields of theoretical and cognitive neuroscience. A large body of research has arisen based on both empirically testing improved and extended theoretical and mathematical models of predictive coding, as well as in evaluating their potential biological plausibility for implementation in the brain and the concrete neurophysiological and psychological predictions made by the theory. Despite this enduring popularity, however, no comprehensive review of predictive coding theory, and especially of recent developments in this field, exists. Here, we provide a comprehensive review both of the core mathematical structure and logic of predictive coding, thus complementing recent tutorials in the literature. We also review a wide range of classic and recent work within the framework, ranging from the neurobiologically realistic microcircuits that could implement predictive coding, to the close relationship between predictive coding and the widely-used backpropagation of error algorithm, as well as surveying the close relationships between predictive coding and modern machine learning techniques.
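
The core update rule of predictive coding is compact enough to sketch. Below is a minimal, hypothetical single-layer example (linear generative model, made-up weights, not the paper's full hierarchical formulation): inference runs gradient descent on the squared prediction error, so the latent estimate `mu` is driven by the error signal until the generative prediction matches the observation.

```python
import numpy as np

# Minimal single-layer predictive coding sketch (illustrative only). A linear
# generative model predicts the observation x from a latent estimate mu;
# inference descends the squared prediction error ||x - W @ mu||^2, so mu is
# updated by the prediction-error signal.
W = np.array([[1.0, 0.2],
              [0.0, 1.0],
              [0.5, -0.3],
              [-0.2, 0.4]])          # hypothetical generative weights
x = W @ np.array([1.0, -0.5])        # observation from a known true latent

mu = np.zeros(2)                     # initial latent estimate
lr = 0.1
for _ in range(200):
    eps = x - W @ mu                 # prediction error
    mu += lr * W.T @ eps             # inference step: reduce the error

print(np.round(mu, 3))               # converges to the true latent [1, -0.5]
```

In the full theory the same error signal also drives learning of `W`, which is where the relationship to backpropagation discussed in the abstract arises.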

【2】 The social dilemma in AI development and why we have to solve it

Authors: Inga Strümke, Marija Slavkovik, Vince Madai
Link: https://arxiv.org/abs/2107.12977
Abstract: While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI accelerates, even though there is no shortage of ethical guidelines. We argue that a main underlying cause for this is that AI developers face a social dilemma in AI development ethics, preventing the widespread adaptation of ethical best practices. We define the social dilemma for AI development and describe why the current crisis in AI development ethics cannot be solved without relieving AI developers of their social dilemma. We argue that AI development must be professionalised to overcome the social dilemma, and discuss how medicine can be used as a template in this process.

【3】 Reinforcement Learning with Formal Performance Metrics for Quadcopter Attitude Control under Non-nominal Contexts

Authors: Nicola Bernini, Mikhail Bessa, Rémi Delmas, Arthur Gold, Eric Goubault, Romain Pennec, Sylvie Putot, François Sillion
Affiliations: Uber ATCP, Paris, France; LIX, Ecole polytechnique, CNRS, IP-Paris, Palaiseau, France
Link: https://arxiv.org/abs/2107.12942
Abstract: We explore the reinforcement learning approach to designing controllers by extensively discussing the case of a quadcopter attitude controller. We provide all details allowing to reproduce our approach, starting with a model of the dynamics of a Crazyflie 2.0 under various nominal and non-nominal conditions, including partial motor failures and wind gusts. We develop a robust form of a signal temporal logic to quantitatively evaluate the vehicle's behavior and measure the performance of controllers. The paper thoroughly describes the choices in training algorithms, neural net architecture, hyperparameters, and observation space in view of the different performance metrics we have introduced. We discuss the robustness of the obtained controllers, both to partial loss of power for one rotor and to wind gusts, and finish by drawing conclusions on practical controller design by reinforcement learning.
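
The "robust form of a signal temporal logic" referred to above assigns each trace a real-valued robustness instead of a Boolean verdict. A minimal sketch (not the paper's exact formulation; the bound `THETA` is illustrative) for the property "always, |roll| < THETA" over a sampled trace:

```python
# Quantitative STL robustness sketch for "always, |roll| < THETA":
# rho(trace) = min over t of (THETA - |roll(t)|). A positive value means the
# property holds, and its magnitude is the satisfaction margin, which can be
# used directly as a shaped performance metric for a learned controller.
THETA = 0.3  # hypothetical attitude bound (radians)

def robustness_always_bounded(trace, theta=THETA):
    return min(theta - abs(v) for v in trace)

nominal = [0.05, -0.1, 0.2, 0.0]           # stays within the bound
gusty = [0.05, 0.45, -0.1]                 # one excursion past the bound
print(robustness_always_bounded(nominal))  # positive: satisfied with margin
print(robustness_always_bounded(gusty))    # negative: violated
```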

【4】 Persistent Reinforcement Learning via Subgoal Curricula

Authors: Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn
Affiliations: Stanford University; Google Brain; UC Berkeley
Link: https://arxiv.org/abs/2107.12931
Abstract: Reinforcement learning (RL) promises to enable autonomous acquisition of complex behaviors for diverse agents. However, the success of current reinforcement learning algorithms is predicated on an often under-emphasised requirement -- each trial needs to start from a fixed initial state distribution. Unfortunately, resetting the environment to its initial state after each trial requires a substantial amount of human supervision and extensive instrumentation of the environment, which defeats the purpose of autonomous reinforcement learning. In this work, we propose Value-accelerated Persistent Reinforcement Learning (VaPRL), which generates a curriculum of initial states such that the agent can bootstrap on the success of easier tasks to efficiently learn harder tasks. The agent also learns to reach the initial states proposed by the curriculum, minimizing the reliance on human interventions into the learning. We observe that VaPRL reduces the interventions required by three orders of magnitude compared to episodic RL while outperforming prior state-of-the-art methods for reset-free RL both in terms of sample efficiency and asymptotic performance on a variety of simulated robotics problems.
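
The curriculum idea can be sketched in a few lines. The following is a hedged toy (not VaPRL itself; the state names, value estimates, and target level are all invented): given value estimates for candidate initial states, propose the state whose estimated value sits closest to an intermediate competence target, so the agent repeatedly practices tasks just beyond its current ability.

```python
# Toy curriculum-over-initial-states sketch (illustrative, not VaPRL): pick
# the candidate initial state whose estimated value is closest to a target
# competence level, i.e. neither already mastered nor hopelessly hard.
def propose_initial_state(candidates, value, target=0.5):
    return min(candidates, key=lambda s: abs(value(s) - target))

# Hypothetical value estimates: states further from the goal have lower value.
V = {"near_goal": 0.9, "mid": 0.55, "far": 0.2}
print(propose_initial_state(V, V.get))  # "mid" is proposed next
```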

【5】 Angel's Girl for Blind Painters: an Efficient Painting Navigation System Validated by Multimodal Evaluation Approach

Authors: Hang Liu, Menghan Hu, Yuzhen Chen, Qingli Li, Guangtao Zhai, Simon X. Yang, Xiao-Ping Zhang, Xiaokang Yang
Note: 13 pages, 18 figures
Link: https://arxiv.org/abs/2107.12921
Abstract: For people who ardently love painting but unfortunately have visual impairments, holding a paintbrush to create a work is a very difficult task. People in this special group are eager to pick up the paintbrush, like Leonardo da Vinci, to create and make full use of their own talents. Therefore, to maximally bridge this gap, we propose a painting navigation system to assist blind people in painting and artistic creation. The proposed system is composed of a cognitive system and a guidance system. The system adopts drawing board positioning based on QR codes, brush navigation based on target detection, and brush real-time positioning. Meanwhile, this paper uses voice-based human-computer interaction and a simple but efficient position information coding rule. In addition, we design a criterion to efficiently judge whether the brush reaches the target or not. According to the experimental results, the thermal curves extracted from the faces of testers show that the system is relatively well accepted by blindfolded and even blind testers. With a prompt frequency of 1 s, the painting navigation system performs best, with a completion degree of 89% (SD 8.37%) and an overflow degree of 347% (SD 162.14%). Meanwhile, the excellent and good types of brush tip trajectory account for 74%, and the relative movement distance is 4.21 (SD 2.51). This work demonstrates that it is practicable for blind people to feel the world through the brush in their hands. In the future, we plan to deploy Angel's Eyes on the phone to make it more portable. The demo video of the proposed painting navigation system is available at: https://doi.org/10.6084/m9.figshare.9760004.v1.

【6】 Experiments on Properties of Hidden Structures of Sparse Neural Networks

Authors: Julian Stier, Harshil Darji, Michael Granitzer
Link: https://arxiv.org/abs/2107.12917
Abstract: Sparsity in the structure of Neural Networks can lead to less energy consumption, less memory usage, faster computation times on convenient hardware, and automated machine learning. If sparsity gives rise to certain kinds of structure, it can explain automatically obtained features during learning. We provide insights into experiments in which we show how sparsity can be achieved through prior initialization, pruning, and during learning, and answer questions on the relationship between the structure of Neural Networks and their performance. This includes the first work of inducing priors from network theory into Recurrent Neural Networks and an architectural performance prediction during a Neural Architecture Search. Within our experiments, we show how magnitude class blinded pruning achieves 97.5% on MNIST with 80% compression and re-training, which is 0.5 points more than without compression, that magnitude class uniform pruning is significantly inferior to it, and how a genetic search enhanced with performance prediction achieves 82.4% on CIFAR10. Further, performance prediction for Recurrent Networks learning the Reber grammar shows an $R^2$ of up to 0.81 given only structural information.
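
"Magnitude class blinded" pruning ranks all weights by absolute value, blind to which layer (class) they belong to. A hedged sketch with illustrative layer shapes (the 80% fraction matches the compression rate quoted in the abstract; the real experiments of course prune trained networks, not random ones):

```python
import numpy as np

# Global ("class blinded") magnitude pruning sketch: pool the magnitudes of
# all layers, find the cut-off below which the given fraction falls, and zero
# out every weight under it. Layer shapes here are illustrative only.
rng = np.random.default_rng(1)
layers = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]

def prune_global(layers, fraction=0.8):
    magnitudes = np.concatenate([np.abs(w).ravel() for w in layers])
    threshold = np.quantile(magnitudes, fraction)   # global cut-off
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in layers]

pruned = prune_global(layers)
kept = sum(int(np.count_nonzero(w)) for w in pruned)
total = sum(w.size for w in layers)
print(f"kept {kept}/{total} weights")   # roughly 20% of weights survive
```

The "class uniform" variant found inferior in the experiments would instead apply the same fraction within each layer separately.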

【7】 Emotion Recognition under Consideration of the Emotion Component Process Model

Authors: Felix Casel, Amelie Heindl, Roman Klinger
Affiliations: Institut für Maschinelle Sprachverarbeitung, University of Stuttgart, Germany
Note: accepted at KONVENS 2021
Link: https://arxiv.org/abs/2107.12895
Abstract: Emotion classification in text is typically performed with neural network models which learn to associate linguistic units with emotions. While this often leads to good predictive performance, it does only help to a limited degree to understand how emotions are communicated in various domains. The emotion component process model (CPM) by Scherer (2005) is an interesting approach to explain emotion communication. It states that emotions are a coordinated process of various subcomponents, in reaction to an event, namely the subjective feeling, the cognitive appraisal, the expression, a physiological bodily reaction, and a motivational action tendency. We hypothesize that these components are associated with linguistic realizations: an emotion can be expressed by describing a physiological bodily reaction ("he was trembling"), or the expression ("she smiled"), etc. We annotate existing literature and Twitter emotion corpora with emotion component classes and find that emotions on Twitter are predominantly expressed by event descriptions or subjective reports of the feeling, while in literature, authors prefer to describe what characters do, and leave the interpretation to the reader. We further include the CPM in a multitask learning model and find that this supports the emotion categorization. The annotated corpora are available at https://www.ims.uni-stuttgart.de/data/emotion.

【8】 Efficient TBox Reasoning with Value Restrictions using the $\mathcal{FL}_o$wer reasoner

Authors: Franz Baader, Patrick Koopmann, Friedrich Michel, Anni-Yasmin Turhan, Benjamin Zarrieß
Note: This paper is under consideration in Theory and Practice of Logic Programming (TPLP)
Link: https://arxiv.org/abs/2107.12877
Abstract: The inexpressive Description Logic (DL) $\mathcal{FL}_0$, which has conjunction and value restriction as its only concept constructors, had fallen into disrepute when it turned out that reasoning in $\mathcal{FL}_0$ w.r.t. general TBoxes is ExpTime-complete, i.e., as hard as in the considerably more expressive logic $\mathcal{ALC}$. In this paper, we rehabilitate $\mathcal{FL}_0$ by presenting a dedicated subsumption algorithm for $\mathcal{FL}_0$, which is much simpler than the tableau-based algorithms employed by highly optimized DL reasoners. Our experiments show that the performance of our novel algorithm, as prototypically implemented in our $\mathcal{FL}_o$wer reasoner, compares very well with that of the highly optimized reasoners. $\mathcal{FL}_o$wer can also deal with ontologies written in the extension $\mathcal{FL}_{\bot}$ of $\mathcal{FL}_0$ with the top and the bottom concept by employing a polynomial-time reduction, shown in this paper, which eliminates top and bottom. We also investigate the complexity of reasoning in DLs related to the Horn-fragments of $\mathcal{FL}_0$ and $\mathcal{FL}_{\bot}$.

【9】 PDF-Malware: An Overview on Threats, Detection and Evasion Attacks

Authors: Nicolas Fleury, Theo Dubrunquez, Ihsen Alouani
Affiliations: IEMN-DOAE Lab CNRS, Université Polytechnique Hauts-De-France, Valenciennes, France
Link: https://arxiv.org/abs/2107.12873
Abstract: In recent years, the Portable Document Format, commonly known as PDF, has become a democratized standard for document exchange and dissemination. This trend has been due to its characteristics such as its flexibility and portability across platforms. The widespread use of PDF has installed a false impression of inherent safety among benign users. However, the characteristics of PDF motivated hackers to exploit various types of vulnerabilities, overcome security safeguards, thereby making the PDF format one of the most efficient malicious code attack vectors. Therefore, efficiently detecting malicious PDF files is crucial for information security. Several analysis techniques have been proposed in the literature, be they static or dynamic, to extract the main features that allow the discrimination of malware files from benign ones. Since classical analysis techniques may be limited in case of zero-days, machine-learning based techniques have emerged recently as an automatic PDF-malware detection method that is able to generalize from a set of training samples. These techniques are themselves facing the challenge of evasion attacks where a malicious PDF is transformed to look benign. In this work, we give an overview on the PDF-malware detection problem. We give a perspective on the new challenges and emerging solutions.

【10】 Neural Network Branch-and-Bound for Neural Network Verification

Authors: Florian Jaeckle, Jingyue Lu, M. Pawan Kumar
Affiliations: Department of Engineering and Department of Statistics, University of Oxford
Note: arXiv admin note: substantial text overlap with arXiv:1912.01329
Link: https://arxiv.org/abs/2107.12855
Abstract: Many available formal verification methods have been shown to be instances of a unified Branch-and-Bound (BaB) formulation. We propose a novel machine learning framework that can be used for designing an effective branching strategy as well as for computing better lower bounds. Specifically, we learn two graph neural networks (GNNs) that both directly treat the network we want to verify as a graph input and perform forward-backward passes through the GNN layers. We use one GNN to simulate the strong branching heuristic behaviour and another to compute a feasible dual solution of the convex relaxation, thereby providing a valid lower bound. We provide a new verification dataset that is more challenging than those used in the literature, thereby providing an effective alternative for testing algorithmic improvements for verification. Whilst using just one of the GNNs leads to a reduction in verification time, we get optimal performance when combining the two GNN approaches. Our combined framework achieves a 50% reduction in both the number of branches and the time required for verification on various convolutional networks when compared to several state-of-the-art verification methods. In addition, we show that our GNN models generalize well to harder properties on larger unseen networks.
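
The unified BaB formulation the paper builds on can be sketched generically. The toy below is illustrative only (no GNN; the property and bound are invented): it verifies g(x) = x² − x + 0.3 > 0 on [−1, 1] using a deliberately loose interval bound that treats the two occurrences of x independently, so branching is actually exercised.

```python
# Generic branch-and-bound verification skeleton: certify sub-domains whose
# lower bound already proves the property, split the rest, and stop when all
# branches are certified (counterexample search is omitted for brevity).
def verify(lower_bound, split, domain, max_branches=1000):
    queue, branches = [domain], 0
    while queue:
        d = queue.pop()
        if lower_bound(d) > 0:       # property certified on this sub-domain
            continue
        branches += 1
        if branches > max_branches:
            return "unknown"         # branching budget exhausted
        queue.extend(split(d))       # refine into smaller sub-domains
    return "verified"

# Toy property g(x) = x^2 - x + 0.3 > 0 on [-1, 1] (true; min 0.05 at 0.5).
def lb(d):                           # loose interval lower bound of g
    lo, hi = d
    x2_min = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return x2_min - hi + 0.3

def split(d):
    lo, hi = d
    mid = (lo + hi) / 2
    return [(lo, mid), (mid, hi)]

print(verify(lb, split, (-1.0, 1.0)))  # "verified"
```

Tighter bounds (like the GNN-computed dual solutions above) certify more domains without splitting, which is exactly the source of the reported reduction in branch counts.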

【11】 Task and Situation Structures for Service Agent Planning

Authors: Hao Yang, Tavan Eftekhar, Chad Esselink, Yan Ding, Shiqi Zhang
Affiliations: Ford Motor Company, USA; State University of New York - Binghamton, USA
Link: https://arxiv.org/abs/2107.12851
Abstract: Everyday tasks are characterized by their varieties and variations, and frequently are not clearly specified to service agents. This paper presents a comprehensive approach to enable a service agent to deal with everyday tasks in open, uncontrolled environments. We introduce a generic structure for representing tasks, and another structure for representing situations. Based on the two newly introduced structures, we present a methodology of situation handling that avoids hard-coding domain rules while improving the scalability of real-world task planning systems.

【12】 A Storytelling Robot managing Persuasive and Ethical Stances via ACT-R: an Exploratory Study

Authors: Agnese Augello, Giuseppe Città, Manuel Gentile, Antonio Lieto
Affiliations: ICAR-CNR, Palermo, Italy; ITD-CNR, Italy; Università di Torino, Dipartimento di Informatica, Italy
Link: https://arxiv.org/abs/2107.12845
Abstract: We present a storytelling robot, controlled via the ACT-R cognitive architecture, able to adopt different persuasive techniques and ethical stances while conversing about some topics concerning COVID-19. The main contribution of the paper consists in the proposal of a needs-driven model that guides and evaluates, during the dialogue, the use (if any) of persuasive techniques available in the agent procedural memory. The portfolio of persuasive techniques tested in such a model ranges from the use of storytelling, to framing techniques and rhetorical-based arguments. To the best of our knowledge, this represents the first attempt of building a persuasive agent able to integrate a mix of explicitly grounded cognitive assumptions about dialogue management, storytelling and persuasive techniques as well as ethical attitudes. The paper presents the results of an exploratory evaluation of the system on 63 participants.

【13】 Adversarial Stacked Auto-Encoders for Fair Representation Learning

Authors: Patrik Joslin Kenfack, Adil Mehmood Khan, Rasheed Hussain, S. M. Ahsan Kazmi
Affiliations: Innopolis University
Note: ICML 2021 ML4data Workshop Paper
Link: https://arxiv.org/abs/2107.12826
Abstract: Training machine learning models with only accuracy as the final goal may promote prejudices and discriminatory behaviors embedded in the data. One solution is to learn latent representations that fulfill specific fairness metrics. Different types of learning methods are employed to map data into the fair representational space. The main purpose is to learn a latent representation of data that scores well on a fairness metric while maintaining the usability for the downstream task. In this paper, we propose a new fair representation learning approach that leverages different levels of representation of data to tighten the fairness bounds of the learned representation. Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces results in an improvement of fairness compared to other existing approaches.

【14】 Open-Ended Learning Leads to Generally Capable Agents

Authors: Open-Ended Learning Team, Adam Stooke, Anuj Mahajan, Catarina Barros, Charlie Deck, Jakob Bauer, Jakub Sygnowski, Maja Trebacz, Max Jaderberg, Michael Mathieu, Nat McAleese, Nathalie Bradley-Schmieg, Nathaniel Wong, Nicolas Porcel, Roberta Raileanu, Steph Hughes-Fitt, Valentin Dalibard, Wojciech Marian Czarnecki
Affiliations: DeepMind, London, UK
Link: https://arxiv.org/abs/2107.12808
Abstract: In this work we create agents that can perform well beyond a single, individual task, that exhibit much wider generalisation of behaviour to a massive, rich space of challenges. We define a universe of tasks within an environment domain and demonstrate the ability to train agents that are generally capable across this vast space and beyond. The environment is natively multi-agent, spanning the continuum of competitive, cooperative, and independent games, which are situated within procedurally generated physical 3D worlds. The resulting space is exceptionally diverse in terms of the challenges posed to agents, and as such, even measuring the learning progress of an agent is an open research problem. We propose an iterative notion of improvement between successive generations of agents, rather than seeking to maximise a singular objective, allowing us to quantify progress despite tasks being incomparable in terms of achievable rewards. We show that through constructing an open-ended learning process, which dynamically changes the training task distributions and training objectives such that the agent never stops learning, we achieve consistent learning of new behaviours. The resulting agent is able to score reward in every one of our humanly solvable evaluation levels, with behaviour generalising to many held-out points in the universe of tasks. Examples of this zero-shot generalisation include good performance on Hide and Seek, Capture the Flag, and Tag. Through analysis and hand-authored probe tasks we characterise the behaviour of our agent, and find interesting emergent heuristic behaviours such as trial-and-error experimentation, simple tool use, option switching, and cooperation. Finally, we demonstrate that the general capabilities of this agent could unlock larger scale transfer of behaviour through cheap finetuning.

【15】 HPTMT: Operator-Based Architecture for Scalable High-Performance Data-Intensive Frameworks

Authors: Supun Kamburugamuve, Chathura Widanage, Niranda Perera, Vibhatha Abeykoon, Ahmet Uyar, Thejaka Amila Kanewala, Gregor von Laszewski, Geoffrey Fox
Affiliations: Luddy School of Informatics, Computing and Engineering, Bloomington, IN, USA; Digital Science Center, Bloomington, IN, USA; Indiana University Alumni, IN, USA; Biocomplexity Institute & Initiative and Computer Science Dept., University of Virginia
Link: https://arxiv.org/abs/2107.12807
Abstract: Data-intensive applications impact many domains, and their steadily increasing size and complexity demands high-performance, highly usable environments. We integrate a set of ideas developed in various data science and data engineering frameworks. They employ a set of operators on specific data abstractions that include vectors, matrices, tensors, graphs, and tables. Our key concepts are inspired from systems like MPI, HPF (High-Performance Fortran), NumPy, Pandas, Spark, Modin, PyTorch, TensorFlow, RAPIDS (NVIDIA), and OneAPI (Intel). Further, it is crucial to support different languages in everyday use in the Big Data arena, including Python, R, C++, and Java. We note the importance of Apache Arrow and Parquet for enabling language-agnostic high performance and interoperability. In this paper, we propose High-Performance Tensors, Matrices and Tables (HPTMT), an operator-based architecture for data-intensive applications, and identify the fundamental principles needed for performance and usability success. We illustrate these principles by a discussion of examples using our software environments, Cylon and Twister2, that embody HPTMT.

【16】 Towards Industrial Private AI: A two-tier framework for data and model security

Authors: Sunder Ali Khowaja, Kapal Dev, Nawab Muhammad Faseeh Qureshi, Parus Khuwaja, Luca Foschini
Affiliations: Sungkyunkwan University
Note: 9 pages, 4 figures, 1 table, magazine article
Link: https://arxiv.org/abs/2107.12806
Abstract: With the advances in 5G and IoT devices, industries are vastly adopting artificial intelligence (AI) techniques for improving classification and prediction-based services. However, the use of AI also raises concerns regarding data privacy and security that can be misused or leaked. Private AI was recently coined to address the data security issue by combining AI with encryption techniques, but existing studies have shown that model inversion attacks can be used to reverse engineer the images from model parameters. In this regard, we propose a federated learning and encryption-based private (FLEP) AI framework that provides two-tier security for data and model parameters in an IIoT environment. We propose a three-layer encryption method for data security and provide a hypothetical method to secure the model parameters. Experimental results show that the proposed method achieves better encryption quality at the expense of slightly increased execution time. We also highlight several open issues and challenges regarding the FLEP AI framework's realization.

【17】 Inclusion, equality and bias in designing online mass deliberative platforms

Authors: Ruth Shortall, Anatol Itten, Michiel van der Meer, Pradeep K. Murukannaiah, Catholijn M. Jonker
Affiliations: TU Delft; Leiden University
Link: https://arxiv.org/abs/2107.12711
Abstract: Designers of online deliberative platforms aim to counter the degrading quality of online debates and eliminate online discrimination based on class, race or gender. Support technologies such as machine learning and natural language processing open avenues for widening the circle of people involved in deliberation, moving from small groups to "crowd" scale. Some design features of large-scale online discussion systems allow larger numbers of people to discuss shared problems, enhance critical thinking, and formulate solutions. However, scaling up deliberation is challenging. We review the transdisciplinary literature on the design of digital mass-deliberation platforms and examine the commonly featured design aspects (e.g., argumentation support, automated facilitation, and gamification). We find that the literature is heavily focused on developing technical fixes for scaling up deliberation, with a heavy western influence on design, and that test users skew young and highly educated. Contrastingly, there is a distinct lack of discussion on the nature of the design process, the inclusion of stakeholders and issues relating to inclusion, which may unwittingly perpetuate bias. Another tendency of deliberation platforms is to nudge participants to desired forms of argumentation, and to simplify definitions of good and bad arguments to fit algorithmic purposes. Few studies bridge disciplines between deliberative theory, design and engineering. As a result, scaling up deliberation will likely advance in separate systemic siloes. We make design and process recommendations to correct this course and suggest avenues for future research.

【18】 QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension 标题:问答数据集爆炸:面向问答和阅读理解的自然语言处理资源分类

作者:Anna Rogers,Matt Gardner,Isabelle Augenstein 机构: University of Copenhagen (Denmark), Allen Institute for Artificial Intelligence 备注:Under review 链接:https://arxiv.org/abs/2107.12708 摘要:近几年来,随着对NLP中深度学习模型的大量研究,对跟踪建模过程所需的基准数据集也做了大量的工作。在这方面,问答和阅读理解尤其丰富,在过去两年中出现了80多个新的数据集。这项研究是迄今为止最大的实地调查。我们提供了当前资源的各种格式和域的概述,突出了当前的缺陷,以供将来的工作参考。我们进一步讨论了目前问答中“推理类型”的分类,并提出了一种新的分类法。我们还讨论了过度关注英语的含义,并调查了当前其他语言的单语资源和多语资源。这项研究的目标既包括寻找丰富现有数据指针的从业者,也包括研究新资源的研究人员。 摘要:Alongside huge volumes of research on deep learning models in NLP in the recent years, there has been also much work on benchmark datasets needed to track modeling progress. Question answering and reading comprehension have been particularly prolific in this regard, with over 80 new datasets appearing in the past two years. This study is the largest survey of the field to date. We provide an overview of the various formats and domains of the current resources, highlighting the current lacunae for future work. We further discuss the current classifications of ``reasoning types" in question answering and propose a new taxonomy. We also discuss the implications of over-focusing on English, and survey the current monolingual resources for other languages and multilingual resources. The study is aimed at both practitioners looking for pointers to the wealth of existing data, and at researchers working on new resources.

【19】 Unsupervised Deep Anomaly Detection for Multi-Sensor Time-Series Signals 标题:多传感器时间序列信号的无监督深度异常检测

作者:Yuxin Zhang,Yiqiang Chen,Jindong Wang,Zhiwen Pan 机构: Pan are with Beijing Key Laboratory of Mobile Comput-ing and Pervasive Device, Institute of Computing Technology, ChineseAcademy of Sciences and University of Chinese Academy of Sciences 备注:Accepted to IEEE Transactions on Knowledge and Data Engineering (IEEE TKDE) as a regular paper; 14 pages 链接:https://arxiv.org/abs/2107.12626 摘要:目前,多传感器技术已广泛应用于医疗保健、人类活动识别、工业控制等领域。这些传感器可以产生大量的多元时间序列数据。基于多传感器时间序列数据的无监督异常检测在机器学习研究中具有重要意义。关键的挑战是通过捕获多传感器数据的时空相关性来发现广义正态模式。除此之外,噪声数据往往与训练数据交织在一起,这很可能使模型难以区分正常数据、异常数据和噪声数据。以往的研究很少能够共同应对这两个挑战。本文提出了一种新的基于深度学习的异常检测算法&深度卷积自编码记忆网络(CAE-M)。我们首先构建了一个深度卷积自动编码器,用最大平均差(MMD)来描述多传感器数据的空间相关性,以便更好地区分噪声、正常和异常数据。然后,我们构造一个由线性(自回归模型)和非线性预测(带注意的双向LSTM)组成的记忆网络,从时间序列数据中获取时间相关性。最后,CAE-M对这两个子网进行联合优化。在HAR和HC数据集上,我们将该方法与几种最新的异常检测方法进行了比较。实验结果表明,本文提出的模型优于现有的方法。 摘要:Nowadays, multi-sensor technologies are applied in many fields, e.g., Health Care (HC), Human Activity Recognition (HAR), and Industrial Control System (ICS). These sensors can generate a substantial amount of multivariate time-series data. Unsupervised anomaly detection on multi-sensor time-series data has been proven critical in machine learning researches. The key challenge is to discover generalized normal patterns by capturing spatial-temporal correlation in multi-sensor data. Beyond this challenge, the noisy data is often intertwined with the training data, which is likely to mislead the model by making it hard to distinguish between the normal, abnormal, and noisy data. Few of previous researches can jointly address these two challenges. In this paper, we propose a novel deep learning-based anomaly detection algorithm called Deep Convolutional Autoencoding Memory network (CAE-M). We first build a Deep Convolutional Autoencoder to characterize spatial dependence of multi-sensor data with a Maximum Mean Discrepancy (MMD) to better distinguish between the noisy, normal, and abnormal data. 
Then, we construct a Memory Network consisting of linear (Autoregressive Model) and non-linear predictions (Bidirectional LSTM with Attention) to capture temporal dependence from time-series data. Finally, CAE-M jointly optimizes these two subnetworks. We empirically compare the proposed approach with several state-of-the-art anomaly detection methods on HAR and HC datasets. Experimental results demonstrate that our proposed model outperforms these existing methods.
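摘要中用最大平均差(MMD)来刻画分布差异。下面是一个用 NumPy 写的最小示意(并非论文 CAE-M 的实现;核函数与带宽均为假设),展示 MMD^2 的有偏估计如何区分两组样本:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # RBF 核矩阵 K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # 有偏的 MMD^2 估计: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(64, 8)), rng.normal(size=(64, 8)))
diff = mmd2(rng.normal(size=(64, 8)), rng.normal(3.0, 1.0, size=(64, 8)))
print(same < diff)  # 两组分布差异越大, MMD^2 越大
```

实际模型中,MMD 作用于自动编码器的隐层特征,并作为训练目标的一部分参与优化。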

【20】 Federated Learning Meets Natural Language Processing: A Survey 标题:联邦学习遇到自然语言处理:综述

作者:Ming Liu,Stella Ho,Mengqi Wang,Longxiang Gao,Yuan Jin,He Zhang 机构:Deakin University 备注:19 pages 链接:https://arxiv.org/abs/2107.12603 摘要:联邦学习的目的是在不牺牲本地数据隐私的情况下,从多个分散的边缘设备(如手机)或服务器学习机器学习模型。最近的自然语言处理技术依赖于深度学习和大量预先训练的语言模型。然而,无论是大型的深层神经模型还是语言模型,都是用大量的数据来训练的,而这些数据通常都在服务器端。由于文本数据广泛来源于终端用户,在这项工作中,我们研究了最近使用联合学习作为学习框架的NLP模型和技术。我们的调查讨论了联邦自然语言处理中的主要挑战,包括算法挑战、系统挑战以及隐私问题。我们还对现有的联邦NLP评估方法和工具进行了评论。最后,我们指出了目前的研究差距和未来的发展方向。 摘要:Federated Learning aims to learn machine learning models from multiple decentralized edge devices (e.g. mobiles) or servers without sacrificing local data privacy. Recent Natural Language Processing techniques rely on deep learning and large pre-trained language models. However, both big deep neural and language models are trained with huge amounts of data which often lies on the server side. Since text data is widely originated from end users, in this work, we look into recent NLP models and techniques which use federated learning as the learning framework. Our survey discusses major challenges in federated natural language processing, including the algorithm challenges, system challenges as well as the privacy issues. We also provide a critical review of the existing Federated NLP evaluation methods and tools. Finally, we highlight the current research gaps and future directions.
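作为背景,联邦学习中最常用的聚合步是 FedAvg:服务器按各客户端样本量对本地参数做加权平均。下面是一个最小示意(示例数值均为假设,与该综述的具体内容无直接对应):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # 按各客户端样本量加权平均本地模型参数 (FedAvg 的聚合步)
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# 三个客户端的本地参数向量及其本地数据量 (虚构示例)
locals_ = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
sizes = [10, 30, 60]
global_w = fedavg(locals_, sizes)
print(global_w)  # [0.7 0.9]
```

在联邦 NLP 中,被聚合的通常是大型语言模型的参数或其部分层,隐私与通信开销正是综述讨论的系统性挑战。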

【21】 Identify Apple Leaf Diseases Using Deep Learning Algorithm 标题:基于深度学习算法的苹果叶部病害识别

作者:Daping Zhang,Hongyu Yang,Jiayu Cao 链接:https://arxiv.org/abs/2107.12598 摘要:农业是一个国家社会经济的重要产业。然而,病虫害给农业生产造成了巨大的减产,而对农民避灾的指导却不够。为了解决这个问题,我们通过建立分类模型,将CNN应用于植物病害识别。在3642幅苹果叶片图像的数据集上,为了节省训练时间,我们采用了基于卷积神经网络(CNN)和Fastai框架的预训练图像分类模型ResNet34。总体分类准确率为93.765%。 摘要:Agriculture is an essential industry in both the society and economy of a country. However, pests and diseases cause a great amount of reduction in agricultural production, while there is not sufficient guidance for farmers to avoid this disaster. To address this problem, we apply CNNs to plant disease recognition by building a classification model. On a dataset of 3,642 images of apple leaves, we use a pre-trained image classification model, ResNet34, based on a convolutional neural network (CNN) with the Fastai framework in order to save training time. Overall, the accuracy of classification is 93.765%.

【22】 Template-based Chatbot for Agriculture Related FAQs 标题:基于模板的农业常见问题聊天机器人

作者:Daping Zhang,Xin Chen,Yujia Zhang,Shihan Qin 链接:https://arxiv.org/abs/2107.12595 摘要:农业是社会的基础产业,是粮食供给的基础,也是就业和GDP增长的重要来源。然而,专家数量不足,无法满足农民的咨询需求。为了解决这个问题,我们设计了一个聊天机器人来回答农业领域的常见问题。基于模板的问题由AIML回答,其余基于服务的问题则使用LSA处理。这个聊天机器人将帮助农民方便高效地处理行业问题。 摘要:Agriculture is the fundamental industry of society, which is the basis of the food supply and an important source of employment and GDP growth. However, there are not enough experts to meet farmers' demand for advice. To address this problem, we design a chatbot to answer frequently asked questions in the agriculture field. Template-based questions are answered by AIML, while LSA is used for the other, service-based questions. This chatbot will assist farmers by dealing with industry problems conveniently and efficiently.
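摘要提到用 LSA 处理模板之外的问题。下面用 NumPy 给出一个极简的 LSA 检索示意(FAQ 条目、词袋表示与截断维数均为假设,并非论文的实现):

```python
import numpy as np

# 极简 LSA 检索: 对词袋矩阵做截断 SVD, 在潜语义空间中用余弦相似度匹配问题
faqs = ["how to treat wheat rust",
        "best fertilizer for rice",
        "how to store harvested rice"]
vocab = sorted({w for q in faqs for w in q.split()})

def bow(text):
    # 词袋向量 (这里用原始词频; 实际系统常用 TF-IDF 加权)
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            v[vocab.index(w)] += 1
    return v

A = np.stack([bow(q) for q in faqs])          # 文档-词项矩阵
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                          # 保留前 k 个潜在语义维度
docs_k = U[:, :k] * s[:k]                      # 各 FAQ 在潜语义空间的坐标

def answer(query):
    q_k = bow(query) @ Vt[:k].T                # 将查询投影到同一空间
    sims = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1)
                           * np.linalg.norm(q_k) + 1e-9)
    return faqs[int(np.argmax(sims))]

print(answer("fertilizer for my rice field"))
```

AIML 部分则按模板直接匹配,两者结合即摘要描述的混合式问答流程。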

【23】 Combining Probabilistic Logic and Deep Learning for Self-Supervised Learning 标题:概率逻辑与深度学习相结合的自监督学习

作者:Hoifung Poon,Hai Wang,Hunter Lang 备注:Book chapter. arXiv admin note: substantial text overlap with arXiv:2012.12474, arXiv:1808.08485, arXiv:2008.12878 链接:https://arxiv.org/abs/2107.12591 摘要:深度学习已被证明对各种应用任务有效,但其适用性受到对注释示例的依赖的限制。自监督学习已经成为缓解监督瓶颈的一个很有前途的方向,但是现有的工作主要集中在利用未标记数据中的共现来进行任务不可知表征学习,例如蒙面语言模型预训练。在本章中,我们将探讨特定于任务的自我监督,它利用领域知识为最终应用程序自动注释有噪声的训练示例,方法是引入用于注释单个实例的标记函数,或者对相互依赖的标记决策施加约束。我们首先提出了深度概率逻辑(deep probabilistic logic,DPL),它通过将概率逻辑与深度学习结合起来,为特定任务的自我监控提供了一个统一的框架。DPL将未知标签表示为潜在变量,并利用概率逻辑结合多种自监督机制,利用变分EM端到端地训练深度神经网络。接下来,我们提出了自监督自监督(self-supervisory,S4),它增加了DPL自动学习新自监督的能力。从一个初始的种子自我监督开始,S4迭代地使用深度神经网络来提出新的自我监督。它们要么直接添加(结构化自我训练的一种形式),要么由人类专家验证(如在基于特征的主动学习中)。在生物医学机器阅读和各种文本分类任务等实际应用中的实验表明,特定于任务的自我监督可以有效地利用领域专业知识,并且通常只需很少的人力就可以达到监督方法的准确性。 摘要:Deep learning has proven effective for various application tasks, but its applicability is limited by the reliance on annotated examples. Self-supervised learning has emerged as a promising direction to alleviate the supervision bottleneck, but existing work focuses on leveraging co-occurrences in unlabeled data for task-agnostic representation learning, as exemplified by masked language model pretraining. In this chapter, we explore task-specific self-supervision, which leverages domain knowledge to automatically annotate noisy training examples for end applications, either by introducing labeling functions for annotating individual instances, or by imposing constraints over interdependent label decisions. We first present deep probabilistic logic(DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning. DPL represents unknown labels as latent variables and incorporates diverse self-supervision using probabilistic logic to train a deep neural network end-to-end using variational EM. Next, we present self-supervised self-supervision(S4), which adds to DPL the capability to learn new self-supervision automatically. 
Starting from an initial seed self-supervision, S4 iteratively uses the deep neural network to propose new self-supervision. These are either added directly (a form of structured self-training) or verified by a human expert (as in feature-based active learning). Experiments on real-world applications such as biomedical machine reading and various text classification tasks show that task-specific self-supervision can effectively leverage domain expertise and often match the accuracy of supervised methods with a tiny fraction of human effort.
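作为示意,下面用纯 Python 勾勒"标注函数 + 投票"这一弱监督的最简形式(标注函数与类别均为虚构;DPL 实际是用概率逻辑与变分 EM 对多种自监督信号做联合推断,而非简单多数票):

```python
# 极简弱监督标注示意: 多个启发式标注函数投票, 产生带噪训练标签
ABSTAIN = None

def lf_keyword(x):
    # 含 "refund" 视为投诉类 (1), 否则弃权
    return 1 if "refund" in x else ABSTAIN

def lf_exclaim(x):
    # 含感叹号视为投诉类 (1), 否则弃权
    return 1 if "!" in x else ABSTAIN

def lf_greeting(x):
    # 以 "thanks" 开头视为非投诉类 (0), 否则弃权
    return 0 if x.startswith("thanks") else ABSTAIN

def weak_label(x, lfs):
    votes = [v for lf in lfs if (v := lf(x)) is not ABSTAIN]
    if not votes:
        return ABSTAIN
    # 简单多数票; DPL 在此处代之以概率逻辑的联合推断
    return max(set(votes), key=votes.count)

lfs = [lf_keyword, lf_exclaim, lf_greeting]
print(weak_label("i want a refund now!", lfs))   # 1
print(weak_label("thanks for the help", lfs))    # 0
```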

【24】 Nearest Neighborhood-Based Deep Clustering for Source Data-absent Unsupervised Domain Adaptation 标题:基于最近邻域的源数据无监督域自适应深度聚类

作者:Song Tang,Yan Yang,Zhiyuan Ma,Norman Hendrich,Fanyu Zeng,Shuzhi Sam Ge,Changshui Zhang,Jianwei Zhang 机构: University ofShanghai for Science and Technology, ChinaZhiyuan Ma are with the Institute of Machine Intelligence 链接:https://arxiv.org/abs/2107.12585 摘要:在经典的无监督域自适应(UDA)算法中,标记源数据在训练阶段是可用的。然而,在许多现实场景中,由于隐私保护和信息安全等原因,源数据是不可访问的,只有经过源域训练的模型才可用。本文提出了一种新的深度聚类方法。针对特征级的动态聚类问题,在数据之间的几何结构中引入了额外的约束来辅助聚类过程。具体地说,我们提出了一个基于几何的约束,称为最近邻语义一致性(SCNNH),并用它来鼓励鲁棒聚类。为了达到这个目的,我们为每个目标数据构造最近邻域,并将其作为基本的聚类单元,通过在几何体上建立目标。此外,我们开发了一个更符合SCNNH的结构,并附加了一个语义可信度约束,称为语义超最近邻(SHNNH)。之后,我们将我们的方法扩展到这个新的几何体。在三个具有挑战性的UDA数据集上的大量实验表明,我们的方法达到了最先进的结果。该方法在所有数据集上都有显著的改进(采用SHNNH后,在大规模数据集上平均精度提高了3.0%以上)。代码位于https://github.com/tntek/N2DCX. 摘要:In the classic setting of unsupervised domain adaptation (UDA), the labeled source data are available in the training phase. However, in many real-world scenarios, owing to some reasons such as privacy protection and information security, the source data is inaccessible, and only a model trained on the source domain is available. This paper proposes a novel deep clustering method for this challenging task. Aiming at the dynamical clustering at feature-level, we introduce extra constraints hidden in the geometric structure between data to assist the process. Concretely, we propose a geometry-based constraint, named semantic consistency on the nearest neighborhood (SCNNH), and use it to encourage robust clustering. To reach this goal, we construct the nearest neighborhood for every target data and take it as the fundamental clustering unit by building our objective on the geometry. Also, we develop a more SCNNH-compliant structure with an additional semantic credibility constraint, named semantic hyper-nearest neighborhood (SHNNH). After that, we extend our method to this new geometry. Extensive experiments on three challenging UDA datasets indicate that our method achieves state-of-the-art results. 
The proposed method shows significant improvement on all datasets (as we adopt SHNNH, the average accuracy increases by over 3.0% on the large-scale dataset). Code is available at https://github.com/tntek/N2DCX.
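摘要中以每个目标样本的最近邻域作为基本聚类单元。下面是一个构造 k 近邻邻域的最小 NumPy 示意(欧氏距离与 k 值均为假设,仅演示"邻域"这一几何构件):

```python
import numpy as np

def nearest_neighborhoods(feats, k=3):
    # 为每个样本构造其 k 近邻邻域 (含自身), 作为基本聚类单元
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, :k + 1]    # 每行: 自身 + k 个最近邻

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0]])
hoods = nearest_neighborhoods(pts, k=2)
print(hoods[0])   # 样本 0 的邻域应落在前三个彼此靠近的点之内
```

SCNNH 约束即要求同一邻域内的样本在聚类时保持语义一致。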

【25】 Pointer Value Retrieval: A new benchmark for understanding the limits of neural network generalization 标题:指针值检索:理解神经网络泛化极限的新基准

作者:Chiyuan Zhang,Maithra Raghu,Jon Kleinberg,Samy Bengio 机构:Google Research, Brain Team, Cornell University 链接:https://arxiv.org/abs/2107.12580 摘要:深度学习的成功在很大程度上依赖于神经网络对看不见的数据输出有意义的预测的能力——泛化。然而,尽管它的重要性,仍然有基本的开放性问题,如何神经网络推广。神经网络在多大程度上依赖于记忆——看到高度相似的训练实例——它们在多大程度上能够进行人类智能化的推理——识别数据背后的抽象规则?在本文中,我们介绍了一个新的基准,指针值检索(PVR)任务,探索神经网络泛化的局限性。虽然PVR任务可以由视觉输入和符号输入组成,每种输入都有不同的难度,但它们都有一个简单的基本规则。PVR任务输入的一部分充当指针,给出输入的另一部分的位置,该部分构成值(和输出)。我们证明了这种任务结构为理解泛化提供了一个丰富的测试平台,我们的实证研究表明,基于数据集大小、任务复杂性和模型结构的神经网络性能有很大的变化。位置、值和指针规则的交互作用还允许通过引入分布偏移和增加函数复杂性来开发细致入微的泛化测试。这些既揭示了微妙的失败,也揭示了令人惊讶的成功,表明了在这个基准上许多有希望的探索方向。 摘要:The successes of deep learning critically rely on the ability of neural networks to output meaningful predictions on unseen data -- generalization. Yet despite its criticality, there remain fundamental open questions on how neural networks generalize. How much do neural networks rely on memorization -- seeing highly similar training examples -- and how much are they capable of human-intelligence styled reasoning -- identifying abstract rules underlying the data? In this paper we introduce a novel benchmark, Pointer Value Retrieval (PVR) tasks, that explore the limits of neural network generalization. While PVR tasks can consist of visual as well as symbolic inputs, each with varying levels of difficulty, they all have a simple underlying rule. One part of the PVR task input acts as a pointer, giving the location of a different part of the input, which forms the value (and output). We demonstrate that this task structure provides a rich testbed for understanding generalization, with our empirical study showing large variations in neural network performance based on dataset size, task complexity and model architecture. The interaction of position, values and the pointer rule also allow the development of nuanced tests of generalization, by introducing distribution shift and increasing functional complexity. 
These reveal both subtle failures and surprising successes, suggesting many promising directions of exploration on this benchmark.
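按摘要的描述,PVR 任务输入的一部分充当指针,指向输入另一部分的位置,该位置的值即为输出。下面用标准库生成符号版 PVR 样例(具体格式为假设,细节以论文为准):

```python
import random

def make_pvr_example(rng, length=10):
    # 符号版 PVR: 首位数字作为指针, 指向其余数字中的一个位置, 该位置的值即标签
    digits = [rng.randrange(10) for _ in range(length)]
    pointer = rng.randrange(length - 1)          # 指针限定在合法范围内
    digits[0] = pointer
    value = digits[1 + pointer]                  # 规则: 输出 = 指针所指位置的值
    return digits, value

rng = random.Random(0)
x, y = make_pvr_example(rng)
print(x, "->", y)
```

在此基础上即可通过改变 length、值分布或指针规则的函数复杂度,来构造摘要所说的分布偏移测试。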

【26】 Dual Slot Selector via Local Reliability Verification for Dialogue State Tracking 标题:基于局部可靠性验证的对话状态跟踪双槽选择器

作者:Jinyu Guo,Kai Shuang,Jijie Li,Zihan Wang 机构:State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Graduate School of Information Science and Technology, The University of Tokyo 备注:Accepted by ACL-IJCNLP 2021 main conference (long paper) 链接:https://arxiv.org/abs/2107.12578 摘要:对话状态跟踪(DST)的目标是在给定所有先前的对话上下文的情况下预测当前的对话状态。现有的方法通常从零开始预测每个回合的对话状态。但是,每个回合中绝大多数的槽都应该简单地继承上一回合的槽值。因此,在每一轮中平等对待时隙的机制不仅效率低下,而且可能由于产生冗余时隙值而导致额外的错误。为了解决这个问题,我们设计了两级DSS-DST,它由基于当前回合对话的双时隙选择器和基于对话历史的时隙值发生器组成。双时隙选择器从两个方面决定每个时隙是更新时隙值还是继承前一回合的时隙值:(1)它与当前回合对话话语之间是否存在强关系(2) 如果通过当前匝数对话可以获得高可靠性的时隙值。选择要更新的插槽被允许进入插槽值生成器以通过混合方法更新值,而其他插槽直接继承上一轮的值。实验结果表明,该方法在multiwoz2.0、multiwoz2.1和multiwoz2.2数据集上的联合精度分别达到56.93%、60.73%和58.04%,取得了新的水平,并有显著的改进。 摘要:The goal of dialogue state tracking (DST) is to predict the current dialogue state given all previous dialogue contexts. Existing approaches generally predict the dialogue state at every turn from scratch. However, the overwhelming majority of the slots in each turn should simply inherit the slot values from the previous turn. Therefore, the mechanism of treating slots equally in each turn not only is inefficient but also may lead to additional errors because of the redundant slot value generation. To address this problem, we devise the two-stage DSS-DST which consists of the Dual Slot Selector based on the current turn dialogue, and the Slot Value Generator based on the dialogue history. The Dual Slot Selector determines each slot whether to update slot value or to inherit the slot value from the previous turn from two aspects: (1) if there is a strong relationship between it and the current turn dialogue utterances; (2) if a slot value with high reliability can be obtained for it through the current turn dialogue. The slots selected to be updated are permitted to enter the Slot Value Generator to update values by a hybrid method, while the other slots directly inherit the values from the previous turn. 
Empirical results show that our method achieves 56.93%, 60.73%, and 58.04% joint accuracy on MultiWOZ 2.0, MultiWOZ 2.1, and MultiWOZ 2.2 datasets respectively and achieves a new state-of-the-art performance with significant improvements.
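摘要的核心直觉是:大多数槽位应直接继承上一轮的值,只有被选中的槽位才进入生成器更新。下面用纯 Python 勾勒这一"更新或继承"的骨架(槽名与取值均为虚构示例;哪些槽位该更新,在论文中由双槽选择器学习判定):

```python
# 对话状态更新的极简示意: 本轮给出高可靠度新值的槽位才更新, 其余继承上一轮
def update_state(prev_state, turn_slot_values):
    state = dict(prev_state)                  # 默认全部继承上一轮槽值
    for slot, value in turn_slot_values.items():
        if value is not None:                 # None 表示选择器判定"继承"
            state[slot] = value
    return state

prev = {"hotel-area": "north", "hotel-stars": "4"}
turn = {"hotel-area": None, "hotel-stars": "5"}   # 本轮只提到星级
new = update_state(prev, turn)
print(new)  # {'hotel-area': 'north', 'hotel-stars': '5'}
```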

【27】 CCGL: Contrastive Cascade Graph Learning 标题:CCGL:对比级联图学习

作者:Xovee Xu,Fan Zhou,Kunpeng Zhang,Siyuan Liu 机构: University of Maryland 备注:Submitted to IEEE, including 14 pages, 7 figures, and 11 tables 链接:https://arxiv.org/abs/2107.12576 摘要:监督学习是信息级联建模的一种常用方法,但在训练过程中往往需要大量的标记数据,而且训练后的模型不易在任务和数据集中推广。半监督学习有助于在预训练中对未标记数据进行级联理解。它通常学习细粒度的特征级表示,这很容易导致对下游任务的过度拟合。近年来,对比自监督学习被设计用来缓解语言和视觉任务中的这两个基本问题。然而,它对于级联建模的直接适用性,特别是与图级联相关的任务,仍然没有得到充分的研究。在这项工作中,我们提出了对比级联图学习(CCGL),一个新的框架级联图表示学习的对比,自我监督,和任务无关的方式。特别是,CCGL首先设计了一种有效的数据扩充策略来捕获变化和不确定性。其次,利用未标记和标记数据,通过自监督对比预训练,学习图级联任务的通用模型。第三,CCGL通过使用标记数据进行微调来学习特定于任务的级联模型。最后,为了使模型能够在数据集和级联应用程序之间进行转换,CCGL通过使用师生结构的蒸馏进一步增强了模型。我们证明了CCGL在一些下游任务上显著优于其监督和半监督对手。 摘要:Supervised learning, while prevalent for information cascade modeling, often requires abundant labeled data in training, and the trained model is not easy to generalize across tasks and datasets. Semi-supervised learning facilitates unlabeled data for cascade understanding in pre-training. It often learns fine-grained feature-level representations, which can easily result in overfitting for downstream tasks. Recently, contrastive self-supervised learning is designed to alleviate these two fundamental issues in linguistic and visual tasks. However, its direct applicability for cascade modeling, especially graph cascade related tasks, remains underexplored. In this work, we present Contrastive Cascade Graph Learning (CCGL), a novel framework for cascade graph representation learning in a contrastive, self-supervised, and task-agnostic way. In particular, CCGL first designs an effective data augmentation strategy to capture variation and uncertainty. Second, it learns a generic model for graph cascade tasks via self-supervised contrastive pre-training using both unlabeled and labeled data. Third, CCGL learns a task-specific cascade model via fine-tuning using labeled data. Finally, to make the model transferable across datasets and cascade applications, CCGL further enhances the model via distillation using a teacher-student architecture. 
We demonstrate that CCGL significantly outperforms its supervised and semi-supervised counterparts for several downstream tasks.
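CCGL 的对比式预训练与常见的 InfoNCE 目标同族:同一级联图的两个增强视图互为正例,批内其余样本为负例。下面用 NumPy 给出批内对比损失的最小示意(温度系数与数据均为假设,并非论文实现):

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    # 对比损失示意: 相似度矩阵的对角线为正例对, 其余为负例
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)          # 数值稳定
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 32))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(16, 32)))
random_ = info_nce(z, rng.normal(size=(16, 32)))
print(aligned < random_)   # 对齐的增强视图对应更低的损失
```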

【28】 CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows 标题:CFLOW-AD:基于条件归一化流定位的实时无监督异常检测

作者:Denis Gudovskiy,Shun Ishizaka,Kazuki Kozuka 机构:Panasonic AI Lab, USA, Panasonic Technology Division, Japan 备注:Accepted to WACV 2022. Preprint 链接:https://arxiv.org/abs/2107.12571 摘要:在标记不可行的情况下,以及在异常样本在列车数据中完全缺失的情况下,基于定位的无监督异常检测具有许多实际应用。虽然最近提出的用于此类数据设置的模型实现了高精度度量,但其复杂性是实时处理的限制因素。在本文中,我们提出了一个实时模型,并分析推导了它与先验方法的关系。我们的CFLOW-AD模型是基于一个用于定位异常检测的条件规范化流框架。特别地,CFLOW-AD由一个有区别的预训练编码器和一个多尺度生成解码器组成,后者显式地估计编码特征的可能性。我们的方法产生了一个计算效率和内存效率都很高的模型:CFLOW-AD在相同的输入设置下比现有的最新技术快10倍,而且更小。我们在MVTec数据集上的实验表明,CFLOW-AD在检测任务上比以前的方法有0.36%的AUROC,在定位任务上比以前的方法有1.12%的AUROC和2.5%的AUPRO。我们用完全可复制的实验来开放代码。 摘要:Unsupervised anomaly detection with localization has many practical applications when labeling is infeasible and, moreover, when anomaly examples are completely missing in the train data. While recently proposed models for such data setup achieve high accuracy metrics, their complexity is a limiting factor for real-time processing. In this paper, we propose a real-time model and analytically derive its relationship to prior methods. Our CFLOW-AD model is based on a conditional normalizing flow framework adopted for anomaly detection with localization. In particular, CFLOW-AD consists of a discriminatively pretrained encoder followed by a multi-scale generative decoders where the latter explicitly estimate likelihood of the encoded features. Our approach results in a computationally and memory-efficient model: CFLOW-AD is faster and smaller by a factor of 10x than prior state-of-the-art with the same input setting. Our experiments on the MVTec dataset show that CFLOW-AD outperforms previous methods by 0.36% AUROC in detection task, by 1.12% AUROC and 2.5% AUPRO in localization task, respectively. We open-source our code with fully reproducible experiments.
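条件归一化流依靠变量替换公式给出精确似然。下面用 NumPy 勾勒一个仿射耦合层的前向与对数似然计算(实际模型中 shift 与 log_scale 由以另一半特征和条件向量为输入的网络给出,这里用常数示意,并非 CFLOW-AD 的实现):

```python
import numpy as np

def affine_coupling_forward(x, shift, log_scale):
    # 仿射耦合层: 前半维不变, 后半维做仿射变换; 对数雅可比行列式即 log_scale 之和
    d = x.shape[1] // 2
    x1, x2 = x[:, :d], x[:, d:]
    z = np.concatenate([x1, x2 * np.exp(log_scale) + shift], axis=1)
    log_det = np.sum(log_scale) * np.ones(x.shape[0])
    return z, log_det

def log_likelihood(x, shift, log_scale):
    # 变量替换公式: log p(x) = log N(z; 0, I) + log|det dz/dx|
    z, log_det = affine_coupling_forward(x, shift, log_scale)
    log_pz = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=1)
    return log_pz + log_det

x = np.zeros((2, 4))
ll = log_likelihood(x, shift=np.zeros(2), log_scale=np.zeros(2))
print(ll)  # 恒等变换下即 4 维标准正态在原点的对数密度 ≈ -3.6758
```

异常检测时,似然低于阈值的特征位置即被标记为异常区域。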

【29】 Probing neural networks with t-SNE, class-specific projections and a guided tour 标题:用t-SNE、特定类投影和导游探索神经网络

作者:Christopher R. Hoyt,Art B. Owen 机构:Stanford University 链接:https://arxiv.org/abs/2107.12547 摘要:我们使用图形化的方法来探测对图像进行分类的神经网络。网络中连续层的t-SNE输出图显示出数据点日益有序的排列。它们还可以揭示当数据逐层通过网络时,网络如何削弱甚至遗忘类内结构。我们使用特定于类的主成分的类似物来可视化后续层如何将类分开。这使我们能够将给定类别的图像从最典型到最不典型(就数据而言)进行排序,它们还可以作为非常有用的数据可视化投影坐标。我们发现它们在为动画数据可视化定义引导式巡游(guided tour)时特别有用。 摘要:We use graphical methods to probe neural nets that classify images. Plots of t-SNE outputs at successive layers in a network reveal increasingly organized arrangement of the data points. They can also reveal how a network can diminish or even forget about within-class structure as the data proceeds through layers. We use class-specific analogues of principal components to visualize how succeeding layers separate the classes. These allow us to sort images from a given class from most typical to least typical (in the data), and they also serve as very useful projection coordinates for data visualization. We find them especially useful when defining versions of guided tours for animated data visualization.
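摘要中按"典型到不典型"对某一类别的样本排序。下面用 NumPy 给出一个类内主成分排序的最小示意(以第一主成分坐标的绝对值近似"典型程度",这是对原文做法的一种简化假设):

```python
import numpy as np

def typicality_ranking(feats):
    # 类内主成分示意: 中心化后做 SVD, 按第一主成分坐标的绝对值排序
    mu = feats.mean(axis=0)
    centered = feats - mu
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = np.abs(centered @ Vt[0])       # 第一主成分坐标的绝对值
    return np.argsort(scores)               # 从最典型(接近类中心)到最不典型

rng = np.random.default_rng(0)
cls = rng.normal(size=(50, 8))
cls[0] += 10.0                              # 人为制造一个离群样本
order = typicality_ranking(cls)
print(order[-1])                            # 离群样本应排在最不典型的一端
```

同样的主成分坐标也可直接用作摘要所说的数据可视化投影坐标。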

【30】 Human-Level Reinforcement Learning through Theory-Based Modeling, Exploration, and Planning 标题:通过基于理论的建模、探索和规划实现人性化强化学习

作者:Pedro A. Tsividis,Joao Loula,Jake Burga,Nathan Foss,Andres Campero,Thomas Pouncy,Samuel J. Gershman,Joshua B. Tenenbaum 机构:Massachusetts Institute of Technology, Harvard University, Cambridge, MA , USA, Center for Brains, Minds, and Machines, ⋄Equal contribution 链接:https://arxiv.org/abs/2107.12544 摘要:强化学习(RL)研究的是一个agent如何在一个环境中通过一段时间的交互来获得奖励。机器RL的最新进展已经超过了人类在世界上最古老的棋盘游戏和许多经典视频游戏方面的专长,但它们需要大量的经验才能成功地学习——今天的算法中没有一种能解释人类如此快速地学习如此多不同任务的能力。在这里,我们提出了一种新的方法来应对这一挑战,这种方法基于一种特别强大的基于模型的RL形式,我们称之为基于理论的强化学习,因为它使用了类似于人类的直觉理论——丰富、抽象、因果的物理对象模型、意向代理及其交互作用——来探索和模拟环境,有效地计划以实现任务目标。我们在一个名为EMPA(探索、建模和规划代理)的视频游戏玩家代理中实例化了这种方法,该代理执行贝叶斯推理来学习用游戏引擎模拟器程序表示的概率生成模型,并在这些模型上运行内部仿真以支持高效的基于对象的,关系探索和启发式规划。EMPA在一套90款具有挑战性的Atari风格的视频游戏上与人类的学习效率非常匹配,只需几分钟的游戏时间就可以学习新游戏,并能有力地推广到新的游戏情境和新的水平。该模型还捕获了人们探索轨迹和学习动态中的细粒度结构。它的设计和行为为构建更通用的类人人工智能系统指明了一条前进的道路。 摘要:Reinforcement learning (RL) studies how an agent comes to achieve reward in an environment through interactions over time. Recent advances in machine RL have surpassed human expertise at the world's oldest board games and many classic video games, but they require vast quantities of experience to learn successfully -- none of today's algorithms account for the human ability to learn so many different tasks, so quickly. Here we propose a new approach to this challenge based on a particularly strong form of model-based RL which we call Theory-Based Reinforcement Learning, because it uses human-like intuitive theories -- rich, abstract, causal models of physical objects, intentional agents, and their interactions -- to explore and model an environment, and plan effectively to achieve task goals. 
We instantiate the approach in a video game playing agent called EMPA (the Exploring, Modeling, and Planning Agent), which performs Bayesian inference to learn probabilistic generative models expressed as programs for a game-engine simulator, and runs internal simulations over these models to support efficient object-based, relational exploration and heuristic planning. EMPA closely matches human learning efficiency on a suite of 90 challenging Atari-style video games, learning new games in just minutes of game play and generalizing robustly to new game situations and new levels. The model also captures fine-grained structure in people's exploration trajectories and learning dynamics. Its design and behavior suggest a way forward for building more general human-like AI systems.

【31】 A Neurorobotics Approach to Behaviour Selection based on Human Activity Recognition 标题:一种基于人类活动识别的神经机器人行为选择方法

作者:Caetano M. Ranieri,Renan C. Moioli,Patricia A. Vargas,Roseli A. F. Romero 机构:Institute of Mathematical and Computer Sciences, University of Sao Paulo, Sao Carlos, SP, Brazil, Digital Metropolis Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil, Edinburgh Centre for Robotics, Heriot-Watt University, Edinburgh, Scotland, UK 链接:https://arxiv.org/abs/2107.12540 摘要:行为选择一直是机器人学的研究热点,尤其是在人机交互领域。为了使机器人能够与人类进行有效的自主交互,基于感知信息的人类活动识别技术与基于决策机制的机器人行为选择技术之间的耦合是至关重要的。然而,到目前为止,大多数方法都是由已识别的活动和机器人行为之间的确定性关联组成,忽略了实时应用中连续预测固有的不确定性。在这篇论文中,我们提出了一种基于计算模型的神经机器人方法来解决这个问题,这种模型类似于生物的神经生理学方面。这种神经机器人学方法与非生物启发的启发式方法进行了比较。为了评估这两种方法,开发了一个机器人仿真系统,其中移动机器人必须根据智能家庭中的居民执行的活动来完成任务。根据机器人提供的正确结果数对每种方法的结果进行评估。结果表明,神经机器人方法是有利的,特别是考虑到计算模型更复杂的动物。 摘要:Behaviour selection has been an active research topic for robotics, in particular in the field of human-robot interaction. For a robot to interact effectively and autonomously with humans, the coupling between techniques for human activity recognition, based on sensing information, and robot behaviour selection, based on decision-making mechanisms, is of paramount importance. However, most approaches to date consist of deterministic associations between the recognised activities and the robot behaviours, neglecting the uncertainty inherent to sequential predictions in real-time applications. In this paper, we address this gap by presenting a neurorobotics approach based on computational models that resemble neurophysiological aspects of living beings. This neurorobotics approach was compared to a non-bioinspired, heuristics-based approach. To evaluate both approaches, a robot simulation is developed, in which a mobile robot has to accomplish tasks according to the activity being performed by the inhabitant of an intelligent home. The outcomes of each approach were evaluated according to the number of correct outcomes provided by the robot. 
Results revealed that the neurorobotics approach is advantageous, especially considering the computational models based on more complex animals.

【32】 Toward Co-creative Dungeon Generation via Transfer Learning 标题:通过迁移学习走向共同创造的地下城世代

作者:Zisen Zhou,Matthew Guzdial 机构:University of Alberta, Edmonton, Canada 备注:None 链接:https://arxiv.org/abs/2107.12533 摘要:通过机器学习(PCGML)的协同创作过程内容生成(Co-creative Procedural Content Generation via Machine Learning,PCGML)是指PCGML代理和人类共同工作以生成输出内容的系统。协同创造PCGML的局限性之一是它需要协同创造训练数据,PCGML代理才能学会与人类交互。然而,获取这些数据是一个困难和耗时的过程。在这项工作中,我们提出了近似的人机交互数据,并采用转移学习,以适应从一个游戏学习到不同的游戏共同创造的知识。我们探索这种方法为共同创造塞尔达地下城房间一代。 摘要:Co-creative Procedural Content Generation via Machine Learning (PCGML) refers to systems where a PCGML agent and a human work together to produce output content. One of the limitations of co-creative PCGML is that it requires co-creative training data for a PCGML agent to learn to interact with humans. However, acquiring this data is a difficult and time-consuming process. In this work, we propose approximating human-AI interaction data and employing transfer learning to adapt learned co-creative knowledge from one game to a different game. We explore this approach for co-creative Zelda dungeon room generation.

【33】 Language Grounding with 3D Objects 标题:3D对象的语言基础

作者:Jesse Thomason,Mohit Shridhar,Yonatan Bisk,Chris Paxton,Luke Zettlemoyer 机构:University of Southern California, University of Washington, Carnegie Mellon University, NVIDIA 备注:this https URL 链接:https://arxiv.org/abs/2107.12514 摘要:对机器人看似简单的自然语言请求通常没有明确规定,例如“你能给我拿无线鼠标吗?”当查看架子上的鼠标时,从某些角度或位置可能看不到按钮的数量或电线的存在。候选小鼠的平面图像可能无法提供“无线”所需的鉴别信息。世界和其中的物体不是平面的图像,而是复杂的三维形状。如果人类根据物体的任何基本属性(如颜色、形状或纹理)请求物体,机器人应该进行必要的探索以完成任务。特别是,虽然在明确理解颜色和类别等视觉属性方面做出了大量的努力和进展,但在理解形状和轮廓的语言方面取得的进展相对较少。在这项工作中,我们介绍了一种新的推理任务,目标都是视觉和非视觉语言的三维物体。我们的新基准,ShapeNet注解引用表达式(SNARE),需要一个模型来选择两个对象中的哪一个被自然语言描述引用。我们介绍了几种基于剪辑的模型来区分物体,并证明了尽管视觉和语言联合建模的最新进展有助于机器人的语言理解,但这些模型在理解物体的三维本质(在操纵中起关键作用的属性)方面仍然较弱。特别是,我们发现在语言基础模型中添加视图估计可以提高SNARE和在机器人平台上识别语言中引用的对象的准确性。 摘要:Seemingly simple natural language requests to a robot are generally underspecified, for example "Can you bring me the wireless mouse?" When viewing mice on the shelf, the number of buttons or presence of a wire may not be visible from certain angles or positions. Flat images of candidate mice may not provide the discriminative information needed for "wireless". The world, and objects in it, are not flat images but complex 3D shapes. If a human requests an object based on any of its basic properties, such as color, shape, or texture, robots should perform the necessary exploration to accomplish the task. In particular, while substantial effort and progress has been made on understanding explicitly visual attributes like color and category, comparatively little progress has been made on understanding language about shapes and contours. In this work, we introduce a novel reasoning task that targets both visual and non-visual language about 3D objects. Our new benchmark, ShapeNet Annotated with Referring Expressions (SNARE), requires a model to choose which of two objects is being referenced by a natural language description. 
We introduce several CLIP-based models for distinguishing objects and demonstrate that while recent advances in jointly modeling vision and language are useful for robotic language understanding, it is still the case that these models are weaker at understanding the 3D nature of objects -- properties which play a key role in manipulation. In particular, we find that adding view estimation to language grounding models improves accuracy on both SNARE and when identifying objects referred to in language on a robot platform.

【34】 H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction 标题:H3D-NET:Few-Shot高保真三维头部重建

作者:Eduard Ramon,Gil Triginer,Janna Escur,Albert Pumarola,Jaime Garcia,Xavier Giro-i-Nieto,Francesc Moreno-Noguer 机构:Crisalix SA, Universitat Politecnica de Catalunya, Institut de Robotica i Informatica Industrial, CSIC-UPC, crisalixsa.github.ioh,d-net 链接:https://arxiv.org/abs/2107.12512 摘要:最近的学习方法,隐式表示表面几何使用基于坐标的神经表示已显示出令人印象深刻的结果,在多视图三维重建的问题。然而,这些技术的有效性取决于场景的大量(几十个)输入视图的可用性和计算要求的优化。在本文中,我们针对Few-Shot全三维头部重建的具体问题解决了这些限制,通过赋予基于坐标的表示一个概率形状先验,使得在使用较少的输入图像(最多三幅)时能够更快地收敛和更好的泛化。首先,我们学习一个形状模型的三维头部从数千个不完整的原始扫描使用隐式表示。在测试时,我们使用隐式可微绘制方法,将两个基于坐标的神经网络联合过拟合到场景中,一个用于几何建模,另一个用于表面辐射估计。我们设计了一个两阶段的优化策略,在初始优化阶段,利用学习到的先验知识对几何体进行初始化和约束。然后,先验解冻结并微调到场景。通过这样做,我们实现了高保真的头部重建,包括头发和肩膀,并具有高水平的细节,在少数镜头场景中始终优于最先进的三维变形模型方法,在大视图集可用时优于非参数化方法。 摘要:Recent learning approaches that implicitly represent surface geometry using coordinate-based neural representations have shown impressive results in the problem of multi-view 3D reconstruction. The effectiveness of these techniques is, however, subject to the availability of a large number (several tens) of input views of the scene, and computationally demanding optimizations. In this paper, we tackle these limitations for the specific problem of few-shot full 3D head reconstruction, by endowing coordinate-based representations with a probabilistic shape prior that enables faster convergence and better generalization when using few input images (down to three). First, we learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations. At test time, we jointly overfit two coordinate-based neural networks to the scene, one modeling the geometry and another estimating the surface radiance, using implicit differentiable rendering. We devise a two-stage optimization strategy in which the learned prior is used to initialize and constrain the geometry during an initial optimization phase. Then, the prior is unfrozen and fine-tuned to the scene. 
By doing this, we achieve high-fidelity head reconstructions, including hair and shoulders, and with a high level of detail that consistently outperforms both state-of-the-art 3D Morphable Models methods in the few-shot scenario, and non-parametric methods when large sets of views are available.
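文中的两阶段优化策略(先以学习到的先验约束优化,再解冻先验联合微调)可以用一个与可微渲染无关的玩具示例说明其机制:阶段一冻结"先验参数"只优化残差参数,阶段二两者联合微调。以下代码中的参数、初值与目标均为假设性示意,并非原文实现:

```python
def fit_two_stage(y, steps=200, lr=0.1):
    """两阶段拟合:先冻结先验参数 w_prior 只优化残差 w_res,再联合微调。"""
    w_prior, w_res = 0.8, 0.0          # 0.8 为"预训练先验"给出的初值(示意)
    grad = lambda: 2 * (w_prior + w_res - y)   # 二次损失 (w_prior+w_res-y)^2 的梯度
    for _ in range(steps):             # 阶段一:冻结先验,只更新残差
        w_res -= lr * grad()
    for _ in range(steps):             # 阶段二:解冻先验,联合微调
        g = grad()
        w_prior -= lr * g
        w_res -= lr * g
    return w_prior, w_res, (w_prior + w_res - y) ** 2

wp, wr, final_loss = fit_two_stage(y=1.0)
print(wp, wr, final_loss)
```

阶段一结束时残差参数已吸收了先验未能解释的部分;阶段二在此基础上允许先验参数自身继续小幅调整。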

【35】 Adversarial Random Forest Classifier for Automated Game Design 标题:用于自动游戏设计的对抗性随机森林分类器

作者:Thomas Maurer,Matthew Guzdial 机构:University of Alberta, Edmonton, Canada 备注:None 链接:https://arxiv.org/abs/2107.12501 摘要:自主游戏设计,即通过算法生成游戏,一直是技术游戏研究领域的一个长期目标。然而,现有的自主游戏设计系统在很大程度上依赖于人类对游戏设计知识的创作,例如基于搜索的方法中的适应度函数。在这篇文章中,我们描述了一个实验,试图学习一个类似人类的适应度函数,以对抗的方式进行自主游戏设计。虽然我们的实验工作并没有达到我们的预期,但是我们对我们的系统和结果进行了分析,希望对未来的自主游戏设计研究有所帮助。 摘要:Autonomous game design, generating games algorithmically, has been a longtime goal within the technical games research field. However, existing autonomous game design systems have relied in large part on human-authoring for game design knowledge, such as fitness functions in search-based methods. In this paper, we describe an experiment to attempt to learn a human-like fitness function for autonomous game design in an adversarial manner. While our experimental work did not meet our expectations, we present an analysis of our system and results that we hope will be informative to future autonomous game design research.

【36】 Decision Making Using Rough Set based Spanning Sets for a Decision System 标题:基于粗糙集生成集的决策系统决策

作者:Nidhika Yadav 链接:https://arxiv.org/abs/2107.12477 摘要:最近提出了基于粗糙集的跨度(span)与生成集(spanning set)概念来处理数据中的不确定性。本文基于决策表的粗糙集跨度,提出了用于通用决策过程的新概念。人工智能中的大多数问题都与决策有关。本文给出了所提出的决策表粗糙集跨度在现实生活中的应用:提出了决策表跨度的概念,并以抗洪抢险队伍分配的实例加以说明,同时探讨了其用途、应用和性质。与已有工作针对信息系统不同,本文的主要贡献是针对决策表研究基于粗糙集跨度的决策方法;其中,决策类由粗糙集跨度技术针对特定问题自动学习得到,从而使决策过程自动化。这些基于跨度的决策工具可以指导专家在困难和时间受限的情况下做出决策。 摘要:Rough Set based concepts of Span and Spanning Sets were recently proposed to deal with uncertainties in data. Here, this paper, presents novel concepts for generic decision-making process using Rough Set based span for a decision table. Majority of problems in Artificial Intelligence deal with decision making. This paper provides real life applications of proposed Rough Set based span for decision tables. Here, novel concept of span for a decision table is proposed, illustrated with real life example of flood relief and rescue team assignment. Its uses, applications and properties are explored. The key contribution of paper is primarily to study decision making using Rough Set based Span for a decision tables, as against an information system in prior works. Here, the main contribution is that decision classes are automatically learned by the technique of Rough Set based span, for a particular problem, hence automating the decision-making process. These decision-making tools based on span can guide an expert in taking decisions in tough and time-bound situations.
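跨度等粗糙集概念建立在下近似与上近似之上。下面用纯 Python 给出一个极简示意(决策表、属性名与救援场景均为虚构示例,不涉及原文中跨度的具体定义):

```python
from collections import defaultdict

def equivalence_classes(objects, attrs):
    """按给定条件属性划分不可分辨等价类。"""
    classes = defaultdict(set)
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)
        classes[key].add(name)
    return list(classes.values())

def lower_upper(objects, attrs, target):
    """计算目标概念 target 的粗糙集下近似与上近似。"""
    lower, upper = set(), set()
    for cls in equivalence_classes(objects, attrs):
        if cls <= target:      # 等价类完全包含于目标:进入下近似
            lower |= cls
        if cls & target:       # 等价类与目标相交:进入上近似
            upper |= cls
    return lower, upper

# 虚构的简化决策表:洪水救援场景下的对象及条件属性
table = {
    'x1': {'water': 'high', 'road': 'blocked'},
    'x2': {'water': 'high', 'road': 'blocked'},
    'x3': {'water': 'low',  'road': 'open'},
    'x4': {'water': 'high', 'road': 'open'},
}
rescue = {'x1', 'x3'}   # 需要派出救援队的对象集合(目标概念)
low, up = lower_upper(table, ['water', 'road'], rescue)
print(low, up)
```

此例中 x1 与 x2 在给定属性下不可分辨,而目标只含 x1,故目标概念是"粗糙"的:下近似为 {x3},上近似为 {x1, x2, x3}。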

【37】 Adversarial Attacks with Time-Scale Representations 标题:具有时间尺度表示的对抗性攻击

作者:Alberto Santamaria-Pang,Jianwei Qiu,Aritra Chowdhury,James Kubricht,Peter Tu,Iyer Naresh,Nurali Virani 机构:GE Research, Research Circle, Niskayuna, NY 链接:https://arxiv.org/abs/2107.12473 摘要:我们提出了一种新的实时黑盒通用攻击框架,该框架可以破坏深度学习模型中早期卷积层的激活。我们的假设是,在小波空间产生的扰动比在时域产生的扰动更有效地破坏早期卷积层。对抗性攻击的主要挑战是在最小限度地改变最有意义的高频内容的同时保留低频图像内容。为了解决这个问题,我们利用时间尺度(小波)表示作为对偶空间,分三步构建了一个优化问题。首先,利用小波系数将原始图像投影到高、低尺度的正交子空间。第二,利用生成网络对高尺度投影的小波系数进行扰动。第三,将低尺度的原始系数和高尺度子空间的扰动系数投影回来,生成新的对抗图像。我们提供了一个理论框架,保证时间域与时间尺度域表示之间的对偶映射。我们将我们的结果与基于生成模型和基于梯度模型的最新黑盒攻击进行了比较。我们还验证了该攻击在多种防御方法(如JPEG压缩、Guided Denoiser和Comdefend)下的有效性。 摘要:We propose a novel framework for real-time black-box universal attacks which disrupts activations of early convolutional layers in deep learning models. Our hypothesis is that perturbations produced in the wavelet space disrupt early convolutional layers more effectively than perturbations performed in the time domain. The main challenge in adversarial attacks is to preserve low frequency image content while minimally changing the most meaningful high frequency content. To address this, we formulate an optimization problem using time-scale (wavelet) representations as a dual space in three steps. First, we project original images into orthonormal sub-spaces for low and high scales via wavelet coefficients. Second, we perturb wavelet coefficients for high scale projection using a generator network. Third, we generate new adversarial images by projecting back the original coefficients from the low scale and the perturbed coefficients from the high scale sub-space. We provide a theoretical framework that guarantees a dual mapping from time and time-scale domain representations. We compare our results with state-of-the-art black-box attacks from generative-based and gradient-based models. We also verify efficacy against multiple defense methods such as JPEG compression, Guided Denoiser and Comdefend. 
Our results show that wavelet-based perturbations consistently outperform time-based attacks thus providing new insights into vulnerabilities of deep learning models and could potentially lead to robust architectures or new defense and attack mechanisms by leveraging time-scale representations.
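按照摘要中的三步思路,可以用单层 Haar 小波做一个最小示意:只扰动高频细节系数,低频近似系数保持不变。这里用随机噪声代替原文的生成网络,仅为假设性示例:

```python
import numpy as np

def haar_dwt(x):
    """单层一维 Haar 小波变换:返回(近似系数, 细节系数)。"""
    x = x.reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """单层 Haar 逆变换,从两组系数重建信号。"""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    return np.stack([even, odd], axis=1).reshape(-1)

rng = np.random.default_rng(0)
signal = rng.normal(size=64)
a, d = haar_dwt(signal)
# 只扰动高频细节系数(原文由生成网络产生;此处以随机噪声示意)
d_pert = d + 0.1 * rng.normal(size=d.shape)
adv = haar_idwt(a, d_pert)
# 变换正交,故对抗样本的低频内容(近似系数)与原信号完全一致
a2, _ = haar_dwt(adv)
print(np.max(np.abs(a2 - a)))
```

由于 Haar 变换正交,仅改动细节子空间不会泄漏到近似子空间,这正是"保留低频内容、只改高频内容"的几何含义。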

【38】 Don't Sweep your Learning Rate under the Rug: A Closer Look at Cross-modal Transfer of Pretrained Transformers 标题:不要把你的学习率扫到地毯下:仔细审视预训练Transformer的跨模态迁移

作者:Danielle Rothermel,Margaret Li,Tim Rocktäschel,Jakob Foerster 机构:Facebook AI Research, University of Washington, University College London 备注:Accepted to ICML 2021 Workshop: Self-Supervised Learning for Reasoning and Perception 链接:https://arxiv.org/abs/2107.12460 摘要:在文本语料库上对大规模Transformer模型进行自监督预训练再微调,已在许多自然语言处理任务上取得最先进的结果。最近,Lu等人(2021,arXiv:2103.05247)声称,冻结的预训练Transformer(FPT)在一系列向其他模态迁移的任务中,与从零开始训练以及未冻结(微调)的预训练Transformer表现相当或更优。在我们的工作中,我们发现这一结果实际上是未调学习率造成的假象。在仔细重新设计实验设置后,我们发现只要适当调整学习率,预训练Transformer在我们所有任务中确实优于或不差于从零开始训练,但前提是整个模型都经过微调。因此,虽然从预训练语言模型向其他模态的迁移确实带来增益,并预示着未来工作的激动人心的可能性,但要得到稳健的结论,适当调整超参数至关重要。 摘要:Self-supervised pre-training of large-scale transformer models on text corpora followed by finetuning has achieved state-of-the-art on a number of natural language processing tasks. Recently, Lu et al. (2021, arXiv:2103.05247) claimed that frozen pretrained transformers (FPTs) match or outperform training from scratch as well as unfrozen (fine-tuned) pretrained transformers in a set of transfer tasks to other modalities. In our work, we find that this result is, in fact, an artifact of not tuning the learning rates. After carefully redesigning the empirical setup, we find that when tuning learning rates properly, pretrained transformers do outperform or match training from scratch in all of our tasks, but only as long as the entire model is finetuned. Thus, while transfer from pretrained language models to other modalities does indeed provide gains and hints at exciting possibilities for future work, properly tuning hyperparameters is important for arriving at robust findings.
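文中强调的学习率调参,最基本的做法就是一次对数网格扫描。下面是一个与原文实验无关的玩具示意(目标函数与网格均为假设),说明结论如何随学习率剧烈变化:

```python
import numpy as np

def train(lr, steps=100):
    """在玩具二次目标 f(w) = ||w||^2 上做梯度下降,返回最终损失。"""
    w = np.ones(4)
    for _ in range(steps):
        w -= lr * 2 * w        # f 的梯度为 2w
    return float(np.sum(w ** 2))

# 对数网格上的学习率扫描:过小收敛慢,过大直接发散
grid = [1e-3, 1e-2, 1e-1, 0.5, 1.1]
losses = {lr: train(lr) for lr in grid}
best_lr = min(losses, key=losses.get)
print(best_lr, losses[best_lr])
```

同一模型在 lr=1.1 时发散、在 lr=0.5 时一步收敛:若只试过一个未调的学习率,就可能得出"该方法不行"的错误结论,这正是原文指出的陷阱。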

【39】 Feature Synergy, Redundancy, and Independence in Global Model Explanations using SHAP Vector Decomposition 标题:基于SHAP向量分解的全局模型解释中的特征协同、冗余与独立

作者:Jan Ittner,Lukasz Bolikowski,Konstantin Hemker,Ricardo Kennedy 备注:7 pages, 2 figures 链接:https://arxiv.org/abs/2107.12436 摘要:我们提供了一种新的形式来解释监督模型中的成对特征依赖和交互作用。基于SHAP值和SHAP交互值,我们的方法将特征贡献分解为协同、冗余和独立的分量(SHAP向量的S-R-I分解)。我们提出一个几何解释的组成部分,并正式证明其基本性质。最后,我们通过将它们应用到构建的数据集和模型中,展示了协同、冗余和独立的效用。 摘要:We offer a new formalism for global explanations of pairwise feature dependencies and interactions in supervised models. Building upon SHAP values and SHAP interaction values, our approach decomposes feature contributions into synergistic, redundant and independent components (S-R-I decomposition of SHAP vectors). We propose a geometric interpretation of the components and formally prove its basic properties. Finally, we demonstrate the utility of synergy, redundancy and independence by applying them to a constructed data set and model.
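SHAP 值以合作博弈中的 Shapley 值为基础。下面给出在极小特征集上穷举联盟、精确计算 Shapley 值的示例(价值函数为虚构,且这并非原文提出的 S-R-I 分解本身);其中显式构造的"协同"项,正是 S-R-I 分解试图从特征贡献中分离出的成分之一:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """对价值函数 value(联盟) 穷举所有联盟,精确计算每个特征的 Shapley 值。"""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for coal in combinations(others, r):
                s = frozenset(coal)
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))  # 加权边际贡献
        phi[f] = total
    return phi

# 玩具价值函数:f1 单独贡献 1,f2 单独贡献 2,二者同时出现另有 0.5 的协同
def v(coal):
    base = (1.0 if 'f1' in coal else 0.0) + (2.0 if 'f2' in coal else 0.0)
    synergy = 0.5 if {'f1', 'f2'} <= coal else 0.0
    return base + synergy

phi = shapley_values(['f1', 'f2'], v)
print(phi)
```

Shapley 值满足有效性公理:各特征贡献之和等于全联盟的价值(此例中 1.25 + 2.25 = 3.5),协同项 0.5 被平分到两个特征上。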

【40】 The Graph Neural Networking Challenge: A Worldwide Competition for Education in AI/ML for Networks 标题:图形神经网络挑战赛:面向网络的AI/ML全球教育竞赛

作者:José Suárez-Varela,Miquel Ferriol-Galmés,Albert López,Paul Almasan,Guillermo Bernárdez,David Pujol-Perich,Krzysztof Rusek,Loïck Bonniot,Christoph Neumann,François Schnitzler,François Taïani,Martin Happ,Christian Maier,Jia Lei Du,Matthias Herlich,Peter Dorfinger,Nick Vincent Hainke,Stefan Venz,Johannes Wegener,Henrike Wissing,Bo Wu,Shihan Xiao,Pere Barlet-Ros,Albert Cabellos-Aparicio 机构:Barcelona Neural Networking center, Universitat Politècnica de Catalunya, Spain, AGH University of Science and Technology, Department of Telecommunications, Poland, InterDigital, France, Univ. Rennes, Inria, CNRS, IRISA, France 备注:None 链接:https://arxiv.org/abs/2107.12433 摘要:近十年来,机器学习逐渐成为计算机网络领域的一个热门话题,并有望在实际部署中逐渐应用于大量的控制、监视和管理任务。这就需要依靠新一代的学生、研究人员和实践者,他们都有扎实的ML应用于网络的背景。在2020年期间,国际电信联盟(ITU)组织了“ITU AI/ML 5G挑战赛”,这是一项开放的全球竞赛,向广大观众介绍了当前ML网络面临的一些主要挑战。这一大规模举措汇集了网络运营商、设备制造商和学术界提出的23项不同挑战,吸引了来自60多个国家的1300多名参与者。本文叙述了我们组织其中一项挑战的经验:“2020年图形神经网络挑战”。我们将介绍向参与者提出的问题、提供的工具和资源、一些组织方面和参与统计数据、前三名获奖解决方案的概要以及在整个过程中吸取的一些经验教训。因此,这一挑战使得任何对该主题感兴趣的人都可以公开获得一套精心策划的教育资源。 摘要:During the last decade, Machine Learning (ML) has increasingly become a hot topic in the field of Computer Networks and is expected to be gradually adopted for a plethora of control, monitoring and management tasks in real-world deployments. This poses the need to count on new generations of students, researchers and practitioners with a solid background in ML applied to networks. During 2020, the International Telecommunication Union (ITU) has organized the "ITU AI/ML in 5G challenge'', an open global competition that has introduced to a broad audience some of the current main challenges in ML for networks. This large-scale initiative has gathered 23 different challenges proposed by network operators, equipment manufacturers and academia, and has attracted a total of 1300+ participants from 60+ countries. This paper narrates our experience organizing one of the proposed challenges: the "Graph Neural Networking Challenge 2020''. 
We describe the problem presented to participants, the tools and resources provided, some organization aspects and participation statistics, an outline of the top-3 awarded solutions, and a summary with some lessons learned during all this journey. As a result, this challenge leaves a curated set of educational resources openly available to anyone interested in the topic.

【41】 Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework 标题:优化框架下基于张量分解的DNN模型压缩

作者:Miao Yin,Yang Sui,Siyu Liao,Bo Yuan 机构:Department of ECE, Rutgers University,Amazon 备注:This paper was accepted to CVPR'21 链接:https://arxiv.org/abs/2107.12422 摘要:高级张量分解,如张量列(TT)和张量环(TR),已被广泛应用于深度神经网络(DNN)模型压缩,特别是递归神经网络(RNNs)。然而,使用TT/TR压缩卷积神经网络(CNNs)的精度往往会受到很大的损失。本文提出了一个基于交替方向乘子法(ADMM)的张量分解模型压缩系统化框架。通过将基于TT分解的模型压缩问题转化为一个张量秩约束的优化问题,利用ADMM技术以迭代的方式系统地求解该优化问题。在这个过程中,整个DNN模型以原始结构而非TT格式进行训练,但逐渐获得理想的低张量秩特性。然后,我们将这个未压缩的模型分解为TT格式,并对其进行微调,最终得到一个高精度的TT格式DNN模型。我们的框架非常通用,它同时适用于CNNs和RNNs,并且可以很容易地修改以适合其他张量分解方法。我们在不同的DNN模型上评估了我们提出的框架,用于图像分类和视频识别任务。实验结果表明,基于ADMM的TT格式模型具有很高的压缩性能和较高的精度。值得注意的是,在CIFAR-100上,压缩比为2.3X和2.4X,我们的模型的top-1精度分别比原来的ResNet-20和ResNet-32高1.96%和2.21%。通过对ImageNet上ResNet-18的压缩,我们的模型在不损失精度的情况下将FLOPs降低了2.47倍。 摘要:Advanced tensor decomposition, such as Tensor train (TT) and Tensor ring (TR), has been widely studied for deep neural network (DNN) model compression, especially for recurrent neural networks (RNNs). However, compressing convolutional neural networks (CNNs) using TT/TR always suffers significant accuracy loss. In this paper, we propose a systematic framework for tensor decomposition-based model compression using Alternating Direction Method of Multipliers (ADMM). By formulating TT decomposition-based model compression to an optimization problem with constraints on tensor ranks, we leverage ADMM technique to systemically solve this optimization problem in an iterative way. During this procedure, the entire DNN model is trained in the original structure instead of TT format, but gradually enjoys the desired low tensor rank characteristics. We then decompose this uncompressed model to TT format and fine-tune it to finally obtain a high-accuracy TT-format DNN model. Our framework is very general, and it works for both CNNs and RNNs, and can be easily modified to fit other tensor decomposition approaches. We evaluate our proposed framework on different DNN models for image classification and video recognition tasks. 
Experimental results show that our ADMM-based TT-format models demonstrate very high compression performance with high accuracy. Notably, on CIFAR-100, with 2.3X and 2.4X compression ratios, our models have 1.96% and 2.21% higher top-1 accuracy than the original ResNet-20 and ResNet-32, respectively. For compressing ResNet-18 on ImageNet, our model achieves 2.47X FLOPs reduction without accuracy loss.
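文中"将未压缩模型分解为 TT 格式"这一步,可用经典的逐次 SVD(TT-SVD)完成。下面是一个无截断(因而无损)的 numpy 草图,仅示意 TT 格式本身,不涉及原文基于 ADMM 的秩约束训练流程:

```python
import numpy as np

def tt_svd(tensor, eps=1e-12):
    """用逐次 SVD 将张量分解为 TT 核列表(保留全部显著奇异值,无损)。"""
    shape = tensor.shape
    d = len(shape)
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_new = int(np.sum(s > eps * s[0]))          # 数值秩
        cores.append(u[:, :r_new].reshape(rank, shape[k], r_new))
        mat = (np.diag(s[:r_new]) @ vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """将 TT 核依次收缩,还原为完整张量。"""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out[0, ..., 0]

rng = np.random.default_rng(1)
t = rng.normal(size=(3, 4, 5))
cores = tt_svd(t)
approx = tt_reconstruct(cores)
print(np.max(np.abs(approx - t)))
```

实际压缩时会按目标秩截断各步 SVD,以存储/计算量换取近似误差;原文的 ADMM 训练正是为了让模型在截断前就具备低秩结构,从而把截断损失降到最低。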

【42】 Asynchronous Distributed Reinforcement Learning for LQR Control via Zeroth-Order Block Coordinate Descent 标题:基于零阶块坐标下降的LQR控制异步分布式强化学习

作者:Gangshan Jing,He Bai,Jemin George,Aranya Chakrabortty,Piyush K. Sharma 机构:North Carolina State University; Oklahoma State University 链接:https://arxiv.org/abs/2107.12416 摘要:最近提出的分布式零阶优化(ZOO)算法已在分布式强化学习(RL)中展现出其效用。然而,在梯度估计过程中,几乎所有这类算法都需要与全局变量维数相同的随机样本和/或需要计算全局代价函数,这在大规模网络中可能导致较大的估计方差。在本文中,我们提出了一种新的分布式零阶算法,利用优化目标中固有的网络结构,使每个智能体可以仅通过局部代价评估独立地估计其局部梯度,而无需使用任何一致性协议。该算法采用异步更新方案,并基于块坐标下降法,针对可行域可能非凸的随机非凸优化问题而设计。该算法随后被用作分布式无模型RL算法,用于分布式线性二次调节器设计,其中设计了一个学习图来描述分布式学习中智能体之间所需的交互关系。我们对所提算法进行了实验验证,以收敛速度和方差为指标,与集中式ZOO算法进行基准比较。 摘要:Recently introduced distributed zeroth-order optimization (ZOO) algorithms have shown their utility in distributed reinforcement learning (RL). Unfortunately, in the gradient estimation process, almost all of them require random samples with the same dimension as the global variable and/or require evaluation of the global cost function, which may induce high estimation variance for large-scale networks. In this paper, we propose a novel distributed zeroth-order algorithm by leveraging the network structure inherent in the optimization objective, which allows each agent to estimate its local gradient by local cost evaluation independently, without use of any consensus protocol. The proposed algorithm exhibits an asynchronous update scheme, and is designed for stochastic non-convex optimization with a possibly non-convex feasible domain based on the block coordinate descent method. The algorithm is later employed as a distributed model-free RL algorithm for distributed linear quadratic regulator design, where a learning graph is designed to describe the required interaction relationship among agents in distributed learning. We provide an empirical validation of the proposed algorithm to benchmark its performance on convergence rate and variance against a centralized ZOO algorithm.
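此类方法依赖的零阶梯度估计可用两点法示意:g ≈ E[(f(x+μu) − f(x−μu)) / (2μ) · u],其中 u 为随机方向。以下为最小单机版本(并非原文的分布式/异步实现),在二次函数上与真实梯度 2x 对比:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_samples=200, rng=None):
    """两点零阶梯度估计:只用函数值、不用解析梯度。"""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.normal(size=x.shape)          # 高斯随机探测方向
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_samples

# 在二次函数 f(x) = ||x||^2 上验证:真实梯度为 2x
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
g_est = zo_gradient(f, x, num_samples=5000)
print(g_est, 2 * x)
```

估计方差随问题维数增长,这正是摘要所指的痛点:原文通过只在低维局部变量块上做这种估计(块坐标下降)来压低方差。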

【43】 A Simplified Framework for Air Route Clustering Based on ADS-B Data 标题:基于ADS-B数据的航线聚类简化框架

作者:Quan Duong,Tan Tran,Duc-Thinh Pham,An Mai 机构:ICT Department, John von Neumann Institute, Ho Chi Minh, Vietnam, Air Traffic Management Research Institute, School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore 备注:None 链接:https://arxiv.org/abs/2107.12869 摘要:随着时间的推移,飞行交通量不断增加,这使得战略交通流管理成为一个具有挑战性的问题,因为它需要大量的计算资源来模拟整个交通数据。另一方面,自动相关监视广播(ADS-B)技术被认为是一种很有前途的数据技术,它可以为飞行机组和地面控制人员安全有效地提供有关特定区域内飞机位置和速度的必要信息。为了解决这一问题,本文提出了一种基于ADS-B数据的机场间典型航线检测的简化框架。具体地说,基于相似性度量将航班流量划分为主要的分组,这有助于减少机场之间的航线数量。事实上,我们的框架可以考虑实际降低气流优化的计算成本和评估操作性能。最后,以三对不同机场的ADS-B流量飞行数据为例进行了实验,说明了该框架的潜在应用价值。通过将聚类性能的两个指标结合起来,并将人类的判断融入到目视检测中,检测出的机场间典型航线显示出了良好的效果。 摘要:The volume of flight traffic gets increasing over the time, which makes the strategic traffic flow management become one of the challenging problems since it requires a lot of computational resources to model entire traffic data. On the other hand, Automatic Dependent Surveillance - Broadcast (ADS-B) technology has been considered as a promising data technology to provide both flight crews and ground control staff the necessary information safely and efficiently about the position and velocity of the airplanes in a specific area. In the attempt to tackle this problem, we presented in this paper a simplified framework that can support to detect the typical air routes between airports based on ADS-B data. Specifically, the flight traffic will be classified into major groups based on similarity measures, which helps to reduce the number of flight paths between airports. As a matter of fact, our framework can be taken into account to reduce practically the computational cost for air flow optimization and evaluate the operational performance. Finally, in order to illustrate the potential applications of our proposed framework, an experiment was performed using ADS-B traffic flight data of three different pairs of airports. 
The detected typical routes between each couple of airports show promising results by virtue of combining two indices for measuring the clustering performance and incorporating human judgment into the visual inspection.
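按摘要思路,可将航迹重采样为等长点列后,用简单聚类得到"典型航路"。以下为假设性的最小示意:合成航迹、欧氏距离加手写 k-means,均非原文实际采用的相似性度量与流程:

```python
import numpy as np

def resample(traj, m=20):
    """沿弧长将航迹重采样为 m 个等距点,展平为定长特征向量。"""
    traj = np.asarray(traj, dtype=float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, s[-1], m)
    lat = np.interp(targets, s, traj[:, 0])
    lon = np.interp(targets, s, traj[:, 1])
    return np.stack([lat, lon], axis=1).ravel()

def kmeans(X, k, iters=50, seed=0):
    """极简 k-means,返回每条航迹的簇标签。"""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# 在两条"典型航路"附近各生成 5 条带噪合成航迹
rng = np.random.default_rng(2)
base1 = np.stack([np.linspace(0, 1, 10), np.linspace(0, 1, 10)], axis=1)
base2 = np.stack([np.linspace(0, 1, 10), np.linspace(1, 0, 10)], axis=1)
trajs = [b + 0.01 * rng.normal(size=b.shape) for b in [base1] * 5 + [base2] * 5]
X = np.vstack([resample(t) for t in trajs])
labels = kmeans(X, k=2)
print(labels)
```

每个簇的中心向量即可视为一条"典型航路";真实 ADS-B 数据还需处理采样不均、缺失点以及更合适的航迹相似性度量。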

【44】 Graph Autoencoders for Embedding Learning in Brain Networks and Major Depressive Disorder Identification 标题:用于脑网络嵌入学习的图形自动编码器及抑郁症识别

作者:Fuad Noman,Chee-Ming Ting,Hakmook Kang,Raphael C. -W. Phan,Brian D. Boyd,Warren D. Taylor,Hernando Ombao 机构: Kang is with the Department of Biostatistics 链接:https://arxiv.org/abs/2107.12838 摘要:脑功能连接性(FC)揭示了识别各种神经精神疾病的生物标志物。近年来,深度神经网络(DNNs)在连接组分类中的应用主要依赖于基于规则欧氏网格的输入连通矩阵的传统卷积神经网络。我们提出一个图形深度学习框架,结合非欧几里德信息的图形结构分类功能磁共振成像(fMRI)衍生的脑网络在抑郁症(MDD)。设计了一种基于图卷积网络(GCNs)的新型图自动编码器(GAE)结构,将大型fMRI网络的拓扑结构和节点内容嵌入到低维的潜在表示中。在网络构建中,我们采用了Ledoit-Wolf(LDW)收缩方法来有效地估计fMRI数据中的高维FC度量。我们考虑监督和非监督方法的图形嵌入式学习。然后将学习到的嵌入信息作为深度全连接神经网络(FCNN)的特征输入,用于区分MDD和健康对照组。在43名受试者的静息态fMRI MDD数据集上进行评估,结果表明,所提出的GAE-FCNN模型显著优于几种最先进的DNN方法,使用LDW-FC度量作为节点特征,准确率达到72.50%。GAE学习的fMRI-FC网络的图形嵌入也揭示了MDD和HC之间明显的组间差异。我们的新框架证明了在脑网络上学习图形嵌入的可行性,从而为脑疾病的诊断提供鉴别信息。 摘要:Brain functional connectivity (FC) reveals biomarkers for identification of various neuropsychiatric disorders. Recent application of deep neural networks (DNNs) to connectome-based classification mostly relies on traditional convolutional neural networks using input connectivity matrices on a regular Euclidean grid. We propose a graph deep learning framework to incorporate the non-Euclidean information about graph structure for classifying functional magnetic resonance imaging (fMRI)- derived brain networks in major depressive disorder (MDD). We design a novel graph autoencoder (GAE) architecture based on the graph convolutional networks (GCNs) to embed the topological structure and node content of large-sized fMRI networks into low-dimensional latent representations. In network construction, we employ the Ledoit-Wolf (LDW) shrinkage method to estimate the high-dimensional FC metrics efficiently from fMRI data. We consider both supervised and unsupervised approaches for the graph embedded learning. The learned embeddings are then used as feature inputs for a deep fully-connected neural network (FCNN) to discriminate MDD from healthy controls. 
Evaluated on a resting-state fMRI MDD dataset with 43 subjects, results show that the proposed GAE-FCNN model significantly outperforms several state-of-the-art DNN methods for brain connectome classification, achieving accuracy of 72.50% using the LDW-FC metrics as node features. The graph embeddings of fMRI FC networks learned by the GAE also reveal apparent group differences between MDD and HC. Our new framework demonstrates feasibility of learning graph embeddings on brain networks to provide discriminative information for diagnosis of brain disorders.
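网络构建中用到的 Ledoit-Wolf 收缩,可按 Ledoit & Wolf (2004) 的线性收缩公式实现:把样本协方差 S 向 μI 收缩,收缩强度 λ 由数据估计。以下 numpy 草图仅作示意(fMRI 场景通常 n ≪ p,样本协方差奇异,收缩后恒为正定):

```python
import numpy as np

def ledoit_wolf(X):
    """Ledoit-Wolf 线性收缩协方差:S_lw = (1-λ)S + λμI,X 为 n×p 数据矩阵。"""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                               # 样本协方差(除以 n)
    mu = np.trace(S) / p                            # 目标矩阵 μI 的尺度
    d2 = np.sum((S - mu * np.eye(p)) ** 2)          # ||S - μI||_F^2
    # 样本协方差自身的波动程度(Ledoit & Wolf 2004 的 b̄² 估计)
    b2 = sum(np.sum((np.outer(x, x) - S) ** 2) for x in Xc) / n ** 2
    b2 = min(b2, d2)
    lam = b2 / d2 if d2 > 0 else 0.0                # 收缩强度 λ ∈ [0, 1]
    return (1 - lam) * S + lam * mu * np.eye(p), lam

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 50))        # n=20 个样本、p=50 维:样本协方差病态
S_lw, lam = ledoit_wolf(X)
print(lam)
```

收缩后的矩阵最小特征值不低于 λμ,因此总是可逆、良态的,适合在高维 FC 估计中替代奇异的样本协方差。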

【45】 A Data-Driven Biophysical Computational Model of Parkinson's Disease based on Marmoset Monkeys 标题:基于Marmoset猴的帕金森病数据驱动生物物理计算模型

作者:Caetano M. Ranieri,Jhielson M. Pimentel,Marcelo R. Romano,Leonardo A. Elias,Roseli A. F. Romero,Michael A. Lones,Mariana F. P. Araujo,Patricia A. Vargas,Renan C. Moioli 机构:Institute of Mathematical and Computer Sciences, University of Sao Paulo, Sao Carlos, SP, Brazil, Edinburgh Centre for Robotics, Heriot-Watt University, Edinburgh, Scotland, UK, School of Electrical and Computer Engineering, University of Campinas, Campinas, SP, Brazil 链接:https://arxiv.org/abs/2107.12536 摘要:在这项工作中,我们提出了一个新的生物物理计算模型的大脑区域相关的帕金森氏病的基础上,局部场电位数据收集自绒猴的大脑。帕金森病是一种神经退行性疾病,与黑质致密部多巴胺能神经元死亡有关,影响大脑基底节丘脑皮质神经元回路的正常动力学。尽管该病的发病机制多种多样,但对这些机制和分子发病机制的完整描述仍不清楚,目前尚无治愈的方法。为了弥补这一差距,人们提出了类似于动物模型中神经生物学方面的计算模型。在我们的模型中,我们采用数据驱动的方法,利用差异进化优化一组生物约束参数。进化模型成功地模拟了来自健康和帕金森病狨猴大脑数据的单神经元平均放电率和局部场电位的光谱特征。据我们所知,这是第一个帕金森氏症的计算模型,它基于来自绒猴七个脑区的同步电生理记录。结果表明,所提出的模型有助于研究帕金森病的发病机制,有助于开发新的治疗方法。它还可以应用于其他计算神经科学问题,在这些问题中,生物数据可以用来拟合大脑回路的多尺度模型。 摘要:In this work we propose a new biophysical computational model of brain regions relevant to Parkinson's Disease based on local field potential data collected from the brain of marmoset monkeys. Parkinson's disease is a neurodegenerative disorder, linked to the death of dopaminergic neurons at the substantia nigra pars compacta, which affects the normal dynamics of the basal ganglia-thalamus-cortex neuronal circuit of the brain. Although there are multiple mechanisms underlying the disease, a complete description of those mechanisms and molecular pathogenesis are still missing, and there is still no cure. To address this gap, computational models that resemble neurobiological aspects found in animal models have been proposed. In our model, we performed a data-driven approach in which a set of biologically constrained parameters is optimised using differential evolution. Evolved models successfully resembled single-neuron mean firing rates and spectral signatures of local field potentials from healthy and parkinsonian marmoset brain data. 
As far as we are concerned, this is the first computational model of Parkinson's Disease based on simultaneous electrophysiological recordings from seven brain regions of Marmoset monkeys. Results show that the proposed model could facilitate the investigation of the mechanisms of PD and support the development of techniques that can indicate new therapies. It could also be applied to other computational neuroscience problems in which biological data could be used to fit multi-scale models of brain circuits.
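文中用于优化生物约束参数的差分进化,可用经典的 DE/rand/1/bin 形式给出一个极简实现(玩具目标与全部超参数均为示意,并非原文针对生物物理模型的设置):

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """极简 DE/rand/1/bin:在 bounds 内最小化目标函数 f。"""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # 随机选三个互异个体做变异:mutant = a + F*(b - c)
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True       # 至少交叉一个维度
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                 # 贪心选择
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# 玩具目标:恢复隐藏参数 theta* = [1.5, -0.5]
target = np.array([1.5, -0.5])
loss = lambda th: float(np.sum((th - target) ** 2))
best, best_loss = differential_evolution(loss, bounds=[(-2, 2), (-2, 2)])
print(best, best_loss)
```

差分进化只需目标函数值、无需梯度,正适合此类仿真模型拟合:把 loss 换成"模型仿真输出与实测放电率/LFP 谱特征的差异"即为原文流程的骨架。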

【46】 Geometric Deep Learning on Molecular Representations 标题:分子表示的几何深度学习

作者:Kenneth Atz,Francesca Grisoni,Gisbert Schneider 机构:ETH Zurich, Dept. Chemistry and Applied Biosciences, RETHINK, Vladimir-Prelog-Weg , Zurich, Switzerland., Eindhoven University of Technology, Dept. Biomedical Engineering, Groene Loper ,AZ Eindhoven, Netherlands. 链接:https://arxiv.org/abs/2107.12375 摘要:几何深度学习(Geometric deep learning,GDL)是近年来人工智能领域出现的一种新的研究范式,它是基于融合和处理对称信息的神经网络结构。GDL在分子建模应用中有着特殊的前景,其中存在着具有不同对称性质和抽象层次的各种分子表示。本文综述了分子GDL在药物发现、化学合成预测和量子化学中的应用。重点放在所学的分子特征的相关性和它们与已建立的分子描述符的互补性上。本文综述了当前的挑战和机遇,并对GDL在分子科学中的应用前景进行了展望。 摘要:Geometric deep learning (GDL), which is based on neural network architectures that incorporate and process symmetry information, has emerged as a recent paradigm in artificial intelligence. GDL bears particular promise in molecular modeling applications, in which various molecular representations with different symmetry properties and levels of abstraction exist. This review provides a structured and harmonized overview of molecular GDL, highlighting its applications in drug discovery, chemical synthesis prediction, and quantum chemistry. Emphasis is placed on the relevance of the learned molecular features and their complementarity to well-established molecular descriptors. This review provides an overview of current challenges and opportunities, and presents a forecast of the future of GDL for molecular sciences.

本文参与 腾讯云自媒体同步曝光计划,分享自微信公众号。
原始发表:2021-07-28,如有侵权请联系 cloudcommunity@tencent.com 删除

本文分享自 arXiv每日学术速递 微信公众号。
