
统计学学术速递[12.24]

公众号-arXiv每日学术速递
发布2021-12-27 17:02:59

stat统计学,共计36篇

【1】 Bayesian Learning: A Selective Overview 标题:贝叶斯学习:选择性综述 链接:https://arxiv.org/abs/2112.12722

作者:Yu Lin Hsu,Chu Chuan Jeng,Pavithra Sripathanallur Murali,Mohammadreza Torkjazi,Jonathan West,Michaela Zuber,Vadim Sokolov 机构:First Draft: Oct , This Draft: Dec 摘要:本文概述了贝叶斯学习的一些概念。在过去几十年中,贝叶斯学习的科学和工业应用数量迅速增长。这一过程始于马尔可夫链蒙特卡罗方法的广泛使用,马尔可夫链蒙特卡罗方法在20世纪90年代初成为贝叶斯的主要计算技术。自那时以来,贝叶斯学习已在从机器人和机器学习到医疗应用的多个领域得到了很好的推广。本文概述了一些广泛使用的概念,并展示了一些应用。这是一篇基于乔治·梅森大学贝叶斯学习博士课程的学生所举办的一系列研讨会的论文。这门课是在2021秋季讲授的。因此,论文涵盖的主题反映了学生选择学习的主题。 摘要:This paper presents an overview of some of the concepts of Bayesian Learning. The number of scientific and industrial applications of Bayesian learning has been growing in size rapidly over the last few decades. This process has started with the wide use of Markov Chain Monte Carlo methods that emerged as a dominant computational technique for Bayesian in the early 1990's. Since then Bayesian learning has spread well across several fields from robotics and machine learning to medical applications. This paper provides an overview of some of the widely used concepts and shows several applications. This is a paper based on the series of seminars given by students of a PhD course on Bayesian Learning at George Mason University. The course was taught in the Fall of 2021. Thus, the topics covered in the paper reflect the topics students selected to study.

【2】 A general framework for penalized mixed-effects multitask learning with application on DNAm biomarkers creation 标题:惩罚性混合效应多任务学习的通用框架及其在DNAm生物标志物生成中的应用 链接:https://arxiv.org/abs/2112.12719

作者:Andrea Cappozzo,Francesca Ieva,Giovanni Fiorito 机构:UniversitadiSassari 摘要:从血液DNA甲基化谱中创建非侵入性生物标记物是个性化医疗领域的一项前沿成就:DNAm表观突变已被证明与生活方式和环境风险因素密切相关,最终为个体健康状况提供了一个无偏见的代表。目前,DNAm代理的创建依赖于单变量惩罚回归模型,弹性网络是完成任务的标准方法。尽管如此,当响应本质上是多变量的,并且样本显示出结构化依赖模式时,需要更高级的建模过程。在这项工作中,为了从多中心研究中开发出一种多变量DNAm生物标记物,我们提出了一个高维、混合效应多任务学习的总体框架。设计了一种基于EM算法的惩罚估计方案,在拟合过程中可以方便地加入固定效应模型的任何惩罚准则。然后,该方法被用于创建心血管和高血压共病的新替代物,在预测能力和流行病学解释方面显示出比最先进的替代品更好的结果。 摘要:The creation of non invasive biomarkers from blood DNA methylation profiles is a cutting-edge achievement in personalized medicine: DNAm epimutations have been demonstrated to be tightly related to lifestyle and environmental risk factors, ultimately providing an unbiased proxy of an individual state of health. At present, the creation of DNAm surrogates relies on univariate penalized regression model, with elastic net being the standard way to-go when accomplishing the task. Nonetheless, more advanced modeling procedures are required when the response is multivariate in nature and the samples showcase a structured dependence pattern. In this work, with the aim of developing a multivariate DNAm biomarker from a multi-centric study, we propose a general framework for high-dimensional, mixed-effects multitask learning. A penalized estimation scheme based on an EM algorithm is devised, in which any penalty criteria for fixed-effects models can be conveniently incorporated in the fitting process. The methodology is then employed to create a novel surrogate of cardiovascular and high blood pressure comorbidities, showcasing better results, both in terms of predictive power and epidemiological interpretation, than state-of-the-art alternatives.

【3】 Nonparametric Estimation of Covariance and Autocovariance Operators on the Sphere 标题:球面上协方差和自协方差算子的非参数估计 链接:https://arxiv.org/abs/2112.12694

作者:Alessia Caponera,Julien Fageot,Matthieu Simeoni,Victor M. Panaretos 机构:École Polytechnique Fédérale de Lausanne 摘要:我们提出了函数数据背景下球面随机场二阶中心矩的非参数估计。我们考虑一个测量框架,其中同分布的球面随机场集合中的每个场在几个随机方向上采样,可能受到测量误差的影响。这组场可以是i.i.d.的,也可以是序列相关的。虽然已经对单位区间上定义的随机函数探索了类似的设置,但文献中提出的非参数估计通常依赖于局部多项式,而局部多项式不容易扩展到(乘积)球面设置。因此,我们将我们的估计过程表述为一个包含广义Tikhonov正则项的变分问题。后者支持平滑协方差/自协方差函数,其中平滑度通过合适的类Sobolev伪微分算子指定。利用再生核Hilbert空间的机制,我们建立了完全刻画估计量形式的表示定理(representer theorems)。对于密集(空间样本数增加)和稀疏(空间样本数有界)区域,我们确定了当场数发散时它们的一致收敛速度。此外,我们还在每个场具有固定数量样本的假设下,通过模拟验证并证明了我们估计程序的实际可行性。我们的数值估计程序利用了我们设置的稀疏性和二阶Kronecker结构,与朴素实现相比,将计算和内存需求减少了大约三个数量级。 摘要:We propose nonparametric estimators for the second-order central moments of spherical random fields within a functional data context. We consider a measurement framework where each field among an identically distributed collection of spherical random fields is sampled at a few random directions, possibly subject to measurement error. The collection of fields could be i.i.d. or serially dependent. Though similar setups have already been explored for random functions defined on the unit interval, the nonparametric estimators proposed in the literature often rely on local polynomials, which do not readily extend to the (product) spherical setting. We therefore formulate our estimation procedure as a variational problem involving a generalized Tikhonov regularization term. The latter favours smooth covariance/autocovariance functions, where the smoothness is specified by means of suitable Sobolev-like pseudo-differential operators. Using the machinery of reproducing kernel Hilbert spaces, we establish representer theorems that fully characterize the form of our estimators. We determine their uniform rates of convergence as the number of fields diverges, both for the dense (increasing number of spatial samples) and sparse (bounded number of spatial samples) regimes. We moreover validate and demonstrate the practical feasibility of our estimation procedure in a simulation setting, assuming a fixed number of samples per field. Our numerical estimation procedure leverages the sparsity and second-order Kronecker structure of our setup to reduce the computational and memory requirements by approximately three orders of magnitude compared to what a naive implementation would require.

【4】 Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev 标题:从Poincaré到Log-Sobolev的朗之万蒙特卡罗分析 链接:https://arxiv.org/abs/2112.12662

作者:Sinho Chewi,Murat A. Erdogdu,Mufan Bill Li,Ruoqi Shen,Matthew Zhang 备注:35 pages 摘要:经典地,在$\pi$满足庞加莱不等式的唯一假设下,连续时间Langevin扩散指数快速收敛到其平稳分布$\pi$。然而,利用这一事实为离散时间Langevin Monte Carlo(LMC)算法提供保证,由于需要处理卡方或Rényi散度,因此具有相当大的挑战性,之前的工作主要集中在强对数凹目标上。在这项工作中,我们假设$\pi$满足Latała--Oleszkiewicz不等式或修正的log-Sobolev不等式(在Poincaré设置和log-Sobolev设置之间插值),为LMC提供了第一个收敛保证。与以前的工作不同,我们的结果允许弱光滑性,并且不需要凸性或耗散性条件。 摘要:Classically, the continuous-time Langevin diffusion converges exponentially fast to its stationary distribution $\pi$ under the sole assumption that $\pi$ satisfies a Poincar\'e inequality. Using this fact to provide guarantees for the discrete-time Langevin Monte Carlo (LMC) algorithm, however, is considerably more challenging due to the need for working with chi-squared or R\'enyi divergences, and prior works have largely focused on strongly log-concave targets. In this work, we provide the first convergence guarantees for LMC assuming that $\pi$ satisfies either a Lata{\l}a--Oleszkiewicz or modified log-Sobolev inequality, which interpolates between the Poincar\'e and log-Sobolev settings. Unlike prior works, our results allow for weak smoothness and do not require convexity or dissipativity conditions.
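作为补充,下面给出未调整朗之万算法(ULA,即离散时间 LMC)最基本形式的一个示意实现,目标分布取一维双峰分布;势能函数、步长与迭代次数均为假设的示例参数,仅用于说明迭代格式,并非论文分析所针对的具体设定。

```python
import numpy as np

def grad_U(x):
    # 示例势能 U(x) = (x^2 - 1)^2 / 2,对应双峰目标 pi(x) ∝ exp(-U(x));梯度为 2x(x^2 - 1)
    return 2.0 * x * (x**2 - 1.0)

def lmc(n_iter=50_000, step=1e-2, x0=0.0, seed=0):
    """未调整朗之万算法:x_{k+1} = x_k - step * grad U(x_k) + sqrt(2*step) * xi_k。"""
    rng = np.random.default_rng(seed)
    x, samples = x0, np.empty(n_iter)
    for k in range(n_iter):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        samples[k] = x
    return samples

samples = lmc()
print("样本均值:", samples.mean(), "样本方差:", samples.var())
```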

【5】 Self-supervised Representation Learning of Neuronal Morphologies 标题:神经元形态学的自监督表征学习 链接:https://arxiv.org/abs/2112.12482

作者:Marissa A. Weis,Laura Pede,Timo Lüddecke,Alexander S. Ecker 机构:Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany; Institute for Theoretical Physics, University of Tübingen 摘要:了解细胞类型的多样性及其在大脑中的功能是神经科学的关键挑战之一。大规模数据集的出现导致了对细胞类型分类的无偏和定量方法的需求。我们介绍GraphDINO,一种纯数据驱动的方法,用于学习神经元三维形态的低维表示。GraphDINO是一种新的基于Transformer模型的自监督学习的空间图表示学习方法。它在节点间基于注意的全局交互和经典的图卷积处理之间平滑插值。我们表明,该方法能够产生与手动基于特征的分类相当的形态学细胞类型聚类,并且与两个不同物种和皮层区域的专家标记细胞类型具有良好的对应性。我们的方法适用于神经科学以外的环境,其中数据集中的样本是图形,需要图形级嵌入。 摘要:Understanding the diversity of cell types and their function in the brain is one of the key challenges in neuroscience. The advent of large-scale datasets has given rise to the need of unbiased and quantitative approaches to cell type classification. We present GraphDINO, a purely data-driven approach to learning a low dimensional representation of the 3D morphology of neurons. GraphDINO is a novel graph representation learning method for spatial graphs utilizing self-supervised learning on transformer models. It smoothly interpolates between attention-based global interaction between nodes and classic graph convolutional processing. We show that this method is able to yield morphological cell type clustering that is comparable to manual feature-based classification and shows a good correspondence to expert-labeled cell types in two different species and cortical areas. Our method is applicable beyond neuroscience in settings where samples in a dataset are graphs and graph-level embeddings are desired.

【6】 Shearlet-based regularization in statistical inverse learning with an application to X-ray tomography 标题:基于Shearlet的统计逆学习正则化及其在X射线层析成像中的应用 链接:https://arxiv.org/abs/2112.12443

作者:Tatiana A. Bubba,Luca Ratti 机构: Bubba) Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Ratti) Department of Mathematics, University of Genoa 摘要:统计逆学习理论是逆问题与统计学习交叉的一个领域,近年来受到越来越多的关注。为了使这种相互作用更趋向于变分正则化框架,最近证明了一类凸函数的收敛速度,对称Bregman距离中的$p$-齐次正则化子,具有$p\in(1,2)$。沿着这条路径,我们进一步研究稀疏促进正则化,并将上述收敛速度扩展到使用$p\in(1,2)$范数正则化,用于一类特殊的非紧Banach框架,称为Shearlet,并可能约束到某个凸集。根据$\Gamma$-收敛理论的参数,通过(部分)理论分析补充数值证据,将$p=1$情况作为极限情况$(1,2)\ni p\rightarrow 1$处理。我们使用模拟和测量数据,在成像角度随机采样的情况下,在X射线层析成像的背景下,数值演示了我们的理论结果。 摘要:Statistical inverse learning theory, a field that lies at the intersection of inverse problems and statistical learning, has lately gained more and more attention. In an effort to steer this interplay more towards the variational regularization framework, convergence rates have recently been proved for a class of convex, $p$-homogeneous regularizers with $p \in (1,2]$, in the symmetric Bregman distance. Following this path, we take a further step towards the study of sparsity-promoting regularization and extend the aforementioned convergence rates to work with $\ell^p$-norm regularization, with $p \in (1,2)$, for a special class of non-tight Banach frames, called shearlets, and possibly constrained to some convex set. The $p = 1$ case is approached as the limit case $(1,2) \ni p \rightarrow 1$, by complementing numerical evidence with a (partial) theoretical analysis, based on arguments from $\Gamma$-convergence theory. We numerically demonstrate our theoretical results in the context of X-ray tomography, under random sampling of the imaging angles, using both simulated and measured data.

【7】 A generalised matching distribution for the problem of coincidences 标题:重合问题的广义匹配分布 链接:https://arxiv.org/abs/2112.12442

作者:Ben O'Neill 机构:Research School of Population Health, Australian National University 摘要:本文考察了“巧合问题”中出现的经典匹配分布。我们将经典的匹配分布推广到一轮初步分配,其中项目以一定的固定概率正确匹配,剩余的不匹配项目使用简单随机不放回抽样进行分配。我们的广义匹配分布是经典匹配分布和二项式分布的卷积。我们研究后一种分布的性质,并说明如何计算其概率函数。我们还展示了如何使用该分布进行匹配测试和匹配能力的推断。 摘要:This paper examines the classical matching distribution arising in the "problem of coincidences". We generalise the classical matching distribution with a preliminary round of allocation where items are correctly matched with some fixed probability, and remaining non-matched items are allocated using simple random sampling without replacement. Our generalised matching distribution is a convolution of the classical matching distribution and the binomial distribution. We examine the properties of this latter distribution and show how its probability functions can be computed. We also show how to use the distribution for matching tests and inferences of matching ability.
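按摘要的描述,可以用如下示意性的蒙特卡罗模拟来考察这一广义匹配分布:先以固定概率 p 在预分配轮中正确匹配部分项目,再对剩余未匹配项目做简单随机不放回分配(即一次随机置换),统计总匹配数;其中 n、p 与模拟次数均为假设取值。

```python
import numpy as np

def simulate_generalised_matching(n=10, p=0.3, n_sim=100_000, seed=1):
    """模拟广义匹配分布:预分配轮 + 剩余项目的随机置换,返回匹配数的经验 PMF。"""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n + 1)
    for _ in range(n_sim):
        pre_matched = rng.random(n) < p       # 预分配轮:每个项目以概率 p 被正确匹配
        rest = np.flatnonzero(~pre_matched)   # 未匹配项目的位置
        perm = rng.permutation(rest)          # 对剩余项目做不放回的随机分配
        fixed_points = np.sum(perm == rest)   # 第二轮中恰好落回原位置的项目数
        counts[pre_matched.sum() + fixed_points] += 1
    return counts / n_sim

print(np.round(simulate_generalised_matching(), 4))
```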

【8】 Sparse M-estimators in semi-parametric copula models 标题:半参数Copula模型中的稀疏M-估计 链接:https://arxiv.org/abs/2112.12351

作者:Benjamin Poignard,Jean-David Fermanian 摘要:我们研究了伪观测条件下稀疏M-估计的大样本性质。我们的框架涵盖了一类广泛的半参数copula模型,对于这些模型,边际分布是未知的,并被它们的经验对应物所取代。众所周知,与通常的M-估计相比,后一种修改显著改变了极限定律。我们建立了稀疏惩罚M-估计的相合性和渐近正态性,并证明了伪观测下的渐近预言性质,包括参数数发散的情况。我们的框架允许管理潜在无界的基于copula的损失函数。作为补充结果,我们陈述了多元秩统计量的弱极限和由这些映射索引的经验copula过程的弱收敛性。我们将我们的推理方法应用于copulavine模型和copula回归。数值结果强调了该方法在模型规格错误情况下的相关性。 摘要:We study the large sample properties of sparse M-estimators in the presence of pseudo-observations. Our framework covers a broad class of semi-parametric copula models, for which the marginal distributions are unknown and replaced by their empirical counterparts. It is well known that the latter modification significantly alters the limiting laws compared to usual M-estimation. We establish the consistency and the asymptotic normality of our sparse penalized M-estimator and we prove the asymptotic oracle property with pseudo-observations, including the case when the number of parameters is diverging. Our framework allows to manage copula based loss functions that are potentially unbounded. As additional results, we state the weak limit of multivariate rank statistics and the weak convergence of the empirical copula process indexed by such maps. We apply our inference method to copula vine models and copula regressions. The numerical results emphasize the relevance of this methodology in the context of model misspecifications.

【9】 Limiting spectral distribution of large dimensional Spearman's rank correlation matrices 标题:高维Spearman秩相关矩阵的极限谱分布 链接:https://arxiv.org/abs/2112.12347

作者:Zeyu Wu,Cheng Wang 机构:School of Mathematical Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai, China. 摘要:在本文中,我们研究了Spearman秩相关矩阵的经验谱分布,假设观测值是独立同分布的随机向量,并且特征是相关的。我们证明了极限谱分布是以标准化变换后观测值的协方差矩阵为参数的广义Marčenko–Pastur律。利用这些结果,我们比较了几种经典的协方差/相关矩阵,包括样本协方差矩阵、皮尔逊相关矩阵、肯德尔相关矩阵和斯皮尔曼相关矩阵。 摘要:In this paper, we study the empirical spectral distribution of Spearman's rank correlation matrices, under the assumption that the observations are independent and identically distributed random vectors and the features are correlated. We show that the limiting spectral distribution is the generalized Mar\u{c}enko-Pastur law with the covariance matrix of the observation after standardized transformation. With these results, we compare several classical covariance/correlation matrices including the sample covariance matrix, the Pearson's correlation matrix, the Kendall's correlation matrix and the Spearman's correlation matrix.
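下面的示意代码在分量相关的独立同分布高斯观测下计算 Spearman 秩相关矩阵的特征值,可用其直方图与相应的广义 Marčenko–Pastur 律进行比较;维数、样本量与 AR(1) 型协方差结构均为假设的示例。

```python
import numpy as np
from scipy.stats import rankdata

def spearman_corr_matrix(X):
    """对每一列求秩后计算 Pearson 相关,即得到 Spearman 秩相关矩阵;X 形状为 (n, p)。"""
    R = np.apply_along_axis(rankdata, 0, X)
    return np.corrcoef(R, rowvar=False)

rng = np.random.default_rng(0)
n, p = 1200, 300                                   # 维数比 p/n = 0.25(示例值)
Sigma = 0.4 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # 特征相关:AR(1) 协方差
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

eigvals = np.linalg.eigvalsh(spearman_corr_matrix(X))
print("经验谱的范围:", eigvals.min(), eigvals.max())
```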

【10】 Cooperative learning for multi-view analysis 标题:用于多视图分析的合作学习 链接:https://arxiv.org/abs/2112.12337

作者:Daisy Yi Ding,Robert Tibshirani 机构:Department of Statistics, Stanford University, Department of Biomedical Data Science, Stanford University 摘要:我们提出了一种具有多组特征(“视图”)的监督学习新方法。合作学习将通常的预测平方误差损失与“一致”惩罚相结合,以鼓励来自不同数据视图的预测一致。通过改变协议惩罚的权重,我们得到了一系列解决方案,其中包括著名的早期和晚期融合方法。合作学习以自适应方式选择一致程度(或融合),使用验证集或交叉验证来估计测试集预测误差。我们的拟合程序的一个版本是模块化的,在模块化过程中,可以选择适合不同数据视图的不同拟合机制(例如套索、随机森林、增强、神经网络)。在合作正则化线性回归的背景下,该方法将套索惩罚与约定惩罚相结合。当不同的数据视图在其信号中共享一些我们希望加强的潜在关系,而每个视图都有我们希望减少的特殊噪声时,该方法可能特别强大。我们通过模拟和真实数据的例子说明了我们提出的方法的有效性。 摘要:We propose a new method for supervised learning with multiple sets of features ("views"). Cooperative learning combines the usual squared error loss of predictions with an "agreement" penalty to encourage the predictions from different data views to agree. By varying the weight of the agreement penalty, we get a continuum of solutions that include the well-known early and late fusion approaches. Cooperative learning chooses the degree of agreement (or fusion) in an adaptive manner, using a validation set or cross-validation to estimate test set prediction error. One version of our fitting procedure is modular, where one can choose different fitting mechanisms (e.g. lasso, random forests, boosting, neural networks) appropriate for different data views. In the setting of cooperative regularized linear regression, the method combines the lasso penalty with the agreement penalty. The method can be especially powerful when the different data views share some underlying relationship in their signals that we aim to strengthen, while each view has its idiosyncratic noise that we aim to reduce. We illustrate the effectiveness of our proposed method on simulated and real data examples.
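下面用近端梯度(ISTA)最小化摘要中描述的目标——预测平方误差加上鼓励两个视图预测一致的“一致”惩罚,并带 lasso 惩罚——作为示意;这不是论文的官方实现,两个视图 X、Z 以及 rho、lam 等参数均为假设的示例。

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cooperative_lasso(X, Z, y, rho=1.0, lam=0.1, lr=1e-3, n_iter=5000):
    """最小化 0.5*||y - X@bx - Z@bz||^2 + 0.5*rho*||X@bx - Z@bz||^2 + lam*(||bx||_1 + ||bz||_1)。"""
    bx, bz = np.zeros(X.shape[1]), np.zeros(Z.shape[1])
    for _ in range(n_iter):
        resid = y - X @ bx - Z @ bz          # 预测残差
        diff = X @ bx - Z @ bz               # 两个视图预测之差("一致"惩罚项)
        gx = -X.T @ resid + rho * (X.T @ diff)
        gz = -Z.T @ resid - rho * (Z.T @ diff)
        bx = soft_threshold(bx - lr * gx, lr * lam)   # ISTA 更新
        bz = soft_threshold(bz - lr * gz, lr * lam)
    return bx, bz

rng = np.random.default_rng(0)
n = 200
X, Z = rng.standard_normal((n, 10)), rng.standard_normal((n, 10))
y = X[:, 0] + Z[:, 0] + 0.1 * rng.standard_normal(n)   # 两个视图共享信号(示例数据)
bx, bz = cooperative_lasso(X, Z, y)
print(np.round(bx, 2), np.round(bz, 2))
```

按摘要所述,调节一致性惩罚权重 rho 即可在早期融合与晚期融合之间得到一系列解。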

【11】 Asymptotic normality of least squares estimators to stochastic differential equations driven by fractional Brownian motions 标题:分数布朗运动驱动的随机微分方程最小二乘估计的渐近正态性 链接:https://arxiv.org/abs/2112.12333

作者:Yasutaka Shimizu,Shohei Nakajima 机构:Department of Applied Mathematics, Waseda University, Okubo, Shinjuku, Tokyo, Japan 摘要:我们将考虑如下随机微分方程(SDE):$$X_t=X_0+\int_0^t b(X_s,\theta_0)\,ds+\sigma B_t,\qquad t\in(0,T],$$ 其中$\{B_t\}_{t\ge 0}$是Hurst指数$H\in(1/2,1)$的分数布朗运动,$\theta_0$是包含于有界开凸子集$\Theta\subset\mathbb{R}^d$中的参数,$\{b(\cdot,\theta),\theta\in\Theta\}$是一个漂移系数族,其中$b(\cdot,\theta):\mathbb{R}\rightarrow\mathbb{R}$,并且假定$\sigma\in\mathbb{R}$是已知的扩散系数。 摘要:We will consider the following stochastic differential equation (SDE): \begin{equation} X_t=X_0+\int_0^tb(X_s,\theta_0)ds+\sigma B_t,~~~t\in(0,T], \end{equation} where $\{B_t\}_{t\ge 0}$ is a fractional Brownian motion with Hurst index $H\in(1/2,1)$, $\theta_0$ is a parameter that contains a bounded and open convex subset $\Theta\subset\mathbb{R}^d$, $\{b(\cdot,\theta),\theta\in\Theta\}$ is a family of drift coefficients with $b(\cdot,\theta):\mathbb{R}\rightarrow\mathbb{R}$, and $\sigma\in\mathbb{R}$ is assumed to be the known diffusion coefficient.
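作为示意,下面用协方差矩阵的 Cholesky 分解生成分数布朗运动路径,用 Euler 格式离散上述 SDE(漂移取线性形式 b(x,θ)=-θx 作为示例),并用最小二乘估计漂移参数;Hurst 指数、σ、时间跨度等均为假设取值,仅用于展示估计量的形式。

```python
import numpy as np

def fbm_path(n, T=1.0, H=0.7, seed=0):
    """通过协方差矩阵的 Cholesky 分解生成分数布朗运动路径(示意用,复杂度 O(n^3))。"""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H) - np.abs(t[:, None] - t[None, :])**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate([[0.0], L @ rng.standard_normal(n)]), np.concatenate([[0.0], t])

theta0, sigma, H, n = 2.0, 0.5, 0.7, 1000
B, t = fbm_path(n, T=5.0, H=H)
dt = t[1] - t[0]
X = np.zeros(n + 1)
for i in range(n):   # Euler 离散:X_{i+1} = X_i - theta0 * X_i * dt + sigma * (B_{i+1} - B_i)
    X[i + 1] = X[i] - theta0 * X[i] * dt + sigma * (B[i + 1] - B[i])

# 线性漂移下的最小二乘估计:使 sum (dX_i + theta * X_i * dt)^2 最小
dX = np.diff(X)
theta_hat = -np.sum(X[:-1] * dX) / (dt * np.sum(X[:-1] ** 2))
print("theta_hat =", theta_hat)
```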

【12】 Consistency and asymptotic normality of covariance parameter estimators based on covariance approximations 标题:基于协方差近似的协方差参数估计的相合性和渐近正态性 链接:https://arxiv.org/abs/2112.12317

作者:Michael Hediger,Reinhard Furrer 机构:Institute of Mathematics, University of Zurich 备注:40 pages, 1 Figure 摘要:对于具有参数协方差函数的零均值高斯随机场,我们引入了一种新的似然近似概念(称为伪似然函数),它补充了协方差渐缩方法。伪似然函数基于假定协方差函数的直接函数近似。我们证明了在假定协方差函数和协方差近似的可及条件下,基于伪似然函数的估计在增域渐近框架内保持一致性和渐近正态性。 摘要:For a zero-mean Gaussian random field with a parametric covariance function, we introduce a new notion of likelihood approximations (termed pseudo-likelihood functions), which complements the covariance tapering approach. Pseudo-likelihood functions are based on direct functional approximations of the presumed covariance function. We show that under accessible conditions on the presumed covariance function and covariance approximations, estimators based on pseudo-likelihood functions preserve consistency and asymptotic normality within an increasing-domain asymptotic framework.

【13】 Density Regression with Bayesian Additive Regression Trees 标题:基于贝叶斯加性回归树的密度回归 链接:https://arxiv.org/abs/2112.12259

作者:Vittorio Orlandi,Jared Murray,Antonio Linero,Alexander Volfovsky 机构:Dept. of Statistical Science, Duke University, Durham, NC , Dept. of Information, Risk, and Operations Management, University of Texas, Austin, Austin, TX , Dept. of Statistics and Data Sciences 备注:30 pages, 12 figures 摘要:灵活地建模整个密度如何随协变量变化是均值和分位数回归的一个重要但具有挑战性的推广。虽然现有的密度回归方法主要由协变量相关的离散混合模型组成,但在一般的协变量空间中我们考虑了一个连续的隐变量模型,我们称之为DR BART。通过贝叶斯加性回归树(BART)的一种新应用,构造了潜变量到观测数据的先验映射。我们证明了由我们的模型产生的后验函数很快集中在足够光滑的真实生成函数周围。我们还分析了DR-BART在一组具有挑战性的模拟示例上的性能,其中DR-BART优于其他各种贝叶斯密度回归方法。最后,我们将DR-BART应用于来自教育测试和经济学的两个真实数据集,以研究学生成长并预测教育回报。我们建议的采样器是高效的,并允许人们在许多应用环境中利用BART的灵活性,其中响应的整个分布是最重要的。此外,我们在BART内对潜在变量进行拆分的方案有助于其未来应用于其他类别的模型,这些模型可以通过潜在变量进行描述,例如涉及分层或时间序列数据的模型。 摘要:Flexibly modeling how an entire density changes with covariates is an important but challenging generalization of mean and quantile regression. While existing methods for density regression primarily consist of covariate-dependent discrete mixture models, we consider a continuous latent variable model in general covariate spaces, which we call DR-BART. The prior mapping the latent variable to the observed data is constructed via a novel application of Bayesian Additive Regression Trees (BART). We prove that the posterior induced by our model concentrates quickly around true generative functions that are sufficiently smooth. We also analyze the performance of DR-BART on a set of challenging simulated examples, where it outperforms various other methods for Bayesian density regression. Lastly, we apply DR-BART to two real datasets from educational testing and economics, to study student growth and predict returns to education. Our proposed sampler is efficient and allows one to take advantage of BART's flexibility in many applied settings where the entire distribution of the response is of primary interest. Furthermore, our scheme for splitting on latent variables within BART facilitates its future application to other classes of models that can be described via latent variables, such as those involving hierarchical or time series data.

【14】 Assessment of biomarkers for carotenoids, tocopherols, retinol, vitamin B12 and folate in the Hispanic Community Health Study/Study of Latinos 标题:拉美裔社区健康研究/拉丁裔研究中类胡萝卜素、生育酚、视黄醇、维生素B12和叶酸生物标志物的评估 链接:https://arxiv.org/abs/2112.12207

作者:Lillian A. Boe,Yasmin Mossavar-Rahmani,Daniela Sotres-Alvarez,Martha L. Daviglus,Ramon A. Durazo-Arvizu,Robert C. Kaplan,Pamela A. Shaw 机构: Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania; Department of Epidemiology and Population Health, Albert Einstein College of Medicine, Bronx, New York 备注:19 pages in main manuscript including 5 tables and 2 figures; 10 pages of supplement including 4 tables and 1 figure 摘要:测量误差是自我报告饮食中的一个主要问题,它会扭曲饮食与疾病的关系。血清生物标志物避免了自我报告中的主观偏见。作为拉美裔社区健康研究/拉丁美洲人研究(HCHS/SOL)的一部分,收集了所有参与者的自我报告饮食。作为拉丁裔研究:营养与体力活动评估研究(SOLNAS)的一部分,在一个参与者子集上收集了类胡萝卜素、生育酚、视黄醇、维生素B12和叶酸的血液浓度生物标志物。我们研究了这一多样化人群中生物标志物水平、自我报告摄入量、种族和其他参与者特征之间的关系。我们建立了十种营养生物标志物的预测方程,并评估这些预测方程是否具有足够的精度,以便在多变量Cox模型中可靠地检测与该暴露的关联。这种类型的预测暴露通常用于使用回归校准评估测量误差校正后的饮食-疾病关联;然而,功效问题很少被讨论。我们使用模拟研究了在HCHS/SOL队列中使用预测生物标志物水平检测真实平均浓度标志物与假设事件生存结果之间关联的功效,该预测生物标志物水平的测量特征与在SOLNAS中观察到的相似。某些营养素具有良好的功效;然而,较低的类内相关系数导致其他营养素的功效较差。重复测量提高了类内相关系数(ICC);然而,需要进一步的研究来了解如何最好地实现这些饮食生物标志物的潜力。本研究对几种营养生物标志物进行了全面检查,描述了它们与受试者特征的关联,以及测量特征对检测与健康结果关联的功效的影响。 摘要:Measurement error is a major issue in self-reported diet that can distort diet-disease relationships. Serum biomarkers avoid the subjective bias in self-report. As part of the Hispanic Community Health Study/Study of Latinos (HCHS/SOL), self-reported diet was collected on all participants. Blood concentration biomarkers for carotenoids, tocopherols, retinol, vitamin B12 and folate were collected on a subset, as part of the Study of Latinos: Nutrition and Physical Activity Assessment Study (SOLNAS). We examine the relationship between biomarker levels, self-reported intake, ethnicity and other participant characteristics in this diverse population. We build prediction equations for ten nutritional biomarkers and evaluate whether there would be sufficient precision in these prediction equations to reliably detect an association with this exposure in a multivariable Cox model. This type of predicted exposure is commonly used to assess measurement-error corrected diet-disease associations using regression calibration; however, issues of power are rarely discussed. We used simulation to study the power of detecting the association between a true average concentration marker and a hypothetical incident survival outcome in the HCHS/SOL cohort using a predicted biomarker level whose measurement characteristics were similar to those observed for SOLNAS. Good power was observed for some nutrients; whereas, a low intra-class correlation coefficient contributed to poor power for others. Repeat measures improved the ICC; however, further research is needed to understand how best to realize the potential of these dietary biomarkers. This study provides a comprehensive examination of several nutritional biomarkers, characterizing their associations with subject characteristics and the influence the measurement characteristics have on the power to detect associations with health outcomes.

【15】 A family of consistent normally distributed tests for Poissonity 标题:一族相容正态分布的泊松检验 链接:https://arxiv.org/abs/2112.12201

作者:Antonio Di Noia,Marzia Marcheselli,Caterina Pisani,Luca Pratelli 机构:Department of Economics and Statistics, University of Siena, Piazza S. Francesco, Siena, Italy, Naval Academy, viale Italia , Livorno, Italy 摘要:提出了一种基于概率母函数的一致性检验方法,用于评估一大类计数分布的泊松性,其中包括一些最常用的泊松分布替代方法。统计数据除了具有直观和简单的形式外,还具有渐近正态分布,允许直接实现测试。通过广泛的模拟研究,对试验的有限样本特性进行了研究。与已知极限分布的其他试验相比,该试验显示出令人满意的性能。 摘要:A consistent test based on the probability generating function is proposed for assessing Poissonity against a wide class of count distributions, which includes some of the most frequently adopted alternatives to the Poisson distribution. The statistic, in addition to have an intuitive and simple form, is asymptotically normally distributed, allowing a straightforward implementation of the test. The finite sample properties of the test are investigated by means of an extensive simulation study. The test shows a satisfactory behaviour compared to other tests with known limit distribution.

【16】 Surrogate Likelihoods for Variational Annealed Importance Sampling 标题:变分退火重要性抽样的替代似然率 链接:https://arxiv.org/abs/2112.12194

作者:Martin Jankowiak,Du Phan 机构: One regime that 1Broad Institute, broadinstitute 备注:20 pages 摘要:变分推理是一种强大的近似贝叶斯推理范式,具有许多吸引人的特性,包括支持模型学习和数据子采样。相比之下,像哈密顿蒙特卡罗这样的MCMC方法不具有这些性质,但仍然具有吸引力,因为与参数方法相反,MCMC是渐近无偏的。出于这些原因,研究人员试图结合这两类算法的优点,最近的方法更接近于在实践中实现这一愿景。然而,在这些混合方法中支持数据子采样可能是一个挑战,我们通过引入替代似然来解决这一缺点,该替代似然可以与其他变分参数一起学习。我们在理论上认为,由此产生的算法允许用户在推理保真度和计算成本之间进行直观的权衡。在广泛的实证比较中,我们表明我们的方法在实践中表现良好,并且非常适合概率规划框架中的黑盒推理。 摘要:Variational inference is a powerful paradigm for approximate Bayesian inference with a number of appealing properties, including support for model learning and data subsampling. By contrast MCMC methods like Hamiltonian Monte Carlo do not share these properties but remain attractive since, contrary to parametric methods, MCMC is asymptotically unbiased. For these reasons researchers have sought to combine the strengths of both classes of algorithms, with recent approaches coming closer to realizing this vision in practice. However, supporting data subsampling in these hybrid methods can be a challenge, a shortcoming that we address by introducing a surrogate likelihood that can be learned jointly with other variational parameters. We argue theoretically that the resulting algorithm permits the user to make an intuitive trade-off between inference fidelity and computational cost. In an extensive empirical comparison we show that our method performs well in practice and that it is well-suited for black-box inference in probabilistic programming frameworks.

【17】 Bayesian Nested Latent Class Models for Cause-of-Death Assignment using Verbal Autopsies Across Multiple Domains 标题:基于多领域口头尸检的贝叶斯嵌套潜在类死因分类模型 链接:https://arxiv.org/abs/2112.12186

作者:Zehang Richard Li,Zhenke Wu,Irena Chen,Samuel J. Clark 机构:Department of Statistics, University of California, Santa Cruz, Department of Biostatistics, University of Michigan, Department of Sociology, The Ohio State University 备注:Main paper: 35 pages, 4 figures, 2 tables 摘要:了解特定原因的死亡率对于监测人口健康和设计公共卫生干预措施至关重要。在世界范围内,三分之二的死亡没有指定原因。口头尸检(VA)是一种成熟的工具,通过对死者的护理人员进行调查来收集描述医院外死亡的信息。许多低收入和中等收入国家都定期实施这一政策。使用VAs分配死因的统计算法通常容易受到用于训练模型的数据与目标人群之间分布变化的影响。这对分析VAs提出了一个重大挑战,因为目标人群中通常无法获得标记数据。本文提出了一个VA数据的潜在类别模型框架(LCVA),该框架联合对多个异质领域收集的VAs进行建模,为领域外观察分配死亡原因,并估计新领域的特定原因死亡率分数。我们使用嵌套的潜在类模型介绍了收集到的症状的联合分布的简约表示,并开发了一种有效的后验推理算法。我们证明了LCVA在预测性能和可扩展性方面优于现有方法。本文的补充资料和实现该模型的R包可以在线获得。 摘要:Understanding cause-specific mortality rates is crucial for monitoring population health and designing public health interventions. Worldwide, two-thirds of deaths do not have a cause assigned. Verbal autopsy (VA) is a well-established tool to collect information describing deaths outside of hospitals by conducting surveys to caregivers of a deceased person. It is routinely implemented in many low- and middle-income countries. Statistical algorithms to assign cause of death using VAs are typically vulnerable to the distribution shift between the data used to train the model and the target population. This presents a major challenge for analyzing VAs as labeled data are usually unavailable in the target population. This article proposes a Latent Class model framework for VA data (LCVA) that jointly models VAs collected over multiple heterogeneous domains, assign cause of death for out-of-domain observations, and estimate cause-specific mortality fractions for a new domain. We introduce a parsimonious representation of the joint distribution of the collected symptoms using nested latent class models and develop an efficient algorithm for posterior inference. We demonstrate that LCVA outperforms existing methods in predictive performance and scalability. Supplementary materials for this article and the R package to implement the model are available online.

【18】 Dimension-independent Markov chain Monte Carlo on the sphere 标题:球面上的维数无关马尔可夫链蒙特卡罗 链接:https://arxiv.org/abs/2112.12185

作者:H. C. Lie,D. Rudolf,B. Sprungk,T. J. Sullivan 备注:35 pages, 7 figures 摘要:我们考虑具有角中心高斯先验的高维球面上的贝叶斯分析。这些先验用于建模对跖对称的方向数据,很容易在希尔伯特空间中定义,并出现在贝叶斯二元分类和水平集反演中。在本文中,我们推导了有效的马尔可夫链蒙特卡罗方法,用于对这些先验下的后验分布进行近似采样。我们的方法依赖于将采样问题提升到环境Hilbert空间,并利用线性空间中现有的与维数无关的采样器。通过前推马尔可夫核构造,我们得到了球面上的马尔可夫链,它继承了线性空间中采样器的可逆性和谱隙性质。此外,我们提出的算法在数值实验中显示了与维数无关的效率。 摘要:We consider Bayesian analysis on high-dimensional spheres with angular central Gaussian priors. These priors model antipodally-symmetric directional data, are easily defined in Hilbert spaces and occur, for instance, in Bayesian binary classification and level set inversion. In this paper we derive efficient Markov chain Monte Carlo methods for approximate sampling of posteriors with respect to these priors. Our approaches rely on lifting the sampling problem to the ambient Hilbert space and exploit existing dimension-independent samplers in linear spaces. By a push-forward Markov kernel construction we then obtain Markov chains on the sphere, which inherit reversibility and spectral gap properties from samplers in linear spaces. Moreover, our proposed algorithms show dimension-independent efficiency in numerical experiments.
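下面是对论文构造思路的一个示意性理解(并非其算法的完整实现):角中心高斯先验可以看作高斯测度在归一化映射 x↦x/‖x‖ 下的前推,因此可以在环境空间中用与维数无关的 pCN 采样器针对只依赖方向的似然进行采样,再把样本投影到球面;示例中的似然函数与各项参数均为假设。

```python
import numpy as np

def pcn_on_sphere(log_lik, d=500, beta=0.2, n_iter=5000, seed=0):
    """在 R^d 中做 pCN(先验为标准高斯),似然只依赖方向 x/||x||;返回投影到球面上的样本。"""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    samples = []
    for _ in range(n_iter):
        prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(d)   # pCN 提议,保持先验不变
        log_ratio = log_lik(prop / np.linalg.norm(prop)) - log_lik(x / np.linalg.norm(x))
        if np.log(rng.random()) < log_ratio:    # 接受率只涉及似然,不显式依赖维数
            x = prop
        samples.append(x / np.linalg.norm(x))
    return np.array(samples)

mu = np.zeros(500); mu[0] = 1.0
log_lik = lambda s: 20.0 * float(s @ mu)        # 假设的方向似然:偏向 mu 方向
S = pcn_on_sphere(log_lik)
print("样本与 mu 的平均内积:", S[:, 0].mean())
```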

【19】 Optimal and instance-dependent guarantees for Markovian linear stochastic approximation 标题:马尔可夫线性随机逼近的最优和实例相关保证 链接:https://arxiv.org/abs/2112.12770

作者:Wenlong Mou,Ashwin Pananjady,Martin J. Wainwright,Peter L. Bartlett 机构:Department of Electrical Engineering and Computer Sciences⋄, Department of Statistics†, UC Berkeley, Schools of Industrial & Systems Engineering, and, Electrical & Computer Engineering⋆, Georgia Tech 摘要:我们研究了基于遍历马尔可夫链上长度为$n$的观测轨迹、近似求解$d$维线性不动点方程的随机逼近方法。我们首先在标准方案最后一次迭代的平方误差上给出了$t_{\mathrm{mix}}\tfrac{d}{n}$阶的非渐近界,其中$t_{\mathrm{mix}}$是混合时间。然后,我们证明了适当平均的迭代序列上的一个非渐近的依赖实例的界,其首项与局部渐近极大极小极限相匹配,并在高阶项中给出了对参数$(d,t_{\mathrm{mix}})$的精确依赖。我们用一个非渐近minimax下界来补充这些上界,该下界建立了平均SA估计量的实例最优性。我们推导了这些结果在马尔可夫噪声下策略评估——包括所有$\lambda\in[0,1)$的TD($\lambda$)算法族——以及线性自回归模型中的推论。我们的实例相关刻画为超参数调整的细粒度模型选择程序的设计打开了大门(例如,在运行TD($\lambda$)算法时选择$\lambda$的值)。 摘要:We study stochastic approximation procedures for approximately solving a $d$-dimensional linear fixed point equation based on observing a trajectory of length $n$ from an ergodic Markov chain. We first exhibit a non-asymptotic bound of the order $t_{\mathrm{mix}} \tfrac{d}{n}$ on the squared error of the last iterate of a standard scheme, where $t_{\mathrm{mix}}$ is a mixing time. We then prove a non-asymptotic instance-dependent bound on a suitably averaged sequence of iterates, with a leading term that matches the local asymptotic minimax limit, including sharp dependence on the parameters $(d, t_{\mathrm{mix}})$ in the higher order terms. We complement these upper bounds with a non-asymptotic minimax lower bound that establishes the instance-optimality of the averaged SA estimator. We derive corollaries of these results for policy evaluation with Markov noise -- covering the TD($\lambda$) family of algorithms for all $\lambda \in [0, 1)$ -- and linear autoregressive models. Our instance-dependent characterizations open the door to the design of fine-grained model selection procedures for hyperparameter tuning (e.g., choosing the value of $\lambda$ when running the TD($\lambda$) algorithm).
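作为对“带平均的线性随机逼近”这一基本构件的示意,下面在一个合成的线性不动点方程 A·theta = b 上比较末次迭代与 Polyak–Ruppert 平均迭代;为简单起见,用 i.i.d. 噪声代替论文中的马尔可夫链观测,矩阵、步长与噪声水平均为假设取值。

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = 2.0 * np.eye(d) + 0.1 * rng.standard_normal((d, d))   # 假设的期望矩阵(接近 2I,保证迭代稳定)
theta_star = rng.standard_normal(d)
b = A @ theta_star                                          # 线性不动点方程 A theta = b 的解为 theta_star

def averaged_sa(n=100_000, noise=0.5):
    theta, theta_bar = np.zeros(d), np.zeros(d)
    for k in range(1, n + 1):
        A_k = A + noise * rng.standard_normal((d, d))       # 带噪声的观测(此处为 i.i.d.,仅作简化)
        b_k = b + noise * rng.standard_normal(d)
        theta = theta + (0.5 / k**0.7) * (b_k - A_k @ theta)  # 随机逼近更新,步长多项式衰减
        theta_bar += (theta - theta_bar) / k                  # 在线计算 Polyak–Ruppert 平均
    return theta, theta_bar

last, avg = averaged_sa()
print("末次迭代误差:", np.linalg.norm(last - theta_star))
print("平均迭代误差:", np.linalg.norm(avg - theta_star))
```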

【20】 Latent Time Neural Ordinary Differential Equations 标题:潜伏期神经元常微分方程 链接:https://arxiv.org/abs/2112.12728

作者:Srinivas Anumasa,P. K. Srijith 机构:Computer Science and Engineering, Indian Institute of Technology Hyderabad, India 备注:Accepted at AAAI-2022 摘要:神经常微分方程(NODE)是对残差网络(ResNets)等常用深度学习模型的一种连续深度推广。它们提供了参数效率,并在一定程度上自动化了深度学习模型中的模型选择过程。然而,它们缺乏非常必要的不确定性建模和鲁棒性能力,这对于它们在一些实际应用中的使用至关重要,如自动驾驶和医疗保健。我们提出了一种新颖独特的方法,通过考虑ODE解算器在结束时间$T$上的分布来建模节点中的不确定性。所提出的潜在时间节点(LT-NODE)方法将$T$作为潜在变量,并应用贝叶斯学习从数据中获得$T$的后验分布。特别地,我们使用变分推理来学习近似的后验概率和模型参数。通过考虑来自不同后验样本的节点表示来进行预测,并且可以使用单个前向过程有效地进行预测。由于$T$隐式定义了节点的深度,因此$T$上的后验分布也有助于节点中的模型选择。我们还提出了一种自适应潜在时间节点(ALT-NODE),它允许每个数据点在结束时间上具有明显的后验分布。ALT-NODE使用分期变分推理来学习使用推理网络的近似后验概率。通过对合成图像分类数据和若干真实图像分类数据的实验,我们证明了所提出的方法在建模不确定性和鲁棒性方面的有效性。 摘要:Neural ordinary differential equations (NODE) have been proposed as a continuous depth generalization to popular deep learning models such as Residual networks (ResNets). They provide parameter efficiency and automate the model selection process in deep learning models to some extent. However, they lack the much-required uncertainty modelling and robustness capabilities which are crucial for their use in several real-world applications such as autonomous driving and healthcare. We propose a novel and unique approach to model uncertainty in NODE by considering a distribution over the end-time $T$ of the ODE solver. The proposed approach, latent time NODE (LT-NODE), treats $T$ as a latent variable and apply Bayesian learning to obtain a posterior distribution over $T$ from the data. In particular, we use variational inference to learn an approximate posterior and the model parameters. Prediction is done by considering the NODE representations from different samples of the posterior and can be done efficiently using a single forward pass. As $T$ implicitly defines the depth of a NODE, posterior distribution over $T$ would also help in model selection in NODE. We also propose, adaptive latent time NODE (ALT-NODE), which allow each data point to have a distinct posterior distribution over end-times. ALT-NODE uses amortized variational inference to learn an approximate posterior using inference networks. We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification data.

【21】 The interplay between ranking and communities in networks 标题:网络中排名与社区的相互作用 链接:https://arxiv.org/abs/2112.12670

作者:Laura Iacovissi,Caterina De Bacco 机构:Max-Planck Institute for Intelligent Systems, Cyber Valley, Tuebingen , Germany, Bosch Industry on Campus Lab, University of Tuebingen 摘要:社区检测和层次提取通常被认为是网络上独立的推理任务。在研究真实世界的数据时,只考虑其中一种可能过于简单。在这项工作中,我们提出了一个基于社区和等级结构之间相互作用的生成模型。它假设每个节点在交互机制中都有一个首选项,具有相同首选项的节点更有可能进行交互,而异构交互仍然是允许的。算法实现是有效的,因为它利用了网络数据集的稀疏性。我们在合成数据和真实数据上演示了我们的方法,并将其性能与社区检测和排名提取的两种标准方法进行了比较。我们发现,该算法能够准确地检索不同场景中每个节点的偏好,并且可以区分行为不同于大多数节点的节点子集。因此,该模型可以识别网络是否具有总体首选交互机制。在没有关于什么结构能够很好地解释观测到的网络数据集的明确“先验”信息的情况下,这是相关的。我们的模型允许从业者从数据中自动学习这一点。 摘要:Community detection and hierarchy extraction are usually thought of as separate inference tasks on networks. Considering only one of the two when studying real-world data can be an oversimplification. In this work, we present a generative model based on an interplay between community and hierarchical structures. It assumes that each node has a preference in the interaction mechanism and nodes with the same preference are more likely to interact, while heterogeneous interactions are still allowed. The algorithmic implementation is efficient, as it exploits the sparsity of network datasets. We demonstrate our method on synthetic and real-world data and compare performance with two standard approaches for community detection and ranking extraction. We find that the algorithm accurately retrieves each node's preference in different scenarios and we show that it can distinguish small subsets of nodes that behave differently than the majority. As a consequence, the model can recognise whether a network has an overall preferred interaction mechanism. This is relevant in situations where there is no clear "a priori" information about what structure explains the observed network datasets well. Our model allows practitioners to learn this automatically from the data.

【22】 Should transparency be (in-)transparent? On monitoring aversion and cooperation in teams 标题:透明度应该是(In-)透明的吗?论团队中厌恶与合作的监控 链接:https://arxiv.org/abs/2112.12621

作者:Michalis Drouvelis,Johannes Jarke-Neuert,Johannes Lohse 机构: University of Hamburg 备注:13 pages excluding appendix, 22 pages including appendix, 3 figures 摘要:许多现代组织采用包括监控员工行为的方法,以鼓励工作场所的团队合作。虽然监测促进了透明的工作环境,但使监测本身透明的效果可能模棱两可,在文献中很少受到关注。通过一项新的实验室实验,我们创造了一个工作环境,在这个环境中,第一个搬运工可以(或不能)在一轮比赛结束时观察第二个搬运工的监控。我们的框架由一个标准的重复顺序囚徒困境组成,在这个框架中,第二个行动者可以观察第一个行动者做出的选择,无论是外在的还是内在的。我们表明,当监测变得透明时,相互合作发生的频率显著提高。此外,我们的研究结果强调了有条件的合作者(更有可能监督)在促进团队合作方面的关键作用。总的来说,观察到的合作促进效应是由于监测行动,这些行动携带有关先行者的信息,这些先行者使用这些信息来更好地筛选其合作伙伴的类型,从而降低被利用的风险。 摘要:Many modern organisations employ methods which involve monitoring of employees' actions in order to encourage teamwork in the workplace. While monitoring promotes a transparent working environment, the effects of making monitoring itself transparent may be ambiguous and have received surprisingly little attention in the literature. Using a novel laboratory experiment, we create a working environment in which first movers can (or cannot) observe second mover's monitoring at the end of a round. Our framework consists of a standard repeated sequential Prisoner's Dilemma, where the second mover can observe the choices made by first movers either exogenously or endogenously. We show that mutual cooperation occurs significantly more frequently when monitoring is made transparent. Additionally, our results highlight the key role of conditional cooperators (who are more likely to monitor) in promoting teamwork. Overall, the observed cooperation enhancing effects are due to monitoring actions that carry information about first movers who use it to better screen the type of their co-player and thereby reduce the risk of being exploited.

【23】 Optimal learning of high-dimensional classification problems using deep neural networks 标题:基于深度神经网络的高维分类问题的最优学习 链接:https://arxiv.org/abs/2112.12555

作者:Philipp Petersen,Felix Voigtlaender 摘要:在假设决策边界具有一定规律性的前提下,研究了从无噪声训练样本中学习分类函数的问题。对于一般类型的连续决策边界,我们建立了这个估计问题的通用下界。对于局部Barron正则决策边界类,我们发现最优估计率本质上独立于基本维数,并且可以通过经验风险最小化方法在一类合适的深度神经网络上实现。这些结果基于Barron正则函数类的$L^1$和$L^\infty$熵的新估计。 摘要:We study the problem of learning classification functions from noiseless training samples, under the assumption that the decision boundary is of a certain regularity. We establish universal lower bounds for this estimation problem, for general classes of continuous decision boundaries. For the class of locally Barron-regular decision boundaries, we find that the optimal estimation rates are essentially independent of the underlying dimension and can be realized by empirical risk minimization methods over a suitable class of deep neural networks. These results are based on novel estimates of the $L^1$ and $L^\infty$ entropies of the class of Barron-regular functions.

【24】 Emulation of greenhouse-gas sensitivities using variational autoencoders 标题:利用变分自动编码器模拟温室气体敏感性 链接:https://arxiv.org/abs/2112.12524

作者:Laura Cartwright,Andrew Zammit-Mangion,Nicholas M. Deutscher 机构:School of Mathematics and Applied Statistics, University of Wollongong, Wollongong, Centre for Atmospheric Chemistry, School of Earth, Atmospheric and Life Sciences, University of Wollongong, Wollongong, Australia 备注:25 pages, 8 figures, 2 tables, data & code available 摘要:通量反演是通过观测气体摩尔分数来确定气体源和汇的过程。反演通常涉及运行拉格朗日粒子色散模型(LPDM),以在感兴趣的空间域上生成观测值和通量之间的灵敏度。对于每次气体测量,LPDM必须在时间上向后运行,这在计算上是不允许的。为了解决这个问题,我们开发了一种新的LPDM灵敏度时空模拟器,该模拟器使用卷积变分自动编码器(CVAE)构建。利用CVAE的编码段,我们得到了低维空间中潜在变量的近似(变分)后验分布。然后,我们在低维空间上使用时空高斯过程模拟器来模拟预测位置和时间点的新变量。仿真变量然后通过CVAE的解码器段传递,以产生仿真灵敏度。我们证明了基于CVAE的仿真器优于使用经验正交函数构建的更传统的仿真器,并且它可以用于不同的LPDM。我们的结论是,我们基于仿真的方法可以可靠地减少生成用于高分辨率通量反演的LPDM输出所需的计算时间。 摘要:Flux inversion is the process by which sources and sinks of a gas are identified from observations of gas mole fraction. The inversion often involves running a Lagrangian particle dispersion model (LPDM) to generate sensitivities between observations and fluxes over a spatial domain of interest. The LPDM must be run backward in time for every gas measurement, and this can be computationally prohibitive. To address this problem, here we develop a novel spatio-temporal emulator for LPDM sensitivities that is built using a convolutional variational autoencoder (CVAE). With the encoder segment of the CVAE, we obtain approximate (variational) posterior distributions over latent variables in a low-dimensional space. We then use a spatio-temporal Gaussian process emulator on the low-dimensional space to emulate new variables at prediction locations and time points. Emulated variables are then passed through the decoder segment of the CVAE to yield emulated sensitivities. We show that our CVAE-based emulator outperforms the more traditional emulator built using empirical orthogonal functions and that it can be used with different LPDMs. We conclude that our emulation-based approach can be used to reliably reduce the computing time needed to generate LPDM outputs for use in high-resolution flux inversions.

【25】 Heuristic Random Designs for Exact Identification of Defectives Using Single Round Non-adaptive Group Testing and Compressed Sensing 标题:基于单轮非自适应分组测试和压缩感知的启发式随机设计精确识别缺陷 链接:https://arxiv.org/abs/2112.12500

作者:Catherine A. Haddad-Zaaknoon 机构:Technion - Israel Institute of Technology 摘要:在COVID-19大流行爆发中所面临的挑战之一,是减少识别病毒携带者所需的检测次数,以遏制病毒传播,同时保持检测的可靠性。为了解决这个问题,研究了基于群体测试和压缩感知方法(GTCS)的患病率测试范式。在这些设置中,设计了一种非自适应组测试算法来排除确定的阴性样本。然后,在简化的问题上,除了为组测试阶段设计的初始测试矩阵外,采用压缩感知算法对阳性样本进行解码,而无需进行任何进一步的测试。结果是采用单轮非自适应分组测试-压缩感知算法来识别阳性样本。在本文中,我们提出了一种启发式随机方法来构造测试设计,称为$\alpha-$random row design或$\alpha-$RRD。在$\alpha-$RRD中,构造了一个随机测试矩阵,使得每个测试在一个组测试或池中最多聚集$\alpha$个样本。混检测试是一个接一个地启发式选择的,使得以前在同一测试中被选入的样本在新测试中聚合在一起的可能性较小。我们针对若干不同的$\alpha$取值,在GTCS范式中检验了$\alpha-$RRD设计的性能。实验在合成数据上进行。我们的结果表明,对于$\alpha$的某些值,当在GTCS范式中应用$\alpha-$RRD设计时,测试次数最多可以减少10倍。 摘要:Among the challenges that the COVID-19 pandemic outbreak revealed is the problem to reduce the number of tests required for identifying the virus carriers in order to contain the viral spread while preserving the tests reliability. To cope with this issue, a prevalence testing paradigm based on group testing and compressive sensing approach or GTCS was examined. In these settings, a non-adaptive group testing algorithm is designed to rule out sure-negative samples. Then, on the reduced problem, a compressive sensing algorithm is applied to decode the positives without requiring any further testing besides the initial test matrix designed for the group testing phase. The result is a single-round non-adaptive group testing - compressive sensing algorithm to identify the positive samples. In this paper, we propose a heuristic random method to construct the test design called $\alpha-$random row design or $\alpha-$RRD. In the $\alpha-$RRD, a random test matrix is constructed such that each test aggregates at most $\alpha$ samples in one group test or pool. The pooled tests are heuristically selected one by one such that samples that were previously selected in the same test are less likely to be aggregated together in a new test. We examined the performance of the $\alpha-$RRD design within the GTCS paradigm for several values of $\alpha$. The experiments were conducted on synthetic data. Our results show that, for some values of $\alpha$, a reduction of up to 10 fold in the tests number can be achieved when $\alpha-$RRD design is applied in the GTCS paradigm.
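下面给出 $\alpha-$RRD 设计思想的一个示意实现:逐个构造混检(pool),每个混检最多聚合 α 个样本,并启发式地降低此前已在同一混检中出现过的样本再次被分到一起的概率;样本数、检测数、α 以及具体的惩罚形式 exp(-penalty) 均为假设,并非论文中的精确规则。

```python
import numpy as np

def alpha_rrd(n_samples=100, n_tests=30, alpha=10, seed=0):
    """构造 n_tests x n_samples 的 0/1 检测矩阵,每行(一次混检)最多包含 alpha 个样本。"""
    rng = np.random.default_rng(seed)
    co_tested = np.zeros((n_samples, n_samples))   # 记录样本两两共同出现在同一混检中的次数
    design = np.zeros((n_tests, n_samples), dtype=int)
    for t in range(n_tests):
        pool = []
        for s in rng.permutation(n_samples):
            if len(pool) == alpha:
                break
            # 启发式:与当前 pool 中样本共同出现过的次数越多,被再次选入的概率越低
            penalty = co_tested[s, pool].sum() if pool else 0.0
            if rng.random() < np.exp(-penalty):
                pool.append(s)
        design[t, pool] = 1
        for i in pool:
            for j in pool:
                if i != j:
                    co_tested[i, j] += 1
    return design

D = alpha_rrd()
print("每次混检聚合的样本数:", D.sum(axis=1))
```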

【26】 Equivariance and generalization in neural networks 标题:神经网络中的等变性与泛化 链接:https://arxiv.org/abs/2112.12493

作者:Srinath Bulusu,Matteo Favoni,Andreas Ipp,David I. Müller,Daniel Schuh 机构:Schuh,†, Institute for Theoretical Physics, TU Wien, Wiedner Hauptstr. ,-, Vienna, Austria, Speaker and corresponding author 备注:8 pages, 7 figures, proceedings for the 14th Quark Confinement and the Hadron Spectrum Conference (vConf2021) 摘要:高能物理和晶格场理论的基本对称性所起的关键作用要求在应用于所考虑的物理系统的神经网络体系结构中实现这种对称性。在这些会议中,我们重点讨论了在网络属性中引入翻译等价性的后果,特别是在性能和泛化方面。通过研究复标量场理论,证明了等变网络的优点,在此基础上研究了各种回归和分类任务。为了进行有意义的比较,通过系统搜索确定了有前途的等变和非等变体系结构。结果表明,在大多数任务中,我们最好的等变体系结构比其非等变体系结构的性能和泛化能力要好得多,这不仅适用于训练集中表示的物理参数,而且适用于不同的晶格尺寸。 摘要:The crucial role played by the underlying symmetries of high energy physics and lattice field theories calls for the implementation of such symmetries in the neural network architectures that are applied to the physical system under consideration. In these proceedings, we focus on the consequences of incorporating translational equivariance among the network properties, particularly in terms of performance and generalization. The benefits of equivariant networks are exemplified by studying a complex scalar field theory, on which various regression and classification tasks are examined. For a meaningful comparison, promising equivariant and non-equivariant architectures are identified by means of a systematic search. The results indicate that in most of the tasks our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts, which applies not only to physical parameters beyond those represented in the training set, but also to different lattice sizes.

【27】 Generalization capabilities of neural networks in lattice applications 标题:格型应用中神经网络的泛化能力 链接:https://arxiv.org/abs/2112.12474

作者:Srinath Bulusu,Matteo Favoni,Andreas Ipp,David I. Müller,Daniel Schuh 机构: SchuhInstitute for Theoretical Physics, Massachusetts Institute of Technology 备注:10 pages, 7 figures, proceedings for the 38th International Symposium on Lattice Field Theory (LATTICE21) 摘要:近年来,机器学习在格场理论中的应用越来越广泛。这种理论的一个基本要素是对称性,对称性包含在神经网络属性中,可以在性能和可推广性方面带来高回报。具有周期边界条件的晶格上的物理系统通常具有的一个基本对称性是时空平移下的等变。在这里,我们调查的优势,采用翻译等变神经网络有利于非等变的。我们考虑的系统是一个复杂的标量场,在通量表示的二维格子上具有四次相互作用,其中网络执行各种回归和分类任务。通过系统搜索,确定了有前途的等变和非等变体系结构。我们证明,在大多数这些任务中,我们最好的等变体系结构比非等变体系结构的性能和通用性要好得多,这不仅适用于训练集中表示的物理参数,也适用于不同的晶格尺寸。 摘要:In recent years, the use of machine learning has become increasingly popular in the context of lattice field theories. An essential element of such theories is represented by symmetries, whose inclusion in the neural network properties can lead to high reward in terms of performance and generalizability. A fundamental symmetry that usually characterizes physical systems on a lattice with periodic boundary conditions is equivariance under spacetime translations. Here we investigate the advantages of adopting translationally equivariant neural networks in favor of non-equivariant ones. The system we consider is a complex scalar field with quartic interaction on a two-dimensional lattice in the flux representation, on which the networks carry out various regression and classification tasks. Promising equivariant and non-equivariant architectures are identified with a systematic search. We demonstrate that in most of these tasks our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts, which applies not only to physical parameters beyond those represented in the training set, but also to different lattice sizes.

【28】 Using Sequential Statistical Tests to Improve the Performance of Random Search in hyperparameter Tuning 标题:利用序贯统计检验提高随机搜索在超参数整定中的性能 链接:https://arxiv.org/abs/2112.12438

作者:Philip Buczak,Daniel Horn 机构:Department of Statistics, TU Dortmund University, Vogelpothsweg , Dortmund, Germany, Editor: 摘要:超参数调整是机器学习中最耗时的部分之一:必须评估大量不同超参数设置的性能,以找到最佳设置。尽管存在将所需评估次数最小化的现代优化算法,但单个设置的评估仍然昂贵:使用重采样技术,机器学习方法必须在不同的训练数据集上拟合固定次数的$K$。作为设置性能的估计器,使用$K$拟合的相应平均值。许多超参数设置在不到$K$的重采样迭代后可能会被丢弃,因为它们明显低于高性能设置。然而,在实践中,重采样通常执行到最后,浪费了大量的计算工作。我们建议使用顺序测试程序来最小化重采样迭代次数,以检测较差的参数设置。为此,我们首先分析了重采样误差的分布,我们会发现,对数正态分布是有希望的。然后,我们建立了一个假设这种分布的顺序测试程序。该顺序测试程序在随机搜索算法中使用。在一些实际数据情况下,我们比较了标准随机搜索和增强的顺序随机搜索。可以证明,顺序随机搜索能够找到相对较好的超参数设置,但是,找到这些设置所需的计算时间大约减少了一半。 摘要:Hyperparamter tuning is one of the the most time-consuming parts in machine learning: The performance of a large number of different hyperparameter settings has to be evaluated to find the best one. Although modern optimization algorithms exist that minimize the number of evaluations needed, the evaluation of a single setting is still expensive: Using a resampling technique, the machine learning method has to be fitted a fixed number of $K$ times on different training data sets. As an estimator for the performance of the setting the respective mean value of the $K$ fits is used. Many hyperparameter settings could be discarded after less than $K$ resampling iterations, because they already are clearly inferior to high performing settings. However, in practice, the resampling is often performed until the very end, wasting a lot of computational effort. We propose to use a sequential testing procedure to minimize the number of resampling iterations to detect inferior parameter setting. To do so, we first analyze the distribution of resampling errors, we will find out, that a log-normal distribution is promising. Afterwards, we build a sequential testing procedure assuming this distribution. This sequential test procedure is utilized within a random search algorithm. We compare a standard random search with our enhanced sequential random search in some realistic data situation. It can be shown that the sequential random search is able to find comparably good hyperparameter settings, however, the computational time needed to find those settings is roughly halved.
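下面是摘要思路的一个示意实现:随机搜索中对每个超参数设置逐折计算重采样误差,并在每折之后基于对数正态假设(即对 log 误差做单侧 t 检验)与当前最优设置比较,一旦显著更差便提前放弃该设置;检验形式、阈值与示例目标函数均为简化假设,并非论文给出的具体序贯检验程序。

```python
import numpy as np
from scipy import stats

def sequential_random_search(evaluate_fold, sample_config, n_configs=30, K=10, alpha=0.01, seed=0):
    """evaluate_fold(config, k) 返回第 k 折的(正的)验证误差;返回找到的最优配置及其平均误差。"""
    rng = np.random.default_rng(seed)
    best_cfg, best_mean, best_logs = None, np.inf, None
    for _ in range(n_configs):
        cfg, logs = sample_config(rng), []
        for k in range(K):
            logs.append(np.log(evaluate_fold(cfg, k)))     # 对数正态假设下,对 log 误差做检验
            if best_logs is not None and len(logs) >= 3:
                _, p = stats.ttest_ind(logs, best_logs, alternative="greater")
                if p < alpha:                               # 当前配置显著差于最优配置:提前停止
                    break
        if len(logs) == K and np.exp(np.mean(logs)) < best_mean:
            best_cfg, best_mean, best_logs = cfg, np.exp(np.mean(logs)), logs
    return best_cfg, best_mean

# 假设的示例:调一个超参数 lam,折误差为带噪声的 U 形函数
def evaluate_fold(cfg, k):
    rng = np.random.default_rng(1000 * k + int(1e6 * cfg["lam"]) % 997)
    return (np.log10(cfg["lam"]) + 2.0) ** 2 + 0.5 + 0.1 * rng.random()

sample_config = lambda rng: {"lam": 10.0 ** rng.uniform(-5, 1)}
print(sequential_random_search(evaluate_fold, sample_config))
```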

【29】 When Random Tensors meet Random Matrices 标题:当随机张量与随机矩阵相交时 链接:https://arxiv.org/abs/2112.12348

作者:Mohamed El Amine Seddik,Maxime Guillaud,Romain Couillet 机构:Mathematical and Algorithmic Sciences Lab, Huawei Paris Research Center, Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France 摘要:本文利用随机矩阵理论(RMT)研究了高斯噪声下的非对称$d$阶尖峰张量模型。使用[Lim, 2005]中奇异向量与奇异值的变分定义,我们表明,所考虑模型的分析可归结为一个等效的尖峰对称分块随机矩阵的分析,该矩阵由所研究的张量与其最佳秩1近似所对应的奇异向量做收缩而构成。当$\frac{n_i}{\sum_{j=1}^d n_j}\to c_i\in[0,1]$(其中$n_i$为张量各维的维数)时,我们的方法能够精确刻画几乎必然意义下的渐近奇异值,以及相应奇异向量与真尖峰分量的对齐程度。与其他主要依赖统计物理工具来研究随机张量的工作不同,我们的结果完全依赖于经典的RMT工具,如Stein引理。最后,作为一个特例,恢复了关于尖峰随机矩阵的经典RMT结果。 摘要:Relying on random matrix theory (RMT), this paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise. Using the variational definition of the singular vectors and values of [Lim, 2005], we show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric \textit{block-wise} random matrix, that is constructed from \textit{contractions} of the studied tensor with the singular vectors associated to its best rank-1 approximation. Our approach allows the exact characterization of the almost sure asymptotic singular value and alignments of the corresponding singular vectors with the true spike components, when $\frac{n_i}{\sum_{j=1}^d n_j}\to c_i\in [0, 1]$ with $n_i$'s the tensor dimensions. In contrast to other works that rely mostly on tools from statistical physics to study random tensors, our results rely solely on classical RMT tools such as Stein's lemma. Finally, classical RMT results concerning spiked random matrices are recovered as a particular case.

【30】 Model Selection in Batch Policy Optimization 标题:批量策略优化中的模型选择 链接:https://arxiv.org/abs/2112.12320

作者:Jonathan N. Lee,George Tucker,Ofir Nachum,Bo Dai 机构:♯Stanford University, †Google Research, Brain Team 摘要:我们研究批量策略优化中的模型选择问题:给定一个固定的部分反馈数据集和$M$模型类,学习一个性能与从最佳模型类导出的策略相竞争的策略。我们用线性模型类形式化了背景bandit设置中的问题,方法是确定任何模型选择算法都应最佳权衡的三个误差源:(1)近似误差,(2)统计复杂性和(3)覆盖率。前两个来源在监督学习的模型选择中很常见,在监督学习中对这些属性的最佳权衡进行了很好的研究。与此相反,第三个源对于批处理策略优化是唯一的,并且是由于设置固有的数据集移动。我们首先证明,没有一种批量策略优化算法能够保证同时解决这三个问题,这与批量策略优化的困难和监督学习的积极结果形成了鲜明的对比。尽管这是一个负面结果,但我们表明,放松这三个错误源中的任何一个都可以使算法的设计实现其余两个的近似oracle不等式。最后,我们通过实验证明了这些算法的有效性。 摘要:We study the problem of model selection in batch policy optimization: given a fixed, partial-feedback dataset and $M$ model classes, learn a policy with performance that is competitive with the policy derived from the best model class. We formalize the problem in the contextual bandit setting with linear model classes by identifying three sources of error that any model selection algorithm should optimally trade-off in order to be competitive: (1) approximation error, (2) statistical complexity, and (3) coverage. The first two sources are common in model selection for supervised learning, where optimally trading-off these properties is well-studied. In contrast, the third source is unique to batch policy optimization and is due to dataset shift inherent to the setting. We first show that no batch policy optimization algorithm can achieve a guarantee addressing all three simultaneously, revealing a stark contrast between difficulties in batch policy optimization and the positive results available in supervised learning. Despite this negative result, we show that relaxing any one of the three error sources enables the design of algorithms achieving near-oracle inequalities for the remaining two. We conclude with experiments demonstrating the efficacy of these algorithms.

【31】 Selective Multiple Power Iteration: from Tensor PCA to gradient-based exploration of landscapes 标题:选择性多次方迭代:从张量主成分分析到基于梯度的景观探测 链接:https://arxiv.org/abs/2112.12306

作者:Mohamed Ouerfelli,Mohamed Tamaazousti,Vincent Rivasseau 机构:Universit´e Paris-Saclay, CEA, List, F-, Palaiseau, France, Universit´e Paris-Saclay, CNRSIN,P, IJCLab, Orsay, France 摘要:我们提出了选择性多重幂迭代(SMPI),一种解决重要的张量PCA问题的新算法,该问题旨在恢复被高斯噪声张量$\bf{Z}\in(\mathbb{R}^n)^{\otimes k}$损坏的尖峰$\bf{v_0}^{\otimes k}$,其中$\bf{T}=\sqrt{n}\beta\bf{v_0}^{\otimes k}+\bf{Z}$,$\beta$是信噪比(SNR)。SMPI包括生成多项式数量的随机初始化,对每个初始化执行多项式数量的对称张量幂迭代,然后选择使$\langle\bf{T},\bf{v}^{\otimes k}\rangle$最大的那个。在常规考虑的$n\leq 1000$范围内,$k=3$的各种数值模拟表明,SMPI的实验性能在现有算法的基础上大幅提高,并与理论最优恢复相当。我们表明,这些意想不到的性能是由于一种强大的机制,其中噪声对信号恢复起着关键作用,并且发生在低$\beta$。此外,这种机制源于SMPI的五个基本特性,这五个特性使它区别于以前基于幂迭代的算法。这些显著的结果可能对张量主成分分析的实际应用和理论应用产生重大影响。(i) 我们提供了该算法的一个变体来处理低秩CP张量分解。这些算法甚至在实际数据上也优于现有方法,这对实际应用有巨大的潜在影响。(ii)我们对SMPI和梯度下降方法的行为提出了新的理论见解,用于在各种机器学习问题中存在的高维非凸景观中进行优化。(iii)我们期望这些结果可能有助于讨论推测的统计-算法差距的存在。 摘要:We propose Selective Multiple Power Iterations (SMPI), a new algorithm to address the important Tensor PCA problem that consists in recovering a spike $\bf{v_0}^{\otimes k}$ corrupted by a Gaussian noise tensor $\bf{Z} \in (\mathbb{R}^n)^{\otimes k}$ such that $\bf{T}=\sqrt{n} \beta \bf{v_0}^{\otimes k} + \bf{Z}$ where $\beta$ is the signal-to-noise ratio (SNR). SMPI consists in generating a polynomial number of random initializations, performing a polynomial number of symmetrized tensor power iterations on each initialization, then selecting the one that maximizes $\langle \bf{T}, \bf{v}^{\otimes k} \rangle$. Various numerical simulations for $k=3$ in the conventionally considered range $n \leq 1000$ show that the experimental performances of SMPI improve drastically upon existent algorithms and becomes comparable to the theoretical optimal recovery. We show that these unexpected performances are due to a powerful mechanism in which the noise plays a key role for the signal recovery and that takes place at low $\beta$. Furthermore, this mechanism results from five essential features of SMPI that distinguish it from previous algorithms based on power iteration. These remarkable results may have strong impact on both practical and theoretical applications of Tensor PCA. (i) We provide a variant of this algorithm to tackle low-rank CP tensor decomposition. These proposed algorithms also outperforms existent methods even on real data which shows a huge potential impact for practical applications. (ii) We present new theoretical insights on the behavior of SMPI and gradient descent methods for the optimization in high-dimensional non-convex landscapes that are present in various machine learning problems. (iii) We expect that these results may help the discussion concerning the existence of the conjectured statistical-algorithmic gap.
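下面是 SMPI 基本流程(k=3)的一个示意实现:生成多个随机初始化,对每个初始化做若干次张量幂迭代,最后选取使 ⟨T, v⊗v⊗v⟩ 最大的向量;张量维数、信噪比、初始化与迭代次数均为示例取值,并非论文中关于“多项式数量”的精确设定,能否恢复尖峰取决于信噪比。

```python
import numpy as np

def power_contract(T, v):
    # 三阶张量与向量的两次收缩:返回向量 T(v, v, ·)
    return np.einsum("ijk,i,j->k", T, v, v)

def smpi(T, n_inits=100, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    best_v, best_score = None, -np.inf
    for _ in range(n_inits):                          # 多个随机初始化
        v = rng.standard_normal(n); v /= np.linalg.norm(v)
        for _ in range(n_iters):                      # 张量幂迭代
            v = power_contract(T, v); v /= np.linalg.norm(v)
        score = np.einsum("ijk,i,j,k->", T, v, v, v)  # <T, v ⊗ v ⊗ v>
        if score > best_score:
            best_v, best_score = v, score             # 选择得分最高的初始化
    return best_v, best_score

rng = np.random.default_rng(1)
n, beta = 30, 6.0                                     # 示例规模与信噪比
v0 = rng.standard_normal(n); v0 /= np.linalg.norm(v0)
T = np.sqrt(n) * beta * np.einsum("i,j,k->ijk", v0, v0, v0) + rng.standard_normal((n, n, n))
v_hat, _ = smpi(T)
print("与真尖峰 v0 的对齐度:", abs(v_hat @ v0))
```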

【32】 A combinatorial proof of the Gaussian product inequality conjecture beyond the MTP2 case 标题:超越MTP2情形的高斯乘积不等式猜想的一个组合证明 链接:https://arxiv.org/abs/2112.12283

作者:Frédéric Ouimet 机构:aMcGill University 备注:6 pages, 0 figures 摘要:本文给出了当中心高斯向量$\boldsymbol{X}=(X_1,X_2,\dots,X_d)$的分量可以写成标准高斯向量分量的非负系数线性组合时,高斯乘积不等式(GPI)猜想在所有维度上的一个组合证明。证明归结为伽马函数的某个比值的单调性。我们还证明,我们的条件弱于假设绝对值向量$|\boldsymbol{X}|=(|X_1|,|X_2|,\dots,|X_d|)$在$[0,\infty)^d$上属于$2$阶多元全正($\mathrm{MTP}_2$)类这一条件,而在后一条件下该猜想已知成立。 摘要:In this paper, we present a combinatorial proof of the Gaussian product inequality (GPI) conjecture in all dimensions when the components of the centered Gaussian vector $\boldsymbol{X} = (X_1,X_2,\dots,X_d)$ can be written as linear combinations, with nonnegative coefficients, of the components of a standard Gaussian vector. The proof comes down to the monotonicity of a certain ratio of gamma functions. We also show that our condition is weaker than assuming the vector of absolute values $|\boldsymbol{X}| = (|X_1|,|X_2|,\dots,|X_d|)$ to be in the multivariate totally positive of order $2$ ($\mathrm{MTP}_2$) class on $[0,\infty)^d$, for which the conjecture is already known to be true.
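作为示意,下面用蒙特卡罗检查该情形下 GPI 的一个特例 E[∏ X_j^2] ≥ ∏ E[X_j^2]:向量 X 的各分量取为标准高斯向量分量的非负系数线性组合;非负系数矩阵 A 为随机生成的假设示例,模拟仅作数值佐证而非证明。

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n_mc = 4, 6, 500_000
A = rng.random((d, m))                       # 非负系数矩阵:X = A @ G,G 为标准高斯向量
G = rng.standard_normal((m, n_mc))
X = A @ G                                    # 每个分量都是标准高斯分量的非负系数线性组合

lhs = np.mean(np.prod(X**2, axis=0))         # E[ prod_j X_j^2 ] 的蒙特卡罗估计
rhs = np.prod(np.mean(X**2, axis=1))         # prod_j E[ X_j^2 ] 的蒙特卡罗估计
print(f"E[prod X_j^2] ≈ {lhs:.3f} >= prod E[X_j^2] ≈ {rhs:.3f} : {lhs >= rhs}")
```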

【33】 Algorithmic Probability of Large Datasets and the Simplicity Bubble Problem in Machine Learning 标题:大数据集的算法概率与机器学习中的简单性泡沫问题 链接:https://arxiv.org/abs/2112.12275

作者:Felipe S. Abrahão,Hector Zenil,Fabio Porto,Klaus Wehmuth 机构:National Laboratory for Scientific Computing (LNCC), Petrópolis, RJ, Brazil; for the Natural and Digital Sciences, Paris, France; The Alan Turing Institute, British Library, Euston Rd, London; Algorithmic Dynamics Lab, Unit of Computational 摘要:在挖掘大型数据集以预测新数据时,统计机器学习背后原理的局限性不仅对大数据泛滥构成了严重挑战,也对数据生成过程偏向于低算法复杂性的传统假设构成了严重挑战。即使假设有限数据集生成器中存在一种偏向简单性的潜在算法信息偏置,我们也表明,无论是否使用伪随机生成器,完全自动化的可计算学习算法,特别是当前机器学习(包括深度学习)方法中使用的统计性质的算法,总是会被足够大的数据集自然或人为地欺骗。特别地,我们证明,对于每个有限学习算法,都存在一个足够大的数据集规模,超过该规模后,不可预测的欺骗者的算法概率是任何其他更大数据集的算法概率的上界(至多相差一个仅取决于该学习算法的乘法常数)。换句话说,非常大且复杂的数据集与任何其他特定数据集一样,都可能把学习算法诱入"简单性泡沫"。这些欺骗性数据集保证任何预测都会偏离高算法复杂度的全局最优解,同时收敛到低算法复杂度的局部最优解。我们讨论了规避这种欺骗性现象的框架和经验条件,即从统计机器学习转向一种更强的机器学习,其基础或动机来自算法信息理论和可计算性理论的内在力量。 摘要:When mining large datasets in order to predict new data, limitations of the principles behind statistical machine learning pose a serious challenge not only to the Big Data deluge, but also to the traditional assumptions that data generating processes are biased toward low algorithmic complexity. Even when one assumes an underlying algorithmic-informational bias toward simplicity in finite dataset generators, we show that fully automated, with or without access to pseudo-random generators, computable learning algorithms, in particular those of statistical nature used in current approaches to machine learning (including deep learning), can always be deceived, naturally or artificially, by sufficiently large datasets. In particular, we demonstrate that, for every finite learning algorithm, there is a sufficiently large dataset size above which the algorithmic probability of an unpredictable deceiver is an upper bound (up to a multiplicative constant that only depends on the learning algorithm) for the algorithmic probability of any other larger dataset. In other words, very large and complex datasets are as likely to deceive learning algorithms into a "simplicity bubble" as any other particular dataset. These deceiving datasets guarantee that any prediction will diverge from the high-algorithmic-complexity globally optimal solution while converging toward the low-algorithmic-complexity locally optimal solution. We discuss the framework and empirical conditions for circumventing this deceptive phenomenon, moving away from statistical machine learning towards a stronger type of machine learning based on, or motivated by, the intrinsic power of algorithmic information theory and computability theory.

【34】 Crash Data Augmentation Using Conditional Generative Adversarial Networks (CGAN) for Improving Safety Performance Functions 标题:基于条件生成对抗网络(CGAN)改进安全性能函数的碰撞数据增强 链接:https://arxiv.org/abs/2112.12263

作者:Mohammad Zarei,Bruce Hellinga 机构:Ph.D. Candidate, Department of Civil and Environmental Engineering, University of Waterloo, University Ave., Waterloo, ON N2L 3G1 摘要:本文提出了一种基于条件生成对抗网络(CGAN)的碰撞频率数据增强方法,用于改进碰撞频率模型。通过比较基本SPF(使用原始数据开发)和增强SPF(使用原始数据加合成数据开发)在热点识别性能、模型预测精度和离散参数估计精度方面的表现,对所提方法进行了评估。实验使用模拟和真实碰撞数据集进行。结果表明,CGAN合成的碰撞数据与原始数据具有相同的分布,且增强SPF在几乎所有方面都优于基本SPF,尤其是在离散参数较低的情况下。 摘要:In this paper, we present a crash frequency data augmentation method based on Conditional Generative Adversarial Networks to improve crash frequency models. The proposed method is evaluated by comparing the performance of Base SPFs (developed using original data) and Augmented SPFs (developed using original data plus synthesised data) in terms of hotspot identification performance, model prediction accuracy, and dispersion parameter estimation accuracy. The experiments are conducted using simulated and real-world crash data sets. The results indicate that the crash data synthesised by the CGAN have the same distribution as the original data and that the Augmented SPFs outperform the Base SPFs in almost all aspects, especially when the dispersion parameter is low.
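To make the augmentation step concrete, the sketch below shows the two building blocks of a conditional GAN for tabular crash-frequency records in PyTorch. The layer widths, the covariate dimension, and the Softplus output used to keep generated frequencies nonnegative are illustrative assumptions, not the authors' architecture; after adversarial training on the observed sites, synthetic records are drawn by sampling noise conditioned on site covariates, and the Augmented SPF is refit on the pooled real and synthetic data.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps (noise, site covariates) to a synthetic crash-frequency value."""
    def __init__(self, noise_dim=8, cond_dim=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep the generated frequency nonnegative
        )

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

class Discriminator(nn.Module):
    """Scores (crash frequency, site covariates) pairs as real vs. synthetic."""
    def __init__(self, cond_dim=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # raw logit, to be used with BCEWithLogitsLoss
        )

    def forward(self, y, c):
        return self.net(torch.cat([y, c], dim=1))

# Drawing synthetic rows (untrained networks here, so the values are placeholders):
G = Generator()
z = torch.randn(5, 8)        # noise
c = torch.rand(5, 4)         # illustrative site covariates (e.g., scaled AADT, segment length)
synthetic_freq = G(z, c)     # one synthetic crash-frequency value per conditioned site
```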

【35】 Compelling new electrocardiographic markers for automatic diagnosis 标题:引人注目的新型心电图自动诊断标记物 链接:https://arxiv.org/abs/2112.12196

作者:Cristina Rueda Sabater,Itziar Fernández,Yolanda Larriba,Alejandro Rodríguez Collado,Christian Canedo 机构:Department of Statistics and Operations Research, Universidad de Valladolid, Paseo de Belén, Valladolid, Spain 摘要:从心电图(ECG)信号自动诊断心脏病对临床决策至关重要。然而,基于计算机的决策规则在临床实践中的应用仍然不足,主要原因在于其复杂性和缺乏医学上的可解释性。本研究的目的是通过提供可在临床实践中轻松实施的有价值的诊断规则来解决这些问题。本文介绍了从ECG信号分析中得到的若干有意义的参数,并利用源自所谓FMMecg描绘器的新标记物,定义了两条用于束支传导阻滞自动诊断的简单规则。这些标记物的主要优点是具有良好的统计特性,并且可以用具有临床意义的术语清晰地解释。使用所提出的规则,在来自知名基准数据库的35000多名患者的数据上获得了较高的灵敏度和特异性。特别是,在识别完全性左束支传导阻滞并将其与无心脏病受试者区分时,灵敏度和特异性分别为93%至99%和96%至99%。新的标记物和自动诊断可通过 https://fmmmodel.shinyapps.io/fmmEcg/ 便捷获取,这是一款专为任意给定ECG信号开发的应用。该方案与文献中的其他方案不同,其吸引力主要有三点:其一,这些标记物具有简明的电生理解释;其二,诊断规则具有很高的准确性;最后,与黑盒和深度学习算法不同,任何记录ECG信号的设备都可以提供这些标记物,并可直接完成自动诊断。 摘要:The automatic diagnosis of heart diseases from the electrocardiogram (ECG) signal is crucial in clinical decision-making. However, the use of computer-based decision rules in clinical practice is still deficient, mainly due to their complexity and a lack of medical interpretation. The objective of this research is to address these issues by providing valuable diagnostic rules that can be easily implemented in clinical practice. In this paper, interesting parameters obtained from the analysis of ECG signals are presented, and two simple rules for the automatic diagnosis of Bundle Branch Blocks are defined using new markers derived from the so-called FMMecg delineator. The main advantages of these markers are their good statistical properties and their clear interpretation in clinically meaningful terms. High sensitivity and specificity values have been obtained using the proposed rules with data from more than 35000 patients from well-known benchmarking databases. In particular, to identify Complete Left Bundle Branch Blocks and differentiate this condition from subjects without heart diseases, sensitivity and specificity values range from 93% to 99% and from 96% to 99%, respectively. The new markers and the automatic diagnosis are easily available at https://fmmmodel.shinyapps.io/fmmEcg/, an app specifically developed for any given ECG signal. The proposal is different from others in the literature and it is compelling for three main reasons. First, the markers have a concise electrophysiological interpretation. Second, the diagnosis rules have very high accuracy. Finally, the markers can be provided by any device that registers the ECG signal and the automatic diagnosis is made straightforwardly, in contrast to black-box and deep learning algorithms.

【36】 Simple and near-optimal algorithms for hidden stratification and multi-group learning 标题:用于隐藏分层与多群体学习的简单且近似最优的算法 链接:https://arxiv.org/abs/2112.12181

作者:Christopher Tosh,Daniel Hsu 机构:Memorial Sloan Kettering Cancer Center, New York, NY, Columbia University, New York, NY 摘要:多群体不可知学习是一种形式化的学习标准,它关注人群亚群体中预测因子的条件风险。该标准解决了最近的实际问题,如分组公平性和隐藏分层。本文研究了多群体学习问题解的结构,并给出了简单的近似最优算法。 摘要:Multi-group agnostic learning is a formal learning criterion that is concerned with the conditional risks of predictors within subgroups of a population. The criterion addresses recent practical concerns such as subgroup fairness and hidden stratification. This paper studies the structure of solutions to the multi-group learning problem, and provides simple and near-optimal algorithms for the learning problem.
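As a rough illustration of the quantity the multi-group criterion constrains, the sketch below computes the group-conditional risks of a classifier on hypothetical data; it does not reproduce the paper's algorithms, and the groups, loss, and data here are illustrative assumptions.

```python
import numpy as np

def group_conditional_risks(y_true, y_pred, groups):
    """Average 0-1 loss of the predictor restricted to each subgroup."""
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Hypothetical data: multi-group agnostic learning asks that the predictor's risk
# within every subgroup be competitive with the best benchmark predictor for that
# subgroup, rather than merely small on average over the whole population.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
groups = rng.integers(0, 3, size=200)
print(group_conditional_risks(y_true, y_pred, groups))
```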

机器翻译,仅供参考
