
bnlearn:an R package for Bayesian network learning and inference

Author: CreateAMind
Published: 2022-11-22 17:13:39

bnlearn is an R package for learning the graphical structure of Bayesian networks, estimating their parameters and performing some useful inference. First released in 2007, it has been under continuous development for more than 10 years (and is still going strong). To get started and install the latest development snapshot, type

install.packages("https://www.bnlearn.com/releases/bnlearn_latest.tar.gz", repos = NULL, type = "source")

in your R console. (More detailed installation instructions below.)
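If you prefer a stable release over the development snapshot, bnlearn is also on CRAN, so the usual installation route works as well:

```r
# install the latest stable release from CRAN
install.packages("bnlearn")

# then load the package as usual
library(bnlearn)
```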

bnlearn implements the following constraint-based structure learning algorithms:

  • PC (the stable version);
  • Grow-Shrink (GS);
  • Incremental Association Markov Blanket (IAMB);
  • Fast Incremental Association (Fast-IAMB);
  • Interleaved Incremental Association (Inter-IAMB);
  • Incremental Association with FDR Correction (IAMB-FDR);
  • Max-Min Parents & Children (MMPC);
  • Semi-Interleaved Hiton-PC (SI-HITON-PC);
  • Hybrid Parents & Children (HPC);

the following score-based structure learning algorithms:

  • Hill Climbing (HC);
  • Tabu Search (Tabu);

the following hybrid structure learning algorithms:

  • Max-Min Hill Climbing (MMHC);
  • Hybrid HPC (H2PC);
  • General 2-Phase Restricted Maximization (RSMAX2);

the following local discovery algorithms:

  • Chow-Liu;
  • ARACNE;

and the following Bayesian network classifiers:

  • naive Bayes;
  • Tree-Augmented naive Bayes (TAN).
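
As a quick sketch of how these learners are called, using the small discrete learning.test data set that ships with the package (the function names below are bnlearn's own):

```r
library(bnlearn)
data(learning.test)   # a small discrete data set bundled with bnlearn

# constraint-based: the stable PC algorithm returns a (partially directed) graph
cpdag <- pc.stable(learning.test)

# score-based: hill climbing returns a fully directed DAG
dag <- hc(learning.test)

# hybrid: Max-Min Hill Climbing
dag2 <- mmhc(learning.test)

# print the learned structure in model-string form
modelstring(dag)
```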

Discrete (multinomial) and continuous (multivariate normal) data sets are supported, both for structure and parameter learning. The latter can be performed using either maximum likelihood or Bayesian estimators. Each constraint-based algorithm can be used with several conditional independence tests:

  • categorical data (multinomial distribution):
    • mutual information (parametric, semiparametric and permutation tests);
    • shrinkage-estimator for the mutual information;
    • Pearson's X2 (parametric, semiparametric and permutation tests);
  • ordinal data:
    • Jonckheere-Terpstra (parametric and permutation tests);
  • continuous data (multivariate normal distribution):
    • linear correlation (parametric, semiparametric and permutation tests);
    • Fisher's Z (parametric, semiparametric and permutation tests);
    • mutual information (parametric, semiparametric and permutation tests);
    • shrinkage-estimator for the mutual information;
  • mixed data (conditional Gaussian distribution):
    • mutual information (parametric, semiparametric);
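
A sketch of how a test is chosen and how parameters are then fitted, again on the bundled learning.test data (the variable names A, B, C are those of that data set; the test and method labels are bnlearn's):

```r
library(bnlearn)
data(learning.test)

# run a single conditional independence test directly:
# is A independent of C given B? (mutual information test)
ci.test("A", "C", "B", data = learning.test, test = "mi")

# pass a specific test to a constraint-based learner, e.g. Pearson's X2
cpdag <- gs(learning.test, test = "x2")

# parameter learning needs a fully directed DAG; hill climbing provides one
dag <- hc(learning.test)
fitted.mle   <- bn.fit(dag, learning.test, method = "mle")    # maximum likelihood
fitted.bayes <- bn.fit(dag, learning.test, method = "bayes")  # Bayesian estimators
```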

Each score-based algorithm can be used with several score functions:

  • categorical data (multinomial distribution):
    • the multinomial log-likelihood;
    • the Akaike Information Criterion (AIC);
    • the Bayesian Information Criterion (BIC);
    • the multinomial predictive log-likelihood;
    • a score equivalent Dirichlet posterior density (BDe);
    • a sparse Dirichlet posterior density (BDs);
    • a Dirichlet posterior density based on Jeffrey's prior (BDJ);
    • a modified Bayesian Dirichlet for mixtures of interventional and observational data;
    • the locally averaged BDe score (BDla);
    • the K2 score;
    • the factorized normalized likelihood score (fNML);
    • the quotient normalized likelihood score (qNML);
  • continuous data (multivariate normal distribution):
    • the multivariate Gaussian log-likelihood;
    • the corresponding Akaike Information Criterion (AIC);
    • the corresponding Bayesian Information Criterion (BIC);
    • the corresponding predictive log-likelihood;
    • a score equivalent Gaussian posterior density (BGe);
  • mixed data (conditional Gaussian distribution):
    • the conditional Gaussian log-likelihood;
    • the corresponding Akaike Information Criterion (AIC);
    • the corresponding Bayesian Information Criterion (BIC);
    • the corresponding predictive log-likelihood.
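
These scores can be queried directly with score() or passed to a learner; a sketch, using the score labels bnlearn itself exposes:

```r
library(bnlearn)
data(learning.test)

dag <- hc(learning.test)                 # defaults to BIC for discrete data

score(dag, learning.test, type = "bic")  # Bayesian Information Criterion
score(dag, learning.test, type = "aic")  # Akaike Information Criterion
score(dag, learning.test, type = "bde",  # score-equivalent Dirichlet posterior,
      iss = 10)                          # with an imaginary sample size of 10

# learn with a different score function, e.g. BDe
dag.bde <- hc(learning.test, score = "bde")
```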

Reference:

概率编程应该有什么高度? (roughly: "How far should probabilistic programming go?")

This article takes part in the Tencent Cloud self-media sync program and is shared from the CreateAMind WeChat public account. Originally published 2022-10-08.
