
MXNet | Using MXNet in R

努力在北京混出人样
Published 2019-02-18 15:51:12
From the column: 祥子的故事

Amazon has designated MXNet as its official deep learning platform, and on January 23 MXNet became an Apache incubator project.

These moves have undoubtedly pushed MXNet into the deep learning spotlight and made it a much-watched project, so it is well worth learning. Onward with deep learning!

MXNet currently supports the following languages:

  • Python
  • R
  • C++
  • Julia
  • Scala

This post covers installation and basic usage in R:

Installation

install.packages("drat", repos="https://cran.rstudio.com")
drat:::addRepo("dmlc")
install.packages("mxnet")
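Once installed, a quick way to confirm that the package loads is a tiny NDArray computation (a minimal smoke test, assuming the install above succeeded):

```r
# load mxnet and run a small NDArray computation as a smoke test
library(mxnet)
a <- mx.nd.ones(c(2, 3))   # a 2x3 array filled with ones
b <- a * 2 + 1             # element-wise arithmetic on the NDArray
as.array(b)                # back to a plain R matrix; every entry is 1*2+1 = 3
```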

If you run into problems during installation, you can instead download drat as a local file ("drat.zip") from https://cran.r-project.org/web/packages/drat/ and install it from the downloaded package.

(Figure: the drat package download page)
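Installing from the downloaded file looks like this (the file name below is illustrative; use the version you actually downloaded):

```r
# install drat from a locally downloaded source archive instead of a repository
install.packages("drat_0.2.4.tar.gz", repos = NULL, type = "source")
# then add the dmlc repository and install mxnet as before
drat:::addRepo("dmlc")
install.packages("mxnet")
```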

Classification

Below we use a binary classification dataset as an example:

> require(mlbench)
> require(mxnet)
> data(Sonar, package="mlbench")
> str(Sonar)
'data.frame':   208 obs. of  61 variables:
 $ V1   : num  0.02 0.0453 0.0262 0.01 0.0762 0.0286 0.0317 0.0519 0.0223 0.0164 ...
 $ V2   : num  0.0371 0.0523 0.0582 0.0171 0.0666 0.0453 0.0956 0.0548 0.0375 0.0173 ...
 $ V3   : num  0.0428 0.0843 0.1099 0.0623 0.0481 ...
 $ V4   : num  0.0207 0.0689 0.1083 0.0205 0.0394 ...
 $ V5   : num  0.0954 0.1183 0.0974 0.0205 0.059 ...
 $ V6   : num  0.0986 0.2583 0.228 0.0368 0.0649 ...
 $ V7   : num  0.154 0.216 0.243 0.11 0.121 ...
 $ V8   : num  0.16 0.348 0.377 0.128 0.247 ...
 $ V9   : num  0.3109 0.3337 0.5598 0.0598 0.3564 ...
 $ V10  : num  0.211 0.287 0.619 0.126 0.446 ...
 $ V11  : num  0.1609 0.4918 0.6333 0.0881 0.4152 ...
 $ V12  : num  0.158 0.655 0.706 0.199 0.395 ...
 $ V13  : num  0.2238 0.6919 0.5544 0.0184 0.4256 ...
 $ V14  : num  0.0645 0.7797 0.532 0.2261 0.4135 ...
 $ V15  : num  0.066 0.746 0.648 0.173 0.453 ...
 $ V16  : num  0.227 0.944 0.693 0.213 0.533 ...
 $ V17  : num  0.31 1 0.6759 0.0693 0.7306 ...
 $ V18  : num  0.3 0.887 0.755 0.228 0.619 ...
 $ V19  : num  0.508 0.802 0.893 0.406 0.203 ...
 $ V20  : num  0.48 0.782 0.862 0.397 0.464 ...
 $ V21  : num  0.578 0.521 0.797 0.274 0.415 ...
 $ V22  : num  0.507 0.405 0.674 0.369 0.429 ...
 $ V23  : num  0.433 0.396 0.429 0.556 0.573 ...
 $ V24  : num  0.555 0.391 0.365 0.485 0.54 ...
 $ V25  : num  0.671 0.325 0.533 0.314 0.316 ...
 $ V26  : num  0.641 0.32 0.241 0.533 0.229 ...
 $ V27  : num  0.71 0.327 0.507 0.526 0.7 ...
 $ V28  : num  0.808 0.277 0.853 0.252 1 ...
 $ V29  : num  0.679 0.442 0.604 0.209 0.726 ...
 $ V30  : num  0.386 0.203 0.851 0.356 0.472 ...
 $ V31  : num  0.131 0.379 0.851 0.626 0.51 ...
 $ V32  : num  0.26 0.295 0.504 0.734 0.546 ...
 $ V33  : num  0.512 0.198 0.186 0.612 0.288 ...
 $ V34  : num  0.7547 0.2341 0.2709 0.3497 0.0981 ...
 $ V35  : num  0.854 0.131 0.423 0.395 0.195 ...
 $ V36  : num  0.851 0.418 0.304 0.301 0.418 ...
 $ V37  : num  0.669 0.384 0.612 0.541 0.46 ...
 $ V38  : num  0.61 0.106 0.676 0.881 0.322 ...
 $ V39  : num  0.494 0.184 0.537 0.986 0.283 ...
 $ V40  : num  0.274 0.197 0.472 0.917 0.243 ...
 $ V41  : num  0.051 0.167 0.465 0.612 0.198 ...
 $ V42  : num  0.2834 0.0583 0.2587 0.5006 0.2444 ...
 $ V43  : num  0.282 0.14 0.213 0.321 0.185 ...
 $ V44  : num  0.4256 0.1628 0.2222 0.3202 0.0841 ...
 $ V45  : num  0.2641 0.0621 0.2111 0.4295 0.0692 ...
 $ V46  : num  0.1386 0.0203 0.0176 0.3654 0.0528 ...
 $ V47  : num  0.1051 0.053 0.1348 0.2655 0.0357 ...
 $ V48  : num  0.1343 0.0742 0.0744 0.1576 0.0085 ...
 $ V49  : num  0.0383 0.0409 0.013 0.0681 0.023 0.0264 0.0507 0.0285 0.0777 0.0092 ...
 $ V50  : num  0.0324 0.0061 0.0106 0.0294 0.0046 0.0081 0.0159 0.0178 0.0439 0.0198 ...
 $ V51  : num  0.0232 0.0125 0.0033 0.0241 0.0156 0.0104 0.0195 0.0052 0.0061 0.0118 ...
 $ V52  : num  0.0027 0.0084 0.0232 0.0121 0.0031 0.0045 0.0201 0.0081 0.0145 0.009 ...
 $ V53  : num  0.0065 0.0089 0.0166 0.0036 0.0054 0.0014 0.0248 0.012 0.0128 0.0223 ...
 $ V54  : num  0.0159 0.0048 0.0095 0.015 0.0105 0.0038 0.0131 0.0045 0.0145 0.0179 ...
 $ V55  : num  0.0072 0.0094 0.018 0.0085 0.011 0.0013 0.007 0.0121 0.0058 0.0084 ...
 $ V56  : num  0.0167 0.0191 0.0244 0.0073 0.0015 0.0089 0.0138 0.0097 0.0049 0.0068 ...
 $ V57  : num  0.018 0.014 0.0316 0.005 0.0072 0.0057 0.0092 0.0085 0.0065 0.0032 ...
 $ V58  : num  0.0084 0.0049 0.0164 0.0044 0.0048 0.0027 0.0143 0.0047 0.0093 0.0035 ...
 $ V59  : num  0.009 0.0052 0.0095 0.004 0.0107 0.0051 0.0036 0.0048 0.0059 0.0056 ...
 $ V60  : num  0.0032 0.0044 0.0078 0.0117 0.0094 0.0062 0.0103 0.0053 0.0022 0.004 ...
 $ Class: num  1 1 1 1 1 1 1 1 1 1 ...
> dim(Sonar)
[1] 208  61

> Sonar[,61] = as.numeric(Sonar[,61])-1
> train.ind = c(1:50, 100:150)
> train.x = data.matrix(Sonar[train.ind, 1:60])
> train.y = Sonar[train.ind, 61]
> test.x = data.matrix(Sonar[-train.ind, 1:60])
> test.y = Sonar[-train.ind, 61]
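The fixed indices above happen to cover both classes because Sonar's rows are grouped by class; a random split is often safer. A minimal sketch (the seed value is arbitrary):

```r
# reproducible random train/test split of the 208 Sonar rows
require(mlbench)
data(Sonar, package = "mlbench")
Sonar[, 61] <- as.numeric(Sonar[, 61]) - 1    # recode the factor labels as 0/1
set.seed(2017)                                # make the split reproducible
train.ind <- sample(nrow(Sonar), size = 104)  # random half for training
train.x <- data.matrix(Sonar[train.ind, 1:60])
train.y <- Sonar[train.ind, 61]
test.x  <- data.matrix(Sonar[-train.ind, 1:60])
test.y  <- Sonar[-train.ind, 61]
```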

We use the mx.mlp function to train a multilayer perceptron; its parameters are as follows:

mx.mlp(data, label, hidden_node = 1, out_node, dropout = NULL,
  activation = "tanh", out_activation = "softmax",
  device = mx.ctx.default(), ...)

  • data: the training data
  • label: the labels
  • hidden_node: number of nodes in each hidden layer (default 1)
  • out_node: number of nodes in the output layer
  • dropout: the dropout rate, in [0, 1)
  • activation: the activation function for the hidden layers
  • out_activation: the activation function for the output layer (default "softmax")
  • device = mx.ctx.default(): whether to train on the CPU or the GPU
  • ...: further arguments passed to mx.model.FeedForward.create, such as eval.metric, the evaluation metric

# set the random seed
> mx.set.seed(0)
# train the model; num.round is the number of iterations, array.batch.size the batch size
> model <- mx.mlp(train.x, train.y, hidden_node=10, out_node=2, out_activation="softmax", num.round=20, array.batch.size=15, learning.rate=0.07, momentum=0.9, eval.metric=mx.metric.accuracy)

Start training with 1 devices
[1] Train-accuracy=0.488888888888889
[2] Train-accuracy=0.514285714285714
[3] Train-accuracy=0.514285714285714
[4] Train-accuracy=0.514285714285714
[5] Train-accuracy=0.514285714285714
[6] Train-accuracy=0.523809523809524
[7] Train-accuracy=0.619047619047619
[8] Train-accuracy=0.695238095238095
[9] Train-accuracy=0.695238095238095
[10] Train-accuracy=0.761904761904762
[11] Train-accuracy=0.828571428571429
[12] Train-accuracy=0.771428571428571
[13] Train-accuracy=0.742857142857143
[14] Train-accuracy=0.733333333333333
[15] Train-accuracy=0.771428571428571
[16] Train-accuracy=0.847619047619048
[17] Train-accuracy=0.857142857142857
[18] Train-accuracy=0.838095238095238
[19] Train-accuracy=0.838095238095238
[20] Train-accuracy=0.838095238095238

> preds = predict(model, test.x)
> pred.label = max.col(t(preds))-1
> table(pred.label, test.y)
          test.y
pred.label  0  1
         0 24 14
         1 36 33
> (24+33) / (24+14+36+33)
[1] 0.5327103

The accuracy on the test set is 0.5327103. The complete code:
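Rather than typing the confusion-matrix entries by hand, the accuracy can be computed directly (the matrix below is rebuilt from the values printed above; once the predictions exist, `mean(pred.label == test.y)` gives the same number):

```r
# confusion matrix from the output above: rows = predicted label, cols = true label
tab <- matrix(c(24, 36, 14, 33), nrow = 2,
              dimnames = list(pred.label = 0:1, test.y = 0:1))
accuracy <- sum(diag(tab)) / sum(tab)  # correct predictions / total
round(accuracy, 7)                     # 0.5327103
```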

########## Classification
require(mlbench)
require(mxnet)
data(Sonar, package="mlbench")
Sonar[,61] = as.numeric(Sonar[,61])-1
train.ind = c(1:50, 100:150)
train.x = data.matrix(Sonar[train.ind, 1:60])
train.y = Sonar[train.ind, 61]
test.x = data.matrix(Sonar[-train.ind, 1:60])
test.y = Sonar[-train.ind, 61]


mx.set.seed(0)
model <- mx.mlp(train.x, train.y, hidden_node=10, out_node=2,out_activation="softmax", num.round=20, array.batch.size=15, learning.rate=0.07, momentum=0.9, eval.metric=mx.metric.accuracy)


preds = predict(model, test.x)

pred.label = max.col(t(preds))-1
table(pred.label, test.y)

(24+33) / (24+14+36+33)

Regression

Next, a regression example:

About the data

data(BostonHousing, package="mlbench")

> str(BostonHousing)
'data.frame':   506 obs. of  14 variables:
 $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
 $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
 $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
 $ chas   : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
 $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
 $ rm     : num  6.58 6.42 7.18 7 7.15 ...
 $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
 $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
 $ rad    : num  1 2 2 3 3 3 5 5 5 5 ...
 $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
 $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
 $ b      : num  397 397 393 395 397 ...
 $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
 $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
> dim(BostonHousing)
[1] 506  14

Split the data into a training set and a test set:

> train.ind = seq(1, 506, 3)
> train.x = data.matrix(BostonHousing[train.ind, -14])
> train.y = BostonHousing[train.ind, 14]
> test.x = data.matrix(BostonHousing[-train.ind, -14])
> test.y = BostonHousing[-train.ind, 14]

Define how the network's nodes are connected:

> # define the input data
> data <- mx.symbol.Variable("data")
> # a fully connected layer
> # data: the input
> # num_hidden: the number of nodes in this layer
> fc1 <- mx.symbol.FullyConnected(data, num_hidden=1)

Define the loss function

# define the loss function for the regression task
> lro <- mx.symbol.LinearRegressionOutput(fc1)
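A single FullyConnected layer with one node is plain linear regression; the same symbol API composes deeper networks by chaining layers. A sketch (the layer sizes and activation are illustrative, not from the original post):

```r
library(mxnet)
data <- mx.symbol.Variable("data")
# a hidden layer with 10 nodes and a tanh activation
fc1  <- mx.symbol.FullyConnected(data, num_hidden = 10)
act1 <- mx.symbol.Activation(fc1, act_type = "tanh")
# a single output node feeding the regression loss
fc2  <- mx.symbol.FullyConnected(act1, num_hidden = 1)
lro  <- mx.symbol.LinearRegressionOutput(fc2)
```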

Training the model

> mx.set.seed(0)
# define and train the model: CPU training, 50 rounds, batch size 20, learning rate 2e-6, RMSE as the evaluation metric
> model <- mx.model.FeedForward.create(lro, X=train.x, y=train.y, ctx=mx.cpu(), num.round=50, array.batch.size=20, learning.rate=2e-6, momentum=0.9, eval.metric=mx.metric.rmse)


Start training with 1 devices
[1] Train-rmse=16.063282524034
[2] Train-rmse=12.2792375712573
[3] Train-rmse=11.1984634005885
[4] Train-rmse=10.2645236892904
[5] Train-rmse=9.49711005504284
[6] Train-rmse=9.07733734175182
[7] Train-rmse=9.07884450847991
[8] Train-rmse=9.10463850277417
[9] Train-rmse=9.03977049028532
[10] Train-rmse=8.96870685004475
[11] Train-rmse=8.93113287361574
[12] Train-rmse=8.89937257821847
[13] Train-rmse=8.87182096922953
[14] Train-rmse=8.84476075083586
[15] Train-rmse=8.81464673014974
[16] Train-rmse=8.78672567900196
[17] Train-rmse=8.76265872846474
[18] Train-rmse=8.73946101419974
[19] Train-rmse=8.71651926303267
[20] Train-rmse=8.69457600919277
[21] Train-rmse=8.67354928674563
[22] Train-rmse=8.65328755392436
[23] Train-rmse=8.63378039680078
[24] Train-rmse=8.61488162586984
[25] Train-rmse=8.5965105183022
[26] Train-rmse=8.57868133563275
[27] Train-rmse=8.56135851937663
[28] Train-rmse=8.5444819772098
[29] Train-rmse=8.52802114610432
[30] Train-rmse=8.5119504512622
[31] Train-rmse=8.49624261719241
[32] Train-rmse=8.48087453238701
[33] Train-rmse=8.46582689119887
[34] Train-rmse=8.45107881002491
[35] Train-rmse=8.43661331401712
[36] Train-rmse=8.42241575909639
[37] Train-rmse=8.40847217331365
[38] Train-rmse=8.39476931796395
[39] Train-rmse=8.38129658373974
[40] Train-rmse=8.36804269059018
[41] Train-rmse=8.35499817678397
[42] Train-rmse=8.34215505742154
[43] Train-rmse=8.32950441908131
[44] Train-rmse=8.31703985777311
[45] Train-rmse=8.30475363906755
[46] Train-rmse=8.29264031506106
[47] Train-rmse=8.28069372820073
[48] Train-rmse=8.26890902770415
[49] Train-rmse=8.25728089053853
[50] Train-rmse=8.24580511500735

Alternatively, define a custom evaluation metric:

# define a custom evaluation metric: mean absolute error (MAE)
> demo.metric.mae <- mx.metric.custom("mae", function(label, pred) {
  res <- mean(abs(label-pred))
  return(res)
})
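The same mx.metric.custom mechanism accepts any function of (label, pred); for instance, a mean absolute percentage error could be defined the same way (an illustrative sketch, not from the original post):

```r
library(mxnet)
# custom metric: mean absolute percentage error (assumes no zero labels)
demo.metric.mape <- mx.metric.custom("mape", function(label, pred) {
  res <- mean(abs((label - pred) / label)) * 100
  return(res)
})
```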

Model results

> mx.set.seed(0)
> model <- mx.model.FeedForward.create(lro, X=train.x, y=train.y, ctx=mx.cpu(), num.round=50, array.batch.size=20, learning.rate=2e-6, momentum=0.9, eval.metric=demo.metric.mae)
Start training with 1 devices
[1] Train-mae=13.1889538083225
[2] Train-mae=9.81431959337658
[3] Train-mae=9.21576419870059
[4] Train-mae=8.38071537613869
[5] Train-mae=7.45462437611487
[6] Train-mae=6.93423301743136
[7] Train-mae=6.91432357016537
[8] Train-mae=7.02742733055105
[9] Train-mae=7.00618194618469
[10] Train-mae=6.92541576984028
[11] Train-mae=6.87530243690643
[12] Train-mae=6.84757369098564
[13] Train-mae=6.82966501611388
[14] Train-mae=6.81151759574811
[15] Train-mae=6.78394182841811
[16] Train-mae=6.75914719419347
[17] Train-mae=6.74180388773481
[18] Train-mae=6.725853071279
[19] Train-mae=6.70932178215848
[20] Train-mae=6.6928868798746
[21] Train-mae=6.6769521329138
[22] Train-mae=6.66184809505939
[23] Train-mae=6.64754504809777
[24] Train-mae=6.63358514060577
[25] Train-mae=6.62027640889088
[26] Train-mae=6.60738245232238
[27] Train-mae=6.59505546771818
[28] Train-mae=6.58346195800437
[29] Train-mae=6.57285477783945
[30] Train-mae=6.56259003960424
[31] Train-mae=6.5527790788975
[32] Train-mae=6.54353428422991
[33] Train-mae=6.5344172368447
[34] Train-mae=6.52557652526432
[35] Train-mae=6.51697905850079
[36] Train-mae=6.50847898812758
[37] Train-mae=6.50014844106303
[38] Train-mae=6.49207674844397
[39] Train-mae=6.48412070125341
[40] Train-mae=6.47650500999557
[41] Train-mae=6.46893867486053
[42] Train-mae=6.46142131653097
[43] Train-mae=6.45395035048326
[44] Train-mae=6.44652914123403
[45] Train-mae=6.43916216409869
[46] Train-mae=6.43183777381976
[47] Train-mae=6.42455544223388
[48] Train-mae=6.41731406417158
[49] Train-mae=6.41011292926139
[50] Train-mae=6.40312503493494

The complete code:

######## Regression
data(BostonHousing, package="mlbench")
str(BostonHousing)
dim(BostonHousing)
train.ind = seq(1, 506, 3)
train.x = data.matrix(BostonHousing[train.ind, -14])
train.y = BostonHousing[train.ind, 14]
test.x = data.matrix(BostonHousing[-train.ind, -14])
test.y = BostonHousing[-train.ind, 14]


# define the input data
data <- mx.symbol.Variable("data")
# a fully connected layer
# data: the input
# num_hidden: the number of nodes in this layer
fc1 <- mx.symbol.FullyConnected(data, num_hidden=1)

# define the loss function for the regression task
lro <- mx.symbol.LinearRegressionOutput(fc1)



mx.set.seed(0)
model <- mx.model.FeedForward.create(lro, X=train.x, y=train.y, ctx=mx.cpu(), num.round=50, array.batch.size=20, learning.rate=2e-6, momentum=0.9, eval.metric=mx.metric.rmse)



demo.metric.mae <- mx.metric.custom("mae", function(label, pred) {
  res <- mean(abs(label-pred))
  return(res)
})
mx.set.seed(0)
model <- mx.model.FeedForward.create(lro, X=train.x, y=train.y, ctx=mx.cpu(), num.round=50, array.batch.size=20, learning.rate=2e-6, momentum=0.9, eval.metric=demo.metric.mae)
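The post stops at training; a natural next step is to evaluate the fitted model on the held-out test set. A sketch, continuing from the `model`, `test.x`, and `test.y` objects created above:

```r
# evaluate the trained regression model on the held-out test set
preds <- predict(model, test.x)                  # out_node x n matrix of predictions
rmse  <- sqrt(mean((test.y - as.vector(preds))^2))  # root mean squared error
mae   <- mean(abs(test.y - as.vector(preds)))       # mean absolute error
rmse; mae
```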

Summary: the code is direct, concise, and clear, and it is easy to get started with.

Originally published January 25, 2017, on the author's personal site/blog; shared through the Tencent Cloud self-media sharing program.