
[ML Series] One Simple Trick to Tell Which Input Features Matter to a Neural Network!

Author: 量化投资与机器学习 WeChat official account
Published: 2018-10-25 10:40:20
Column: 量化投资与机器学习

Original author: Muhammad Ryan

Compiled by: Peter

Main Text

If you saw this title, you might answer right away:

principal component analysis (PCA), because redundant inputs are useless.

That is not wrong, but it is not today's point. What we want to know is how important each input feature is to the neural network's prediction. For example, suppose we predict who will pass an exam from several predictors such as study time, age, height, and number of absences. Intuitively, the most decisive factor in whether a student passes is study time.

In a simple linear regression we can measure this by looking at the weights of the linear equation, provided, of course, that the predictors X have been normalized (to X') so that they share the same scale.

You can pick either of the two normalization functions to normalize your predictors. To see why the weights alone let us gauge one predictor's importance relative to the others, here is an example. Suppose we have a linear equation.
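The two normalization functions referenced above appeared as images in the original post and did not survive extraction; a common pair of choices is z-score standardization and min-max scaling, sketched below (the function names are mine):

```python
import numpy as np

def zscore(x):
    # standardization: subtract the mean, divide by the standard deviation
    return (x - np.mean(x)) / np.std(x)

def minmax(x):
    # min-max scaling: map values into the [0, 1] interval
    return (x - np.min(x)) / (np.max(x) - np.min(x))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(minmax(x))  # values scaled into [0, 1]
```

The article's own code later uses the min-max form, which is why random replacement values between 0 and 1 make sense there.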

Substitute 5 for every x:

This is the contribution of each term. Intuitively, if a term's contribution is large, then an error in that input produces a large error in the output. For example, when I change x3 from 5 to 1, we get:

If we instead change x2 to 1, we get:

Here we can see that, because the weights differ, a change in the value of x2 affects the output far more than the same change in x3. This is obvious, but the point I want to stress is that, besides reading the weights, we can judge how important an input is from how far the output deviates from its reference value.
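The equations in the original post were images that did not survive extraction; the weights below are hypothetical, but they reproduce the argument: perturbing the heavily weighted input moves the output far more than perturbing a lightly weighted one.

```python
import numpy as np

# hypothetical weights: x2 carries the largest weight (4.0)
w = np.array([1.0, 4.0, 0.5])
x_ref = np.array([5.0, 5.0, 5.0])
y_ref = np.dot(w, x_ref)                    # reference output: 27.5

x_change_x3 = np.array([5.0, 5.0, 1.0])     # x3: 5 -> 1
x_change_x2 = np.array([5.0, 1.0, 5.0])     # x2: 5 -> 1

print(abs(np.dot(w, x_change_x3) - y_ref))  # 2.0  (small deviation)
print(abs(np.dot(w, x_change_x2) - y_ref))  # 16.0 (large deviation)
```

The same change of input value (5 to 1) produces an output deviation eight times larger for x2 than for x3, exactly the ratio of the two weights.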

In a neural network, the input weights are not connected directly to the output layer but to hidden layers, and, unlike linear regression, a neural network is nonlinear. So to gauge an input's significance level we use the second measure identified above: how far the network's output deviates when we randomly change that input's value. The reference value we use here is the original error value; I call it "original" because it is the error computed before any input has been perturbed.

Let's look at real data and a real neural network: predicting students' performance on an exam.

Data download: https://archive.ics.uci.edu/ml/datasets/student+performance

Here is a step-by-step implementation of input significance levels in a neural network:

1. Build, train, and save the neural network with the code below. After training, we do not use the model directly for prediction; instead we save the trained model to a file. Why? Because we need one fixed model (remember, every training run yields different weights and biases) against which to compute each input's significance level.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# settings
datafile = 'studentperform.csv'
studentmodel = 'studentmodel.h5'
batch_size = 10
hidden_neuron = 10
trainsize = 900
iterasi = 200

def generatemodel(totvar):
    # create the feed-forward network
    model = Sequential()
    model.add(Dense(3, batch_input_shape=(batch_size, totvar), activation='sigmoid'))
    model.add(Dense(hidden_neuron, activation='sigmoid'))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

# read data (skip the header row)
alldata = np.genfromtxt(datafile, delimiter=',')[1:]

# split into training and test sets
trainparam = alldata[:trainsize, :-1]
trainlabel = alldata[:trainsize, -1]
testparam = alldata[trainsize:, :-1]
testlabel = alldata[trainsize:, -1]

# trim so the sample count is a multiple of batch_size
# (required because the model fixes batch_size in batch_input_shape)
trainparam = trainparam[len(trainparam) % batch_size:]
trainlabel = trainlabel[len(trainlabel) % batch_size:]
testparam = testparam[len(testparam) % batch_size:]
testlabel = testlabel[len(testlabel) % batch_size:]

###############
#normalization#
###############
trainparamnorm = np.zeros(np.shape(trainparam))
testparamnorm = np.zeros(np.shape(testparam))

# min-max normalize every column except the last two
for i in range(len(trainparam[0]) - 2):
    trainparamnorm[:, i] = (trainparam[:, i] - np.min(trainparam[:, i])) / (np.max(trainparam[:, i]) - np.min(trainparam[:, i]))
    testparamnorm[:, i] = (testparam[:, i] - np.min(trainparam[:, i])) / (np.max(trainparam[:, i]) - np.min(trainparam[:, i]))
# the last two columns (grades) are on a fixed 0-20 scale
for i in range(2):
    trainparamnorm[:, -2 + i] = (trainparam[:, -2 + i] - 0.0) / (20.0 - 0.0)
    testparamnorm[:, -2 + i] = (testparam[:, -2 + i] - 0.0) / (20.0 - 0.0)

# normalize the label (final grade)
trainlabelnorm = (trainlabel - np.min(trainlabel)) / (np.max(trainlabel) - np.min(trainlabel))
testlabelnorm = (testlabel - np.min(trainlabel)) / (np.max(trainlabel) - np.min(trainlabel))
print('label shape is', np.shape(testlabelnorm))

######################
#build and save model#
######################
mod = generatemodel(len(trainparamnorm[0]))
mod.fit(trainparamnorm, trainlabelnorm, epochs=iterasi, batch_size=batch_size, verbose=2, shuffle=True)
# save the trained model so the later scripts reuse identical weights
mod.save(studentmodel)

2. Load the model and compute its error:

import numpy as np
from keras.models import load_model
from sklearn.metrics import mean_squared_error

# settings
datafile = 'studentperform.csv'
studentmodel = 'studentmodel.h5'
batch_size = 10
trainsize = 900

# read data (skip the header row)
alldata = np.genfromtxt(datafile, delimiter=',')[1:]

# split into training and test sets
trainparam = alldata[:trainsize, :-1]
trainlabel = alldata[:trainsize, -1]
testparam = alldata[trainsize:, :-1]
testlabel = alldata[trainsize:, -1]

trainparam = trainparam[len(trainparam) % batch_size:]
trainlabel = trainlabel[len(trainlabel) % batch_size:]
testparam = testparam[len(testparam) % batch_size:]
testlabel = testlabel[len(testlabel) % batch_size:]

###############
#normalization#
###############
trainparamnorm = np.zeros(np.shape(trainparam)).astype('float32')
testparamnorm = np.zeros(np.shape(testparam)).astype('float32')

# same normalization as in the training script
for i in range(len(trainparam[0]) - 2):
    trainparamnorm[:, i] = (trainparam[:, i] - np.min(trainparam[:, i])) / (np.max(trainparam[:, i]) - np.min(trainparam[:, i]))
    testparamnorm[:, i] = (testparam[:, i] - np.min(trainparam[:, i])) / (np.max(trainparam[:, i]) - np.min(trainparam[:, i]))
for i in range(2):
    trainparamnorm[:, -2 + i] = (trainparam[:, -2 + i] - 0.0) / (20.0 - 0.0)
    testparamnorm[:, -2 + i] = (testparam[:, -2 + i] - 0.0) / (20.0 - 0.0)

# load the trained model and measure its error on the test set
mod = load_model(studentmodel)
G3pred = mod.predict(testparamnorm, batch_size=batch_size)
G3real = G3pred * 20.0  # map the normalized prediction back to the 0-20 grade scale
err = mean_squared_error(testlabel, G3real)
print('our error value is', err)

On my machine, the error is 3.44525143751. This is the initial (reference) error.

3. Randomly change each input value in turn. We generate random numbers between 0 and 1, substitute them for one normalized input column of the test set, and immediately feed the modified inputs to the neural network we just loaded. Why random values between 0 and 1? Because above we used the second normalization function (min-max, using the maximum and minimum) to normalize our inputs. We iterate over every normalized input, randomly changing its values again and again to collect many samples, from which we get the mean and standard deviation of the error. This washes out chance effects (remember, the values are generated randomly).

import random
import numpy as np
from keras.models import load_model
from sklearn.metrics import mean_squared_error

# settings
datafile = 'studentperform.csv'
studentmodel = 'studentmodel.h5'
batch_size = 10
trainsize = 900
randsample = 100  # number of random repetitions per input feature

# read data (skip the header row)
alldata = np.genfromtxt(datafile, delimiter=',')[1:]

# split into training and test sets
trainparam = alldata[:trainsize, :-1]
trainlabel = alldata[:trainsize, -1]
testparam = alldata[trainsize:, :-1]
testlabel = alldata[trainsize:, -1]

trainparam = trainparam[len(trainparam) % batch_size:]
trainlabel = trainlabel[len(trainlabel) % batch_size:]
testparam = testparam[len(testparam) % batch_size:]
testlabel = testlabel[len(testlabel) % batch_size:]

###############
#normalization#
###############
trainparamnorm = np.zeros(np.shape(trainparam)).astype('float32')
testparamnorm = np.zeros(np.shape(testparam)).astype('float32')

# same normalization as in the training script
for i in range(len(trainparam[0]) - 2):
    trainparamnorm[:, i] = (trainparam[:, i] - np.min(trainparam[:, i])) / (np.max(trainparam[:, i]) - np.min(trainparam[:, i]))
    testparamnorm[:, i] = (testparam[:, i] - np.min(trainparam[:, i])) / (np.max(trainparam[:, i]) - np.min(trainparam[:, i]))
for i in range(2):
    trainparamnorm[:, -2 + i] = (trainparam[:, -2 + i] - 0.0) / (20.0 - 0.0)
    testparamnorm[:, -2 + i] = (testparam[:, -2 + i] - 0.0) / (20.0 - 0.0)
testlabelnorm = (testlabel - np.min(trainlabel)) / (np.max(trainlabel) - np.min(trainlabel))

# load the trained model and compute the reference ("original") error
mod = load_model(studentmodel)
G3pred = mod.predict(testparamnorm, batch_size=batch_size)
G3real = G3pred * 20.0
errreal = mean_squared_error(testlabel, G3real)
print('our error value is', errreal)

################################
#permutation importance session#
################################
permutsample = np.zeros((randsample, len(testparamnorm[0])))
for trying in range(randsample):
    # one random vector per repetition, reused for every feature column
    randval = np.zeros(len(testlabelnorm))
    for i in range(len(testlabelnorm)):
        randval[i] = random.uniform(0, 1)

    for i in range(len(testparamnorm[0])):
        # replace one input column with the random values, keep the rest intact
        permutinput = np.copy(testparamnorm)
        permutinput[:, i] = randval
        G3pred = mod.predict(permutinput, batch_size=batch_size)
        G3real = G3pred * 20.0
        permutsample[trying, i] = mean_squared_error(testlabel, G3real)

print(permutsample)

# mean and standard deviation of the error for each input
errperformance = np.zeros((len(testparamnorm[0]), 2))
for i in range(len(testparamnorm[0])):
    errperformance[i, 0] = np.mean(permutsample[:, i])
    errperformance[i, 1] = np.std(permutsample[:, i])
errperformance[:, 0] = errreal - errperformance[:, 0]

print(errperformance)

Sample output:

4. Interpret the results. We get some interesting findings. First, in the second row the error changes little when that input is randomized: the parameter "traveltime" has essentially no influence on the students' final exam grade. In the last row (G2) we get a very high error, which means the second-period grade is highly correlated with the final grade. From this we obtain the significance ranking of the inputs: G2, G1, failures, freetime, absences, studytime, and traveltime. Another interesting result is that study time has no clear effect on the final exam grade. This is quite counter-intuitive and, in a real-world study, would have to be investigated further.
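To read such a ranking off the errperformance array from step 3, one can sort the per-feature mean errors in descending order. The feature names and error values below are illustrative stand-ins, not the script's actual output:

```python
import numpy as np

# illustrative stand-in values, NOT actual output from the scripts above
feature_names = ['studytime', 'traveltime', 'failures', 'freetime',
                 'absences', 'G1', 'G2']
mean_err = np.array([3.5, 3.45, 4.6, 3.9, 3.8, 7.2, 15.0])

# larger error after randomizing an input = more important input
order = np.argsort(mean_err)[::-1]
for i in order:
    print(feature_names[i], mean_err[i])
```

With these stand-in numbers the ranking comes out as G2, G1, failures, freetime, absences, studytime, traveltime, matching the order reported in the text.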

That is a simple way to measure the significance level of a neural network's inputs. The technique is not limited to neural networks: it can equally be applied to other machine learning algorithms such as support vector machines and random forests.
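In fact, this model-agnostic idea ships ready-made in scikit-learn (0.22+) as sklearn.inspection.permutation_importance, which works with any fitted estimator. The synthetic data and RandomForestRegressor below are a stand-in, not the article's setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 3))
# feature 0 dominates the target; features 1 and 2 barely matter
y = 5.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(random_state=0).fit(X, y)
# shuffle each column n_repeats times and record the score drop
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # feature 0 should score highest
```

Note that scikit-learn permutes (shuffles) the real column values rather than drawing fresh uniform values, which avoids feeding the model inputs outside the data distribution; otherwise the logic is the same as in the article.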

Source: https://medium.com/datadriveninvestor/a-simple-way-to-know-how-important-your-input-is-in-neural-network-86cbae0d3689

Originally published 2018-10-22 by the 量化投资与机器学习 WeChat official account.
