
A Structured Machine Learning Workflow

Author: 吕海峰 | Published 2018-08-08 | From the column: Brian

Overview

Machine learning can be organized as a structured workflow: 1. Problem definition and requirements analysis -> 2. Data exploration -> 3. Data preparation -> 4. Algorithm evaluation -> 5. Model optimization -> 6. Deployment. The checklist below expands these stages; a compact end-to-end sketch follows the list.

  • Import libraries
  • Load the dataset
  • Statistical data analysis
  • Data visualization
  • Data cleaning
  • Feature selection
  • Data transformation
  • Split the dataset
  • Define model evaluation metrics
  • Spot-check algorithms
  • Compare algorithms
  • Tune algorithm parameters
  • Ensemble methods
  • Predict on the validation dataset
  • Train the final model on the full data
  • Serialize the model
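
To make the checklist concrete, here is a minimal end-to-end sketch in scikit-learn. The file pima_data.csv and its column names follow the classification examples later in this article, and the choice of StandardScaler plus LogisticRegression is only a placeholder:

# A compact end-to-end sketch of the workflow above
# (pima_data.csv and its columns follow the examples later in this article;
#  StandardScaler + LogisticRegression are placeholder choices)
import pickle
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv('pima_data.csv', names=names)              # import the dataset
X, Y = data.values[:, 0:8], data.values[:, 8]              # inputs vs. target
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=7)

model = Pipeline([('scaler', StandardScaler()),            # data transformation
                  ('clf', LogisticRegression())])          # candidate algorithm
model.fit(X_train, Y_train)                                # train the model
print(accuracy_score(Y_test, model.predict(X_test)))       # evaluate on held-out data
with open('model.pkl', 'wb') as f:                         # serialize the model
    pickle.dump(model, f)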

Data Understanding

Understanding the data mainly means analyzing its dimensions, attribute types, distributions, and correlations. The correlation between two attributes measures how strongly they influence each other.
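
As a toy illustration (made-up numbers), the Pearson coefficient of two pandas Series shows what the extreme values mean:

# Toy illustration of attribute correlation (made-up numbers)
from pandas import Series
a = Series([1, 2, 3, 4, 5])
b = Series([2, 4, 6, 8, 10])    # b moves exactly with a
c = Series([5, 4, 3, 2, 1])     # c moves exactly against a
print(a.corr(b))                # 1.0  -> perfect positive correlation
print(a.corr(c))                # -1.0 -> perfect negative correlation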

Statistical Data Analysis

# Load the data and inspect its dimensions (filename is assumed to point at the CSV file)
from pandas import read_csv
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
print(data.shape)
# Attribute types and descriptive statistics
print(data.dtypes)
print(data.describe())
# Class distribution
print(data.groupby('class').size())
# Attribute correlations (Pearson: 1 = perfect positive, -1 = perfect negative, 0 = none)
print(data.corr(method='pearson'))
# Distribution analysis: skew measures deviation from a Gaussian; values near 0 mean little skew
print(data.skew())

Data Visualization

Data visualization focuses mainly on histograms, density plots, and box plots.

from matplotlib import pyplot
import numpy as np
from pandas.plotting import scatter_matrix

# Histograms: does each attribute look exponential or Gaussian?
data.hist()
pyplot.show()
# Density plots: a smoother view of each attribute's distribution, usually for continuous variables
data.plot(kind='density', subplots=True, layout=(3, 3), sharex=False)
pyplot.show()
# Box plots
data.plot(kind='box', subplots=True, layout=(3, 3), sharex=False)
pyplot.show()
# Correlation matrix plot
correlations = data.corr()
fig = pyplot.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(correlations, vmin=-1, vmax=1)
fig.colorbar(cax)
ticks = np.arange(0, 9, 1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(names)
ax.set_yticklabels(names)
pyplot.show()
# Scatter-plot matrix
scatter_matrix(data)
pyplot.show()

Data Preprocessing

Data preprocessing covers rescaling, standardizing, normalizing, and binarizing the data. Rescaling with MinMaxScaler maps attributes measured in different units onto a common scale.

from sklearn.preprocessing import MinMaxScaler
transformer = MinMaxScaler(feature_range=(0, 1))
newx = transformer.fit_transform(x)

Standardizing the data transforms the input to resemble a Gaussian distribution with mean 0 and variance 1; it is the assumed input for algorithms that expect Gaussian-distributed data.

from sklearn.preprocessing import StandardScaler
transformer = StandardScaler().fit(x)
newx = transformer.transform(x)

Normalizing the data rescales each row (each sample) to unit length; it is well suited to sparse data.

from sklearn.preprocessing import Normalizer
transformer = Normalizer().fit(x)
newx = transformer.transform(x)

Binarizing the data maps values to 0 or 1 according to a threshold: values above the threshold become 1 and the rest become 0.

from sklearn.preprocessing import Binarizer
transformer = Binarizer(threshold=0.5).fit(x)
newx = transformer.transform(x)

Feature Selection

Feature selection helps reduce overfitting, improve accuracy, and shorten training time. It keeps the features with the greatest influence on the outcome. scikit-learn implements it via the chi-squared test, which measures how far the observed sample values deviate from the values expected under the null hypothesis: the larger the chi-squared statistic, the worse the fit to independence (and the more informative the feature); the smaller the statistic, the smaller the deviation.

from sklearn.feature_selection import chi2
from sklearn.feature_selection import SelectKBest
# Keep the 4 features with the highest chi-squared scores (chi2 requires non-negative features)
test = SelectKBest(score_func=chi2, k=4)
fit = test.fit(x, y)
print(fit.scores_)
features = fit.transform(x)
print(features[:2, :])

Another method is RFE (recursive feature elimination): it trains a base model over multiple rounds, removing the lowest-weighted features after each round and retraining on the remaining feature set. Comparing the base model's accuracy across rounds identifies the features with the greatest influence on the final prediction.

from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
rfe = RFE(model, n_features_to_select=3)
fit = rfe.fit(x, y)
print('Number of features:', fit.n_features_)
print('Selected features:', fit.support_)
print('Feature ranking:', fit.ranking_)

The last feature-selection method is PCA (principal component analysis), which uses linear algebra to transform and compress the data; this is commonly called dimensionality reduction. PCA projects the samples onto the directions of maximum variance, and it is an unsupervised method.

from sklearn.decomposition import PCA
pca = PCA(n_components=3)
fit = pca.fit(x)
print('Explained variance ratio:', fit.explained_variance_ratio_)
print(fit.components_)

Machine Learning Algorithms

Commonly used machine learning algorithms fall into classification and regression. Classification algorithms divide into linear and nonlinear: the main linear ones are logistic regression and linear discriminant analysis; the main nonlinear ones are k-nearest neighbors, naive Bayes, classification and regression trees (CART), and support vector machines. Regression algorithms likewise divide into linear and nonlinear: the main linear ones are linear regression, ridge regression, lasso regression, and elastic net; the main nonlinear ones are k-nearest neighbors, CART, and support vector machines. Let's compare the classification algorithms.

# Import packages
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from matplotlib import pyplot
# Load the data
filename = 'pima_data.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
# Split into input features and output target
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_folds = 10
seed = 7
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
models = {}
models['LR'] = LogisticRegression()
models['LDA'] = LinearDiscriminantAnalysis()
models['KNN'] = KNeighborsClassifier()
models['CART'] = DecisionTreeClassifier()
models['SVM'] = SVC()
models['NB'] = GaussianNB()
results = []
for name in models:
    result = cross_val_score(models[name], X, Y, cv=kfold)
    results.append(result)
    msg = '%s: %.3f (%.3f)' % (name, result.mean(), result.std())
    print(msg)
# Output:
# KNN: 0.727 (0.062)
# SVM: 0.651 (0.072)
# LR: 0.770 (0.048)
# NB: 0.755 (0.043)
# LDA: 0.773 (0.052)
# CART: 0.695 (0.060)

# Compare the algorithms with a box plot
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(models.keys())
pyplot.show()

Ensemble Methods

  • Bagging: split the training set into multiple subsets, train a model on each, and combine their votes to produce the final answer. Bagging is very effective when the data has high variance. Examples: Bagged Decision Trees, Random Forest, and Extra Trees.
  • Boosting: train a sequence of models, where each model corrects the errors of the one before it (a boosting sketch follows the Random Forest example below). Let's start with bagging applied to classification and regression trees.
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
# Load the data
filename = 'pima_data.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
# Split into input features and output target
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_folds = 10
seed = 7
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
cart = DecisionTreeClassifier()
num_tree = 100
model = BaggingClassifier(base_estimator=cart, n_estimators=num_tree, random_state=seed)
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())
# Output:
# 0.770745044429255

Random Forest

from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
# Load the data
filename = 'pima_data.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv(filename, names=names)
# Split into input features and output target
array = data.values
X = array[:, 0:8]
Y = array[:, 8]
num_folds = 10
seed = 7
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
num_tree = 100
max_features = 3
model = RandomForestClassifier(n_estimators=num_tree, random_state=seed, max_features=max_features)
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())
# Output:
# 0.7733766233766234
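
Boosting itself was described above but has not yet been demonstrated on this dataset. Here is a minimal sketch using AdaBoostClassifier under the same cross-validation setup (n_estimators=30 is an arbitrary illustrative choice):

from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.ensemble import AdaBoostClassifier

# Same data and cross-validation setup as the bagging examples above
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = read_csv('pima_data.csv', names=names)
array = data.values
X, Y = array[:, 0:8], array[:, 8]
kfold = KFold(n_splits=10, shuffle=True, random_state=7)
# Each boosting round re-weights the samples the previous rounds got wrong
model = AdaBoostClassifier(n_estimators=30, random_state=7)
result = cross_val_score(model, X, Y, cv=kfold)
print(result.mean())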

Regression Example

This example uses the Boston House Price dataset; let's work through it step by step. 1. Import libraries and data.

# Import libraries
import numpy as np
from matplotlib import pyplot
from pandas import read_csv
from pandas import set_option
from pandas.plotting import scatter_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.metrics import mean_squared_error

# Load the data
filename = 'housing.csv'
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS',
         'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataset = read_csv(filename, names=names, delim_whitespace=True)
# Data dimensions
print(dataset.shape)
# Data type of each attribute
print(dataset.dtypes)
# First 10 records
set_option('display.width', 120)
print(dataset.head(10))
# Descriptive statistics
set_option('display.precision', 1)
print(dataset.describe())
# Correlations (pairwise Pearson coefficients)
set_option('display.precision', 2)
print(dataset.corr(method='pearson'))

We treat a correlation above 0.7 or below -0.7 as strong; for example, the Pearson coefficient between NOX and INDUS is 0.76. After visualizing the data, we will remove the strongly correlated features to improve the algorithms' accuracy. First, let's explore the data graphically.

# Histograms
dataset.hist(sharex=False, sharey=False, xlabelsize=1, ylabelsize=1)
pyplot.show()

# Density plots
dataset.plot(kind='density', subplots=True, layout=(4, 4), sharex=False, fontsize=1)
pyplot.show()

# Box plots
dataset.plot(kind='box', subplots=True, layout=(4, 4), sharex=False, sharey=False, fontsize=8)
pyplot.show()

# Scatter-plot matrix
scatter_matrix(dataset)
pyplot.show()

# Correlation matrix plot
fig = pyplot.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(dataset.corr(), vmin=-1, vmax=1, interpolation='none')
fig.colorbar(cax)
ticks = np.arange(0, 14, 1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(names)
ax.set_yticklabels(names)
pyplot.show()
  • Use feature selection to remove most of the highly correlated features (a sketch for locating such pairs follows this list)
  • Standardize the data to remove the effect of attributes measured in different units
  • Normalize the data to reduce the effect of differing distributions and improve accuracy
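
As a minimal sketch of the first point, the following reuses the dataset and names variables from the loading code above to list the feature pairs whose absolute Pearson coefficient exceeds the 0.7 threshold stated earlier:

# List feature pairs with |Pearson correlation| > 0.7
# (the 0.7 threshold follows the rule of thumb stated above)
corr = dataset.corr()
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr.iloc[i, j]) > 0.7:
            print('%s ~ %s : %.2f' % (names[i], names[j], corr.iloc[i, j]))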
# Split out a validation set
array = dataset.values
X = array[:, 0:13]
Y = array[:, 13]
validation_size = 0.2
seed = 7
X_train, X_validation, Y_train, Y_validation = train_test_split(X, Y, test_size=validation_size, random_state=seed)

# Evaluation setup
num_folds = 10
seed = 7
scoring = 'neg_mean_squared_error'

# Evaluate algorithms - baseline
models = {}
models['LR'] = LinearRegression()
models['LASSO'] = Lasso()
models['EN'] = ElasticNet()
models['KNN'] = KNeighborsRegressor()
models['CART'] = DecisionTreeRegressor()
models['SVM'] = SVR()
# Evaluate algorithms - standardized data
pipelines = {}
pipelines['ScalerLR'] = Pipeline([('Scaler', StandardScaler()), ('LR', LinearRegression())])
pipelines['ScalerLASSO'] = Pipeline([('Scaler', StandardScaler()), ('LASSO', Lasso())])
pipelines['ScalerEN'] = Pipeline([('Scaler', StandardScaler()), ('EN', ElasticNet())])
pipelines['ScalerKNN'] = Pipeline([('Scaler', StandardScaler()), ('KNN', KNeighborsRegressor())])
pipelines['ScalerCART'] = Pipeline([('Scaler', StandardScaler()), ('CART', DecisionTreeRegressor())])
pipelines['ScalerSVM'] = Pipeline([('Scaler', StandardScaler()), ('SVM', SVR())])
results = []
for key in pipelines:
    kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
    cv_result = cross_val_score(pipelines[key], X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_result)
    print('%s: %f (%f)' % (key, cv_result.mean(), cv_result.std()))
# Evaluate algorithms - box plot
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(pipelines.keys())
pyplot.show()
# The tightest box indicates the most compact score distribution

Output:
ScalerSVM: -29.633086 (17.009186)
ScalerCART: -25.136485 (13.179402)
ScalerLR: -21.379856 (9.414264)
ScalerEN: -27.932372 (10.587490)
ScalerKNN: -20.107620 (12.376949)
ScalerLASSO: -26.607314 (8.978761)

KNN has the best (least negative) MSE. Can we improve it further? Let's tune its parameters with a grid search.

# Tune the algorithm - KNN
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_neighbors': [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21]}
model = KNeighborsRegressor()
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)

print('Best: %s using %s' % (grid_result.best_score_, grid_result.best_params_))
cv_results = zip(grid_result.cv_results_['mean_test_score'],
                 grid_result.cv_results_['std_test_score'],
                 grid_result.cv_results_['params'])
for mean, std, param in cv_results:
    print('%f (%f) with %r' % (mean, std, param))

Best: -18.172136963696367 using {'n_neighbors': 3}
-20.208663 (15.029652) with {'n_neighbors': 1}
-18.172137 (12.950570) with {'n_neighbors': 3}
-20.131163 (12.203697) with {'n_neighbors': 5}
-20.575845 (12.345886) with {'n_neighbors': 7}
-20.368264 (11.621738) with {'n_neighbors': 9}
-21.009204 (11.610012) with {'n_neighbors': 11}
-21.151809 (11.943318) with {'n_neighbors': 13}
-21.557400 (11.536339) with {'n_neighbors': 15}
-22.789938 (11.566861) with {'n_neighbors': 17}
-23.871873 (11.340389) with {'n_neighbors': 19}
-24.361362 (11.914786) with {'n_neighbors': 21}

Besides tuning, another way to improve accuracy is to use ensemble methods, for example bagging (Random Forest and Extra Trees) and boosting (AdaBoost and GBM).

# Ensemble methods
ensembles = {}
ensembles['ScaledAB'] = Pipeline([('Scaler', StandardScaler()), ('AB', AdaBoostRegressor())])
ensembles['ScaledAB-KNN'] = Pipeline([('Scaler', StandardScaler()),
                                      ('ABKNN', AdaBoostRegressor(base_estimator=KNeighborsRegressor(n_neighbors=3)))])
ensembles['ScaledAB-LR'] = Pipeline([('Scaler', StandardScaler()), ('ABLR', AdaBoostRegressor(LinearRegression()))])
ensembles['ScaledRFR'] = Pipeline([('Scaler', StandardScaler()), ('RFR', RandomForestRegressor())])
ensembles['ScaledETR'] = Pipeline([('Scaler', StandardScaler()), ('ETR', ExtraTreesRegressor())])
ensembles['ScaledGBR'] = Pipeline([('Scaler', StandardScaler()), ('GBR', GradientBoostingRegressor())])

results = []
for key in ensembles:
    kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
    cv_result = cross_val_score(ensembles[key], X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_result)
    print('%s: %f (%f)' % (key, cv_result.mean(), cv_result.std()))

# Ensemble methods - box plot
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(ensembles.keys())
pyplot.show()

All the ensembles improve accuracy considerably:
ScaledGBR: -10.021402 (4.574862)
ScaledRFR: -14.034097 (6.648385)
ScaledAB-LR: -23.615966 (9.193021)
ScaledAB: -14.542921 (6.774086)
ScaledAB-KNN: -16.218433 (10.990043)
ScaledETR: -11.422266 (5.479142)

GBR and ETR stand out. Every ensemble has an n_estimators parameter, so below we tune the GBM and ET models and then compare the accuracy of the two tuned models.

# Tune the ensemble - GBM
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_estimators': [10, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900]}
model = GradientBoostingRegressor()
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('Best: %s using %s' % (grid_result.best_score_, grid_result.best_params_))

# Tune the ensemble - ET
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_estimators': [5, 10, 20, 30, 40, 50, 60, 70, 80]}
model = ExtraTreesRegressor()
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('Best: %s using %s' % (grid_result.best_score_, grid_result.best_params_))

Output:
Best: -9.424355044118839 using {'n_estimators': 900}
Best: -9.311224106590345 using {'n_estimators': 80}
ET comes out ahead, so we train the final model with the ET algorithm and predict with it.

# Train the final model
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
model = ExtraTreesRegressor(n_estimators=80)
model.fit(X=rescaledX, y=Y_train)
# Evaluate on the validation set
rescaledX_validation = scaler.transform(X_validation)
predictions = model.predict(rescaledX_validation)
print(mean_squared_error(Y_validation, predictions))

Output: 12.548510922181366
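
As a sanity check on scale (an added note, not part of the original run): taking the square root turns the MSE back into the target's own unit, MEDV in thousands of dollars, so an MSE near 12.5 means the predictions are off by roughly 3.5 on average:

import numpy as np
# RMSE expresses the error in the target's own unit (MEDV, $1000s)
print(np.sqrt(mean_squared_error(Y_validation, predictions)))  # roughly 3.54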

Binary Classification Example

This example uses the sonar mines-vs-rocks dataset: from the sonar return we judge whether the object is metal (labeled M) or rock (labeled R).

# Import libraries
import numpy as np
from matplotlib import pyplot
from pandas import read_csv
from pandas.plotting import scatter_matrix
from pandas import set_option
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier

# Load the data
filename = 'sonar.all-data.csv'

dataset = read_csv(filename, header=None)

# Data dimensions
print(dataset.shape)

# Data types
set_option('display.max_rows', 500)
print(dataset.dtypes)

# First 20 records
set_option('display.width', 100)
print(dataset.head(20))

# Descriptive statistics
set_option('display.precision', 3)
print(dataset.describe())

# Class distribution
print(dataset.groupby(60).size())

# Histograms
dataset.hist(sharex=False, sharey=False, xlabelsize=1, ylabelsize=1)
pyplot.show()

# Density plots
dataset.plot(kind='density', subplots=True, layout=(8, 8), sharex=False, legend=False, fontsize=1)
pyplot.show()

# Correlation matrix plot
fig = pyplot.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(dataset.corr(), vmin=-1, vmax=1, interpolation='none')
fig.colorbar(cax)
pyplot.show()
# Split out a validation set
array = dataset.values
X = array[:, 0:60].astype(float)
Y = array[:, 60]
validation_size = 0.2
seed = 7
X_train, X_validation, Y_train, Y_validation = train_test_split(X, Y, test_size=validation_size, random_state=seed)

# Evaluation setup
num_folds = 10
seed = 7
scoring = 'accuracy'

# Evaluate algorithms - raw data
models = {}
models['LR'] = LogisticRegression()
models['LDA'] = LinearDiscriminantAnalysis()
models['KNN'] = KNeighborsClassifier()
models['CART'] = DecisionTreeClassifier()
models['NB'] = GaussianNB()
models['SVM'] = SVC()
results = []
for key in models:
    kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
    cv_results = cross_val_score(models[key], X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    print('%s : %f (%f)' % (key, cv_results.mean(), cv_results.std()))

# Evaluate algorithms - box plot
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(models.keys())
pyplot.show()

# Evaluate algorithms - standardized data
pipelines = {}
pipelines['ScalerLR'] = Pipeline([('Scaler', StandardScaler()), ('LR', LogisticRegression())])
pipelines['ScalerLDA'] = Pipeline([('Scaler', StandardScaler()), ('LDA', LinearDiscriminantAnalysis())])
pipelines['ScalerKNN'] = Pipeline([('Scaler', StandardScaler()), ('KNN', KNeighborsClassifier())])
pipelines['ScalerCART'] = Pipeline([('Scaler', StandardScaler()), ('CART', DecisionTreeClassifier())])
pipelines['ScalerNB'] = Pipeline([('Scaler', StandardScaler()), ('NB', GaussianNB())])
pipelines['ScalerSVM'] = Pipeline([('Scaler', StandardScaler()), ('SVM', SVC())])
results = []
for key in pipelines:
    kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
    cv_results = cross_val_score(pipelines[key], X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    print('%s : %f (%f)' % (key, cv_results.mean(), cv_results.std()))

# Evaluate algorithms - box plot
fig = pyplot.figure()
fig.suptitle('Scaled Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(pipelines.keys())
pyplot.show()

# Tune the algorithm - KNN
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_neighbors': [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21]}
model = KNeighborsClassifier()
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)

print('Best: %s using %s' % (grid_result.best_score_, grid_result.best_params_))
cv_results = zip(grid_result.cv_results_['mean_test_score'],
                 grid_result.cv_results_['std_test_score'],
                 grid_result.cv_results_['params'])
for mean, std, param in cv_results:
    print('%f (%f) with %r' % (mean, std, param))

# Tune the algorithm - SVM
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train).astype(float)
param_grid = {}
param_grid['C'] = [0.1, 0.3, 0.5, 0.7, 0.9, 1.0, 1.3, 1.5, 1.7, 2.0]
param_grid['kernel'] = ['linear', 'poly', 'rbf', 'sigmoid']
model = SVC()
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)

print('Best: %s using %s' % (grid_result.best_score_, grid_result.best_params_))
cv_results = zip(grid_result.cv_results_['mean_test_score'],
                 grid_result.cv_results_['std_test_score'],
                 grid_result.cv_results_['params'])
for mean, std, param in cv_results:
    print('%f (%f) with %r' % (mean, std, param))


# Ensemble methods
ensembles = {}
ensembles['ScaledAB'] = Pipeline([('Scaler', StandardScaler()), ('AB', AdaBoostClassifier())])
ensembles['ScaledGBM'] = Pipeline([('Scaler', StandardScaler()), ('GBM', GradientBoostingClassifier())])
ensembles['ScaledRF'] = Pipeline([('Scaler', StandardScaler()), ('RF', RandomForestClassifier())])
ensembles['ScaledET'] = Pipeline([('Scaler', StandardScaler()), ('ET', ExtraTreesClassifier())])

results = []
for key in ensembles:
    kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
    cv_result = cross_val_score(ensembles[key], X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_result)
    print('%s: %f (%f)' % (key, cv_result.mean(), cv_result.std()))

# Ensemble methods - box plot
fig = pyplot.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
pyplot.boxplot(results)
ax.set_xticklabels(ensembles.keys())
pyplot.show()

# Tune the ensemble - GBM
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
param_grid = {'n_estimators': [10, 50, 100, 200, 300, 400, 500, 600, 700, 800, 900]}
model = GradientBoostingClassifier()
kfold = KFold(n_splits=num_folds, shuffle=True, random_state=seed)
grid = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scoring, cv=kfold)
grid_result = grid.fit(X=rescaledX, y=Y_train)
print('Best: %s using %s' % (grid_result.best_score_, grid_result.best_params_))

# Finalize the model
scaler = StandardScaler().fit(X_train)
rescaledX = scaler.transform(X_train)
model = SVC(C=1.5, kernel='rbf')
model.fit(X=rescaledX, y=Y_train)
# Evaluate the final model
rescaled_validationX = scaler.transform(X_validation)
predictions = model.predict(rescaled_validationX)
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))
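
The overview listed serializing the model as the final step, but neither example above performs it. Here is a minimal sketch with pickle (the file name sonar_svc.pkl is arbitrary), saving the finalized SVC together with its scaler so the identical transform can be replayed at prediction time:

import pickle
# Persist the finalized model and its scaler together (file name is arbitrary)
with open('sonar_svc.pkl', 'wb') as f:
    pickle.dump({'scaler': scaler, 'model': model}, f)

# Later: reload and predict on new data
with open('sonar_svc.pkl', 'rb') as f:
    saved = pickle.load(f)
predictions = saved['model'].predict(saved['scaler'].transform(X_validation))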

