The Machine Learning Project Workflow, and Model Evaluation and Validation

Since April 9th I have been working on Udacity's P1 project, Boston house price prediction. My biggest takeaway from this project was a clear picture of the end-to-end workflow for solving a problem with machine learning: setting up a framework, searching for a model's optimal parameters, and learning methods for model evaluation and validation.

Basic statistics with NumPy

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
minimum_a = np.min(a)    # minimum
maximum_a = np.max(a)    # maximum
mean_a = np.mean(a)      # mean
median_a = np.median(a)  # median
std_a = np.std(a)        # standard deviation
var_a = np.var(a)        # variance
sum_a = np.sum(a)        # sum
```

Reading and processing CSV data with pandas

```python
import pandas as pd

data = pd.read_csv('xxx')
outcome = data['XXX']                # the target column
features = data.drop('XXX', axis=1)  # the remaining feature columns
```

Model evaluation and validation

Classification problems

Accuracy

```python
from sklearn.metrics import accuracy_score

y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]
# Fraction of correctly classified samples
acc = accuracy_score(y_true, y_pred)
print(acc)  # 0.5
# With normalize=False, return the raw count of correct classifications
n_correct = accuracy_score(y_true, y_pred, normalize=False)
print(n_correct)  # 2
```
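Since `normalize=True` is the default, accuracy is just the fraction of matching labels; a minimal NumPy sketch on the same toy labels reproduces both numbers:

```python
import numpy as np

y_true = np.array([0, 1, 2, 3])
y_pred = np.array([0, 2, 1, 3])

# Fraction of matching labels -- same as accuracy_score with normalize=True
acc = np.mean(y_true == y_pred)
# Count of matching labels -- same as accuracy_score with normalize=False
n_correct = np.sum(y_true == y_pred)
```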
Precision

precision = true_positives / (true_positives + false_positives)

```python
>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, average='macro')
0.22...
>>> precision_score(y_true, y_pred, average='micro')
0.33...
>>> precision_score(y_true, y_pred, average='weighted')
0.22...
>>> precision_score(y_true, y_pred, average=None)
array([ 0.66...,  0.        ,  0.        ])
```
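The `macro` average is simply the unweighted mean of the per-class scores that `average=None` returns, which is easy to sanity-check (nothing assumed beyond NumPy and scikit-learn):

```python
import numpy as np
from sklearn.metrics import precision_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

per_class = precision_score(y_true, y_pred, average=None)  # one score per class
macro = precision_score(y_true, y_pred, average='macro')   # their unweighted mean

assert np.isclose(per_class.mean(), macro)
```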
Recall

recall = true_positives / (true_positives + false_negatives)

```python
>>> from sklearn.metrics import recall_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> recall_score(y_true, y_pred, average='macro')
0.33...
>>> recall_score(y_true, y_pred, average='micro')
0.33...
>>> recall_score(y_true, y_pred, average='weighted')
0.33...
>>> recall_score(y_true, y_pred, average=None)
array([ 1.,  0.,  0.])
```
F1 score

The F1 score takes both precision and recall into account to produce a single number. It can be understood as a weighted average (specifically, the harmonic mean) of precision and recall, where the best value is 1 and the worst is 0: F1 = 2 × (precision × recall) / (precision + recall)

```python
>>> from sklearn.metrics import f1_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> f1_score(y_true, y_pred, average='macro')
0.26...
>>> f1_score(y_true, y_pred, average='micro')
0.33...
>>> f1_score(y_true, y_pred, average='weighted')
0.26...
>>> f1_score(y_true, y_pred, average=None)
array([ 0.8,  0. ,  0. ])
```
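The formula above can be verified per class against `f1_score`; in this sketch the only non-obvious step is guarding against division by zero for classes where precision and recall are both 0:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

p = precision_score(y_true, y_pred, average=None)
r = recall_score(y_true, y_pred, average=None)

# Harmonic mean per class; classes with p + r == 0 get an F1 of 0
denom = p + r
f1_manual = np.divide(2 * p * r, denom, out=np.zeros_like(denom), where=denom > 0)

assert np.allclose(f1_manual, f1_score(y_true, y_pred, average=None))
```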

Regression problems

Mean absolute error

```python
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([ 0.5,  1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.849...
```
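MAE is just the mean of the absolute residuals, so one line of NumPy reproduces the first result above:

```python
import numpy as np

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])

# Mean of the absolute residuals -- equivalent to mean_absolute_error
mae = np.mean(np.abs(y_true - y_pred))
```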
Mean squared error

```python
>>> from sklearn.metrics import mean_squared_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred)
0.375
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_squared_error(y_true, y_pred)
0.708...
>>> mean_squared_error(y_true, y_pred, multioutput='raw_values')
array([ 0.416...,  1.        ])
>>> mean_squared_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.824...
```
R2 score

```python
>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='variance_weighted')
0.938...
>>> y_true = [1, 2, 3]
>>> y_pred = [1, 2, 3]
>>> r2_score(y_true, y_pred)
1.0
>>> y_true = [1, 2, 3]
>>> y_pred = [2, 2, 2]
>>> r2_score(y_true, y_pred)
0.0
>>> y_true = [1, 2, 3]
>>> y_pred = [3, 2, 1]
>>> r2_score(y_true, y_pred)
-3.0
```
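The pattern in these results follows from the definition R2 = 1 - SS_res / SS_tot: a perfect fit scores 1, always predicting the mean scores 0, and doing worse than the mean goes negative. A hand computation of the last example (reversed predictions):

```python
import numpy as np

y_true = np.array([1, 2, 3])
y_pred = np.array([3, 2, 1])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2 = 1 - ss_res / ss_tot
```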
Explained variance score

```python
>>> from sklearn.metrics import explained_variance_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> explained_variance_score(y_true, y_pred)
0.957...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> explained_variance_score(y_true, y_pred, multioutput='uniform_average')
0.983...
```

Grid search and cross-validation

```python
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer

def fit_model_k_fold(X, y):
    """Performs grid search over the 'max_depth' parameter for a
    decision tree regressor trained on the input data [X, y]."""

    # Create cross-validation sets from the training data
    # cv_sets = ShuffleSplit(n_splits=10, test_size=0.20, random_state=0)
    k_fold = KFold(n_splits=10)

    # Create a decision tree regressor object
    regressor = DecisionTreeRegressor(random_state=80)

    # Dictionary for the parameter 'max_depth' with a range from 1 to 10
    params = {'max_depth': range(1, 11)}

    # Transform 'performance_metric' (the project's R2-based metric,
    # defined elsewhere) into a scoring function using 'make_scorer'
    scoring_fnc = make_scorer(performance_metric)

    # Create the grid search object
    grid = GridSearchCV(regressor, param_grid=params,
                        scoring=scoring_fnc, cv=k_fold)

    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)

    # Return the optimal model after fitting the data
    return grid.best_estimator_

reg_k_fold = fit_model_k_fold(X_train, y_train)
print("k_fold Parameter 'max_depth' is {} for the optimal model."
      .format(reg_k_fold.get_params()['max_depth']))

# Show predictions
for i, price in enumerate(reg_k_fold.predict(client_data)):
    print("k_fold Predicted selling price for Client {}'s home: ¥{:,.2f}万"
          .format(i + 1, price))
```
