# A Score-Boosting Tool: Automated LightGBM Hyperparameter Tuning with Hyperopt

Hyperopt is one of the most popular hyperparameter-tuning packages: it has earned about 5.8k stars on GitHub, and it shows up regularly in Kaggle and Tianchi competitions.

## 1. A Single-Parameter Search Space

```python
import numpy as np
import matplotlib.pyplot as plt
from hyperopt import fmin, tpe, hp, Trials, space_eval

# 1. Define the objective function
def loss(x):
    return (x - 1) ** 2

# 2. Define the hyperparameter space
# Commonly used search spaces:
# hp.choice  -- discrete values
# hp.uniform -- uniform distribution
# hp.normal  -- normal distribution
spaces = hp.uniform("x", -5.0, 5.0)

# 3. Run the search
# hyperopt supports the following search algorithms:
# random search (hyperopt.rand.suggest)
# simulated annealing (hyperopt.anneal.suggest)
# TPE (hyperopt.tpe.suggest, Bayesian optimization; the full name is
# Tree-structured Parzen Estimator Approach)
trials = Trials()
best = fmin(fn=loss, space=spaces, algo=tpe.suggest, max_evals=1000, trials=trials)

# 4. Retrieve the best parameters
best_params = space_eval(spaces, best)
print("best_params = ", best_params)

# 5. Plot the search process
losses = [x["result"]["loss"] for x in trials.trials]
minlosses = [np.min(losses[0:i + 1]) for i in range(len(losses))]
steps = range(len(losses))

fig, ax = plt.subplots(figsize=(6, 3.7), dpi=144)
ax.scatter(x=steps, y=losses, alpha=0.3)
ax.plot(steps, minlosses, color="red")  # running minimum
plt.xlabel("step")
plt.ylabel("loss")
plt.yscale("log")
```
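For intuition: TPE converges on values near x = 1, where the loss (x - 1)² is minimized. With the same evaluation budget, even a plain random-search baseline (no hyperopt, just uniform sampling over the same interval) gets close to the optimum on this easy one-dimensional problem; TPE's advantage is that it concentrates later samples near promising regions, which matters as the space grows. A minimal sketch of that baseline:

```python
import random

def loss(x):
    return (x - 1) ** 2

random.seed(0)
# Draw 1000 uniform samples from [-5, 5] and keep the best one,
# mimicking what rand.suggest does inside hyperopt.
candidates = [random.uniform(-5.0, 5.0) for _ in range(1000)]
best_x = min(candidates, key=loss)
print("best_x =", best_x, "loss =", loss(best_x))
```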

## 2. A Grid-Style (Dictionary) Search Space

```python
import numpy as np
import matplotlib.pyplot as plt
from hyperopt import fmin, tpe, hp, anneal, Trials, space_eval

# 1. Define the objective function
def loss(params):
    x, y = params["x"], params["y"]
    return x ** 2 + y ** 2

# 2. Define the hyperparameter space
hspaces = {"x": hp.uniform("x", -1.0, 1.0),
           "y": hp.uniform("y", 1.0, 2.0)}

# 3. Run the search (here: simulated annealing;
# rand.suggest and tpe.suggest work the same way)
trials = Trials()
best = fmin(fn=loss, space=hspaces, algo=anneal.suggest, max_evals=1000, trials=trials)

# 4. Retrieve the best parameters
best_params = space_eval(hspaces, best)
print("best_params = ", best_params)

# 5. Plot the search process
losses = [x["result"]["loss"] for x in trials.trials]
minlosses = [np.min(losses[0:i + 1]) for i in range(len(losses))]
steps = range(len(losses))

fig, ax = plt.subplots(figsize=(6, 3.7), dpi=144)
ax.scatter(x=steps, y=losses, alpha=0.3)
ax.plot(steps, minlosses, color="red")  # running minimum
plt.xlabel("step")
plt.ylabel("loss")
plt.yscale("log")
```

## 3. A Tree-Structured Search Space

```python
import math
import numpy as np
import matplotlib.pyplot as plt
from hyperopt import fmin, tpe, hp, Trials, space_eval

# 1. Define the objective function
def loss(params):
    f = params[0]
    if f == "sin":
        x = params[1]["x"]
        return math.sin(x) ** 2
    elif f == "cos":
        x = params[1]["x"]
        y = params[1]["y"]
        return math.cos(x) ** 2 + y ** 2
    elif f == "sinh":
        x = params[1]["x"]
        return math.sinh(x) ** 2
    else:
        assert f == "cosh"
        x = params[1]["x"]
        y = params[1]["y"]
        return math.cosh(x) ** 2 + y ** 2

# 2. Define the hyperparameter space
# Note: every hp label ("x1", "x2", ...) must be unique across the whole space,
# which is why the same conceptual parameter gets a different label per branch.
spaces = hp.choice("f",
    [("sin",  {"x": hp.uniform("x1", -math.pi / 2, math.pi)}),
     ("cos",  {"x": hp.uniform("x2", -math.pi / 2, math.pi), "y": hp.uniform("y2", -1, 1)}),
     ("sinh", {"x": hp.uniform("x3", -5, 5)}),
     ("cosh", {"x": hp.uniform("x4", -5, 5), "y": hp.uniform("y4", -1, 1)})])

# 3. Run the search (TPE; rand.suggest and anneal.suggest also work)
trials = Trials()
best = fmin(fn=loss, space=spaces, algo=tpe.suggest, max_evals=1000, trials=trials)

# 4. Retrieve the best parameters
best_params = space_eval(spaces, best)
print("best_params = ", best_params)

# 5. Plot the search process
losses = [x["result"]["loss"] for x in trials.trials]
minlosses = [np.min(losses[0:i + 1]) for i in range(len(losses))]
steps = range(len(losses))

fig, ax = plt.subplots(figsize=(6, 3.7), dpi=144)
ax.scatter(x=steps, y=losses, alpha=0.3)
ax.plot(steps, minlosses, color="red")  # running minimum
plt.xlabel("step")
plt.ylabel("loss")
plt.yscale("log")
```

## 4. Manual LightGBM Tuning

```python
learning_rate = 0.05   # try each of 0.1, 0.05, 0.01
boosting_type = 'gbdt' # try each of 'gbdt', 'rf', 'dart'
num_leaves = 63        # try each of 15, 31, 63
```

```
train f1_score: 0.99448
valid f1_score: 0.96591
```
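Trying the candidate values above by hand means fitting 3 × 3 × 3 = 27 models. A small sketch of that bookkeeping, where `train_and_eval` is a hypothetical stand-in for the training-and-scoring script below:

```python
from itertools import product

# Candidate values from the snippet above
learning_rates = [0.1, 0.05, 0.01]
boosting_types = ['gbdt', 'rf', 'dart']
num_leaves_list = [15, 31, 63]

# Enumerate every combination of the candidate values.
grid = list(product(learning_rates, boosting_types, num_leaves_list))
print(len(grid), "configurations to try")

# For each combination you would train and score a model, e.g.:
# for lr, bt, nl in grid:
#     score = train_and_eval(learning_rate=lr, boosting_type=bt, num_leaves=nl)
```

This is exactly the tedium the hyperopt version in the next section automates, and exhaustive grids grow multiplicatively with every parameter added.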
```python
import datetime
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
import warnings
warnings.filterwarnings('ignore')

def printlog(info):
    nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    print("\n" + "==========" * 8 + "%s" % nowtime)
    print(info + '...\n\n')

#================================================================================
# 1. Read the data
#================================================================================
printlog("step1: reading data...")

# Build dftrain, dftest from the sklearn breast-cancer dataset
breast = datasets.load_breast_cancer()
df = pd.DataFrame(breast.data, columns=[x.replace(' ', '_') for x in breast.feature_names])
df['label'] = breast.target
df['mean_texture'] = df['mean_texture'].apply(lambda x: int(x))
dftrain, dftest = train_test_split(df)

categorical_features = ['mean_texture']
lgb_train = lgb.Dataset(dftrain.drop(['label'], axis=1), label=dftrain['label'],
                        categorical_feature=categorical_features)

lgb_valid = lgb.Dataset(dftest.drop(['label'], axis=1), label=dftest['label'],
                        categorical_feature=categorical_features,
                        reference=lgb_train)

#================================================================================
# 2. Set the parameters
#================================================================================
printlog("step2: setting parameters...")

boost_round = 100
early_stop_rounds = 50

params = {
    'learning_rate': 0.05,
    'boosting_type': 'gbdt',  # alternatives: 'dart', 'rf'
    'num_leaves': 63,
    'objective': 'binary',
    'metric': ['auc'],
    'feature_fraction': 0.9,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': -1
}

#================================================================================
# 3. Train the model
#================================================================================
printlog("step3: training model...")

results = {}
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=boost_round,
                valid_sets=(lgb_valid, lgb_train),
                valid_names=('validate', 'train'),
                early_stopping_rounds=early_stop_rounds,
                evals_result=results,
                verbose_eval=True)

#================================================================================
# 4. Evaluate the model
#================================================================================
printlog("step4: evaluating model ...")

y_pred_train = gbm.predict(dftrain.drop('label', axis=1), num_iteration=gbm.best_iteration)
y_pred_test = gbm.predict(dftest.drop('label', axis=1), num_iteration=gbm.best_iteration)

train_score = f1_score(dftrain['label'], y_pred_train > 0.5)
val_score = f1_score(dftest['label'], y_pred_test > 0.5)

print('train f1_score: {:.5} '.format(train_score))
print('valid f1_score: {:.5} \n'.format(val_score))

lgb.plot_metric(results)
lgb.plot_importance(gbm, importance_type="gain")

#================================================================================
# 5. Save the model
#================================================================================
printlog("step5: saving model ...")

model_dir = "gbm.model"
print("model_dir: %s" % model_dir)
gbm.save_model(model_dir, num_iteration=gbm.best_iteration)
```

## 5. Automated LightGBM Tuning with Hyperopt

```python
import datetime
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
import matplotlib.pyplot as plt
from hyperopt import fmin, hp, Trials, space_eval, rand, tpe, anneal
import warnings
warnings.filterwarnings('ignore')

def printlog(info):
    nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    print("\n" + "==========" * 8 + "%s" % nowtime)
    print(info + '...\n\n')

#================================================================================
# 1. Read the data
#================================================================================
printlog("step1: reading data...")

# Build dftrain, dftest from the sklearn breast-cancer dataset
breast = datasets.load_breast_cancer()
df = pd.DataFrame(breast.data, columns=[x.replace(' ', '_') for x in breast.feature_names])
df['label'] = breast.target
df['mean_texture'] = df['mean_texture'].apply(lambda x: int(x))
dftrain, dftest = train_test_split(df)

categorical_features = ['mean_texture']
lgb_train = lgb.Dataset(dftrain.drop(['label'], axis=1), label=dftrain['label'],
                        categorical_feature=categorical_features, free_raw_data=False)

lgb_valid = lgb.Dataset(dftest.drop(['label'], axis=1), label=dftest['label'],
                        categorical_feature=categorical_features,
                        reference=lgb_train, free_raw_data=False)

#================================================================================
# 2. Search the hyperparameters
#================================================================================
printlog("step2: searching parameters...")

boost_round = 1000
early_stop_rounds = 50

params = {
    'learning_rate': 0.1,
    'boosting_type': 'gbdt',  # alternatives: 'dart', 'rf'
    'objective': 'binary',
    'metric': ['auc'],
    'num_leaves': 31,
    'max_depth': 6,
    'min_data_in_leaf': 5,
    'min_gain_to_split': 0,
    'reg_alpha': 0,
    'reg_lambda': 0,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'feature_pre_filter': False,
    'verbose': -1
}

# 1. Define the objective function.
# fmin minimizes, so return the negative validation F1 score.
def loss(config):
    params.update(config)
    gbm = lgb.train(params,
                    lgb_train,
                    num_boost_round=boost_round,
                    valid_sets=(lgb_valid, lgb_train),
                    valid_names=('validate', 'train'),
                    early_stopping_rounds=early_stop_rounds,
                    verbose_eval=False)
    y_pred_test = gbm.predict(dftest.drop('label', axis=1), num_iteration=gbm.best_iteration)
    val_score = f1_score(dftest['label'], y_pred_test > 0.5)
    return -val_score

# 2. Define the hyperparameter space.
# Uncomment the less important hyperparameters below to widen the search as needed.
spaces = {"learning_rate": hp.loguniform("learning_rate", np.log(0.001), np.log(0.5)),
          "boosting_type": hp.choice("boosting_type", ['gbdt', 'dart', 'rf']),
          "num_leaves": hp.choice("num_leaves", range(15, 128)),
          #"max_depth": hp.choice("max_depth", range(3, 11)),
          #"min_data_in_leaf": hp.choice("min_data_in_leaf", range(1, 50)),
          #"min_gain_to_split": hp.uniform("min_gain_to_split", 0.0, 1.0),
          #"reg_alpha": hp.uniform("reg_alpha", 0, 2),
          #"reg_lambda": hp.uniform("reg_lambda", 0, 2),
          #"feature_fraction": hp.uniform("feature_fraction", 0.5, 1.0),
          #"bagging_fraction": hp.uniform("bagging_fraction", 0.5, 1.0),
          #"bagging_freq": hp.choice("bagging_freq", range(1, 20))
          }

# 3. Run the search (TPE; rand.suggest and anneal.suggest also work)
trials = Trials()
best = fmin(fn=loss, space=spaces, algo=tpe.suggest, max_evals=100, trials=trials)

# 4. Retrieve the best parameters
best_params = space_eval(spaces, best)
print("best_params = ", best_params)

# 5. Plot the search process
losses = [x["result"]["loss"] for x in trials.trials]
minlosses = [np.min(losses[0:i + 1]) for i in range(len(losses))]
steps = range(len(losses))

fig, ax = plt.subplots(figsize=(6, 3.7), dpi=144)
ax.scatter(x=steps, y=losses, alpha=0.3)
ax.plot(steps, minlosses, color="red")  # running minimum
plt.xlabel("step")
plt.ylabel("loss")

#================================================================================
# 3. Train the model with the best parameters
#================================================================================
printlog("step3: training model...")

params.update(best_params)
results = {}
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=boost_round,
                valid_sets=(lgb_valid, lgb_train),
                valid_names=('validate', 'train'),
                early_stopping_rounds=early_stop_rounds,
                evals_result=results,
                verbose_eval=True)

#================================================================================
# 4. Evaluate the model
#================================================================================
printlog("step4: evaluating model ...")

y_pred_train = gbm.predict(dftrain.drop('label', axis=1), num_iteration=gbm.best_iteration)
y_pred_test = gbm.predict(dftest.drop('label', axis=1), num_iteration=gbm.best_iteration)

train_score = f1_score(dftrain['label'], y_pred_train > 0.5)
val_score = f1_score(dftest['label'], y_pred_test > 0.5)

print('train f1_score: {:.5} '.format(train_score))
print('valid f1_score: {:.5} \n'.format(val_score))

fig2, ax2 = plt.subplots(figsize=(6, 3.7), dpi=144)
fig3, ax3 = plt.subplots(figsize=(6, 3.7), dpi=144)
lgb.plot_metric(results, ax=ax2)
lgb.plot_importance(gbm, importance_type="gain", ax=ax3)

#================================================================================
# 5. Save the model
#================================================================================
printlog("step5: saving model ...")

model_dir = "gbm.model"
print("model_dir: %s" % model_dir)
gbm.save_model(model_dir, num_iteration=gbm.best_iteration)
```

```
train f1_score: 1.0
valid f1_score: 0.98925
```
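The scores above come from thresholding predicted probabilities at 0.5 and then computing F1, the harmonic mean of precision and recall. A tiny pure-Python check of what `f1_score(labels, probs > 0.5)` computes, using hypothetical probabilities and labels:

```python
def f1(y_true, y_pred):
    # F1 = 2 * precision * recall / (precision + recall)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

probs = [0.9, 0.2, 0.4, 0.8, 0.7]             # hypothetical predicted probabilities
labels = [1, 1, 0, 1, 0]                      # hypothetical true labels
preds = [1 if p > 0.5 else 0 for p in probs]  # threshold at 0.5
print(f1(labels, preds))
```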
