I have been trying for a while to figure out how to silence LightGBM. Specifically, I want to suppress LightGBM's output during training (i.e. the feedback on each boosting step).
My model:
params = {
    'objective': 'regression',
    'learning_rate': 0.9,
    'max_depth': 1,
    'metric': 'mean_squared_error',
    'seed': 7,
    'boosting_type': 'gbdt'
}
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=100000,
                valid_sets=lgb_eval,
                early_stopping_rounds=100)
I tried adding verbose=0 as suggested in the docs (https://github.com/microsoft/LightGBM/blob/master/docs/Parameters.rst), but that does not work.
Does anyone know how to suppress LightGBM's output during training?
Posted on 2019-06-18 08:21:56
As @Peter suggested, setting verbose_eval=-1 suppresses most of the LightGBM output (link: here). However, LightGBM may still emit other warnings, for example No further splits with positive gain. Those can be suppressed as follows (source: here):
lgb_train = lgb.Dataset(X_train, y_train, params={'verbose': -1}, free_raw_data=False)
lgb_eval = lgb.Dataset(X_test, y_test, params={'verbose': -1}, free_raw_data=False)
gbm = lgb.train({'verbose': -1}, lgb_train, valid_sets=lgb_eval, verbose_eval=False)
Posted on 2021-10-19 20:14:04
Solution for the sklearn API (tested on v3.3.0):
import lightgbm as lgb
param = {'objective': 'binary', 'is_unbalance': 'true',
         'metric': 'average_precision'}
model_skl = lgb.sklearn.LGBMClassifier(**param)
# early stopping and verbosity
# it should be 0 or False, not -1/-100/etc
callbacks = [lgb.early_stopping(10, verbose=0), lgb.log_evaluation(period=0)]
# train
model_skl.fit(x_train, y_train,
              eval_set=[(x_train, y_train), (x_val, y_val)],
              eval_names=['train', 'valid'],
              eval_metric='average_precision',
              callbacks=callbacks)
Posted on 2019-06-18 07:05:37
To suppress (most of) the output from LightGBM, you can set the following parameters.
Suppress warnings: 'verbose': -1 must be specified in params={}.
Suppress the output of the training iterations: verbose_eval=False must be passed as an argument to train().
Minimal example:
params = {
    'objective': 'regression',
    'learning_rate': 0.9,
    'max_depth': 1,
    'metric': 'mean_squared_error',
    'seed': 7,
    'verbose': -1,
    'boosting_type': 'gbdt'
}
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=100000,
                valid_sets=lgb_eval,
                verbose_eval=False,
                early_stopping_rounds=100)
Source: https://datascience.stackexchange.com/questions/53954