I am trying to use the Keras Tuner Hyperband to select the hyperparameters of my autoencoder model. Here is some pseudocode:
class AEHyperModel(kt.HyperModel):
    def __init__(self, input_shape):
        self.input_shape = input_shape

    def build(self, hp):
        # Input layer
        x = Input(shape=(self.input_shape,))
        # Encoder layer 1
        hp_units1 = hp.Choice('units1', values=[10, 15])
        z = Dense(units=hp_units1, activation='relu')(x)
        # Encoder layer 2
        hp_units2 = hp.Choice('units2', values=[3, 4])
        z = Dense(units=hp_units2, activation='relu')(z)
        # Decoder
        y = Dense(units=hp_units1, activation='relu')(z)
        y = Dense(units=self.input_shape, activation='linear')(y)
        # Compile the autoencoder
        autoencoder = Model(x, y)
        autoencoder.compile(optimizer='adam', loss='mean_squared_error')
        return autoencoder

    def fit(self, hp, autoencoder, *args, **kwargs):
        return autoencoder.fit(*args,
                               batch_size=hp.Choice('batch_size', [16]),
                               **kwargs)

This is how I run my hyperparameter search:
tuner= kt.Hyperband(AEHyperModel(input_shape),
objective='val_loss',
max_epochs=100,
factor=3,
directory=os.path.join(softwaredir, 'AEhypertuning'),
project_name='DRautoencoder2',
overwrite=True,
hyperband_iterations=5,
executions_per_trial=5)
# patient early stopping and tensorboard callback
callbacks=[EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=20, restore_best_weights=True),
keras.callbacks.TensorBoard('C:/whatever/tmp/tb_logs')]
# Search the best hyperparameters for the model
tuner.search(Xz, Xz,
epochs=100,
shuffle=True,
validation_data=(valz, valz),
             callbacks=callbacks)

If I then do:
tuner.executions_per_trial
Out[12]: 5

which seems right. According to the docs, oracle.trials[trial_id] contains metrics, which holds a MetricsTracker object, described as:
Record of the values of multiple executions of all metrics.
It contains `MetricHistory` instances for the metrics.
Args: metrics: List of strings of the metric names.
Then, to see the result of each execution of each trial, I can go into metrics.metrics and find a val_loss MetricHistory object, described in the docs as:
Record of the values of multiple executions of one metric.
It contains a collection of `MetricObservation` instances.
Args: direction: String. The direction of the metric to optimize. The value should be "min" or "max".
However,
tuner.oracle.trials['0000'].metrics.metrics['val_loss'].get_history()
Out[19]: [MetricObservation(value=[0.118283973634243], step=1)]
tuner.oracle.trials['0000'].metrics.metrics['val_loss'].get_statistics()
Out[20]:
{'min': 0.118283973634243,
'max': 0.118283973634243,
'mean': 0.118283973634243,
'median': 0.118283973634243,
'var': 0.0,
'std': 0.0}

Only one execution per trial?
Can anyone help? How can I actually run multiple executions per trial and access the results of each execution, or at least their mean and std, so I know how stable the results are?
Thanks
Posted on 2022-10-14 12:00:34
I have solved this by creating a custom TensorFlow callback, in case it helps anyone:
import numpy as np
from keras.callbacks import Callback

class Logger(Callback):
    def on_train_begin(self, logs=None):
        # Create score holders
        global val_score_holder
        val_score_holder = []
        global train_score_holder
        train_score_holder = []

    def on_epoch_end(self, epoch, logs):
        # Access the score holders from the global workspace
        global val_score_holder
        global train_score_holder
        # Store scores
        val_score_holder.append(logs['val_loss'])
        train_score_holder.append(logs['loss'])

    def on_train_end(self, logs=None):
        # Access the tuner and score holders from the global workspace
        global tuner
        global val_score_holder
        global train_score_holder
        # Get the last (current) trial
        trial = list(tuner.oracle.trials.keys())[-1]
        # Create new attributes if not already present, i.e. a new trial
        if 'rep_val_loss' not in dir(tuner.oracle.trials[trial]):
            tuner.oracle.trials[trial].rep_val_loss = []
            tuner.oracle.trials[trial].rep_train_loss = []
        # Add the min val loss and the corresponding training loss
        tuner.oracle.trials[trial].rep_val_loss.append(np.min(val_score_holder))
        tuner.oracle.trials[trial].rep_train_loss.append(train_score_holder[np.argmin(val_score_holder)])

https://stackoverflow.com/questions/73911329
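Once the search has finished, the per-execution losses stored on each trial can be summarized with numpy to judge stability. A minimal sketch, assuming a list shaped like the `rep_val_loss` attribute the callback above fills in (the values here are made up for illustration):

```python
import numpy as np

# Hypothetical per-execution validation losses for one trial, as the
# Logger callback above would collect them (values are illustrative):
rep_val_loss = [0.118, 0.121, 0.117, 0.125, 0.119]

# Mean and standard deviation across executions indicate how stable
# the trial's result is
mean_loss = float(np.mean(rep_val_loss))
std_loss = float(np.std(rep_val_loss))
print(mean_loss, std_loss)
```

In a real run you would read the list from `tuner.oracle.trials[trial_id].rep_val_loss` for each trial instead of hard-coding it.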