The model below comes from the Keras website and behaves exactly as expected. It is defined with keras.models.Sequential(). I want to convert it to a keras.models.Model() definition to make it more flexible for future use, but after my conversion the performance drops sharply.
You can find the original model on the Keras website:
def build_model():
    model = Sequential([
        layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_dataset.keys())]),
        layers.Dense(64, activation=tf.nn.relu),
        layers.Dense(1)
    ])
    optimizer = keras.optimizers.Adam()
    model.compile(loss='mean_squared_error',
                  optimizer=optimizer,
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    return model
model = build_model()
model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_22 (Dense) (None, 64) 640
_________________________________________________________________
dense_23 (Dense) (None, 64) 4160
_________________________________________________________________
dense_24 (Dense) (None, 1) 65
=================================================================
Total params: 4,865
Trainable params: 4,865
Non-trainable params: 0
_________________________________________________________________

The code below is my conversion:
def build_model_base():
    input = Input(shape=[len(train_dataset.keys())])
    x = Dense(64, activation='relu', name="dense1")(input)
    x = Dense(64, activation='relu', name="dense2")(x)
    output = Dense(1, activation='sigmoid', name='output')(x)
    model = Model(input=[input], output=[output])
    optimizer = keras.optimizers.Adam()
    model.compile(loss='mean_squared_error',
                  optimizer=optimizer,
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    return model
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_18 (InputLayer) (None, 9) 0
_________________________________________________________________
dense1 (Dense) (None, 64) 640
_________________________________________________________________
dense2 (Dense) (None, 64) 4160
_________________________________________________________________
output (Dense) (None, 1) 65
=================================================================
Total params: 4,865
Trainable params: 4,865
Non-trainable params: 0
_________________________________________________________________

The only difference I can see is that .Sequential does not count the input layer while .Model does, but I don't believe that makes the model structures any different. However, the performance of .Sequential is:

(training plot omitted)

while the performance of my converted .Model() is:

(training plot omitted)

Can anyone tell me what I am doing wrong?
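As a sanity check that the two architectures really are identical, the "Total params: 4,865" line in both summaries can be reproduced by hand: a Dense layer has in_features * units weights plus units biases. A small sketch of the arithmetic (dense_params is just an illustrative helper, not part of Keras):

```python
# Parameter count of a Dense layer: weights (in_features * units) + biases (units).
def dense_params(in_features, units):
    return in_features * units + units

# 9 input features, two hidden layers of 64 units, one output unit,
# matching both model summaries above.
layers_io = [(9, 64), (64, 64), (64, 1)]
counts = [dense_params(i, u) for i, u in layers_io]
print(counts, sum(counts))  # [640, 4160, 65] 4865
```

Both summaries agree layer by layer, so the difference must lie elsewhere than the architecture.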
Some additional context:
I have read this post, but my code runs entirely on the CPU in Google Colab.
print(keras.__version__) # 2.0.4
print(tf.__version__) #1.14.0-rc1
The code that plots the loss:
def plot_history(history):
    hist = pd.DataFrame(history.history)
    hist['epoch'] = history.epoch

    plt.figure()
    plt.xlabel('Epoch')
    plt.ylabel('Mean Abs Error [MPG]')
    plt.plot(hist['epoch'], hist['mean_absolute_error'],
             label='Train Error')
    plt.plot(hist['epoch'], hist['val_mean_absolute_error'],
             label='Val Error')
    y_max = max(hist['val_mean_absolute_error'])
    plt.ylim([0, y_max])
    plt.legend()

    plt.figure()
    plt.xlabel('Epoch')
    plt.ylabel('Mean Square Error [$MPG^2$]')
    plt.plot(hist['epoch'], hist['mean_squared_error'],
             label='Train Error')
    plt.plot(hist['epoch'], hist['val_mean_squared_error'],
             label='Val Error')
    y_max = max(hist['val_mean_squared_error'])
    plt.ylim([0, y_max])
    plt.legend()
    plt.show()

The code that trains the model (identical for both models):
his_seq = model.fit(normed_train_data.values, train_labels.values,
                    batch_size=128,
                    validation_split=0.1,
                    epochs=100,
                    verbose=0)
plot_history(his_seq)

Any suggestions are welcome!
Posted on 2019-06-19 11:16:05
By default, Dense layers in Keras, including the output layer of the Sequential model you built, use the 'linear' activation function.
But in your conversion you specified the activation as 'sigmoid', and that makes the difference.
Here is the Keras documentation's description of the default activation:
activation: Activation function to use (see activations). If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x).

https://stackoverflow.com/questions/56659428