I defined a VAE like this:
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

def cond_encoder(nfeatures, ncondfeatures, layers, latent_dim, activation='relu'):
    finputs = Input(shape=(nfeatures,))
    cinputs = Input(shape=(ncondfeatures,))
    inputs = concatenate([finputs, cinputs])
    hidden = Dense(layers[0], activation=activation, name="Dense_enc0")(inputs)
    for i in range(1, len(layers)):
        hidden = Dense(units=layers[i], activation=activation, name=f"Dense_enc{i}")(hidden)
    z_mean = Dense(units=latent_dim, activation='linear', name='z_mean')(hidden)
    z_logvar = Dense(units=latent_dim, activation='linear', name='z_logvar')(hidden)
    # freeze the layer instance, not the class attribute
    kl_layer = KLDivergenceLayer(weight=1.0)
    kl_layer.trainable = False
    z_mean, z_logvar = kl_layer([z_mean, z_logvar])
    encoder = Model(inputs=[finputs, cinputs], outputs=[z_mean, z_logvar])
    return encoder
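The KLDivergenceLayer used above is not defined in the question. A common pattern for such a layer (a minimal sketch, not necessarily the asker's implementation) is an identity layer that attaches the KL term via add_loss:

from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer

class KLDivergenceLayer(Layer):
    """Identity layer that adds weight * KL(q(z|x) || N(0, I)) to the model loss."""
    def __init__(self, weight=1.0, **kwargs):
        super().__init__(**kwargs)
        self.weight = weight

    def call(self, inputs):
        z_mean, z_logvar = inputs
        # KL divergence to a standard normal, summed over latent dims, averaged over the batch
        kl = -0.5 * K.sum(1 + z_logvar - K.square(z_mean) - K.exp(z_logvar), axis=-1)
        self.add_loss(self.weight * K.mean(kl))
        return inputs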
def cond_decoder(nfeatures, ncondfeatures, layers, latent_dim, activation='relu'):
    fromEncoder = Input(shape=(latent_dim,))
    cinputs = Input(shape=(ncondfeatures,))
    inputs = concatenate([fromEncoder, cinputs])
    hidden = Dense(units=layers[0], activation=activation, name="Dense_dec0")(inputs)
    for i in range(1, len(layers)):
        hidden = Dense(units=layers[i], activation=activation, name=f"Dense_dec{i}")(hidden)
    output = Dense(units=nfeatures, activation='linear')(hidden)
    decoder = Model(inputs=[fromEncoder, cinputs], outputs=output)
    return decoder
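models.sampleZ, used in the assembly below, is also not shown; in a VAE it is normally the reparameterization trick. A sketch under that assumption:

from tensorflow.keras import backend as K

def sampleZ(args):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    z_mean, z_logvar = args
    eps = K.random_normal(shape=K.shape(z_mean))
    return z_mean + K.exp(0.5 * z_logvar) * eps

In current tf.keras, calling backend ops on symbolic tensors directly can fail, so this would typically be wrapped in a Lambda layer, e.g. Lambda(sampleZ)(Z_latent_space).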
encoder = models.cond_encoder(nfeatures * nclusters, ncondfeatures, dense_layers, nlatent_dim, activation='relu')
decoder = models.cond_decoder(nfeatures * nclusters, ncondfeatures, dense_layers[::-1], nlatent_dim, activation='relu')

finputs = Input(shape=(nfeatures * nclusters,))
cinputs = Input(shape=(ncondfeatures,))
Z_latent_space = encoder([finputs, cinputs])
sampled_latent_space = models.sampleZ(Z_latent_space)
outputs = decoder([sampled_latent_space, cinputs])
vae_model_train = Model(inputs=[finputs, cinputs], outputs=outputs)

Then I compile and fit the model like this:
vae_model_train.compile(loss=mse, optimizer=Adam(lr))
history = vae_model_train.fit([trainstuff, traincond], [trainstuff, traincond],
                              batch_size=batch_size, epochs=100, verbose=1,
                              validation_data=([val_jets, valcond], [val_jets, valcond]),
                              initial_epoch=0)

What I want is to compute the mean squared error only between trainstuff and the reconstructed trainstuff, but passing the targets the way I do, I suspect the MSE is being computed between [trainstuff, traincond] and the reconstruction?
I tried this custom loss function:
def custom_loss(use_mse=False):
    def loss(y_true, y_pred):
        return K.int_shape(y_pred)[1] * mse(y_true[0], y_pred[0])
    return loss

I'm not sure whether this is the right way to do it?
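One reason the indexing above is suspect: Keras calls a loss function once per model output, with matching (y_true, y_pred) tensors, so inside loss the expression y_true[0] slices the first sample of the batch rather than selecting the first target. If a model really had two outputs and only one should be penalized, the usual route is a per-output loss dict at compile time. A minimal, self-contained sketch with hypothetical output names 'reco' and 'aux':

import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

x = Input(shape=(4,))
reco = Dense(4, name='reco')(x)   # output we want to penalize
aux = Dense(2, name='aux')(x)     # output we want to ignore in the loss
m = Model(x, [reco, aux])
m.compile(optimizer='adam', loss={'reco': 'mse'})  # outputs without an entry add no loss term
m.fit(np.zeros((8, 4)), {'reco': np.zeros((8, 4))}, epochs=1, verbose=0)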
Answered on 2021-02-02 17:28:06
Keras maps each model output to its corresponding target when computing the loss, so as long as my model's only output is the trainstuff reconstruction and the target is trainstuff, I can simply use mse and it works.
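Applied to the code above (a sketch reusing the question's variable names):

vae_model_train.compile(loss='mse', optimizer=Adam(lr))
history = vae_model_train.fit(
    [trainstuff, traincond],   # two inputs: features and conditioning variables
    trainstuff,                # single target: the model's only output is the reconstruction
    batch_size=batch_size,
    epochs=100,
    verbose=1,
    validation_data=([val_jets, valcond], val_jets),
)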
https://stackoverflow.com/questions/66005730