The Keras blog post "Building Autoencoders in Keras" provides the following code to build a simple sequence-to-sequence autoencoder:
from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model
inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(latent_dim)(inputs)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)
sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
I want to build a stacked autoencoder. How should I update this code to make it stacked?
I tried it myself; here is my code:
timesteps = 3
input_dim = 1
inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(4)(inputs)
encoded = RepeatVector(timesteps)(encoded)
encoded = LSTM(2)(encoded)
encoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(4, return_sequences=True)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)
sequence_autoencoder = Model(inputs, decoded)
sequence_autoencoder.compile(loss='mean_squared_error', optimizer='Adam')
sequence_autoencoder.fit(x_train, x_train,
                         epochs=100,
                         batch_size=1,
                         shuffle=True)
I would like to know: is this code correct, or am I missing something?
Posted on 2018-01-17 21:46:19
The code you created tries to follow the same philosophy as the example you found. But you collapse the sequence too early, which is why you end up needing an extra RepeatVector.
To avoid this, use return_sequences=True in every encoder layer except the last one. This keeps the sequence as a sequence, allowing greater interpretive power, because you don't collapse the data too soon.
#add return_sequences=True to all encoder layers except the last
encoded = LSTM(4, return_sequences=True)(inputs)
#no RepeatVector needed here: the sequences are preserved with their full length
#the last encoder layer is the only one that collapses the sequence
encoded = LSTM(2)(encoded)
#this is the only RepeatVector needed, to restore the sequence length
decoded = RepeatVector(timesteps)(encoded)
#the rest is the same; note the first decoder layer must take `decoded` (the
#RepeatVector output), not `encoded`, which is a 2D tensor
decoded = LSTM(4, return_sequences=True)(decoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)
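Putting the pieces together, here is a minimal end-to-end sketch of the corrected stacked autoencoder. The random `x_train` array and the tiny epoch count are placeholders for illustration only; the standalone `encoder` model follows the pattern from the blog post.

```python
import numpy as np
from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model

timesteps = 3
input_dim = 1

inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(4, return_sequences=True)(inputs)   # keeps the full sequence
encoded = LSTM(2)(encoded)                         # collapses to the latent vector
decoded = RepeatVector(timesteps)(encoded)         # restores the sequence length
decoded = LSTM(4, return_sequences=True)(decoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)

sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)  # standalone encoder, as in the blog post

sequence_autoencoder.compile(loss='mean_squared_error', optimizer='adam')

# dummy data just to show the shapes line up
x_train = np.random.rand(10, timesteps, input_dim)
sequence_autoencoder.fit(x_train, x_train,
                         epochs=2, batch_size=1, shuffle=True, verbose=0)

print(sequence_autoencoder.predict(x_train, verbose=0).shape)  # (10, 3, 1)
print(encoder.predict(x_train, verbose=0).shape)               # (10, 2)
```

The reconstruction has the same shape as the input, while the encoder compresses each 3-step sequence down to a 2-dimensional latent vector.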
https://stackoverflow.com/questions/48299833