I am trying to perform row-wise and column-wise max pooling over an attention layer, as described here: http://www.dfki.de/~neumann/ML4QAseminar2016/presentations/Attentive-Pooling-Network.pdf (slide 15).
I am using a text dataset, where a sentence is fed into the CNN. Each word of the sentence has been embedded. The code for it is as follows:
model.add(Embedding(MAX_NB_WORDS, emb_dim, weights=[embedding_matrix],input_length=MAX_SEQUENCE_LENGTH, trainable=False))
model.add(Conv1D(k, FILTER_LENGTH, border_mode="valid", activation="relu"))
The output of the CNN is of shape (None, 256). This serves as the input to the attention layer. Can anyone suggest how to implement row-wise or column-wise max pooling in Keras with TensorFlow as the backend?
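For context, the row-/column-wise pooling on slide 15 operates on a soft-alignment matrix between the two text inputs. A minimal NumPy sketch of that step (all names and sizes here are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical soft-alignment matrix G between a question of length m
# and an answer of length n (slide 15 of the attentive pooling paper).
m, n = 5, 7
rng = np.random.default_rng(0)
G = rng.random((m, n))

# Row-wise max pooling: one score per question word.
row_max = G.max(axis=1)   # shape (m,)
# Column-wise max pooling: one score per answer word.
col_max = G.max(axis=0)   # shape (n,)

# A softmax then turns the pooled scores into attention weights.
def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

q_attention = softmax(row_max)  # shape (m,), sums to 1
a_attention = softmax(col_max)  # shape (n,), sums to 1
```

This only illustrates the pooling directions; the Keras answer below shows how to express the same axis-wise pooling with built-in layers.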
Posted on 2017-10-23 12:55:31
If you have images of shape (batch, width, height, channels) in your model, you can reshape the data to hide one of the spatial dimensions and use 1D pooling:
For width:
model.add(Reshape((width, height*channels)))
model.add(MaxPooling1D())
model.add(Reshape((width/2, height, channels))) #if width was odd, add +1 or -1 (one of them will work)
For height:
#Here, the time distributed will consider that "width" is an extra time dimension,
#and will simply think of it as an extra "batch" dimension
model.add(TimeDistributed(MaxPooling1D()))
A working example, a functional API model with two branches, one for each pooling:
import numpy as np
from keras.layers import *
from keras.models import *
inp = Input((30,50,4))
out1 = Reshape((30,200))(inp)
out1 = MaxPooling1D()(out1)
out1 = Reshape((15,50,4))(out1)
out2 = TimeDistributed(MaxPooling1D())(inp)
model = Model(inp,[out1,out2])
model.summary()
Or use Permute layers, in case you don't want to bother with the numbers:
#swap height and width
model.add(Permute((2,1,3)))
#apply the pooling to width
model.add(TimeDistributed(MaxPooling1D()))
#bring height and width to the correct order
model.add(Permute((2,1,3)))
https://stackoverflow.com/questions/46883334
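To see that the reshape trick really pools the intended axis, the two directions can be checked in plain NumPy (a sketch using the same (30, 50, 4) shapes as the example above; pool_size=2, the MaxPooling1D default, is assumed):

```python
import numpy as np

# Shapes matching the functional API example: (batch, width, height, channels)
b, W, H, C = 2, 30, 50, 4
rng = np.random.default_rng(0)
x = rng.random((b, W, H, C))

# Width pooling via the Reshape trick:
# (b, W, H*C) -> pool pairs along axis 1 (what MaxPooling1D does) -> reshape back
flat = x.reshape(b, W, H * C)
pooled = flat.reshape(b, W // 2, 2, H * C).max(axis=2)
via_reshape = pooled.reshape(b, W // 2, H, C)

# Pooling the width axis directly, for comparison
direct = x.reshape(b, W // 2, 2, H, C).max(axis=2)
print(np.allclose(via_reshape, direct))  # True: both halve the width

# Height pooling, as TimeDistributed(MaxPooling1D()) would do:
# width acts as an extra batch dimension, pairs are taken along height
height_pooled = x.reshape(b, W, H // 2, 2, C).max(axis=3)  # (b, W, H//2, C)
```

The equality holds because reshaping to (b, W, H*C) keeps the width steps contiguous, so 1D pooling over the steps axis is exactly pooling over width.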