Hi, I'm trying to build a mixture-of-experts neural network. I found some code here: dc3c53e90102x9xu.html. My goal is for the gate and the experts to be fed from different data that have the same dimensions.
# imports implied by the snippet (standalone Keras with the TensorFlow backend)
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Dense, Reshape, Lambda
from keras.models import Model

def sliced(x, expert_num):
    return x[:, :, :expert_num]

def reduce(x, axis):
    return K.sum(x, axis=axis, keepdims=True)

def gatExpertLayer(inputGate, inputExpert, expert_num, nb_class):
    # expert_num=30
    # nb_class=10
    input_vector1 = Input(shape=(inputGate.shape[1:]))
    input_vector2 = Input(shape=(inputExpert.shape[1:]))
    # The gate
    gate = Dense(expert_num*nb_class, activation='softmax')(input_vector1)
    gate = Reshape((1, nb_class, expert_num))(gate)
    gate = Lambda(sliced, output_shape=(nb_class, expert_num), arguments={'expert_num': expert_num})(gate)
    # The expert
    expert = Dense(nb_class*expert_num, activation='sigmoid')(input_vector2)
    expert = Reshape((nb_class, expert_num))(expert)
    # The output
    output = tf.multiply(gate, expert)
    # output = keras.layers.merge([gate, expert], mode='mul')
    output = Lambda(reduce, output_shape=(nb_class,), arguments={'axis': 2})(output)
    model = Model(input=[input_vector1, input_vector2], output=output)
    model.compile(loss='mean_squared_error', metrics=['mse'], optimizer='adam')
    return model

However, I get "'NoneType' object has no attribute '_inbound_nodes'". I checked another similar question here: nodes' while trying to add multiple keras Dense layers, but that one was solved with Keras's Lambda function, which can wrap the operation into a layer.
Posted on 2018-10-03 23:04:35
You need to put tf.multiply() inside a Lambda layer so that its output is a Keras tensor (rather than a plain backend tensor):
output = Lambda(lambda x: tf.multiply(x[0], x[1]))([gate, expert])

https://stackoverflow.com/questions/52636328
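For context, here is a minimal sketch of how that fix slots into the gatExpertLayer function from the question (same helpers, shapes, and compile settings as above; inputs=/outputs= are just the non-deprecated keyword names for Model):

    # The output: wrap the element-wise product in a Lambda so Keras registers it as a layer
    output = Lambda(lambda x: tf.multiply(x[0], x[1]))([gate, expert])
    # Sum over the expert axis, as in the original reduce helper
    output = Lambda(reduce, output_shape=(nb_class,), arguments={'axis': 2})(output)
    model = Model(inputs=[input_vector1, input_vector2], outputs=output)
    model.compile(loss='mean_squared_error', metrics=['mse'], optimizer='adam')
    return model

The error goes away because every node between the Input layers and the model output is now a Keras layer; the result of a bare tf.multiply() call has no _inbound_nodes attribute, which is exactly what the traceback complains about.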