While reading the dynamic_rnn API documentation, I had the following question:
Is there any constraint relating the batch size, the sequence length, and the cell's hidden_size?
I am guessing something like:
sequence_length <= hidden_size, or
batch_size * sequence_length <= hidden_size
Is that right? I have browsed many web pages but could not find an answer.
Thanks, everyone.
https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
Example:
import tensorflow as tf

# Example sizes (illustrative assumptions, not required values)
batch_size, max_time, input_dim, hidden_size = 4, 10, 8, 64
input_data = tf.placeholder(tf.float32, [batch_size, max_time, input_dim])

# create a BasicRNNCell
rnn_cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.nn.dynamic_rnn(rnn_cell, input_data,
                                   initial_state=initial_state,
                                   dtype=tf.float32)
# create 2 LSTMCells
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]

# create an RNN cell composed sequentially of a number of RNNCells
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)

# 'data' stands in for the input batch; its shape is illustrative
data = tf.placeholder(tf.float32, [batch_size, max_time, input_dim])

# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is an N-tuple, where N is the number of LSTMCells, containing a
# tf.contrib.rnn.LSTMStateTuple for each cell
outputs, state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
                                   inputs=data,
                                   dtype=tf.float32)
Answered on 2017-12-12 06:22:55
As far as the API is concerned, there is no such relationship. Fix any two of these parameters and the remaining one can still be any non-negative integer (or, in the case of sequence_length, any batch_size-length vector of non-negative integers).
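As a quick sanity check, here is a minimal sketch (the concrete sizes are arbitrary assumptions, and seq_len is a hypothetical placeholder name) showing that batch_size, max_time, and hidden_size can be chosen independently, with sequence_length given as a batch_size-length vector:

import numpy as np
import tensorflow as tf

# Arbitrary, mutually independent sizes (assumed values for illustration)
batch_size, max_time, input_dim, hidden_size = 3, 7, 5, 11

inputs = tf.placeholder(tf.float32, [batch_size, max_time, input_dim])
seq_len = tf.placeholder(tf.int32, [batch_size])  # one length per example

cell = tf.nn.rnn_cell.BasicRNNCell(hidden_size)
outputs, state = tf.nn.dynamic_rnn(cell, inputs,
                                   sequence_length=seq_len,
                                   dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(outputs, feed_dict={
        inputs: np.zeros([batch_size, max_time, input_dim], np.float32),
        seq_len: [7, 2, 0],  # any non-negative values up to max_time
    })
    print(out.shape)  # (3, 7, 11): none of the three sizes constrains another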
If hidden_size is large and you have very little training data, the resulting model is likely to overfit badly, but it will still work. Note that training data is usually presented in mini-batches, so the total amount of training data is not simply the sum of the sequence lengths passed to a single dynamic_rnn call.
There are also hardware limits, since batch_size and the maximum sequence length affect memory usage.
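To make that concrete, here is a rough back-of-the-envelope sketch (a hypothetical helper, counting only the float32 activations in the outputs tensor and ignoring weights, cell state, and gradients):

def rnn_outputs_bytes(batch_size, max_time, hidden_size, bytes_per_elem=4):
    # Approximate size of dynamic_rnn's 'outputs' tensor:
    # [batch_size, max_time, hidden_size] float32 activations only.
    return batch_size * max_time * hidden_size * bytes_per_elem

# e.g. 64 sequences of length 1000 with hidden_size 512:
print(rnn_outputs_bytes(64, 1000, 512) / 2**20, "MiB")  # 125.0 MiB

So doubling batch_size or max_time doubles this footprint, which in practice is usually the real constraint rather than anything in the API.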
https://stackoverflow.com/questions/47752258