In TensorFlow, I have found two implementations of RNNs.
The first implementation is this (lines 124-129). It uses a loop to define each input step of the RNN.
with tf.variable_scope("RNN"):
    for time_step in range(num_steps):
        if time_step > 0: tf.get_variable_scope().reuse_variables()
        (cell_output, state) = cell(inputs[:, time_step, :], state)
        outputs.append(cell_output)
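The loop above unrolls the RNN one time step at a time, calling the same cell (and therefore the same weights, via reuse_variables) at every step. A framework-neutral numpy sketch of what that unrolling computes, assuming a plain tanh RNN cell and hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, num_steps, n_input, n_hidden = 4, 5, 3, 8

inputs = rng.normal(size=(batch_size, num_steps, n_input))
# One shared set of weights, reused at every time step
W_x = rng.normal(size=(n_input, n_hidden))
W_h = rng.normal(size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

state = np.zeros((batch_size, n_hidden))
outputs = []
for time_step in range(num_steps):
    # Same W_x / W_h on every iteration -- the numpy analogue of reuse_variables()
    state = np.tanh(inputs[:, time_step, :] @ W_x + state @ W_h + b)
    outputs.append(state)

print(len(outputs), outputs[-1].shape)  # 5 (4, 8)
```

The point of the explicit loop (versus a single dynamic_rnn call) is that every iteration touches the identical weight tensors; only the state is threaded through time.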
As I understand it, pack_sequence and pack_padded_sequence return a PackedSequence whose data attribute should always be one-dimensional.
However, the code below raises the error: RuntimeError: input must have 2 dimensions, got 1.
import torch
import torch.nn.utils.rnn as rnn_utils
a = torch.Tensor([1, 2, 3])
b = torch.Tensor([4, 5])
c = torch.Tensor([6])
seq = rnn_utils.pack_sequence([a, b, c])
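For what it's worth, the data attribute of a PackedSequence has shape (sum_of_lengths, *features), so it is one-dimensional only when the individual sequences are; RNN layers expect each sequence element to be a feature vector. A minimal sketch of one common fix is to give each sequence an explicit feature dimension (size 1 here, an assumption for illustration):

```python
import torch
import torch.nn.utils.rnn as rnn_utils

# Each sequence must be (length, features); unsqueeze adds a feature dim of 1
a = torch.Tensor([1, 2, 3]).unsqueeze(-1)  # shape (3, 1)
b = torch.Tensor([4, 5]).unsqueeze(-1)     # shape (2, 1)
c = torch.Tensor([6]).unsqueeze(-1)        # shape (1, 1)

seq = rnn_utils.pack_sequence([a, b, c])
print(seq.data.shape)  # torch.Size([6, 1])
```

The packed data is laid out time-step-major across the (length-sorted) batch, which is why seq.data starts with the first element of every sequence.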
Hello, I am using the following function for an LSTM RNN cell.
def LSTM_RNN(_X, _istate, _weights, _biases):
    # Function returns a tensorflow LSTM (RNN) artificial neural network from given parameters.
    # Note, some code of this notebook is inspired from a slightly different
    # RNN architecture used on another dataset:
I have seen two different ways of invoking an LSTM in TensorFlow, and I am confused about the difference between them. In which situations should one be used over the other?
The first approach is to create an LSTM and then call it immediately, as in the code below:
lstm = rnn_cell.BasicLSTMCell(lstm_size)
# Initial state of the LSTM memory.
initial_state = tf.zeros([batch_size, lstm.state_size])
for i in range(num_steps):
    # The value of state is updated after processing each batch of words.
I am working with the TensorFlow LSTM example on the MNIST dataset. What I don't understand is why logistic regression is used on the last layer. Isn't the last output of the LSTM network a better estimator than the output of an earlier "time step"? How can I use only the last output of the LSTM network for classification?
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
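On the question itself: the example does classify from the last time step only; the "logistic regression" is simply a final linear layer plus softmax applied to that last LSTM output. A minimal numpy sketch of that read-out step, with hypothetical shapes (random values stand in for the LSTM outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, num_steps, n_hidden, n_classes = 4, 28, 128, 10

# Stand-in for the LSTM outputs at every time step: (num_steps, batch, hidden)
outputs = rng.normal(size=(num_steps, batch_size, n_hidden))

# Classify from the last time step only: linear layer + softmax
W = rng.normal(size=(n_hidden, n_classes)) * 0.01
b = np.zeros(n_classes)
logits = outputs[-1] @ W + b  # (batch_size, n_classes)
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

print(probs.shape)  # (4, 10)
```

So using "only the last output" and "logistic regression on the last layer" are the same thing here: outputs[-1] carries the summary of the whole sequence, and the softmax layer maps it to class probabilities.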
I am trying to work with RNNs using TensorFlow, and I am using the following function in it:
def RNN(x, weights, biases):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, timesteps, n_input)
    # Required shape: 'timesteps' tensors list of shape (batch_size, n_input)
    # Unstack to get a list of 'timesteps' tensors of shape (batch_size, n_input)
def biLSTM(data, n_steps):
    n_hidden = 24
    data = tf.transpose(data, [1, 0, 2])
    # Reshape to (n_steps*batch_size, n_input)
    data = tf.reshape(data, [-1, 300])
    # Split to get a list of 'n_steps' tensors of shape (batch_size, n_input)
    # Note: tf.split(axis, num, value) is the pre-1.0 signature;
    # TF >= 1.0 uses tf.split(value, num_or_size_splits, axis).
    data = tf.split(0, n_steps, data)
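The three reshaping lines above turn a (batch_size, n_steps, n_input) batch into a list of n_steps tensors of shape (batch_size, n_input), which is what the old list-based static rnn API expected. A numpy sketch of the same transformation, with hypothetical sizes:

```python
import numpy as np

batch_size, n_steps, n_input = 4, 7, 300
data = np.arange(batch_size * n_steps * n_input, dtype=float)
data = data.reshape(batch_size, n_steps, n_input)

# (batch_size, n_steps, n_input) -> (n_steps, batch_size, n_input)
steps = data.transpose(1, 0, 2)
# -> (n_steps*batch_size, n_input), rows grouped by time step
steps = steps.reshape(-1, n_input)
# -> list of n_steps arrays, each (batch_size, n_input)
steps = np.split(steps, n_steps, axis=0)

print(len(steps), steps[0].shape)  # 7 (4, 300)
```

After this pipeline, steps[i] holds the input at time step i for the whole batch, i.e. it equals data[:, i, :].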
Below is my TensorFlow code, in which I define a BiLSTM. For a particular task I need to traverse my graph more than once. Although I have already set reuse=True on the variable scope, it raises the error mentioned in the code below.
for run in range(0, 2):
    with tf.variable_scope("LSTM", reuse=True) as scope:
        def LSTM(input_data):
            LSTM_cell_fw = tf.contrib.rnn.BasicLSTMCell(num_units=hidden_size)
            LSTM_cel