I am trying to create embeddings for 1,000,000 words in TensorFlow. Each word will have a 256-dimensional float32 vector representing it. The problem is that I keep running out of memory. This doesn't make sense to me, because my GTX 1080 has 8 GB of memory. The embedding should only take 1e6 * 256 * 4 = 1 GB of memory. I also have another output matrix of the same size. Beyond that, there are a few other tensors that should be comparatively small. So I only see roughly 2-3 GB of memory needed to store the model, yet it fails when I call sess.run(tf.initialize_all_variables()). Where is all my memory going? Do you have any suggestions for how I can get around this?
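As a sanity check on that estimate (assuming 4 bytes per float32 value):

# Back-of-the-envelope size of the embedding matrix alone (float32 = 4 bytes)
vocab_size = 10 ** 6
vec_size = 256
print(vocab_size * vec_size * 4 / 1e9)  # ~1.02 GB; the output matrix is the same size again

My code: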
import tensorflow as tf
import nltk
import numpy as np
import os
import multiprocessing
import itertools
import pickle
from unidecode import unidecode
BATCH_SIZE = 32
TIME_STEPS = 64
WORD_VEC_SIZE = 256
words, training_data = pickle.load(open('vocab.pickle', 'rb'))
word2index = {w:i for i, w in enumerate(words)}
index2word = {i:w for i, w in enumerate(words)}
input_tensor = tf.placeholder(tf.int32, (BATCH_SIZE, TIME_STEPS + 1), 'input_tensor')
embedding = tf.Variable(tf.random_uniform((len(words), WORD_VEC_SIZE), -1, 1), name = 'embedding')
rnn = tf.nn.rnn_cell.BasicRNNCell(WORD_VEC_SIZE)
state = tf.zeros((BATCH_SIZE, rnn.state_size))
input_vectors = tf.nn.embedding_lookup([embedding], input_tensor[:, :TIME_STEPS])
cost = 0
with tf.variable_scope('rnn') as scope:
    W_out = tf.get_variable('W_out', (WORD_VEC_SIZE, len(words)), initializer = tf.truncated_normal_initializer(0.0, 1 / np.sqrt(WORD_VEC_SIZE)))
    b_out = tf.get_variable('b_out', (len(words), ), initializer = tf.truncated_normal_initializer(0.0, 0.01))

    for t in range(TIME_STEPS):
        y, state = rnn(tf.reshape(input_vectors[:, t, :], (-1, WORD_VEC_SIZE)), state)
        cost += tf.reduce_mean(tf.nn.sampled_softmax_loss(W_out, b_out, y, tf.reshape(input_tensor[:, t + 1], (-1, 1)), 1000, len(words)))
        scope.reuse_variables()
train_step = tf.train.AdamOptimizer(1e-4).minimize(cost)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
saver = tf.train.Saver()

Posted on 2016-07-14 22:46:54
What I hadn't accounted for was the AdamOptimizer. I forgot that it needs to store various parameters for every weight in my model. When I switched to the GradientDescentOptimizer, everything now fits on my GPU.
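For reference, a minimal sketch of the swap, reusing the 1e-4 learning rate only because that is the value already in my code (Adam keeps two extra float32 slot variables, the first- and second-moment estimates, for every trainable parameter, so the 1 GB embedding and the 1 GB output matrix each roughly triple in memory; plain gradient descent keeps no per-parameter state):

# Replaces the AdamOptimizer line above; gradient descent allocates no extra
# per-parameter slots, so only the raw variables need to fit on the GPU.
train_step = tf.train.GradientDescentOptimizer(1e-4).minimize(cost)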
https://stackoverflow.com/questions/38377189