I've built a toy model for image classification. The program is structured roughly like the cifar10 tutorial. Training starts off fine, but eventually the program crashes. I've finalized the graph in case ops were somehow being added to it, and in tensorboard it looks great, but without fail it eventually freezes and forces a hard restart (or a long wait for an eventual restart). Exiting makes it look like a GPU memory problem, but the model is small and should fit. If I allocate the full GPU memory (which again allocates 4 GB), it still crashes.
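For reference, a minimal sketch (TF 1.x, matching the style of the code below) of what finalizing the default graph catches: once frozen, any op accidentally created inside the training loop raises immediately instead of silently growing the graph:

import tensorflow as tf

with tf.Graph().as_default() as g:
    x = tf.constant(1.0)
    g.finalize()  # freeze the graph: creating any new op now raises
    try:
        y = x + 1.0  # would silently add a new op every step in an unfrozen graph
    except RuntimeError as e:
        print('caught accidental graph growth: {}'.format(e))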
The data are 256x256x3 images plus labels, stored in a TFRecords file.
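For context, a plausible sketch of how such a train.tfrecords file could have been written (the write_tfrecords helper and the in-memory arrays are assumptions, not the actual preprocessing code), using the same 'label'/'image' feature names parsed below:

import tensorflow as tf

def write_tfrecords(images, labels, path='data/train.tfrecords'):
    # hypothetical helper: images is a uint8 numpy array [N, 256, 256, 3],
    # labels is an int array [N]
    with tf.python_io.TFRecordWriter(path) as writer:
        for image, label in zip(images, labels):
            example = tf.train.Example(features=tf.train.Features(feature={
                'label': tf.train.Feature(
                    int64_list=tf.train.Int64List(value=[int(label)])),
                'image': tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[image.tobytes()])),
            }))
            writer.write(example.SerializeToString())

The training function looks like this: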
import os
import time

import tensorflow as tf

def train():
    with tf.Graph().as_default():
        global_step = tf.contrib.framework.get_or_create_global_step()

        # input pipeline and model
        train_images_batch, train_labels_batch = distorted_inputs(batch_size=BATCH_SIZE)
        train_logits = inference(train_images_batch)
        train_batch_loss = loss(train_logits, train_labels_batch)
        train_op = training(train_batch_loss, global_step, 0.1)

        merged = tf.summary.merge_all()
        saver = tf.train.Saver(tf.global_variables())  # created but unused in this loop

        # cap GPU memory so other processes keep some headroom
        gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.75)
        sess_config = tf.ConfigProto(gpu_options=gpu_options)
        sess = tf.Session(config=sess_config)

        train_summary_writer = tf.summary.FileWriter(
            os.path.join(ROOT, 'logs', 'train'), sess.graph)

        init = tf.global_variables_initializer()
        sess.run(init)

        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)

        # freeze the graph so any accidental op creation in the loop raises;
        # note: the original called tf.Graph().finalize(), which freezes a
        # brand-new empty graph and leaves the default graph unfrozen
        tf.get_default_graph().finalize()

        for i in range(5540):
            start_time = time.time()
            summary, _, batch_loss = sess.run([merged, train_op, train_batch_loss])
            duration = time.time() - start_time
            train_summary_writer.add_summary(summary, i)
            if i % 10 == 0:
                msg = 'batch: {} loss: {:.6f} time: {:.8f} sec/batch'.format(
                    i, batch_loss, duration)
                print(msg)

        coord.request_stop()
        coord.join(threads)
        sess.close()

The loss op and train op are cross entropy and the Adam optimizer, respectively:
def loss(logits, labels):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits, name='cross_entropy_per_example')
    xentropy_mean = tf.reduce_mean(xentropy, name='cross_entropy')
    tf.add_to_collection('losses', xentropy_mean)
    return xentropy_mean

def training(loss, global_step, learning_rate):
    optimizer = tf.train.AdamOptimizer(learning_rate)
    # minimize() both applies the gradients and increments global_step
    train_op = optimizer.minimize(loss, global_step=global_step)
    return train_op

and the batches are generated with:
def distorted_inputs(batch_size):
    filename_queue = tf.train.string_input_producer(
        ['data/train.tfrecords'], num_epochs=None)
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        features={'label': tf.FixedLenFeature([], tf.int64),
                  'image': tf.FixedLenFeature([], tf.string)})
    label = features['label']
    label = tf.cast(label, tf.int32)

    # decode the raw bytes and rescale to [-0.5, 0.5]
    image = tf.decode_raw(features['image'], tf.uint8)
    image = (tf.cast(image, tf.float32) / 255) - 0.5
    image = tf.reshape(image, shape=[256, 256, 3])

    # data augmentation
    image = tf.image.random_flip_up_down(image)
    image = tf.image.random_flip_left_right(image)

    print('filling the queue with {} images '
          'before starting to train'.format(MIN_QUEUE_EXAMPLES))
    return _generate_batch(image, label, MIN_QUEUE_EXAMPLES, batch_size)

and:
def _generate_batch(image, label,
                    min_queue_examples=MIN_QUEUE_EXAMPLES,
                    batch_size=BATCH_SIZE):
    # shuffle_batch keeps at least min_queue_examples elements queued so
    # that batches are drawn from a well-mixed pool
    images_batch, labels_batch = tf.train.shuffle_batch(
        [image, label], batch_size=batch_size,
        num_threads=12, capacity=min_queue_examples + 3 * batch_size,
        min_after_dequeue=min_queue_examples)
    tf.summary.image('images', images_batch)
    return images_batch, labels_batch

What am I missing?
Posted on 2017-03-16 08:07:05
So I solved this. Here is the solution in case it's useful to anyone else. TL;DR: it was a hardware problem.
Specifically, it was a PCIe bus error, the same error as in the top-voted answer here. Possibly it was caused by message-signalled interrupts being incompatible with the PLX switches, as suggested here. Also in that thread, it was solved by setting the kernel parameter pci=nommconf to disable the MSIs.
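For anyone applying the same fix: on a GRUB-based distro the kernel parameter can be added roughly like this (the file path and surrounding flags are assumptions; adapt to your bootloader):

# /etc/default/grub  (assumption: Debian/Ubuntu-style GRUB setup)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=nommconf"

# then regenerate the config and reboot:
#   sudo update-grub
#   sudo reboot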
Between Tensorflow, Torch, and Theano, tf was the only deep-learning framework that triggered this issue. Why, I'm not sure.
https://stackoverflow.com/questions/42767187