
TensorFlow and RandomShuffleQueue "insufficient elements (requested 64, current size 0)"

Stack Overflow user
Asked on 2017-12-13 23:13:09
Answers: 0 · Views: 98 · Followers: 0 · Votes: 2

I am currently working with TensorFlow, and while the tutorials are easy to follow, the real work starts once you try to feed in your own data.

I used a very basic writer for a dataset of animals and backgrounds.

I created 3 tfrecord files (train/val/test). Then I try to read them back and train a simple model (Alexnet here). I use FLAGS.num_iter to make sure I do not run past the number of iterations.

This code gets me a nice RandomShuffleQueue "insufficient elements (requested 64, current size 0)" error.

I searched the web but could not find an answer to my questions, which are: How can we solve this problem? How can we check whether our tfrecord contains any errors? Can we write a condition to make sure we have enough elements? If you have any further questions about my code, please don't hesitate to ask!

Best regards,

Code language: python
import tensorflow as tf
import os.path
from model import Model
from alexnet import Alexnet


FLAGS = tf.app.flags.FLAGS
NUM_LABELS = 2

IMAGE_WIDTH = 64
IMAGE_HEIGHT = 64
NUMBER_OF_CHANNELS = 3
#SOURCE_DIR = './data/'
#TRAINING_IMAGES_DIR = SOURCE_DIR + 'train/'
#LIST_FILE_NAME = 'list.txt'
BATCH_SIZE = 2
#TRAINING_SET_SIZE = 81112
TRAIN_FILE = '/home/sebv/SebV/datas/tfRecording/train.tfrecords'
VAL_FILE = '/home/sebv/SebV/datas/tfRecording/val.tfrecor'
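# Possible typo: the extension here ('.tfrecor') does not match TRAIN_FILE's
# '.tfrecords'. If no file with this exact name exists on disk, the validation
# reader cannot produce any records, the downstream queue closes empty, and
# dequeuing can fail with an "insufficient elements (current size 0)" error.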

def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        # Defaults are not specified since both keys are required.
        features={
          'image/encoded': tf.FixedLenFeature([], tf.string),
          'image/format': tf.FixedLenFeature([], tf.string),
          'image/class/label': tf.FixedLenFeature([], tf.int64),
          'image/height': tf.FixedLenFeature([], tf.int64),
          'image/width': tf.FixedLenFeature([], tf.int64),
        })

    # Convert from a scalar string tensor (whose single string has
    # length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
    # [mnist.IMAGE_PIXELS].
    image = tf.image.decode_png(features['image/encoded'], 3, tf.uint8)

    # OPTIONAL: Could reshape into a 28x28 image and apply distortions
    # here.  Since we are not applying any distortions in this
    # example, and the next step expects the image to be flattened
    # into a vector, we don't bother.

    # Convert from [0, 255] -> [-0.5, 0.5] floats.
    image = tf.cast(image, tf.float32)# * (1. / 255) - 0.5
    image = tf.reshape(image, [IMAGE_WIDTH,IMAGE_HEIGHT,NUMBER_OF_CHANNELS])
    # Convert label from a scalar uint8 tensor to an int32 scalar.
    label = tf.cast(features['image/class/label'], tf.int64)

    return image, label


def inputs(train, filen, batch_size, num_epochs):
    """Reads input data num_epochs times.
    Args:
    train: Selects between the training (True) and validation (False) data.
    batch_size: Number of examples per returned batch.
    num_epochs: Number of times to read the input data, or 0/None to
    train forever.
    Returns:
    A tuple (images, labels), where:
    * images is a float tensor with shape [batch_size, mnist.IMAGE_PIXELS]
    in the range [-0.5, 0.5].
    * labels is an int32 tensor with shape [batch_size] with the true label,
    a number in the range [0, mnist.NUM_CLASSES).
    Note that an tf.train.QueueRunner is added to the graph, which
    must be run using e.g. tf.train.start_queue_runners().
    """
    if not num_epochs: num_epochs = None
    filename = filen
    filename_queue = tf.train.string_input_producer([filename], num_epochs=num_epochs)

    # Even when reading in multiple threads, share the filename
    # queue.
    image, label = read_and_decode(filename_queue)
    # Shuffle the examples and collect them into batch_size batches.
    # (Internally uses a RandomShuffleQueue.)
    # We run this in two threads to avoid being a bottleneck.
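    # Note: with min_after_dequeue=20000 the RandomShuffleQueue will not release
    # its first batch until at least 20000 + batch_size examples have been
    # enqueued, so the input file must supply that many examples (across all
    # epochs) before training can start.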
    images, sparse_labels = tf.train.shuffle_batch([image, label], batch_size=batch_size, num_threads=2,capacity=20000 + 3 * batch_size,min_after_dequeue=20000)
    sparse_labels = tf.reshape(sparse_labels, [batch_size])
    return images, sparse_labels


def train():
    model = Alexnet()
    with tf.Graph().as_default():

        x = tf.placeholder(tf.float32, [None, IMAGE_WIDTH,IMAGE_HEIGHT,NUMBER_OF_CHANNELS], name='x-input')
        y = tf.placeholder(tf.float32, [None], name='y-input')

        images, labels = inputs(train=True, filen=TRAIN_FILE, batch_size=FLAGS.batch_size,num_epochs=FLAGS.num_iter)

        images_val, labels_val = inputs(train=False, filen=VAL_FILE, batch_size=FLAGS.batch_size,num_epochs=1)

        keep_prob = tf.placeholder(tf.float32, name='dropout_prob')
        global_step = tf.contrib.framework.get_or_create_global_step()

        logits = model.inference(images, keep_prob=keep_prob)
        loss = model.loss(logits=logits, labels=labels)

        accuracy = model.accuracy(logits, labels)
        summary_op = tf.summary.merge_all()
        train_op = model.train(loss, global_step=global_step)

        saver = tf.train.Saver()

        with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
            writer = tf.summary.FileWriter(FLAGS.summary_dir, sess.graph)
            sess.run(tf.global_variables_initializer())
            sess.run(tf.local_variables_initializer())
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess=sess, coord=coord)
            for i in xrange(FLAGS.num_iter):
                _, cur_loss, summary = sess.run([train_op, loss, summary_op],
                                                feed_dict={keep_prob: 0.5})
                writer.add_summary(summary, i)

                if i % 10 == 0:

                    batch_x = sess.run(images_val)
                    batch_y = sess.run(labels_val)
                    validation_accuracy = accuracy.eval(feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
                    print('Iter {} Accuracy: {}'.format(i, validation_accuracy))
                    saver.save(sess, FLAGS.checkpoint_file_path, global_step)
                if i == FLAGS.num_iter:
                    coord.request_stop()
                    coord.join(threads)



def main(argv=None):
    train()


if __name__ == '__main__':
    tf.app.flags.DEFINE_integer('batch_size', 64, 'size of training batches')
    tf.app.flags.DEFINE_integer('num_iter', 4001, 'number of training iterations') #10000
    tf.app.flags.DEFINE_string('checkpoint_file_path', 'checkpoints/model.ckpt-10000', 'path to checkpoint file')
    tf.app.flags.DEFINE_string('train_data', 'data', 'path to train and test data')
    tf.app.flags.DEFINE_string('summary_dir', 'graphs', 'path to directory for storing summaries')

    tf.app.run()
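Regarding the question of how to check whether the tfrecord contains any errors: one way, under the TF 1.x API used above, is to iterate over the file eagerly with tf.python_io.tf_record_iterator and parse each record outside the queue pipeline. The sketch below is only an illustration, not part of the original post; the path and the batch_size / min_after_dequeue values are copied from the code above.

Code language: python
import tensorflow as tf

# Values taken from the posted code.
TRAIN_FILE = '/home/sebv/SebV/datas/tfRecording/train.tfrecords'
BATCH_SIZE = 64            # FLAGS.batch_size
MIN_AFTER_DEQUEUE = 20000  # as passed to tf.train.shuffle_batch

count = 0
for serialized in tf.python_io.tf_record_iterator(TRAIN_FILE):
    example = tf.train.Example()
    example.ParseFromString(serialized)  # raises if the record is corrupt
    feats = example.features.feature
    assert feats['image/encoded'].bytes_list.value, 'empty image/encoded'
    assert feats['image/class/label'].int64_list.value, 'missing label'
    count += 1

print('readable records: %d' % count)

# shuffle_batch only starts producing batches once min_after_dequeue + batch_size
# examples are in the queue, so a small file can keep it waiting (or, once the
# epoch limit closes the queue, trigger the "insufficient elements" error).
if count == 0:
    print('no records could be read -- the tfrecord itself is the problem')
elif count < MIN_AFTER_DEQUEUE + BATCH_SIZE:
    print('fewer examples than min_after_dequeue + batch_size per epoch')

If the count comes back small, lowering min_after_dequeue (and capacity) in the shuffle_batch call to something on the order of the dataset size is one option to try.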

Original content provided by Stack Overflow.
Source: https://stackoverflow.com/questions/47796396
