
tf46: Revisiting the convenience of tf.estimator

Author: MachineLP
Published: 2019-05-26 17:31:30

Copyright notice: this is an original post by the author and may not be reproduced without permission. For questions, add WeChat: lp9628 (mention CSDN). https://cloud.tencent.com/developer/article/1435840

MachineLP's GitHub (feel free to follow): https://github.com/MachineLP

Revisiting the convenience of tf.estimator:

Let's take a look at how to use TensorFlow's high-level Estimator API.
It looks fancy at first, but in practice you just follow a fixed template.
Looking back at it, the style feels a lot like Keras.
It also pairs well with this post: https://blog.csdn.net/u014365862/article/details/79399775.
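Before the full training script, here is a minimal sketch of that fixed template, written only as an illustration and not taken from the original code (toy_model_fn, the random data, and /tmp/toy_model are made-up names): a model_fn returns an EstimatorSpec for each mode, an Estimator wraps it, and numpy_input_fn feeds it.

Code language: python

# Minimal tf.estimator sketch (TF 1.x) -- illustration only; toy_model_fn and
# the random data below are made up for this example.
import numpy as np
import tensorflow as tf

def toy_model_fn(features, labels, mode):
    # A single dense layer standing in for a real network.
    logits = tf.layers.dense(inputs=features["x"], units=2)
    predictions = {"probabilities": tf.nn.softmax(logits)}
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
    # EVAL mode: just report the loss.
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss)

estimator = tf.estimator.Estimator(model_fn=toy_model_fn, model_dir="/tmp/toy_model")
x = np.random.rand(100, 4).astype(np.float32)
y = (x.sum(axis=1) > 2.0).astype(np.int32)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": x}, y=y, batch_size=16, num_epochs=None, shuffle=True)
estimator.train(input_fn=train_input_fn, steps=100)

The full VGG16 script below follows exactly this structure, just with a real network, PASCAL VOC data, and a periodic evaluation loop.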
Code language: python
# coding: utf-8
''' Revisiting the convenience of tf.estimator.
    A look at how to use TensorFlow's high-level Estimator API.
    It looks fancy at first, but in practice you just follow a fixed template.
    Looking back at it, the style feels a lot like Keras.
    Read alongside: https://blog.csdn.net/u014365862/article/details/79399775
'''

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# Imports
import sys
import numpy as np
import tensorflow as tf
import argparse
import os.path as osp
from PIL import Image
from functools import partial
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import sklearn.metrics


# Used for model evaluation.
def compute_map(gt, pred, valid, average=None):
    """
    Compute per-class average precision (AP) for multi-label classification.
    Only samples with valid[:, cid] > 0 are counted for class cid.
    """
    nclasses = gt.shape[1]
    all_ap = []
    for cid in range(nclasses):
        gt_cls = gt[:, cid][valid[:, cid] > 0].astype('float32')
        pred_cls = pred[:, cid][valid[:, cid] > 0].astype('float32')
        # As per PhilK. code:
        # https://github.com/philkr/voc-classification/blob/master/src/train_cls.py
        pred_cls -= 1e-5 * gt_cls
        ap = sklearn.metrics.average_precision_score(
            gt_cls, pred_cls, average=average)
        all_ap.append(ap)
    return all_ap


plt.ioff()
# Log at INFO level so that training progress is printed.
tf.logging.set_verbosity(tf.logging.INFO)

# The 20 PASCAL VOC class names.
CLASS_NAMES = [
    'aeroplane',
    'bicycle',
    'bird',
    'boat',
    'bottle',
    'bus',
    'car',
    'cat',
    'chair',
    'cow',
    'diningtable',
    'dog',
    'horse',
    'motorbike',
    'person',
    'pottedplant',
    'sheep',
    'sofa',
    'train',
    'tvmonitor',
]


# Define the model.
# With tf.estimator, the model_fn must follow a fixed signature:
''' For reference:
    my_model(
      features, # input feature data
      labels,   # input label data
      mode,     # TRAIN, EVAL or PREDICT
      params    # hyperparameters passed down from the Estimator
    )
    TRAIN:
    must return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op),
        where mode is TRAIN/EVAL/PREDICT, loss is the loss tensor, and train_op runs the optimizer.
    PREDICT:
    must return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions),
        where predictions holds the predicted values.
'''

def cnn_model_fn(features, labels, mode, num_classes=20):

    # Input Layer
    input_layer = tf.reshape(features["x"], [-1, 256, 256, 3])
    valid = features["w"]  # per-class validity weights; fed in with the data but not used in the loss below

    # Do data augmentation here !
    if mode == tf.estimator.ModeKeys.PREDICT:
        final_input = tf.map_fn(lambda im_tf: tf.image.central_crop(im_tf, float(0.875)), input_layer)
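        # central_crop(0.875) on a 256x256 input gives 224x224, matching the
        # 224x224 random crops used during training.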

    if mode == tf.estimator.ModeKeys.TRAIN:
        final_input = tf.map_fn(lambda im_tf: tf.image.random_flip_left_right(im_tf), input_layer)
        final_input = tf.map_fn(lambda im_tf: tf.random_crop(im_tf, size=[224,224,3]), final_input)
        tf.summary.image(name='train_images', tensor=final_input, max_outputs=10)

    # VGG16 architecture

    # Block - 1  
    conv1 = tf.layers.conv2d(inputs=final_input, filters=64, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    conv2 = tf.layers.conv2d(inputs=conv1, filters=64, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    pool1 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)

    # Block - 2
    conv3 = tf.layers.conv2d(inputs=pool1, filters=128, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    conv4 = tf.layers.conv2d(inputs=conv3, filters=128, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    pool2 = tf.layers.max_pooling2d(inputs=conv4, pool_size=[2, 2], strides=2)

    # Block - 3
    conv5 = tf.layers.conv2d(inputs=pool2, filters=256, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    conv6 = tf.layers.conv2d(inputs=conv5, filters=256, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    conv7 = tf.layers.conv2d(inputs=conv6, filters=256, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    pool3 = tf.layers.max_pooling2d(inputs=conv7, pool_size=[2, 2], strides=2)

    # Block - 4
    conv8 = tf.layers.conv2d(inputs=pool3, filters=512, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    conv9 = tf.layers.conv2d(inputs=conv8, filters=512, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    conv10 = tf.layers.conv2d(inputs=conv9, filters=512, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    pool4 = tf.layers.max_pooling2d(inputs=conv10, pool_size=[2, 2], strides=2)

    # Block - 5 
    conv11 = tf.layers.conv2d(inputs=pool4, filters=512, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    conv12 = tf.layers.conv2d(inputs=conv11, filters=512, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    conv13 = tf.layers.conv2d(inputs=conv12, filters=512, kernel_size=[3,3], padding="same", activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    pool5 = tf.layers.max_pooling2d(inputs=conv13, pool_size=[2, 2], strides=2)

    pool5_flat = tf.reshape(pool5, [-1, 7*7*512])
    # FC Layers
    dense1 = tf.layers.dense(inputs=pool5_flat, units=4096, activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    dropout1 = tf.layers.dropout(inputs=dense1, rate=0.5, training=mode == tf.estimator.ModeKeys.TRAIN)
    dense2 = tf.layers.dense(inputs=dropout1, units=4096, activation=tf.nn.relu, use_bias=True, trainable=True, bias_initializer=tf.zeros_initializer(), kernel_initializer=tf.contrib.layers.xavier_initializer())
    dropout2 = tf.layers.dropout(inputs=dense2, rate=0.5, training=mode == tf.estimator.ModeKeys.TRAIN)

    logits = tf.layers.dense(inputs=dropout2, units=num_classes)
    predictions = {"probabilities": tf.sigmoid(logits, name="sigmoid_tensor")}


    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    loss = tf.identity(tf.losses.sigmoid_cross_entropy(labels, logits=logits), name='loss')

    # Configure the Training Op (for TRAIN mode)
    if mode == tf.estimator.ModeKeys.TRAIN:
        tf.summary.scalar('Train Loss', loss)
        lr = tf.train.exponential_decay(learning_rate=0.001,global_step=tf.train.get_global_step(),decay_steps=10000,decay_rate=0.5, staircase=True)
        tf.summary.scalar('Learning rate', lr)
        optimizer = tf.train.MomentumOptimizer(
                    learning_rate=lr,
                    momentum=0.9)

        gradients_and_vars = optimizer.compute_gradients(loss, tf.trainable_variables())
        for g, v in gradients_and_vars:
            if g is not None:
                tf.summary.histogram('gradient histogram'+v.name, g)

        train_op = optimizer.minimize(loss=loss,global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
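    # Note: ModeKeys.EVAL is not handled here; main() below only uses train() and predict().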


# Load the dataset, reading everything into memory.
# This is not the recommended approach; a TF queue/dataset input pipeline is better, see: https://github.com/MachineLP/train_arch/blob/master/train_cnn_v1/lib/data_load/data_loader.py
def load_pascal(data_dir, split='train'):

    # Get the exact dirs for images and annotations
    images_dir = data_dir + '/VOCdevkit/VOC2007/JPEGImages/' 
    annt_dir = data_dir + '/VOCdevkit/VOC2007/ImageSets/Main/' 

    # Get the list of all classes, this is a Global variable
    global CLASS_NAMES

    # For the given split, get the dimensionality of output matrices
    # Do this by opening annotation file for any class, say aeroplane
    num_images = len(open(annt_dir+CLASS_NAMES[0]+'_'+split+'.txt','r').read().splitlines())
    num_classes = len(CLASS_NAMES)

    # Initialize images, labels, weights matrices
    images = np.zeros((num_images, 256, 256, 3), dtype='float32')
    labels = np.zeros((num_images, num_classes), dtype='int')
    weights = np.zeros((num_images, num_classes), dtype='int')

    # First load all the images for given split and then check for labels. This is faster.
    list_all_images = [i.split(' ')[0]+'.jpg' for i in open(annt_dir+CLASS_NAMES[0]+'_'+split+'.txt','r').read().splitlines()] # all files are in jpg format.
    for given_im in range(0, len(list_all_images)):
        print('Loading Image: ' + str(given_im))
        im_file_path = images_dir + list_all_images[given_im]
        im = np.asarray(Image.open(im_file_path).resize((256,256)).convert('RGB'), dtype='float32') # Just in case some image in grayscale
        images[given_im,:,:,:] = im

    # Now iterate through the annotation files and generate labels and weights
    for given_cls in range(0, len(CLASS_NAMES)):
        cls_name = CLASS_NAMES[given_cls]
        ann_file_name = annt_dir + cls_name + '_' + split + '.txt'
        cls_data = [i for i in open(ann_file_name, 'r').read().splitlines()]
        cls_ann = np.zeros((len(cls_data), 1))
        for  i in range(0, len(cls_data)):
            list_i = cls_data[i].split(' ')
            if len(list_i) == 2:
                cls_ann[i] = int(list_i[1])
            elif len(list_i) == 3:
                cls_ann[i] = int(list_i[2])

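        # PASCAL VOC convention in <class>_<split>.txt: 1 = positive, -1 = negative, 0 = difficult.
        # Difficult samples keep label 1 but get weight 0, so compute_map() ignores them.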
        for given_im in range(0, len(list_all_images)):
            if cls_ann[given_im] == 1:
                labels[given_im][given_cls] = 1
                weights[given_im][given_cls] = 1
            elif cls_ann[given_im] == 0:
                labels[given_im][given_cls] = 1
                weights[given_im][given_cls] = 0
            elif cls_ann[given_im] == -1:
                labels[given_im][given_cls] = 0
                weights[given_im][given_cls] = 1
            else:
                print('Something wrong in annotations: ' + str(given_im) + ' | ' + str(cls_name))
                sys.exit(1)
    return images, labels, weights

def parse_args():
    parser = argparse.ArgumentParser(
        description='Train a classifier in tensorflow!')
    parser.add_argument(
        'data_dir', type=str, default='data/VOC2007',
        help='Path to PASCAL data storage')
    if len(sys.argv) == 1:
        parser.print_help()
        sys.exit(1)
    args = parser.parse_args()
    return args


def _get_el(arr, i):
    try:
        return arr[i]
    except IndexError:
        return arr


def main():

    args = parse_args()
    BATCH_SIZE = 10

    # A 40x1 array to store the mAP measured after each 1000-step training chunk.
    map_list = np.zeros((40,1), dtype='float')

    # Load the training and test data.
    train_data, train_labels, train_weights = load_pascal(args.data_dir, split='trainval')
    eval_data, eval_labels, eval_weights = load_pascal(args.data_dir, split='test')

    # Define the estimator and its parameters.
    pascal_classifier = tf.estimator.Estimator(
        model_fn=partial(cnn_model_fn,
        num_classes=train_labels.shape[1]),
        model_dir="./vgg_models/")
    
    # Set up logging to track training. A tf.train.LoggingTensorHook (a kind of tf.train.SessionRunHook) logs the value of the CNN's loss.
    tensors_to_log = {"loss": "loss"}
    logging_hook = tf.train.LoggingTensorHook(tensors=tensors_to_log, at_end=True)
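    # The name "loss" resolves to the tensor created via tf.identity(..., name='loss') in cnn_model_fn;
    # at_end=True logs it once when a train() call finishes.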

    # Train the model
    # Feed the training data through a TF input queue.
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": train_data, "w": train_weights},
        y=train_labels,
        batch_size=BATCH_SIZE,
        num_epochs=None,
        shuffle=True)

    # Evaluate the model and print results
    # Feed the test data through a TF input queue.
    eval_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": eval_data, "w": eval_weights},
        y=eval_labels,
        num_epochs=1,
        shuffle=False)
    
    # Alternate between training and evaluation.
    # Do not evaluate too often: the extra I/O keeps GPU utilization low.
    for given_iter in range(0,40):
        # Start training; this also follows the fixed estimator format.
        pascal_classifier.train(input_fn=train_input_fn, steps=1000, hooks=[logging_hook])
        pred = list(pascal_classifier.predict(input_fn=eval_input_fn))
        pred = np.stack([p['probabilities'] for p in pred])
        AP = compute_map(eval_labels, pred, eval_weights, average=None)
        print('\nmAP : ' + str(np.mean(AP)) + '\n')
        map_list[given_iter] = np.mean(AP) 
    xx = range(1,41)
    plt.plot(xx, map_list, 'r--')
    plt.xlabel('i*1000 iterations')
    plt.ylabel('mAP')
    plt.savefig('pascal_vgg.png') 
    
if __name__ == "__main__":
    main()

'''
    See also:
    https://github.com/MachineLP/Tensorflow-/tree/master/Tensorflow-estimator-classification
    https://www.jianshu.com/p/5495f87107e7
'''

The data-loading part above can also be replaced with this form (a generator feeding tf.data, wrapped as the estimator's input_fn):

Code language: python
from pathlib import Path
import logging
import sys

import tensorflow as tf

from model import model_fn  # model_fn lives in a separate model.py (not shown here)


def train_generator_fn():
    for number in range(100):
        yield [number, number], [2 * number]


def train_input_fn():
    shapes, types = (2, 1), (tf.float32, tf.float32)
    dataset = tf.data.Dataset.from_generator(
        train_generator_fn, output_types=types, output_shapes=shapes)
    dataset = dataset.batch(20).repeat(200)
    return dataset


if __name__ == '__main__':
    # Logging
    Path('model').mkdir(exist_ok=True)
    tf.logging.set_verbosity(logging.INFO)
    handlers = [
        logging.FileHandler('model/train.log'),
        logging.StreamHandler(sys.stdout)
    ]
    logging.getLogger('tensorflow').handlers = handlers

    # Train estimator
    estimator = tf.estimator.Estimator(model_fn, 'model', params={})
    estimator.train(train_input_fn)

Another option is to read the image list from a CSV and decode the files with tf.data:

Code language: python
import time

import tensorflow as tf

label_map = {'猫': 0, '狗': 1}  # '猫' = cat, '狗' = dog, as the label strings appear in train.csv

with open('train.csv') as f:
  lines = [line.strip().split(',') for line in f.readlines()]


def _parse_function(filename, label):
  image_string = tf.read_file(filename)
  image_decoded = tf.image.decode_jpeg(image_string, channels=3)          # (1)
  image = tf.cast(image_decoded, tf.float32)

  image = tf.image.resize_images(image, [224, 224])  # (2)
  return image, filename, label


def training_preprocess(image, filename, label):
  flip_image = tf.image.random_flip_left_right(image)                # (4)

  return flip_image, filename, label


images = []
labels = []
for line in lines:
  images.append(line[0])
  labels.append(label_map[line[1]])

images = tf.constant(images)
labels = tf.constant(labels)
images = tf.random_shuffle(images, seed=0)
labels = tf.random_shuffle(labels, seed=0)
data = tf.data.Dataset.from_tensor_slices((images, labels))

data = data.map(_parse_function, num_parallel_calls=4)
data = data.prefetch(buffer_size=2 * 10)
batched_data = data.batch(2)

iterator = tf.data.Iterator.from_structure(batched_data.output_types,
                                           batched_data.output_shapes)

init_op = iterator.make_initializer(batched_data)
# Build the get_next op once, outside the loop; calling iterator.get_next()
# inside the loop would keep adding new ops to the graph.
next_images, next_filenames, next_labels = iterator.get_next()

tt = time.time()
with tf.Session() as sess:
  sess.run(init_op)
  for i in range(100):
    try:
      print('{} -> {}'.format(i, sess.run(next_labels)))
    except tf.errors.OutOfRangeError:
      # Dataset exhausted: re-initialize it and keep going.
      sess.run(init_op)
  print(time.time() - tt)

Or, similarly to the first script, load everything into memory as numpy arrays first:

Code language: python
import time

import cv2
import numpy as np
import tensorflow as tf


with open('train.csv') as f:
  lines = [line.strip().split(',') for line in f.readlines()]

label_map = {'猫': 0, '狗': 1}  # '猫' = cat, '狗' = dog, as the label strings appear in train.csv

images = np.ndarray((len(lines), 224, 224, 3), float)
labels = np.ndarray((len(lines)), int)

for i, line in enumerate(lines):
  image = cv2.imread(line[0])
  image = cv2.resize(image, (224, 224))
  label = label_map[line[1]]

  images[i] = image
  labels[i] = label

batch_size = 2
data = tf.data.Dataset.from_tensor_slices((images, labels))
data = data.batch(batch_size)
iterator = tf.data.Iterator.from_structure(data.output_types,
                                           data.output_shapes)
init_op = iterator.make_initializer(data)
# As above, build the get_next op once rather than inside the loop.
next_images, next_labels = iterator.get_next()

tt = time.time()
with tf.Session() as sess:
  sess.run(init_op)
  for i in range(100):
    try:
      print('{} -> {}'.format(i, sess.run(next_labels)))
    except tf.errors.OutOfRangeError:
      sess.run(init_op)

  print(time.time() - tt)

See also:

https://github.com/MachineLP/Tensorflow-/tree/master/Tensorflow-estimator-classification

https://www.jianshu.com/p/5495f87107e7
