
TensorFlow in Practice at a Glance (Logistic Regression with the Eager API)


TensorFlow Eager API Logistic Regression

from __future__ import absolute_import, division, print_function

import tensorflow as tf
import tensorflow.contrib.eager as tfe
Set Up the Eager API
# Set Eager API
tfe.enable_eager_execution()
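enable_eager_execution() must be called once, at program start, before any other TensorFlow ops are created. As a quick check (a minimal sketch; it assumes your TF 1.x build is recent enough to expose tf.executing_eagerly()):

print(tf.executing_eagerly())  # True once Eager mode is on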
Import the Data
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./data/", one_hot=False)

Extracting ./data/train-images-idx3-ubyte.gz
Extracting ./data/train-labels-idx1-ubyte.gz
Extracting ./data/t10k-images-idx3-ubyte.gz
Extracting ./data/t10k-labels-idx1-ubyte.gz
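With one_hot=False, the labels come back as integer class indices rather than one-hot vectors, which is exactly what the sparse cross-entropy loss used later expects. A quick shape check (a minimal sketch; these are the standard flattened-MNIST dimensions):

print(mnist.train.images.shape)  # (55000, 784): 28x28 images flattened to 784 floats
print(mnist.train.labels.shape)  # (55000,): integer class labels in [0, 9]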
Set the Parameters
# Parameters
learning_rate = 0.1
batch_size = 128
num_steps = 1000
display_step = 100
Read Data with the Dataset API [3]

The Dataset API is a module introduced in TensorFlow 1.3 for reading data and building input pipelines.

Eager mode does not support placeholders, so data is read through the Dataset API instead.

Where data used to be fed in via placeholders, tf.data.Dataset.from_tensor_slices offers an alternative: it slices the incoming tensors along their first dimension to produce a dataset. In the example below, mnist.train.images is sliced this way and grouped into batches of batch_size.

In Eager mode, an iterator is created directly with tfe.Iterator(dataset) and then iterated. Values can be read from it directly, with no sess.run() needed.

# Iterator for the dataset
dataset = tf.data.Dataset.from_tensor_slices(
    (mnist.train.images, mnist.train.labels)).batch(batch_size)
dataset_iter = tfe.Iterator(dataset)
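Because execution is eager, a batch can be pulled from an iterator and inspected immediately, with no session involved. A minimal sketch, reusing the dataset defined above (a fresh iterator is created so the training iterator is left untouched):

# Peek at one batch; the results are concrete tensors, not graph nodes
images, labels = tfe.Iterator(dataset).next()
print(images.shape)  # (128, 784): batch_size slices of the first dimension
print(labels[:5])    # label values readable directly, no sess.run()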
Define the Model (Formula + Loss Function + Accuracy)
# Variables
W = tfe.Variable(tf.zeros([784, 10]), name='weights')
b = tfe.Variable(tf.zeros([10]), name='bias')

# Logistic regression (Wx + b)
def logistic_regression(inputs):
    return tf.matmul(inputs, W) + b

# Cross-Entropy loss function
def loss_fn(inference_fn, inputs, labels):
    # Using sparse_softmax cross entropy
    return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=inference_fn(inputs), labels=labels))

# Calculate accuracy
def accuracy_fn(inference_fn, inputs, labels):
    prediction = tf.nn.softmax(inference_fn(inputs))
    correct_pred = tf.equal(tf.argmax(prediction, 1), labels)
    return tf.reduce_mean(tf.cast(correct_pred, tf.float32))

Note on tf.nn.softmax_cross_entropy_with_logits(logits, labels, name=None): [4]

  • The first argument, logits, is the output of the network's last layer; with a batch its shape is [batch_size, num_classes], and for a single sample it is [num_classes].
  • The second argument, labels, holds the ground-truth labels, with the same shape as above.

The op performs two steps: it first applies softmax to the logits, then computes the cross-entropy between the result and the labels.

The return value is a vector with one entry per sample; applying tf.reduce_mean to it yields the loss. The sparse variant used in loss_fn above, tf.nn.sparse_softmax_cross_entropy_with_logits, takes integer class indices instead of one-hot vectors, matching the one_hot=False setting used when loading the data.
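The two steps can be verified directly in Eager mode. A minimal sketch with hypothetical toy values (note that this non-sparse op expects one-hot labels):

# Hypothetical 3-class example
logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])  # one-hot ground truth

# Step 1 (softmax) + step 2 (cross-entropy), written out by hand
manual = -tf.reduce_sum(labels * tf.log(tf.nn.softmax(logits)), axis=1)
# The fused op performs both steps in one, more numerically stable call
fused = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)
print(manual.numpy(), fused.numpy())  # both ≈ [0.417]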

Set Up the Optimizer (SGD)
# SGD Optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)

# Compute gradients
grad = tfe.implicit_gradients(loss_fn)
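Here grad is a function with the same signature as loss_fn; calling it returns a list of (gradient, variable) pairs covering every tfe.Variable the loss touches (here W and b), which apply_gradients consumes directly. A minimal sketch with a hypothetical dummy batch:

# Hypothetical batch: two blank images with arbitrary labels
dummy_x = tf.zeros([2, 784])
dummy_y = tf.constant([3, 7], dtype=tf.int64)
for g, v in grad(logistic_regression, dummy_x, dummy_y):
    print(v.name, g.shape)  # weights: (784, 10), bias: (10,)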
Training
# Training
average_loss = 0.
average_acc = 0.
for step in range(num_steps):

    # Iterate through the dataset
    try:
        d = dataset_iter.next()
    except StopIteration:  # dataset exhausted after one full pass
        # Refill queue
        dataset_iter = tfe.Iterator(dataset)
        d = dataset_iter.next()

    # Images
    x_batch = d[0]
    # Labels
    y_batch = tf.cast(d[1], dtype=tf.int64)

    # Compute the batch loss
    batch_loss = loss_fn(logistic_regression, x_batch, y_batch)
    average_loss += batch_loss
    # Compute the batch accuracy
    batch_accuracy = accuracy_fn(logistic_regression, x_batch, y_batch)
    average_acc += batch_accuracy

    if step == 0:
        # Display the initial cost, before optimizing
        print("Initial loss= {:.9f}".format(average_loss))

    # Update the variables following gradients info
    optimizer.apply_gradients(grad(logistic_regression, x_batch, y_batch))

    # Display info
    if (step + 1) % display_step == 0 or step == 0:
        if step > 0:
            average_loss /= display_step
            average_acc /= display_step
        print("Step:", '%04d' % (step + 1), " loss=",
              "{:.9f}".format(average_loss), " accuracy=",
              "{:.4f}".format(average_acc))
        average_loss = 0.
        average_acc = 0.

Initial loss= 2.302585363
Step: 0001  loss= 2.302585363  accuracy= 0.1172
Step: 0100  loss= 0.952338576  accuracy= 0.7955
Step: 0200  loss= 0.535867393  accuracy= 0.8712
Step: 0300  loss= 0.485415280  accuracy= 0.8757
Step: 0400  loss= 0.433947176  accuracy= 0.8843
Step: 0500  loss= 0.381990731  accuracy= 0.8971
Step: 0600  loss= 0.394154936  accuracy= 0.8947
Step: 0700  loss= 0.391497582  accuracy= 0.8905
Step: 0800  loss= 0.386373132  accuracy= 0.8945
Step: 0900  loss= 0.332039326  accuracy= 0.9096
Step: 1000  loss= 0.358993709  accuracy= 0.9002
Testing
# Evaluate model on the test image set
testX = mnist.test.images
testY = mnist.test.labels

test_acc = accuracy_fn(logistic_regression, testX, testY)
print("Testset Accuracy: {:.4f}".format(test_acc))

Testset Accuracy: 0.9083

References

[1] Understanding reduction_indices in TensorFlow: https://www.cnblogs.com/likethanlove/p/6547405.html

[2] tf.argmax() and the axis argument explained: https://blog.csdn.net/qq575379110/article/details/70538051

[3] TensorFlow's new data reading approach: an introduction to the Dataset API: https://blog.csdn.net/kwame211/article/details/78579035

[4] [TensorFlow] Usage of tf.nn.softmax_cross_entropy_with_logits: https://blog.csdn.net/mao_xiao_feng/article/details/53382790
