TensorFlow: Visualization with TensorBoard


While learning a deep learning framework, a question soon comes up: how can we inspect the parameters of each layer so as to better understand, debug, and optimize the network? For this, TensorFlow provides a particularly useful visualization toolkit, TensorBoard, which can display both the network structure and how the parameters of each layer change during training and testing.

TensorBoard's input is the log files in which TensorFlow saves summary data. The log file names take the form events.out.tfevents.1467809796.lei-All-Series or events.out.tfevents.1467809800.lei-All-Series. The kinds of summary data TensorBoard can read are scalars, images, audio, histograms, and the graph.
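Before walking through the full example, here is a minimal sketch of how summary data gets written. It uses the same pre-1.0 TensorFlow summary API as the listing below (tf.scalar_summary and tf.train.SummaryWriter, later renamed tf.summary.scalar and tf.summary.FileWriter); the log directory /tmp/demo_logs and the loss values are invented for illustration.

import tensorflow as tf

# A placeholder fed with a scalar value, plus a summary op that records it.
loss = tf.placeholder(tf.float32, name='loss')
loss_summary = tf.scalar_summary('loss', loss)

with tf.Session() as sess:
  # The writer creates an events.out.tfevents.* file under /tmp/demo_logs.
  writer = tf.train.SummaryWriter('/tmp/demo_logs', sess.graph)
  for step, value in enumerate([2.3, 1.7, 1.2, 0.9]):
    # Running the summary op returns a serialized Summary protobuf,
    # which the writer appends to the log file with the given step.
    summary = sess.run(loss_summary, feed_dict={loss: value})
    writer.add_summary(summary, step)
  writer.close()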

Code Example

"""A simple MNIST classifier which displays summaries in TensorBoard.This is an unimpressive MNIST model, but it is a good example of using tf.name_scope to make a graph legible in the TensorBoard graph explorer, and of naming summary tags so that they are grouped meaningfully in TensorBoard. It demonstrates the functionality of every TensorBoard dashboard.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_boolean('fake_data', False, 'If true, uses fake data '
                     'for unit testing.')
flags.DEFINE_integer('max_steps', 1000, 'Number of steps to run trainer.')
flags.DEFINE_float('learning_rate', 0.001, 'Initial learning rate.')
flags.DEFINE_float('dropout', 0.9, 'Keep probability for training dropout.')
flags.DEFINE_string('data_dir', '/tmp/data', 'Directory for storing data')
flags.DEFINE_string('summaries_dir', '/tmp/mnist_logs', 'Summaries directory')
def train():
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True,
                                    fake_data=FLAGS.fake_data)
  sess = tf.InteractiveSession()
  # Create a multilayer model.
  # Input placeholders
  with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name='x-input')
    image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
    tf.image_summary('input', image_shaped_input, 10)
    y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')
    keep_prob = tf.placeholder(tf.float32)
    tf.scalar_summary('dropout_keep_probability', keep_prob)
  # We can't initialize these variables to 0 - the network will get stuck.
  def weight_variable(shape):
    """Create a weight variable with appropriate initialization."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)
  def bias_variable(shape):
    """Create a bias variable with appropriate initialization."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
  def variable_summaries(var, name):
    """Attach a lot of summaries to a Tensor."""
    with tf.name_scope('summaries'):
      mean = tf.reduce_mean(var)
      tf.scalar_summary('mean/' + name, mean)
      with tf.name_scope('stddev'):
        stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
      tf.scalar_summary('stddev/' + name, stddev)
      tf.scalar_summary('max/' + name, tf.reduce_max(var))
      tf.scalar_summary('min/' + name, tf.reduce_min(var))
      tf.histogram_summary(name, var)
  def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    """Reusable code for making a simple neural net layer.
    It does a matrix multiply, bias add, and then uses relu to nonlinearize.
    It also sets up name scoping so that the resultant graph is easy to read, and
    adds a number of summary ops.
    """
    # Adding a name scope ensures logical grouping of the layers in the graph.
    with tf.name_scope(layer_name):
      # This Variable will hold the state of the weights for the layer
      with tf.name_scope('weights'):
        weights = weight_variable([input_dim, output_dim])
        variable_summaries(weights, layer_name + '/weights')
      with tf.name_scope('biases'):
        biases = bias_variable([output_dim])
        variable_summaries(biases, layer_name + '/biases')
      with tf.name_scope('Wx_plus_b'):
        preactivate = tf.matmul(input_tensor, weights) + biases
        tf.histogram_summary(layer_name + '/pre_activations', preactivate)
      activations = act(preactivate, 'activation')
      tf.histogram_summary(layer_name + '/activations', activations)
      return activations
  hidden1 = nn_layer(x, 784, 500, 'layer1')
  dropped = tf.nn.dropout(hidden1, keep_prob)
  y = nn_layer(dropped, 500, 10, 'layer2', act=tf.nn.softmax)
  with tf.name_scope('cross_entropy'):
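    # Note: taking log(softmax) directly like this can be numerically
    # unstable; later versions of this tutorial use
    # tf.nn.softmax_cross_entropy_with_logits instead. The formulation is
    # kept here to match the original article.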
    diff = y_ * tf.log(y)
    with tf.name_scope('total'):
      cross_entropy = -tf.reduce_mean(diff)
    tf.scalar_summary('cross entropy', cross_entropy)
  with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(
        FLAGS.learning_rate).minimize(cross_entropy)
  with tf.name_scope('accuracy'):
    with tf.name_scope('correct_prediction'):
      correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    with tf.name_scope('accuracy'):
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    tf.scalar_summary('accuracy', accuracy)
  # Merge all the summaries and write them out to /tmp/mnist_logs (by default)
  merged = tf.merge_all_summaries()
  train_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/train', sess.graph)
  test_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/test')
  tf.initialize_all_variables().run()
  # Train the model, and also write summaries.
  # Every 10th step, measure test-set accuracy, and write test summaries
  # All other steps, run train_step on training data, & add training summaries
  def feed_dict(train):
    """Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
    if train or FLAGS.fake_data:
      xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)
      k = FLAGS.dropout
    else:
      xs, ys = mnist.test.images, mnist.test.labels
      k = 1.0
    return {x: xs, y_: ys, keep_prob: k}
  for i in range(FLAGS.max_steps):
    if i % 10 == 0:  # Record summaries and test-set accuracy
      summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
      test_writer.add_summary(summary, i)
      print('Accuracy at step %s: %s' % (i, acc))
    else:  # Record train set summaries, and train
      summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
      train_writer.add_summary(summary, i)
def main(_):
  if tf.gfile.Exists(FLAGS.summaries_dir):
    tf.gfile.DeleteRecursively(FLAGS.summaries_dir)
  tf.gfile.MakeDirs(FLAGS.summaries_dir)
  train()
if __name__ == '__main__':
  tf.app.run()
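Because the hyperparameters are declared with tf.app.flags, they can be overridden from the command line. Assuming the script is saved as mnist_with_summaries.py (the file name here is illustrative), a run might look like:

python mnist_with_summaries.py --max_steps=2000 --dropout=0.8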

After running the code above, launch TensorBoard to visualize the results:

tensorboard --logdir=/tmp/mnist_logs/train/

Then open http://0.0.0.0:6006 in a browser.
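Since the code writes train and test summaries into separate subdirectories, you can also point TensorBoard at the parent directory to compare the two runs side by side:

tensorboard --logdir=/tmp/mnist_logs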

EVENTS shows statistics of the training scalars; you can watch how each tracked value changes over the course of training.

GRAPH shows the network structure.

HISTOGRAM shows the distributions of the parameters over the course of training.

Originally published on the WeChat official account 机器学习算法与Python学习 (guodongwei1991).

Original publication date: 2017-04-27
