
TensorFlow Programming: Layers (contrib)

JNingWei · 2018-09-28 15:39:11

Higher level ops for building neural network layers

tf.contrib.layers.batch_norm

  Adds a Batch Normalization layer.

tf.contrib.layers.batch_norm (inputs, decay=0.999, updates_collections=tf.GraphKeys.UPDATE_OPS, is_training=True, data_format=DATA_FORMAT_NHWC)

  Can be used as the normalizer function for conv2d and fully_connected.
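A minimal usage sketch (assuming TF 1.x with contrib available; the shapes and parameter values below are illustrative, not from the original post):

# A minimal sketch, assuming TF 1.x contrib; shapes/values are illustrative.
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 32, 32, 3])
is_training = tf.placeholder(tf.bool)

# Standalone batch normalization layer
bn = tf.contrib.layers.batch_norm(x, decay=0.999, is_training=is_training,
                                  updates_collections=tf.GraphKeys.UPDATE_OPS)

# Used as the normalizer of a conv2d layer
conv = tf.contrib.layers.conv2d(x, num_outputs=16, kernel_size=3,
                                normalizer_fn=tf.contrib.layers.batch_norm,
                                normalizer_params={'is_training': is_training})

# The moving-average updates land in UPDATE_OPS and must be run while training
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

Because batch_norm registers its moving-average updates in tf.GraphKeys.UPDATE_OPS by default, those update ops must be fetched and run (or added as control dependencies) during training.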

tf.nn.conv2d_transpose

   The transpose of conv2d.

tf.nn.conv2d_transpose (value, filter, output_shape, strides, padding='SAME', data_format='NHWC', name=None)

# -*- coding: utf-8 -*-

import numpy as np
import tensorflow as tf


def func(in_put, in_channel, out_channel):
    # A 2x2 kernel shared by the forward and the transposed convolution
    weights = tf.get_variable(name="weights", shape=[2, 2, in_channel, out_channel],
                              initializer=tf.contrib.layers.xavier_initializer_conv2d())
    convolution = tf.nn.conv2d(input=in_put, filter=weights, strides=[1, 1, 1, 1], padding='VALID')
    conv_shape = convolution.get_shape().as_list()
    # Target shape for the transposed convolution: spatial dimensions doubled
    deconv_shape = [conv_shape[0], conv_shape[1] * 2, conv_shape[2] * 2, conv_shape[3]]
    deconvolution = tf.nn.conv2d_transpose(value=convolution, filter=weights,
                                           output_shape=deconv_shape,
                                           strides=[1, 2, 2, 1], padding='VALID')
    return in_put, convolution, deconvolution


def main():
    with tf.Graph().as_default():
        input_x = tf.placeholder(dtype=tf.float32, shape=[1, 4, 4, 1])
        in_put, convolution, deconvolution = func(input_x, 1, 1)

        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            _in_put, _convolution, _deconvolution = sess.run(
                [in_put, convolution, deconvolution],
                feed_dict={input_x: np.random.uniform(low=0, high=255, size=[1, 4, 4, 1])})
            # Printing the tensors themselves shows their static shapes
            print('\nin_put:')
            print(in_put)          # print(_in_put) would show the values
            print('\nconvolution:')
            print(convolution)     # print(_convolution)
            print('\ndeconvolution:')
            print(deconvolution)   # print(_deconvolution)


if __name__ == "__main__":
    main()
Output:
2017-09-29 09:51:41.472842: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

in_put:
Tensor("Placeholder:0", shape=(1, 4, 4, 1), dtype=float32)

convolution:
Tensor("Conv2D:0", shape=(1, 3, 3, 1), dtype=float32)

deconvolution:
Tensor("conv2d_transpose:0", shape=(1, 6, 6, 1), dtype=float32)

Process finished with exit code 0

tf.nn.dropout

tf.nn.dropout (x, keep_prob, noise_shape=None, seed=None, name=None)

# coding=utf-8

import numpy as np
import tensorflow as tf


def main():
    with tf.Graph().as_default():
        input_x = np.random.uniform(0, 255, [3, 3])
        print(input_x)

        # One dropout op per keep probability; surviving entries are
        # scaled by 1/keep_prob so the expected sum stays unchanged
        drop = [tf.nn.dropout(x=input_x, keep_prob=keep_prob)
                for keep_prob in [0.1, 0.5, 1.0]]
        with tf.Session() as sess:
            for drop_i in drop:
                _drop_i = sess.run(drop_i)
                print('\n----------\n')
                print(_drop_i)


if __name__ == "__main__":
    main()
Output:
# Original input
[[  16.46278229  253.27597997  246.33614039]
 [ 130.45261984  227.85971767  142.72621045]
 [ 173.23025953  165.99906514  180.13238617]]

2017-09-29 11:02:29.146976: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

----------
# keep_prob = 0.1  (surviving entries are scaled by 1/keep_prob)
[[    0.            0.            0.       ]
 [    0.            0.            0.       ]
 [    0.            0.         1801.3238617]]

----------
# keep_prob = 0.5
[[  32.92556457  506.55195994    0.        ]
 [ 260.90523969  455.71943533    0.        ]
 [ 346.46051906    0.          360.26477234]]

----------
# keep_prob = 1.0
[[  16.46278229  253.27597997  246.33614039]
 [ 130.45261984  227.85971767  142.72621045]
 [ 173.23025953  165.99906514  180.13238617]]

tf.contrib.layers.fully_connected

tf.contrib.layers.fully_connected (inputs, num_outputs, activation_fn=tf.nn.relu)

  • By default it performs both the "convolution" and the activation.
  • In the "convolution", only the last dimension of the input is reduced (a weighted sum), and the rank stays the same; that is, 'weights:0'.shape = [inputs.shape[-1], num_outputs].
  • 'weights:0'.shape is always two-dimensional.
  • num_outputs is the size of the second (i.e. the -1st) dimension of 'weights:0'; after the layer is applied, it also becomes the size of the last (-1st) dimension of the output tensor.
  • If activation_fn=None is set, the output bypasses the activation and may therefore still contain negative values.
# coding=utf-8

import numpy as np
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected


def main():
    with tf.Graph().as_default():
        input_x = np.random.uniform(0, 10, [3, 3])
        print(input_x)
        # One output unit: weights have shape [inputs.shape[-1], num_outputs] = [3, 1]
        fn = fully_connected(inputs=input_x, num_outputs=1)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            _fn = sess.run(fn)
            print(_fn)
            print('\n----------\n')
            # Print each variable alongside its current value
            for (x, y) in zip(tf.global_variables(), sess.run(tf.global_variables())):
                print('\n', x, '\n', y)


if __name__ == "__main__":
    main()
Output:
# Original input matrix
[[ 7.73305319  0.2780667   7.27101124]
 [ 0.84666041  0.92980727  6.83676724]
 [ 1.02844109  5.51824496  1.78840816]]

2017-09-29 11:33:02.500942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

# Result tensor after the fully connected layer
[[ 2.32549239]
 [ 1.69284669]
 [ 0.        ]]

----------

<tf.Variable 'fully_connected/weights:0' shape=(3, 1) dtype=float64_ref> 
[[-0.01048241]
 [-0.83954232]
 [ 0.36308597]]

<tf.Variable 'fully_connected/biases:0' shape=(1,) dtype=float64_ref> 
[ 0.]

Process finished with exit code 0
The same layer applied to a rank-4 input (the rank is preserved; only the last dimension changes):
# coding=utf-8

import numpy as np
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected


def main():
    with tf.Graph().as_default():
        input_x = np.random.uniform(0, 10, [2, 4, 4, 3])
        print(np.shape(input_x))

        # Rank is preserved: only the last dimension (3) is mapped to num_outputs (1)
        fn = fully_connected(inputs=input_x, num_outputs=1)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            _fn = sess.run(fn)
            print(np.shape(_fn))
            print('\n----------\n')
            for i in tf.global_variables():
                print('\n', i)


if __name__ == "__main__":
    main()
Output:
(2, 4, 4, 3)

2017-09-29 11:46:17.248114: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
# Only the last dimension is reduced and the rank stays the same: 'weights:0'.shape = [inputs.shape[-1], num_outputs]
(2, 4, 4, 1)

----------

<tf.Variable 'fully_connected/weights:0' shape=(3, 1) dtype=float64_ref>

<tf.Variable 'fully_connected/biases:0' shape=(1,) dtype=float64_ref>

tf.nn.relu

max(features, 0)

tf.nn.relu(features, name=None)

tf.nn.relu6

min(max(features, 0), 6). A capped variant of tf.nn.relu that keeps extreme activations from exceeding 6 after the ReLU; see the sketch below.

tf.nn.relu6(features, name=None)
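A minimal sketch contrasting the two activations (the sample values are made up for illustration):

import tensorflow as tf

features = tf.constant([-3.0, 0.5, 4.0, 7.5])
relu = tf.nn.relu(features)    # max(features, 0)          -> [0., 0.5, 4., 7.5]
relu6 = tf.nn.relu6(features)  # min(max(features, 0), 6)  -> [0., 0.5, 4., 6.]

with tf.Session() as sess:
    print(sess.run([relu, relu6]))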

tf.nn.softmax

Formula: softmax = exp(logits) / reduce_sum(exp(logits), dim)

tf.nn.softmax (logits, dim=-1, name=None)

# coding=utf-8

import numpy as np
import tensorflow as tf

input_x = tf.constant(np.random.uniform(0, 5, [2, 3]))
softmax = tf.nn.softmax(logits=input_x)


# A hand-written softmax, following the formula above
def my_softmax(logits, dim=-1):
    return tf.div(tf.exp(logits), tf.reduce_sum(tf.exp(logits), dim, keep_dims=True))


my_softmax_op = my_softmax(logits=input_x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _input_x, _softmax, _my_softmax = sess.run([input_x, softmax, my_softmax_op])
    print('input:', np.shape(input_x), input_x, '\n', _input_x)
    print('\n----------\n')
    print('softmax:', np.shape(_softmax), softmax, '\n', _softmax)
    print('\n----------\n')
    print('my_softmax:', np.shape(_my_softmax), my_softmax_op, '\n', _my_softmax)
    print('\n----------\n')
    # No variables are created, so this loop prints nothing
    for i in tf.global_variables():
        print('\n', i)
Output:
# The softmax input is a tensor
input: (2, 3) Tensor("Const:0", shape=(2, 3), dtype=float64) 
[[ 3.88517858  3.69402461  3.07837121]
 [ 1.27162028  2.12622856  4.34646188]]

----------
# After softmax, the shape is unchanged and a tensor is returned
softmax: (2, 3) Tensor("Softmax:0", shape=(2, 3), dtype=float64) 
[[ 0.44008545  0.36351295  0.1964016 ]
 [ 0.04000495  0.09402978  0.86596527]]

----------
# The formula softmax = exp(logits) / reduce_sum(exp(logits), dim) reproduces the built-in output
my_softmax: (2, 3) Tensor("div:0", shape=(2, 3), dtype=float64) 
[[ 0.44008545  0.36351295  0.1964016 ]
 [ 0.04000495  0.09402978  0.86596527]]

----------
# No parameters are kept in memory

Process finished with exit code 0

Regularizers

tf.contrib.layers.l1_regularizer

tf.contrib.layers.l1_regularizer (scale, scope=None)

tf.contrib.layers.l2_regularizer

tf.contrib.layers.l2_regularizer (scale, scope=None)
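The post gives no example for the regularizers, so here is a hedged sketch of typical TF 1.x usage (the scale values and variable names are illustrative): calling the returned function on a tensor yields the penalty term, and passing one as weights_regularizer makes a layer collect its penalty into tf.GraphKeys.REGULARIZATION_LOSSES.

# A minimal sketch, assuming TF 1.x contrib; scale values are illustrative.
import tensorflow as tf

weights = tf.get_variable('w', shape=[3, 2],
                          initializer=tf.contrib.layers.xavier_initializer())

# Each call returns a function; applying it to a tensor yields the penalty
l1_penalty = tf.contrib.layers.l1_regularizer(scale=0.01)(weights)  # 0.01 * sum(|w|)
l2_penalty = tf.contrib.layers.l2_regularizer(scale=0.01)(weights)  # 0.01 * sum(w**2) / 2

# Attached to a layer, the penalty is collected automatically
fc = tf.contrib.layers.fully_connected(
    inputs=tf.zeros([1, 3]), num_outputs=2,
    weights_regularizer=tf.contrib.layers.l2_regularizer(0.01))
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)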

Initializers

tf.contrib.layers.xavier_initializer

An initializer that performs "Xavier" initialization.

tf.contrib.layers.xavier_initializer (uniform=True, seed=None, dtype=tf.float32)

# coding=utf-8

import tensorflow as tf

xavier = tf.get_variable(name="weights",
                         shape=[2, 2],
                         initializer=tf.contrib.layers.xavier_initializer())
constant = tf.get_variable(name='biases',
                           shape=[2],
                           initializer=tf.constant_initializer())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _xavier, _constant = sess.run([xavier, constant])
    print('\n\nxavier:')
    print(xavier)
    print(_xavier)
    print('\n\nconstant:')
    print(constant)
    print(_constant)
Output:
xavier:
<tf.Variable 'weights:0' shape=(2, 2) dtype=float32_ref>
[[ 1.20015538  0.34742999]
 [ 0.39075291  0.60076308]]


constant:
<tf.Variable 'biases:0' shape=(2,) dtype=float32_ref>
[ 0.  0.]
Inspecting the initializer objects themselves:
import tensorflow as tf

# Initializers are just callables / objects until a variable uses them
print('\n\ntf.contrib.layers.xavier_initializer_conv2d() :\n', tf.contrib.layers.xavier_initializer_conv2d())
print('\n\ntf.constant_initializer() :\n', tf.constant_initializer())
print('\n\ntf.global_variables_initializer() :\n', tf.global_variables_initializer())
Output:
tf.contrib.layers.xavier_initializer_conv2d() :
<function _initializer at 0x7fe5133da578>


tf.constant_initializer() :
<tensorflow.python.ops.init_ops.Constant object at 0x7fe528bbdfd0>


tf.global_variables_initializer() :
name: "init"
op: "NoOp"

Optimization

Summaries

Feature columns


