TensorFlow Programming: Neural Network


Activation Functions

tf.nn.relu

Clamps negative values to zero: returns max(features, 0).

tf.nn.relu6

Clamps negative values to zero and caps values above 6 at 6: returns min(max(features, 0), 6).

tf.nn.crelu

Applies ReLU to features and tf.negative(features) separately, then concatenates the two results along the last dimension (doubling it).

tf.nn.elu

Maps negative values to exp(features) - 1; non-negative values pass through unchanged.

tf.nn.softplus

Returns log(exp(features) + 1).

tf.nn.softsign

Returns features / (abs(features) + 1).

tf.nn.dropout

Randomly zeroes elements according to keep_prob, scaling the surviving elements by 1 / keep_prob (hence the doubled values in the output below, where keep_prob=0.5).

tf.nn.bias_add

Adds a 1-D bias to the last dimension of value.

tf.sigmoid

Returns 1 / (1 + exp(-x)).

tf.tanh

Returns tanh(x).

# coding=utf-8
import tensorflow as tf

# Use the same set of features for every op below
import numpy as np
inputs = np.random.uniform(-10, 10, size=[3, 3])
features = tf.placeholder_with_default(input=inputs, shape=[3, 3])

# Activation functions
output_relu = tf.nn.relu(features)
output_relu6 = tf.nn.relu6(features)
output_crelu = tf.nn.crelu(features)
output_elu = tf.nn.elu(features)
output_softplus = tf.nn.softplus(features)
output_softsign = tf.nn.softsign(features)
output_dropout = tf.nn.dropout(features, keep_prob=0.5)
output_bias_add = tf.nn.bias_add(features, bias=[10, 10, 10])
output_sigmoid = tf.nn.sigmoid(features)
output_tanh = tf.nn.tanh(features)

with tf.Session() as sess:
    print '\nfeatures :\n', sess.run(features)
    print '\n----------\n'
    print '\ntf.nn.relu :\n', sess.run(output_relu)
    print '\ntf.nn.relu6 :\n', sess.run(output_relu6)
    print '\ntf.nn.crelu :\n', sess.run(output_crelu)
    print '\ntf.nn.elu :\n', sess.run(output_elu)
    print '\ntf.nn.softplus :\n', sess.run(output_softplus)
    print '\ntf.nn.softsign :\n', sess.run(output_softsign)
    print '\ntf.nn.dropout :\n', sess.run(output_dropout)
    print '\ntf.nn.bias_add :\n', sess.run(output_bias_add)
    print '\ntf.nn.sigmoid :\n', sess.run(output_sigmoid)
    print '\ntf.nn.tanh :\n', sess.run(output_tanh)
features :
[[ 0.53874537 -3.09047282 -2.88714205]
 [-1.92602402 -1.56025457  3.64309646]
 [-9.13147387  8.37367913 -7.9849204 ]]

----------


tf.nn.relu :
[[ 0.53874537  0.          0.        ]
 [ 0.          0.          3.64309646]
 [ 0.          8.37367913  0.        ]]

tf.nn.relu6 :
[[ 0.53874537  0.          0.        ]
 [ 0.          0.          3.64309646]
 [ 0.          6.          0.        ]]

tf.nn.crelu :
[[ 0.53874537  0.          0.          0.          3.09047282  2.88714205]
 [ 0.          0.          3.64309646  1.92602402  1.56025457  0.        ]
 [ 0.          8.37367913  0.          9.13147387  0.          7.9849204 ]]

tf.nn.elu :
[[ 0.53874537 -0.95451955 -0.94426473]
 [-0.85427355 -0.78991742  3.64309646]
 [-0.99989179  8.37367913 -0.99965944]]

tf.nn.softplus :
[[  9.98370226e-01   4.44765358e-02   5.42374660e-02]
 [  1.36038893e-01   1.90688608e-01   3.66893104e+00]
 [  1.08200132e-04   8.37390997e+00   3.40501625e-04]]

tf.nn.softsign :
[[ 0.3501199  -0.75552948 -0.74274159]
 [-0.65823931 -0.60941384  0.78462649]
 [-0.90129768  0.8933183  -0.88870241]]

tf.nn.dropout :
[[  0.          -6.18094565  -5.77428411]
 [ -0.          -3.12050914   7.28619293]
 [-18.26294775  16.74735827  -0.        ]]

tf.nn.bias_add :
[[ 10.53874537   6.90952718   7.11285795]
 [  8.07397598   8.43974543  13.64309646]
 [  0.86852613  18.37367913   2.0150796 ]]

tf.nn.sigmoid :
[[  6.31520510e-01   4.35019567e-02   5.27928497e-02]
 [  1.27191314e-01   1.73610121e-01   9.74496282e-01]
 [  1.08194278e-04   9.99769189e-01   3.40443661e-04]]

tf.nn.tanh :
[[ 0.49203767 -0.9958716  -0.9938064 ]
 [-0.9584108  -0.91546169  0.99863108]
 [-0.99999998  0.99999989 -0.99999977]]

Convolution

tf.nn.convolution

tf.nn.convolution (input, filter, padding, strides=None, dilation_rate=None, name=None, data_format=None)

Computes sums of N-D convolutions.

tf.nn.conv2d

2-D convolution.

tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

Computes a 2-D convolution between 4-D input and filter tensors.

Here input.shape = [batch, in_height, in_width, in_channels], filter.shape = [filter_height, filter_width, in_channels, out_channels], and strides is a 1-D list of 4 ints.

tf.nn.conv2d_transpose

The transpose of 2-D convolution (tf.nn.conv2d).

tf.nn.conv2d_transpose (value, filter, output_shape, strides, padding='SAME', data_format='NHWC', name=None)

This op is sometimes called "deconvolution" after the paper Deconvolutional Networks, but it is actually the transpose (gradient) of conv2d, not a true deconvolution. Note that output_shape must be supplied explicitly; in the examples below, a 3×3 input with a 2×2 filter and stride 2 maps to a 6×6 output, hence output_shape=[1, 6, 6, 1].

tf.nn.conv1d

tf.nn.conv1d (value, filters, stride, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

Here stride is a single integer.

When padding='SAME':

# coding=utf-8
import tensorflow as tf

# Use the same inputs for every run
import numpy as np
x_4d = tf.placeholder_with_default(input=np.ndarray.reshape(np.array(range(1, 10), dtype=np.float32), [1, 3, 3, 1]),
                                   shape=[1, 3, 3, 1])
y_4d = tf.placeholder_with_default(input=np.ones(shape=[2, 2, 1, 1], dtype=np.float32),
                                   shape=[2, 2, 1, 1])
x_3d = tf.placeholder_with_default(input=np.ndarray.reshape(np.array(range(1, 4), dtype=np.float32), [1, 3, 1]),
                                   shape=[1, 3, 1])
y_3d = tf.placeholder_with_default(input=np.ones(shape=[2, 1, 1], dtype=np.float32),
                                   shape=[2, 1, 1])
# Convolution ops; padding='SAME'
output_convolution = tf.nn.convolution(x_4d, y_4d, 'SAME')
output_conv2d = tf.nn.conv2d(x_4d, y_4d, [1, 1, 1, 1], 'SAME')
output_conv2d_transpose = tf.nn.conv2d_transpose(x_4d, y_4d, [1, 6, 6, 1], [1, 2, 2, 1], 'SAME')
output_conv1d = tf.nn.conv1d(x_3d, y_3d, 1, 'SAME')

with tf.Session() as sess:
    print '\nx :\n', sess.run(x_4d)
    print '\n----------\n'
    print '\n\ntf.nn.convolution :\n', sess.run(output_convolution)
    print '\n\ntf.nn.conv2d :\n', sess.run(output_conv2d)
    print '\n\ntf.nn.conv2d_transpose :\n', sess.run(output_conv2d_transpose)
    print '\n\ntf.nn.conv1d :\n', sess.run(output_conv1d)

Output:

x :
[[[[ 1.]
   [ 2.]
   [ 3.]]

  [[ 4.]
   [ 5.]
   [ 6.]]

  [[ 7.]
   [ 8.]
   [ 9.]]]]

----------



tf.nn.convolution :
[[[[ 12.]
   [ 16.]
   [  9.]]

  [[ 24.]
   [ 28.]
   [ 15.]]

  [[ 15.]
   [ 17.]
   [  9.]]]]


tf.nn.conv2d :
[[[[ 12.]
   [ 16.]
   [  9.]]

  [[ 24.]
   [ 28.]
   [ 15.]]

  [[ 15.]
   [ 17.]
   [  9.]]]]


tf.nn.conv2d_transpose :
[[[[ 1.]
   [ 1.]
   [ 2.]
   [ 2.]
   [ 3.]
   [ 3.]]

  [[ 1.]
   [ 1.]
   [ 2.]
   [ 2.]
   [ 3.]
   [ 3.]]

  [[ 4.]
   [ 4.]
   [ 5.]
   [ 5.]
   [ 6.]
   [ 6.]]

  [[ 4.]
   [ 4.]
   [ 5.]
   [ 5.]
   [ 6.]
   [ 6.]]

  [[ 7.]
   [ 7.]
   [ 8.]
   [ 8.]
   [ 9.]
   [ 9.]]

  [[ 7.]
   [ 7.]
   [ 8.]
   [ 8.]
   [ 9.]
   [ 9.]]]]


tf.nn.conv1d :
[[[ 3.]
  [ 5.]
  [ 3.]]]

When padding='VALID':

# coding=utf-8
import tensorflow as tf

# Use the same inputs for every run
import numpy as np
x_4d = tf.placeholder_with_default(input=np.ndarray.reshape(np.array(range(1, 10), dtype=np.float32), [1, 3, 3, 1]),
                                   shape=[1, 3, 3, 1])
y_4d = tf.placeholder_with_default(input=np.ones(shape=[2, 2, 1, 1], dtype=np.float32),
                                   shape=[2, 2, 1, 1])
x_3d = tf.placeholder_with_default(input=np.ndarray.reshape(np.array(range(1, 4), dtype=np.float32), [1, 3, 1]),
                                   shape=[1, 3, 1])
y_3d = tf.placeholder_with_default(input=np.ones(shape=[2, 1, 1], dtype=np.float32),
                                   shape=[2, 1, 1])
# Convolution ops; padding='VALID'
output_convolution = tf.nn.convolution(x_4d, y_4d, 'VALID')
output_conv2d = tf.nn.conv2d(x_4d, y_4d, [1, 1, 1, 1], 'VALID')
output_conv2d_transpose = tf.nn.conv2d_transpose(x_4d, y_4d, [1, 6, 6, 1], [1, 2, 2, 1], 'VALID')
output_conv1d = tf.nn.conv1d(x_3d, y_3d, 1, 'VALID')

with tf.Session() as sess:
    print '\nx :\n', sess.run(x_4d)
    print '\n----------\n'
    print '\n\ntf.nn.convolution :\n', sess.run(output_convolution)
    print '\n\ntf.nn.conv2d :\n', sess.run(output_conv2d)
    print '\n\ntf.nn.conv2d_transpose :\n', sess.run(output_conv2d_transpose)
    print '\n\ntf.nn.conv1d :\n', sess.run(output_conv1d)

Output:

x :
[[[[ 1.]
   [ 2.]
   [ 3.]]

  [[ 4.]
   [ 5.]
   [ 6.]]

  [[ 7.]
   [ 8.]
   [ 9.]]]]

----------



tf.nn.convolution :
[[[[ 12.]
   [ 16.]]

  [[ 24.]
   [ 28.]]]]


tf.nn.conv2d :
[[[[ 12.]
   [ 16.]]

  [[ 24.]
   [ 28.]]]]


tf.nn.conv2d_transpose :
[[[[ 1.]
   [ 1.]
   [ 2.]
   [ 2.]
   [ 3.]
   [ 3.]]

  [[ 1.]
   [ 1.]
   [ 2.]
   [ 2.]
   [ 3.]
   [ 3.]]

  [[ 4.]
   [ 4.]
   [ 5.]
   [ 5.]
   [ 6.]
   [ 6.]]

  [[ 4.]
   [ 4.]
   [ 5.]
   [ 5.]
   [ 6.]
   [ 6.]]

  [[ 7.]
   [ 7.]
   [ 8.]
   [ 8.]
   [ 9.]
   [ 9.]]

  [[ 7.]
   [ 7.]
   [ 8.]
   [ 8.]
   [ 9.]
   [ 9.]]]]


tf.nn.conv1d :
[[[ 3.]
  [ 5.]]]

Pooling

tf.nn.avg_pool

Performs average pooling on the input.

tf.nn.avg_pool (value, ksize, strides, padding, data_format='NHWC', name=None)

tf.nn.max_pool

Performs max pooling on the input.

tf.nn.max_pool (value, ksize, strides, padding, data_format='NHWC', name=None)

tf.nn.max_pool_with_argmax

Performs max pooling on input and returns both the pooled result and the flattened indices (argmax) of each max value, which can be used later for unpooling / upsampling.

tf.nn.max_pool_with_argmax (input, ksize, strides, padding, Targmax=None, name=None)

tf.nn.pool
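
Performs an N-D pooling operation; pooling_type selects 'AVG' or 'MAX', and window_shape / strides describe only the spatial dimensions. A minimal sketch (my own example, not from the original post):

# tf.nn.pool with pooling_type='MAX' reproduces the tf.nn.max_pool result below
import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(1, 10, dtype=np.float32).reshape([1, 3, 3, 1]))
out = tf.nn.pool(x, window_shape=[2, 2], pooling_type='MAX',
                 padding='SAME', strides=[2, 2])
with tf.Session() as sess:
    print sess.run(out)  # [[[[5.] [6.]] [[8.] [9.]]]], same as tf.nn.max_pool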

When padding='SAME':

# coding=utf-8
import tensorflow as tf

# Use the same input for every run
import numpy as np
x_4d = tf.placeholder_with_default(input=np.ndarray.reshape(np.array(range(1, 10), dtype=np.float32), [1, 3, 3, 1]),
                                   shape=[1, 3, 3, 1])

# padding='SAME'
output_avg_pool = tf.nn.avg_pool(x_4d, [1, 2, 2, 1], [1, 2, 2, 1], 'SAME')
output_max_pool = tf.nn.max_pool(x_4d, [1, 2, 2, 1], [1, 2, 2, 1], 'SAME')
output_max_pool_with_argmax = tf.nn.max_pool_with_argmax(x_4d, [1, 2, 2, 1], [1, 2, 2, 1], 'SAME')

with tf.Session() as sess:
    print '\ninput :\n', sess.run(x_4d)
    print '\n----------\n'
    print '\n\ntf.nn.avg_pool :\n', sess.run(output_avg_pool)
    print '\n\ntf.nn.max_pool :\n', sess.run(output_max_pool)
    print '\n\ntf.nn.max_pool_with_argmax :\n', sess.run(output_max_pool_with_argmax)
input :
[[[[ 1.]
   [ 2.]
   [ 3.]]

  [[ 4.]
   [ 5.]
   [ 6.]]

  [[ 7.]
   [ 8.]
   [ 9.]]]]

----------



tf.nn.avg_pool :
[[[[ 3. ]
   [ 4.5]]

  [[ 7.5]
   [ 9. ]]]]


tf.nn.max_pool :
[[[[ 5.]
   [ 6.]]

  [[ 8.]
   [ 9.]]]]


tf.nn.max_pool_with_argmax :
MaxPoolWithArgmax(output=array([[[[ 5.],
         [ 6.]],

        [[ 8.],
         [ 9.]]]], dtype=float32), argmax=array([[[[4],
         [5]],

        [[7],
         [8]]]]))

When padding='VALID':

# coding=utf-8
import tensorflow as tf

# Use the same input for every run
import numpy as np
x_4d = tf.placeholder_with_default(input=np.ndarray.reshape(np.array(range(1, 10), dtype=np.float32), [1, 3, 3, 1]),
                                   shape=[1, 3, 3, 1])

# padding='VALID'
output_avg_pool = tf.nn.avg_pool(x_4d, [1, 2, 2, 1], [1, 2, 2, 1], 'VALID')
output_max_pool = tf.nn.max_pool(x_4d, [1, 2, 2, 1], [1, 2, 2, 1], 'VALID')
output_max_pool_with_argmax = tf.nn.max_pool_with_argmax(x_4d, [1, 2, 2, 1], [1, 2, 2, 1], 'VALID')

with tf.Session() as sess:
    print '\nvalue :\n', sess.run(x_4d)
    print '\n----------\n'
    print '\n\ntf.nn.avg_pool :\n', sess.run(output_avg_pool)
    print '\n\ntf.nn.max_pool :\n', sess.run(output_max_pool)
    print '\n\ntf.nn.max_pool_with_argmax :\n', sess.run(output_max_pool_with_argmax)
value :
[[[[ 1.]
   [ 2.]
   [ 3.]]

  [[ 4.]
   [ 5.]
   [ 6.]]

  [[ 7.]
   [ 8.]
   [ 9.]]]]

----------



tf.nn.avg_pool :
[[[[ 3.]]]]


tf.nn.max_pool :
[[[[ 5.]]]]


tf.nn.max_pool_with_argmax :
MaxPoolWithArgmax(output=array([[[[ 5.]]]], dtype=float32), argmax=array([[[[4]]]]))

Morphological filtering
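
This section is left empty in the original post. As a sketch (assuming the TF 1.x API), tf.nn provides grayscale morphology via tf.nn.dilation2d and tf.nn.erosion2d; dilation takes, at each position, the maximum of input + filter over the window:

# Sketch only: grayscale dilation with a flat (all-zero) 2x2 structuring element,
# so each output element is simply the max of its 2x2 neighborhood.
import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(1, 10, dtype=np.float32).reshape([1, 3, 3, 1]))
kernel = tf.zeros([2, 2, 1])  # [filter_height, filter_width, depth]
dilated = tf.nn.dilation2d(x, kernel, strides=[1, 1, 1, 1],
                           rates=[1, 1, 1, 1], padding='SAME')
with tf.Session() as sess:
    print sess.run(dilated)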


Normalization

tf.nn.l2_normalize

Computes output = x / sqrt(max(sum(x**2), epsilon)).

tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None)
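
A quick check of the API against its formula (my own example, not from the original post):

# Verifying tf.nn.l2_normalize against the documented formula
import tensorflow as tf

x = tf.constant([3.0, 4.0])
out = tf.nn.l2_normalize(x, dim=0)
diy = x / tf.sqrt(tf.maximum(tf.reduce_sum(tf.square(x)), 1e-12))
with tf.Session() as sess:
    print sess.run(out)  # ~[0.6 0.8]
    print sess.run(diy)  # same values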

tf.nn.batch_normalization

Batch normalization. I have not yet worked out the exact difference from tf.contrib.layers.batch_norm; they appear to be the same operation exposed through different APIs.
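
A minimal sketch of how it is typically wired together with tf.nn.moments (my own example; offset and scale are left as None here rather than trained variables):

# Normalize each feature column to ~zero mean / unit variance
import numpy as np
import tensorflow as tf

x = tf.constant(np.random.uniform(-1, 1, [4, 3]).astype(np.float32))
mean, variance = tf.nn.moments(x, axes=[0])  # per-column batch statistics
bn = tf.nn.batch_normalization(x, mean, variance,
                               offset=None, scale=None,
                               variance_epsilon=1e-3)
with tf.Session() as sess:
    print sess.run(bn)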

tf.nn.batch_norm_with_global_normalization


Losses

tf.nn.l2_loss

tf.nn.log_poisson_loss
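
Neither op is demonstrated in the original post. As a hedged sketch: tf.nn.l2_loss(t) computes sum(t ** 2) / 2 (half the squared L2 norm, no square root), and tf.nn.log_poisson_loss(targets, log_input) computes exp(log_input) - log_input * targets element-wise by default:

# Sketch only (my own example)
import tensorflow as tf

t = tf.constant([1.0, 2.0, 3.0])
targets = tf.constant([0.0, 1.0, 2.0])
with tf.Session() as sess:
    print sess.run(tf.nn.l2_loss(t))  # (1 + 4 + 9) / 2 = 7.0
    print sess.run(tf.nn.log_poisson_loss(targets, tf.log(t)))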


Classification

tf.nn.softmax

Re-scales the values along the last dimension (dim=-1) so that every element lies in (0, 1) and the elements sum to 1.

Formula: softmax = exp(logits) / reduce_sum(exp(logits), dim)

tf.nn.softmax (logits, dim=-1, name=None)

tf.nn.log_softmax

Computes the log of the softmax.

Formula: logsoftmax = logits - log(reduce_sum(exp(logits), dim))

tf.nn.log_softmax (logits, dim=-1, name=None)

# coding=utf-8
import tensorflow as tf
x = tf.placeholder(dtype=tf.float32)

# Use the same features for every sess.run call
import numpy as np
i_p = np.random.uniform(-1, 1, [5])

# TensorFlow's built-in ops
output_softmax = tf.nn.softmax(x)
output_log_softmax = tf.nn.log_softmax(x)

# Hand-rolled softmax / log_softmax functions:
diy_softmax = lambda logits: tf.divide(tf.exp(logits), tf.reduce_sum(tf.exp(logits), -1))
diy_log_softmax = lambda logits: tf.subtract(logits, tf.log(tf.reduce_sum(tf.exp(logits), -1)))

with tf.Session() as sess:
    print '\ninput :\n', sess.run(x, feed_dict={x:i_p})
    print '\n----------\n'
    print '\ntf.nn.softmax :\n', sess.run(output_softmax, feed_dict={x:i_p})
    print '\ndiy_softmax :\n', sess.run(diy_softmax(x), feed_dict={x:i_p})
    print '\ntf.nn.log_softmax :\n', sess.run(output_log_softmax, feed_dict={x:i_p})
    print '\ndiy_log_softmax :\n', sess.run(diy_log_softmax(x), feed_dict={x:i_p})

As the printed results show, the hand-rolled softmax and log_softmax functions, written directly from the formulas, produce the same results as the tf.nn.softmax() and tf.nn.log_softmax() APIs:

input :
[-0.33777472  0.34060675  0.70219433 -0.98343688  0.405606  ]

----------

tf.nn.softmax :
[ 0.11866388  0.23384923  0.33571553  0.06221728  0.24955413]

diy_softmax :
[ 0.11866388  0.23384921  0.33571553  0.06221728  0.2495541 ]

tf.nn.log_softmax :
[-2.13146019 -1.45307875 -1.0914911  -2.7771225  -1.3880794 ]

diy_log_softmax :
[-2.13146019 -1.45307875 -1.09149122 -2.7771225  -1.38807952]

Embeddings

tf.nn.embedding_lookup

tf.nn.embedding_lookup_sparse
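
Neither lookup op is demonstrated in the original post. A minimal sketch of tf.nn.embedding_lookup (my own example): it gathers rows of the params matrix by integer id:

# Each id selects one row of the embedding matrix
import numpy as np
import tensorflow as tf

params = tf.constant(np.arange(12, dtype=np.float32).reshape([4, 3]))  # 4 ids, dim 3
ids = tf.constant([0, 2, 2])
looked_up = tf.nn.embedding_lookup(params, ids)
with tf.Session() as sess:
    print sess.run(looked_up)  # rows 0, 2 and 2 of params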


Recurrent Neural Networks
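
This section is left empty in the original post. A minimal sketch, assuming the TF 1.x tf.nn.dynamic_rnn / tf.nn.rnn_cell API:

# Sketch only: run an LSTM over a [batch, time, features] tensor
import numpy as np
import tensorflow as tf

inputs = tf.constant(np.random.uniform(-1, 1, [2, 5, 3]).astype(np.float32))
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=4)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print sess.run(outputs).shape  # (2, 5, 4): one output per time step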


Connectionist Temporal Classification (CTC)


Evaluation

tf.nn.top_k

tf.nn.in_top_k
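
Neither op is demonstrated in the original post. A minimal sketch (my own example): tf.nn.top_k returns the k largest entries (and their indices) along the last dimension, and tf.nn.in_top_k checks whether each row's target class is among its top-k predictions:

# Sketch only
import tensorflow as tf

logits = tf.constant([[0.1, 0.8, 0.1],
                      [0.6, 0.3, 0.1]])
values, indices = tf.nn.top_k(logits, k=2)
hits = tf.nn.in_top_k(predictions=logits, targets=tf.constant([1, 2]), k=2)
with tf.Session() as sess:
    print sess.run(values)   # [[ 0.8  0.1] [ 0.6  0.3]]
    print sess.run(indices)  # [[1 0] [0 1]] (ties broken by lower index)
    print sess.run(hits)     # [ True False]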


Candidate Sampling


