01
Variable
```python
a = tf.Variable(2, name="scalar")                 # create variable a with a scalar value
b = tf.Variable([2, 3], name="vector")            # create variable b as a vector
c = tf.Variable([[0, 1], [2, 3]], name="matrix")  # create variable c as a 2x2 matrix
W = tf.Variable(tf.zeros([784, 10]))              # create variable W as a 784 x 10 tensor, filled with zeros
```
02
Variable vs constant
tf.Variable is a TensorFlow class that wraps many operations (ops for short), which is why it is capitalized, following Python's class-naming convention.
tf.constant, by contrast, is itself an op, which is why it is lowercase.
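As a quick check (a sketch not from the original post; it assumes only that TensorFlow is importable), Python's inspect module confirms the naming convention described above:

```python
import inspect

import tensorflow as tf

# tf.Variable is capitalized because it is a class...
print(inspect.isclass(tf.Variable))  # True
# ...while tf.constant is lowercase because it is a plain op-creating function,
# not a class.
print(callable(tf.constant) and not inspect.isclass(tf.constant))  # True
```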
03
Initializing Variables
Section 01 created four Variable objects: a, b, c, and W. In TensorFlow, these objects must be initialized before they can be used.
The simplest and most direct way to initialize all variables at once:
```python
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
```
Initialize only specific variables:
```python
# initialize only variables a and b
init_ab = tf.variables_initializer([a, b], name="init_ab")
with tf.Session() as sess:
    sess.run(init_ab)
```
Initialize a single variable:
```python
W = tf.Variable(tf.zeros([784, 10]))
with tf.Session() as sess:
    sess.run(W.initializer)
```
04
eval
Compare the two code snippets below, pieceA and pieceB:
pieceA
```python
# W is a random 700 x 10 variable object
W = tf.Variable(tf.truncated_normal([700, 10]))
with tf.Session() as sess:
    sess.run(W.initializer)
    print(W)
# >> Tensor("Variable/read:0", shape=(700, 10), dtype=float32)
```
pieceB
```python
# W is a random 700 x 10 variable object
W = tf.Variable(tf.truncated_normal([700, 10]))
with tf.Session() as sess:
    sess.run(W.initializer)
    print(W.eval())
# >> [[-0.76781619 -0.67020458  1.15333688 ..., -0.98434633 -1.25692499 -0.90904623]
#     [-0.36763489 -0.65037876 -1.52936983 ...,  0.19320194 -0.38379928  0.44387451]
#     [ 0.12510735 -0.82649058  0.4321366  ..., -0.3816964   0.70466036  1.33211911]
#     ...,
#     [ 0.9203397  -0.99590844  0.76853162 ..., -0.74290705  0.37568584  0.64072722]]
```
So you can guess what eval() does: printing W only shows the tensor object, while W.eval() runs the tensor in the current session and returns its actual value.
05
Variable's operation interface: assign()
A question:
```python
W = tf.Variable(10)
W.assign(100)
with tf.Session() as sess:
    sess.run(W.initializer)
    print(W.eval())
```
Does this print 10 or 100?
10
Why?
A rule of TensorFlow:
W.assign(100) does not assign 100 to W by itself. assign() is an op, so it returns an op object; the assignment only takes effect when that op object is run in a Session.
Just like this:
```python
W = tf.Variable(10)
assign_op = W.assign(100)
with tf.Session() as sess:
    sess.run(W.initializer)
    sess.run(assign_op)
    print(W.eval())  # >> 100
```
The sess.run(W.initializer) line can be omitted here, because running assign_op already gives W a value. In fact, the initializer op is itself a special assign op.
Moving on:
```python
# create a variable whose initial value is 2
my_var = tf.Variable(2, name="my_var")
# assign my_var * 2 to my_var, and call that op my_var_times_two
my_var_times_two = my_var.assign(2 * my_var)
with tf.Session() as sess:
    sess.run(my_var.initializer)
    sess.run(my_var_times_two)  # >> 4
    sess.run(my_var_times_two)  # >> 8
    sess.run(my_var_times_two)  # >> 16
```
Think about why each run doubles the value: every sess.run(my_var_times_two) re-executes the same assign op against my_var's current value.
This, in turn, shows what the op object returned by assign() really means.
Why does TensorFlow require every op to be run inside a Session? What is the engineering purpose of Session? That question will be answered next time.