
What's the difference of name scope and a variable scope in tensorflow?

GavinZhou
Published 2018-01-02 16:12:19

Let’s begin with a short introduction to variable sharing. It is a mechanism in TensorFlow that allows variables to be shared across different parts of the code without passing references to them around. The method tf.get_variable can be used with the name of the variable as argument to either create a new variable with that name or retrieve the one that was created before. This is different from the tf.Variable constructor, which creates a new variable every time it is called (and potentially adds a suffix to the variable name if a variable with that name already exists). It is for the purpose of this variable sharing mechanism that a separate type of scope (variable scope) was introduced.
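A minimal sketch of that behaviour (assuming the TensorFlow 1.x graph API; the scope name "shared" and the variable names are only illustrative):

import tensorflow as tf

with tf.variable_scope("shared"):
    v = tf.get_variable("v", shape=[1])          # creates shared/v
with tf.variable_scope("shared", reuse=True):
    v_again = tf.get_variable("v", shape=[1])    # retrieves the existing shared/v

w1 = tf.Variable(1.0, name="w")
w2 = tf.Variable(1.0, name="w")                  # name clash -> TensorFlow appends a suffix

print(v is v_again)        # True: tf.get_variable returned the same variable object
print(w1.name, w2.name)    # w:0 w_1:0: tf.Variable created two distinct variables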

As a result, we end up having two different types of scopes:

  • name scope, created using tf.name_scope or tf.op_scope
  • variable scope, created using tf.variable_scope or tf.variable_op_scope

Both scopes have the same effect on all operations as well as on variables created using tf.Variable, i.e. the scope name will be added as a prefix to the operation or variable name.

1. tf.name_scope creates a namespace for operators in the default graph.
2. tf.variable_scope creates a namespace for both variables and operators in the default graph.

However, name scope is ignored by tf.get_variable. We can see that in the following example:

import tensorflow as tf

with tf.name_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0

The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:

with tf.variable_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # my_scope/var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0

Finally, let’s look at the difference between the different methods for creating scopes. We can group them in two categories:

  • tf.name_scope(name) (for name scope) and tf.variable_scope(name_or_scope, …) (for variable scope) create a scope with the name specified as argument
  • tf.op_scope(values, name, default_name=None) (for name scope) and tf.variable_op_scope(values, name_or_scope, default_name=None, …) (for variable scope) create a scope just like the functions above, but in addition to the scope name they accept a default_name argument, which is used when name is None. Moreover, they accept a list of tensors (values) in order to check that all of the tensors come from the same graph, the default graph. This is useful when creating new operations; see, for example, the implementation of tf.histogram_summary. A short sketch of this pattern follows below.
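A rough sketch of how default_name is typically used when defining a new op. This uses the TF 1.x form tf.name_scope(name, default_name, values), which subsumed the older tf.op_scope; the function my_op and its arguments are only illustrative, not from the original post:

import tensorflow as tf

def my_op(a, b, name=None):
    # default_name ("MyOp") is used when the caller passes no explicit name;
    # values=[a, b] lets TensorFlow verify both tensors belong to the same graph.
    with tf.name_scope(name, default_name="MyOp", values=[a, b]) as scope:
        a = tf.convert_to_tensor(a, name="a")
        b = tf.convert_to_tensor(b, name="b")
        return tf.add(a, b, name=scope)

result = my_op(tf.constant(1.0), tf.constant(2.0))
print(result.name)  # MyOp:0 (the op takes the scope's name because no explicit name was passed)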
Originally published on the author's personal site/blog, 2016-12-03.