
Differences between TensorFlow 1.x and TensorFlow 2.0


Reference link: New features in TensorFlow 2.0

Source: Stanford cs231n

Historical background on TensorFlow 1.x 


TensorFlow 1.x is primarily a framework for working with static computational graphs. Nodes in the computational graph are Tensors which will hold n-dimensional arrays when the graph is run; edges in the graph represent functions that will operate on Tensors when the graph is run to actually perform useful computation. 

Before TensorFlow 2.0, we had to work with the graph in two phases. There are plenty of tutorials online that explain this two-step process. The process generally looks like the following for TF 1.x:

1. Build a computational graph that describes the computation you want to perform. This stage doesn't actually perform any computation; it just builds up a symbolic representation of your computation. It will typically define one or more placeholder objects that represent inputs to the computational graph.
2. Run the computational graph many times. Each time the graph is run (e.g. for one gradient descent step) you specify which parts of the graph you want to compute, and pass a feed_dict dictionary that gives concrete values to any placeholders in the graph (see the minimal sketch below).

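To make the two-phase pattern concrete before the longer flatten example below, here is a minimal sketch (assuming a TensorFlow 1.x installation; the graph just computes x + 1):

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Phase 1: build the graph. Nothing is computed yet.
x = tf.placeholder(tf.float32)  # symbolic input
y = x + 1.0                     # symbolic output

# Phase 2: run the graph, feeding a concrete value for the placeholder.
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: np.array([1.0, 2.0])}))  # [2. 3.]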

The new paradigm in TensorFlow 2.0


Now, with TensorFlow 2.0, we can simply adopt a functional form that is more Pythonic and similar in spirit to PyTorch and direct NumPy operations. This replaces the two-step paradigm built around computation graphs, making it (among other things) easier to debug TF code. You can read more details at https://www.tensorflow.org/guide/eager.

The main difference between the TF 1.x and 2.0 approaches is that the 2.0 approach doesn't make use of tf.Session, tf.run, placeholder, or feed_dict. For more details on what's different between the two versions and how to convert between them, check out the official migration guide: https://www.tensorflow.org/alpha/guide/migration_guide

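As a quick illustration of the eager style, here is a minimal sketch (assuming a TensorFlow 2.x installation): operations execute immediately and return concrete values, with no session, placeholder, or feed_dict involved.

import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x (eager execution is on by default)

x = tf.constant(np.array([1.0, 2.0]))
y = x + 1.0           # executes immediately, like a NumPy operation
print(y.numpy())      # [2. 3.]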

A simple example: the flatten function

TF 1.x

import numpy as np
import tensorflow as tf  # TensorFlow 1.x

# Device on which to place Tensors, e.g. '/cpu:0' or '/gpu:0'.
device = '/cpu:0'

def flatten(x):
    """
    Input:
    - TensorFlow Tensor of shape (N, D1, ..., DM)

    Output:
    - TensorFlow Tensor of shape (N, D1 * ... * DM)
    """
    N = tf.shape(x)[0]
    return tf.reshape(x, (N, -1))

def test_flatten():
    # Clear the current TensorFlow graph.
    tf.reset_default_graph()

    # Stage I: Define the TensorFlow graph describing our computation.
    # In this case the computation is trivial: we just want to flatten
    # a Tensor using the flatten function defined above.

    # Our computation will have a single input, x. We don't know its
    # value yet, so we define a placeholder which will hold the value
    # when the graph is run. We then pass this placeholder Tensor to
    # the flatten function; this gives us a new Tensor which will hold
    # a flattened view of x when the graph is run. The tf.device
    # context manager tells TensorFlow whether to place these Tensors
    # on CPU or GPU.
    with tf.device(device):
        x = tf.placeholder(tf.float32)
        x_flat = flatten(x)

    # At this point we have just built the graph describing our computation,
    # but we haven't actually computed anything yet. If we print x and x_flat
    # we see that they don't hold any data; they are just TensorFlow Tensors
    # representing values that will be computed when the graph is run.
    print('x: ', type(x), x)
    print('x_flat: ', type(x_flat), x_flat)
    print()

    # We need to use a TensorFlow Session object to actually run the graph.
    with tf.Session() as sess:
        # Construct concrete values of the input data x using numpy.
        x_np = np.arange(24).reshape((2, 3, 4))
        print('x_np:\n', x_np, '\n')

        # Run our computational graph to compute a concrete output value.
        # The first argument to sess.run tells TensorFlow which Tensor
        # we want it to compute the value of; the feed_dict specifies
        # values to plug into all placeholder nodes in the graph. The
        # resulting value of x_flat is returned from sess.run as a
        # numpy array.
        x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
        print('x_flat_np:\n', x_flat_np, '\n')

        # We can reuse the same graph to perform the same computation
        # with different input data.
        x_np = np.arange(12).reshape((2, 3, 2))
        print('x_np:\n', x_np, '\n')
        x_flat_np = sess.run(x_flat, feed_dict={x: x_np})
        print('x_flat_np:\n', x_flat_np)

test_flatten()
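If you need to run this graph-and-session style of code under a TensorFlow 2.x installation, the tf.compat.v1 module keeps the old API available. A minimal sketch of the same flatten computation (assuming TensorFlow 2.x):

import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x

tf.compat.v1.disable_eager_execution()  # restore graph-and-session semantics

x = tf.compat.v1.placeholder(tf.float32)
x_flat = tf.reshape(x, (tf.shape(x)[0], -1))

with tf.compat.v1.Session() as sess:
    out = sess.run(x_flat, feed_dict={x: np.arange(24).reshape((2, 3, 4))})
    print(out.shape)  # (2, 12)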

TF 2.0

import numpy as np
import tensorflow as tf  # TensorFlow 2.x: eager execution is enabled by default

def flatten(x):
    """
    Input:
    - TensorFlow Tensor of shape (N, D1, ..., DM)

    Output:
    - TensorFlow Tensor of shape (N, D1 * ... * DM)
    """
    N = tf.shape(x)[0]
    return tf.reshape(x, (N, -1))

def test_flatten():
    # Construct concrete values of the input data x using numpy.
    x_np = np.arange(24).reshape((2, 3, 4))
    print('x_np:\n', x_np, '\n')

    # Compute a concrete output value directly; no graph, session,
    # or placeholders are needed.
    x_flat_np = flatten(x_np)
    print('x_flat_np:\n', x_flat_np, '\n')

test_flatten()
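If you still want the performance benefits of a graph in TensorFlow 2.0, tf.function traces a Python function into a graph while keeping the eager-style call site. A minimal sketch reusing the flatten logic above:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x

@tf.function  # traces the function into a graph on its first call
def flatten(x):
    N = tf.shape(x)[0]
    return tf.reshape(x, (N, -1))

x_np = np.arange(24).reshape((2, 3, 4))
print(flatten(x_np).shape)  # (2, 12)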

