TensorFlow 2 Linear Fitting Tutorial

By 平凡的学生族 · Published 2019-11-04 · Column: 后端技术

A linear fitting example. Feel free to ask if anything is unclear; I log in to check the blog from time to time.

```python
import os

import tensorflow as tf
import numpy as np

os.environ['CUDA_VISIBLE_DEVICES'] = "0"  # Make only GPU 0 visible to TensorFlow.
tf.debugging.set_log_device_placement(True)  # Log which device each op is placed on.

x0 = np.arange(100, dtype=np.float64)  # Inputs: 0, 1, ..., 99.
y0 = 2.0 * x0 + 3.0                    # Targets from the ground-truth line y = 2x + 3.


# Trainable parameters, randomly initialized with Glorot uniform.
initializer = tf.initializers.GlorotUniform()
W = tf.Variable(initializer(shape=(1,), dtype=tf.float64), name="W")
b = tf.Variable(initializer(shape=(1,), dtype=tf.float64), name="b")

optimizer = tf.optimizers.Adam()  # Default learning rate: 0.001.


def train_step(x_batch, y_batch, epoch, batch_i):
    """
    Note that we should NOT compute the prediction outside the tape:

        y_predict = W * x_batch + b
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.math.pow(y_predict - y_batch, 2))

    That raises the "No gradients provided for any variable" error. The
    tape only records operations executed inside its context, so the
    dependence of y_predict on W and b is never traced and tape.gradient
    returns None for both variables. See the minimal repro after this listing.
    """
    with tf.GradientTape() as tape:
        y_predict = W * x_batch + b
        loss = tf.reduce_mean(tf.math.pow(y_predict - y_batch, 2))  # Mean squared error.
    # print("loss", loss)
    train_variables = [W, b]
    gradients = tape.gradient(loss, train_variables)
    # print("grads", gradients)
    optimizer.apply_gradients(zip(gradients, train_variables))  # One Adam update step.
    print(epoch, batch_i, W, b)


def fit():
    for epoch in range(10000):
        # Iterate over the 100 samples in mini-batches of 10.
        for i in range(0, 100, 10):
            x_batch = x0[i:i + 10]
            y_batch = y0[i:i + 10]
            train_step(x_batch, y_batch, epoch, i)

fit()
```
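
To make the docstring's warning concrete, here is a minimal repro (not part of the original listing; the names `w` and `x` are illustrative): a `tf.GradientTape` only traces operations executed inside its `with` block, so a tensor computed beforehand carries no recorded dependence on the variables.

```python
import tensorflow as tf

w = tf.Variable(2.0)
x = tf.constant(3.0)

y_outside = w * x  # Computed BEFORE the tape starts recording.
with tf.GradientTape() as tape:
    loss = (y_outside - 1.0) ** 2  # Only this subtract/square is traced.
print(tape.gradient(loss, w))  # None: the tape never saw w participate.

with tf.GradientTape() as tape:
    y_inside = w * x  # Recorded, so the path from w to loss is traceable.
    loss = (y_inside - 1.0) ** 2
print(tape.gradient(loss, w))  # tf.Tensor(30.0, ...) = 2 * (2*3 - 1) * 3
```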
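
For comparison, the same line can be fitted with Keras' built-in training loop instead of a manual `GradientTape` step. This is only a sketch of an equivalent setup, not code from the original post; the hyperparameters (`learning_rate=0.1`, 500 epochs) are assumptions picked so the toy problem converges quickly.

```python
import numpy as np
import tensorflow as tf

x0 = np.arange(100, dtype=np.float64).reshape(-1, 1)  # (samples, features)
y0 = 2.0 * x0 + 3.0

# A single Dense(1) unit is exactly the model y = W * x + b.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1), loss="mse")
model.fit(x0, y0, batch_size=10, epochs=500, verbose=0)

kernel, bias = model.layers[0].get_weights()
print(kernel.ravel(), bias)  # Should approach [2.0] and [3.0].
```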