# TensorFlow Guide (1): Getting Started with TensorFlow

http://blog.csdn.net/u011239443/article/details/79066094 TensorFlow is Google's open-source deep-learning library. No lengthy introduction here; readers preparing to learn TensorFlow will no doubt look it up themselves. This series will try to stay away from deep-learning theory, but will point to related posts and other material for readers who want the background.

TensorFlow first builds a computation graph from your code; the data then flows through that graph.

Tensor: a tensor can simply be thought of as a multi-dimensional array; it is the basic data structure in TensorFlow. Flow: quite literally, tensors are transformed in a way that resembles data flowing through the graph.

# 2. First Steps

```python
import tensorflow as tf

x = tf.Variable(3, name="x")
y = tf.Variable(4, name="y")
f = x*x*y + y + 2
```

```python
init = tf.global_variables_initializer()  # prepare to initialize all global variables, i.e. x and y

with tf.Session() as sess:
    init.run()         # run the initialization
    result = f.eval()  # evaluate f
    print(result)      # 42
```

# 3. The Lifecycle of Node Values

Each call to `eval()` runs the graph from scratch, and node values are dropped between runs, so `w` and `x` are computed twice here:

```python
w = tf.constant(3)
x = w + 2
y = x + 5
z = x * 3

with tf.Session() as sess:
    print(y.eval())  # 10
    print(z.eval())  # 15
```

To evaluate `y` and `z` in a single graph run, so that `w` and `x` are computed only once, ask the session for both at the same time:

```python
with tf.Session() as sess:
    y_val, z_val = sess.run([y, z])
    print(y_val)  # 10
    print(z_val)  # 15
```

# 4. Implementing Gradient Descent

## 4.1 Computing the Gradient by Hand

```python
# -*- coding: utf-8 -*-
import numpy as np
import tensorflow as tf
from sklearn.datasets import fetch_california_housing
from sklearn.preprocessing import StandardScaler

n_epochs = 1000       # number of iterations
learning_rate = 0.01

housing = fetch_california_housing()
m, n = housing.data.shape
```

StandardScaler() standardizes each feature to zero mean and unit variance; see: http://blog.csdn.net/u011239443/article/details/76360294#t2
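StandardScaler fits the per-feature mean and standard deviation, then maps each feature to (x − mean) / std. A minimal NumPy sketch of the same computation (toy data, not the housing set):

```python
import numpy as np

data = np.array([[1.0, 10.0],
                 [2.0, 20.0],
                 [3.0, 30.0]])

mean = data.mean(axis=0)      # per-feature mean
std = data.std(axis=0)        # per-feature standard deviation
scaled = (data - mean) / std  # what StandardScaler's fit_transform returns

print(scaled.mean(axis=0))                    # ~[0. 0.]: zero mean per feature
print(np.allclose(scaled.std(axis=0), 1.0))   # True: unit variance per feature
```

Standardizing matters for gradient descent: features on very different scales make the loss surface elongated and slow convergence.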

```python
scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]

X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32)
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32)
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0))  # random initialization
y_pred = tf.matmul(X, theta)
error = y_pred - y
mse = tf.reduce_mean(tf.square(error))

# gradient of the MSE with respect to theta, computed by hand
gradients = 2/m * tf.matmul(tf.transpose(X), error)
training_op = tf.assign(theta, theta - learning_rate * gradients)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)
```
```
Epoch 0 MSE = 6.46781
Epoch 100 MSE = 0.664388
Epoch 200 MSE = 0.543974
Epoch 300 MSE = 0.536818
Epoch 400 MSE = 0.534512
Epoch 500 MSE = 0.532798
Epoch 600 MSE = 0.531383
Epoch 700 MSE = 0.530206
Epoch 800 MSE = 0.529225
Epoch 900 MSE = 0.528407
```
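The hand-coded step relies on the closed-form gradient of the MSE, (2/m)·Xᵀ(Xθ − y). A quick NumPy check (hypothetical random data) comparing it against a central finite-difference estimate:

```python
import numpy as np

rng = np.random.RandomState(0)
m, n = 50, 4
X = np.c_[np.ones((m, 1)), rng.randn(m, n)]  # bias column + features
y = rng.randn(m, 1)
theta = rng.randn(n + 1, 1)

def mse(t):
    return np.mean((X @ t - y) ** 2)

# closed-form gradient, as used in the TF graph
grad = 2 / m * X.T @ (X @ theta - y)

# central finite-difference estimate of the same gradient
eps = 1e-6
num_grad = np.zeros_like(theta)
for i in range(theta.size):
    e = np.zeros_like(theta)
    e[i] = eps
    num_grad[i] = (mse(theta + e) - mse(theta - e)) / (2 * eps)

print(np.allclose(grad, num_grad, atol=1e-5))  # True
```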

## 4.2 Using a Gradient Descent Optimizer

Instead of deriving `gradients = 2/m * tf.matmul(tf.transpose(X), error)` and the update by hand, we can let TensorFlow differentiate the loss for us: replace those lines with an optimizer, which builds the training op itself.

```python
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
```
```
Epoch 0 MSE = 7.25867
Epoch 100 MSE = 0.694411
Epoch 200 MSE = 0.577526
Epoch 300 MSE = 0.56535
Epoch 400 MSE = 0.55735
Epoch 500 MSE = 0.551041
Epoch 600 MSE = 0.54601
Epoch 700 MSE = 0.541978
Epoch 800 MSE = 0.538734
Epoch 900 MSE = 0.536115
```
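What the training loop does can be mirrored in plain NumPy on a small hypothetical regression problem; the MSE falls over the epochs just as in the logs above:

```python
import numpy as np

# Hypothetical small regression problem standing in for the housing data.
rng = np.random.RandomState(42)
m, n = 200, 3
X = np.c_[np.ones((m, 1)), rng.randn(m, n)]          # bias column + 3 features
true_theta = np.array([[4.0], [3.0], [-2.0], [0.5]])
y = X @ true_theta + 0.1 * rng.randn(m, 1)           # linear targets + noise

theta = rng.uniform(-1.0, 1.0, size=(n + 1, 1))      # random init, as in the TF code
learning_rate = 0.01

def mse(t):
    return float(np.mean((X @ t - y) ** 2))

mse_start = mse(theta)
for epoch in range(1000):
    error = X @ theta - y
    gradients = 2 / m * X.T @ error              # gradient of the MSE
    theta = theta - learning_rate * gradients    # one gradient-descent step
mse_end = mse(theta)

print(mse_start > mse_end)  # True: the MSE falls over training
```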

# 5. Mini-batch Gradient Descent

```python
X = tf.placeholder(tf.float32, (None, n + 1), name='X')
y = tf.placeholder(tf.float32, (None, 1), name='y')
batch_size = 100
n_batches = int(np.ceil(m / batch_size))

def fetch_batch(epoch, batch_index, batch_size):
    np.random.seed(epoch * n_batches + batch_index)  # a different seed on every call
    indices = np.random.randint(m, size=batch_size)  # draw batch_size integers from [0, m)
    X_batch = scaled_housing_data_plus_bias[indices]
    y_batch = housing.target.reshape(-1, 1)[indices]
    return X_batch, y_batch

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            # with placeholders, mse needs a feed_dict; evaluate it on the full data set
            print("Epoch", epoch, "MSE =", mse.eval(
                feed_dict={X: scaled_housing_data_plus_bias,
                           y: housing.target.reshape(-1, 1)}))

        for batch_index in range(n_batches):
            X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
```
```
Epoch 0 MSE = 12.6052
Epoch 100 MSE = 0.524321
Epoch 200 MSE = 0.524321
Epoch 300 MSE = 0.524321
Epoch 400 MSE = 0.524321
Epoch 500 MSE = 0.524321
Epoch 600 MSE = 0.524321
Epoch 700 MSE = 0.524321
Epoch 800 MSE = 0.524321
Epoch 900 MSE = 0.524321
```
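The seeding in `fetch_batch` can be checked on its own: for a given (epoch, batch_index) pair the sampled indices are reproducible, while a different pair draws different indices. A NumPy sketch with values matching the housing setup:

```python
import numpy as np

m, n_batches, batch_size = 20640, 207, 100  # housing rows, ceil(m/100), batch size

def batch_indices(epoch, batch_index):
    np.random.seed(epoch * n_batches + batch_index)  # same seed -> same batch
    return np.random.randint(m, size=batch_size)

a = batch_indices(epoch=3, batch_index=7)
b = batch_indices(epoch=3, batch_index=7)
c = batch_indices(epoch=3, batch_index=8)

print((a == b).all())  # True: reproducible for the same seed
print((a == c).all())  # False: a different batch draws different indices
```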

# 6. Saving & Restoring a Model

```python
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval(
                feed_dict={X: scaled_housing_data_plus_bias,
                           y: housing.target.reshape(-1, 1)}))

        for batch_index in range(n_batches):
            X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})

    save_path = saver.save(sess, './my_model_final.ckpt')
```

```python
with tf.Session() as sess:
    saver.restore(sess, './my_model_final.ckpt')
    print(mse.eval(feed_dict={X: scaled_housing_data_plus_bias,
                              y: housing.target.reshape(-1, 1)}))
```
```
INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt
0.524321
```

# 7. Visualizing Training with TensorBoard

TensorFlow can log the training process as it runs, and TensorBoard visualizes training from those logs. First, build a unique log-directory name from the current time:

```python
from datetime import datetime

now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logs = 'tf_logs'
logdir = "{}/run-{}".format(root_logs, now)
```
• `tf.summary.scalar` creates a summary op that records a scalar value (here the MSE) for TensorBoard.
• `tf.summary.FileWriter` creates an event file in the given directory and appends summaries and events to it. The class updates the file asynchronously, so the training loop can add data directly without being slowed down.
```python
mse_summary = tf.summary.scalar('MSE', mse)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(n_epochs):
        for batch_index in range(n_batches):
            X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
            if batch_index % 10 == 0:
                summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
                step = epoch * n_batches + batch_index
                file_writer.add_summary(summary_str, step)  # write the summary to the event file
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})

file_writer.close()
```

Then start TensorBoard pointing at the log directory and open http://localhost:6006 in a browser:

`tensorboard --logdir tf_logs/`

# 8. Name Scopes

```python
with tf.name_scope("loss") as scope:
    error = y_pred - y
    mse = tf.reduce_mean(tf.square(error), name="mse")
```

```python
print(error.op.name)
print(mse.op.name)
```
```
loss/sub
loss/mse
```

# 9. Modularization

```python
def reset_graph(seed=42):
    # helper: clear the default graph and fix the random seeds
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

reset_graph()

n_features = 3
X = tf.placeholder(tf.float32, (None, n_features), name='X')
w1 = tf.Variable(tf.random_normal((n_features, 1)), name='w1')
w2 = tf.Variable(tf.random_normal((n_features, 1)), name='w2')
b1 = tf.Variable(0.0, name='b1')
b2 = tf.Variable(0.0, name='b2')

z1 = tf.add(tf.matmul(X, w1), b1, name='z1')
z2 = tf.add(tf.matmul(X, w2), b2, name='z2')

relu1 = tf.maximum(0.0, z1, name='relu1')
relu2 = tf.maximum(0.0, z2, name='relu2')

file_writer = tf.summary.FileWriter("logs/relu0", tf.get_default_graph())
```

```python
reset_graph()

def relu(X):
    with tf.name_scope('relu'):
        w_shape = (int(X.get_shape()[1]), 1)
        w = tf.Variable(tf.random_normal(w_shape), name='w')
        b = tf.Variable(0.0, name='b')
        z = tf.add(tf.matmul(X, w), b, name='z')
        return tf.maximum(0.0, z, name='max')

n_features = 3
X = tf.placeholder(tf.float32, (None, n_features), name='X')
relus = [relu(X) for i in range(10)]

file_writer = tf.summary.FileWriter("logs/relu2", tf.get_default_graph())
file_writer.close()
```

TensorFlow makes the scope names unique automatically, so the ten calls produce scopes `relu`, `relu_1`, ..., `relu_9`.

# 10. Shared Variables

```python
reset_graph()

def relu(X):
    with tf.variable_scope('relu', reuse=True):
        threshold = tf.get_variable('threshold')  # reuse the variable created below
        w_shape = (int(X.get_shape()[1]), 1)
        w = tf.Variable(tf.random_normal(w_shape), name='w')
        b = tf.Variable(0.0, name='b')
        z = tf.add(tf.matmul(X, w), b, name='z')
        return tf.maximum(threshold, z, name='max')

n_features = 3
X = tf.placeholder(tf.float32, (None, n_features), name='X')
with tf.variable_scope('relu'):
    # create the shared variable once, before any call to relu()
    threshold = tf.get_variable('threshold', shape=(),
                                initializer=tf.constant_initializer(0.0))
relus = [relu(X) for i in range(10)]

file_writer = tf.summary.FileWriter("logs/relu3", tf.get_default_graph())
file_writer.close()
```
