# TensorBoard Visualization

TensorBoard is exactly this kind of magical visualization aid for the "alchemy" of model training. It started out as TensorFlow's sidekick, but it also works very well with Pytorch. In fact, using TensorBoard with Pytorch is even simpler and more natural than using it with TensorFlow.

The general workflow for TensorBoard visualization in Pytorch is: create a SummaryWriter pointing at a log directory; write the content to be visualized with the writer's add_* methods (add_graph for the model structure, add_scalar for metric curves, add_histogram for parameter distributions, add_image for raw images, add_figure for matplotlib figures); then launch `tensorboard --logdir` on that directory and inspect the results in a browser.
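The workflow above can be sketched end to end with a dummy metric (a minimal, self-contained example; the temporary log directory and the fake loss values are placeholders for your own):

```python
import os
import tempfile
from torch.utils.tensorboard import SummaryWriter

# 1. Create a SummaryWriter pointing at a log directory
#    (a temporary directory here; any path works).
logdir = tempfile.mkdtemp()
writer = SummaryWriter(logdir)

# 2. During training, log whatever you want to inspect with the
#    writer's add_* methods; here, a dummy scalar metric.
for step in range(10):
    loss = 1.0 / (step + 1)
    writer.add_scalar("train/loss", loss, step)

# 3. Close the writer, then launch `tensorboard --logdir <logdir>`
#    and open http://localhost:6006/ in a browser.
writer.close()
```

Each call to a logging method appends a record to an event file under the log directory; TensorBoard just tails those files.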

## 1. Visualizing the Model Structure

```
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter
from torchkeras import Model, summary
```
```
class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5)
        self.dropout = nn.Dropout2d(p=0.1)
        self.adaptive_pool = nn.AdaptiveMaxPool2d((1, 1))
        self.flatten = nn.Flatten()
        self.linear1 = nn.Linear(64, 32)
        self.relu = nn.ReLU()
        self.linear2 = nn.Linear(32, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.conv1(x)
        x = self.pool(x)
        x = self.conv2(x)
        x = self.pool(x)
        x = self.dropout(x)
        x = self.adaptive_pool(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.relu(x)
        x = self.linear2(x)
        y = self.sigmoid(x)
        return y

net = Net()
print(net)
```
```
Net(
  (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
  (dropout): Dropout2d(p=0.1, inplace=False)
  (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
  (flatten): Flatten()
  (linear1): Linear(in_features=64, out_features=32, bias=True)
  (relu): ReLU()
  (linear2): Linear(in_features=32, out_features=1, bias=True)
  (sigmoid): Sigmoid()
)
```
```
summary(net, input_shape=(3, 32, 32))
```
```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 32, 30, 30]             896
         MaxPool2d-2           [-1, 32, 15, 15]               0
            Conv2d-3           [-1, 64, 11, 11]          51,264
         MaxPool2d-4             [-1, 64, 5, 5]               0
         Dropout2d-5             [-1, 64, 5, 5]               0
 AdaptiveMaxPool2d-6             [-1, 64, 1, 1]               0
           Flatten-7                   [-1, 64]               0
            Linear-8                   [-1, 32]           2,080
              ReLU-9                   [-1, 32]               0
           Linear-10                    [-1, 1]              33
          Sigmoid-11                    [-1, 1]               0
================================================================
Total params: 54,273
Trainable params: 54,273
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.011719
Forward/backward pass size (MB): 0.359634
Params size (MB): 0.207035
Estimated Total Size (MB): 0.578388
----------------------------------------------------------------
```
```
writer = SummaryWriter('./data/tensorboard')
writer.add_graph(net, input_to_model=torch.rand(1, 3, 32, 32))
writer.close()
```
```
%load_ext tensorboard
# %tensorboard --logdir ./data/tensorboard
```
```
from tensorboard import notebook

# List the running tensorboard instances
notebook.list()
```
```
# Launch a tensorboard instance
notebook.start("--logdir ./data/tensorboard")
# Equivalent to running `tensorboard --logdir ./data/tensorboard` on the command line;
# open http://localhost:6006/ in a browser to view the results
```

## 2. Visualizing Metric Changes

```
import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter

# Find the minimum of f(x) = a*x**2 + b*x + c
x = torch.tensor(0.0, requires_grad=True)  # x needs gradients
a = torch.tensor(1.0)
b = torch.tensor(-2.0)
c = torch.tensor(1.0)

optimizer = torch.optim.SGD(params=[x], lr=0.01)

def f(x):
    result = a * torch.pow(x, 2) + b * x + c
    return result

writer = SummaryWriter('./data/tensorboard')
for i in range(500):
    optimizer.zero_grad()
    y = f(x)
    y.backward()
    optimizer.step()
    writer.add_scalar("x", x.item(), i)  # log the current value of x
    writer.add_scalar("y", y.item(), i)  # log the current value of y
writer.close()

print("y=", f(x).data, ";", "x=", x.data)
```
```
y= tensor(0.) ; x= tensor(1.0000)
```
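When several related metrics should share one chart (for example, training loss versus validation loss), `add_scalars` plots one curve per key of a dict under a common tag. A minimal sketch, assuming dummy loss values and a temporary log directory in place of `./data/tensorboard`:

```python
import os
import tempfile
from torch.utils.tensorboard import SummaryWriter

logdir = tempfile.mkdtemp()  # stand-in for './data/tensorboard'
writer = SummaryWriter(logdir)

# add_scalars draws one curve per dict key, all in the same chart
for step in range(100):
    train_loss = 1.0 / (step + 1)
    valid_loss = 1.2 / (step + 1)
    writer.add_scalars("loss", {"train": train_loss, "valid": valid_loss}, step)
writer.close()
```

Note that `add_scalars` writes each curve through its own sub-writer, so it creates one subdirectory per key under the log directory.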

## 3. Visualizing Parameter Distributions

```
import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter

# Create normally distributed tensors to simulate parameter matrices
def norm(mean, std):
    t = std * torch.randn((100, 20)) + mean
    return t

writer = SummaryWriter('./data/tensorboard')
for step, mean in enumerate(range(-10, 10, 1)):
    w = norm(mean, 1)
    writer.add_histogram("w", w, step)  # log the distribution of w at this step
    writer.flush()
writer.close()
```
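In a real training run you would log the distributions of the model's actual parameters rather than simulated ones, typically once per epoch via `named_parameters()`. A sketch under stated assumptions: the tiny `nn.Sequential` model, the fake "update" loop, and the temporary log directory are all placeholders for your own training setup.

```python
import os
import tempfile
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter

# A tiny stand-in model; substitute your own network.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 1))

logdir = tempfile.mkdtemp()  # stand-in for './data/tensorboard'
writer = SummaryWriter(logdir)

for epoch in range(3):
    # Simulate one epoch of weight updates.
    with torch.no_grad():
        for p in model.parameters():
            p.add_(0.01 * torch.randn_like(p))
    # Log one histogram per named parameter for this epoch.
    for name, param in model.named_parameters():
        writer.add_histogram(name, param, epoch)
writer.close()
```

In the HISTOGRAMS tab each parameter then gets its own panel, with one slice per epoch, which makes drifting or collapsing weight distributions easy to spot.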

## 4. Visualizing Raw Images

```
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms, datasets

transform_train = transforms.Compose([transforms.ToTensor()])
transform_valid = transforms.Compose([transforms.ToTensor()])
```
```
ds_train = datasets.ImageFolder("./data/cifar2/train/",
            transform=transform_train, target_transform=lambda t: torch.tensor([t]).float())
ds_valid = datasets.ImageFolder("./data/cifar2/test/",
            transform=transform_valid, target_transform=lambda t: torch.tensor([t]).float())

print(ds_train.class_to_idx)

dl_train = DataLoader(ds_train, batch_size=50, shuffle=True, num_workers=3)
dl_valid = DataLoader(ds_valid, batch_size=50, shuffle=True, num_workers=3)

dl_train_iter = iter(dl_train)
images, labels = next(dl_train_iter)

# View just one image
writer = SummaryWriter('./data/tensorboard')
writer.add_image('images[0]', images[0])
writer.close()

# Stitch multiple images into one, separated by a black grid
writer = SummaryWriter('./data/tensorboard')
# create a grid of images
img_grid = torchvision.utils.make_grid(images)
writer.add_image('image_grid', img_grid)
writer.close()

# Write multiple images directly
writer = SummaryWriter('./data/tensorboard')
writer.add_images("images", images, global_step=0)
writer.close()
```
```
{'0_airplane': 0, '1_automobile': 1}
```

## 5. Visualizing Custom Figures

```
import torch
import torchvision
from torch import nn
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms, datasets

transform_train = transforms.Compose([transforms.ToTensor()])
transform_valid = transforms.Compose([transforms.ToTensor()])

ds_train = datasets.ImageFolder("./data/cifar2/train/",
            transform=transform_train, target_transform=lambda t: torch.tensor([t]).float())
ds_valid = datasets.ImageFolder("./data/cifar2/test/",
            transform=transform_valid, target_transform=lambda t: torch.tensor([t]).float())

print(ds_train.class_to_idx)
```
```
{'0_airplane': 0, '1_automobile': 1}
```
```
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
from matplotlib import pyplot as plt

figure = plt.figure(figsize=(8, 8))
for i in range(9):
    img, label = ds_train[i]
    img = img.permute(1, 2, 0)
    ax = plt.subplot(3, 3, i + 1)
    ax.imshow(img.numpy())
    ax.set_title("label = %d" % label.item())
    ax.set_xticks([])
    ax.set_yticks([])
plt.show()
```
```
writer = SummaryWriter('./data/tensorboard')
writer.add_figure('figure', figure, global_step=0)
writer.close()
```
