# Preface

The `backward()` method you call on a tensor simply dispatches to `torch.autograd.backward()`. In a typical training step we feed data through the network, compute a scalar `loss`, and call `loss.backward()` — and that call is, at bottom, exactly `torch.autograd.backward(loss)`.
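As a minimal sketch of that equivalence (assuming a recent PyTorch), `loss.backward()` and `torch.autograd.backward(loss)` populate the same gradients:

```python
import torch

# toy "network": loss = sum(x_i^2), so d(loss)/dx = 2x
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
loss = (x ** 2).sum()
loss.backward()                            # shorthand for the call below
print(x.grad)                              # tensor([2., 4., 6.])

y = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
torch.autograd.backward((y ** 2).sum())    # the function backward() dispatches to
print(y.grad)                              # tensor([2., 4., 6.])
```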

# Main Text

## Fake Backward

The old (pre-0.4.0) neural style tutorial gave each loss Module a hand-written `backward` method that the training loop had to call itself:

```python
class ContentLoss(nn.Module):

    def __init__(self, target, weight):
        super(ContentLoss, self).__init__()
        # we 'detach' the target content from the tree used
        # to dynamically compute the gradient: this is a stated value,
        # not a variable. Otherwise the forward method of the criterion
        # will throw an error.
        self.target = target.detach() * weight
        self.weight = weight
        self.criterion = nn.MSELoss()

    def forward(self, input):
        self.loss = self.criterion(input * self.weight, self.target)
        self.output = input
        return self.output

    def backward(self, retain_graph=True):
        print('ContentLoss Backward works')
        self.loss.backward(retain_graph=retain_graph)
        return self.loss

...

# invoke the hand-written backward; see the links at the end for the full code
for sl in style_losses:
    style_score += sl.backward()
for cl in content_losses:
    content_score += cl.backward()
```

The current 0.4.0 tutorial drops this fake `backward` entirely: `forward` just records the loss, and a single `loss.backward()` on the summed score drives autograd:

```python
class ContentLoss(nn.Module):

    def __init__(self, target):
        super(ContentLoss, self).__init__()
        # we 'detach' the target content from the tree used
        # to dynamically compute the gradient: this is a stated value,
        # not a variable. Otherwise the forward method of the criterion
        # will throw an error.
        self.target = target.detach()

    def forward(self, input):
        self.loss = F.mse_loss(input, self.target)
        return input

...

# driver code; see the latest 0.4.0 style-transfer tutorial on the official site
for sl in style_losses:
    style_score += sl.loss
for cl in content_losses:
    content_score += cl.loss
loss = style_score + content_score
loss.backward()
```
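Why call the first version "fake"? A `backward` method defined on an `nn.Module` is invisible to autograd — it is just an ordinary method the training loop must invoke by hand. A small demonstration (the `Square` class is my own illustration, not from the tutorial):

```python
import torch
import torch.nn as nn

class Square(nn.Module):
    def forward(self, x):
        return x ** 2

    def backward(self, *args):
        # never invoked by autograd -- only by explicit user code
        raise RuntimeError("hand-written Module.backward was called")

x = torch.tensor(3.0, requires_grad=True)
y = Square()(x)
y.backward()        # autograd differentiates the recorded graph; no error is raised
print(x.grad)       # tensor(6.)
```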

## Real Backward

A real `backward` is defined on `torch.autograd.Function`, with both passes written as static methods:

```python
class MyReLU(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
    torch.autograd.Function and implementing the forward and backward passes
    which operate on Tensors.
    """

    @staticmethod
    def forward(ctx, x):
        """
        In the forward pass we receive a context object and a Tensor containing the
        input; we must return a Tensor containing the output, and we can use the
        context object to cache objects for use in the backward pass.
        """
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive the context object and a Tensor containing
        the gradient of the loss with respect to the output produced during the
        forward pass. We can retrieve cached data from the context object, and must
        compute and return the gradient of the loss with respect to the input to the
        forward function.
        """
        x, = ctx.saved_tensors
        grad_x = grad_output.clone()
        grad_x[x < 0] = 0
        return grad_x
```
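Static-method Functions are applied via `.apply`, never instantiated and called. A quick sanity check (the class is restated compactly here so the snippet runs standalone):

```python
import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        grad_x = grad_output.clone()
        grad_x[x < 0] = 0
        return grad_x

x = torch.tensor([-1.0, 0.5, 2.0, -3.0], requires_grad=True)
y = MyReLU.apply(x)      # note: .apply, not MyReLU()(x)
y.sum().backward()
print(y)                 # values 0.0, 0.5, 2.0, 0.0
print(x.grad)            # tensor([0., 1., 1., 0.]) -- 1 where x >= 0, 0 where x < 0
```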

An older forum pattern (the legacy, non-static Function API) wraps a Function inside a Module; note that it is the wrapper's `forward`, not `backward`, that runs the Function:

```python
class my_function(Function):
    def forward(self, input, params):
        self.saved_for_backward = [input, params]
        # output = <operation on input and params, omitted here>
        return output

    def backward(self, grad_output):
        input, params = self.saved_for_backward
        # grad_input = <derivative of the output w.r.t. input> * grad_output
        return grad_input

# then wrap it in a Module
class my_module(nn.Module):
    def __init__(self, ...):
        super(my_module, self).__init__()
        self.params = ...  # initialize some parameters ('params', not 'parameters',
                           # to avoid shadowing nn.Module.parameters())

    def forward(self, input):
        # legacy Functions are instantiated, then called;
        # the Function you defined above runs here!
        output = my_function()(input, self.params)
        return output
```
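In current PyTorch the same pattern uses static methods plus `.apply` inside the wrapper's `forward`. A concrete, hypothetical instance (a learnable elementwise scale — the names are mine, not from the original post):

```python
import torch
import torch.nn as nn

class ScaleFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight):
        ctx.save_for_backward(input, weight)
        return input * weight

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        # return one gradient per forward argument, in the same order
        return grad_output * weight, (grad_output * input).sum()

class Scale(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.tensor(2.0))

    def forward(self, input):
        # the Module's forward calls the Function; autograd finds
        # ScaleFn.backward on its own during loss.backward()
        return ScaleFn.apply(input, self.weight)

m = Scale()
x = torch.tensor([1.0, 2.0], requires_grad=True)
m(x).sum().backward()
print(x.grad)          # tensor([2., 2.])  = weight, broadcast to each input
print(m.weight.grad)   # tensor(3.)       = sum of the inputs
```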

# Reference Links

https://discuss.pytorch.org/t/defining-backward-function-in-nn-module/5047

https://discuss.pytorch.org/t/whats-the-difference-between-torch-nn-functional-and-torch-nn/681

https://discuss.pytorch.org/t/difference-of-methods-between-torch-nn-and-functional/1076

https://discuss.pytorch.org/t/whats-the-difference-between-torch-nn-functional-and-torch-nn/681/4
