(Repro repository with the bug: https://github.com/zihualiu/pytorch_linear_bug)
I recently ran into a strange bug in PyTorch and hope you can help me. One of my networks has a fully connected layer called net.fc_h1, and during training I noticed this layer was outputting NaNs before the activation. So I wrapped it in pdb, hoping that would tell me something. Here is the log:
# in network declaration
# (assumes the usual imports: import numpy as np; import pdb;
#  import torch.nn.functional as F)
def forward(self, obs):
    z1 = self.fc_h1(obs)
    # break into the debugger as soon as the pre-activation contains a NaN
    if np.isnan(np.sum(z1.data.numpy())):
        pdb.set_trace()
    h1 = F.tanh(z1)
    ...
The NaN was indeed caught, but inside pdb I noticed something striking: if you run the same operation again, the result is different:
(Pdb) z1.sum()
Variable containing:
nan
[torch.FloatTensor of size 1]
(Pdb) self.fc_h1(obs).sum()
Variable containing:
771.5120
[torch.FloatTensor of size 1]
When I check whether my input or weights contain NaNs, I get the following:
(Pdb) self.fc_h1.weight.max()
Variable containing:
 0.2482
[torch.FloatTensor of size 1]
(Pdb) self.fc_h1.weight.mean()
Variable containing:
1.00000e-03 *
1.7761
[torch.FloatTensor of size 1]
(Pdb) self.fc_h1.weight.min()
Variable containing:
-0.2504
[torch.FloatTensor of size 1]
(Pdb) obs.max()
Variable containing:
6.9884
[torch.FloatTensor of size 1]
(Pdb) obs.min()
Variable containing:
-6.7855
[torch.FloatTensor of size 1]
(Pdb) obs.mean()
Variable containing:
1.00000e-02 *
-1.5033
[torch.FloatTensor of size 1]
(Pdb) self.fc_h1.bias.max()
Variable containing:
0.2482
[torch.FloatTensor of size 1]
(Pdb) self.fc_h1.bias.mean()
Variable containing:
1.00000e-03 *
3.9104
[torch.FloatTensor of size 1]
(Pdb) self.fc_h1.bias.min()
Variable containing:
-0.2466
[torch.FloatTensor of size 1]
The input, weights, and bias all seem to be in good shape. If everything is fine, how can a linear layer produce NaNs?
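As a side note on the check above: summary statistics such as max()/mean()/min() are not a watertight NaN test, because reductions can treat NaN inconsistently. An elementwise scan is more reliable. Below is a minimal numpy sketch of such a scan; `check_finite` and the random `weight`/`obs` arrays are hypothetical stand-ins for `self.fc_h1.weight.data.numpy()` and `obs.data.numpy()` (the shapes are made up for illustration):

```python
import numpy as np

def check_finite(name, arr):
    # Scan every element explicitly: a handful of NaNs can hide behind
    # summary reductions, depending on how each reduction handles NaN.
    n_nan = int(np.isnan(arr).sum())
    n_inf = int(np.isinf(arr).sum())
    print(f"{name}: {n_nan} NaN, {n_inf} Inf out of {arr.size} elements")
    return n_nan == 0 and n_inf == 0

# Hypothetical stand-ins for the layer's weight and the input batch
rng = np.random.default_rng(0)
weight = rng.uniform(-0.25, 0.25, size=(170, 11))
obs = rng.uniform(-7.0, 7.0, size=(4096, 11))

print(check_finite("weight", weight))
print(check_finite("obs", obs))
```

In the actual torch session, `torch.isnan(self.fc_h1.weight).any()` (available in modern PyTorch) plays the same role.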
Edit: even stranger. I tried running the forward pass again, and interestingly, repeated forward passes give me different results:
(Pdb) self.fc_h1(obs)
Variable containing:
2.2321e-01 -6.2586e-01 -1.9004e-01 ... -4.2521e-01 8.6175e-01 8.6866e-01
-7.2699e-02 7.8234e-01 -5.8862e-01 ... 2.4041e-01 -1.7577e-01 6.9928e-01
-7.2699e-02 7.8234e-01 -5.8862e-01 ... 2.4041e-01 -1.7577e-01 6.9928e-01
... ⋱ ...
-6.4686e-02 -1.5819e+00 5.7410e-01 ... -6.4127e-01 5.2837e-01 -1.3166e+00
3.9214e-01 2.8727e-01 -5.5699e-01 ... -8.3164e-01 -5.1795e-01 -3.7637e-01
-9.6061e-01 1.4780e-01 5.3614e-02 ... -1.5042e+00 6.0759e-02 -3.6862e-01
[torch.FloatTensor of size 4096x170]
(Pdb) self.fc_h1(obs)
Variable containing:
2.2321e-01 -6.2586e-01 -1.9004e-01 ... -4.2521e-01 8.6175e-01 8.6866e-01
-7.2699e-02 7.8234e-01 -5.8862e-01 ... 2.4041e-01 -1.7577e-01 6.9928e-01
-7.2699e-02 7.8234e-01 -5.8862e-01 ... 2.4041e-01 -1.7577e-01 6.9928e-01
... ⋱ ...
nan nan nan ... nan 5.2837e-01 -1.3166e+00
nan nan nan ... nan -5.1795e-01 -3.7637e-01
nan nan nan ... nan 6.0759e-02 -3.6862e-01
[torch.FloatTensor of size 4096x170]
I am not using a GPU either, only the CPU.
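The non-determinism shown above can be checked systematically: a pure CPU linear layer on fixed inputs should be exactly reproducible, so any bit-level mismatch between repeated runs points at uninitialized memory or a threading/BLAS problem rather than at the inputs. In the torch session you would compare successive `self.fc_h1(obs)` outputs with `torch.equal`; below is a numpy sketch of the same repeat-and-compare idea, where the linear map `v @ W.T + b` and its shapes are made-up stand-ins for `fc_h1`:

```python
import numpy as np

def same_result_every_time(f, x, trials=5):
    # Run the same op repeatedly and bit-compare the outputs;
    # any mismatch means the computation is non-deterministic.
    ref = f(x)
    return all(np.array_equal(ref, f(x)) for _ in range(trials))

# Hypothetical stand-in for fc_h1: y = x @ W.T + b
rng = np.random.default_rng(0)
W = rng.uniform(-0.25, 0.25, size=(170, 11))
b = rng.uniform(-0.25, 0.25, size=170)
x = rng.uniform(-7.0, 7.0, size=(4096, 11))

print(same_result_every_time(lambda v: v @ W.T + b, x))
```

A healthy CPU matmul should print True every time; in the buggy session above, the equivalent torch check would fail.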
https://stackoverflow.com/questions/48558915