As an experiment, I am converting old code to PyTorch. Ultimately I will be running regressions on 10,000+ x 100 matrices, updating weights appropriately, and so on.

Trying to learn, I have been slowly scaling up toy examples. I hit a wall with the sample code below.
import torch
import torch.nn as nn
import torch.nn.functional as funct
from torch.autograd import Variable
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x_data = Variable( torch.Tensor( [ [1.0, 2.0], [2.0, 3.0], [3.0, 4.0] ] ),
                   requires_grad=True )
y_data = Variable( torch.Tensor( [ [2.0], [4.0], [6.0] ] ) )
w = Variable( torch.randn( 2, 1, requires_grad=True ) )
b = Variable( torch.randn( 1, 1, requires_grad=True ) )
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(2, 1)  ## 2 features per entry. 1 output

    def forward(self, x2, w2, b2):
        y_pred = x2 @ w2 + b2
        return y_pred
model = Model()
criterion = torch.nn.MSELoss( size_average=False )
optimizer = torch.optim.SGD( model.parameters(), lr=0.01 )
for epoch in range(10):
    y_pred = model(x_data, w, b)      # Get prediction
    loss = criterion(y_pred, y_data)  # Calc loss
    print(epoch, loss.data.item())    # Print loss
    optimizer.zero_grad()             # Zero gradient
    loss.backward()                   # Calculate gradients
    optimizer.step()                  # Update w, b

However, doing this, my loss is always the same, and investigating shows that my w and b never actually change. I am a bit at a loss about what is going on here.
Ultimately, I would like to be able to store the results of the "new" w and b, so I can compare them across iterations and datasets.
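(For reference, one quick way to see the problem the answer below describes is to check which tensors the optimizer actually receives. A minimal diagnostic sketch, assuming the code above has already run:

# w and b were created outside the module, so Model never registers them;
# model.parameters() only yields self.linear's own weight and bias.
print([p.shape for p in model.parameters()])    # torch.Size([1, 2]) and torch.Size([1])
print(any(p is w for p in model.parameters()))  # False: the optimizer never sees w
)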
Posted on 2019-04-01 05:25:43
To me it looks like a case of cargo cult programming.

Note that your Model class does not use self in forward, so it is effectively a "regular" (non-method) function, and model is entirely stateless. The simplest fix to your code is to make optimizer aware of w and b, by creating it as optimizer = torch.optim.SGD([w, b], lr=0.01). I would also rewrite model as a function:
import torch
import torch.nn as nn
# torch.autograd.Variable is roughly equivalent to requires_grad=True
# and has been deprecated since PyTorch 0.4; plain tensors now suffice
# your code gives no reason to have `requires_grad=True` on `x_data`
x_data = torch.tensor( [ [1.0, 2.0], [2.0, 3.0], [3.0, 4.0] ])
y_data = torch.tensor( [ [2.0], [4.0], [6.0] ] )
w = torch.randn( 2, 1, requires_grad=True )
b = torch.randn( 1, 1, requires_grad=True )
def model(x2, w2, b2):
    return x2 @ w2 + b2
criterion = torch.nn.MSELoss(reduction='sum')  # size_average=False is deprecated; reduction='sum' is equivalent
optimizer = torch.optim.SGD([w, b], lr=0.01 )
for epoch in range(10):
    y_pred = model(x_data, w, b)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

That being said, nn.Linear is built precisely to simplify this procedure. It automatically creates equivalents of your w and b, called self.weight and self.bias, respectively. Moreover, self.__call__(x) is equivalent to the forward definition of your Model, except that it returns x @ self.weight.t() + self.bias (nn.Linear stores the weight transposed, with shape (out_features, in_features)). In other words, you can also use the alternative code below:
import torch
import torch.nn as nn
x_data = torch.tensor( [ [1.0, 2.0], [2.0, 3.0], [3.0, 4.0] ] )
y_data = torch.tensor( [ [2.0], [4.0], [6.0] ] )
model = nn.Linear(2, 1)
criterion = torch.nn.MSELoss(reduction='sum')  # size_average=False is deprecated
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 )
for epoch in range(10):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Here, model.parameters() can be used to enumerate the model's parameters (equivalent to the manually created list [w, b] above). To access the parameters (for loading, saving, printing, whatever), use model.weight and model.bias.
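Since the question mentions wanting to store the "new" w and b across iterations and datasets, here is a minimal sketch of one way to do that with the nn.Linear version above (the filename and the snapshot variable are just illustrative):

# Inspect the learned parameters directly:
print(model.weight)   # shape (1, 2): plays the role of w (stored transposed)
print(model.bias)     # shape (1,):   plays the role of b

# Snapshot values per epoch without tracking gradients:
w_snapshot = model.weight.detach().clone()

# Persist all parameters as a name -> tensor mapping, and restore them later:
torch.save(model.state_dict(), "model_epoch10.pt")
model2 = nn.Linear(2, 1)
model2.load_state_dict(torch.load("model_epoch10.pt"))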
https://stackoverflow.com/questions/55444804