I tried the code below. When the code in the try block fails with a CUDA out-of-memory error, the except block halves the batch size, but running the model inside except still hits the same out-of-memory error. I'm confident the half-sized batch fits, because when I ran the except-block code directly, without attempting the full batch first, it worked fine. By the way, is there a way to set the batch size automatically so that it uses CUDA memory as fully as possible without overflowing?
import numpy as np
import torch
from torch.autograd import Variable

try:
    # Forward/backward pass on the full batch.
    output = model(
        Variable(torch.LongTensor(np.array(x))).to(device),
        Variable(torch.LongTensor(np.array(pos))).to(device),
        Variable(torch.LongTensor(np.array(m))).to(device),
    )
    loss = criterion(output, Variable(torch.LongTensor(y)).to(device))
    loss.backward()
    optimizer.step()
    losses.append(loss.data.mean())
except RuntimeError:  # CUDA out of memory
    # Split the batch in half and run the two halves as separate steps.
    half = len(x) // 2
    x1, x2 = x[:half], x[half:]
    pos1, pos2 = pos[:half], pos[half:]
    m1, m2 = m[:half], m[half:]
    y1, y2 = y[:half], y[half:]
    optimizer.zero_grad()
    # First half.
    output = model(
        Variable(torch.LongTensor(np.array(x1))).to(device),
        Variable(torch.LongTensor(np.array(pos1))).to(device),
        Variable(torch.LongTensor(np.array(m1))).to(device),
    )
    loss = criterion(output, Variable(torch.LongTensor(y1)).to(device))
    loss.backward()
    optimizer.step()
    losses.append(loss.data.mean())
    # Second half.
    output = model(
        Variable(torch.LongTensor(np.array(x2))).to(device),
        Variable(torch.LongTensor(np.array(pos2))).to(device),
        Variable(torch.LongTensor(np.array(m2))).to(device),
    )
    loss = criterion(output, Variable(torch.LongTensor(y2)).to(device))
    loss.backward()
    optimizer.step()
    losses.append(loss.data.mean())
Answered on 2020-06-20 21:13:04
It looks like something is still being held on your GPU. Have you tried calling torch.cuda.empty_cache() at the start of the except block to free the CUDA cache?
https://stackoverflow.com/questions/53837057
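A minimal sketch of where that call would go, assuming the same model / criterion / optimizer / device names as the code in the question (the "out of memory" string check and the retry placeholder are illustrative additions, not from the original post):

import torch

try:
    # ... full-batch forward/backward pass, as in the question ...
    optimizer.step()
except RuntimeError as e:
    # Re-raise anything that isn't a CUDA out-of-memory failure.
    if "out of memory" not in str(e):
        raise
    # Drop the gradients left over from the failed attempt, then ask
    # PyTorch's caching allocator to release unused blocks on the GPU.
    optimizer.zero_grad()
    torch.cuda.empty_cache()
    # ... retry with the two half-sized batches, as in the question ...

Note that empty_cache() only releases memory that nothing references anymore, so clearing gradients (and any lingering output / loss variables from the failed attempt) before the call matters as much as the call itself.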