
pytorch MSELoss parameters explained in detail

Hello everyone, good to see you again. I'm your friend 全栈君.
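torch.nn.MSELoss computes the element-wise squared error (x - y)^2 between an input and a target; its parameters only control how those per-element values are aggregated. The older reduce / size_average flags and the newer reduction string express the same three behaviours. As a minimal sketch of the correspondence (assuming a reasonably recent PyTorch, where the legacy flags are deprecated but still accepted):

import torch

x = torch.tensor([[1., 2.], [3., 8.]])
y = torch.tensor([[5., 4.], [6., 2.]])

# reduction='none'  ~ reduce=False                     -> per-element losses
# reduction='sum'   ~ reduce=True, size_average=False  -> sum of the losses
# reduction='mean'  ~ reduce=True, size_average=True   -> mean of the losses (default)
print(torch.nn.MSELoss(reduction='none')(x, y))  # tensor([[16.,  4.], [ 9., 36.]])
print(torch.nn.MSELoss(reduction='sum')(x, y))   # tensor(65.)
print(torch.nn.MSELoss(reduction='mean')(x, y))  # tensor(16.2500)

The snippets below walk through each combination with the same pair of 2x2 matrices.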

import torch
import numpy as np

# reduce and size_average are deprecated in current PyTorch in favour of the
# reduction argument; they still work here but emit a deprecation warning.
# torch.autograd.Variable is likewise deprecated -- plain tensors behave the same.
loss_fn = torch.nn.MSELoss(reduce=False, size_average=False)
a=np.array([[1,2],[3,8]])
b=np.array([[5,4],[6,2]])
input = torch.autograd.Variable(torch.from_numpy(a))
target = torch.autograd.Variable(torch.from_numpy(b))
loss = loss_fn(input.float(), target.float())
print(loss)
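With reduce=False the loss is not aggregated, so this prints the element-wise squared differences tensor([[16., 4.], [9., 36.]]), i.e. (1-5)^2, (2-4)^2, (3-6)^2 and (8-2)^2.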

loss_fn = torch.nn.MSELoss(reduce=False, size_average=True)
a=np.array([[1,2],[3,8]])
b=np.array([[5,4],[6,2]])
input = torch.autograd.Variable(torch.from_numpy(a))
target = torch.autograd.Variable(torch.from_numpy(b))
loss = loss_fn(input.float(), target.float())
print(loss)
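size_average only matters when the loss is actually reduced; with reduce=False it is ignored, so this prints the same element-wise tensor as the previous snippet.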

a=np.array([[1,2],[3,8]])
b=np.array([[5,4],[6,2]])
loss_fn = torch.nn.MSELoss(reduce=True, size_average=False)
input = torch.autograd.Variable(torch.from_numpy(a))
target = torch.autograd.Variable(torch.from_numpy(b))
loss = loss_fn(input.float(), target.float())
print(loss)
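reduce=True with size_average=False sums the element-wise losses: 16 + 4 + 9 + 36 = 65, so this prints tensor(65.).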

a=np.array([[1,2],[3,8]])
b=np.array([[5,4],[6,2]])
loss_fn = torch.nn.MSELoss(reduce=True, size_average=True)
input = torch.autograd.Variable(torch.from_numpy(a))
target = torch.autograd.Variable(torch.from_numpy(b))
loss = loss_fn(input.float(), target.float())
print(loss)
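reduce=True with size_average=True averages them instead: 65 / 4 = 16.25, so this prints tensor(16.2500).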

a=np.array([[1,2],[3,8]])
b=np.array([[5,4],[6,2]])
loss_fn = torch.nn.MSELoss()  # defaults: reduce=True, size_average=True, i.e. reduction='mean'
input = torch.autograd.Variable(torch.from_numpy(a))
target = torch.autograd.Variable(torch.from_numpy(b))
loss = loss_fn(input.float(), target.float())
print(loss)
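With no arguments, MSELoss defaults to the mean, so this also prints tensor(16.2500).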

loss_fn = torch.nn.MSELoss(reduction = 'none')
a=np.array([[1,2],[3,8]])
b=np.array([[5,4],[6,2]])
input = torch.autograd.Variable(torch.from_numpy(a))
target = torch.autograd.Variable(torch.from_numpy(b))
loss = loss_fn(input.float(), target.float())
print(loss)
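reduction='none' is the current way to ask for the unreduced, element-wise loss -- the same tensor([[16., 4.], [9., 36.]]) as reduce=False above.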

loss_fn = torch.nn.MSELoss(reduction = 'sum')
a=np.array([[1,2],[3,8]])
b=np.array([[5,4],[6,2]])
input = torch.autograd.Variable(torch.from_numpy(a))
target = torch.autograd.Variable(torch.from_numpy(b))
loss = loss_fn(input.float(), target.float())
print(loss)
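reduction='sum' corresponds to reduce=True, size_average=False and prints tensor(65.).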

loss_fn = torch.nn.MSELoss(reduction = 'elementwise_mean')  # deprecated alias of 'mean'; newer releases warn and may drop it
a=np.array([[1,2],[3,8]])
b=np.array([[5,4],[6,2]])
input = torch.autograd.Variable(torch.from_numpy(a))
target = torch.autograd.Variable(torch.from_numpy(b))
loss = loss_fn(input.float(), target.float())
print(loss)
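'elementwise_mean' behaves exactly like 'mean', so this again prints tensor(16.2500). As a final sanity check, here is a minimal sketch (independent of MSELoss) that reproduces all three results with plain tensor arithmetic:

import torch

a = torch.tensor([[1., 2.], [3., 8.]])
b = torch.tensor([[5., 4.], [6., 2.]])

sq_err = (a - b) ** 2   # element-wise squared error, matches reduction='none'
print(sq_err)           # tensor([[16.,  4.], [ 9., 36.]])
print(sq_err.sum())     # tensor(65.)      -> reduction='sum'
print(sq_err.mean())    # tensor(16.2500)  -> reduction='mean' (the default)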
