
Error when calling torch.sparse.sum and .to_dense(): Could not run 'aten::to_dense' with arguments from the 'CPU' backend

Stack Overflow user
Asked on 2022-10-05 07:42:51
1 answer · 35 views · 0 followers · 0 votes

I ran into a problem when calling torch.sparse.sum(). The original code is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F


class RGCN_Layer(nn.Module):
    """ A Relation GCN module that operates on document graphs. """

    def __init__(self, in_dim, mem_dim, num_layers, relation_cnt=8):
        super().__init__()
        self.layers = num_layers
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.mem_dim = mem_dim
        self.relation_cnt = relation_cnt
        self.in_dim = in_dim

        self.dropout = 0.2
        # self.in_drop = nn.Dropout(self.dropout)
        self.gcn_drop = nn.Dropout(self.dropout)

        # gcn layer
        self.W_0 = nn.ModuleList()
        self.W_r = nn.ModuleList()
        # for i in range(self.relation_cnt):
        for i in range(relation_cnt):
            self.W_r.append(nn.ModuleList())

        for layer in range(self.layers):
            input_dim = self.in_dim if layer == 0 else self.mem_dim
            self.W_0.append(nn.Linear(input_dim, self.mem_dim).to(self.device))
            for W in self.W_r:
                W.append(nn.Linear(input_dim, self.mem_dim).to(self.device))


    def forward(self, nodes, adj):
        """
        
        :param nodes:  batch_size * num_event * num_event
        :param adj:  batch_size * 8 * num_event * num_event
        :return:
        """
        # gcn_inputs = self.in_drop(nodes)
        gcn_inputs = nodes

        maskss = []
        denomss = []
        for batch in range(adj.shape[0]):
            masks = []
            denoms = []
            for i in range(self.relation_cnt):
                denom = torch.sparse.sum(adj[batch, i], dim=1).to_dense()
                t_g = denom + torch.sparse.sum(adj[batch, i], dim=0).to_dense()
                mask = t_g.eq(0)
                denoms.append(denom.unsqueeze(1))
                masks.append(mask)
            denoms = torch.sum(torch.stack(denoms), 0)
            denoms = denoms + 1
            masks = sum(masks)
            maskss.append(masks)
            denomss.append(denoms)
        denomss = torch.stack(denomss) # 40 * 61 * 1

        # sparse rgcn layer
        for l in range(self.layers):
            gAxWs = []
            for j in range(self.relation_cnt):
                gAxW = []

                bxW = self.W_r[j][l](gcn_inputs)
                for batch in range(adj.shape[0]):

                    xW = bxW[batch]  # 255 * 25
                    AxW = torch.sparse.mm(adj[batch][j], xW)  # 255, 25
                    # AxW = AxW/ denomss[batch][j]  # 255, 25
                    gAxW.append(AxW)
                gAxW = torch.stack(gAxW)
                gAxWs.append(gAxW)
            gAxWs = torch.stack(gAxWs, dim=1)
            # print("denomss", denomss.shape)
            # print((torch.sum(gAxWs, 1) + self.W_0[l](gcn_inputs)).shape)
            gAxWs = F.relu((torch.sum(gAxWs, 1) + self.W_0[l](gcn_inputs)) / denomss)  # self loop
            gcn_inputs = self.gcn_drop(gAxWs) if l < self.layers - 1 else gAxWs

        return gcn_inputs, maskss

However, when the line denom = torch.sparse.sum(adj[batch, i], dim=1).to_dense() is executed, the following error occurs:

RuntimeError: Could not run 'aten::to_dense' with arguments from the 'CPU' backend. 'aten::to_dense' is only available for these backends: [MkldnnCPU, SparseCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

The error tells me that the code cannot run on the 'CPU' backend; however, I noticed that CUDA is not in the list of supported backends either, even though, after checking the library, execution on both CPU and CUDA should be supported. Could anyone help me solve this problem? Thanks!
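A quick way to see where the tensor stops being sparse is to print the layout of each intermediate value. The snippet below is a diagnostic sketch, not part of the original post; the small COO matrix is a hypothetical stand-in for adj[batch, i]:

import torch

# Hypothetical stand-in for adj[batch, i]: a 2-D sparse COO matrix.
idx = torch.tensor([[0, 1], [1, 0]])
val = torch.tensor([1.0, 2.0])
adj_slice = torch.sparse_coo_tensor(idx, val, (2, 2))

print(adj_slice.layout)                           # torch.sparse_coo
print(torch.sparse.sum(adj_slice, dim=1).layout)  # sparse_coo here, but
                                                  # strided (dense) once the
                                                  # sum covers all sparse dims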


1 Answer

Stack Overflow user

Answered on 2022-10-12 12:29:51

The function torch.sparse.sum returns a dense tensor when the sum runs over all of the sparse dimensions. Because you first slice the adjacency matrix and then sum, the result is already dense, so you end up calling the method .to_dense() on a dense tensor.
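A minimal sketch of this behavior (the small COO matrix below is an assumed example, not taken from the question):

import torch

# A 2-D sparse COO matrix with two sparse dimensions.
idx = torch.tensor([[0, 1], [1, 0]])
val = torch.tensor([1.0, 2.0])
A = torch.sparse_coo_tensor(idx, val, (2, 2))

# Summing over only one sparse dim keeps the result sparse,
# so .to_dense() is still legal on it.
row_sum = torch.sparse.sum(A, dim=1)
print(row_sum.is_sparse)  # True

# Summing over all sparse dims returns a plain dense (strided) tensor;
# on the PyTorch 1.x releases where this error was reported, calling
# .to_dense() on it raises the 'aten::to_dense' backend error above.
total = torch.sparse.sum(A, dim=(0, 1))
print(total.is_sparse)    # False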

torch.arange(10).to_dense()  # gives the same error, since arange() returns a dense tensor
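A defensive rewrite of the failing line, wrapped here as a hypothetical helper (row_sums_dense is my name, not from the post): it calls .to_dense() only when the sum actually stayed sparse, and also tolerates an adjacency slice that is already dense.

import torch

def row_sums_dense(adj_slice: torch.Tensor) -> torch.Tensor:
    """Row sums of a (possibly sparse) adjacency slice, always dense."""
    if adj_slice.is_sparse:
        s = torch.sparse.sum(adj_slice, dim=1)
        # torch.sparse.sum may already return a dense tensor when the
        # sum covers all sparse dims, so only densify when needed.
        return s.to_dense() if s.is_sparse else s
    return adj_slice.sum(dim=1)

In forward() above, denom = row_sums_dense(adj[batch, i]) would replace the failing line, and the dim=0 sum on the next line can be guarded the same way.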
0 votes
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/73957218
