I am trying to use torch.distributed to send a PyTorch tensor from one machine to another. The dist.init_process_group call works fine, but dist.broadcast fails with a connection error. Here is my code on node 0:
import torch
from torch import distributed as dist
import numpy as np
import os
import datetime  # needed for the timeout argument passed to init_process_group
master_addr = '47.xxx.xxx.xx'
master_port = 10000
world_size = 2
rank = 0
backend = 'nccl'
os.environ['MASTER_ADDR'] = master_addr
os.environ['MASTER_PORT'] = str(master_port)
os.environ['WORLD_SIZE'] = str(world_size)
os.environ['RANK'] = str(rank)
dist.init_process_group(backend, init_method='tcp://47.xxx.xxx.xx:10000', timeout=datetime.timedelta(0, 10), rank=rank, world_size=world_size)
print("Finished initializing process group; backend: %s, rank: %d, "
"world_size: %d" % (backend, rank, world_size))
a = torch.from_numpy(np.random.rand(3, 3)).cuda()
dist.broadcast(tensor=a, src=0)
Here is my code on node 1:
import torch
from torch import distributed as dist
import numpy as np
import os
import datetime  # needed for the timeout argument passed to init_process_group
master_addr = '47.xxx.xxx.xx'
master_port = 10000
world_size = 2
rank = 1
backend = 'nccl'
os.environ['MASTER_ADDR'] = master_addr
os.environ['MASTER_PORT'] = str(master_port)
os.environ['WORLD_SIZE'] = str(world_size)
os.environ['RANK'] = str(rank)
dist.init_process_group(backend, init_method='tcp://47.xxx.xxx.xx:10000', timeout=datetime.timedelta(0, 10), rank=rank, world_size=world_size)
print("Finished initializing process group; backend: %s, rank: %d, "
"world_size: %d" % (backend, rank, world_size))
a = torch.zeros((3,3)).cuda()
dist.broadcast(tensor=a, src=0)
I set NCCL_DEBUG=INFO before running the code. Here is the output I got on node 1:
iZbp11ufz31riqnssil53cZ:13530:13530 [0] NCCL INFO Bootstrap : Using [0]eth0:192.168.0.181<0>
iZbp11ufz31riqnssil53cZ:13530:13530 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
iZbp11ufz31riqnssil53cZ:13530:13530 [0] NCCL INFO NET/IB : No device found.
iZbp11ufz31riqnssil53cZ:13530:13530 [0] NCCL INFO NET/Socket : Using [0]eth0:192.168.0.181<0>
iZbp11ufz31riqnssil53cZ:13530:13553 [0] NCCL INFO Setting affinity for GPU 0 to ffff
iZbp11ufz31riqnssil53cZ:13530:13553 [0] NCCL INFO Call to connect returned Connection timed out, retrying
iZbp11ufz31riqnssil53cZ:13530:13553 [0] NCCL INFO Call to connect returned Connection timed out, retrying
iZbp11ufz31riqnssil53cZ:13530:13553 [0] include/socket.h:395 NCCL WARN Connect to 192.168.0.143<59811> failed : Connection timed out
iZbp11ufz31riqnssil53cZ:13530:13553 [0] NCCL INFO bootstrap.cc:100 -> 2
iZbp11ufz31riqnssil53cZ:13530:13553 [0] NCCL INFO bootstrap.cc:326 -> 2
iZbp11ufz31riqnssil53cZ:13530:13553 [0] NCCL INFO init.cc:695 -> 2
iZbp11ufz31riqnssil53cZ:13530:13553 [0] NCCL INFO init.cc:951 -> 2
iZbp11ufz31riqnssil53cZ:13530:13553 [0] NCCL INFO misc/group.cc:69 -> 2 [Async thread]
Traceback (most recent call last):
File "test_dist_1.py", line 25, in <module>
dist.broadcast(tensor=a, src=0)
File "/root/anaconda3/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 806, in broadcast
work = _default_pg.broadcast([tensor], opts)
RuntimeError: NCCL error in: /tmp/pip-req-build-58y_cjjl/torch/lib/c10d/ProcessGroupNCCL.cpp:290, unhandled system error
Meanwhile, node 0 appears to hang inside dist.broadcast:
iZuf6cu11ru7evq9ybagdjZ:13530:13530 [0] NCCL INFO Bootstrap : Using [0]eth0:192.168.0.143<0>
iZuf6cu11ru7evq9ybagdjZ:13530:13530 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
iZuf6cu11ru7evq9ybagdjZ:13530:13530 [0] NCCL INFO NET/IB : No device found.
iZuf6cu11ru7evq9ybagdjZ:13530:13530 [0] NCCL INFO NET/Socket : Using [0]eth0:192.168.0.143<0>
iZuf6cu11ru7evq9ybagdjZ:13530:13553 [0] NCCL INFO Setting affinity for GPU 0 to ffff
Can anyone help me? How can I send the tensor from node 0 to node 1? Any help would be appreciated!
Posted on 2021-11-01 13:05:02
I found that the two machines I was using were not in the same VPC, so they could never reach each other. I switched to servers in the same VPC and everything worked fine.
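Before launching the distributed job, you can sanity-check that the worker can actually reach the master's rendezvous port with a plain TCP probe. A minimal sketch, using the master address and port from the question (replace them with your own):

import socket

def can_connect(host, port, timeout=5.0):
    # Returns True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this on node 1. If it prints False, the machines cannot reach each
# other (different VPC, firewall, security group, ...) and NCCL will
# inevitably time out during bootstrap.
print(can_connect('47.xxx.xxx.xx', 10000))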
Posted on 2021-10-31 12:10:47
unhandled system error means there is some underlying error on the NCCL side. You should first rerun the code with NCCL_DEBUG=INFO (as the OP did), then work out from the debug log what the actual error is; pay particular attention to the NCCL WARN lines.

In the OP's log, I believe the line

iZbp11ufz31riqnssil53cZ:13530:13553 [0] include/socket.h:395 NCCL WARN Connect to 192.168.0.143<59811> failed : Connection timed out

is what causes the unhandled system error.
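If you prefer not to set the variable in the shell, you can also enable NCCL's debug output from inside the script, as long as the environment variables are set before the process group (and hence the NCCL communicator) is created. A minimal sketch using standard NCCL environment variables:

import os

# Must be set before dist.init_process_group() so NCCL picks them up.
os.environ['NCCL_DEBUG'] = 'INFO'        # verbose NCCL logging
os.environ['NCCL_DEBUG_SUBSYS'] = 'ALL'  # optional: include all subsystems
# If NCCL binds its bootstrap/data sockets to the wrong interface, you can
# pin one explicitly (the interface name is machine-specific):
# os.environ['NCCL_SOCKET_IFNAME'] = 'eth0'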
Posted on 2020-11-10 15:52:33
An unhandled system error in ProcessGroupNCCL.cpp usually means there is a discrepancy between the code running on the two nodes. I wonder why, on node 1, you use a = torch.zeros((3,3)).cuda() with the same variable name as on node 0?
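Related to that point: the two snippets also disagree on dtype. np.random.rand(3, 3) yields a float64 tensor on node 0, while torch.zeros((3,3)) is float32 on node 1, and broadcast requires the destination tensor on every rank to match the source in shape and dtype. A minimal sketch that keeps both ranks consistent (it fixes only the tensor setup, not the underlying network problem):

import torch
from torch import distributed as dist

# Every rank allocates a tensor of identical shape and dtype; broadcast
# fills it in place on all ranks other than the source.
a = torch.empty(3, 3, dtype=torch.float32, device='cuda')
if dist.get_rank() == 0:
    a.uniform_()          # rank 0 holds the data to send
dist.broadcast(a, src=0)  # afterwards every rank holds rank 0's values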
https://stackoverflow.com/questions/61094394