When running distributed training on 4× A6000 GPUs, I get the following error:
[E ProcessGroupNCCL.cpp:630] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=1800000) ran for 1803710 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:390] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 2] Watchdog caught collective operation timeout:
WorkNCCL(OpType=BROADCAST, Timeout(ms)=1800000) ran for 1804406 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:390] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
I'm using the standard NVIDIA PyTorch Docker image. Interestingly, training runs fine on small datasets, but with larger datasets I get this error, so I can confirm that the training code itself is correct and does work.
No actual runtime error is raised, and there is no other information anywhere that points to the real failure.
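For context, the Timeout(ms)=1800000 in the log is NCCL's default 30-minute collective timeout. It can be raised when initializing the process group, which only buys debugging headroom rather than fixing a genuine hang; a minimal sketch, assuming the script is started by a launcher so the default env:// rendezvous applies (the two-hour value is an arbitrary example, not a recommendation):

    import datetime
    import torch.distributed as dist

    # NCCL's default collective timeout is 30 minutes (the 1800000 ms in the log).
    # A longer timeout only delays the watchdog kill; it does not fix the hang itself.
    dist.init_process_group(
        backend="nccl",
        timeout=datetime.timedelta(hours=2),  # arbitrary example value
    )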
Posted on 2022-08-05 16:06:55
For me, the problem was the torchrun command on PyTorch 1.10.1. I just had to switch to the python -m torch.distributed.launch command and everything worked. I spent a lot of time on StackOverflow and the PyTorch forums, but nobody mentioned this solution, so I'm sharing it to save others the time. torchrun seems to work fine on PyTorch 1.11 and later.
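For reference, the two invocations side by side (a sketch; the script name train.py and --nproc_per_node=4 for the four GPUs are placeholders for your own script and flags):

    # PyTorch 1.10.x: torch.distributed.launch avoided the timeout for me
    python -m torch.distributed.launch --nproc_per_node=4 train.py

    # PyTorch 1.11 and later: torchrun works fine
    torchrun --nproc_per_node=4 train.py

Note that torch.distributed.launch passes the local rank to the script as a --local_rank argument, while torchrun exposes it through the LOCAL_RANK environment variable, so the training script may need a small adjustment when switching between them.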
https://stackoverflow.com/questions/69693950