I cloned the DeepLab source code from GitHub and configured all the files as required. When I try to run the local_test.sh test script, I get a series of errors. I can't make sense of the error messages, so I don't know what went wrong or where to start:
2019-08-23 10:39:16.486931: W tensorflow/core/common_runtime/bfc_allocator.cc:319] *************************************************____********___****____************************xxxx
2019-08-23 10:39:16.487253: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at depthwise_conv_op.cc:365 : Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
File "E:\anaconda\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
return fn(*args)
File "E:\anaconda\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "E:\anaconda\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[gradients/AddN_56/_12764]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:/models-master/research/deeplab/train.py", line 517, in <module>
tf.app.run()
...
...
File "E:\anaconda\lib\site-packages\tensorflow\python\client\session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise (defined at \models-master\research\deeplab\core\xception.py:175) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[gradients/AddN_56/_12764]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise (defined at \models-master\research\deeplab\core\xception.py:175) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise:
xception_65/entry_flow/block1/unit_1/xception_module/Relu_1 (defined at \models-master\research\deeplab\core\xception.py:274)
Input Source operations connected to node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise:
xception_65/entry_flow/block1/unit_1/xception_module/Relu_1 (defined at \models-master\research\deeplab\core\xception.py:274)
Original stack trace for 'xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise':
File "/models-master/research/deeplab/train.py", line 517, in <module>
tf.app.run()
...
...
File "\anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
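(For reference: the "Hint" lines repeated in the trace refer to TF1's RunOptions. Below is a minimal, self-contained sketch of what they suggest, with a toy variable update standing in for DeepLab's real training op; the names are illustrative, not taken from train.py.)

import tensorflow as tf

# Toy graph standing in for the real training op.
x = tf.Variable(tf.zeros([2, 2]))
train_op = x.assign_add(tf.ones([2, 2]))

# report_tensor_allocations_upon_oom makes an OOM error list the tensors
# that were still alive when the allocation failed, which shows where the
# memory is actually going.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op, options=run_options)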
Posted on 2019-12-03 00:58:01
The batch size or image size you are using is larger than your machine can handle, which is what causes "Resource exhausted: OOM when allocating tensor". Try running the model with a smaller batch size and a smaller image (crop) size.
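For a sense of scale: the tensor in the error, shape [4, 128, 257, 257] in float32, needs roughly 129 MiB on its own, and training keeps many such activations plus their gradients alive at the same time. A quick back-of-the-envelope check in plain Python, with the numbers taken from the log:

# Size of the single tensor the allocator failed on.
batch, channels, height, width = 4, 128, 257, 257
size_bytes = batch * channels * height * width * 4  # float32 = 4 bytes/element
print(size_bytes / 2**20)      # ~129 MiB for this one activation
print(size_bytes / 4 / 2**20)  # ~32 MiB if the batch size drops from 4 to 1

In the DeepLab code these are controlled by the --train_batch_size and --train_crop_size flags that local_test.sh passes to train.py; if I read the repo's README correctly, a very small batch should also be combined with --fine_tune_batch_norm=false.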
https://stackoverflow.com/questions/57619347