Consider this code:
(cudavenv) C:\main\FemtoTest\Library\Python\libImageProcess\trunk\src\libImageProcess>python
Python 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> from numba import cuda
>>> for i in range(26):
...     arr = np.zeros((17, 8025472), dtype=np.uint32)
...     d_arr = cuda.to_device(arr)
...
This runs successfully, vs.:
(cudavenv) C:\main\FemtoTest\Library\Python\libImageProcess\trunk\src\libImageProcess>python
Python 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> from numba import cuda
>>> class M:
...     def __init__(self):
...         self.arr = np.zeros((17, 8025472), dtype=np.uint32)
...         self.d_arr = None
...
>>> ms = [M() for _ in range(26)]
>>> for m in ms:
...     m.d_arr = cuda.to_device(m.arr)
...
Traceback (most recent call last):
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\driver.py", line 741, in _attempt_allocation
allocator()
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\driver.py", line 756, in allocator
driver.cuMemAlloc(byref(ptr), bytesize)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\driver.py", line 294, in safe_cuda_api_call
self._check_error(fname, retcode)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\driver.py", line 329, in _check_error
raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [2] Call to cuMemAlloc results in CUDA_ERROR_OUT_OF_MEMORY
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\devices.py", line 225, in _require_cuda_context
return fn(*args, **kws)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\api.py", line 110, in to_device
to, new = devicearray.auto_device(obj, stream=stream, copy=copy)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\devicearray.py", line 693, in auto_device
devobj = from_array_like(obj, stream=stream)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\devicearray.py", line 631, in from_array_like
writeback=ary, stream=stream, gpu_data=gpu_data)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\devicearray.py", line 102, in __init__
gpu_data = devices.get_context().memalloc(self.alloc_size)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\driver.py", line 758, in memalloc
self._attempt_allocation(allocator)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\driver.py", line 748, in _attempt_allocation
allocator()
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\driver.py", line 756, in allocator
driver.cuMemAlloc(byref(ptr), bytesize)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\driver.py", line 294, in safe_cuda_api_call
self._check_error(fname, retcode)
File "C:\Users\alex\AppData\Local\Continuum\anaconda3\envs\cudavenv\lib\site-packages\numba\cuda\cudadrv\driver.py", line 329, in _check_error
raise CudaAPIError(retcode, msg)
numba.cuda.cudadrv.driver.CudaAPIError: [2] Call to cuMemAlloc results in CUDA_ERROR_OUT_OF_MEMORY
My thinking: in the first case I reassign d_arr to a new device array on every iteration, so only that much memory is ever held. In the second case, because there are 26 instances, a new array is created on the device each time and every one of them stays referenced, so it eventually runs out of memory. What do I need to call to drop the device memory reference once I'm done with it inside the for loop, so that this runs without problems?
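For scale, a rough size check of the arrays involved (plain arithmetic on the shapes above, nothing reported by the driver):

bytes_per_array = 17 * 8025472 * 4      # uint32 = 4 bytes -> 545,732,096 bytes, about 520 MiB
bytes_for_26    = 26 * bytes_per_array  # about 14.2 GB (13.2 GiB) resident on the device at once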
Posted on 2019-09-17 08:15:13
You may want to read section 3.3.8 here.
CUDA memory that is no longer needed is freed when the last reference to it is dropped. In the first case this happens on every pass through the loop, when d_arr is reassigned. In the second case it does not, because the references are kept alive in ms.

I think the correct fix is to make sure those references get dropped. The pythonic way to do this is to clear the reference:
import numpy as np
from numba import cuda

class M:
    def __init__(self):
        self.arr = np.zeros((17, 8025472), dtype=np.uint32)
        self.d_arr = None

ms = [M() for _ in range(26)]

for m in ms:
    m.d_arr = cuda.to_device(m.arr)
    # do whatever it is you want to do with m.d_arr here
    m.d_arr = None
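As a quick sanity check (my own sketch, not part of the answer above), you can watch free device memory with cuda.current_context().get_memory_info(), which reports (free, total) bytes for the current context; the free_mib helper below is just for illustration. Note that Numba defers and batches deallocations, so the memory may be returned to the driver shortly after the reference is dropped rather than at that exact line:

import numpy as np
from numba import cuda

def free_mib():
    # (free, total) bytes as reported by the driver for the current context
    free, _total = cuda.current_context().get_memory_info()
    return free / 2**20

arr = np.zeros((17, 8025472), dtype=np.uint32)   # ~520 MiB as uint32

print("free before copy:   %.0f MiB" % free_mib())
d_arr = cuda.to_device(arr)
print("free after copy:    %.0f MiB" % free_mib())   # roughly 520 MiB lower

d_arr = None   # drop the last reference, as in the loop above
# Deallocation is deferred/batched by Numba, so the memory may be returned
# to the driver a little later rather than immediately at this line.
print("free after release: %.0f MiB" % free_mib())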
https://stackoverflow.com/questions/57965362