TensorFlow doesn't work with the GPU - it uses too much memory. How can I fix this?

Stack Overflow user
Asked on 2022-05-05 23:29:36
1 answer, 865 views, 0 followers, 0 votes

I'm using TensorFlow for image classification (20 classes) with convolutional layers. My dataset contains about 20000 training images and 5000 test images; the images (RGB) are 200x256 pixels. When I run the training script on the CPU, everything seems fine. However, when I try to run the same script on the GPU, I get an error in the model.fit call right after my training and test data have been loaded:

Num GPUs Available:  1
2022-05-04 17:58:58.482057: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-05-04 17:59:03.655618: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4634 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 1060, pci bus id: xxxx:xx:xx.x, compute capability: 6.1
1 Physical GPUs, 1 Logical GPUs
Path: D:/Dataset/seg_train
Loading seg_train
Path: D:/Dataset/seg_test
Loading seg_test
2022-05-04 18:02:48.971100: W tensorflow/core/common_runtime/bfc_allocator.cc:462] Allocator (GPU_0_bfc) ran out of memory trying to allocate 10.44GiB (rounded to 11206656000)requested by op _EagerConst
If the cause is memory fragmentation maybe the environment variable 'TF_GPU_ALLOCATOR=cuda_malloc_async' will improve the situation.
Current allocation summary follows.
2022-05-04 18:02:48.996013: I tensorflow/core/common_runtime/bfc_allocator.cc:1010] BFCAllocator dump for GPU_0_bfc
2022-05-04 18:02:48.996173: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (256):  Total Chunks: 16, Chunks in use: 16. 4.0KiB allocated for chunks. 4.0KiB in use in bin. 392B client-requested in use in bin.
2022-05-04 18:02:48.996308: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (512):  Total Chunks: 1, Chunks in use: 1. 512B allocated for chunks. 512B in use in bin. 512B client-requested in use in bin.
2022-05-04 18:02:48.996473: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (1024):         Total Chunks: 1, Chunks in use: 1. 1.2KiB allocated for chunks. 1.2KiB in use in bin. 1.0KiB client-requested in use in bin.
2022-05-04 18:02:48.996629: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (2048):         Total Chunks: 2, Chunks in use: 1. 7.0KiB allocated for chunks. 3.5KiB in use in bin. 3.4KiB client-requested in use in bin.
2022-05-04 18:02:48.996889: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (4096):         Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:48.997493: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (8192):         Total Chunks: 1, Chunks in use: 1. 9.5KiB allocated for chunks. 9.5KiB in use in bin. 9.5KiB client-requested in use in bin.
2022-05-04 18:02:48.997960: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (16384):        Total Chunks: 1, Chunks in use: 0. 19.0KiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:48.998482: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (32768):        Total Chunks: 2, Chunks in use: 1. 79.5KiB allocated for chunks. 36.0KiB in use in bin. 36.0KiB client-requested in use in bin.
2022-05-04 18:02:48.999113: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (65536):        Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:48.999710: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (131072):       Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.000273: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (262144):       Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.000742: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (524288):       Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.001208: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (1048576):      Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.001671: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (2097152):      Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.002131: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (4194304):      Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.002700: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (8388608):      Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.004034: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (16777216):     Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.004682: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (33554432):     Total Chunks: 1, Chunks in use: 1. 44.56MiB allocated for chunks. 44.56MiB in use in bin. 44.56MiB client-requested in use in bin.
2022-05-04 18:02:49.005383: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (67108864):     Total Chunks: 1, Chunks in use: 0. 89.12MiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.007520: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (134217728):    Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.008016: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (268435456):    Total Chunks: 1, Chunks in use: 0. 4.39GiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-05-04 18:02:49.008477: I tensorflow/core/common_runtime/bfc_allocator.cc:1033] Bin for 10.44GiB was 256.00MiB, Chunk State:
2022-05-04 18:02:49.008888: I tensorflow/core/common_runtime/bfc_allocator.cc:1039]   Size: 4.39GiB | Requested Size: 0B | in_use: 0 | bin_num: 20, prev:   Size: 44.56MiB | Requested Size: 44.56MiB | in_use: 1 | bin_num: -1
2022-05-04 18:02:49.009335: I tensorflow/core/common_runtime/bfc_allocator.cc:1046] Next region of size 4859428864
2022-05-04 18:02:49.025604: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00000 of size 256 next 1
2022-05-04 18:02:49.025772: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00100 of size 1280 next 2
2022-05-04 18:02:49.026373: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00600 of size 256 next 3
2022-05-04 18:02:49.026991: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00700 of size 256 next 4
2022-05-04 18:02:49.028407: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00800 of size 256 next 5
2022-05-04 18:02:49.028560: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00900 of size 256 next 6
2022-05-04 18:02:49.029196: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00a00 of size 256 next 9
2022-05-04 18:02:49.029937: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00b00 of size 256 next 10
2022-05-04 18:02:49.030556: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00c00 of size 256 next 11
2022-05-04 18:02:49.031054: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00d00 of size 256 next 14
2022-05-04 18:02:49.031553: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00e00 of size 256 next 15
2022-05-04 18:02:49.031906: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a00f00 of size 512 next 16
2022-05-04 18:02:49.032334: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a01100 of size 256 next 19
2022-05-04 18:02:49.032719: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a01200 of size 256 next 20
2022-05-04 18:02:49.033158: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a01300 of size 256 next 21
2022-05-04 18:02:49.033523: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a01400 of size 256 next 24
2022-05-04 18:02:49.033892: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a01500 of size 256 next 25
2022-05-04 18:02:49.034323: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a01600 of size 256 next 26
2022-05-04 18:02:49.034824: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] Free  at b03a01700 of size 3584 next 7
2022-05-04 18:02:49.035472: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a02500 of size 3584 next 8
2022-05-04 18:02:49.035923: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] Free  at b03a03300 of size 19456 next 23
2022-05-04 18:02:49.036957: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a07f00 of size 9728 next 22
2022-05-04 18:02:49.039251: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] Free  at b03a0a500 of size 44544 next 13
2022-05-04 18:02:49.039789: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b03a15300 of size 36864 next 12
2022-05-04 18:02:49.040234: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] Free  at b03a1e300 of size 93454336 next 18
2022-05-04 18:02:49.040779: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at b0933e300 of size 46727168 next 17
2022-05-04 18:02:49.041233: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] Free  at b0bfce300 of size 4719123712 next 18446744073709551615
2022-05-04 18:02:49.041719: I tensorflow/core/common_runtime/bfc_allocator.cc:1071]      Summary of in-use Chunks by size:
2022-05-04 18:02:49.042440: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 16 Chunks of size 256 totalling 4.0KiB
2022-05-04 18:02:49.042831: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 512 totalling 512B
2022-05-04 18:02:49.043889: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 1280 totalling 1.2KiB
2022-05-04 18:02:49.044474: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 3584 totalling 3.5KiB
2022-05-04 18:02:49.044901: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 9728 totalling 9.5KiB
2022-05-04 18:02:49.045330: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 36864 totalling 36.0KiB
2022-05-04 18:02:49.045784: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 46727168 totalling 44.56MiB
2022-05-04 18:02:49.046196: I tensorflow/core/common_runtime/bfc_allocator.cc:1078] Sum Total of in-use chunks: 44.62MiB
2022-05-04 18:02:49.046552: I tensorflow/core/common_runtime/bfc_allocator.cc:1080] total_region_allocated_bytes_: 4859428864 memory_limit_: 4859428864 available bytes: 0 curr_region_allocation_bytes_: 9718857728
2022-05-04 18:02:49.046902: I tensorflow/core/common_runtime/bfc_allocator.cc:1086] Stats:
Limit:                      4859428864
InUse:                        46783232
MaxInUse:                    140225792
NumAllocs:                          34
MaxAllocSize:                 46727168
Reserved:                            0
PeakReserved:                        0
LargestFreeBlock:                    0

2022-05-04 18:02:49.047317: W tensorflow/core/common_runtime/bfc_allocator.cc:474] ***_________________________________________________________________________________________________
Traceback (most recent call last):
  File "D:\DatasetProcessing\ImageClassification.py", line 394, in <module>
    main()
  File "D:\DatasetProcessing\ImageClassification.py", line 387, in main
    first_model()
  File "D:\DatasetProcessing\ImageClassification.py", line 162, in first_model
    history = model.fit(train_images, train_labels, batch_size=2, epochs=4)
  File "D:\WinPython\WPy64-3980\python-3.9.8.amd64\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "D:\WinPython\WPy64-3980\python-3.9.8.amd64\lib\site-packages\tensorflow\python\framework\constant_op.py", line 102, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
tensorflow.python.framework.errors_impl.InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run _EagerConst: Dst tensor is not initialized.

I have a laptop with an NVIDIA GTX 1060 (6 GB) GPU. I have installed the latest driver for this GPU and CUDA version 11.2. While the script is running I checked the GPU load in Task Manager, but it only shows 1% to 5%. It looks like TensorFlow isn't really using the GPU at all.
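
For context, here is a rough back-of-the-envelope estimate (my own sketch, assuming the training set ends up as a single float32 tensor and using the dataset dimensions quoted above) of why an allocation of roughly 10 GiB shows up even though the model itself is tiny:

num_images = 20_000            # "about 20000" training images
height, width, channels = 200, 256, 3
bytes_per_float32 = 4

total_gib = num_images * height * width * channels * bytes_per_float32 / 2**30
print(f"~{total_gib:.1f} GiB")  # ~11.4 GiB, versus the ~4.5 GiB TensorFlow reports for the GTX 1060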

I tried setting:

TF_GPU_ALLOCATOR=cuda_malloc_async

memory_limit=4096

allow_growth=True

I also reduced the batch_size from 128 down to 2, but none of these options helped.
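
For reference, here is a minimal sketch of how these three settings are typically applied in TensorFlow 2.x (this is my assumption of how they were wired up; the question only lists the bare values):

import os
import tensorflow as tf

# TF_GPU_ALLOCATOR must be set before TensorFlow initializes the GPU.
os.environ["TF_GPU_ALLOCATOR"] = "cuda_malloc_async"

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # allow_growth=True: allocate GPU memory on demand instead of reserving it all up front.
    tf.config.experimental.set_memory_growth(gpus[0], True)
    # memory_limit=4096: alternatively, cap the GPU memory visible to TensorFlow at 4096 MB.
    # (Use this instead of set_memory_growth; configuring both on one device is not supported.)
    # tf.config.set_logical_device_configuration(
    #     gpus[0], [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])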

The model:

import tensorflow as tf

# train_images / train_labels are the NumPy arrays loaded earlier in the script.
model = tf.keras.Sequential([
    # Two convolution + max-pooling blocks on the 200x256 RGB input
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(200, 256, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    # 20 output classes
    tf.keras.layers.Dense(20, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_images, train_labels, batch_size=2, epochs=4)

It's a simple model, but I still get the error in the model.fit call.


1 Answer

Stack Overflow user

Answered on 2022-07-22 17:08:33

I have an LSTM model for binary classification and run it on GCP with 2 GPUs. Since I had been stuck on this problem for a few days, I first tried all of the following:

import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
for device in gpu_devices:
  tf.config.experimental.set_memory_growth(device, True)

And this:

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

As well as this:

import tensorflow as tf
from tensorflow.compat.v1 import ConfigProto

config = ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.compat.v1.Session(config=config)

Nothing worked, until I finally found the approach below: basically it replicates the model onto both GPUs so that a copy of the model runs on each of them, and the input data is split between them, also known as "data parallelism".

# Note: layer_unit, lookback, number_of_feature, hidden_layer_no, dr, forward and opt
# are hyperparameters defined elsewhere in the original script.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

tf.debugging.set_log_device_placement(True)
gpus = tf.config.list_logical_devices('GPU')
strategy = tf.distribute.MirroredStrategy(gpus)
with strategy.scope():
    model = Sequential()
    model.add(LSTM(layer_unit, input_shape=(lookback, number_of_feature), return_sequences=True))
    model.add(Dropout(dr))
    for i in range(hidden_layer_no - 2):
        model.add(LSTM(layer_unit, return_sequences=True))
        model.add(Dropout(dr))
    model.add(LSTM(layer_unit))
    model.add(Dropout(dr))

    model.add(Dense(forward, activation='sigmoid'))  # relu
    model.compile(loss='binary_crossentropy', optimizer=opt)

Disclaimer: I don't know whether this also works with a single GPU.

Votes: 0
The original content of this page is from Stack Overflow; translation support was provided by Tencent Cloud's IT-domain translation engine.
Original link: https://stackoverflow.com/questions/72134608
