I made a Discord bot in Python, and I recently added a "Chatbot" feature to it using TensorFlow and NLTK. When I run the bot locally it works perfectly with no issues, but when I moved it to the Namecheap shared hosting package where my portfolio is hosted, it started throwing this error:
OpenBLAS blas_thread_init: pthread_create failed for thread 29 of 64: Resource temporarily unavailable
nltk and tensorflow fail to import, and the bot crashes.
I searched around and found a solution that told me to set os.environ['OPENBLAS_NUM_THREADS'] = '1' before any of the imports.
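For context, the important detail is that the environment variable has to be set before anything that links against OpenBLAS gets imported. A minimal sketch of the placement (the import list here is illustrative, not the exact top of my main.py):

import os

# Must be set before numpy / TensorFlow / NLTK are imported,
# otherwise OpenBLAS has already spawned its thread pool
os.environ['OPENBLAS_NUM_THREADS'] = '1'

import nltk
import tensorflow as tf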
This fixed the previous error, but now it gives another one:
Check failed: ret == 0 (11 vs. 0)Thread creation via pthread_create() failed.
The complete output of running python main.py is now:
2021-06-10 11:18:19.606471: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-06-10 11:18:19.606497: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-06-10 11:18:21.090650: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-06-10 11:18:21.090684: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-06-10 11:18:21.090716: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (server270.web-hosting.com): /proc/driver/nvidia/version does not exist
2021-06-10 11:18:21.091042: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-06-10 11:18:21.092409: F tensorflow/core/platform/default/env.cc:73] Check failed: ret == 0 (11 vs. 0)Thread creation via pthread_create() failed.
To keep this question from getting too long, the source files are hosted on GitHub: https://github.com/Nalin-2005/The2020CoderBot, and the README.md explains which files contain which part of the bot.
The bot is hosted on Namecheap shared hosting; the relevant details and technical specs of the server are:

CPU (from cat /proc/cpuinfo | grep 'model name' | uniq): Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz

As far as I know, both issues are caused by limited RAM or CPU usage. But now the Python script itself is prevented from running at all.
So, what is actually causing this (in case my guess is wrong), and how can I solve it?
Answered on 2021-06-11 02:03:17
After some brainstorming and Googling, I discovered TensorFlow Lite, which consumes far fewer resources while giving the same performance on my server, and which I could easily integrate with my existing code to produce a much more resource-efficient model. For anyone wondering how to convert a Keras model to TensorFlow Lite, here are the instructions.
When saving the trained model, replace:

model.save("/path/to/model.h5")

with:

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("/path/to/model.tflite", "wb") as f:
    f.write(tflite_model)
When loading and running the model, use the TFLite interpreter instead of loading the Keras model:

import numpy as np

model = tf.lite.Interpreter("/path/to/model.tflite")
model.allocate_tensors()
input_details = model.get_input_details()
output_details = model.get_output_details()

# prepare input data as a numpy array matching input_details[0]['shape'] and dtype
model.set_tensor(input_details[0]['index'], input_data)
model.invoke()
output_data = model.get_tensor(output_details[0]['index'])
results = np.squeeze(output_data)
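The "prepare input data" step depends entirely on how the model was trained. As an illustration only, assuming the usual NLTK bag-of-words preprocessing that this kind of chatbot uses, input_data could be built roughly like this; bag_of_words and vocabulary are illustrative names, and the real word list is whatever was saved at training time:

import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer

# Requires nltk.download('punkt') and nltk.download('wordnet') to have been run once
lemmatizer = WordNetLemmatizer()

# Placeholder vocabulary; in practice, load the word list saved during training
vocabulary = ["hello", "there", "bye"]

def bag_of_words(sentence, vocab):
    # Tokenize and lemmatize the incoming message, then mark which vocabulary words it contains
    tokens = [lemmatizer.lemmatize(w.lower()) for w in nltk.word_tokenize(sentence)]
    bag = [1.0 if word in tokens else 0.0 for word in vocab]
    # TFLite is strict about shape and dtype: one sample per batch, float32
    return np.array([bag], dtype=np.float32)

input_data = bag_of_words("hello there", vocabulary)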
Source: https://stackoverflow.com/questions/67929842