Using the basic gRPC client from the TensorFlow Serving examples to get a prediction from a model running on Docker, I get the following response:
status = StatusCode.UNAVAILABLE
details = "OS Error"
debug_error_string = "{"created":"@1580748231.250387313",
"description":"Error received from peer",
"file":"src/core/lib/surface/call.cc",
"file_line":1017,"grpc_message":"OS Error","grpc_status":14}"这就是我的客户目前的样子:
import grpc
import tensorflow as tf
import cv2
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
def main():
    data = cv2.imread('/home/matt/Downloads/cat.jpg')
    channel = grpc.insecure_channel('localhost:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'model'
    request.model_spec.signature_name = 'serving_default'
    request.inputs['image_bytes'].CopyFrom(
        tf.make_tensor_proto(data, shape=[1, data.size]))
    result = stub.Predict(request, 10.0)  # 10 secs timeout
    print(result)

if __name__ == '__main__':
    main()

Thanks for your help :)
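As an aside, the debug_error_string in the error above is plain JSON, and grpc_status 14 is gRPC's UNAVAILABLE code, which generally means the channel never reached a server at the given address. It can be inspected with the standard library:

```python
import json

# The debug_error_string copied from the error response above.
debug_error_string = (
    '{"created":"@1580748231.250387313",'
    '"description":"Error received from peer",'
    '"file":"src/core/lib/surface/call.cc",'
    '"file_line":1017,"grpc_message":"OS Error","grpc_status":14}'
)

info = json.loads(debug_error_string)
# grpc_status 14 corresponds to StatusCode.UNAVAILABLE.
print(info['grpc_status'], info['grpc_message'])  # prints: 14 OS Error
```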
Posted on 2020-04-09 13:21:01
Providing the solution here, even though it is present in the comments section, for the benefit of the community.
The solution is that, before executing the client file, we need to start the TensorFlow Model Server by running the Docker container with the command given below:
docker run -t --rm -p 8500:8500 -p 8501:8501 \
  -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
  -e MODEL_NAME=half_plus_two \
  tensorflow/serving &

Note that the client above connects via gRPC, so port 8500 must be published as well (port 8500 is exposed for gRPC and port 8501 for the REST API).
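Before re-running the client, you can confirm from the host that the published ports are actually reachable. Below is a minimal sketch using only the Python standard library (the host and ports come from the code above; the helper name is mine):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == '__main__':
    # 8500 = gRPC, 8501 = REST API; both should be True once the
    # container is running with both ports published.
    print('gRPC 8500 reachable:', port_open('localhost', 8500))
    print('REST 8501 reachable:', port_open('localhost', 8501))
```

If the gRPC port check fails, the client will raise exactly the StatusCode.UNAVAILABLE error shown in the question.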
https://stackoverflow.com/questions/60043772