
TensorFlow conversion from .pb to .tflite fails due to an ops error

Stack Overflow user
Asked on 2022-05-11 13:10:07
1 answer · 475 views · 0 followers · 1 vote

Hey everyone, this is my first question here. If I've done something wrong or you need more information, please let me know and I'll do my best to provide it.

I am trying to create an object detector for TensorFlow Lite. For this I trained a model named ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8, downloaded from here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md

I followed some tutorials for this, and everything went fine. After training, I exported my newest checkpoint with

python exporter_main_v2.py \
  --pipeline_config_path=training/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config \
  --trained_checkpoint_dir=/training \
  --output_directory=/training/model

This gave me a saved_model.pb in training/model/saved_model. My plan now is to convert this saved_model.pb into a .tflite file.

This is where I'm stuck. I tried

tflite_convert \
  --output_file=tflite/ \
  --saved_model_dir=model/saved_model \
  --graph_def_file=model/saved_model/saved_model.pb \
  --input_arrays=input \
  --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 \
  --input_shape=1,320,320,3 \
  --allow_custom_ops

and also

toco \
  --saved_model_dir=model/saved_model \
  --output_file=tflite/detect.tflite

But no matter what I do, I always get a long error message:

tflite_convert --output_file=training/tflite/ --saved_model_dir=training/model/saved_model --graph_def_file=training/model/saved_model/saved_model.pb --input_arrays=input --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --input_shape=1,320,320,3 --allow_custom_ops
2022-05-11 15:01:16.421059: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-05-11 15:01:16.905178: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7445 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:02:00.0, compute capability: 8.6
2022-05-11 15:01:25.665750: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:357] Ignored output_format.
2022-05-11 15:01:25.665982: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:360] Ignored drop_control_dependency.
2022-05-11 15:01:25.666905: I tensorflow/cc/saved_model/reader.cc:43] Reading SavedModel from: training/model/saved_model

    raise converter_error
tensorflow.lite.python.convert_phase.ConverterError: <unknown>:0: error: loc(callsite(callsite(fused["ConcatV2:", "Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/ChangeCoordinateFrame/Scale/concat@__inference_call_func_11394"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_13768"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.ConcatV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
<unknown>:0: note: loc(callsite(callsite(fused["ConcatV2:", "Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/ChangeCoordinateFrame/Scale/concat@__inference_call_func_11394"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_13768"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
<unknown>:0: error: loc(callsite(fused["StridedSlice:", "map/while/strided_slice@map_while_body_7735"] at callsite(callsite(fused["StatelessWhile:", "map/while@__inference_call_func_11394"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_13768"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]))): 'tf.StridedSlice' op is neither a custom op nor a flex op
<unknown>:0: note: loc(callsite(callsite(fused["StatelessWhile:", "map/while@__inference_call_func_11394"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_13768"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): called from
<unknown>:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
<unknown>:0: note: loc(callsite(fused["StridedSlice:", "map/while/strided_slice@map_while_body_7735"] at callsite(callsite(fused["StatelessWhile:", "map/while@__inference_call_func_11394"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_13768"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]))): Error code: ERROR_NEEDS_FLEX_OPS
<unknown>:0: error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: ConcatV2, StridedSlice
Details:
        tf.ConcatV2(tensor<f32>, tensor<f32>, tensor<f32>, tensor<f32>, tensor<i32>) -> (tensor<4xf32>) : {device = ""}
        tf.StridedSlice(tensor<?x?x3xf32>, tensor<4xi32>, tensor<4xi32>, tensor<4xi32>) -> (tensor<1x?x?x3xf32>) : {begin_mask = 14 : i64, device = "", ellipsis_mask = 0 : i64, end_mask = 14 : i64, new_axis_mask = 1 : i64, shrink_axis_mask = 0 : i64}

The scripts I tried to use come from here: https://github.com/tensorflow/models/tree/master/research/object_detection

Since I'm a beginner with TensorFlow, I don't really understand the problem here. As far as I can tell, it is related to ops: some operations supported in TensorFlow 2 are not supported in TensorFlow Lite.
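The error message itself points at one possible workaround: enabling the TF Select (Flex) fallback so unsupported ops run on full TensorFlow kernels. The `tflite_convert` CLI does not expose this setting, but the Python converter API does. A minimal sketch, assuming the paths from the commands above (untested against this particular model):

```shell
python - <<'EOF'
import tensorflow as tf

# Path is an assumption, taken from the exporter command above.
converter = tf.lite.TFLiteConverter.from_saved_model("training/model/saved_model")

# Allow ops without a native TFLite kernel (here ConcatV2, StridedSlice)
# to fall back to the regular TensorFlow kernels via TF Select.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # native TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,     # TF Select fallback
]

tflite_model = converter.convert()
with open("training/tflite/detect.tflite", "wb") as f:
    f.write(tflite_model)
EOF
```

Note that a model converted with TF Select ops needs the Flex delegate available in the TFLite runtime at inference time, which increases binary size; for the Object Detection API the exporter-based route in the accepted answer avoids this entirely.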


1 Answer

Stack Overflow user

Accepted answer

Posted on 2022-05-16 15:08:11

I found my mistake.

I was simply using the wrong script to convert my checkpoint to a frozen graph (.pb).

I needed to use the export_tflite_graph_tf2.py script instead of exporter_main_v2.py. After that, I could simply use tflite_convert to export to .tflite.
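The corrected flow can be sketched with the same config and checkpoint paths as in the question (the output directory name is an assumption):

```shell
# Step 1: export a TFLite-compatible SavedModel from the checkpoint,
# using export_tflite_graph_tf2.py instead of exporter_main_v2.py.
# This rewrites the postprocessing (NMS) as a TFLite custom op, so the
# ConcatV2/StridedSlice flex-op errors from before no longer occur.
python export_tflite_graph_tf2.py \
  --pipeline_config_path=training/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config \
  --trained_checkpoint_dir=training \
  --output_directory=training/tflite_model

# Step 2: convert the exported SavedModel to a .tflite file.
tflite_convert \
  --saved_model_dir=training/tflite_model/saved_model \
  --output_file=training/tflite/detect.tflite
```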

1 vote
Original content provided by Stack Overflow; translation supported by Tencent Cloud's engine.
Original link: https://stackoverflow.com/questions/72201667
