Converting a PyTorch model in any form (.t7, .pth, etc.) to .onnx can follow the same fixed recipe.
Complete implementation:
def pth2onnx(self, simplify_onnx_sw=True):
    import os
    import torch
    # Avoid the duplicate-OpenMP-runtime crash on macOS
    os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

    # Wrap the model so a checkpoint saved from DataParallel loads cleanly
    model = torch.nn.DataParallel(self.model)
    _state_dict = torch.load(pth_path, map_location=torch.device('cpu'))
    model.load_state_dict(_state_dict, strict=True)
    model.eval()

    # Export the underlying module (model.module), not the DataParallel wrapper
    torch.onnx.export(model.module,
                      torch.randn(batch_size, *C.input_shape),
                      pure_onnx_path,
                      input_names=["input"],
                      output_names=["output"]
                      )

    if simplify_onnx_sw:
        # Run onnx-simplifier on the exported graph and keep only the simplified file
        os.system('python -m onnxsim {} {}'.format(pure_onnx_path, simplified_onnx_path))
        print('\n Simplified onnx has been saved to {}\n'.format(simplified_onnx_path))
        os.remove(pure_onnx_path)
    else:
        print('\n Pure onnx has been saved to {}\n'.format(pure_onnx_path))
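The simplification step does not have to go through os.system; onnx-simplifier also exposes a Python API. A minimal in-process sketch, assuming onnx and onnxsim are installed and reusing the pure_onnx_path / simplified_onnx_path names from above:

import onnx
from onnxsim import simplify

# Load the freshly exported graph and run the simplifier on it
onnx_model = onnx.load(pure_onnx_path)
model_simp, check = simplify(onnx_model)
assert check, "simplified ONNX model could not be validated"
onnx.save(model_simp, simplified_onnx_path)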
Worked example:
import torch

model_dir = './'
pth_path = model_dir + 'A.pth'
onnx_path = model_dir + 'A.onnx'
batch_size = 1
input_shape = (3, 112, 112)

# Config and PFLD_SE3_eval come from the project's own code
cfg = Config()
cfg.load_from_file(args.model_cfg_file)
model = PFLD_SE3_eval(cfg.model_conf.layer_cfg, cfg.model_conf.num_points)
model.load(pth_path)
model.eval()

torch.onnx.export(model,
                  torch.randn(batch_size, *input_shape),
                  onnx_path,
                  input_names=["input"],
                  output_names=["output_0", "output_1"],
                  )
print('\n\n onnx has been saved to {}\n\n'.format(onnx_path))
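After exporting, it is worth sanity-checking the .onnx file by running it through onnxruntime and comparing against the PyTorch outputs. A minimal sketch, assuming the export above succeeded and that the model's first output head corresponds to "output_0":

import numpy as np
import onnxruntime as ort

dummy = torch.randn(batch_size, *input_shape)
torch_out = model(dummy)  # PyTorch reference outputs (a tuple of two tensors here)

sess = ort.InferenceSession(onnx_path, providers=['CPUExecutionProvider'])
ort_out = sess.run(None, {"input": dummy.numpy()})

# Compare the first output head; loosen the tolerances if your model needs it
np.testing.assert_allclose(torch_out[0].detach().numpy(), ort_out[0], rtol=1e-3, atol=1e-5)
print('onnxruntime outputs match PyTorch outputs')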
If you run this on a Mac, you also need to add this environment setting:
os.environ['KMP_DUPLICATE_LIB_OK']='True'
Possible error:
ImportError: cannot import name 'get_all_providers' from 'onnxruntime.capi._pybind_state'
General fix on macOS:
brew install libomp
If the same error still appears, it is probably a version mismatch; switching versions fixes it. For example, I ran:
pip install onnxruntime==1.2.0
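To confirm which onnxruntime version is actually being picked up after reinstalling (and that the import error is gone), a quick check like the following helps; both __version__ and get_available_providers are part of onnxruntime's public API:

import onnxruntime
print(onnxruntime.__version__)
print(onnxruntime.get_available_providers())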