Model Preparation
TACO Infer supports optimizing models in two PyTorch formats: TorchScript and torch.nn.Module. In production, exporting a TorchScript model and then deploying it usually delivers the best performance. TorchScript is also the model format that TACO Infer supports most completely, so we recommend using it first. Before optimizing, you need to prepare the exported TorchScript model.
Example code for exporting the model:
```python
import pathlib

import torch
from torchvision.models.resnet import Bottleneck, ResNet

from taco.utils.network import wget_url


class ImageClassifier(ResNet):
    def __init__(self):
        super(ImageClassifier, self).__init__(Bottleneck, [3, 8, 36, 3])
        self.ckpt_url = "https://taco-1251783334.cos.ap-shanghai.myqcloud.com/model/pytorch/checkpoints/resnet152/resnet152-b121ed2d.pth"


def gen_model(work_dir: str) -> torch.nn.Module:
    # Download the pretrained checkpoint and load it into the model.
    model = ImageClassifier()
    model_path = pathlib.Path(work_dir) / "model.pth"
    wget_url(model.ckpt_url, model_path, disable_bar=False)
    model.load_state_dict(torch.load(model_path))
    return model


model = gen_model(".")
script_model = torch.jit.script(model)
script_model_path = "./model.pt"
torch.jit.save(script_model, script_model_path)
```
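If your model is hard to compile with torch.jit.script (for example, it uses Python constructs that the script compiler does not support), torch.jit.trace is a common alternative export path. The sketch below is illustrative only and not part of the TACO workflow above: it traces a small stand-in module instead of the ResNet-152 checkpoint so that it runs without downloading weights. Keep in mind that tracing only records the operations executed for the given example input, so it is unsuitable for models with data-dependent control flow.

```python
import torch

# Illustrative sketch: export via torch.jit.trace instead of torch.jit.script.
# A small stand-in module is used here in place of ResNet-152.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
model.eval()

# trace() records the ops executed for this one example input.
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)

# The traced model is saved and loaded exactly like a scripted one.
torch.jit.save(traced_model, "./traced_model.pt")
reloaded = torch.jit.load("./traced_model.pt")
print(reloaded(example_input).shape)  # torch.Size([1, 8, 222, 222])
```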
Importing TACO Infer
To optimize a model with TACO Infer, first import the Python module:
```python
from taco import optimize_gpu, OptimizeConfig, ModelConfig
```
Calling the Optimization API
```python
report = optimize_gpu(input_model,
                      output_model_dir,
                      test_data=test_data,
                      optimize_config=optimize_config,
                      model_config=model_config)
```
When optimization finishes, it produces an optimized model saved in the directory you specified, along with an optimization report that covers hardware information, software information, and metrics from the optimization process. A sample report looks like this:
```json
{
    "hardware": {
        "device": "NVIDIA A10, driver: 470.82.01",
        "driver": "470.82.01",
        "num_gpus": "1",
        "cpu": "AMD EPYC 7K83 64-Core Processor, family '25', model '1'"
    },
    "software": {
        "taco version": "0.2.10",
        "framework": "pytorch",
        "framework version": "1.12.0+cu113",
        "torch device": "NVIDIA A10"
    },
    "summary": {
        "working directory": "/root/resnet152",
        "input model": "ImageClassifier",
        "output model folder": "optimized_dir",
        "input model format": "torch.nn.Module memory object",
        "status": "satisfactory",
        "baseline latency": "20ms 102us",
        "accelerated latency": "5ms 418us",
        "speedup": "3.71",
        "optimization time": "1min 2s 542ms",
        "env": "{}"
    }
}
```
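The report's summary fields make it easy to gate deployment in a script. The sketch below is a hedged example: it assumes the report can be handled as plain JSON/dict data with the field names shown in the sample above; `worth_deploying` and the 1.5x threshold are our own illustrative choices, not part of the TACO API.

```python
import json

# Hedged sketch: decide from the optimization report whether the result
# is worth deploying. Assumes the dict/JSON field names from the sample
# report above; worth_deploying and the threshold are illustrative only.
REPORT_JSON = """
{"summary": {"status": "satisfactory",
             "baseline latency": "20ms 102us",
             "accelerated latency": "5ms 418us",
             "speedup": "3.71"}}
"""

def worth_deploying(report: dict, min_speedup: float = 1.5) -> bool:
    summary = report["summary"]
    return (summary["status"] == "satisfactory"
            and float(summary["speedup"]) >= min_speedup)

report = json.loads(REPORT_JSON)
print(worth_deploying(report))                   # True: 3.71x >= 1.5x
print(worth_deploying(report, min_speedup=5.0))  # False: 3.71x < 5.0x
```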
The complete optimization example is as follows:
```python
import torch

from taco import ModelConfig, OptimizeConfig
from taco.optimizer.optimize import optimize_gpu


def gen_test_data(batch_size: int = 1) -> torch.Tensor:
    IMAGE_SIZE = 224
    return torch.rand(batch_size, 3, IMAGE_SIZE, IMAGE_SIZE)


# the path of the TorchScript model exported in the previous section
script_model_path = "./model.pt"
test_data = gen_test_data(batch_size=1)
optim_cfg = OptimizeConfig()
model_cfg = ModelConfig()
report = optimize_gpu(input_model=script_model_path,
                      output_model_dir="optimized_dir",
                      test_data=test_data,
                      optimize_config=optim_cfg,
                      model_config=model_cfg)
```
After optimization completes, you can find the optimized model in the configured output directory:
```
[root@60abf692a8a1 /root/resnet152]# ll optimized_dir/
total 454M
drwxr-xr-x 3 root root 4.0K Jan  5 15:30 ../
drwxr-xr-x 2 root root 4.0K Jan  5 15:30 ./
-rw-r--r-- 1 root root 454M Jan  5 15:30 optimized_recursive_script_module.pt
```
As shown, after optimizing a model, TACO produces a TorchScript model file that you can deploy.
Model Validation
Once you have the optimized model file from the steps above, you can load it with the torch.jit.load API to validate its performance and correctness. The code to load and run the model is shown below:
```python
import torch
import taco


def gen_test_data(batch_size: int = 1) -> torch.Tensor:
    IMAGE_SIZE = 224
    return torch.rand(batch_size, 3, IMAGE_SIZE, IMAGE_SIZE)


optimized_model = torch.jit.load("optimized_dir/optimized_recursive_script_module.pt")
test_data = gen_test_data(batch_size=1).cuda()
with torch.no_grad():
    output = optimized_model(test_data)
print(output.shape)
```
Note that the optimized model contains highly optimized TACO Kit custom operators, so before running the model you must execute

```python
import taco
```

to load the dynamic library that contains those custom operators. After adjusting the paths for your own output directory, run the code above to load the optimized model and perform inference. The output log looks like this:
```
[root@a2c4f3d901e6 /root/resnet152]# python infer.py
torch.Size([1, 1000])
```
As shown, the model runs correctly and prints the computed result.
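Beyond a single forward pass, validating "performance and correctness" usually means checking numerical agreement with the original model and measuring latency yourself. The following self-contained sketch demonstrates both checks on a small stand-in module, so it runs without the optimized model file or a GPU; in practice you would load your original model and the TACO-optimized one (via torch.jit.load) and feed both the same test input. `measure_latency_ms` and the tolerances below are our own illustrative choices, not TACO APIs.

```python
import time

import torch

# Stand-in pair of models: in practice, reference_model is your original
# model and candidate_model is the TACO-optimized one (torch.jit.load).
reference_model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU()).eval()
candidate_model = torch.jit.script(reference_model)  # stand-in for the optimized model
test_data = torch.rand(1, 64)

# 1. Correctness: optimized kernels rarely match bit-for-bit, so compare
#    within a tolerance rather than requiring exact equality.
with torch.no_grad():
    ref = reference_model(test_data)
    out = candidate_model(test_data)
assert torch.allclose(ref, out, rtol=1e-3, atol=1e-5)

# 2. Latency: average over many iterations after a warm-up. On GPU, also
#    call torch.cuda.synchronize() before reading the clock so that
#    asynchronous kernels are included in the measurement.
def measure_latency_ms(model, data, warmup: int = 5, iters: int = 20) -> float:
    with torch.no_grad():
        for _ in range(warmup):   # warm-up runs are not timed
            model(data)
        start = time.perf_counter()
        for _ in range(iters):
            model(data)
        elapsed = time.perf_counter() - start
    return elapsed / iters * 1e3

print(f"latency: {measure_latency_ms(candidate_model, test_data):.3f} ms")
```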