Scenario
This document demonstrates how to optimize an AI image-generation model on a GPU Cloud Server. With the acceleration capabilities of TACO Infer, the optimized model achieves a 2.8x speedup in inference time.
Directions
Purchasing a GPU Cloud Server
Instance: select Compute PNV4.
System disk: configure a cloud disk with a capacity of at least 500 GB.
Image: a public image is recommended; use CentOS 7.9 as the operating system.
After selecting a public image, check "Automatically install GPU driver on the backend"; the instance will preinstall the corresponding driver version after the system starts, as shown below:
Installing Docker and NVIDIA Docker
1. Log in to the instance as described in Logging in to a Linux Instance Using the Standard Login Method.
2. Run the following command to install Docker.
curl -s -L http://mirrors.tencent.com/install/GPU/taco/get-docker.sh | sudo bash
3. Run the following command to install nvidia-docker2.
curl -s -L http://mirrors.tencent.com/install/GPU/taco/get-nvidia-docker2.sh | sudo bash
Downloading and starting the Docker image
Environment preparation
1. Enter the /root/sd_webui_demo directory and run the following command to install TACO Infer and download the model weights:
source env.sh
Model export
1. Use the following code to load the model weights with diffusers and export them to TorchScript. You can also run export_model.py directly in the image directory:
import torch
import functools
from diffusers import StableDiffusionPipeline

def get_unet(device="cuda:0"):
    pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5").to(device)
    unet = pipe.unet
    unet.eval()
    unet.to(memory_format=torch.channels_last)  # use channels_last memory format
    unet.forward = functools.partial(unet.forward, return_dict=False)  # set return_dict=False as default
    return unet

def get_sample_input(batch_size, latent_height, latent_width, device="cuda:0"):
    dtype = torch.float32
    text_maxlen = 77
    embedding_dim = 768
    return (
        torch.randn(2*batch_size, 4, latent_height, latent_width, dtype=dtype, device=device),
        torch.tensor([1.]*batch_size, dtype=dtype, device=device),
        torch.randn(2*batch_size, text_maxlen, embedding_dim, dtype=dtype, device=device),
    )

model = get_unet()
test_data = get_sample_input(1, 64, 64)
script_model = torch.jit.trace(model, test_data, strict=False)
script_model.save("origin_model/trace_module.pt")
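The sample-input shapes above are not arbitrary: Stable Diffusion's VAE downsamples images by a factor of 8 (a 512x512 image becomes a 4-channel 64x64 latent), and classifier-free guidance runs a conditional and an unconditional pass together, doubling the batch dimension. The following stdlib-only sketch makes that arithmetic explicit (the helper name `unet_input_shapes` is ours, not part of TACO Infer or diffusers):

```python
def unet_input_shapes(batch_size, image_height, image_width,
                      text_maxlen=77, embedding_dim=768):
    """Compute the UNet sample-input shapes for Stable Diffusion v1.5.

    The VAE downsamples images by 8x, and classifier-free guidance
    doubles the batch (one conditional + one unconditional pass).
    """
    latent_h, latent_w = image_height // 8, image_width // 8
    guided_batch = 2 * batch_size
    latents = (guided_batch, 4, latent_h, latent_w)           # noisy latents
    timesteps = (batch_size,)                                 # one timestep per sample
    text_embeds = (guided_batch, text_maxlen, embedding_dim)  # CLIP text embeddings
    return latents, timesteps, text_embeds

# A 512x512 generation with batch size 1 reproduces the shapes
# passed to get_sample_input(1, 64, 64) above.
print(unet_input_shapes(1, 512, 512))
```

If you trace the UNet for a different resolution, recompute the latent size this way before calling torch.jit.trace, since the traced graph is specialized to the sample shapes.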
2. Go to the origin_model directory to view the exported model:
[root@vm-0-46-centos demo]# ll origin_model/
total 3358720
-rw-r--r-- 1 500 500 3439324538 Mar 13 16:56 trace_module.pt
Model optimization
1. Run the following command to start TACO Infer and optimize the exported model:
python demo.py
{
    "hardware": {
        "device": "NVIDIA A10, driver: 470.82.01",
        "driver": "470.82.01",
        "num_gpus": "1",
        "cpu": "AMD EPYC 7K83 64-Core Processor, family '25', model '1'"
    },
    "software": {
        "taco version": "0.0.0",
        "framework": "pytorch",
        "framework version": "1.12.1+cu113",
        "torch device": "NVIDIA A10"
    },
    "summary": {
        "working directory": "/workspace/Arch/demo",
        "input model path": "origin_model/trace_module.pt",
        "output model folder": "./optimized_model",
        "input model format": "torch.jit saved (traced or scripted) model",
        "status": "satisfactory",
        "baseline latency": "191ms 828us",
        "accelerated latency": "45ms 030us",
        "speedup": "4.26",
        "optimization time": "20min 53s 459ms",
        "env": "{}",
        "optimizations": "['wrap_forward_9 (10s 328ms 472us)', 'validate_testdata_8 (1min 6s 540ms)', 'export_7 (1min 13s 965ms)', 'module_pretreat_6 (1min 8s 170ms)', 'torch_trt_5 (7min 43s 367ms)', 'select_3 (1min 46s 166ms)', 'profile_final_perf_2 (12s 839ms 319us)', 'unwrap_forward_1 (6s 660ms 227us)']"
    }
}
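Because the optimization report is plain JSON, the key figures can be extracted programmatically, for example to check the speedup in a CI job. A minimal stdlib sketch, assuming the report has been saved as a string or file (only the summary fields used here are reproduced; the `latency_us` helper is ours):

```python
import json

report = """{
  "summary": {
    "baseline latency": "191ms 828us",
    "accelerated latency": "45ms 030us",
    "speedup": "4.26",
    "status": "satisfactory"
  }
}"""

def latency_us(text):
    """Parse a '<N>ms <M>us' latency string into microseconds."""
    ms, us = text.split()
    return int(ms.rstrip("ms")) * 1000 + int(us.rstrip("us"))

summary = json.loads(report)["summary"]
baseline = latency_us(summary["baseline latency"])
accelerated = latency_us(summary["accelerated latency"])
print(f"speedup: {baseline / accelerated:.2f}x")  # prints "speedup: 4.26x" for the values above
```

The recomputed ratio (191828us / 45030us) matches the "speedup" field reported by TACO Infer.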
2. After optimization completes, view the optimized model in the optimized_model directory:
[root@4e302835766c /root/demo]# ll optimized_model/
total 1.8G
drwxr-xr-x 2 500 500 4.0K Mar  3 14:38 ./
-rw-r--r-- 1 500 500 1.8G Mar 13 16:46 optimized_recursive_script_module.pt
drwxr-xr-x 7 500 500 4.0K Mar 13 16:47 ../
Model verification
With the optimized model file produced by the steps above, you can load it using the torch.jit.load API to verify its performance and correctness. The code to load and run the model is as follows:
import torch
import taco

def gen_test_data(batch_size: int = 1) -> torch.Tensor:
    IMAGE_SIZE = 224
    return torch.rand(batch_size, 3, IMAGE_SIZE, IMAGE_SIZE)

optimized_model = torch.jit.load("optimized_dir/optimized_module.pt")
test_data = gen_test_data(batch_size=1).cuda()
with torch.no_grad():
    output = optimized_model(test_data)
print(output.shape)
Note that the optimized model contains highly optimized TACO Kit custom operators, so before running the model you must execute import taco to load the dynamic-link library that contains these custom operators. After adjusting the relevant parameters for your own output model directory, run the code above to load the optimized model and perform inference.
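To compare the latency of the original and optimized models yourself, a small timing helper is enough. Below is a minimal stdlib-only sketch (the `benchmark` function and its defaults are ours, not part of TACO Infer); it works for any callable, including a module loaded with torch.jit.load:

```python
import time

def benchmark(fn, *args, warmup=3, iters=10):
    """Return the mean latency of fn(*args) in milliseconds.

    A few warmup calls run first so one-time costs (CUDA context
    creation, JIT compilation, caches) do not skew the measurement.
    """
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - start) / iters * 1000.0

# Example with a trivial stand-in for the model:
mean_ms = benchmark(lambda x: x * 2, 21)
print(f"{mean_ms:.3f} ms per call")
```

When timing GPU models, also call torch.cuda.synchronize() before each clock read: CUDA kernels launch asynchronously, so without synchronization the timer measures launch overhead rather than execution time.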
Overall performance evaluation
Use the following code to evaluate the end-to-end speedup of the optimized Stable Diffusion pipeline:
import time
import torch
from diffusers import StableDiffusionPipeline
from dataclasses import dataclass
import taco

prompt = "a photo of an astronaut riding a horse on mars"
pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
pipe = pipe.to("cuda:0")

@dataclass
class UNet2DConditionOutput:
    sample: torch.FloatTensor

class Config():
    def __init__(self, sample_size):
        self.sample_size = sample_size

class OptUNet(torch.nn.Module):
    def __init__(self, model_path):
        super().__init__()
        self.in_channels = pipe.unet.in_channels
        self.device = pipe.unet.device
        self.config = Config(64)
        self.model_path = model_path
        self.trace_unet = torch.jit.load(self.model_path).to(self.device).eval()

    def forward(self, latent_model_input, t, encoder_hidden_states, cross_attention_kwargs=None):
        t = torch.tensor([t]*2)
        sample = self.trace_unet(latent_model_input, t, encoder_hidden_states)[0]
        return UNet2DConditionOutput(sample=sample)

# warmup
image = pipe(prompt).images[0]
s0 = time.time()
image = pipe(prompt).images[0]
s1 = time.time()
print(f"StableDiffusionPipeline with origin Unet duration: {s1-s0}s")
image.save("./output/ori.png")

opt_model_path = "./optimized_model/optimized_recursive_script_module.pt"
torch.ops.load_library("/root/venv/taco_dev/lib/python3.8/site-packages/taco/torch_tensorrt/lib/libtorchtrt.so")
pipe.unet = OptUNet(opt_model_path).to("cuda:0")

# warmup
with torch.inference_mode():
    image = pipe(prompt).images[0]
s0 = time.time()
with torch.inference_mode():
    image = pipe(prompt).images[0]
s1 = time.time()
print(f"StableDiffusionPipeline with optimized Unet duration: {s1-s0}s")
image.save("./output/opt.png")
Summary
This document evaluated and optimized the Stable Diffusion model on a Tencent Cloud GPU Cloud Server. With TACO Infer's optimization, the Unet, which accounts for most of the model's inference time, achieved a speedup of more than 4x.