A simple experiment: using ModelScope's human matting model to matte a video stream and replace its background.
Model page: 视频人像抠图模型-通用领域 (video human matting, general domain). It is one of the few ModelScope models that runs under Windows, so I gave it a try.

Video human matting is a classic computer vision task: given a video (a sequence of images), produce the alpha matte of the person in each frame. An alpha matte differs from a segmentation mask: a mask splits a frame into foreground and background with values of only 0 and 1, while alpha ranges continuously from 0 to 1, with each value representing per-pixel transparency. The VHM model costs about 10.6 GFLOPs per 1080p frame and has only 6.3M parameters.
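The mask-versus-alpha distinction can be made concrete with a tiny NumPy sketch (the pixel values below are made up for illustration, not taken from the model):

```python
import numpy as np

# Four hypothetical pixels along a hair edge.
binary_mask = np.array([0, 1, 1, 0])          # hard foreground/background split
alpha_matte = np.array([0.0, 0.3, 0.9, 1.0])  # per-pixel transparency in [0, 1]

fg = np.array([200.0, 200.0, 200.0, 200.0])   # foreground intensities
bg = np.array([50.0, 50.0, 50.0, 50.0])       # background intensities

# Hard compositing with a mask: each pixel is fully fg or fully bg.
hard = fg * binary_mask + bg * (1 - binary_mask)

# Soft compositing with an alpha matte: fg and bg blend smoothly,
# which is what makes matting look natural around hair and edges.
soft = alpha_matte * fg + (1 - alpha_matte) * bg
```

The soft composite produces intermediate values (e.g. a 0.3-alpha pixel is 30% foreground, 70% background), which is exactly what a binary mask cannot express.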
Core code:
from modelscope.outputs import OutputKeys
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

video_matting = pipeline(Tasks.video_human_matting,
                         model='damo/cv_effnetv2_video-human-matting')
result_status = video_matting({'video_input_path': 'https://modelscope.oss-cn-beijing.aliyuncs.com/test/videos/video_matting_test.mp4',
                               'output_path': 'matting_out.mp4'})
result = result_status[OutputKeys.MASKS]
Normally, the output path receives the matting mask as a video, `result` is a list containing one ndarray per frame, and the string 'matting process done' is printed.
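Before doing any compositing it is worth inspecting what the pipeline actually returned, since the shape and value range of the masks determine how they must be applied. A small sketch (`describe_masks` is my own helper, and the dummy list below stands in for the real `result`):

```python
import numpy as np

def describe_masks(masks):
    """Summarize a list of per-frame mask ndarrays:
    returns (frame count, shape, dtype, min value, max value) of the first mask."""
    first = np.asarray(masks[0])
    return len(masks), first.shape, first.dtype, float(first.min()), float(first.max())

# Dummy data standing in for real pipeline output, just to show the usage:
dummy = [np.random.rand(256, 256, 3).astype(np.float32) for _ in range(3)]
count, shape, dtype, lo, hi = describe_masks(dummy)
```

If the max comes back near 255 rather than 1, the masks need dividing by 255 before being used as alpha weights.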
So `result` is only a half-finished product: to replace the background you have to do some extra editing yourself.
I adapted it into the following:
import cv2

def video_human_segmentation(video_path, out_path, result_msk,
                             back_pic="mask_backaround.jpg"):
    '''
    video_path: URL of the input video
    out_path: path of the exported video
    result_msk: list of per-frame mask ndarrays
    '''
    video_input = cv2.VideoCapture(video_path)
    fps = video_input.get(cv2.CAP_PROP_FPS)
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    success, frame = video_input.read()
    h, w = frame.shape[:2]
    video_save = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
    # the (fixed) background image
    back_ori = cv2.imread(back_pic)
    n = 0
    while True:
        if frame is None:
            break
        mask = result_msk[n]
        n += 1
        # mask = mask / 255.0  # enable if the masks come back in 0-255 rather than 0-1
        # cut the person out of the original frame
        frame[:, :, 0] = frame[:, :, 0] * mask[:, :, 0]  # person pixels keep their values
        frame[:, :, 1] = frame[:, :, 1] * mask[:, :, 0]
        frame[:, :, 2] = frame[:, :, 2] * mask[:, :, 0]
        # cut the person-shaped hole out of the background image
        back = cv2.resize(back_ori, (w, h))
        back[:, :, 0] = back[:, :, 0] * (1 - mask[:, :, 0])  # person pixels become 0
        back[:, :, 1] = back[:, :, 1] * (1 - mask[:, :, 0])
        back[:, :, 2] = back[:, :, 2] * (1 - mask[:, :, 0])
        # add the two images together
        com = cv2.add(back, frame)
        video_save.write(com)
        success, frame = video_input.read()
    video_input.release()
    video_save.release()
video_path = 'https://modelscope.oss-cn-beijing.aliyuncs.com/test/videos/video_matting_test.mp4'
out_path = 'test_3.mp4'
video_human_segmentation(video_path, out_path, result, back_pic="mask_backaround.jpg")
video_human_segmentation does a few things: it reads the video stream with cv2.VideoCapture; loads the (fixed) background image and cuts a person-shaped hole out of it; cuts the person out of the original frame; then composes background + matted person and writes the result out to the video.
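The three per-channel multiplications in the loop can also be collapsed into one broadcasted expression. A minimal sketch (`composite_frame` is my own helper, not part of the article's code), assuming the matte is float-valued in [0, 1]:

```python
import numpy as np

def composite_frame(frame, back, alpha):
    """Blend one frame over a background using an alpha matte.

    frame, back: uint8 BGR images of the same size (h, w, 3)
    alpha: float matte in [0, 1], shape (h, w) or (h, w, 1)
    Equivalent to the per-channel loop above, but done in one broadcast.
    """
    alpha = np.asarray(alpha, dtype=np.float32)
    if alpha.ndim == 2:                  # promote (h, w) to (h, w, 1)
        alpha = alpha[:, :, None]
    out = frame * alpha + back * (1.0 - alpha)
    return out.astype(np.uint8)
```

Because the weighting happens in float before the final cast, soft alpha edges blend smoothly instead of being clipped by two separate uint8 multiplications.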
Rough results (images omitted):
Original frame:
Background image:
Composite frame: