# Build Your Own "Average Face" (2)

[1] Obtain the 68 facial landmarks and, using these points as vertices, compute a Delaunay triangulation, as in the figure below:

[Code-1] First we need the 68 facial landmarks, which outline the face shape, eyebrows, eyes, nose, and mouth. Fortunately, with dlib this otherwise complex task takes just a few lines of code:

```python
import dlib
import numpy as np
from skimage import io

# predictor_path points to dlib's 68-landmark model file;
# image_file_path is the face image to process.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)

img = io.imread(image_file_path)
dets = detector(img, 1)                       # detect faces in the image
for k, d in enumerate(dets):
    shape = predictor(img, d)                 # 68 landmarks for this face
    points = np.zeros((68, 2), dtype=int)
    for i in range(0, 68):
        points[i] = (int(shape.part(i).x), int(shape.part(i).y))
```

[Code-2] Next, compute the Delaunay triangulation, where `points` are the 68 facial landmarks and `rect` is the rectangle containing the face:

```python
def calculateDelaunayTriangles(rect, points):
    # Insert the landmarks into an OpenCV Subdiv2D, which performs
    # the Delaunay triangulation incrementally.
    subdiv = cv2.Subdiv2D(rect)
    for p in points:
        subdiv.insert((p[0], p[1]))
    triangleList = subdiv.getTriangleList()

    delaunayTri = []
    for t in triangleList:
        # Each row of triangleList is (x1, y1, x2, y2, x3, y3)
        pt = [(t[0], t[1]), (t[2], t[3]), (t[4], t[5])]
        pt1, pt2, pt3 = pt
        if rectContains(rect, pt1) and rectContains(rect, pt2) and rectContains(rect, pt3):
            # Map each triangle vertex back to its landmark index
            ind = []
            for j in range(0, 3):
                for k in range(0, len(points)):
                    if abs(pt[j][0] - points[k][0]) < 1.0 and abs(pt[j][1] - points[k][1]) < 1.0:
                        ind.append(k)
            if len(ind) == 3:
                delaunayTri.append((ind[0], ind[1], ind[2]))
    return delaunayTri
```
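The helper `rectContains` used above is not shown in this excerpt. A minimal sketch of what it presumably does, assuming `rect` is an `(x, y, w, h)` tuple as returned by `cv2.boundingRect`:

```python
def rectContains(rect, point):
    # rect is (x, y, w, h); return True if point lies inside it
    if point[0] < rect[0] or point[1] < rect[1]:
        return False
    if point[0] > rect[0] + rect[2] or point[1] > rect[1] + rect[3]:
        return False
    return True
```

This filtering matters because `Subdiv2D` can produce triangles with vertices outside the face rectangle (from its internal bounding vertices), and those must be discarded.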

[2] Then, for each face, compute the affine transform of every Delaunay triangle and warp the triangles accordingly:

[Code-3] Compute and apply the affine transform

```python
def applyAffineTransform(src, srcTri, dstTri, size):
    # Estimate the affine transform mapping the source triangle
    # onto the destination triangle.
    warpMat = cv2.getAffineTransform(np.float32(srcTri), np.float32(dstTri))
    # Apply it to the source patch.
    dst = cv2.warpAffine(src, warpMat, (size[0], size[1]), None,
                         flags=cv2.INTER_LINEAR,
                         borderMode=cv2.BORDER_REFLECT_101)
    return dst
```

[Code-4] Warp the Delaunay triangles via the affine transform

```python
def warpTriangle(img1, img2, t1, t2):
    # Bounding rectangles of the source and destination triangles
    r1 = cv2.boundingRect(np.float32([t1]))
    r2 = cv2.boundingRect(np.float32([t2]))

    # Triangle coordinates relative to each bounding rectangle
    t1Rect = []
    t2Rect = []
    t2RectInt = []
    for i in range(0, 3):
        t1Rect.append(((t1[i][0] - r1[0]), (t1[i][1] - r1[1])))
        t2Rect.append(((t2[i][0] - r2[0]), (t2[i][1] - r2[1])))
        t2RectInt.append(((t2[i][0] - r2[0]), (t2[i][1] - r2[1])))

    # Mask covering the destination triangle
    mask = np.zeros((r2[3], r2[2], 3), dtype=np.float32)
    cv2.fillConvexPoly(mask, np.int32(t2RectInt), (1.0, 1.0, 1.0), 16, 0)

    # Warp the source patch onto the destination triangle
    img1Rect = img1[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    size = (r2[2], r2[3])
    img2Rect = applyAffineTransform(img1Rect, t1Rect, t2Rect, size)
    img2Rect = img2Rect * mask

    # Blend the warped triangle into the destination image
    img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] = \
        img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] * ((1.0, 1.0, 1.0) - mask) + img2Rect
```

[NOTE-1] The individual face images used to build the average face will likely come in different sizes, so for convenience we resize them all to 600×600. Ideally, the original photos should be larger than that.

[NOTE-2] Since the goal is an average face, it is best to use frontal faces in an upright pose, with facial expressions that are not too exaggerated.

The following snippet classifies the gender of each detected face with a pre-trained Caffe model and copies the image into a per-gender folder:

```python
mean_filename = 'models\\mean.binaryproto'
gender_net_model_file = 'models\\deploy_gender.prototxt'
gender_net_pretrained = 'models\\gender_net.caffemodel'

# `mean` is assumed to be loaded from mean_filename
# (the loading code is not shown in this excerpt).
gender_net = caffe.Classifier(gender_net_model_file, gender_net_pretrained,
                              mean=mean, channel_swap=(2, 1, 0),
                              raw_scale=255, image_dims=(256, 256))
gender_list = ['Male', 'Female']

img = io.imread(image_file_path)
dets = detector(img, 1)
for k, d in enumerate(dets):
    cropped_face = img[d.top():d.bottom(), d.left():d.right(), :]
    # Enlarge the crop by 10% on each side before classification
    h = d.bottom() - d.top()
    w = d.right() - d.left()
    hF = int(h * 0.1)
    wF = int(w * 0.1)
    cropped_face_big = img[d.top() - hF:d.bottom() + hF,
                           d.left() - wF:d.right() + wF, :]
    prediction = gender_net.predict([cropped_face_big])
    gender = gender_list[prediction[0].argmax()].lower()
    print('predicted gender:', gender)
    # Copy the image into the folder for its predicted gender
    gender_dir = dirname + gender + "\\"
    copyfile(image_file_path, gender_dir + filename)
```
