[Paper Reproduction] ArcFace: Additive Angular Margin Loss for Deep Face Recognition

Author: 致Great · Published 2021-04-09

Paper title: "ArcFace: Additive Angular Margin Loss for Deep Face Recognition". Paper link: https://arxiv.org/pdf/1801.07698v1.pdf

I. Core Idea

This paper proposes ArcFace, a new loss function with a clear geometric interpretation. On top of L2-normalised weights and features, it introduces

\cos(\theta+m)

which maximises the decision boundary between classes directly in angular space, as shown below:

The figure above gives the geometric interpretation of ArcFace: (a) the blue and green dots are feature vectors from two different classes, e.g. blue for cat images and green for dog images; ArcFace directly enlarges the gap between the two classes. (b) The right side gives a more intuitive view of the angle and the angular margin: ArcFace's angular margin corresponds to the geodesic (arc) gap between classes on the hypersphere.
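
To make the margin concrete, here is a tiny numeric sketch (my own illustration, not code from the paper) comparing the normalised softmax logit cos(θ) with the ArcFace logit cos(θ + m) for a few angles; s = 64 and m = 0.5 are commonly used ArcFace settings and are assumptions here.

import math

s, m = 64.0, 0.5   # scale and additive angular margin (assumed values)
for theta_deg in (10, 30, 60, 90):
    theta = math.radians(theta_deg)
    plain = s * math.cos(theta)        # logit of normalised softmax
    arc = s * math.cos(theta + m)      # ArcFace logit: the margin is added to the angle itself
    print(f"theta={theta_deg:3d} deg   s*cos(theta)={plain:7.2f}   s*cos(theta+m)={arc:7.2f}")

Adding m to the angle always lowers the target logit (for θ + m ≤ π), so the network must pull same-class features into a tighter angular region to keep the loss small.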

II. Background

  • Deep convolutional networks map a face image to an embedding feature vector (usually after a pose normalisation step).
  • Ideally, features of the same person lie close together while features of different people lie far apart.
  • Deep-CNN-based face recognition methods differ mainly in three aspects. (1) Training data:
    • public datasets vary widely in size
    • the datasets contain annotation noise
    • the authors found several hundred overlapping face images between MegaFace and FaceScrub; they cleaned MS-Celeb-1M, MegaFace and FaceScrub and released the refined datasets
    • the orders-of-magnitude difference in training data scale is why industrial face recognition models are far better than academic ones
    • differences in training data also make some reported face recognition results impossible to reproduce exactly

    (2) Network architecture and settings:

    • ResNet, Inception-ResNet, VGG and Google Inception V1
    • the trade-off between training speed and model accuracy

    (3) Loss function:

    • Euclidean-margin-based losses
    • angular/cosine-margin-based losses

III. Evolution of the ArcFace Loss

This section walks through the evolution of the loss function from Softmax to ArcFace.

1. Softmax

L_{1}=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{W_{y_i}^{T}x_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_{j}^{T}x_i+b_{j}}}
  • m is the batch size and n is the number of classes
  • The Softmax loss does not explicitly optimise the embeddings so that positive pairs become more similar and negative pairs less similar; in other words, it does not enlarge the decision margin (a minimal reference sketch follows below).
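
For reference, the plain softmax baseline is just a fully connected layer followed by cross-entropy. A minimal PyTorch sketch, with illustrative dimensions (512-D features, 10 classes) that are assumptions and not from the paper:

import torch
import torch.nn as nn

fc = nn.Linear(512, 10)               # W and b of the softmax classification layer
criterion = nn.CrossEntropyLoss()     # log-softmax + NLL, i.e. the loss L1 above

x = torch.randn(8, 512)               # a batch of m = 8 embedding vectors
y = torch.randint(0, 10, (8,))        # their class labels
loss = criterion(fc(x), y)            # logits W^T x + b go straight into the loss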

2. Weights Normalisation

For simplicity the bias is fixed to

b_{j}=0

so the logit inside the Softmax loss can be written as:

W_{j}^{T}x_{i}=\left\|W_{j}\right\|\left\|x_{i}\right\|\cos(\theta_{j})

The weights are then L2-normalised so that

\left\|W_{j}\right\|=1

which gives:

L_{2}=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{\left\|x_{i}\right\|\cos(\theta_{y_i})}}{e^{\left\|x_{i}\right\|\cos(\theta_{y_i})}+\sum_{j=1,j\neq y_{i}}^{n}e^{\left\|x_{i}\right\|\cos(\theta_{j})}}

After weight normalisation the loss depends only on the angle between the feature vector and the class weight vector. The SphereFace paper reports that weight normalisation by itself brings only a small improvement.
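
A minimal sketch of what weight normalisation changes, again with assumed dimensions (512-D features, 10 classes):

import torch
import torch.nn.functional as F

W = torch.randn(10, 512)                 # class weight vectors; the bias is dropped (b_j = 0)
x = torch.randn(8, 512)                  # a batch of un-normalised features

logits = F.linear(x, F.normalize(W))     # = ||x_i|| * cos(theta_j), since every ||W_j|| = 1
cos_theta = logits / x.norm(dim = 1, keepdim = True)   # recover cos(theta_j) itself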

3. Multiplicative Angular Margin

In SphereFace, the angle to the target class is multiplied by the angular margin m:

L_{3}=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{\left\|x_{i}\right\|\cos(m\theta_{y_i})}}{e^{\left\|x_{i}\right\|\cos(m\theta_{y_i})}+\sum_{j=1,j\neq y_{i}}^{n}e^{\left\|x_{i}\right\|\cos(\theta_{j})}}

where

\theta_{y_i}\in[0,\pi/m]

Since the cosine is not monotonic over the whole angle range, a piecewise function

\psi(\theta_{y_i})

is used to make it monotonically decreasing:

L_{4}=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{\left\|x_{i}\right\|\psi(\theta_{y_i})}}{e^{\left\|x_{i}\right\|\psi(\theta_{y_i})}+\sum_{j=1,j\neq y_{i}}^{n}e^{\left\|x_{i}\right\|\cos(\theta_{j})}}

where:

\psi(\theta_{y_i})=(-1)^{k}\cos(m\theta_{y_i})-2k,\quad\theta_{y_i}\in\left[\frac{k\pi}{m},\frac{(k+1)\pi}{m}\right],\ k\in[0,m-1],\ m\geq 1

In practice, introducing

\psi(\theta_{y_i})

amounts to mixing a plain softmax term into training to help convergence, with the mixing weight controlled by a dynamic hyperparameter

\lambda

:

\psi(\theta_{y_i})=\frac{(-1)^{k}\cos(m\theta_{y_i})-2k+\lambda\cos(\theta_{y_i})}{1+\lambda}
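
The piecewise ψ(θ) can be written down directly. The sketch below is my own restatement (it omits SphereFace's annealing schedule for λ) and simply checks that ψ decreases monotonically while cos(mθ) does not:

import math

def psi(theta, m = 4, lam = 0.0):
    # which segment [k*pi/m, (k+1)*pi/m] the angle falls into
    k = int(theta * m / math.pi)
    piecewise = (-1) ** k * math.cos(m * theta) - 2 * k
    # lam = 0 gives the pure margin term; larger lam blends in plain cos(theta)
    return (piecewise + lam * math.cos(theta)) / (1 + lam)

for theta in (0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"theta={theta:.1f}   psi={psi(theta):+.3f}   cos(4*theta)={math.cos(4 * theta):+.3f}")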

4. Feature Normalisation

Normalising both features and weights removes radial variation and places every feature on a hypersphere. The paper fixes the hypersphere radius to s = 64, i.e.

\left\|x_{i}\right\|

is rescaled to s, and the SphereFace loss becomes:

L_{5}=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{s\psi(\theta_{y_i})}}{e^{s\psi(\theta_{y_i})}+\sum_{j=1,j\neq y_{i}}^{n}e^{s\cos(\theta_{j})}}
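
The normalisation and rescaling step itself is one line of PyTorch; the tensors below are illustrative (512-D features, 10 classes), and only s = 64 comes from the paper:

import torch
import torch.nn.functional as F

s = 64.0                                  # hypersphere radius used in the paper
W = F.normalize(torch.randn(10, 512))     # unit-norm class weights
x = F.normalize(torch.randn(8, 512))      # unit-norm features: radial variation is gone
logits = s * F.linear(x, W)               # = s * cos(theta_j), the logits fed into L5's softmax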

5. Additive Cosine Margin

Here m is set to 0.35. Compared with SphereFace, the additive cosine margin (CosFace) has three advantages: (1) it is very easy to implement, with no tricky hyperparameters; (2) it is clearer and converges without extra softmax supervision; (3) it brings an obvious performance gain.

L_{6}=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{s(\cos(\theta_{y_i})-m)}}{e^{s(\cos(\theta_{y_i})-m)}+\sum_{j=1,j\neq y_{i}}^{n}e^{s\cos(\theta_{j})}}
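
A minimal sketch of the additive cosine margin (the idea behind CosFace, not the authors' released code; dimensions are assumed):

import torch
import torch.nn.functional as F

s, m = 64.0, 0.35
W = F.normalize(torch.randn(10, 512))
x = F.normalize(torch.randn(8, 512))
y = torch.randint(0, 10, (8,))

cos_theta = F.linear(x, W)                            # cos(theta_j) for every class
one_hot = F.one_hot(y, num_classes = 10).float()
logits = s * (cos_theta - m * one_hot)                # subtract m from the target class only
loss = F.cross_entropy(logits, y)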

6. Additive Angular Margin

ArcFace adds the angular margin m directly to the angle of the target class:

L_{7}=-\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j=1,j\neq y_{i}}^{n}e^{s\cos(\theta_{j})}}

subject to:

W_{j}=\frac{W_{j}}{\left\|W_{j}\right\|},\quad x_{i}=\frac{x_{i}}{\left\|x_{i}\right\|},\quad\cos(\theta_{j})=W_{j}^{T}x_{i}
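
Moving the margin inside the cosine gives the ArcFace logit. The sketch below mirrors the CosFace one (dimensions are assumed); a full nn.Module built the same way appears in Section VI:

import torch
import torch.nn.functional as F

s, m = 64.0, 0.5
W = F.normalize(torch.randn(10, 512))
x = F.normalize(torch.randn(8, 512))
y = torch.randint(0, 10, (8,))

cos_theta = F.linear(x, W).clamp(-1 + 1e-7, 1 - 1e-7)   # clamp so acos stays finite
theta = torch.acos(cos_theta)
theta = theta + m * F.one_hot(y, num_classes = 10)      # add the angular margin to the target class only
loss = F.cross_entropy(s * torch.cos(theta), y)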

IV. Comparison of Different Loss Functions

V. Results

VI. PyTorch Implementation of ArcFace on the MNIST Dataset

1 Imports

import torch 
import torch.nn.functional as F

from torch import nn, optim 
from torch.utils.data import DataLoader
from torchvision import transforms as T, datasets

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt 
import plotly.express as px

from tqdm.notebook import tqdm
from sklearn.metrics import accuracy_score

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

2 Data Preprocessing

transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.5,), (0.5,))
])
trainset = datasets.MNIST('../input/mnist-dataset-pytorch', train = True, transform = transform)
testset = datasets.MNIST('../input/mnist-dataset-pytorch', train = False, transform = transform)

3 ArcFace CNN Model

class ArcFace(nn.Module):
    
    def __init__(self,in_features,out_features,margin = 0.7 ,scale = 64):
        super().__init__()
        
        self.in_features = in_features
        self.out_features = out_features
        self.scale = scale    # hypersphere radius s
        self.margin = margin  # additive angular margin m
        
        # class weight matrix W (out_features x in_features); no bias term, as in the paper
        self.weights = nn.Parameter(torch.FloatTensor(out_features,in_features))
        nn.init.xavier_normal_(self.weights)
        
    def forward(self,features,targets):
        # features are expected to be L2-normalised already, so this is cos(theta_j)
        cos_theta = F.linear(features,F.normalize(self.weights),bias=None)
        cos_theta = cos_theta.clip(-1+1e-7, 1-1e-7)  # keep acos numerically stable
        
        # add the margin m to the angle of the target class only
        arc_cos = torch.acos(cos_theta)
        M = F.one_hot(targets, num_classes = self.out_features) * self.margin
        arc_cos = arc_cos + M
        
        # back to cosine space and rescale: s * cos(theta_{y_i} + m)
        cos_theta_2 = torch.cos(arc_cos)
        logits = cos_theta_2 * self.scale
        return logits
    
    
class MNIST_Model(nn.Module):
    
    def __init__(self):
        super(MNIST_Model, self).__init__()

        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50,3)   # project to a 3-D embedding so it can be visualised directly
        self.arc_face = ArcFace(in_features = 3, out_features = 10)   # 3-D embeddings, 10 MNIST classes
        
    def forward(self,features,targets = None):
        
        x = F.relu(F.max_pool2d(self.conv1(features), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        _,c,h,w = x.shape
        x = x.view(-1, c*h*w)
        x = F.relu(self.fc1(x))
        x = F.normalize(self.fc2(x))   # L2-normalise the embedding, as required by the ArcFace head
        
        if targets is not None:
            logits = self.arc_face(x,targets)
            return logits
        return x
model = MNIST_Model()
model.to(device)
MNIST_Model(
  (conv1): Conv2d(1, 10, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2_drop): Dropout2d(p=0.5, inplace=False)
  (fc1): Linear(in_features=320, out_features=50, bias=True)
  (fc2): Linear(in_features=50, out_features=3, bias=True)
  (arc_face): ArcFace()
)
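
A quick shape check (my own sanity test, not part of the original notebook, reusing the model and device objects defined above) shows the two behaviours of the model: 10-way logits when targets are supplied, and a 3-D embedding otherwise.

dummy_images = torch.randn(4, 1, 28, 28).to(device)    # a fake batch of MNIST-sized images
dummy_labels = torch.randint(0, 10, (4,)).to(device)

logits = model(dummy_images, dummy_labels)             # ArcFace head is used when targets are given
embeddings = model(dummy_images)                       # without targets only the embedding is returned
print(logits.shape, embeddings.shape)                  # torch.Size([4, 10]) torch.Size([4, 3])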

4 Model Training

class TrainModel():
    
    def __init__(self,criterion = None,optimizer = None,scheduler = None,device = None):
        self.criterion = criterion
        self.optimizer = optimizer
        self.scheduler = scheduler
        self.device = device
        
    def accuracy(self,logits,labels):
        ps = torch.argmax(logits,dim = 1).detach().cpu().numpy()
        acc = accuracy_score(ps,labels.detach().cpu().numpy())
        return acc

    def get_dataloader(self,trainset,validset):
        trainloader = DataLoader(trainset,batch_size = 64, shuffle = True, num_workers = 4, pin_memory = True)
        validloader = DataLoader(validset,batch_size = 64, num_workers = 4, pin_memory = True)
        return trainloader, validloader
        
    def train_batch_loop(self,model,trainloader,i):
        
        epoch_loss = 0.0
        epoch_acc = 0.0
        pbar_train = tqdm(trainloader, desc = "Epoch" + " [TRAIN] " + str(i+1))
        
        for t,data in enumerate(pbar_train):
            
            images,labels = data
            images = images.to(device)
            labels = labels.to(device)
            
            logits = model(images,labels)
            loss = self.criterion(logits,labels)
            
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()
            
            epoch_loss += loss.item()
            epoch_acc += self.accuracy(logits,labels)
            
            pbar_train.set_postfix({'loss' : '%.6f' %float(epoch_loss/(t+1)), 'acc' : '%.6f' %float(epoch_acc/(t+1))})
            
        return epoch_loss / len(trainloader), epoch_acc / len(trainloader)
            
    
    @torch.no_grad()
    def valid_batch_loop(self,model,validloader,i):
        
        epoch_loss = 0.0
        epoch_acc = 0.0
        pbar_valid = tqdm(validloader, desc = "Epoch" + " [VALID] " + str(i+1))
        
        for v,data in enumerate(pbar_valid):
            
            images,labels = data
            images = images.to(device)
            labels = labels.to(device)
            
            logits = model(images,labels)
            loss = self.criterion(logits,labels)
            
            epoch_loss += loss.item()
            epoch_acc += self.accuracy(logits,labels)
            
            pbar_valid.set_postfix({'loss' : '%.6f' %float(epoch_loss/(v+1)), 'acc' : '%.6f' %float(epoch_acc/(v+1))})
            
        return epoch_loss / len(validloader), epoch_acc / len(validloader)
            
    
    def run(self,model,trainset,validset,epochs):
    
        trainloader,validloader = self.get_dataloader(trainset,validset)
        
        for i in range(epochs):
            
            model.train()
            avg_train_loss, avg_train_acc = self.train_batch_loop(model,trainloader,i)
            
            model.eval()
            avg_valid_loss, avg_valid_acc = self.valid_batch_loop(model,validloader,i)
            
        return model 
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr = 0.0001)


model = TrainModel(criterion, optimizer, device = device).run(model, trainset, testset, 20)
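
After training, the ArcFace head is no longer needed: recognition works by comparing embeddings directly. Below is a hedged sketch (reusing the trained model, testset and device from above) of how two test images could be compared by cosine similarity; the image indices and the 0.5 threshold are arbitrary choices, not values from the notebook.

model.eval()
with torch.no_grad():
    img_a = testset[0][0].unsqueeze(0).to(device)       # two arbitrary test images
    img_b = testset[1][0].unsqueeze(0).to(device)
    emb_a = model(img_a)                                # embeddings come out L2-normalised
    emb_b = model(img_b)
    similarity = (emb_a * emb_b).sum().item()           # cosine similarity = dot product of unit vectors
print('same class' if similarity > 0.5 else 'different class', round(similarity, 3))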

5 Extracting Image Embeddings

emb = []
y = []

testloader = DataLoader(testset,batch_size = 64)
model.eval()   # disable dropout so the embeddings are deterministic
with torch.no_grad():
    for images,labels in tqdm(testloader):
        
        images = images.to(device)
        embeddings = model(images)
        
        emb += [embeddings.detach().cpu()]
        y += [labels]
        
    embs = torch.cat(emb).cpu().numpy()
    y = torch.cat(y).cpu().numpy()
# the raw 3-D embeddings are plotted directly (no t-SNE is applied, despite the DataFrame name)
tsne_df = pd.DataFrame(
    np.column_stack((embs, y)),
    columns = ["x","y","z","targets"]
)

fig = px.scatter_3d(tsne_df, x='x', y='y', z='z',
              color='targets')
fig.show()

References

https://www.kaggle.com/parthdhameliya77/simple-arcface-implementation-on-mnist-dataset
