CVPR 2019 | Quick Look at 10 Papers (Covering Panoptic Segmentation, Instance Segmentation, Pose Estimation, and More)

[Introduction] The list of CVPR 2019 accepted papers has been released, but it only contains paper IDs, so there is no complete collection of the papers yet. CVer has been collecting and organizing them; today's post gives a quick look at 10 CVPR 2019 papers, covering panoptic segmentation, instance segmentation, pose estimation, and other directions.

Previously shared: CVPR 2019 | Quick Look at 15 Papers (Covering Object Detection, Semantic Segmentation, Pose Estimation, and More)

Special thanks to the CV_arXiv_Daily WeChat account for providing some of the material.

Instance Segmentation

[1] New CVPR 2019 paper: Mask Scoring R-CNN (code released)

Title: Mask Scoring R-CNN

Authors: Zhaojin Huang, Lichao Huang, Yongchao Gong, Chang Huang, Xinggang Wang

Paper: https://arxiv.org/abs/1903.00241

Code: https://github.com/zjhuang22/maskscoring_rcnn

Abstract: Letting a deep network be aware of the quality of its own predictions is an interesting yet important problem. In the task of instance segmentation, the confidence of instance classification is used as mask quality score in most instance segmentation frameworks. However, the mask quality, quantified as the IoU between the instance mask and its ground truth, is usually not well correlated with classification score. In this paper, we study this problem and propose Mask Scoring R-CNN which contains a network block to learn the quality of the predicted instance masks. The proposed network block takes the instance feature and the corresponding predicted mask together to regress the mask IoU. The mask scoring strategy calibrates the misalignment between mask quality and mask score, and improves instance segmentation performance by prioritizing more accurate mask predictions during COCO AP evaluation. By extensive evaluations on the COCO dataset, Mask Scoring R-CNN brings consistent and noticeable gain with different models, and outperforms the state-of-the-art Mask R-CNN. We hope our simple and effective approach will provide a new direction for improving instance segmentation.
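To make the calibration step concrete, here is a minimal PyTorch sketch of the mask-scoring idea as described in the abstract (an illustration, not the released code): a small head takes the RoI feature stacked with the predicted mask and regresses the mask IoU, and the final mask score is the classification confidence multiplied by that predicted IoU. All layer sizes below are assumptions.

```python
import torch
import torch.nn as nn

class MaskIoUHead(nn.Module):
    """Regresses the IoU of a predicted mask from the RoI feature plus the mask itself."""
    def __init__(self, in_channels=256, roi_size=14):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels + 1, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * (roi_size // 2) ** 2, 1024), nn.ReLU(),
            nn.Linear(1024, 1),  # predicted mask IoU
        )

    def forward(self, roi_feat, pred_mask):
        # roi_feat: (N, C, S, S), pred_mask: (N, 1, S, S)
        x = torch.cat([roi_feat, pred_mask], dim=1)
        return self.fc(self.convs(x)).squeeze(-1)

head = MaskIoUHead()
roi_feat, pred_mask = torch.randn(2, 256, 14, 14), torch.rand(2, 1, 14, 14)
cls_score = torch.tensor([0.9, 0.8])
mask_score = cls_score * head(roi_feat, pred_mask).clamp(0, 1)  # calibrated mask score
print(mask_score)
```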

3D Point Clouds

[2] New CVPR 2019 paper on 3D point clouds

Title: Octree guided CNN with Spherical Kernels for 3D Point Clouds

Authors: Huan Lei, Naveed Akhtar, Ajmal Mian

Paper: https://arxiv.org/abs/1903.00343

Abstract: We propose an octree guided neural network architecture and spherical convolutional kernel for machine learning from arbitrary 3D point clouds. The network architecture capitalizes on the sparse nature of irregular point clouds, and hierarchically coarsens the data representation with space partitioning. At the same time, the proposed spherical kernels systematically quantize point neighborhoods to identify local geometric structures in the data, while maintaining the properties of translation-invariance and asymmetry. We specify spherical kernels with the help of network neurons that in turn are associated with spatial locations. We exploit this association to avert dynamic kernel generation during network training that enables efficient learning with high resolution point clouds. The effectiveness of the proposed technique is established on the benchmark tasks of 3D object classification and segmentation, achieving new state-of-the-art on ShapeNet and RueMonge2014 datasets.
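The abstract's key ingredient is how a point's neighborhood is quantized by the spherical kernel. Below is a small numpy sketch of that general idea (our own illustration, not the authors' code): each neighbor's offset from the center point is binned by radius, elevation, and azimuth, and each bin would own a learnable weight. The bin counts here are assumptions.

```python
import numpy as np

def spherical_bin_index(offsets, radius=0.1, n_rad=3, n_elev=4, n_azim=8):
    """offsets: (N, 3) displacements of neighbors from the center point."""
    x, y, z = offsets[:, 0], offsets[:, 1], offsets[:, 2]
    r = np.linalg.norm(offsets, axis=1) + 1e-12
    elev = np.arccos(np.clip(z / r, -1.0, 1.0))   # polar angle in [0, pi]
    azim = np.arctan2(y, x) + np.pi               # azimuth shifted to [0, 2*pi]
    rad_bin  = np.minimum((r / radius * n_rad).astype(int), n_rad - 1)
    elev_bin = np.minimum((elev / np.pi * n_elev).astype(int), n_elev - 1)
    azim_bin = np.minimum((azim / (2 * np.pi) * n_azim).astype(int), n_azim - 1)
    # flatten the (radial, elevation, azimuth) bin into a single kernel-weight index
    return (rad_bin * n_elev + elev_bin) * n_azim + azim_bin

offsets = np.random.randn(16, 3) * 0.05
print(spherical_bin_index(offsets))
```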

Clustering

[3] New CVPR 2019 paper on clustering

Title: Efficient Parameter-free Clustering Using First Neighbor Relations

Authors: M. Saquib Sarfraz, Vivek Sharma, Rainer Stiefelhagen

Paper: https://arxiv.org/abs/1902.11266

Abstract: We present a new clustering method in the form of a single clustering equation that is able to directly discover groupings in the data. The main proposition is that the first neighbor of each sample is all one needs to discover large chains and find the groups in the data. In contrast to most existing clustering algorithms our method does not require any hyper-parameters, distance thresholds and/or the need to specify the number of clusters. The proposed algorithm belongs to the family of hierarchical agglomerative methods. The technique has a very low computational overhead, is easily scalable and applicable to large practical problems. Evaluation on well known datasets from different domains ranging between 1077 and 8.1 million samples shows substantial performance gains when compared to the existing clustering techniques.
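Since the grouping rule itself fits in a few lines, here is a minimal sketch of one round of first-neighbor grouping as described in the abstract (not the full hierarchical pipeline and not the authors' released code): link every sample to its first nearest neighbor and take connected components.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import NearestNeighbors

def first_neighbor_clusters(X):
    """One round of first-neighbor grouping: returns (n_clusters, label per sample)."""
    nbrs = NearestNeighbors(n_neighbors=2).fit(X)
    first = nbrs.kneighbors(X, return_distance=False)[:, 1]  # column 0 is the point itself
    n = X.shape[0]
    # Samples that share a first neighbor fall into the same connected component anyway,
    # so linking each sample to its own first neighbor suffices for the partition.
    adj = coo_matrix((np.ones(n), (np.arange(n), first)), shape=(n, n))
    return connected_components(adj, directed=False)

X = np.random.randn(500, 2)
n_clusters, labels = first_neighbor_clusters(X)
print(n_clusters, "groups found without any hyper-parameters")
```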

Representation Learning

[4] New CVPR 2019 paper on representation learning

Title: End-to-End Efficient Representation Learning via Cascading Combinatorial Optimization

Authors: Yeonwoo Jeong, Yoonsuing Kim, Hyun Oh Song

Paper: https://arxiv.org/abs/1902.10990

Abstract: We develop hierarchically quantized efficient embedding representations for similarity-based search and show that this representation provides not only the state of the art performance on the search accuracy but also provides several orders of speed up during inference. The idea is to hierarchically quantize the representation so that the quantization granularity is greatly increased while maintaining the accuracy and keeping the computational complexity low. We also show that the problem of finding the optimal sparse compound hash code respecting the hierarchical structure can be optimized in polynomial time via minimum cost flow in an equivalent flow network. This allows us to train the method end-to-end in a mini-batch stochastic gradient descent setting. Our experiments on Cifar100 and ImageNet datasets show the state of the art search accuracy while providing several orders of magnitude search speedup respectively over exhaustive linear search over the dataset.
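The end-to-end minimum-cost-flow training is the paper's actual contribution and is not reproduced here, but the following rough numpy sketch shows the generic coarse-to-fine (residual) quantization idea behind "hierarchically quantize the representation", and why it speeds up search: a query is compared only against database items that share its coarse code. The codebooks below are random stand-ins for learned ones.

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64))     # database embeddings (illustrative)
coarse = rng.normal(size=(16, 64))    # coarse codebook (would be learned)
fine = rng.normal(size=(256, 64))     # fine codebook for the residuals

def encode(x):
    c = np.argmin(np.linalg.norm(coarse - x, axis=1))              # coarse code
    f = np.argmin(np.linalg.norm(fine - (x - coarse[c]), axis=1))  # residual code
    return c, f

codes = np.array([encode(x) for x in emb])

query = rng.normal(size=64)
qc, _ = encode(query)
candidates = np.where(codes[:, 0] == qc)[0]   # inverted-list style candidate set
print(len(candidates), "candidates instead of", len(emb))
```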

Text-to-Image

[5] New CVPR 2019 text-to-image paper

Title: Object-driven Text-to-Image Synthesis via Adversarial Training

Authors: Wenbo Li, Pengchuan Zhang, Lei Zhang, Qiuyuan Huang, Xiaodong He, Siwei Lyu, Jianfeng Gao

Paper: https://arxiv.org/abs/1902.10740

Abstract: In this paper, we propose Object-driven Attentive Generative Adversarial Networks (Obj-GANs) that allow object-centered text-to-image synthesis for complex scenes. Following the two-step (layout-image) generation process, a novel object-driven attentive image generator is proposed to synthesize salient objects by paying attention to the most relevant words in the text description and the pre-generated semantic layout. In addition, a new Fast R-CNN based object-wise discriminator is proposed to provide rich object-wise discrimination signals on whether the synthesized object matches the text description and the pre-generated layout. The proposed Obj-GAN significantly outperforms the previous state of the art in various metrics on the large-scale COCO benchmark, increasing the Inception score by 27% and decreasing the FID score by 11%. A thorough comparison between the traditional grid attention and the new object-driven attention is provided through analyzing their mechanisms and visualizing their attention layers, showing insights of how the proposed model generates complex scenes in high quality.
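As a flavor of what "paying attention to the most relevant words" means in practice, here is a tiny generic PyTorch sketch of attention from object regions onto caption words (it is not the Obj-GAN generator; all tensor sizes are arbitrary assumptions).

```python
import torch
import torch.nn.functional as F

region_feat = torch.randn(4, 128)    # 4 object regions from a pre-generated layout
word_embs   = torch.randn(12, 128)   # 12 word embeddings from the text description

# region-to-word similarities, softmax over words, then a per-region text context vector
attn = F.softmax(region_feat @ word_embs.t() / 128 ** 0.5, dim=1)  # (4, 12)
context = attn @ word_embs                                         # (4, 128)
print(context.shape)
```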

Faces

[6] New CVPR 2019 paper on faces

Title: Joint Face Detection and Facial Motion Retargeting for Multiple Faces

Authors: Bindita Chaudhuri, Noranart Vesdapunt, Baoyuan Wang

Paper: https://arxiv.org/abs/1902.10744

Abstract: Facial motion retargeting is an important problem in both computer graphics and vision, which involves capturing the performance of a human face and transferring it to another 3D character. Learning 3D morphable model (3DMM) parameters from 2D face images using convolutional neural networks is common in 2D face alignment, 3D face reconstruction etc. However, existing methods either require an additional face detection step before retargeting or use a cascade of separate networks to perform detection followed by retargeting in a sequence. In this paper, we present a single end-to-end network to jointly predict the bounding box locations and 3DMM parameters for multiple faces. First, we design a novel multitask learning framework that learns a disentangled representation of 3DMM parameters for a single face. Then, we leverage the trained single face model to generate ground truth 3DMM parameters for multiple faces to train another network that performs joint face detection and motion retargeting for images with multiple faces. Experimental results show that our joint detection and retargeting network has high face detection accuracy and is robust to extreme expressions and poses while being faster than state-of-the-art methods.
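To illustrate the "jointly predict bounding boxes and 3DMM parameters" part, here is a minimal multitask-head sketch (an illustration only, not the paper's network; the feature and parameter dimensions are assumptions).

```python
import torch
import torch.nn as nn

class JointFaceHead(nn.Module):
    """Maps a shared per-face feature to a bounding box and a 3DMM parameter vector."""
    def __init__(self, feat_dim=512, n_3dmm_params=185):
        super().__init__()
        self.bbox = nn.Linear(feat_dim, 4)                # (x, y, w, h)
        self.params = nn.Linear(feat_dim, n_3dmm_params)  # identity / expression / pose

    def forward(self, shared_feat):
        return self.bbox(shared_feat), self.params(shared_feat)

head = JointFaceHead()
boxes, params = head(torch.randn(8, 512))  # 8 detected faces in a batch
print(boxes.shape, params.shape)
```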

CNN Training

[7] New CVPR 2019 paper on CNN training

Title: RePr: Improved Training of Convolutional Filters

Authors: Aaditya Prakash, James Storer, Dinei Florencio, Cha Zhang

Paper: https://arxiv.org/abs/1811.07275

Abstract: A well-trained Convolutional Neural Network can easily be pruned without significant loss of performance. This is because of unnecessary overlap in the features captured by the network's filters. Innovations in network architecture such as skip/dense connections and Inception units have mitigated this problem to some extent, but these improvements come with increased computation and memory requirements at run-time. We attempt to address this problem from another angle - not by changing the network structure but by altering the training method. We show that by temporarily pruning and then restoring a subset of the model's filters, and repeating this process cyclically, overlap in the learned features is reduced, producing improved generalization. We show that the existing model-pruning criteria are not optimal for selecting filters to prune in this context and introduce inter-filter orthogonality as the ranking criteria to determine under-expressive filters. Our method is applicable both to vanilla convolutional networks and more complex modern architectures, and improves the performance across a variety of tasks, especially when applied to smaller networks.
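The ranking signal named in the abstract, inter-filter orthogonality, is easy to sketch. Below is our reading of it (not the authors' code): flatten and normalize each filter in a layer, measure how much it overlaps with the other filters, and treat the most overlapping filters as candidates for the temporary pruning step.

```python
import torch

def inter_filter_overlap(conv_weight):
    """conv_weight: (out_ch, in_ch, k, k). Higher score = filter overlaps more with others."""
    w = conv_weight.flatten(1)                     # one row per filter
    w = w / (w.norm(dim=1, keepdim=True) + 1e-12)  # unit-normalize each filter
    gram = w @ w.t()                               # pairwise cosine similarities
    off_diag = gram - torch.eye(w.shape[0])
    return off_diag.abs().sum(dim=1)

w = torch.randn(64, 32, 3, 3)
scores = inter_filter_overlap(w)
drop = scores.topk(int(0.3 * len(scores))).indices  # e.g. temporarily prune the top 30%
print(drop[:5])
```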

Vision-Language Navigation

[8] New CVPR 2019 paper on vision-language navigation

Title: Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation

Authors: Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, Lei Zhang

Paper: https://arxiv.org/abs/1811.10092

Abstract: Vision-language navigation (VLN) is the task of navigating an embodied agent to carry out natural language instructions inside real 3D environments. In this paper, we study how to address three critical challenges for this task: the cross-modal grounding, the ill-posed feedback, and the generalization problems. First, we propose a novel Reinforced Cross-Modal Matching (RCM) approach that enforces cross-modal grounding both locally and globally via reinforcement learning (RL). Particularly, a matching critic is used to provide an intrinsic reward to encourage global matching between instructions and trajectories, and a reasoning navigator is employed to perform cross-modal grounding in the local visual scene. Evaluation on a VLN benchmark dataset shows that our RCM model significantly outperforms existing methods by 10% on SPL and achieves the new state-of-the-art performance. To improve the generalizability of the learned policy, we further introduce a Self-Supervised Imitation Learning (SIL) method to explore unseen environments by imitating its own past, good decisions. We demonstrate that SIL can approximate a better and more efficient policy, which tremendously minimizes the success rate performance gap between seen and unseen environments (from 30.7% to 11.7%).
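The reward structure described above can be summarized in one line: the extrinsic navigation reward is augmented by an intrinsic reward from the matching critic. A tiny hedged sketch follows, where `matching_critic` is a placeholder for the learned model and the weighting is an assumption.

```python
def total_reward(extrinsic, instruction, trajectory, matching_critic, weight=1.0):
    # intrinsic reward: how well the executed trajectory matches the instruction
    intrinsic = matching_critic(instruction, trajectory)
    return extrinsic + weight * intrinsic

# usage with a dummy critic standing in for the learned matching model
print(total_reward(1.0, "walk past the sofa into the kitchen", ["forward", "left"],
                   lambda instr, traj: 0.4))
```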

6D Object Pose Estimation

[9] New CVPR 2019 paper on 6D object pose estimation

Title: DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion

Authors: Chen Wang, Danfei Xu, Yuke Zhu, Roberto Martín-Martín, Cewu Lu, Li Fei-Fei, Silvio Savarese

Paper: https://arxiv.org/abs/1901.04780

Abstract: A key technical challenge in performing 6D object pose estimation from RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performances in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating 6D pose of a set of known objects from RGB-D images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embedding, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated pose.
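The "pixel-wise dense feature embedding" step reads naturally as code. Here is a minimal PyTorch sketch of that fusion (an illustration, not the released DenseFusion code; all dimensions are assumptions): per-pixel color and geometry embeddings are concatenated, a pooled global feature is appended to every pixel, and a small head regresses a pose plus a confidence per pixel.

```python
import torch
import torch.nn as nn

n_pix = 500
color_emb = torch.randn(n_pix, 64)   # per-pixel embedding from a CNN over the RGB crop
geo_emb   = torch.randn(n_pix, 64)   # per-point embedding from a network over the depth points

pixel_feat  = torch.cat([color_emb, geo_emb], dim=1)                          # (500, 128)
global_feat = pixel_feat.max(dim=0, keepdim=True).values                      # (1, 128) pooled
fused       = torch.cat([pixel_feat, global_feat.expand(n_pix, -1)], dim=1)   # (500, 256)

head = nn.Linear(256, 4 + 3 + 1)     # quaternion + translation + per-pixel confidence
pred = head(fused)
best = pred[pred[:, -1].argmax()]    # keep the pose from the most confident pixel
print(best[:7])
```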

Panoptic Segmentation

[10] New CVPR 2019 paper on panoptic segmentation

Title: Attention-guided Unified Network for Panoptic Segmentation

Authors: Yanwei Li, Xinze Chen, Zheng Zhu, Lingxi Xie, Guan Huang, Dalong Du, Xingang Wang

Paper: https://arxiv.org/abs/1812.03904

Abstract: This paper studies panoptic segmentation, a recently proposed task which segments foreground (FG) objects at the instance level as well as background (BG) contents at the semantic level. Existing methods mostly dealt with these two problems separately, but in this paper, we reveal the underlying relationship between them, in particular, FG objects provide complementary cues to assist BG understanding. Our approach, named the Attention-guided Unified Network (AUNet), is a unified framework with two branches for FG and BG segmentation simultaneously. Two sources of attentions are added to the BG branch, namely, RPN and FG segmentation mask to provide object-level and pixel-level attentions, respectively. Our approach is generalized to different backbones with consistent accuracy gain in both FG and BG segmentation, and also sets new state-of-the-arts in the MS-COCO (46.5% PQ) benchmarks.
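To picture what "two sources of attentions are added to the BG branch" amounts to, here is a tiny generic sketch (not the AUNet implementation): an attention map derived from the foreground branch re-weights the background branch's feature map.

```python
import torch

bg_feat = torch.randn(1, 256, 64, 64)        # background (stuff) branch feature map
fg_attn = torch.rand(1, 1, 64, 64)           # e.g. sigmoid of an FG mask / RPN logit

bg_feat_refined = bg_feat * (1.0 + fg_attn)  # residual-style attention re-weighting
print(bg_feat_refined.shape)
```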

---End---

This article is shared from the WeChat Official Account CVer (CVerNews). Author: Amusi


Originally published: 2019-03-05
