Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, Kevin Murphy
(Submitted on 25 Jun 2018 (v1), last revised 27 Jul 2018 (this version, v2))
We use large amounts of unlabeled video to learn models for visual tracking without manual human supervision. We leverage the natural temporal coherency of color to create a model that learns to colorize gray-scale videos by copying colors from a reference frame. Quantitative and qualitative experiments suggest that this task causes the model to automatically learn to track visual regions. Although the model is trained without any ground-truth labels, our method learns to track well enough to outperform the latest methods based on optical flow. Moreover, our results suggest that failures to track are correlated with failures to colorize, indicating that advancing video colorization may further improve self-supervised visual tracking.
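The abstract describes colorization by copying: the model embeds gray-scale pixels, matches each target pixel against the reference frame, and copies reference colors through a soft pointer. A minimal sketch of that copy mechanism, assuming per-pixel embeddings are already computed (the function name, shapes, and `temperature` parameter are illustrative, not from the paper's code):

```python
import numpy as np

def propagate_colors(ref_feat, tgt_feat, ref_colors, temperature=1.0):
    """Copy colors from a reference frame to a target frame.

    ref_feat:   (N, D) per-pixel embeddings of the gray-scale reference frame
    tgt_feat:   (M, D) per-pixel embeddings of the gray-scale target frame
    ref_colors: (N, C) colors (e.g. quantized color labels) of the reference
    Returns (M, C) predicted colors for the target frame.
    """
    # Similarity between every target pixel and every reference pixel.
    logits = tgt_feat @ ref_feat.T / temperature           # (M, N)
    # Softmax over reference pixels yields a soft pointer per target pixel.
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)
    # The predicted color is a weighted copy of the reference colors.
    return weights @ ref_colors                            # (M, C)
```

Because color can only be copied, not invented, training the embeddings to colorize well forces them to point at the corresponding region in the reference frame, and the same soft pointer can then be reused at test time to propagate segmentation masks or keypoints instead of colors.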
This article is shared from the WeChat public account CreateAMind (createamind).