
Video Action Recognition -- Convolutional Two-Stream Network Fusion for Video Action Recognition


Convolutional Two-Stream Network Fusion for Video Action Recognition, CVPR 2016

http://www.robots.ox.ac.uk/~vgg/software/two_stream_action/ https://github.com/feichtenhofer/twostreamfusion

For video action recognition, the two-stream CNN processes spatial and temporal information in separate streams; the question studied here is how to fuse that spatiotemporal information more effectively inside the CNN. The findings are threefold: (i) rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolutional layer without loss of performance, but with a substantial saving in parameters;

(ii) it is better to fuse such networks spatially at the last convolutional layer than earlier, and additionally fusing at the class prediction layer can further boost accuracy;

(iii) pooling abstract convolutional features over spatiotemporal neighbourhoods further boosts performance.

Why have ConvNets not yet produced strong results on video action recognition? The authors suggest two reasons: 1) the training data may simply be too small; 2) temporal information is under-exploited: current ConvNet architectures are not able to take full advantage of temporal information, and their performance is consequently often dominated by spatial (appearance) recognition.

At the very least, the earlier two-stream architecture could not address the following: 1) recognizing what is moving where, i.e. registering appearance recognition (the spatial cue) with optical-flow recognition (the temporal cue); 2) modelling how these cues evolve over time.

3 Approach

The previous two-stream architecture does not fuse spatiotemporal information well, since it establishes no spatial correspondence between the two streams.

3.1. Spatial fusion

Several fusion operators are compared: Sum fusion, Max fusion, Concatenation fusion, Conv fusion, and Bilinear fusion; a sketch of the first four follows below.
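
To make the operators concrete, here is a minimal PyTorch sketch of the first four variants (bilinear fusion omitted for brevity). The shapes, the feature maps `x_a`/`x_b`, and the 1x1 convolution are illustrative assumptions, not the paper's trained networks:

```python
import torch
import torch.nn as nn

# Feature maps from the two streams, assumed to have the same
# shape (N, D, H, W) at the fusion point.
N, D, H, W = 2, 512, 14, 14
x_a = torch.randn(N, D, H, W)  # spatial (appearance) stream
x_b = torch.randn(N, D, H, W)  # temporal (optical-flow) stream

# Sum fusion: element-wise addition of the two maps.
y_sum = x_a + x_b                             # (N, D, H, W)

# Max fusion: element-wise maximum.
y_max = torch.max(x_a, x_b)                   # (N, D, H, W)

# Concatenation fusion: stack along the channel axis.
y_cat = torch.cat([x_a, x_b], dim=1)          # (N, 2D, H, W)

# Conv fusion: concatenate, then learn a 1x1 convolution that
# projects the 2D channels back to D, letting the network learn
# correspondences between channels of the two streams.
conv1x1 = nn.Conv2d(2 * D, D, kernel_size=1)  # learnable filter bank
y_conv = conv1x1(y_cat)                       # (N, D, H, W)
```

Conv fusion is the variant the proposed architecture builds on (in its 3D form, see 3.4 below).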

3.2. Where to fuse the networks

There are quite a few possible choices here as well; the toy sketch below illustrates fusing at different depths.
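
The sketch fuses two identical stub streams at different layers using sum fusion; the three-layer conv stacks are stand-ins I made up for illustration, not the VGG backbones used in the paper:

```python
import torch
import torch.nn as nn

def make_stream():
    # A toy conv stack standing in for one stream's feature
    # extractor; purely illustrative.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # "conv1"
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # "conv2"
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # "conv3"
    )

spatial, temporal = make_stream(), make_stream()

def fuse_at(idx, x_a, x_b):
    # Run both streams up to layer `idx`, sum-fuse, then continue
    # through the spatial stream only: fusing earlier truncates
    # more of the second stream's upper layers.
    fused = spatial[:idx](x_a) + temporal[:idx](x_b)
    return spatial[idx:](fused)

rgb, flow = torch.randn(1, 3, 32, 32), torch.randn(1, 3, 32, 32)
for idx in (2, 4, 6):  # fuse after conv1, conv2, or conv3
    print(f"fuse at layer {idx}: {fuse_at(idx, rgb, flow).shape}")
```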

3.3. Temporal fusion

3.4. Proposed architecture

We fuse the two networks at the last convolutional layer (after ReLU) into the spatial stream, converting it into a spatiotemporal stream by using 3D Conv fusion followed by 3D pooling (see Fig. 4, left). Moreover, we do not truncate the temporal stream and also perform 3D pooling in the temporal network (see Fig. 4, right). The losses of both streams are used for training, and during testing we average the predictions of the two streams.
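
Read as code, the core step stacks the concatenated feature maps of T neighbouring frames and applies 3D Conv fusion followed by 3D pooling. A minimal sketch under assumed shapes; T and the kernel/stride choices here are illustrative, not the exact hyper-parameters from the paper:

```python
import torch
import torch.nn as nn

# Concatenated spatial+temporal feature maps from T frames,
# stacked along a time axis: shape (N, 2D, T, H, W).
N, D, T, H, W = 2, 512, 5, 14, 14
stacked = torch.randn(N, 2 * D, T, H, W)

# 3D Conv fusion: a spatiotemporal filter bank that maps the 2D
# concatenated channels back to D while mixing neighbouring
# frames and pixels.
conv3d = nn.Conv3d(2 * D, D, kernel_size=3, padding=1)
fused = torch.relu(conv3d(stacked))          # (N, D, T, H, W)

# 3D pooling over spatiotemporal neighbourhoods (finding iii).
pool3d = nn.MaxPool3d(kernel_size=3, stride=2, padding=1)
pooled = pool3d(fused)                       # (N, D, 3, 7, 7)

print(pooled.shape)
```

The untruncated temporal stream keeps its own 3D pooling and loss; at test time the class predictions of the two streams are simply averaged, as stated above.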

Doesn't it feel like this is all getting a bit overcomplicated?
