Paper: LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
Here the functions $g_l$ and $R_l$ define how the network updates the input $x_l$ of layer $l$. The function $g_l$ is usually the identity, while the residual branch $R_l$ is the core building block of the network, and much research has focused on...
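The update described above can be sketched as a single function. This is a minimal illustration, not any specific paper's implementation; the names `residual_update`, `g`, and `R` are placeholders, and the linear map used as the residual branch is a toy stand-in for a real conv or attention block:

```python
import numpy as np

def residual_update(x, R, g=lambda v: v):
    """One residual update: x_{l+1} = g_l(x_l) + R_l(x_l).

    g defaults to the identity, as the text notes; R is the
    residual branch that most architectural research targets.
    """
    return g(x) + R(x)

# Toy residual branch: a small linear map (hypothetical example).
W = np.eye(3) * 0.1
x = np.ones(3)
x_next = residual_update(x, R=lambda v: W @ v)
```

Because `g` is the identity, the input passes through unchanged and only the (typically small) contribution of `R` is added, which is what makes very deep stacks of such blocks trainable.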
Paper: How Much More Data Do I Need? Estimating Requirements for Downstream Tasks
Paper: CvT: Introducing Convolutions to Vision Transformers
Paper: DeepViT: Towards Deeper Vision Transformer
Paper: Should All Proposals be Treated Equally in Object Detection?
Paper: Not All Images are Worth 16x16 Words: Dynamic Transformers for Efficient Ima...
Paper: Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Paper: Incorporating Convolution Designs into Visual Transformers
Paper: Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction withou...
Paper: Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet
Paper: PeLK: Parameter-efficient Large Kernel ConvNets with Peripheral Convolution
Paper: FasterViT: Fast Vision Transformers with Hierarchical Attention
Paper: LORS: Low-rank Residual Structure for Parameter-Efficient Network Stacking
Paper: Conditional Positional Encodings for Vision Transformers
Paper: Training data-efficient image transformers & distillation through attention
Paper: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Paper: Dynamic Label Assignment for Object Detection by Combining Predicted and Anc...
Model speed is critical for mobile deployment. Methods for accelerating inference include model pruning, weight quantization, knowledge distillation, architecture design, and dynamic inference. Among these, dynamic inference adapts the network's structure to each input, reducing the overall...
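One common form of the dynamic inference mentioned above is an early-exit cascade: cheap stages handle easy inputs and expensive stages run only when needed. The sketch below is a generic illustration of that idea, not the mechanism of any specific paper listed here; `dynamic_infer`, the stage functions, and the confidence threshold are all hypothetical names:

```python
def dynamic_infer(x, stages, threshold=0.9):
    """Early-exit cascade over `stages`, ordered cheap to expensive.

    Each stage maps an input to (prediction, confidence). Inference
    stops at the first stage whose confidence clears the threshold,
    so easy inputs never pay for the later, costlier stages.
    """
    for predict in stages[:-1]:
        pred, conf = predict(x)
        if conf >= threshold:
            return pred
    # Fall through to the final (most expensive) stage unconditionally.
    pred, _ = stages[-1](x)
    return pred
```

For example, with a fast classifier as the first stage and a full model as the last, confidently classified inputs exit after the first stage, which is where the average-latency savings come from.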
Paper: CondenseNet V2: Sparse Feature Reactivation for Deep Networks