
torch.nn.SyncBatchNorm


torch.nn.SyncBatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None)[source]

Applies Batch Normalization over an N-dimensional input (a mini-batch of [N-2]D inputs with an additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

The mean and standard-deviation are calculated per-dimension over all mini-batches of the same process groups.
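The normalization itself is the standard batch-norm transform from the paper above, applied per channel:

y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \times \gamma + \beta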

\gamma and \beta are learnable parameter vectors of size C (where C is the input size). By default, the elements of \gamma are sampled from \mathcal{U}(0, 1) and the elements of \beta are set to 0.

Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.

If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.
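
As an illustration of this flag, here is a minimal single-process sketch using nn.BatchNorm1d, which shares the same track_running_stats semantics (SyncBatchNorm itself requires an initialized process group):

>>> import torch
>>> import torch.nn as nn
>>> # a batch with non-zero mean and non-unit std
>>> x = torch.randn(8, 4) * 3 + 5
>>> # track_running_stats=False: no running estimates are kept
>>> bn = nn.BatchNorm1d(4, track_running_stats=False)
>>> bn = bn.eval()
>>> y = bn(x)
>>> y.mean(dim=0)  # close to 0: the batch's own statistics are used even in eval mode
>>> print(bn.running_mean)  # None: no running buffers exist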

Note

This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is

\hat{x}_{new} = (1 - momentum) \times \hat{x} + momentum \times x_t

where \hat{x} is the estimated statistic and x_t is the new observed value.
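
A small numeric sketch of this update rule (plain tensor arithmetic mirroring the formula above, not the layer's internal code):

>>> import torch
>>> momentum = 0.1
>>> running_mean = torch.zeros(3)               # \hat{x}: current running estimate
>>> batch_mean = torch.tensor([1.0, 2.0, 3.0])  # x_t: statistic observed on the new batch
>>> running_mean = (1 - momentum) * running_mean + momentum * batch_mean
>>> running_mean
tensor([0.1000, 0.2000, 0.3000])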

Because the Batch Normalization is done over the C dimension, computing statistics on (N, +) slices, it’s common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization.

Currently SyncBatchNorm only supports DistributedDataParallel with single GPU per process. Use torch.nn.SyncBatchNorm.convert_sync_batchnorm() to convert BatchNorm layer to SyncBatchNorm before wrapping Network with DDP.

Parameters:

  • num_features – C from an expected input of size (N, C, +)
  • eps – a value added to the denominator for numerical stability. Default: 1e-5
  • momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
  • affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
  • track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True
  • process_group – synchronization of stats happen within each process group individually. Default behavior is synchronization across the whole world

Shape:

  • Input: (N, C, +)
  • Output: (N, C, +) (same shape as input)

Examples:

>>> # With Learnable Parameters
>>> m = nn.SyncBatchNorm(100)
>>> # creating process group (optional)
>>> # process_ids is a list of int identifying rank ids.
>>> process_group = torch.distributed.new_group(process_ids)
>>> # Without Learnable Parameters
>>> m = nn.SyncBatchNorm(100, affine=False, process_group=process_group)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)

>>> # network is nn.BatchNorm layer
>>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group)
>>> # only single gpu per process is currently supported
>>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel(
>>>                         sync_bn_network,
>>>                         device_ids=[args.local_rank],
>>>                         output_device=args.local_rank)

classmethod

convert_sync_batchnorm(module, process_group=None)[source]

Helper function to convert all torch.nn.BatchNorm*D layers in the model to torch.nn.SyncBatchNorm layers.

Parameters:

  • module (nn.Module) – containing module
  • process_group (optional) – process group to scope synchronization; default is the whole world.

Returns:

  • The original module with the converted torch.nn.SyncBatchNorm layers.

Example:

>>> # Network with nn.BatchNorm layer
>>> module = torch.nn.Sequential(
>>>            torch.nn.Linear(20, 100),
>>>            torch.nn.BatchNorm1d(100)
>>>          ).cuda()
>>> # creating process group (optional)
>>> # process_ids is a list of int identifying rank ids.
>>> process_group = torch.distributed.new_group(process_ids)
>>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group)