torch.nn.SyncBatchNorm

torch.nn.SyncBatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None)[source]

Applies Batch Normalization over an N-dimensional input (a mini-batch of (N−2)-dimensional inputs with an additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

The mean and standard-deviation are calculated per-dimension over all mini-batches of the same process groups. γ and β are learnable parameter vectors of size C (where C is the input size). By default, the elements of γ are sampled from 𝒰(0, 1) and the elements of β are set to 0.
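The per-channel transform y = (x − E[x]) / sqrt(Var[x] + ε) · γ + β can be sketched in plain Python (a toy single-channel illustration, not the PyTorch implementation):

```python
import math

def batch_norm_channel(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize one channel over the batch: y = (x - mean) / sqrt(var + eps) * gamma + beta."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)  # biased (population) variance
    return [(x - mean) / math.sqrt(var + eps) * gamma + beta for x in xs]

ys = batch_norm_channel([1.0, 2.0, 3.0, 4.0])
# with gamma=1, beta=0 the output has (approximately) zero mean and unit variance
```

With the default affine parameters (γ = 1, β = 0 here) the output is simply the standardized batch; learnable γ and β then rescale and shift it.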

Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.

If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.
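The effect of track_running_stats can be illustrated with a toy single-channel sketch (plain Python, not the PyTorch implementation; PyTorch also updates running_var with the unbiased variance, which this sketch omits):

```python
import math

class ToyBatchNorm:
    """Toy single-channel batch norm illustrating the track_running_stats switch."""
    def __init__(self, momentum=0.1, eps=1e-5, track_running_stats=True):
        self.momentum, self.eps, self.track = momentum, eps, track_running_stats
        self.running_mean, self.running_var = 0.0, 1.0
        self.training = True

    def __call__(self, xs):
        if self.training or not self.track:
            # batch statistics: always in training, and in eval when not tracking
            mean = sum(xs) / len(xs)
            var = sum((x - mean) ** 2 for x in xs) / len(xs)
            if self.training and self.track:
                # exponential-moving-average update of the running estimates
                self.running_mean += self.momentum * (mean - self.running_mean)
                self.running_var += self.momentum * (var - self.running_var)
        else:
            # eval with tracking: use the running estimates instead
            mean, var = self.running_mean, self.running_var
        return [(x - mean) / math.sqrt(var + self.eps) for x in xs]
```

With track_running_stats=False, eval mode recomputes the batch statistics for every input, exactly as during training.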

Note

This momentum argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is x̂_new = (1 − momentum) × x̂ + momentum × x_t, where x̂ is the estimated statistic and x_t is the new observed value.
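The update rule is a plain exponential moving average (rather than the velocity accumulation of optimizer momentum), which a few lines of Python make concrete:

```python
def update_running_stat(running, observed, momentum=0.1):
    """Running-statistics update: x_hat_new = (1 - momentum) * x_hat + momentum * x_t."""
    return (1.0 - momentum) * running + momentum * observed

stat = 0.0
for x in [1.0, 1.0, 1.0]:  # observe the same value three times
    stat = update_running_stat(stat, x)
# after n steps toward a constant x, stat = x * (1 - (1 - momentum)**n)
```

With the default momentum of 0.1, each new batch contributes 10% of the estimate, so the running statistics converge geometrically toward the observed values.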

Because Batch Normalization is done over the C dimension, computing statistics on (N, +) slices, it is common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization.

Currently SyncBatchNorm only supports DistributedDataParallel (DDP) with a single GPU per process. Use torch.nn.SyncBatchNorm.convert_sync_batchnorm() to convert BatchNorm layers to SyncBatchNorm before wrapping the network with DDP.

Parameters:

  • num_features – C from an expected input of size (N, C, +)
  • eps – a value added to the denominator for numerical stability. Default: 1e-5
  • momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
  • affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
  • track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True
  • process_group – synchronization of stats happens within each process group individually. Default behavior is synchronization across the whole world

Shape:

  • Input: (N, C, +)
  • Output: (N, C, +) (same shape as input)

Examples:

>>> # With Learnable Parameters
>>> m = nn.SyncBatchNorm(100)
>>> # creating process group (optional)
>>> # process_ids is a list of int identifying rank ids.
>>> process_group = torch.distributed.new_group(process_ids)
>>> # Without Learnable Parameters
>>> m = nn.SyncBatchNorm(100, affine=False, process_group=process_group)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)

>>> # network is nn.BatchNorm layer
>>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group)
>>> # only single gpu per process is currently supported
>>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel(
>>>                         sync_bn_network,
>>>                         device_ids=[args.local_rank],
>>>                         output_device=args.local_rank)

classmethod convert_sync_batchnorm(module, process_group=None)[source]

Helper function to convert all torch.nn.BatchNorm*D layers in the model to torch.nn.SyncBatchNorm layers.

Parameters:

  • module (nn.Module) – the containing module
  • process_group (optional) – process group to scope synchronization; default is the whole world.

Returns:

  • The original module with its torch.nn.BatchNorm*D layers converted to torch.nn.SyncBatchNorm layers.

Example:

>>> # Network with nn.BatchNorm layer
>>> module = torch.nn.Sequential(
>>>            torch.nn.Linear(20, 100),
>>>            torch.nn.BatchNorm1d(100)
>>>          ).cuda()
>>> # creating process group (optional)
>>> # process_ids is a list of int identifying rank ids.
>>> process_group = torch.distributed.new_group(process_ids)
>>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group)
