class torch.nn.SyncBatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None)
Applies Batch Normalization over an N-dimensional input (a mini-batch of [N-2]D inputs with an additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta

The mean and standard-deviation are calculated per-dimension over all mini-batches of the same process groups. γ and β are learnable parameter vectors of size C (where C is the input size). By default, the elements of γ are sampled from U(0, 1) and the elements of β are set to 0.
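For illustration only (not part of the original documentation), the per-channel statistics in this formula can be reproduced with plain tensor operations on a single process; SyncBatchNorm additionally reduces the mean and variance across the process group before applying the same formula. The input shape below is arbitrary.

>>> import torch
>>> x = torch.randn(20, 100, 35)                            # (N, C, L) input
>>> eps = 1e-5
>>> mean = x.mean(dim=(0, 2), keepdim=True)                 # per-channel E[x]
>>> var = x.var(dim=(0, 2), unbiased=False, keepdim=True)   # per-channel Var[x]
>>> gamma = torch.rand(1, 100, 1)                           # γ, learnable, sampled from U(0, 1)
>>> beta = torch.zeros(1, 100, 1)                           # β, learnable, initialized to 0
>>> y = (x - mean) / torch.sqrt(var + eps) * gamma + beta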
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.

If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.
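A quick way to see this behaviour (an illustrative sketch, not from the original text): with track_running_stats=False the module allocates no running buffers, so batch statistics are used even after switching to eval mode.

>>> import torch.nn as nn
>>> bn = nn.SyncBatchNorm(100, track_running_stats=False)
>>> print(bn.running_mean, bn.running_var)   # None None – no running estimates are kept
>>> bn.eval()                                # normalization still uses per-batch statistics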
This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \hat{x}_{\text{new}} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t, where \hat{x} is the estimated statistic and x_t is the new observed value.
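For concreteness, a small numeric sketch of that update rule (illustrative only, not the library's internal code):

>>> momentum = 0.1
>>> running_mean = 0.0                  # the estimated statistic (x̂)
>>> for x_t in [2.0, 4.0, 6.0]:         # newly observed batch statistics
...     running_mean = (1 - momentum) * running_mean + momentum * x_t
>>> # running_mean is now an exponential moving average of the observed values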
Because the Batch Normalization is done over the C dimension, computing statistics on (N, +) slices, it’s common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization.
Currently SyncBatchNorm only supports DistributedDataParallel (DDP) with a single GPU per process. Use torch.nn.SyncBatchNorm.convert_sync_batchnorm() to convert BatchNorm*D layers to SyncBatchNorm before wrapping the network with DDP.
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True
>>> # With Learnable Parameters
>>> m = nn.SyncBatchNorm(100)
>>> # creating process group (optional)
>>> # process_ids is a list of int identifying rank ids.
>>> process_group = torch.distributed.new_group(process_ids)
>>> # Without Learnable Parameters
>>> m = nn.SyncBatchNorm(100, affine=False, process_group=process_group)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)

>>> # network is nn.BatchNorm layer
>>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group)
>>> # only single gpu per process is currently supported
>>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel(
>>>     sync_bn_network,
>>>     device_ids=[args.local_rank],
>>>     output_device=args.local_rank)
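The example above assumes that torch.distributed is already initialized and that names such as process_ids, network and args.local_rank are defined elsewhere. A minimal end-to-end sketch of the usual workflow might look as follows (hypothetical script, assuming it is launched with torchrun, which sets LOCAL_RANK and the rendezvous environment variables; the model is a placeholder):

>>> import os
>>> import torch
>>> import torch.distributed as dist
>>> import torch.nn as nn
>>> # one process per GPU
>>> dist.init_process_group(backend="nccl")
>>> local_rank = int(os.environ["LOCAL_RANK"])
>>> torch.cuda.set_device(local_rank)
>>> # placeholder network containing an ordinary BatchNorm layer
>>> model = nn.Sequential(
>>>     nn.Conv2d(3, 64, kernel_size=3, padding=1),
>>>     nn.BatchNorm2d(64),
>>>     nn.ReLU(),
>>> ).cuda(local_rank)
>>> # replace every BatchNorm*D layer with SyncBatchNorm, then wrap with DDP
>>> model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
>>> model = torch.nn.parallel.DistributedDataParallel(
>>>     model, device_ids=[local_rank], output_device=local_rank)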
classmethod convert_sync_batchnorm(module, process_group=None)
Helper function to convert all torch.nn.BatchNorm*D layers in the model to torch.nn.SyncBatchNorm layers.
process_group (optional) – process group to scope synchronization, default is the whole world.
>>> # Network with nn.BatchNorm layer
>>> module = torch.nn.Sequential(
>>>     torch.nn.Linear(20, 100),
>>>     torch.nn.BatchNorm1d(100),
>>> ).cuda()
>>> # creating process group (optional)
>>> # process_ids is a list of int identifying rank ids.
>>> process_group = torch.distributed.new_group(process_ids)
>>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group)
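As a quick sanity check (not part of the original example), the returned module can be inspected to confirm that the BatchNorm1d layer was replaced:

>>> isinstance(sync_bn_module[1], torch.nn.SyncBatchNorm)
True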