Copyright notice: This is an original article by the author, licensed under the CC 4.0 BY-SA agreement. Please include a link to the original source and this notice when reposting.
本文链接：https://blog.csdn.net/weixin_36670529/article/details/101036702
Table of Contents
DataParallel layers (multi-GPU, distributed)
class torch.nn.RNN(*args, **kwargs)[source]
Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence.
For each element in the input sequence, each layer computes the following function:
h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh})
where h_t is the hidden state at time t, x_t is the input at time t, and h_{(t-1)} is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0. If nonlinearity is 'relu', then ReLU is used instead of tanh.
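For illustration only, the per-timestep update above can be sketched without PyTorch (toy dimensions and a hypothetical `rnn_step` helper; in practice nn.RNN performs this computation for you):

```python
import math

def rnn_step(x, h_prev, W_ih, b_ih, W_hh, b_hh):
    """One Elman step: h_t = tanh(W_ih @ x_t + b_ih + W_hh @ h_{t-1} + b_hh)."""
    hidden = len(h_prev)
    h_new = []
    for i in range(hidden):
        s = b_ih[i] + b_hh[i]
        s += sum(W_ih[i][j] * x[j] for j in range(len(x)))
        s += sum(W_hh[i][j] * h_prev[j] for j in range(hidden))
        h_new.append(math.tanh(s))
    return h_new

# Toy sizes: input_size=3, hidden_size=2 (weights chosen arbitrarily)
W_ih = [[0.1, 0.2, 0.3], [0.0, -0.1, 0.1]]
W_hh = [[0.5, 0.0], [0.0, 0.5]]
b_ih = [0.0, 0.0]
b_hh = [0.0, 0.0]

h = [0.0, 0.0]                                   # initial hidden state h_0
for x in [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]:     # a 2-step sequence
    h = rnn_step(x, h, W_ih, b_ih, W_hh, b_hh)
# h now holds the hidden state after the last timestep, each entry in (-1, 1)
```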
Parameters
Inputs: input, h_0
Outputs: output, h_n
Shape:
Variables
Note
All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\_size}}.
Note
If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, 5) input data is not in PackedSequence format, then the persistent algorithm can be selected to improve performance.
Examples:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
class torch.nn.LSTM(*args, **kwargs)[source]
Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
\begin{array}{ll} i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\ f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) \\ o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\ c_t = f_t * c_{(t-1)} + i_t * g_t \\ h_t = o_t * \tanh(c_t) \end{array}
where h_t is the hidden state at time t, c_t is the cell state at time t, x_t is the input at time t, h_{(t-1)} is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and i_t, f_t, g_t, o_t are the input, forget, cell, and output gates, respectively. \sigma is the sigmoid function, and * is the Hadamard product.
In a multilayer LSTM, the input x^{(l)}_t of the l-th layer (l >= 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied by dropout \delta^{(l-1)}_t, where each \delta^{(l-1)}_t is a Bernoulli random variable which is 0 with probability dropout.
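As a dependency-free sketch of the gate equations above (a hypothetical `lstm_step` helper over scalar input/hidden, with arbitrary toy weights; not the nn.LSTM implementation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step (scalar toy case) following the gate equations:
    i, f, g, o, then c' = f*c + i*g and h' = o*tanh(c')."""
    i = sigmoid(w['W_ii'] * x + w['W_hi'] * h_prev + w['b_i'])
    f = sigmoid(w['W_if'] * x + w['W_hf'] * h_prev + w['b_f'])
    g = math.tanh(w['W_ig'] * x + w['W_hg'] * h_prev + w['b_g'])
    o = sigmoid(w['W_io'] * x + w['W_ho'] * h_prev + w['b_o'])
    c_new = f * c_prev + i * g
    h_new = o * math.tanh(c_new)
    return h_new, c_new, (i, f, g, o)

w = dict(W_ii=0.5, W_hi=0.1, b_i=0.0, W_if=0.5, W_hf=0.1, b_f=0.0,
         W_ig=0.5, W_hg=0.1, b_g=0.0, W_io=0.5, W_ho=0.1, b_o=0.0)

h, c = 0.0, 0.0                     # initial hidden and cell state
for x in [1.0, -1.0, 0.5]:          # a 3-step toy sequence
    h, c, gates = lstm_step(x, h, c, w)
# The sigmoid gates always lie in (0, 1); h is bounded by tanh.
```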
Parameters
Inputs: input, (h_0, c_0)
Outputs: output, (h_n, c_n)
Variables
Note
All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\_size}}.
Note
If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, 5) input data is not in PackedSequence format, then the persistent algorithm can be selected to improve performance.
Examples:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
class torch.nn.GRU(*args, **kwargs)[source]
Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
\begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)} + b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \end{array}
where h_t is the hidden state at time t, x_t is the input at time t, h_{(t-1)} is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and r_t, z_t, n_t are the reset, update, and new gates, respectively. \sigma is the sigmoid function, and * is the Hadamard product.
In a multilayer GRU, the input x^{(l)}_t of the l-th layer (l >= 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied by dropout \delta^{(l-1)}_t, where each \delta^{(l-1)}_t is a Bernoulli random variable which is 0 with probability dropout.
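The GRU update above interpolates between the candidate n_t and the previous hidden state via the update gate z_t. A minimal scalar sketch (hypothetical `gru_step` helper with arbitrary toy weights, not the nn.GRU implementation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_step(x, h_prev, w):
    """One GRU step (scalar toy case): reset gate r, update gate z,
    candidate n, then h' = (1 - z)*n + z*h_prev."""
    r = sigmoid(w['W_ir'] * x + w['W_hr'] * h_prev + w['b_r'])
    z = sigmoid(w['W_iz'] * x + w['W_hz'] * h_prev + w['b_z'])
    n = math.tanh(w['W_in'] * x + w['b_in'] + r * (w['W_hn'] * h_prev + w['b_hn']))
    return (1.0 - z) * n + z * h_prev

w = dict(W_ir=0.5, W_hr=0.1, b_r=0.0, W_iz=0.5, W_hz=0.1, b_z=0.0,
         W_in=0.5, b_in=0.0, W_hn=0.1, b_hn=0.0)

h = 0.0
for x in [1.0, 0.0, -1.0]:
    h = gru_step(x, h, w)
# h stays in (-1, 1): it is a convex combination of a tanh value and h_prev.
```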
Parameters
Inputs: input, h_0
Outputs: output, h_n
Shape:
Variables
Note
All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\_size}}.
Note
If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, 5) input data is not in PackedSequence format, then the persistent algorithm can be selected to improve performance.
Examples:
>>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
class torch.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh')[source]
An Elman RNN cell with tanh or ReLU non-linearity.
h' = \tanh(W_{ih} x + b_{ih} + W_{hh} h + b_{hh})
If nonlinearity is 'relu', then ReLU is used in place of tanh.
Parameters
Inputs: input, hidden
Outputs: h’
Shape:
Variables
Note
All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\_size}}.
Examples:
>>> rnn = nn.RNNCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
class torch.nn.LSTMCell(input_size, hidden_size, bias=True)[source]
A long short-term memory (LSTM) cell.
\begin{array}{ll} i = \sigma(W_{ii} x + b_{ii} + W_{hi} h + b_{hi}) \\ f = \sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\ g = \tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\ o = \sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\ c' = f * c + i * g \\ h' = o * \tanh(c') \end{array}
where \sigma is the sigmoid function, and * is the Hadamard product.
Parameters
Inputs: input, (h_0, c_0)
Outputs: (h_1, c_1)
Variables
Note
All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\_size}}.
Examples:
>>> rnn = nn.LSTMCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx, cx = rnn(input[i], (hx, cx))
...     output.append(hx)
class torch.nn.GRUCell(input_size, hidden_size, bias=True)[source]
A gated recurrent unit (GRU) cell.
\begin{array}{ll} r = \sigma(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\ z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\ n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' = (1 - z) * n + z * h \end{array}
where \sigma is the sigmoid function, and * is the Hadamard product.
Parameters
Inputs: input, hidden
Outputs: h’
Shape:
Variables
Note
All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\_size}}.
Examples:
>>> rnn = nn.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
class torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, custom_encoder=None, custom_decoder=None)[source]
A transformer model. The user is able to modify the attributes as needed. The architecture is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010.
Parameters
Examples::
>>> transformer_model = nn.Transformer()
>>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
forward(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)[source]
Take in and process masked source/target sequences.
Parameters
Shape:
Note: [src/tgt/memory]_mask should be filled with float('-inf') for the masked positions and float(0.0) otherwise. These masks ensure that predictions for position i depend only on the unmasked positions j, and are applied identically for each sequence in a batch. [src/tgt/memory]_key_padding_mask should be a ByteTensor where True values are positions that should be masked with float('-inf') and False values will be unchanged. This mask ensures that no information will be taken from position i if it is masked, and has a separate mask for each sequence in a batch.
Note: Due to the multi-head attention architecture in the transformer model, the output sequence length of a transformer is the same as the input sequence (i.e. target) length of the decoder.
where S is the source sequence length, T is the target sequence length, N is the batch size, and E is the feature number.
Examples
>>> output = transformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)
generate_square_subsequent_mask(sz)[source]
Generate a square mask for the sequence. The masked positions are filled with float('-inf'). Unmasked positions are filled with float(0.0).
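The shape of this mask, and the effect of an additive -inf mask on a softmax over attention scores, can be sketched in plain Python (hypothetical helpers, not the PyTorch implementation):

```python
import math

NEG_INF = float('-inf')

def square_subsequent_mask(sz):
    """Future positions j > i get -inf; allowed positions j <= i get 0.0,
    mirroring what generate_square_subsequent_mask produces."""
    return [[0.0 if j <= i else NEG_INF for j in range(sz)] for i in range(sz)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]       # exp(-inf) == 0.0
    s = sum(exps)
    return [e / s for e in exps]

mask = square_subsequent_mask(4)
scores = [1.0, 2.0, 3.0, 4.0]                  # raw attention scores
masked = [s + m for s, m in zip(scores, mask[1])]  # row 1: only positions 0..1 visible
probs = softmax(masked)                        # masked positions get probability 0
```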
class torch.nn.TransformerEncoder(encoder_layer, num_layers, norm=None)[source]
TransformerEncoder is a stack of N encoder layers.
Parameters
Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model, nhead)
>>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers)
forward(src, mask=None, src_key_padding_mask=None)[source]
Pass the input through the encoder layers in turn.
Parameters
Shape:
see the docs in Transformer class.
class torch.nn.TransformerDecoder(decoder_layer, num_layers, norm=None)[source]
TransformerDecoder is a stack of N decoder layers.
Parameters
Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model, nhead)
>>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)[source]
Pass the inputs (and masks) through the decoder layers in turn.
Parameters
Shape:
see the docs in Transformer class.
class torch.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1)[source]
TransformerEncoderLayer is made up of self-attn and feedforward network. This standard encoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.
Parameters
Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model, nhead)
forward(src, src_mask=None, src_key_padding_mask=None)[source]
Pass the input through the encoder layer.
Parameters
Shape:
see the docs in Transformer class.
class torch.nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1)[source]
TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network. This standard decoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.
Parameters
Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model, nhead)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)[source]
Pass the inputs (and masks) through the decoder layer.
Parameters
Shape:
see the docs in Transformer class.
class torch.nn.Identity(*args, **kwargs)[source]
A placeholder identity operator that is argument-insensitive.
Parameters
Examples:
>>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 20])
class torch.nn.Linear(in_features, out_features, bias=True)[source]
Applies a linear transformation to the incoming data: y = xA^T + b
Parameters
Shape:
Variables
Parameter descriptions for the classes above (grouped by class):

RNN / LSTM / GRU parameters:
num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1
nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh' (RNN only)
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout – If non-zero, introduces a Dropout layer on the outputs of each layer except the last, with dropout probability equal to dropout. Default: 0
bidirectional – If True, becomes a bidirectional RNN/LSTM/GRU. Default: False

The input can also be a packed variable-length sequence; see torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case. The final hidden state can be separated with h_n.view(num_layers, num_directions, batch, hidden_size), and similarly for c_n in the LSTM case.

RNNCell / LSTMCell / GRUCell parameters:
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh' (RNNCell only)

BatchNorm / InstanceNorm / LayerNorm family parameters:
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – if True, this module has learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default: True for BatchNorm; for InstanceNorm the parameters are initialized the same way as done for batch normalization, Default: False; for LayerNorm, elementwise_affine gives learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases), Default: True
track_running_stats – if True, this module tracks the running mean and variance; when set to False, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True for BatchNorm, False for InstanceNorm

AdaptiveLogSoftmaxWithLoss parameters:
cutoffs – should be an ordered Sequence of integers sorted in increasing order. It controls the number of clusters and the partitioning of targets into clusters. For example, setting cutoffs = [10, 100, 1000] means that the first 10 targets will be assigned to the 'head' of the adaptive softmax, targets 11, 12, …, 100 will be assigned to the first cluster, targets 101, 102, …, 1000 will be assigned to the second cluster, and targets 1001, 1002, …, n_classes - 1 will be assigned to the last, third cluster.
div_value – is used to compute the size of each additional cluster, which is given as \left\lfloor\frac{in\_features}{div\_value^{idx}}\right\rfloor, where idx is the cluster index (with clusters for less frequent words having larger indices, and indices starting from 1).
head_bias – if set to True, adds a bias term to the 'head' of the adaptive softmax. See paper for details. Set to False in the official implementation. Default: False
Returns: output – a Tensor of size N containing computed target log probabilities for each example; loss – a Scalar.

Linear variables:
weight / bias – the learnable parameters of the module. If bias is False, the layer will not learn an additive bias (Default: True). If bias is True, the values are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = \frac{1}{\text{in\_features}}.
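The cutoffs partitioning described above (e.g. cutoffs = [10, 100, 1000]) can be sketched with a small, dependency-free helper (hypothetical `assign_cluster`, using 0-based target indices; not the AdaptiveLogSoftmaxWithLoss implementation):

```python
def assign_cluster(target, cutoffs):
    """Map a target index to its adaptive-softmax partition given sorted cutoffs.
    Partition 0 is the 'head' (targets below cutoffs[0]); partition k holds
    targets in [cutoffs[k-1], cutoffs[k]); indices past the last cutoff fall
    into the final cluster."""
    for k, c in enumerate(cutoffs):
        if target < c:
            return k
    return len(cutoffs)

cutoffs = [10, 100, 1000]
# Targets 0..9 -> head (0); 10..99 -> cluster 1; 100..999 -> cluster 2; rest -> 3.
```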
class torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True)[source]
Applies a bilinear transformation to the incoming data: y = x_1 A x_2 + b
Parameters
Shape:
Variables
The learnable weights are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = \frac{1}{\text{in1\_features}}; if bias is True, the bias values are initialized from the same distribution.
class torch.nn.Dropout(p=0.5, inplace=False)[source]
During training, randomly zeroes some of the elements of the input tensor with probability p
using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors .
Furthermore, the outputs are scaled by a factor of \frac{1}{1-p} during training. This means that during evaluation the module simply computes an identity function.
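This "inverted dropout" scaling keeps the expected value of each element unchanged, which is why evaluation can be a plain identity. A dependency-free sketch (hypothetical `dropout_train` helper, not the nn.Dropout implementation):

```python
import random

def dropout_train(xs, p, rng):
    """Inverted dropout: zero each element with probability p, scale the
    survivors by 1/(1-p) so the expected value is unchanged."""
    scale = 1.0 / (1.0 - p)
    return [0.0 if rng.random() < p else x * scale for x in xs]

rng = random.Random(0)                 # fixed seed for reproducibility
xs = [1.0] * 10000
out = dropout_train(xs, 0.2, rng)
mean = sum(out) / len(out)             # close to 1.0, the original mean
```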
Parameters
Shape:
Examples:
>>> m = nn.Dropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
Dropout2d
class torch.nn.Dropout2d(p=0.5, inplace=False)[source]
Randomly zero out entire channels (a channel is a 2D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 2D tensor \text{input}[i, j]). Each channel will be zeroed out independently on every forward call with probability p
using samples from a Bernoulli distribution.
Usually the input comes from nn.Conv2d
modules.
As described in the paper Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case, nn.Dropout2d()
will help promote independence between feature maps and should be used instead.
Parameters
Shape:
Examples:
>>> m = nn.Dropout2d(p=0.2)
>>> input = torch.randn(20, 16, 32, 32)
>>> output = m(input)
Dropout3d
class torch.nn.Dropout3d(p=0.5, inplace=False)[source]
Randomly zero out entire channels (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor \text{input}[i, j]). Each channel will be zeroed out independently on every forward call with probability p
using samples from a Bernoulli distribution.
Usually the input comes from nn.Conv3d
modules.
As described in the paper Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case, nn.Dropout3d()
will help promote independence between feature maps and should be used instead.
Parameters
Shape:
Examples:
>>> m = nn.Dropout3d(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input)
AlphaDropout
class torch.nn.AlphaDropout(p=0.5, inplace=False)[source]
Applies Alpha Dropout over the input.
Alpha Dropout is a type of Dropout that maintains the self-normalizing property. For an input with zero mean and unit standard deviation, the output of Alpha Dropout maintains the original mean and standard deviation of the input. Alpha Dropout goes hand-in-hand with SELU activation function, which ensures that the outputs have zero mean and unit standard deviation.
During training, it randomly masks some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit standard deviation.
During evaluation the module simply computes an identity function.
More details can be found in the paper Self-Normalizing Neural Networks .
Parameters
Shape:
Examples:
>>> m = nn.AlphaDropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
Sparse layers
Embedding
class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None)[source]
A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
Parameters
Variables
~Embedding.weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim), initialized from \mathcal{N}(0, 1)
Shape:
Note
Keep in mind that only a limited number of optimizers support sparse gradients: currently it's optim.SGD (CUDA and CPU), optim.SparseAdam (CUDA and CPU) and optim.Adagrad (CPU).
Note
With padding_idx set, the embedding vector at padding_idx is initialized to all zeros. However, note that this vector can be modified afterwards, e.g., using a customized initialization method, thus changing the vector used to pad the output. The gradient for this vector from Embedding is always zero.
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding = nn.Embedding(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
>>> embedding(input)
tensor([[[-0.0251, -1.6902,  0.7172],
         [-0.6431,  0.0748,  0.6969],
         [ 1.4970,  1.3448, -0.9685],
         [-0.3677, -2.7265, -0.1685]],
        [[ 1.4970,  1.3448, -0.9685],
         [ 0.4362, -0.4004,  0.9400],
         [-0.6431,  0.0748,  0.6969],
         [ 0.9124, -2.3616,  1.1151]]])
>>> # example with padding_idx
>>> embedding = nn.Embedding(10, 3, padding_idx=0)
>>> input = torch.LongTensor([[0,2,0,5]])
>>> embedding(input)
tensor([[[ 0.0000,  0.0000,  0.0000],
         [ 0.1535, -2.0309,  0.9315],
         [ 0.0000,  0.0000,  0.0000],
         [-0.1655,  0.9897,  0.0635]]])
classmethod from_pretrained(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)[source]
Creates Embedding instance from given 2-dimensional FloatTensor.
Parameters
Examples:
>>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embedding = nn.Embedding.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([1])
>>> embedding(input)
tensor([[ 4.0000,  5.1000,  6.3000]])
EmbeddingBag
class torch.nn.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, _weight=None)[source]
Computes sums or means of ‘bags’ of embeddings, without instantiating the intermediate embeddings.
For bags of constant length and no per_sample_weights, this class
with mode="sum" is equivalent to Embedding followed by torch.sum(dim=0),
with mode="mean" is equivalent to Embedding followed by torch.mean(dim=0),
with mode="max" is equivalent to Embedding followed by torch.max(dim=0).
However, EmbeddingBag is much more time and memory efficient than using a chain of these operations.
EmbeddingBag also supports per-sample weights as an argument to the forward pass. This scales the output of the Embedding before performing a weighted reduction as specified by mode. If per_sample_weights is passed, the only supported mode is "sum", which computes a weighted sum according to per_sample_weights.
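The bag reductions can be sketched on plain lists (toy `embedding_bag` helper over a small weight table; not the nn.EmbeddingBag implementation, which avoids materializing the gathered rows):

```python
def embedding_bag(weight, input_ids, offsets, mode='mean'):
    """Gather rows of `weight` for each bag (delimited by `offsets`) and
    reduce them with sum/mean/max, mirroring the equivalences above."""
    bags = []
    bounds = list(offsets) + [len(input_ids)]
    for start, end in zip(bounds, bounds[1:]):
        rows = [weight[i] for i in input_ids[start:end]]
        if mode == 'sum':
            bags.append([sum(col) for col in zip(*rows)])
        elif mode == 'mean':
            bags.append([sum(col) / len(rows) for col in zip(*rows)])
        elif mode == 'max':
            bags.append([max(col) for col in zip(*rows)])
    return bags

weight = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]      # 3 embeddings of size 2
bags = embedding_bag(weight, [0, 1, 2, 2], [0, 2], mode='sum')
# bag 0 sums rows 0 and 1; bag 1 sums rows 2 and 2
```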
Parameters
Variables
~EmbeddingBag.weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim), initialized from \mathcal{N}(0, 1).
Inputs: input (LongTensor), offsets (LongTensor, optional), and per_sample_weights (Tensor, optional)
per_sample_weights (Tensor, optional): a tensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, per_sample_weights must have exactly the same shape as input and is treated as having the same offsets, if those are not None. Only supported for mode='sum'.
Output shape: (B, embedding_dim)
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([1,2,4,5,4,3,2,9])
>>> offsets = torch.LongTensor([0,4])
>>> embedding_sum(input, offsets)
tensor([[-0.8861, -5.4350, -0.0523],
        [ 1.1306, -2.5798, -1.0044]])
classmethod from_pretrained(embeddings, freeze=True, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False)[source]
Creates EmbeddingBag instance from given 2-dimensional FloatTensor.
Parameters
Examples:
>>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embeddingbag = nn.EmbeddingBag.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([[1, 0]])
>>> embeddingbag(input)
tensor([[ 2.5000,  3.7000,  4.6500]])
Distance functions
CosineSimilarity
class torch.nn.CosineSimilarity(dim=1, eps=1e-08)[source]
Returns cosine similarity between x_1 and x_2, computed along dim.
\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}
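The formula is straightforward to write out for a single pair of vectors (a minimal `cosine_similarity` sketch over plain lists; not the batched, dim-aware PyTorch module):

```python
import math

def cosine_similarity(x1, x2, eps=1e-8):
    """similarity = (x1 . x2) / max(||x1||_2 * ||x2||_2, eps).
    The eps floor guards against division by zero for near-zero vectors."""
    dot = sum(a * b for a, b in zip(x1, x2))
    n1 = math.sqrt(sum(a * a for a in x1))
    n2 = math.sqrt(sum(b * b for b in x2))
    return dot / max(n1 * n2, eps)

# Parallel vectors -> 1, orthogonal -> 0, opposite -> -1.
```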
Parameters
Shape:
Examples::
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)
>>> output = cos(input1, input2)
PairwiseDistance
class torch.nn.PairwiseDistance(p=2.0, eps=1e-06, keepdim=False)[source]
Computes the batchwise pairwise distance between vectors v_1, v_2 using the p-norm:
\Vert x \Vert _p = \left( \sum_{i=1}^n \vert x_i \vert ^ p \right) ^ {1/p}
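For a single pair of vectors, the p-norm distance is the p-norm of the elementwise difference. A minimal sketch (hypothetical `p_norm_distance` helper; the eps perturbation that PairwiseDistance applies is omitted here):

```python
def p_norm_distance(x1, x2, p=2.0):
    """p-norm of the elementwise difference: (sum_i |x1_i - x2_i|^p)^(1/p)."""
    return sum(abs(a - b) ** p for a, b in zip(x1, x2)) ** (1.0 / p)

# p=2 gives the Euclidean distance; p=1 gives the Manhattan distance.
```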
Parameters
Shape:
Examples::
>>> pdist = nn.PairwiseDistance(p=2)
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = pdist(input1, input2)
Loss functions
L1Loss
class torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y.
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left| x_n - y_n \right|,
where N is the batch size. If reduction is not 'none' (default 'mean'), then:
\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
x and y are tensors of arbitrary shapes with a total of n elements each. The sum operation still operates over all the elements, and divides by n. The division by n can be avoided if one sets reduction = 'sum'.
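The three reduction modes can be sketched over plain lists (a toy `l1_loss` helper; not the PyTorch criterion):

```python
def l1_loss(x, y, reduction='mean'):
    """Elementwise |x_n - y_n|, then reduce according to `reduction`."""
    losses = [abs(a - b) for a, b in zip(x, y)]
    if reduction == 'none':
        return losses            # the unreduced per-element losses
    if reduction == 'sum':
        return sum(losses)
    return sum(losses) / len(losses)   # 'mean' (the default)
```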
Parameters
Shape:
Examples:
>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
MSELoss
class torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and target y.
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left( x_n - y_n \right)^2,
where N is the batch size. If reduction is not 'none' (default 'mean'), then:
\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
x and y are tensors of arbitrary shapes with a total of n elements each.
The sum operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets reduction = 'sum'.
Parameters
Shape:
Examples:
>>> loss = nn.MSELoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
CrossEntropyLoss
class torch.nn.CrossEntropyLoss
(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')[source]
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
It is useful when training a classification problem with C classes. If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input is expected to contain raw, unnormalized scores for each class.
input has to be a Tensor of size either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \geq 1 for the K-dimensional case (described later).
This criterion expects a class index in the range [0, C-1] as the target for each value of a 1D tensor of size minibatch; if ignore_index is specified, this criterion also accepts this class index (this index may not necessarily be in the class range).
The loss can be described as:
\text{loss}(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) = -x[class] + \log\left(\sum_j \exp(x[j])\right)
or in the case of the weight argument being specified:
\text{loss}(x, class) = weight[class] \left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right)
The losses are averaged across observations for each minibatch.
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size (minibatch, C, d_1, d_2, ..., d_K) with K \geq 1, where K is the number of dimensions, and a target of appropriate shape (see below).
Parameters
Shape:
Examples:
>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()
CTCLoss
class torch.nn.CTCLoss
(blank=0, reduction='mean', zero_infinity=False)[source]
The Connectionist Temporal Classification loss.
Calculates loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be "many-to-one", which limits the length of the target sequence such that it must be \leq the input length.
Parameters
Shape:
Example:
>>> T = 50      # Input sequence length
>>> C = 20      # Number of classes (including blank)
>>> N = 16      # Batch size
>>> S = 30      # Target sequence length of longest target in batch
>>> S_min = 10  # Minimum target length, for demonstration purposes
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
>>>
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
Reference:
A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf
Note
In order to use CuDNN, the following must be satisfied: targets must be in concatenated format, all input_lengths must be T, blank=0, target_lengths \leq 256, and the integer arguments must be of dtype torch.int32.
The regular implementation uses the (more common in PyTorch) torch.long dtype.
Note
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Reproducibility for background.
NLLLoss
class torch.nn.NLLLoss
(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')[source]
The negative log likelihood loss. It is useful to train a classification problem with C classes.
If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \geq 1 for the K-dimensional case (described later).
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects should be a class index in the range [0, C-1] where C = number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range).
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} x_{n,y_n}, \quad w_{c} = \text{weight}[c] \cdot \mathbb{1}\{c \not= \text{ignore\_index}\},
where N is the batch size. If reduction is not 'none' (default 'mean'), then
\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text{if reduction} = \text{'mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size (minibatch, C, d_1, d_2, ..., d_K) with K \geq 1, where K is the number of dimensions, and a target of appropriate shape (see below). In the case of images, it computes NLL loss per-pixel.
Parameters
Shape:
Examples:
>>> m = nn.LogSoftmax(dim=1)
>>> loss = nn.NLLLoss()
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = loss(m(input), target)
>>> output.backward()
>>>
>>> # 2D loss example (used, for example, with image inputs)
>>> N, C = 5, 4
>>> loss = nn.NLLLoss()
>>> # input is of size N x C x height x width
>>> data = torch.randn(N, 16, 10, 10)
>>> conv = nn.Conv2d(16, C, (3, 3))
>>> m = nn.LogSoftmax(dim=1)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
>>> output = loss(m(conv(data)), target)
>>> output.backward()
PoissonNLLLoss
class torch.nn.PoissonNLLLoss
(log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')[source]
Negative log likelihood loss with Poisson distribution of target.
The loss can be described as:
\text{target} \sim \mathrm{Poisson}(\text{input})
\text{loss}(\text{input}, \text{target}) = \text{input} - \text{target} * \log(\text{input}) + \log(\text{target!})
The last term can be omitted or approximated with Stirling's formula. The approximation is used for target values greater than 1. For targets less than or equal to 1, zeros are added to the loss.
Parameters
Examples:
>>> loss = nn.PoissonNLLLoss()
>>> log_input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> output = loss(log_input, target)
>>> output.backward()
Shape:
KLDivLoss
class torch.nn.KLDivLoss
(size_average=None, reduce=None, reduction='mean')[source]
The Kullback-Leibler divergence Loss
KL divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.
As with NLLLoss, the input given is expected to contain log-probabilities and is not restricted to a 2D Tensor. The targets are given as probabilities (i.e. without taking the logarithm).
This criterion expects a target Tensor of the same size as the input Tensor.
The unreduced (i.e. with reduction set to 'none') loss can be described as:
l(x,y) = L = \{ l_1,\dots,l_N \}, \quad l_n = y_n \cdot \left( \log y_n - x_n \right)
where the index N spans all dimensions of input and L has the same shape as input. If reduction is not 'none' (default 'mean'), then:
\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
In the default reduction mode 'mean', the losses are averaged for each minibatch over observations as well as over dimensions. 'batchmean' mode gives the correct KL divergence, where losses are averaged over the batch dimension only. 'mean' mode's behavior will be changed to match 'batchmean' in the next major release.
Parameters
Note
size_average and reduce are in the process of being deprecated; in the meantime, specifying either of those two args will override reduction.
Note
reduction = 'mean' doesn't return the true KL divergence value; please use reduction = 'batchmean', which aligns with the mathematical definition of KL divergence. In the next major release, 'mean' will be changed to behave the same as 'batchmean'.
Shape:
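As a sanity check on the pointwise formula l_n = y_n (log y_n − x_n) above, here is a minimal plain-Python sketch (no torch required); the helper name kl_div_pointwise is just for illustration, and the zero-target convention follows the usual 0 · log 0 = 0 limit:

```python
import math

def kl_div_pointwise(log_pred, target):
    # l_n = y_n * (log y_n - x_n), where x is a log-probability;
    # contribution is taken as 0 where the target probability is 0
    return [y * (math.log(y) - x) if y > 0 else 0.0
            for x, y in zip(log_pred, target)]

# one sample: predicted log-probabilities vs. a target distribution
target = [0.5, 0.5]
log_pred = [math.log(0.25), math.log(0.75)]
pointwise = kl_div_pointwise(log_pred, target)
kl = sum(pointwise)  # with batch size 1, 'batchmean' equals 'sum'
```

For this sample the value is 0.5·log(4/3) ≈ 0.1438, which is what nn.KLDivLoss(reduction='batchmean') would report for a single-sample batch.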
BCELoss
class torch.nn.BCELoss
(weight=None, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the Binary Cross Entropy between the target and the output:
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right],
where N is the batch size. If reduction is not 'none' (default 'mean'), then
\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets y should be numbers between 0 and 1.
Parameters
Shape:
Examples:
>>> m = nn.Sigmoid()
>>> loss = nn.BCELoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(m(input), target)
>>> output.backward()
BCEWithLogitsLoss
class torch.nn.BCEWithLogitsLoss
(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)[source]
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right],
where N is the batch size. If reduction is not 'none' (default 'mean'), then
\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets t[i] should be numbers between 0 and 1.
It’s possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:
\ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right],
where c is the class number (c > 1 for multi-label binary classification, c = 1 for single-label binary classification), n is the number of the sample in the batch, and p_c is the weight of the positive answer for the class c.
p_c > 1 increases the recall, p_c < 1 increases the precision.
For example, if a dataset contains 100 positive and 300 negative examples of a single class, then pos_weight for the class should be equal to \frac{300}{100} = 3. The loss would act as if the dataset contains 3 \times 100 = 300 positive examples.
Examples:
>>> target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
>>> output = torch.full([10, 64], 0.999)  # A prediction (logit)
>>> pos_weight = torch.ones([64])  # All weights are equal to 1
>>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
>>> criterion(output, target)  # -log(sigmoid(0.999))
tensor(0.3135)
Parameters
Shape:
If reduction is 'none', then (N, *), same shape as input.
Examples:
>>> loss = nn.BCEWithLogitsLoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(input, target)
>>> output.backward()
MarginRankingLoss
class torch.nn.MarginRankingLoss
(margin=0.0, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the loss given inputs x1, x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1).
If y = 1 then it is assumed that the first input should be ranked higher (have a larger value) than the second input, and vice-versa for y = -1.
The loss function for each sample in the mini-batch is:
\text{loss}(x, y) = \max(0, -y * (x1 - x2) + \text{margin})
Parameters
Shape:
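The per-sample formula above can be sketched in plain Python (no torch required); the helper name margin_ranking_loss and the sample values are hypothetical, chosen only to show the hinge behavior:

```python
def margin_ranking_loss(x1, x2, y, margin=0.0):
    # per-sample: max(0, -y * (x1 - x2) + margin), then 'mean' reduction
    losses = [max(0.0, -yi * (a - b) + margin)
              for a, b, yi in zip(x1, x2, y)]
    return sum(losses) / len(losses)

# y = 1 says x1 should rank higher than x2; the second pair violates that
loss = margin_ranking_loss([1.0, 0.2], [0.5, 0.8], [1, 1], margin=0.1)
```

Here the correctly ranked pair contributes 0 (its margin is already satisfied), while the mis-ranked pair contributes 0.7, so the mean loss is 0.35.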
HingeEmbeddingLoss
class torch.nn.HingeEmbeddingLoss
(margin=1.0, size_average=None, reduce=None, reduction='mean')[source]
Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as x, and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for the n-th sample in the mini-batch is
l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}
and the total loss function is
\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
where L = \{l_1,\dots,l_N\}^\top.
Parameters
Shape:
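A minimal plain-Python sketch of the per-sample cases above (no torch required); the function name hinge_embedding_loss is illustrative only, and x plays the role of a precomputed distance:

```python
def hinge_embedding_loss(x, y, margin=1.0):
    # l_n = x_n if y_n == 1, else max(0, margin - x_n); 'mean' reduction
    losses = [xi if yi == 1 else max(0.0, margin - xi)
              for xi, yi in zip(x, y)]
    return sum(losses) / len(losses)

# x is typically a distance, e.g. an L1 pairwise distance between embeddings
loss = hinge_embedding_loss([0.3, 0.4], [1, -1], margin=1.0)
```

The similar pair (y = 1) is penalized by its distance 0.3; the dissimilar pair (y = -1) is penalized by how far it falls inside the margin, max(0, 1.0 − 0.4) = 0.6, giving a mean of 0.45.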
MultiLabelMarginLoss
class torch.nn.MultiLabelMarginLoss
(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 2D Tensor of target class indices). For each sample in the mini-batch:
\text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}
where x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}, y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}, 0 \leq y[j] \leq \text{x.size}(0)-1, and i \neq y[j] for all i and j.
y and x must have the same size.
The criterion only considers a contiguous block of non-negative targets that starts at the front.
This allows for different samples to have variable amounts of target classes.
Parameters
Shape:
Examples:
>>> loss = nn.MultiLabelMarginLoss()
>>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])
>>> # for target y, only consider labels 3 and 0, not after label -1
>>> y = torch.LongTensor([[3, 0, -1, 1]])
>>> loss(x, y)
>>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))
tensor(0.8500)
SmoothL1Loss
class torch.nn.SmoothL1Loss
(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise. It is less sensitive to outliers than the MSELoss and in some cases prevents exploding gradients (e.g. see Fast R-CNN paper by Ross Girshick). Also known as the Huber loss:
\text{loss}(x, y) = \frac{1}{n} \sum_{i} z_{i}
where z_i is given by:
z_{i} = \begin{cases} 0.5 (x_i - y_i)^2, & \text{if } |x_i - y_i| < 1 \\ |x_i - y_i| - 0.5, & \text{otherwise } \end{cases}
x and y are tensors of arbitrary shapes with a total of n elements each; the sum operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets reduction = 'sum'.
Parameters
Shape:
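The piecewise definition of z_i above can be sketched in plain Python (no torch required); the function name smooth_l1_loss and the sample values are illustrative:

```python
def smooth_l1_loss(x, y):
    # z_i = 0.5 * d^2 if |d| < 1, else |d| - 0.5, with d = x_i - y_i
    zs = []
    for xi, yi in zip(x, y):
        d = abs(xi - yi)
        zs.append(0.5 * d * d if d < 1 else d - 0.5)
    return sum(zs) / len(zs)  # 'mean' reduction

# one small error (quadratic branch) and one large error (linear branch)
loss = smooth_l1_loss([0.0, 3.0], [0.5, 0.0])
```

The small error 0.5 contributes 0.5·0.5² = 0.125, the large error 3.0 contributes 3.0 − 0.5 = 2.5, so the mean is 1.3125; note the linear branch grows far more slowly than the 4.5 that MSE would assign to the outlier.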
SoftMarginLoss
class torch.nn.SoftMarginLoss
(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1).
\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}
Parameters
Shape:
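A plain-Python sketch of the formula above (no torch required); the helper name soft_margin_loss is illustrative only:

```python
import math

def soft_margin_loss(x, y):
    # sum_i log(1 + exp(-y[i] * x[i])) / x.nelement()
    total = sum(math.log(1.0 + math.exp(-yi * xi))
                for xi, yi in zip(x, y))
    return total / len(x)

# a confidently correct prediction and a wrong one (both targets are +1)
loss = soft_margin_loss([2.0, -1.0], [1, 1])
```

The correctly signed, large-magnitude input contributes little (log(1 + e^-2) ≈ 0.13), while the wrongly signed input contributes log(1 + e^1) ≈ 1.31, so confident correct predictions are barely penalized.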
MultiLabelSoftMarginLoss
class torch.nn.MultiLabelSoftMarginLoss
(weight=None, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C). For each sample in the minibatch:
loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log((1 + \exp(-x[i]))^{-1}) + (1-y[i]) * \log\left(\frac{\exp(-x[i])}{(1 + \exp(-x[i]))}\right)
where i \in \left\{0, \; \cdots , \; \text{x.nElement}() - 1\right\}, y[i] \in \left\{0, \; 1\right\}.
Parameters
Shape:
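Since (1 + e^{-x})^{-1} is just the sigmoid, the per-sample formula above is per-class binary cross entropy averaged over classes. A plain-Python sketch (no torch required; helper names are illustrative):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def multilabel_soft_margin_loss(x, y):
    # -(1/C) * sum_i [ y_i*log(sigmoid(x_i)) + (1-y_i)*log(1-sigmoid(x_i)) ]
    C = len(x)
    s = sum(yi * math.log(sigmoid(xi))
            + (1 - yi) * math.log(1.0 - sigmoid(xi))
            for xi, yi in zip(x, y))
    return -s / C

# one positive label predicted positive, one negative label predicted negative
loss = multilabel_soft_margin_loss([1.0, -2.0], [1, 0])
```

Both class predictions agree in sign with their targets, so each per-class term is small (log(1 + e^-1) and log(1 + e^-2)), and the averaged loss is about 0.22.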
CosineEmbeddingLoss
class torch.nn.CosineEmbeddingLoss
(margin=0.0, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the loss given input tensors x_1, x_2 and a Tensor label y with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for each sample is:
\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases}
Parameters
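A plain-Python sketch of the two cases above for a single pair of vectors (no torch required; helper names are illustrative):

```python
import math

def cosine(a, b):
    # cosine similarity of two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cosine_embedding_loss(x1, x2, y, margin=0.0):
    c = cosine(x1, x2)
    # y = 1: pull together (1 - cos); y = -1: push apart (max(0, cos - margin))
    return 1.0 - c if y == 1 else max(0.0, c - margin)

identical = [1.0, 0.0]
loss_similar = cosine_embedding_loss(identical, identical, 1)
loss_dissimilar = cosine_embedding_loss(identical, identical, -1, margin=0.5)
```

Identical vectors labelled similar (y = 1) incur zero loss; the same vectors labelled dissimilar (y = -1) incur cos − margin = 1 − 0.5 = 0.5.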
MultiMarginLoss
class torch.nn.MultiMarginLoss
(p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 1D tensor of target class indices, 0 \leq y \leq \text{x.size}(1)-1):
For each mini-batch sample, the loss in terms of the 1D input x and scalar output y is:
\text{loss}(x, y) = \frac{\sum_i \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)}
where x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\} and i \neq y.
Optionally, you can give non-equal weighting on the classes by passing a 1D weight tensor into the constructor.
The loss function then becomes:
\text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] + x[i]))^p}{\text{x.size}(0)}
Parameters
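The unweighted per-sample formula above can be sketched in plain Python (no torch required; the helper name multi_margin_loss and the score values are illustrative):

```python
def multi_margin_loss(x, y, p=1, margin=1.0):
    # sum over i != y of max(0, margin - x[y] + x[i])^p,
    # divided by the number of classes x.size(0)
    total = sum(max(0.0, margin - x[y] + x[i]) ** p
                for i in range(len(x)) if i != y)
    return total / len(x)

# three class scores; the correct class (index 1) has the highest score,
# but the others are within the margin of 1, so they still contribute
loss = multi_margin_loss([0.1, 0.8, 0.3], y=1)
```

Here the two wrong classes contribute 0.3 and 0.5 respectively, giving 0.8 / 3 ≈ 0.267; the loss only reaches zero once x[y] exceeds every other score by the full margin.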
TripletMarginLoss
class torch.nn.TripletMarginLoss
(margin=1.0, p=2.0, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0. This is used for measuring a relative similarity between samples. A triplet is composed of a, p and n (i.e., anchor, positive example and negative example, respectively). The shapes of all input tensors should be (N, D).
The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al.
The loss function for each sample in the mini-batch is:
L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}
where
d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p
Parameters
Shape:
Examples:
>>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
>>> anchor = torch.randn(100, 128, requires_grad=True)
>>> positive = torch.randn(100, 128, requires_grad=True)
>>> negative = torch.randn(100, 128, requires_grad=True)
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
Vision layers
PixelShuffle
class torch.nn.PixelShuffle
(upscale_factor)[source]
Rearranges elements in a tensor of shape (*, C \times r^2, H, W) to a tensor of shape (*, C, H \times r, W \times r).
This is useful for implementing efficient sub-pixel convolution with a stride of 1/r.
Look at the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (2016) for more details.
Parameters
upscale_factor (int) – factor to increase spatial resolution by
Shape:
Examples:
>>> pixel_shuffle = nn.PixelShuffle(3)
>>> input = torch.randn(1, 9, 4, 4)
>>> output = pixel_shuffle(input)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
Upsample
class torch.nn.Upsample
(size=None, scale_factor=None, mode='nearest', align_corners=None)[source]
Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
The input data is assumed to be of the form minibatch x channels x [optional depth] x [optional height] x width. Hence, for spatial inputs, we expect a 4D Tensor and for volumetric inputs, we expect a 5D Tensor.
The algorithms available for upsampling are nearest neighbor and linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input Tensor, respectively.
One can either give a scale_factor or the target output size to calculate the output size. (You cannot give both, as it is ambiguous.)
Parameters
Shape:
D_{out} = \left\lfloor D_{in} \times \text{scale\_factor} \right\rfloor
H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor
Warning
With align_corners = True, the linearly interpolating modes (linear, bilinear, bicubic, and trilinear) don't proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See below for concrete examples of how this affects the outputs.
Note
If you want downsampling/general resizing, you should use interpolate().
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='nearest')
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> m(input)
tensor([[[[ 1.0000,  1.2500,  1.7500,  2.0000],
          [ 1.5000,  1.7500,  2.2500,  2.5000],
          [ 2.5000,  2.7500,  3.2500,  3.5000],
          [ 3.0000,  3.2500,  3.7500,  4.0000]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> m(input)
tensor([[[[ 1.0000,  1.3333,  1.6667,  2.0000],
          [ 1.6667,  2.0000,  2.3333,  2.6667],
          [ 2.3333,  2.6667,  3.0000,  3.3333],
          [ 3.0000,  3.3333,  3.6667,  4.0000]]]])

>>> # Try scaling the same data in a larger tensor
>>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3)
>>> input_3x3[:, :, :2, :2].copy_(input)
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> input_3x3
tensor([[[[ 1.,  2.,  0.],
          [ 3.,  4.,  0.],
          [ 0.,  0.,  0.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> # Notice that the values in the top left corner are the same as in the small input (except at the boundary)
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.2500,  1.7500,  1.5000,  0.5000,  0.0000],
          [ 1.5000,  1.7500,  2.2500,  1.8750,  0.6250,  0.0000],
          [ 2.5000,  2.7500,  3.2500,  2.6250,  0.8750,  0.0000],
          [ 2.2500,  2.4375,  2.8125,  2.2500,  0.7500,  0.0000],
          [ 0.7500,  0.8125,  0.9375,  0.7500,  0.2500,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> # Notice that the values in the top left corner are now changed
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.4000,  1.8000,  1.6000,  0.8000,  0.0000],
          [ 1.8000,  2.2000,  2.6000,  2.2400,  1.1200,  0.0000],
          [ 2.6000,  3.0000,  3.4000,  2.8800,  1.4400,  0.0000],
          [ 2.4000,  2.7200,  3.0400,  2.5600,  1.2800,  0.0000],
          [ 1.2000,  1.3600,  1.5200,  1.2800,  0.6400,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])
UpsamplingNearest2d
class torch.nn.UpsamplingNearest2d
(size=None, scale_factor=None)[source]
Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
Parameters
Warning
This class is deprecated in favor of interpolate().
Shape:
H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> m = nn.UpsamplingNearest2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])
UpsamplingBilinear2d
class torch.nn.UpsamplingBilinear2d
(size=None, scale_factor=None)[source]
Applies a 2D bilinear upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
Parameters
Warning
This class is deprecated in favor of interpolate(). It is equivalent to nn.functional.interpolate(..., mode='bilinear', align_corners=True).
Shape:
H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> m = nn.UpsamplingBilinear2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.0000,  1.3333,  1.6667,  2.0000],
          [ 1.6667,  2.0000,  2.3333,  2.6667],
          [ 2.3333,  2.6667,  3.0000,  3.3333],
          [ 3.0000,  3.3333,  3.6667,  4.0000]]]])
DataParallel layers (multi-GPU, distributed)
DataParallel
class torch.nn.DataParallel
(module, device_ids=None, output_device=None, dim=0)[source]
Implements data parallelism at the module level.
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module.
The batch size should be larger than the number of GPUs used.
See also: Use nn.DataParallel instead of multiprocessing
Arbitrary positional and keyword inputs are allowed to be passed into DataParallel, but some types are specially handled. Tensors will be scattered on the dim specified (default 0). tuple, list, and dict types will be shallow copied. The other types will be shared among different threads and can be corrupted if written to in the model's forward pass.
The parallelized module must have its parameters and buffers on device_ids[0] before running this DataParallel module.
Warning
In each forward, module is replicated on each device, so any updates to the running module in forward will be lost. For example, if module has a counter attribute that is incremented in each forward, it will always stay at the initial value, because the update is done on the replicas, which are destroyed after forward. However, DataParallel guarantees that the replica on device[0] will have its parameters and buffers sharing storage with the base parallelized module. So in-place updates to the parameters or buffers on device[0] will be recorded. E.g., BatchNorm2d and spectral_norm() rely on this behavior to update the buffers.
Warning
Forward and backward hooks defined on module and its submodules will be invoked len(device_ids) times, each with inputs located on a particular device. In particular, the hooks are only guaranteed to be executed in the correct order with respect to operations on the corresponding device. For example, it is not guaranteed that hooks set via register_forward_pre_hook() are executed before all len(device_ids) forward() calls, but each such hook is executed before the corresponding forward() call of that device.
Warning
When module returns a scalar (i.e., a 0-dimensional tensor) in forward(), this wrapper will return a vector of length equal to the number of devices used in data parallelism, containing the result from each device.
Note
There is a subtlety in using the pack sequence -> recurrent network -> unpack sequence pattern in a Module wrapped in DataParallel. See the "My recurrent network doesn't work with data parallelism" section in the FAQ for details.
Parameters
Variables
~DataParallel.module (Module) – the module to be parallelized
Example:
>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
>>> output = net(input_var)  # input_var can be on any device, including CPU
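As a minimal runnable sketch (the toy nn.Linear model here is an assumption for illustration; on a machine with no visible GPUs, DataParallel simply falls through to a plain forward call):

```python
import torch
import torch.nn as nn

# Toy model for illustration; any nn.Module works the same way.
model = nn.Linear(10, 5)

# With visible GPUs, the batch (dim 0) is scattered across them;
# without GPUs, DataParallel just runs the wrapped module directly.
net = nn.DataParallel(model)

x = torch.randn(8, 10)   # the batch size should exceed the number of GPUs
y = net(x)
print(y.shape)           # torch.Size([8, 5])
```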
DistributedDataParallel
class torch.nn.parallel.DistributedDataParallel
(module, device_ids=None, output_device=None, dim=0, broadcast_buffers=True, process_group=None, bucket_cap_mb=25, find_unused_parameters=False, check_reduction=False)[source]
Implements distributed data parallelism based on the torch.distributed package at the module level.
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged.
The batch size should be larger than the number of GPUs used locally.
See also: Basics and Use nn.DataParallel instead of multiprocessing. The same constraints on input as in torch.nn.DataParallel apply.
Creation of this class requires torch.distributed to be already initialized, by calling torch.distributed.init_process_group().
DistributedDataParallel can be used in the following two ways:
(1) Single-Process Multi-GPU. In this case, a single process will be spawned on each host/node and each process will operate on all the GPUs of the node where it's running. To use DistributedDataParallel in this way, you can simply construct the model as follows:
>>> torch.distributed.init_process_group(backend="nccl")
>>> model = DistributedDataParallel(model)  # device_ids will include all GPU devices by default
(2) Multi-Process Single-GPU. This is the highly recommended way to use DistributedDataParallel, with multiple processes, each of which operates on a single GPU. This is currently the fastest approach to data parallel training in PyTorch and applies to both single-node (multi-GPU) and multi-node data parallel training. It is proven to be significantly faster than torch.nn.DataParallel for single-node multi-GPU data parallel training.
Here is how to use it: on each host with N GPUs, you should spawn N processes, each of which works on a single GPU from 0 to N-1. Therefore, it is your job to ensure that your training script operates on a single given GPU by calling:
>>> torch.cuda.set_device(i)
where i is from 0 to N-1. In each process, you should use the following to construct this module:
>>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...')
>>> model = DistributedDataParallel(model, device_ids=[i], output_device=i)
In order to spawn multiple processes per node, you can use either torch.distributed.launch or torch.multiprocessing.spawn.
Note
The nccl backend is currently the fastest and most highly recommended backend for Multi-Process Single-GPU distributed training, and this applies to both single-node and multi-node distributed training.
Note
This module also supports mixed-precision distributed training. This means that your model can have different types of parameters, such as a mix of fp16 and fp32, and gradient reduction on these mixed types of parameters will just work fine. Also note that the nccl backend is currently the fastest and most highly recommended backend for fp16/fp32 mixed-precision training.
Note
If you use torch.save on one process to checkpoint the module, and torch.load on some other processes to recover it, make sure that map_location is configured properly for every process. Without map_location, torch.load would recover the module to the devices where the module was saved from.
Warning
This module works only with the gloo and nccl backends.
Warning
The constructor, the forward method, and differentiation of the output (or of a function of the output of this module) are distributed synchronization points. Take that into account in case different processes might be executing different code.
Warning
This module assumes all parameters are registered in the model by the time it is created. No parameters should be added nor removed later. Same applies to buffers.
Warning
This module assumes that the parameters registered in the model of each distributed process are in the same order. The module itself will conduct gradient all-reduction following the reverse order of the registered parameters of the model. In other words, it is the user's responsibility to ensure that each distributed process has the exact same model and thus the exact same parameter registration order.
Warning
This module assumes all buffers and gradients are dense.
Warning
This module doesn't work with torch.autograd.grad() (i.e., it will only work if gradients are to be accumulated in the .grad attributes of parameters).
Warning
If you plan on using this module with an nccl backend or a gloo backend (that uses Infiniband), together with a DataLoader that uses multiple workers, please change the multiprocessing start method to forkserver (Python 3 only) or spawn. Unfortunately Gloo (when using Infiniband) and NCCL2 are not fork safe, and you will likely experience deadlocks if you don't change this setting.
Warning
Forward and backward hooks defined on module and its submodules won't be invoked anymore, unless the hooks are initialized in the forward() method.
Warning
You should never try to change your model's parameters after wrapping your model with DistributedDataParallel. The DistributedDataParallel constructor registers additional gradient reduction functions on all of the model's parameters at construction time. If you change the model's parameters afterwards, this is not supported and unexpected behaviors can happen, since some parameters' gradient reduction functions might not get called.
Note
Parameters are never broadcast between processes. The module performs an all-reduce step on gradients and assumes that they will be modified by the optimizer in all processes in the same way. Buffers (e.g. BatchNorm stats) are broadcast from the module in process of rank 0, to all other replicas in the system in every iteration.
Parameters
Variables
~DistributedDataParallel.module (Module) – the module to be parallelized
Example:
>>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...')
>>> net = torch.nn.DistributedDataParallel(model, pg)
no_sync
()[source]
A context manager to disable gradient synchronizations across DDP processes. Within this context, gradients will be accumulated on module variables, which will later be synchronized in the first forward-backward pass exiting the context.
Example:
>>> ddp = torch.nn.DistributedDataParallel(model, pg)
>>> with ddp.no_sync():
...     for input in inputs:
...         ddp(input).backward()     # no synchronization, accumulate grads
... ddp(another_input).backward()     # synchronize grads
Utilities
clip_grad_norm_
torch.nn.utils.clip_grad_norm_
(parameters, max_norm, norm_type=2)[source]
Clips gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place.
Parameters
Returns
Total norm of the parameters (viewed as a single vector).
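A small runnable sketch of the in-place rescaling (the single parameter and gradient values here are made up for illustration; norm_type defaults to 2):

```python
import torch

# Single 3-element parameter with a known gradient.
w = torch.nn.Parameter(torch.ones(3))
(10.0 * w).sum().backward()          # w.grad is now [10., 10., 10.]

# clip_grad_norm_ returns the total norm *before* clipping:
# sqrt(3 * 10^2) ≈ 17.32.
total_norm = torch.nn.utils.clip_grad_norm_([w], max_norm=1.0)
print(float(total_norm))             # ≈ 17.3205

# Gradients were rescaled in place so their joint norm equals max_norm.
print(float(w.grad.norm()))          # ≈ 1.0
```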
clip_grad_value_
torch.nn.utils.clip_grad_value_
(parameters, clip_value)[source]
Clips gradient of an iterable of parameters at specified value.
Gradients are modified in-place.
Parameters
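For example (gradient values chosen arbitrarily), each gradient component is clamped into [-clip_value, clip_value]:

```python
import torch

w = torch.nn.Parameter(torch.tensor([1.0, -2.0, 3.0]))
w.grad = torch.tensor([5.0, -7.0, 0.5])

# Clamp every gradient entry into [-1, 1], in place.
torch.nn.utils.clip_grad_value_([w], clip_value=1.0)
print(w.grad)   # tensor([ 1.0000, -1.0000,  0.5000])
```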
parameters_to_vector
torch.nn.utils.parameters_to_vector
(parameters)[source]
Convert parameters to one vector
Parameters
parameters (Iterable[Tensor]) – an iterator of Tensors that are the parameters of a model.
Returns
The parameters represented by a single vector
vector_to_parameters
torch.nn.utils.vector_to_parameters
(vec, parameters)[source]
Convert one vector to the parameters
Parameters
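The two helpers are inverses of each other; a quick round-trip sketch, assuming a small nn.Linear as the model:

```python
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

model = nn.Linear(4, 2)                 # weight (2x4) + bias (2) = 10 values
vec = parameters_to_vector(model.parameters())
print(vec.shape)                        # torch.Size([10])

# Writing a modified flat vector back restores each parameter in order.
vector_to_parameters(torch.zeros_like(vec), model.parameters())
print(float(model.weight.abs().sum()))  # 0.0
```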
weight_norm
torch.nn.utils.weight_norm
(module, name='weight', dim=0)[source]
Applies weight normalization to a parameter in the given module.
\mathbf{w} = g \dfrac{\mathbf{v}}{\|\mathbf{v}\|}
Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This replaces the parameter specified by name (e.g. 'weight') with two parameters: one specifying the magnitude (e.g. 'weight_g') and one specifying the direction (e.g. 'weight_v'). Weight normalization is implemented via a hook that recomputes the weight tensor from the magnitude and direction before every forward() call.
By default, with dim=0, the norm is computed independently per output channel/plane. To compute a norm over the entire weight tensor, use dim=None.
See https://arxiv.org/abs/1602.07868
Parameters
Returns
The original module with the weight norm hook
Example:
>>> m = weight_norm(nn.Linear(20, 40), name='weight')
>>> m
Linear(in_features=20, out_features=40, bias=True)
>>> m.weight_g.size()
torch.Size([40, 1])
>>> m.weight_v.size()
torch.Size([40, 20])
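With dim=0 on a Linear layer, the norm in the formula above is taken per output row, so each row of the recomputed weight has norm equal to the matching entry of weight_g. A quick runnable check of that property:

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

m = weight_norm(nn.Linear(20, 40), name='weight')

# weight = g * v / ||v||, with the norm taken per output row (dim=0),
# so each row norm of m.weight equals the matching entry of m.weight_g.
row_norms = m.weight.norm(dim=1, keepdim=True)
print(torch.allclose(row_norms, m.weight_g, atol=1e-6))   # True
```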
remove_weight_norm
torch.nn.utils.remove_weight_norm
(module, name='weight')[source]
Removes the weight normalization reparameterization from a module.
Parameters
Example
>>> m = weight_norm(nn.Linear(20, 40))
>>> remove_weight_norm(m)
spectral_norm
torch.nn.utils.spectral_norm
(module, name='weight', n_power_iterations=1, eps=1e-12, dim=None)[source]
Applies spectral normalization to a parameter in the given module.
\mathbf{W}_{SN} = \dfrac{\mathbf{W}}{\sigma(\mathbf{W})}, \quad \sigma(\mathbf{W}) = \max_{\mathbf{h}: \mathbf{h} \ne 0} \dfrac{\|\mathbf{W} \mathbf{h}\|_2}{\|\mathbf{h}\|_2}
Spectral normalization stabilizes the training of discriminators (critics) in Generative Adversarial Networks (GANs) by rescaling the weight tensor with the spectral norm σ of the weight matrix, calculated using the power iteration method. If the dimension of the weight tensor is greater than 2, it is reshaped to 2D in the power iteration to get the spectral norm. This is implemented via a hook that calculates the spectral norm and rescales the weight before every forward() call.
See Spectral Normalization for Generative Adversarial Networks.
Parameters
Returns
The original module with the spectral norm hook
Example:
>>> m = spectral_norm(nn.Linear(20, 40))
>>> m
Linear(in_features=20, out_features=40, bias=True)
>>> m.weight_u.size()
torch.Size([40])
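A runnable sanity check of the rescaling: after enough power iterations the largest singular value of the normalized weight should be close to 1 (n_power_iterations=50 here is deliberately generous; torch.linalg.svdvals assumes a recent PyTorch — on old versions one would use torch.svd instead):

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

torch.manual_seed(0)
m = spectral_norm(nn.Linear(20, 40), n_power_iterations=50)

_ = m(torch.randn(3, 20))      # the pre-forward hook recomputes the weight

# The largest singular value of the rescaled weight should be close to 1
# (power iteration gives an estimate, so allow a small tolerance).
sigma = torch.linalg.svdvals(m.weight.detach()).max()
print(float(sigma))            # ≈ 1.0
```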
remove_spectral_norm
torch.nn.utils.remove_spectral_norm
(module, name='weight')[source]
Removes the spectral normalization reparameterization from a module.
Parameters
Example
>>> m = spectral_norm(nn.Linear(40, 10))
>>> remove_spectral_norm(m)
PackedSequence
torch.nn.utils.rnn.PackedSequence
(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)[source]
Holds the data and list of batch_sizes of a packed sequence.
All RNN modules accept packed sequences as inputs.
Note
Instances of this class should never be created manually. They are meant to be instantiated by functions like pack_padded_sequence().
Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to pack_padded_sequence(). For instance, given data abc and x, the PackedSequence would contain data axbc with batch_sizes=[2,1,1].
Variables
Note
data can be on an arbitrary device and of arbitrary dtype. sorted_indices and unsorted_indices must be torch.int64 tensors on the same device as data. However, batch_sizes should always be a CPU torch.int64 tensor.
This invariant is maintained throughout the PackedSequence class, and by all functions that construct a PackedSequence in PyTorch (i.e., they only pass in tensors conforming to this constraint).
pack_padded_sequence
torch.nn.utils.rnn.pack_padded_sequence
(input, lengths, batch_first=False, enforce_sorted=True)[source]
Packs a Tensor containing padded sequences of variable length.
input can be of size T x B x * where T is the length of the longest sequence (equal to lengths[0]), B is the batch size, and * is any number of dimensions (including 0). If batch_first is True, B x T x * input is expected.
For unsorted sequences, use enforce_sorted=False. If enforce_sorted is True, the sequences should be sorted by length in decreasing order, i.e. input[:,0] should be the longest sequence, and input[:,B-1] the shortest one. enforce_sorted=True is only necessary for ONNX export.
Note
This function accepts any input that has at least two dimensions. You can apply it to pack the labels, and use the output of the RNN with them to compute the loss directly. A Tensor can be retrieved from a PackedSequence object by accessing its .data attribute.
Parameters
Returns
a PackedSequence object
pad_packed_sequence
torch.nn.utils.rnn.pad_packed_sequence
(sequence, batch_first=False, padding_value=0.0, total_length=None)[source]
Pads a packed batch of variable length sequences.
It is an inverse operation to pack_padded_sequence().
The returned Tensor's data will be of size T x B x *, where T is the length of the longest sequence and B is the batch size. If batch_first is True, the data will be transposed into B x T x * format.
Batch elements will be ordered decreasingly by their length.
Note
total_length is useful to implement the pack sequence -> recurrent network -> unpack sequence pattern in a Module wrapped in DataParallel. See this FAQ section for details.
Parameters
Returns
Tuple of Tensor containing the padded sequence, and a Tensor containing the list of lengths of each sequence in the batch.
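A round trip through both functions restores the padded tensor and reports the original lengths:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

padded = torch.tensor([[1, 2, 3],
                       [4, 5, 0]])
packed = pack_padded_sequence(padded, lengths=[3, 2], batch_first=True)

# Unpacking reproduces the padded layout and the per-sequence lengths.
out, lengths = pad_packed_sequence(packed, batch_first=True)
print(out)       # tensor([[1, 2, 3], [4, 5, 0]])
print(lengths)   # tensor([3, 2])
```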
pad_sequence
torch.nn.utils.rnn.pad_sequence
(sequences, batch_first=False, padding_value=0)[source]
Pad a list of variable length Tensors with padding_value.
pad_sequence stacks a list of Tensors along a new dimension, and pads them to equal length. For example, if the input is a list of sequences each of size L x *, the output is of size T x B x * if batch_first is False, and B x T x * otherwise.
B is the batch size; it is equal to the number of elements in sequences. T is the length of the longest sequence. L is the length of each sequence. * is any number of trailing dimensions, including none.
Example
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300)
>>> b = torch.ones(22, 300)
>>> c = torch.ones(15, 300)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300])
Note
This function returns a Tensor of size T x B x * or B x T x *, where T is the length of the longest sequence. It assumes that the trailing dimensions and type of all the Tensors in sequences are the same.
Parameters
Returns
Tensor of size T x B x * if batch_first is False. Tensor of size B x T x * otherwise.
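With batch_first=True the same call returns B x T x * instead; a small integer example:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5])

# Stack along a new batch dimension (first, since batch_first=True),
# padding the shorter sequence with padding_value.
out = pad_sequence([a, b], batch_first=True, padding_value=0)
print(out)   # tensor([[1, 2, 3], [4, 5, 0]])
```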
pack_sequence
torch.nn.utils.rnn.pack_sequence
(sequences, enforce_sorted=True)[source]
Packs a list of variable length Tensors.
sequences should be a list of Tensors of size L x *, where L is the length of a sequence and * is any number of trailing dimensions, including zero.
For unsorted sequences, use enforce_sorted=False. If enforce_sorted is True, the sequences should be sorted in order of decreasing length. enforce_sorted=True is only necessary for ONNX export.
Example
>>> from torch.nn.utils.rnn import pack_sequence
>>> a = torch.tensor([1,2,3])
>>> b = torch.tensor([4,5])
>>> c = torch.tensor([6])
>>> pack_sequence([a, b, c])
PackedSequence(data=tensor([ 1,  4,  6,  2,  5,  3]), batch_sizes=tensor([ 3,  2,  1]))
Parameters
Returns
a PackedSequence object
inplace (bool, optional) – If True, will do this operation in-place. Default: False
padding_idx (int, optional) – If given, pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index.
max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm.
norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default 2.
scale_grad_by_freq (boolean, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False.
sparse (bool, optional) – If True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients.
embeddings (Tensor) – FloatTensor containing weights for the Embedding; the first dimension is passed as num_embeddings, the second as embedding_dim.
freeze (boolean, optional) – If True, the tensor does not get updated in the learning process. Equivalent to embedding.weight.requires_grad = False. Default: True
max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm.
norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default 2.
scale_grad_by_freq (boolean, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False. Note: this option is not supported when mode="max".
mode (string, optional) – "sum", "mean" or "max". Specifies the way to reduce the bag. "sum" computes the weighted sum, taking per_sample_weights into consideration. "mean" computes the average of the values in the bag, "max" computes the max value over each bag. Default: "mean"
sparse (bool, optional) – If True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Note: this option is not supported when mode="max".
If input is 2D of shape (B, N), it will be treated as B bags (sequences) each of fixed length N, and this will return B values aggregated in a way depending on the mode. offsets is ignored and required to be None in this case.
If input is 1D of shape (N), it will be treated as a concatenation of multiple bags (sequences). offsets is required to be a 1D tensor containing the starting index positions of each bag in input. Therefore, for offsets of shape (B), input will be viewed as having B bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros.
freeze (boolean, optional) – If True, the tensor does not get updated in the learning process. Equivalent to embeddingbag.weight.requires_grad = False. Default: True
If keepdim is True, then (N, 1).
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Output: if reduction is 'none', then (N, *), same shape as the input.
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets.
Output: if reduction is 'none', then the same size as the target: (N), or (N, d_1, d_2, ..., d_K) with K ≥ 1 in the case of K-dimensional loss.
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken. Default: 'mean'
zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False. Infinite losses mainly occur when the inputs are too short to be aligned to the targets.
(e.g. obtained with torch.nn.functional.log_softmax()).
target_n = targets[n,0:s_n] for each target in a batch. Lengths must each be ≤ S. If the targets are given as a 1d tensor that is the concatenation of individual targets, the target_lengths must add up to the total length of the tensor.
Output: if reduction is 'none', then (N), where N = batch size.
log_input (bool, optional) – If True, the loss is computed as exp(input) − target * input; if False, the loss is input − target * log(input + eps).
eps (float, optional) – Small value to avoid evaluation of log(0) when log_input = False. Default: 1e-8
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied. 'batchmean': the sum of the output will be divided by batchsize. 'sum': the output will be summed. 'mean': the output will be divided by the number of elements in the output. Default: 'mean'
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
If reduction is 'none', then (N).
If margin is missing, the default value is 0.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
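A margin defaulting to 0 together with these reduction parameters matches a margin-based criterion; the sketch below assumes nn.CosineEmbeddingLoss as a representative example (its margin also defaults to 0):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x1 = torch.randn(5, 8)
x2 = torch.randn(5, 8)
y = torch.ones(5)  # target +1: treat each pair as similar

# margin defaults to 0; reduction='none' returns one loss per pair.
loss_fn = nn.CosineEmbeddingLoss(margin=0.0, reduction='none')
per_pair = loss_fn(x1, x2, y)  # for y=+1 this is 1 - cos(x1, x2)
```

With the default reduction='mean' the same call would return a single scalar averaged over the 5 pairs.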
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Default: False.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
If reduction is 'none', then (N).
mode (str) – the upsampling algorithm: one of 'nearest', 'linear', 'bilinear', 'bicubic' and 'trilinear'. Default: 'nearest'
align_corners (bool, optional) – if True, the corner pixels of the input and output tensors are aligned, thus preserving the values at those pixels. This only has an effect when mode is 'linear', 'bilinear', or 'trilinear'. Default: False
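The two parameters can be exercised with torch.nn.functional.interpolate, which accepts the same mode and align_corners arguments; a minimal sketch on a 4-D (N x C x H x W) tensor:

```python
import torch
import torch.nn.functional as F

img = torch.randn(1, 3, 8, 8)  # N x C x H x W

# 'nearest' is the default mode; align_corners does not apply to it.
up_nearest = F.interpolate(img, scale_factor=2, mode='nearest')

# align_corners=True aligns the corner pixels of input and output,
# preserving the values at those pixels.
up_bilinear = F.interpolate(img, scale_factor=2, mode='bilinear',
                            align_corners=True)
```

Both calls double the spatial dimensions; only the interpolation kernel (and corner handling) differs.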
device_ids – CUDA devices. The i-th module replica is placed on device_ids[i]. For multi-device modules and CPU modules, device_ids must be None or an empty list, and input data for the forward pass must be placed on the correct device. (default: all devices for single-device modules)
broadcast_buffers – flag that enables syncing (broadcasting) buffers of the module at the beginning of the forward function. (default: True)
process_group – the process group to be used for distributed data all-reduction. If None, the default process group, which is created by torch.distributed.init_process_group, will be used. (default: None)
bucket_cap_mb – controls the bucket size in megabytes (MB). (default: 25)
find_unused_parameters – traverse the autograd graph of all tensors contained in the return value of the wrapped module's forward function. Parameters that don't receive gradients as part of this graph are preemptively marked as being ready to be reduced. Note that all forward outputs that are derived from module parameters must participate in calculating the loss and the later gradient computation. If they don't, this wrapper will hang waiting for autograd to produce gradients for those parameters. Any outputs derived from module parameters that are otherwise unused can be detached from the autograd graph using torch.Tensor.detach. (default: False)
check_reduction – if True, enables DistributedDataParallel to automatically check whether the previous iteration's backward reductions were successfully issued at the beginning of every iteration's forward function. You normally don't need this option enabled unless you are observing unexpected behavior, such as different ranks getting different gradients, which should not happen if DistributedDataParallel is used correctly. (default: False)
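A minimal sketch of wrapping a CPU module in DistributedDataParallel; the single-process gloo setup (rank 0, world size 1, and the MASTER_ADDR/MASTER_PORT values) is only illustrative — real training launches one process per rank:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Illustrative single-process process group on CPU.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

# For CPU modules, device_ids must be None (or an empty list),
# and inputs must already be on the correct device.
ddp_model = DDP(nn.Linear(10, 5), device_ids=None, bucket_cap_mb=25)
out = ddp_model(torch.randn(3, 10))

dist.destroy_process_group()
```

Gradient all-reduction happens automatically during backward, bucketed according to bucket_cap_mb.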
Can be 'inf' for infinity norm.
0, except for modules that are instances of ConvTranspose{1,2,3}d, when it is 1.
A PackedSequence is constructed from sequences.
batch_first (bool, optional) – if True, the input is expected in B x T x * format.
enforce_sorted (bool, optional) – if True, the input is expected to contain sequences sorted by length in decreasing order. If False, this condition is not checked. Default: True.
batch_first (bool, optional) – if True, the output will be in B x T x * format.
total_length (int, optional) – if not None, the output will be padded to have length total_length. This method will throw ValueError if total_length is less than the max sequence length in sequence.
batch_first (bool, optional) – output will be in B x T x * format if True, or in T x B x * format otherwise.
enforce_sorted (bool, optional) – if True, checks that the input contains sequences sorted by length in decreasing order. If False, this condition is not checked. Default: True.
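The batch_first, enforce_sorted, and total_length parameters fit together in the usual pad-then-pack round trip; a minimal sketch:

```python
import torch
from torch.nn.utils.rnn import (pad_sequence, pack_padded_sequence,
                                pad_packed_sequence)

# Three sequences of lengths 5, 3, 2 (already in decreasing order,
# as enforce_sorted=True requires).
seqs = [torch.randn(5, 2), torch.randn(3, 2), torch.randn(2, 2)]

# batch_first=True gives B x T x * output: shape (3, 5, 2).
padded = pad_sequence(seqs, batch_first=True)
lengths = torch.tensor([5, 3, 2])

packed = pack_padded_sequence(padded, lengths,
                              batch_first=True, enforce_sorted=True)

# total_length=7 pads the unpacked output beyond the max length of 5.
unpacked, out_lengths = pad_packed_sequence(packed, batch_first=True,
                                            total_length=7)
```

total_length is mainly useful when the padded output must match a fixed length across data-parallel replicas.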