Copyright notice: This is an original article by the blogger, distributed under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/weixin_36670529/article/details/101199430
torch.abs(input, out=None) → Tensor
Computes the element-wise absolute value of the given input tensor.
\text{out}_{i} = |\text{input}_{i}|
Parameters
Example:
>>> torch.abs(torch.tensor([-1, -2, 3]))
tensor([ 1, 2, 3])
torch.acos(input, out=None) → Tensor
Returns a new tensor with the arccosine of the elements of input.
\text{out}_{i} = \cos^{-1}(\text{input}_{i})
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.3348, -0.5889, 0.2005, -0.1584])
>>> torch.acos(a)
tensor([ 1.2294, 2.2004, 1.3690, 1.7298])
torch.add()
torch.add(input, other, out=None)
Adds the scalar other to each element of the input input and returns a new resulting tensor.
\text{out} = \text{input} + \text{other}
If input is of type FloatTensor or DoubleTensor, other must be a real number, otherwise it should be an integer.
Parameters
input (Tensor) – the input tensor.
other (Number) – the number to be added to each element of input.
Keyword Arguments
out (Tensor, optional) – the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.0202, 1.0985, 1.3506, -0.6056])
>>> torch.add(a, 20)
tensor([ 20.0202, 21.0985, 21.3506, 19.3944])
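The out keyword argument described above can be used to write the result into a preallocated tensor; a minimal sketch, assuming the same a as in the example above:
>>> out = torch.empty(4)
>>> torch.add(a, 20, out=out)   # the result is written into `out` and also returned
tensor([ 20.0202, 21.0985, 21.3506, 19.3944])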
torch.add(input, alpha=1, other, out=None)
Each element of the tensor other is multiplied by the scalar alpha and added to each element of the tensor input. The resulting tensor is returned.
The shapes of input and other must be broadcastable.
\text{out} = \text{input} + \text{alpha} \times \text{other}
If other is of type FloatTensor or DoubleTensor, alpha must be a real number, otherwise it should be an integer.
Parameters
input (Tensor) – the first input tensor.
alpha (Number) – the scalar multiplier for other.
other (Tensor) – the second input tensor.
Keyword Arguments
out (Tensor, optional) – the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.9732, -0.3497, 0.6245, 0.4022])
>>> b = torch.randn(4, 1)
>>> b
tensor([[ 0.3743],
[-1.7724],
[-0.5811],
[-0.8017]])
>>> torch.add(a, 10, b)
tensor([[ 2.7695, 3.3930, 4.3672, 4.1450],
[-18.6971, -18.0736, -17.0994, -17.3216],
[ -6.7845, -6.1610, -5.1868, -5.4090],
[ -8.9902, -8.3667, -7.3925, -7.6147]])
torch.addcdiv(input, value=1, tensor1, tensor2, out=None) → Tensor
Performs the element-wise division of tensor1 by tensor2, multiplies the result by the scalar value and adds it to input.
\text{out}_i = \text{input}_i + \text{value} \times \frac{\text{tensor1}_i}{\text{tensor2}_i}
The shapes of input, tensor1, and tensor2 must be broadcastable.
For inputs of type FloatTensor or DoubleTensor, value must be a real number, otherwise an integer.
Parameters
Example:
>>> t = torch.randn(1, 3)
>>> t1 = torch.randn(3, 1)
>>> t2 = torch.randn(1, 3)
>>> torch.addcdiv(t, 0.1, t1, t2)
tensor([[-0.2312, -3.6496, 0.1312],
[-1.0428, 3.4292, -0.1030],
[-0.5369, -0.9829, 0.0430]])
torch.addcmul(input, value=1, tensor1, tensor2, out=None) → Tensor
Performs the element-wise multiplication of tensor1 by tensor2, multiplies the result by the scalar value and adds it to input.
\text{out}_i = \text{input}_i + \text{value} \times \text{tensor1}_i \times \text{tensor2}_i
The shapes of input, tensor1, and tensor2 must be broadcastable.
For inputs of type FloatTensor or DoubleTensor, value must be a real number, otherwise an integer.
Parameters
Example:
>>> t = torch.randn(1, 3)
>>> t1 = torch.randn(3, 1)
>>> t2 = torch.randn(1, 3)
>>> torch.addcmul(t, 0.1, t1, t2)
tensor([[-0.8635, -0.6391, 1.6174],
[-0.7617, -0.5879, 1.7388],
[-0.8353, -0.6249, 1.6511]])
torch.asin(input, out=None) → Tensor
Returns a new tensor with the arcsine of the elements of input.
\text{out}_{i} = \sin^{-1}(\text{input}_{i})
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.5962, 1.4985, -0.4396, 1.4525])
>>> torch.asin(a)
tensor([-0.6387, nan, -0.4552, nan])
torch.atan(input, out=None) → Tensor
Returns a new tensor with the arctangent of the elements of input.
\text{out}_{i} = \tan^{-1}(\text{input}_{i})
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.2341, 0.2539, -0.6256, -0.6448])
>>> torch.atan(a)
tensor([ 0.2299, 0.2487, -0.5591, -0.5727])
torch.atan2(input, other, out=None) → Tensor
Element-wise arctangent of \text{input}_{i} / \text{other}_{i} with consideration of the quadrant. Returns a new tensor with the signed angles in radians between vector (\text{other}_{i}, \text{input}_{i}) and vector (1, 0). (Note that \text{other}_{i}, the second parameter, is the x-coordinate, while \text{input}_{i}, the first parameter, is the y-coordinate.)
The shapes of input and other must be broadcastable.
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.9041, 0.0196, -0.3108, -2.4423])
>>> torch.atan2(a, torch.randn(4))
tensor([ 0.9833, 0.0811, -1.9743, -1.4151])
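A small sketch of the quadrant handling described above: passing y-coordinates first and x-coordinates second yields angles in all four quadrants (values shown are approximate):
>>> y = torch.tensor([ 1.,  1., -1., -1.])
>>> x = torch.tensor([ 1., -1., -1.,  1.])
>>> torch.atan2(y, x)   # pi/4, 3*pi/4, -3*pi/4, -pi/4
tensor([ 0.7854,  2.3562, -2.3562, -0.7854])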
torch.bitwise_not(input, out=None) → Tensor
Computes the bitwise NOT of the given input tensor. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical NOT.
Parameters
Example:
>>> torch.bitwise_not(torch.tensor([-1, -2, 3], dtype=torch.int8))
tensor([ 0, 1, -4], dtype=torch.int8)
torch.ceil(input, out=None) → Tensor
Returns a new tensor with the ceil of the elements of input, the smallest integer greater than or equal to each element.
\text{out}_{i} = \left\lceil \text{input}_{i} \right\rceil
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.6341, -1.4208, -1.0900, 0.5826])
>>> torch.ceil(a)
tensor([-0., -1., -1., 1.])
torch.clamp(input, min, max, out=None) → Tensor
Clamps all elements in input into the range [min, max] and returns a resulting tensor:
y_i = \begin{cases} \text{min} & \text{if } x_i < \text{min} \\ x_i & \text{if } \text{min} \leq x_i \leq \text{max} \\ \text{max} & \text{if } x_i > \text{max} \end{cases}
If input is of type FloatTensor or DoubleTensor, args min and max must be real numbers, otherwise they should be integers.
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-1.7120, 0.1734, -0.0478, -0.0922])
>>> torch.clamp(a, min=-0.5, max=0.5)
tensor([-0.5000, 0.1734, -0.0478, -0.0922])
torch.clamp(input, *, min, out=None) → Tensor
Clamps all elements in input to be larger or equal min.
If input is of type FloatTensor or DoubleTensor, value should be a real number, otherwise it should be an integer.
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.0299, -2.3184, 2.1593, -0.8883])
>>> torch.clamp(a, min=0.5)
tensor([ 0.5000, 0.5000, 2.1593, 0.5000])
torch.clamp(input, *, max, out=None) → Tensor
Clamps all elements in input to be smaller or equal max.
If input is of type FloatTensor or DoubleTensor, value should be a real number, otherwise it should be an integer.
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.7753, -0.4702, -0.4599, 1.1899])
>>> torch.clamp(a, max=0.5)
tensor([ 0.5000, -0.4702, -0.4599, 0.5000])
torch.cos(input, out=None) → Tensor
Returns a new tensor with the cosine of the elements of input.
\text{out}_{i} = \cos(\text{input}_{i})
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 1.4309, 1.2706, -0.8562, 0.9796])
>>> torch.cos(a)
tensor([ 0.1395, 0.2957, 0.6553, 0.5574])
torch.cosh(input, out=None) → Tensor
Returns a new tensor with the hyperbolic cosine of the elements of input.
\text{out}_{i} = \cosh(\text{input}_{i})
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.1632, 1.1835, -0.6979, -0.7325])
>>> torch.cosh(a)
tensor([ 1.0133, 1.7860, 1.2536, 1.2805])
torch.div()
torch.div(input, other, out=None) → Tensor
Divides each element of the input input with the scalar other and returns a new resulting tensor.
\text{out}_i = \frac{\text{input}_i}{\text{other}}
If input is of type FloatTensor or DoubleTensor, other should be a real number, otherwise it should be an integer.
Parameters
input (Tensor) – the input tensor.
other (Number) – the number to be divided to each element of input.
Example:
>>> a = torch.randn(5)
>>> a
tensor([ 0.3810, 1.2774, -0.2972, -0.3719, 0.4637])
>>> torch.div(a, 0.5)
tensor([ 0.7620, 2.5548, -0.5944, -0.7439, 0.9275])
torch.div(input, other, out=None) → Tensor
Each element of the tensor input is divided by each element of the tensor other. The resulting tensor is returned. The shapes of input and other must be broadcastable.
\text{out}_i = \frac{\text{input}_i}{\text{other}_i}
Parameters
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3711, -1.9353, -0.4605, -0.2917],
[ 0.1815, -1.0111, 0.9805, -1.5923],
[ 0.1062, 1.4581, 0.7759, -1.2344],
[-0.1830, -0.0313, 1.1908, -1.4757]])
>>> b = torch.randn(4)
>>> b
tensor([ 0.8032, 0.2930, -0.8113, -0.2308])
>>> torch.div(a, b)
tensor([[-0.4620, -6.6051, 0.5676, 1.2637],
[ 0.2260, -3.4507, -1.2086, 6.8988],
[ 0.1322, 4.9764, -0.9564, 5.3480],
[-0.2278, -0.1068, -1.4678, 6.3936]])
torch.digamma(input, out=None) → Tensor
Computes the logarithmic derivative of the gamma function on input.
\psi(x) = \frac{d}{dx} \ln\left(\Gamma\left(x\right)\right) = \frac{\Gamma'(x)}{\Gamma(x)}
Parameters
input (Tensor) – the tensor to compute the digamma function on
Example:
>>> a = torch.tensor([1, 0.5])
>>> torch.digamma(a)
tensor([-0.5772, -1.9635])
torch.erf(input, out=None) → Tensor
Computes the error function of each element. The error function is defined as follows:
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt
Parameters
Example:
>>> torch.erf(torch.tensor([0, -1., 10.]))
tensor([ 0.0000, -0.8427, 1.0000])
torch.erfc(input, out=None) → Tensor
Computes the complementary error function of each element of input. The complementary error function is defined as follows:
\mathrm{erfc}(x) = 1 - \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt
Parameters
Example:
>>> torch.erfc(torch.tensor([0, -1., 10.]))
tensor([ 1.0000, 1.8427, 0.0000])
torch.erfinv(input, out=None) → Tensor
Computes the inverse error function of each element of input. The inverse error function is defined in the range (-1, 1) as:
\mathrm{erfinv}(\mathrm{erf}(x)) = x
Parameters
Example:
>>> torch.erfinv(torch.tensor([0, 0.5, -1.]))
tensor([ 0.0000, 0.4769, -inf])
torch.exp(input, out=None) → Tensor
Returns a new tensor with the exponential of the elements of the input tensor input.
y_{i} = e^{x_{i}}
Parameters
Example:
>>> torch.exp(torch.tensor([0, math.log(2.)]))
tensor([ 1., 2.])
torch.expm1(input, out=None) → Tensor
Returns a new tensor with the exponential of the elements minus 1 of input.
y_{i} = e^{x_{i}} - 1
Parameters
Example:
>>> torch.expm1(torch.tensor([0, math.log(2.)]))
tensor([ 0., 1.])
torch.floor(input, out=None) → Tensor
Returns a new tensor with the floor of the elements of input, the largest integer less than or equal to each element.
\text{out}_{i} = \left\lfloor \text{input}_{i} \right\rfloor
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.8166, 1.5308, -0.2530, -0.2091])
>>> torch.floor(a)
tensor([-1., 1., -1., -1.])
torch.fmod(input, other, out=None) → Tensor
Computes the element-wise remainder of division.
The dividend and divisor may contain both integer and floating point numbers. The remainder has the same sign as the dividend input.
When other is a tensor, the shapes of input and other must be broadcastable.
Parameters
Example:
>>> torch.fmod(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
tensor([-1., -0., -1., 1., 0., 1.])
>>> torch.fmod(torch.tensor([1., 2, 3, 4, 5]), 1.5)
tensor([ 1.0000, 0.5000, 0.0000, 1.0000, 0.5000])
torch.frac(input, out=None) → Tensor
Computes the fractional portion of each element in input.
\text{out}_{i} = \text{input}_{i} - \left\lfloor |\text{input}_{i}| \right\rfloor * \operatorname{sgn}(\text{input}_{i})
Example:
>>> torch.frac(torch.tensor([1, 2.5, -3.2]))
tensor([ 0.0000, 0.5000, -0.2000])
torch.lerp(input, end, weight, out=None)
Does a linear interpolation of two tensors start (given by input) and end based on a scalar or tensor weight and returns the resulting out tensor.
\text{out}_i = \text{start}_i + \text{weight}_i \times (\text{end}_i - \text{start}_i)
The shapes of start and end must be broadcastable. If weight is a tensor, then the shapes of weight, start, and end must be broadcastable.
Parameters
Example:
>>> start = torch.arange(1., 5.)
>>> end = torch.empty(4).fill_(10)
>>> start
tensor([ 1., 2., 3., 4.])
>>> end
tensor([ 10., 10., 10., 10.])
>>> torch.lerp(start, end, 0.5)
tensor([ 5.5000, 6.0000, 6.5000, 7.0000])
>>> torch.lerp(start, end, torch.full_like(start, 0.5))
tensor([ 5.5000, 6.0000, 6.5000, 7.0000])
torch.log(input, out=None) → Tensor
Returns a new tensor with the natural logarithm of the elements of input.
y_{i} = \log_{e} (x_{i})
Parameters
Example:
>>> a = torch.randn(5)
>>> a
tensor([-0.7168, -0.5471, -0.8933, -1.4428, -0.1190])
>>> torch.log(a)
tensor([ nan, nan, nan, nan, nan])
torch.log10(input, out=None) → Tensor
Returns a new tensor with the logarithm to the base 10 of the elements of input.
y_{i} = \log_{10} (x_{i})
Parameters
Example:
>>> a = torch.rand(5)
>>> a
tensor([ 0.5224, 0.9354, 0.7257, 0.1301, 0.2251])
>>> torch.log10(a)
tensor([-0.2820, -0.0290, -0.1392, -0.8857, -0.6476])
torch.log1p(input, out=None) → Tensor
Returns a new tensor with the natural logarithm of (1 + input).
y_i = \log_{e} (x_i + 1)
Note
This function is more accurate than torch.log() for small values of input.
Parameters
Example:
>>> a = torch.randn(5)
>>> a
tensor([-1.0090, -0.9923, 1.0249, -0.5372, 0.2492])
>>> torch.log1p(a)
tensor([ nan, -4.8653, 0.7055, -0.7705, 0.2225])
torch.log2(input, out=None) → Tensor
Returns a new tensor with the logarithm to the base 2 of the elements of input.
y_{i} = \log_{2} (x_{i})
Parameters
Example:
>>> a = torch.rand(5)
>>> a
tensor([ 0.8419, 0.8003, 0.9971, 0.5287, 0.0490])
>>> torch.log2(a)
tensor([-0.2483, -0.3213, -0.0042, -0.9196, -4.3504])
torch.logical_not(input, out=None) → Tensor
Computes the element-wise logical NOT of the given input tensor. If not specified, the output tensor will have the bool dtype. If the input tensor is not a bool tensor, zeros are treated as False and non-zeros are treated as True.
Parameters
Example:
>>> torch.logical_not(torch.tensor([True, False]))
tensor([ False, True])
>>> torch.logical_not(torch.tensor([0, 1, -10], dtype=torch.int8))
tensor([ True, False, False])
>>> torch.logical_not(torch.tensor([0., 1.5, -10.], dtype=torch.double))
tensor([ True, False, False])
>>> torch.logical_not(torch.tensor([0., 1., -10.], dtype=torch.double), out=torch.empty(3, dtype=torch.int16))
tensor([1, 0, 0], dtype=torch.int16)
torch.logical_xor(input, other, out=None) → Tensor
Computes the element-wise logical XOR of the given input tensors. Both input tensors must have the bool dtype.
Parameters
Example:
>>> torch.logical_xor(torch.tensor([True, False, True]), torch.tensor([True, False, False]))
tensor([ False, False, True])
torch.mul()
torch.mul(input, other, out=None)
Multiplies each element of the input input with the scalar other and returns a new resulting tensor.
\text{out}_i = \text{other} \times \text{input}_i
If input is of type FloatTensor or DoubleTensor, other should be a real number, otherwise it should be an integer.
Parameters
input (Tensor) – the input tensor.
other (Number) – the number to be multiplied to each element of input.
Example:
>>> a = torch.randn(3)
>>> a
tensor([ 0.2015, -0.4255, 2.6087])
>>> torch.mul(a, 100)
tensor([ 20.1494, -42.5491, 260.8663])
torch.mul(input, other, out=None)
Each element of the tensor input is multiplied by the corresponding element of the Tensor other. The resulting tensor is returned.
The shapes of input and other must be broadcastable.
\text{out}_i = \text{input}_i \times \text{other}_i
Parameters
Example:
>>> a = torch.randn(4, 1)
>>> a
tensor([[ 1.1207],
[-0.3137],
[ 0.0700],
[ 0.8378]])
>>> b = torch.randn(1, 4)
>>> b
tensor([[ 0.5146, 0.1216, -0.5244, 2.2382]])
>>> torch.mul(a, b)
tensor([[ 0.5767, 0.1363, -0.5877, 2.5083],
[-0.1614, -0.0382, 0.1645, -0.7021],
[ 0.0360, 0.0085, -0.0367, 0.1567],
[ 0.4312, 0.1019, -0.4394, 1.8753]])
torch.mvlgamma(input, p) → Tensor
Computes the multivariate log-gamma function with dimension p element-wise, given by
\log(\Gamma_{p}(a)) = C + \sum_{i=1}^{p} \log\left(\Gamma\left(a - \frac{i - 1}{2}\right)\right)
where C = \log(\pi) \times \frac{p (p - 1)}{4} and \Gamma(\cdot) is the Gamma function.
If any of the elements are less than or equal to \frac{p - 1}{2}, then an error is thrown.
Parameters
Example:
>>> a = torch.empty(2, 3).uniform_(1, 2)
>>> a
tensor([[1.6835, 1.8474, 1.1929],
[1.0475, 1.7162, 1.4180]])
>>> torch.mvlgamma(a, 2)
tensor([[0.3928, 0.4007, 0.7586],
[1.0311, 0.3901, 0.5049]])
torch.neg(input, out=None) → Tensor
Returns a new tensor with the negative of the elements of input.
\text{out} = -1 \times \text{input}
Parameters
Example:
>>> a = torch.randn(5)
>>> a
tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])
>>> torch.neg(a)
tensor([-0.0090, 0.2262, 0.0682, 0.2866, -0.3940])
torch.pow()
torch.pow(input, exponent, out=None) → Tensor
Takes the power of each element in input with exponent and returns a tensor with the result.
exponent can be either a single float number or a Tensor with the same number of elements as input.
When exponent is a scalar value, the operation applied is:
\text{out}_i = x_i ^ \text{exponent}
When exponent is a tensor, the operation applied is:
\text{out}_i = x_i ^ {\text{exponent}_i}
When exponent is a tensor, the shapes of input and exponent must be broadcastable.
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.4331, 1.2475, 0.6834, -0.2791])
>>> torch.pow(a, 2)
tensor([ 0.1875, 1.5561, 0.4670, 0.0779])
>>> exp = torch.arange(1., 5.)
>>> a = torch.arange(1., 5.)
>>> a
tensor([ 1., 2., 3., 4.])
>>> exp
tensor([ 1., 2., 3., 4.])
>>> torch.pow(a, exp)
tensor([ 1., 4., 27., 256.])
torch.pow(self, exponent, out=None) → Tensor
self is a scalar float value, and exponent is a tensor. The returned tensor out is of the same shape as exponent.
The operation applied is:
\text{out}_i = \text{self} ^ {\text{exponent}_i}
Parameters
Example:
>>> exp = torch.arange(1., 5.)
>>> base = 2
>>> torch.pow(base, exp)
tensor([ 2., 4., 8., 16.])
torch.reciprocal(input, out=None) → Tensor
Returns a new tensor with the reciprocal of the elements of input.
\text{out}_{i} = \frac{1}{\text{input}_{i}}
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.4595, -2.1219, -1.4314, 0.7298])
>>> torch.reciprocal(a)
tensor([-2.1763, -0.4713, -0.6986, 1.3702])
torch.remainder(input, other, out=None) → Tensor
Computes the element-wise remainder of division.
The divisor and dividend may contain both integer and floating point numbers. The remainder has the same sign as the divisor.
When other is a tensor, the shapes of input and other must be broadcastable.
Parameters
Example:
>>> torch.remainder(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
tensor([ 1., 0., 1., 1., 0., 1.])
>>> torch.remainder(torch.tensor([1., 2, 3, 4, 5]), 1.5)
tensor([ 1.0000, 0.5000, 0.0000, 1.0000, 0.5000])
See also
torch.fmod(), which computes the element-wise remainder of division equivalently to the C library function fmod().
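To illustrate the sign convention difference between the two functions (a small sketch; fmod follows the dividend's sign, remainder follows the divisor's sign):
>>> torch.fmod(torch.tensor([-7., 7.]), 3)       # sign follows the dividend
tensor([-1.,  1.])
>>> torch.remainder(torch.tensor([-7., 7.]), 3)  # sign follows the divisor
tensor([2., 1.])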
torch.round(input, out=None) → Tensor
Returns a new tensor with each of the elements of input rounded to the closest integer.
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.9920, 0.6077, 0.9734, -1.0362])
>>> torch.round(a)
tensor([ 1., 1., 1., -1.])
torch.rsqrt(input, out=None) → Tensor
Returns a new tensor with the reciprocal of the square-root of each of the elements of input.
\text{out}_{i} = \frac{1}{\sqrt{\text{input}_{i}}}
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.0370, 0.2970, 1.5420, -0.9105])
>>> torch.rsqrt(a)
tensor([ nan, 1.8351, 0.8053, nan])
torch.sigmoid(input, out=None) → Tensor
Returns a new tensor with the sigmoid of the elements of input.
\text{out}_{i} = \frac{1}{1 + e^{-\text{input}_{i}}}
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.9213, 1.0887, -0.8858, -1.7683])
>>> torch.sigmoid(a)
tensor([ 0.7153, 0.7481, 0.2920, 0.1458])
torch.sign(input, out=None) → Tensor
Returns a new tensor with the signs of the elements of input.
\text{out}_{i} = \operatorname{sgn}(\text{input}_{i})
Parameters
Example:
>>> a = torch.tensor([0.7, -1.2, 0., 2.3])
>>> a
tensor([ 0.7000, -1.2000, 0.0000, 2.3000])
>>> torch.sign(a)
tensor([ 1., -1., 0., 1.])
torch.sin(input, out=None) → Tensor
Returns a new tensor with the sine of the elements of input.
\text{out}_{i} = \sin(\text{input}_{i})
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.5461, 0.1347, -2.7266, -0.2746])
>>> torch.sin(a)
tensor([-0.5194, 0.1343, -0.4032, -0.2711])
torch.sinh(input, out=None) → Tensor
Returns a new tensor with the hyperbolic sine of the elements of input.
\text{out}_{i} = \sinh(\text{input}_{i})
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.5380, -0.8632, -0.1265, 0.9399])
>>> torch.sinh(a)
tensor([ 0.5644, -0.9744, -0.1268, 1.0845])
torch.sqrt(input, out=None) → Tensor
Returns a new tensor with the square-root of the elements of input.
\text{out}_{i} = \sqrt{\text{input}_{i}}
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-2.0755, 1.0226, 0.0831, 0.4806])
>>> torch.sqrt(a)
tensor([ nan, 1.0112, 0.2883, 0.6933])
torch.tan(input, out=None) → Tensor
Returns a new tensor with the tangent of the elements of input.
\text{out}_{i} = \tan(\text{input}_{i})
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([-1.2027, -1.7687, 0.4412, -1.3856])
>>> torch.tan(a)
tensor([-2.5930, 4.9859, 0.4722, -5.3366])
torch.tanh(input, out=None) → Tensor
Returns a new tensor with the hyperbolic tangent of the elements of input.
\text{out}_{i} = \tanh(\text{input}_{i})
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.8986, -0.7279, 1.1745, 0.2611])
>>> torch.tanh(a)
tensor([ 0.7156, -0.6218, 0.8257, 0.2553])
torch.trunc(input, out=None) → Tensor
Returns a new tensor with the truncated integer values of the elements of input.
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 3.4742, 0.5466, -0.8008, -0.9079])
>>> torch.trunc(a)
tensor([ 3., 0., -0., -0.])
torch.argmax()
torch.argmax(input) → LongTensor
Returns the indices of the maximum value of all elements in the input tensor.
This is the second value returned by torch.max(). See its documentation for the exact semantics of this method.
Parameters
input (Tensor) – the input tensor.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 1.3398, 0.2663, -0.2686, 0.2450],
[-0.7401, -0.8805, -0.3402, -1.1936],
[ 0.4907, -1.3948, -1.0691, -0.3132],
[-1.6092, 0.5419, -0.2993, 0.3195]])
>>> torch.argmax(a)
tensor(0)
torch.argmax(input, dim, keepdim=False) → LongTensor
Returns the indices of the maximum values of a tensor across a dimension.
This is the second value returned by torch.max(). See its documentation for the exact semantics of this method.
Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce. If None, the argmax of the flattened input is returned.
keepdim (bool) – whether the output tensor has dim retained or not. Ignored if dim=None.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 1.3398, 0.2663, -0.2686, 0.2450],
[-0.7401, -0.8805, -0.3402, -1.1936],
[ 0.4907, -1.3948, -1.0691, -0.3132],
[-1.6092, 0.5419, -0.2993, 0.3195]])
>>> torch.argmax(a, dim=1)
tensor([ 0, 2, 0, 1])
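Assuming the same a as above, keepdim=True keeps the reduced dimension as size 1 (a small sketch):
>>> torch.argmax(a, dim=1, keepdim=True)
tensor([[0],
        [2],
        [0],
        [1]])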
torch.argmin()
torch.argmin(input) → LongTensor
Returns the indices of the minimum value of all elements in the input tensor.
This is the second value returned by torch.min(). See its documentation for the exact semantics of this method.
Parameters
input (Tensor) – the input tensor.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.1139, 0.2254, -0.1381, 0.3687],
[ 1.0100, -1.1975, -0.0102, -0.4732],
[-0.9240, 0.1207, -0.7506, -1.0213],
[ 1.7809, -1.2960, 0.9384, 0.1438]])
>>> torch.argmin(a)
tensor(13)
torch.argmin(input, dim, keepdim=False, out=None) → LongTensor
Returns the indices of the minimum values of a tensor across a dimension.
This is the second value returned by torch.min(). See its documentation for the exact semantics of this method.
Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce. If None, the argmin of the flattened input is returned.
keepdim (bool) – whether the output tensor has dim retained or not. Ignored if dim=None.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.1139, 0.2254, -0.1381, 0.3687],
[ 1.0100, -1.1975, -0.0102, -0.4732],
[-0.9240, 0.1207, -0.7506, -1.0213],
[ 1.7809, -1.2960, 0.9384, 0.1438]])
>>> torch.argmin(a, dim=1)
tensor([ 2, 1, 3, 1])
torch.cumprod(input, dim, out=None, dtype=None) → Tensor
Returns the cumulative product of elements of input in the dimension dim.
For example, if input is a vector of size N, the result will also be a vector of size N, with elements
y_i = x_1 \times x_2 \times x_3 \times \dots \times x_i
Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to do the operation over.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.
Example:
>>> a = torch.randn(10)
>>> a
tensor([ 0.6001, 0.2069, -0.1919, 0.9792, 0.6727, 1.0062, 0.4126,
-0.2129, -0.4206, 0.1968])
>>> torch.cumprod(a, dim=0)
tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0158, -0.0065,
0.0014, -0.0006, -0.0001])
>>> a[5] = 0.0
>>> torch.cumprod(a, dim=0)
tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0000, -0.0000,
0.0000, -0.0000, -0.0000])
torch.cumsum(input, dim, out=None, dtype=None) → Tensor
Returns the cumulative sum of elements of input in the dimension dim.
For example, if input is a vector of size N, the result will also be a vector of size N, with elements
y_i = x_1 + x_2 + x_3 + \dots + x_i
Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to do the operation over.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.
Example:
>>> a = torch.randn(10)
>>> a
tensor([-0.8286, -0.4890, 0.5155, 0.8443, 0.1865, -0.1752, -2.0595,
0.1850, -1.1571, -0.4243])
>>> torch.cumsum(a, dim=0)
tensor([-0.8286, -1.3175, -0.8020, 0.0423, 0.2289, 0.0537, -2.0058,
-1.8209, -2.9780, -3.4022])
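A small sketch of the dtype argument mentioned above, used to avoid overflow in a low-precision integer tensor (the first result shows int8 wraparound past 127):
>>> b = torch.full((5,), 100, dtype=torch.int8)
>>> torch.cumsum(b, dim=0)                      # int8 wraps around past 127
tensor([ 100,  -56,   44, -112,  -12], dtype=torch.int8)
>>> torch.cumsum(b, dim=0, dtype=torch.int64)   # cast to int64 first to avoid overflow
tensor([100, 200, 300, 400, 500])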
torch.dist(input, other, p=2) → Tensor
Returns the p-norm of (input - other).
The shapes of input and other must be broadcastable.
Parameters
Example:
>>> x = torch.randn(4)
>>> x
tensor([-1.5393, -0.8675, 0.5916, 1.6321])
>>> y = torch.randn(4)
>>> y
tensor([ 0.0967, -1.0511, 0.6295, 0.8360])
>>> torch.dist(x, y, 3.5)
tensor(1.6727)
>>> torch.dist(x, y, 3)
tensor(1.6973)
>>> torch.dist(x, y, 0)
tensor(inf)
>>> torch.dist(x, y, 1)
tensor(2.6537)
torch.logsumexp(input, dim, keepdim=False, out=None)
Returns the log of summed exponentials of each row of the input tensor in the given dimension dim. The computation is numerically stabilized.
For summation index j given by dim and other indices i, the result is
\text{logsumexp}(x)_{i} = \log \sum_j \exp(x_{ij})
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
out (Tensor, optional) – the output tensor.
Example:
>>> a = torch.randn(3, 3)
>>> torch.logsumexp(a, 1)
tensor([ 0.8442, 1.4322, 0.8711])
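A minimal sketch of the numerical stabilization mentioned above: the naive formula overflows for large inputs, while torch.logsumexp returns the exact value 1000 + log(2):
>>> x = torch.tensor([1000., 1000.])
>>> torch.log(torch.sum(torch.exp(x)))   # exp(1000) overflows to inf in float32
tensor(inf)
>>> torch.logsumexp(x, dim=0)            # numerically stabilized
tensor(1000.6931)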
torch.mean()
torch.mean(input) → Tensor
Returns the mean value of all elements in the input tensor.
Parameters
input (Tensor) – the input tensor.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.2294, -0.5481, 1.3288]])
>>> torch.mean(a)
tensor(0.3367)
torch.mean(input, dim, keepdim=False, out=None) → Tensor
Returns the mean value of each row of the input tensor in the given dimension dim. If dim is a list of dimensions, reduce over all of them.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
out (Tensor, optional) – the output tensor.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3841, 0.6320, 0.4254, -0.7384],
[-0.9644, 1.0131, -0.6549, -1.4279],
[-0.2951, -1.3350, -0.7694, 0.5600],
[ 1.0842, -0.9580, 0.3623, 0.2343]])
>>> torch.mean(a, 1)
tensor([-0.0163, -0.5085, -0.4599, 0.1807])
>>> torch.mean(a, 1, True)
tensor([[-0.0163],
[-0.5085],
[-0.4599],
[ 0.1807]])
torch.median()
torch.median(input) → Tensor
Returns the median value of all elements in the input tensor.
Parameters
input (Tensor) – the input tensor.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 1.5219, -1.5212, 0.2202]])
>>> torch.median(a)
tensor(0.2202)
torch.median(input, dim=-1, keepdim=False, values=None, indices=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the median value of each row of the input tensor in the given dimension dim. And indices is the index location of each median value found.
By default, dim is the last dimension of the input tensor.
If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input.
Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensors have dim retained or not.
values (Tensor, optional) – the output tensor.
indices (Tensor, optional) – the output index tensor.
Example:
>>> a = torch.randn(4, 5)
>>> a
tensor([[ 0.2505, -0.3982, -0.9948, 0.3518, -1.3131],
[ 0.3180, -0.6993, 1.0436, 0.0438, 0.2270],
[-0.2751, 0.7303, 0.2192, 0.3321, 0.2488],
[ 1.0778, -1.9510, 0.7048, 0.4742, -0.7125]])
>>> torch.median(a, 1)
torch.return_types.median(values=tensor([-0.3982, 0.2270, 0.2488, 0.4742]), indices=tensor([1, 4, 4, 3]))
torch.mode(input, dim=-1, keepdim=False, values=None, indices=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the mode value of each row of the input tensor in the given dimension dim, i.e. a value which appears most often in that row, and indices is the index location of each mode value found.
By default, dim is the last dimension of the input tensor.
If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input.
Note
This function is not defined for torch.cuda.Tensor yet.
Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensors have dim retained or not.
values (Tensor, optional) – the output tensor.
indices (Tensor, optional) – the output index tensor.
Example:
>>> a = torch.randint(10, (5,))
>>> a
tensor([6, 5, 1, 0, 2])
>>> b = a + (torch.randn(50, 1) * 5).long()
>>> torch.mode(b, 0)
torch.return_types.mode(values=tensor([6, 5, 1, 0, 2]), indices=tensor([2, 2, 2, 2, 2]))
torch.norm(input, p='fro', dim=None, keepdim=False, out=None, dtype=None)
Returns the matrix norm or vector norm of a given tensor.
Parameters
input (Tensor) – the input tensor.
p (int, float, inf, -inf, 'fro', 'nuc', optional) – the order of norm. Default: 'fro'. The following norms can be calculated:

ord    | matrix norm                  | vector norm
None   | Frobenius norm               | 2-norm
'fro'  | Frobenius norm               | –
'nuc'  | nuclear norm                 | –
Other  | as vec norm when dim is None | sum(abs(x)**ord)**(1./ord)

dim (int, 2-tuple of ints, 2-list of ints, optional) – if it is an int, vector norm will be calculated; if it is a 2-tuple of ints, matrix norm will be calculated. If the value is None, matrix norm will be calculated when the input tensor has two dimensions, and vector norm when it has one dimension. If the input tensor has more than two dimensions, the vector norm will be applied to the last dimension.
keepdim (bool, optional) – whether the output tensors have dim retained or not. Ignored if dim = None and out = None. Default: False.
out (Tensor, optional) – the output tensor. Ignored if dim = None and out = None.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype while performing the operation. Default: None.
Example:
>>> import torch
>>> a = torch.arange(9, dtype= torch.float) - 4
>>> b = a.reshape((3, 3))
>>> torch.norm(a)
tensor(7.7460)
>>> torch.norm(b)
tensor(7.7460)
>>> torch.norm(a, float('inf'))
tensor(4.)
>>> torch.norm(b, float('inf'))
tensor(4.)
>>> c = torch.tensor([[ 1, 2, 3],[-1, 1, 4]] , dtype= torch.float)
>>> torch.norm(c, dim=0)
tensor([1.4142, 2.2361, 5.0000])
>>> torch.norm(c, dim=1)
tensor([3.7417, 4.2426])
>>> torch.norm(c, p=1, dim=1)
tensor([6., 6.])
>>> d = torch.arange(8, dtype= torch.float).reshape(2,2,2)
>>> torch.norm(d, dim=(1,2))
tensor([ 3.7417, 11.2250])
>>> torch.norm(d[0, :, :]), torch.norm(d[1, :, :])
(tensor(3.7417), tensor(11.2250))
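A sketch of the 'nuc' (nuclear norm) option from the table above, reusing the 3x3 matrix b defined in the example; the nuclear norm is the sum of the singular values of a 2-D tensor:
>>> torch.norm(b, p='nuc')
tensor(9.7980)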
torch.prod()
torch.prod(input, dtype=None) → Tensor
Returns the product of all elements in the input tensor.
Parameters
input (Tensor) – the input tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[-0.8020, 0.5428, -1.5854]])
>>> torch.prod(a)
tensor(0.6902)
torch.prod(input, dim, keepdim=False, dtype=None) → Tensor
Returns the product of each row of the input tensor in the given dimension dim.
If keepdim is True, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 fewer dimension than input.
Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.
Example:
>>> a = torch.randn(4, 2)
>>> a
tensor([[ 0.5261, -0.3837],
[ 1.1857, -0.2498],
[-1.1646, 0.0705],
[ 1.1131, -1.0629]])
>>> torch.prod(a, 1)
tensor([-0.2018, -0.2962, -0.0821, -1.1831])
torch.std()
torch.std(input, unbiased=True) → Tensor
Returns the standard-deviation of all elements in the input tensor.
If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.
Parameters
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[-0.8166, -1.3802, -0.3560]])
>>> torch.std(a)
tensor(0.5130)
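A small sketch of the unbiased flag: with Bessel's correction the sum of squared deviations is divided by N-1, while unbiased=False divides by N:
>>> a = torch.tensor([1., 2., 3., 4.])
>>> torch.std(a)                    # sqrt(5/3), Bessel's correction
tensor(1.2910)
>>> torch.std(a, unbiased=False)    # sqrt(5/4), biased estimator
tensor(1.1180)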
torch.std(input, dim, keepdim=False, unbiased=True, out=None) → Tensor
Returns the standard-deviation of each row of the input tensor in the dimension dim. If dim is a list of dimensions, reduce over all of them.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.
Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
unbiased (bool) – whether to use the unbiased estimation or not.
out (Tensor, optional) – the output tensor.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.2035, 1.2959, 1.8101, -0.4644],
[ 1.5027, -0.3270, 0.5905, 0.6538],
[-1.5745, 1.3330, -0.5596, -0.6548],
[ 0.1264, -0.5080, 1.6420, 0.1992]])
>>> torch.std(a, dim=1)
tensor([ 1.0311, 0.7477, 1.2204, 0.9087])
torch.std_mean()
torch.std_mean(input, unbiased=True) -> (Tensor, Tensor)
Returns the standard-deviation and mean of all elements in the input tensor.
If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.
Parameters
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[0.3364, 0.3591, 0.9462]])
>>> torch.std_mean(a)
(tensor(0.3457), tensor(0.5472))
torch.std_mean(input, dim, keepdim=False, unbiased=True) -> (Tensor, Tensor)
Returns the standard-deviation and mean of each row of the input tensor in the dimension dim. If dim is a list of dimensions, reduce over all of them.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.
Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
unbiased (bool) – whether to use the unbiased estimation or not.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.5648, -0.5984, -1.2676, -1.4471],
[ 0.9267, 1.0612, 1.1050, -0.6014],
[ 0.0154, 1.9301, 0.0125, -1.0904],
[-1.9711, -0.7748, -1.3840, 0.5067]])
>>> torch.std_mean(a, 1)
(tensor([0.9110, 0.8197, 1.2552, 1.0608]), tensor([-0.6871, 0.6229, 0.2169, -0.9058]))
torch.sum()
torch.sum(input, dtype=None) → Tensor
Returns the sum of all elements in the input tensor.
Parameters
input (Tensor) – the input tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.1133, -0.9567, 0.2958]])
>>> torch.sum(a)
tensor(-0.5475)
torch.sum(input, dim, keepdim=False, dtype=None) → Tensor
Returns the sum of each row of the input tensor in the given dimension dim. If dim is a list of dimensions, reduce over all of them.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Default: None.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.0569, -0.2475, 0.0737, -0.3429],
[-0.2993, 0.9138, 0.9337, -1.6864],
[ 0.1132, 0.7892, -0.1003, 0.5688],
[ 0.3637, -0.9906, -0.4752, -1.5197]])
>>> torch.sum(a, 1)
tensor([-0.4598, -0.1381, 1.3708, -2.6217])
>>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)
>>> torch.sum(b, (2, 1))
tensor([ 435., 1335., 2235., 3135.])
torch.unique(input, sorted=True, return_inverse=False, return_counts=False, dim=None)
Returns the unique elements of the input tensor.
Parameters
input (Tensor) – the input tensor.
sorted (bool) – whether to sort the unique elements in ascending order before returning as output.
return_inverse (bool) – whether to also return the indices for where elements in the original input ended up in the returned unique list.
return_counts (bool) – whether to also return the counts for each unique element.
dim (int) – the dimension to apply unique. If None, the unique of the flattened input is returned. Default: None.
Returns
A tensor or a tuple of tensors containing
output (Tensor): the output list of unique scalar elements.
inverse_indices (Tensor): (optional) if return_inverse is True, there will be an additional returned tensor (same shape as input) representing the indices for where elements in the original input map to in the output; otherwise, this function will only return a single tensor.
counts (Tensor): (optional) if return_counts is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor.
Return type
(Tensor, Tensor (optional), Tensor (optional))
Example:
>>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long))
>>> output
tensor([ 2, 3, 1])
>>> output, inverse_indices = torch.unique(
torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True)
>>> output
tensor([ 1, 2, 3])
>>> inverse_indices
tensor([ 0, 2, 1, 2])
>>> output, inverse_indices = torch.unique(
torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True)
>>> output
tensor([ 1, 2, 3])
>>> inverse_indices
tensor([[ 0, 2],
[ 1, 2]])
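The return_counts flag described above adds a tensor of occurrence counts; a minimal sketch:
>>> output, counts = torch.unique(torch.tensor([1, 3, 2, 3]), return_counts=True)
>>> output
tensor([1, 2, 3])
>>> counts
tensor([1, 1, 2])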
torch.unique_consecutive(input, return_inverse=False, return_counts=False, dim=None)
Eliminates all but the first element from every consecutive group of equivalent elements.
Note
This function is different from torch.unique() in the sense that this function only eliminates consecutive duplicate values. This semantics is similar to std::unique in C++.
Parameters
input (Tensor) – the input tensor.
return_inverse (bool) – whether to also return the indices for where elements in the original input ended up in the returned unique list.
return_counts (bool) – whether to also return the counts for each unique element.
dim (int) – the dimension to apply unique. If None, the unique of the flattened input is returned. Default: None.
Returns
A tensor or a tuple of tensors containing
output (Tensor): the output list of unique scalar elements.
inverse_indices (Tensor): (optional) if return_inverse is True, there will be an additional returned tensor (same shape as input) representing the indices for where elements in the original input map to in the output; otherwise, this function will only return a single tensor.
counts (Tensor): (optional) if return_counts is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor.
Return type
(Tensor, Tensor (optional), Tensor (optional))
Example:
>>> x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])
>>> output = torch.unique_consecutive(x)
>>> output
tensor([1, 2, 3, 1, 2])
>>> output, inverse_indices = torch.unique_consecutive(x, return_inverse=True)
>>> output
tensor([1, 2, 3, 1, 2])
>>> inverse_indices
tensor([0, 0, 1, 1, 2, 3, 3, 4])
>>> output, counts = torch.unique_consecutive(x, return_counts=True)
>>> output
tensor([1, 2, 3, 1, 2])
>>> counts
tensor([2, 2, 1, 2, 1])
torch.var()
torch.var(input, unbiased=True) → Tensor
Returns the variance of all elements in the input tensor.
If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.
Parameters
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[-0.3425, -1.2636, -0.4864]])
>>> torch.var(a)
tensor(0.2455)
torch.var(input, dim, keepdim=False, unbiased=True, out=None) → Tensor
Returns the variance of each row of the input tensor in the given dimension dim.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.
Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
unbiased (bool) – whether to use the unbiased estimation or not.
out (Tensor, optional) – the output tensor.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.3567, 1.7385, -1.3042, 0.7423],
[ 1.3436, -0.1015, -0.9834, -0.8438],
[ 0.6056, 0.1089, -0.3112, -1.4085],
[-0.7700, 0.6074, -0.1469, 0.7777]])
>>> torch.var(a, 1)
tensor([ 1.7444, 1.1363, 0.7356, 0.5112])
torch.var_mean()
torch.var_mean(input, unbiased=True) -> (Tensor, Tensor)
Returns the variance and mean of all elements in the input tensor.
If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.
Parameters
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[0.0146, 0.4258, 0.2211]])
>>> torch.var_mean(a)
(tensor(0.0423), tensor(0.2205))
torch.var_mean(input, dim, keepdim=False, unbiased=True) -> (Tensor, Tensor)
Returns the variance and mean of each row of the input tensor in the given dimension dim.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used.
Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
keepdim (bool) – whether the output tensor has dim retained or not.
unbiased (bool) – whether to use the unbiased estimation or not.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[-1.5650, 2.0415, -0.1024, -0.5790],
[ 0.2325, -2.6145, -1.6428, -0.3537],
[-0.2159, -1.1069, 1.2882, -1.3265],
[-0.6706, -1.5893, 0.6827, 1.6727]])
>>> torch.var_mean(a, 1)
(tensor([2.3174, 1.6403, 1.4092, 2.0791]), tensor([-0.0512, -1.0946, -0.3403, 0.0239]))
torch.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) → bool
This function checks if all input and other satisfy the condition:
\lvert \text{input} - \text{other} \rvert \leq \texttt{atol} + \texttt{rtol} \times \lvert \text{other} \rvert
elementwise, for all elements of input and other. The behaviour of this function is analogous to numpy.allclose.
Parameters
input (Tensor) – first tensor to compare.
other (Tensor) – second tensor to compare.
atol (float, optional) – absolute tolerance. Default: 1e-08.
rtol (float, optional) – relative tolerance. Default: 1e-05.
equal_nan (bool, optional) – if True, then two NaNs will be compared as equal. Default: False.
Example:
>>> torch.allclose(torch.tensor([10000., 1e-07]), torch.tensor([10000.1, 1e-08]))
False
>>> torch.allclose(torch.tensor([10000., 1e-08]), torch.tensor([10000.1, 1e-09]))
True
>>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]))
False
>>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]), equal_nan=True)
True
torch.argsort(input, dim=-1, descending=False, out=None) → LongTensor
Returns the indices that sort a tensor along a given dimension in ascending order by value.
This is the second value returned by torch.sort(). See its documentation for the exact semantics of this method.
Parameters
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.0785, 1.5267, -0.8521, 0.4065],
[ 0.1598, 0.0788, -0.0745, -1.2700],
[ 1.2208, 1.0722, -0.7064, 1.2564],
[ 0.0669, -0.2318, -0.8229, -0.9280]])
>>> torch.argsort(a, dim=1)
tensor([[2, 0, 3, 1],
[3, 2, 1, 0],
[2, 1, 0, 3],
[3, 2, 1, 0]])
torch.eq(input, other, out=None) → Tensor
Computes element-wise equality.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
Parameters
input (Tensor) – the tensor to compare.
other (Tensor or float) – the tensor or value to compare.
out (Tensor, optional) – the output tensor.
Returns
A torch.BoolTensor containing a True at each location where comparison is true
Return type
Tensor
Example:
>>> torch.eq(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[ 1, 0],
[ 0, 1]], dtype=torch.uint8)
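The second argument can also be a plain number, as noted above; a sketch (recent PyTorch versions return a bool tensor here, while the release documented above returned torch.uint8):
>>> torch.eq(torch.tensor([[1, 2], [3, 4]]), 3)   # compare every element against a scalar
tensor([[False, False],
        [ True, False]])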
torch.equal(input, other) → bool
True if two tensors have the same size and elements, False otherwise.
Example:
>>> torch.equal(torch.tensor([1, 2]), torch.tensor([1, 2]))
True
torch.ge(input, other, out=None) → Tensor
Computes \text{input} \geq \text{other} element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
Parameters
input (Tensor) – the tensor to compare.
other (Tensor or float) – the tensor or value to compare.
out (Tensor, optional) – the output tensor.
Returns
A torch.BoolTensor containing a True at each location where comparison is true
Return type
Tensor
Example:
>>> torch.ge(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[True, True], [False, True]])
torch.gt(input, other, out=None) → Tensor
Computes \text{input} > \text{other} element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
Parameters
input (Tensor) – the tensor to compare.
other (Tensor or float) – the tensor or value to compare.
out (Tensor, optional) – the output tensor.
Returns
A torch.BoolTensor containing a True at each location where comparison is true
Return type
Tensor
Example:
>>> torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, True], [False, False]])
torch.isfinite(tensor)
Returns a new tensor with boolean elements representing if each element is finite or not.
Parameters
tensor (Tensor) – A tensor to check
Returns
A torch.Tensor with dtype torch.bool containing a True at each location of finite elements and False otherwise
Return type
Tensor
Example:
>>> torch.isfinite(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))
tensor([True, False, True, False, False])
torch.isinf(tensor)
Returns a new tensor with boolean elements representing if each element is +/-INF or not.
Parameters
tensor (Tensor) – A tensor to check
Returns
A torch.Tensor with dtype torch.bool containing a True at each location of +/-INF elements and False otherwise
Return type
Tensor
Example:
>>> torch.isinf(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))
tensor([False, True, False, True, False])
torch.isnan(input)
Returns a new tensor with boolean elements representing if each element is NaN or not.
Parameters
input (Tensor) – A tensor to check
Returns
A torch.BoolTensor containing a True at each location of NaN elements.
Return type
Tensor
Example:
>>> torch.isnan(torch.tensor([1, float('nan'), 2]))
tensor([False, True, False])
torch.kthvalue(input, k, dim=None, keepdim=False, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the k th smallest element of each row of the input tensor in the given dimension dim. And indices is the index location of each element found.
If dim is not given, the last dimension of the input is chosen.
If keepdim is True, both the values and indices tensors are the same size as input, except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in both the values and indices tensors having 1 fewer dimension than the input tensor.
Parameters
input (Tensor) – the input tensor.
k (int) – k for the k-th smallest element.
dim (int, optional) – the dimension to find the kth value along.
keepdim (bool) – whether the output tensors have dim retained or not.
out (tuple, optional) – the output tuple of (Tensor, LongTensor) that can be optionally given to be used as output buffers.
Example:
>>> x = torch.arange(1., 6.)
>>> x
tensor([ 1., 2., 3., 4., 5.])
>>> torch.kthvalue(x, 4)
torch.return_types.kthvalue(values=tensor(4.), indices=tensor(3))
>>> x=torch.arange(1.,7.).resize_(2,3)
>>> x
tensor([[ 1., 2., 3.],
[ 4., 5., 6.]])
>>> torch.kthvalue(x, 2, 0, True)
torch.return_types.kthvalue(values=tensor([[4., 5., 6.]]), indices=tensor([[1, 1, 1]]))
torch.le(input, other, out=None) → Tensor
Computes \text{input} \leq \text{other} element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
Parameters
input (Tensor) – the tensor to compare.
other (Tensor or float) – the tensor or value to compare.
out (Tensor, optional) – the output tensor.
Returns
A torch.BoolTensor containing a True at each location where comparison is true
Return type
Tensor
Example:
>>> torch.le(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[True, False], [True, True]])
torch.lt(input, other, out=None) → Tensor
Computes \text{input} < \text{other} element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
Parameters
input (Tensor) – the tensor to compare.
other (Tensor or float) – the tensor or value to compare.
out (Tensor, optional) – the output tensor.
Returns
A torch.BoolTensor containing a True at each location where comparison is true
Return type
Tensor
Example:
>>> torch.lt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, False], [True, False]])
torch.max()
torch.max(input) → Tensor
Returns the maximum value of all elements in the input tensor.
Parameters
input (Tensor) – the input tensor.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6763, 0.7445, -2.2369]])
>>> torch.max(a)
tensor(0.7445)
torch.max(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax).
If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input.
Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension to reduce.
keepdim (bool, optional) – whether the output tensors have dim retained or not. Default: False.
out (tuple, optional) – the result tuple of two output tensors (max, max_indices).
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[-1.2360, -0.2942, -0.1222, 0.8475],
[ 1.1949, -1.1127, -2.2379, -0.6702],
[ 1.5717, -0.9207, 0.1297, -1.8768],
[-0.6172, 1.0036, -0.6060, -0.2432]])
>>> torch.max(a, 1)
torch.return_types.max(values=tensor([0.8475, 1.1949, 1.5717, 1.0036]), indices=tensor([3, 0, 0, 1]))
torch.max(input, other, out=None) → Tensor
Each element of the tensor input is compared with the corresponding element of the tensor other and an element-wise maximum is taken.
The shapes of input and other don't need to match, but they must be broadcastable.
\text{out}_i = \max(\text{tensor}_i, \text{other}_i)
Note
When the shapes do not match, the shape of the returned output tensor follows the broadcasting rules.
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.2942, -0.7416, 0.2653, -0.1584])
>>> b = torch.randn(4)
>>> b
tensor([ 0.8722, -1.7421, -0.4141, -0.5055])
>>> torch.max(a, b)
tensor([ 0.8722, -0.7416, 0.2653, -0.1584])
torch.min()
torch.min(input) → Tensor
Returns the minimum value of all elements in the input tensor.
Parameters
input (Tensor) – the input tensor.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6750, 1.0857, 1.7197]])
>>> torch.min(a)
tensor(0.6750)
torch.min(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the minimum value of each row of the input tensor in the given dimension dim. And indices is the index location of each minimum value found (argmin).
If keepdim is True, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensors having 1 fewer dimension than input.
Parameters
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[-0.6248, 1.1334, -1.1899, -0.2803],
[-1.4644, -0.2635, -0.3651, 0.6134],
[ 0.2457, 0.0384, 1.0128, 0.7015],
[-0.1153, 2.9849, 2.1458, 0.5788]])
>>> torch.min(a, 1)
torch.return_types.min(values=tensor([-1.1899, -1.4644, 0.0384, -0.1153]), indices=tensor([2, 0, 1, 0]))
torch.min(input, other, out=None) → Tensor
Each element of the tensor input is compared with the corresponding element of the tensor other and an element-wise minimum is taken. The resulting tensor is returned.
The shapes of input and other don't need to match, but they must be broadcastable.
\text{out}_i = \min(\text{tensor}_i, \text{other}_i)
Note
When the shapes do not match, the shape of the returned output tensor follows the broadcasting rules.
Parameters
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.8137, -1.1740, -0.6460, 0.6308])
>>> b = torch.randn(4)
>>> b
tensor([-0.1369, 0.1555, 0.4019, -0.1929])
>>> torch.min(a, b)
tensor([-0.1369, -1.1740, -0.6460, -0.1929])
torch.ne(input, other, out=None) → Tensor
Computes \text{input} \neq \text{other} element-wise.
The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
Parameters
input (Tensor) – the tensor to compare.
other (Tensor or float) – the tensor or value to compare.
out (Tensor, optional) – the output tensor.
Returns
A torch.BoolTensor containing a True at each location where comparison is true.
Return type
Tensor
Example:
>>> torch.ne(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[False, True], [True, False]])
torch.sort(input, dim=-1, descending=False, out=None) -> (Tensor, LongTensor)
Sorts the elements of the input tensor along a given dimension in ascending order by value.
If dim is not given, the last dimension of the input is chosen.
If descending is True then the elements are sorted in descending order by value.
A namedtuple of (values, indices) is returned, where the values are the sorted values and indices are the indices of the elements in the original input tensor.
Parameters
Example:
>>> x = torch.randn(3, 4)
>>> sorted, indices = torch.sort(x)
>>> sorted
tensor([[-0.2162, 0.0608, 0.6719, 2.3332],
[-0.5793, 0.0061, 0.6058, 0.9497],
[-0.5071, 0.3343, 0.9553, 1.0960]])
>>> indices
tensor([[ 1, 0, 2, 3],
[ 3, 1, 0, 2],
[ 0, 3, 1, 2]])
>>> sorted, indices = torch.sort(x, 0)
>>> sorted
tensor([[-0.5071, -0.2162, 0.6719, -0.5793],
[ 0.0608, 0.0061, 0.9497, 0.3343],
[ 0.6058, 0.9553, 1.0960, 2.3332]])
>>> indices
tensor([[ 2, 0, 0, 1],
[ 0, 1, 1, 2],
[ 1, 2, 2, 0]])
torch.topk(input, k, dim=None, largest=True, sorted=True, out=None) -> (Tensor, LongTensor)
Returns the k largest elements of the given input tensor along a given dimension.
If dim is not given, the last dimension of the input is chosen.
If largest is False then the k smallest elements are returned.
A namedtuple of (values, indices) is returned, where the indices are the indices of the elements in the original input tensor.
The boolean option sorted, if True, will make sure that the returned k elements are themselves sorted.
Parameters
Example:
>>> x = torch.arange(1., 6.)
>>> x
tensor([ 1., 2., 3., 4., 5.])
>>> torch.topk(x, 3)
torch.return_types.topk(values=tensor([5., 4., 3.]), indices=tensor([4, 3, 2]))
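Assuming the same x as above, largest=False returns the k smallest elements instead (a small sketch):
>>> torch.topk(x, 3, largest=False)
torch.return_types.topk(values=tensor([1., 2., 3.]), indices=tensor([0, 1, 2]))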