Copyright notice: This is an original article by the blogger, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original article: https://blog.csdn.net/weixin_36670529/article/details/101069085
torch.Tensor
A torch.Tensor is a multi-dimensional matrix containing elements of a single data type. Torch defines nine CPU tensor types and nine GPU tensor types:
Data type | dtype | CPU tensor | GPU tensor |
---|---|---|---|
32-bit floating point | torch.float32 or torch.float | torch.FloatTensor | torch.cuda.FloatTensor |
64-bit floating point | torch.float64 or torch.double | torch.DoubleTensor | torch.cuda.DoubleTensor |
16-bit floating point | torch.float16 or torch.half | torch.HalfTensor | torch.cuda.HalfTensor |
8-bit integer (unsigned) | torch.uint8 | torch.ByteTensor | torch.cuda.ByteTensor |
8-bit integer (signed) | torch.int8 | torch.CharTensor | torch.cuda.CharTensor |
16-bit integer (signed) | torch.int16 or torch.short | torch.ShortTensor | torch.cuda.ShortTensor |
32-bit integer (signed) | torch.int32 or torch.int | torch.IntTensor | torch.cuda.IntTensor |
64-bit integer (signed) | torch.int64 or torch.long | torch.LongTensor | torch.cuda.LongTensor |
Boolean | torch.bool | torch.BoolTensor | torch.cuda.BoolTensor |
torch.Tensor is an alias for the default tensor type (torch.FloatTensor).
A tensor can be constructed from a Python list or sequence using the torch.tensor() constructor:
>>> torch.tensor([[1., -1.], [1., -1.]])
tensor([[ 1.0000, -1.0000],
[ 1.0000, -1.0000]])
>>> torch.tensor(np.array([[1, 2, 3], [4, 5, 6]]))
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
Warning
torch.tensor() always copies data. If you have a Tensor data and just want to change its requires_grad flag, use requires_grad_() or detach() to avoid a copy. If you have a numpy array and want to avoid a copy, use torch.as_tensor().
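For instance, a minimal sketch of the copy-vs-share behaviour (np_a is an illustrative NumPy array):
>>> import numpy as np
>>> np_a = np.array([1, 2, 3])
>>> t_copy = torch.tensor(np_a)      # always copies
>>> t_share = torch.as_tensor(np_a)  # shares memory with np_a where possible
>>> np_a[0] = 100
>>> t_copy[0].item()
1
>>> t_share[0].item()
100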
A tensor of a specific data type can be constructed by passing a torch.dtype and/or a torch.device to a constructor or tensor creation op:
>>> torch.zeros([2, 4], dtype=torch.int32)
tensor([[ 0, 0, 0, 0],
[ 0, 0, 0, 0]], dtype=torch.int32)
>>> cuda0 = torch.device('cuda:0')
>>> torch.ones([2, 4], dtype=torch.float64, device=cuda0)
tensor([[ 1.0000, 1.0000, 1.0000, 1.0000],
[ 1.0000, 1.0000, 1.0000, 1.0000]], dtype=torch.float64, device='cuda:0')
The contents of a tensor can be accessed and modified using Python’s indexing and slicing notation:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> print(x[1][2])
tensor(6)
>>> x[0][1] = 8
>>> print(x)
tensor([[ 1, 8, 3],
[ 4, 5, 6]])
Use torch.Tensor.item() to get a Python number from a tensor containing a single value:
>>> x = torch.tensor([[1]])
>>> x
tensor([[ 1]])
>>> x.item()
1
>>> x = torch.tensor(2.5)
>>> x
tensor(2.5000)
>>> x.item()
2.5
A tensor can be created with requires_grad=True so that torch.autograd records operations on it for automatic differentiation.
>>> x = torch.tensor([[1., -1.], [1., 1.]], requires_grad=True)
>>> out = x.pow(2).sum()
>>> out.backward()
>>> x.grad
tensor([[ 2.0000, -2.0000],
[ 2.0000, 2.0000]])
Each tensor has an associated torch.Storage, which holds its data. The tensor class provides a multi-dimensional, strided view of a storage and defines numeric operations on it.
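For example, a transposed tensor is just another strided view over the same storage (a small illustrative sketch):
>>> base = torch.arange(6).reshape(2, 3)
>>> view = base.t()                  # same storage, different strides
>>> base.stride(), view.stride()
((3, 1), (1, 3))
>>> base.data_ptr() == view.data_ptr()
True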
Note
For more information on the torch.dtype, torch.device, and torch.layout attributes of a torch.Tensor, see Tensor Attributes.
Note
Methods which mutate a tensor are marked with an underscore suffix. For example, torch.FloatTensor.abs_() computes the absolute value in-place and returns the modified tensor, while torch.FloatTensor.abs() computes the result in a new tensor.
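A quick sketch of the convention:
>>> t = torch.tensor([-1.5, 2.0])
>>> t.abs()      # out-of-place: t is unchanged
tensor([1.5000, 2.0000])
>>> t.abs_()     # in-place: modifies and returns t
tensor([1.5000, 2.0000])
>>> t
tensor([1.5000, 2.0000])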
Note
To change an existing tensor's torch.device and/or torch.dtype, consider using the to() method on the tensor.
Warning
The current implementation of torch.Tensor introduces memory overhead, so it might lead to unexpectedly high memory usage in applications with many tiny tensors. If this is your case, consider using one large structure.
class torch.Tensor
There are a few main ways to create a tensor, depending on your use case.
- To create a tensor with pre-existing data, use torch.tensor().
- To create a tensor with a specific size, use torch.* tensor creation ops (see Creation Ops).
- To create a tensor with the same size (and similar types) as another tensor, use torch.*_like tensor creation ops (see Creation Ops).
- To create a tensor with a similar type but different size as another tensor, use tensor.new_* creation ops.
new_tensor
(data, dtype=None, device=None, requires_grad=False) → Tensor
Returns a new Tensor with data as the tensor data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Warning
new_tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a numpy array and want to avoid a copy, use torch.from_numpy().
Warning
When data is a tensor x, new_tensor() reads out 'the data' from whatever it is passed, and constructs a leaf variable. Therefore tensor.new_tensor(x) is equivalent to x.clone().detach() and tensor.new_tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended.
Parameters
data (array_like) – the returned Tensor copies data.
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.ones((2,), dtype=torch.int8)
>>> data = [[0, 1], [2, 3]]
>>> tensor.new_tensor(data)
tensor([[ 0, 1],
[ 2, 3]], dtype=torch.int8)
new_full
(size, fill_value, dtype=None, device=None, requires_grad=False) → Tensor
Returns a Tensor of size size filled with fill_value. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Parameters
fill_value (scalar) – the number to fill the output tensor with.
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.ones((2,), dtype=torch.float64)
>>> tensor.new_full((3, 4), 3.141592)
tensor([[ 3.1416, 3.1416, 3.1416, 3.1416],
[ 3.1416, 3.1416, 3.1416, 3.1416],
[ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64)
new_empty
(size, dtype=None, device=None, requires_grad=False) → Tensor
Returns a Tensor of size size filled with uninitialized data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Parameters
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.ones(())
>>> tensor.new_empty((2, 3))
tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30],
[ 3.0949e-41, 4.4842e-44, 0.0000e+00]])
new_ones
(size, dtype=None, device=None, requires_grad=False) → Tensor
Returns a Tensor of size size filled with 1. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Parameters
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.tensor((), dtype=torch.int32)
>>> tensor.new_ones((2, 3))
tensor([[ 1, 1, 1],
[ 1, 1, 1]], dtype=torch.int32)
new_zeros
(size, dtype=None, device=None, requires_grad=False) → Tensor
Returns a Tensor of size size filled with 0. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor.
Parameters
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if None, same torch.dtype as this tensor.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, same torch.device as this tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> tensor = torch.tensor((), dtype=torch.float64)
>>> tensor.new_zeros((2, 3))
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]], dtype=torch.float64)
is_cuda
Is True if the Tensor is stored on the GPU, False otherwise.
device
Is the torch.device where this Tensor is.
grad
This attribute is None by default and becomes a Tensor the first time a call to backward() computes gradients for self. The attribute will then contain the gradients computed, and future calls to backward() will accumulate (add) gradients into it.
ndim
Alias for dim()
T
Is this Tensor with its dimensions reversed.
If n is the number of dimensions in x, x.T is equivalent to x.permute(n-1, n-2, ..., 0).
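For example (a small sketch):
>>> x = torch.randn(2, 3, 4)
>>> x.T.shape
torch.Size([4, 3, 2])
>>> torch.equal(x.T, x.permute(2, 1, 0))
True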
abs
() → Tensor
See torch.abs()
abs_
() → Tensor
In-place version of abs()
acos
() → Tensor
See torch.acos()
acos_
() → Tensor
In-place version of acos()
add
(value) → Tensor
add(value=1, other) -> Tensor
See torch.add()
add_
(value) → Tensor
add_(value=1, other) -> Tensor
In-place version of add()
addbmm
(beta=1, alpha=1, batch1, batch2) → Tensor
See torch.addbmm()
addbmm_
(beta=1, alpha=1, batch1, batch2) → Tensor
In-place version of addbmm()
addcdiv
(value=1, tensor1, tensor2) → Tensor
See torch.addcdiv()
addcdiv_
(value=1, tensor1, tensor2) → Tensor
In-place version of addcdiv()
addcmul
(value=1, tensor1, tensor2) → Tensor
See torch.addcmul()
addcmul_
(value=1, tensor1, tensor2) → Tensor
In-place version of addcmul()
addmm
(beta=1, alpha=1, mat1, mat2) → Tensor
See torch.addmm()
addmm_
(beta=1, alpha=1, mat1, mat2) → Tensor
In-place version of addmm()
addmv
(beta=1, alpha=1, mat, vec) → Tensor
See torch.addmv()
addmv_
(beta=1, alpha=1, mat, vec) → Tensor
In-place version of addmv()
addr
(beta=1, alpha=1, vec1, vec2) → Tensor
See torch.addr()
addr_
(beta=1, alpha=1, vec1, vec2) → Tensor
In-place version of addr()
allclose
(other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor
See torch.allclose()
apply_
(callable) → Tensor
Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.
Note
This function only works with CPU tensors and should not be used in code sections that require high performance.
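A small sketch (CPU tensor only), squaring each element with a Python callable:
>>> t = torch.tensor([1., 2., 3.])
>>> t.apply_(lambda v: v * v)   # in place, element by element
tensor([1., 4., 9.])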
argmax
(dim=None, keepdim=False) → LongTensor
See torch.argmax()
argmin
(dim=None, keepdim=False) → LongTensor
See torch.argmin()
argsort
(dim=-1, descending=False) → LongTensor
See torch.argsort()
asin
() → Tensor
See torch.asin()
asin_
() → Tensor
In-place version of asin()
as_strided
(size, stride, storage_offset=0) → Tensor
atan
() → Tensor
See torch.atan()
atan2
(other) → Tensor
See torch.atan2()
atan2_
(other) → Tensor
In-place version of atan2()
atan_
() → Tensor
In-place version of atan()
backward
(gradient=None, retain_graph=None, create_graph=False)[source]
Computes the gradient of current tensor w.r.t. graph leaves.
The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. self.
This function accumulates gradients in the leaves – you might need to zero them before calling it.
Parameters
gradient (Tensor or None) – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is True. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable then this argument is optional.
retain_graph (bool, optional) – If False, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing to compute higher order derivative products. Defaults to False.
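A brief sketch of calling backward() on a non-scalar tensor, supplying gradient explicitly:
>>> x = torch.tensor([1., 2., 3.], requires_grad=True)
>>> y = x * 2                                # non-scalar output
>>> y.backward(gradient=torch.ones_like(y))  # gradient of the same shape as y
>>> x.grad
tensor([2., 2., 2.])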
baddbmm
(beta=1, alpha=1, batch1, batch2) → Tensor
See torch.baddbmm()
baddbmm_
(beta=1, alpha=1, batch1, batch2) → Tensor
In-place version of baddbmm()
bernoulli
(*, generator=None) → Tensor
Returns a result tensor where each result[i] is independently sampled from Bernoulli(self[i]). self must have floating point dtype, and the result will have the same dtype.
bernoulli_
()
bernoulli_
(p=0.5, *, generator=None) → Tensor
Fills each location of self with an independent sample from Bernoulli(p). self can have integral dtype.
bernoulli_
(p_tensor, *, generator=None) → Tensor
p_tensor should be a tensor containing probabilities to be used for drawing the binary random number.
The i-th element of the self tensor will be set to a value sampled from Bernoulli(p_tensor[i]).
self can have integral dtype, but p_tensor must have floating point dtype.
See also bernoulli() and torch.bernoulli()
bfloat16
() → Tensor
self.bfloat16() is equivalent to self.to(torch.bfloat16). See to().
bincount
(weights=None, minlength=0) → Tensor
See torch.bincount()
bitwise_not
() → Tensor
bitwise_not_
() → Tensor
In-place version of bitwise_not()
bmm
(batch2) → Tensor
See torch.bmm()
bool
() → Tensor
self.bool() is equivalent to self.to(torch.bool). See to().
byte
() → Tensor
self.byte() is equivalent to self.to(torch.uint8). See to().
cauchy_
(median=0, sigma=1, *, generator=None) → Tensor
Fills the tensor with numbers drawn from the Cauchy distribution:
f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2}
ceil
() → Tensor
See torch.ceil()
ceil_
() → Tensor
In-place version of ceil()
char
() → Tensor
self.char() is equivalent to self.to(torch.int8). See to().
cholesky
(upper=False) → Tensor
See torch.cholesky()
cholesky_inverse
(upper=False) → Tensor
cholesky_solve
(input2, upper=False) → Tensor
chunk
(chunks, dim=0) → List of Tensors
See torch.chunk()
clamp
(min, max) → Tensor
See torch.clamp()
clamp_
(min, max) → Tensor
In-place version of clamp()
clone
() → Tensor
Returns a copy of the self
tensor. The copy has the same size and data type as self
.
Note
Unlike copy_(), this function is recorded in the computation graph. Gradients propagating to the cloned tensor will propagate to the original tensor.
contiguous
() → Tensor
Returns a contiguous tensor containing the same data as self
tensor. If self
tensor is contiguous, this function returns the self
tensor.
copy_
(src, non_blocking=False) → Tensor
Copies the elements from src into the self tensor and returns self.
The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device.
Parameters
src (Tensor) – the source tensor to copy from.
non_blocking (bool) – if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
cos
() → Tensor
See torch.cos()
cos_
() → Tensor
In-place version of cos()
cosh
() → Tensor
See torch.cosh()
cosh_
() → Tensor
In-place version of cosh()
cpu
() → Tensor
Returns a copy of this object in CPU memory.
If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned.
cross
(other, dim=-1) → Tensor
See torch.cross()
cuda
(device=None, non_blocking=False) → Tensor
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
Parameters
device (torch.device) – The destination GPU device. Defaults to the current CUDA device.
non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default: False.
cumprod
(dim, dtype=None) → Tensor
See torch.cumprod()
cumsum
(dim, dtype=None) → Tensor
See torch.cumsum()
data_ptr
() → int
Returns the address of the first element of self
tensor.
dequantize
() → Tensor
Given a quantized Tensor, dequantize it and return the dequantized float Tensor.
det
() → Tensor
See torch.det()
dense_dim
() → int
If self is a sparse COO tensor (i.e., with torch.sparse_coo layout), this returns the number of dense dimensions. Otherwise, this throws an error.
See also Tensor.sparse_dim().
detach
()
Returns a new Tensor, detached from the current graph.
The result will never require gradient.
Note
Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks. IMPORTANT NOTE: Previously, in-place size / stride / storage changes (such as resize_ / resize_as_ / set_ / transpose_) to the returned tensor also update the original tensor. Now, these in-place changes will not update the original tensor anymore, and will instead trigger an error. For sparse tensors: In-place indices / values changes (such as zero_ / copy_ / add_) to the returned tensor will not update the original tensor anymore, and will instead trigger an error.
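A minimal sketch of the shared-storage behaviour:
>>> a = torch.ones(3, requires_grad=True)
>>> b = a.detach()     # same storage, no grad tracking
>>> b.requires_grad
False
>>> b[0] = 5.          # in-place change is visible through a
>>> a
tensor([5., 1., 1.], requires_grad=True)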
detach_
()
Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place.
diag
(diagonal=0) → Tensor
See torch.diag()
diag_embed
(offset=0, dim1=-2, dim2=-1) → Tensor
diagflat
(offset=0) → Tensor
See torch.diagflat()
diagonal
(offset=0, dim1=0, dim2=1) → Tensor
See torch.diagonal()
fill_diagonal_
(fill_value, wrap=False) → Tensor
Fill the main diagonal of a tensor that has at least 2-dimensions. When dims>2, all dimensions of input must be of equal length. This function modifies the input tensor in-place, and returns the input tensor.
Parameters
fill_value (Scalar) – the fill value
wrap (bool) – whether the diagonal 'wraps' after N columns for tall matrices
Example:
>>> a = torch.zeros(3, 3)
>>> a.fill_diagonal_(5)
tensor([[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.]])
>>> b = torch.zeros(7, 3)
>>> b.fill_diagonal_(5)
tensor([[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
>>> c = torch.zeros(7, 3)
>>> c.fill_diagonal_(5, wrap=True)
tensor([[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.],
[0., 0., 0.],
[5., 0., 0.],
[0., 5., 0.],
[0., 0., 5.]])
digamma
() → Tensor
See torch.digamma()
digamma_
() → Tensor
In-place version of digamma()
dim
() → int
Returns the number of dimensions of self
tensor.
dist
(other, p=2) → Tensor
See torch.dist()
div
(value) → Tensor
See torch.div()
div_
(value) → Tensor
In-place version of div()
dot
(tensor2) → Tensor
See torch.dot()
double
() → Tensor
self.double() is equivalent to self.to(torch.float64). See to().
eig
(eigenvectors=False) -> (Tensor, Tensor)
See torch.eig()
element_size
() → int
Returns the size in bytes of an individual element.
Example:
>>> torch.tensor([]).element_size()
4
>>> torch.tensor([], dtype=torch.uint8).element_size()
1
eq
(other) → Tensor
See torch.eq()
eq_
(other) → Tensor
In-place version of eq()
equal
(other) → bool
See torch.equal()
erf
() → Tensor
See torch.erf()
erf_
() → Tensor
In-place version of erf()
erfc
() → Tensor
See torch.erfc()
erfc_
() → Tensor
In-place version of erfc()
erfinv
() → Tensor
See torch.erfinv()
erfinv_
() → Tensor
In-place version of erfinv()
exp
() → Tensor
See torch.exp()
exp_
() → Tensor
In-place version of exp()
expm1
() → Tensor
See torch.expm1()
expm1_
() → Tensor
In-place version of expm1()
expand
(*sizes) → Tensor
Returns a new view of the self tensor with singleton dimensions expanded to a larger size.
Passing -1 as the size for a dimension means not changing the size of that dimension.
Tensor can be also expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1.
Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory.
Parameters
*sizes (torch.Size or int...) – the desired expanded size
Warning
More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.
Example:
>>> x = torch.tensor([[1], [2], [3]])
>>> x.size()
torch.Size([3, 1])
>>> x.expand(3, 4)
tensor([[ 1, 1, 1, 1],
[ 2, 2, 2, 2],
[ 3, 3, 3, 3]])
>>> x.expand(-1, 4) # -1 means not changing the size of that dimension
tensor([[ 1, 1, 1, 1],
[ 2, 2, 2, 2],
[ 3, 3, 3, 3]])
expand_as
(other) → Tensor
Expand this tensor to the same size as other. self.expand_as(other) is equivalent to self.expand(other.size()).
Please see expand() for more information about expand.
Parameters
other (torch.Tensor) – The result tensor has the same size as other.
exponential_
(lambd=1, *, generator=None) → Tensor
Fills self tensor with elements drawn from the exponential distribution:
f(x) = \lambda e^{-\lambda x}
fft
(signal_ndim, normalized=False) → Tensor
See torch.fft()
fill_
(value) → Tensor
Fills self
tensor with the specified value.
flatten
(input, start_dim=0, end_dim=-1) → Tensor
see torch.flatten()
flip
(dims) → Tensor
See torch.flip()
float
() → Tensor
self.float() is equivalent to self.to(torch.float32). See to().
floor
() → Tensor
See torch.floor()
floor_
() → Tensor
In-place version of floor()
fmod
(divisor) → Tensor
See torch.fmod()
fmod_
(divisor) → Tensor
In-place version of fmod()
frac
() → Tensor
See torch.frac()
frac_
() → Tensor
In-place version of frac()
gather
(dim, index) → Tensor
See torch.gather()
ge
(other) → Tensor
See torch.ge()
ge_
(other) → Tensor
In-place version of ge()
gels
(A)[source]
See torch.lstsq()
geometric_
(p, *, generator=None) → Tensor
Fills self tensor with elements drawn from the geometric distribution:
f(X = k) = p^{k - 1} (1 - p)
geqrf
() -> (Tensor, Tensor)
See torch.geqrf()
ger
(vec2) → Tensor
See torch.ger()
get_device
() -> Device ordinal (Integer)
For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, an error is thrown.
Example:
>>> x = torch.randn(3, 4, 5, device='cuda:0')
>>> x.get_device()
0
>>> x.cpu().get_device() # RuntimeError: get_device is not implemented for type torch.FloatTensor
gt
(other) → Tensor
See torch.gt()
gt_
(other) → Tensor
In-place version of gt()
half
() → Tensor
self.half() is equivalent to self.to(torch.float16). See to().
hardshrink
(lambd=0.5) → Tensor
See torch.nn.functional.hardshrink()
histc
(bins=100, min=0, max=0) → Tensor
See torch.histc()
ifft
(signal_ndim, normalized=False) → Tensor
See torch.ifft()
index_add_
(dim, index, tensor) → Tensor
Accumulate the elements of tensor into the self tensor by adding to the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i-th row of tensor is added to the j-th row of self.
The dim-th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
Note
When using the CUDA backend, this operation may induce nondeterministic behaviour that is not easily switched off. Please see the notes on Reproducibility for background.
Parameters
dim (int) – dimension along which to index
index (LongTensor) – indices of tensor to select from
tensor (Tensor) – the tensor containing values to add
Example:
>>> x = torch.ones(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_add_(0, index, t)
tensor([[ 2., 3., 4.],
[ 1., 1., 1.],
[ 8., 9., 10.],
[ 1., 1., 1.],
[ 5., 6., 7.]])
index_add
(dim, index, tensor) → Tensor
Out-of-place version of torch.Tensor.index_add_()
index_copy_
(dim, index, tensor) → Tensor
Copies the elements of tensor into the self tensor by selecting the indices in the order given in index. For example, if dim == 0 and index[i] == j, then the i-th row of tensor is copied to the j-th row of self.
The dim-th dimension of tensor must have the same size as the length of index (which must be a vector), and all other dimensions must match self, or an error will be raised.
Parameters
dim (int) – dimension along which to index
index (LongTensor) – indices of tensor to select from
tensor (Tensor) – the tensor containing values to copy
Example:
>>> x = torch.zeros(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_copy_(0, index, t)
tensor([[ 1., 2., 3.],
[ 0., 0., 0.],
[ 7., 8., 9.],
[ 0., 0., 0.],
[ 4., 5., 6.]])
index_copy
(dim, index, tensor) → Tensor
Out-of-place version of torch.Tensor.index_copy_()
index_fill_
(dim, index, val) → Tensor
Fills the elements of the self tensor with value val by selecting the indices in the order given in index.
Parameters
dim (int) – dimension along which to index
index (LongTensor) – indices of the self tensor to fill in
val (float) – the value to fill with
Example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 2])
>>> x.index_fill_(1, index, -1)
tensor([[-1., 2., -1.],
[-1., 5., -1.],
[-1., 8., -1.]])
index_fill
(dim, index, value) → Tensor
Out-of-place version of torch.Tensor.index_fill_()
index_put_
(indices, value, accumulate=False) → Tensor
Puts values from the tensor value into the tensor self using the indices specified in indices (which is a tuple of Tensors). The expression tensor.index_put_(indices, value) is equivalent to tensor[indices] = value. Returns self.
If accumulate is True, the elements in value are added to self. If accumulate is False, the behavior is undefined if indices contain duplicate elements.
Parameters
indices (tuple of LongTensor) – tensors used to index into self
value (Tensor) – tensor of the same dtype as self
accumulate (bool) – whether to accumulate into self
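A small sketch, writing two values at explicit (row, column) coordinates:
>>> t = torch.zeros(3, 3)
>>> rows = torch.tensor([0, 2])
>>> cols = torch.tensor([1, 2])
>>> t.index_put_((rows, cols), torch.tensor([1., 2.]))
tensor([[0., 1., 0.],
        [0., 0., 0.],
        [0., 0., 2.]])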
index_put
(indices, value, accumulate=False) → Tensor
Out-place version of index_put_()
index_select
(dim, index) → Tensor
indices
() → Tensor
If self is a sparse COO tensor (i.e., with torch.sparse_coo layout), this returns a view of the contained indices tensor. Otherwise, this throws an error.
See also Tensor.values().
Note
This method can only be called on a coalesced sparse tensor. See Tensor.coalesce() for details.
int
() → Tensor
self.int() is equivalent to self.to(torch.int32). See to().
int_repr
() → Tensor
Given a quantized Tensor, self.int_repr()
returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor.
inverse
() → Tensor
See torch.inverse()
irfft
(signal_ndim, normalized=False, onesided=True, signal_sizes=None) → Tensor
See torch.irfft()
is_contiguous
() → bool
Returns True if the self tensor is contiguous in memory in C order.
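For example, transposing typically yields a non-contiguous view that contiguous() can materialise:
>>> x = torch.randn(3, 4)
>>> x.is_contiguous()
True
>>> x.t().is_contiguous()
False
>>> x.t().contiguous().is_contiguous()
True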
is_floating_point
() → bool
Returns True if the data type of self
is a floating point data type.
is_leaf
()
All Tensors that have requires_grad which is False will be leaf Tensors by convention.
For Tensors that have requires_grad which is True, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None.
Only leaf Tensors will have their grad populated during a call to backward(). To get grad populated for non-leaf Tensors, you can use retain_grad().
Example:
>>> a = torch.rand(10, requires_grad=True)
>>> a.is_leaf
True
>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor
>>> c = torch.rand(10, requires_grad=True) + 2
>>> c.is_leaf
False
# c was created by the addition operation
>>> d = torch.rand(10).cuda()
>>> d.is_leaf
True
# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
>>> e = torch.rand(10).cuda().requires_grad_()
>>> e.is_leaf
True
# e requires gradients and has no operations creating it
>>> f = torch.rand(10, requires_grad=True, device="cuda")
>>> f.is_leaf
True
# f requires grad, has no operation creating it
is_pinned
()[source]
Returns true if this tensor resides in pinned memory
is_set_to
(tensor) → bool
Returns True if this object refers to the same THTensor
object from the Torch C API as the given tensor.
is_shared
()[source]
Checks if tensor is in shared memory.
This is always True
for CUDA tensors.
is_signed
() → bool
Returns True if the data type of self
is a signed data type.
is_sparse
()
item
() → number
Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see tolist()
.
This operation is not differentiable.
Example:
>>> x = torch.tensor([1.0])
>>> x.item()
1.0
kthvalue
(k, dim=None, keepdim=False) -> (Tensor, LongTensor)
See torch.kthvalue()
le
(other) → Tensor
See torch.le()
le_
(other) → Tensor
In-place version of le()
lerp
(end, weight) → Tensor
See torch.lerp()
lerp_
(end, weight) → Tensor
In-place version of lerp()
log
() → Tensor
See torch.log()
log_
() → Tensor
In-place version of log()
logdet
() → Tensor
See torch.logdet()
log10
() → Tensor
See torch.log10()
log10_
() → Tensor
In-place version of log10()
log1p
() → Tensor
See torch.log1p()
log1p_
() → Tensor
In-place version of log1p()
log2
() → Tensor
See torch.log2()
log2_
() → Tensor
In-place version of log2()
log_normal_
(mean=1, std=2, *, generator=None)
Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean μ and standard deviation σ. Note that mean and std are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution:
f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}
logsumexp
(dim, keepdim=False) → Tensor
long
() → Tensor
self.long() is equivalent to self.to(torch.int64). See to().
lstsq
(A) -> (Tensor, Tensor)
See torch.lstsq()
lt
(other) → Tensor
See torch.lt()
lt_
(other) → Tensor
In-place version of lt()
lu
(pivot=True, get_infos=False)[source]
See torch.lu()
lu_solve
(LU_data, LU_pivots) → Tensor
See torch.lu_solve()
map_
(tensor, callable)
Applies callable for each element in the self tensor and the given tensor and stores the results in the self tensor. The self tensor and the given tensor must be broadcastable.
The callable should have the signature:
def callable(a, b) -> number
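A small sketch (CPU tensors), combining corresponding elements with a Python callable:
>>> a = torch.tensor([1., 2., 3.])
>>> b = torch.tensor([10., 20., 30.])
>>> a.map_(b, lambda x, y: x + y)   # results are stored in a
tensor([11., 22., 33.])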
masked_scatter_
(mask, source)
Copies elements from source into the self tensor at positions where the mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor. The source should have at least as many elements as the number of ones in mask.
Parameters
Note
The mask operates on the self tensor, not on the given source tensor.
masked_scatter
(mask, tensor) → Tensor
Out-of-place version of torch.Tensor.masked_scatter_()
masked_fill_
(mask, value)
Fills elements of the self tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor.
Parameters
masked_fill
(mask, value) → Tensor
Out-of-place version of torch.Tensor.masked_fill_()
masked_select
(mask) → Tensor
matmul
(tensor2) → Tensor
See torch.matmul()
matrix_power
(n) → Tensor
max
(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
See torch.max()
mean
(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
See torch.mean()
median
(dim=None, keepdim=False) -> (Tensor, LongTensor)
See torch.median()
min
(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
See torch.min()
mm
(mat2) → Tensor
See torch.mm()
mode
(dim=None, keepdim=False) -> (Tensor, LongTensor)
See torch.mode()
mul
(value) → Tensor
See torch.mul()
mul_
(value)
In-place version of mul()
multinomial
(num_samples, replacement=False, *, generator=None) → Tensor
mv
(vec) → Tensor
See torch.mv()
mvlgamma
(p) → Tensor
See torch.mvlgamma()
mvlgamma_
(p) → Tensor
In-place version of mvlgamma()
narrow
(dimension, start, length) → Tensor
See torch.narrow()
Example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> x.narrow(0, 0, 2)
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
>>> x.narrow(1, 1, 2)
tensor([[ 2, 3],
[ 5, 6],
[ 8, 9]])
narrow_copy
(dimension, start, length) → Tensor
Same as Tensor.narrow()
except returning a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method. Calling `narrow_copy
with `dimemsion > self.sparse_dim()`
will return a copy with the relevant dense dimension narrowed, and `self.shape`
updated accordingly.
ndimension
() → int
Alias for dim()
ne
(other) → Tensor
See torch.ne()
ne_
(other) → Tensor
In-place version of ne()
neg
() → Tensor
See torch.neg()
neg_
() → Tensor
In-place version of neg()
nelement
() → int
Alias for numel()
nonzero
() → LongTensor
See torch.nonzero()
norm
(p='fro', dim=None, keepdim=False, dtype=None)[source]
See torch.norm()
normal_
(mean=0, std=1, *, generator=None) → Tensor
Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.
numel
() → int
See torch.numel()
numpy
() → numpy.ndarray
Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to the self tensor will be reflected in the ndarray and vice versa.
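A brief sketch of the shared-memory behaviour (CPU tensor):
>>> t = torch.zeros(3)
>>> n = t.numpy()   # n shares memory with t
>>> t[0] = 7.
>>> n
array([7., 0., 0.], dtype=float32)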
orgqr
(input2) → Tensor
See torch.orgqr()
ormqr
(input2, input3, left=True, transpose=False) → Tensor
See torch.ormqr()
permute
(*dims) → Tensor
Permute the dimensions of this tensor.
Parameters
*dims (int...) – The desired ordering of dimensions
Example
>>> x = torch.randn(2, 3, 5)
>>> x.size()
torch.Size([2, 3, 5])
>>> x.permute(2, 0, 1).size()
torch.Size([5, 2, 3])
pin_memory
() → Tensor
Copies the tensor to pinned memory, if it’s not already pinned.
pinverse
() → Tensor
See torch.pinverse()
pow
(exponent) → Tensor
See torch.pow()
pow_
(exponent) → Tensor
In-place version of pow()
prod
(dim=None, keepdim=False, dtype=None) → Tensor
See torch.prod()
put_
(indices, tensor, accumulate=False) → Tensor
Copies the elements from tensor into the positions specified by indices. For the purpose of indexing, the self tensor is treated as if it were a 1-D tensor.
If accumulate is True, the elements in tensor are added to self. If accumulate is False, the behavior is undefined if indices contain duplicate elements.
Parameters
Example:
>>> src = torch.tensor([[4, 3, 5],
[6, 7, 8]])
>>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10]))
tensor([[ 4, 9, 5],
[ 10, 7, 8]])
qr
(some=True) -> (Tensor, Tensor)
See torch.qr()
qscheme
() → torch.qscheme
Returns the quantization scheme of a given QTensor.
q_scale
() → float
Given a Tensor quantized by linear(affine) quantization, returns the scale of the underlying quantizer().
q_zero_point
() → int
Given a Tensor quantized by linear(affine) quantization, returns the zero_point of the underlying quantizer().
random_
(from=0, to=None, *, generator=None) → Tensor
Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1]. If not specified, the values are usually only bounded by the self tensor's data type. However, for floating point types, if unspecified, the range will be [0, 2^mantissa] to ensure that every value is representable. For example, torch.tensor(1, dtype=torch.double).random_() will be uniform in [0, 2^53].
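For example (the sampled values themselves are random, so only the range is checked here):
>>> t = torch.empty(4, dtype=torch.int64).random_(0, 10)  # values drawn from {0, ..., 9}
>>> bool((t >= 0).all() and (t <= 9).all())
True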
reciprocal
() → Tensor
reciprocal_
() → Tensor
In-place version of reciprocal()
register_hook
(hook)[source]
Registers a backward hook.
The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature:
hook(grad) -> Tensor or None
The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad
.
This function returns a handle with a method handle.remove()
that removes the hook from the module.
Example:
>>> v = torch.tensor([0., 0., 0.], requires_grad=True)
>>> h = v.register_hook(lambda grad: grad * 2) # double the gradient
>>> v.backward(torch.tensor([1., 2., 3.]))
>>> v.grad
2
4
6
[torch.FloatTensor of size (3,)]
>>> h.remove() # removes the hook
remainder
(divisor) → Tensor
remainder_
(divisor) → Tensor
In-place version of remainder()
renorm
(p, dim, maxnorm) → Tensor
See torch.renorm()
renorm_
(p, dim, maxnorm) → Tensor
In-place version of renorm()
repeat
(*sizes) → Tensor
Repeats this tensor along the specified dimensions.
Unlike expand(), this function copies the tensor's data.
Warning
repeat() behaves differently from numpy.repeat, but is more similar to numpy.tile. For the operator similar to numpy.repeat, see torch.repeat_interleave().
Parameters
sizes (torch.Size or int...) – The number of times to repeat this tensor along each dimension
Example:
>>> x = torch.tensor([1, 2, 3])
>>> x.repeat(4, 2)
tensor([[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3]])
>>> x.repeat(4, 2, 1).size()
torch.Size([4, 2, 3])
repeat_interleave
(repeats, dim=None) → Tensor
See torch.repeat_interleave()
.
requires_grad
Is True if gradients need to be computed for this Tensor, False otherwise.
Note
The fact that gradients need to be computed for a Tensor does not mean that the grad attribute will be populated; see is_leaf for more details.
requires_grad_
(requires_grad=True) → Tensor
Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place. Returns this tensor.
requires_grad_()'s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=False (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor.requires_grad_() makes it so that autograd will begin to record operations on tensor.
Parameters
requires_grad (bool) – If autograd should record operations on this tensor. Default: True
.
Example:
>>> # Let's say we want to preprocess some saved weights and use
>>> # the result as new weights.
>>> saved_weights = [0.1, 0.2, 0.3, 0.25]
>>> loaded_weights = torch.tensor(saved_weights)
>>> weights = preprocess(loaded_weights) # some function
>>> weights
tensor([-0.5503, 0.4926, -2.1158, -0.8303])
>>> # Now, start to record operations done to weights
>>> weights.requires_grad_()
>>> out = weights.pow(2).sum()
>>> out.backward()
>>> weights.grad
tensor([-1.1007, 0.9853, -4.2316, -1.6606])
reshape
(*shape) → Tensor
Returns a tensor with the same data and number of elements as self but with the specified shape. This method returns a view if shape is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view.
See torch.reshape()
Parameters
shape (tuple of python:ints or int...) – the desired shape
reshape_as
(other) → Tensor
Returns this tensor as the same shape as other. self.reshape_as(other) is equivalent to self.reshape(other.sizes()). This method returns a view if other.sizes() is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view.
Please see reshape() for more information about reshape.
Parameters
other (torch.Tensor) – The result tensor has the same shape as other.
resize_
(*sizes) → Tensor
Resizes the self tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized.
Warning
This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use view(), which checks for contiguity, or reshape(), which copies data if needed. To change the size in-place with custom strides, see set_().
Parameters
sizes (torch.Size or int...) – the desired size
Example:
>>> x = torch.tensor([[1, 2], [3, 4], [5, 6]])
>>> x.resize_(2, 2)
tensor([[ 1, 2],
[ 3, 4]])
resize_as_
(tensor) → Tensor
Resizes the self
tensor to be the same size as the specified tensor
. This is equivalent to self.resize_(tensor.size())
.
retain_grad
()[source]
Enables .grad attribute for non-leaf Tensors.
rfft
(signal_ndim, normalized=False, onesided=True) → Tensor
See torch.rfft()
roll
(shifts, dims) → Tensor
See torch.roll()
rot90
(k, dims) → Tensor
See torch.rot90()
round
() → Tensor
See torch.round()
round_
() → Tensor
In-place version of round()
rsqrt
() → Tensor
See torch.rsqrt()
rsqrt_
() → Tensor
In-place version of rsqrt()
scatter
(dim, index, source) → Tensor
Out-of-place version of torch.Tensor.scatter_()
scatter_
(dim, index, src) → Tensor
Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.
For a 3-D tensor, self is updated as:
self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0
self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1
self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2
This is the reverse operation of the manner described in gather().
self, index and src (if it is a Tensor) should have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim.
Moreover, as for gather(), the values of index must be between 0 and self.size(dim) - 1 inclusive, and all values in a row along the specified dimension dim must be unique.
Parameters
Example:
>>> x = torch.rand(2, 5)
>>> x
tensor([[ 0.3992, 0.2908, 0.9044, 0.4850, 0.6004],
[ 0.5735, 0.9006, 0.6797, 0.4152, 0.1732]])
>>> torch.zeros(3, 5).scatter_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
tensor([[ 0.3992, 0.9006, 0.6797, 0.4850, 0.6004],
[ 0.0000, 0.2908, 0.0000, 0.4152, 0.0000],
[ 0.5735, 0.0000, 0.9044, 0.0000, 0.1732]])
>>> z = torch.zeros(2, 4).scatter_(1, torch.tensor([[2], [3]]), 1.23)
>>> z
tensor([[ 0.0000, 0.0000, 1.2300, 0.0000],
[ 0.0000, 0.0000, 0.0000, 1.2300]])
scatter_add_
(dim, index, other) → Tensor
Adds all values from the tensor other into self at the indices specified in the index tensor in a similar fashion as scatter_(). For each value in other, it is added to an index in self which is specified by its index in other for dimension != dim and by the corresponding value in index for dimension = dim.
For a 3-D tensor, self is updated as:
self[index[i][j][k]][j][k] += other[i][j][k] # if dim == 0
self[i][index[i][j][k]][k] += other[i][j][k] # if dim == 1
self[i][j][index[i][j][k]] += other[i][j][k] # if dim == 2
self, index and other should have the same number of dimensions. It is also required that index.size(d) <= other.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim.
Moreover, as for gather(), the values of index must be between 0 and self.size(dim) - 1 inclusive, and all values in a row along the specified dimension dim must be unique.
Note
When using the CUDA backend, this operation may induce nondeterministic behaviour that is not easily switched off. Please see the notes on Reproducibility for background.
Parameters
Example:
>>> x = torch.rand(2, 5)
>>> x
tensor([[0.7404, 0.0427, 0.6480, 0.3806, 0.8328],
[0.7953, 0.2009, 0.9154, 0.6782, 0.9620]])
>>> torch.ones(3, 5).scatter_add_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
tensor([[1.7404, 1.2009, 1.9154, 1.3806, 1.8328],
[1.0000, 1.0427, 1.0000, 1.6782, 1.0000],
[1.7953, 1.0000, 1.6480, 1.0000, 1.9620]])
scatter_add
(dim, index, source) → Tensor
Out-of-place version of torch.Tensor.scatter_add_()
select
(dim, index) → Tensor
Slices the self tensor along the selected dimension at the given index. This function returns a tensor with the given dimension removed.
Parameters
Note
select() is equivalent to slicing. For example, tensor.select(0, index) is equivalent to tensor[index] and tensor.select(2, index) is equivalent to tensor[:,:,index].
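A quick sketch:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> x.select(0, 1)   # same as x[1]
tensor([4, 5, 6])
>>> x.select(1, 2)   # same as x[:, 2]
tensor([3, 6])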
set_
(source=None, storage_offset=0, size=None, stride=None) → Tensor
Sets the underlying storage, size, and strides. If source is a tensor, the self tensor will share the same storage and have the same size and strides as source. Changes to elements in one tensor will be reflected in the other.
If source is a Storage, the method sets the underlying storage, offset, size, and stride.
Parameters
share_memory_
()[source]
Moves the underlying storage to shared memory.
This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized.
short
() → Tensor
self.short() is equivalent to self.to(torch.int16). See to().
sigmoid
() → Tensor
See torch.sigmoid()
sigmoid_
() → Tensor
In-place version of sigmoid()
sign
() → Tensor
See torch.sign()
sign_
() → Tensor
In-place version of sign()
sin
() → Tensor
See torch.sin()
sin_
() → Tensor
In-place version of sin()
sinh
() → Tensor
See torch.sinh()
sinh_
() → Tensor
In-place version of sinh()
size
() → torch.Size
Returns the size of the self tensor. The returned value is a subclass of tuple.
Example:
>>> torch.empty(3, 4, 5).size()
torch.Size([3, 4, 5])
slogdet
() -> (Tensor, Tensor)
See torch.slogdet()
solve
(A) → Tensor, Tensor
See torch.solve()
sort
(dim=-1, descending=False) -> (Tensor, LongTensor)
See torch.sort()
split
(split_size, dim=0)[source]
See torch.split()
sparse_mask
(input, mask) → Tensor
Returns a new SparseTensor with values from Tensor input filtered by the indices of mask; the values of mask are ignored. input and mask must have the same shape.
Parameters
input (Tensor) – an input Tensor
mask (SparseTensor) – a SparseTensor used to filter input based on its indices
Example:
>>> nnz = 5
>>> dims = [5, 5, 2, 2]
>>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)),
torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz)
>>> V = torch.randn(nnz, dims[2], dims[3])
>>> size = torch.Size(dims)
>>> S = torch.sparse_coo_tensor(I, V, size).coalesce()
>>> D = torch.randn(dims)
>>> D.sparse_mask(S)
tensor(indices=tensor([[0, 0, 0, 2],
[0, 1, 4, 3]]),
values=tensor([[[ 1.6550, 0.2397],
[-0.1611, -0.0779]],
[[ 0.2326, -1.0558],
[ 1.4711, 1.9678]],
[[-0.5138, -0.0411],
[ 1.9417, 0.5158]],
[[ 0.0793, 0.0036],
[-0.2569, -0.1055]]]),
size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)
sparse_dim
() → int
If self is a sparse COO tensor (i.e., with torch.sparse_coo layout), this returns the number of sparse dimensions. Otherwise, this throws an error.
See also Tensor.dense_dim().
sqrt
() → Tensor
See torch.sqrt()
sqrt_
() → Tensor
In-place version of sqrt()
squeeze
(dim=None) → Tensor
See torch.squeeze()
squeeze_
(dim=None) → Tensor
In-place version of squeeze()
std
(dim=None, unbiased=True, keepdim=False) → Tensor
See torch.std()
stft
(n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=True)[source]
See torch.stft()
Warning
This function changed signature at version 0.4.1. Calling with the previous signature may cause error or return incorrect result.
storage
() → torch.Storage
Returns the underlying storage.
storage_offset
() → int
Returns self
tensor’s offset in the underlying storage in terms of number of storage elements (not bytes).
Example:
>>> x = torch.tensor([1, 2, 3, 4, 5])
>>> x.storage_offset()
0
>>> x[3:].storage_offset()
3
storage_type
() → type
Returns the type of the underlying storage.
stride
(dim) → tuple or int
Returns the stride of the self tensor.
Stride is the jump necessary to go from one element to the next one in the specified dimension dim. A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension dim.
Parameters
dim (int, optional) – the desired dimension in which stride is required
Example:
>>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
>>> x.stride()
(5, 1)
>>> x.stride(0)
5
>>> x.stride(-1)
1
sub
(value, other) → Tensor
Subtracts a scalar or tensor from the self tensor. If both value and other are specified, each element of other is scaled by value before being used.
When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor.
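A small sketch following the (value, other) signature documented above (newer releases spell the scale factor as the alpha keyword instead):
>>> a = torch.tensor([10., 20., 30.])
>>> b = torch.tensor([1., 2., 3.])
>>> a.sub(b)      # element-wise a - b
tensor([ 9., 18., 27.])
>>> a.sub(2, b)   # a - 2 * b
tensor([ 8., 16., 24.])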
sub_
(x) → Tensor
In-place version of sub()
sum
(dim=None, keepdim=False, dtype=None) → Tensor
See torch.sum()
sum_to_size
(*size) → Tensor
Sum this tensor to size. size must be broadcastable to this tensor's size.
Parameters
size (int...) – a sequence of integers defining the shape of the output tensor.
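A quick sketch:
>>> t = torch.arange(6.).reshape(2, 3)
>>> t.sum_to_size(1, 3)   # sum over the broadcast dimension of size 2
tensor([[3., 5., 7.]])
>>> t.sum_to_size(2, 1)
tensor([[ 3.],
        [12.]])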
svd
(some=True, compute_uv=True) -> (Tensor, Tensor, Tensor)
See torch.svd()
symeig
(eigenvectors=False, upper=True) -> (Tensor, Tensor)
See torch.symeig()
t
() → Tensor
See torch.t()
t_
() → Tensor
In-place version of t()
to
(*args, **kwargs) → Tensor
Performs Tensor dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of self.to(*args, **kwargs).
Note
If the self Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch.dtype and torch.device.
Here are the ways to call to:
to
(dtype, non_blocking=False, copy=False) → Tensor
Returns a Tensor with the specified dtype
to
(device=None, dtype=None, non_blocking=False, copy=False) → Tensor
Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
to
(other, non_blocking=False, copy=False) → Tensor
Returns a Tensor with the same torch.dtype and torch.device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
Example:
>>> tensor = torch.randn(2, 2) # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64)
>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], device='cuda:0')
>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
to_mkldnn
() → Tensor
Returns a copy of the tensor in torch.mkldnn
layout.
take
(indices) → Tensor
See torch.take()
tan
() → Tensor
See torch.tan()
tan_
() → Tensor
In-place version of tan()
tanh
() → Tensor
See torch.tanh()
tanh_
() → Tensor
In-place version of tanh()
tolist
()
tolist() → list or number
Returns the tensor as a (nested) list. For scalars, a standard Python number is returned, just like with item(). Tensors are automatically moved to the CPU first if necessary.
This operation is not differentiable.
Examples:
>>> a = torch.randn(2, 2)
>>> a.tolist()
[[0.012766935862600803, 0.5415473580360413],
[-0.08909505605697632, 0.7729271650314331]]
>>> a[0,0].tolist()
0.012766935862600803
topk
(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)
See torch.topk()
to_sparse
(sparseDims) → Tensor
Returns a sparse copy of the tensor. PyTorch supports sparse tensors in coordinate format.
Parameters
sparseDims (int, optional) – the number of sparse dimensions to include in the new sparse tensor
Example:
>>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])
>>> d
tensor([[ 0, 0, 0],
[ 9, 0, 10],
[ 0, 0, 0]])
>>> d.to_sparse()
tensor(indices=tensor([[1, 1],
[0, 2]]),
values=tensor([ 9, 10]),
size=(3, 3), nnz=2, layout=torch.sparse_coo)
>>> d.to_sparse(1)
tensor(indices=tensor([[1]]),
values=tensor([[ 9, 0, 10]]),
size=(3, 3), nnz=1, layout=torch.sparse_coo)
trace
() → Tensor
See torch.trace()
transpose
(dim0, dim1) → Tensor
transpose_
(dim0, dim1) → Tensor
In-place version of transpose()
triangular_solve
(A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)
tril
(k=0) → Tensor
See torch.tril()
tril_
(k=0) → Tensor
In-place version of tril()
triu
(k=0) → Tensor
See torch.triu()
triu_
(k=0) → Tensor
In-place version of triu()
trunc
() → Tensor
See torch.trunc()
trunc_
() → Tensor
In-place version of trunc()
type
(dtype=None, non_blocking=False, **kwargs) → str or Tensor
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.
Parameters
dtype (type or string) – The desired type.
non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
**kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
type_as
(tensor) → Tensor
Returns this tensor cast to the type of the given tensor.
This is a no-op if the tensor is already of the correct type. This is equivalent to self.type(tensor.type())
Parameters
tensor (Tensor) – the tensor which has the desired type
unbind
(dim=0) → seq
See torch.unbind()
unfold
(dimension, size, step) → Tensor
Returns a tensor which contains all slices of size size from the self tensor in the dimension dimension.
Step between two slices is given by step.
If sizedim is the size of dimension dimension for self, the size of dimension dimension in the returned tensor will be (sizedim - size) / step + 1.
An additional dimension of size size is appended in the returned tensor.
Parameters
Example:
>>> x = torch.arange(1., 8)
>>> x
tensor([ 1., 2., 3., 4., 5., 6., 7.])
>>> x.unfold(0, 2, 1)
tensor([[ 1., 2.],
[ 2., 3.],
[ 3., 4.],
[ 4., 5.],
[ 5., 6.],
[ 6., 7.]])
>>> x.unfold(0, 2, 2)
tensor([[ 1., 2.],
[ 3., 4.],
[ 5., 6.]])
uniform_
(from=0, to=1) → Tensor
Fills self tensor with numbers sampled from the continuous uniform distribution:
P(x) = \dfrac{1}{\text{to} - \text{from}}
unique
(sorted=True, return_inverse=False, return_counts=False, dim=None)[source]
Returns the unique elements of the input tensor.
See torch.unique()
unique_consecutive
(return_inverse=False, return_counts=False, dim=None)[source]
Eliminates all but the first element from every consecutive group of equivalent elements.
See torch.unique_consecutive()
unsqueeze
(dim) → Tensor
unsqueeze_
(dim) → Tensor
In-place version of unsqueeze()
values
() → Tensor
If self is a sparse COO tensor (i.e., with torch.sparse_coo layout), this returns a view of the contained values tensor. Otherwise, this throws an error.
See also Tensor.indices().
Note
This method can only be called on a coalesced sparse tensor. See Tensor.coalesce()
for details.
var
(dim=None, unbiased=True, keepdim=False) → Tensor
See torch.var()
view
(*shape) → Tensor
Returns a new tensor with the same data as the self tensor but of a different shape.
The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions d, d+1, …, d+k that satisfy the following contiguity-like condition for all i = 0, …, k-1:
\text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]
Otherwise, contiguous() needs to be called before the tensor can be viewed. See also: reshape(), which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous()) otherwise.
Parameters
shape (torch.Size or int...) – the desired size
Example:
>>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])
>>> a = torch.randn(1, 2, 3, 4)
>>> a.size()
torch.Size([1, 2, 3, 4])
>>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension
>>> b.size()
torch.Size([1, 3, 2, 4])
>>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory
>>> c.size()
torch.Size([1, 3, 2, 4])
>>> torch.equal(b, c)
False
view_as
(other) → Tensor
View this tensor as the same size as other. self.view_as(other) is equivalent to self.view(other.size()).
Please see view() for more information about view.
Parameters
other (torch.Tensor) – The result tensor has the same size as other.
where
(condition, y) → Tensor
self.where(condition, y) is equivalent to torch.where(condition, self, y). See torch.where()
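A small sketch:
>>> x = torch.tensor([-1., 2., -3.])
>>> y = torch.zeros(3)
>>> x.where(x > 0, y)   # keep x where the condition holds, otherwise take y
tensor([0., 2., 0.])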
zero_
() → Tensor
Fills self
tensor with zeros.
class torch.BoolTensor
The following methods are unique to torch.BoolTensor.
all
()
all
() → bool
Returns True if all elements in the tensor are True, False otherwise.
Example:
>>> a = torch.rand(1, 2).bool()
>>> a
tensor([[False, True]], dtype=torch.bool)
>>> a.all()
tensor(False, dtype=torch.bool)
all
(dim, keepdim=False, out=None) → Tensor
Returns True if all elements in each row of the tensor in the given dimension dim are True, False otherwise.
If keepdim is True, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 fewer dimension than input.
Parameters
dim (int) – the dimension to reduce
keepdim (bool, optional) – whether the output tensor has dim retained or not
out (Tensor, optional) – the output tensor
Example:
>>> a = torch.rand(4, 2).bool()
>>> a
tensor([[True, True],
[True, False],
[True, True],
[True, True]], dtype=torch.bool)
>>> a.all(dim=1)
tensor([ True, False, True, True], dtype=torch.bool)
>>> a.all(dim=0)
tensor([ True, False], dtype=torch.bool)
any
()
any
() → bool
Returns True if any elements in the tensor are True, False otherwise.
Example:
>>> a = torch.rand(1, 2).bool()
>>> a
tensor([[False, True]], dtype=torch.bool)
>>> a.any()
tensor(True, dtype=torch.bool)
any
(dim, keepdim=False, out=None) → Tensor
Returns True if any elements in each row of the tensor in the given dimension dim are True, False otherwise.
If keepdim is True, the output tensor is of the same size as input except in the dimension dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 fewer dimension than input.
Parameters
dim (int) – the dimension to reduce
keepdim (bool, optional) – whether the output tensor has dim retained or not
out (Tensor, optional) – the output tensor
Example:
>>> a = torch.randn(4, 2) < 0
>>> a
tensor([[ True, True],
[False, True],
[ True, True],
[False, False]])
>>> a.any(1)
tensor([ True, True, True, False])
>>> a.any(0)
tensor([True, True])