The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over them. It also provides utilities for efficient serialization of tensors and arbitrary types, along with other useful utilities. It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.
torch.is_tensor(obj)
Returns True if obj is a PyTorch tensor.
Parameters
obj (Object) – Object to test
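For illustration, a minimal sketch (not from the original reference):
Example:
>>> torch.is_tensor(torch.tensor([1, 2]))
True
>>> torch.is_tensor([1, 2])
False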
torch.is_storage(obj)
Returns True if obj is a PyTorch storage object.
Parameters
obj (Object) – Object to test
torch.is_floating_point(input) -> (bool)
Returns True if the data type of input is a floating point data type, i.e., one of torch.float64, torch.float32, and torch.float16.
Parameters
input (Tensor) – the PyTorch tensor to test
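For illustration, a minimal sketch (not from the original reference):
Example:
>>> torch.is_floating_point(torch.zeros(2, dtype=torch.float32))
True
>>> torch.is_floating_point(torch.zeros(2, dtype=torch.int64))
False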
torch.set_default_dtype(d)
Sets the default floating point dtype to d. This type will be used as the default floating point type for type inference in torch.tensor().
The default floating point dtype is initially torch.float32.
Parameters
d (torch.dtype) – the floating point dtype to make the default
Example:
>>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_dtype(torch.float64)
>>> torch.tensor([1.2, 3]).dtype # a new floating point tensor
torch.float64
torch.get_default_dtype() → torch.dtype
Get the current default floating point torch.dtype.
Example:
>>> torch.get_default_dtype() # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_dtype(torch.float64)
>>> torch.get_default_dtype() # default is now changed to torch.float64
torch.float64
>>> torch.set_default_tensor_type(torch.FloatTensor) # setting tensor type also affects this
>>> torch.get_default_dtype() # changed to torch.float32, the dtype for torch.FloatTensor
torch.float32
torch.set_default_tensor_type(t)
Sets the default torch.Tensor type to floating point tensor type t. This type will also be used as the default floating point type for type inference in torch.tensor().
The default floating point tensor type is initially torch.FloatTensor.
Parameters
t (type or string) – the floating point tensor type or its name
Example:
>>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_tensor_type(torch.DoubleTensor)
>>> torch.tensor([1.2, 3]).dtype # a new floating point tensor
torch.float64
torch.numel(input) → int
Returns the total number of elements in the input tensor.
Parameters
input (Tensor) – the input tensor.
Example:
>>> a = torch.randn(1, 2, 3, 4, 5)
>>> torch.numel(a)
120
>>> a = torch.zeros(4,4)
>>> torch.numel(a)
16
torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None, sci_mode=None)
Set options for printing. Items shamelessly taken from NumPy.
Parameters
precision – Number of digits of precision for floating point output (default = 4).
threshold – Total number of array elements which trigger summarization rather than full repr (default = 1000).
edgeitems – Number of array items in summary at beginning and end of each dimension (default = 3).
linewidth – The number of characters per line for the purpose of inserting line breaks (default = 80).
profile – Sane defaults for pretty printing. Can override with any of the above options. (any one of default, short, full)
sci_mode – Enable (True) or disable (False) scientific notation.
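For illustration, a minimal usage sketch (the printed output is what these settings are expected to produce, not taken from the original reference):
Example:
>>> torch.set_printoptions(precision=2)
>>> torch.tensor([1.23456789])
tensor([1.23])
>>> torch.set_printoptions(profile="default")  # restore the defaults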
torch.set_flush_denormal(mode) → bool
Disables denormal floating numbers on CPU.
Returns True if your system supports flushing denormal numbers and it successfully configures flush denormal mode. set_flush_denormal() is only supported on x86 architectures supporting SSE3.
Parameters
mode (bool) – Controls whether to enable flush denormal mode or not
Example:
>>> torch.set_flush_denormal(True)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor([ 0.], dtype=torch.float64)
>>> torch.set_flush_denormal(False)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor(9.88131e-324 *
[ 1.0000], dtype=torch.float64)
Note
Random sampling creation ops are listed under Random sampling and include: torch.rand(), torch.rand_like(), torch.randn(), torch.randn_like(), torch.randint(), torch.randint_like(), torch.randperm(). You may also use torch.empty() with the in-place random sampling methods to create torch.Tensors with values sampled from a broader range of distributions.
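For illustration, a minimal sketch of the in-place random sampling pattern (the printed values are placeholders, since the results are random):
Example:
>>> torch.empty(3).uniform_(0, 1)             # fill an uninitialized tensor with U(0, 1) samples
tensor([0.5153, 0.4414, 0.1939])
>>> torch.empty(2, 2).normal_(mean=0, std=1)  # fill with N(0, 1) samples
tensor([[ 0.3550, -0.2836],
        [ 1.2113,  0.0730]])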
torch.tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor
Constructs a tensor with data.
Warning
torch.tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a NumPy ndarray and want to avoid a copy, use torch.as_tensor().
Warning
When data is a tensor x, torch.tensor() reads out "the data" from whatever it is passed, and constructs a leaf variable. Therefore torch.tensor(x) is equivalent to x.clone().detach() and torch.tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended.
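For illustration, a minimal sketch of the recommended clone()/detach() pattern (values and shapes are arbitrary, not from the original reference):
Example:
>>> x = torch.ones(2, requires_grad=True)
>>> y = x.clone().detach()                       # same values, detached leaf
>>> z = x.clone().detach().requires_grad_(True)  # detached leaf that records gradients again
>>> y.requires_grad, z.requires_grad
(False, True)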
Parameters
data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from data.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
pin_memory (bool, optional) – If set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: False.
Example:
>>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
tensor([[ 0.1000, 1.2000],
[ 2.2000, 3.1000],
[ 4.9000, 5.2000]])
>>> torch.tensor([0, 1]) # Type inference on data
tensor([ 0, 1])
>>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
dtype=torch.float64,
device=torch.device('cuda:0')) # creates a torch.cuda.DoubleTensor
tensor([[ 0.1111, 0.2222, 0.3333]], dtype=torch.float64, device='cuda:0')
>>> torch.tensor(3.14159) # Create a scalar (zero-dimensional tensor)
tensor(3.1416)
>>> torch.tensor([]) # Create an empty tensor (of size (0,))
tensor([])
torch.sparse_coo_tensor(indices, values, size=None, dtype=None, device=None, requires_grad=False) → Tensor
Constructs a sparse tensor in COO(rdinate) format with non-zero elements at the given indices with the given values. A sparse tensor can be uncoalesced; in that case, there are duplicate coordinates in the indices, and the value at that index is the sum of all duplicate value entries: see torch.sparse.
Parameters
indices (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types. Will be cast to a torch.LongTensor internally. The indices are the coordinates of the non-zero values in the matrix, and thus should be two-dimensional where the first dimension is the number of tensor dimensions and the second dimension is the number of non-zero values.
values (array_like) – Initial values for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
size (list, tuple, or torch.Size, optional) – Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from values.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> i = torch.tensor([[0, 1, 1],
[2, 0, 2]])
>>> v = torch.tensor([3, 4, 5], dtype=torch.float32)
>>> torch.sparse_coo_tensor(i, v, [2, 4])
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
size=(2, 4), nnz=3, layout=torch.sparse_coo)
>>> torch.sparse_coo_tensor(i, v) # Shape inference
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
size=(2, 3), nnz=3, layout=torch.sparse_coo)
>>> torch.sparse_coo_tensor(i, v, [2, 4],
dtype=torch.float64,
device=torch.device('cuda:0'))
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64,
layout=torch.sparse_coo)
# Create an empty sparse tensor with the following invariants:
# 1. sparse_dim + dense_dim = len(SparseTensor.shape)
# 2. SparseTensor._indices().shape = (sparse_dim, nnz)
# 3. SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:])
#
# For instance, to create an empty sparse tensor with nnz = 0, dense_dim = 0 and
# sparse_dim = 1 (hence indices is a 2D tensor of shape = (1, 0))
>>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1])
tensor(indices=tensor([], size=(1, 0)),
values=tensor([], size=(0,)),
size=(1,), nnz=0, layout=torch.sparse_coo)
# and to create an empty sparse tensor with nnz = 0, dense_dim = 1 and
# sparse_dim = 1
>>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2])
tensor(indices=tensor([], size=(1, 0)),
values=tensor([], size=(0, 2)),
size=(1, 2), nnz=0, layout=torch.sparse_coo)
torch.as_tensor(data, dtype=None, device=None) → Tensor
Converts the data into a torch.Tensor. If the data is already a Tensor with the same dtype and device, no copy is performed; otherwise a new Tensor is returned, with the computational graph retained if the data Tensor has requires_grad=True. Similarly, if the data is an ndarray of the corresponding dtype and the device is the CPU, no copy is performed.
Parameters
data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from data.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
Example:
>>> a = numpy.array([1, 2, 3])
>>> t = torch.as_tensor(a)
>>> t
tensor([ 1, 2, 3])
>>> t[0] = -1
>>> a
array([-1, 2, 3])
>>> a = numpy.array([1, 2, 3])
>>> t = torch.as_tensor(a, device=torch.device('cuda'))
>>> t
tensor([ 1, 2, 3])
>>> t[0] = -1
>>> a
array([1, 2, 3])
torch.as_strided(input, size, stride, storage_offset=0) → Tensor
Create a view of an existing torch.Tensor input with specified size, stride and storage_offset.
Warning
More than one element of a created tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.
Many PyTorch functions that return a view of a tensor are internally implemented with this function. Those functions, like torch.Tensor.expand(), are easier to read and are therefore more advisable to use.
Parameters
input (Tensor) – the input tensor.
size (tuple or ints) – the shape of the output tensor
stride (tuple or ints) – the stride of the output tensor
storage_offset (int, optional) – the offset in the underlying storage of the output tensor
Example:
>>> x = torch.randn(3, 3)
>>> x
tensor([[ 0.9039, 0.6291, 1.0795],
[ 0.1586, 2.1939, -0.4900],
[-0.1909, -0.7503, 1.9355]])
>>> t = torch.as_strided(x, (2, 2), (1, 2))
>>> t
tensor([[0.9039, 1.0795],
[0.6291, 0.1586]])
>>> t = torch.as_strided(x, (2, 2), (1, 2), 1)
tensor([[0.6291, 0.1586],
[1.0795, 2.1939]])
torch.from_numpy(ndarray) → Tensor
Creates a Tensor from a numpy.ndarray.
The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa. The returned tensor is not resizable.
It currently accepts ndarray with dtypes of numpy.float64, numpy.float32, numpy.float16, numpy.int64, numpy.int32, numpy.int16, numpy.int8, numpy.uint8, and numpy.bool.
Example:
>>> a = numpy.array([1, 2, 3])
>>> t = torch.from_numpy(a)
>>> t
tensor([ 1, 2, 3])
>>> t[0] = -1
>>> a
array([-1, 2, 3])
torch.zeros(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.
Parameters
size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> torch.zeros(2, 3)
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]])
>>> torch.zeros(5)
tensor([ 0., 0., 0., 0., 0.])
torch.zeros_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor
Returns a tensor filled with the scalar value 0, with the same size as input. torch.zeros_like(input) is equivalent to torch.zeros(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).
Warning
As of 0.4, this function does not support an out keyword. As an alternative, the old torch.zeros_like(input, out=output) is equivalent to torch.zeros(input.size(), out=output).
Parameters
input (Tensor) – the size of input will determine the size of the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> input = torch.empty(2, 3)
>>> torch.zeros_like(input)
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]])
torch.ones(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.
Parameters
size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> torch.ones(2, 3)
tensor([[ 1., 1., 1.],
[ 1., 1., 1.]])
>>> torch.ones(5)
tensor([ 1., 1., 1., 1., 1.])
torch.ones_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor
Returns a tensor filled with the scalar value 1, with the same size as input. torch.ones_like(input) is equivalent to torch.ones(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).
Warning
As of 0.4, this function does not support an out keyword. As an alternative, the old torch.ones_like(input, out=output) is equivalent to torch.ones(input.size(), out=output).
Parameters
input (Tensor) – the size of input will determine the size of the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> input = torch.empty(2, 3)
>>> torch.ones_like(input)
tensor([[ 1., 1., 1.],
[ 1., 1., 1.]])
torch.arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$ with values from the interval [start, end) taken with common difference step beginning from start.
Note that a non-integer step is subject to floating point rounding errors when comparing against end; to avoid inconsistency, we advise adding a small epsilon to end in such cases.
$\text{out}_{i+1} = \text{out}_{i} + \text{step}$
Parameters
start (Number) – the starting value for the set of points. Default: 0.
end (Number) – the ending value for the set of points
step (Number) – the gap between each pair of adjacent points. Default: 1.
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or stop are floating-point, the dtype is inferred to be the default dtype, see get_default_dtype(). Otherwise, the dtype is inferred to be torch.int64.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> torch.arange(5)
tensor([ 0, 1, 2, 3, 4])
>>> torch.arange(1, 4)
tensor([ 1, 2, 3])
>>> torch.arange(1, 2.5, 0.5)
tensor([ 1.0000, 1.5000, 2.0000])
torch.range(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a 1-D tensor of size $\left\lfloor \frac{\text{end} - \text{start}}{\text{step}} \right\rfloor + 1$ with values from start to end with step step. step is the gap between two values in the tensor.
$\text{out}_{i+1} = \text{out}_{i} + \text{step}.$
Warning
This function is deprecated in favor of torch.arange().
Parameters
start (float) – the starting value for the set of points. Default: 0.
end (float) – the ending value for the set of points
step (float) – the gap between each pair of adjacent points. Default: 1.
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or stop are floating-point, the dtype is inferred to be the default dtype, see get_default_dtype(). Otherwise, the dtype is inferred to be torch.int64.
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> torch.range(1, 4)
tensor([ 1., 2., 3., 4.])
>>> torch.range(1, 4, 0.5)
tensor([ 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000])
torch.linspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a one-dimensional tensor of steps equally spaced points between start and end.
The output tensor is 1-D of size steps.
Parameters
start (float) – the starting value for the set of points
end (float) – the ending value for the set of points
steps (int) – number of points to sample between start and end. Default: 100.
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> torch.linspace(3, 10, steps=5)
tensor([ 3.0000, 4.7500, 6.5000, 8.2500, 10.0000])
>>> torch.linspace(-10, 10, steps=5)
tensor([-10., -5., 0., 5., 10.])
>>> torch.linspace(start=-10, end=10, steps=5)
tensor([-10., -5., 0., 5., 10.])
>>> torch.linspace(start=-10, end=10, steps=1)
tensor([-10.])
torch.logspace(start, end, steps=100, base=10.0, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a one-dimensional tensor of steps points logarithmically spaced with base base between $\text{base}^{\text{start}}$ and $\text{base}^{\text{end}}$.
The output tensor is 1-D of size steps.
Parameters
start (float) – the starting value for the set of points
end (float) – the ending value for the set of points
steps (int) – number of points to sample between start and end. Default: 100.
base (float) – base of the logarithm function. Default: 10.0.
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> torch.logspace(start=-10, end=10, steps=5)
tensor([ 1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])
>>> torch.logspace(start=0.1, end=1.0, steps=5)
tensor([ 1.2589, 2.1135, 3.5481, 5.9566, 10.0000])
>>> torch.logspace(start=0.1, end=1.0, steps=1)
tensor([1.2589])
>>> torch.logspace(start=2, end=2, steps=1, base=2)
tensor([4.0])
torch.eye(n, m=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.
Parameters
n (int) – the number of rows
m (int, optional) – the number of columns, with default being n
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Returns
A 2-D tensor with ones on the diagonal and zeros elsewhere
Return type
Tensor
Example:
>>> torch.eye(3)
tensor([[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
torch.empty(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) → Tensor
Returns a tensor filled with uninitialized data. The shape of the tensor is defined by the variable argument size.
Parameters
size (int...) – a sequence of integers defining the shape of the output tensor. Can be a variable number of arguments or a collection like a list or tuple.
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
pin_memory (bool, optional) – If set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: False.
Example:
>>> torch.empty(2, 3)
tensor(1.00000e-08 *
[[ 6.3984, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000]])
torch.empty_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor
Returns an uninitialized tensor with the same size as input. torch.empty_like(input) is equivalent to torch.empty(input.size(), dtype=input.dtype, layout=input.layout, device=input.device).
Parameters
input (Tensor) – the size of input will determine the size of the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> torch.empty((2,3), dtype=torch.int64)
tensor([[ 9.4064e+13, 2.8000e+01, 9.3493e+13],
[ 7.5751e+18, 7.1428e+18, 7.5955e+18]])
torch.empty_strided(size, stride, dtype=None, layout=None, device=None, requires_grad=False, pin_memory=False) → Tensor
Returns a tensor filled with uninitialized data. The shape and strides of the tensor are defined by the variable arguments size and stride respectively. torch.empty_strided(size, stride) is equivalent to torch.empty(size).as_strided(size, stride).
Warning
More than one element of the created tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.
Parameters
size (tuple of ints) – the shape of the output tensor
stride (tuple of ints) – the strides of the output tensor
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
pin_memory (bool, optional) – If set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: False.
Example:
>>> a = torch.empty_strided((2, 3), (1, 2))
>>> a
tensor([[8.9683e-44, 4.4842e-44, 5.1239e+07],
[0.0000e+00, 0.0000e+00, 3.0705e-41]])
>>> a.stride()
(1, 2)
>>> a.size()
torch.Size([2, 3])
torch.full(size, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor of size size filled with fill_value.
Parameters
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
fill_value – the number to fill the output tensor with.
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()).
layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
Example:
>>> torch.full((2, 3), 3.141592)
tensor([[ 3.1416, 3.1416, 3.1416],
[ 3.1416, 3.1416, 3.1416]])
torch.full_like(input, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor with the same size as input filled with fill_value. torch.full_like(input, fill_value) is equivalent to torch.full(input.size(), fill_value, dtype=input.dtype, layout=input.layout, device=input.device).
Parameters
input (Tensor) – the size of input will determine the size of the output tensor.
fill_value – the number to fill the output tensor with.
out (Tensor, optional) – the output tensor.
dtype (torch.dtype, optional) – the desired data type of returned Tensor. Default: if None, defaults to the dtype of input.
layout (torch.layout, optional) – the desired layout of returned tensor. Default: if None, defaults to the layout of input.
device (torch.device, optional) – the desired device of returned tensor. Default: if None, defaults to the device of input.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
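For illustration, a minimal sketch (not from the original reference):
Example:
>>> x = torch.zeros(2, 3)
>>> torch.full_like(x, 5.)
tensor([[5., 5., 5.],
        [5., 5., 5.]])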
torch.cat(tensors, dim=0, out=None) → Tensor
Concatenates the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.
torch.cat() can be seen as an inverse operation for torch.split() and torch.chunk().
torch.cat() can be best understood via examples.
Parameters
tensors (sequence of Tensors) – any python sequence of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.
dim (int, optional) – the dimension over which the tensors are concatenated
out (Tensor, optional) – the output tensor.
Example:
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497]])
>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497],
[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497],
[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497]])
>>> torch.cat((x, x, x), 1)
tensor([[ 0.6580, -1.0969, -0.4614, 0.6580, -1.0969, -0.4614, 0.6580,
-1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497, -0.1034, -0.5790, 0.1497, -0.1034,
-0.5790, 0.1497]])
torch.chunk(input, chunks, dim=0) → List of Tensors
Splits a tensor into a specific number of chunks.
The last chunk will be smaller if the tensor size along the given dimension dim is not divisible by chunks.
Parameters
input (Tensor) – the tensor to split
chunks (int) – number of chunks to return
dim (int) – dimension along which to split the tensor
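For illustration, a minimal sketch (not from the original reference):
Example:
>>> torch.chunk(torch.arange(10), 3)
(tensor([0, 1, 2, 3]), tensor([4, 5, 6, 7]), tensor([8, 9]))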
torch.gather(input, dim, index, out=None, sparse_grad=False) → Tensor
Gathers values along an axis specified by dim.
For a 3-D tensor the output is specified by:
out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0
out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1
out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2
If input is an n-dimensional tensor with size $(x_0, x_1, ..., x_{i-1}, x_i, x_{i+1}, ..., x_{n-1})$ and dim = i, then index must be an $n$-dimensional tensor with size $(x_0, x_1, ..., x_{i-1}, y, x_{i+1}, ..., x_{n-1})$ where $y \geq 1$, and out will have the same size as index.
Parameters
input (Tensor) – the source tensor
dim (int) – the axis along which to index
index (LongTensor) – the indices of elements to gather
out (Tensor, optional) – the destination tensor
sparse_grad (bool, optional) – If True, gradient w.r.t. input will be a sparse tensor.
Example:
>>> t = torch.tensor([[1,2],[3,4]])
>>> torch.gather(t, 1, torch.tensor([[0,0],[1,0]]))
tensor([[ 1, 1],
[ 4, 3]])
torch.index_select(input, dim, index, out=None) → Tensor
Returns a new tensor which indexes the input tensor along dimension dim using the entries in index, which is a LongTensor.
The returned tensor has the same number of dimensions as the original tensor (input). The dim-th dimension has the same size as the length of index; other dimensions have the same size as in the original tensor.
Note
The returned tensor does not use the same storage as the original tensor. If out has a different shape than expected, we silently change it to the correct shape, reallocating the underlying storage if necessary.
Parameters
input (Tensor) – the input tensor.
dim (int) – the dimension in which we index
index (LongTensor) – the 1-D tensor containing the indices to index
out (Tensor, optional) – the output tensor.
Example:
>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],
[-0.4664, 0.2647, -0.1228, -1.1068],
[-1.1734, -0.6571, 0.7230, -0.6004]])
>>> indices = torch.tensor([0, 2])
>>> torch.index_select(x, 0, indices)
tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],
[-1.1734, -0.6571, 0.7230, -0.6004]])
>>> torch.index_select(x, 1, indices)
tensor([[ 0.1427, -0.5414],
[-0.4664, -0.1228],
[-1.1734, 0.7230]])
torch.masked_select(input, mask, out=None) → Tensor
Returns a new 1-D tensor which indexes the input tensor according to the boolean mask mask, which is a BoolTensor.
The shapes of the mask tensor and the input tensor don't need to match, but they must be broadcastable.
Note
The returned tensor does not use the same storage as the original tensor.
Parameters
input (Tensor) – the input tensor.
mask (BoolTensor) – the tensor containing the binary mask to index with
out (Tensor, optional) – the output tensor.
Example:
>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.3552, -2.3825, -0.8297, 0.3477],
[-1.2035, 1.2252, 0.5002, 0.6248],
[ 0.1307, -2.0608, 0.1244, 2.0139]])
>>> mask = x.ge(0.5)
>>> mask
tensor([[False, False, False, False],
[False, True, True, True],
[False, False, False, True]])
>>> torch.masked_select(x, mask)
tensor([ 1.2252, 0.5002, 0.6248, 2.0139])
torch.narrow(input, dim, start, length) → Tensor
Returns a new tensor that is a narrowed version of the input tensor. The dimension dim is narrowed from start to start + length. The returned tensor and the input tensor share the same underlying storage.
Parameters
input (Tensor) – the tensor to narrow
dim (int) – the dimension along which to narrow
start (int) – the starting dimension
length (int) – the distance to the ending dimension
Example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> torch.narrow(x, 0, 0, 2)
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
>>> torch.narrow(x, 1, 1, 2)
tensor([[ 2, 3],
[ 5, 6],
[ 8, 9]])
torch.nonzero(input, *, out=None, as_tuple=False) → LongTensor or tuple of LongTensors
When as_tuple is False or unspecified:
Returns a tensor containing the indices of all non-zero elements of input. Each row in the result contains the indices of a non-zero element in input. The result is sorted lexicographically, with the last index changing the fastest (C-style).
If input has n dimensions, then the resulting indices tensor out is of size $(z \times n)$, where $z$ is the total number of non-zero elements in the input tensor.
When as_tuple is True:
Returns a tuple of 1-D tensors, one for each dimension in input, each containing the indices (in that dimension) of all non-zero elements of input.
If input has n dimensions, then the resulting tuple contains n tensors of size z, where z is the total number of non-zero elements in the input tensor.
As a special case, when input has zero dimensions and a nonzero scalar value, it is treated as a one-dimensional tensor with one element.
Parameters
input (Tensor) – the input tensor.
out (LongTensor, optional) – the output tensor containing indices
Returns
If as_tuple is False, the output tensor containing indices. If as_tuple is True, one 1-D tensor for each dimension, containing the indices of each nonzero element along that dimension.
Return type
LongTensor or tuple of LongTensor
Example:
>>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]))
tensor([[ 0],
[ 1],
[ 2],
[ 4]])
>>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
[0.0, 0.4, 0.0, 0.0],
[0.0, 0.0, 1.2, 0.0],
[0.0, 0.0, 0.0,-0.4]]))
tensor([[ 0, 0],
[ 1, 1],
[ 2, 2],
[ 3, 3]])
>>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]), as_tuple=True)
(tensor([0, 1, 2, 4]),)
>>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
[0.0, 0.4, 0.0, 0.0],
[0.0, 0.0, 1.2, 0.0],
[0.0, 0.0, 0.0,-0.4]]), as_tuple=True)
(tensor([0, 1, 2, 3]), tensor([0, 1, 2, 3]))
>>> torch.nonzero(torch.tensor(5), as_tuple=True)
(tensor([0]),)
torch.reshape(input, shape) → Tensor
Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.
See torch.Tensor.view() on when it is possible to return a view.
A single dimension may be -1, in which case it is inferred from the remaining dimensions and the number of elements in input.
Parameters
input (Tensor) – the tensor to be reshaped
shape (tuple of ints) – the new shape
Example:
>>> a = torch.arange(4.)
>>> torch.reshape(a, (2, 2))
tensor([[ 0., 1.],
[ 2., 3.]])
>>> b = torch.tensor([[0, 1], [2, 3]])
>>> torch.reshape(b, (-1,))
tensor([ 0, 1, 2, 3])
torch.split(tensor, split_size_or_sections, dim=0)
Splits the tensor into chunks.
If split_size_or_sections is an integer type, then tensor will be split into equally sized chunks (if possible). The last chunk will be smaller if the tensor size along the given dimension dim is not divisible by split_size.
If split_size_or_sections is a list, then tensor will be split into len(split_size_or_sections) chunks with sizes in dim according to split_size_or_sections.
Parameters
tensor (Tensor) – tensor to split.
split_size_or_sections (int or list(int)) – size of a single chunk or list of sizes for each chunk
dim (int) – dimension along which to split the tensor.
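For illustration, a minimal sketch (not from the original reference):
Example:
>>> a = torch.arange(10).reshape(5, 2)
>>> torch.split(a, 2)
(tensor([[0, 1],
        [2, 3]]), tensor([[4, 5],
        [6, 7]]), tensor([[8, 9]]))
>>> torch.split(a, [1, 4])
(tensor([[0, 1]]), tensor([[2, 3],
        [4, 5],
        [6, 7],
        [8, 9]]))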
torch.squeeze(input, dim=None, out=None) → Tensor
Returns a tensor with all the dimensions of input of size 1 removed.
For example, if input is of shape $(A \times 1 \times B \times C \times 1 \times D)$, then the out tensor will be of shape $(A \times B \times C \times D)$.
When dim is given, a squeeze operation is done only in the given dimension. If input is of shape $(A \times 1 \times B)$, squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) will squeeze the tensor to the shape $(A \times B)$.
Note
The returned tensor shares the storage with the input tensor, so changing the contents of one will change the contents of the other.
Parameters
input (Tensor) – the input tensor.
dim (int, optional) – if given, the input will be squeezed only in this dimension
out (Tensor, optional) – the output tensor.
Example:
>>> x = torch.zeros(2, 1, 2, 1, 2)
>>> x.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x)
>>> y.size()
torch.Size([2, 2, 2])
>>> y = torch.squeeze(x, 0)
>>> y.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x, 1)
>>> y.size()
torch.Size([2, 2, 1, 2])
torch.stack(tensors, dim=0, out=None) → Tensor
Concatenates a sequence of tensors along a new dimension.
All tensors need to be of the same size.
Parameters
tensors (sequence of Tensors) – sequence of tensors to concatenate
dim (int) – dimension to insert. Has to be between 0 and the number of dimensions of the concatenated tensors (inclusive)
out (Tensor, optional) – the output tensor.
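For illustration, a minimal sketch (not from the original reference; only the shapes are shown since the values are random):
Example:
>>> x = torch.randn(2, 3)
>>> torch.stack((x, x), dim=0).size()
torch.Size([2, 2, 3])
>>> torch.stack((x, x), dim=2).size()
torch.Size([2, 3, 2])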
torch.t(input) → Tensor
Expects input to be a tensor with at most 2 dimensions, and transposes dimensions 0 and 1.
0-D and 1-D tensors are returned as is, and for a 2-D tensor this can be seen as a short-hand function for transpose(input, 0, 1).
Parameters
input (Tensor) – the input tensor.
Example:
>>> x = torch.randn(())
>>> x
tensor(0.1995)
>>> torch.t(x)
tensor(0.1995)
>>> x = torch.randn(3)
>>> x
tensor([ 2.4320, -0.4608, 0.7702])
>>> torch.t(x)
tensor([ 2.4320, -0.4608,  0.7702])
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.4875, 0.9158, -0.5872],
[ 0.3938, -0.6929, 0.6932]])
>>> torch.t(x)
tensor([[ 0.4875, 0.3938],
[ 0.9158, -0.6929],
[-0.5872, 0.6932]])
torch.take(input, index) → Tensor
Returns a new tensor with the elements of input at the given indices. The input tensor is treated as if it were viewed as a 1-D tensor. The result takes the same shape as the indices.
Parameters
input (Tensor) – the input tensor.
index (LongTensor) – the indices into tensor
Example:
>>> src = torch.tensor([[4, 3, 5],
[6, 7, 8]])
>>> torch.take(src, torch.tensor([0, 2, 5]))
tensor([ 4, 5, 8])
torch.transpose(input, dim0, dim1) → Tensor
Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.
The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other.
Parameters
input (Tensor) – the input tensor.
dim0 (int) – the first dimension to be transposed
dim1 (int) – the second dimension to be transposed
Example:
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 1.0028, -0.9893, 0.5809],
[-0.1669, 0.7299, 0.4942]])
>>> torch.transpose(x, 0, 1)
tensor([[ 1.0028, -0.1669],
[-0.9893, 0.7299],
[ 0.5809, 0.4942]])
torch.unbind(input, dim=0) → seq
Removes a tensor dimension.
Returns a tuple of all slices along a given dimension, already without it.
Parameters
input (Tensor) – the tensor to unbind
dim (int) – dimension to remove
Example:
>>> torch.unbind(torch.tensor([[1, 2, 3],
...                            [4, 5, 6],
...                            [7, 8, 9]]))
(tensor([1, 2, 3]), tensor([4, 5, 6]), tensor([7, 8, 9]))
torch.unsqueeze(input, dim, out=None) → Tensor
Returns a new tensor with a dimension of size one inserted at the specified position.
The returned tensor shares the same underlying data with this tensor.
A dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. A negative dim will correspond to unsqueeze() applied at dim = dim + input.dim() + 1.
Parameters
input (Tensor) – the input tensor.
dim (int) – the index at which to insert the singleton dimension
out (Tensor, optional) – the output tensor.
Example:
>>> x = torch.tensor([1, 2, 3, 4])
>>> torch.unsqueeze(x, 0)
tensor([[ 1, 2, 3, 4]])
>>> torch.unsqueeze(x, 1)
tensor([[ 1],
[ 2],
[ 3],
[ 4]])
torch.where(condition, x, y) → Tensor
Return a tensor of elements selected from either x or y, depending on condition.
The operation is defined as:
$\text{out}_i = \begin{cases} \text{x}_i & \text{if } \text{condition}_i \\ \text{y}_i & \text{otherwise} \end{cases}$
Note
The tensors condition, x, y must be broadcastable.
Parameters
condition (BoolTensor) – When True (nonzero), yield x, otherwise yield y
x (Tensor) – values selected at indices where condition is True
y (Tensor) – values selected at indices where condition is False
Returns
A tensor of shape equal to the broadcasted shape of condition, x, y
Return type
Tensor
Example:
>>> x = torch.randn(3, 2)
>>> y = torch.ones(3, 2)
>>> x
tensor([[-0.4620, 0.3139],
[ 0.3898, -0.7197],
[ 0.0478, -0.1657]])
>>> torch.where(x > 0, x, y)
tensor([[ 1.0000, 0.3139],
[ 0.3898, 1.0000],
[ 0.0478, 1.0000]])
torch.where(condition) → tuple of LongTensor
torch.where(condition) is identical to torch.nonzero(condition, as_tuple=True).
Note
See also torch.nonzero().
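For illustration, a minimal sketch of the single-argument form (output written by hand, not from the original reference):
Example:
>>> torch.where(torch.tensor([1, 0, 1]))
(tensor([0, 2]),)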