torch.optim is a package implementing various optimization algorithms. Most commonly used methods are already supported, and the interface is general enough that more sophisticated ones can also be easily integrated in the future.
To use torch.optim you have to construct an optimizer object that will hold the current state and will update the parameters based on the computed gradients.
To construct an Optimizer you have to give it an iterable containing the parameters (all should be Variables) to optimize. Then, you can specify optimizer-specific options such as the learning rate, weight decay, etc.
Note
If you need to move a model to GPU via .cuda(), please do so before constructing optimizers for it. Parameters of a model after .cuda() will be different objects from those before the call.
In general, you should make sure that optimized parameters live in consistent locations when optimizers are constructed and used.
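For instance (a minimal sketch; Net stands in for any nn.Module subclass), move the model first and only then construct the optimizer:

model = Net().cuda()                                # move the parameters to the GPU first
optimizer = optim.SGD(model.parameters(), lr=0.01)  # then build the optimizer over the GPU parameters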
Example:
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = optim.Adam([var1, var2], lr=0.0001)
Optimizers also support specifying per-parameter options. To do this, instead of passing an iterable of Variables, pass in an iterable of dicts. Each of them will define a separate parameter group, and should contain a params key, containing a list of parameters belonging to it. Other keys should match the keyword arguments accepted by the optimizers, and will be used as optimization options for this group.
Note
You can still pass options as keyword arguments. They will be used as defaults in the groups that didn’t override them. This is useful when you only want to vary a single option while keeping all others consistent between parameter groups.
For example, this is very useful when one wants to specify per-layer learning rates:
optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
This means that model.base’s parameters will use the default learning rate of 1e-2, model.classifier’s parameters will use a learning rate of 1e-3, and a momentum of 0.9 will be used for all parameters.
All optimizers implement a step() method that updates the parameters. It can be used in two ways:
optimizer.step()
This is a simplified version supported by most optimizers. The function can be called once the gradients are computed using e.g. backward().
Example:
for input, target in dataset:
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
optimizer.step(closure)
Some optimization algorithms such as Conjugate Gradient and LBFGS need to reevaluate the function multiple times, so you have to pass in a closure that allows them to recompute your model. The closure should clear the gradients, compute the loss, and return it.
Example:
for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)
class torch.optim.Optimizer(params, defaults)[source]
Base class for all optimizers.
Warning
Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don’t satisfy those properties are sets and iterators over values of dictionaries.
Parameters
params (iterable) – an iterable of torch.Tensors or dicts. Specifies what Tensors should be optimized.
defaults (dict) – a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
add_param_group(param_group)[source]
Add a param group to the Optimizer’s param_groups.
This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the Optimizer as training progresses.
Parameters
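For example (a minimal sketch; model.base and model.classifier follow the earlier per-parameter example, and the learning rates are illustrative), a group can be added once a previously frozen part of the network becomes trainable:

optimizer = optim.SGD(model.base.parameters(), lr=1e-2, momentum=0.9)

# ... train with only model.base for a while ...

for p in model.classifier.parameters():
    p.requires_grad = True  # make the frozen layers trainable

optimizer.add_param_group({'params': model.classifier.parameters(), 'lr': 1e-3})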
load_state_dict(state_dict)[source]
Loads the optimizer state.
Parameters
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
state_dict()[source]
Returns the state of the optimizer as a dict.
It contains two entries:
state – a dict holding current optimization state. Its content differs between optimizer classes.
param_groups – a dict containing all parameter groups.
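A common use is checkpointing: save the optimizer state together with the model weights and restore both later (a minimal sketch; the file name and the model and optimizer variables are placeholders):

torch.save({
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
}, 'checkpoint.pth')

checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])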
step(closure)[source]
Performs a single optimization step (parameter update).
Parameters
closure (callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.
zero_grad()[source]
Clears the gradients of all optimized torch.Tensors.
class torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)[source]
Implements Adadelta algorithm.
It has been proposed in ADADELTA: An Adaptive Learning Rate Method.
Parameters
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0)[source]
Implements Adagrad algorithm.
It has been proposed in Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.
Parameters
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)[source]
Implements Adam algorithm.
It has been proposed in Adam: A Method for Stochastic Optimization.
Parameters
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)[source]
Implements AdamW algorithm.
The original Adam algorithm was proposed in Adam: A Method for Stochastic Optimization. The AdamW variant was proposed in Decoupled Weight Decay Regularization.
Parameters
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08)[source]
Implements a lazy version of the Adam algorithm suitable for sparse tensors.
In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters.
Parameters
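For example (a minimal sketch; the embedding sizes and learning rate are illustrative), SparseAdam is typically paired with a layer that emits sparse gradients, such as an nn.Embedding created with sparse=True:

import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=10000, embedding_dim=128, sparse=True)
optimizer = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)

indices = torch.randint(0, 10000, (32,))
loss = embedding(indices).sum()

optimizer.zero_grad()
loss.backward()   # produces a sparse gradient for the embedding weight
optimizer.step()  # only the rows that appeared in the batch are updated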
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]
Implements Adamax algorithm (a variant of Adam based on infinity norm).
It has been proposed in Adam: A Method for Stochastic Optimization.
Parameters
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0)[source]
Implements Averaged Stochastic Gradient Descent.
It has been proposed in Acceleration of stochastic approximation by averaging.
Parameters
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-05, tolerance_change=1e-09, history_size=100, line_search_fn=None)[source]
Implements L-BFGS algorithm, heavily inspired by minFunc <https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html>.
Warning
This optimizer doesn’t support per-parameter options and parameter groups (there can be only one).
Warning
Right now all parameters have to be on a single device. This will be improved in the future.
Note
This is a very memory intensive optimizer (it requires additional param_bytes * (history_size + 1) bytes). If it doesn’t fit in memory, try reducing the history size, or use a different algorithm.
Parameters
step(closure)[source]
Performs a single optimization step.
Parameters
closure (callable) – A closure that reevaluates the model and returns the loss.
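Example (a minimal sketch; model, input, target, and loss_fn are assumed to exist):

optimizer = torch.optim.LBFGS(model.parameters(), lr=1, max_iter=20, history_size=10)

def closure():
    optimizer.zero_grad()
    loss = loss_fn(model(input), target)
    loss.backward()
    return loss

optimizer.step(closure)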
class torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)[source]
Implements RMSprop algorithm.
Proposed by G. Hinton in his course.
The centered version first appears in Generating Sequences With Recurrent Neural Networks.
Parameters
centered (bool, optional) – if True, compute the centered RMSProp, where the gradient is normalized by an estimation of its variance
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50))[source]
Implements the resilient backpropagation algorithm.
Parameters
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
class torch.optim.SGD(params, lr=&lt;required parameter&gt;, momentum=0, dampening=0, weight_decay=0, nesterov=False)[source]
Implements stochastic gradient descent (optionally with momentum).
Nesterov momentum is based on the formula from On the importance of initialization and momentum in deep learning.
Parameters
Example
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
Note
The implementation of SGD with Momentum/Nesterov subtly differs from Sutskever et al. and implementations in some other frameworks.
Considering the specific case of Momentum, the update can be written as
v = \rho * v + g
p = p - lr * v
where p, g, v, and \rho denote the parameters, gradient, velocity, and momentum respectively.
This is in contrast to Sutskever et al. and other frameworks, which employ an update of the form
v = \rho * v + lr * g
p = p - v
The Nesterov version is analogously modified.
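As a concrete sketch (plain Python, purely illustrative; the names follow the note above), the two update rules differ in where the learning rate is applied:

rho, lr = 0.9, 0.1

def pytorch_momentum_step(p, v, g):
    v = rho * v + g       # velocity accumulates the raw gradient
    p = p - lr * v        # lr scales the whole velocity
    return p, v

def sutskever_momentum_step(p, v, g):
    v = rho * v + lr * g  # lr scales only the new gradient term
    p = p - v             # velocity is applied directly
    return p, v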
step(closure=None)[source]
Performs a single optimization step.
Parameters
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
torch.optim.lr_scheduler provides several methods to adjust the learning rate based on the number of epochs. torch.optim.lr_scheduler.ReduceLROnPlateau allows dynamic learning rate reducing based on some validation measurements.
Learning rate scheduling should be applied after the optimizer’s update; e.g., you should write your code this way:
>>> scheduler = ...
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
Warning
Prior to PyTorch 1.1.0, the learning rate scheduler was expected to be called before the optimizer’s update; 1.1.0 changed this behavior in a BC-breaking way. If you use the learning rate scheduler (calling scheduler.step()) before the optimizer’s update (calling optimizer.step()), this will skip the first value of the learning rate schedule. If you are unable to reproduce results after upgrading to PyTorch 1.1.0, please check if you are calling scheduler.step() at the wrong time.
class torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1)[source]
Sets the learning rate of each parameter group to the initial lr times a given function. When last_epoch=-1, sets initial lr as lr.
Parameters
Example
>>> # Assuming optimizer has two groups.
>>> lambda1 = lambda epoch: epoch // 30
>>> lambda2 = lambda epoch: 0.95 ** epoch
>>> scheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
load_state_dict(state_dict)[source]
Loads the scheduler’s state.
Parameters
state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
state_dict()[source]
Returns the state of the scheduler as a dict.
It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.
class torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1)[source]
Sets the learning rate of each parameter group to the initial lr decayed by gamma every step_size epochs. When last_epoch=-1, sets initial lr as lr.
Parameters
Example
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05 if epoch < 30
>>> # lr = 0.005 if 30 <= epoch < 60
>>> # lr = 0.0005 if 60 <= epoch < 90
>>> # ...
>>> scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
class torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1)[source]
Set the learning rate of each parameter group to the initial lr decayed by gamma once the number of epochs reaches one of the milestones. When last_epoch=-1, sets initial lr as lr.
Parameters
Example
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05 if epoch < 30
>>> # lr = 0.005 if 30 <= epoch < 80
>>> # lr = 0.0005 if epoch >= 80
>>> scheduler = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
class torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1)[source]
Set the learning rate of each parameter group to the initial lr decayed by gamma every epoch. When last_epoch=-1, sets initial lr as lr.
Parameters
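Usage follows the same pattern as the other schedulers (a sketch; the gamma value is illustrative):

>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr is multiplied by gamma = 0.9 after every epoch
>>> scheduler = ExponentialLR(optimizer, gamma=0.9)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()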
class torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1)[source]
Set the learning rate of each parameter group using a cosine annealing schedule, where \eta_{max} is set to the initial lr and T_{cur} is the number of epochs since the last restart in SGDR:
\eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})\left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)
When last_epoch=-1, sets initial lr as lr.
It has been proposed in SGDR: Stochastic Gradient Descent with Warm Restarts. Note that this only implements the cosine annealing part of SGDR, and not the restarts.
Parameters
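Usage follows the same pattern as the other schedulers (a sketch; the T_max and eta_min values are illustrative):

>>> # Anneal the lr from its initial value down to eta_min over T_max epochs
>>> scheduler = CosineAnnealingLR(optimizer, T_max=100, eta_min=1e-5)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()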
class torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, verbose=False, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)[source]
Reduce learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity and, if no improvement is seen for a ‘patience’ number of epochs, the learning rate is reduced.
Parameters
verbose (bool) – If True, prints a message to stdout for each update. Default: False.
Example
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = ReduceLROnPlateau(optimizer, 'min')
>>> for epoch in range(10):
>>>     train(...)
>>>     val_loss = validate(...)
>>>     # Note that step should be called after validate()
>>>     scheduler.step(val_loss)
class torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1)[source]
Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper Cyclical Learning Rates for Training Neural Networks. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.
Cyclical learning rate policy changes the learning rate after every batch. step should be called after a batch has been used for training.
This class has three built-in policies, as put forth in the paper:
“triangular”: A basic triangular cycle with no amplitude scaling.
“triangular2”: A basic triangular cycle that scales initial amplitude by half each cycle.
“exp_range”: A cycle that scales initial amplitude by gamma**(cycle iterations) at each cycle iteration.
This implementation was adapted from the github repo: bckenstler/CLR
Parameters
cycle_momentum (bool) – If True, momentum is cycled inversely to learning rate between ‘base_momentum’ and ‘max_momentum’. Default: True
Example
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1)
>>> data_loader = torch.utils.data.DataLoader(...)
>>> for epoch in range(10):
>>>     for batch in data_loader:
>>>         train_batch(...)
>>>         scheduler.step()
get_lr()[source]
Calculates the learning rate at batch index. This function treats self.last_epoch as the last batch index.
If self.cycle_momentum is True, this function has a side effect of updating the optimizer’s momentum.