OneCycleLR

class modelzoo.common.pytorch.optim.lr_scheduler.OneCycleLR (optimizer: torch.optim.optimizer.Optimizer, initial_learning_rate: float, max_lr: float, total_steps: int, pct_start: float, final_div_factor: float, three_phase: bool, anneal_strategy: str, disable_lr_steps_reset: bool = False)

Sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate. This policy was initially described in the paper Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.

This scheduler is not chainable.

Similar to https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html#torch.optim.lr_scheduler.OneCycleLR
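For intuition, the following is a minimal, self-contained sketch of a two-phase (three_phase=False) cosine 1cycle schedule. It is not the library implementation; the helper name one_cycle_lr and its exact formulas are illustrative only.

    import math

    def one_cycle_lr(step, initial_lr, max_lr, total_steps, pct_start, final_div_factor):
        # Illustrative only: a two-phase (three_phase=False) cosine variant of the 1cycle policy.
        min_lr = initial_lr / final_div_factor
        warmup_steps = max(int(pct_start * total_steps), 1)

        def cos_anneal(start, end, pct):
            # Cosine interpolation from `start` to `end` as `pct` goes from 0 to 1.
            return end + (start - end) / 2.0 * (1.0 + math.cos(math.pi * pct))

        if step < warmup_steps:
            # Phase 1: anneal up from the initial learning rate to max_lr.
            return cos_anneal(initial_lr, max_lr, step / warmup_steps)
        # Phase 2: anneal down from max_lr to min_lr over the remaining steps.
        pct = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
        return cos_anneal(max_lr, min_lr, pct)

With these formulas, the sketch returns initial_lr at step 0, peaks at max_lr at the end of the warmup phase, and reaches min_lr = initial_lr / final_div_factor at the final step.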

Parameters:

optimizer – The optimizer to schedule

initial_learning_rate – Initial learning rate. This is equivalent to max_lr / div_factor in PyTorch's OneCycleLR.

max_lr – Upper learning rate boundaries in the cycle.

total_steps – The total number of steps in the cycle.

pct_start – The percentage of the cycle (in number of steps) spent increasing the learning rate.

final_div_factor – Determines the minimum learning rate via min_lr = initial_lr/final_div_factor.

three_phase – If True, use a third phase of the schedule to annihilate the learning rate according to final_div_factor.

anneal_strategy – Specifies the annealing strategy: "cos" for cosine annealing, "linear" for linear annealing.
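A minimal usage sketch, assuming the class is importable from the module path shown in the signature above; the model, optimizer, and hyperparameter values below are placeholders, not recommendations.

    import torch
    from modelzoo.common.pytorch.optim.lr_scheduler import OneCycleLR

    # Placeholder model and optimizer for illustration.
    model = torch.nn.Linear(128, 10)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.004)

    scheduler = OneCycleLR(
        optimizer,
        initial_learning_rate=0.004,   # max_lr / div_factor in PyTorch terms
        max_lr=0.1,
        total_steps=1000,
        pct_start=0.3,
        final_div_factor=1e4,          # min_lr = initial_learning_rate / final_div_factor
        three_phase=False,
        anneal_strategy="cos",
    )

    for _ in range(1000):
        # ... forward pass, loss.backward() ...
        optimizer.step()
        scheduler.step()   # advance the 1cycle schedule once per optimizer step

Because the scheduler is not chainable, it is used on its own rather than composed with other schedulers, with scheduler.step() called once per optimizer step.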