LinearLRSchedulerWithLinearWarmup

class lightning_ir.schedulers.lr_schedulers.LinearLRSchedulerWithLinearWarmup(optimizer: Optimizer, num_warmup_steps: int, **kwargs)

Bases: WarmupLRScheduler, LinearSchedulerWithLinearWarmup

Scheduler for linearly decreasing learning rate with linear warmup.

__init__(optimizer: Optimizer, num_warmup_steps: int, **kwargs) → None

Initialize a learning rate scheduler with warmup.

Parameters:
  • optimizer (torch.optim.Optimizer) – Optimizer to adjust the learning rate for.

  • num_warmup_steps (int) – Number of warmup steps.
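
A minimal usage sketch, assuming no required arguments beyond those documented here; the model, optimizer, learning rate, and step counts are placeholders:

    import torch

    from lightning_ir.schedulers.lr_schedulers import LinearLRSchedulerWithLinearWarmup

    model = torch.nn.Linear(16, 1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    # Warm the learning rate up linearly over the first 100 steps,
    # then let it decrease linearly.
    scheduler = LinearLRSchedulerWithLinearWarmup(optimizer, num_warmup_steps=100)

    for _ in range(1_000):
        optimizer.step()   # update parameters first ...
        scheduler.step()   # ... then advance the schedule
    print(scheduler.get_last_lr())

Options controlling the decay (for example, its end point) are presumably forwarded through **kwargs to the parent schedulers; see LinearSchedulerWithLinearWarmup for what it accepts.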

Methods

get_last_lr() → list[float]

Return the last learning rate computed by the current scheduler.

get_lr()

Compute learning rate.

load_state_dict(state_dict)

Load the scheduler’s state.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().

state_dict()

Return the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.
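
Continuing the sketch above, the save/restore pattern this note describes might look like the following (the checkpoint path is arbitrary):

    import torch

    # Save the scheduler state together with the optimizer state, as the note advises.
    torch.save(
        {
            "optimizer": optimizer.state_dict(),
            "scheduler": scheduler.state_dict(),
        },
        "checkpoint.pt",
    )

    # Restore both together.
    checkpoint = torch.load("checkpoint.pt")
    optimizer.load_state_dict(checkpoint["optimizer"])
    scheduler.load_state_dict(checkpoint["scheduler"])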

step(epoch: int | None = None)

Advance the scheduler by one step and update the optimizer's learning rate.

value_lambda(current_step: int) → float

Lambda function for linearly decreasing values with linear warmup.

Parameters:

current_step (int) – Current step.

Returns:

Value at the current step.

Return type:

float
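
For intuition, a warmup-then-linear-decay lambda of this shape can be written as below. This is an illustrative reimplementation, not the library's exact formula; num_training_steps is a hypothetical name for the decay end point. The returned float multiplies the optimizer's base learning rate, as in torch.optim.lr_scheduler.LambdaLR:

    def linear_with_warmup(
        current_step: int,
        num_warmup_steps: int = 100,
        num_training_steps: int = 1_000,  # hypothetical end point of the decay
    ) -> float:
        # Ramp linearly from 0 to 1 during warmup ...
        if current_step < num_warmup_steps:
            return current_step / num_warmup_steps
        # ... then decay linearly from 1 at the end of warmup to 0 at num_training_steps.
        remaining = num_training_steps - current_step
        return max(0.0, remaining / (num_training_steps - num_warmup_steps))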