L2Regularization
- class lightning_ir.loss.regularization.L2Regularization(query_weight: float = 0.0001, doc_weight: float = 0.0001)
Bases: RegularizationLossFunction

L2 regularization loss function for query and document embeddings.
L2 regularization, the penalty underlying ridge regression, adds a term to the loss function that is proportional to the squared magnitude of the model’s parameters (in this case, the query and document embeddings). This encourages the model to keep the embeddings small, which can help prevent overfitting by discouraging the model from relying too heavily on any single feature. The L2 penalty is differentiable and yields a smooth optimization landscape, making it a popular choice for regularization in machine learning models.
Originally proposed in: Ridge Regression: Biased Estimation for Nonorthogonal Problems
Methods

compute_loss(output) – Compute the L2 regularization loss.
- compute_loss(output: BiEncoderOutput) → torch.Tensor
Compute the L2 regularization loss.
- Parameters:
output (BiEncoderOutput) – The output from the model containing query and document embeddings.
- Returns:
The computed loss.
- Return type:
torch.Tensor
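As a rough illustration of what this loss computes, the following sketch applies a weighted squared-L2 penalty to query and document embedding tensors using plain PyTorch. The function name, tensor shapes, and the choice of reduction (mean over the batch) are assumptions for illustration; the exact reduction used inside lightning_ir may differ.

```python
import torch


def l2_regularization(
    query_embeddings: torch.Tensor,  # assumed shape: (batch, query_tokens, dim)
    doc_embeddings: torch.Tensor,    # assumed shape: (batch, doc_tokens, dim)
    query_weight: float = 1e-4,
    doc_weight: float = 1e-4,
) -> torch.Tensor:
    """Sketch of a weighted squared-L2 penalty on embeddings (illustrative)."""
    # Squared L2 norm of each embedding vector, averaged over all vectors.
    query_loss = query_embeddings.norm(p=2, dim=-1).pow(2).mean()
    doc_loss = doc_embeddings.norm(p=2, dim=-1).pow(2).mean()
    # Small default weights keep the penalty from dominating the ranking loss.
    return query_weight * query_loss + doc_weight * doc_loss
```

Because the penalty scales with the squared norm, gradients shrink embeddings proportionally to their magnitude, which is what produces the smooth optimization landscape mentioned above.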