SpladeConfig
- class lightning_ir.models.splade.SpladeConfig(query_length: int = 32, doc_length: int = 512, similarity_function: Literal['cosine', 'dot'] = 'dot', sparsification: Literal['relu', 'relu_log'] | None = 'relu_log', query_pooling_strategy: Literal['first', 'mean', 'max', 'sum'] = 'max', doc_pooling_strategy: Literal['first', 'mean', 'max', 'sum'] = 'max', **kwargs)[source]
Bases: SingleVectorBiEncoderConfig
Configuration class for a SPLADE model.
- __init__(query_length: int = 32, doc_length: int = 512, similarity_function: Literal['cosine', 'dot'] = 'dot', sparsification: Literal['relu', 'relu_log'] | None = 'relu_log', query_pooling_strategy: Literal['first', 'mean', 'max', 'sum'] = 'max', doc_pooling_strategy: Literal['first', 'mean', 'max', 'sum'] = 'max', **kwargs) None[source]
A SPLADE model encodes queries and documents separately. Before computing the similarity score, the contextualized token embeddings are projected into a logit distribution over the vocabulary using a pre-trained masked language model (MLM) head. The logit distribution is then sparsified and aggregated to obtain a single embedding for the query and document.
- Parameters:
query_length (int) – Maximum query length. Defaults to 32.
doc_length (int) – Maximum document length. Defaults to 512.
similarity_function (Literal["cosine", "dot"]) – Similarity function to compute scores between query and document embeddings. Defaults to “dot”.
sparsification (Literal["relu", "relu_log"] | None) – Sparsification function to apply. Defaults to “relu_log”.
query_pooling_strategy (Literal["first", "mean", "max", "sum"]) – Pooling strategy for query embeddings. Defaults to “max”.
doc_pooling_strategy (Literal["first", "mean", "max", "sum"]) – Pooling strategy for document embeddings. Defaults to “max”.
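The sparsification and pooling steps described above can be sketched in plain Python. This is an illustrative toy example on a 4-word vocabulary, not the library's implementation: `relu_log` applies log(1 + relu(x)) element-wise to the MLM-head logits, and `max_pool` aggregates over the token axis to produce a single sparse embedding.

```python
import math

def relu_log(logits):
    """'relu_log' sparsification: log(1 + relu(x)), element-wise.
    Negative logits collapse to exactly 0, which makes the
    vocabulary-sized vectors sparse."""
    return [math.log1p(max(x, 0.0)) for x in logits]

def max_pool(token_logits):
    """'max' pooling: maximum over the token axis for each vocabulary
    dimension, yielding one embedding for the whole sequence."""
    return [max(col) for col in zip(*token_logits)]

# Toy MLM-head output: 3 tokens, vocabulary of size 4.
token_logits = [
    [2.0, -1.0, 0.5, -3.0],
    [0.0, 4.0, -2.0, 1.0],
    [1.0, -0.5, 3.0, 0.0],
]
sparsified = [relu_log(row) for row in token_logits]
embedding = max_pool(sparsified)
# embedding[i] = max over tokens of log1p(relu(logit)); entries whose
# logits were all non-positive would be exactly 0.
```

With "dot" as the similarity function, scores between such query and document embeddings reduce to a sparse dot product over the vocabulary.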
Methods
__init__([query_length, doc_length, ...]) – A SPLADE model encodes queries and documents separately.
Attributes
embedding_dim
model_type – Model type for a SPLADE model.
- backbone_model_type: str | None = None
Backbone model type for the configuration. Set by LightningIRModelClassFactory().
- classmethod from_pretrained(pretrained_model_name_or_path: str | Path, *args, **kwargs) LightningIRConfig
Loads the configuration from a pretrained model. Wraps the transformers.PretrainedConfig.from_pretrained method.
- Parameters:
pretrained_model_name_or_path (str | Path) – Pretrained model name or path.
- Returns:
Derived LightningIRConfig class.
- Return type:
LightningIRConfig
- Raises:
ValueError – If pretrained_model_name_or_path is not a Lightning IR model and no LightningIRConfig is passed.
- get_tokenizer_kwargs(Tokenizer: Type[LightningIRTokenizer]) Dict[str, Any]
Returns the keyword arguments for the tokenizer. This method is used to pass the configuration parameters to the tokenizer.
- Parameters:
Tokenizer (Type[LightningIRTokenizer]) – Class of the tokenizer to be used.
- Returns:
Keyword arguments for the tokenizer.
- Return type:
Dict[str, Any]
- to_dict() Dict[str, Any]
Overrides the transformers.PretrainedConfig.to_dict method to include the added arguments and the backbone model type.
- Returns:
Configuration dictionary.
- Return type:
Dict[str, Any]
- to_diff_dict() dict[str, Any]
Removes all attributes from the configuration that correspond to the default config attributes for better readability, while always retaining the config attribute from the class. Serializes to a Python dictionary.
- Returns:
Dictionary of all the attributes that make up this configuration instance.
- Return type:
dict[str, Any]
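The idea behind to_diff_dict can be sketched without the library: drop every attribute that still holds its default value, while always retaining a few identifying keys. The function and parameter names below (`diff_dict`, `always_keep`) are illustrative, not part of the Lightning IR API.

```python
def diff_dict(config, defaults, always_keep=("model_type",)):
    """Keep only attributes that differ from their defaults,
    plus keys that must always survive serialization."""
    return {
        key: value
        for key, value in config.items()
        if key in always_keep or defaults.get(key) != value
    }

defaults = {"model_type": "splade", "query_length": 32, "doc_length": 512}
config = {"model_type": "splade", "query_length": 16, "doc_length": 512}

minimal = diff_dict(config, defaults)
# → {'model_type': 'splade', 'query_length': 16}
# doc_length is dropped because it equals the default;
# model_type is retained unconditionally.
```

This keeps serialized configurations short and readable while still round-tripping to the same model class.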