RiNALMo
Pre-trained model on non-coding RNA (ncRNA) using a masked language modeling (MLM) objective.
Disclaimer
This is an UNOFFICIAL implementation of RiNALMo: General-Purpose RNA Language Models Can Generalize Well on Structure Prediction Tasks by Rafael Josip Penić et al.
The OFFICIAL repository of RiNALMo is at lbcb-sci/RiNALMo.
Tip
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
The team releasing RiNALMo did not write this model card, so this model card has been written by the MultiMolecule team.
Model Details
RiNALMo is a BERT-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means the model was trained on the raw nucleotides of RNA sequences only, with an automatic process generating inputs and labels from those sequences. Please refer to the Training Details section for more information on the training process.
Variants
Model Specification
| Variants | Num Layers | Hidden Size | Num Heads | Intermediate Size | Num Parameters (M) | FLOPs (G) | MACs (G) | Max Num Tokens |
|---|---|---|---|---|---|---|---|---|
| RiNALMo-Giga | 33 | 1280 | 20 | 5120 | 650.88 | 168.92 | 84.43 | 1022 |
| RiNALMo-Mega | 30 | 640 | 20 | 2560 | 148.04 | 39.03 | 19.5 | 1022 |
| RiNALMo-Micro | 12 | 480 | 20 | 1920 | 33.48 | 8.88 | 4.44 | 1022 |
Links
Usage
The model file depends on the multimolecule library. You can install it using pip:
```bash
pip install multimolecule
```
Direct Use
Masked Language Modeling
You can use this model directly with a pipeline for masked language modeling:
```python
import multimolecule  # you must import multimolecule to register models
from transformers import pipeline

predictor = pipeline("fill-mask", model="multimolecule/rinalmo-giga")
output = predictor("gguc<mask>cucugguuagaccagaucugagccu")
```
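The pipeline returns the top candidates for the masked position as a list of dictionaries, following the standard transformers fill-mask output format:

```python
for candidate in output:
    # each candidate carries the predicted token and its confidence score
    print(candidate["token_str"], candidate["score"])
```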
Downstream Use
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, RiNALMoModel

tokenizer = RnaTokenizer.from_pretrained("multimolecule/rinalmo-giga")
model = RiNALMoModel.from_pretrained("multimolecule/rinalmo-giga")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
output = model(**input)
```
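output.last_hidden_state holds one 1280-dimensional vector per token (including the <cls> and <eos> tokens) for the giga variant. One common way to reduce this to a single sequence-level embedding is masked mean pooling; a minimal sketch:

```python
# Average the token embeddings, ignoring padding positions.
mask = input["attention_mask"].unsqueeze(-1)
embedding = (output.last_hidden_state * mask).sum(1) / mask.sum(1)  # shape: [1, 1280]
```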
Sequence Classification / Regression
Note
This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, RiNALMoForSequencePrediction

tokenizer = RnaTokenizer.from_pretrained("multimolecule/rinalmo-giga")
model = RiNALMoForSequencePrediction.from_pretrained("multimolecule/rinalmo-giga")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])
output = model(**input, labels=label)
```
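For regression, pass a floating-point label instead; this sketch assumes the head follows the transformers convention of inferring the problem type from the label dtype:

```python
label = torch.tensor([[0.5]])  # continuous target; assumed to trigger regression loss
output = model(**input, labels=label)
```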
Token Classification / Regression
Note
This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.
Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, RiNALMoForTokenPrediction

tokenizer = RnaTokenizer.from_pretrained("multimolecule/rinalmo-giga")
model = RiNALMoForTokenPrediction.from_pretrained("multimolecule/rinalmo-giga")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
Contact Classification / Regression
Note
This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, RiNALMoForContactPrediction

tokenizer = RnaTokenizer.from_pretrained("multimolecule/rinalmo-giga")
model = RiNALMoForContactPrediction.from_pretrained("multimolecule/rinalmo-giga")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
Training Details
RiNALMo used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
Training Data
The RiNALMo model was pre-trained on a cocktail of databases including RNAcentral, Rfam, Ensembl Genome Browser, and Nucleotide.
The training data contains 36 million unique ncRNA sequences.
To ensure sequence diversity in each training batch, RiNALMo clustered the sequences with MMSeqs2 into 17 million clusters and then sampled each sequence in the batch from a different cluster.
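A minimal sketch of this sampling strategy (the names and data structures are illustrative, not the actual training code):

```python
import random

def sample_batch(clusters: dict[str, list[str]], batch_size: int) -> list[str]:
    """Draw each sequence in a batch from a different MMSeqs2 cluster."""
    chosen = random.sample(list(clusters), k=batch_size)
    return [random.choice(clusters[cluster]) for cluster in chosen]

batch = sample_batch({"c1": ["ACGU", "ACGG"], "c2": ["UUUU"], "c3": ["GGGC"]}, batch_size=2)
```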
RiNALMo preprocessed all tokens by replacing “U”s with “T”s.
Note that during model conversion, “T” is replaced with “U”. RnaTokenizer will convert “T”s to “U”s for you; you may disable this behaviour by passing replace_T_with_U=False.
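This behaviour can be checked directly with the tokenizer:

```python
from multimolecule import RnaTokenizer

tokenizer = RnaTokenizer.from_pretrained("multimolecule/rinalmo-giga")
assert tokenizer("acgt")["input_ids"] == tokenizer("acgu")["input_ids"]  # "t" is read as "u"

tokenizer = RnaTokenizer.from_pretrained("multimolecule/rinalmo-giga", replace_T_with_U=False)
tokenizer("acgt")["input_ids"]  # "t" now falls outside the RNA alphabet and maps to <unk>
```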
Training Procedure
Preprocessing
RiNALMo used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by <mask>.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
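Concretely, the corruption can be sketched as follows (a minimal illustration of the 80/10/10 scheme above, not the actual pre-training pipeline; a real implementation would also avoid masking special tokens):

```python
import torch

def mask_tokens(input_ids: torch.Tensor, mask_token_id: int = 4, vocab_size: int = 26,
                mlm_probability: float = 0.15) -> tuple[torch.Tensor, torch.Tensor]:
    """BERT-style masking: returns corrupted inputs and MLM labels."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    selected = torch.rand(input_ids.shape) < mlm_probability  # pick 15% of tokens
    labels[~selected] = -100  # loss is computed on selected tokens only
    # 80% of the selected tokens become <mask>
    masked = selected & (torch.rand(input_ids.shape) < 0.8)
    input_ids[masked] = mask_token_id
    # 10% become a random token (this sketch may occasionally re-draw the original)
    randomized = selected & ~masked & (torch.rand(input_ids.shape) < 0.5)
    input_ids[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]
    # the remaining 10% are left unchanged
    return input_ids, labels
```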
Pre-training
The model was trained on 7 NVIDIA A100 GPUs, each with 80GiB of memory.
- Batch Size: 1344
- Epochs: 6
- Learning rate: 5e-5
- Learning rate scheduler: Cosine
- Learning rate warm-up: 2,000 steps
- Learning rate minimum: 1e-5
- Dropout: 0.1
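This schedule can be approximated with a linear warm-up followed by cosine decay to the stated minimum. A minimal sketch (the total step count here is illustrative; the actual pre-training script may differ):

```python
import math

import torch

def lr_lambda(step: int, warmup: int = 2_000, total: int = 100_000,
              lr: float = 5e-5, lr_min: float = 1e-5) -> float:
    """Multiplier on the base learning rate: linear warm-up, then cosine decay to lr_min."""
    if step < warmup:
        return step / warmup
    progress = min((step - warmup) / max(1, total - warmup), 1.0)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return (lr_min + (lr - lr_min) * cosine) / lr

model = torch.nn.Linear(8, 8)  # stand-in for the actual RiNALMo model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```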
Citation
```bibtex
@article{Penic2025-qf,
  title     = "{RiNALMo}: general-purpose {RNA} language models can generalize
               well on structure prediction tasks",
  author    = "Peni{\'c}, Rafael Josip and Vla{\v s}i{\'c}, Tin and Huber,
               Roland G and Wan, Yue and {\v S}iki{\'c}, Mile",
  abstract  = "While RNA has recently been recognized as an interesting
               small-molecule drug target, many challenges remain to be
               addressed before we take full advantage of it. This emphasizes
               the necessity to improve our understanding of its structures and
               functions. Over the years, sequencing technologies have produced
               an enormous amount of unlabeled RNA data, which hides a huge
               potential. Motivated by the successes of protein language
               models, we introduce RiboNucleic Acid Language Model (RiNALMo)
               to unveil the hidden code of RNA. RiNALMo is the largest RNA
               language model to date, with 650M parameters pre-trained on 36M
               non-coding RNA sequences from several databases. It can extract
               hidden knowledge and capture the underlying structure
               information implicitly embedded within the RNA sequences.
               RiNALMo achieves state-of-the-art results on several downstream
               tasks. Notably, we show that its generalization capabilities
               overcome the inability of other deep learning methods for
               secondary structure prediction to generalize on unseen RNA
               families.",
  journal   = "Nature Communications",
  publisher = "Springer Science and Business Media LLC",
  volume    = 16,
  number    = 1,
  pages     = "5671",
  month     = jul,
  year      = 2025,
  copyright = "https://creativecommons.org/licenses/by-nc-nd/4.0",
  language  = "en"
}
```
Note
The artifacts distributed in this repository are part of the MultiMolecule project.
If you use MultiMolecule in your research, you must cite the MultiMolecule project as follows:
```bibtex
@software{chen_2024_12638419,
  author    = {Chen, Zhiyuan and Zhu, Sophia Y.},
  title     = {MultiMolecule},
  doi       = {10.5281/zenodo.12638419},
  publisher = {Zenodo},
  url       = {https://doi.org/10.5281/zenodo.12638419},
  year      = 2024,
  month     = may,
  day       = 4
}
```
Please use the GitHub issues of MultiMolecule for any questions or comments on the model card.
Please contact the authors of the RiNALMo paper for questions or comments on the paper/model.
License
This model is licensed under the GNU Affero General Public License.
For additional terms and clarifications, please refer to our License FAQ.
```
SPDX-License-Identifier: AGPL-3.0-or-later
```
multimolecule.models.rinalmo
RnaTokenizer
Bases: Tokenizer
Tokenizer for RNA sequences.
Parameters:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `alphabet` | `Alphabet \| str \| List[str] \| None` | Alphabet to use for tokenization. If `None`, the standard RNA alphabet will be used. If a string, it should correspond to the name of a predefined alphabet: `standard`, `extended`, `streamline`, or `nucleobase`. If an alphabet or a list of characters, that specific alphabet will be used. | `None` |
| `nmers` | `int` | Size of kmer to tokenize. | `1` |
| `codon` | `bool` | Whether to tokenize into codons. | `False` |
| `replace_T_with_U` | `bool` | Whether to replace T with U. | `True` |
| `do_upper_case` | `bool` | Whether to convert input to uppercase. | `True` |
Examples:
```python
>>> from multimolecule import RnaTokenizer
>>> tokenizer = RnaTokenizer()
>>> tokenizer('<pad><cls><eos><unk><mask><null>ACGUNRYSWKMBDHV.X*-I')["input_ids"]
[1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 2]
>>> tokenizer('acgu')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer = RnaTokenizer(replace_T_with_U=False)
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 3, 2]
>>> tokenizer = RnaTokenizer(nmers=3)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 17, 64, 49, 96, 84, 22, 2]
>>> tokenizer = RnaTokenizer(codon=True)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 49, 22, 2]
>>> tokenizer('uagcuuauca')["input_ids"]
Traceback (most recent call last):
ValueError: length of input sequence must be a multiple of 3 for codon tokenization, but got 10
```
Source code in multimolecule/tokenisers/rna/tokenization_rna.py
```python
class RnaTokenizer(Tokenizer):
    """
    Tokenizer for RNA sequences.

    Args:
        alphabet: alphabet to use for tokenization.

            - If is `None`, the standard RNA alphabet will be used.
            - If is a `string`, it should correspond to the name of a predefined alphabet. The options include
                + `standard`
                + `extended`
                + `streamline`
                + `nucleobase`
            - If is an alphabet or a list of characters, that specific alphabet will be used.
        nmers: Size of kmer to tokenize.
        codon: Whether to tokenize into codons.
        replace_T_with_U: Whether to replace T with U.
        do_upper_case: Whether to convert input to uppercase.

    Examples:
        >>> from multimolecule import RnaTokenizer
        >>> tokenizer = RnaTokenizer()
        >>> tokenizer('<pad><cls><eos><unk><mask><null>ACGUNRYSWKMBDHV.X*-I')["input_ids"]
        [1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 2]
        >>> tokenizer('acgu')["input_ids"]
        [1, 6, 7, 8, 9, 2]
        >>> tokenizer('acgt')["input_ids"]
        [1, 6, 7, 8, 9, 2]
        >>> tokenizer = RnaTokenizer(replace_T_with_U=False)
        >>> tokenizer('acgt')["input_ids"]
        [1, 6, 7, 8, 3, 2]
        >>> tokenizer = RnaTokenizer(nmers=3)
        >>> tokenizer('uagcuuauc')["input_ids"]
        [1, 83, 17, 64, 49, 96, 84, 22, 2]
        >>> tokenizer = RnaTokenizer(codon=True)
        >>> tokenizer('uagcuuauc')["input_ids"]
        [1, 83, 49, 22, 2]
        >>> tokenizer('uagcuuauca')["input_ids"]
        Traceback (most recent call last):
        ValueError: length of input sequence must be a multiple of 3 for codon tokenization, but got 10
    """

    model_input_names = ["input_ids", "attention_mask"]

    def __init__(
        self,
        alphabet: Alphabet | str | List[str] | None = None,
        nmers: int = 1,
        codon: bool = False,
        replace_T_with_U: bool = True,
        do_upper_case: bool = True,
        additional_special_tokens: List | Tuple | None = None,
        **kwargs,
    ):
        if codon and (nmers > 1 and nmers != 3):
            raise ValueError("Codon and nmers cannot be used together.")
        if codon:
            nmers = 3  # set to 3 to get correct vocab
        if not isinstance(alphabet, Alphabet):
            alphabet = get_alphabet(alphabet, nmers=nmers)
        super().__init__(
            alphabet=alphabet,
            nmers=nmers,
            codon=codon,
            replace_T_with_U=replace_T_with_U,
            do_upper_case=do_upper_case,
            additional_special_tokens=additional_special_tokens,
            **kwargs,
        )
        self.replace_T_with_U = replace_T_with_U
        self.nmers = nmers
        self.codon = codon

    def _tokenize(self, text: str, **kwargs):
        if self.do_upper_case:
            text = text.upper()
        if self.replace_T_with_U:
            text = text.replace("T", "U")
        if self.codon:
            if len(text) % 3 != 0:
                raise ValueError(
                    f"length of input sequence must be a multiple of 3 for codon tokenization, but got {len(text)}"
                )
            return [text[i : i + 3] for i in range(0, len(text), 3)]
        if self.nmers > 1:
            return [text[i : i + self.nmers] for i in range(len(text) - self.nmers + 1)]  # noqa: E203
        return list(text)
```
RiNALMoConfig
Bases: PreTrainedConfig
This is the configuration class to store the configuration of a RiNALMoModel.
It is used to instantiate a RiNALMo model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the RiNALMo
lbcb-sci/RiNALMo architecture.
Configuration objects inherit from PreTrainedConfig and can be used to
control the model outputs. Read the documentation from PreTrainedConfig
for more information.
Parameters:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `vocab_size` | `int` | Vocabulary size of the RiNALMo model. Defines the number of different tokens that can be represented by the `input_ids` passed when calling `RiNALMoModel`. | `26` |
| `hidden_size` | `int` | Dimensionality of the encoder layers and the pooler layer. | `1280` |
| `num_hidden_layers` | `int` | Number of hidden layers in the Transformer encoder. | `33` |
| `num_attention_heads` | `int` | Number of attention heads for each attention layer in the Transformer encoder. | `20` |
| `intermediate_size` | `int` | Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. | `5120` |
| `hidden_act` | `str` | The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. | `'gelu'` |
| `hidden_dropout` | `float` | The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. | `0.1` |
| `attention_dropout` | `float` | The dropout ratio for the attention probabilities. | `0.1` |
| `max_position_embeddings` | `int` | The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). | `1024` |
| `initializer_range` | `float` | The standard deviation of the truncated_normal_initializer for initializing all weight matrices. | `0.02` |
| `layer_norm_eps` | `float` | The epsilon used by the layer normalization layers. | `1e-05` |
| `position_embedding_type` | `str` | Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`, `"rotary"`. | `'rotary'` |
| `is_decoder` | `bool` | Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. | `False` |
| `use_cache` | `bool` | Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. | `True` |
| `emb_layer_norm_before` | `bool` | Whether to apply layer normalization after embeddings but before the main stem of the network. | `True` |
| `learnable_beta` | `bool` | Whether to make the swish-gate beta parameter learnable. | `True` |
| `token_dropout` | `bool` | When this is enabled, masked tokens are treated as if they had been dropped out by input dropout. | `True` |
| `head` | `HeadConfig \| None` | The configuration of the head. | `None` |
| `lm_head` | `MaskedLMHeadConfig \| None` | The configuration of the masked language model head. | `None` |
| `add_cross_attention` | `bool` | Whether to add cross-attention layers when the model is used as a decoder. | `False` |
Examples:
```python
>>> from multimolecule import RiNALMoConfig, RiNALMoModel
>>> # Initializing a RiNALMo multimolecule/rinalmo style configuration
>>> configuration = RiNALMoConfig()
>>> # Initializing a model (with random weights) from the multimolecule/rinalmo style configuration
>>> model = RiNALMoModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Source code in multimolecule/models/rinalmo/configuration_rinalmo.py
```python
class RiNALMoConfig(PreTrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`RiNALMoModel`][multimolecule.models.RiNALMoModel].
    It is used to instantiate a RiNALMo model according to the specified arguments, defining the model architecture.
    Instantiating a configuration with the defaults will yield a similar configuration to that of the RiNALMo
    [lbcb-sci/RiNALMo](https://github.com/lbcb-sci/RiNALMo) architecture.

    Configuration objects inherit from [`PreTrainedConfig`][multimolecule.models.PreTrainedConfig] and can be used to
    control the model outputs. Read the documentation from [`PreTrainedConfig`][multimolecule.models.PreTrainedConfig]
    for more information.

    Args:
        vocab_size:
            Vocabulary size of the RiNALMo model. Defines the number of different tokens that can be represented by the
            `input_ids` passed when calling [`RiNALMoModel`].
        hidden_size:
            Dimensionality of the encoder layers and the pooler layer.
        num_hidden_layers:
            Number of hidden layers in the Transformer encoder.
        num_attention_heads:
            Number of attention heads for each attention layer in the Transformer encoder.
        intermediate_size:
            Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
        hidden_act:
            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
            `"relu"`, `"silu"` and `"gelu_new"` are supported.
        hidden_dropout:
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        attention_dropout:
            The dropout ratio for the attention probabilities.
        max_position_embeddings:
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048).
        initializer_range:
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        layer_norm_eps:
            The epsilon used by the layer normalization layers.
        position_embedding_type:
            Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`,
            `"rotary"`.

            For positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
            [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
            For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
            with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
        is_decoder:
            Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
        use_cache:
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        emb_layer_norm_before:
            Whether to apply layer normalization after embeddings but before the main stem of the network.
        learnable_beta:
            Whether to make the swish-gate beta parameter learnable.
        token_dropout:
            When this is enabled, masked tokens are treated as if they had been dropped out by input dropout.
        head:
            The configuration of the head.
        lm_head:
            The configuration of the masked language model head.
        add_cross_attention:
            Whether to add cross-attention layers when the model is used as a decoder.

    Examples:
        >>> from multimolecule import RiNALMoConfig, RiNALMoModel
        >>> # Initializing a RiNALMo multimolecule/rinalmo style configuration
        >>> configuration = RiNALMoConfig()
        >>> # Initializing a model (with random weights) from the multimolecule/rinalmo style configuration
        >>> model = RiNALMoModel(configuration)
        >>> # Accessing the model configuration
        >>> configuration = model.config
    """

    model_type = "rinalmo"

    def __init__(
        self,
        vocab_size: int = 26,
        hidden_size: int = 1280,
        num_hidden_layers: int = 33,
        num_attention_heads: int = 20,
        intermediate_size: int = 5120,
        hidden_act: str = "gelu",
        hidden_dropout: float = 0.1,
        attention_dropout: float = 0.1,
        max_position_embeddings: int = 1024,
        initializer_range: float = 0.02,
        layer_norm_eps: float = 1e-5,
        position_embedding_type: str = "rotary",
        is_decoder: bool = False,
        use_cache: bool = True,
        emb_layer_norm_before: bool = True,
        learnable_beta: bool = True,
        token_dropout: bool = True,
        head: HeadConfig | None = None,
        lm_head: MaskedLMHeadConfig | None = None,
        add_cross_attention: bool = False,
        **kwargs,
    ):
        super().__init__(**kwargs)

        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.intermediate_size = intermediate_size
        self.hidden_act = hidden_act
        self.hidden_dropout = hidden_dropout
        self.attention_dropout = attention_dropout
        self.max_position_embeddings = max_position_embeddings
        self.initializer_range = initializer_range
        self.layer_norm_eps = layer_norm_eps
        self.position_embedding_type = position_embedding_type
        self.is_decoder = is_decoder
        self.use_cache = use_cache
        self.learnable_beta = learnable_beta
        self.token_dropout = token_dropout
        self.head = HeadConfig(**head) if head is not None else None
        self.lm_head = MaskedLMHeadConfig(**lm_head) if lm_head is not None else None
        self.emb_layer_norm_before = emb_layer_norm_before
        self.add_cross_attention = add_cross_attention
```
RiNALMoForContactPrediction
Bases: RiNALMoPreTrainedModel
Examples:
```python
>>> import torch
>>> from multimolecule import RiNALMoConfig, RiNALMoForContactPrediction, RnaTokenizer
>>> config = RiNALMoConfig()
>>> model = RiNALMoForContactPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 5, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
```
Source code in multimolecule/models/rinalmo/modeling_rinalmo.py
```python
class RiNALMoForContactPrediction(RiNALMoPreTrainedModel):
    """
    Examples:
        >>> import torch
        >>> from multimolecule import RiNALMoConfig, RiNALMoForContactPrediction, RnaTokenizer
        >>> config = RiNALMoConfig()
        >>> model = RiNALMoForContactPrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
        >>> output["logits"].shape
        torch.Size([1, 5, 5, 1])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
    """

    def __init__(self, config: RiNALMoConfig):
        super().__init__(config)
        self.model = RiNALMoModel(config, add_pooling_layer=False)
        self.contact_head = ContactPredictionHead(config)
        self.head_config = self.contact_head.config
        self.require_attentions = self.contact_head.require_attentions

        # Initialize weights and apply final processing
        self.post_init()

    @can_return_tuple
    def forward(
        self,
        input_ids: Tensor | NestedTensor | None = None,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        **kwargs: Unpack[TransformersKwargs],
    ) -> Tuple[Tensor, ...] | ContactPredictorOutput:
        if self.require_attentions:
            output_attentions = kwargs.get("output_attentions", self.config.output_attentions)
            if output_attentions is False:
                warn("output_attentions must be True since prediction head requires attentions.")
            kwargs["output_attentions"] = True
        outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            inputs_embeds=inputs_embeds,
            return_dict=True,
            **kwargs,
        )
        output = self.contact_head(outputs, attention_mask, input_ids, labels)
        logits, loss = output.logits, output.loss

        return ContactPredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
```
RiNALMoForMaskedLM
Bases: RiNALMoPreTrainedModel
Examples:
```python
>>> import torch
>>> from multimolecule import RiNALMoConfig, RiNALMoForMaskedLM, RnaTokenizer
>>> config = RiNALMoConfig()
>>> model = RiNALMoForMaskedLM(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=input["input_ids"])
>>> output["logits"].shape
torch.Size([1, 7, 26])
>>> output["loss"]
tensor(..., grad_fn=<NllLossBackward0>)
```
Source code in multimolecule/models/rinalmo/modeling_rinalmo.py
```python
class RiNALMoForMaskedLM(RiNALMoPreTrainedModel):
    """
    Examples:
        >>> import torch
        >>> from multimolecule import RiNALMoConfig, RiNALMoForMaskedLM, RnaTokenizer
        >>> config = RiNALMoConfig()
        >>> model = RiNALMoForMaskedLM(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=input["input_ids"])
        >>> output["logits"].shape
        torch.Size([1, 7, 26])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<NllLossBackward0>)
    """

    _tied_weights_keys = {
        "lm_head.decoder.weight": "model.embeddings.word_embeddings.weight",
        "lm_head.decoder.bias": "lm_head.bias",
    }

    def __init__(self, config: RiNALMoConfig):
        super().__init__(config)
        if config.is_decoder:
            warn(
                "If you want to use `RiNALMoForMaskedLM` make sure `config.is_decoder=False` for "
                "bi-directional self-attention."
            )
        self.model = RiNALMoModel(config, add_pooling_layer=False)
        self.lm_head = MaskedLMHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def get_output_embeddings(self):
        return self.lm_head.decoder

    def set_output_embeddings(self, embeddings):
        self.lm_head.decoder = embeddings
        if hasattr(self.lm_head, "bias"):
            self.lm_head.bias = embeddings.bias

    @can_return_tuple
    def forward(
        self,
        input_ids: Tensor | NestedTensor | None = None,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        encoder_hidden_states: Tensor | None = None,
        encoder_attention_mask: Tensor | None = None,
        labels: Tensor | None = None,
        **kwargs: Unpack[TransformersKwargs],
    ) -> Tuple[Tensor, ...] | MaskedLMOutput:
        outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            inputs_embeds=inputs_embeds,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            return_dict=True,
            **kwargs,
        )
        output = self.lm_head(outputs, labels)
        logits, loss = output.logits, output.loss

        return MaskedLMOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
```
RiNALMoForSecondaryStructurePrediction
Bases: RiNALMoForMaskedLM
Examples:
```python
>>> import torch
>>> from multimolecule import RiNALMoConfig, RiNALMoForSecondaryStructurePrediction, RnaTokenizer
>>> config = RiNALMoConfig()
>>> model = RiNALMoForSecondaryStructurePrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input)
>>> output["logits"].shape
torch.Size([1, 5, 5, 1])
```
Source code in multimolecule/models/rinalmo/modeling_rinalmo.py
```python
class RiNALMoForSecondaryStructurePrediction(RiNALMoForMaskedLM):
    """
    Examples:
        >>> import torch
        >>> from multimolecule import RiNALMoConfig, RiNALMoForSecondaryStructurePrediction, RnaTokenizer
        >>> config = RiNALMoConfig()
        >>> model = RiNALMoForSecondaryStructurePrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input)
        >>> output["logits"].shape
        torch.Size([1, 5, 5, 1])
    """

    def __init__(self, config: RiNALMoConfig):
        super().__init__(config)
        self.model = RiNALMoModel(config, add_pooling_layer=False)
        self.ss_head = RiNALMoSecondaryStructurePredictionHead(config)
        self.require_attentions = self.ss_head.require_attentions

        # Initialize weights and apply final processing
        self.post_init()

    @can_return_tuple
    def forward(  # type: ignore[override]
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        encoder_hidden_states: Tensor | None = None,
        encoder_attention_mask: Tensor | None = None,
        labels: Tensor | None = None,
        **kwargs: Unpack[TransformersKwargs],
    ) -> Tuple[Tensor, ...] | ContactPredictorOutput:
        if self.require_attentions:
            output_attentions = kwargs.get("output_attentions", self.config.output_attentions)
            if output_attentions is False:
                warn("output_attentions must be True since prediction head requires attentions.")
            kwargs["output_attentions"] = True
        outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            inputs_embeds=inputs_embeds,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            return_dict=True,
            **kwargs,
        )
        output = self.ss_head(outputs, attention_mask, input_ids, labels=labels)
        logits, loss = output.logits, output.loss

        return ContactPredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
```
RiNALMoForSequencePrediction
Bases: RiNALMoPreTrainedModel
Examples:
```python
>>> import torch
>>> from multimolecule import RiNALMoConfig, RiNALMoForSequencePrediction, RnaTokenizer
>>> config = RiNALMoConfig()
>>> model = RiNALMoForSequencePrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.tensor([[1]]))
>>> output["logits"].shape
torch.Size([1, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
```
Source code in multimolecule/models/rinalmo/modeling_rinalmo.py
```python
class RiNALMoForSequencePrediction(RiNALMoPreTrainedModel):
    """
    Examples:
        >>> import torch
        >>> from multimolecule import RiNALMoConfig, RiNALMoForSequencePrediction, RnaTokenizer
        >>> config = RiNALMoConfig()
        >>> model = RiNALMoForSequencePrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=torch.tensor([[1]]))
        >>> output["logits"].shape
        torch.Size([1, 1])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
    """

    def __init__(self, config: RiNALMoConfig):
        super().__init__(config)
        self.model = RiNALMoModel(config)
        self.sequence_head = SequencePredictionHead(config)
        self.head_config = self.sequence_head.config

        # Initialize weights and apply final processing
        self.post_init()

    @can_return_tuple
    def forward(
        self,
        input_ids: Tensor | NestedTensor | None = None,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        **kwargs: Unpack[TransformersKwargs],
    ) -> Tuple[Tensor, ...] | SequencePredictorOutput:
        outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            inputs_embeds=inputs_embeds,
            return_dict=True,
            **kwargs,
        )
        output = self.sequence_head(outputs, labels)
        logits, loss = output.logits, output.loss

        return SequencePredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
```
RiNALMoForTokenPrediction
Bases: RiNALMoPreTrainedModel
Examples:
```python
>>> import torch
>>> from multimolecule import RiNALMoConfig, RiNALMoForTokenPrediction, RnaTokenizer
>>> config = RiNALMoConfig()
>>> model = RiNALMoForTokenPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
```
Source code in multimolecule/models/rinalmo/modeling_rinalmo.py
```python
class RiNALMoForTokenPrediction(RiNALMoPreTrainedModel):
    """
    Examples:
        >>> import torch
        >>> from multimolecule import RiNALMoConfig, RiNALMoForTokenPrediction, RnaTokenizer
        >>> config = RiNALMoConfig()
        >>> model = RiNALMoForTokenPrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=torch.randint(2, (1, 5)))
        >>> output["logits"].shape
        torch.Size([1, 5, 1])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
    """

    def __init__(self, config: RiNALMoConfig):
        super().__init__(config)
        self.model = RiNALMoModel(config, add_pooling_layer=False)
        self.token_head = TokenPredictionHead(config)
        self.head_config = self.token_head.config

        # Initialize weights and apply final processing
        self.post_init()

    @can_return_tuple
    def forward(
        self,
        input_ids: Tensor | NestedTensor | None = None,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        **kwargs: Unpack[TransformersKwargs],
    ) -> Tuple[Tensor, ...] | TokenPredictorOutput:
        outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            inputs_embeds=inputs_embeds,
            return_dict=True,
            **kwargs,
        )
        output = self.token_head(outputs, attention_mask, input_ids, labels)
        logits, loss = output.logits, output.loss

        return TokenPredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
```
RiNALMoModel
Bases: RiNALMoPreTrainedModel
Examples:
```python
>>> import torch
>>> from multimolecule import RiNALMoConfig, RiNALMoModel, RnaTokenizer
>>> config = RiNALMoConfig()
>>> model = RiNALMoModel(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input)
>>> output["last_hidden_state"].shape
torch.Size([1, 7, 1280])
>>> output["pooler_output"].shape
torch.Size([1, 1280])
```
Source code in multimolecule/models/rinalmo/modeling_rinalmo.py
```python
class RiNALMoModel(RiNALMoPreTrainedModel):
    """
    Examples:
        >>> import torch
        >>> from multimolecule import RiNALMoConfig, RiNALMoModel, RnaTokenizer
        >>> config = RiNALMoConfig()
        >>> model = RiNALMoModel(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input)
        >>> output["last_hidden_state"].shape
        torch.Size([1, 7, 1280])
        >>> output["pooler_output"].shape
        torch.Size([1, 1280])
    """

    def __init__(self, config: RiNALMoConfig, add_pooling_layer: bool = True):
        super().__init__(config)
        self.pad_token_id = config.pad_token_id
        self.gradient_checkpointing = False
        self.embeddings = RiNALMoEmbeddings(config)
        self.encoder = RiNALMoEncoder(config)
        self.pooler = RiNALMoPooler(config) if add_pooling_layer else None

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.embeddings.word_embeddings

    def set_input_embeddings(self, value):
        self.embeddings.word_embeddings = value

    @check_model_inputs
    def forward(
        self,
        input_ids: Tensor | NestedTensor | None = None,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        encoder_hidden_states: Tensor | None = None,
        encoder_attention_mask: Tensor | None = None,
        past_key_values: Cache | None = None,
        use_cache: bool | None = None,
        cache_position: Tensor | None = None,
        **kwargs: Unpack[TransformersKwargs],
    ) -> Tuple[Tensor, ...] | BaseModelOutputWithPoolingAndCrossAttentions:
        r"""
        Args:
            encoder_hidden_states:
                Shape: `(batch_size, sequence_length, hidden_size)`

                Sequence of hidden-states at the output of the last layer of the encoder. Used in the
                cross-attention if the model is configured as a decoder.
            encoder_attention_mask:
                Shape: `(batch_size, sequence_length)`

                Mask to avoid performing attention on the padding token indices of the encoder input. This mask
                is used in the cross-attention if the model is configured as a decoder. Mask values selected in
                `[0, 1]`:

                - 1 for tokens that are **not masked**,
                - 0 for tokens that are **masked**.
            past_key_values:
                Tuple of length `config.n_layers` with each tuple having 4 tensors of shape
                `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`

                Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up
                decoding.

                If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids`
                (those that don't have their past key value states given to this model) of shape
                `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
            use_cache:
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up
                decoding (see `past_key_values`).
        """
        if self.config.is_decoder:
            use_cache = use_cache if use_cache is not None else self.config.use_cache
        else:
            use_cache = False

        if use_cache and past_key_values is None:
            past_key_values = (
                EncoderDecoderCache(DynamicCache(config=self.config), DynamicCache(config=self.config))
                if encoder_hidden_states is not None or self.config.is_encoder_decoder
                else DynamicCache(config=self.config)
            )

        if isinstance(input_ids, NestedTensor) and attention_mask is None:
            attention_mask = input_ids.mask
        if isinstance(inputs_embeds, NestedTensor) and attention_mask is None:
            attention_mask = inputs_embeds.mask
        if (input_ids is None) ^ (inputs_embeds is not None):
            raise ValueError("You must specify exactly one of input_ids or inputs_embeds")

        if input_ids is not None:
            device = input_ids.device
            seq_length = input_ids.shape[1]
        else:
            device = inputs_embeds.device  # type: ignore[union-attr]
            seq_length = inputs_embeds.shape[1]  # type: ignore[union-attr]

        # past_key_values_length
        past_key_values_length = past_key_values.get_seq_length() if past_key_values is not None else 0

        if cache_position is None:
            cache_position = torch.arange(past_key_values_length, past_key_values_length + seq_length, device=device)

        if attention_mask is None and input_ids is not None and self.pad_token_id is not None:
            attention_mask = input_ids.ne(self.pad_token_id)

        embedding_output = self.embeddings(
            input_ids=input_ids,
            position_ids=position_ids,
            attention_mask=attention_mask,
            inputs_embeds=inputs_embeds,
            past_key_values_length=past_key_values_length,
        )

        attention_mask, encoder_attention_mask = self._create_attention_masks(
            attention_mask=attention_mask,
            encoder_attention_mask=encoder_attention_mask,
            embedding_output=embedding_output,
            encoder_hidden_states=encoder_hidden_states,
            cache_position=cache_position,
            past_key_values=past_key_values,
        )

        encoder_outputs = self.encoder(
            embedding_output,
            attention_mask,
            encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            past_key_values=past_key_values,
            use_cache=use_cache,
            cache_position=cache_position,
            position_ids=position_ids,
            **kwargs,
        )
        sequence_output = encoder_outputs.last_hidden_state
        pooled_output = self.pooler(sequence_output) if self.pooler is not None else None

        return BaseModelOutputWithPoolingAndCrossAttentions(
            last_hidden_state=sequence_output,
            pooler_output=pooled_output,
            past_key_values=encoder_outputs.past_key_values,
        )

    def _create_attention_masks(
        self,
        attention_mask,
        encoder_attention_mask,
        embedding_output,
        encoder_hidden_states,
        cache_position,
        past_key_values,
    ):
        if self.config.is_decoder:
            attention_mask = create_causal_mask(
                config=self.config,
                input_embeds=embedding_output,
                attention_mask=attention_mask,
                cache_position=cache_position,
                past_key_values=past_key_values,
            )
        else:
            attention_mask = create_bidirectional_mask(
                config=self.config, input_embeds=embedding_output, attention_mask=attention_mask
            )

        if encoder_attention_mask is not None:
            encoder_attention_mask = create_bidirectional_mask(
                config=self.config,
                input_embeds=embedding_output,
                attention_mask=encoder_attention_mask,
                encoder_hidden_states=encoder_hidden_states,
            )
        return attention_mask, encoder_attention_mask
```
forward
```python
forward(
    input_ids: Tensor | NestedTensor | None = None,
    attention_mask: Tensor | None = None,
    position_ids: Tensor | None = None,
    inputs_embeds: Tensor | NestedTensor | None = None,
    encoder_hidden_states: Tensor | None = None,
    encoder_attention_mask: Tensor | None = None,
    past_key_values: Cache | None = None,
    use_cache: bool | None = None,
    cache_position: Tensor | None = None,
    **kwargs: Unpack[TransformersKwargs]
) -> Tuple[Tensor, ...] | BaseModelOutputWithPoolingAndCrossAttentions
```
Parameters:
| Name | Type | Description | Default |
|------|------|-------------|---------|
| `encoder_hidden_states` | `Tensor \| None` | Shape: `(batch_size, sequence_length, hidden_size)`. Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. | `None` |
| `encoder_attention_mask` | `Tensor \| None` | Shape: `(batch_size, sequence_length)`. Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: 1 for tokens that are **not masked**, 0 for tokens that are **masked**. | `None` |
| `past_key_values` | `Cache \| None` | Tuple of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`. Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`. | `None` |
| `use_cache` | `bool \| None` | If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). | `None` |
Source code in multimolecule/models/rinalmo/modeling_rinalmo.py
```python
@check_model_inputs
def forward(
    self,
    input_ids: Tensor | NestedTensor | None = None,
    attention_mask: Tensor | None = None,
    position_ids: Tensor | None = None,
    inputs_embeds: Tensor | NestedTensor | None = None,
    encoder_hidden_states: Tensor | None = None,
    encoder_attention_mask: Tensor | None = None,
    past_key_values: Cache | None = None,
    use_cache: bool | None = None,
    cache_position: Tensor | None = None,
    **kwargs: Unpack[TransformersKwargs],
) -> Tuple[Tensor, ...] | BaseModelOutputWithPoolingAndCrossAttentions:
    r"""
    Args:
        encoder_hidden_states:
            Shape: `(batch_size, sequence_length, hidden_size)`

            Sequence of hidden-states at the output of the last layer of the encoder. Used in the
            cross-attention if the model is configured as a decoder.
        encoder_attention_mask:
            Shape: `(batch_size, sequence_length)`

            Mask to avoid performing attention on the padding token indices of the encoder input. This mask is
            used in the cross-attention if the model is configured as a decoder. Mask values selected in
            `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.
        past_key_values:
            Tuple of length `config.n_layers` with each tuple having 4 tensors of shape
            `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`

            Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up
            decoding.

            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids`
            (those that don't have their past key value states given to this model) of shape `(batch_size, 1)`
            instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        use_cache:
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up
            decoding (see `past_key_values`).
    """
    if self.config.is_decoder:
        use_cache = use_cache if use_cache is not None else self.config.use_cache
    else:
        use_cache = False

    if use_cache and past_key_values is None:
        past_key_values = (
            EncoderDecoderCache(DynamicCache(config=self.config), DynamicCache(config=self.config))
            if encoder_hidden_states is not None or self.config.is_encoder_decoder
            else DynamicCache(config=self.config)
        )

    if isinstance(input_ids, NestedTensor) and attention_mask is None:
        attention_mask = input_ids.mask
    if isinstance(inputs_embeds, NestedTensor) and attention_mask is None:
        attention_mask = inputs_embeds.mask
    if (input_ids is None) ^ (inputs_embeds is not None):
        raise ValueError("You must specify exactly one of input_ids or inputs_embeds")

    if input_ids is not None:
        device = input_ids.device
        seq_length = input_ids.shape[1]
    else:
        device = inputs_embeds.device  # type: ignore[union-attr]
        seq_length = inputs_embeds.shape[1]  # type: ignore[union-attr]

    # past_key_values_length
    past_key_values_length = past_key_values.get_seq_length() if past_key_values is not None else 0

    if cache_position is None:
        cache_position = torch.arange(past_key_values_length, past_key_values_length + seq_length, device=device)

    if attention_mask is None and input_ids is not None and self.pad_token_id is not None:
        attention_mask = input_ids.ne(self.pad_token_id)

    embedding_output = self.embeddings(
        input_ids=input_ids,
        position_ids=position_ids,
        attention_mask=attention_mask,
        inputs_embeds=inputs_embeds,
        past_key_values_length=past_key_values_length,
    )

    attention_mask, encoder_attention_mask = self._create_attention_masks(
        attention_mask=attention_mask,
        encoder_attention_mask=encoder_attention_mask,
        embedding_output=embedding_output,
        encoder_hidden_states=encoder_hidden_states,
        cache_position=cache_position,
        past_key_values=past_key_values,
    )

    encoder_outputs = self.encoder(
        embedding_output,
        attention_mask,
        encoder_hidden_states,
        encoder_attention_mask=encoder_attention_mask,
        past_key_values=past_key_values,
        use_cache=use_cache,
        cache_position=cache_position,
        position_ids=position_ids,
        **kwargs,
    )
    sequence_output = encoder_outputs.last_hidden_state
    pooled_output = self.pooler(sequence_output) if self.pooler is not None else None

    return BaseModelOutputWithPoolingAndCrossAttentions(
        last_hidden_state=sequence_output,
        pooler_output=pooled_output,
        past_key_values=encoder_outputs.past_key_values,
    )
```
RiNALMoPreTrainedModel
Bases: PreTrainedModel
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
models.
Source code in multimolecule/models/rinalmo/modeling_rinalmo.py
```python
class RiNALMoPreTrainedModel(PreTrainedModel):
    """
    An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
    models.
    """

    config_class = RiNALMoConfig
    base_model_prefix = "model"
    supports_gradient_checkpointing = True
    _supports_flash_attn = True
    _supports_sdpa = True
    _supports_flex_attn = True
    _supports_attention_backend = True
    _can_record_outputs: dict[str, Any] | None = None
    _no_split_modules = ["RiNALMoLayer", "RiNALMoEmbeddings"]

    @torch.no_grad()
    def _init_weights(self, module: nn.Module):
        super()._init_weights(module)
        if isinstance(module, RiNALMoEmbeddings):
            init.copy_(module.position_ids, torch.arange(module.position_ids.shape[-1]).expand((1, -1)))
```