ERNIE-RNA

Pre-trained model on non-coding RNA (ncRNA) using a masked language modeling (MLM) objective.

Disclaimer

This is an UNOFFICIAL implementation of ERNIE-RNA: An RNA Language Model with Structure-enhanced Representations by Weijie Yin, Zhaoyu Zhang, Liang He, et al.

The OFFICIAL repository of ERNIE-RNA is at Bruce-ywj/ERNIE-RNA.

Tip

The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.

The team releasing ERNIE-RNA did not write this model card, so it has been written by the MultiMolecule team.

Model Details

ERNIE-RNA is a BERT-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means the model was trained on the raw nucleotides of RNA sequences only, with an automatic process generating inputs and labels from those sequences. Please refer to the Training Details section for more information on the training process.

Model Specification

  • Num Layers: 12
  • Hidden Size: 768
  • Num Heads: 12
  • Intermediate Size: 3072
  • Num Parameters (M): 85.67
  • FLOPs (G): 22.36
  • MACs (G): 11.17
  • Max Num Tokens: 1024

Usage

The model file depends on the multimolecule library. You can install it using pip:

Bash
pip install multimolecule

Direct Use

You can use this model directly with a pipeline for masked language modeling:

Python
>>> import multimolecule  # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/ernierna')
>>> unmasker("uagc<mask>uaucagacugauguuga")

[{'score': 0.22777850925922394,
  'token': 9,
  'token_str': 'U',
  'sequence': 'U A G C U U A U C A G A C U G A U G U U G A'},
 {'score': 0.21105751395225525,
  'token': 6,
  'token_str': 'A',
  'sequence': 'U A G C A U A U C A G A C U G A U G U U G A'},
 {'score': 0.18962091207504272,
  'token': 7,
  'token_str': 'C',
  'sequence': 'U A G C C U A U C A G A C U G A U G U U G A'},
 {'score': 0.11191495507955551,
  'token': 8,
  'token_str': 'G',
  'sequence': 'U A G C G U A U C A G A C U G A U G U U G A'},
 {'score': 0.09583593904972076,
  'token': 21,
  'token_str': '.',
  'sequence': 'U A G C. U A U C A G A C U G A U G U U G A'}]
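The fifth candidate, `.`, is a non-nucleotide token from the tokenizer's extended alphabet (its id, 21, matches the vocabulary shown in the RnaTokenizer examples later in this page); the model assigns it only a small residual probability at the masked position.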

Downstream Use

Extract Features

Here is how to use this model to get the features of a given sequence in PyTorch:

Python
from multimolecule import RnaTokenizer, ErnieRnaModel


tokenizer = RnaTokenizer.from_pretrained('multimolecule/ernierna')
model = ErnieRnaModel.from_pretrained('multimolecule/ernierna')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')

output = model(**input)
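The output carries token-level representations in last_hidden_state and a pooled sequence representation in pooler_output. As a quick sanity check (the tokenizer wraps the 22-nt sequence with <cls> and <eos>, giving 24 positions; field names and shapes are consistent with the ErnieRnaModel examples later in this page):

Python
print(output.last_hidden_state.shape)  # torch.Size([1, 24, 768])
print(output.pooler_output.shape)      # torch.Size([1, 768])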

Sequence Classification / Regression

Note: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.

Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:

Python
import torch
from multimolecule import RnaTokenizer, ErnieRnaForSequencePrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/ernierna')
model = ErnieRnaForSequencePrediction.from_pretrained('multimolecule/ernierna')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])

output = model(**input, labels=label)
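By default the sequence head predicts two classes (the ErnieRnaForSequencePrediction example later in this page shows logits of shape [1, 2]). For a multi-class task, the number of labels can be overridden at load time; the following is a sketch that relies on the standard transformers behaviour of forwarding keyword arguments such as num_labels to the configuration, so verify it against your installed version:

Python
# hypothetical multi-class setup; forwarding of `num_labels` is an assumption
model = ErnieRnaForSequencePrediction.from_pretrained('multimolecule/ernierna', num_labels=3)
label = torch.tensor([2])  # a class index in [0, num_labels)
output = model(**input, labels=label)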

Nucleotide Classification / Regression

Note: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.

Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:

Python
import torch
from multimolecule import RnaTokenizer, ErnieRnaForNucleotidePrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/ernierna')
model = ErnieRnaForNucleotidePrediction.from_pretrained('multimolecule/ernierna')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))

output = model(**input, labels=label)

Contact Classification / Regression

Note: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:

Python
import torch
from multimolecule import RnaTokenizer, ErnieRnaForContactPrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/ernierna')
model = ErnieRnaForContactPrediction.from_pretrained('multimolecule/ernierna')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))

output = model(**input, labels=label)
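Note that the tokenizer wraps the sequence with <cls> and <eos> tokens, while the prediction heads return logits aligned to the nucleotides only: in the ErnieRnaForContactPrediction example later in this page, a 5-nt input yields contact logits of shape [1, 5, 5, 2].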

Training Details

ERNIE-RNA used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.

Training Data

The ERNIE-RNA model was pre-trained on RNAcentral. RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of Expert Databases representing a broad range of organisms and RNA types.

ERNIE-RNA applied CD-HIT (CD-HIT-EST) with a cut-off of 100% sequence identity to remove redundancy from RNAcentral, resulting in 25 million unique sequences. Sequences longer than 1024 nucleotides were subsequently excluded. The final dataset contains 20.4 million non-redundant RNA sequences. ERNIE-RNA preprocessed all tokens by replacing “T”s with “U”s.

Note that RnaTokenizer will convert “T”s to “U”s for you; you may disable this behaviour by passing replace_T_with_U=False.
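For illustration (mirroring the tokenizer doctests later in this page), T maps to the same token id as U by default, and falls back to <unk> when the replacement is disabled:

Python Console Session
>>> from multimolecule import RnaTokenizer
>>> RnaTokenizer()('acgt')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> RnaTokenizer(replace_T_with_U=False)('acgt')["input_ids"]
[1, 6, 7, 8, 3, 2]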

Training Procedure

Preprocessing

ERNIE-RNA used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT (a code sketch follows this list):

  • 15% of the tokens are masked.
  • In 80% of the cases, the masked tokens are replaced by <mask>.
  • In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
  • In the 10% remaining cases, the masked tokens are left as is.
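The following is a minimal sketch of this procedure, not the authors' training code; the helper name is illustrative, and sampling the random replacement uniformly (without enforcing that it differs from the original token) is a simplifying assumption:

Python
import torch

def mask_for_mlm(input_ids: torch.Tensor, mask_token_id: int, vocab_size: int,
                 mlm_probability: float = 0.15) -> tuple[torch.Tensor, torch.Tensor]:
    """BERT-style masking: select 15% of tokens; of those, 80% become <mask>,
    10% become a random token, and 10% are left unchanged."""
    labels = input_ids.clone()
    selected = torch.rand(input_ids.shape) < mlm_probability
    labels[~selected] = -100  # loss is only computed on the selected positions

    corrupted = input_ids.clone()
    # 80% of the selected tokens are replaced by <mask>
    masked = selected & (torch.rand(input_ids.shape) < 0.8)
    corrupted[masked] = mask_token_id
    # half of the remaining selected tokens (10% overall) become a random token
    randomized = selected & ~masked & (torch.rand(input_ids.shape) < 0.5)
    corrupted[randomized] = torch.randint(vocab_size, input_ids.shape)[randomized]
    # the remaining 10% are left as is
    return corrupted, labels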

PreTraining

The model was trained on 24 NVIDIA V100 GPUs, each with 32 GiB of memory.

  • Learning rate: 1e-4
  • Weight decay: 0.01
  • Learning rate warm-up: 20,000 steps

Citation

BibTeX:

BibTeX
@article {Yin2024.03.17.585376,
    author = {Yin, Weijie and Zhang, Zhaoyu and He, Liang and Jiang, Rui and Zhang, Shuo and Liu, Gan and Zhang, Xuegong and Qin, Tao and Xie, Zhen},
    title = {ERNIE-RNA: An RNA Language Model with Structure-enhanced Representations},
    elocation-id = {2024.03.17.585376},
    year = {2024},
    doi = {10.1101/2024.03.17.585376},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {With large amounts of unlabeled RNA sequences data produced by high-throughput sequencing technologies, pre-trained RNA language models have been developed to estimate semantic space of RNA molecules, which facilities the understanding of grammar of RNA language. However, existing RNA language models overlook the impact of structure when modeling the RNA semantic space, resulting in incomplete feature extraction and suboptimal performance across various downstream tasks. In this study, we developed a RNA pre-trained language model named ERNIE-RNA (Enhanced Representations with base-pairing restriction for RNA modeling) based on a modified BERT (Bidirectional Encoder Representations from Transformers) by incorporating base-pairing restriction with no MSA (Multiple Sequence Alignment) information. We found that the attention maps from ERNIE-RNA with no fine-tuning are able to capture RNA structure in the zero-shot experiment more precisely than conventional methods such as fine-tuned RNAfold and RNAstructure, suggesting that the ERNIE-RNA can provide comprehensive RNA structural representations. Furthermore, ERNIE-RNA achieved SOTA (state-of-the-art) performance after fine-tuning for various downstream tasks, including RNA structural and functional predictions. In summary, our ERNIE-RNA model provides general features which can be widely and effectively applied in various subsequent research tasks. Our results indicate that introducing key knowledge-based prior information in the BERT framework may be a useful strategy to enhance the performance of other language models.Competing Interest StatementOne patent based on the study was submitted by Z.X. and W.Y., which is entitled as "A Pre-training Approach for RNA Sequences and Its Applications"(application number, no 202410262527.5). The remaining authors declare no competing interests.},
    URL = {https://www.biorxiv.org/content/early/2024/03/17/2024.03.17.585376},
    eprint = {https://www.biorxiv.org/content/early/2024/03/17/2024.03.17.585376.full.pdf},
    journal = {bioRxiv}
}

Contact

Please use GitHub issues of MultiMolecule for any questions or comments on the model card.

Please contact the authors of the ERNIE-RNA paper for questions or comments on the paper/model.

License

This model is licensed under the AGPL-3.0 License.

Text Only
SPDX-License-Identifier: AGPL-3.0-or-later

multimolecule.models.ernierna

RnaTokenizer

Bases: Tokenizer

Tokenizer for RNA sequences.

Parameters:

  • alphabet (Alphabet | str | List[str] | None, default: None): alphabet to use for tokenization.
    • If is None, the standard RNA alphabet will be used.
    • If is a string, it should correspond to the name of a predefined alphabet. The options include standard, extended, streamline, and nucleobase.
    • If is an alphabet or a list of characters, that specific alphabet will be used.
  • nmers (int, default: 1): Size of kmer to tokenize.
  • codon (bool, default: False): Whether to tokenize into codons.
  • replace_T_with_U (bool, default: True): Whether to replace T with U.
  • do_upper_case (bool, default: True): Whether to convert input to uppercase.

Examples:

Python Console Session
>>> from multimolecule import RnaTokenizer
>>> tokenizer = RnaTokenizer()
>>> tokenizer('<pad><cls><eos><unk><mask><null>ACGUNRYSWKMBDHV.X*-I')["input_ids"]
[1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 2]
>>> tokenizer('acgu')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer = RnaTokenizer(replace_T_with_U=False)
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 3, 2]
>>> tokenizer = RnaTokenizer(nmers=3)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 17, 64, 49, 96, 84, 22, 2]
>>> tokenizer = RnaTokenizer(codon=True)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 49, 22, 2]
>>> tokenizer('uagcuuauca')["input_ids"]
Traceback (most recent call last):
ValueError: length of input sequence must be a multiple of 3 for codon tokenization, but got 10
Source code in multimolecule/tokenisers/rna/tokenization_rna.py
Python
class RnaTokenizer(Tokenizer):
    """
    Tokenizer for RNA sequences.

    Args:
        alphabet: alphabet to use for tokenization.

            - If is `None`, the standard RNA alphabet will be used.
            - If is a `string`, it should correspond to the name of a predefined alphabet. The options include
                + `standard`
                + `extended`
                + `streamline`
                + `nucleobase`
            - If is an alphabet or a list of characters, that specific alphabet will be used.
        nmers: Size of kmer to tokenize.
        codon: Whether to tokenize into codons.
        replace_T_with_U: Whether to replace T with U.
        do_upper_case: Whether to convert input to uppercase.

    Examples:
        >>> from multimolecule import RnaTokenizer
        >>> tokenizer = RnaTokenizer()
        >>> tokenizer('<pad><cls><eos><unk><mask><null>ACGUNRYSWKMBDHV.X*-I')["input_ids"]
        [1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 2]
        >>> tokenizer('acgu')["input_ids"]
        [1, 6, 7, 8, 9, 2]
        >>> tokenizer('acgt')["input_ids"]
        [1, 6, 7, 8, 9, 2]
        >>> tokenizer = RnaTokenizer(replace_T_with_U=False)
        >>> tokenizer('acgt')["input_ids"]
        [1, 6, 7, 8, 3, 2]
        >>> tokenizer = RnaTokenizer(nmers=3)
        >>> tokenizer('uagcuuauc')["input_ids"]
        [1, 83, 17, 64, 49, 96, 84, 22, 2]
        >>> tokenizer = RnaTokenizer(codon=True)
        >>> tokenizer('uagcuuauc')["input_ids"]
        [1, 83, 49, 22, 2]
        >>> tokenizer('uagcuuauca')["input_ids"]
        Traceback (most recent call last):
        ValueError: length of input sequence must be a multiple of 3 for codon tokenization, but got 10
    """

    model_input_names = ["input_ids", "attention_mask"]

    def __init__(
        self,
        alphabet: Alphabet | str | List[str] | None = None,
        nmers: int = 1,
        codon: bool = False,
        replace_T_with_U: bool = True,
        do_upper_case: bool = True,
        additional_special_tokens: List | Tuple | None = None,
        **kwargs,
    ):
        if codon and (nmers > 1 and nmers != 3):
            raise ValueError("Codon and nmers cannot be used together.")
        if codon:
            nmers = 3  # set to 3 to get correct vocab
        if not isinstance(alphabet, Alphabet):
            alphabet = get_alphabet(alphabet, nmers=nmers)
        super().__init__(
            alphabet=alphabet,
            nmers=nmers,
            codon=codon,
            replace_T_with_U=replace_T_with_U,
            do_upper_case=do_upper_case,
            additional_special_tokens=additional_special_tokens,
            **kwargs,
        )
        self.replace_T_with_U = replace_T_with_U
        self.nmers = nmers
        self.codon = codon

    def _tokenize(self, text: str, **kwargs):
        if self.do_upper_case:
            text = text.upper()
        if self.replace_T_with_U:
            text = text.replace("T", "U")
        if self.codon:
            if len(text) % 3 != 0:
                raise ValueError(
                    f"length of input sequence must be a multiple of 3 for codon tokenization, but got {len(text)}"
                )
            return [text[i : i + 3] for i in range(0, len(text), 3)]
        if self.nmers > 1:
            return [text[i : i + self.nmers] for i in range(len(text) - self.nmers + 1)]  # noqa: E203
        return list(text)

ErnieRnaConfig

Bases: PreTrainedConfig

This is the configuration class to store the configuration of a ErnieRnaModel. It is used to instantiate a ErnieRna model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ErnieRna Bruce-ywj/ERNIE-RNA architecture.

Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.

Parameters:

  • vocab_size (int, default: 26): Vocabulary size of the ErnieRna model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling [ErnieRnaModel].
  • hidden_size (int, default: 768): Dimensionality of the encoder layers and the pooler layer.
  • num_hidden_layers (int, default: 12): Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, default: 12): Number of attention heads for each attention layer in the Transformer encoder.
  • intermediate_size (int, default: 3072): Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
  • hidden_dropout (float, default: 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
  • attention_dropout (float, default: 0.1): The dropout ratio for the attention probabilities.
  • max_position_embeddings (int, default: 1026): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
  • initializer_range (float, default: 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • layer_norm_eps (float, default: 1e-12): The epsilon used by the layer normalization layers.

Examples:

Python Console Session
>>> from multimolecule import ErnieRnaModel, ErnieRnaConfig
>>> # Initializing a ERNIE-RNA multimolecule/ernierna style configuration
>>> configuration = ErnieRnaConfig()
>>> # Initializing a model (with random weights) from the multimolecule/ernierna style configuration
>>> model = ErnieRnaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
Source code in multimolecule/models/ernierna/configuration_ernierna.py
Python
class ErnieRnaConfig(PreTrainedConfig):
    r"""
    This is the configuration class to store the configuration of a
    [`ErnieRnaModel`][multimolecule.models.ErnieRnaModel]. It is used to instantiate a ErnieRna model according to the
    specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a
    similar configuration to that of the ErnieRna [Bruce-ywj/ERNIE-RNA](https://github.com/Bruce-ywj/ERNIE-RNA)
    architecture.

    Configuration objects inherit from [`PreTrainedConfig`][multimolecule.models.PreTrainedConfig] and can be used to
    control the model outputs. Read the documentation from [`PreTrainedConfig`][multimolecule.models.PreTrainedConfig]
    for more information.

    Args:
        vocab_size:
            Vocabulary size of the ErnieRna model. Defines the number of different tokens that can be represented by
            the `inputs_ids` passed when calling [`ErnieRnaModel`].
        hidden_size:
            Dimensionality of the encoder layers and the pooler layer.
        num_hidden_layers:
            Number of hidden layers in the Transformer encoder.
        num_attention_heads:
            Number of attention heads for each attention layer in the Transformer encoder.
        intermediate_size:
            Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
        hidden_dropout:
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        attention_dropout:
            The dropout ratio for the attention probabilities.
        max_position_embeddings:
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048).
        initializer_range:
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        layer_norm_eps:
            The epsilon used by the layer normalization layers.

    Examples:
        >>> from multimolecule import ErnieRnaModel, ErnieRnaConfig

        >>> # Initializing a ERNIE-RNA multimolecule/ernierna style configuration
        >>> configuration = ErnieRnaConfig()

        >>> # Initializing a model (with random weights) from the multimolecule/ernierna style configuration
        >>> model = ErnieRnaModel(configuration)

        >>> # Accessing the model configuration
        >>> configuration = model.config
    """

    model_type = "ernierna"

    def __init__(
        self,
        vocab_size: int = 26,
        hidden_size: int = 768,
        num_hidden_layers: int = 12,
        num_attention_heads: int = 12,
        intermediate_size: int = 3072,
        hidden_act: str = "gelu",
        hidden_dropout: float = 0.1,
        attention_dropout: float = 0.1,
        max_position_embeddings: int = 1026,
        initializer_range: float = 0.02,
        layer_norm_eps: float = 1e-12,
        position_embedding_type: str = "sinusoidal",
        pairwise_alpha: float = 0.8,
        is_decoder: bool = False,
        use_cache: bool = True,
        head: HeadConfig | None = None,
        lm_head: MaskedLMHeadConfig | None = None,
        **kwargs,
    ):
        super().__init__(**kwargs)

        self.vocab_size = vocab_size
        self.type_vocab_size = 2
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.intermediate_size = intermediate_size
        self.hidden_act = hidden_act
        self.hidden_dropout = hidden_dropout
        self.attention_dropout = attention_dropout
        self.max_position_embeddings = max_position_embeddings
        self.initializer_range = initializer_range
        self.layer_norm_eps = layer_norm_eps
        self.position_embedding_type = position_embedding_type
        self.pairwise_alpha = pairwise_alpha
        self.is_decoder = is_decoder
        self.use_cache = use_cache
        self.head = HeadConfig(**head if head is not None else {})
        self.lm_head = MaskedLMHeadConfig(**lm_head if lm_head is not None else {})

ErnieRnaForContactClassification

Bases: ErnieRnaForPreTraining

Examples:

Python Console Session
>>> from multimolecule.models import ErnieRnaConfig, ErnieRnaForContactClassification, RnaTokenizer
>>> config = ErnieRnaConfig()
>>> model = ErnieRnaForContactClassification(config)
>>> tokenizer = RnaTokenizer()
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input)
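Note that this head is trained jointly with the masked language modeling objective: its forward accepts separate labels_lm and labels_ss arguments, and when both are provided the returned loss is the sum of the two head losses (see the source below).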
Source code in multimolecule/models/ernierna/modeling_ernierna.py
Python
class ErnieRnaForContactClassification(ErnieRnaForPreTraining):
    """
    Examples:
        >>> from multimolecule.models import ErnieRnaConfig, ErnieRnaForContactClassification, RnaTokenizer
        >>> config = ErnieRnaConfig()
        >>> model = ErnieRnaForContactClassification(config)
        >>> tokenizer = RnaTokenizer()
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input)
    """

    def __init__(self, config: ErnieRnaConfig):
        super().__init__(config)
        self.ss_head = ErnieRnaContactClassificationHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(  # type: ignore[override]  # pylint: disable=W0221
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels_lm: Tensor | None = None,
        labels_ss: Tensor | None = None,
        output_attentions: bool | None = None,
        output_attention_biases: bool | None = None,
        output_hidden_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | ErnieRnaForContactClassificationOutput:
        if output_attentions is False:
            warn("output_attentions must be True for contact classification and will be ignored.")
        outputs = self.ernierna(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=True,
            output_attention_biases=output_attention_biases,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
            **kwargs,
        )
        output_lm = self.lm_head(outputs, labels_lm)
        output_ss = self.ss_head(outputs[-1][-1], attention_mask, input_ids, labels_ss)
        logits_lm, loss_lm = output_lm.logits, output_lm.loss
        logits_ss, loss_ss = output_ss.logits, output_ss.loss

        loss = None
        if loss_lm is not None and loss_ss is not None:
            loss = loss_lm + loss_ss
        elif loss_lm is not None:
            loss = loss_lm
        elif loss_ss is not None:
            loss = loss_ss

        if not return_dict:
            output = outputs[2:]
            output = ((logits_ss, loss_ss) + output) if loss_ss is not None else ((logits_ss,) + output)
            output = ((logits_lm, loss_lm) + output) if loss_lm is not None else ((logits_lm,) + output)
            return ((loss,) + output) if loss is not None else output

        return ErnieRnaForContactClassificationOutput(
            loss=loss,
            logits_lm=logits_lm,
            loss_lm=loss_lm,
            logits_ss=logits_ss,
            loss_ss=loss_ss,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
            attention_biases=outputs.attention_biases,
        )

ErnieRnaForContactPrediction

Bases: ErnieRnaPreTrainedModel

Examples:

Python Console Session
>>> import torch
>>> from multimolecule import ErnieRnaConfig, ErnieRnaForContactPrediction, RnaTokenizer
>>> config = ErnieRnaConfig()
>>> model = ErnieRnaForContactPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 5, 2])
>>> output["loss"]
tensor(..., grad_fn=<NllLossBackward0>)
Source code in multimolecule/models/ernierna/modeling_ernierna.py
Python
class ErnieRnaForContactPrediction(ErnieRnaPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import ErnieRnaConfig, ErnieRnaForContactPrediction, RnaTokenizer
        >>> config = ErnieRnaConfig()
        >>> model = ErnieRnaForContactPrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
        >>> output["logits"].shape
        torch.Size([1, 5, 5, 2])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<NllLossBackward0>)
    """

    def __init__(self, config: ErnieRnaConfig):
        super().__init__(config)
        self.ernierna = ErnieRnaModel(config, add_pooling_layer=True)
        self.contact_head = ContactPredictionHead(config)
        self.head_config = self.contact_head.config

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        output_attentions: bool | None = None,
        output_hidden_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | ErnieRnaContactPredictorOutput:
        if output_attentions is False:
            warn("output_attentions must be True for contact classification and will be ignored.")
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ernierna(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=True,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
            **kwargs,
        )
        output = self.contact_head(outputs, attention_mask, input_ids, labels)
        logits, loss = output.logits, output.loss

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return ErnieRnaContactPredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

ErnieRnaForMaskedLM

Bases: ErnieRnaPreTrainedModel

Examples:

Python Console Session
>>> from multimolecule import ErnieRnaConfig, ErnieRnaForMaskedLM, RnaTokenizer
>>> config = ErnieRnaConfig()
>>> model = ErnieRnaForMaskedLM(config)
>>> tokenizer = RnaTokenizer()
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=input["input_ids"])
>>> output["logits"].shape
torch.Size([1, 7, 26])
>>> output["loss"]
tensor(..., grad_fn=<NllLossBackward0>)
Source code in multimolecule/models/ernierna/modeling_ernierna.py
Python
class ErnieRnaForMaskedLM(ErnieRnaPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import ErnieRnaConfig, ErnieRnaForMaskedLM, RnaTokenizer
        >>> config = ErnieRnaConfig()
        >>> model = ErnieRnaForMaskedLM(config)
        >>> tokenizer = RnaTokenizer()
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=input["input_ids"])
        >>> output["logits"].shape
        torch.Size([1, 7, 26])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<NllLossBackward0>)
    """

    _tied_weights_keys = ["lm_head.decoder.weight", "lm_head.decoder.bias"]

    def __init__(self, config: ErnieRnaConfig):
        super().__init__(config)
        if config.is_decoder:
            logger.warning(
                "If you want to use `BertForMaskedLM` make sure `config.is_decoder=False` for "
                "bi-directional self-attention."
            )
        self.ernierna = ErnieRnaModel(config, add_pooling_layer=False)
        self.lm_head = MaskedLMHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def get_output_embeddings(self):
        return self.lm_head.decoder

    def set_output_embeddings(self, new_embeddings):
        self.lm_head.decoder = new_embeddings

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        encoder_hidden_states: Tensor | None = None,
        encoder_attention_mask: Tensor | None = None,
        labels: Tensor | None = None,
        output_attentions: bool | None = None,
        output_attention_biases: bool | None = None,
        output_hidden_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | ErnieRnaForMaskedLMOutput:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ernierna(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            output_attentions=output_attentions,
            output_attention_biases=output_attention_biases,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
            **kwargs,
        )
        output = self.lm_head(outputs, labels)
        logits, loss = output.logits, output.loss

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return ErnieRnaForMaskedLMOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

ErnieRnaForNucleotidePrediction

Bases: ErnieRnaPreTrainedModel

Examples:

Python Console Session
>>> import torch
>>> from multimolecule import ErnieRnaConfig, ErnieRnaForNucleotidePrediction, RnaTokenizer
>>> config = ErnieRnaConfig()
>>> model = ErnieRnaForNucleotidePrediction(config)
>>> tokenizer = RnaTokenizer()
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randn(1, 5, 2))
>>> output["logits"].shape
torch.Size([1, 5, 2])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
Source code in multimolecule/models/ernierna/modeling_ernierna.py
Python
class ErnieRnaForNucleotidePrediction(ErnieRnaPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import ErnieRnaConfig, ErnieRnaForNucleotidePrediction, RnaTokenizer
        >>> config = ErnieRnaConfig()
        >>> model = ErnieRnaForNucleotidePrediction(config)
        >>> tokenizer = RnaTokenizer()
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=torch.randn(1, 5, 2))
        >>> output["logits"].shape
        torch.Size([1, 5, 2])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
    """

    def __init__(self, config: ErnieRnaConfig):
        super().__init__(config)
        self.ernierna = ErnieRnaModel(config, add_pooling_layer=True)
        self.nucleotide_head = NucleotidePredictionHead(config)
        self.head_config = self.nucleotide_head.config

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        output_attentions: bool | None = None,
        output_attention_biases: bool | None = None,
        output_hidden_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | ErnieRnaNucleotidePredictorOutput:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ernierna(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_attention_biases=output_attention_biases,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
            **kwargs,
        )
        output = self.nucleotide_head(outputs, attention_mask, input_ids, labels)
        logits, loss = output.logits, output.loss

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return ErnieRnaNucleotidePredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

ErnieRnaForSequencePrediction

Bases: ErnieRnaPreTrainedModel

Examples:

Python Console Session
>>> from multimolecule import ErnieRnaConfig, ErnieRnaForSequencePrediction, RnaTokenizer
>>> config = ErnieRnaConfig()
>>> model = ErnieRnaForSequencePrediction(config)
>>> tokenizer = RnaTokenizer()
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input)
>>> output["logits"].shape
torch.Size([1, 2])
Source code in multimolecule/models/ernierna/modeling_ernierna.py
Python
class ErnieRnaForSequencePrediction(ErnieRnaPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import ErnieRnaConfig, ErnieRnaForSequencePrediction, RnaTokenizer
        >>> config = ErnieRnaConfig()
        >>> model = ErnieRnaForSequencePrediction(config)
        >>> tokenizer = RnaTokenizer()
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input)
        >>> output["logits"].shape
        torch.Size([1, 2])
    """

    def __init__(self, config: ErnieRnaConfig):
        super().__init__(config)
        self.ernierna = ErnieRnaModel(config)
        self.sequence_head = SequencePredictionHead(config)
        self.head_config = self.sequence_head.config

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        output_attentions: bool | None = None,
        output_attention_biases: bool | None = None,
        output_hidden_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | ErnieRnaSequencePredictorOutput:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ernierna(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_attention_biases=output_attention_biases,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
            **kwargs,
        )
        output = self.sequence_head(outputs, labels)
        logits, loss = output.logits, output.loss

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return ErnieRnaSequencePredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

ErnieRnaForTokenPrediction

Bases: ErnieRnaPreTrainedModel

Examples:

Python Console Session
>>> import torch
>>> from multimolecule import ErnieRnaConfig, ErnieRnaForTokenPrediction, RnaTokenizer
>>> config = ErnieRnaConfig()
>>> model = ErnieRnaForTokenPrediction(config)
>>> tokenizer = RnaTokenizer()
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 7)))
>>> output["logits"].shape
torch.Size([1, 7, 2])
>>> output["loss"]
tensor(..., grad_fn=<NllLossBackward0>)
Source code in multimolecule/models/ernierna/modeling_ernierna.py
Python
class ErnieRnaForTokenPrediction(ErnieRnaPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import ErnieRnaConfig, ErnieRnaForTokenPrediction, RnaTokenizer
        >>> config = ErnieRnaConfig()
        >>> model = ErnieRnaForTokenPrediction(config)
        >>> tokenizer = RnaTokenizer()
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=torch.randint(2, (1, 7)))
        >>> output["logits"].shape
        torch.Size([1, 7, 2])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<NllLossBackward0>)
    """

    def __init__(self, config: ErnieRnaConfig):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.ernierna = ErnieRnaModel(config, add_pooling_layer=True)
        self.token_head = TokenPredictionHead(config)
        self.head_config = self.token_head.config

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        output_attentions: bool | None = None,
        output_attention_biases: bool | None = None,
        output_hidden_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | ErnieRnaTokenPredictorOutput:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ernierna(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_attention_biases=output_attention_biases,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
            **kwargs,
        )
        output = self.token_head(outputs, attention_mask, input_ids, labels)
        logits, loss = output.logits, output.loss

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return ErnieRnaTokenPredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

ErnieRnaModel

Bases: ErnieRnaPreTrainedModel

Examples:

Python Console Session
>>> from multimolecule import ErnieRnaConfig, ErnieRnaModel, RnaTokenizer
>>> config = ErnieRnaConfig()
>>> model = ErnieRnaModel(config)
>>> tokenizer = RnaTokenizer()
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input)
>>> output["last_hidden_state"].shape
torch.Size([1, 7, 768])
>>> output["pooler_output"].shape
torch.Size([1, 768])
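The structure-aware attention biases that ERNIE-RNA adds to its attention maps can also be returned. A minimal sketch, assuming the default return_dict behaviour; the exact per-layer structure of the returned field may vary by version:

Python Console Session
>>> output = model(**input, output_attention_biases=True)
>>> output["attention_biases"] is not None
True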
Source code in multimolecule/models/ernierna/modeling_ernierna.py
Python
class ErnieRnaModel(ErnieRnaPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import ErnieRnaConfig, ErnieRnaModel, RnaTokenizer
        >>> config = ErnieRnaConfig()
        >>> model = ErnieRnaModel(config)
        >>> tokenizer = RnaTokenizer()
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input)
        >>> output["last_hidden_state"].shape
        torch.Size([1, 7, 768])
        >>> output["pooler_output"].shape
        torch.Size([1, 768])
    """

    pairwise_bias_map: Tensor

    def __init__(
        self, config: ErnieRnaConfig, add_pooling_layer: bool = True, tokenizer: PreTrainedTokenizer | None = None
    ):
        super().__init__(config)
        if tokenizer is None:
            tokenizer = AutoTokenizer.from_pretrained("multimolecule/rna")
        self.tokenizer = tokenizer
        self.pad_token_id = tokenizer.pad_token_id
        self.vocab_size = len(self.tokenizer)
        if self.vocab_size != config.vocab_size:
            raise ValueError(
                f"Vocab size in tokenizer ({self.vocab_size}) does not match the one in config ({config.vocab_size})"
            )
        token_to_ids = self.tokenizer._token_to_id
        tokens = sorted(token_to_ids, key=token_to_ids.get)
        pairwise_bias_dict = get_pairwise_bias_dict(config.pairwise_alpha)
        self.register_buffer(
            "pairwise_bias_map",
            torch.tensor([[pairwise_bias_dict.get(f"{i}{j}", 0) for i in tokens] for j in tokens]),
            persistent=False,
        )
        self.pairwise_bias_proj = nn.Sequential(
            nn.Linear(1, config.num_attention_heads // 2),
            nn.GELU(),
            nn.Linear(config.num_attention_heads // 2, config.num_attention_heads),
        )
        self.embeddings = ErnieRnaEmbeddings(config)
        self.encoder = ErnieRnaEncoder(config)
        self.pooler = ErnieRnaPooler(config) if add_pooling_layer else None

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.embeddings.word_embeddings

    def set_input_embeddings(self, value):
        self.embeddings.word_embeddings = value

    def _prune_heads(self, heads_to_prune):
        """
        Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
        class PreTrainedModel
        """
        for layer, heads in heads_to_prune.items():
            self.encoder.layer[layer].attention.prune_heads(heads)

    def get_pairwise_bias(
        self, input_ids: Tensor | NestedTensor, attention_mask: Tensor | NestedTensor | None = None
    ) -> Tensor | NestedTensor:
        batch_size, seq_len = input_ids.shape

        # Broadcasting data indices to compute indices
        data_index_x = input_ids.unsqueeze(2).expand(batch_size, seq_len, seq_len)
        data_index_y = input_ids.unsqueeze(1).expand(batch_size, seq_len, seq_len)

        # Get bias from pairwise_bias_map
        return self.pairwise_bias_map[data_index_x, data_index_y]

        # Zhiyuan: Is it really necessary to mask the bias?
        # The mask position should have been nan, and the implementation is incorrect anyway
        # if attention_mask is not None:
        #     attention_mask = attention_mask.unsqueeze(1).expand(batch_size, seq_len, seq_len)
        #     bias = bias * attention_mask

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        position_ids: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        encoder_hidden_states: Tensor | None = None,
        encoder_attention_mask: Tensor | None = None,
        past_key_values: Tuple[Tuple[torch.FloatTensor, torch.FloatTensor], ...] | None = None,
        use_cache: bool | None = None,
        output_attentions: bool | None = None,
        output_attention_biases: bool | None = None,
        output_hidden_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | ErnieRnaModelOutputWithPoolingAndCrossAttentions:
        r"""
        encoder_hidden_states  (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
            the model is configured as a decoder.
        encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
            the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.
        past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors
            of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
            Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.

            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
            don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
            `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).
        """
        if kwargs:
            warn(
                f"Additional keyword arguments `{', '.join(kwargs)}` are detected in "
                f"`{self.__class__.__name__}.forward`, they will be ignored.\n"
                "This is provided for backward compatibility and may lead to unexpected behavior."
            )
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if self.config.is_decoder:
            use_cache = use_cache if use_cache is not None else self.config.use_cache
        else:
            use_cache = False

        pairwise_bias = self.get_pairwise_bias(input_ids, attention_mask)
        attention_bias = self.pairwise_bias_proj(pairwise_bias.unsqueeze(-1)).transpose(1, 3)

        if isinstance(input_ids, NestedTensor):
            input_ids, attention_mask = input_ids.tensor, input_ids.mask
        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
        if input_ids is not None:
            self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
            input_shape = input_ids.size()
        elif inputs_embeds is not None:
            input_shape = inputs_embeds.size()[:-1]
        else:
            raise ValueError("You have to specify either input_ids or inputs_embeds")

        batch_size, seq_length = input_shape
        device = input_ids.device if input_ids is not None else inputs_embeds.device  # type: ignore[union-attr]

        # past_key_values_length
        past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0

        if attention_mask is None:
            attention_mask = (
                input_ids.ne(self.pad_token_id)
                if self.pad_token_id is not None
                else torch.ones(((batch_size, seq_length + past_key_values_length)), device=device)
            )

        # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
        # ourselves in which case we just need to make it broadcastable to all heads.
        extended_attention_mask: Tensor = self.get_extended_attention_mask(attention_mask, input_shape)

        # If a 2D or 3D attention mask is provided for the cross-attention
        # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
        if self.config.is_decoder and encoder_hidden_states is not None:
            encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
            encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
            if encoder_attention_mask is None:
                encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
            encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
        else:
            encoder_extended_attention_mask = None

        # Prepare head mask if needed
        # 1.0 in head_mask indicate we keep the head
        # attention_probs has shape bsz x n_heads x N x N
        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)

        embedding_output = self.embeddings(
            input_ids=input_ids,
            position_ids=position_ids,
            inputs_embeds=inputs_embeds,
            past_key_values_length=past_key_values_length,
        )
        encoder_outputs = self.encoder(
            embedding_output,
            attention_mask=extended_attention_mask,
            attention_bias=attention_bias,
            head_mask=head_mask,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_extended_attention_mask,
            past_key_values=past_key_values,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_attention_biases=output_attention_biases,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        sequence_output = encoder_outputs[0]
        pooled_output = self.pooler(sequence_output) if self.pooler is not None else None

        if not return_dict:
            return (sequence_output, pooled_output) + encoder_outputs[1:]

        return ErnieRnaModelOutputWithPoolingAndCrossAttentions(
            last_hidden_state=sequence_output,
            pooler_output=pooled_output,
            past_key_values=encoder_outputs.past_key_values,
            hidden_states=encoder_outputs.hidden_states,
            attention_biases=encoder_outputs.attention_biases,
            attentions=encoder_outputs.attentions,
            cross_attentions=encoder_outputs.cross_attentions,
        )

forward

Python
forward(input_ids: Tensor | NestedTensor, attention_mask: Tensor | None = None, position_ids: Tensor | None = None, head_mask: Tensor | None = None, inputs_embeds: Tensor | NestedTensor | None = None, encoder_hidden_states: Tensor | None = None, encoder_attention_mask: Tensor | None = None, past_key_values: Tuple[Tuple[FloatTensor, FloatTensor], ...] | None = None, use_cache: bool | None = None, output_attentions: bool | None = None, output_attention_biases: bool | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None, **kwargs) -> Tuple[Tensor, ...] | ErnieRnaModelOutputWithPoolingAndCrossAttentions

encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:

  • 1 for tokens that are not masked,
  • 0 for tokens that are masked.

past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers, with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

use_cache (bool, optional): If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

Source code in multimolecule/models/ernierna/modeling_ernierna.py
Python
def forward(
    self,
    input_ids: Tensor | NestedTensor,
    attention_mask: Tensor | None = None,
    position_ids: Tensor | None = None,
    head_mask: Tensor | None = None,
    inputs_embeds: Tensor | NestedTensor | None = None,
    encoder_hidden_states: Tensor | None = None,
    encoder_attention_mask: Tensor | None = None,
    past_key_values: Tuple[Tuple[torch.FloatTensor, torch.FloatTensor], ...] | None = None,
    use_cache: bool | None = None,
    output_attentions: bool | None = None,
    output_attention_biases: bool | None = None,
    output_hidden_states: bool | None = None,
    return_dict: bool | None = None,
    **kwargs,
) -> Tuple[Tensor, ...] | ErnieRnaModelOutputWithPoolingAndCrossAttentions:
    r"""
    encoder_hidden_states  (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
        Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
        the model is configured as a decoder.
    encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
        Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
        the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

        - 1 for tokens that are **not masked**,
        - 0 for tokens that are **masked**.
    past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors
        of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
        Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.

        If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
        don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
        `decoder_input_ids` of shape `(batch_size, sequence_length)`.
    use_cache (`bool`, *optional*):
        If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
        `past_key_values`).
    """
    if kwargs:
        warn(
            f"Additional keyword arguments `{', '.join(kwargs)}` are detected in "
            f"`{self.__class__.__name__}.forward`, they will be ignored.\n"
            "This is provided for backward compatibility and may lead to unexpected behavior."
        )
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    if self.config.is_decoder:
        use_cache = use_cache if use_cache is not None else self.config.use_cache
    else:
        use_cache = False

    pairwise_bias = self.get_pairwise_bias(input_ids, attention_mask)
    attention_bias = self.pairwise_bias_proj(pairwise_bias.unsqueeze(-1)).transpose(1, 3)

    if isinstance(input_ids, NestedTensor):
        input_ids, attention_mask = input_ids.tensor, input_ids.mask
    if input_ids is not None and inputs_embeds is not None:
        raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
    if input_ids is not None:
        self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
        input_shape = input_ids.size()
    elif inputs_embeds is not None:
        input_shape = inputs_embeds.size()[:-1]
    else:
        raise ValueError("You have to specify either input_ids or inputs_embeds")

    batch_size, seq_length = input_shape
    device = input_ids.device if input_ids is not None else inputs_embeds.device  # type: ignore[union-attr]

    # past_key_values_length
    past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0

    if attention_mask is None:
        attention_mask = (
            input_ids.ne(self.pad_token_id)
            if self.pad_token_id is not None
            else torch.ones(((batch_size, seq_length + past_key_values_length)), device=device)
        )

    # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
    # ourselves in which case we just need to make it broadcastable to all heads.
    extended_attention_mask: Tensor = self.get_extended_attention_mask(attention_mask, input_shape)

    # If a 2D or 3D attention mask is provided for the cross-attention
    # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
    if self.config.is_decoder and encoder_hidden_states is not None:
        encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
        encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
        if encoder_attention_mask is None:
            encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
        encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
    else:
        encoder_extended_attention_mask = None

    # Prepare head mask if needed
    # 1.0 in head_mask indicate we keep the head
    # attention_probs has shape bsz x n_heads x N x N
    # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
    # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
    head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)

    embedding_output = self.embeddings(
        input_ids=input_ids,
        position_ids=position_ids,
        inputs_embeds=inputs_embeds,
        past_key_values_length=past_key_values_length,
    )
    encoder_outputs = self.encoder(
        embedding_output,
        attention_mask=extended_attention_mask,
        attention_bias=attention_bias,
        head_mask=head_mask,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_extended_attention_mask,
        past_key_values=past_key_values,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_attention_biases=output_attention_biases,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )
    sequence_output = encoder_outputs[0]
    pooled_output = self.pooler(sequence_output) if self.pooler is not None else None

    if not return_dict:
        return (sequence_output, pooled_output) + encoder_outputs[1:]

    return ErnieRnaModelOutputWithPoolingAndCrossAttentions(
        last_hidden_state=sequence_output,
        pooler_output=pooled_output,
        past_key_values=encoder_outputs.past_key_values,
        hidden_states=encoder_outputs.hidden_states,
        attention_biases=encoder_outputs.attention_biases,
        attentions=encoder_outputs.attentions,
        cross_attentions=encoder_outputs.cross_attentions,
    )

ErnieRnaPreTrainedModel

Bases: PreTrainedModel

An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.

Source code in multimolecule/models/ernierna/modeling_ernierna.py
Python
class ErnieRnaPreTrainedModel(PreTrainedModel):
    """
    An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
    models.
    """

    config_class = ErnieRnaConfig
    base_model_prefix = "ernierna"
    supports_gradient_checkpointing = True
    _no_split_modules = ["ErnieRnaLayer", "ErnieRnaEmbeddings"]

    # Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights
    def _init_weights(self, module: nn.Module):
        """Initialize the weights"""
        if isinstance(module, nn.Linear):
            # Slightly different from the TF version which uses truncated_normal for initialization
            # cf https://github.com/pytorch/pytorch/pull/5617
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()
        elif isinstance(module, nn.LayerNorm):
            module.bias.data.zero_()
            module.weight.data.fill_(1.0)