
RibonanzaNet

Pre-trained model on RNA chemical mapping for modeling RNA structure and other properties.

Disclaimer

This is an UNOFFICIAL implementation of the paper Ribonanza: deep learning of RNA structure through dual crowdsourcing by Shujun He, Rui Huang, et al.

The OFFICIAL repository of RibonanzaNet is at Shujun-He/RibonanzaNet.

Warning

The MultiMolecule team is aware of a potential risk in reproducing the results of RibonanzaNet.

The original implementation of RibonanzaNet does not prepend <cls> and append <eos> tokens to the input sequence. This should not affect the performance of the model in most cases, but it can lead to unexpected behavior in some cases.

Please set bos_token=None, cls_token=None, eos_token=None in the tokenizer and set bos_token_id=None, cls_token_id=None, eos_token_id=None in the model configuration if you want the exact behavior of the original implementation.
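
For example, here is a minimal sketch of disabling these special tokens when loading the checkpoint; the keyword overrides follow the usual from_pretrained convention and are assumptions about how your multimolecule version forwards them:

Python
from multimolecule import RnaTokenizer, RibonanzaNetForPreTraining

tokenizer = RnaTokenizer.from_pretrained(
    "multimolecule/ribonanzanet", bos_token=None, cls_token=None, eos_token=None
)
model = RibonanzaNetForPreTraining.from_pretrained(
    "multimolecule/ribonanzanet", bos_token_id=None, cls_token_id=None, eos_token_id=None
)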

Warning

The MultiMolecule team is aware of a potential risk in reproducing the results of RibonanzaNet.

The original implementation of RibonanzaNet applies the dropout-residual-norm path twice to the output of the self-attention layer.

By default, MultiMolecule follows the original implementation.

You can set fix_attention_norm=True in the model configuration to apply the dropout-residual-norm path once.

See more at issue #3

Warning

The MultiMolecule team is aware of a potential risk in reproducing the results of RibonanzaNet.

The original implementation of RibonanzaNet does not apply attention mask correctly.

By default, MultiMolecule follows the original implementation.

You can set fix_attention_mask=True in the model configuration to apply the correct attention mask.

See more at issue #4, issue #5, and issue #7

Warning

The MultiMolecule team is aware of a potential risk in reproducing the results of RibonanzaNet.

The original implementation of RibonanzaNet applies dropout along an axis different from the one described in the paper.

By default, MultiMolecule follows the original implementation.

You can set fix_pairwise_dropout=True in the model configuration to follow the description in the paper.

See more at issue #6
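
All three fixes above are plain flags on RibonanzaNetConfig, so they can be enabled together when loading the model. Here is a minimal sketch; note that enabling them deviates from the behavior the released checkpoint was trained with:

Python
from multimolecule import RibonanzaNetModel

model = RibonanzaNetModel.from_pretrained(
    "multimolecule/ribonanzanet",
    fix_attention_norm=True,    # apply the dropout-residual-norm path once
    fix_attention_mask=True,    # apply the correct attention mask
    fix_pairwise_dropout=True,  # apply dropout along the axis described in the paper
)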

Tip

The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.

The team releasing RibonanzaNet did not write this model card for this model, so this model card has been written by the MultiMolecule team.

Model Details

RibonanzaNet is a BERT-style model. It follows the modification introduced in RNAdegformer, adding a 1D convolution with a residual connection at the beginning of each encoder layer. Unlike RNAdegformer, RibonanzaNet does not apply deconvolution at the end of the encoder layers, and it updates the pairwise representation through an outer product mean and triangular updates.
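
The convolution-residual stem can be pictured with a minimal sketch; this is for illustration only, and the ConvResidualStem module below and its normalization placement are assumptions, not the official implementation:

Python
import torch
from torch import nn


class ConvResidualStem(nn.Module):
    # 1D convolution with a residual connection, applied at the start of an encoder layer.
    def __init__(self, hidden_size: int = 256, kernel_size: int = 5):
        super().__init__()
        self.conv = nn.Conv1d(hidden_size, hidden_size, kernel_size, padding=kernel_size // 2)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        residual = hidden_states
        hidden_states = self.conv(hidden_states.transpose(1, 2)).transpose(1, 2)
        return self.norm(hidden_states + residual)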

RibonanzaNet is pre-trained on a large corpus of RNA sequences with chemical mapping (2A3 and DMS) measurements. Please refer to the Training Details section for more information on the training process.

Model Specification

| Num Layers | Hidden Size | Num Heads | Intermediate Size | Num Parameters (M) | FLOPs (G) | MACs (G) | Max Num Tokens |
| ---------- | ----------- | --------- | ----------------- | ------------------ | --------- | -------- | -------------- |
| 9          | 256         | 8         | 1024              | 11.37              | 53.65     | 26.66    | inf            |

Usage

The model file depends on the multimolecule library. You can install it using pip:

Bash
pip install multimolecule

Direct Use

You can use this model directly to predict the chemical mapping of an RNA sequence:

Python
>>> from multimolecule import RnaTokenizer, RibonanzaNetForPreTraining

>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
>>> model = RibonanzaNetForPreTraining.from_pretrained("multimolecule/ribonanzanet")
>>> output = model(**tokenizer("agcagucauuauggcgaa", return_tensors="pt"))

>>> output.logits_2a3.squeeze()
tensor([0.2116, 0.1959, 0.1863, 0.2411, 0.5629, 0.3353, 0.2938, 0.5226, 0.7966,
        0.6312, 0.5053, 0.5033, 0.0474, 0.0964, 0.0708, 0.2409, 0.0618, 0.5135],
       grad_fn=<SqueezeBackward0>)

>>> output.logits_dms.squeeze()
tensor([0.7978, 0.0660, 0.5246, 0.7001, 0.1195, 0.0703, 0.4358, 0.6551, 0.2573,
        0.1782, 0.5363, 0.1984, 0.0778, 0.0465, 0.2489, 0.0728, 0.7808, 0.6782],
       grad_fn=<SqueezeBackward0>)

Downstream Use

Extract Features

Here is how to use this model to get the features of a given sequence in PyTorch:

Python
from multimolecule import RnaTokenizer, RibonanzaNetModel


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
model = RibonanzaNetModel.from_pretrained("multimolecule/ribonanzanet")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")

output = model(**input)

Sequence Classification / Regression

Note

This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.

Here is how to use this model as backbone to fine-tune for a sequence-level task in PyTorch:

Python
import torch
from multimolecule import RnaTokenizer, RibonanzaNetForSequencePrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
model = RibonanzaNetForSequencePrediction.from_pretrained("multimolecule/ribonanzanet")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])

output = model(**input, labels=label)

Token Classification / Regression

Note

This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.

Here is how to use this model as backbone to fine-tune for a nucleotide-level task in PyTorch:

Python
import torch
from multimolecule import RnaTokenizer, RibonanzaNetForTokenPrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
model = RibonanzaNetForTokenPrediction.from_pretrained("multimolecule/ribonanzanet")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))

output = model(**input, labels=label)

Contact Classification / Regression

Note

This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as backbone to fine-tune for a contact-level task in PyTorch:

Python
import torch
from multimolecule import RnaTokenizer, RibonanzaNetForContactPrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
model = RibonanzaNetForContactPrediction.from_pretrained("multimolecule/ribonanzanet")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))

output = model(**input, labels=label)

Training Details

RibonanzaNet was trained on chemical mapping data. The model takes an RNA sequence as input and predicts the chemical reactivity of each nucleotide.

Training Data

The RibonanzaNet model was trained on the Ribonanza dataset. Ribonanza is a dataset of chemical mapping measurements on two million diverse RNA sequences. The data was collected from the crowdsourced initiative Eterna, as well as expert databases such as Rfam, the PDB archive, Pseudobase, and the RNA Mapping Database.

Training Procedure

RibonanzaNet was trained using a three-stage process:

Initial Training

The initial model was trained using sequences that had either or both 2A3/DMS profiles with a signal-to-noise ratio (SNR) above 1.0. This dataset comprised 214,831 training sequences.

Pre-training

  1. Noisy Training Data: The model was first pre-trained on data with a signal-to-noise ratio (SNR) below 1.0, using predictions from the top 3 Kaggle models as pseudo-labels. This dataset comprised 563,796 sequences.
  2. Experimentally Determined Data: The model was then further trained for 10 epochs using only the true labels of sequences with high SNR (either 2A3 or DMS profiles).

Final Training

  1. Noisy Training Data: The model was first pre-trained on all training and testing data, using predictions from the top 3 Kaggle models as pseudo-labels. This dataset comprised 1,907,619 sequences.
  2. Experimentally Determined Data: The model was then further annealed on the true training labels.

The model was trained on 10 NVIDIA L40S GPUs, each with 48 GiB of memory.

Sequence flip augmentation was applied to the training data.
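
As an illustration, here is a minimal sketch of sequence flip augmentation, assuming "flip" means reversing the nucleotide order together with the per-nucleotide reactivity labels:

Python
import torch


def flip(sequence: str, reactivity: torch.Tensor) -> tuple[str, torch.Tensor]:
    # Reverse the sequence and its per-nucleotide labels in lockstep.
    return sequence[::-1], reactivity.flip(0)


flipped_sequence, flipped_reactivity = flip("AGCAGUCAUU", torch.randn(10))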

Citation

BibTeX:

BibTeX
@article{He2024.02.24.581671,
  author       = {He, Shujun and Huang, Rui and Townley, Jill and Kretsch, Rachael C. and Karagianes, Thomas G. and Cox, David B.T. and Blair, Hamish and Penzar, Dmitry and Vyaltsev, Valeriy and Aristova, Elizaveta and Zinkevich, Arsenii and Bakulin, Artemy and Sohn, Hoyeol and Krstevski, Daniel and Fukui, Takaaki and Tatematsu, Fumiya and Uchida, Yusuke and Jang, Donghoon and Lee, Jun Seong and Shieh, Roger and Ma, Tom and Martynov, Eduard and Shugaev, Maxim V. and Bukhari, Habib S.T. and Fujikawa, Kazuki and Onodera, Kazuki and Henkel, Christof and Ron, Shlomo and Romano, Jonathan and Nicol, John J. and Nye, Grace P. and Wu, Yuan and Choe, Christian and Reade, Walter and Eterna participants and Das, Rhiju},
  title        = {Ribonanza: deep learning of RNA structure through dual crowdsourcing},
  elocation-id = {2024.02.24.581671},
  year         = {2024},
  doi          = {10.1101/2024.02.24.581671},
  publisher    = {Cold Spring Harbor Laboratory},
  abstract     = {Prediction of RNA structure from sequence remains an unsolved problem, and progress has been slowed by a paucity of experimental data. Here, we present Ribonanza, a dataset of chemical mapping measurements on two million diverse RNA sequences collected through Eterna and other crowdsourced initiatives. Ribonanza measurements enabled solicitation, training, and prospective evaluation of diverse deep neural networks through a Kaggle challenge, followed by distillation into a single, self-contained model called RibonanzaNet. When fine tuned on auxiliary datasets, RibonanzaNet achieves state-of-the-art performance in modeling experimental sequence dropout, RNA hydrolytic degradation, and RNA secondary structure, with implications for modeling RNA tertiary structure.Competing Interest StatementStanford University is filing patent applications based on concepts described in this paper. R.D. is a cofounder of Inceptive.},
  url          = {https://www.biorxiv.org/content/early/2024/06/11/2024.02.24.581671},
  eprint       = {https://www.biorxiv.org/content/early/2024/06/11/2024.02.24.581671.full.pdf},
  journal      = {bioRxiv}
}

Contact

Please use GitHub issues of MultiMolecule for any questions or comments on the model card.

Please contact the authors of the RibonanzaNet paper for questions or comments on the paper/model.

License

This model is licensed under the AGPL-3.0 License.

Text Only
SPDX-License-Identifier: AGPL-3.0-or-later

multimolecule.models.ribonanzanet

RnaTokenizer

Bases: Tokenizer

Tokenizer for RNA sequences.

Parameters:

Name Type Description Default

alphabet

Alphabet | str | List[str] | None

alphabet to use for tokenization.

  • If is None, the standard RNA alphabet will be used.
  • If is a string, it should correspond to the name of a predefined alphabet. The options include
    • standard
    • extended
    • streamline
    • nucleobase
  • If is an alphabet or a list of characters, that specific alphabet will be used.
None

nmers

int

Size of kmer to tokenize.

1

codon

bool

Whether to tokenize into codons.

False

replace_T_with_U

bool

Whether to replace T with U.

True

do_upper_case

bool

Whether to convert input to uppercase.

True

Examples:

Python Console Session
>>> from multimolecule import RnaTokenizer
>>> tokenizer = RnaTokenizer()
>>> tokenizer('<pad><cls><eos><unk><mask><null>ACGUNRYSWKMBDHV.X*-I')["input_ids"]
[1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 2]
>>> tokenizer('acgu')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer = RnaTokenizer(replace_T_with_U=False)
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 3, 2]
>>> tokenizer = RnaTokenizer(nmers=3)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 17, 64, 49, 96, 84, 22, 2]
>>> tokenizer = RnaTokenizer(codon=True)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 49, 22, 2]
>>> tokenizer('uagcuuauca')["input_ids"]
Traceback (most recent call last):
ValueError: length of input sequence must be a multiple of 3 for codon tokenization, but got 10
Source code in multimolecule/tokenisers/rna/tokenization_rna.py
Python
class RnaTokenizer(Tokenizer):
    """
    Tokenizer for RNA sequences.

    Args:
        alphabet: alphabet to use for tokenization.

            - If is `None`, the standard RNA alphabet will be used.
            - If is a `string`, it should correspond to the name of a predefined alphabet. The options include
                + `standard`
                + `extended`
                + `streamline`
                + `nucleobase`
            - If is an alphabet or a list of characters, that specific alphabet will be used.
        nmers: Size of kmer to tokenize.
        codon: Whether to tokenize into codons.
        replace_T_with_U: Whether to replace T with U.
        do_upper_case: Whether to convert input to uppercase.

    Examples:
        >>> from multimolecule import RnaTokenizer
        >>> tokenizer = RnaTokenizer()
        >>> tokenizer('<pad><cls><eos><unk><mask><null>ACGUNRYSWKMBDHV.X*-I')["input_ids"]
        [1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 2]
        >>> tokenizer('acgu')["input_ids"]
        [1, 6, 7, 8, 9, 2]
        >>> tokenizer('acgt')["input_ids"]
        [1, 6, 7, 8, 9, 2]
        >>> tokenizer = RnaTokenizer(replace_T_with_U=False)
        >>> tokenizer('acgt')["input_ids"]
        [1, 6, 7, 8, 3, 2]
        >>> tokenizer = RnaTokenizer(nmers=3)
        >>> tokenizer('uagcuuauc')["input_ids"]
        [1, 83, 17, 64, 49, 96, 84, 22, 2]
        >>> tokenizer = RnaTokenizer(codon=True)
        >>> tokenizer('uagcuuauc')["input_ids"]
        [1, 83, 49, 22, 2]
        >>> tokenizer('uagcuuauca')["input_ids"]
        Traceback (most recent call last):
        ValueError: length of input sequence must be a multiple of 3 for codon tokenization, but got 10
    """

    model_input_names = ["input_ids", "attention_mask"]

    def __init__(
        self,
        alphabet: Alphabet | str | List[str] | None = None,
        nmers: int = 1,
        codon: bool = False,
        replace_T_with_U: bool = True,
        do_upper_case: bool = True,
        additional_special_tokens: List | Tuple | None = None,
        **kwargs,
    ):
        if codon and (nmers > 1 and nmers != 3):
            raise ValueError("Codon and nmers cannot be used together.")
        if codon:
            nmers = 3  # set to 3 to get correct vocab
        if not isinstance(alphabet, Alphabet):
            alphabet = get_alphabet(alphabet, nmers=nmers)
        super().__init__(
            alphabet=alphabet,
            nmers=nmers,
            codon=codon,
            replace_T_with_U=replace_T_with_U,
            do_upper_case=do_upper_case,
            additional_special_tokens=additional_special_tokens,
            **kwargs,
        )
        self.replace_T_with_U = replace_T_with_U
        self.nmers = nmers
        self.codon = codon

    def _tokenize(self, text: str, **kwargs):
        if self.do_upper_case:
            text = text.upper()
        if self.replace_T_with_U:
            text = text.replace("T", "U")
        if self.codon:
            if len(text) % 3 != 0:
                raise ValueError(
                    f"length of input sequence must be a multiple of 3 for codon tokenization, but got {len(text)}"
                )
            return [text[i : i + 3] for i in range(0, len(text), 3)]
        if self.nmers > 1:
            return [text[i : i + self.nmers] for i in range(len(text) - self.nmers + 1)]  # noqa: E203
        return list(text)

RibonanzaNetConfig

Bases: PreTrainedConfig

This is the configuration class to store the configuration of a RibonanzaNetModel. It is used to instantiate a RibonanzaNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RibonanzaNet Shujun-He/RibonanzaNet architecture.

Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.

Parameters:

Name Type Description Default

vocab_size

int

Vocabulary size of the RibonanzaNet model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling [RibonanzaNetModel].

26

hidden_size

int

Dimensionality of the encoder layers and the pooler layer.

256

num_hidden_layers

int

Number of hidden layers in the Transformer encoder.

9

num_attention_heads

int

Number of attention heads for each attention layer in the Transformer encoder.

8

intermediate_size

int

Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.

1024

hidden_act

str

The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.

'gelu'

hidden_dropout

float

The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

0.05

attention_dropout

float

The dropout ratio for the attention probabilities.

0.05

max_position_embeddings

The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).

required

initializer_range

float

The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

0.02

layer_norm_eps

float

The epsilon used by the layer normalization layers.

1e-12

position_embedding_type

Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).

required

is_decoder

Whether the model is used as a decoder or not. If False, the model is used as an encoder.

required

use_cache

Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.

required

emb_layer_norm_before

Whether to apply layer normalization after embeddings but before the main stem of the network.

required

token_dropout

When this is enabled, masked tokens are treated as if they had been dropped out by input dropout.

required

head

HeadConfig | None

The configuration of the head.

None

lm_head

MaskedLMHeadConfig | None

The configuration of the masked language model head.

None

Examples:

Python Console Session
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetModel
>>> # Initializing a RibonanzaNet multimolecule/ribonanzanet style configuration
>>> configuration = RibonanzaNetConfig()
>>> # Initializing a model (with random weights) from the multimolecule/ribonanzanet style configuration
>>> model = RibonanzaNetModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
Source code in multimolecule/models/ribonanzanet/configuration_ribonanzanet.py
Python
class RibonanzaNetConfig(PreTrainedConfig):
    r"""
    This is the configuration class to store the configuration of a
    [`RibonanzaNetModel`][multimolecule.models.RibonanzaNetModel].
    It is used to instantiate a RibonanzaNet model according to the specified arguments, defining the model
    architecture.
    Instantiating a configuration with the defaults will yield a similar configuration to that of the RibonanzaNet
    [Shujun-He/RibonanzaNet](https://github.com/Shujun-He/RibonanzaNet) architecture.

    Configuration objects inherit from [`PreTrainedConfig`][multimolecule.models.PreTrainedConfig] and can be used to
    control the model outputs. Read the documentation from [`PreTrainedConfig`][multimolecule.models.PreTrainedConfig]
    for more information.

    Args:
        vocab_size:
            Vocabulary size of the RibonanzaNet model. Defines the number of different tokens that can be represented by
            the `inputs_ids` passed when calling [`RibonanzaNetModel`].
        hidden_size:
            Dimensionality of the encoder layers and the pooler layer.
        num_hidden_layers:
            Number of hidden layers in the Transformer encoder.
        num_attention_heads:
            Number of attention heads for each attention layer in the Transformer encoder.
        intermediate_size:
            Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
        hidden_act:
            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
            `"relu"`, `"silu"` and `"gelu_new"` are supported.
        hidden_dropout:
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        attention_dropout:
            The dropout ratio for the attention probabilities.
        max_position_embeddings:
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048).
        initializer_range:
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        layer_norm_eps:
            The epsilon used by the layer normalization layers.
        position_embedding_type:
            Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
            positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
            [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
            For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
            with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
        is_decoder:
            Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
        use_cache:
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        emb_layer_norm_before:
            Whether to apply layer normalization after embeddings but before the main stem of the network.
        token_dropout:
            When this is enabled, masked tokens are treated as if they had been dropped out by input dropout.
        head:
            The configuration of the head.
        lm_head:
            The configuration of the masked language model head.

    Examples:
        >>> from multimolecule import RibonanzaNetConfig, RibonanzaNetModel
        >>> # Initializing a RibonanzaNet multimolecule/ribonanzanet style configuration
        >>> configuration = RibonanzaNetConfig()
        >>> # Initializing a model (with random weights) from the multimolecule/ribonanzanet style configuration
        >>> model = RibonanzaNetModel(configuration)
        >>> # Accessing the model configuration
        >>> configuration = model.config
    """

    model_type = "ribonanzanet"

    def __init__(
        self,
        vocab_size: int = 26,
        hidden_size: int = 256,
        num_hidden_layers: int = 9,
        num_attention_heads: int = 8,
        intermediate_size: int = 1024,
        pairwise_size: int = 64,
        pairwise_attention_size: int = 32,
        pairwise_intermediate_size: int = 256,
        pairwise_num_attention_heads: int = 4,
        kernel_size: int = 5,
        use_triangular_attention: bool = False,
        hidden_act: str = "gelu",
        pairwise_hidden_act: str = "relu",
        hidden_dropout: float = 0.05,
        attention_dropout: float = 0.05,
        output_pairwise_states: bool = False,
        initializer_range: float = 0.02,
        layer_norm_eps: float = 1e-12,
        head: HeadConfig | None = None,
        lm_head: MaskedLMHeadConfig | None = None,
        fix_attention_mask: bool = False,
        fix_attention_norm: bool = False,
        fix_pairwise_dropout: bool = False,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.type_vocab_size = 2
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.intermediate_size = intermediate_size
        self.pairwise_size = pairwise_size
        self.pairwise_attention_size = pairwise_attention_size
        self.pairwise_intermediate_size = pairwise_intermediate_size
        self.pairwise_num_attention_heads = pairwise_num_attention_heads
        self.kernel_size = kernel_size
        self.use_triangular_attention = use_triangular_attention
        self.hidden_act = hidden_act
        self.pairwise_hidden_act = pairwise_hidden_act
        self.hidden_dropout = hidden_dropout
        self.attention_dropout = attention_dropout
        self.output_pairwise_states = output_pairwise_states
        self.initializer_range = initializer_range
        self.layer_norm_eps = layer_norm_eps
        self.head = HeadConfig(**head) if head is not None else None
        self.lm_head = MaskedLMHeadConfig(**lm_head) if lm_head is not None else None
        self.fix_attention_mask = fix_attention_mask
        self.fix_attention_norm = fix_attention_norm
        self.fix_pairwise_dropout = fix_pairwise_dropout

RibonanzaNetForContactPrediction

Bases: RibonanzaNetPreTrainedModel

Examples:

Python Console Session
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForContactPrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForContactPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 5, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
Python
class RibonanzaNetForContactPrediction(RibonanzaNetPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForContactPrediction, RnaTokenizer
        >>> config = RibonanzaNetConfig()
        >>> model = RibonanzaNetForContactPrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
        >>> output["logits"].shape
        torch.Size([1, 5, 5, 1])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
    """

    def __init__(self, config: RibonanzaNetConfig):
        super().__init__(config)
        self.ribonanzanet = RibonanzaNetModel(config)
        self.contact_head = ContactPredictionHead(config)
        self.head_config = self.contact_head.config

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        output_attentions: bool | None = None,
        output_hidden_states: bool | None = None,
        output_pairwise_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | RibonanzaNetContactPredictorOutput:
        if output_attentions is False:
            warn("output_attentions must be True for contact classification and will be ignored.")
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ribonanzanet(
            input_ids,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=True,
            output_hidden_states=output_hidden_states,
            output_pairwise_states=output_pairwise_states,
            return_dict=return_dict,
            **kwargs,
        )
        output = self.contact_head(outputs, attention_mask, input_ids, labels)
        logits, loss = output.logits, output.loss

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return RibonanzaNetContactPredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            pairwise_states=outputs.pairwise_states,
            attentions=outputs.attentions,
        )

RibonanzaNetForDegradationPrediction

Bases: RibonanzaNetPreTrainedModel

Examples:

Python Console Session
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForDegradationPrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForDegradationPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels_reactivity=torch.randn(1, 5))
>>> output["logits_reactivity"].shape
torch.Size([1, 5, 1])
>>> output["loss"]
tensor(..., grad_fn=<MeanBackward0>)
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
Python
class RibonanzaNetForDegradationPrediction(RibonanzaNetPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForDegradationPrediction, RnaTokenizer
        >>> config = RibonanzaNetConfig()
        >>> model = RibonanzaNetForDegradationPrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels_reactivity=torch.randn(1, 5))
        >>> output["logits_reactivity"].shape
        torch.Size([1, 5, 1])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<MeanBackward0>)
    """

    def __init__(self, config: RibonanzaNetConfig):
        super().__init__(config)
        self.ribonanzanet = RibonanzaNetModel(config, add_pooling_layer=False)
        self.reactivity_head = TokenPredictionHead(config)
        self.deg_Mg_pH10_head = TokenPredictionHead(config)
        self.deg_pH10_head = TokenPredictionHead(config)
        self.deg_Mg_50C_head = TokenPredictionHead(config)
        self.deg_50C_head = TokenPredictionHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels_reactivity: Tensor | None = None,
        labels_deg_Mg_pH10: Tensor | None = None,
        labels_deg_pH10: Tensor | None = None,
        labels_deg_Mg_50C: Tensor | None = None,
        labels_deg_50C: Tensor | None = None,
        output_attentions: bool | None = None,
        output_hidden_states: bool | None = None,
        output_pairwise_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | RibonanzaNetForDegradationPredictorOutput:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ribonanzanet(
            input_ids,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            output_pairwise_states=output_pairwise_states,
            return_dict=return_dict,
            **kwargs,
        )

        output_reactivity = self.reactivity_head(outputs, attention_mask, input_ids, labels_reactivity)
        logits_reactivity, loss_reactivity = output_reactivity.logits, output_reactivity.loss

        output_deg_Mg_pH10 = self.deg_Mg_pH10_head(outputs, attention_mask, input_ids, labels_deg_Mg_pH10)
        logits_deg_Mg_pH10, loss_deg_Mg_pH10 = output_deg_Mg_pH10.logits, output_deg_Mg_pH10.loss

        output_deg_pH10 = self.deg_pH10_head(outputs, attention_mask, input_ids, labels_deg_pH10)
        logits_deg_pH10, loss_deg_pH10 = output_deg_pH10.logits, output_deg_pH10.loss

        output_deg_Mg_50C = self.deg_Mg_50C_head(outputs, attention_mask, input_ids, labels_deg_Mg_50C)
        logits_deg_Mg_50C, loss_deg_Mg_50C = output_deg_Mg_50C.logits, output_deg_Mg_50C.loss

        output_deg_50C = self.deg_50C_head(outputs, attention_mask, input_ids, labels_deg_50C)
        logits_deg_50C, loss_deg_50C = output_deg_50C.logits, output_deg_50C.loss

        losses = tuple(
            l
            for l in (loss_reactivity, loss_deg_Mg_pH10, loss_deg_pH10, loss_deg_Mg_50C, loss_deg_50C)  # noqa: E741
            if l is not None
        )
        loss = torch.mean(torch.stack(losses)) if losses else None

        if not return_dict:
            output = outputs[2:]
            output = (
                ((logits_deg_50C, loss_deg_50C) + output) if loss_deg_50C is not None else ((logits_deg_50C,) + output)
            )
            output = (
                ((logits_deg_Mg_50C, loss_deg_Mg_50C) + output)
                if loss_deg_Mg_50C is not None
                else ((logits_deg_Mg_50C,) + output)
            )
            output = (
                ((logits_deg_pH10, loss_deg_pH10) + output)
                if loss_deg_pH10 is not None
                else ((logits_deg_pH10,) + output)
            )
            output = (
                ((logits_deg_Mg_pH10, loss_deg_Mg_pH10) + output)
                if loss_deg_Mg_pH10 is not None
                else ((logits_deg_Mg_pH10,) + output)
            )
            output = (
                ((logits_reactivity, loss_reactivity) + output)
                if loss_reactivity is not None
                else ((logits_reactivity,) + output)
            )
            return ((loss,) + output) if loss is not None else output

        return RibonanzaNetForDegradationPredictorOutput(
            loss=loss,
            logits_reactivity=logits_reactivity,
            loss_reactivity=loss_reactivity,
            logits_deg_50C=logits_deg_50C,
            loss_deg_50C=loss_deg_50C,
            logits_deg_Mg_50C=logits_deg_Mg_50C,
            loss_deg_Mg_50C=loss_deg_Mg_50C,
            logits_deg_pH10=logits_deg_pH10,
            loss_deg_pH10=loss_deg_pH10,
            logits_deg_Mg_pH10=logits_deg_Mg_pH10,
            loss_deg_Mg_pH10=loss_deg_Mg_pH10,
            hidden_states=outputs.hidden_states,
            pairwise_states=outputs.pairwise_states,
            attentions=outputs.attentions,
        )

RibonanzaNetForSecondaryStructurePrediction

Bases: RibonanzaNetForPreTraining

Examples:

Python Console Session
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSecondaryStructurePrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForSecondaryStructurePrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels_ss=torch.randint(2, (1, 5, 5)))
>>> output["logits_ss"].shape
torch.Size([1, 5, 5, 1])
>>> output["loss"]
tensor(..., grad_fn=<MeanBackward0>)
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
Python
class RibonanzaNetForSecondaryStructurePrediction(RibonanzaNetForPreTraining):
    """
    Examples:
        >>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSecondaryStructurePrediction, RnaTokenizer
        >>> config = RibonanzaNetConfig()
        >>> model = RibonanzaNetForSecondaryStructurePrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels_ss=torch.randint(2, (1, 5, 5)))
        >>> output["logits_ss"].shape
        torch.Size([1, 5, 5, 1])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<MeanBackward0>)
    """

    def __init__(self, config: RibonanzaNetConfig):
        super().__init__(config)
        self.ribonanzanet = RibonanzaNetModel(config, add_pooling_layer=False)
        self.ss_head = RibonanzaNetSecondaryStructurePredictionHead(config)
        self.a3c_head = TokenPredictionHead(config)
        self.dms_head = TokenPredictionHead(config)
        self.head_config = self.ss_head.config

        # Initialize weights and apply final processing
        self.post_init()

    def forward(  # type: ignore[override]  # pylint: disable=arguments-renamed
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels_ss: Tensor | None = None,
        labels_2a3: Tensor | None = None,
        labels_dms: Tensor | None = None,
        output_attentions: bool | None = None,
        output_hidden_states: bool | None = None,
        output_pairwise_states: bool = True,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | RibonanzaNetForSecondaryStructurePredictorOutput:
        if not output_pairwise_states:
            warn("output_pairwise_states must be True since prediction head requires pairwise states.")
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ribonanzanet(
            input_ids,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            output_pairwise_states=True,
            return_dict=return_dict,
            **kwargs,
        )

        output_ss = self.ss_head(outputs, attention_mask, input_ids, labels_ss)
        logits_ss, loss_ss = output_ss.logits, output_ss.loss

        output_2a3 = self.a3c_head(outputs, attention_mask, input_ids, labels_2a3)
        logits_2a3, loss_2a3 = output_2a3.logits, output_2a3.loss

        output_dms = self.dms_head(outputs, attention_mask, input_ids, labels_dms)
        logits_dms, loss_dms = output_dms.logits, output_dms.loss

        losses = tuple(l for l in (loss_2a3, loss_dms, loss_ss) if l is not None)  # noqa: E741
        loss = torch.mean(torch.stack(losses)) if losses else None

        if not return_dict:
            output = outputs[2:]
            output = ((logits_dms, loss_dms) + output) if loss_dms is not None else ((logits_dms,) + output)
            output = ((logits_2a3, loss_2a3) + output) if loss_2a3 is not None else ((logits_2a3,) + output)
            output = ((logits_ss, loss_ss) + output) if loss_ss is not None else ((logits_ss,) + output)
            return ((loss,) + output) if loss is not None else output

        return RibonanzaNetForSecondaryStructurePredictorOutput(
            loss=loss,
            logits_ss=logits_ss,
            loss_ss=loss_ss,
            logits_2a3=logits_2a3,
            loss_2a3=loss_2a3,
            logits_dms=logits_dms,
            loss_dms=loss_dms,
            hidden_states=outputs.hidden_states,
            pairwise_states=outputs.pairwise_states,
            attentions=outputs.attentions,
        )

RibonanzaNetForSequenceDropoutPrediction

Bases: RibonanzaNetPreTrainedModel

Examples:

Python Console Session
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSequenceDropoutPrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForSequenceDropoutPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels_reactivity=torch.randn(1, 5))
>>> output["logits_2a3"].shape
torch.Size([1, 1])
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
Python
class RibonanzaNetForSequenceDropoutPrediction(RibonanzaNetPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSequenceDropoutPrediction, RnaTokenizer
        >>> config = RibonanzaNetConfig()
        >>> model = RibonanzaNetForSequenceDropoutPrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels_reactivity=torch.randn(1, 5))
        >>> output["logits_2a3"].shape
        torch.Size([1, 1])
    """

    def __init__(self, config: RibonanzaNetConfig):
        super().__init__(config)
        self.ribonanzanet = RibonanzaNetModel(config, add_pooling_layer=False)
        self.a3c_head = RibonanzaNetSequenceDropoutPredictionHead(config)
        self.dms_head = RibonanzaNetSequenceDropoutPredictionHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels_2a3: Tensor | None = None,
        labels_dms: Tensor | None = None,
        output_attentions: bool | None = None,
        output_hidden_states: bool | None = None,
        output_pairwise_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | RibonanzaNetForDegradationPredictorOutput:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ribonanzanet(
            input_ids,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            output_pairwise_states=output_pairwise_states,
            return_dict=return_dict,
            **kwargs,
        )

        output_2a3 = self.a3c_head(outputs, attention_mask, input_ids, labels_2a3)
        logits_2a3, loss_2a3 = output_2a3.logits, output_2a3.loss

        output_dms = self.dms_head(outputs, attention_mask, input_ids, labels_dms)
        logits_dms, loss_dms = output_dms.logits, output_dms.loss

        losses = tuple(l for l in (loss_2a3, loss_dms) if l is not None)  # noqa: E741
        loss = torch.mean(torch.stack(losses)) if losses else None

        if not return_dict:
            output = outputs[2:]
            output = ((logits_dms, loss_dms) + output) if loss_dms is not None else ((logits_dms,) + output)
            output = ((logits_2a3, loss_2a3) + output) if loss_2a3 is not None else ((logits_2a3,) + output)
            return ((loss,) + output) if loss is not None else output

        return RibonanzaNetSequenceDropoutPredictorOutput(
            loss=loss,
            logits_2a3=logits_2a3,
            loss_2a3=loss_2a3,
            logits_dms=logits_dms,
            loss_dms=loss_dms,
            hidden_states=outputs.hidden_states,
            pairwise_states=outputs.pairwise_states,
            attentions=outputs.attentions,
        )

RibonanzaNetForSequencePrediction

Bases: RibonanzaNetPreTrainedModel

Examples:

Python Console Session
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSequencePrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForSequencePrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.tensor([[1]]))
>>> output["logits"].shape
torch.Size([1, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
Python
class RibonanzaNetForSequencePrediction(RibonanzaNetPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSequencePrediction, RnaTokenizer
        >>> config = RibonanzaNetConfig()
        >>> model = RibonanzaNetForSequencePrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=torch.tensor([[1]]))
        >>> output["logits"].shape
        torch.Size([1, 1])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
    """

    def __init__(self, config: RibonanzaNetConfig):
        super().__init__(config)
        self.ribonanzanet = RibonanzaNetModel(config)
        self.sequence_head = SequencePredictionHead(config)
        self.head_config = self.sequence_head.config

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        output_attentions: bool | None = None,
        output_hidden_states: bool | None = None,
        output_pairwise_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | RibonanzaNetSequencePredictorOutput:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ribonanzanet(
            input_ids,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            output_pairwise_states=output_pairwise_states,
            return_dict=return_dict,
            **kwargs,
        )
        output = self.sequence_head(outputs, labels)
        logits, loss = output.logits, output.loss

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return RibonanzaNetSequencePredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            pairwise_states=outputs.pairwise_states,
            attentions=outputs.attentions,
        )

RibonanzaNetForTokenPrediction

Bases: RibonanzaNetPreTrainedModel

Examples:

Python Console Session
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForTokenPrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForTokenPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
Python
class RibonanzaNetForTokenPrediction(RibonanzaNetPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForTokenPrediction, RnaTokenizer
        >>> config = RibonanzaNetConfig()
        >>> model = RibonanzaNetForTokenPrediction(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input, labels=torch.randint(2, (1, 5)))
        >>> output["logits"].shape
        torch.Size([1, 5, 1])
        >>> output["loss"]  # doctest:+ELLIPSIS
        tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
    """

    def __init__(self, config: RibonanzaNetConfig):
        super().__init__(config)
        self.ribonanzanet = RibonanzaNetModel(config)
        self.token_head = TokenPredictionHead(config)
        self.head_config = self.token_head.config

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        labels: Tensor | None = None,
        output_attentions: bool | None = None,
        output_hidden_states: bool | None = None,
        output_pairwise_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | RibonanzaNetTokenPredictorOutput:
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        outputs = self.ribonanzanet(
            input_ids,
            attention_mask=attention_mask,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            output_pairwise_states=output_pairwise_states,
            return_dict=return_dict,
            **kwargs,
        )
        output = self.token_head(outputs, attention_mask, input_ids, labels)
        logits, loss = output.logits, output.loss

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return RibonanzaNetTokenPredictorOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            pairwise_states=outputs.pairwise_states,
            attentions=outputs.attentions,
        )

RibonanzaNetModel

Bases: RibonanzaNetPreTrainedModel

Examples:

Python Console Session
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetModel, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetModel(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input)
>>> output["last_hidden_state"].shape
torch.Size([1, 7, 256])
>>> output["pooler_output"].shape
torch.Size([1, 256])
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
Python
class RibonanzaNetModel(RibonanzaNetPreTrainedModel):
    """
    Examples:
        >>> from multimolecule import RibonanzaNetConfig, RibonanzaNetModel, RnaTokenizer
        >>> config = RibonanzaNetConfig()
        >>> model = RibonanzaNetModel(config)
        >>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
        >>> input = tokenizer("ACGUN", return_tensors="pt")
        >>> output = model(**input)
        >>> output["last_hidden_state"].shape
        torch.Size([1, 7, 256])
        >>> output["pooler_output"].shape
        torch.Size([1, 256])
    """

    def __init__(self, config: RibonanzaNetConfig, add_pooling_layer: bool = True):
        super().__init__(config)
        self.pad_token_id = config.pad_token_id
        self.embeddings = RibonanzaNetEmbeddings(config)
        self.encoder = RibonanzaNetEncoder(config)
        self.pooler = RibonanzaNetPooler(config) if add_pooling_layer else None
        self.fix_attention_mask = config.fix_attention_mask

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.embeddings.word_embeddings

    def set_input_embeddings(self, value):
        self.embeddings.word_embeddings = value

    def _prune_heads(self, heads_to_prune):
        """
        Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
        class PreTrainedModel
        """
        for layer, heads in heads_to_prune.items():
            self.encoder.layer[layer].attention.prune_heads(heads)

    def forward(
        self,
        input_ids: Tensor | NestedTensor,
        attention_mask: Tensor | None = None,
        head_mask: Tensor | None = None,
        inputs_embeds: Tensor | NestedTensor | None = None,
        output_attentions: bool | None = None,
        output_hidden_states: bool | None = None,
        output_pairwise_states: bool | None = None,
        return_dict: bool | None = None,
        **kwargs,
    ) -> Tuple[Tensor, ...] | RibonanzaNetModelOutputWithPooling:
        if kwargs:
            warn(
                f"Additional keyword arguments `{', '.join(kwargs)}` are detected in "
                f"`{self.__class__.__name__}.forward`, they will be ignored.\n"
                "This is provided for backward compatibility and may lead to unexpected behavior."
            )
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        output_pairwise_states = (
            output_pairwise_states if output_pairwise_states is not None else self.config.output_pairwise_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if isinstance(input_ids, NestedTensor):
            input_ids, attention_mask = input_ids.tensor, input_ids.mask
        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
        if input_ids is not None:
            self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
            input_shape = input_ids.size()
        elif inputs_embeds is not None:
            input_shape = inputs_embeds.size()[:-1]
        else:
            raise ValueError("You have to specify either input_ids or inputs_embeds")

        batch_size, seq_length = input_shape
        device = input_ids.device if input_ids is not None else inputs_embeds.device  # type: ignore[union-attr]

        if attention_mask is None:
            attention_mask = (
                input_ids.ne(self.pad_token_id)
                if self.pad_token_id is not None
                else torch.ones(((batch_size, seq_length)), device=device)
            )
        else:
            # Must make a clone here because the attention mask might be reused in other modules
            # and we need to process it to mimic the behavior of the original implementation.
            # See more in https://github.com/Shujun-He/RibonanzaNet/issues/4
            attention_mask = attention_mask.clone()

        # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
        # ourselves in which case we just need to make it broadcastable to all heads.
        extended_attention_mask: Tensor = self.get_extended_attention_mask(attention_mask, input_shape)
        attention_mask = attention_mask.float().unsqueeze(-1)

        # Prepare head mask if needed
        # 1.0 in head_mask indicate we keep the head
        # attention_probs has shape bsz x n_heads x N x N
        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)

        embedding_output = self.embeddings(
            input_ids=input_ids,
            inputs_embeds=inputs_embeds,
        )
        encoder_outputs = self.encoder(
            embedding_output,
            attention_mask=attention_mask,
            extended_attention_mask=extended_attention_mask,
            head_mask=head_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            output_pairwise_states=output_pairwise_states,
            return_dict=return_dict,
        )
        sequence_output = encoder_outputs[0]
        pooled_output = self.pooler(sequence_output) if self.pooler is not None else None

        if not return_dict:
            return (sequence_output, pooled_output) + encoder_outputs[1:]

        return RibonanzaNetModelOutputWithPooling(
            last_hidden_state=sequence_output,
            pooler_output=pooled_output,
            hidden_states=encoder_outputs.hidden_states,
            pairwise_states=encoder_outputs.pairwise_states,
            attentions=encoder_outputs.attentions,
        )

    def get_extended_attention_mask(
        self, attention_mask: Tensor, input_shape: Tuple[int], dtype: torch.dtype | None = None
    ) -> Tensor:
        """
        Makes broadcastable attention and causal masks so that future and masked tokens are ignored.

        Arguments:
            attention_mask (`torch.Tensor`):
                Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
            input_shape (`Tuple[int]`):
                The shape of the input to the model.

        Returns:
            `torch.Tensor` The extended attention mask, with the same dtype as `attention_mask.dtype`.
        """
        if dtype is None:
            dtype = self.dtype

        if attention_mask.dim() == 2:
            attention_mask = attention_mask.unsqueeze(-1)
            if not self.fix_attention_mask:
                attention_mask[attention_mask == 0] = -1
            attention_mask = torch.matmul(attention_mask, attention_mask.transpose(1, 2))
        elif attention_mask.shape != 3:
            raise ValueError(
                f"Wrong shape for input_ids (shape {input_shape}) or attention_mask (shape {attention_mask.shape})"
            )

        extended_attention_mask = attention_mask[:, None, :, :]
        # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
        # masked positions, this operation will create a tensor which is 0.0 for
        # positions we want to attend and the dtype's smallest value for masked positions.
        # Since we are adding it to the raw scores before the softmax, this is
        # effectively the same as removing these entirely.
        extended_attention_mask = extended_attention_mask.to(dtype=dtype)  # fp16 compatibility
        if self.fix_attention_mask:
            extended_attention_mask = (1.0 - extended_attention_mask) * torch.finfo(dtype).min
        return extended_attention_mask

get_extended_attention_mask

Python
get_extended_attention_mask(attention_mask: Tensor, input_shape: Tuple[int], dtype: dtype | None = None) -> Tensor

Makes broadcastable attention and causal masks so that future and masked tokens are ignored.

Parameters:

Name Type Description Default
attention_mask
`torch.Tensor`

Mask with ones indicating tokens to attend to, zeros for tokens to ignore.

required
input_shape
`Tuple[int]`

The shape of the input to the model.

required

Returns:

Type Description
Tensor

torch.Tensor The extended attention mask, with the same dtype as attention_mask.dtype.

Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
Python
def get_extended_attention_mask(
    self, attention_mask: Tensor, input_shape: Tuple[int], dtype: torch.dtype | None = None
) -> Tensor:
    """
    Makes broadcastable attention and causal masks so that future and masked tokens are ignored.

    Arguments:
        attention_mask (`torch.Tensor`):
            Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
        input_shape (`Tuple[int]`):
            The shape of the input to the model.

    Returns:
        `torch.Tensor` The extended attention mask, with the same dtype as `attention_mask.dtype`.
    """
    if dtype is None:
        dtype = self.dtype

    if attention_mask.dim() == 2:
        attention_mask = attention_mask.unsqueeze(-1)
        if not self.fix_attention_mask:
            attention_mask[attention_mask == 0] = -1
        attention_mask = torch.matmul(attention_mask, attention_mask.transpose(1, 2))
    elif attention_mask.shape != 3:
        raise ValueError(
            f"Wrong shape for input_ids (shape {input_shape}) or attention_mask (shape {attention_mask.shape})"
        )

    extended_attention_mask = attention_mask[:, None, :, :]
    # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
    # masked positions, this operation will create a tensor which is 0.0 for
    # positions we want to attend and the dtype's smallest value for masked positions.
    # Since we are adding it to the raw scores before the softmax, this is
    # effectively the same as removing these entirely.
    extended_attention_mask = extended_attention_mask.to(dtype=dtype)  # fp16 compatibility
    if self.fix_attention_mask:
        extended_attention_mask = (1.0 - extended_attention_mask) * torch.finfo(dtype).min
    return extended_attention_mask

RibonanzaNetPreTrainedModel

Bases: PreTrainedModel

An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.

Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
Python
class RibonanzaNetPreTrainedModel(PreTrainedModel):
    """
    An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
    models.
    """

    config_class = RibonanzaNetConfig
    base_model_prefix = "ribonanzanet"
    supports_gradient_checkpointing = True
    _no_split_modules = ["RibonanzaNetLayer", "RibonanzaNetEmbeddings"]

    # Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights
    def _init_weights(self, module: nn.Module):
        """Initialize the weights"""
        if isinstance(module, nn.Linear):
            # Slightly different from the TF version which uses truncated_normal for initialization
            # cf https://github.com/pytorch/pytorch/pull/5617
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.padding_idx is not None:
                module.weight.data[module.padding_idx].zero_()
        elif isinstance(module, nn.LayerNorm):
            module.bias.data.zero_()
            module.weight.data.fill_(1.0)
        for n, m in module.named_modules():
            if "_gate" in n:
                m.weight.data.zero_()
                m.bias.data.fill_(1.0)