RibonanzaNet
Caution
The MultiMolecule team is aware of a potential risk in reproducing the results of RibonanzaNet.
The original implementation of RibonanzaNet applies the dropout-residual-norm path twice to the output of the Self-Attention layer.
By default, MultiMolecule follows the original implementation.
You can set `fix_attention_norm=True` in the model configuration to apply the dropout-residual-norm path only once.
See more at issue #3.
Caution
The MultiMolecule team is aware of a potential risk in reproducing the results of RibonanzaNet.
The original implementation of RibonanzaNet does not apply the attention mask correctly.
By default, MultiMolecule follows the original implementation.
You can set `fix_attention_mask=True` in the model configuration to apply the correct attention mask.
See more at issue #4, issue #5, and issue #7.
Caution
The MultiMolecule team is aware of a potential risk in reproducing the results of RibonanzaNet.
The original implementation of RibonanzaNet applies dropout along an axis different from the one described in the paper.
By default, MultiMolecule follows the original implementation.
You can set `fix_pairwise_dropout=True` in the model configuration to follow the description in the paper.
See more at issue #6.
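The sketch below shows one way these flags might be enabled when loading the pretrained weights. It follows the standard Transformers pattern of loading a configuration, modifying it, and passing it back to `from_pretrained`; this is an illustration, not the only supported way to set the flags.

```python
from multimolecule import RibonanzaNetConfig, RibonanzaNetModel

# Load the configuration, enable the corrected behaviours, then load the weights.
# Note: the released checkpoints were trained with the original behaviour, so
# enabling these flags may change the model outputs.
config = RibonanzaNetConfig.from_pretrained("multimolecule/ribonanzanet")
config.fix_attention_norm = True
config.fix_attention_mask = True
config.fix_pairwise_dropout = True
model = RibonanzaNetModel.from_pretrained("multimolecule/ribonanzanet", config=config)
```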
Tip
The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
The team releasing RibonanzaNet did not write this model card for this model, so this model card has been written by the MultiMolecule team.
Model Details
RibonanzaNet is a BERT-style model pre-trained on a large corpus of RNA sequences. Please refer to the Training Details section for more information on the training process.
Model Specification
| Num Layers | Hidden Size | Num Heads | Intermediate Size | Num Parameters (M) | FLOPs (G) | MACs (G) | Max Num Tokens |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 9 | 256 | 8 | 1024 | 11.37 | 107.31 | 53.32 | inf |
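The architectural numbers above can be read off the default configuration, assuming the defaults match the released checkpoint (the parameter count, FLOPs, and MACs are not stored in the config):

```python
from multimolecule import RibonanzaNetConfig

# Default configuration reflects the architecture in the table above.
config = RibonanzaNetConfig()
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads, config.intermediate_size)
# 9 256 8 1024
```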
Links
- Code: multimolecule.ribonanzanet
- Weights: multimolecule/ribonanzanet
- Data: Ribonanza
- Paper: Ribonanza: deep learning of RNA structure through dual crowdsourcing
- Developed by: Shujun He, Rui Huang, Jill Townley, Rachael C. Kretsch, Thomas G. Karagianes, David B.T. Cox, Hamish Blair, Dmitry Penzar, Valeriy Vyaltsev, Elizaveta Aristova, Arsenii Zinkevich, Artemy Bakulin, Hoyeol Sohn, Daniel Krstevski, Takaaki Fukui, Fumiya Tatematsu, Yusuke Uchida, Donghoon Jang, Jun Seong Lee, Roger Shieh, Tom Ma, Eduard Martynov, Maxim V. Shugaev, Habib S.T. Bukhari, Kazuki Fujikawa, Kazuki Onodera, Christof Henkel, Shlomo Ron, Jonathan Romano, John J. Nicol, Grace P. Nye, Yuan Wu, Christian Choe, Walter Reade, Eterna participants, Rhiju Das
- Model type: BERT
- Original Repository: Shujun-He/RibonanzaNet
Usage
The model file depends on the `multimolecule` library. You can install it using pip:
```bash
pip install multimolecule
```
Direct Use
You can use this model directly to predict chemical mapping:
```pycon
>>> from multimolecule import RnaTokenizer, RibonanzaNetForPreTraining
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
>>> model = RibonanzaNetForPreTraining.from_pretrained("multimolecule/ribonanzanet")
```
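The snippet above only loads the tokenizer and model. A minimal, hedged continuation that runs a forward pass is sketched below; the fields available on the returned output depend on the pre-training head, so inspect the returned object rather than relying on any particular field name.

```python
import torch

# Continuing the snippet above: tokenize an RNA sequence and run a forward pass.
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**input)

# Inspect the returned object to see which prediction fields are available.
print(type(output))
```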
Downstream Use
Here is how to use this model to get the features of a given sequence in PyTorch:
```python
from multimolecule import RnaTokenizer, RibonanzaNetModel
tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
model = RibonanzaNetModel.from_pretrained("multimolecule/ribonanzanet")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
output = model(**input)
```
Sequence Classification / Regression
Note: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, RibonanzaNetForSequencePrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
model = RibonanzaNetForSequencePrediction.from_pretrained("multimolecule/ribonanzanet")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])
output = model(**input, labels=label)
```
Token Classification / Regression
Note: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.
Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, RibonanzaNetForTokenPrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
model = RibonanzaNetForTokenPrediction.from_pretrained("multimolecule/ribonanzanet")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), ))
output = model(**input, labels=label)
```
Contact Classification / Regression
Note: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:
```python
import torch
from multimolecule import RnaTokenizer, RibonanzaNetForContactPrediction
tokenizer = RnaTokenizer.from_pretrained("multimolecule/ribonanzanet")
model = RibonanzaNetForContactPrediction.from_pretrained("multimolecule/ribonanzanet")
text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))
output = model(**input, labels=label)
```
Training Details
RibonanzaNet used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
Training Data
The RibonanzaNet model was pre-trained on Ribonanza.
RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of Expert Databases representing a broad range of organisms and RNA types.
RibonanzaNet applied CD-HIT (CD-HIT-EST) with a cut-off at 100% sequence identity to remove redundancy from RNAcentral. The final dataset contains 23.7 million non-redundant RNA sequences.
RibonanzaNet preprocessed all tokens by replacing “U”s with “T”s.
Note that during model conversions, “T” is replaced with “U”. `RnaTokenizer` will convert “T”s to “U”s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
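For example (expected token ids are taken from the `RnaTokenizer` examples documented later in this page, which use the default alphabet):

```python
from multimolecule import RnaTokenizer

# With the default behaviour, "T" is converted to "U" before tokenization.
tokenizer = RnaTokenizer()
print(tokenizer("ACGT")["input_ids"])  # [1, 6, 7, 8, 9, 2]

# Disable the conversion so "T" maps to the unknown token instead.
tokenizer = RnaTokenizer(replace_T_with_U=False)
print(tokenizer("ACGT")["input_ids"])  # [1, 6, 7, 8, 3, 2]
```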
Training Procedure
Preprocessing
RibonanzaNet used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT (an illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.
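A minimal sketch of this 80/10/10 scheme is given below. It is illustrative only: it is not the training code used for RibonanzaNet, and the special-token handling of a real data collator is omitted.

```python
import torch

def mask_tokens(input_ids: torch.Tensor, mask_token_id: int, vocab_size: int,
                mlm_probability: float = 0.15) -> tuple[torch.Tensor, torch.Tensor]:
    labels = input_ids.clone()

    # Select 15% of the tokens for prediction; loss is only computed on these.
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100

    # 80% of the selected tokens are replaced by the <mask> token.
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = mask_token_id

    # Half of the remainder (10% overall) is replaced by a random token;
    # the rest (10% overall) is left unchanged.
    random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    input_ids[random] = torch.randint(vocab_size, labels.shape, dtype=torch.long)[random]

    return input_ids, labels
```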
PreTraining
The model was trained on 10 NVIDIA L40S GPUs, each with 48GiB of memory; an illustrative optimizer configuration is sketched after the hyperparameter list below.
- Learning rate: 1e-4
- Weight decay: 0.01
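For reference, a hedged sketch of an optimizer configured with these hyperparameters; the card does not name the optimizer, so the choice of AdamW below is an assumption.

```python
import torch
from multimolecule import RibonanzaNetConfig, RibonanzaNetForPreTraining

# Hypothetical setup: a randomly initialized model for pre-training.
model = RibonanzaNetForPreTraining(RibonanzaNetConfig())

# Hyperparameters from this card; the optimizer itself is an assumption.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
```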
Citation
BibTeX:
```bibtex
@article{He2024.02.24.581671,
author = {He, Shujun and Huang, Rui and Townley, Jill and Kretsch, Rachael C. and Karagianes, Thomas G. and Cox, David B.T. and Blair, Hamish and Penzar, Dmitry and Vyaltsev, Valeriy and Aristova, Elizaveta and Zinkevich, Arsenii and Bakulin, Artemy and Sohn, Hoyeol and Krstevski, Daniel and Fukui, Takaaki and Tatematsu, Fumiya and Uchida, Yusuke and Jang, Donghoon and Lee, Jun Seong and Shieh, Roger and Ma, Tom and Martynov, Eduard and Shugaev, Maxim V. and Bukhari, Habib S.T. and Fujikawa, Kazuki and Onodera, Kazuki and Henkel, Christof and Ron, Shlomo and Romano, Jonathan and Nicol, John J. and Nye, Grace P. and Wu, Yuan and Choe, Christian and Reade, Walter and Eterna participants and Das, Rhiju},
title = {Ribonanza: deep learning of RNA structure through dual crowdsourcing},
elocation-id = {2024.02.24.581671},
year = {2024},
doi = {10.1101/2024.02.24.581671},
publisher = {Cold Spring Harbor Laboratory},
abstract = {Prediction of RNA structure from sequence remains an unsolved problem, and progress has been slowed by a paucity of experimental data. Here, we present Ribonanza, a dataset of chemical mapping measurements on two million diverse RNA sequences collected through Eterna and other crowdsourced initiatives. Ribonanza measurements enabled solicitation, training, and prospective evaluation of diverse deep neural networks through a Kaggle challenge, followed by distillation into a single, self-contained model called RibonanzaNet. When fine tuned on auxiliary datasets, RibonanzaNet achieves state-of-the-art performance in modeling experimental sequence dropout, RNA hydrolytic degradation, and RNA secondary structure, with implications for modeling RNA tertiary structure.Competing Interest StatementStanford University is filing patent applications based on concepts described in this paper. R.D. is a cofounder of Inceptive.},
url = {https://www.biorxiv.org/content/early/2024/06/11/2024.02.24.581671},
eprint = {https://www.biorxiv.org/content/early/2024/06/11/2024.02.24.581671.full.pdf},
journal = {bioRxiv}
}
```
Please use GitHub issues of MultiMolecule for any questions or comments on the model card.
Please contact the authors of the RibonanzaNet paper for questions or comments on the paper/model.
License
This model is licensed under the AGPL-3.0 License.
```text
SPDX-License-Identifier: AGPL-3.0-or-later
```
multimolecule.models.ribonanzanet
RnaTokenizer
Bases: Tokenizer
Tokenizer for RNA sequences.
Parameters:
| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `alphabet` | `Alphabet \| str \| List[str] \| None` | Alphabet to use for tokenization. If `None`, the standard RNA alphabet will be used. If a `string`, it should correspond to the name of a predefined alphabet; the options include `standard`, `extended`, `streamline`, and `nucleobase`. If an alphabet or a list of characters, that specific alphabet will be used. | `None` |
| `nmers` | `int` | Size of kmer to tokenize. | `1` |
| `codon` | `bool` | Whether to tokenize into codons. | `False` |
| `replace_T_with_U` | `bool` | Whether to replace T with U. | `True` |
| `do_upper_case` | `bool` | Whether to convert input to uppercase. | `True` |
Examples:
```pycon
>>> from multimolecule import RnaTokenizer
>>> tokenizer = RnaTokenizer()
>>> tokenizer('<pad><cls><eos><unk><mask><null>ACGUNRYSWKMBDHV.X*-I')["input_ids"]
[1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 2]
>>> tokenizer('acgu')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer = RnaTokenizer(replace_T_with_U=False)
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 3, 2]
>>> tokenizer = RnaTokenizer(nmers=3)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 17, 64, 49, 96, 84, 22, 2]
>>> tokenizer = RnaTokenizer(codon=True)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 49, 22, 2]
>>> tokenizer('uagcuuauca')["input_ids"]
Traceback (most recent call last):
ValueError: length of input sequence must be a multiple of 3 for codon tokenization, but got 10
```
Source code in multimolecule/tokenisers/rna/tokenization_rna.py
```python
class RnaTokenizer(Tokenizer):
"""
Tokenizer for RNA sequences.
Args:
alphabet: alphabet to use for tokenization.
- If is `None`, the standard RNA alphabet will be used.
- If is a `string`, it should correspond to the name of a predefined alphabet. The options include
+ `standard`
+ `extended`
+ `streamline`
+ `nucleobase`
- If is an alphabet or a list of characters, that specific alphabet will be used.
nmers: Size of kmer to tokenize.
codon: Whether to tokenize into codons.
replace_T_with_U: Whether to replace T with U.
do_upper_case: Whether to convert input to uppercase.
Examples:
>>> from multimolecule import RnaTokenizer
>>> tokenizer = RnaTokenizer()
>>> tokenizer('<pad><cls><eos><unk><mask><null>ACGUNRYSWKMBDHV.X*-I')["input_ids"]
[1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 2]
>>> tokenizer('acgu')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 9, 2]
>>> tokenizer = RnaTokenizer(replace_T_with_U=False)
>>> tokenizer('acgt')["input_ids"]
[1, 6, 7, 8, 3, 2]
>>> tokenizer = RnaTokenizer(nmers=3)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 17, 64, 49, 96, 84, 22, 2]
>>> tokenizer = RnaTokenizer(codon=True)
>>> tokenizer('uagcuuauc')["input_ids"]
[1, 83, 49, 22, 2]
>>> tokenizer('uagcuuauca')["input_ids"]
Traceback (most recent call last):
ValueError: length of input sequence must be a multiple of 3 for codon tokenization, but got 10
"""
model_input_names = ["input_ids", "attention_mask"]
def __init__(
self,
alphabet: Alphabet | str | List[str] | None = None,
nmers: int = 1,
codon: bool = False,
replace_T_with_U: bool = True,
do_upper_case: bool = True,
additional_special_tokens: List | Tuple | None = None,
**kwargs,
):
if codon and (nmers > 1 and nmers != 3):
raise ValueError("Codon and nmers cannot be used together.")
if codon:
nmers = 3 # set to 3 to get correct vocab
if not isinstance(alphabet, Alphabet):
alphabet = get_alphabet(alphabet, nmers=nmers)
super().__init__(
alphabet=alphabet,
nmers=nmers,
codon=codon,
replace_T_with_U=replace_T_with_U,
do_upper_case=do_upper_case,
additional_special_tokens=additional_special_tokens,
**kwargs,
)
self.replace_T_with_U = replace_T_with_U
self.nmers = nmers
self.codon = codon
def _tokenize(self, text: str, **kwargs):
if self.do_upper_case:
text = text.upper()
if self.replace_T_with_U:
text = text.replace("T", "U")
if self.codon:
if len(text) % 3 != 0:
raise ValueError(
f"length of input sequence must be a multiple of 3 for codon tokenization, but got {len(text)}"
)
return [text[i : i + 3] for i in range(0, len(text), 3)]
if self.nmers > 1:
return [text[i : i + self.nmers] for i in range(len(text) - self.nmers + 1)] # noqa: E203
return list(text)
```
RibonanzaNetConfig
Bases: PreTrainedConfig
This is the configuration class to store the configuration of a `RibonanzaNetModel`. It is used to instantiate a RibonanzaNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the RibonanzaNet Shujun-He/RibonanzaNet architecture.
Configuration objects inherit from `PreTrainedConfig` and can be used to control the model outputs. Read the documentation from `PreTrainedConfig` for more information.
Parameters:
| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `vocab_size` | `int` | Vocabulary size of the RibonanzaNet model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`RibonanzaNetModel`]. | `26` |
| `hidden_size` | `int` | Dimensionality of the encoder layers and the pooler layer. | `256` |
| `num_hidden_layers` | `int` | Number of hidden layers in the Transformer encoder. | `9` |
| `num_attention_heads` | `int` | Number of attention heads for each attention layer in the Transformer encoder. | `8` |
| `intermediate_size` | `int` | Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. | `1024` |
| `hidden_act` | `str` | The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"silu"` and `"gelu_new"` are supported. | `'gelu'` |
| `hidden_dropout` | `float` | The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. | `0.05` |
| `attention_dropout` | `float` | The dropout ratio for the attention probabilities. | `0.05` |
| `max_position_embeddings` | | The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). | required |
| `initializer_range` | `float` | The standard deviation of the truncated_normal_initializer for initializing all weight matrices. | `0.02` |
| `layer_norm_eps` | `float` | The epsilon used by the layer normalization layers. | `1e-12` |
| `position_embedding_type` | | Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. | required |
| `is_decoder` | | Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. | required |
| `use_cache` | | Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True`. | required |
| `emb_layer_norm_before` | | Whether to apply layer normalization after embeddings but before the main stem of the network. | required |
| `token_dropout` | | When this is enabled, masked tokens are treated as if they had been dropped out by input dropout. | required |
| `head` | `HeadConfig \| None` | The configuration of the head. | `None` |
| `lm_head` | `MaskedLMHeadConfig \| None` | The configuration of the masked language model head. | `None` |
Examples:
```pycon
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetModel
>>> # Initializing a RibonanzaNet multimolecule/ribonanzanet style configuration
>>> configuration = RibonanzaNetConfig()
>>> # Initializing a model (with random weights) from the multimolecule/ribonanzanet style configuration
>>> model = RibonanzaNetModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
Source code in multimolecule/models/ribonanzanet/configuration_ribonanzanet.py
```python
class RibonanzaNetConfig(PreTrainedConfig):
r"""
This is the configuration class to store the configuration of a
[`RibonanzaNetModel`][multimolecule.models.RibonanzaNetModel].
It is used to instantiate a RibonanzaNet model according to the specified arguments, defining the model
architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the RibonanzaNet
[Shujun-He/RibonanzaNet](https://github.com/Shujun-He/RibonanzaNet) architecture.
Configuration objects inherit from [`PreTrainedConfig`][multimolecule.models.PreTrainedConfig] and can be used to
control the model outputs. Read the documentation from [`PreTrainedConfig`][multimolecule.models.PreTrainedConfig]
for more information.
Args:
vocab_size:
Vocabulary size of the RibonanzaNet model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`RibonanzaNetModel`].
hidden_size:
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers:
Number of hidden layers in the Transformer encoder.
num_attention_heads:
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size:
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act:
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout:
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout:
The dropout ratio for the attention probabilities.
max_position_embeddings:
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range:
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps:
The epsilon used by the layer normalization layers.
position_embedding_type:
Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For
positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to
[Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155).
For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658).
is_decoder:
Whether the model is used as a decoder or not. If `False`, the model is used as an encoder.
use_cache:
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
emb_layer_norm_before:
Whether to apply layer normalization after embeddings but before the main stem of the network.
token_dropout:
When this is enabled, masked tokens are treated as if they had been dropped out by input dropout.
head:
The configuration of the head.
lm_head:
The configuration of the masked language model head.
Examples:
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetModel
>>> # Initializing a RibonanzaNet multimolecule/ribonanzanet style configuration
>>> configuration = RibonanzaNetConfig()
>>> # Initializing a model (with random weights) from the multimolecule/ribonanzanet style configuration
>>> model = RibonanzaNetModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
"""
model_type = "ribonanzanet"
def __init__(
self,
vocab_size: int = 26,
hidden_size: int = 256,
num_hidden_layers: int = 9,
num_attention_heads: int = 8,
intermediate_size: int = 1024,
pairwise_size: int = 64,
pairwise_attention_size: int = 32,
pairwise_intermediate_size: int = 256,
pairwise_num_attention_heads: int = 4,
kernel_size: int = 5,
use_triangular_attention: bool = False,
hidden_act: str = "gelu",
pairwise_hidden_act: str = "relu",
hidden_dropout: float = 0.05,
attention_dropout: float = 0.05,
output_pairwise_states: bool = False,
initializer_range: float = 0.02,
layer_norm_eps: float = 1e-12,
head: HeadConfig | None = None,
lm_head: MaskedLMHeadConfig | None = None,
fix_attention_mask: bool = False,
fix_attention_norm: bool = False,
fix_pairwise_dropout: bool = False,
**kwargs,
):
super().__init__(**kwargs)
self.vocab_size = vocab_size
self.type_vocab_size = 2
self.hidden_size = hidden_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.intermediate_size = intermediate_size
self.pairwise_size = pairwise_size
self.pairwise_attention_size = pairwise_attention_size
self.pairwise_intermediate_size = pairwise_intermediate_size
self.pairwise_num_attention_heads = pairwise_num_attention_heads
self.kernel_size = kernel_size
self.use_triangular_attention = use_triangular_attention
self.hidden_act = hidden_act
self.pairwise_hidden_act = pairwise_hidden_act
self.hidden_dropout = hidden_dropout
self.attention_dropout = attention_dropout
self.output_pairwise_states = output_pairwise_states
self.initializer_range = initializer_range
self.layer_norm_eps = layer_norm_eps
self.head = HeadConfig(**head) if head is not None else None
self.lm_head = MaskedLMHeadConfig(**lm_head) if lm_head is not None else None
self.fix_attention_mask = fix_attention_mask
self.fix_attention_norm = fix_attention_norm
self.fix_pairwise_dropout = fix_pairwise_dropout
```
RibonanzaNetForContactPrediction
Bases: RibonanzaNetPreTrainedModel
Examples:
```pycon
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForContactPrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForContactPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 5, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
```
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
```python
class RibonanzaNetForContactPrediction(RibonanzaNetPreTrainedModel):
"""
Examples:
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForContactPrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForContactPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 5, 1])
>>> output["loss"] # doctest:+ELLIPSIS
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
"""
def __init__(self, config: RibonanzaNetConfig):
super().__init__(config)
self.ribonanzanet = RibonanzaNetModel(config)
self.contact_head = ContactPredictionHead(config)
self.head_config = self.contact_head.config
# Initialize weights and apply final processing
self.post_init()
def forward(
self,
input_ids: Tensor | NestedTensor,
attention_mask: Tensor | None = None,
head_mask: Tensor | None = None,
inputs_embeds: Tensor | NestedTensor | None = None,
labels: Tensor | None = None,
output_attentions: bool | None = None,
output_hidden_states: bool | None = None,
output_pairwise_states: bool | None = None,
return_dict: bool | None = None,
**kwargs,
) -> Tuple[Tensor, ...] | RibonanzaNetContactPredictorOutput:
if output_attentions is False:
warn("output_attentions must be True for contact classification and will be ignored.")
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.ribonanzanet(
input_ids,
attention_mask=attention_mask,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=True,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
**kwargs,
)
output = self.contact_head(outputs, attention_mask, input_ids, labels)
logits, loss = output.logits, output.loss
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return RibonanzaNetContactPredictorOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
pairwise_states=outputs.pairwise_states,
attentions=outputs.attentions,
)
```
RibonanzaNetForSecondaryStructurePrediction
Bases: RibonanzaNetPreTrainedModel
Examples:
```pycon
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSecondaryStructurePrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForSecondaryStructurePrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 5, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
```
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
```python
class RibonanzaNetForSecondaryStructurePrediction(RibonanzaNetPreTrainedModel):
"""
Examples:
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSecondaryStructurePrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForSecondaryStructurePrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 5, 1])
>>> output["loss"] # doctest:+ELLIPSIS
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
"""
def __init__(self, config: RibonanzaNetConfig):
super().__init__(config)
self.ribonanzanet = RibonanzaNetModel(config, add_pooling_layer=False)
self.ss_head = RibonanzaNetForSecondaryStructurePredictionHead(config)
self.decoder = nn.Linear(config.hidden_size, config.num_labels)
self.head_config = self.ss_head.config
# Initialize weights and apply final processing
self.post_init()
def forward(
self,
input_ids: Tensor | NestedTensor,
attention_mask: Tensor | None = None,
head_mask: Tensor | None = None,
inputs_embeds: Tensor | NestedTensor | None = None,
labels: Tensor | None = None,
output_attentions: bool | None = None,
output_hidden_states: bool | None = None,
output_pairwise_states: bool | None = None,
return_dict: bool | None = None,
**kwargs,
) -> Tuple[Tensor, ...] | RibonanzaNetForSecondaryStructurePredictorOutput:
if not output_pairwise_states:
warn("output_pairwise_states must be True since prediction head requires pairwise states.")
output_pairwise_states = True
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.ribonanzanet(
input_ids,
attention_mask=attention_mask,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
output_pairwise_states=output_pairwise_states,
return_dict=return_dict,
**kwargs,
)
output = self.ss_head(outputs, attention_mask, input_ids, labels)
logits, loss = output.logits, output.loss
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return RibonanzaNetForSecondaryStructurePredictorOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
pairwise_states=outputs.pairwise_states,
attentions=outputs.attentions,
)
```
RibonanzaNetForSequencePrediction
Bases: RibonanzaNetPreTrainedModel
Examples:
```pycon
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSequencePrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForSequencePrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.tensor([[1]]))
>>> output["logits"].shape
torch.Size([1, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
```
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
```python
class RibonanzaNetForSequencePrediction(RibonanzaNetPreTrainedModel):
"""
Examples:
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForSequencePrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForSequencePrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.tensor([[1]]))
>>> output["logits"].shape
torch.Size([1, 1])
>>> output["loss"] # doctest:+ELLIPSIS
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
"""
def __init__(self, config: RibonanzaNetConfig):
super().__init__(config)
self.ribonanzanet = RibonanzaNetModel(config)
self.sequence_head = SequencePredictionHead(config)
self.head_config = self.sequence_head.config
# Initialize weights and apply final processing
self.post_init()
def forward(
self,
input_ids: Tensor | NestedTensor,
attention_mask: Tensor | None = None,
head_mask: Tensor | None = None,
inputs_embeds: Tensor | NestedTensor | None = None,
labels: Tensor | None = None,
output_attentions: bool | None = None,
output_hidden_states: bool | None = None,
output_pairwise_states: bool | None = None,
return_dict: bool | None = None,
**kwargs,
) -> Tuple[Tensor, ...] | RibonanzaNetSequencePredictorOutput:
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.ribonanzanet(
input_ids,
attention_mask=attention_mask,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
output_pairwise_states=output_pairwise_states,
return_dict=return_dict,
**kwargs,
)
output = self.sequence_head(outputs, labels)
logits, loss = output.logits, output.loss
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return RibonanzaNetSequencePredictorOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
pairwise_states=outputs.pairwise_states,
attentions=outputs.attentions,
)
```
RibonanzaNetForTokenPrediction
Bases: RibonanzaNetPreTrainedModel
Examples:
```pycon
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForTokenPrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForTokenPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 1])
>>> output["loss"]
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
```
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
```python
class RibonanzaNetForTokenPrediction(RibonanzaNetPreTrainedModel):
"""
Examples:
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetForTokenPrediction, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetForTokenPrediction(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input, labels=torch.randint(2, (1, 5)))
>>> output["logits"].shape
torch.Size([1, 5, 1])
>>> output["loss"] # doctest:+ELLIPSIS
tensor(..., grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
"""
def __init__(self, config: RibonanzaNetConfig):
super().__init__(config)
self.ribonanzanet = RibonanzaNetModel(config)
self.token_head = TokenPredictionHead(config)
self.head_config = self.token_head.config
# Initialize weights and apply final processing
self.post_init()
def forward(
self,
input_ids: Tensor | NestedTensor,
attention_mask: Tensor | None = None,
head_mask: Tensor | None = None,
inputs_embeds: Tensor | NestedTensor | None = None,
labels: Tensor | None = None,
output_attentions: bool | None = None,
output_hidden_states: bool | None = None,
output_pairwise_states: bool | None = None,
return_dict: bool | None = None,
**kwargs,
) -> Tuple[Tensor, ...] | RibonanzaNetTokenPredictorOutput:
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.ribonanzanet(
input_ids,
attention_mask=attention_mask,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
output_pairwise_states=output_pairwise_states,
return_dict=return_dict,
**kwargs,
)
output = self.token_head(outputs, attention_mask, input_ids, labels)
logits, loss = output.logits, output.loss
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return RibonanzaNetTokenPredictorOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
pairwise_states=outputs.pairwise_states,
attentions=outputs.attentions,
)
```
RibonanzaNetModel
Bases: RibonanzaNetPreTrainedModel
Examples:
```pycon
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetModel, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetModel(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input)
>>> output["last_hidden_state"].shape
torch.Size([1, 7, 256])
>>> output["pooler_output"].shape
torch.Size([1, 256])
```
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
```python
class RibonanzaNetModel(RibonanzaNetPreTrainedModel):
"""
Examples:
>>> from multimolecule import RibonanzaNetConfig, RibonanzaNetModel, RnaTokenizer
>>> config = RibonanzaNetConfig()
>>> model = RibonanzaNetModel(config)
>>> tokenizer = RnaTokenizer.from_pretrained("multimolecule/rna")
>>> input = tokenizer("ACGUN", return_tensors="pt")
>>> output = model(**input)
>>> output["last_hidden_state"].shape
torch.Size([1, 7, 256])
>>> output["pooler_output"].shape
torch.Size([1, 256])
"""
def __init__(self, config: RibonanzaNetConfig, add_pooling_layer: bool = True):
super().__init__(config)
self.pad_token_id = config.pad_token_id
self.embeddings = RibonanzaNetEmbeddings(config)
self.encoder = RibonanzaNetEncoder(config)
self.pooler = RibonanzaNetPooler(config) if add_pooling_layer else None
self.fix_attention_mask = config.fix_attention_mask
# Initialize weights and apply final processing
self.post_init()
def get_input_embeddings(self):
return self.embeddings.word_embeddings
def set_input_embeddings(self, value):
self.embeddings.word_embeddings = value
def _prune_heads(self, heads_to_prune):
"""
Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
class PreTrainedModel
"""
for layer, heads in heads_to_prune.items():
self.encoder.layer[layer].attention.prune_heads(heads)
def forward(
self,
input_ids: Tensor | NestedTensor,
attention_mask: Tensor | None = None,
head_mask: Tensor | None = None,
inputs_embeds: Tensor | NestedTensor | None = None,
output_attentions: bool | None = None,
output_hidden_states: bool | None = None,
output_pairwise_states: bool | None = None,
return_dict: bool | None = None,
**kwargs,
) -> Tuple[Tensor, ...] | RibonanzaNetModelOutputWithPooling:
if kwargs:
warn(
f"Additional keyword arguments `{', '.join(kwargs)}` are detected in "
f"`{self.__class__.__name__}.forward`, they will be ignored.\n"
"This is provided for backward compatibility and may lead to unexpected behavior."
)
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
output_pairwise_states = (
output_pairwise_states if output_pairwise_states is not None else self.config.output_pairwise_states
)
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
if isinstance(input_ids, NestedTensor):
input_ids, attention_mask = input_ids.tensor, input_ids.mask
if input_ids is not None and inputs_embeds is not None:
raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
if input_ids is not None:
self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
input_shape = input_ids.size()
elif inputs_embeds is not None:
input_shape = inputs_embeds.size()[:-1]
else:
raise ValueError("You have to specify either input_ids or inputs_embeds")
batch_size, seq_length = input_shape
device = input_ids.device if input_ids is not None else inputs_embeds.device # type: ignore[union-attr]
if attention_mask is None:
attention_mask = (
input_ids.ne(self.pad_token_id)
if self.pad_token_id is not None
else torch.ones(((batch_size, seq_length)), device=device)
)
else:
# Must make a clone here because the attention mask might be reused in other modules
# and we need to process it to mimic the behavior of the original implementation.
# See more in https://github.com/Shujun-He/RibonanzaNet/issues/4
attention_mask = attention_mask.clone()
# We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
# ourselves in which case we just need to make it broadcastable to all heads.
extended_attention_mask: Tensor = self.get_extended_attention_mask(attention_mask, input_shape)
attention_mask = attention_mask.float().unsqueeze(-1)
# Prepare head mask if needed
# 1.0 in head_mask indicate we keep the head
# attention_probs has shape bsz x n_heads x N x N
# input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
# and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
embedding_output = self.embeddings(
input_ids=input_ids,
inputs_embeds=inputs_embeds,
)
encoder_outputs = self.encoder(
embedding_output,
attention_mask=attention_mask,
extended_attention_mask=extended_attention_mask,
head_mask=head_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
output_pairwise_states=output_pairwise_states,
return_dict=return_dict,
)
sequence_output = encoder_outputs[0]
pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
if not return_dict:
return (sequence_output, pooled_output) + encoder_outputs[1:]
return RibonanzaNetModelOutputWithPooling(
last_hidden_state=sequence_output,
pooler_output=pooled_output,
hidden_states=encoder_outputs.hidden_states,
pairwise_states=encoder_outputs.pairwise_states,
attentions=encoder_outputs.attentions,
)
def get_extended_attention_mask(
self, attention_mask: Tensor, input_shape: Tuple[int], dtype: torch.dtype | None = None
) -> Tensor:
"""
Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
Arguments:
attention_mask (`torch.Tensor`):
Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
input_shape (`Tuple[int]`):
The shape of the input to the model.
Returns:
`torch.Tensor` The extended attention mask, with a the same dtype as `attention_mask.dtype`.
"""
if dtype is None:
dtype = self.dtype
if attention_mask.dim() == 2:
attention_mask = attention_mask.unsqueeze(-1)
if not self.fix_attention_mask:
attention_mask[attention_mask == 0] = -1
attention_mask = torch.matmul(attention_mask, attention_mask.transpose(1, 2))
elif attention_mask.shape != 3:
raise ValueError(
f"Wrong shape for input_ids (shape {input_shape}) or attention_mask (shape {attention_mask.shape})"
)
extended_attention_mask = attention_mask[:, None, :, :]
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and the dtype's smallest value for masked positions.
# Since we are adding it to the raw scores before the softmax, this is
# effectively the same as removing these entirely.
extended_attention_mask = extended_attention_mask.to(dtype=dtype) # fp16 compatibility
if self.fix_attention_mask:
extended_attention_mask = (1.0 - extended_attention_mask) * torch.finfo(dtype).min
return extended_attention_mask
```
_prune_heads
```python
_prune_heads(heads_to_prune)
```
Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
class PreTrainedModel
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
```python
def _prune_heads(self, heads_to_prune):
"""
Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
class PreTrainedModel
"""
for layer, heads in heads_to_prune.items():
self.encoder.layer[layer].attention.prune_heads(heads)
```
get_extended_attention_mask
Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `attention_mask` | `torch.Tensor` | Mask with ones indicating tokens to attend to, zeros for tokens to ignore. | required |
| `input_shape` | `Tuple[int]` | The shape of the input to the model. | required |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | `torch.Tensor` The extended attention mask, with the same dtype as `attention_mask.dtype`. |
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
```python
def get_extended_attention_mask(
self, attention_mask: Tensor, input_shape: Tuple[int], dtype: torch.dtype | None = None
) -> Tensor:
"""
Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
Arguments:
attention_mask (`torch.Tensor`):
Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
input_shape (`Tuple[int]`):
The shape of the input to the model.
Returns:
`torch.Tensor` The extended attention mask, with a the same dtype as `attention_mask.dtype`.
"""
if dtype is None:
dtype = self.dtype
if attention_mask.dim() == 2:
attention_mask = attention_mask.unsqueeze(-1)
if not self.fix_attention_mask:
attention_mask[attention_mask == 0] = -1
attention_mask = torch.matmul(attention_mask, attention_mask.transpose(1, 2))
elif attention_mask.shape != 3:
raise ValueError(
f"Wrong shape for input_ids (shape {input_shape}) or attention_mask (shape {attention_mask.shape})"
)
extended_attention_mask = attention_mask[:, None, :, :]
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and the dtype's smallest value for masked positions.
# Since we are adding it to the raw scores before the softmax, this is
# effectively the same as removing these entirely.
extended_attention_mask = extended_attention_mask.to(dtype=dtype) # fp16 compatibility
if self.fix_attention_mask:
extended_attention_mask = (1.0 - extended_attention_mask) * torch.finfo(dtype).min
return extended_attention_mask
```
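A standalone sketch contrasting the two mask constructions above, mirroring the logic of `get_extended_attention_mask` without instantiating the model:

```python
import torch

attention_mask = torch.tensor([[1, 1, 1, 0, 0]], dtype=torch.float)  # (batch, seq_len)

# Original behaviour (fix_attention_mask=False): zeros become -1, and an outer
# product turns the mask into a pairwise matrix of +1/-1 values that is added
# directly to the attention scores.
legacy = attention_mask.clone().unsqueeze(-1)
legacy[legacy == 0] = -1
legacy = torch.matmul(legacy, legacy.transpose(1, 2))[:, None, :, :]  # (batch, 1, seq_len, seq_len)

# Corrected behaviour (fix_attention_mask=True): the standard additive mask,
# 0.0 for positions to attend and the dtype minimum for padded positions.
fixed = attention_mask.unsqueeze(-1)[:, None, :, :]  # (batch, 1, seq_len, 1)
fixed = (1.0 - fixed) * torch.finfo(torch.float).min
```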
RibonanzaNetPreTrainedModel
Bases: PreTrainedModel
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
models.
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
```python
class RibonanzaNetPreTrainedModel(PreTrainedModel):
"""
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
models.
"""
config_class = RibonanzaNetConfig
base_model_prefix = "ribonanzanet"
supports_gradient_checkpointing = True
_no_split_modules = ["RibonanzaNetLayer", "RibonanzaNetEmbeddings"]
# Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights
def _init_weights(self, module: nn.Module):
"""Initialize the weights"""
if isinstance(module, nn.Linear):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
if module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.Embedding):
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
if module.padding_idx is not None:
module.weight.data[module.padding_idx].zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
for n, m in module.named_modules():
if "_gate" in n:
m.weight.data.zero_()
m.bias.data.fill_(1.0)
```
_init_weights
```python
_init_weights(module: Module)
```
Initialize the weights
Source code in multimolecule/models/ribonanzanet/modeling_ribonanzanet.py
```python
def _init_weights(self, module: nn.Module):
"""Initialize the weights"""
if isinstance(module, nn.Linear):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
if module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.Embedding):
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
if module.padding_idx is not None:
module.weight.data[module.padding_idx].zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
for n, m in module.named_modules():
if "_gate" in n:
m.weight.data.zero_()
m.bias.data.fill_(1.0)
```