modeling_outputs
multimolecule.models.modeling_outputs
SequencePredictorOutput (dataclass)
Bases: ModelOutput
Base class for outputs of sequence classification & regression models.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `loss` | `FloatTensor \| None` | Optional, returned when `labels` is provided. | `None` |
| `logits` | `FloatTensor` | Prediction outputs. | `None` |
| `hidden_states` | `Tuple[FloatTensor, ...] \| None` | Tuple of `FloatTensor`. Optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | `None` |
| `attentions` | `Tuple[FloatTensor, ...] \| None` | Tuple of `FloatTensor`. Optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. | `None` |
Source code in multimolecule/models/modeling_outputs.py
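A minimal sketch of how a `SequencePredictorOutput` is typically constructed and accessed. The tensors and their shapes below are illustrative assumptions, not values produced by any particular model; only the field names and the `ModelOutput` access behaviour come from the documentation above.

```python
import torch
from multimolecule.models.modeling_outputs import SequencePredictorOutput

# Hypothetical tensors standing in for a model's forward-pass results.
logits = torch.randn(2, 4)   # assumed shape: (batch_size, num_labels)
loss = torch.tensor(0.7)     # present only when labels were provided

output = SequencePredictorOutput(loss=loss, logits=logits)

# ModelOutput subclasses allow both attribute and key access.
print(output.logits.shape)    # torch.Size([2, 4])
print(output["loss"])         # tensor(0.7000)
print(output.hidden_states)   # None unless hidden states were requested
```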
TokenPredictorOutput (dataclass)
Bases: ModelOutput
Base class for outputs of token classification & regression models.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `loss` | `FloatTensor \| None` | Optional, returned when `labels` is provided. | `None` |
| `logits` | `FloatTensor` | Prediction outputs. | `None` |
| `hidden_states` | `Tuple[FloatTensor, ...] \| None` | Tuple of `FloatTensor`. Optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | `None` |
| `attentions` | `Tuple[FloatTensor, ...] \| None` | Tuple of `FloatTensor`. Optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. | `None` |
Source code in multimolecule/models/modeling_outputs.py
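The token-level variant differs only in that `logits` carries one prediction per token. A minimal sketch under that assumption; the shapes are hypothetical:

```python
import torch
from multimolecule.models.modeling_outputs import TokenPredictorOutput

# Hypothetical per-token logits: (batch_size, sequence_length, num_labels), assumed shape.
logits = torch.randn(2, 16, 3)

output = TokenPredictorOutput(logits=logits)

# Per-token class predictions via argmax over the label dimension.
predictions = output.logits.argmax(dim=-1)
print(predictions.shape)   # torch.Size([2, 16])
```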
ContactPredictorOutput (dataclass)
Bases: ModelOutput
Base class for outputs of contact classification & regression models.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `loss` | `FloatTensor \| None` | Optional, returned when `labels` is provided. | `None` |
| `logits` | `FloatTensor` | Prediction outputs. | `None` |
| `hidden_states` | `Tuple[FloatTensor, ...] \| None` | Tuple of `FloatTensor`. Optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | `None` |
| `attentions` | `Tuple[FloatTensor, ...] \| None` | Tuple of `FloatTensor`. Optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. | `None` |
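For contact prediction the logits are pairwise over sequence positions. A minimal sketch, assuming a `(batch_size, sequence_length, sequence_length, num_labels)` logits shape; the sigmoid and symmetrisation steps are illustrative post-processing choices, not part of the API:

```python
import torch
from multimolecule.models.modeling_outputs import ContactPredictorOutput

# Hypothetical pairwise logits; the exact shape depends on the contact head.
logits = torch.randn(1, 16, 16, 1)

output = ContactPredictorOutput(logits=logits)

# Turn raw logits into a symmetric contact-probability map (assumption, not part of the API).
contact_map = output.logits.squeeze(-1).sigmoid()
contact_map = (contact_map + contact_map.transpose(-1, -2)) / 2
print(contact_map.shape)   # torch.Size([1, 16, 16])
```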