Rostlab

18 models

prot_t5_xl_uniref50

tags: protein language model · datasets: UniRef50

665,818
54

prot_t5_xl_half_uniref50-enc

tags: protein language model · datasets: UniRef50

346,895
17

prot_bert

Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. The model was trained on uppercase amino acids: it only works with capital-letter amino acids.

ProtBert is based on the BERT model, pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on raw protein sequences only, with no human labelling (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those sequences. One important difference between our model and the original BERT is the way sequences are handled: next-sentence prediction is not used, as each sequence is treated as a complete, separate document. Masking follows the original BERT training, randomly masking 15% of the amino acids in the input. In the end, features extracted from this model showed that LM embeddings learned from unlabelled data (protein sequences alone) capture important biophysical properties governing protein shape. This implies that the model has learned some of the grammar of the language of life as realized in protein sequences.

The model can be used for protein feature extraction or fine-tuned on downstream tasks. We have noticed that on some tasks you can gain more accuracy by fine-tuning the model rather than using it as a feature extractor. You can use the model directly with a fill-mask pipeline for masked language modeling, or use it in PyTorch to extract the features of a given protein sequence.

ProtBert was pretrained on UniRef100, a dataset consisting of 217 million protein sequences. The protein sequences are uppercased and tokenized using a single space, with a vocabulary size of 21; the rare amino acids "U, Z, O, B" were mapped to "X".
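The two usages mentioned above (fill-mask pipeline and PyTorch feature extraction) can be sketched as follows. This is a minimal sketch, assuming the `Rostlab/prot_bert` checkpoint on the Hugging Face Hub and the `transformers` library; the helper names (`preprocess`, `fill_mask_demo`, `embed_sequence`) are illustrative, not part of the model card.

```python
import re

MODEL_NAME = "Rostlab/prot_bert"  # assumed Hub checkpoint name


def preprocess(seq: str) -> str:
    """Uppercase the sequence, map rare amino acids U/Z/O/B to X,
    and separate residues with single spaces, as the tokenizer expects."""
    seq = re.sub(r"[UZOB]", "X", seq.upper().replace(" ", ""))
    return " ".join(seq)


def fill_mask_demo(masked_seq: str):
    """Predict the residues hidden behind [MASK] tokens."""
    # transformers is imported lazily so the pure-string helper above
    # can be used without the heavy dependency installed.
    from transformers import BertForMaskedLM, BertTokenizer, pipeline

    tokenizer = BertTokenizer.from_pretrained(MODEL_NAME, do_lower_case=False)
    model = BertForMaskedLM.from_pretrained(MODEL_NAME)
    unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
    return unmasker(masked_seq)


def embed_sequence(seq: str):
    """Return per-residue embeddings (the last hidden state) for one sequence."""
    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained(MODEL_NAME, do_lower_case=False)
    model = BertModel.from_pretrained(MODEL_NAME)
    encoded = tokenizer(preprocess(seq), return_tensors="pt")
    with torch.no_grad():
        return model(**encoded).last_hidden_state
```

For example, `fill_mask_demo("D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T")` returns the top predictions for the masked position, and `embed_sequence("MKTAYIAKQR")` returns a tensor of per-residue features (note that both calls download the checkpoint on first use).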
The inputs to the model follow the standard BERT format, and each protein sequence was treated as a separate document. The preprocessing step was performed twice: once with a combined length (2 sequences) of less than 512 amino acids, and once with a combined length (2 sequences) of less than 2048 amino acids.

The masking procedure for each sequence followed the original BERT model:

- 15% of the amino acids are masked.
- In 80% of those cases, the masked amino acids are replaced by `[MASK]`.
- In 10% of the cases, they are replaced by a random amino acid different from the one they replace.
- In the remaining 10% of the cases, they are left unchanged.

The model was trained on a single TPU Pod V3-512 for 400k steps in total: 300k steps with sequence length 512 (batch size 15k), then 100k steps with sequence length 2048 (batch size 2.5k). The optimizer used is LAMB with a learning rate of 0.002, a weight decay of 0.01, learning-rate warmup over 40k steps, and linear decay of the learning rate afterwards.

When fine-tuned on downstream tasks, the model achieves the following results (accuracy, %):

| Task/Dataset | Secondary structure (3-states) | Secondary structure (8-states) | Localization | Membrane |
|:------------:|:------------------------------:|:------------------------------:|:------------:|:--------:|
| CASP12       | 75                             | 63                             |              |          |
| TS115        | 83                             | 72                             |              |          |
| CB513        | 81                             | 66                             |              |          |
| DeepLoc      |                                |                                | 79           | 91       |
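The 80/10/10 masking scheme described above can be sketched in plain Python. This is illustrative only; the actual pretraining used BERT's own data pipeline, and the function and names here are hypothetical.

```python
import random

# 20 standard amino acids plus X (the catch-all for rare U/Z/O/B).
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWYX")


def mask_sequence(residues, mask_prob=0.15, rng=None):
    """BERT-style masking: each residue is selected with probability
    mask_prob; a selected residue becomes [MASK] 80% of the time, a
    different random residue 10% of the time, and stays unchanged the
    remaining 10% of the time. Returns (tokens, labels), where labels
    is None at positions that were not selected."""
    rng = rng or random.Random()
    tokens = list(residues)
    labels = [None] * len(tokens)
    for i, res in enumerate(tokens):
        if rng.random() >= mask_prob:
            continue
        labels[i] = res  # the MLM target is always the original residue
        r = rng.random()
        if r < 0.8:
            tokens[i] = "[MASK]"
        elif r < 0.9:
            tokens[i] = rng.choice([a for a in AMINO_ACIDS if a != res])
        # else: keep the original residue (the model must still predict it)
    return tokens, labels
```

Because unselected positions carry no label, the loss is computed only over the ~15% of positions that were selected, exactly as in the original BERT objective.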

55,166
122

prot_bert_bfd

40,681
17

ProstT5

license:mit
11,035
29

ProstT5_fp16

license:mit
1,296
3

prot_t5_base_mt_uniref50

996
0

prot_t5_xl_bfd

767
10

prot_xlnet

740
1

prot_albert

342
3

prot_electra_discriminator_bfd

245
1

prot_bert_bfd_ss3

107
2

prot_bert_bfd_localization

75
1

prot_t5_xxl_uniref50

40
1

prot-t5-xl-uniref50-enc-onnx

license:mit
24
0

prot_bert_bfd_membrane

20
2

prot_t5_xxl_bfd

19
1

prot_electra_generator_bfd

6
1