DAMO-NLP-SG
VideoLLaMA3-7B
SigLIP-NaViT
VL3-SigLIP-NaViT
VideoLLaMA3-2B
VideoLLaMA2.1-7B-AV
VideoLLaMA2-7B
VideoLLaMA2.1-7B-16F
VideoLLaMA3-7B-Image
VideoLLaMA3-2B-Image
Zero Shot Classify SSTuning XLM R
Zero-shot text classification (multilingual version) trained with self-supervised tuning

Zero-shot text classification model trained with self-supervised tuning (SSTuning). It was introduced in the paper Zero-Shot Text Classification via Self-Supervised Tuning by Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, and Lidong Bing, and first released in this repository.

The model is tuned on unlabeled data using a first sentence prediction (FSP) learning objective. The FSP task is designed by considering both the nature of the unlabeled corpus and the input/output format of classification tasks. The training and validation sets are constructed from the unlabeled corpus using FSP. During tuning, BERT-like pre-trained masked language models such as RoBERTa and ALBERT serve as the backbone, with an output layer added for classification. The learning objective of FSP is to predict the index of the correct label, and a cross-entropy loss is used to tune the model.

Model variations

Four model versions have been released:

| Model | Backbone | #Params | Lang | Acc | Speed | #Train |
|------------|-----------|----------|-------|-------|----|-------------|
| zero-shot-classify-SSTuning-base | roberta-base | 125M | En | Low | High | 20.48M |
| zero-shot-classify-SSTuning-large | roberta-large | 355M | En | Medium | Medium | 5.12M |
| zero-shot-classify-SSTuning-ALBERT | albert-xxlarge-v2 | 235M | En | High | Low | 5.12M |
| zero-shot-classify-SSTuning-XLM-R | xlm-roberta-base | 278M | Multi | - | - | 20.48M |

Please note that zero-shot-classify-SSTuning-XLM-R is trained on 20.48M English samples only; however, it can also be used in any other language that xlm-roberta supports. Please check this repository for the performance of each model.

Intended uses & limitations

The model can be used for zero-shot text classification tasks such as sentiment analysis and topic classification. No further fine-tuning is needed.
How to use

You can try the model with the Colab Notebook.
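Beyond the Colab Notebook, the FSP setup described above can be sketched in code. This is a minimal, hypothetical illustration only: it assumes that, per the FSP objective, candidate labels are presented to the model as lettered options prepended to the input text, and that the classification head predicts the index of the correct option. The `build_fsp_input` helper and the exact option format are assumptions here; the authors' repository defines the canonical prompt construction.

```python
# Sketch of zero-shot classification with an SSTuning checkpoint.
# ASSUMPTION: labels are rendered as lettered options "(A) ... (B) ..."
# before the text, matching the first-sentence-prediction input format
# described above. `build_fsp_input` is a hypothetical helper.
import string


def build_fsp_input(text: str, labels: list[str]) -> str:
    """Format candidate labels as lettered options followed by the text."""
    options = " ".join(
        f"({letter}) {label}"
        for letter, label in zip(string.ascii_uppercase, labels)
    )
    return f"{options} {text}"


prompt = build_fsp_input(
    "I love this place! The food is always so fresh.",
    ["negative", "positive"],
)
print(prompt)
# (A) negative (B) positive I love this place! The food is always so fresh.

# The prompt would then be tokenized and scored with the released
# checkpoint, e.g. via transformers'
# AutoModelForSequenceClassification.from_pretrained(
#     "DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R"),
# taking the argmax over the option logits as the predicted label index.
```

Because no task-specific head is trained, swapping in a different label list (e.g. topic names instead of sentiment polarities) requires no fine-tuning, only a different prompt.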
VideoLLaMA2-7B-Base
VideoRefer-VideoLLaMA3-7B
Qwen2.5-7B-LongPO-128K
VideoRefer-VideoLLaMA3-2B
VideoLLaMA2-7B-16F
Zero Shot Classify SSTuning Base
Zero-shot text classification (base-sized model) trained with self-supervised tuning Zero-shot text classification model trained with self-supervised tuning (SSTuning). It was introduced in the paper Zero-Shot Text Classification via Self-Supervised Tuning by Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing and first released in this repository. The model is tuned with unlabeled data using a learning objective called first sentence prediction (F...
Zero Shot Classify SSTuning ALBERT
Zero-shot text classification (model based on albert-xxlarge-v2) trained with self-supervised tuning Zero-shot text classification model trained with self-supervised tuning (SSTuning). It was introduced in the paper Zero-Shot Text Classification via Self-Supervised Tuning by Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing and first released in this repository. Model description The model is tuned with unlabeled data using a learning objective c...
CLEX-Phi-2-32K
Zero Shot Classify SSTuning Large
Zero-shot text classification (large-sized model) trained with self-supervised tuning Zero-shot text classification model trained with self-supervised tuning (SSTuning). It was introduced in the paper Zero-Shot Text Classification via Self-Supervised Tuning by Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing and first released in this repository. Model description The model is tuned with unlabeled data using a learning objective called first sen...