Xenova
speecht5_tts
---
base_model: microsoft/speecht5_tts
library_name: transformers.js
pipeline_tag: text-to-speech
---
segformer-b0-finetuned-ade-512-512
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. Example: Image segmentation with `Xenova/segformer-b0-finetuned-ade-512-512`. Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
bge-base-en-v1.5
---
base_model: BAAI/bge-base-en-v1.5
library_name: transformers.js
license: mit
---
t5-small
---
base_model: t5-small
library_name: transformers.js
---
all-MiniLM-L6-v2
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: transformers.js
license: apache-2.0
---
whisper-medium
---
base_model: openai/whisper-medium
library_name: transformers.js
---
paraphrase-multilingual-MiniLM-L12-v2
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`.
clip-vit-base-patch32
whisper-tiny.en
jina-embeddings-v2-small-en
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`.
bge-small-en-v1.5
whisper-tiny
bge-m3
whisper-base.en
jina-embeddings-v2-base-en
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`.
detr-resnet-50
clip-vit-base-patch16
gte-small
tiny-random-Phi3ForCausalLM
slimsam-77-uniform
distilbert-base-uncased-finetuned-sst-2-english
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. You can then use the model to classify text.
sentence-camembert-large
distilgpt2
ms-marco-MiniLM-L-6-v2
e5-small-v2
modnet
For more information, check out the official repository and example colab. If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. You can then use the model for portrait matting. (Example input image and output mask not shown.) Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
distiluse-base-multilingual-cased-v2
all-mpnet-base-v2
whisper-base
distilbert-base-cased-distilled-squad
vit-base-patch16-224
toxic-bert
nomic-embed-text-v1
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`.
distilbart-cnn-6-6
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`.
whisper-small
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`.
multilingual-e5-small
nllb-200-distilled-600M
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. You can then perform multilingual translation. See here for the full list of languages and their corresponding codes.
bert-base-NER
multilingual-e5-base
depth-anything-small-hf
msmarco-distilbert-base-v4
gpt2
paraphrase-multilingual-mpnet-base-v2
musicgen-small
gelan-c_all
vit-gpt2-image-captioning
mobilebert-uncased-mnli
distilbert-base-uncased-mnli
opus-mt-en-es
llama2.c-stories15M
bert-base-multilingual-cased-ner-hrl
ms-marco-TinyBERT-L-2-v2
bart-large-mnli
bge-large-en-v1.5
bge-reranker-base
e5-base-v2
multilingual-e5-large
bert-base-multilingual-uncased-sentiment
whisper-small.en
dino-vits16
GIST-small-Embedding-v0
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`.
speecht5_hifigan
flan-t5-small
twitter-roberta-base-sentiment-latest
LaMini-Flan-T5-783M
all-MiniLM-L12-v2
distilbert-base-uncased-distilled-squad
yolos-tiny
quickdraw-mobilevit-small
bert-base-uncased
resnet-50
distilbert-base-uncased
opus-mt-en-ru
detr-resnet-50-panoptic
yolos-small-300
Phi-3-mini-4k-instruct
face-parsing
LaMini-Flan-T5-248M
nli-deberta-v3-xsmall
jina-embeddings-v2-base-zh
opus-mt-en-zh
TinyLlama-1.1B-Chat-v1.0
opus-mt-es-en
m2m100_418M
opus-mt-fr-en
distilbart-cnn-12-6
opus-mt-en-fr
whisper-large
ms-marco-MiniLM-L-12-v2
sam-vit-base
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. Example: Perform mask generation with `Xenova/sam-vit-base`. Next, select the channel with the highest IoU score, which in this case is the second (green) channel. Intersecting this with the original image gives us an isolated version of the subject. We've also got an online demo, which you can try out here.
opus-mt-en-id
bge-small-zh-v1.5
whisper-large-v3
facial_emotions_image_detection
opus-mt-ar-en
clip-vit-large-patch14
LaMini-Flan-T5-77M
siglip-base-patch16-224
gelan-c
Qwen1.5-0.5B-Chat
opus-mt-en-de
opus-mt-hi-en
opus-mt-sv-en
flan-t5-base
UAE-Large-V1
bart-large-cnn
segformer-b2-finetuned-ade-512-512
clap-htsat-unfused
Phi-3-mini-4k-instruct_fp16
vit-base-patch16-224-in21k
trocr-small-printed
whisper-medium.en
gte-large
yolov9-c_all
distiluse-base-multilingual-cased-v1
moondream2
trocr-base-printed
nli-deberta-v3-small
opus-mt-en-ar
mms-tts-eng
opus-mt-en-it
bert-base-NER-uncased
LaMini-T5-61M
tiny-random-Florence2ForConditionalGeneration
segformer_b0_clothes
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. Example: Clothes segmentation with `Xenova/segformer_b0_clothes`.
opus-mt-de-en
ms-marco-MiniLM-L-4-v2
owlvit-base-patch32
chinese-clip-vit-base-patch16
bge-small-en
tiny-random-LlavaForConditionalGeneration
codegen-350M-mono
sam-vit-large
SapBERT-from-PubMedBERT-fulltext
distilbert-base-multilingual-cased-ner-hrl
segformer-b2-finetuned-cityscapes-1024-1024
tiny-random-WhisperForConditionalGeneration
mobileclip_s0
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`.
tiny-random-GemmaForCausalLM
distilbert-base-multilingual-cased-sentiments-student
dpt-hybrid-midas
trocr-small-handwritten
tiny-random-M2M100ForConditionalGeneration
trocr-base-handwritten
yolov9-c
tiny-random-Wav2Vec2ForCTC-ONNX
tiny-random-vits
gpt-neo-125M
wav2vec2-base-960h
wavlm-base-plus-sv
swin2SR-classical-sr-x2-64
LaBSE
paraphrase-MiniLM-L6-v2
bert-base-multilingual-cased
t5-base
bert-base-cased
DeBERTa-v3-base-mnli
donut-base-finetuned-docvqa
whisper-large-v2
finbert
LaMini-GPT-774M
mt5-small
multi-qa-MiniLM-L6-cos-v1
sponsorblock-small
opus-mt-it-en
mbart-large-50-many-to-many-mmt
tiny_starcoder_py
bert-base-multilingual-uncased
opus-mt-ru-en
larger_clap_music_and_speech
multi-qa-mpnet-base-dot-v1
opus-mt-ko-en
depth-anything-base-hf
opus-mt-zh-en
paraphrase-MiniLM-L3-v2
gte-base
distilbart-cnn-12-3
jina-embeddings-v2-base-de
segformer_b2_clothes
swin2SR-realworld-sr-x4-64-bsrgan-psnr
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. Example: Upscale an image with `Xenova/swin2SR-realworld-sr-x4-64-bsrgan-psnr`.
opus-mt-ja-en
opus-mt-nl-en
mms-tts-ara
bge-reranker-large
mDeBERTa-v3-base-xnli-multilingual-nli-2mil7
opus-mt-en-hi
LaMini-T5-738M
mobileclip_b
slimsam-50-uniform
nb-whisper-tiny-beta
clip-vit-large-patch14-336
e5-large-v2
detr-resnet-101
e5-small
dinov2-small
llama2.c-stories110M
all-distilroberta-v1
dinov2-large
pix2struct-tiny-random
pix2struct-ai2d-base
nli-deberta-v3-base
flan-alpaca-base
depth-anything-large-hf
bloomz-560m
LaMini-Cerebras-256M
opus-mt-cs-en
distilbart-xsum-12-6
all-roberta-large-v1
ast-finetuned-audioset-10-10-0.4593
owlv2-base-patch16-finetuned
bge-large-zh-v1.5
opus-mt-mul-en
convnext-tiny-224
paraphrase-mpnet-base-v2
bge-base-zh-v1.5
swin2SR-lightweight-x2-64
paraphrase-albert-small-v2
opus-mt-th-en
opus-mt-en-sv
convnextv2-nano-22k-384
LaMini-Cerebras-590M
distilbart-xsum-6-6
LaMini-T5-223M
distilbert-base-nli-mean-tokens
DeBERTa-v3-base-mnli-fever-anli
mobilevit-x-small
long-t5-tglobal-base-16384-book-summary
mt5-base
squeezebert-uncased
llama-68m
siglip-base-patch16-512
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. Example: Zero-shot image classification with `Xenova/siglip-base-patch16-512`. Example: Compute text embeddings with `SiglipTextModel`. Example: Compute vision embeddings with `SiglipVisionModel`.
wav2vec2-large-xlsr-53-gender-recognition-librispeech
bert-base-chinese
opus-mt-en-vi
bert-base-nli-mean-tokens
distilbert-base-nli-stsb-mean-tokens
bge-base-en
distilbart-xsum-12-1
robertuito-sentiment-analysis
opus-mt-pl-en
mbart-large-50-many-to-one-mmt
texify
dpt-large
Qwen1.5-1.8B-Chat
4x_APISR_GRL_GAN_generator-onnx
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. Example: Upscale an image with `Xenova/4x_APISR_GRL_GAN_generator-onnx`.
clipseg-rd64-refined
llama-160m
mms-tts-fra
phi-1_5_dev
roberta-large-mnli
ClinicalBERT
bart-large-xsum
xlm-r-100langs-bert-base-nli-stsb-mean-tokens
e5-large
e5-base
distilbart-xsum-9-6
long-t5-tglobal-base
swin2SR-compressed-sr-x4-48
scibert_scivocab_uncased
opus-mt-en-nl
distilbart-xsum-12-3
LaMini-GPT-124M
deberta-large-mnli-zero-cls
t5-base-grammar-correction
sponsorblock-classifier-v2
nli-mpnet-base-v2
mobilevit-small
sentence_bert
opus-mt-en-hu
segformer-b5-finetuned-ade-640-640
distilroberta-finetuned-financial-news-sentiment-analysis
resnet-18
LiteLlama-460M-1T
deberta-v3-base-tasksource-nli
squeezebert-mnli
indobert-base-p1
bge-base-zh
larger_clap_general
table-transformer-detection
texify2
opus-mt-en-cs
rubert-base-cased
mms-lid-1024
DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary
nli-deberta-v3-large
opus-mt-vi-en
wavlm-base-plus
blenderbot-400M-distill
esm2_t6_8M_UR50D_sequence_classifier_v1
siglip-large-patch16-384
tiny-random-ErnieMModel
WizardCoder-1B-V1.0
mms-tts-rus
bge-large-zh
paraphrase-albert-base-v2
bert-base-chinese-ner
multi-qa-distilbert-cos-v1
spanbert-large-cased
spanbert-base-cased
kobert
mms-lid-2048
DeBERTa-v3-large-mnli-fever-anli-ling-wanli
yolos-small
pythia-14m
conv-bert-base
nucleotide-transformer-500m-1000g
sam-vit-huge
distilbert-base-cased
t5-v1_1-small
opus-mt-en-jap
bert-base-chinese-pos
roberta-base
multi-qa-mpnet-base-cos-v1
mms-lid-126
opt-350m
herbert-large-cased
tiny-random-Swin2SRModel
text2vec-base-chinese-sentence
instructor-large
wavlm-large
geoclip-large-patch14
albert-base-v2
xlm-roberta-base
nli-deberta-base
dinov2-base
tiny-random-RoFormerModel
tiny-random-ErnieModel
colbertv2.0
UMLSBert_ENG
mms-lid-4017
w2v-bert-2.0
stablelm-2-zephyr-1_6b
ernie-3.0-mini-zh
ernie-3.0-micro-zh
propositionizer-wiki-flan-t5-large
bge-large-en
yolos-base
beit-large-patch16-512
swin2SR-classical-sr-x4-64
conv-bert-small
flan-alpaca-large
siglip-base-patch16-384
wav2vec2-bert-CV16-en
owlv2-base-patch16
nanoLLaVA
wav2vec2-base-superb-ks
deberta-v3-large-tasksource-nli
swin-tiny-patch4-window7-224
ernie-gram-zh
text2vec-base-chinese-paraphrase
mms-lid-256
opus-mt-ROMANCE-en
wavlm-base
opus-mt-en-ro
opus-mt-hu-en
pygmalion-350m
herbert-base-cased
long-t5-encodec-tglobal-base
electra-base-discriminator
electra-small-discriminator
nomic-embed-text-v1-unsupervised
2x_APISR_RRDB_GAN_generator-onnx
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. Example: Upscale an image with `Xenova/2x_APISR_RRDB_GAN_generator-onnx`.
t5-v1_1-base
hubert-base-ls960
ernie-3.0-nano-zh
mobilevit-xx-small
instructor-base
sentence-t5-large
deit-base-distilled-patch16-224
pythia-70m
opus-mt-en-uk
blenderbot_small-90M
owlvit-large-patch14
conv-bert-medium-small
deepseek-coder-1.3b-instruct
Phi-3-mini-4k-instruct-hf
TinyLLama-v0
llama2.c-stories42M
opus-mt-id-en
mms-300m
opus-mt-tr-en
bge-large-zh-noinstruct
swin-base-patch4-window7-224
swin-small-patch4-window7-224
nb-whisper-large-beta
nucleotide-transformer-500m-human-ref
ms-marco-MiniLM-L-2-v2
siglip-base-patch16-256
nomic-embed-text-v1-ablated
yolov9-e
mms-tts-spa
mms-tts-deu
albert-large-v2
mms-1b-fl102
mms-1b
tamillama_tiny_30m
opt-125m
convnext-large-224-22k
convnextv2-tiny-22k-224
mms-tts-yor
table-transformer-structure-recognition-v1.1-all
table-transformer-structure-recognition-v1.1-fin
ernie-2.0-large-en
ernie-2.0-base-en
ernie-health-zh
nougat-small
mms-tts-hin
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`. Example: Generate Hindi speech with `Xenova/mms-tts-hin`. Optionally, save the audio to a wav file (Node.js).
ast-finetuned-speech-commands-v2
If you haven't already, you can install the Transformers.js JavaScript library from NPM using `npm i @xenova/transformers`.