voidful
wav2vec2-xlsr-multilingual-56
Qwen3.5-27B-earica
llm-codec
albert_chinese_small
albert_chinese_base
Qwen3.5-27B-gemini-3.1-opus-4.6-reasoning
albert_chinese_large
llm-codec-abl-ftp
Llama-3.2-8B-Instruct
Qwen3.5-9B-gemini-3.1-opus-4.6-reasoning
llm-codec-abl-baseline
llm-codec-abl-ste
llm-codec-abl-k1
llm-codec-abl-k10
desta25_4b_R1_lean
Llama-3.1-TAIDE-R1-8B-Chat
albert_chinese_tiny
desta25_4b_R2_full
mhubert-base
llm-codec-abl-k3
Qwen3.5-35B-A3B-gemini-3.1-opus-4.6-reasoning
QAQ_0.6b_orca_all
llmcodec-librispeech-abl-ftp
bart-eqg-question-generator
QAQ
llmcodec-librispeech-abl-sa
wav2vec2-large-xlsr-53-tw-gpt
albert_chinese_small_sentiment
QAQ_0.6b_orca
bart-base-chinese
phi-1_5_chat_128k
context-only-question-generator
QAQ_4b_orca
mmlm-conv-training-full
bart-distractor-generation
albert_chinese_xxlarge
whisper-small-zh-TW
QAQ_0.6b
mamba-790m-chat
earica-audio-1b
gemma-3-omni-4b-it
gemma-3-omni-27b-it
albert_chinese_xlarge
dpr-ctx_encoder-bert-base-multilingual
Mhubert Unit Tts
This repository provides a text-to-unit model built on mHuBERT units and trained with a BART model. The model was trained on the LibriSpeech ASR dataset for the English language. After training epoch 13: `WER: 30.41`, `CER: 20.22`. Datasets: LibriSpeech ASR. Language: English. Metrics: performance is evaluated using Word Error Rate (WER) and Character Error Rate (CER). Tags: "hubert", "tts".
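A minimal inference sketch for a text-to-unit model like this one, assuming the checkpoint (here `voidful/tts_hubert_cluster_bart_base` from this listing, as an illustrative id) exposes the standard BART seq2seq interface; the actual repository's unit-vocabulary handling may differ.

```python
# Sketch: generate discrete HuBERT-cluster unit tokens from text,
# assuming a standard BART seq2seq checkpoint (illustrative model id).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "voidful/tts_hubert_cluster_bart_base"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("hello world", return_tensors="pt")
# The decoder emits unit tokens rather than ordinary text tokens;
# a separate unit-based vocoder would turn these into a waveform.
unit_ids = model.generate(**inputs, max_length=512)
print(tokenizer.batch_decode(unit_ids, skip_special_tokens=True))
```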
hubert-tiny
Qwen3-0.6B-SFT-Tulu3
gpt2-base-ptt
dpr-question_encoder-bert-base-multilingual
qd-phi-1_5
bart-distractor-generation-pm
unifiedqg-bart-base
changpt-bart
whisper-v3-finetuned-multilingual
tts_hubert_cluster_bart_base
hubert-tiny-v2
recurrentgemma-2b-base
ssr-gemma-3-1b-it
earica-omni-27b
Llama-Breeze2-8B-Instruct-text-only
gemma-3-omni-processor
bart-distractor-generation-both
phoneme-longt5-global
SmolLM2-360M-Instruct-Whisper
smol-360m-ft
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) on a combination of public datasets and our own curated datasets, then applied Direct Preference Optimization (DPO) using UltraFeedback. The instruct model additionally supports tasks such as text rewriting, summarization, and function calling thanks to datasets developed by Argilla such as Synth-APIGen-v0.1. For more details, refer to https://github.com/huggingface/smollm, where you will find pre-training, post-training, evaluation, and local inference code.
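A minimal chat sketch for the 360M instruct model, assuming the upstream `HuggingFaceTB/SmolLM2-360M-Instruct` checkpoint and its built-in chat template (a fine-tune such as `smol-360m-ft` above should work the same way if it keeps the template):

```python
# Sketch: single-turn chat with SmolLM2-360M-Instruct via its chat template.
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"  # upstream instruct model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

messages = [{"role": "user", "content": "What is the capital of France?"}]
# Apply the model's chat template and append the assistant prompt marker.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(
    input_ids, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```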
UnitDeSTA-3.0-8B-base
Qwen2.5-7b-ds-vl
wav2vec2-large-xlsr-53-hk
unit-mbart-large
stablelm-tuned-alpha-3b-unit
Llama-3.2-11B-Vision-Instruct
asr_hubert_cluster_bart_base
bart_base_cnndm
bart-base-unit
bart-qg-zh-chatgpt
hubert-tiny-v2-unit-beamnorm
byt5_base_v3
phi-1_5_base
qd-zh-phi-1_5
Llama-3.2-11B-Whisper
whisper-small-hi
Llama-Typhoon-8B-R1
phoneme_byt5_g2p_v1
hubert-base-100-pr
ADL_HW
open-s1-mistral-small-24b-zh
UnitDeSTA-3.1-8B-base
This is the model card of a 🤗 transformers model that has been pushed on the Hub. The card was automatically generated: developer, funding, model type, language(s), license, base model, repository, paper, demo, hardware, and carbon-emission details are all marked [More Information Needed]. Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model; carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).