voidful

88 models

Model | License / Type | Downloads | Likes
wav2vec2-xlsr-multilingual-56 | apache-2.0 | 13,926 | 33
Qwen3.5-27B-earica | apache-2.0 | 6,194 | 0
llm-codec | - | 3,573 | 0
albert_chinese_small | - | 3,151 | 3
albert_chinese_base | - | 1,549 | 16
Qwen3.5-27B-gemini-3.1-opus-4.6-reasoning | apache-2.0 | 1,408 | 10
albert_chinese_large | - | 828 | 6
llm-codec-abl-ftp | - | 499 | 0
Llama-3.2-8B-Instruct | llama | 483 | 8
Qwen3.5-9B-gemini-3.1-opus-4.6-reasoning | apache-2.0 | 360 | 8
llm-codec-abl-baseline | - | 343 | 0
llm-codec-abl-ste | - | 338 | 0
llm-codec-abl-k1 | - | 331 | 0
llm-codec-abl-k10 | - | 291 | 0
desta25_4b_R1_lean | - | 285 | 0
Llama-3.1-TAIDE-R1-8B-Chat | llama | 242 | 23
albert_chinese_tiny | - | 190 | 17
desta25_4b_R2_full | - | 168 | 0
mhubert-base | - | 103 | 4
llm-codec-abl-k3 | - | 101 | 0
Qwen3.5-35B-A3B-gemini-3.1-opus-4.6-reasoning | apache-2.0 | 80 | 3
QAQ_0.6b_orca_all | - | 73 | 0
llmcodec-librispeech-abl-ftp | - | 59 | 0
bart-eqg-question-generator | - | 56 | 14
QAQ | - | 51 | 0
llmcodec-librispeech-abl-sa | - | 36 | 0
wav2vec2-large-xlsr-53-tw-gpt | apache-2.0 | 35 | 3
albert_chinese_small_sentiment | - | 24 | 3
QAQ_0.6b_orca | - | 19 | 0
bart-base-chinese | - | 15 | 0
phi-1_5_chat_128k | mit | 13 | 6
context-only-question-generator | - | 12 | 26
QAQ_4b_orca | - | 12 | 0
mmlm-conv-training-full | - | 11 | 0
bart-distractor-generation | - | 10 | 4
albert_chinese_xxlarge | - | 10 | 3
whisper-small-zh-TW | - | 10 | 2
QAQ_0.6b | - | 9 | 0
mamba-790m-chat | mit | 9 | 0
earica-audio-1b | - | 9 | 0
gemma-3-omni-4b-it | - | 8 | 0
gemma-3-omni-27b-it | - | 8 | 0
albert_chinese_xlarge | - | 7 | 1
dpr-ctx_encoder-bert-base-multilingual | - | 6 | 6

Mhubert Unit Tts | - | 6 | 5

This repository provides a text-to-unit model built on mHuBERT units and trained with a BART model. It was trained on the LibriSpeech ASR dataset for English; at training epoch 13 it reaches `WER: 30.41` and `CER: 20.22`. Performance is evaluated with Word Error Rate (WER), and the model is tagged "hubert" and "tts".
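As a rough illustration of how such a text-to-unit checkpoint would be driven, here is a minimal sketch assuming the model exposes the standard Hugging Face BART seq2seq interface; the model id `voidful/mhubert-unit-tts` is a placeholder, not confirmed by the card, and a separate unit-based vocoder would still be needed to turn the generated units into a waveform.

```python
# Minimal text-to-unit sketch, assuming a standard BART seq2seq
# interface; the model id below is an assumption, not confirmed.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "voidful/mhubert-unit-tts"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("hello world", return_tensors="pt")
# Generate discrete mHuBERT cluster units; a unit vocoder is
# still required downstream to synthesize audio.
unit_ids = model.generate(**inputs, max_length=1024)
print(tokenizer.batch_decode(unit_ids, skip_special_tokens=True))
```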

hubert-tiny | - | 6 | 0
Qwen3-0.6B-SFT-Tulu3 | - | 5 | 0
gpt2-base-ptt | - | 5 | 0
dpr-question_encoder-bert-base-multilingual | - | 4 | 4
qd-phi-1_5 | mit | 4 | 1
bart-distractor-generation-pm | - | 4 | 0
unifiedqg-bart-base | - | 4 | 0
changpt-bart | - | 4 | 0
whisper-v3-finetuned-multilingual | - | 4 | 0
tts_hubert_cluster_bart_base | apache-2.0 | 3 | 1
hubert-tiny-v2 | - | 3 | 0
recurrentgemma-2b-base | - | 3 | 0
ssr-gemma-3-1b-it | - | 3 | 0
earica-omni-27b | - | 3 | 0
Llama-Breeze2-8B-Instruct-text-only | llama | 2 | 2
gemma-3-omni-processor | - | 2 | 1
bart-distractor-generation-both | - | 2 | 0
phoneme-longt5-global | - | 2 | 0
SmolLM2-360M-Instruct-Whisper | llama | 2 | 0

smol-360m-ft | llama | 2 | 0

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) on a combination of public datasets and our own curated datasets, then applied Direct Preference Optimization (DPO) using UltraFeedback. The instruct model additionally supports tasks such as text rewriting, summarization, and function calling, thanks to datasets developed by Argilla such as Synth-APIGen-v0.1. For more details, including pre-training, post-training, evaluation, and local inference code, see https://github.com/huggingface/smollm.
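Since the card describes a chat-tuned SmolLM2 model, inference would follow the usual chat-template pattern; the sketch below uses the upstream HuggingFaceTB/SmolLM2-360M-Instruct checkpoint as a stand-in, and whether this fine-tune (smol-360m-ft) shares that chat template is an assumption.

```python
# Chat-style inference sketch in the standard SmolLM2 usage pattern;
# swapping in the smol-360m-ft checkpoint is assumed to work the same.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(
    input_ids, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```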

UnitDeSTA-3.0-8B-base | llama | 2 | 0
Qwen2.5-7b-ds-vl | - | 2 | 0
wav2vec2-large-xlsr-53-hk | apache-2.0 | 1 | 2
unit-mbart-large | - | 1 | 1
stablelm-tuned-alpha-3b-unit | - | 1 | 1
Llama-3.2-11B-Vision-Instruct | mllama_text_model | 1 | 1
asr_hubert_cluster_bart_base | apache-2.0 | 1 | 0
bart_base_cnndm | - | 1 | 0
bart-base-unit | - | 1 | 0
bart-qg-zh-chatgpt | - | 1 | 0
hubert-tiny-v2-unit-beamnorm | - | 1 | 0
byt5_base_v3 | - | 1 | 0
phi-1_5_base | mit | 1 | 0
qd-zh-phi-1_5 | - | 1 | 0
Llama-3.2-11B-Whisper | mllama_text_model | 1 | 0
whisper-small-hi | apache-2.0 | 1 | 0
Llama-Typhoon-8B-R1 | mit | 0 | 9
phoneme_byt5_g2p_v1 | - | 0 | 1
hubert-base-100-pr | - | 0 | 1
ADL_HW | - | 0 | 1
open-s1-mistral-small-24b-zh | - | 0 | 1

UnitDeSTA-3.1-8B-base | llama | 0 | 1

The model card is an auto-generated 🤗 transformers stub: developer, funding, sharing, model type, language(s), license, base model, repository, paper, and demo are all marked [More Information Needed], as are the risks, biases, and limitations that users (both direct and downstream) should be made aware of. Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019), but hardware type, hours used, cloud provider, compute region, and carbon emitted are likewise unspecified.
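For context, the calculator the stub refers to boils down to power draw times runtime times grid carbon intensity; the sketch below uses placeholder numbers, since none of the card's hardware fields are filled in.

```python
# Back-of-the-envelope CO2e estimate in the spirit of the ML Impact
# calculator (Lacoste et al., 2019): power x time x grid intensity.
# All input values are placeholders, not figures from the card.
def co2e_kg(gpu_power_kw: float, hours: float,
            grid_kg_per_kwh: float, pue: float = 1.0) -> float:
    """Estimated emissions in kg CO2e for one training run."""
    return gpu_power_kw * hours * pue * grid_kg_per_kwh

# e.g. one 0.3 kW GPU for 100 h on a ~0.4 kg CO2e/kWh grid:
print(f"{co2e_kg(0.3, 100, 0.4):.1f} kg CO2e")  # 12.0 kg CO2e
```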

ssr-SmolLM2-1.7B | - | 0 | 1