mrm8488

423 models

distilroberta-finetuned-financial-news-sentiment-analysis

---
license: apache-2.0
thumbnail: https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png
tags:
- generated_from_trainer
- financial
- stocks
- sentiment
widget:
- text: "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: distilRoberta-financial-sentiment
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: fina
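Given the front-matter above, the model can be queried through the 🤗 transformers `pipeline` API. The following is a minimal sketch, assuming `transformers` is installed and the weights can be downloaded at runtime; the `top_label` helper is a convenience added here, not part of the card:

```python
def top_label(results):
    """Return the highest-scoring label from a text-classification result."""
    return max(results, key=lambda r: r["score"])["label"]


def classify_headline(text):
    """Score a financial headline with the model named in this listing.

    Downloads the model weights on first use, so network access is required.
    """
    from transformers import pipeline  # heavy dependency, imported lazily

    clf = pipeline(
        "text-classification",
        model="mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",
    )
    return top_label(clf(text))


# Example (requires network to fetch the weights) -- the widget sentence
# from the front-matter above:
# classify_headline(
#     "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."
# )
```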

license:apache-2.0
398,998
420

bert-spanish-cased-finetuned-ner

72,290
25

deberta-v3-ft-financial-news-sentiment-analysis

license:mit
62,247
28

bert2bert_shared-spanish-finetuned-summarization

33,455
32

bert-tiny-finetuned-sms-spam-detection

BERT-Tiny fine-tuned on the SMS Spam dataset for spam detection

28,595
52

mobilebert-finetuned-pos

license:mit
13,527
8

t5-base-finetuned-question-generation-ap

T5-base fine-tuned on SQuAD for Question Generation

Google's T5 fine-tuned on SQuAD v1.1 for Question Generation by just prepending the answer to the context.

The T5 model was presented in "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓

| Dataset | Split | # samples |
| ------- | ----- | --------- |
| squad   | train | 87599     |
| squad   | valid | 10570     |

Check out more about this dataset and others in the NLP Viewer. The training script is a slightly modified version of an awesome one by Suraj Patil, who has also done great research on Question Generation. If you want to cite this model, you can use this:
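The answer-prepending trick described above can be sketched in a few lines. The `answer: … context: …` template follows the example on the model card; treat it as an assumption and verify against the card. `build_qg_input` and `generate_question` are illustrative names introduced here, not part of the card:

```python
def build_qg_input(answer: str, context: str) -> str:
    """Build the T5 input by prepending the answer to the context
    (the 'answer: ... context: ...' template is an assumption from the card)."""
    return f"answer: {answer} context: {context}"


def generate_question(answer: str, context: str, max_length: int = 64) -> str:
    """Generate a question whose answer is `answer`, given `context`.

    Downloads several hundred MB of weights on first use.
    """
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer  # lazy import

    name = "mrm8488/t5-base-finetuned-question-generation-ap"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSeq2SeqLM.from_pretrained(name)
    inputs = tokenizer(build_qg_input(answer, context), return_tensors="pt")
    output = model.generate(**inputs, max_length=max_length)
    return tokenizer.decode(output[0], skip_special_tokens=True)


# Example (requires network):
# generate_question("SQuAD v1.1", "The model was fine-tuned on SQuAD v1.1.")
```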

license:apache-2.0
13,092
118

t5-base-finetuned-emotion

10,182
54

t5-small-finetuned-common_gen

5,655
0

codebert-base-finetuned-detect-insecure-code

5,357
32

bert2bert_shared-german-finetuned-summarization

4,334
24

bert-tiny-finetuned-fake-news-detection

3,816
2

t5-base-finetuned-span-sentiment-extraction

3,716
11

bert-base-spanish-wwm-cased-finetuned-spa-squad2-es

3,562
12

modernbert-embed-base-ft-sts-spanish-matryoshka-768-64

2,846
3

bert-medium-finetuned-squadv2

1,384
1

distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es

license:apache-2.0
1,322
48

bert-tiny-finetuned-squadv2

1,316
2

t5-base-finetuned-wikiSQL

license:apache-2.0
1,297
57

t5-base-finetuned-squadv2

1,247
5

bert-multi-cased-finetuned-xquadv1

1,236
5

camembert2camembert_shared-finetuned-french-summarization

1,134
14

phi-4-14B-grpo-gsm8k-3e-q4

llama
752
2

llama-2-coder-7b

llama
714
53

limstral-7B-v0.1

license:apache-2.0
676
6

t5-base-finetuned-sarcasm-twitter

526
16

longformer-base-4096-finetuned-squadv2

493
15

t5-base-finetuned-summarize-news

348
42

spanish-gpt2

license:mit
316
19

chEMBL_smiles_v1

298
5

bert-mini-finetuned-age_news-classification

BERT-Mini fine-tuned on the AG News dataset for news classification

283
9

longformer-base-4096-spanish

license:mit
279
16

bert-small2bert-small-finetuned-cnn_daily_mail-summarization

license:apache-2.0
277
10

mbart-large-finetuned-opus-es-en-translation

250
2

spanbert-large-finetuned-squadv2

239
1

ddpm-ema-pokemon-64

license:apache-2.0
237
1

spanish-t5-small-sqac-for-qa

225
4

bert-small-finetuned-squadv2

216
1

bert-tiny-5-finetuned-squadv2

205
5

ModernBERT-base-ft-financial-news-sentiment-analysis

license:apache-2.0
176
1

t5-base-e2e-question-generation

167
6

phi-2-coder

160
26

spanbert-finetuned-squadv2

156
5

distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es

142
3

multilingual-e5-large-ft-sts-spanish-matryoshka-768-16-5e

131
5

deberta-v3-small-finetuned-sst2

license:mit
129
3

mT5-small-finetuned-tydiqa-for-xqa

122
2

t5-small-finetuned-wikiSQL

120
7

t5-small-finetuned-text-simplification

license:apache-2.0
110
1

bert-tiny-finetuned-enron-spam-detection

license:apache-2.0
108
8

t5-base-finetuned-imdb-sentiment

107
7

t5-base-finetuned-break_data

102
3

longformer-base-4096-spanish-finetuned-squad

100
7

deberta-v3-base-goemotions

license:mit
97
1

ddpm-ema-butterflies-128

dataset:huggan/smithsonian_butterflies_subset
92
1

bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization

license:apache-2.0
86
5

T5 Base Finetuned Common Gen

83
47

layoutlm-finetuned-funsd

80
2

bert-base-portuguese-cased-finetuned-squad-v1-pt

license:apache-2.0
74
11

t5-small-finetuned-quora-for-paraphrasing

70
9

bert-base-german-finetuned-ler

69
2

deberta-v3-small-finetuned-mnli

license:mit
64
3

spanish-TinyBERT-betito

64
0

electra-small-finetuned-squadv2

license:apache-2.0
62
1

t5-small-finetuned-emotion

58
1

bert-hash-femto-ft-prompt-injection

license:mit
58
0

bert-hash-nano-ft-prompt-injection

license:mit
54
0

deberta-v3-small-finetuned-cola

license:mit
53
3

TinyBERT-spanish-uncased-finetuned-ner

52
3

bert2bert_shared-turkish-summarization

51
20

codebert-base-finetuned-stackoverflow-ner

license:mit
47
15

electricidad-small-finetuned-squadv1-es

47
1

vit-base-patch16-224_finetuned-kvasirv2-colonoscopy

40
7

bert-hash-pico-ft-prompt-injection

license:mit
40
1

bert-multi-uncased-finetuned-xquadv1

39
0

bart-legal-base-es

37
16

t5-small-finetuned-squadv1

36
0

electricidad-small-finetuned-restaurant-sentiment-analysis

30
5

vit-base-patch16-224-pretrained-cifar10

30
3

distilroberta-finetuned-tweets-hate-speech

28
6

deberta-v3-base-finetuned-squadv2

28
1

electricidad-base-discriminator

27
5

codebert-base-finetuned-code-ner

24
0

deberta-v3-small-finetuned-squadv2

23
0

bert-italian-finedtuned-squadv1-it-alfa

22
14

t5-small-finetuned-squadv2

21
1

bloom-7b1-sharded-fp16

21
0

bloomz-7b1-sharded-bf16

21
0

bert-spanish-cased-finetuned-pos

20
6

distilbert-multi-finedtuned-squad-pt

19
1

distilroberta-finetuned-age_news-classification

19
1

mobilebert-finetuned-ner

license:mit
19
1

distilroberta-base-ft-allnli-matryoshka-768-64-1e-256bs

19
0

falcoder-7b

license:apache-2.0
18
89

bert2bert_shared-spanish-finetuned-paus-x-paraphrasing

18
4

camembert-base-finetuned-movie-review-sentiment-analysis

18
0

spanbert-base-finetuned-tacred

16
0

spanbert-large-finetuned-tacred

16
0

xlm-multi-finetuned-xquadv1

16
0

convnext-tiny-finetuned-eurosat

license:apache-2.0
15
6

deberta-v3-large-finetuned-mnli

license:mit
15
2

t5-base-finetuned-spa-squadv1

15
0

bloomz-7b1-mt-sharded-bf16

15
0

codebert2codebert-finetuned-code-defect-detection

14
2

multilingual-e5-large-ft-sts-spanish-matryoshka-768-64-5e

14
2

dqn-SpaceInvadersNoFrameskip-v4

14
0

bloom-560m-finetuned-the-stack-rust

13
8

biomedtra-small-finenuned-clinical-ner

13
4

flan-t5-base-finetuned-gsm8k

license:apache-2.0
13
3

RuPERTa-base

13
2

deberta-v3-small-finetuned-squad

13
1

dqn-BeamRiderNoFrameskip-v4

13
1

starcoder-sharded-bf16

13
1

dqn-SpaceInvadersNoFrameskip-v4-3

13
0

bart-base-es-finetuned-mlsum-es-3

13
0

mobilebert-uncased-finetuned-squadv2

12
2

RuPERTa-base-finetuned-ner

12
1

dqn-EnduroNoFrameskip-v4

12
0

CodeBERTaPy

11
4

bloom-560m-finetuned-wikilingua-spanish-summarization

11
3

dqn-SpaceInvadersNoFrameskip-v4-2

11
1

bert2bert-spanish-question-generation

10
10

flan-t5-small-finetuned-openai-summarize_from_feedback

license:apache-2.0
10
9

xlm-roberta-base-finetuned-HC3-mix

10
8

bloom-7b1-sharded-bf16

10
1

bert-mini-finetuned-squadv2

10
0

multilingual-e5-large-instruct-es-trim-30k

Auto-generated 🤗 transformers model card; every field is still marked "[More Information Needed]".

10
0

t5-base-finetuned-e2m-intent

9
12

santacoder-finetuned-the-stack-bash-shell

9
5

bert2bert-medium_shared-question-generation

9
1

roberta-med-small_shared-finetuned-bbc_xsum-summarization

license:apache-2.0
9
1

santacoder-finetuned-the-stack-clojure

9
1

open_llama_13b-sharded-bf16

llama
9
1

ModernBERT-large-ft-fineweb-edu-annotations-4k

license:apache-2.0
9
1

GPT-2-finetuned-CORD19

9
0

electricidad-base-finetuned-ner

9
0

t5-small-finetuned-translation-es-to-pt

9
0

llama-3-8b-ft-en-es-rag-gguf-q8_0

llama
9
0

t5-base-finetuned-wikiSQL-sql-to-en

8
12

convbert-small-spanish

license:mit
8
3

roberta-base-bne-finetuned-sqac

license:apache-2.0
8
2

t5-base-finetuned-math-qa-test

8
2

vit-base-patch16-224_finetuned-pneumothorax

8
1

bert-base-german-dbmdz-cased-finetuned-pawsx-de

8
0

bert2bert_shared-portuguese-question-generation

8
0

electra-base-finetuned-squadv2

8
0

electrovid19-small

8
0

prunebert-multi-uncased-finepruned-tydiqa-for-xqa

8
0

t5-small-finetuned-imdb-sentiment

8
0

multilingual-e5-large-instruct-es-trim-16k


8
0

a2c-Pong-v0

7
1

deberta-v3-small-finetuned-mrpc

license:mit
7
1

mbart-large-finetuned-opus-it-en-translation

7
1

roberta-med-small2roberta-med-small-finetuned-cnn_daily_mail-summarization

license:apache-2.0
7
1

ddpm-ema-pokemon-v2-64

license:apache-2.0
7
1

camembert-base-finetuned-pawsx-fr

7
0

distilbert-multi-finetuned-for-xqua-on-tydiqa

7
0

t5-base-finetuned-race

7
0

t5-base-finetuned-turk-text-simplification

license:apache-2.0
7
0

spanish-mmBERT-small

license:mit
7
0

bert-es-hash-nano-test

7
0

mistral-7b-ft-h4-no_robots_instructions

license:apache-2.0
6
13

distilroberta-finetuned-banking77

6
8

bertin-gpt-j-6B-ES-8bit

6
7

electricidad-small-discriminator

6
5

bloom-6b3-8bit

6
4

electricidad-small-finetuned-muchocine

6
2

gpt2-finetuned-reddit-tifu

6
1

convnext-tiny-finetuned-beans

license:apache-2.0
6
1

RoBERTinha

6
0

a2c-PongNoFrameskip-v0

6
0

bert2bert_shared-italian-question-generation

6
0

distilbert-finetuned-sarcasm-classification

6
0

electra-large-finetuned-squadv1

6
0

mT5-small-finetuned-multi-question-generation

6
0

squeezebert-finetuned-squadv1

6
0

squeezebert-finetuned-squadv2

6
0

Worm_poca

6
0

speaker-segmentation-fine-tuned-callhome-spa-10e

license:apache-2.0
6
0

tinyllama-ft-en-es-rag-gguf-q8_0

llama
6
0

tinyllama-ft-en-es-rag-gguf-q4_k_m

llama
6
0

ModernBERT-base-ft-all-nli

license:apache-2.0
6
0

phi-4-14B-grpo-limo-2e-q4

llama
6
0

bloom-560m-finetuned-sd-prompts

5
31

flan-t5-large-finetuned-openai-summarize_from_feedback

license:apache-2.0
5
6

mbart-large-finetuned-opus-en-es-translation

5
5

flan-t5-large-finetuned-gsm8k

license:apache-2.0
5
5

GPT-2-finetuned-common_gen

5
3

electricidad-base-generator

5
3

electricidad-small-finetuned-xnli-es

license:mit
5
2

bert2bert-small_shared-question-generation

5
1

bert2bert_shared-spanish-finetuned-muchocine-review-summarization

5
1

mobilebert-uncased-finetuned-squadv1

5
1

t5-small-finetuned-text2log

license:apache-2.0
5
1

stablelm2-1.6b-ft-openhermes

5
1

functiongemma-270m-it-ft-mobile-actions-es

5
0

albert-base-v2-finetuned-mnli-pabee

5
0

bert-tiny2bert-tiny_shared-finetuned-wikisql

5
0

bert2bert_shared-finetuned-wikisql

5
0

electricidad-base-finetuned-squadv1-es

5
0

t5-small-finetuned-AESLC-summarization

5
0

phi-4-14B-grpo-limo-q4

- Developed by: mrm8488
- License: apache-2.0
- Finetuned from model: unsloth/phi-4-bnb-4bit

This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

llama
5
0

multilingual-e5-small-es-trim-16k


5
0

ModernBERT-base-ft-fineweb-edu-annotations

license:apache-2.0
4
11

t5-base-finetuned-qasc

4
5

bertin-gpt-j-6B-ES-v1-8bit

4
5

bert-multi-cased-finedtuned-xquad-tydiqa-goldp

4
4

bloom-1b3-8bit

4
3

bert2bert-multilingual_shared-question-generation

4
2

biomedtra-small-es

4
2

gpt2-finetuned-recipes-cooking_v2

4
2

ddpm-ema-anime-128

license:apache-2.0
4
2

bloom-560m-finetuned-common_gen

4
2

deberta-v3-small-goemotions

license:mit
4
1

electricidad-base-finetuned-pawsx-es

4
1

t5-base-finetuned-math-linear-algebra-1d

4
1

t5-small-finetuned-boolq

4
1

ModernBERT-base-finetuned-squad

license:apache-2.0
4
1

ModernBERT-base-ft-financial-news-sentiment-analysis-2

license:apache-2.0
4
1

b2b-en-paraphrasing-no-questions

4
0

b2b-en-paraphrasing-questions

4
0

bert-small2bert-small_shared-finetuned-wikisql

4
0

bert2bert-mini_shared-question-generation

4
0

distilbert-base-uncased-newspop-student

4
0

electra-base-finetuned-squadv1

4
0

electra-small-finetuned-squadv1

4
0

electricidad-base-finetuned-medical-diagnostics

4
0

electricidad-base-finetuned-muchocine

4
0

electricidad-base-finetuned-pos

4
0

funnel-transformer-intermediate-mnli

4
0

prunebert-base-uncased-finepruned-topK-squadv2

4
0

prunebert-multi-uncased-finepruned-l0-reg-tydiqa-for-xqa

4
0

prunebert-multi-uncased-finepruned-magnitude-tydiqa-for-xqa

4
0

prunebert-multi-uncased-finepruned-soft-movement-tydiqa-for-xqa

4
0

prunebert-multi-uncased-finepruned-topK-tydiqa-for-xqa

4
0

roberta-base-finetuned-multitask

4
0

t5-base-finetuned-boolq

4
0

t5-base-finetuned-math-linear-algebra-2d

4
0

t5-base-finetuned-multinews-512

4
0

t5-base-finetuned-qasc-sc

4
0

t5-base-finetuned-swag

4
0

umberto-wikipedia-uncased-v1-finetuned-squadv1-it

4
0

spanish-TinyBERT-betito-finetuned-mnli

4
0

data2vec-text-base-finetuned-mrpc

license:mit
4
0

data2vec-base-finetuned-imagenet1k

4
0

PushBlock

4
0

Worm_v2

4
0

switch-base-16-finetuned-samsum

license:apache-2.0
4
0

xlm-v-base-finetuned-xglue-xnli

license:mit
4
0

mt5-base-ft-rf-02

license:apache-2.0
4
0

xlm-roberta-large-es-trim-30k


4
0

xlm-roberta-base-es-trim-16k


4
0

multilingual-e5-base-es-trim-30k


4
0

multilingual-e5-large-es-trim-16k


4
0

multilingual-e5-small-fra-trim-30k


4
0

flan-t5-xl-finetuned-unnatural-instructions

license:apache-2.0
3
4

bert-tiny-finetuned-yahoo_answers_topics

3
2

switch-base-8-finetuned-samsum

license:apache-2.0
3
2

GPT-2-finetuned-covid-bio-medrxiv

3
1

gpt2-finetuned-recipes-cooking

3
1

mbart-large-finetuned-bible-es-en-translation

3
1

roberta-base-1B-1-finetuned-squadv1

3
1

electricidad-small-finetuned-diagTrast

3
1

ModernBERT-base-ft-code-defect-detection-4k

license:apache-2.0
3
1

functiongemma-270m-it-ft-mobile-actions-fr

3
0

GPT-2-finetuned-CRD3

3
0

bert-mini-5-finetuned-squadv2

3
0

bert-tiny-3-finetuned-squadv2

3
0

t5-base-finetuned-Reddit-TIFU-TLDR

3
0

t5-base-finetuned-quoref

3
0

data2vec-text-base-finetuned-stsb

license:mit
3
0

ddpm-ema-flower-64

license:apache-2.0
3
0

setfit-mpnet-base-v2-finetuned-spam-detection

3
0

santacoder-finetuned-the-stack-dockerfiles

3
0

ModernBERT-large-ft-all-nli

license:apache-2.0
3
0

phi-4-14B-grpo-gsm8k-3e

- Developed by: mrm8488
- License: apache-2.0
- Finetuned from model: unsloth/phi-4-bnb-4bit

This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

llama
3
0

multilingual-e5-large-es-trim-30k


3
0

multilingual-e5-small-es-trim-32k

3
0

distilroberta-base-finetuned-suicide-depression

license:apache-2.0
2
6

mxbai-embed-large-v1-ft-webinstruct

2
4

diltilgpt2-finetuned-bookcopus-10

2
3

legalectra-small-spanish

2
3

ModernBERT-large-ft-fineweb-edu-annotations

license:apache-2.0
2
3

T5-base-finetuned-cuad

license:mit
2
2

t5-small-finetuned-turk-text-simplification

license:apache-2.0
2
2

bert-spanish-cased-finetuned-pos-16-tags

2
1

electricidad-small-finetuned-medical-diagnostics

2
1

t5-small-finetuned-wikisql-sql-nl-nl-sql

license:apache-2.0
2
1

data2vec-text-base-finetuned-sst2

license:mit
2
1

ddpm-ema-anime-256

license:apache-2.0
2
1

bloom-560m-finetuned-the-stack-brainfuck

2
1

santacoder-finetuned-the-stack-swift

2
1

BioGPT-Large-finetuned-chatdoctor

2
1

DeepSeek-R1-Distill-Qwen-1.5B-grpo-limo-2e

2
1

GuaPeTe-2-tiny-finetuned-eubookshop

2
0

GuaPeTe-2-tiny

2
0

RuPERTa-base-finetuned-squadv2

2
0

bert-tiny-wrslb-finetuned-squadv1

2
0

bert-uncased-finetuned-qnli

2
0

distilroberta-finetuned-squadv1

2
0

es-tinybert-v1

2
0

roberta-large-finetuned-wsc

2
0

spanbert-finetuned-squadv1

2
0

t5-base-finetuned-news-titles-classification

2
0

data2vec-text-base-finetuned-rte

license:mit
2
0

Worm

2
0

bloom-560m-finetuned-samsum

2
0

bloom-560m-ft-summarization-cnn

2
0

phi2-ft-no_robots-adapter

2
0

xlm-roberta-base-es-trim-30k


2
0

multilingual-e5-large-instruct-fra-trim-30k

2
0

multilingual-e5-base-fra-trim-30k


2
0

flan-t5-base-finetuned-openai-summarize_from_feedback

license:apache-2.0
1
9

distilbert-base-multi-cased-finetuned-typo-detection

1
6

legal-longformer-base-8192-spanish

license:mit
1
5

ddpm-ema-anime-v2-128

license:apache-2.0
1
4

bloom-7b1-8bit

1
4

CodeGPT-small-finetuned-python-token-completion

1
3

legalectra-base-spanish

1
3

wav2vec2-large-xlsr-53-spanish

license:apache-2.0
1
2

distilbert-base-matryoshka-sts-v2

1
2

ViT2GPT-2-es

1
1

bert-small-finetuned-typo-detection

1
1

bioclinicalBERT-finetuned-covid-papers

1
1

deberta-v3-small-finetuned-qnli

license:mit
1
1

roberta-base-bne-finetuned-sqac-retriever

1
1

wav2vec2-large-xlsr-53-esperanto

license:apache-2.0
1
1

wav2vec2-large-xlsr-53-ukrainian

license:apache-2.0
1
1

electricidad-small-finetuned-amazon-review-classification

1
1

t5-base-iterater

license:apache-2.0
1
1

gpt-neo-1.3B-8bit

1
1

pyramidsrnd

1
1

bloom-560m-finetuned-news-summarization-cnn

1
1

flan-t5-large-finetuned-samsum

license:apache-2.0
1
1

flan-t5-small-finetuned-samsum

1
1

flan-t5-large-finetuned-samsum-2

license:apache-2.0
1
1

electricidad-base-ft-diagTrast

1
1

distilbert-base-matryoshka-sts

1
1

GuaPeTe-2-tiny-finetuned-TED

1
0

HindiBERTa

1
0

RoBasquERTa

1
0

RuPERTa-base-finetuned-pawsx-es

1
0

RuPERTa-base-finetuned-pos

1
0

RuPERTa-base-finetuned-spa-constitution

1
0

bsc-roberta-base-spanish-diagnostics

1
0

byt5-small-finetuned-tweet-qa

1
0

chEMBL26_smiles_v2

1
0

codebert2codebert-finetuned-code-refinement-small

1
0

distilgpt2-finetuned-wsb-tweets

1
0

flaubert-small-finetuned-movie-review-sentiment-analysis

1
0

spanbert-base-finetuned-squadv1

1
0

spanbert-base-finetuned-squadv2

1
0

spanbert-large-finetuned-squadv1

1
0

t5-base-finetuned-math-seq-next-term

1
0

wav2vec2-large-xlsr-53-euskera

license:apache-2.0
1
0

data2vec-text-base-finetuned-cola

license:mit
1
0

electricidad-small-finetuned-sst2-es

1
0

electricidad-base-finetuned-go_emotions-es-2

1
0

switch-base-16-finetuned-xsum

1
0

switch-base-16-finetuned-xsum-2

1
0

bloom-560m-finetuned-unnatural-instructions

1
0

bloom-560m-finetuned-unnatural-instructions-6k-steps

1
0

flan-t5-base-finetuned-samsum

1
0

flan-t5-base-common_gen

1
0

flan-t5-small-common_gen

1
0

santacoder-finetuned-the-stack-bash-2

1
0

santacoder-finetuned-the-stack-bash-4

1
0

xlm-roberta-base-finetuned-HC3

license:mit
1
0

santacoder-finetuned-xlcost-python

1
0

xlm-v-base-finetuned-xnli

1
0

bart-bio-base-es

1
0

roberta-base-finetuned-OIG-mod-2

license:mit
1
0

gpt2-finetuned-jhegarty-texts

license:mit
1
0

distilgpt2-finetuned-jhegarty-texts

license:apache-2.0
1
0

distilgpt2-finetuned-jhegarty-books

license:apache-2.0
1
0

gpt2-finetuned-jhegarty-books

license:mit
1
0

idefics-9b-ft-describe-diffusion-bf16

1
0

idefics-9b-ft-floco

1
0

gpt2-l

1
0

peaker-segmentation-fine-tuned-callhome-spa

license:apache-2.0
1
0

ModernBERT-large-ft-all-nli-2

license:apache-2.0
1
0

ModernBERT-base-ft-code-defect-detection-10e-4k

license:apache-2.0
1
0

phi-4-14B-grpo-limo

- Developed by: mrm8488
- License: apache-2.0
- Finetuned from model: unsloth/phi-4-bnb-4bit

This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

llama
1
0

phi-4-14B-grpo-limo-2e

llama
1
0

Alpacoom

0
75

falcon-7b-ft-codeAlpaca_20k-v2

0
11

gte-large-ft-webinstruct

0
10

galactica-125m

license:apache-2.0
0
8

mamba-coder

0
8

codeBERTaJS

0
6

Qwen3-14B-ft-limo

license:apache-2.0
0
6

t5-base-finetuned-break_data-question-retrieval

0
5

bloom-560m-finetuned-the-stack-cobol

0
5

t5-base-finetuned-math-calculus-differentiate

0
4

galactica-1.3b

license:apache-2.0
0
4

ppo-LunarLander-v2

0
3

dollcerberoom

0
3

codebert-finetuned-clone-detection

0
2

ppo-BipedalWalker-v3

0
2

t5-base-finetuned-quartz

0
2

electricidad-base-finetuned-go_emotions-es

0
2

gpt-j-6B-ES-finetuned-paws-x-paraphrasing-8bit

0
2

pomeranian

license:apache-2.0
0
2

tinyllama-ft-codeAlpaca-adapter

base_model:unsloth/tinyllama-bnb-4bit
0
2

distilroberta-base-ft-allnli-matryoshka-768-16-1e-128bs

NaNK
0
2

distilroberta-base-ft-webinstruct

0
2

a2c-BreakoutNoFrameskip-v4

0
1

convbert-base-spanish

license:mit
0
1

distilgpt2-finedtuned-meditations

0
1

gpt2-imdb-neutral

license:mit
0
1

ppo-CartPole-v1

0
1

t5-base-finetuned-AESLC-summarization

0
1

t5-small-spanish-finetuned-squadv1

0
1

electricidad-base-finetuned-parmex

0
1

gpt-neo-2.7B-8bit

0
1

setfit-distiluse-base-multilingual-cased-v2-finetuned-amazon-reviews-multi-binary

0
1

bloomz-7b1-mt-ft-alpaca

0
1

dolloom

0
1

halcon-7b-instructions-es

0
1

idefics-9b-ft-describe-diffusion-bf16-adapter

0
1

m-e5-large_bs64_10_all_languages

0
1

mistral-7b-ft-AgentInstruct

license:apache-2.0
0
1

Llama-3-8B-Emb

0
1