mrm8488
distilroberta-finetuned-financial-news-sentiment-analysis
---
license: apache-2.0
thumbnail: https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png
tags:
- generated_from_trainer
- financial
- stocks
- sentiment
widget:
- text: "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."
datasets:
- financial_phrasebank
metrics:
- accuracy
model-index:
- name: distilRoberta-financial-sentiment
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: fina
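The widget example from the card above can be scored locally with the transformers `text-classification` pipeline. A minimal sketch follows; the pipeline call is commented out because it downloads the checkpoint, so an offline mock of the pipeline's output format is used instead (the `best_label` helper is illustrative, not part of the model card):

```python
# Sketch: picking the top sentiment label for the widget example above.
# The actual pipeline call is commented out (it fetches the checkpoint).

HEADLINE = "Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."

def best_label(predictions):
    """Return the highest-scoring label from a list of
    {"label": ..., "score": ...} dicts, the output format of
    transformers' text-classification pipeline."""
    return max(predictions, key=lambda p: p["score"])["label"]

# from transformers import pipeline
# clf = pipeline(
#     "text-classification",
#     model="mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",
# )
# print(best_label(clf(HEADLINE)))

# Offline illustration with mocked pipeline output:
mock = [{"label": "negative", "score": 0.97}, {"label": "neutral", "score": 0.02}]
print(best_label(mock))  # → negative
```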
bert-spanish-cased-finetuned-ner
deberta-v3-ft-financial-news-sentiment-analysis
bert2bert_shared-spanish-finetuned-summarization
bert-tiny-finetuned-sms-spam-detection
BERT-Tiny fine-tuned on the sms_spam dataset for spam detection
mobilebert-finetuned-pos
t5-base-finetuned-question-generation-ap
T5-base fine-tuned on SQuAD for Question Generation

Google's T5 fine-tuned on SQuAD v1.1 for Question Generation by simply prepending the answer to the context. The T5 model was presented in "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓

| Dataset | Split | # samples |
| ------- | ----- | --------- |
| squad   | train | 87599     |
| squad   | valid | 10570     |

Check out more about this dataset and others in the NLP Viewer. The training script is a slightly modified version of this awesome one by Suraj Patil, who has also done great research on Question Generation.

Citation

If you want to cite this model you can use this:
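The "prepend the answer to the context" input format described above can be sketched as a small helper. The exact `answer: … context: …` prompt template is an assumption based on typical usage of this checkpoint, and the generation call is commented out since it downloads the model:

```python
# Sketch of the text-to-text prompt this QG checkpoint expects:
# the answer is prepended to the context in a single string.
# The exact "answer: ... context: ..." template is an assumption.

def build_qg_prompt(answer: str, context: str) -> str:
    return f"answer: {answer}  context: {context}"

prompt = build_qg_prompt(
    "SQuAD v1.1",
    "Google's T5 was fine-tuned on SQuAD v1.1 for Question Generation.",
)
print(prompt)

# To actually generate a question (requires transformers + network):
# from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# model_id = "mrm8488/t5-base-finetuned-question-generation-ap"
# tok = AutoTokenizer.from_pretrained(model_id)
# model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
# ids = tok(prompt, return_tensors="pt").input_ids
# print(tok.decode(model.generate(ids, max_length=64)[0], skip_special_tokens=True))
```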
t5-base-finetuned-emotion
t5-small-finetuned-common_gen
codebert-base-finetuned-detect-insecure-code
bert2bert_shared-german-finetuned-summarization
bert-tiny-finetuned-fake-news-detection
t5-base-finetuned-span-sentiment-extraction
bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
modernbert-embed-base-ft-sts-spanish-matryoshka-768-64
bert-medium-finetuned-squadv2
distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
bert-tiny-finetuned-squadv2
t5-base-finetuned-wikiSQL
t5-base-finetuned-squadv2
bert-multi-cased-finetuned-xquadv1
camembert2camembert_shared-finetuned-french-summarization
phi-4-14B-grpo-gsm8k-3e-q4
llama-2-coder-7b
limstral-7B-v0.1
t5-base-finetuned-sarcasm-twitter
longformer-base-4096-finetuned-squadv2
t5-base-finetuned-summarize-news
spanish-gpt2
chEMBL_smiles_v1
bert-mini-finetuned-age_news-classification
BERT-Mini fine-tuned on the ag_news dataset for news classification
longformer-base-4096-spanish
bert-small2bert-small-finetuned-cnn_daily_mail-summarization
mbart-large-finetuned-opus-es-en-translation
spanbert-large-finetuned-squadv2
ddpm-ema-pokemon-64
spanish-t5-small-sqac-for-qa
bert-small-finetuned-squadv2
bert-tiny-5-finetuned-squadv2
ModernBERT-base-ft-financial-news-sentiment-analysis
t5-base-e2e-question-generation
phi-2-coder
spanbert-finetuned-squadv2
distiluse-base-multilingual-cased-v2-finetuned-stsb_multi_mt-es
multilingual-e5-large-ft-sts-spanish-matryoshka-768-16-5e
deberta-v3-small-finetuned-sst2
mT5-small-finetuned-tydiqa-for-xqa
t5-small-finetuned-wikiSQL
t5-small-finetuned-text-simplification
bert-tiny-finetuned-enron-spam-detection
t5-base-finetuned-imdb-sentiment
t5-base-finetuned-break_data
longformer-base-4096-spanish-finetuned-squad
deberta-v3-base-goemotions
ddpm-ema-butterflies-128
bert-mini2bert-mini-finetuned-cnn_daily_mail-summarization
T5 Base Finetuned Common Gen
layoutlm-finetuned-funsd
bert-base-portuguese-cased-finetuned-squad-v1-pt
t5-small-finetuned-quora-for-paraphrasing
bert-base-german-finetuned-ler
deberta-v3-small-finetuned-mnli
spanish-TinyBERT-betito
electra-small-finetuned-squadv2
t5-small-finetuned-emotion
bert-hash-femto-ft-prompt-injection
bert-hash-nano-ft-prompt-injection
deberta-v3-small-finetuned-cola
TinyBERT-spanish-uncased-finetuned-ner
bert2bert_shared-turkish-summarization
codebert-base-finetuned-stackoverflow-ner
electricidad-small-finetuned-squadv1-es
vit-base-patch16-224_finetuned-kvasirv2-colonoscopy
bert-hash-pico-ft-prompt-injection
bert-multi-uncased-finetuned-xquadv1
bart-legal-base-es
t5-small-finetuned-squadv1
electricidad-small-finetuned-restaurant-sentiment-analysis
vit-base-patch16-224-pretrained-cifar10
distilroberta-finetuned-tweets-hate-speech
deberta-v3-base-finetuned-squadv2
electricidad-base-discriminator
codebert-base-finetuned-code-ner
deberta-v3-small-finetuned-squadv2
bert-italian-finedtuned-squadv1-it-alfa
t5-small-finetuned-squadv2
bloom-7b1-sharded-fp16
bloomz-7b1-sharded-bf16
bert-spanish-cased-finetuned-pos
distilbert-multi-finedtuned-squad-pt
distilroberta-finetuned-age_news-classification
mobilebert-finetuned-ner
distilroberta-base-ft-allnli-matryoshka-768-64-1e-256bs
falcoder-7b
bert2bert_shared-spanish-finetuned-paus-x-paraphrasing
camembert-base-finetuned-movie-review-sentiment-analysis
spanbert-base-finetuned-tacred
spanbert-large-finetuned-tacred
xlm-multi-finetuned-xquadv1
convnext-tiny-finetuned-eurosat
deberta-v3-large-finetuned-mnli
t5-base-finetuned-spa-squadv1
bloomz-7b1-mt-sharded-bf16
codebert2codebert-finetuned-code-defect-detection
multilingual-e5-large-ft-sts-spanish-matryoshka-768-64-5e
dqn-SpaceInvadersNoFrameskip-v4
bloom-560m-finetuned-the-stack-rust
biomedtra-small-finenuned-clinical-ner
flan-t5-base-finetuned-gsm8k
RuPERTa-base
deberta-v3-small-finetuned-squad
dqn-BeamRiderNoFrameskip-v4
starcoder-sharded-bf16
dqn-SpaceInvadersNoFrameskip-v4-3
bart-base-es-finetuned-mlsum-es-3
mobilebert-uncased-finetuned-squadv2
RuPERTa-base-finetuned-ner
dqn-EnduroNoFrameskip-v4
CodeBERTaPy
bloom-560m-finetuned-wikilingua-spanish-summarization
dqn-SpaceInvadersNoFrameskip-v4-2
bert2bert-spanish-question-generation
flan-t5-small-finetuned-openai-summarize_from_feedback
xlm-roberta-base-finetuned-HC3-mix
bloom-7b1-sharded-bf16
bert-mini-finetuned-squadv2
multilingual-e5-large-instruct-es-trim-30k
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
t5-base-finetuned-e2m-intent
santacoder-finetuned-the-stack-bash-shell
bert2bert-medium_shared-question-generation
roberta-med-small_shared-finetuned-bbc_xsum-summarization
santacoder-finetuned-the-stack-clojure
open_llama_13b-sharded-bf16
ModernBERT-large-ft-fineweb-edu-annotations-4k
GPT-2-finetuned-CORD19
electricidad-base-finetuned-ner
t5-small-finetuned-translation-es-to-pt
llama-3-8b-ft-en-es-rag-gguf-q8_0
t5-base-finetuned-wikiSQL-sql-to-en
convbert-small-spanish
roberta-base-bne-finetuned-sqac
t5-base-finetuned-math-qa-test
vit-base-patch16-224_finetuned-pneumothorax
bert-base-german-dbmdz-cased-finetuned-pawsx-de
bert2bert_shared-portuguese-question-generation
electra-base-finetuned-squadv2
electrovid19-small
prunebert-multi-uncased-finepruned-tydiqa-for-xqa
t5-small-finetuned-imdb-sentiment
multilingual-e5-large-instruct-es-trim-16k
a2c-Pong-v0
deberta-v3-small-finetuned-mrpc
mbart-large-finetuned-opus-it-en-translation
roberta-med-small2roberta-med-small-finetuned-cnn_daily_mail-summarization
ddpm-ema-pokemon-v2-64
camembert-base-finetuned-pawsx-fr
distilbert-multi-finetuned-for-xqua-on-tydiqa
t5-base-finetuned-race
t5-base-finetuned-turk-text-simplification
spanish-mmBERT-small
bert-es-hash-nano-test
mistral-7b-ft-h4-no_robots_instructions
distilroberta-finetuned-banking77
bertin-gpt-j-6B-ES-8bit
electricidad-small-discriminator
bloom-6b3-8bit
electricidad-small-finetuned-muchocine
gpt2-finetuned-reddit-tifu
convnext-tiny-finetuned-beans
RoBERTinha
a2c-PongNoFrameskip-v0
bert2bert_shared-italian-question-generation
distilbert-finetuned-sarcasm-classification
electra-large-finetuned-squadv1
mT5-small-finetuned-multi-question-generation
squeezebert-finetuned-squadv1
squeezebert-finetuned-squadv2
Worm_poca
speaker-segmentation-fine-tuned-callhome-spa-10e
tinyllama-ft-en-es-rag-gguf-q8_0
tinyllama-ft-en-es-rag-gguf-q4_k_m
ModernBERT-base-ft-all-nli
phi-4-14B-grpo-limo-2e-q4
bloom-560m-finetuned-sd-prompts
flan-t5-large-finetuned-openai-summarize_from_feedback
mbart-large-finetuned-opus-en-es-translation
flan-t5-large-finetuned-gsm8k
GPT-2-finetuned-common_gen
electricidad-base-generator
electricidad-small-finetuned-xnli-es
bert2bert-small_shared-question-generation
bert2bert_shared-spanish-finetuned-muchocine-review-summarization
mobilebert-uncased-finetuned-squadv1
t5-small-finetuned-text2log
stablelm2-1.6b-ft-openhermes
functiongemma-270m-it-ft-mobile-actions-es
albert-base-v2-finetuned-mnli-pabee
bert-tiny2bert-tiny_shared-finetuned-wikisql
bert2bert_shared-finetuned-wikisql
electricidad-base-finetuned-squadv1-es
t5-small-finetuned-AESLC-summarization
phi-4-14B-grpo-limo-q4
- Developed by: mrm8488
- License: apache-2.0
- Finetuned from model: unsloth/phi-4-bnb-4bit

This phi model was trained 2x faster with Unsloth and Hugging Face's TRL library.
multilingual-e5-small-es-trim-16k
ModernBERT-base-ft-fineweb-edu-annotations
t5-base-finetuned-qasc
bertin-gpt-j-6B-ES-v1-8bit
bert-multi-cased-finedtuned-xquad-tydiqa-goldp
bloom-1b3-8bit
bert2bert-multilingual_shared-question-generation
biomedtra-small-es
gpt2-finetuned-recipes-cooking_v2
ddpm-ema-anime-128
bloom-560m-finetuned-common_gen
deberta-v3-small-goemotions
electricidad-base-finetuned-pawsx-es
t5-base-finetuned-math-linear-algebra-1d
t5-small-finetuned-boolq
ModernBERT-base-finetuned-squad
ModernBERT-base-ft-financial-news-sentiment-analysis-2
b2b-en-paraphrasing-no-questions
b2b-en-paraphrasing-questions
bert-small2bert-small_shared-finetuned-wikisql
bert2bert-mini_shared-question-generation
distilbert-base-uncased-newspop-student
electra-base-finetuned-squadv1
electra-small-finetuned-squadv1
electricidad-base-finetuned-medical-diagnostics
electricidad-base-finetuned-muchocine
electricidad-base-finetuned-pos
funnel-transformer-intermediate-mnli
prunebert-base-uncased-finepruned-topK-squadv2
prunebert-multi-uncased-finepruned-l0-reg-tydiqa-for-xqa
prunebert-multi-uncased-finepruned-magnitude-tydiqa-for-xqa
prunebert-multi-uncased-finepruned-soft-movement-tydiqa-for-xqa
prunebert-multi-uncased-finepruned-topK-tydiqa-for-xqa
roberta-base-finetuned-multitask
t5-base-finetuned-boolq
t5-base-finetuned-math-linear-algebra-2d
t5-base-finetuned-multinews-512
t5-base-finetuned-qasc-sc
t5-base-finetuned-swag
umberto-wikipedia-uncased-v1-finetuned-squadv1-it
spanish-TinyBERT-betito-finetuned-mnli
data2vec-text-base-finetuned-mrpc
data2vec-base-finetuned-imagenet1k
PushBlock
Worm_v2
switch-base-16-finetuned-samsum
xlm-v-base-finetuned-xglue-xnli
mt5-base-ft-rf-02
xlm-roberta-large-es-trim-30k
xlm-roberta-base-es-trim-16k
multilingual-e5-base-es-trim-30k
multilingual-e5-large-es-trim-16k
multilingual-e5-small-fra-trim-30k
flan-t5-xl-finetuned-unnatural-instructions
bert-tiny-finetuned-yahoo_answers_topics
switch-base-8-finetuned-samsum
GPT-2-finetuned-covid-bio-medrxiv
gpt2-finetuned-recipes-cooking
mbart-large-finetuned-bible-es-en-translation
roberta-base-1B-1-finetuned-squadv1
electricidad-small-finetuned-diagTrast
ModernBERT-base-ft-code-defect-detection-4k
functiongemma-270m-it-ft-mobile-actions-fr
GPT-2-finetuned-CRD3
bert-mini-5-finetuned-squadv2
bert-tiny-3-finetuned-squadv2
t5-base-finetuned-Reddit-TIFU-TLDR
t5-base-finetuned-quoref
data2vec-text-base-finetuned-stsb
ddpm-ema-flower-64
setfit-mpnet-base-v2-finetuned-spam-detection
santacoder-finetuned-the-stack-dockerfiles
ModernBERT-large-ft-all-nli
phi-4-14B-grpo-gsm8k-3e
multilingual-e5-large-es-trim-30k
multilingual-e5-small-es-trim-32k
distilroberta-base-finetuned-suicide-depression
mxbai-embed-large-v1-ft-webinstruct
diltilgpt2-finetuned-bookcopus-10
legalectra-small-spanish
ModernBERT-large-ft-fineweb-edu-annotations
T5-base-finetuned-cuad
t5-small-finetuned-turk-text-simplification
bert-spanish-cased-finetuned-pos-16-tags
electricidad-small-finetuned-medical-diagnostics
t5-small-finetuned-wikisql-sql-nl-nl-sql
data2vec-text-base-finetuned-sst2
ddpm-ema-anime-256
bloom-560m-finetuned-the-stack-brainfuck
santacoder-finetuned-the-stack-swift
BioGPT-Large-finetuned-chatdoctor
DeepSeek-R1-Distill-Qwen-1.5B-grpo-limo-2e
GuaPeTe-2-tiny-finetuned-eubookshop
GuaPeTe-2-tiny
RuPERTa-base-finetuned-squadv2
bert-tiny-wrslb-finetuned-squadv1
bert-uncased-finetuned-qnli
distilroberta-finetuned-squadv1
es-tinybert-v1
roberta-large-finetuned-wsc
spanbert-finetuned-squadv1
t5-base-finetuned-news-titles-classification
data2vec-text-base-finetuned-rte
Worm
bloom-560m-finetuned-samsum
bloom-560m-ft-summarization-cnn
phi2-ft-no_robots-adapter
xlm-roberta-base-es-trim-30k
multilingual-e5-large-instruct-fra-trim-30k
multilingual-e5-base-fra-trim-30k
flan-t5-base-finetuned-openai-summarize_from_feedback
distilbert-base-multi-cased-finetuned-typo-detection
legal-longformer-base-8192-spanish
ddpm-ema-anime-v2-128
bloom-7b1-8bit
CodeGPT-small-finetuned-python-token-completion
legalectra-base-spanish
wav2vec2-large-xlsr-53-spanish
distilbert-base-matryoshka-sts-v2
ViT2GPT-2-es
bert-small-finetuned-typo-detection
bioclinicalBERT-finetuned-covid-papers
deberta-v3-small-finetuned-qnli
roberta-base-bne-finetuned-sqac-retriever
wav2vec2-large-xlsr-53-esperanto
wav2vec2-large-xlsr-53-ukrainian
electricidad-small-finetuned-amazon-review-classification
t5-base-iterater
gpt-neo-1.3B-8bit
pyramidsrnd
bloom-560m-finetuned-news-summarization-cnn
flan-t5-large-finetuned-samsum
flan-t5-small-finetuned-samsum
flan-t5-large-finetuned-samsum-2
electricidad-base-ft-diagTrast
distilbert-base-matryoshka-sts
GuaPeTe-2-tiny-finetuned-TED
HindiBERTa
RoBasquERTa
RuPERTa-base-finetuned-pawsx-es
RuPERTa-base-finetuned-pos
RuPERTa-base-finetuned-spa-constitution
bsc-roberta-base-spanish-diagnostics
byt5-small-finetuned-tweet-qa
chEMBL26_smiles_v2
codebert2codebert-finetuned-code-refinement-small
distilgpt2-finetuned-wsb-tweets
flaubert-small-finetuned-movie-review-sentiment-analysis
spanbert-base-finetuned-squadv1
spanbert-base-finetuned-squadv2
spanbert-large-finetuned-squadv1
t5-base-finetuned-math-seq-next-term
wav2vec2-large-xlsr-53-euskera
data2vec-text-base-finetuned-cola
electricidad-small-finetuned-sst2-es
electricidad-base-finetuned-go_emotions-es-2
switch-base-16-finetuned-xsum
switch-base-16-finetuned-xsum-2
bloom-560m-finetuned-unnatural-instructions
bloom-560m-finetuned-unnatural-instructions-6k-steps
flan-t5-base-finetuned-samsum
flan-t5-base-common_gen
flan-t5-small-common_gen
santacoder-finetuned-the-stack-bash-2
santacoder-finetuned-the-stack-bash-4
xlm-roberta-base-finetuned-HC3
santacoder-finetuned-xlcost-python
xlm-v-base-finetuned-xnli
bart-bio-base-es
roberta-base-finetuned-OIG-mod-2
gpt2-finetuned-jhegarty-texts
distilgpt2-finetuned-jhegarty-texts
distilgpt2-finetuned-jhegarty-books
gpt2-finetuned-jhegarty-books
idefics-9b-ft-describe-diffusion-bf16
idefics-9b-ft-floco
gpt2-l
peaker-segmentation-fine-tuned-callhome-spa
ModernBERT-large-ft-all-nli-2
ModernBERT-base-ft-code-defect-detection-10e-4k
phi-4-14B-grpo-limo