FINGU-AI

45 models

RomboUltima-32B

FINGU-AI/RomboUltima-32B is a merged model combining rombodawg/Rombos-LLM-V2.5-Qwen-32b and Sakalti/ultiima-32B. It maintains the individual strengths of both Qwen and Ultima architectures while benefiting from an optimized fusion for improved reasoning, multilingual comprehension, and multi-turn conversation capabilities.

license:mit

Chocolatine-Fusion-14B

FINGU-AI/Chocolatine-Fusion-14B is a merged model combining jpacifico/Chocolatine-2-14B-Instruct-v2.0b3 and jpacifico/Chocolatine-2-14B-Instruct-v2.0b2. It maintains the strengths of Chocolatine while benefiting from an optimized fusion for improved reasoning and multi-turn conversation capabilities.

license:mit

FinguMv3


FinguAI-Chat-v1

license:apache-2.0

Qwen2.5-7b-lora-e-8



FingUEm_V3


Qwen2.5-32B-Lora-HQ-e-1

license:mit

QWEN2.5-7B-Bnk-7e

license:mit

FINGU-2.5-instruct-32B-v1

`FINGU-AI/FINGU-2.5-instruct-32B-v1` is a versatile causal language model designed to excel in a range of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. The model shows a strong aptitude for reasoning, particularly in Japanese, making it a valuable tool for applications requiring logical inference and complex understanding.

The model's architecture and training regimen have been optimized to strengthen its reasoning abilities. This is particularly evident in logical deduction and commonsense reasoning in Japanese: when evaluated on datasets such as JaQuAD, a Japanese question-answering dataset, the model exhibits a nuanced understanding of complex logical structures. It has also been assessed on the JFLD benchmark, which tests deductive reasoning based on formal logic; its performance indicates a robust capacity for tasks that require understanding and reasoning over formal logical structures.

To further evaluate and enhance the model's reasoning capabilities, the following Japanese reasoning datasets are pertinent:

- JaQuAD (Japanese Question Answering Dataset): a human-annotated dataset for Japanese machine reading comprehension, consisting of 39,696 extractive question-answer pairs on Japanese Wikipedia articles.
- JFLD (Japanese Formal Logic Dataset): a benchmark designed to evaluate deductive reasoning based on formal logic, providing a structured framework for assessing logical reasoning in Japanese.
- JEMHopQA (Japanese Explainable Multi-Hop Question Answering): a multi-hop QA dataset with question-answer pairs and supporting evidence in the form of derivation triples, facilitating the development of explainable QA systems.

These datasets provide diverse challenges for assessing and improving the model's reasoning across different contexts and complexities. `FINGU-AI/FINGU-2.5-instruct-32B-v1` stands as a robust and adaptable language model, particularly distinguished by its Japanese reasoning capabilities; its performance across reasoning benchmarks underscores its potential for applications that demand advanced logical inference and nuanced understanding.
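Extractive QA benchmarks such as JaQuAD are conventionally scored with exact match and token-overlap F1. The sketch below is illustrative scoring code, not part of the model's release; it uses character-level F1, a common choice for Japanese because it sidesteps word segmentation.

```python
def exact_match(prediction: str, reference: str) -> bool:
    """True when the whitespace-stripped prediction equals the reference."""
    return prediction.strip() == reference.strip()

def char_f1(prediction: str, reference: str) -> float:
    """Character-level F1 between a predicted and a reference answer span."""
    pred, ref = prediction.strip(), reference.strip()
    if not pred or not ref:
        return float(pred == ref)
    common = 0
    ref_chars = list(ref)
    for ch in pred:
        if ch in ref_chars:
            ref_chars.remove(ch)  # count each reference character at most once
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, predicting 東京都 against the gold answer 東京 yields precision 2/3 and recall 1, i.e. F1 = 0.8.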

license:mit

Phi-4-RRStock

A merged language model created using the Spherical Linear Interpolation (SLERP) merge method, allowing for a smooth blend of features from both parent models across different layers. The merge optimizes reasoning, general knowledge, and task-specific performance by strategically interpolating attention and MLP components.
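SLERP interpolates along the great-circle arc between two weight vectors rather than along the straight line between them, which preserves their magnitudes better when the vectors point in different directions. A minimal sketch of the per-tensor operation (illustrative only; real merges of this kind are typically done with dedicated tooling rather than hand-rolled code):

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0 and t=1 returns v1; intermediate t values follow the
    arc between them. Falls back to linear interpolation when the vectors
    are nearly colinear, where the spherical formula is numerically unstable.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))          # guard against rounding drift
    theta = math.acos(dot)                  # angle between the two vectors
    if theta < 1e-6:                        # nearly colinear: lerp is fine
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Halfway between two orthogonal unit vectors stays on the unit circle.
print(slerp([1.0, 0.0], [0.0, 1.0], 0.5))
```

A per-layer merge would apply this with a different `t` schedule for attention and MLP tensors, which is what "strategically interpolating attention and MLP components" refers to.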

llama

Qwen2.5-7b-lora-e-5


Qwen2.5-32B-Lora-HQ-e-5

license:mit

Qwen2.5-32B-Lora-HQ-e-6

`FINGU-AI/Qwen2.5-32B-Lora-HQ-e-6` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. It is particularly useful for translating between Korean and Uzbek, as well as supporting other custom NLP tasks through flexible input.

Model details:
- Model ID: `FINGU-AI/Qwen2.5-32B-Lora-HQ-e-6`
- Architecture: causal language model (LM)
- Parameters: 32 billion
- Precision: Torch BF16 for efficient GPU memory usage
- Attention: SDPA (Scaled Dot-Product Attention)
- Primary use case: translation (e.g., Korean to Uzbek), text generation, and dialogue systems

Make sure to install the required packages.
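One plausible way to load and query the model with the `transformers` library, matching the BF16 and SDPA settings listed in the model details. The chat-style translation prompt is an assumption on our part, not a format documented by the card.

```python
def build_messages(text, src="Korean", tgt="Uzbek"):
    """Chat-style messages for a translation request (assumed prompt format)."""
    return [
        {"role": "system", "content": f"Translate the following {src} text to {tgt}."},
        {"role": "user", "content": text},
    ]

def translate(text, model_id="FINGU-AI/Qwen2.5-32B-Lora-HQ-e-6", max_new_tokens=256):
    # Heavy imports are deferred so the prompt helper above stays usable
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,     # BF16 precision, per the model details
        attn_implementation="sdpa",     # SDPA attention, per the model details
        device_map="auto",
    )
    input_ids = tokenizer.apply_chat_template(
        build_messages(text), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Running `translate` requires roughly 64 GB of GPU memory for a 32B model in BF16, so `device_map="auto"` is used to spread the weights across available devices.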

license:mit

Qwen2.5-7b-lora-e-3


Qwen2.5-L-e-4


qwen2.5-omni-3b-merge


Qwen3-14B-BNK-5ks

license:mit

QwQ-Buddy-32B-Alpha

QwQ Buddy 32B Alpha is a merged 32B model created by fusing two high-performing models. It is built using the transformers library and is licensed under MIT.

license:mit

Fingu-instruct-1


Qwen2.5-L-e-3


Qwen2.5-32B-Lora-HQ-e-3

license:mit

Qwen2.5-32B-Lora-HQ-e-4

`FINGU-AI/Qwen2.5-32B-Lora-HQ-e-4` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. It is particularly useful for translating between Korean and Uzbek, as well as supporting other custom NLP tasks through flexible input.

Model details:
- Model ID: `FINGU-AI/Qwen2.5-32B-Lora-HQ-e-4`
- Architecture: causal language model (LM)
- Parameters: 32 billion
- Precision: Torch BF16 for efficient GPU memory usage
- Attention: SDPA (Scaled Dot-Product Attention)
- Primary use case: translation (e.g., Korean to Uzbek), text generation, and dialogue systems

Make sure to install the required packages.

license:mit

QWEN2.5-7B-Bnk-3e

QWEN2.5-7B-Bnk-3e is a multilingual translation model based on the QWEN 2.5 architecture with 7 billion parameters. It specializes in translating text from various Asian and European languages into Korean and Uzbek, and can be used for tasks such as:

- Multilingual document translation
- Cross-lingual information retrieval
- Language learning applications
- International communication assistance

Fine-tuning was performed on the QWEN 2.5 7B base model using custom datasets of parallel texts covering the supported language pairs; evaluation was performed on held-out test sets for each pair. The model supports translation from the following languages into Korean and Uzbek:

- Uzbek (uz)
- Russian (ru)
- Thai (th)
- Chinese, Simplified (zh)
- Chinese, Traditional (zh-tw, zh-hant)
- Bengali (bn)
- Mongolian (mn)
- Indonesian (id)
- Nepali (ne)
- English (en)
- Khmer (km)
- Portuguese (pt)
- Sinhala (si)
- Korean (ko)
- Tagalog (tl)
- Myanmar (my)
- Vietnamese (vi)
- Japanese (ja)

Limitations and responsible use:

- Performance may vary across language pairs and domains; the model may struggle with very colloquial or highly specialized text.
- It may not always capture idiomatic expressions, cultural nuances, or context-dependent meanings accurately.
- The model should not be used to generate or propagate harmful, biased, or misleading content, and users should be aware of potential biases in the training data.
- Outputs should not be treated as certified translations for official or legal purposes without human verification.

Citation:

```bibtex
@misc{fingu2023qwen25,
  author       = {FINGU AI and AI Team},
  title        = {QWEN2.5-7B-Bnk-7e: A Multilingual Translation Model},
  year         = {2024},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/FINGU-AI/QWEN2.5-7B-Bnk-5e}}
}
```
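A small sketch of how client code might validate a language pair against the supported set before forming a translation request. The language codes come from the list above; the prompt wording and the `translation_prompt` helper are our own illustration, not an interface shipped with the model.

```python
# Source languages the model card lists, keyed by language code.
SUPPORTED_SOURCES = {
    "uz": "Uzbek", "ru": "Russian", "th": "Thai",
    "zh": "Chinese (Simplified)", "zh-tw": "Chinese (Traditional)",
    "zh-hant": "Chinese (Traditional)", "bn": "Bengali", "mn": "Mongolian",
    "id": "Indonesian", "ne": "Nepali", "en": "English", "km": "Khmer",
    "pt": "Portuguese", "si": "Sinhala", "ko": "Korean", "tl": "Tagalog",
    "my": "Myanmar", "vi": "Vietnamese", "ja": "Japanese",
}
# The model only translates into Korean and Uzbek.
TARGETS = {"ko": "Korean", "uz": "Uzbek"}

def translation_prompt(text, src_code, tgt_code):
    """Build a translation instruction, rejecting unsupported language pairs."""
    if src_code not in SUPPORTED_SOURCES:
        raise ValueError(f"unsupported source language: {src_code}")
    if tgt_code not in TARGETS:
        raise ValueError(f"unsupported target language: {tgt_code}")
    return (f"Translate the following {SUPPORTED_SOURCES[src_code]} text "
            f"to {TARGETS[tgt_code]}:\n{text}")
```

Rejecting unsupported pairs up front avoids sending the model requests it was never fine-tuned for, where output quality is unpredictable.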

license:mit

FINGU-2.5-instruct-32B

license:mit

Qwen-Orpo-v1

license:apache-2.0

FingUv2


Fingu-instruct-2


Fingu-instruct-3


Qwen2.5-orpo-lora


Qwen2.5-Orpo


Qwen2.5-L-e-2


QWEN2.5-32B-e1-adapter


QWEN2.5-32B-3600s

`FINGU-AI/QWEN2.5-32B-3600s` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. It is particularly useful for translating between languages, as well as supporting other custom NLP tasks through flexible input. Make sure to install the required packages.

license:apache-2.0

L3-72b-Large

license:apache-2.0

Q-Small-3B

Q Small 3B is a powerful causal language model designed for a variety of natural language processing tasks, including machine translation, text generation, and chat-based applications. It is particularly useful for translating between languages and supports other custom NLP tasks through flexible input. Make sure to install the required packages.

license:apache-2.0

Ultimos-32B

FINGU-AI/Ultimos-32B is a merged model combining rombodawg/Rombos-LLM-V2.5-Qwen-32b and Sakalti/ultiima-32B. It maintains the individual strengths of both Qwen and Ultima architectures while benefiting from an optimized fusion for improved reasoning, multilingual comprehension, and multi-turn conversation capabilities.

license:mit

QWEN2.5-32B-v3-5400s-m


qwen2.5-omni-3b-lora-sft

llama-factory

FinguEm7b


L3-8B

FINGU-AI/L3-8B is a powerful causal language model designed for a variety of natural language processing tasks, including machine translation, text generation, and chat-based applications. It is particularly useful for translating between languages and supporting other custom NLP tasks through flexible input. Make sure to install the required packages.

llama

Fingu-M-v1


QWEN2.5-14B-Bnk-FP16


Qwen2.5_14B_Instruct_Fine_Tuned_v3


BNK_Translate_LLM_V3

license:mit

L3-78b-Large-v1

`FINGU-AI/L3-78b-Large-v1` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. It is particularly useful for translating between languages, as well as supporting other custom NLP tasks through flexible input. Make sure to install the required packages.

license:apache-2.0