ZeroXClem
Qwen3-4B-CrystalSonic-Q6_K-GGUF
ZeroXClem/Qwen3-4B-CrystalSonic-Q6_K-GGUF: This model was converted to GGUF format from `ZeroXClem/Qwen3-4B-CrystalSonic` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on macOS and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
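The install-build-run steps above can be sketched as shell commands. This follows the GGUF-my-repo boilerplate's Makefile-based build (recent llama.cpp releases have moved to CMake), and the exact `.gguf` filename inside the repo is an assumption here; check the repo's file list before running:

```shell
# Option A: install a prebuilt llama.cpp via Homebrew (macOS / Linux)
brew install llama.cpp

# Option B: build from source with CURL support (needed for --hf-repo downloads)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make            # add LLAMA_CUDA=1 for NVIDIA GPUs on Linux

# Run the quantized checkpoint straight from the Hugging Face repo
# (the --hf-file name below is assumed; verify it in the repo's file list)
llama-cli --hf-repo ZeroXClem/Qwen3-4B-CrystalSonic-Q6_K-GGUF \
          --hf-file qwen3-4b-crystalsonic-q6_k.gguf \
          -p "Explain retrieval-augmented generation in two sentences."
```

The same pattern applies to every other `*-GGUF` repo on this page; only the `--hf-repo` slug and `--hf-file` name change.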
Qwen3 4B Hermes Axion Pro
Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q4_0-GGUF
Llama3.1-TheiaFire-DarkFusion-8B-Q4_K_M-GGUF
Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-Q5_K_M-GGUF
Astral-Fusion-Neural-Happy-L3.1-8B-Q8_0-GGUF
Llama3.1-8B-Titanium-Forge-Q5_K_M-GGUF
LLama3.1-Hawkish-Theia-Fireball-8B-Q4_0-GGUF
Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-Q4_0-GGUF
Llama3.1-DarkStorm-Aspire-8B-Q5_0-GGUF
ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q5_0-GGUF: This model was converted to GGUF format from `ZeroXClem/Llama3.1-DarkStorm-Aspire-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on macOS and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
L3SAO-Mix-SuperHermes-NovaPurosani-8B-Q4_0-GGUF
Astral-Fusion-Neural-Happy-L3.1-8B-Q5_K_S-GGUF
Llama3.1-DarkStorm-Aspire-8B-Q6_K-GGUF
ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q6_K-GGUF: This model was converted to GGUF format from `ZeroXClem/Llama3.1-DarkStorm-Aspire-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on macOS and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Llama3.1-TheiaFire-DarkFusion-8B-Q4_K_S-GGUF
L3.1-Pneuma-Allades-8B-Q5_0-GGUF
L3-Aspire-Heart-Matrix-8B-Q6_K-GGUF
Qwen3-4B-Valiant-Polaris-Q4_K_M-GGUF
Qwen3-4B-Hermes-Axion-Pro-Q4_K_M-GGUF
Llama3.1-DarkStorm-Aspire-8B-Q4_K_S-GGUF
Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q5_0-GGUF
L3-Aspire-Heart-Matrix-8B-Q4_0-GGUF
Llama3.1-DarkStorm-Aspire-8B-Q5_K_M-GGUF
ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q5_K_M-GGUF: This model was converted to GGUF format from `ZeroXClem/Llama3.1-DarkStorm-Aspire-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on macOS and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
L3SAO-Mix-SuperHermes-NovaPurosani-8B-Q5_0-GGUF
Llama3.1-BestMix-Chem-Einstein-8B-Q4_0-GGUF
Llama3.1-8B-Titanium-Forge-Q5_0-GGUF
Llama3.1-8B-Titanium-Forge-Q4_K_M-GGUF
Qwen2.5-7B-Qandora-CySec-Q4_0-GGUF
Llama3.1-8B-Titanium-Forge-Q4_K_S-GGUF
Llama3.1-TheiaFire-DarkFusion-8B-Q5_0-GGUF
Llama3.1-8B-Titanium-Forge-Q4_0-GGUF
Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-Q8_0-GGUF
Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q8_0-GGUF
Llama3.1-BestMix-Chem-Einstein-8B-Q8_0-GGUF
Llama3.1-8B-Titanium-Forge-Q6_K-GGUF
Qwen2.5-7B-HomerAnvita-NerdMix-Q4_K_M-GGUF
Qwen3-4B-Hermes-Axion-Pro-Q6_K-GGUF
ZeroXClem/Qwen3-4B-Hermes-Axion-Pro-Q6_K-GGUF: This model was converted to GGUF format from `ZeroXClem/Qwen3-4B-Hermes-Axion-Pro` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on macOS and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
LLama3.1-Hawkish-Theia-Fireball-8B-Q8_0-GGUF
Llama3.1-DarkStorm-Aspire-8B-Q4_K_M-GGUF
Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q5_K_M-GGUF
Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-Q6_K-GGUF
Stheno-Hercules-3.1-8B-Q4_0-GGUF
Qwen2.5-7B-HomerAnvita-NerdMix-Q5_0-GGUF
Llama3.1-DarkStorm-Aspire-8B-Q8_0-GGUF
ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q8_0-GGUF: This model was converted to GGUF format from `ZeroXClem/Llama3.1-DarkStorm-Aspire-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on macOS and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Llama3.1-DarkStorm-Aspire-8B-Q5_K_S-GGUF
ZeroXClem/Llama3.1-DarkStorm-Aspire-8B-Q5_K_S-GGUF: This model was converted to GGUF format from `ZeroXClem/Llama3.1-DarkStorm-Aspire-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on macOS and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
L3.1-Pneuma-Allades-8B-Q4_0-GGUF
Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-Q4_K_S-GGUF
LLama3.1-Hawkish-Theia-Fireball-8B-Q5_0-GGUF
LLama3.1-Hawkish-Theia-Fireball-8B-Q4_K_M-GGUF
Llama3.1-TheiaFire-DarkFusion-8B-Q8_0-GGUF
L3-Aspire-Heart-Matrix-8B-Q4_K_M-GGUF
Qwen2.5-7B-HomerAnvita-NerdMix-Q4_0-GGUF
Astral-Fusion-Neural-Happy-L3.1-8B-Q5_0-GGUF
Llama-3.1-8B-SpecialTitanFusion
Language model with support for English. Licensed under Apache 2.0.
L3SAO-Mix-SuperHermes-NovaPurosani-8B-Q8_0-GGUF
Llama3.1-BestMix-Chem-Einstein-8B-Q4_K_M-GGUF
Llama3.1-TheiaFire-DarkFusion-8B-Q5_K_S-GGUF
Qwen2.5-7B-HomerCreative-Mix-Q6_K-GGUF
Qwen2.5-7B-HomerCreative-Mix-Q4_0-GGUF
Qwen2.5-7B-HomerAnvita-NerdMix-Q6_K-GGUF
Qwen2.5-7B-HomerAnvita-NerdMix-Q5_K_M-GGUF
L3-Aspire-Heart-Matrix-8B-Q8_0-GGUF
LLama3.1-Hawkish-Theia-Fireball-8B-Q5_K_M-GGUF
Llama-3-Yggdrasil-AstralSpice-8B-Q5_K_M-GGUF
Llama-3-Yggdrasil-AstralSpice-8B-Q8_0-GGUF
Qwen3-4B-CrystalSonic
ZeroXClem-Qwen3-4B-CrystalSonic is an elite 4B-parameter merged model designed for deep reasoning, long-context tool use, structured code generation, and agentic autonomy. Built with MergeKit's `model_stock` method, this crystal-clear fusion draws from powerful contributors like MiroThinker, Muscae-UI, Fathom-Search, and Claude-distilled reasoning variants. At its heart lies Qwen3-4B-Pro, making this model both versatile and production-ready.

Merged components:
- A cutting-edge agentic model with 64k context, designed for task decomposition, web search, retrieval-augmented reasoning, and long-horizon problem solving. Built on DPO with multilingual capabilities.
- Fine-tuned for structured code generation in HTML, React, Tailwind, Markdown, and YAML. Supports layout-aware reasoning, component hierarchy, and UI prototyping with structured output.
- Trained for open-ended, deep information retrieval and autonomous search workflows. Sets new benchmarks in DeepSearch, surpassing GPT-4o + Search on reasoning-heavy QA.
- 🎭 `Liontix/Qwen3-4B-Claude-Sonnet-4-Reasoning-Distill-Safetensor`: distilled from Claude Sonnet 4/3.7, this model contributes high-fidelity reasoning and conversational engagement to the CrystalSonic blend.
- The base for long-context thought generation (262k context length), with improved reasoning across logic, math, alignment, tool use, and creativity.

Features:
- 🔹 Advanced Reasoning & DeepSearch: from Fathom and MiroThinker; search-aware, long-horizon, tool-augmented thinking.
- 🔹 UI & Structured Code Generation: Muscae-UI brings layout-aware reasoning and polished frontend component synthesis.
- 🔹 Safe & Aligned Dialogues: Claude-style instruction distillation adds emotional nuance and safe defaults.
- 🔹 Agentic Capabilities: native support for thinking modes, planning, web search, file parsing, and external tool use.
- 🔹 Multilingual & Scientific: handles technical, scientific, and cross-lingual queries with elegance and depth.

Use cases:
- 🧑💻 Frontend & UI Prototyping
- 🧠 Search-Augmented Autonomous Agents
- 🧬 Scientific Reasoning & Math
- 💬 Conversational AI with Deep Context
- 📑 Tool-Augmented Research Assistants
- 🔍 Structured Information Synthesis

License: Apache 2.0. Credit to MiroThinker, Fathom-Search, Muscae, and Qwen3-4B for their amazing models! We welcome your prompts, benchmarks, and merge proposals! 🌐 Hugging Face: @ZeroXClem 📬 GitHub Issues & PRs: let's build smarter agents together.
Llama3.1-BestMix-Chem-Einstein-8B-Q5_0-GGUF
Llama3.1-BestMix-Chem-Einstein-8B-Q6_K-GGUF
Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-Q5_K_S-GGUF
Qwen2.5-7B-HomerAnvita-NerdMix-Q8_0-GGUF
Qwen-2.5-Aether-SlerpFusion-7B-Q4_K_M-GGUF
Qwen2.5-7B-Qandora-CySec
License: Apache 2.0, Library Name: Transformers.
Llama-3-Yggdrasil-AstralSpice-8B-Q5_K_S-GGUF
Astral-Fusion-Neural-Happy-L3.1-8B-Q4_K_S-GGUF
Astral-Fusion-Neural-Happy-L3.1-8B-Q6_K-GGUF
Astral-Fusion-Neural-Happy-L3.1-8B-Q4_0-GGUF
Llama-3-Yggdrasil-AstralSpice-8B-Q4_K_M-GGUF
L3.1-Pneuma-Allades-8B-Q4_K_M-GGUF
Llama3.1-8B-Titanium-Forge-Q8_0-GGUF
Llama3.1-8B-Titanium-Forge-Q5_K_S-GGUF
Llama3.1-TheiaFire-DarkFusion-8B-Q6_K-GGUF
L3-Aspire-Heart-Matrix-8B-Q5_K_M-GGUF
LLama3.1-Hawkish-Theia-Fireball-8B-Q6_K-GGUF
Llama-3.1-8B-AthenaSky-MegaMix-Q4_K_M-GGUF
Llama3.1-DarkStorm-Aspire-8B
Welcome to Llama3.1-DarkStorm-Aspire-8B, an advanced and versatile 8B-parameter AI model born from the fusion of powerful language models, designed to deliver superior performance across research, writing, coding, and creative tasks. This unique merge blends the best qualities of the Dark Enigma, Storm, and Aspire models on the strong foundation of DarkStock. With balanced integration, it excels at generating coherent, context-aware, and imaginative outputs.

Llama3.1-DarkStorm-Aspire-8B combines cutting-edge natural language processing capabilities to perform exceptionally well in a wide variety of tasks:
- Research and Analysis: perfect for analyzing textual data, planning experiments, and brainstorming complex ideas.
- Creative Writing and Roleplaying: excels in creative writing, immersive storytelling, and generating roleplaying scenarios.
- General AI Applications: use it for any application where advanced reasoning, instruction-following, and creativity are needed.

This merge incorporates the finest elements of the following models:
- Llama3.1-Dark-Enigma: known for its versatility across creative, research, and coding tasks; specializes in role-playing and simulating scenarios.
- Llama-3.1-Storm-8B: a finely-tuned model for structured reasoning, enhanced conversational capabilities, and agentic tasks.
- Aspire-8B: renowned for high-quality generation across creative and technical domains.
- L3.1-DarkStock-8B: the base model, providing a sturdy and balanced core of instruction-following and narrative generation.

This model was created using the Model Stock merge method, meticulously balancing each component model's unique strengths, with the TIES merge method used to blend the layers. TIES ensures seamless integration across the self-attention and MLP layers and smooth interpolation across each model's specializations.

The model uses bfloat16 for efficient processing and float16 for the final output, ensuring optimal performance without sacrificing precision.

1. Instruction Following & Reasoning: leveraging DarkStock's structured capabilities, this model excels at complex reasoning tasks and precise instruction-based outputs.
2. Creative Writing & Role-Playing: the combination of Aspire and Dark Enigma offers powerful storytelling and roleplaying support, making it an ideal tool for immersive worlds and character-driven narratives.
3. High-Quality Output: the model is designed to provide coherent, context-aware responses, ensuring high-quality results across all tasks, whether research, creative writing, or coding assistance.

Llama3.1-DarkStorm-Aspire-8B is suitable for a wide range of applications:
- Creative Writing & Storytelling: generate immersive stories, role-playing scenarios, or fantasy world-building with ease.
- Technical Writing & Research: analyze text data, draft research papers, or brainstorm ideas with structured reasoning.
- Conversational AI: simulate engaging and contextually aware conversations.

The models included in this merge were each trained on diverse datasets:
- Llama3.1-Dark-Enigma and Storm-8B were trained on a mix of high-quality public datasets, with a focus on creative and technical content.
- Aspire-8B emphasizes a balance between creative writing and technical precision, making it a versatile addition to the merge.
- DarkStock provided a stable base, finely tuned for instruction-following and diverse general applications.

As with any AI model, it's important to understand the limitations of Llama3.1-DarkStorm-Aspire-8B:
- Bias: while the model has been trained on diverse data, biases in the training data may influence its output. Users should critically evaluate the model's responses in sensitive scenarios.
- Fact-based Tasks: for fact-checking and knowledge-driven tasks, it may require careful prompting to avoid hallucinations or inaccuracies.
- Sensitive Content: this model takes an uncensored approach, so be cautious when dealing with potentially sensitive or offensive content.

You can load the model using Hugging Face's transformers library. For best results, use bfloat16 precision for high efficiency, or float16 for the final outputs. This model is open-sourced under the Apache 2.0 License, allowing free use, distribution, and modification with proper attribution. We're excited to see how the community uses Llama3.1-DarkStorm-Aspire-8B in various creative and technical applications. Be sure to share your feedback and improvements on the Hugging Face model page!
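The transformers loading step mentioned in the card can be sketched as follows. This is a minimal sketch, not the author's official snippet; it assumes `torch`, `transformers`, and `accelerate` (for `device_map="auto"`) are installed, uses the bfloat16 precision the card recommends, and the prompt and sampling settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeroXClem/Llama3.1-DarkStorm-Aspire-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card recommends bfloat16 for efficient processing
    device_map="auto",           # requires the accelerate package
)

# Build a chat-formatted prompt and generate a response
messages = [{"role": "user", "content": "Brainstorm three angles for a dark-fantasy short story."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```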
Mistral-2.5-Prima-Hercules-Fusion-7B
Llama3.1-BestMix-Chem-Einstein-8B-Q5_K_M-GGUF
Qwen2.5-7B-HomerCreative-Mix-Q5_K_M-GGUF
Qwen2.5-7B-HomerCreative-Mix
Language model with Apache 2.0 license.
Llama3.1 TheiaFire DarkFusion 8B
- Architecture: Llama 3.1 8B
- Proposed Name: Llama3.1-TheiaFire-DarkFusion-8B
- Merge Method: TIES
- Merge Date: 10/25/2024
- License: Apache 2.0

Llama3.1-TheiaFire-DarkFusion-8B is a highly specialized fusion of four cutting-edge models, meticulously combined to provide an exceptional balance of technical reasoning, creativity, and uncensored freedom for a variety of use cases. Whether you need advanced coding assistance, blockchain insights, creative roleplaying, or general-purpose AI capabilities, this model delivers state-of-the-art results. It was merged using the TIES method to ensure optimal blending of layer weights and parameter configurations, resulting in a model that excels in multiple domains.

Note: for optimal results, leave the system prompt blank in LM Studio; the tokenizer seems to struggle with system prompts.

The following models were merged to create Llama3.1-TheiaFire-DarkFusion-8B:
1. Theia-Llama-3.1-8B-v1
   - Purpose: balances technical vision and crypto capabilities.
   - Training Focus: specializes in blockchain data; trained on a large dataset of crypto whitepapers, research reports, and market data.
   - Unique Feature: fine-tuned using LoRA for optimized crypto-specific performance.
2. EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO
   - Purpose: specialized in agentic reasoning and advanced coding tasks.
   - Unique Feature: equipped with a 128K context window and built-in tools for ReAct, calculator, search, and more.
3. aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
   - Purpose: provides uncensored, creativity-driven responses ideal for writing, role-playing, and in-depth conversations.
   - Unique Feature: the uncensored nature allows open exploration of creative writing and darker, more complex roleplay scenarios.
4. DeepAutoAI/ldmsoupLlama-3.1-8B-Inst
   - Purpose: enhances performance with latent diffusion model blending.
   - Unique Feature: builds upon Llama-3.1's foundation and improves unseen-task generalization with latent diffusion.

Use cases:
1. Crypto Analysis & Blockchain Projects
   - Leverages data from CoinMarketCap and research reports for in-depth analysis of blockchain projects and crypto markets.
   - Ideal for creating blockchain-related content or automating crypto data analysis.
2. Advanced Coding Assistant
   - Built-in support for agentic behavior such as reasoning and action, making it well suited to AI-driven coding assistance.
   - Handles large-scale coding projects with tools like search and calculator integration.
3. Creative Writing & Roleplay
   - Uncensored output allows rich, expressive writing ideal for novels, creative pieces, or roleplay scenarios.
   - Capable of producing nuanced, emotionally complex character responses in roleplaying games or interactive storytelling.
4. Unseen Task Generalization
   - With its latent diffusion capabilities, the model can handle unseen tasks by learning weight distributions adaptively, improving performance on novel datasets or tasks.

Highlights:
- Significant improvements in multi-domain reasoning, code generation, and unconstrained creative output.
- Enhanced task generalization due to latent-diffusion model blending techniques.
- Context Window: 128K (capable of handling long-form tasks like novel writing and in-depth research).
- Agentic Tools: built-in tools like search and calculator.
- Safety: while uncensored, responsible prompting is encouraged to ensure the best user experience and ethical usage.

This model can be used in popular AI libraries like Transformers and LangChain.

Limitations:
- Uncensored Output: while this model offers creative freedom, it may produce content that could be considered inappropriate or unsuitable for certain contexts.
- Bias: as with all language models, this one may reflect inherent biases in the training data. Users are encouraged to review and edit the outputs before use.

Acknowledgements: this model is a collective effort, combining groundbreaking work from:
- Chainbase Labs (Theia-Llama)
- EpistemeAI (Fireball Meta-Llama)
- aifeifei798 (DarkIdol)
- DeepAutoAI (LDM Soup)

Special thanks to the open-source community and the developers who contributed to the training and fine-tuning of these models.
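A basic Transformers setup for this model could look like the following. This is a sketch, not the card's original snippet; it assumes `torch`, `transformers` (a recent version whose text-generation pipeline accepts chat-style message lists), and `accelerate` are installed, and per the note above no system prompt is supplied:

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="ZeroXClem/Llama3.1-TheiaFire-DarkFusion-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)

# No system prompt: the card advises leaving it blank.
messages = [{"role": "user", "content": "Outline a simple ERC-20 token contract and explain each function."}]
result = pipe(messages, max_new_tokens=300)
print(result[0]["generated_text"][-1]["content"])
```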
Astral-Fusion-Neural-Happy-L3.1-8B-Q5_K_M-GGUF
Llama-3-Yggdrasil-AstralSpice-8B-Q6_K-GGUF
Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q6_K-GGUF
Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q4_K_S-GGUF
Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-Q4_K_M-GGUF
Llama-3-8B-ProLong-SAO-Roleplay-512k
Llama-3.1-8B-SuperTulu-LexiNova
Language model with support for English. Licensed under Apache 2.0.
L3SAO-Mix-SuperHermes-NovaPurosani-8B-Q4_K_M-GGUF
Llama3.1-TheiaFire-DarkFusion-8B-Q5_K_M-GGUF
Llama-3-Aetheric-Hermes-Lexi-Smaug-8B-Q5_0-GGUF
L3-Aspire-Heart-Matrix-8B-Q5_0-GGUF
Qwen2.5-7B-HomerCreative-Mix-Q4_K_M-GGUF
Stheno-Hercules-3.1-8B-Q5_0-GGUF
Qwen-2.5-Aether-SlerpFusion-7B-Q6_K-GGUF
Qwen2.5-7B-CelestialHarmony-1M
License: MIT, Library Name: Transformers, Tags:
Astral-Fusion-Neural-Happy-L3.1-8B
L3-Aspire-Heart-Matrix-8B
Licensed under Apache 2.0. Tags include merge.
Qwen3-8B-HoneyBadger-EXP
Qwen3-4B-Valiant-Polaris
Astral-Fusion-Neural-Happy-L3.1-8B-Q4_K_M-GGUF
Llama 3.1 8B AthenaSky MegaMix
Overview: ZeroXClem-Llama-3.1-8B-AthenaSky-MegaMix is a powerful AI model built through model stock merging using MergeKit. It brings together some of the best models available on Hugging Face, ensuring strong performance across a wide range of NLP tasks, including reasoning, coding, roleplay, and instruction-following. It was created by merging high-quality foundational and fine-tuned models into an optimized blended architecture that retains the strengths of each contributing model.

Merge Details
- Merge Method: `model_stock`
- Base Model: `mergekit-community/L3.1-Athena-d-8B`
- Dtype: `bfloat16`
- Tokenizer Source: `mergekit-community/L3.1-Athena-d-8B`

Models Merged: the following models contributed to this fusion:
- `Pedro13543/megablendmodel`: a well-balanced blend of roleplay and instruction-tuned Llama-3.1 variants.
- `Skywork/Skywork-o1-Open-Llama-3.1-8B`: optimized for reasoning and slow-thinking capabilities.
- `Undi95/Meta-Llama-3.1-8B-Claude`: fine-tuned on Claude Opus/Sonnet data, improving response depth and conversational engagement.
- `mergekit-community/goodmixmodelStock`: a diverse mixture including RP-focused and knowledge-heavy datasets.

Features & Improvements
- 🔹 Advanced Reasoning & Thoughtfulness: thanks to `Skywork-o1` integration, this model excels at logical thinking and problem-solving.
- 🔹 Enhanced Conversational Depth: the inclusion of `Meta-Llama-3.1-8B-Claude` adds better response structuring, making it more engaging in dialogue.
- 🔹 Versatile Roleplay & Creativity: leveraging `megablendmodel` and `goodmixmodelStock`, the model supports immersive roleplaying and storytelling.
- 🔹 Strong Instruction Following: trained on various instruction datasets to provide clear, informative, and helpful responses.

Use Cases
- Chat & Roleplay: supports natural, engaging, and dynamic conversational flow.
- Programming & Code Generation: provides reliable code completions and debugging suggestions.
- Creative Writing: generates compelling stories, character dialogues, and immersive text.
- Educational Assistance: helps explain complex topics and answer academic questions.
- Logic & Problem-Solving: handles reasoning-based and structured thought processes.

You can run the model using Ollama for direct testing.

Model Alignment & Ethics
- ⚠️ Uncensored Use: this model does not apply strict moderation. Users should implement appropriate safety filters before deployment.
- ⚠️ Responsibility Notice: you are responsible for the outputs generated by this model. Apply ethical safeguards and content moderation when integrating it into applications.
- 📜 License: governed by the Meta Llama 3.1 Community License Agreement.

Feedback & Contributions: we welcome feedback, bug reports, and performance evaluations! If you find improvements or wish to contribute, feel free to reach out or submit suggestions.

ZeroXClem Team | 2025

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 26.79 |
| IFEval (0-Shot)     | 63.01 |
| BBH (3-Shot)        | 31.39 |
| MATH Lvl 5 (4-Shot) | 27.95 |
| GPQA (0-shot)       |  3.69 |
| MuSR (0-shot)       |  6.90 |
| MMLU-PRO (5-shot)   | 27.82 |
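The Ollama quick test mentioned in the card can be sketched as below. Ollama can pull GGUF checkpoints directly from Hugging Face by repo path; the Q4_K_M GGUF repo listed elsewhere on this page is used here, and the exact tag should be verified against that repo:

```shell
# Pull and chat with the GGUF quantization straight from Hugging Face
ollama run hf.co/ZeroXClem/Llama-3.1-8B-AthenaSky-MegaMix-Q4_K_M-GGUF
```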
Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B-Q4_K_M-GGUF
L3SAO-Mix-SuperHermes-NovaPurosani-8B-Q4_K_S-GGUF
Llama3.1-8B-Titanium-Forge
Qwen2.5-7B-Qandora-CySec-Q4_K_M-GGUF
Qwen2.5-7B-Qandora-CySec-Q5_K_M-GGUF
Qwen2.5-7B-Qandora-CySec-Q8_0-GGUF
Qwen2.5-7B-Qandora-CySec-Q6_K-GGUF
Qwen2.5-7B-Qandora-CySec-Q5_K_S-GGUF
Qwen2.5-7B-Qandora-CySec-Q5_0-GGUF
Qwen2.5-7B-Qandora-CySec-Q4_K_S-GGUF
Qwen2.5-7B-HomerCreative-Mix-Q8_0-GGUF
Qwen2.5-7B-HomerCreative-Mix-Q5_0-GGUF
Qwen-2.5-Aether-SlerpFusion-7B-Q5_K_M-GGUF
Qwen-2.5-Aether-SlerpFusion-7B-Q8_0-GGUF
Qwen2.5-7B-HomerAnvita-NerdMix
Language: en License: apache-2.0
Llama3.1-BestMix-Chem-Einstein-8B
Stheno-Hercules-3.1-8B
Qwen2.5-7B-HomerFuse-NerdExp
Mistral-2.5-Prima-Hercules-Fusion-7B-Q5_K_M-GGUF
Llama-3.1-8B-SuperTulu-LexiNova-Q6_K-GGUF
LLama3.1-Hawkish-Theia-Fireball-8B
Llama 3.1 8B Athena Apollo Exp
ZeroXClem-Llama-3.1-8B-Athena-Apollo-exp is a powerful AI model built through Model Stock merging using MergeKit. It merges several of the most capable and nuanced Llama-3.1-based models available on Hugging Face, optimized for performance across instruction-following, roleplay, logic, coding, and creative writing tasks. By fusing diverse fine-tuned architectures into a cohesive blended model, this creation delivers excellent generalist abilities while retaining specialized strengths.

- Merge Method: `model_stock`
- Base Model: `mergekit-community/L3.1-Athena-l3-8B`
- Dtype: `bfloat16`
- Tokenizer Source: `mergekit-community/L3.1-Athena-l3-8B`

The following models contribute to this powerful fusion:
- `rootxhacker/Apollo-exp-8B`: a rich blend focused on alignment, DPO, and SFT instruction tuning across Llama-3.1 variants.
- `mergekit-community/L3.1-Athena-k-8B`: a roleplay- and safety-aligned merge based on Meta's Llama-3.1 foundation.
- `mergekit-community/L3.1-Athena-l2-8B`: LoRA-enhanced with long-context and creative capability merges.
- `mergekit-community/L3.1-Athena-l-8B`: deeply infused with LoRA-based domain-specific models in logic, psychology, storytelling, and more.

Features:
- 🔹 Instruction-Following Prowess: merged from Tulu-aligned and instruct-tuned models like Apollo-exp and Athena-k for high-quality, context-aware responses.
- 🔹 Immersive Roleplay & Personality: strong roleplay personas and emotional nuance thanks to Athena's diverse RP blends.
- 🔹 Creative & Structured Generation: support for creative writing, long-context novelization, and formal logic modeling from the l2/l3 integrations.
- 🔹 Depth in Dialogue: enhanced ability to carry layered and philosophical conversation, from Claude-style fine-tunes in Apollo-exp.

Use cases:
- Conversational AI & Roleplay Bots
- Formal Reasoning & Chain-of-Thought Tasks
- Creative Writing & Storytelling Tools
- Coding Assistants
- Educational and Research Applications

- ⚠️ Unfiltered Output: this model is uncensored and may generate content outside of alignment norms. Please implement your own moderation layers when deploying in production environments.
- ⚠️ Responsible Use: developers are encouraged to audit outputs and maintain ethical usage policies for downstream applications.
- 📜 License: usage governed by the Meta Llama 3.1 Community License.

We welcome your feedback, benchmarks, and improvements! Please open an issue or PR to contribute, or tag us in your results and projects.
Llama-3.1-8B-SuperNova-EtherealHermes
Language model with support for English. Licensed under Apache 2.0.
Llama-3-Yggdrasil-AstralSpice-8B
Qwen-2.5-Aether-SlerpFusion-7B
Language support for English and Chinese.
Qwen2.5-7B-CelestialHarmony-1M-Q4_K_M-GGUF
ZeroXClem/Qwen2.5-7B-CelestialHarmony-1M-Q4_K_M-GGUF: This model was converted to GGUF format from `ZeroXClem/Qwen2.5-7B-CelestialHarmony-1M` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on macOS and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
Llama-3.1-8B-RainbowLight-EtherealMix
Language model with support for English. Licensed under Apache 2.0.
Llama-3.1-8B-SuperNova-EtherealHermes-Q4_K_M-GGUF
Qwen3-8B-HoneyBadger-EXP-Q4_K_M-GGUF
Mistral-2.5-Prima-Hercules-Fusion-7B-Q4_K_M-GGUF
Mistral-2.5-Prima-Hercules-Fusion-7B-Q8_0-GGUF
Mistral-2.5-Prima-Hercules-Fusion-7B-Q6_K-GGUF
Qwen2.5-7B-DistilPrism-Q4_K_M-GGUF
Llama-3.1-8B-SuperTulu-LexiNova-Q5_0-GGUF
L3SAO-Mix-SuperHermes-NovaPurosani-8B
Qwen3-4B-NexusPrime
Llama-3-Aetheric-Hermes-Lexi-Smaug-8B
Qwen2.5-7B-DistilPrism
Qwen3-4B-Wrist-On-Hermes
Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B
Qwen3-4B-Sky-High-Hermes
Qwen3-4B-MiniMight
Qwen3-1.7B-TardigradePro
Qwen3-4B-ChromaticCoder
Gemma3-4B-Arceus-Servant
L3.1-Pneuma-Allades-8B
Llama-3.1-8B-RainbowLight-EtherealMix-Q4_K_M-GGUF
Qwen2.5-7B-HomerFuse-NerdExp-Q4_K_M-GGUF
ZeroXClem/Qwen2.5-7B-HomerFuse-NerdExp-Q4_K_M-GGUF: This model was converted to GGUF format from `ZeroXClem/Qwen2.5-7B-HomerFuse-NerdExp` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on macOS and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).