Lamapi

59 models

next-1b

license:mit
3,296
7

next-4b

Türkiye’s First Vision-Language Model — Efficient, Multimodal, and Reasoning-Focused

Next 4B is a 4-billion-parameter multimodal Vision-Language Model (VLM) based on Gemma 3, fine-tuned to handle both text and images efficiently. It is Türkiye’s first open-source vision-language model, designed for:

- Understanding and generating text and image descriptions.
- Efficient reasoning and context-aware multimodal outputs.
- Turkish support with multilingual capabilities.
- Low-resource deployment using 8-bit quantization on consumer-grade GPUs.

This model is ideal for researchers, developers, and organizations that need a high-performance multimodal AI capable of visual understanding, reasoning, and creative generation. Our Next 1B and Next 4B models lead all comparable tiny models on benchmarks, and our Next 14B model leads state-of-the-art models on some benchmarks.

Example output: "The image shows Mustafa Kemal Atatürk, the founder and first President of the Republic of Turkey."

1. Multimodal Intelligence: Understand and reason over images and text.
2. Efficiency: Run on modest GPUs using 8-bit quantization.
3. Accessibility: Open-source availability for research and applications.
4. Cultural Relevance: Optimized for Turkish language and context while remaining multilingual.

| Feature | Description |
| --- | --- |
| 🔋 Efficient Architecture | Optimized for low VRAM; supports 8-bit quantization for consumer GPUs. |
| 🖼️ Vision-Language Capable | Understands images, captions them, and performs visual reasoning tasks. |
| 🇹🇷 Multilingual & Turkish-Ready | Handles complex Turkish text with high accuracy. |
| 🧠 Advanced Reasoning | Supports logical and analytical reasoning for both text and images. |
| 📊 Consistent & Reliable Outputs | Reproducible responses across multiple runs. |
| 🌍 Open Source | Transparent, community-driven, and research-friendly. |

| Specification | Details |
| --- | --- |
| Base Model | Gemma 3 |
| Parameter Count | 4 billion |
| Architecture | Transformer, causal LLM + vision encoder |
| Fine-Tuning Method | Instruction & multimodal fine-tuning (SFT) on Turkish and multilingual datasets |
| Optimizations | Q8_0, F16, F32 quantizations for low- and high-VRAM setups |
| Modalities | Text & image |
| Use Cases | Image captioning, multimodal QA, text generation, reasoning, creative storytelling |

This project is licensed under the MIT License — free to use, modify, and distribute. Attribution is appreciated.

📧 Email: [email protected]
🤗 HuggingFace: Lamapi

> Next 4B — Türkiye’s first vision-language AI, combining multimodal understanding, reasoning, and efficiency.
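The 8-bit deployment path described above can be illustrated with a short, hedged transformers sketch. This is not taken from the model card: the auto-class names, the chat-template message format, and the image URL are assumptions to check against the repo's files.

```python
# Hedged sketch (not from the model card): load Lamapi/next-4b in 8-bit via
# transformers + bitsandbytes. The auto classes and chat-template message
# format are assumptions; check the repo's config for the exact classes.
from transformers import AutoProcessor, AutoModelForImageTextToText, BitsAndBytesConfig

model_id = "Lamapi/next-4b"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # consumer-GPU friendly
    device_map="auto",
)

# One image plus a Turkish prompt ("Describe this image briefly.").
messages = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder URL
        {"type": "text", "text": "Bu görseli kısaca açıkla."},
    ]}
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```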

license:mit
1,329
4

next-12b

Türkiye's Advanced Vision-Language Model — High Performance, Multimodal, and Enterprise-Ready

Next 12B is a 12-billion-parameter multimodal Vision-Language Model (VLM) based on Gemma 3, fine-tuned to deliver exceptional performance in both text and image understanding. This is Türkiye's most advanced open-source vision-language model, designed for:

- Superior understanding and generation of text and image descriptions.
- Advanced reasoning and context-aware multimodal outputs.
- Professional-grade Turkish support with extensive multilingual capabilities.
- Enterprise-ready deployment with optimized quantization options.

This model is ideal for enterprises, researchers, and organizations that need a state-of-the-art multimodal AI capable of complex visual understanding, advanced reasoning, and creative generation. Next 12B sets new standards for medium-sized models across all major benchmarks.

Example output: "The image shows Mustafa Kemal Atatürk, the founder and first President of the Republic of Turkey."

1. Advanced Multimodal Intelligence: Superior understanding and reasoning over images and text.
2. Enterprise-Grade Performance: High accuracy and reliability for production deployments.
3. Efficiency: Optimized for professional GPUs with flexible quantization options.
4. Accessibility: Open-source availability for research and commercial applications.
5. Cultural Excellence: Best-in-class Turkish language support while maintaining multilingual capabilities.

| Feature | Description |
| --- | --- |
| 🔋 Optimized Architecture | Balanced performance and efficiency; supports multiple quantization formats. |
| 🖼️ Advanced Vision-Language | Deep understanding of images with sophisticated visual reasoning capabilities. |
| 🇹🇷 Professional Turkish Support | Industry-leading Turkish language performance with extensive multilingual reach. |
| 🧠 Superior Reasoning | State-of-the-art logical and analytical reasoning for complex tasks. |
| 📊 Production-Ready | Reliable, consistent outputs suitable for enterprise applications. |
| 🌍 Open Source | Transparent, community-driven, and commercially friendly. |

| Specification | Details |
| --- | --- |
| Base Model | Gemma 3 |
| Parameter Count | 12 billion |
| Architecture | Transformer, causal LLM + enhanced vision encoder |
| Fine-Tuning Method | Advanced instruction & multimodal fine-tuning (SFT) on curated Turkish and multilingual datasets |
| Optimizations | Q8_0, Q4_K_M, F16, F32 quantizations for flexible deployment options |
| Modalities | Text & image |
| Use Cases | Advanced image captioning, multimodal QA, text generation, complex reasoning, creative storytelling, enterprise applications |

Benchmarks:
- MMLU Excellence: 91.8% on MMLU, demonstrating comprehensive knowledge across diverse domains.
- Mathematical Prowess: 81.2% on MATH, excelling in complex mathematical reasoning.
- Problem Solving: 94.3% on GSM8K, showcasing superior word-problem-solving capabilities.
- Professional Reasoning: 78.4% on MMLU-Pro, handling advanced professional-level questions.

Use cases:
- Enterprise Content Generation: High-quality multilingual content creation.
- Advanced Visual Analysis: Detailed image understanding and description.
- Educational Applications: Complex tutoring and explanation systems.
- Research Assistance: Literature review and data analysis.
- Creative Writing: Story generation and creative content.
- Technical Documentation: Code documentation and technical writing.
- Customer Support: Multilingual customer-service automation.
- Data Extraction: Visual document processing and information extraction.

This project is licensed under the MIT License — free to use, modify, and distribute for commercial and non-commercial purposes. Attribution is appreciated.

📧 Email: [email protected]
🤗 HuggingFace: Lamapi

> Next 12B — Türkiye's most advanced vision-language AI, combining state-of-the-art multimodal understanding, superior reasoning, and enterprise-grade reliability.
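As a hedged illustration of the "Data Extraction" use case above, the high-level `pipeline` API could be used as sketched below. The task name `image-text-to-text` is standard in recent transformers, but whether this repo wires into it, and the placeholder document image, are assumptions.

```python
# Hedged sketch for the "Data Extraction" use case: the standard transformers
# "image-text-to-text" pipeline. Whether Lamapi/next-12b supports this task,
# and the placeholder image URL, are assumptions.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Lamapi/next-12b", device_map="auto")

messages = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/invoice.png"},  # placeholder URL
        {"type": "text", "text": "Extract the invoice number and the total amount."},
    ]}
]
out = pipe(text=messages, max_new_tokens=64)
# The pipeline returns the full chat; the last message is the model's answer.
print(out[0]["generated_text"][-1]["content"])
```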

license:mit
875
9

next-270m

license:mit
589
2

next-codex

license:mit
85
1

next-8b

license:mit
74
2

next-14b

Türkiye’s First Reasoning-Capable AI Model — Logical, Analytical, and Enterprise-Ready

Next 14B is a 14-billion-parameter large language model (LLM) built on the Qwen 3 architecture, trained for superior reasoning and analytical capabilities. It is Türkiye’s first reasoning-capable AI model, designed to think, infer, and make decisions — not just respond. Unlike vision-based models, Next 14B focuses on pure cognitive performance, mastering complex problem solving, abstract logic, and human-level understanding in both Turkish and English.

- 🇹🇷 Türkiye’s first reasoning-capable AI model
- 🧠 Advanced logical, analytical, and inferential reasoning
- 🌍 High multilingual understanding (Turkish, English, and beyond)
- 🏢 Enterprise-grade stability and consistency
- 💬 Instruction-tuned for dialogue, problem solving, and analysis

| Feature | Description |
| --- | --- |
| 🧠 Advanced Reasoning | Excels in abstract logic, critical thinking, and long-form analysis. |
| 🇹🇷 Cultural & Multilingual Intelligence | Deep Turkish understanding, alongside fluent English and 30+ languages. |
| ⚙️ Optimized for Efficiency | Available in quantized formats (Q8_0, Q4_K_M, FP16). |
| 🧮 Mathematical & Analytical Skill | Performs exceptionally in structured problem solving and scientific reasoning. |
| 🧩 Non-Vision Architecture | Focused purely on cognitive and linguistic understanding. |
| 🏢 Enterprise Reliability | Consistent, interpretable outputs for professional use cases. |

| Specification | Details |
| --- | --- |
| Base Model | Qwen 3 |
| Parameters | 14 billion |
| Architecture | Transformer (causal LLM) |
| Modalities | Text-only |
| Fine-Tuning | Instruction-tuned and reinforced with cognitive reasoning datasets |
| Optimizations | Quantization-ready, FP16 support |
| Primary Focus | Reasoning, logic, decision-making, and language understanding |

Use cases:
- Analytical Chatbots — business and enterprise logic.
- Research Assistance — scientific, legal, or data-heavy reasoning.
- Education & Tutoring — step-by-step concept explanations.
- Creative Writing — coherent story logic and worldbuilding.
- Code & Algorithm Design — reasoning-based code generation.
- Decision Support Systems — scenario evaluation and inference.

Highlights:
- Superior Reasoning: Outperforms previous-generation 12B models in logic-based benchmarks.
- Robust Mathematical Understanding: Handles symbolic reasoning and complex equations.
- Consistent Long-Context Memory: Tracks context across multi-turn conversations.
- Professional Reliability: Built for critical enterprise and research applications.

Licensed under the MIT License — free for commercial and non-commercial use. Attribution is appreciated.

📧 Email: [email protected]
🤗 HuggingFace: Lamapi

> Next 14B — Türkiye’s first reasoning-capable large language model, combining logical depth, analytical intelligence, and enterprise reliability.
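Since Next 14B is text-only, a plain causal-LM sketch is enough to show typical usage. This is a hedged example, not from the card: the dtype, device placement, and sample Turkish prompt are assumptions.

```python
# Hedged sketch (not from the card): Next 14B is text-only, so it loads like a
# standard causal LM. Dtype, device placement, and the prompt are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lamapi/next-14b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# A small reasoning prompt in Turkish ("17 sheep; all but 9 die. How many remain?").
messages = [{"role": "user", "content": "Bir çiftlikte 17 koyun var; 9'u dışında hepsi ölüyor. Kaç koyun kalır?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```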

license:mit
73
3

next-32b

license:mit
70
1

next-ocr

license:apache-2.0
64
1

next-1b-Q4_K_M-GGUF

Lamapi/next-1b-Q4_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
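Beyond the CLI steps above, the same checkpoint can be loaded from Python. A hedged sketch with llama-cpp-python follows; `Llama.from_pretrained` is a real llama-cpp-python helper (it needs `huggingface-hub` installed), but the filename glob is an assumption, so list the repo files for the exact `.gguf` name.

```python
# Hedged alternative to the CLI steps above: llama-cpp-python can fetch a GGUF
# file straight from the Hub (pip install llama-cpp-python huggingface-hub).
# The filename glob is an assumption; list the repo files for the exact name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Lamapi/next-1b-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # glob matching the quantized file in the repo
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Merhaba! Kendini tanıtır mısın?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same pattern applies to every GGUF repo in this list by swapping `repo_id` and `filename`.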

llama-cpp
62
2

next-12b-Q4_K_M-GGUF

license:mit
61
2

next-4b-Q3_K_M-GGUF

Lamapi/next-4b-Q3_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
57
2

next-1b-Q6_K-GGUF

Lamapi/next-1b-Q6_K-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
47
1

next-12b-Q2_K-GGUF

license:mit
46
2

next-4b-Q4_K_M-GGUF

Lamapi/next-4b-Q4_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
45
2

next-270m-Q4_K_M-GGUF

Lamapi/next-270m-Q4_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
43
3

next-1b-Q3_K_M-GGUF

Lamapi/next-1b-Q3_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
42
2

next-1b-Q4_K_S-GGUF

Lamapi/next-1b-Q4_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
37
2

next-4b-Q5_K_M-GGUF

Lamapi/next-4b-Q5_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
35
2

next-1b-Q2_K-GGUF

Lamapi/next-1b-Q2_K-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
32
2

next-1b-Q5_K_M-GGUF

Lamapi/next-1b-Q5_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
31
2

next-1b-Q4_0-GGUF

Lamapi/next-1b-Q4_0-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
31
2

next-1b-Q3_K_L-GGUF

Lamapi/next-1b-Q3_K_L-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
31
2

next-270m-Q4_0-GGUF

Lamapi/next-270m-Q4_0-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
30
2

next-1b-Q5_0-GGUF

Lamapi/next-1b-Q5_0-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
30
2

next-270m-Q6_K-GGUF

Lamapi/next-270m-Q6_K-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
30
1

next-270m-Q3_K_M-GGUF

Lamapi/next-270m-Q3_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
29
2

next-4b-Q4_0-GGUF

Lamapi/next-4b-Q4_0-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
28
2

next-4b-Q6_K-GGUF

Lamapi/next-4b-Q6_K-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
27
1

next-270m-Q5_0-GGUF

llama-cpp
26
2

next-270m-Q4_K_S-GGUF

Lamapi/next-270m-Q4_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
24
2

next-4b-Q5_0-GGUF

Lamapi/next-4b-Q5_0-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
24
2

next-1b-Q3_K_S-GGUF

Lamapi/next-1b-Q3_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
21
2

next-270m-Q5_K_S-GGUF

Lamapi/next-270m-Q5_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
19
2

next-270m-Q2_K-GGUF

Lamapi/next-270m-Q2_K-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
18
2

next-4b-Q4_K_S-GGUF

Lamapi/next-4b-Q4_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
10
2

next-270m-Q5_K_M-GGUF

Lamapi/next-270m-Q5_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
9
2

next-1b-Q5_K_S-GGUF

Lamapi/next-1b-Q5_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-1b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
9
2

next-4b-Q5_K_S-GGUF

Lamapi/next-4b-Q5_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
9
2

next-270m-Q3_K_L-GGUF

Lamapi/next-270m-Q3_K_L-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
8
2

next-270m-Q3_K_S-GGUF

Lamapi/next-270m-Q3_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-270m` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
8
2

next-4b-Q3_K_S-GGUF

Lamapi/next-4b-Q3_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
8
2

next-4b-Q2_K-GGUF

Lamapi/next-4b-Q2_K-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
7
2

next-4b-Q3_K_L-GGUF

Lamapi/next-4b-Q3_K_L-GGUF: This model was converted to GGUF format from `Lamapi/next-4b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
7
2

next-32b-GGUF

license:mit
0
1

next-32b-4bit

license:mit
0
1

next-8b-Q4_K_M-GGUF

llama-cpp
0
1

next-8b-IQ3_XXS-GGUF

llama-cpp
0
1

next-8b-IQ4_NL-GGUF

llama-cpp
0
1

next-8b-IQ4_XS-GGUF

llama-cpp
0
1

next-14b-Q4_K_S-GGUF

Lamapi/next-14b-Q4_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-14b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
0
1

next-14b-Q3_K_L-GGUF

Lamapi/next-14b-Q3_K_L-GGUF: This model was converted to GGUF format from `Lamapi/next-14b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
0
1

next-14b-Q3_K_M-GGUF

Lamapi/next-14b-Q3_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-14b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
0
1

next-14b-Q4_0-GGUF

Lamapi/next-14b-Q4_0-GGUF: This model was converted to GGUF format from `Lamapi/next-14b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
0
1

next-14b-Q6_K-GGUF

Lamapi/next-14b-Q6_K-GGUF: This model was converted to GGUF format from `Lamapi/next-14b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
0
1

next-14b-Q5_K_M-GGUF

Lamapi/next-14b-Q5_K_M-GGUF: This model was converted to GGUF format from `Lamapi/next-14b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
0
1

next-14b-Q5_K_S-GGUF

Lamapi/next-14b-Q5_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-14b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
0
1

next-14b-Q2_K-GGUF

Lamapi/next-14b-Q2_K-GGUF: This model was converted to GGUF format from `Lamapi/next-14b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
0
1

next-14b-Q3_K_S-GGUF

Lamapi/next-14b-Q3_K_S-GGUF: This model was converted to GGUF format from `Lamapi/next-14b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: Step 1: Install llama.cpp through brew (works on Mac and Linux); you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
0
1