TeichAI
gemma-4-31B-it-Claude-Opus-Distill-GGUF
GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF
Qwen3-14B-Claude-Sonnet-4.5-Reasoning-Distill-GGUF
This model was trained on a Claude Sonnet 4.5 (reasoning) dataset generated with high reasoning effort.

- 🤖 Related Models:

| Model | Effective parameters | Active parameters |
| ------------- | ------------- | ------------- |
| `TeichAI/Qwen3-30B-A3B-Thinking-2507-Claude-4.5-Sonnet-High-Reasoning-Distill-GGUF` | 30 B | 3 B |
| `TeichAI/gpt-oss-20b-claude-4.5-sonnet-high-reasoning-distill-GGUF` | 20 B | 3 B |
| `TeichAI/Qwen3-8B-Claude-Sonnet-4.5-Reasoning-Distill-GGUF` | 8 B | 8 B |

- 🧬 Datasets:
  - `TeichAI/claude-sonnet-4.5-high-reasoning-250x`
- 🏗 Base Model:
  - `unsloth/Qwen3-14B-unsloth-bnb-4bit`
gemma-4-26B-A4B-it-Claude-Opus-Distill-GGUF
Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill-GGUF
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill-GGUF
Qwen3-30B-A3B-Thinking-2507-Gemini-2.5-Flash-Distill-GGUF
gpt-oss-20b-claude-4.5-sonnet-high-reasoning-distill-GGUF
For the most reliable performance, use the following sampling parameters:

- `temperature`: 0.1-0.2
- `top_k`: 100
- `min_p`: 0.00
- `top_p`: 1.00
- `repeat_penalty`: 1.0 (off)

- Developed by: armand0e
- License: apache-2.0
- Finetuned from model: unsloth/gpt-oss-20b-unsloth-bnb-4bit

This gpt-oss model was trained 2x faster with Unsloth and Hugging Face's TRL library.
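The parameters above map directly onto llama.cpp's sampler flags. A hypothetical invocation (the GGUF filename and prompt are placeholders, not part of the release):

```shell
# Sketch of a llama.cpp run applying the recommended samplers;
# substitute the actual GGUF file you downloaded from this repo.
./llama-cli \
  -m gpt-oss-20b-claude-4.5-sonnet-high-reasoning-distill.Q4_K_M.gguf \
  --temp 0.2 \
  --top-k 100 \
  --min-p 0.0 \
  --top-p 1.0 \
  --repeat-penalty 1.0 \
  -p "Hello"
```

Note that `--repeat-penalty 1.0` is the neutral value, i.e. the penalty is effectively disabled, matching the recommendation above.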
gemma-4-31B-it-Claude-Opus-Distill
gpt-oss-20b-glm-4.6-distill-GGUF
This model is still in development; we are investigating bugs/flaws in its tool calling.
Qwen3-8B-Gemini-3-Pro-Preview-Distill-GGUF
Qwen3-14B-GPT-5.2-High-Reasoning-Distill-GGUF
Qwen3-8B-Claude-Sonnet-4.5-Reasoning-Distill-GGUF
This model was trained on a Claude Sonnet 4.5 (reasoning) dataset generated with high reasoning effort.

- 🤖 Related Models:

| Model | Effective parameters | Active parameters |
| ------------- | ------------- | ------------- |
| `TeichAI/Qwen3-30B-A3B-Thinking-2507-Claude-4.5-Sonnet-High-Reasoning-Distill-GGUF` | 30 B | 3 B |
| `armand0e/gpt-oss-20b-claude-4.5-sonnet-high-reasoning-distill-GGUF` | 20 B | 3 B |
| `TeichAI/Qwen3-14B-Claude-Sonnet-4.5-Reasoning-Distill-GGUF` | 14 B | 14 B |

- 🧬 Datasets:
  - `TeichAI/claude-sonnet-4.5-high-reasoning-250x`
- 🏗 Base Model:
  - `unsloth/Qwen3-8B-unsloth-bnb-4bit`
Qwen3.5-4B-Claude-Opus-Reasoning-Distill-GGUF
Qwen3.5-27B-Claude-Opus-4.6-Distill-GGUF
gpt-oss-20b-gpt-5-codex-distill-GGUF
For the most reliable performance, use the following sampling parameters:

- `temperature`: 1
- `top_k`: 40
- `min_p`: 0.00
- `top_p`: 1.00
- `repeat_penalty`: 1.0 (off)

- Developed by: TeichAI
- License: apache-2.0
- Finetuned from model: unsloth/gpt-oss-20b-unsloth-bnb-4bit

This gpt-oss model was trained 2x faster with Unsloth and Hugging Face's TRL library.
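To make the recommended values concrete, here is a minimal pure-Python sketch of what this sampler chain does to a logit vector (simplified: real runtimes such as llama.cpp operate on token tensors and let you reorder the samplers; the function name and ordering here are illustrative assumptions):

```python
import math

def filter_logits(logits, temperature=1.0, top_k=40, min_p=0.0, top_p=1.0):
    """Illustrative sampler chain: top-k -> min-p -> top-p, then temperature.

    Returns a renormalised probability distribution over the surviving
    token indices. With min_p=0.0 and top_p=1.0 (the recommended values),
    only the top-k filter actually removes candidates.
    """
    # top-k: keep only the k highest-logit tokens
    kept = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    # softmax over the kept logits
    mx = max(logits[i] for i in kept)
    exps = {i: math.exp(logits[i] - mx) for i in kept}
    z = sum(exps.values())
    probs = {i: e / z for i, e in exps.items()}
    # min-p: drop tokens whose probability is below min_p * max_prob
    # (min_p = 0.0 disables this filter)
    pmax = max(probs.values())
    probs = {i: p for i, p in probs.items() if p >= min_p * pmax}
    # top-p (nucleus): keep the smallest high-probability set whose
    # cumulative mass reaches top_p (top_p = 1.0 keeps everything)
    nucleus, mass = {}, 0.0
    for i, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        nucleus[i] = p
        mass += p
        if mass >= top_p:
            break
    # temperature: p ** (1/T) is equivalent to softmax(logits / T)
    # over the surviving set (T must be > 0 in this sketch)
    scaled = {i: p ** (1.0 / temperature) for i, p in nucleus.items()}
    z = sum(scaled.values())
    return {i: s / z for i, s in scaled.items()}
```

With `temperature=1`, `min_p=0.0`, and `top_p=1.0`, the distribution over the top-40 candidates is left unchanged, which is why the card describes the repeat penalty and the p-based filters as effectively off.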
Nemotron-Orchestrator-8B-DeepSeek-v3.2-Speciale-Distill-GGUF
Qwen3.5-4B-Claude-Opus-Reasoning-Distill
GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill
Qwen3.5-27B-Claude-Opus-4.6-Distill
Qwen3-4B-Thinking-2507-Gemini-2.5-Flash-Distill-GGUF
Qwen3-4B-Thinking-2507-Kimi-K2-Thinking-Distill-GGUF
gemma-4-26B-A4B-it-Claude-Opus-Distill
Qwen3-4B-Thinking-2507-Claude-4.5-Opus-High-Reasoning-Distill-GGUF
gemma-4-26B-A4B-it-Claude-Opus-Distill-v2-GGUF
Qwen3-8B-GPT-5.2-High-Reasoning-Distill-GGUF
Qwen3-4B-Thinking-2507-Gemini-3-Pro-Preview-High-Reasoning-Distill-GGUF
Qwen3-8B-Claude-4.5-Opus-High-Reasoning-Distill-GGUF
Qwen3-4B-Thinking-2507-Claude-4.5-Opus-High-Reasoning-Distill
Qwen3-4B-gpt-5-codex-distill-GGUF
Qwen3-4B-Thinking-2507-GPT-5.1-Codex-Max-Distill-GGUF
Qwen3-4B-Thinking-2507-GPT-5.1-High-Reasoning-Distill-GGUF
Qwen3-4B-Thinking-2507-GPT-5-Codex-Distill-GGUF
Qwen3-14B-Gemini-3-Pro-Preview-High-Reasoning-Distill-GGUF
Qwen3-4B-Thinking-2507-Gemini-2.5-Flash-Lite-Preview-Distill-GGUF
Qwen3-8B-Gemini-2.5-Flash-Distill-GGUF
Qwen3-30B-A3B-Thinking-2507-Claude-4.5-Sonnet-High-Reasoning-Distill-GGUF
Qwen3-4B-Thinking-2507-Gemini-2.5-Flash-Distill
Qwen3.5-4B-Claude-Opus-Reasoning
Qwen3-14B-GPT-5.2-High-Reasoning-Distill
Qwen3-30B-A3B-Thinking-2507-Claude-4.5-Sonnet-High-Reasoning-Distill
Qwen3-4B-Thinking-2507-Gemini-3-Pro-Preview-High-Reasoning-Distill
gpt-oss-20b-claude-4.5-sonnet-high-reasoning-distill
- Developed by: armand0e
- License: apache-2.0
- Finetuned from model: unsloth/gpt-oss-20b-unsloth-bnb-4bit

This gpt-oss model was trained 2x faster with Unsloth and Hugging Face's TRL library.