Liontix

19 models

Qwen3-8B-Claude-Sonnet-4-Reasoning-Distill-GGUF

This model was trained on a Claude Sonnet 4 (non-reasoning) dataset and a Claude 3.7 Sonnet (reasoning) dataset. It is a reasoning model.

7,505 downloads • 11 likes

Qwen3-8B-Gemini-2.5-Pro-Distill

7,428 downloads • 3 likes

Qwen3-8B-Gemini-2.5-Pro-Distill-GGUF

A newer version distilled from Gemini 3 Pro Preview is available here: TeichAI/Qwen3-8B-Gemini-3-Pro-Preview-Distill-GGUF. This model was trained on a Gemini 2.5 Flash (non-reasoning) dataset and a Gemini 2.5 Pro (reasoning) dataset. It is a reasoning model.

2,892 downloads • 16 likes

ERNIE-4.5-21B-A3B-Thinking-Gemini-2.5-Pro-Distill-GGUF

Disclaimer: This model is for testing purposes only. Raw inference with llama.cpp works, but using it with Ollama currently doesn't. This model was trained on a Gemini 2.5 Pro (reasoning) dataset. It is a reasoning model. It has 21 billion parameters in total and 3 billion activated parameters.

750 downloads • 1 like
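Since the card above notes that raw llama.cpp inference works, a minimal sketch using llama-cpp-python (Python bindings over llama.cpp) may help; the quant file name, context size, and prompt are assumptions for illustration, not details from the card.

```python
# Minimal llama.cpp-based inference sketch via llama-cpp-python
# (pip install llama-cpp-python). The .gguf file name below is a
# hypothetical quant; substitute whichever file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="ERNIE-4.5-21B-A3B-Thinking-Gemini-2.5-Pro-Distill.Q4_K_M.gguf",
    n_ctx=8192,  # reasoning traces run long, so leave headroom in the context
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain mixture-of-experts models."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

Note that while per-token compute tracks the roughly 3 billion activated parameters, memory still has to hold all 21 billion.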

Qwen3-8B-Sonnet-4-GPT-5-Distill-GGUF

This is a non-reasoning model fine-tuned on Claude Sonnet 4 and GPT-5 Chat.

619 downloads • 0 likes

Qwen3-4B-Thinking-2507-Gemini-2.5-Pro-Distill-GGUF

This model was trained on a Gemini 2.5 Pro (reasoning) dataset. It is a reasoning model. Since the base model for this fine-tune is the Qwen3-4B-Thinking-2507 variant, you will experience longer thinking phases. You may want to reserve this model for more complex conversations or tasks such as coding, math, or logic questions. You can request distilled models or datasets in the community tab.

517 downloads • 6 likes
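Qwen3 thinking variants conventionally wrap the reasoning trace in `<think>` tags ahead of the final answer; the sketch below separates the two, assuming that tag convention carries over to this distill.

```python
# Split a Qwen3-Thinking-style response into its reasoning trace and final
# answer. The <think>...</think> convention is assumed from the base model.
import re

def split_thinking(text: str) -> tuple[str, str]:
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()            # no trace found; treat it all as the answer
    thinking = match.group(1).strip()
    answer = text[match.end():].strip()    # everything after the closing tag
    return thinking, answer

trace, answer = split_thinking("<think>2 + 2 = 4.</think>The answer is 4.")
print(answer)  # -> The answer is 4.
```

This is handy if an application should surface only the final answer while still logging the (longer) thinking trace.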

Qwen3-8B-GPT-5-Reasoning-Distill-GGUF

This model was trained on a GPT-5 (reasoning) dataset. It is a reasoning model.

234 downloads • 2 likes

Qwen3-8B-Deepseek-V3.1-Distill-GGUF

210 downloads • 2 likes

Qwen3-4B-Thinking-2507-GLM-4.6-Distill-GGUF

185 downloads • 1 like

Qwen3-4B-Claude-Sonnet-4-Reasoning-Distill-GGUF

This model was trained on a Claude Sonnet 4 (non-reasoning) dataset and a Claude 3.7 Sonnet (reasoning) dataset. It is a reasoning model.

147 downloads • 1 like

Qwen3-8B-Claude-Sonnet-4-Reasoning-Distill-Safetensor

License: MIT • 143 downloads • 1 like

Qwen3 4B Claude Sonnet 4 Reasoning Distill Safetensor

This model was trained on a Claude Sonnet 4 (non-reasoning) dataset and a Claude 3.7 Sonnet (reasoning) dataset.

- 🧬 Datasets:
  - `Liontix/claude-sonnet-4-100x`
  - `reedmayhew/claude-3.7-sonnet-reasoning`
- 🏗 Base Model:
  - `unsloth/Qwen3-4B-unsloth-bnb-4bit`

If you want to fine-tune this model:
- Start from: `Liontix/Qwen3-4B-Claude-Sonnet-4-Reasoning-Distill-Safetensor`
- Change the dataset as needed in your training script or notebook (see the sketch after this entry)

Prompt format uses Claude-style ` ` / ` ` markers with role tags.

124 downloads • 4 likes
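Since the card invites fine-tuning from this checkpoint, a hedged sketch using Unsloth and TRL follows, matching the 4-bit Unsloth base listed above; the LoRA rank, trainer settings, and the assumption that the dataset provides a `text` column are illustrative placeholders, not recommendations from the model author.

```python
# Fine-tuning sketch with Unsloth + TRL (pip install unsloth trl datasets).
# Hyperparameters below are illustrative placeholders.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Liontix/Qwen3-4B-Claude-Sonnet-4-Reasoning-Distill-Safetensor",
    max_seq_length=4096,
    load_in_4bit=True,  # matches the unsloth bnb-4bit base listed above
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Swap in your own dataset here; this assumes it provides a "text" column
# of already-formatted conversations.
dataset = load_dataset("Liontix/claude-sonnet-4-100x", split="train")

trainer = SFTTrainer(
    model=model,  # the tokenizer is picked up from the model if not passed
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        output_dir="outputs",
    ),
)
trainer.train()
```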

Qwen3-1.7B-GPT5-nano-distill

113 downloads • 0 likes

Qwen3-4B-Sonnet-4-GPT-5-Distill-GGUF

66 downloads • 0 likes

Qwen3-4B-GPT-5-mini-Distill-GGUF

This model was trained on a GPT-5 mini (reasoning) dataset and a GPT-5 (non-reasoning) dataset.

59 downloads • 0 likes

Qwen3-4B-Advanced-Reasoning-Distill-GGUF

39 downloads • 1 like

Qwen3-4B-Thinking-2507-Gemini-2.5-Pro-Distill

This model was trained on a Gemini 2.5 Pro (reasoning) dataset. It is a reasoning model. Since the base model for this fine-tune is the Qwen3-4B-Thinking-2507 variant, you will experience longer thinking phases. You may want to reserve this model for more complex conversations or tasks such as coding, math, or logic questions. If you want a GGUF version rather than the safetensors files, head over here.

39 downloads • 1 like

Qwen3-0.6B-revised

33 downloads • 0 likes

Qwen3-8B-GPT-5-Reasoning-Distill-Safetensors

7 downloads • 1 like