RDson

29 models

Llama-3-Magenta-Instruct-4x8B-MoE-GGUF

llama
370
1

Llama-3-Peach-Instruct-4x8B-MoE-GGUF

llama
281
2

Seed-OSS-36B-Instruct-GGUF

Created using the fork pwilkin/llama.cpp at commit 8f64302. The main llama.cpp repo now supports these models; the quantization process is unchanged, so the models do not need to be re-made. The IQ quants were made using bartowski1182/calibration_datav3.txt.

license:apache-2.0
146
1

llava-llama-3-8b-v1_1-GGUF

llama
134
3

CoderO1-DeepSeekR1-14B-Preview-GGUF

131
1

CoderO1-DeepSeekR1-Coder-14B-Preview-GGUF

108
0

Dolphin-less-Llama-3-Instruct-8B-GGUF

llama-3
66
0

Orca-Llama-3-8B-Instruct-DPO-GGUF

37
2

Llama-3-14B-Instruct-v1-GGUF

31
2

Phi-3-medium-128k-instruct-GGUF

license:mit
31
2

Phi-3-mini-code-finetune-128k-instruct-v1-GGUF

25
3

WomboCombo-R1-Coder-14B-Preview

The base models are Qwen 2.5 Coder 14B Instruct and DeepSeek R1 Distill Qwen 14B.

23
3

Qwen3-30B-A3B-By-Expert-Quantization-GGUF

llama.cpp
12
1

CoderO1-DeepSeekR1-Coder-32B-Preview

10
8

CoderO1-DeepSeekR1-Coder-32B-Preview-GGUF

8
0

LIMO-R1-Distill-Qwen-7B

license:mit
8
0

RYS-Gemma-2-27b-it-Q4_K_M-GGUF

llama-cpp
6
0

CoderO1-14B-Preview

This is a merge of pre-trained language models created using mergekit, merged with the SCE merge method using arcee-ai/SuperNova-Medius as the base. The following models were included in the merge: arcee-ai/Virtuoso-Small-v2, deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, Qwen/Qwen2.5-14B-Instruct, and Krystalan/DRT-o1-14B.

4
1

Llama-3-14B-Instruct-v1

llama
4
0

Orca-Llama-3-8B-Instruct-DPO

llama
2
3

CoderO1-DeepSeekR1-Coder-14B-Preview

This is a merge of pre-trained language models created using mergekit, based on the work of FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview. GGUF files are available at RDson/CoderO1-DeepSeekR1-Coder-14B-Preview-GGUF. The model was merged with the SCE merge method using Qwen/Qwen2.5-14B as the base; the merge includes deepseek-ai/DeepSeek-R1-Distill-Qwen-14B and Qwen/Qwen2.5-Coder-14B-Instruct.
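The card's YAML configuration is not reproduced in this listing. As an illustration only, a mergekit config for an SCE merge of these two models onto Qwen/Qwen2.5-14B might look like the sketch below; the `select_topk` value and `dtype` are assumptions, not the author's actual settings:

```yaml
# Illustrative mergekit SCE config (hypothetical values,
# not the configuration actually used for this model).
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  - model: Qwen/Qwen2.5-Coder-14B-Instruct
merge_method: sce
base_model: Qwen/Qwen2.5-14B
parameters:
  select_topk: 1.0  # assumed fraction of elements to keep
dtype: bfloat16     # assumed precision
```

With mergekit installed, a config like this would be run with `mergekit-yaml config.yaml ./output-dir`.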

2
3

Phi-3-mini-code-finetune-128k-instruct-v1

2
1

Llama-3-Teal-Instruct-2x8B-MoE

llama
2
0

CoderO1-14B-Preview-v2

2
0

Dolphin-less-Llama-3-Instruct-8B

llama
1
1

CoderO1-DeepSeekR1-14B-Preview

1
1

Llama-3-5B-Experimental

llama
1
0

WomboCombo-R1-14B-Preview

This is a merge of pre-trained language models created using mergekit, merged with the SCE merge method using Qwen/Qwen2.5-14B-Instruct as the base. The following models were included in the merge: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, arcee-ai/Virtuoso-Small, Krystalan/DRT-o1-14B, and qingy2024/Fusion4-14B-Instruct.

0
5

Llama-3-Peach-Instruct-4x8B-MoE

llama
0
1