RDson
Llama-3-Magenta-Instruct-4x8B-MoE-GGUF
Llama-3-Peach-Instruct-4x8B-MoE-GGUF
Seed-OSS-36B-Instruct-GGUF
Created using the fork pwilkin/llama.cpp, commit 8f64302. The main repo now supports these models, and the quantization process is unchanged, so the models do not need to be re-made. The IQ models are made using bartowski1182/calibrationdatav3.txt.
llava-llama-3-8b-v1_1-GGUF
CoderO1-DeepSeekR1-14B-Preview-GGUF
CoderO1-DeepSeekR1-Coder-14B-Preview-GGUF
Dolphin-less-Llama-3-Instruct-8B-GGUF
Orca-Llama-3-8B-Instruct-DPO-GGUF
Llama-3-14B-Instruct-v1-GGUF
Phi-3-medium-128k-instruct-GGUF
Phi-3-mini-code-finetune-128k-instruct-v1-GGUF
WomboCombo-R1-Coder-14B-Preview
Base models include Qwen 2.5 Coder 14B Instruct and DeepSeek R1 Distill Qwen 14B.
Qwen3-30B-A3B-By-Expert-Quantization-GGUF
CoderO1-DeepSeekR1-Coder-32B-Preview
CoderO1-DeepSeekR1-Coder-32B-Preview-GGUF
LIMO-R1-Distill-Qwen-7B
RYS-Gemma-2-27b-it-Q4_K_M-GGUF
CoderO1-14B-Preview
This is a merge of pre-trained language models created using mergekit. This model was merged using the SCE merge method with arcee-ai/SuperNova-Medius as the base. The following models were included in the merge: arcee-ai/Virtuoso-Small-v2, deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, Qwen/Qwen2.5-14B-Instruct, and Krystalan/DRT-o1-14B. The following YAML configuration was used to produce this model:
Llama-3-14B-Instruct-v1
Orca-Llama-3-8B-Instruct-DPO
CoderO1-DeepSeekR1-Coder-14B-Preview
This is a merge of pre-trained language models created using mergekit. GGUF files: RDson/CoderO1-DeepSeekR1-Coder-14B-Preview-GGUF. It is based on the work of FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview. This model was merged using the SCE merge method with Qwen/Qwen2.5-14B as the base. The following models were included in the merge: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B and Qwen/Qwen2.5-Coder-14B-Instruct. The following YAML configuration was used to produce this model:
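The actual configuration is not reproduced on this page. As an illustration only, a minimal mergekit SCE config for this combination of models might look like the sketch below; the `select_topk` value and `dtype` are assumptions, not the settings actually used.

```yaml
# Hypothetical sketch - NOT the actual configuration used for this model.
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  - model: Qwen/Qwen2.5-Coder-14B-Instruct
merge_method: sce
base_model: Qwen/Qwen2.5-14B
parameters:
  select_topk: 1.0  # assumed value; SCE keeps the top-k highest-variance elements per tensor
dtype: bfloat16     # assumed output precision
```

A config in this shape would be run with mergekit's CLI (e.g. `mergekit-yaml config.yml ./out-dir`) to produce the merged checkpoint.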
Phi-3-mini-code-finetune-128k-instruct-v1
Llama-3-Teal-Instruct-2x8B-MoE
CoderO1-14B-Preview-v2
Dolphin-less-Llama-3-Instruct-8B
CoderO1-DeepSeekR1-14B-Preview
Llama-3-5B-Experimental
WomboCombo-R1-14B-Preview
This is a merge of pre-trained language models created using mergekit. This model was merged using the SCE merge method with Qwen/Qwen2.5-14B-Instruct as the base. The following models were included in the merge: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, arcee-ai/Virtuoso-Small, Krystalan/DRT-o1-14B, and qingy2024/Fusion4-14B-Instruct. The following YAML configuration was used to produce this model: