Xiaojian9992024
Qwen2.5-Dyanka-7B-Preview
Base models: rombodawg/Rombos-LLM-V2.5-Qwen-7b, suayptalha/Clarus-7B-v0.1.
Qwen2.5-1.5B-Coder-Python
t5-small-GGUF
Llama3.1-8B-ExtraMix
Phi-4-mini-UNOFFICAL-Q6_K-GGUF
Qwen2.5-THREADRIPPER-Small-Q8_0-GGUF
Llama3.2-1B-THREADRIPPER-v0.2
Base models: prithivMLmods/Llama-Express.1-Tiny, Xiaojian9992024/Llama3.2-1B-THREADRIPPER.
Qwen2.5-THREADRIPPER-Small
This model supports the following languages: Chinese and English.
Qwen2.5-Ultra-1.5B-25.02-Exp
Base models: SakanaAI/TinySwallow-1.5B-Instruct, rubenroy/Zurich-1.5B-GCv2-5m.
Qwen2.5-THREADRIPPER-Medium
Qwen2.5-7B-MS-Destroyer
Base models: Qwen/Qwen2.5-7B-Instruct, Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview.
Phi-4-mini-UNOFFICAL-Q8_0-GGUF
Llama3.2-1B-THREADRIPPER
Base models: Trelis/Llama-3.2-1B-Instruct-MATH-synthetic, prithivMLmods/Bellatrix-Tiny-1B-R1.
Qwen2.5-1.5B-THREADRIPPER-v0.1
This is a merge of pre-trained language models created using mergekit. It was merged with the della_linear merge method, using Qwen/Qwen2.5-1.5B-Instruct as the base. The following models were included in the merge: Qwen/Qwen2.5-Math-1.5B-Instruct, prithivMLmods/Bellatrix-Tiny-1.5B-R1, Qwen/Qwen2.5-Coder-1.5B-Instruct, and justinj92/Qwen2.5-1.5B-Thinking.
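A hedged sketch of what the mergekit configuration for this entry might look like, assuming the della_linear method and base model named above; the weight, density, and dtype values are illustrative placeholders, not the actual configuration:

```yaml
merge_method: della_linear
base_model: Qwen/Qwen2.5-1.5B-Instruct
models:
  - model: Qwen/Qwen2.5-Math-1.5B-Instruct
    parameters:
      weight: 0.25   # placeholder weights; the real config is not shown here
      density: 0.5
  - model: prithivMLmods/Bellatrix-Tiny-1.5B-R1
    parameters:
      weight: 0.25
      density: 0.5
  - model: Qwen/Qwen2.5-Coder-1.5B-Instruct
    parameters:
      weight: 0.25
      density: 0.5
  - model: justinj92/Qwen2.5-1.5B-Thinking
    parameters:
      weight: 0.25
      density: 0.5
dtype: bfloat16
```

Running `mergekit-yaml config.yml ./output-dir` on such a file would produce the merged checkpoint.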
Qwen2.5-Ultra-1.5B-25.02-Exp-v0.2
This is a merge of pre-trained language models created using mergekit. It was merged with the TIES merge method, using Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp as the base. The following models were included in the merge: cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B and UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT.
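A hedged sketch of a TIES-style mergekit configuration matching this entry; the weight, density, normalize, and dtype values are illustrative assumptions, not the configuration actually used:

```yaml
merge_method: ties
base_model: Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp
models:
  - model: cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B
    parameters:
      weight: 0.5    # placeholder values; the real config is not shown here
      density: 0.5
  - model: UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT
    parameters:
      weight: 0.5
      density: 0.5
parameters:
  normalize: true
dtype: bfloat16
```

TIES trims low-magnitude deltas and resolves sign conflicts between the donor models before merging, which is why each donor carries a density parameter here.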
Qwen2.5-Dyanka-7B-Preview-v0.2
Base models: prithivMLmods/QwQ-MathOct-7B, pe-nlp/R1-Qwen2.5-7B-Instruct.
Reflection-L3.2-JametMiniMix-3B
Base models: Hastagaras/L3.2-JametMini-3B-MK.III, Orion-zhen/Reflection-Llama-3.2-3B-Instruct.
Qwen2.5-THREADRIPPER-Small-AnniversaryEdition
Base models: open-thoughts/OpenThinker-7B, Xiaojian9992024/Qwen2.5-THREADRIPPER-Small.
Phi-4-mini-UNOFFICAL
Base model: microsoft/phi-4. Library: transformers.
Wenda-12B-Preview-v1
SuperQwen-2.5-1.5B-Q8_0-GGUF
Xiaojian9992024/SuperQwen-2.5-1.5B-Q8_0-GGUF. This model was converted to GGUF format from `mergekit-community/SuperQwen-2.5-1.5B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
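The install-and-build steps above can be sketched as a shell session. The repo name matches this entry, but the lowercase GGUF filename and the build flags are assumptions about how GGUF-my-repo named the file and about your hardware:

```shell
# Option 1: prebuilt llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Option 2: build from source with CURL support so the CLI can
# fetch models directly from the Hugging Face Hub
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make        # add LLAMA_CUDA=1 for Nvidia GPUs on Linux

# Run the quantized checkpoint (downloads the GGUF file on first use);
# the --hf-file name below is a guess at GGUF-my-repo's naming scheme
./llama-cli \
  --hf-repo Xiaojian9992024/SuperQwen-2.5-1.5B-Q8_0-GGUF \
  --hf-file superqwen-2.5-1.5b-q8_0.gguf \
  -p "Hello,"
```

The same steps apply to the other GGUF conversions in this list; only the `--hf-repo` and `--hf-file` values change.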
SuperQwen-2.5-1.5B-Q6_K-GGUF
Xiaojian9992024/SuperQwen-2.5-1.5B-Q6_K-GGUF. This model was converted to GGUF format from `mergekit-community/SuperQwen-2.5-1.5B` using llama.cpp via ggml.ai's GGUF-my-repo space; see the Q8_0 entry above for usage instructions.
SuperQwen-2.5-1.5B-Q4_K_M-GGUF
SuperQwen-2.5-1.5B-Q2_K-GGUF
Phi-4-mini-UNOFFICAL-Q5_K_M-GGUF
Phi-4-mini-UNOFFICAL-Q5_0-GGUF
mergekit-dare_ties-ajgjgea-Q8_0-GGUF
Llama3.1-16B-Upscaled-Q6_K-GGUF
Llama3.2-1B-THREADRIPPER-Q8_0-GGUF
Qwen2.5-1.5B-THREADRIPPER-v0.1-Q6_K-GGUF
Xiaojian9992024/Qwen2.5-1.5B-THREADRIPPER-v0.1-Q6_K-GGUF. This model was converted to GGUF format from `Xiaojian9992024/Qwen2.5-1.5B-THREADRIPPER-v0.1` using llama.cpp via ggml.ai's GGUF-my-repo space; see the SuperQwen Q8_0 entry above for usage instructions.
Qwen2.5-1.5B-THREADRIPPER-v0.1-Q8_0-GGUF
Xiaojian9992024/Qwen2.5-1.5B-THREADRIPPER-v0.1-Q8_0-GGUF. This model was converted to GGUF format from `Xiaojian9992024/Qwen2.5-1.5B-THREADRIPPER-v0.1` using llama.cpp via ggml.ai's GGUF-my-repo space; see the SuperQwen Q8_0 entry above for usage instructions.
Tau-78B-Preview
Singularity-Qwen2.5-1.5B
IFeelSoSprunki-8B-Llama3.1
Qwen2.5-THREADRIPPER-Medium-Censored
Base models: unsloth/Qwen2.5-14B-Instruct, rombodawg/Rombos-LLM-V2.6-Qwen-14b.