Xiaojian9992024

38 models

Qwen2.5-Dyanka-7B-Preview

Base model: rombodawg/Rombos-LLM-V2.5-Qwen-7b, suayptalha/Clarus-7B-v0.1.

license:apache-2.0 • 623 downloads • 11 likes

Qwen2.5-1.5B-Coder-Python

license:apache-2.0 • 25 downloads • 0 likes

t5-small-GGUF

license:apache-2.0 • 17 downloads • 2 likes

Llama3.1-8B-ExtraMix

llama • 16 downloads • 1 like

Phi-4-mini-UNOFFICAL-Q6_K-GGUF

llama-cpp • 10 downloads • 0 likes

Qwen2.5-THREADRIPPER-Small-Q8_0-GGUF

llama-cpp • 4 downloads • 2 likes

Llama3.2-1B-THREADRIPPER-v0.2

Base model: prithivMLmods/Llama-Express.1-Tiny, Xiaojian9992024/Llama3.2-1B-THREADRIPPER.

llama • 4 downloads • 0 likes

Qwen2.5-THREADRIPPER-Small

Supported languages: Chinese and English.

3 downloads • 4 likes

Qwen2.5-Ultra-1.5B-25.02-Exp

Base model: SakanaAI/TinySwallow-1.5B-Instruct, rubenroy/Zurich-1.5B-GCv2-5m.

2 downloads • 2 likes

Qwen2.5-THREADRIPPER-Medium

2 downloads • 1 like

Qwen2.5-7B-MS-Destroyer

Base model: Qwen/Qwen2.5-7B-Instruct, Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview.

2 downloads • 1 like

Phi-4-mini-UNOFFICAL-Q8_0-GGUF

llama-cpp • 2 downloads • 0 likes

Llama3.2-1B-THREADRIPPER

Base model: Trelis/Llama-3.2-1B-Instruct-MATH-synthetic, prithivMLmods/Bellatrix-Tiny-1B-R1.

llama • 2 downloads • 0 likes

Qwen2.5-1.5B-THREADRIPPER-v0.1

This is a merge of pre-trained language models created using mergekit. It was merged with the della_linear merge method, using Qwen/Qwen2.5-1.5B-Instruct as the base. The following models were included in the merge: Qwen/Qwen2.5-Math-1.5B-Instruct, prithivMLmods/Bellatrix-Tiny-1.5B-R1, Qwen/Qwen2.5-Coder-1.5B-Instruct, and justinj92/Qwen2.5-1.5B-Thinking. The following YAML configuration was used to produce this model:
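The YAML itself is missing from this listing. As a rough placeholder, a minimal mergekit sketch of a della_linear merge over these models might look like the following; every parameter value (weight, density, dtype) is an illustrative assumption, not the author's actual configuration.

```yaml
# Hypothetical reconstruction: method, base, and model list come from the
# card above; all parameter values are assumed for illustration only.
merge_method: della_linear
base_model: Qwen/Qwen2.5-1.5B-Instruct
models:
  - model: Qwen/Qwen2.5-Math-1.5B-Instruct
    parameters:
      weight: 0.25
      density: 0.5
  - model: prithivMLmods/Bellatrix-Tiny-1.5B-R1
    parameters:
      weight: 0.25
      density: 0.5
  - model: Qwen/Qwen2.5-Coder-1.5B-Instruct
    parameters:
      weight: 0.25
      density: 0.5
  - model: justinj92/Qwen2.5-1.5B-Thinking
    parameters:
      weight: 0.25
      density: 0.5
dtype: bfloat16
```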

2 downloads • 0 likes

Qwen2.5-Ultra-1.5B-25.02-Exp-v0.2

This is a merge of pre-trained language models created using mergekit. It was merged with the TIES merge method, using Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp as the base. The following models were included in the merge: cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B and UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT. The following YAML configuration was used to produce this model:
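Again the YAML is absent here. A minimal TIES-style mergekit sketch under the same caveats; the two donor repo names are carried over from the card above and, like the parameter values, should be treated as assumptions:

```yaml
# Hypothetical reconstruction of a TIES merge; all values are illustrative.
merge_method: ties
base_model: Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp
models:
  - model: cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B
    parameters:
      weight: 0.5
      density: 0.5
  - model: UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT
    parameters:
      weight: 0.5
      density: 0.5
dtype: bfloat16
```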

2 downloads • 0 likes

Qwen2.5-Dyanka-7B-Preview-v0.2

Base model: prithivMLmods/QwQ-MathOct-7B, pe-nlp/R1-Qwen2.5-7B-Instruct.

2 downloads • 0 likes

Reflection-L3.2-JametMiniMix-3B

Base model: Hastagaras/L3.2-JametMini-3B-MK.III, Orion-zhen/Reflection-Llama-3.2-3B-Instruct.

llama • 1 download • 2 likes

Qwen2.5-THREADRIPPER-Small-AnniversaryEdition

Base model: open-thoughts/OpenThinker-7B, Xiaojian9992024/Qwen2.5-THREADRIPPER-Small.

1 download • 2 likes

Phi-4-mini-UNOFFICAL

Base model: microsoft/phi-4. Library: transformers.

1 download • 1 like

Wenda-12B-Preview-v1

1 download • 1 like

SuperQwen-2.5-1.5B-Q8_0-GGUF

Xiaojian9992024/SuperQwen-2.5-1.5B-Q8_0-GGUF. This model was converted to GGUF format from `mergekit-community/SuperQwen-2.5-1.5B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. To build from source instead, clone llama.cpp from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
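A minimal sketch of those steps, assuming a Homebrew install; the exact `.gguf` file name inside the repo follows GGUF-my-repo's usual lowercase convention and is an assumption here:

```sh
# Install the prebuilt llama.cpp CLI via Homebrew (macOS and Linux)
brew install llama.cpp

# Run the model straight from the Hugging Face repo; the .gguf file name
# is assumed from GGUF-my-repo's naming convention
llama-cli --hf-repo Xiaojian9992024/SuperQwen-2.5-1.5B-Q8_0-GGUF \
  --hf-file superqwen-2.5-1.5b-q8_0.gguf \
  -p "Write a haiku about merged models"

# Or build from source with CURL support so --hf-repo can download models
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make
```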

llama-cpp • 1 download • 0 likes

SuperQwen-2.5-1.5B-Q6_K-GGUF

Xiaojian9992024/SuperQwen-2.5-1.5B-Q6_K-GGUF. Converted to GGUF format from `mergekit-community/SuperQwen-2.5-1.5B` using llama.cpp via ggml.ai's GGUF-my-repo space; the llama.cpp usage steps are the same as for the Q8_0 variant above. Refer to the original model card for more details on the model.

llama-cpp • 1 download • 0 likes

SuperQwen-2.5-1.5B-Q4_K_M-GGUF

llama-cpp • 1 download • 0 likes

SuperQwen-2.5-1.5B-Q2_K-GGUF

llama-cpp • 1 download • 0 likes

Phi-4-mini-UNOFFICAL-Q5_K_M-GGUF

llama-cpp • 1 download • 0 likes

Phi-4-mini-UNOFFICAL-Q5_0-GGUF

llama-cpp • 1 download • 0 likes

mergekit-dare_ties-ajgjgea-Q8_0-GGUF

llama-cpp • 1 download • 0 likes

Llama3.1-16B-Upscaled-Q6_K-GGUF

llama-cpp • 1 download • 0 likes

Llama3.2-1B-THREADRIPPER-Q8_0-GGUF

llama-cpp • 1 download • 0 likes

Qwen2.5-1.5B-THREADRIPPER-v0.1-Q6_K-GGUF

Xiaojian9992024/Qwen2.5-1.5B-THREADRIPPER-v0.1-Q6_K-GGUF. Converted to GGUF format from `Xiaojian9992024/Qwen2.5-1.5B-THREADRIPPER-v0.1` using llama.cpp via ggml.ai's GGUF-my-repo space; the llama.cpp usage steps are the same as for the SuperQwen GGUFs above. Refer to the original model card for more details on the model.

llama-cpp • 1 download • 0 likes

Qwen2.5-1.5B-THREADRIPPER-v0.1-Q8_0-GGUF

Xiaojian9992024/Qwen2.5-1.5B-THREADRIPPER-v0.1-Q8_0-GGUF. Converted to GGUF format from `Xiaojian9992024/Qwen2.5-1.5B-THREADRIPPER-v0.1` using llama.cpp via ggml.ai's GGUF-my-repo space; the llama.cpp usage steps are the same as for the SuperQwen GGUFs above. Refer to the original model card for more details on the model.

llama-cpp • 1 download • 0 likes

Tau-78B-Preview

license:apache-2.0 • 1 download • 0 likes

Singularity-Qwen2.5-1.5B

1 download • 0 likes

IFeelSoSprunki-8B-Llama3.1

llama • 0 downloads • 2 likes

Qwen2.5-THREADRIPPER-Medium-Censored

Base model: unsloth/Qwen2.5-14B-Instruct, rombodawg/Rombos-LLM-V2.6-Qwen-14b.

0 downloads • 1 like

Llama3.1-8B-UltraMedical-TIES-Exp-25.02

llama • 0 downloads • 1 like

Qwen2.5-Ultra-1.5B-25.02-Exp-GGUF

0 downloads • 1 like

SmolMoE-4x360M

0 downloads • 1 like