win10

165 models

MagKr-3.2-24B-thinking · license:apache-2.0 · 158 · 0
GPT-OSS-26B-abliterated-Preview-Q4_K_M-GGUF · llama-cpp · 122 · 0
SVD-Qwen3-Coder-Next-Thinking · 53 · 0
Pixtral-12B-2409-hf-text-only-Q8_0-GGUF · llama-cpp · 50 · 4
NeMoria-21b-Q8_0-GGUF · llama-cpp · 50 · 0
nemolita-21b-Q8_0-GGUF · llama-cpp · 48 · 0
35b-beta-long-Q4_K_M-GGUF · llama-cpp · 45 · 0
Mistral-Nemo-Instruct-2407-Q4_K_M-GGUF · llama-cpp · 43 · 0
RYS-Gemma-2-27b-it-Q6_K-GGUF · llama-cpp · 39 · 0
Phi-3.5-24-10-06-Q8_0-GGUF · llama-cpp · 38 · 0
phi-3.5-Sakura-Yuzu-v1.5-7.64b · 38 · 0

aya-expanse-32b-Q5_K_M-GGUF · llama-cpp · 34 · 1
This model was converted to GGUF format from `CohereForAI/aya-expanse-32b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). A command sketch follows below.
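A minimal sketch of those steps, assuming the Homebrew package and running the checkpoint straight from the Hub; the `--hf-file` name is an assumption based on the repo's naming convention, not confirmed by the card:

```bash
# Install llama.cpp (Homebrew, macOS/Linux)
brew install llama.cpp

# Run the quantized checkpoint directly from the Hub.
# The .gguf file name below is assumed from GGUF-my-repo conventions.
llama-cli --hf-repo win10/aya-expanse-32b-Q5_K_M-GGUF \
  --hf-file aya-expanse-32b-q5_k_m.gguf \
  -p "The meaning to life and the universe is"
```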

DeepSeek-Coder-V2-Lite-Instruct-Q6_K-GGUF · llama-cpp · 34 · 0

aya-expanse-32b-Q4_K_M-GGUF · llama-cpp · 33 · 1
Converted to GGUF format from `CohereForAI/aya-expanse-32b` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

GPT-OSS-26B-abliterated-Preview-Q8_0-GGUF · llama-cpp · 33 · 1
Mistral-Nemo-abliterated-Nemo-Pro-v2 · 33 · 0
DeepSeek-Coder-V2-Lite-Instruct-Q8_0-GGUF · llama-cpp · 31 · 0
Mistral-Nemo-Instruct-2407-20b-Q5_K_M-GGUF · llama-cpp · 30 · 0
phi-3.5-sakura-yuzu-v2-Q8_0-GGUF · llama-cpp · 25 · 0
Mistral-rp-24b-karcher-Q6_K-GGUF · llama-cpp · 24 · 0

GPT-OSS-26B-abliterated-Preview · license:apache-2.0 · 20 · 4
This is an expanded version of unsloth/gpt-oss-20b-BF16, scaled up to 26B parameters and created with abliteration (a technique for suppressing refusal behavior; see the original card for details). Usage: you can load this model in your applications with Hugging Face's `transformers` library (a sketch follows below). Warnings:
- Risk of sensitive or controversial outputs: safety filtering has been significantly reduced, so the model may generate sensitive, controversial, or inappropriate content. Exercise caution and rigorously review generated outputs.
- Not suitable for all audiences: due to limited content filtering, outputs may be inappropriate for public settings, underage users, or applications requiring high security.
- Legal and ethical responsibilities: users must ensure their usage complies with local laws and ethical standards; generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
- Research and experimental use: recommended for research, testing, or controlled environments; avoid direct use in production or public-facing commercial applications.
- Monitoring and review: monitor outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
- No default safety guarantees: unlike standard models, this model has not undergone rigorous safety optimization, and the author bears no responsibility for any consequences arising from its use.
Donation: your support funds continued development and improvement; even a cup of coffee helps. PayPal: Support via PayPal. Ko-fi: Support our work on Ko-fi.
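A minimal `transformers` loading sketch, assuming a standard causal-LM checkpoint with a chat template; the prompt and generation settings are illustrative, not taken from the original card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "win10/GPT-OSS-26B-abliterated-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # spread layers across available devices
)

# Illustrative prompt; review outputs carefully per the warnings above.
messages = [{"role": "user", "content": "Summarize what abliteration does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```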

phi3.5-mini-24-09-30-Q8_0-GGUF · llama-factory · 19 · 0
Qwerky-QwQ-32B-Q5_K_M-GGUF · llama-cpp · 19 · 0
GPT-OSS-30B-Preview · license:apache-2.0 · 18 · 2
nemolita-21b · 18 · 1

openhands-Nemotron-32B-karcher-Q4_K_M-GGUF · llama-cpp · 18 · 0
Converted to GGUF format from `mergekit-community/openhands-Nemotron-32B-karcher` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

Phi-3.5-24-10-06 · 17 · 1
phi-3.5-sakura-yuzu-v2.5-Q8_0-GGUF · llama-cpp · 17 · 0
Breeze-7B-FC-v1-0-EvolKit-75K-nopm_claude_writing_fixed-adapter-F32-GGUF · llama-cpp · 17 · 0
taide-meta-it-16b · 16 · 1
phi-3.5-Sakura-Yuzu-Q8_0-GGUF · llama-cpp · 16 · 0
phi3.5-mini-24-09-30 · llama-factory · 15 · 0
Norns-Qwen2.5-7B-Q8_0-GGUF · llama-cpp · 13 · 0
phi-3.5-sakura-yuzu-v3.0-Q8_0-GGUF · llama-cpp · 12 · 0
Blue-Rose-Coder-12.3B-Instruct-Q8_0-GGUF · llama-cpp · 11 · 1

MagiDevs-24B-2506-Vision-Q8_0-GGUF · llama-cpp · 10 · 0
Converted to GGUF format from `win10/MagiDevs-24B-2506-Vision` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

Fused-Yi-Qwen3-3B · license:apache-2.0 · 10 · 0
ArliAI-RPMax-v1.3-merge-8B-Q8_0-GGUF · llama-cpp · 9 · 1
Meta-Llama-3-12B-Instruct-Q6_K-GGUF · llama-cpp · 9 · 0
Qwen2.5-5B-Instruct · 9 · 0
phi-3.5-Sakura-Yuzu-v1.5-7.64b-Q8_0-GGUF · llama-cpp · 9 · 0
granite-3.0-8b-instruct-Q8_0-GGUF · llama-cpp · 9 · 0
Mistral-Nemo-abliterated-Nemo-Pro-v2-Q8_0-GGUF · llama-cpp · 9 · 0
Mistral-RP-24b-karcher-pro · 8 · 1
llama3-13.45b-Instruct-Q6_K-GGUF · llama · 8 · 0
llama3-13.45b-Instruct-Q8_0-GGUF · llama · 8 · 0
llama3-13.45b-Instruct-Q5_K_M-GGUF · llama · 8 · 0
llama3-13.45b-Instruct-Q4_K_M-GGUF · llama · 8 · 0
Phi-3.5-mini-instruct-Q8_0-GGUF · llama-cpp · 8 · 0
phi-3.5-sakura-yuzu-v2 · 8 · 0
Weirdslerp2-25B-Q5_K_M-GGUF · llama-cpp · 8 · 0

Infinirc-ArliAI-RPMax-v1.3-merge-13.3B-Q8_0-GGUF · llama-cpp · 8 · 0
Converted to GGUF format from `win10/Infinirc-ArliAI-RPMax-v1.3-merge-13.3B` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

miscii-Virtuoso-Small-Q8_0-GGUF · llama-cpp · 8 · 0
MagiDevs-24B-2506-Vision · 8 · 0

MagiDevs-24B-2506-Vision-Q6_K-GGUF · llama-cpp · 8 · 0
Converted to GGUF format from `win10/MagiDevs-24B-2506-Vision` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

phi3-128k-6b · 7 · 2
DarkIdol-Llama-3.1-13.3B-Instruct-1.2-Uncensored · llama · 7 · 2
miscii-14b-1028-Q8_0-GGUF · llama-cpp · 7 · 2

Norns-Qwen2.5-12B · 7 · 1
This is a merge of pre-trained language models created using mergekit, built with the passthrough merge method. The merge includes win10/Norns-Qwen2.5-7B; the original card lists the exact YAML configuration used to produce the model (a representative sketch follows below).
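A representative sketch of a passthrough (layer-stacking) mergekit config for this kind of 7B-to-12B upscale; the layer ranges are invented for illustration and are not the card's actual values:

```yaml
# Hypothetical passthrough config: two overlapping copies of the
# 7B source are stacked to increase depth and parameter count.
slices:
  - sources:
      - model: win10/Norns-Qwen2.5-7B
        layer_range: [0, 20]
  - sources:
      - model: win10/Norns-Qwen2.5-7B
        layer_range: [8, 28]
merge_method: passthrough
dtype: bfloat16
```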

SphinxMind-14B-normalize-false-Q8_0-GGUF · llama-cpp · 7 · 1

EVA-Instruct-QwQ-32B-Preview-Q4_K_M-GGUF · llama-cpp · 7 · 1
Converted to GGUF format from `win10/EVA-Instruct-QwQ-32B-Preview` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

Breeze-13B-32k-Instruct-v1_0-Q8_0-GGUF · llama-cpp · 7 · 0
internlm2_5-20b-chat-abliterated-Q6_K-GGUF · llama-cpp · 7 · 0
mergekit-karcher-pifptpx-Q8_0-GGUF · llama-cpp · 7 · 0

Mistral-v0.3-13B-32k-Base-v1 · license:apache-2.0 · 6 · 0
Qwen2.5-mini-Instruct-2 · 6 · 0
Infinirc-ArliAI-RPMax-v1.3-merge-8B-Q8_0-GGUF · llama-cpp · 6 · 0
Norns-Qwen2.5-7B-v0.2-Q8_0-GGUF · llama-cpp · 6 · 0

shuttle-3-mini-Q8_0-GGUF · llama-cpp · 6 · 0
Converted to GGUF format from `shuttleai/shuttle-3-mini` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

Mistral-RP-24b-karcher-pro-Q4_K_M-GGUF · llama-cpp · 6 · 0
Converted to GGUF format from `win10/Mistral-RP-24b-karcher-pro` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

magnum-v2-12b-Q8_0-GGUF · llama-cpp · 5 · 0
InternLM2_5-20B-ArliAI-RPMax-v1.1-Q6_K-GGUF · llama-cpp · 5 · 0
ArliAI-RPMax-v1.3-merge-llama3-8B-Q8_0-GGUF · llama-cpp · 5 · 0
ChatML-Nemo-Pro-Q8_0-GGUF · llama-cpp · 5 · 0
sthenno-Test-maybe-is-pro-v2-Q8_0-GGUF · llama-cpp · 5 · 0

Lingshu-32B-Q4_K_M-GGUF · llama-cpp · 5 · 0
Converted to GGUF format from `lingshu-medical-mllm/Lingshu-32B` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

EVA-QwQ-32B-Preview · 4 · 5
If you like this model, please consider supporting the author via Ko-fi (https://ko-fi.com/ogodwin10) or PayPal (https://www.paypal.com/ncp/payment/X7DMN9DUBH2X8); sponsorship funds further merges, and with hardware capable of fine-tuning, more models can be fine-tuned after merging (as with SOLAR 10B). This is a merge of pre-trained language models created using mergekit, built with the TIES merge method using Qwen/QwQ-32B-Preview as the base; EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2 was included in the merge. The original card lists the exact YAML configuration used to produce the model (a representative sketch follows below).
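A representative sketch of a TIES config in mergekit's YAML schema for this pairing; the density and weight values are illustrative assumptions, not the card's actual settings:

```yaml
# Hypothetical TIES merge: trims low-magnitude deltas, resolves
# sign conflicts, then applies the surviving deltas to the base.
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
    parameters:
      density: 0.5   # fraction of delta parameters kept (assumed)
      weight: 0.5    # contribution of this model's deltas (assumed)
merge_method: ties
base_model: Qwen/QwQ-32B-Preview
parameters:
  normalize: true
dtype: bfloat16
```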

DeepSeek-R1-Distill-sthenno-14b-0121 · 4 · 3
Your support = more models; see the author's Ko-fi page. This is a merge of pre-trained language models created using mergekit, built with the TIES merge method using sthenno-com/miscii-14b-1225 as the base. The merge includes deepseek-ai/DeepSeek-R1-Distill-Qwen-14B and sthenno/tempesthenno-ppo-ckpt40; the original card lists the exact YAML configuration used to produce the model.

Phitis-14b-Base · 4 · 2
karcher-test-32b · 4 · 1

SphinxMind-14B-Q8_0-GGUF · llama-cpp · 4 · 1
Converted to GGUF format from `win10/SphinxMind-14B` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

Lusca-33B-Q4_K_M-GGUF · llama-cpp · 4 · 0
Llama-3.2-3B-Instruct-24-9-29 · llama · 4 · 0
phi-3.5-Sakura-Yuzu-v1.5 · 4 · 0
Mistral-Nemo-Instruct-2407-20b-Q8_0-GGUF · llama-cpp · 4 · 0
Mistral-Nemo-Instruct-2407-20b-Q4_K_M-GGUF · llama-cpp · 4 · 0

ArliAI-RPMax-v1.3-merge-13.3B-Q8_0-GGUF · llama-cpp · 4 · 0
Converted to GGUF format from `win10/ArliAI-RPMax-v1.3-merge-13.3B` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

Norns-Qwen2.5-Coder-7B-v0.1-Q8_0-GGUF · llama-cpp · 4 · 0
high-speed-mixing-7B-V1-Q8_0-GGUF · llama-cpp · 4 · 0

EVA-Meissa-mini-pro-v2-Q8_0-GGUF · llama-cpp · 4 · 0
Converted to GGUF format from `win10/EVA-Meissa-mini-pro-v2` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

openhands-Nemotron-32B-karcher-300-Q4_K_M-GGUF · llama-cpp · 4 · 0
Converted to GGUF format from `mergekit-community/openhands-Nemotron-32B-karcher-300` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

miscii-14b-1M-0128 · license:apache-2.0 · 3 · 3

Qwen2.5-2B-Instruct · 3 · 1
Qwen2.5-2B-Instruct is a LazyMergekit merge built entirely from Qwen/Qwen2.5-1.5B-Instruct, which the original card lists 26 times (one entry per merged slice).
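A plausible shape for such a self-merge, sketched in mergekit's YAML schema under the assumption that each listed copy corresponds to one short, overlapping layer slice; the ranges are invented for illustration:

```yaml
# Hypothetical passthrough self-merge: overlapping slices of the
# same model are stacked to grow the layer and parameter count.
slices:
  - sources:
      - model: Qwen/Qwen2.5-1.5B-Instruct
        layer_range: [0, 4]
  - sources:
      - model: Qwen/Qwen2.5-1.5B-Instruct
        layer_range: [2, 6]
  - sources:
      - model: Qwen/Qwen2.5-1.5B-Instruct
        layer_range: [4, 8]
  # ...and so on, one slice per listed copy...
merge_method: passthrough
dtype: bfloat16
```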

steiner-32b-preview-Q4_K_M-GGUF · llama-cpp · 3 · 1
WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B · 3 · 1
Blue-Rose-Coder-12.3B-Instruct · 3 · 1

Norns-Qwen2.5-12B-Q8_0-GGUF · llama-cpp · 3 · 1
Converted to GGUF format from `win10/Norns-Qwen2.5-12B` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

EVA-Norns-Qwen2.5-v0.1-Q8_0-GGUF · llama-cpp · 3 · 1
Converted to GGUF format from `win10/EVA-Norns-Qwen2.5-v0.1` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

miscii-14b-1225-Q8_0-GGUF · llama-cpp · 3 · 1
Qwen1.5-0.5b-Xia-Ai · 3 · 0
phi-3.5-Sakura-Yuzu · 3 · 0
phi-3.5-Sakura-Yuzu-v1.5-Q8_0-GGUF · llama-cpp · 3 · 0
DeepSeek-V2-Lite-XiaAi-Q8_0-GGUF · llama-cpp · 3 · 0

Qwen2.5-Math-12.3B-Instruct · 3 · 0
This is a merge of pre-trained language models created using mergekit, built with the passthrough merge method. The merge includes Qwen/Qwen2.5-Math-7B-Instruct; the original card lists the exact YAML configuration used to produce the model.

WhiteRabbitNeo-2.5-Qwen-2.5-Coder-12.3B-Q8_0-GGUF · llama-cpp · 3 · 0

ArliAI-RPMax-v1.3-merge-13.3B · llama · 3 · 0
This is a merge of pre-trained language models created using mergekit, built with the passthrough merge method. The merge includes win10/ArliAI-RPMax-v1.3-merge-8B; the original card lists the exact YAML configuration used to produce the model.

Urdandi-Qwen2.5-7B · 3 · 0
Norns-Qwen2.5-Coder-7B-Instruct-v0.1-Q8_0-GGUF · llama-cpp · 3 · 0
falcon-mamba-7b-instruct-Q8_0-GGUF · llama-cpp · 3 · 0
ChatML-Nemo-Pro-model_stock-Q8_0-GGUF · llama-cpp · 3 · 0
tempesthenno-ppo-ckpt40-Q8_0-GGUF · llama-cpp · 3 · 0
MagiD-24B · 3 · 0
Llama-3.2-3B-F1-Instruct-vectormemory · llama · 3 · 0

EVA-QwQ-32B-Coder-Preview · 2 · 3
If you like this model, please consider supporting the author via Ko-fi (https://ko-fi.com/ogodwin10); sponsorship funds further merges, and with hardware capable of fine-tuning, more models can be fine-tuned after merging (as with SOLAR 10B). This is a merge of pre-trained language models created using mergekit, built with the TIES merge method using EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2 as the base. The merge includes Qwen/QwQ-32B-Preview and Qwen/Qwen2.5-Coder-32B-Instruct; the original card lists the exact YAML configuration used to produce the model.

Verdandi-Qwen2.5-7B · 2 · 1
high-speed-mixing-7B-V2-Q8_0-GGUF · llama-cpp · 2 · 1

EVA-QwQ-32B-Preview-Q4_K_M-GGUF · llama-cpp · 2 · 1
If you like this model, please consider supporting the author via Ko-fi (https://ko-fi.com/ogodwin10). Converted to GGUF format from `win10/EVA-QwQ-32B-Preview` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

wizardcoder-33b-v1.1-mirror-Q2_K-GGUF · llama-cpp · 2 · 0
phi-3.5-sakura-yuzu-v2.5 · 2 · 0
Qwen2.5-Coder-12.3b-Instruct-Q8_0-GGUF · llama-cpp · 2 · 0
Norns-Qwen2.5-7B · 2 · 0
Norns-Qwen2.5-Coder-7B-v0.1 · 2 · 0
EVA-Meissa-mini-pro · 2 · 0

EVA-Meissa-mini-pro-Q8_0-GGUF · llama-cpp · 2 · 0
Converted to GGUF format from `win10/EVA-Meissa-mini-pro` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

ChatML-Nemo-Pro-V2-Q8_0-GGUF · llama-cpp · 2 · 0
OuteTTS-500M-Fgo · license:cc-by-nc-4.0 · 2 · 0
karcher-max-iter1000-32b-Q4_K_M-GGUF · llama-cpp · 2 · 0
mergekit-karcher-pifptpx · llama · 2 · 0

KwaiCoder-AutoThink-preview-Q4_K_M-GGUF · llama-cpp · 2 · 0
Converted to GGUF format from `Kwaipilot/KwaiCoder-AutoThink-preview` using llama.cpp via ggml.ai's GGUF-my-repo space; see the usage notes under aya-expanse-32b-Q5_K_M-GGUF above.

ERNIE-4.5-29B-A4B-PT · license:apache-2.0 · 2 · 0

miscii-Virtuoso-Small · 1 · 2
This is a merge of pre-trained language models created using mergekit, built with the TIES merge method using sthenno-com/miscii-14b-1028 as the base. The merge includes arcee-ai/Virtuoso-Small; the original card lists the exact YAML configuration used to produce the model.

llama3-13.45b-Instruct · llama · 1 · 1
This is a merge of pre-trained language models created using mergekit, built with the passthrough merge method. The merge includes a local copy of Meta-Llama-3-8B-Instruct (referenced by a Windows path under F:\text-generation-webui\models\); the original card lists the exact YAML configuration used to produce the model.

Qwen2.5-Coder-12.3b-Instruct · 1 · 1
ChatML-Nemo-Pro-V2 · 1 · 1
Breeze-13B-32k-Base-v1_0 · license:apache-2.0 · 1 · 0

Breeze-13B-32k-Instruct-v1_0 · license:apache-2.0 · 1 · 0
Breeze-13B-32k-Instruct-v1_0 is a mergekit merge built entirely from MediaTek-Research/Breeze-7B-32k-Instruct-v1_0, which the original card lists seven times.

Llama-3.2-3B-Instruct-24-9-29-Q8_0-GGUF · llama-factory · 1 · 0
MagpieLM-8B · llama · 1 · 0
dolphin-2.9.3-mistral-nemo-20b-V2 · 1 · 0
ghost-13.3b-beta-1608 · llama · 1 · 0
Urd-Qwen2.5-7B · 1 · 0
Norns-Qwen2.5-Coder-7B-Instruct-v0.1 · 1 · 0
ChatML-Nemo-Pro-weight-density-increase-test-Q8_0-GGUF · llama-cpp · 1 · 0
sthenno-Test-maybe-is-pro · 1 · 0
karcher-max-iter1000-32b · 1 · 0
yi-qwen3-16b · llama · 1 · 0

DeepSeek-R1-Distill-sthenno-14b-0121-union-tokenizer · 0 · 5
EVA-Instruct-QwQ-32B-Preview · 0 · 3
ArliAI-RPMax-v1.3-merge-8B · llama · 0 · 2
ChatML-Nemo-Pro · 0 · 2
Nemotron2Gemma-AURORA-LoRA-27B-IT-0p95 · llama · 0 · 1
Meta-Llama-3-15B-Instruct · llama · 0 · 1
Qwen2-12.3B · license:apache-2.0 · 0 · 1
phi3.5-pro-10-08 · 0 · 1
Qwen2.5-mini-Instruct · 0 · 1

DeepSeek-V2-Lite-XiaAi · 0 · 1
This is a merge of pre-trained language models created using mergekit, built with the TIES merge method using a local copy of DeepSeek-V2-Lite as the base (referenced by a Windows path under E:\tabbyAPI\models\). The merge includes a local copy of DeepSeek-Coder-V2-Lite-Instruct; the original card lists the exact YAML configuration used to produce the model.

Meissa-Qwen2.5-12.3B-Instruct · 0 · 1
ArliAI-RPMax-v1.3-merge-llama3-8B · llama · 0 · 1
Infinirc-ArliAI-RPMax-v1.3-merge-8B · llama · 0 · 1
Norns-Qwen2.5-7B-v0.2 · 0 · 1

EVA-Norns-Qwen2.5-v0.1 · 0 · 1
This is a merge of pre-trained language models created using mergekit, built with the TIES merge method using EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1 as the base. The merge includes win10/Urdandi-Qwen2.5-7B; the original card lists the exact YAML configuration used to produce the model.

high-speed-mixing-7B-V2 · 0 · 1
Phi-4-llama-t1-lora · llama · 0 · 1
granite-3.1-3b-a800m-t1 · license:apache-2.0 · 0 · 1
Qwen3-4B-only-tulu-3-sft-mixture-DolphinLabeled-step-190 · 0 · 1