ngxson

48 models

Vintern-1B-v3_5-GGUF

Base model: 5CD-AI/Vintern-1B-v3_5

license:mit
304,818 downloads • 6 likes

GLM-4.7-Flash-GGUF

6,788 downloads • 17 likes

Home-Cook-Mistral-Small-Omni-24B-2507-GGUF

This is a multimodal model created by merging Mistral Small 2506 (vision capabilities) and Voxtral 2507 (audio capabilities) using a modified version of the `mergekit` tool. For detailed merging instructions, see the steps below.

This model is a merged derivative work combining Mistral Small 2506 and Voxtral 2507, both originally released by Mistral AI under the Apache 2.0 license. The merged model is also distributed under Apache 2.0, and the full license text, along with the original copyright notices, is included in this repository. I have no affiliation, sponsorship, or formal relationship with Mistral AI; this is an independent effort to combine the vision and audio capabilities of the two models.

Merging steps:
1. Install `mergekit` from this revision: https://github.com/arcee-ai/mergekit/tree/0027c5c51471fa891d438eccda5455ebe55b536e
2. Modify the `mergekit` source code: open the file `mergekit/merge_methods/generalized_task_arithmetic.py`.
3. Go to the `mistralo` output directory, then download `tekken.json` from Voxtral and place it there: https://huggingface.co/mistralai/Voxtral-Small-24B-2507/blob/main/tekken.json
4. Finally, use `convert_hf_to_gguf.py` to convert the result back to GGUF as usual.

To build the merged mmproj, download these files:
- Audio: https://huggingface.co/ggml-org/Voxtral-Mini-3B-2507-GGUF/blob/main/mmproj-Voxtral-Mini-3B-2507-Q8_0.gguf
- Vision: https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF/blob/main/mmproj-F16.gguf

Rename them to `audio.gguf` and `vision.gguf` respectively, then run `merge_mmproj_models.py` from this repo. The output file will be `mmproj-model.gguf`.
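After the conversion step, one cheap sanity check on the output is that every valid GGUF file starts with the 4-byte magic `GGUF`. A minimal sketch (the file name `demo.gguf` is a stand-in, not the actual converted model):

```python
import struct

def looks_like_gguf(path: str) -> bool:
    """Cheap sanity check: GGUF files begin with the 4-byte magic b'GGUF'."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Demo with a stand-in file; a real run would point at the converted model.
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))  # magic + little-endian version field

print(looks_like_gguf("demo.gguf"))  # → True
```

This only verifies the header, not the tensor contents, but it catches the common failure mode of a truncated or non-GGUF output file.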

license:apache-2.0
6,283 downloads • 25 likes

boring-testing-tiny

3,284 downloads • 0 likes

DeepSeek-R1-Distill-Qwen-7B-abliterated-GGUF

308 downloads • 7 likes

test_gguf_models

license:mit
297 downloads • 0 likes

Devstral-Small-Vision-2505-GGUF

The vision encoder is taken from Mistral Small and works out-of-the-box with llama.cpp.

license:apache-2.0
231 downloads • 28 likes

test_gguf_lora_adapter

license:mit
147 downloads • 0 likes

wllama-split-models

license:mit
143 downloads • 0 likes

tinyllama_split_test

license:mit
63 downloads • 0 likes

SmolLM2-1.7B-Instruct-Q4_K_M-GGUF

llama-cpp
53 downloads • 0 likes

gemma-3-mmproj-gguf-q8_0-TEST

43 downloads • 2 likes

Vistral-7B-ChatML

license:mit
42 downloads • 1 like

LFM2-VL-450M-GGUF-Q4_0

42 downloads • 0 likes

MiMo-VL-7B-RL-GGUF

Original model: https://huggingface.co/XiaomiMiMo/MiMo-VL-7B-RL

license:mit
40 downloads • 4 likes

test-model-preset

29 downloads • 0 likes

ultravox-wip-ggml-do-not-use

29 downloads • 0 likes

DeepSeek-R1-Remixed-IQ1_M

17 downloads • 1 like

LFM2-test-ci-80M

17 downloads • 0 likes

Llama-3-Instruct-abliteration-LoRA-8B-F16-GGUF

llama-cpp
16 downloads • 0 likes

MiniThinky-v2-1B-Llama-3.2-Q8_0-GGUF

llama-cpp
11 downloads • 8 likes

vistral-meow

license:mit
11 downloads • 0 likes

Qwen2.5-7B-Instruct-1M-Q4_K_M-GGUF

llama-cpp
10 downloads • 0 likes

qwen3_next_fixed

9 downloads • 0 likes

Meta-Llama-3.1-8B-Instruct-Q8_0

license:mit
9 downloads • 0 likes

gemma-3-4b-pt-Q4_0-GGUF

llama-cpp
8 downloads • 1 like

Meta-Llama-3.1-8B-Instruct-Q4_K_M-GGUF

license:mit
8 downloads • 0 likes

Llama-4-Scout-17B-16E-Instruct-GGUF

7 downloads • 2 likes

GLM-5-small-test

5 downloads • 0 likes

MiniThinky-1B-Llama-3.2-Q8_0-GGUF

llama-cpp
5 downloads • 0 likes

Llama-4-Maverick-17B-128E-Instruct-Q2_K-GGUF

5 downloads • 0 likes

hunyuan-moe-tiny-random

5 downloads • 0 likes

SmolLM2-135M-Instruct-IQ4_XS-GGUF

llama-cpp
3 downloads • 0 likes

Llama-3.2-1B-Creative-Lora-F16-GGUF

llama
3 downloads • 0 likes

test-gemma-2-2b-gguf

2 downloads • 0 likes

tinygemma3_cifar

2 downloads • 0 likes

test-llava-will-be-deleted-soon

1 download • 0 likes

LoRA-Hermes-3-Llama-3.1-8B-F16-GGUF

llama-cpp
1 download • 0 likes

TEST-Tiny-Llama4

llama4
1 download • 0 likes

demo_simple_rag_py

license:mit
0 downloads • 10 likes

LoRA-phi-4-abliterated

This is a LoRA adapter extracted from a language model using mergekit. It was extracted from huihui-ai/phi-4-abliterated, with microsoft/phi-4 as the base. The following command was used to extract this LoRA adapter:

0 downloads • 4 likes

LoRA-Qwen2.5-14B-Instruct-abliterated-v2

0 downloads • 3 likes

hf-blog-podcast

0 downloads • 2 likes

LoRA-Qwen2.5-7B-Instruct-abliterated-v3

0 downloads • 1 like

LoRA-Qwen2.5-3B-Instruct-abliterated

0 downloads • 1 like

LoRA-Qwen2.5-32B-Instruct-abliterated

0 downloads • 1 like

LoRA-llama-3-70B-Instruct-abliterated

base_model:failspy/llama-3-70B-Instruct-abliterated
0 downloads • 1 like

LoRA-Qwen2.5-1.5B-Instruct-abliterated

0 downloads • 1 like