Karsh-CAI

13 models

Qwen2.5-32B-AGI-Q4_K_M-GGUF
llama-cpp · 171 · 32

Peach-9B-8k-Roleplay-Q5_K_M-GGUF
llama-cpp · 23 · 6

Better-Qwen2-13B-Multilingual-RP-250
21 · 2

Mistral-Nemo-Redstone-GGUF
license:apache-2.0 · 20 · 0

Peach-9B-8k-Roleplay-Q8_0-GGUF
llama-cpp · 17 · 3

Qwen2.5-32B-AGI-Q6_K-GGUF

Kas1o/Qwen2.5-32B-AGI-Q6K-GGUF

This model was converted to GGUF format from `Kas1o/Qwen2.5-32B-AGI` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo: clone llama.cpp from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp · 17 · 2
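The brew route above can be sketched as a pair of shell commands. Note the `.gguf` filename below is an assumption based on GGUF-my-repo's usual lowercase naming convention; check the repo's file list before running.

```shell
# Install llama.cpp via Homebrew (works on macOS and Linux)
brew install llama.cpp

# Run the checkpoint directly from the Hub. --hf-repo/--hf-file tell
# llama-cli to download the GGUF file on first use; -p is the prompt.
llama-cli --hf-repo Kas1o/Qwen2.5-32B-AGI-Q6K-GGUF \
  --hf-file qwen2.5-32b-agi-q6_k.gguf \
  -p "Hello, how are you?"
```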

llama3-8B-cn-rochat-v1-Q5_K_M-GGUF
llama3 · 16 · 0

Qwen2.5-0.5B-Instruct-Thinking-Q8_0-GGUF
llama-cpp · 12 · 0

OR-7B-Q5_K_M-GGUF
llama-cpp · 8 · 0

Qwen2.5-14B-Instruct-1M-abliterated-Q8_0-GGUF
llama-cpp · 7 · 0

SuZhiDiXia-7B
6 · 5

Mistral-Small-24B-Instruct-2501-Q8_0-GGUF

Karsh-CAI/Mistral-Small-24B-Instruct-2501-Q80-GGUF

This model was converted to GGUF format from `mistralai/Mistral-Small-24B-Instruct-2501` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo: clone llama.cpp from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp · 3 · 2
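The build-from-source route mentioned above can be sketched as follows. The `.gguf` filename is an assumption based on GGUF-my-repo's usual naming; check the repo's file list before running.

```shell
# Step 1: clone llama.cpp from GitHub and move into the folder
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Step 2: build with the LLAMA_CURL=1 flag (enables --hf-repo downloads);
# add hardware-specific flags, e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux
make LLAMA_CURL=1

# Serve the model over an HTTP API with a 2048-token context window
./llama-server --hf-repo Karsh-CAI/Mistral-Small-24B-Instruct-2501-Q80-GGUF \
  --hf-file mistral-small-24b-instruct-2501-q8_0.gguf \
  -c 2048
```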

OR-07.21-Billion-Parameters
license:apache-2.0 · 3 · 1