Theta-Lev

25 models

YandexGPT-5-Lite-8B-instruct-Q5_K_M-GGUF

llama-cpp • 241 downloads • 2 likes

YandexGPT 5 Lite 8B Instruct Q8 GGUF

Theta-Lev/YandexGPT-5-Lite-8B-instruct-Q8_0-GGUF. This model was converted to GGUF format from `yandex/YandexGPT-5-Lite-8B-instruct` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or follow the usage steps listed in the llama.cpp repo directly: clone llama.cpp from GitHub, move into the llama.cpp folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
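A minimal sketch of the brew path, assuming the standard GGUF-my-repo invocation; the exact `--hf-file` name is inferred from the repo's naming convention and may differ:

```bash
# Install llama.cpp (works on Mac and Linux)
brew install llama.cpp

# Run the quantized checkpoint straight from the Hub.
# The GGUF filename below is an assumption based on GGUF-my-repo naming.
llama-cli --hf-repo Theta-Lev/YandexGPT-5-Lite-8B-instruct-Q8_0-GGUF \
  --hf-file yandexgpt-5-lite-8b-instruct-q8_0.gguf \
  -p "The first law of robotics states"
```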

llama-cpp • 101 downloads • 1 like

DeepSeek-V2-Lite-Q8_0-GGUF

llama-cpp • 47 downloads • 0 likes

deepseek-coder-6.7b-instruct-Q8_0-GGUF

llama-cpp • 31 downloads • 0 likes

L3.1-Dark-Reasoning-LewdPlay-evo-Hermes-R1-Uncensored-8B-Q5_K_M-GGUF

llama 3.1 • 22 downloads • 0 likes

L3-SnowStorm-v1.15-4x8B-B-Q8_0-GGUF

llama-cpp • 18 downloads • 0 likes

L3-SnowStorm-v1.15-4x8B-B-Q5_K_M-GGUF

llama-cpp • 16 downloads • 0 likes

internlm2-math-plus-7b-Q8_0-GGUF

llama-cpp • 13 downloads • 0 likes

rho-math-1b-interpreter-v0.1-Q8_0-GGUF

llama-cpp • 11 downloads • 0 likes

RedPajama-INCITE-Instruct-3B-v1-Q8_0-GGUF

llama-cpp • 11 downloads • 0 likes

deepseek-math-7b-instruct-Q8_0-GGUF

llama-cpp • 9 downloads • 0 likes

Llama-3.1-Minitron-4B-Width-Base-Q8_0-GGUF

llama-cpp • 9 downloads • 0 likes

deepseek-math-7b-rl-Q8_0-GGUF

llama-cpp • 8 downloads • 0 likes

rho-math-7b-interpreter-v0.1-Q8_0-GGUF

llama-cpp • 8 downloads • 0 likes

Llama-3.1-Minitron-4B-Depth-Base-Q8_0-GGUF

llama-cpp • 8 downloads • 0 likes

Mistral-Nemo-Instruct-2407-Q8_0-GGUF

llama-cpp • 7 downloads • 0 likes

Qwen2.5-VL-3B-Instruct-Q8_0-GGUF

llama-cpp • 7 downloads • 0 likes

Qwen2.5-MOE-6x1.5B-DeepSeek-Reasoning-e32-Q8_0-GGUF

Theta-Lev/Qwen2.5-MOE-6x1.5B-DeepSeek-Reasoning-e32-Q8_0-GGUF. Converted to GGUF format from `DavidAU/Qwen2.5-MOE-6x1.5B-DeepSeek-Reasoning-e32` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for more details. Rather than repeating the brew instructions above, here is the card's alternative path: clone llama.cpp, move into the folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux), as sketched below.
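A build-from-source sketch of those steps, under the GGUF-my-repo template's assumptions; the Makefile flags match that template, while newer llama.cpp versions use the CMake equivalents shown in the comment:

```bash
# Step 1: clone llama.cpp from GitHub
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Step 2: build with curl support plus hardware-specific flags
LLAMA_CURL=1 LLAMA_CUDA=1 make
# Newer llama.cpp versions dropped the Makefile; the CMake equivalent is:
#   cmake -B build -DLLAMA_CURL=ON -DGGML_CUDA=ON && cmake --build build --config Release

# Step 3: run inference (the GGUF filename is assumed from GGUF-my-repo naming)
./llama-cli --hf-repo Theta-Lev/Qwen2.5-MOE-6x1.5B-DeepSeek-Reasoning-e32-Q8_0-GGUF \
  --hf-file qwen2.5-moe-6x1.5b-deepseek-reasoning-e32-q8_0.gguf \
  -p "Hello"
```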

llama-cpp • 6 downloads • 0 likes

Qwen2.5-VL-7B-Instruct-Q8_0-GGUF

Theta-Lev/Qwen2.5-VL-7B-Instruct-Q8_0-GGUF. Converted to GGUF format from `Qwen/Qwen2.5-VL-7B-Instruct` using llama.cpp via ggml.ai's GGUF-my-repo space; refer to the original model card for more details. Install and build steps for llama.cpp are the same as for the repos above (brew, or a source build with `LLAMA_CURL=1` plus hardware-specific flags).
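For serving rather than one-shot CLI runs, llama.cpp's `llama-server` exposes an OpenAI-compatible endpoint; a hedged sketch (the GGUF filename is an assumption, and image input for this VL model may additionally require the separate mmproj file from the original release):

```bash
# Start an OpenAI-compatible server on port 8080
llama-server --hf-repo Theta-Lev/Qwen2.5-VL-7B-Instruct-Q8_0-GGUF \
  --hf-file qwen2.5-vl-7b-instruct-q8_0.gguf \
  -c 4096 --port 8080

# Text-only query via the standard chat-completions route
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Describe GGUF in one sentence."}]}'
```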

llama-cpp • 5 downloads • 0 likes

deepseek-math-7b-base-Q8_0-GGUF

llama-cpp • 4 downloads • 0 likes

DeepSeek-Coder-V2-Lite-Instruct-Q8_0-GGUF

llama-cpp • 4 downloads • 0 likes

Qwen2.5-7B-Instruct-1M-Q5_K_M-GGUF

llama-cpp • 2 downloads • 0 likes

Qwen2.5-VL-32B-Instruct-Q5_K_M-GGUF

llama-cpp • 2 downloads • 0 likes

HeroBophades-2x7B-Q8_0-GGUF

llama-cpp • 1 download • 0 likes

HeroBophades-3x7B-Q8_0-GGUF

llama-cpp • 1 download • 0 likes