cstr

86 models

cohere-transcribe-03-2026-GGUF • license:apache-2.0 • 2,200 downloads • 3 likes
Spaetzle-v60-7b • license:cc-by-nc-4.0 • 595 downloads • 3 likes

aya-expanse-8b-Q4_K_M-GGUF

This model was converted to GGUF format from `CohereForAI/aya-expanse-8b` using llama.cpp, via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo, move into the folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

llama-cpp • 155 downloads • 0 likes
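The install-and-run steps described in the card can be sketched as shell commands. This is a minimal sketch: the `--hf-repo`/`--hf-file` flags need a llama.cpp build with CURL support, the exact `.gguf` file name inside the repo is assumed from GGUF-my-repo's naming convention, and the prompt is an arbitrary example.

```shell
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run the quantized checkpoint straight from the Hugging Face repo;
# llama-cli downloads and caches the GGUF file on first use.
llama-cli \
  --hf-repo cstr/aya-expanse-8b-Q4_K_M-GGUF \
  --hf-file aya-expanse-8b-q4_k_m.gguf \
  -p "Write a short greeting in German."
```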

octen-embedding-0.6b-onnx-int4 • license:apache-2.0 • 98 downloads • 0 likes
granite-speech-4.0-1b-GGUF • license:apache-2.0 • 85 downloads • 0 likes
Octen-Embedding-0.6B-ONNX-INT8-FULL • license:apache-2.0 • 71 downloads • 0 likes
Llama3-DiscoLeo-Instruct-8B-v0.1-GGUF • license:llama3 • 59 downloads • 0 likes
Ministral-8B-Instruct-2410-GGUF • llama.cpp • 50 downloads • 1 like
salamandra-7b-instruct-GGUF • license:apache-2.0 • 47 downloads • 2 likes
whisper-large-v3-turbo-int8_float32 • license:apache-2.0 • 44 downloads • 0 likes
whisper-large-v3-turbo-german-int8_float32 • license:apache-2.0 • 43 downloads • 2 likes
Spaetzle-v85-7b-GGUF • license:cc-by-nc-4.0 • 23 downloads • 1 like
Phi-3-mini-4k-instruct-LLaMAfied-GGUF • license:mit • 20 downloads • 0 likes
ALMA-7B-R-GGUF • 17 downloads • 0 likes
Llama3-DiscoLeo-Instruct-8B-32k-v0.1-GGUF • license:llama3 • 17 downloads • 0 likes
DiscoLM_German_7b_v1_chat-GGUF • 16 downloads • 0 likes
Llama3_DiscoLM_German_8b_v0.1_experimental-GGUF • 15 downloads • 2 likes
llama3.1-8b-spaetzle-v74-GGUF • base_model:cstr/llama3.1-8b-spaetzle-v59 • 15 downloads • 0 likes

mt0-large-Q4_K_M-GGUF

This model was converted to GGUF format from `bigscience/mt0-large` using llama.cpp, via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo, move into the folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

llama-cpp • 15 downloads • 0 likes

occiglot10b-dpo-GGUF • 14 downloads • 1 like
occiglot-7b-de-en-instruct-dpo2-GGUF • 13 downloads • 0 likes
llama3.1-8b-spaetzle-v51-GGUF-1 • license:llama3.1 • 13 downloads • 0 likes
DeepSeek-R1-Distill-Llama-8B-abliterated-Q4_K_M-GGUF • llama-cpp • 12 downloads • 0 likes
llama3-8b-spaetzle-v33-GGUF • base_model:cstr/llama3-8b-spaetzle-v20 • 11 downloads • 0 likes
llama3.1-8b-spaetzle-v59-GGUF • Undi95/Meta-Llama-3.1-8B-Claude • 9 downloads • 0 likes
Starling-LM-7B-beta-GGUF • 8 downloads • 0 likes
Mistral-7B-base-v0.2-GGUF • 8 downloads • 0 likes
llama3-8b-spaetzle-v39-GGUF • license:llama3 • 8 downloads • 0 likes
llama3-8b-spaetzle-v39-q4_0-GGUF • license:llama3 • 8 downloads • 0 likes
occiglot-7b-de-en-instruct-GGUF • license:apache-2.0 • 7 downloads • 1 like
Spaetzle-v60-7b-GGUF • license:cc-by-nc-4.0 • 7 downloads • 1 like
llama3.1-8b-spaetzle-v90-GGUF • cstr/llama3.1-8b-spaetzle-v85 • 7 downloads • 1 like
Spaetzle-v8-7b-orpo2-GGUF • 7 downloads • 0 likes
Llama-3-SauerkrautLM-8b-Instruct-GGUF • 7 downloads • 0 likes
llama3-8b-spaetzle-v31-GGUF • 7 downloads • 0 likes
Spaetzle-v12-7b-GGUF • 6 downloads • 0 likes

llama3-8b-spaetzle-v37 • llama • 6 downloads • 0 likes
Spaetzle-v85-7b-GGUF-q4 • license:cc-by-nc-4.0 • 6 downloads • 0 likes
occiglot-7b-de-en-GGUF • 5 downloads • 1 like
llama3-discolm-orca-GGUF • 5 downloads • 1 like
Spaetzle-v8-7b-GGUF • 5 downloads • 0 likes
bagel-dpo-7b-v0.5-GGUF • 5 downloads • 0 likes
dolphin-2.9-llama3-8b-GGUF • 5 downloads • 0 likes
Spaetzle-v31-7b • 4 downloads • 1 like
Spaetzle-v31-7b-GGUF • 4 downloads • 0 likes
Spaetzle-v69-7b-GGUF • license:cc-by-nc-4.0 • 4 downloads • 0 likes
OrpoLlama-3-8B-GGUF • 4 downloads • 0 likes
Llama3-DiscoLeo-Instruct-8B-v0.1-mlx • llama • 3 downloads • 0 likes
llama3.1-8b-spaetzle-v119 • llama • 3 downloads • 0 likes
TowerInstruct-7B-v0.2-GGUF • license:cc-by-nc-4.0 • 2 downloads • 1 like
Spaetzle-v60-7b-Q4_0-GGUF • license:cc-by-nc-4.0 • 2 downloads • 0 likes
Yi-1.5-9B-Chat-GGUF • llama-cpp • 2 downloads • 0 likes
llama3-8b-spaetzle-v33 • llama • 2 downloads • 0 likes
llama3.1-8b-spaetzle-v74 • llama • 2 downloads • 0 likes

Spaetzle-v8-7b • 1 download • 2 likes
Spaetzle-v62-7b • 1 download • 1 like
Spaetzle-v65-7b • 1 download • 1 like
Spaetzle-v69-7b • license:cc-by-nc-4.0 • 1 download • 1 like
llama3-8b-spaetzle-v20 • llama • 1 download • 1 like
Spaetzle-v85-7b • license:cc-by-nc-4.0 • 1 download • 1 like
llama3.1-8b-spaetzle-v59 • llama • 1 download • 1 like
WiederPipe • license:apache-2.0 • 1 download • 0 likes
wmt21-dense-24-wide-en-x-stq4 • 1 download • 0 likes
wmt21-dense-24-wide-en-x-stq8 • 1 download • 0 likes
Spaetzle-v63-7b • 1 download • 0 likes
Spaetzle-v64-7b • 1 download • 0 likes
phi3-mini-4k-llamafied-sft-v1 • license:mit • 1 download • 0 likes
phi-3-orpo-v8_16-GGUF • 1 download • 0 likes
phi-3-orpo-v9_16-GGUF • llama • 1 download • 0 likes
llama3-8b-spaetzle-v33-mlx-4bit • llama • 1 download • 0 likes
llama3-8b-spaetzle-v33-int4-inc • llama • 1 download • 0 likes
paraphrase-multilingual-MiniLM-L12-v2-mlx • license:apache-2.0 • 1 download • 0 likes

llama3-8b-spaetzle-v20-int4-inc • llama • 0 downloads • 3 likes
Spaetzle-v12-7b • license:cc-by-sa-4.0 • 0 downloads • 2 likes
Spaetzle-v60-7b-int4-inc • license:cc-by-nc-4.0 • 0 downloads • 2 likes
Spaetzle-v85-7b-int4-inc • license:cc-by-nc-4.0 • 0 downloads • 2 likes
llama3.1-8b-spaetzle-v90 • llama • 0 downloads • 2 likes
NeuDistRo-a1-laser • 0 downloads • 1 like
NeuDist-Ro-7B-laser • 0 downloads • 1 like
wmt21-dense-24-wide-en-x-st • license:mit • 0 downloads • 1 like
Spaetzle-v66-7b • 0 downloads • 1 like
Spaetzle-v67-7b • 0 downloads • 1 like
Spaetzle-v68-7b • 0 downloads • 1 like
phi-3-orpo-v9_4 • llama • 0 downloads • 1 like
whisper-large-v3-turbo-german-ggml • license:apache-2.0 • 0 downloads • 1 like
aihpi_f5_german_mlx_q4 • license:cc-by-nc-4.0 • 0 downloads • 1 like