jaspionjader

234 models

LLAMA-3_8B_Unaligned_BETA-Q5_K_M-GGUF

llama-cpp
11
0

Rebecca-8B-TIES-Q5_K_M-GGUF

llama-cpp
8
0

Kosmos-EVAA-Franken-Immersive-v41-8B

llama
7
1

483415566-6-Q5_K_M-GGUF

This model was converted to GGUF format from `MrRobotoAI/483415566-6` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
7
0

bh-23

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-22 and jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B. A configuration sketch follows this entry.

llama
7
0
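
For orientation, a minimal mergekit SLERP configuration for a merge like bh-23 might look like the sketch below. The source models are those named above; the layer range and the interpolation weight `t` are illustrative assumptions, not the card's actual values.

```yaml
# Sketch of a mergekit SLERP config; layer range and t are illustrative assumptions
slices:
  - sources:
      - model: jaspionjader/bh-22
        layer_range: [0, 32]   # all 32 layers of an 8B Llama-style model
      - model: jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: jaspionjader/bh-22
parameters:
  t: 0.5   # 0 keeps the base model's weights, 1 keeps the other model's
dtype: bfloat16
```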

bh-36

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B and jaspionjader/bh-34.

llama
7
0

Kosmos-VENN-8B-Q5_K_M-GGUF

llama-cpp
6
0

Kosmos-Aurora_faustus-8B-Q5_K_M-GGUF

llama-cpp
6
0

bh-11

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B and jaspionjader/bh-10.

llama
6
0

Kosmos-EVAA-Franken-v38-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/fct-18-8b and jaspionjader/fct-14-8b.

llama
5
2

Kosmos-EVAA-immersive-sof-v44-8B

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/sof-10 as the base and merging in jaspionjader/sof-14 and jaspionjader/sof-13. A configuration sketch follows this entry.

llama
5
2
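
Model Stock merges differ from SLERP in that they average several fine-tunes around an explicit base model. A minimal mergekit sketch for a merge like the one above, with the base and source models from the card and the dtype assumed:

```yaml
# Sketch of a mergekit Model Stock config; dtype is an assumption
models:
  - model: jaspionjader/sof-14
  - model: jaspionjader/sof-13
merge_method: model_stock
base_model: jaspionjader/sof-10
dtype: bfloat16
```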

bh-56

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-55 as the base and merging in jaspionjader/bh-48, jaspionjader/slu-37, jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B, jaspionjader/bh-47, and jaspionjader/bh-49.

llama
5
1

Darkens-8B-Q5_K_M-GGUF

llama-cpp
5
0

Kosmos-EVAA-v6-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-v6-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
5
0

Kosmos-EVAA-v9-8B-Q5_K_M-GGUF

llama-cpp
5
0

Kosmos-EVAA-PRP-v30-8B-Q5_K_M-GGUF

llama-cpp
5
0

Kosmos-EVAA-Franken-v37-8B

llama
4
2

Kosmos-EVAA-Franken-Immersive-v40-8B

llama
4
2

8b-Base-Academic-5-Q5_K_M-GGUF

llama-cpp
4
1

Kosmos-EVAA-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-Elusive-VENN-Asymmetric-8B and jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B.

llama
4
1

Kosmos-EVAA-TSN-v21-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-TSN-v20-8B and jaspionjader/Kosmos-EVAA-TSN-light-8B.

llama
4
1

Kosmos-EVAA-Franken-stock-v43-8B

llama
4
1

Aurora_faustus-8B-LINEAR-Q5_K_M-GGUF

llama-cpp
4
0

WIP_TEST_PENDING_8-Q5_K_M-GGUF

llama-cpp
4
0

WIP_Damascus-8B-TIES-Q5_K_M-GGUF

llama-cpp
4
0

Ministrations-8B-v1-Q5_K_M-GGUF

llama-cpp
4
0

BaeZel-8B-LINEAR-Q5_K_M-GGUF

llama-cpp
4
0

Frigg-v1.4-8b-HIGH-FANTASY8-Q5_K_M-GGUF

llama-cpp
4
0

Kosmos-Elusive-VENN-Aurora_faustus-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
4
0

Kosmos-EVAA-v2-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-v2-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
4
0

Auro-Kosmos-EVAA-v2-8B-Q5_K_M-GGUF

llama-cpp
4
0

Kosmos-EVAA-v9-TitanFusion-Mix-8B-Q5_K_M-GGUF

llama-cpp
4
0

Kosmos-EVAA-v12-8B-Q5_K_M-GGUF

llama-cpp
4
0

Kosmos-EVAA-gamma-light-8B-Q5_K_M-GGUF

llama-cpp
4
0

Kosmos-EVAA-gamma-alt-8B-Q5_K_M-GGUF

llama-cpp
4
0

Kosmos-EVAA-gamma-light-alt-8B-Q5_K_M-GGUF

llama-cpp
4
0

Kosmos-EVAA-gamma-ultra-light-8B-Q5_K_M-GGUF

llama-cpp
4
0

gamma-Kosmos-EVAA-v2-8B-Q5_K_M-GGUF

llama-cpp
4
0

Kosmos-EVAA-gamma-v18-8B-Q5_K_M-GGUF

llama-cpp
4
0

sof-12

llama
4
0

bh-20

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-18 and jaspionjader/fr-18-8b.

llama
4
0

Kosmos-EVAA-gamma-v14-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-light-8B and jaspionjader/Kosmos-EVAA-gamma-v13-8B.

llama
3
2

Kosmos-EVAA-mix-v35-8B

llama
3
2

Kosmos-EVAA-TSN-v22-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-TSN-v21-8B and jaspionjader/Kosmos-EVAA-TSN-v19-8B.

llama
3
1

PRP-Kosmos-EVAA-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining Gryphe/Pantheon-RP-1.0-8b-Llama-3 and jaspionjader/Kosmos-EVAA-gamma-v18-8B.

llama
3
1

Kosmos-EVAA-PRP-v25-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-PRP-v23-8B and jaspionjader/Kosmos-EVAA-PRP-v24-8B.

llama
3
1

Kosmos-EVAA-PRP-v33-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-PRP-v32-8B and jaspionjader/Kosmos-EVAA-PRP-v30-8B.

llama
3
1

bh-48

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-46 and jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B.

llama
3
1

bh-62

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B as the base and merging in jaspionjader/bh-60 and jaspionjader/bh-61.

llama
3
1

8b-Base-Academic-14-Q5_K_M-GGUF

llama-cpp
3
0

Kosmos-Elusive-8b-gguf

llama-cpp
3
0

Kosmos-Elusive-VENN-Asymmetric-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-Elusive-VENN-Asymmetric-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
3
0

Auro-Kosmos-EVAA-v2.1-8B-Q5_K_M-GGUF

llama-cpp
3
0

Auro-Kosmos-EVAA-v2.3-8B-Q5_K_M-GGUF

llama-cpp
3
0

Kosmos-EVAA-v7-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v6-8B and jaspionjader/Kosmos-EVAA-v3-8B.

llama
3
0

Kosmos-EVAA-v8-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-v8-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
3
0

Kosmos-EVAA-Fusion-8B-Q5_K_M-GGUF

llama-cpp
3
0

Kosmos-EVAA-Fusion-light-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-Fusion-light-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
3
0

Kosmos-EVAA-v10-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-v10-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
3
0

Kosmos-EVAA-v11-8B-Q5_K_M-GGUF

llama-cpp
3
0

Kosmos-EVAA-gamma-8B-Q5_K_M-GGUF

llama-cpp
3
0

Kosmos-EVAA-gamma-v13-8B-Q5_K_M-GGUF

llama-cpp
3
0

Kosmos-EVAA-gamma-v16-8B-Q5_K_M-GGUF

llama-cpp
3
0

Kosmos-EVAA-gamma-v17-8B-Q5_K_M-GGUF

llama-cpp
3
0

gamma-Kosmos-EVAA-v3-8B-Q5_K_M-GGUF

llama-cpp
3
0

Kosmos-EVAA-TSN-v19-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-TSN-light-8B and jaspionjader/Kosmos-EVAA-gamma-v18-8B.

llama
3
0

Kosmos-EVAA-PRP-v26-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-TSN-v21-8B and jaspionjader/Kosmos-EVAA-PRP-v25-8B.

llama
3
0

Kosmos-EVAA-PRP-v27-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-PRP-v26-8B and jaspionjader/Kosmos-EVAA-PRP-v25-8B.

llama
3
0

Kosmos-EVAA-Franken-v38-8B-Q5_K_M-GGUF

llama-cpp
3
0

ek-5

llama
3
0

slu-1

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B as the base and merging in crestf411/L3.1-8B-Slush-v1.1, crestf411/L3.1-8B-Dark-Planet-Slush, and jaspionjader/sof-14.

llama
3
0

slu-7

llama
3
0

slu-19

llama
3
0

slu-30

llama
3
0

bh-4

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/fr-18-8b and jaspionjader/bh-2.

llama
3
0

bh-8

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-6 and jaspionjader/fr-18-8b.

llama
3
0

bh-12

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-10 and jaspionjader/fr-18-8b.

llama
3
0

bh-15

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-14 and jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B.

llama
3
0

bh-27

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B and jaspionjader/bh-26.

llama
3
0

bh-43

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-42 and jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B.

llama
3
0

bh-44

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B and jaspionjader/bh-42.

llama
3
0

bh-55

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/fr-18-8b as the base and merging in jaspionjader/bh-54, jaspionjader/bh-48, and jaspionjader/bh-50.

llama
3
0

bh-57

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-56 and jaspionjader/Kosmos-EVAA-Franken-v38-8B.

llama
3
0

Kosmos-EVAA-Franken-Immersive-v39-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/fr-18-8b and jaspionjader/Kosmos-EVAA-Franken-v38-8B.

llama
2
4

Kosmos-EVAA-v12-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v3-8B and jaspionjader/Kosmos-EVAA-v11-8B.

llama
2
2

Kosmos-EVAA-gamma-v17-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-v14-8B and jaspionjader/Kosmos-EVAA-gamma-v16-8B.

llama
2
2

Kosmos-EVAA-gamma-v18-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/gamma-Kosmos-EVAA-v3-8B and jaspionjader/Kosmos-EVAA-gamma-v17-8B.

llama
2
2

Kosmos-EVAA-Franken-v36-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/f-8-8b and jaspionjader/f-5-8b.

llama
2
2

Kosmos-Aurora_faustus-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining Khetterman/Kosmos-8B-v1 and DreadPoor/Aurora_faustus-8B-LINEAR.

llama
2
1

Kosmos-EVAA-v3-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Auro-Kosmos-EVAA-v2.2-8B and jaspionjader/Kosmos-EVAA-v2-8B.

llama
2
1

Kosmos-EVAA-gamma-alt-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining johnsutor/Llama-3-8B-Instructbreadcrumbs-density-0.1-gamma-0.01 and jaspionjader/Kosmos-EVAA-v3-8B.

llama
2
1

Kosmos-EVAA-gamma-light-alt-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-alt-8B and jaspionjader/Kosmos-EVAA-v3-8B.

llama
2
1

Kosmos-EVAA-gamma-v13-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-light-8B and jaspionjader/Kosmos-EVAA-gamma-light-alt-8B.

llama
2
1

Kosmos-EVAA-gamma-v15-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-ultra-light-8B and jaspionjader/Kosmos-EVAA-gamma-v14-8B.

llama
2
1

Kosmos-EVAA-gamma-v16-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-v15-8B and jaspionjader/Kosmos-EVAA-gamma-light-8B.

llama
2
1

PRP-Kosmos-EVAA-light-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/PRP-Kosmos-EVAA-8B and jaspionjader/Kosmos-EVAA-TSN-v22-8B.

llama
2
1

Kosmos-EVAA-PRP-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining Gryphe/Pantheon-RP-1.0-8b-Llama-3 and jaspionjader/Kosmos-EVAA-gamma-v18-8B.

llama
2
1

Kosmos-EVAA-PRP-light-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-PRP-8B and jaspionjader/Kosmos-EVAA-TSN-v22-8B.

llama
2
1

Kosmos-EVAA-PRP-v34-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-PRP-v33-8B and jaspionjader/Kosmos-EVAA-PRP-v31-8B.

llama
2
1

sof-14

llama
2
1

bh-14

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-10 as the base and merging in jaspionjader/bh-13, jaspionjader/bh-12, and jaspionjader/bh-11.

llama
2
1

bh-26

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-22 as the base and merging in jaspionjader/bh-25, jaspionjader/bh-24, and jaspionjader/bh-23.

llama
2
1

Kosmos-EVAA-immersive-mix-v45-8B

llama
2
1

bh-64

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-62 and jaspionjader/bh-63.

llama
2
1

WIP-Testing_Something-8B-TIES-Q5_K_M-GGUF

llama-cpp
2
0

mergekit-slerp-fmrazcr-Q4_K_M-GGUF

llama-cpp
2
0

Kosmos-Elusive-VENN-8B-Q5_K_M-GGUF

llama-cpp
2
0

Kosmos-EVAA-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
2
0

Auro-Kosmos-EVAA-v2.2-8B-Q5_K_M-GGUF

llama-cpp
2
0

Kosmos-EVAA-v3-8B-Q5_K_M-GGUF

llama-cpp
2
0

Kosmos-EVAA-v4-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-v4-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
2
0

Kosmos-EVAA-v5-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-v5-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
2
0

Kosmos-EVAA-gamma-ultra-light-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-light-8B and jaspionjader/Kosmos-EVAA-v12-8B.

llama
2
0

Kosmos-EVAA-gamma-v14-8B-Q5_K_M-GGUF

llama-cpp
2
0

gamma-Kosmos-EVAA-8B-Q5_K_M-GGUF

llama-cpp
2
0

Kosmos-EVAA-TSN-v20-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-TSN-v19-8B and jaspionjader/TSN-Kosmos-EVAA-v2-8B.

llama
2
0

Kosmos-EVAA-PRP-v23-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/PRP-Kosmos-EVAA-light-8B and jaspionjader/Kosmos-EVAA-PRP-light-8B.

llama
2
0

Kosmos-EVAA-PRP-v24-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-PRP-v24-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
2
0

Kosmos-EVAA-PRP-v30-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-PRP-v29-8B and jaspionjader/Kosmos-EVAA-gamma-light-8B.

llama
2
0

Kosmos-EVAA-PRP-v32-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-PRP-v29-8B and jaspionjader/Kosmos-EVAA-PRP-v31-8B.

llama
2
0

Kosmos-EVAA-PRP-v31-8B-Q5_K_M-GGUF

llama-cpp
2
0

fr-14-8b

llama
2
0

kstc-2-8b

llama
2
0

sof-7

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/Kosmos-EVAA-Franken-stock-v43-8B as the base and merging in jaspionjader/ek-6, jaspionjader/sof-6, and jaspionjader/sof-5.

llama
2
0

sof-8

llama
2
0

sof-9

llama
2
0

Kosmos-EVAA-immersive-sof-v44-8B-Q5_K_M-GGUF

llama-cpp
2
0

slu-3

llama
2
0

slu-8

llama
2
0

slu-12

llama
2
0

slu-15

llama
2
0

slu-24

llama
2
0

bh-3

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-2 and jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B.

llama
2
0

bh-5

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/slu-29 and jaspionjader/bh-2.

llama
2
0

bh-7

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-6 and jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B.

llama
2
0

bh-13

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-10 and jaspionjader/slu-37.

llama
2
0

bh-21

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-18 and jaspionjader/slu-37.

llama
2
0

bh-24

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/fr-18-8b and jaspionjader/bh-22.

llama
2
0

bh-28

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/fr-18-8b and jaspionjader/bh-26.

llama
2
0

bh-29

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-26 and jaspionjader/slu-37.

llama
2
0

bh-30

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-26 as the base and merging in jaspionjader/bh-28, jaspionjader/bh-29, and jaspionjader/bh-27.

llama
2
0

bh-32

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B and jaspionjader/bh-30.

llama
2
0

bh-33

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/slu-37 and jaspionjader/bh-30.

llama
2
0

bh-35

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-34 and jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B.

llama
2
0

bh-40

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-38 and jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B.

llama
2
0

bh-42

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-38 as the base and merging in jaspionjader/bh-41, jaspionjader/bh-39, and jaspionjader/bh-40.

llama
2
0

bh-47

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-46 and jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B.

llama
2
0

Kosmos-VENN-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining DreadPoor/UNTESTED-VENN1.2-8B-ModelStock and Khetterman/Kosmos-8B-v1.

llama
1
2

Kosmos-EVAA-v9-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v8-8B and jaspionjader/Kosmos-EVAA-v3-8B.

llama
1
2

Kosmos-EVAA-Fusion-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v9-TitanFusion-Mix-8B and jaspionjader/Kosmos-EVAA-v3-8B.

llama
1
2

Kosmos-EVAA-TSN-light-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-light-8B and jaspionjader/Kosmos-EVAA-TSN-8B.

llama
1
2

Kosmos-Elusive-VENN-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-VENN-8B and jaspionjader/Kosmos-Elusive-8b.

llama
1
1

Kosmos-Elusive-VENN-Asymmetric-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-Elusive-VENN-8B and DreadPoor/AsymmetricLinearity-8B-ModelStock.

llama
1
1

Auro-Kosmos-EVAA-v2.1-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Auro-Kosmos-EVAA-v2-8B and jaspionjader/Kosmos-EVAA-v2-8B.

llama
1
1

Auro-Kosmos-EVAA-v2.2-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Auro-Kosmos-EVAA-v2.1-8B and jaspionjader/Kosmos-Elusive-VENN-8B.

llama
1
1

Kosmos-EVAA-v8-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v3-8B and jaspionjader/Kosmos-EVAA-v7-8B.

llama
1
1

Kosmos-EVAA-v9-TitanFusion-Mix-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v9-8B and bunnycore/Llama-3.1-8B-TitanFusion-Mix.

llama
1
1

Kosmos-EVAA-v11-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v10-8B and jaspionjader/Auro-Kosmos-EVAA-v2.2-8B.

llama
1
1

Kosmos-EVAA-gamma-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v12-8B and johnsutor/Llama-3-8B-Instructbreadcrumbs-density-0.1-gamma-0.01.

llama
1
1

TSN-Kosmos-EVAA-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-light-8B and bunnycore/Tulu-3.1-8B-SuperNova.

llama
1
1

TSN-Kosmos-EVAA-v2-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-light-8B and jaspionjader/TSN-Kosmos-EVAA-8B.

llama
1
1

Kosmos-EVAA-TSN-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining bunnycore/Tulu-3.1-8B-SuperNova and jaspionjader/Kosmos-EVAA-gamma-light-8B.

llama
1
1

Kosmos-EVAA-PRP-v29-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-PRP-v26-8B and jaspionjader/Kosmos-EVAA-PRP-v28-8B.

llama
1
1

Kosmos-EVAA-PRP-v31-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-PRP-v30-8B and jaspionjader/Kosmos-EVAA-gamma-light-8B.

llama
1
1

f-2-8b

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/f-1-8b and jaspionjader/Kosmos-EVAA-mix-v35-8B.

llama
1
1

bbb-6

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/kstc-5-8b as the base and merging in jaspionjader/bbb-5 and jaspionjader/kstc-4-8b.

llama
1
1

Kosmos-EVAA-Franken-stock-v42-8B

llama
1
1

ek-1

llama
1
1

sof-5

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/Kosmos-EVAA-Franken-stock-v43-8B as the base and merging in jaspionjader/sof-3, jaspionjader/ek-6, and jaspionjader/sof-4.

llama
1
1

bh-34

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-30 as the base and merging in jaspionjader/bh-31, jaspionjader/bh-33, and jaspionjader/bh-32.

llama
1
1

bh-50

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-46 as the base and merging in jaspionjader/bh-49, jaspionjader/bh-48, and jaspionjader/bh-47.

llama
1
1

Kosmos-EVAA-immersive-mix-v45.1-8B

llama
1
1

Kosmos-EVAA-v2-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-8B and jaspionjader/Kosmos-Elusive-VENN-8B.

llama
1
0

Kosmos-EVAA-v4-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Auro-Kosmos-EVAA-v2.3-8B and jaspionjader/Kosmos-EVAA-v3-8B.

llama
1
0

Kosmos-EVAA-v5-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v4-8B and jaspionjader/Kosmos-EVAA-v3-8B.

llama
1
0

Kosmos-EVAA-v6-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Auro-Kosmos-EVAA-v2.2-8B and jaspionjader/Kosmos-EVAA-v5-8B.

llama
1
0

Kosmos-EVAA-v7-8B-Q5_K_M-GGUF

Converted to GGUF format from `jaspionjader/Kosmos-EVAA-v7-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as described under 483415566-6-Q5_K_M-GGUF above.

llama-cpp
1
0

Kosmos-EVAA-gamma-v15-8B-Q5_K_M-GGUF

llama-cpp
1
0

TSN-Kosmos-EVAA-8B-Q5_K_M-GGUF

llama-cpp
1
0

Kosmos-EVAA-TSN-v22-8B-Q5_K_M-GGUF

llama-cpp
1
0

PRP-Kosmos-EVAA-8B-Q5_K_M-GGUF

llama-cpp
1
0

PRP-Kosmos-EVAA-light-8B-Q5_K_M-GGUF

llama-cpp
1
0

Kosmos-EVAA-PRP-v24-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-TSN-v22-8B and jaspionjader/Kosmos-EVAA-PRP-v23-8B.

llama
1
0

Kosmos-EVAA-PRP-v26-8B-Q5_K_M-GGUF

llama-cpp
1
0

f-3-8b

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-mix-v35-8B and jaspionjader/f-2-8b.

llama
1
0

dp-4-8b

llama
1
0

dp-6-8b

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/dp-2-8b and jaspionjader/dp-5-8b.

llama
1
0

Kosmos-EVAA-Franken-v36-8B-Q5_K_M-GGUF

llama-cpp
1
0

fr-4-8b

llama
1
0

fr-15-8b

llama
1
0

fr-16-8b

llama
1
0

fr-17-8b

llama
1
0

knf-1-8b

This is a merge of pre-trained language models created with mergekit's passthrough method, combining NeverSleep/Lumimaid-v0.2-8B, Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B, jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B, jaspionjader/Kosmos-EVAA-Franken-v38-8B, and qingy2024/Albatross-8B-Instruct-v3. A configuration sketch follows this entry.

llama
1
0

knfp-1

llama
1
0

kstc-3-8b

llama
1
0

bbb-1

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/kstc-5-8b as the base and merging in bunnycore/Llama-3.1-8B-TitanFusion-Mix, djuna/L3.1-PromissumMane-8B-Della-calc, DreadPoor/ONeil-modelstock-8B, jaspionjader/Kosmos-EVAA-v9-TitanFusion-Mix-8B, and DreadPoor/Aurora_faustus-8B-LINEAR.

llama
1
0

bbb-2

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/kstc-5-8b as the base and merging in jaspionjader/bbb-1 and jaspionjader/kstc-4-8b.

llama
1
0

bbb-3

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/kstc-5-8b as the base and merging in jaspionjader/kstc-4-8b and jaspionjader/bbb-2.

llama
1
0

bbb-4

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bbb-3 as the base and merging in jaspionjader/kstc-4-8b and jaspionjader/kstc-5-8b.

llama
1
0

bbb-5

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/kstc-5-8b as the base and merging in jaspionjader/bbb-3, jaspionjader/bbb-4, and jaspionjader/kstc-4-8b.

llama
1
0

ek-4

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/ek-1 as the base and merging in jaspionjader/ek-3 and jaspionjader/Kosmos-EVAA-Franken-stock-v42-8B.

llama
1
0

sof-13

llama
1
0

slu-18

llama
1
0

bh-1

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B and refuelai/Llama-3-Refueled.

llama
1
0

bh-2

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/slu-37 as the base and merging in khoantap/llama-linear-0.5-1-0.5-merge and jaspionjader/bh-1.

llama
1
0

bh-6

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-2 as the base and merging in jaspionjader/bh-3, jaspionjader/bh-5, and jaspionjader/bh-4.

llama
1
0

bh-9

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-6 and jaspionjader/slu-37.

llama
1
0

bh-10

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-6 as the base and merging in jaspionjader/bh-7, jaspionjader/bh-8, and jaspionjader/bh-9.

llama
1
0

bh-16

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/fr-18-8b and jaspionjader/bh-14.

llama
1
0

bh-17

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-14 and jaspionjader/slu-37.

llama
1
0

bh-18

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-14 as the base and merging in jaspionjader/bh-15, jaspionjader/bh-16, and jaspionjader/bh-17.

llama
1
0

bh-19

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B and jaspionjader/bh-18.

llama
1
0

bh-25

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/slu-37 and jaspionjader/bh-22.

llama
1
0

bh-31

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B and jaspionjader/bh-30.

llama
1
0

bh-37

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-34 and jaspionjader/slu-37.

llama
1
0

bh-38

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-34 as the base and merging in jaspionjader/bh-37, jaspionjader/bh-36, and jaspionjader/bh-35.

llama
1
0

bh-39

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-38 and jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B.

llama
1
0

bh-41

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-38 and jaspionjader/slu-37.

llama
1
0

bh-46

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-42 as the base and merging in jaspionjader/bh-43, jaspionjader/bh-45, and jaspionjader/bh-44.

llama
1
0

bh-49

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-46 and jaspionjader/slu-37.

llama
1
0

bh-51

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-48 and jaspionjader/bh-50.

llama
1
0

bh-52

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/bh-50 and jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B.

llama
1
0

bh-54

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-26 as the base and merging in jaspionjader/bh-53, jaspionjader/bh-51, jaspionjader/bh-50, jaspionjader/bh-52, and jaspionjader/bh-48.

llama
1
0

bh-61

This is a merge of pre-trained language models created with mergekit's SCE method, using jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B as the base and merging in collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate and prithivMLmods/Llama-8B-Distill-CoT. A configuration sketch follows this entry.

llama
1
0
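
SCE selects and fuses the most significant parameter differences relative to a base model. A minimal sketch for a merge like bh-61, using the models named on the card; the `select_topk` value and dtype are illustrative assumptions, not the actual settings.

```yaml
# Sketch of a mergekit SCE config; select_topk is an illustrative assumption
models:
  - model: collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate
  - model: prithivMLmods/Llama-8B-Distill-CoT
merge_method: sce
base_model: jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B
parameters:
  select_topk: 0.15   # fraction of highest-variance elements kept per tensor
dtype: bfloat16
```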

bh-63

This is a merge of pre-trained language models created with mergekit's Model Stock method, using jaspionjader/bh-62 as the base and merging in mergekit-community/aka-test and jaspionjader/bh-61.

llama
1
0

Kosmos-Elusive-VENN-Aurora_faustus-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-Aurora_faustus-8B and jaspionjader/Kosmos-Elusive-VENN-8B.

llama
0
2

Kosmos-EVAA-Fusion-light-8B

llama
0
2

Kosmos-EVAA-gamma-light-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-gamma-8B and jaspionjader/Kosmos-EVAA-v12-8B.

llama
0
2

Kosmos-Elusive-8b

This is a merge of pre-trained language models created with mergekit's SLERP method, combining Khetterman/Kosmos-8B-v1 and DreadPoor/Elusive1.2-8B-ModelStock.

llama
0
1

Auro-Kosmos-EVAA-v2-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-v2-8B and DreadPoor/Aurora_faustus-8B-LINEAR.

llama
0
1

Auro-Kosmos-EVAA-v2.3-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Auro-Kosmos-EVAA-v2.2-8B and jaspionjader/Kosmos-Elusive-VENN-8B.

llama
0
1

Kosmos-EVAA-v10-8B

This is a merge of pre-trained language models created with mergekit's SLERP method, combining jaspionjader/Kosmos-EVAA-Fusion-light-8B and jaspionjader/Kosmos-EVAA-v3-8B.

llama
0
1

ek-3

llama
0
1

bh-60

This is a merge of pre-trained language models created with mergekit's SCE method, using jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B as the base and merging in collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate and prithivMLmods/Llama-8B-Distill-CoT.

llama
0
1