jaspionjader
LLAMA-3_8B_Unaligned_BETA-Q5_K_M-GGUF
Rebecca-8B-TIES-Q5_K_M-GGUF
Kosmos-EVAA-Franken-Immersive-v41-8B
483415566-6-Q5_K_M-GGUF
jaspionjader/483415566-6-Q5_K_M-GGUF This model was converted to GGUF format from `MrRobotoAI/483415566-6` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp. Step 1: Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
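The install-and-build steps above can be sketched as shell commands. The `--hf-file` name below is an assumption based on GGUF-my-repo's usual lowercase naming, not a file name confirmed by this card:

```shell
# Step 1: install llama.cpp through brew (Mac and Linux)
brew install llama.cpp

# Or build from source with CURL support plus hardware-specific flags
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make          # add LLAMA_CUDA=1 for NVIDIA GPUs on Linux

# Step 3: run the converted checkpoint straight from the Hugging Face repo
llama-cli --hf-repo jaspionjader/483415566-6-Q5_K_M-GGUF \
  --hf-file 483415566-6-q5_k_m.gguf \
  -p "Once upon a time"
```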
bh-23
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-22 jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B The following YAML configuration was used to produce this model:
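The YAML configurations themselves are not reproduced in this listing. For reference, a mergekit SLERP config for two 8B Llama models of this kind typically looks like the following; the layer ranges and interpolation schedule are illustrative placeholders, not the values actually used for bh-23:

```yaml
slices:
  - sources:
      - model: jaspionjader/bh-22
        layer_range: [0, 32]
      - model: jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: jaspionjader/bh-22
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5   # default interpolation weight for all other tensors
dtype: bfloat16
```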
bh-36
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B jaspionjader/bh-34 The following YAML configuration was used to produce this model:
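The SLERP (spherical linear interpolation) method used throughout these merges blends each pair of weight tensors along the great-circle arc between them rather than along a straight line, which preserves tensor magnitude better than plain averaging. A minimal numpy sketch of the core operation (mergekit's actual implementation additionally handles per-layer and per-filter interpolation schedules):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical interpolation between flattened weight tensors v0 and v1."""
    v0 = np.asarray(v0, dtype=np.float64)
    v1 = np.asarray(v1, dtype=np.float64)
    # Angle between the normalized tensors
    dot = np.dot(v0 / np.linalg.norm(v0), v1 / np.linalg.norm(v1))
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if np.sin(theta) < eps:
        # Nearly colinear tensors: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    # Great-circle interpolation weights
    return (np.sin((1.0 - t) * theta) * v0
            + np.sin(t * theta) * v1) / np.sin(theta)
```

At t=0 this returns v0, at t=1 it returns v1, and for orthogonal unit vectors t=0.5 lands on the unit sphere midway between them.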
Kosmos-VENN-8B-Q5_K_M-GGUF
Kosmos-Aurora_faustus-8B-Q5_K_M-GGUF
bh-11
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B jaspionjader/bh-10 The following YAML configuration was used to produce this model:
Kosmos-EVAA-Franken-v38-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/fct-18-8b jaspionjader/fct-14-8b The following YAML configuration was used to produce this model:
Kosmos-EVAA-immersive-sof-v44-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/sof-10 as a base. The following models were included in the merge: jaspionjader/sof-14 jaspionjader/sof-13 The following YAML configuration was used to produce this model:
bh-56
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-55 as a base. The following models were included in the merge: jaspionjader/bh-48 jaspionjader/slu-37 jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B jaspionjader/bh-47 jaspionjader/bh-49 The following YAML configuration was used to produce this model:
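Model Stock, unlike SLERP, takes several fine-tuned variants and derives the interpolation weight toward the base model from the geometry of the checkpoints, so the config simply lists the models with no manual weights. A representative mergekit config for this merge would look like the following (the dtype is an illustrative assumption; the actual config is not shown in this listing):

```yaml
models:
  - model: jaspionjader/bh-48
  - model: jaspionjader/slu-37
  - model: jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B
  - model: jaspionjader/bh-47
  - model: jaspionjader/bh-49
merge_method: model_stock
base_model: jaspionjader/bh-55
dtype: bfloat16
```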
Darkens-8B-Q5_K_M-GGUF
Kosmos-EVAA-v6-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-v6-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-v6-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Kosmos-EVAA-v9-8B-Q5_K_M-GGUF
Kosmos-EVAA-PRP-v30-8B-Q5_K_M-GGUF
Kosmos-EVAA-Franken-v37-8B
Kosmos-EVAA-Franken-Immersive-v40-8B
8b-Base-Academic-5-Q5_K_M-GGUF
Kosmos-EVAA-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-Elusive-VENN-Asymmetric-8B jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-TSN-v21-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-TSN-v20-8B jaspionjader/Kosmos-EVAA-TSN-light-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-Franken-stock-v43-8B
Aurora_faustus-8B-LINEAR-Q5_K_M-GGUF
WIP_TEST_PENDING_8-Q5_K_M-GGUF
WIP_Damascus-8B-TIES-Q5_K_M-GGUF
Ministrations-8B-v1-Q5_K_M-GGUF
BaeZel-8B-LINEAR-Q5_K_M-GGUF
Frigg-v1.4-8b-HIGH-FANTASY8-Q5_K_M-GGUF
Kosmos-Elusive-VENN-Aurora_faustus-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-Elusive-VENN-Aurora_faustus-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Kosmos-EVAA-v2-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-v2-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-v2-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Auro-Kosmos-EVAA-v2-8B-Q5_K_M-GGUF
Kosmos-EVAA-v9-TitanFusion-Mix-8B-Q5_K_M-GGUF
Kosmos-EVAA-v12-8B-Q5_K_M-GGUF
Kosmos-EVAA-gamma-light-8B-Q5_K_M-GGUF
Kosmos-EVAA-gamma-alt-8B-Q5_K_M-GGUF
Kosmos-EVAA-gamma-light-alt-8B-Q5_K_M-GGUF
Kosmos-EVAA-gamma-ultra-light-8B-Q5_K_M-GGUF
gamma-Kosmos-EVAA-v2-8B-Q5_K_M-GGUF
Kosmos-EVAA-gamma-v18-8B-Q5_K_M-GGUF
sof-12
bh-20
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-18 jaspionjader/fr-18-8b The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-v14-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-light-8B jaspionjader/Kosmos-EVAA-gamma-v13-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-mix-v35-8B
Kosmos-EVAA-TSN-v22-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-TSN-v21-8B jaspionjader/Kosmos-EVAA-TSN-v19-8B The following YAML configuration was used to produce this model:
PRP-Kosmos-EVAA-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Gryphe/Pantheon-RP-1.0-8b-Llama-3 jaspionjader/Kosmos-EVAA-gamma-v18-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v25-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-PRP-v23-8B jaspionjader/Kosmos-EVAA-PRP-v24-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v33-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-PRP-v32-8B jaspionjader/Kosmos-EVAA-PRP-v30-8B The following YAML configuration was used to produce this model:
bh-48
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-46 jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B The following YAML configuration was used to produce this model:
bh-62
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B as a base. The following models were included in the merge: jaspionjader/bh-60 jaspionjader/bh-61 The following YAML configuration was used to produce this model:
8b-Base-Academic-14-Q5_K_M-GGUF
Kosmos-Elusive-8b-gguf
Kosmos-Elusive-VENN-Asymmetric-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-Elusive-VENN-Asymmetric-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-Elusive-VENN-Asymmetric-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Auro-Kosmos-EVAA-v2.1-8B-Q5_K_M-GGUF
Auro-Kosmos-EVAA-v2.3-8B-Q5_K_M-GGUF
Kosmos-EVAA-v7-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v6-8B jaspionjader/Kosmos-EVAA-v3-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-v8-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-v8-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-v8-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Kosmos-EVAA-Fusion-8B-Q5_K_M-GGUF
Kosmos-EVAA-Fusion-light-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-Fusion-light-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-Fusion-light-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Kosmos-EVAA-v10-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-v10-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-v10-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Kosmos-EVAA-v11-8B-Q5_K_M-GGUF
Kosmos-EVAA-gamma-8B-Q5_K_M-GGUF
Kosmos-EVAA-gamma-v13-8B-Q5_K_M-GGUF
Kosmos-EVAA-gamma-v16-8B-Q5_K_M-GGUF
Kosmos-EVAA-gamma-v17-8B-Q5_K_M-GGUF
gamma-Kosmos-EVAA-v3-8B-Q5_K_M-GGUF
Kosmos-EVAA-TSN-v19-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-TSN-light-8B jaspionjader/Kosmos-EVAA-gamma-v18-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v26-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-TSN-v21-8B jaspionjader/Kosmos-EVAA-PRP-v25-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v27-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-PRP-v26-8B jaspionjader/Kosmos-EVAA-PRP-v25-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-Franken-v38-8B-Q5_K_M-GGUF
ek-5
slu-1
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B as a base. The following models were included in the merge: crestf411/L3.1-8B-Slush-v1.1 crestf411/L3.1-8B-Dark-Planet-Slush jaspionjader/sof-14 The following YAML configuration was used to produce this model:
slu-7
slu-19
slu-30
bh-4
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/fr-18-8b jaspionjader/bh-2 The following YAML configuration was used to produce this model:
bh-8
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-6 jaspionjader/fr-18-8b The following YAML configuration was used to produce this model:
bh-12
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-10 jaspionjader/fr-18-8b The following YAML configuration was used to produce this model:
bh-15
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-14 jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B The following YAML configuration was used to produce this model:
bh-27
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B jaspionjader/bh-26 The following YAML configuration was used to produce this model:
bh-43
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-42 jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B The following YAML configuration was used to produce this model:
bh-44
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B jaspionjader/bh-42 The following YAML configuration was used to produce this model:
bh-55
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/fr-18-8b as a base. The following models were included in the merge: jaspionjader/bh-54 jaspionjader/bh-48 jaspionjader/bh-50 The following YAML configuration was used to produce this model:
bh-57
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-56 jaspionjader/Kosmos-EVAA-Franken-v38-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-Franken-Immersive-v39-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/fr-18-8b jaspionjader/Kosmos-EVAA-Franken-v38-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-v12-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v3-8B jaspionjader/Kosmos-EVAA-v11-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-v17-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-v14-8B jaspionjader/Kosmos-EVAA-gamma-v16-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-v18-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/gamma-Kosmos-EVAA-v3-8B jaspionjader/Kosmos-EVAA-gamma-v17-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-Franken-v36-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/f-8-8b jaspionjader/f-5-8b The following YAML configuration was used to produce this model:
Kosmos-Aurora_faustus-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Khetterman/Kosmos-8B-v1 DreadPoor/Aurora_faustus-8B-LINEAR The following YAML configuration was used to produce this model:
Kosmos-EVAA-v3-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Auro-Kosmos-EVAA-v2.2-8B jaspionjader/Kosmos-EVAA-v2-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-alt-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: johnsutor/Llama-3-8B-Instructbreadcrumbs-density-0.1-gamma-0.01 jaspionjader/Kosmos-EVAA-v3-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-light-alt-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-alt-8B jaspionjader/Kosmos-EVAA-v3-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-v13-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-light-8B jaspionjader/Kosmos-EVAA-gamma-light-alt-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-v15-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-ultra-light-8B jaspionjader/Kosmos-EVAA-gamma-v14-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-v16-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-v15-8B jaspionjader/Kosmos-EVAA-gamma-light-8B The following YAML configuration was used to produce this model:
PRP-Kosmos-EVAA-light-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/PRP-Kosmos-EVAA-8B jaspionjader/Kosmos-EVAA-TSN-v22-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Gryphe/Pantheon-RP-1.0-8b-Llama-3 jaspionjader/Kosmos-EVAA-gamma-v18-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-light-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-PRP-8B jaspionjader/Kosmos-EVAA-TSN-v22-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v34-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-PRP-v33-8B jaspionjader/Kosmos-EVAA-PRP-v31-8B The following YAML configuration was used to produce this model:
sof-14
bh-14
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-10 as a base. The following models were included in the merge: jaspionjader/bh-13 jaspionjader/bh-12 jaspionjader/bh-11 The following YAML configuration was used to produce this model:
bh-26
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-22 as a base. The following models were included in the merge: jaspionjader/bh-25 jaspionjader/bh-24 jaspionjader/bh-23 The following YAML configuration was used to produce this model:
Kosmos-EVAA-immersive-mix-v45-8B
bh-64
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-62 jaspionjader/bh-63 The following YAML configuration was used to produce this model:
WIP-Testing_Something-8B-TIES-Q5_K_M-GGUF
mergekit-slerp-fmrazcr-Q4_K_M-GGUF
Kosmos-Elusive-VENN-8B-Q5_K_M-GGUF
Kosmos-EVAA-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Auro-Kosmos-EVAA-v2.2-8B-Q5_K_M-GGUF
Kosmos-EVAA-v3-8B-Q5_K_M-GGUF
Kosmos-EVAA-v4-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-v4-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-v4-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Kosmos-EVAA-v5-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-v5-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-v5-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Kosmos-EVAA-gamma-ultra-light-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-light-8B jaspionjader/Kosmos-EVAA-v12-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-v14-8B-Q5_K_M-GGUF
gamma-Kosmos-EVAA-8B-Q5_K_M-GGUF
Kosmos-EVAA-TSN-v20-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-TSN-v19-8B jaspionjader/TSN-Kosmos-EVAA-v2-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v23-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/PRP-Kosmos-EVAA-light-8B jaspionjader/Kosmos-EVAA-PRP-light-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v24-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-PRP-v24-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-PRP-v24-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model; llama.cpp usage is the same as for the other GGUF conversions listed here.
Kosmos-EVAA-PRP-v30-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-PRP-v29-8B jaspionjader/Kosmos-EVAA-gamma-light-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v32-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-PRP-v29-8B jaspionjader/Kosmos-EVAA-PRP-v31-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v31-8B-Q5_K_M-GGUF
fr-14-8b
kstc-2-8b
sof-7
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/Kosmos-EVAA-Franken-stock-v43-8B as a base. The following models were included in the merge: jaspionjader/ek-6 jaspionjader/sof-6 jaspionjader/sof-5 The following YAML configuration was used to produce this model:
sof-8
sof-9
Kosmos-EVAA-immersive-sof-v44-8B-Q5_K_M-GGUF
slu-3
slu-8
slu-12
slu-15
slu-24
bh-3
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-2 jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B The following YAML configuration was used to produce this model:
bh-5
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/slu-29 jaspionjader/bh-2 The following YAML configuration was used to produce this model:
bh-7
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-6 jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B The following YAML configuration was used to produce this model:
bh-13
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-10 jaspionjader/slu-37 The following YAML configuration was used to produce this model:
bh-21
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-18 jaspionjader/slu-37 The following YAML configuration was used to produce this model:
bh-24
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/fr-18-8b jaspionjader/bh-22 The following YAML configuration was used to produce this model:
bh-28
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/fr-18-8b jaspionjader/bh-26 The following YAML configuration was used to produce this model:
bh-29
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-26 jaspionjader/slu-37 The following YAML configuration was used to produce this model:
bh-30
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-26 as a base. The following models were included in the merge: jaspionjader/bh-28 jaspionjader/bh-29 jaspionjader/bh-27 The following YAML configuration was used to produce this model:
bh-32
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B jaspionjader/bh-30 The following YAML configuration was used to produce this model:
bh-33
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/slu-37 jaspionjader/bh-30 The following YAML configuration was used to produce this model:
bh-35
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-34 jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B The following YAML configuration was used to produce this model:
bh-40
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-38 jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B The following YAML configuration was used to produce this model:
bh-42
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-38 as a base. The following models were included in the merge: jaspionjader/bh-41 jaspionjader/bh-39 jaspionjader/bh-40 The following YAML configuration was used to produce this model:
bh-47
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-46 jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B The following YAML configuration was used to produce this model:
Kosmos-VENN-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: DreadPoor/UNTESTED-VENN1.2-8B-ModelStock Khetterman/Kosmos-8B-v1 The following YAML configuration was used to produce this model:
Kosmos-EVAA-v9-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v8-8B jaspionjader/Kosmos-EVAA-v3-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-Fusion-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v9-TitanFusion-Mix-8B jaspionjader/Kosmos-EVAA-v3-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-TSN-light-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-light-8B jaspionjader/Kosmos-EVAA-TSN-8B The following YAML configuration was used to produce this model:
Kosmos-Elusive-VENN-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-VENN-8B jaspionjader/Kosmos-Elusive-8b The following YAML configuration was used to produce this model:
Kosmos-Elusive-VENN-Asymmetric-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-Elusive-VENN-8B DreadPoor/AsymmetricLinearity-8B-ModelStock The following YAML configuration was used to produce this model:
Auro-Kosmos-EVAA-v2.1-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Auro-Kosmos-EVAA-v2-8B jaspionjader/Kosmos-EVAA-v2-8B The following YAML configuration was used to produce this model:
Auro-Kosmos-EVAA-v2.2-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Auro-Kosmos-EVAA-v2.1-8B jaspionjader/Kosmos-Elusive-VENN-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-v8-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v3-8B jaspionjader/Kosmos-EVAA-v7-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-v9-TitanFusion-Mix-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v9-8B bunnycore/Llama-3.1-8B-TitanFusion-Mix The following YAML configuration was used to produce this model:
Kosmos-EVAA-v11-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v10-8B jaspionjader/Auro-Kosmos-EVAA-v2.2-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-gamma-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v12-8B johnsutor/Llama-3-8B-Instructbreadcrumbs-density-0.1-gamma-0.01 The following YAML configuration was used to produce this model:
TSN-Kosmos-EVAA-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-light-8B bunnycore/Tulu-3.1-8B-SuperNova The following YAML configuration was used to produce this model:
TSN-Kosmos-EVAA-v2-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-light-8B jaspionjader/TSN-Kosmos-EVAA-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-TSN-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: bunnycore/Tulu-3.1-8B-SuperNova jaspionjader/Kosmos-EVAA-gamma-light-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v29-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-PRP-v26-8B jaspionjader/Kosmos-EVAA-PRP-v28-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v31-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-PRP-v30-8B jaspionjader/Kosmos-EVAA-gamma-light-8B The following YAML configuration was used to produce this model:
f-2-8b
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/f-1-8b jaspionjader/Kosmos-EVAA-mix-v35-8B The following YAML configuration was used to produce this model:
bbb-6
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/kstc-5-8b as a base. The following models were included in the merge: jaspionjader/bbb-5 jaspionjader/kstc-4-8b The following YAML configuration was used to produce this model:
Kosmos-EVAA-Franken-stock-v42-8B
ek-1
sof-5
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/Kosmos-EVAA-Franken-stock-v43-8B as a base. The following models were included in the merge: jaspionjader/sof-3 jaspionjader/ek-6 jaspionjader/sof-4 The following YAML configuration was used to produce this model:
bh-34
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-30 as a base. The following models were included in the merge: jaspionjader/bh-31 jaspionjader/bh-33 jaspionjader/bh-32 The following YAML configuration was used to produce this model:
bh-50
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-46 as a base. The following models were included in the merge: jaspionjader/bh-49 jaspionjader/bh-48 jaspionjader/bh-47 The following YAML configuration was used to produce this model:
Kosmos-EVAA-immersive-mix-v45.1-8B
Kosmos-EVAA-v2-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-8B jaspionjader/Kosmos-Elusive-VENN-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-v4-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Auro-Kosmos-EVAA-v2.3-8B jaspionjader/Kosmos-EVAA-v3-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-v5-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v4-8B jaspionjader/Kosmos-EVAA-v3-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-v6-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Auro-Kosmos-EVAA-v2.2-8B jaspionjader/Kosmos-EVAA-v5-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-v7-8B-Q5_K_M-GGUF
jaspionjader/Kosmos-EVAA-v7-8B-Q5_K_M-GGUF This model was converted to GGUF format from `jaspionjader/Kosmos-EVAA-v7-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Then move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
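The install-and-run steps above can be sketched as the following shell session. The `--hf-file` name is an assumption based on the usual GGUF-my-repo lowercase naming convention; check the repo's file listing for the exact filename before running.

```shell
# Install llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# Run the model directly from the Hugging Face repo with the CLI.
# NOTE: the --hf-file value is a guess at the conventional filename.
llama-cli --hf-repo jaspionjader/Kosmos-EVAA-v7-8B-Q5_K_M-GGUF \
  --hf-file kosmos-evaa-v7-8b-q5_k_m.gguf \
  -p "Write a short story about a lighthouse keeper."

# Or serve it over HTTP instead:
llama-server --hf-repo jaspionjader/Kosmos-EVAA-v7-8B-Q5_K_M-GGUF \
  --hf-file kosmos-evaa-v7-8b-q5_k_m.gguf -c 2048
```

Both `llama-cli` and `llama-server` download and cache the GGUF file on first use, so no separate download step is needed.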
Kosmos-EVAA-gamma-v15-8B-Q5_K_M-GGUF
TSN-Kosmos-EVAA-8B-Q5_K_M-GGUF
Kosmos-EVAA-TSN-v22-8B-Q5_K_M-GGUF
PRP-Kosmos-EVAA-8B-Q5_K_M-GGUF
PRP-Kosmos-EVAA-light-8B-Q5_K_M-GGUF
Kosmos-EVAA-PRP-v24-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-TSN-v22-8B jaspionjader/Kosmos-EVAA-PRP-v23-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-PRP-v26-8B-Q5_K_M-GGUF
f-3-8b
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-mix-v35-8B jaspionjader/f-2-8b The following YAML configuration was used to produce this model:
dp-4-8b
dp-6-8b
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/dp-2-8b jaspionjader/dp-5-8b The following YAML configuration was used to produce this model:
Kosmos-EVAA-Franken-v36-8B-Q5_K_M-GGUF
fr-4-8b
fr-15-8b
fr-16-8b
fr-17-8b
knf-1-8b
This is a merge of pre-trained language models created using mergekit. This model was merged using the passthrough merge method. The following models were included in the merge: NeverSleep/Lumimaid-v0.2-8B Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B jaspionjader/Kosmos-EVAA-Franken-v38-8B qingy2024/Albatross-8B-Instruct-v3 The following YAML configuration was used to produce this model:
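Unlike SLERP, the passthrough method stacks layer slices from the source models without blending weights (a "frankenmerge"). The real config is not shown; a minimal sketch of the shape such a config takes, with entirely hypothetical slice boundaries for two of the five listed models, is:

```yaml
# Illustrative mergekit passthrough config -- NOT the actual config used here.
# Layer ranges are hypothetical; a real config would cover all five source models.
slices:
  - sources:
      - model: NeverSleep/Lumimaid-v0.2-8B
        layer_range: [0, 16]
  - sources:
      - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
        layer_range: [16, 32]
merge_method: passthrough
dtype: bfloat16
```

Because slices are concatenated rather than averaged, passthrough merges can change the layer count (and thus the parameter count) of the result.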
knfp-1
kstc-3-8b
bbb-1
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/kstc-5-8b as a base. The following models were included in the merge: bunnycore/Llama-3.1-8B-TitanFusion-Mix djuna/L3.1-PromissumMane-8B-Della-calc DreadPoor/ONeil-modelstock-8B jaspionjader/Kosmos-EVAA-v9-TitanFusion-Mix-8B DreadPoor/Aurorafaustus-8B-LINEAR The following YAML configuration was used to produce this model:
bbb-2
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/kstc-5-8b as a base. The following models were included in the merge: jaspionjader/bbb-1 jaspionjader/kstc-4-8b The following YAML configuration was used to produce this model:
bbb-3
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/kstc-5-8b as a base. The following models were included in the merge: jaspionjader/kstc-4-8b jaspionjader/bbb-2 The following YAML configuration was used to produce this model:
bbb-4
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bbb-3 as a base. The following models were included in the merge: jaspionjader/kstc-4-8b jaspionjader/kstc-5-8b The following YAML configuration was used to produce this model:
bbb-5
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/kstc-5-8b as a base. The following models were included in the merge: jaspionjader/bbb-3 jaspionjader/bbb-4 jaspionjader/kstc-4-8b The following YAML configuration was used to produce this model:
ek-4
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/ek-1 as a base. The following models were included in the merge: jaspionjader/ek-3 jaspionjader/Kosmos-EVAA-Franken-stock-v42-8B The following YAML configuration was used to produce this model:
sof-13
slu-18
bh-1
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B refuelai/Llama-3-Refueled The following YAML configuration was used to produce this model:
bh-2
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/slu-37 as a base. The following models were included in the merge: khoantap/llama-linear-0.5-1-0.5-merge jaspionjader/bh-1 The following YAML configuration was used to produce this model:
bh-6
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-2 as a base. The following models were included in the merge: jaspionjader/bh-3 jaspionjader/bh-5 jaspionjader/bh-4 The following YAML configuration was used to produce this model:
bh-9
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-6 jaspionjader/slu-37 The following YAML configuration was used to produce this model:
bh-10
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-6 as a base. The following models were included in the merge: jaspionjader/bh-7 jaspionjader/bh-8 jaspionjader/bh-9 The following YAML configuration was used to produce this model:
bh-16
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/fr-18-8b jaspionjader/bh-14 The following YAML configuration was used to produce this model:
bh-17
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-14 jaspionjader/slu-37 The following YAML configuration was used to produce this model:
bh-18
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-14 as a base. The following models were included in the merge: jaspionjader/bh-15 jaspionjader/bh-16 jaspionjader/bh-17 The following YAML configuration was used to produce this model:
bh-19
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B jaspionjader/bh-18 The following YAML configuration was used to produce this model:
bh-25
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/slu-37 jaspionjader/bh-22 The following YAML configuration was used to produce this model:
bh-31
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B jaspionjader/bh-30 The following YAML configuration was used to produce this model:
bh-37
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-34 jaspionjader/slu-37 The following YAML configuration was used to produce this model:
bh-38
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-34 as a base. The following models were included in the merge: jaspionjader/bh-37 jaspionjader/bh-36 jaspionjader/bh-35 The following YAML configuration was used to produce this model:
bh-39
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-38 jaspionjader/Kosmos-EVAA-immersive-sof-v44-8B The following YAML configuration was used to produce this model:
bh-41
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-38 jaspionjader/slu-37 The following YAML configuration was used to produce this model:
bh-46
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-42 as a base. The following models were included in the merge: jaspionjader/bh-43 jaspionjader/bh-45 jaspionjader/bh-44 The following YAML configuration was used to produce this model:
bh-49
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-46 jaspionjader/slu-37 The following YAML configuration was used to produce this model:
bh-51
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-48 jaspionjader/bh-50 The following YAML configuration was used to produce this model:
bh-52
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/bh-50 jaspionjader/Kosmos-EVAA-Franken-Immersive-v39-8B The following YAML configuration was used to produce this model:
bh-54
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-26 as a base. The following models were included in the merge: jaspionjader/bh-53 jaspionjader/bh-51 jaspionjader/bh-50 jaspionjader/bh-52 jaspionjader/bh-48 The following YAML configuration was used to produce this model:
bh-61
This is a merge of pre-trained language models created using mergekit. This model was merged using the SCE merge method using jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B as a base. The following models were included in the merge: collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate prithivMLmods/Llama-8B-Distill-CoT The following YAML configuration was used to produce this model:
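The SCE method selects the most salient parameter deltas from each contributor relative to the base before combining them. The actual YAML is not included; a hedged sketch of a mergekit `sce` config for this merge, with an assumed `select_topk` value, would look like:

```yaml
# Illustrative mergekit SCE config -- NOT the actual config used here.
models:
  - model: collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate
  - model: prithivMLmods/Llama-8B-Distill-CoT
merge_method: sce
base_model: jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B
parameters:
  select_topk: 0.1  # hypothetical: keep the top 10% of delta elements
dtype: bfloat16
```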
bh-63
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method using jaspionjader/bh-62 as a base. The following models were included in the merge: mergekit-community/aka-test jaspionjader/bh-61 The following YAML configuration was used to produce this model:
Kosmos-Elusive-VENN-Aurora_faustus-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-Aurorafaustus-8B jaspionjader/Kosmos-Elusive-VENN-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-Fusion-light-8B
Kosmos-EVAA-gamma-light-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-gamma-8B jaspionjader/Kosmos-EVAA-v12-8B The following YAML configuration was used to produce this model:
Kosmos-Elusive-8b
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: Khetterman/Kosmos-8B-v1 DreadPoor/Elusive1.2-8B-ModelStock The following YAML configuration was used to produce this model:
Auro-Kosmos-EVAA-v2-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-v2-8B DreadPoor/Aurorafaustus-8B-LINEAR The following YAML configuration was used to produce this model:
Auro-Kosmos-EVAA-v2.3-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Auro-Kosmos-EVAA-v2.2-8B jaspionjader/Kosmos-Elusive-VENN-8B The following YAML configuration was used to produce this model:
Kosmos-EVAA-v10-8B
This is a merge of pre-trained language models created using mergekit. This model was merged using the SLERP merge method. The following models were included in the merge: jaspionjader/Kosmos-EVAA-Fusion-light-8B jaspionjader/Kosmos-EVAA-v3-8B The following YAML configuration was used to produce this model:
ek-3
bh-60
This is a merge of pre-trained language models created using mergekit. This model was merged using the SCE merge method using jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B as a base. The following models were included in the merge: collinzrj/DeepSeek-R1-Distill-Llama-8B-abliterate prithivMLmods/Llama-8B-Distill-CoT The following YAML configuration was used to produce this model: