Yuma42
Llama3.1-DeepDilemma-V1-8B-Q4_K_S-GGUF
Yuma42/Llama3.1-DeepDilemma-V1-8B-Q4_K_S-GGUF
This model was converted to GGUF format from `Yuma42/Llama3.1-DeepDilemma-V1-8B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp: Install llama.cpp through brew (works on Mac and Linux). Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
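The brew-based path above can be sketched as follows. This is a minimal sketch: the exact `--hf-file` name is an assumption based on the repo's usual lowercase naming convention and is not stated in this card.

```shell
# Install llama.cpp via Homebrew (works on Mac and Linux)
brew install llama.cpp

# Run the quantized checkpoint directly from the Hugging Face repo.
# NOTE: the --hf-file value below is a hypothetical filename guess;
# check the repo's file listing for the actual GGUF file name.
llama-cli --hf-repo Yuma42/Llama3.1-DeepDilemma-V1-8B-Q4_K_S-GGUF \
  --hf-file llama3.1-deepdilemma-v1-8b-q4_k_s.gguf \
  -p "Hello, my name is"
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` flags if you prefer an OpenAI-compatible HTTP endpoint instead of a one-off CLI prompt.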
KangalKhan-RawEmerald-7B-GGUF
KangalKhan-RawRuby-7B-GGUF
Llama3.1-DeepDilemma-V0.9-8B
Llama3.1-IgneousIguana-8B-Q4_K_S-GGUF
KangalKhan-Ruby-7B-Fixed-GGUF
Llama3.1-CogiMes-V0.5-8B
Llama3.1-StableRoots-V0.5-8B
Llama3.1-StableRoots-V0.5-8B is a merge of the following models using LazyMergekit:
- Yuma42/Llama3.1-IgneousIguana-8B
- Yuma42/Llama3.1-CogiMes-V0.5-8B
Llama3.1-DeepDilemma-V1-8B
Llama3.1-DeepDilemma-V1-8B is a merge of the following models using LazyMergekit:
- Yuma42/Llama3.1-StableRoots-V0.5-8B
- Yuma42/Llama3.1-DeepDilemma-V0.9-8B
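LazyMergekit merges are driven by a mergekit YAML config. The sketch below shows what a two-model merge of the parents listed above could look like; the merge method, layer ranges, and interpolation factor are hypothetical assumptions, since the card does not state them.

```yaml
# Hypothetical LazyMergekit/mergekit config sketch for
# Llama3.1-DeepDilemma-V1-8B; actual parameters are not given in the card.
slices:
  - sources:
      - model: Yuma42/Llama3.1-StableRoots-V0.5-8B
        layer_range: [0, 32]
      - model: Yuma42/Llama3.1-DeepDilemma-V0.9-8B
        layer_range: [0, 32]
merge_method: slerp          # assumed method
base_model: Yuma42/Llama3.1-StableRoots-V0.5-8B
parameters:
  t: 0.5                     # interpolation factor (assumed)
dtype: bfloat16
```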
KangalKhan-Sapphire-7B-GGUF
Llama3.1-SuperHawk-8B
License: llama3.1. Tags: merge.
Llama3.1-PuzzleSolver-V0.5-8B
Llama3.1-PuzzleSolver-V0.5-8B is a merge of the following models using LazyMergekit:
- Yuma42/Llama3.1-CogiMes-V0.5-8B
- Yuma42/Llama3.1-CrimeSolver-8B
Llama3.1-IgneousIguana-8B
License: llama3.1. Tags: merge.
KangalKhan-RawRuby-7B
Language: English. License: Apache 2.0.