DreadPoor
Famino-12B-Model_Stock
Famino-12B-Model_Stock-Q4_K_M-GGUF
DreadPoor/Famino-12B-Model_Stock-Q4_K_M-GGUF
This model was converted to GGUF format from `DreadPoor/Famino-12B-Model_Stock` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
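The build-and-run steps above can be sketched as a short shell session. This is a minimal sketch, not the card's own commands: the filename passed to `--hf-file` is an assumption (GGUF-my-repo typically lowercases the repo name), so check the repository's file list before running.

```shell
# Option A: install llama.cpp via brew (macOS and Linux).
brew install llama.cpp

# Option B: build from source (Steps 1 and 2 above).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && LLAMA_CURL=1 make

# Run inference, pulling the quantized weights straight from the Hub.
# NOTE: the --hf-file name below is an assumption; verify it in the repo.
llama-cli --hf-repo DreadPoor/Famino-12B-Model_Stock-Q4_K_M-GGUF \
          --hf-file famino-12b-model_stock-q4_k_m.gguf \
          -p "Hello, how are you?"
```

`llama-cli` resolves `--hf-repo`/`--hf-file` to a cached download, so no separate fetch step is needed.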
Irix-12B-Model_Stock-GGUF
Krix-12B-Model_Stock-Q6_K-GGUF
Famino-12B-Model_Stock-Q4_0-GGUF
DreadPoor/Famino-12B-Model_Stock-Q4_0-GGUF
This model was converted to GGUF format from `DreadPoor/Famino-12B-Model_Stock` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Famino-12B-Model_Stock-Q6_K-GGUF
DreadPoor/Famino-12B-Model_Stock-Q6_K-GGUF
This model was converted to GGUF format from `DreadPoor/Famino-12B-Model_Stock` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Smoothie-12B-Model_Stock
Krix 12B Model Stock
Krix-TEST is a merge of the following models using mergekit:
DreadPoor/IngredientA-TEST
DreadPoor/IngredientB-TEST
DreadPoor/IngredientC-TEST
DreadPoor/IngredientD-TEST
Irix-12B-Model_Stock
New_Base-TEST
Paxinium-12b-Model_Stock
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method, with Delta-Vector/Francois-PE-V2-Huali-12B as the base. The following models were included in the merge:
Delta-Vector/Rei-V3-KTO-12B
yamatazen/EtherealAurora-12B-v2
redrix/GodSlayer-12B-ABYSS
yamatazen/BlueLight-12B
The following YAML configuration was used to produce this model:
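The card's actual YAML is not reproduced above. As an illustration only, a Model Stock mergekit configuration for the models listed would typically look like the sketch below; the model list matches the card, but `dtype` and the overall field layout are assumptions based on common mergekit configs.

```shell
# Illustrative sketch only — not the card's actual configuration.
# dtype is an assumption; the base and merged models are taken from the card.
cat > model_stock.yaml <<'EOF'
merge_method: model_stock
base_model: Delta-Vector/Francois-PE-V2-Huali-12B
models:
  - model: Delta-Vector/Rei-V3-KTO-12B
  - model: yamatazen/EtherealAurora-12B-v2
  - model: redrix/GodSlayer-12B-ABYSS
  - model: yamatazen/BlueLight-12B
dtype: bfloat16
EOF

# With mergekit installed (pip install mergekit), the merge would be run as:
# mergekit-yaml model_stock.yaml ./merged-model
```

`mergekit-yaml` reads the config and writes the merged checkpoint to the output directory; the actual run needs enough disk and RAM for all five 12B checkpoints.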
Famino_ALT-12B-Model_Stock
Irix_1.1-12B-Model_Stock-Q4_K_M-GGUF
Ward-TEST-Q4_K_M-GGUF
Strawberry_Smoothie-12B-Model_Stock
Suavemente-8B-Model_Stock
Strawberry_Smoothie-TEST-Q6_K-GGUF
Ward-12B-Model_Stock
TESTing is a merge of the following models using LazyMergekit:
Tumati-TEST
Krix-TEST-Q5_K_M-GGUF
Paxinium-12b-Model_Stock-Q4_K_M-GGUF
SSD-TEST
BaeZel-8B-LINEAR
License: Apache 2.0, Library Name: Transformers, Tags:
Suavemente-8B-Model_Stock-Q6_K-GGUF
Paxinium-12b-Model_Stock-Q6_K-GGUF
Suavemente-8B-Model_Stock-Q4_K_M-GGUF
Irix_1.1-12B-Model_Stock
ichor_1.3-8B-Model_Stock
Aurora_faustus-8B-LINEAR
License: Apache 2.0, Library Name: Transformers, Tags:
YM-12B-Model_Stock
Munkeigh-TEST
Aspire-8B-model_stock
License: CC BY-NC 4.0, Library Name: Transformers, Tags:
BaeZel-8B-LINEAR-Q4_K_M-GGUF
DreadPoor/BaeZel-8B-LINEAR-Q4_K_M-GGUF
This model was converted to GGUF format from `DreadPoor/BaeZel-8B-LINEAR` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
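The same llama.cpp workflow applies to this quant; a hedged sketch using the server binary instead of the CLI follows. As before, the `--hf-file` name is an assumption and should be verified against the repository's file list.

```shell
# Assumes llama.cpp is installed (e.g. via brew install llama.cpp).
# NOTE: the --hf-file name below is an assumption; verify it in the repo.
llama-server --hf-repo DreadPoor/BaeZel-8B-LINEAR-Q4_K_M-GGUF \
             --hf-file baezel-8b-linear-q4_k_m.gguf \
             -c 2048
```

`llama-server` exposes an OpenAI-compatible HTTP endpoint on localhost, which is convenient for pointing existing chat front-ends at the model.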
Irix_1.1-12B-Model_Stock-Q6_K-GGUF
Alita99-8B-LINEAR-Q4_K_M-GGUF
Ximo-TEST
Elusive-8B-Model_Stock-Q4_K_M-GGUF
Sunk_Cost_Fallacy-8B-Model_Stock-GGUF
Derivative-8B-Model_Stock
Base model: DreadPoor BaeZel 1.1 8B Model Stock, FuseAI FuseChat Llama 3.1 8B SFT.
Spei_Meridiem-8B-model_stock
Sunk_Cost_Fallacy-8B-Model_Stock
Satyr-7B-Model_Stock
ScaduTorrent1.1-8b-model_stock
H_the_eighth-8B-LINEAR
Base model: BoltMonkey DreadMix, DreadPoor ichor 1.1 8B Model Stock.
Casuar-9B-Model_Stock
Base model: nbeerbower/Gemma2-Gutenberg-Doppel-9B, tannedbum/Ellaria-9B.
Alita99-8B-LINEAR
Base model: DreadPoor LemonP ALT 8B Model Stock, DreadPoor Heart Stolen 8B Model Stock.
Strawberry_Smoothie-TEST
Harpy-7B-Model_Stock
Everything-COT-8B-r128-LoRA
SAO_LightMix-8B-Model_Stock
abliteration-OVA-8B-r128-LORA
Satyr_v2-7B-Model_Stock
ASPIRE-8B-r128-LORA
BAEZEL-8B-r128-LORA
This is a LoRA extracted from a language model. It was extracted using mergekit. This LoRA adapter was extracted from DreadPoor/BaeZel-8B-LINEAR and uses NousResearch/Meta-Llama-3.1-8B-Instruct as a base. The following command was used to extract this LoRA adapter:
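The extraction command itself is not shown in the card. A hedged sketch of what such an invocation might look like is below: the rank of 128 is inferred from the repo name (`r128`), and the argument order and flag names vary between mergekit versions, so check `mergekit-extract-lora --help` for your installed version before running.

```shell
# Sketch only — not the card's actual command; verify flags against your
# mergekit version. Rank 128 is inferred from the "r128" in the repo name.
pip install mergekit

mergekit-extract-lora \
    DreadPoor/BaeZel-8B-LINEAR \
    NousResearch/Meta-Llama-3.1-8B-Instruct \
    ./BAEZEL-8B-r128-LORA \
    --rank=128
```

The idea in either CLI form is the same: diff the fine-tuned model against the base and factor each weight delta into a low-rank adapter.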
Sunk_Cost_Fallacy-8B-Model_Stock-Q4_K_M-GGUF
YM-12B-Model_Stock-GGUF
This is a merge of pre-trained language models created using mergekit. This model was merged using the Model Stock merge method, with yamatazen/EtherealAurora-12B-v2 as the base. The following models were included in the merge:
LatitudeGames/Wayfarer-12B
TheDrummer/Rocinante-12B-v1.1
nbeerbower/Lyra4-Gutenberg-12B
MarinaraSpaghetti/NemoMix-Unleashed-12B
cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
nothingiisreal/MN-12B-Celeste-V1.9
anthracite-org/magnum-v2-12b
The following YAML configuration was used to produce this model:
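Again, the card's actual YAML is not reproduced here. For illustration, a Model Stock config for this merge would typically look like the sketch below; the model list is from the card, while `dtype` and the field layout are assumptions.

```shell
# Illustrative sketch only — not the card's actual configuration.
# dtype is an assumption; the base and merged models are taken from the card.
cat > ym_model_stock.yaml <<'EOF'
merge_method: model_stock
base_model: yamatazen/EtherealAurora-12B-v2
models:
  - model: LatitudeGames/Wayfarer-12B
  - model: TheDrummer/Rocinante-12B-v1.1
  - model: nbeerbower/Lyra4-Gutenberg-12B
  - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
  - model: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
  - model: nothingiisreal/MN-12B-Celeste-V1.9
  - model: anthracite-org/magnum-v2-12b
dtype: bfloat16
EOF

# With mergekit installed (pip install mergekit), the merge would be run as:
# mergekit-yaml ym_model_stock.yaml ./YM-12B-Model_Stock
```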