DreadPoor

56 models

Famino-12B-Model_Stock

license:apache-2.0
442
8

Famino-12B-Model_Stock-Q4_K_M-GGUF

DreadPoor/Famino-12B-Model_Stock-Q4_K_M-GGUF — this model was converted to GGUF format from `DreadPoor/Famino-12B-Model_Stock` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repository and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

llama-cpp
390
0
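The llama.cpp instructions on the GGUF cards in this list can be condensed into a short shell session. The `--hf-repo`/`--hf-file` flags are real llama.cpp options, but the exact `.gguf` filename below is an assumption — check the repo's file list before running.

```shell
# Install llama.cpp via Homebrew (works on macOS and Linux)
brew install llama.cpp

# Or build from source, per the card's notes:
#   git clone https://github.com/ggml-org/llama.cpp
#   cd llama.cpp
#   LLAMA_CURL=1 make        # add LLAMA_CUDA=1 for NVIDIA GPUs on Linux

# Run the checkpoint straight from the Hugging Face repo
llama-cli --hf-repo DreadPoor/Famino-12B-Model_Stock-Q4_K_M-GGUF \
          --hf-file famino-12b-model_stock-q4_k_m.gguf \
          -p "Write a haiku about quantization."
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` flags if you want a persistent HTTP endpoint instead of a one-shot prompt.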

Irix-12B-Model_Stock-GGUF

llama-cpp
287
8

Krix-12B-Model_Stock-Q6_K-GGUF

llama-cpp
223
0

Famino-12B-Model_Stock-Q4_0-GGUF

DreadPoor/Famino-12B-Model_Stock-Q4_0-GGUF — converted to GGUF format from `DreadPoor/Famino-12B-Model_Stock` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as for the other GGUF conversions in this list.

llama-cpp
207
2

Famino-12B-Model_Stock-Q6_K-GGUF

DreadPoor/Famino-12B-Model_Stock-Q6_K-GGUF — converted to GGUF format from `DreadPoor/Famino-12B-Model_Stock` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as for the other GGUF conversions in this list.

llama-cpp
131
1

Smoothie-12B-Model_Stock

license:cc-by-nc-4.0
113
4

Krix-12B-Model_Stock

Krix-TEST is a merge of the following models using mergekit: DreadPoor/IngredientA-TEST, DreadPoor/IngredientB-TEST, DreadPoor/IngredientC-TEST, and DreadPoor/IngredientD-TEST.

license:apache-2.0
75
8

Irix-12B-Model_Stock

45
37

New_Base-TEST

license:cc-by-nc-4.0
44
0

Paxinium-12b-Model_Stock

This is a merge of pre-trained language models created using mergekit, merged with the Model Stock method using Delta-Vector/Francois-PE-V2-Huali-12B as a base. The following models were included in the merge: Delta-Vector/Rei-V3-KTO-12B, yamatazen/EtherealAurora-12B-v2, redrix/GodSlayer-12B-ABYSS, and yamatazen/BlueLight-12B. The YAML configuration used to produce this model is given on the model card.

40
4

Famino_ALT-12B-Model_Stock

license:cc-by-nc-4.0
29
0

Irix_1.1-12B-Model_Stock-Q4_K_M-GGUF

llama-cpp
22
2

Ward-TEST-Q4_K_M-GGUF

llama-cpp
19
0

Strawberry_Smoothie-12B-Model_Stock

license:cc-by-nc-4.0
17
3

Suavemente-8B-Model_Stock

llama
17
1

Strawberry_Smoothie-TEST-Q6_K-GGUF

llama-cpp
16
1

Ward-12B-Model_Stock

TESTing is a merge of several models using LazyMergekit; the component list is given on the model card.

13
1

Tumati-TEST

license:cc-by-nc-4.0
9
0

Krix-TEST-Q5_K_M-GGUF

llama-cpp
9
0

Paxinium-12b-Model_Stock-Q4_K_M-GGUF

llama-cpp
7
3

SSD-TEST

license:cc-by-nc-4.0
7
1

BaeZel-8B-LINEAR

License: Apache 2.0, Library Name: Transformers.

llama
7
1

Suavemente-8B-Model_Stock-Q6_K-GGUF

llama-cpp
7
1

Paxinium-12b-Model_Stock-Q6_K-GGUF

llama-cpp
7
1

Suavemente-8B-Model_Stock-Q4_K_M-GGUF

llama-cpp
7
0

Irix_1.1-12B-Model_Stock

6
1

ichor_1.3-8B-Model_Stock

llama
6
0

Aurora_faustus-8B-LINEAR

License: Apache 2.0, Library Name: Transformers.

llama
5
6

YM-12B-Model_Stock

5
4

Munkeigh-TEST

5
1

Aspire-8B-model_stock

License: CC BY-NC 4.0, Library Name: Transformers.

llama
4
7

BaeZel-8B-LINEAR-Q4_K_M-GGUF

DreadPoor/BaeZel-8B-LINEAR-Q4_K_M-GGUF — converted to GGUF format from `DreadPoor/BaeZel-8B-LINEAR` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details; llama.cpp usage is the same as for the other GGUF conversions in this list.

llama-cpp
4
1

Irix_1.1-12B-Model_Stock-Q6_K-GGUF

llama-cpp
4
1

Alita99-8B-LINEAR-Q4_K_M-GGUF

llama-cpp
4
0

Ximo-TEST

license:cc-by-nc-4.0
3
0

Elusive-8B-Model_Stock-Q4_K_M-GGUF

llama-cpp
3
0

Sunk_Cost_Fallacy-8B-Model_Stock-GGUF

base_model:SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
3
0

Derivative-8B-Model_Stock

Base models: DreadPoor BaeZel 1.1 8B Model Stock and FuseAI FuseChat Llama 3.1 8B SFT.

llama
2
3

Spei_Meridiem-8B-model_stock

llama
2
2

Sunk_Cost_Fallacy-8B-Model_Stock

llama
2
2

Satyr-7B-Model_Stock

license:apache-2.0
2
0

ScaduTorrent1.1-8b-model_stock

llama
2
0

H_the_eighth-8B-LINEAR

Base models: BoltMonkey DreadMix and DreadPoor ichor 1.1 8B Model Stock.

llama
1
3

Casuar-9B-Model_Stock

Base models: nbeerbower/Gemma2-Gutenberg-Doppel-9B and tannedbum/Ellaria-9B.

license:apache-2.0
1
2

Alita99-8B-LINEAR

Base models: DreadPoor LemonP ALT 8B Model Stock and DreadPoor Heart Stolen 8B Model Stock.

llama
1
1

Strawberry_Smoothie-TEST

license:cc-by-nc-4.0
0
1

Harpy-7B-Model_Stock

license:apache-2.0
0
1

Everything-COT-8B-r128-LoRA

0
1

SAO_LightMix-8B-Model_Stock

llama
0
1

abliteration-OVA-8B-r128-LORA

base_model:NousResearch/Meta-Llama-3.1-8B-Instruct
0
1

Satyr_v2-7B-Model_Stock

0
1

ASPIRE-8B-r128-LORA

0
1

BAEZEL-8B-r128-LORA

This is a LoRA adapter extracted from a language model using mergekit. It was extracted from DreadPoor/BaeZel-8B-LINEAR and uses NousResearch/Meta-Llama-3.1-8B-Instruct as a base; the extraction command is given on the model card.

0
1
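The BAEZEL-8B-r128-LORA card mentions an extraction command without reproducing it. mergekit ships a `mergekit-extract-lora` tool for this; the invocation below is a hypothetical sketch (argument order and flags vary across mergekit versions; check `--help`), not the card's actual command.

```shell
# Hypothetical sketch of extracting a rank-128 LoRA with mergekit;
# verify the arguments against your installed version's --help output.
mergekit-extract-lora \
    DreadPoor/BaeZel-8B-LINEAR \
    NousResearch/Meta-Llama-3.1-8B-Instruct \
    ./baezel-8b-r128-lora \
    --rank=128
```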

Sunk_Cost_Fallacy-8B-Model_Stock-Q4_K_M-GGUF

llama-cpp
0
1

YM-12B-Model_Stock-GGUF

This is a merge of pre-trained language models created using mergekit, merged with the Model Stock method using yamatazen/EtherealAurora-12B-v2 as a base. The following models were included in the merge: LatitudeGames/Wayfarer-12B, TheDrummer/Rocinante-12B-v1.1, nbeerbower/Lyra4-Gutenberg-12B, MarinaraSpaghetti/NemoMix-Unleashed-12B, cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b, nothingiisreal/MN-12B-Celeste-V1.9, and anthracite-org/magnum-v2-12b. The YAML configuration used to produce this model is given on the model card.

0
1
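Many entries in this list use the Model Stock merge method. The idea is to interpolate between the base weights and the average of the fine-tuned weights, with an interpolation factor derived from the angle between the fine-tuned deltas: t = k·cosθ / (1 + (k−1)·cosθ) for k fine-tuned models. A minimal pure-Python sketch of that rule, simplified to single flat weight vectors rather than per-layer tensors, so this illustrates the formula, not mergekit's actual implementation:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def model_stock_merge(base, finetuned):
    """Model Stock rule: interpolate between the base weights and the
    average of the fine-tuned weights with t = k*cos(theta) / (1 + (k-1)*cos(theta)),
    where theta is the mean pairwise angle between fine-tuned deltas.
    Assumes at least two fine-tuned models."""
    deltas = [[w - b for w, b in zip(m, base)] for m in finetuned]
    k = len(deltas)
    cosines = [dot(deltas[i], deltas[j]) / (norm(deltas[i]) * norm(deltas[j]))
               for i in range(k) for j in range(i + 1, k)]
    cos_theta = sum(cosines) / len(cosines)
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    avg = [b + sum(d[idx] for d in deltas) / k for idx, b in enumerate(base)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]

base = [0.0, 0.0, 0.0, 0.0]
finetuned = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
merged = model_stock_merge(base, finetuned)  # orthogonal deltas -> t = 0, merged equals the base
```

With orthogonal deltas cosθ = 0, so t = 0 and the merge falls back to the base; with identical deltas cosθ = 1, so t = 1 and the merge is the plain average of the fine-tuned models.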