Valeciela

13 models

KansenSakura-Symbiosis-12B-GGUF

license: apache-2.0 • 482 downloads • 0 likes

gpt-oss-120b-BF16-GGUF

Unsloth's configs were selected over OpenAI's in order to incorporate their chat template fixes. This is essentially Unsloth's F16 quant, except the weights are stored in BF16, which is their native precision.

|File|
|:--:|
|gpt-oss-120b-BF16-00001-of-00002.gguf|
|gpt-oss-120b-BF16-00002-of-00002.gguf|
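BF16 matters here because it keeps float32's full 8-bit exponent and only truncates the mantissa, so it preserves dynamic range that F16 loses. A minimal sketch in plain NumPy (the `to_bf16` truncation helper is illustrative only, not part of any GGUF tooling):

```python
import numpy as np

def to_bf16(x: np.ndarray) -> np.ndarray:
    # Emulate bfloat16 by zeroing the low 16 bits of the float32
    # representation: the 8-bit exponent survives intact, the
    # 23-bit mantissa is truncated to 7 bits.
    bits = x.astype(np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

pi = np.array([3.14159265], dtype=np.float32)
tiny = np.array([1e-8], dtype=np.float32)

print(to_bf16(pi)[0])        # ~3 decimal digits of precision survive
print(np.float16(tiny[0]))   # fp16 underflows this value to 0.0
print(to_bf16(tiny)[0])      # bf16 keeps it, thanks to the fp32 exponent
```

The trade-off is precision for range: fp16 has more mantissa bits, but values below its subnormal floor vanish, while bf16 round-trips losslessly from a model trained in bf16.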

license: apache-2.0 • 389 downloads • 0 likes

Impish_Nemo_12B-Q6_K_XL-GGUF

|File|Notes|
|----|:---:|
|Impish_Nemo_12B.Q6_K_XL.gguf|Q6_K with select tensors quantized to Q8_0; 7.23 bpw; ~10% larger than Q6_K; quantized from BF16; very close in fidelity to full precision|
|Impish_Nemo_12B.BF16.gguf|Native-precision BF16 GGUF|

license: apache-2.0 • 204 downloads • 0 likes

KansenSakura-Erosion-RP-12b-Q6_K_XL-GGUF

license: apache-2.0 • 136 downloads • 0 likes

KansenSakura-Symbiosis-12B-Q6_K_XL-GGUF

license: apache-2.0 • 77 downloads • 0 likes

KansenSakura Symbiosis 12B

You know how in some video games your starting gear or character can get a huge upgrade if you bring it all the way to the end of the game? This is sort of like that.

This model was merged using the Multi-SLERP merge method, with Retreatcost/KansenSakura-Zero-RP-12b as the base. The following models were included in the merge:

- Retreatcost/KansenSakura-Erosion-RP-12b
- Retreatcost/KansenSakura-Eclipse-RP-12b
- Retreatcost/KansenSakura-Radiance-RP-12b

The following YAML configuration was used to produce this model:
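Multi-SLERP generalizes spherical linear interpolation (SLERP) to blend several models through a common base. The core pairwise SLERP step on flattened weight vectors can be sketched as follows (illustrative NumPy, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between weight vectors a and b at fraction t."""
    # Angle between the two vectors, clamped for numerical safety.
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    # Interpolate along the great circle, so intermediate points keep
    # a comparable norm instead of cutting through the "inside" of the sphere.
    return (np.sin((1.0 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```

Compared with a plain weighted average, the spherical path avoids shrinking the magnitude of the merged weights when the parent models point in different directions.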

license: apache-2.0 • 48 downloads • 5 likes

ToriiGate-v0.4-7B-Q8_0-GGUF

|File|Notes|
|----|:---:|
|ToriiGate-v0.4-7B.Q8_0.gguf|Q8_0; quantized from BF16|
|ToriiGate-v0.4-7B.mmproj-bf16.gguf|BF16; native precision|

license: apache-2.0 • 48 downloads • 0 likes

Cydonia-v4.1-MS3.2-Magnum-Diamond-24B-Q6_K_XL-GGUF

license: apache-2.0 • 45 downloads • 0 likes

Noir-Blossom-12B-Q6_K_XL-GGUF

|File|Notes|
|----|:---:|
|Noir-Blossom-12B.Q6_K_XL.gguf|Q6_K with select tensors quantized to Q8_0; 7.23 bpw; ~10% larger than Q6_K; quantized from BF16; very close in fidelity to full precision|

license: apache-2.0 • 30 downloads • 0 likes

Broken-Tutu-24B-Unslop-v2.0-Q6_K_XL-GGUF

|File|Notes|
|----|:---:|
|Broken-Tutu-24B-Unslop-v2.0.Q6_K_XL.gguf|Q6_K with select tensors quantized to Q8_0; ~7 bpw; quantized from BF16; very close in fidelity to full precision|

license: apache-2.0 • 28 downloads • 0 likes

Mistral-Large-Instruct-2411-Q6_K_L-GGUF

27 downloads • 0 likes

Sapphira-L3.3-70b-0.1-Q6_K_L-GGUF

|File|Notes|
|----|:---:|
|PART 1, PART 2|Q6_K with token embedding, output, and some other tensors quantized to Q8_0; 6.70 bpw; ~2.1% larger than Q6_K; quantized from BF16|

license: llama • 15 downloads • 0 likes

MN-12B-Mag-Mell-R1-Q6_K_XL-GGUF

12 downloads • 0 likes