darkc0de

82 models

XORTRON.CriminalComputing.2026.27B.Instruct

license:apache-2.0
1,042
18

Xortron2025

Xortron2025: an uncensored large language model for offline and local use.

808
25

XORTRON.CriminalComputing.LARGE.2026.3

772
9

BlackXorDolphTronGOAT

624
15

XortronCriminalComputingConfig

You can try this model now for free at xortron.tech. This model will help you do anything and everything you probably shouldn't be doing. As of this writing (July 2025), it tops the UGI Leaderboard for models under 70 billion parameters in both the UGI and W10 categories.

license:apache-2.0
550
153

XortronCriminalComputingConfig-Q5_K_S-GGUF

darkc0de/XortronCriminalComputingConfig-Q5_K_S-GGUF. This model was converted to GGUF format from `darkc0de/XortronCriminalComputingConfig` using llama.cpp, via ggml.ai's GGUF-my-repo space; refer to the original model card for more details on the model. To use it with llama.cpp, install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repository, move into the folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
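The install-and-build steps quoted above can be sketched as shell commands. This is a minimal sketch, not the card's verbatim instructions: the clone URL is the standard llama.cpp repository, and the `--hf-file` name is an assumed lowercase form of the repo name, so verify the exact filename in the repo's file listing before running.

```shell
# Option 1: prebuilt llama.cpp via Homebrew (works on Mac and Linux)
brew install llama.cpp

# Option 2: build from source with the flags the card mentions.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
# LLAMA_CURL=1 lets llama.cpp fetch models over HTTP; add hardware
# flags as needed (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
make LLAMA_CURL=1

# Run the quantized checkpoint straight from the Hugging Face repo
# (GGUF filename assumed; check the repo's file listing).
llama-cli --hf-repo darkc0de/XortronCriminalComputingConfig-Q5_K_S-GGUF \
  --hf-file xortroncriminalcomputingconfig-q5_k_s.gguf \
  -p "The meaning to life and the universe is"
```

The same two paths (brew install, or clone-and-build) apply to every GGUF checkpoint in this listing; only the `--hf-repo` and `--hf-file` values change.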

llama-cpp
228
0

XortronCriminalComputing-Q4_K_S-GGUF

162
1

BlackXorDolphTronGOAT-Q5_K_S-GGUF

darkc0de/BlackXorDolphTronGOAT-Q5_K_S-GGUF. This model was converted to GGUF format from `darkc0de/BlackXorDolphTronGOAT` using llama.cpp, via ggml.ai's GGUF-my-repo space; refer to the original model card for more details on the model. To use it with llama.cpp, install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repository, move into the folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

llama-cpp
152
1

XortronCriminalComputing-Q6_K-GGUF

124
4

Qwen3.5-27B-heretic

license:apache-2.0
116
2

XORTRON.TECH

license:apache-2.0
106
1

XORTRON.CriminalComputing.2026.27B.Instruct.NEXT

license:apache-2.0
58
0

Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Uncensored-Toxic-DPO-GGUF

llama
57
1

Agent.Xortron

license:apache-2.0
46
0

XORTRON

42
3

UnderbossUncensored-GGUF

llama
42
3

Llama-3.2-3B-Instruct-abliterated-Q8_0-GGUF

llama-cpp
38
3

Mistral-Small-24B-Instruct-2501-abliterated-Q4_K_M-GGUF

llama-cpp
32
2

Xortron7MethedUp-SLERP-8B-Q5_K_M-GGUF

llama-cpp
28
2

Xortron7_Alpha-Q5_K_M-GGUF

llama-cpp
28
1

XORTRON.CriminalComputing.Q35xC46

license:apache-2.0
28
0

BuddyGlassUncensored2025.1

llama
25
0

BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp

llama
22
5

Xortron7_Alpha-Q8_0-GGUF

llama-cpp
22
1

XortronCriminalComputingPolarisAlpha

20
0

XXXCCCv2test4

19
0

Llama-3.2-3B-Instruct-uncensored-Q5_K_M-GGUF

llama-cpp
18
0

gemma-3-27b-it-abliterated-Q5_K_M-GGUF

llama-cpp
17
2

Squirrel

17
1

BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q5_K_M-GGUF

llama-cpp
16
0

Llama-3.2-3B-Instruct-abliterated-Q4_K_M-GGUF

llama-cpp
15
1

XortronUncensoredGGUF

llama
15
1

BuddyGlassUncensored2025.2

llama
14
4

Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Uncensored-Toxic-DPO

llama
13
2

Hermes-3-Llama-3.2-3B-abliterated-Q5_K_M-GGUF

Llama-3
12
2

XortronAdvancedUnsensored-Q5_K_M-GGUF

llama-cpp
12
0

BuddyGlassUncensored2025SFT-Q6_K-GGUF

llama
12
0

Qwen2.5-14B-Instruct-abliterated-v2-Q8_0-GGUF

llama-cpp
9
0

BuddyGlassUncensored2025.2-GGUF

llama
8
2

BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp-Q8_0-GGUF

llama-cpp
8
1

BuddyGlassNeverSleeps-methheadmethod-v0.2-Q8_0-GGUF

llama-cpp
8
1

BuddyGlassNeverSleeps-Q8_0-GGUF

llama-cpp
7
2

Xortron7MethedUp-Q8_0-GGUF

llama-cpp
7
1

Xortron22B-Uncensored

7
1

Xortron7MethedUp-pass3headGOAT-Q8_0-GGUF

llama-cpp
7
0

Llama-3.1-SuperNova-Lite-IQ4_NL-GGUF

llama-cpp
6
0

Xortron22B-Uncensored-Q4_K_M-GGUF

llama-cpp
6
0

Mistral-Small-24B-Instruct-2501-abliterated-Q5_K_M-GGUF

darkc0de/Mistral-Small-24B-Instruct-2501-abliterated-Q5_K_M-GGUF. This model was converted to GGUF format from `huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated` using llama.cpp, via ggml.ai's GGUF-my-repo space; refer to the original model card for more details on the model. To use it with llama.cpp, install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repository, move into the folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

llama-cpp
5
0

BuddyGlass-MethHeadMethod-Q8_0-GGUF

llama-cpp
4
1

SmolLM2-1.7B-Instruct-Q8_0-GGUF

darkc0de/SmolLM2-1.7B-Instruct-Q8_0-GGUF. This model was converted to GGUF format from `HuggingFaceTB/SmolLM2-1.7B-Instruct` using llama.cpp, via ggml.ai's GGUF-my-repo space; refer to the original model card for more details on the model. To use it with llama.cpp, install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repository, move into the folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

llama-cpp
4
0

BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q8_0-GGUF

llama-cpp
3
1

Dolphin3.0-Mistral-24B-Q4_K_M-GGUF

llama-cpp
3
1

XortronGlitched

3
1

XortronMethHeadMethod-Q8_0-GGUF

llama-cpp
3
0

BuddyGlassNeverSleeps-methheadmethod-v0.2

llama
2
1

XortronGlitched24B

2
1

Xortron24DPO-Q6_K-GGUF

llama-cpp
2
1

XortronUncensored2025.1-Q6_K-GGUF

llama-cpp
2
0

BuddyGlassUncensored2025.2-Q5_K_M-GGUF

darkc0de/BuddyGlassUncensored2025.2-Q5_K_M-GGUF. This model was converted to GGUF format from `darkc0de/BuddyGlassUncensored2025.2` using llama.cpp, via ggml.ai's GGUF-my-repo space; refer to the original model card for more details on the model. To use it with llama.cpp, install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repository, move into the folder, and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

llama
2
0

abliTIES2BaseFalcon

This is a merge of pre-trained language models created using mergekit. It was merged using the TIES merge method, with tiiuae/Falcon3-10B-Base as the base. The following model was included in the merge: huihui-ai/Falcon3-10B-Instruct-abliterated. A YAML configuration (not reproduced in this listing) was used to produce this model.
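The entry above names the merge method and the ingredient models, but the listing drops the YAML itself. As an illustration only, a mergekit TIES configuration for this pairing might look like the following; the density, weight, normalize, and dtype values are placeholder assumptions, not the configuration actually used for abliTIES2BaseFalcon.

```shell
# Write an illustrative mergekit TIES config (placeholder values,
# not the actual abliTIES2BaseFalcon configuration).
cat > ties-config.yml <<'EOF'
models:
  - model: huihui-ai/Falcon3-10B-Instruct-abliterated
    parameters:
      density: 0.5   # fraction of delta parameters kept (assumed)
      weight: 1.0    # contribution of this model (assumed)
merge_method: ties
base_model: tiiuae/Falcon3-10B-Base
parameters:
  normalize: true
dtype: bfloat16
EOF

# The merge itself would then be produced with:
#   mergekit-yaml ties-config.yml ./abliTIES2BaseFalcon
```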

llama
2
0

Xortron22B-Q5_K_M-GGUF

llama-cpp
1
1

BuddyGlassUncensored2025.3

1
1

BuddyGlassUncensored2025.6

1
1

RA_Reasoner2.0-Q5_K_S-GGUF

llama-cpp
1
0

HighSpeedChickenFeed

llama
1
0

BuddyGlassKilledBonziBuddyV3.1

1
0

BuddyGlassIsBonziBuddyUncensored-Q5_K_M-GGUF

llama-cpp
1
0

BuddyGlassUncensored2025.4-Q5_K_M-GGUF

llama-cpp
1
0

BuddyGlassUncensored2025.6-Q5_K_M-GGUF

llama-cpp
1
0

BuddyGlassUncensored2025.4

This is a merge of pre-trained language models created using mergekit. It was merged using the DARE TIES merge method, with mistralai/Mistral-Small-24B-Instruct-2501 as the base. The following models were included in the merge: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated, TheDrummer/Cydonia-24B-v2, huihui-ai/Arcee-Blitz-abliterated, and cognitivecomputations/Dolphin3.0-Mistral-24B. A YAML configuration (not reproduced in this listing) was used to produce this model.
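As with the TIES entry, the actual YAML is missing from the listing. A DARE TIES mergekit configuration over these four models might look like the sketch below; the equal weights, densities, and dtype are illustrative assumptions, not the configuration actually used for BuddyGlassUncensored2025.4.

```shell
# Write an illustrative mergekit DARE TIES config (placeholder values,
# not the actual BuddyGlassUncensored2025.4 configuration).
cat > dare-ties-config.yml <<'EOF'
models:
  - model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
    parameters: {density: 0.5, weight: 0.25}
  - model: TheDrummer/Cydonia-24B-v2
    parameters: {density: 0.5, weight: 0.25}
  - model: huihui-ai/Arcee-Blitz-abliterated
    parameters: {density: 0.5, weight: 0.25}
  - model: cognitivecomputations/Dolphin3.0-Mistral-24B
    parameters: {density: 0.5, weight: 0.25}
merge_method: dare_ties
base_model: mistralai/Mistral-Small-24B-Instruct-2501
dtype: bfloat16
EOF

# The merge itself would then be produced with:
#   mergekit-yaml dare-ties-config.yml ./BuddyGlassUncensored2025.4
```

DARE TIES differs from plain TIES in that each model's delta weights are randomly dropped and rescaled (per the density parameter) before the TIES sign-election step, which is why every ingredient model carries its own density/weight pair.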

0
4

Xortron7MethedUp

llama
0
3

Qwen3.5-9B-heretic

license:apache-2.0
0
2

BuddyGlassNeverSleeps

llama
0
2

XORTRON.CriminalComputing.2026.27B.v2

license:apache-2.0
0
1

Xortron7MethedUp-SLERP-8B

llama
0
1

BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp

llama
0
1

BuddyGlass-MethHeadMethod

llama
0
1

Xortron22B

0
1

XortronUncensored2025.1

0
1

BuddyGlassAmpedUncensoredGGUF

llama
0
1

BuddyGlassUncensored7B

llama
0
1

Xortron24DPO

license:apache-2.0
0
1