darkc0de
XORTRON.CriminalComputing.2026.27B.Instruct
Xortron2025
Xortron2025, Uncensored Large Language Model for Offline and Local use.
XORTRON.CriminalComputing.LARGE.2026.3
BlackXorDolphTronGOAT
XortronCriminalComputingConfig
You can try this model now for free at xortron.tech. This model will help you do anything and everything you probably shouldn't be doing. As of this writing (July 2025), it tops the UGI Leaderboard for models under 70 billion parameters in both the UGI and W10 categories.
XortronCriminalComputingConfig-Q5_K_S-GGUF
darkc0de/XortronCriminalComputingConfig-Q5_K_S-GGUF This model was converted to GGUF format from `darkc0de/XortronCriminalComputingConfig` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), then invoke the llama.cpp server or the CLI. Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
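As a quick sanity check, a one-liner like the following should work once llama.cpp is installed; the exact `.gguf` filename inside the repo is an assumption based on GGUF-my-repo's lowercase naming convention:

```bash
# Install llama.cpp (Mac/Linux), then stream the quantized model
# straight from the Hub. The --hf-file name is assumed, not verified.
brew install llama.cpp
llama-cli --hf-repo darkc0de/XortronCriminalComputingConfig-Q5_K_S-GGUF \
  --hf-file xortroncriminalcomputingconfig-q5_k_s.gguf \
  -p "Hello, who are you?"
```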
XortronCriminalComputing-Q4_K_S-GGUF
BlackXorDolphTronGOAT-Q5_K_S-GGUF
darkc0de/BlackXorDolphTronGOAT-Q5_K_S-GGUF This model was converted to GGUF format from `darkc0de/BlackXorDolphTronGOAT` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), then invoke the llama.cpp server or the CLI. Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
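The same checkpoint can also be served over llama.cpp's OpenAI-compatible HTTP API rather than run one-shot; a minimal sketch, with the `.gguf` filename again assumed from the GGUF-my-repo convention:

```bash
# Serve the model locally; llama-server exposes an OpenAI-compatible API
# on port 8080 by default.
llama-server --hf-repo darkc0de/BlackXorDolphTronGOAT-Q5_K_S-GGUF \
  --hf-file blackxordolphtrongoat-q5_k_s.gguf -c 2048

# Query it from another shell:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"}]}'
```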
XortronCriminalComputing-Q6_K-GGUF
Qwen3.5-27B-heretic
XORTRON.TECH
XORTRON.CriminalComputing.2026.27B.Instruct.NEXT
Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Uncensored-Toxic-DPO-GGUF
Agent.Xortron
XORTRON
UnderbossUncensored-GGUF
Llama-3.2-3B-Instruct-abliterated-Q8_0-GGUF
Mistral-Small-24B-Instruct-2501-abliterated-Q4_K_M-GGUF
Xortron7MethedUp-SLERP-8B-Q5_K_M-GGUF
Xortron7_Alpha-Q5_K_M-GGUF
XORTRON.CriminalComputing.Q35xC46
BuddyGlassUncensored2025.1
BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp
Xortron7_Alpha-Q8_0-GGUF
XortronCriminalComputingPolarisAlpha
XXXCCCv2test4
Llama-3.2-3B-Instruct-uncensored-Q5_K_M-GGUF
gemma-3-27b-it-abliterated-Q5_K_M-GGUF
Squirrel
BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q5_K_M-GGUF
Llama-3.2-3B-Instruct-abliterated-Q4_K_M-GGUF
XortronUncensoredGGUF
BuddyGlassUncensored2025.2
Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Uncensored-Toxic-DPO
Hermes-3-Llama-3.2-3B-abliterated-Q5_K_M-GGUF
XortronAdvancedUnsensored-Q5_K_M-GGUF
BuddyGlassUncensored2025SFT-Q6_K-GGUF
Qwen2.5-14B-Instruct-abliterated-v2-Q8_0-GGUF
BuddyGlassUncensored2025.2-GGUF
BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp-Q8_0-GGUF
BuddyGlassNeverSleeps-methheadmethod-v0.2-Q8_0-GGUF
BuddyGlassNeverSleeps-Q8_0-GGUF
Xortron7MethedUp-Q8_0-GGUF
Xortron22B-Uncensored
Xortron7MethedUp-pass3headGOAT-Q8_0-GGUF
Llama-3.1-SuperNova-Lite-IQ4_NL-GGUF
Xortron22B-Uncensored-Q4_K_M-GGUF
Mistral-Small-24B-Instruct-2501-abliterated-Q5_K_M-GGUF
darkc0de/Mistral-Small-24B-Instruct-2501-abliterated-Q5_K_M-GGUF This model was converted to GGUF format from `huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), then invoke the llama.cpp server or the CLI. Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
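For the build-from-source route, the card's Step 1 and Step 2 boil down to something like this (the make-based build matches the flags named in the card; newer llama.cpp releases favor CMake):

```bash
# Step 1: clone llama.cpp.
git clone https://github.com/ggerganov/llama.cpp
# Step 2: build with curl support so --hf-repo downloads work;
# add LLAMA_CUDA=1 on Linux machines with an Nvidia GPU.
cd llama.cpp && LLAMA_CURL=1 make
```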
BuddyGlass-MethHeadMethod-Q8_0-GGUF
SmolLM2-1.7B-Instruct-Q8_0-GGUF
darkc0de/SmolLM2-1.7B-Instruct-Q8_0-GGUF This model was converted to GGUF format from `HuggingFaceTB/SmolLM2-1.7B-Instruct` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), then invoke the llama.cpp server or the CLI. Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
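If you would rather keep a local copy than stream from the Hub each run, something like the following works; the exact filename inside the repo is an assumption:

```bash
# Download the quantized file once, then point llama-cli at it.
# The filename follows GGUF-my-repo's convention and is assumed.
huggingface-cli download darkc0de/SmolLM2-1.7B-Instruct-Q8_0-GGUF \
  smollm2-1.7b-instruct-q8_0.gguf --local-dir .
llama-cli -m smollm2-1.7b-instruct-q8_0.gguf -p "Hello"
```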
BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp-Q8_0-GGUF
Dolphin3.0-Mistral-24B-Q4_K_M-GGUF
XortronGlitched
XortronMethHeadMethod-Q8_0-GGUF
BuddyGlassNeverSleeps-methheadmethod-v0.2
XortronGlitched24B
Xortron24DPO-Q6_K-GGUF
XortronUncensored2025.1-Q6_K-GGUF
BuddyGlassUncensored2025.2-Q5_K_M-GGUF
darkc0de/BuddyGlassUncensored2025.2-Q5_K_M-GGUF This model was converted to GGUF format from `darkc0de/BuddyGlassUncensored2025.2` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), then invoke the llama.cpp server or the CLI. Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
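For interactive chat rather than one-shot prompting, llama-cli's conversation mode is the usual route; the `.gguf` filename is assumed:

```bash
# -cnv starts an interactive chat session using the model's chat template.
llama-cli --hf-repo darkc0de/BuddyGlassUncensored2025.2-Q5_K_M-GGUF \
  --hf-file buddyglassuncensored2025.2-q5_k_m.gguf -cnv
```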
abliTIES2BaseFalcon
This is a merge of pre-trained language models created with mergekit, using the TIES merge method with tiiuae/Falcon3-10B-Base as the base. The following model was included in the merge: huihui-ai/Falcon3-10B-Instruct-abliterated. The following YAML configuration was used to produce this model:
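The YAML block itself was not captured in this listing. As an illustration only (the density, weight, and dtype values are assumptions, not the actual configuration), a mergekit TIES config of this shape would look like:

```bash
# Hypothetical mergekit TIES config; values are illustrative, not the
# actual ones used to produce abliTIES2BaseFalcon.
cat > ties-config.yml <<'EOF'
models:
  - model: huihui-ai/Falcon3-10B-Instruct-abliterated
    parameters:
      density: 0.5
      weight: 1.0
merge_method: ties
base_model: tiiuae/Falcon3-10B-Base
parameters:
  normalize: true
dtype: bfloat16
EOF
pip install mergekit
mergekit-yaml ties-config.yml ./merged-model
```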
Xortron22B-Q5_K_M-GGUF
BuddyGlassUncensored2025.3
BuddyGlassUncensored2025.6
RA_Reasoner2.0-Q5_K_S-GGUF
HighSpeedChickenFeed
BuddyGlassKilledBonziBuddyV3.1
BuddyGlassIsBonziBuddyUncensored-Q5_K_M-GGUF
BuddyGlassUncensored2025.4-Q5_K_M-GGUF
BuddyGlassUncensored2025.6-Q5_K_M-GGUF
BuddyGlassUncensored2025.4
This is a merge of pre-trained language models created with mergekit, using the DARE TIES merge method with mistralai/Mistral-Small-24B-Instruct-2501 as the base. The following models were included in the merge: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated, TheDrummer/Cydonia-24B-v2, huihui-ai/Arcee-Blitz-abliterated, and cognitivecomputations/Dolphin3.0-Mistral-24B. The following YAML configuration was used to produce this model:
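As with the TIES merge above, the actual YAML did not survive extraction. A hedged sketch of a DARE TIES config over these four models (densities and weights are illustrative assumptions):

```bash
# Hypothetical DARE TIES config; densities/weights are illustrative only.
cat > dare-ties-config.yml <<'EOF'
models:
  - model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
    parameters: {density: 0.5, weight: 0.25}
  - model: TheDrummer/Cydonia-24B-v2
    parameters: {density: 0.5, weight: 0.25}
  - model: huihui-ai/Arcee-Blitz-abliterated
    parameters: {density: 0.5, weight: 0.25}
  - model: cognitivecomputations/Dolphin3.0-Mistral-24B
    parameters: {density: 0.5, weight: 0.25}
merge_method: dare_ties
base_model: mistralai/Mistral-Small-24B-Instruct-2501
dtype: bfloat16
EOF
mergekit-yaml dare-ties-config.yml ./merged-model
```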