Goekdeniz-Guelmez
Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v3
Josiefied-Qwen3-8B-abliterated-v1
Josiefied-Qwen2.5-7B-Instruct-abliterated-v2-gguf
JOSIE-4B-Instruct
Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2
Josiefied-Qwen3-4B-abliterated-v1-gguf
mistral-7b-grok_gguf
Qwen3-4B-Instruct-2507-gabliterated
NousResearch-Genstruct-7B-GGUF
Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-gguf
j.o.s.i.e.v4o-1.5b-dpo-stage1-v1-gguf
Josiefied Qwen3 1.7B Abliterated V1 Gguf
Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1-gguf This is the GGUF Quantisation of Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1. - Developed by: Gökdeniz Gülmez - Funded by: Gökdeniz Gülmez - Shared by: Gökdeniz Gülmez - Original model: Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1
Josiefied-Qwen3-VL-4B-Instruct-abliterated-beta-v1
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“abliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities. Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility. These models are intended for advanced users who require unrestricted, high-performance language generation. Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-VL-4B-Instruct-abliterated-v1 Introducing Josiefied-Qwen3-VL-4B-Instruct-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment. This model has been abliterated and fine-tuned completely end-to-end on Apple silicon using MLX. - GGUF (mradermacher) - i1 GGUF (mradermacher) - MLX - Developed by: Gökdeniz Gülmez - Funded by: Gökdeniz Gülmez - Shared by: Gökdeniz Gülmez - Model type: qwen3vl - Finetuned from model: Qwen/Qwen3-VL-4B-Instruct This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.
Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v1-gguf
Josiefied Qwen3 14B Abliterated V3
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“abliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities. Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility. These models are intended for advanced users who require unrestricted, high-performance language generation. Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-14B-abliterated-v3 Introducing Josiefied-Qwen3-14B-abliterated-v3, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment. - GGUF (mradermacher) - i1 GGUF (mradermacher) - GGUF (bartowski) - MLX - Developed by: Gökdeniz Gülmez - Funded by: Gökdeniz Gülmez - Shared by: Gökdeniz Gülmez - Model type: qwen3 - Finetuned from model: Qwen/Qwen3-14B This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.
Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1-gguf
Josiefied-Qwen3-4B-Instruct-2507-abliterated-v1
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“abliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities. Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility. These models are intended for advanced users who require unrestricted, high-performance language generation. Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-abliterated-v1 Introducing Josiefied-Qwen3-4B-Instruct-2507-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment. - Developed by: Goekdeniz-Guelmez - Funded by: Goekdeniz-Guelmez - Shared by: Goekdeniz-Guelmez - Model type: qwen3 - Finetuned from model: Qwen/Qwen3-4B-Instruct-2507 This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.
JOSIEv4o-8b-stage1-v4-gguf
Qwen3-4B-Thinking-2507-gabliterated
JOSIE-1.1-4B-Instruct
Josiefied-Qwen3-0.6B-abliterated-v1-gguf
JOSIE-4B-Thinking
Josiefied-Qwen2-1.5B-Instruct-abliterated-gguf
J.O.S.I.E.3-Beta4-slerp-gguf
dolphin-2.8-gemma-2b_gguf
Josiefied-Qwen2.5-3B-Instruct-abliterated-v1-gguf
NousResearch-Genstruct-7B-only-GGUF
Qwen3-4B-Sky-High-Hermes-gabliterated
j.o.s.i.e.v4o-7b-orpo-stage1-v1-gguf
Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
This model supports the Chinese and English languages.
Josiefied-Qwen2-0.5B-Instruct-abliterated-gguf
J.O.S.I.E.v4o-7b-stage1-v0.1-gguf
Josiefied Qwen3 30B A3B Abliterated V2
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“abliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities. Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility. These models are intended for advanced users who require unrestricted, high-performance language generation. Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1 Introducing Josiefied-Qwen3-30B-A3B-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment. - GGUF (mradermacher) - i1 GGUF (mradermacher) - MLX - Developed by: Gökdeniz Gülmez - Funded by: Gökdeniz Gülmez - Shared by: Gökdeniz Gülmez - Model type: qwen3moe - Finetuned from model: Qwen/Qwen3-30B-A3B

| Metric | Value |
|--------|-------|
| Position | 30 |
| UQI | 26.04 |
| Unruly | 2.2 |
| Internet | 1.5 |
| Social/Political | 1.3 |
| W/10 | 8.5 |
| W/10 - Direct | 7 |
| W/10 - Adherence | 10 |
| Natint | 17.26 |
| Coding | 18 |
| Political Lean | -14.9% |
| Ideology | Liberalism |
| Govt | 49.3% |
| Dipl | 53.5% |
| Econ | 46.0% |
| Scty | 53.9% |
| Federal Unitary | 44.2% |
| Democratic Autocratic | 62.7% |
| Security Freedom | 51.7% |
| Nationalism Internationalism | 34.3% |
| Militarist Pacifist | 53.7% |
| Assimilationist Multiculturalist | 34.6% |
| Collectivize Privatize | 49.0% |
| Planned LaissezFaire | 56.5% |
| Isolationism Globalism | 38.5% |
| Irreligious Religious | 37.3% |
| Progressive Traditional | 55.2% |
| Acceleration Bioconservative | 74.8% |

This model has reduced safety filtering and may generate sensitive or controversial outputs.
Use responsibly and at your own risk.
Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v1
Qwen3-0.6B-gabliterated
Hyperion-2.0-Mistral-7B-GGUF
JOSIE-1.1-4B-Thinking
Josiefied-Hermes-3-Llama-3.2-3B-v1
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities. Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility. These models are intended for advanced users who require unrestricted, high-performance language generation. Model Card for Goekdeniz-Guelmez/Josiefied-Hermes-3-Llama-3.2-3B-v1 Introducing Josiefied-Hermes-3-Llama-3.2-3B-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment. - Developed by: Goekdeniz-Guelmez - Funded by: Goekdeniz-Guelmez - Shared by: Goekdeniz-Guelmez - Model type: llama - Finetuned from model: NousResearch/Hermes-3-Llama-3.2-3B This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.
Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1-gguf
Nanbeige4-3B-Thinking-2511-gabliterated
Hyperion-2.1-Mistral-7B-GGUF
Josiefied-Qwen3-0.6B-abliterated-v1
Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v1
This model is licensed under the Apache 2.0 license. For more information, visit the license link at https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE.
J.O.S.I.E.3-Beta5-slerp-gguf
Josiefied-Qwen3-4B-abliterated-v2
Josiefied-Qwen3-1.7B-abliterated-v1
J.O.S.I.E.3-Beta3-slerp-gguf
j.o.s.i.e.v4o-7b-orpo-stage1-v0.5-gguf
Josiefied-Qwen2.5-Coder-14B-Instruct-abliterated-v1
Josiefied-Qwen2.5-7B-Instruct-abliterated-v2
This model supports the Chinese and English languages.
Josiefied-Qwen2-7B-Instruct-abliterated-gguf
J.O.S.I.E.3-Beta6-slerp-gguf
Matter-0.1-7B-boost-DPO-preview-gguf
MiniMax01Text-Dev
Josiefied DeepSeek R1 0528 Qwen3 8B Abliterated V1
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“abliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities. Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility. These models are intended for advanced users who require unrestricted, high-performance language generation. Model Card for Goekdeniz-Guelmez/Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1 Introducing Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment. - GGUF (mradermacher) - i1 GGUF (mradermacher) - MLX - Developed by: Gökdeniz Gülmez - Funded by: Gökdeniz Gülmez - Shared by: Gökdeniz Gülmez - Model type: qwen3 - Finetuned from model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.
mistral-7b-anthropic_gguf
JOSIE-R1-4B
OpenHyperion-2.5-Mistral-7B-GGUF
ChatHercules-2.5-Mistral-7B-DPO-GGUF
OpenHercules-2.5-Mistral-7B-GGUF
Hercules-2.5-Mistral-7B-GGUF
MiniMaxM1-Dev
J.O.S.I.E.3-Beta12-7B-slerp-gguf
ChatHercules-2.5-Mistral-7B-GGUF
ThoughtStream-4B-v0.2-gguf
Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v2-gguf
Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3-gguf
J.O.S.I.E.-Qwen3-10M-Base-Phase1
Josiefied-Qwen2.5-7B-Instruct-abliterated
J.O.S.I.E.3-Beta11-7B-slerp-gguf
Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1
This model supports the Chinese and English languages.
J.O.S.I.E.-Qwen3-10M-Random
This is a randomly initialized tiny qwen3 model with 10M parameters. It is part of a project to create an LLM from scratch entirely on Apple silicon. I'm using the tokenizer created by Alibaba, the one used for the Qwen3 models. The next (trained) version of this will be at `Goekdeniz-Guelmez/J.O.S.I.E.-Qwen3-10M-Base-Phase1`.
MiniCPM-2B-dpo-bf16-safetensors
J.O.S.I.E.v4o-8b-stage1-beta1-Q4_k_s-gguf
MiniCPM-2B-sft-fp32-safetensors
J.O.S.I.E.v4o-8b-stage1-beta1-Q4_k_m-gguf
hermes3-qwen3-0.6b-from-scratch
Josiefied-granite-4.0-micro-abliterated-v1
J.O.S.I.E.v4o-8b-stage1-beta2.2-Q4_K_S-GGUF
J.O.S.I.E.v4o-7b-stage1-beta3.2-gguf
Gemma-3-1b-it-gabliterated
Qwen2.5-3B-gabliterated
J.O.S.I.E.3-Beta8-slerp-gguf
Granite-4.0-350m-gabliterated
J.O.S.I.E.v4o-8b-stage1-beta2.2-Q4_K_M-GGUF
J.O.S.I.E.v4o-8b-stage1-beta2.3.1-Q4_K_S-GGUF
J.O.S.I.E.3-Beta7-slerp-gguf
josie-7b-v6.0-step2000-gguf
MiniCPM-2B-dpo-fp32-safetensors
JOSIExMistral-7B-v0.32
J.O.S.I.E.v4o-0.5b-stage1-beta1
Josiefied-Qwen3-4B-abliterated-v1
SmolLM3-3B-gabliterated
With this model series, I introduce the first Gabliteration, a novel neural weight modification technique that advances beyond traditional abliteration methods through adaptive multi-directional projections with regularized layer selection. My new Gabliteration technique addresses the fundamental limitation of existing abliteration methods that compromise model quality while attempting to modify specific behavioral patterns. This series includes models ranging from 0.6B to 32B parameters, demonstrating the scalability and effectiveness of the Gabliteration technique across different model sizes. Building upon the foundational work of Arditi et al. (2024) on single-direction abliteration, Gabliteration extends to a comprehensive multi-directional framework with theoretical guarantees. My method employs singular value decomposition on difference matrices between harmful and harmless prompt representations to extract multiple refusal directions. If you use these models, please cite the original research (paper coming later this year): This work builds upon the foundational research by Arditi et al. (2024) on refusal direction identification in large language models.
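The extraction step described above — SVD on a difference matrix between harmful and harmless prompt representations — can be sketched in a few lines of numpy. All shapes and the random stand-in activations below are purely illustrative, not the actual JOSIEFIED pipeline; in practice the rows would be hidden states collected at a chosen layer while the model processes curated prompt sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for per-prompt hidden states at one layer.
d_model, n_pairs = 64, 32
harmful = rng.normal(size=(n_pairs, d_model))
harmless = rng.normal(size=(n_pairs, d_model))

# Difference matrix between paired representations.
D = harmful - harmless                      # (n_pairs, d_model)

# SVD of the difference matrix: the right-singular vectors are candidate
# refusal directions, ordered by how much of the difference they explain.
_, S, Vt = np.linalg.svd(D, full_matrices=False)

k = 3                                       # number of directions kept (a hyperparameter)
refusal_dirs = Vt[:k]                       # (k, d_model), orthonormal rows

# Sanity check: the extracted directions are mutually orthogonal unit vectors.
assert np.allclose(refusal_dirs @ refusal_dirs.T, np.eye(k), atol=1e-8)
```

A single-direction abliteration in the style of Arditi et al. (2024) corresponds to `k = 1`; the multi-directional framework keeps several leading singular vectors instead.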
JOSIEv4o-8b-stage1-v4
J.O.S.I.E.3-Beta9-7B-slerp_gguf
J.O.S.I.E.v4o-7b-stage1-beta3.0-Q4_K_M-gguf
josie-7b-v6.0-Q4_K_M-GGUF
Goekdeniz-Guelmez/josie-7b-v6.0-Q4_K_M-GGUF This model was converted to GGUF format from `Goekdeniz-Guelmez/josie-7b-v6.0` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp. Step 1: Install llama.cpp through brew (works on Mac and Linux). Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
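The two steps can be sketched as shell commands. This is a minimal sketch assuming the Make-based llama.cpp build of that era; build flags change between llama.cpp versions, and the `--hf-file` name below is an assumed example, not taken from the repo listing.

```shell
# Step 1: install llama.cpp via Homebrew (works on Mac and Linux)
brew install llama.cpp

# Alternative: build from source with CURL support so GGUF files can be
# fetched directly from the Hugging Face Hub. Add hardware-specific flags
# as needed (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && LLAMA_CURL=1 make

# Run inference; the GGUF file name is illustrative.
llama-cli --hf-repo Goekdeniz-Guelmez/josie-7b-v6.0-Q4_K_M-GGUF \
  --hf-file josie-7b-v6.0-q4_k_m.gguf -p "Hello, who are you?"
```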
Josiefied-Qwen2.5-3B-Instruct-abliterated-v1
Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3
This model is licensed under the Apache 2.0 license. For more information, visit the license link at https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE.
J.O.S.I.E.3-Beta11-7B-slerp
josie-7b-v6.0-step2000
Text generation inference under the Apache 2.0 license.
J.O.S.I.E.v4o-8b-stage1-beta1
J.O.S.I.E.v4o-8b-stage1-beta2.2
Josiefied-Qwen2-0.5B-Instruct-abliterated
Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v2
This model is licensed under the Apache 2.0 license. For more information, visit the license link at https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE.
josie-3b-v6.0
Chat model licensed under Apache 2.0.
Qwen3Next-Dev
J.O.S.I.E.3-Beta12-7B-slerp
Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1
J.O.S.I.E.3-Beta10-7B-slerp-gguf
J.O.S.I.E.v4o-beta-stage1-500-steps
J.O.S.I.E.v4o-7b-stage1-beta3.0
j.o.s.i.e.v4o-1.5b-dpo-stage1-v1
The model is a base model designed for language processing in English.
Qwen2.5-7B-gabliterated
With this model series, I introduce the first Gabliteration, a novel neural weight modification technique that advances beyond traditional abliteration methods through adaptive multi-directional projections with regularized layer selection. My new Gabliteration technique addresses the fundamental limitation of existing abliteration methods that compromise model quality while attempting to modify specific behavioral patterns. This series includes models ranging from 0.6B to 32B parameters, demonstrating the scalability and effectiveness of the Gabliteration technique across different model sizes. Building upon the foundational work of Arditi et al. (2024) on single-direction abliteration, Gabliteration extends to a comprehensive multi-directional framework with theoretical guarantees. My method employs singular value decomposition on difference matrices between harmful and harmless prompt representations to extract multiple refusal directions. If you use these models, please cite the original research (paper coming later this year): This work builds upon the foundational research by Arditi et al. (2024) on refusal direction identification in large language models.
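Once a refusal direction has been extracted, applying it amounts to projecting that direction out of the weight matrices that write into the residual stream. The numpy sketch below illustrates this projection with a synthetic direction and weight matrix; it is not the actual Gabliteration implementation, whose adaptive layer selection and regularization are described only at a high level above.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model = 64

# Synthetic placeholders: a unit-norm refusal direction r and a weight
# matrix W that writes into the residual stream (e.g. an output projection).
r = rng.normal(size=d_model)
r /= np.linalg.norm(r)
W = rng.normal(size=(d_model, d_model))

# Project the refusal direction out of W's output space: W' = (I - r r^T) W.
W_abl = W - np.outer(r, r) @ W

# The edited matrix can no longer write any component along r.
assert np.allclose(r @ W_abl, 0.0, atol=1e-8)
```

Repeating this projection for each of the `k` extracted directions (or using the combined projector onto their orthogonal complement) gives the multi-directional variant.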
Josiefied-Qwen3-4B-Instruct-2507-abliterated-v2
Josiefied-Qwen3.5-0.8B-gabliterated-v1
J.O.S.I.E.3-Beta10-7B-slerp
josie-3b-v6.0-epoch1-gguf
Dots1-Dev
Josiefied-Health-Qwen3-8B-abliterated-v1
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“abliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities. Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility. These models are intended for advanced users who require unrestricted, high-performance language generation. Model Card for Goekdeniz-Guelmez/Josiefied-Health-Qwen3-8B-abliterated-v1 Introducing Josiefied-Health-Qwen3-8B-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment. - GGUF (mradermacher) - i1 GGUF (mradermacher) - MLX - Developed by: Gökdeniz Gülmez - Funded by: Gökdeniz Gülmez - Shared by: Gökdeniz Gülmez - Model type: qwen3 - Finetuned from model: Intelligent-Internet/II-Medical-8B This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.
josie-7b-v6.0
Chat model licensed under Apache 2.0.