Goekdeniz-Guelmez

146 models

Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v3

2,009
9

Josiefied-Qwen3-8B-abliterated-v1

833
171

Josiefied-Qwen2.5-7B-Instruct-abliterated-v2-gguf

726
9

JOSIE-4B-Instruct

license:mit
695
8

Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2

597
17

Josiefied-Qwen3-4B-abliterated-v1-gguf

license:apache-2.0
412
7

mistral-7b-grok_gguf

298
2

Qwen3-4B-Instruct-2507-gabliterated

293
9

NousResearch-Genstruct-7B-GGUF

license:apache-2.0
290
3

Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-gguf

license:apache-2.0
248
14

j.o.s.i.e.v4o-1.5b-dpo-stage1-v1-gguf

248
1

Josiefied Qwen3 1.7B Abliterated V1 Gguf

Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1-gguf

This is the GGUF quantisation of Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1.

- Developed by: Gökdeniz Gülmez
- Funded by: Gökdeniz Gülmez
- Shared by: Gökdeniz Gülmez
- Original model: Goekdeniz-Guelmez/Josiefied-Qwen3-1.7B-abliterated-v1

license:apache-2.0
240
5

Josiefied-Qwen3-VL-4B-Instruct-abliterated-beta-v1

The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“abliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities. Despite their rebellious spirit, the JOSIEFIED models often outperform their base counterparts on standard benchmarks — delivering both raw power and utility. These models are intended for advanced users who require unrestricted, high-performance language generation.

Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-VL-4B-Instruct-abliterated-v1

Introducing Josiefied-Qwen3-VL-4B-Instruct-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment. This model has been abliterated and then fine-tuned completely end-to-end on Apple silicon using MLX.

- GGUF (mradermacher)
- i1 GGUF (mradermacher)
- MLX
- Developed by: Gökdeniz Gülmez
- Funded by: Gökdeniz Gülmez
- Shared by: Gökdeniz Gülmez
- Model type: qwen3vl
- Finetuned from model: Qwen/Qwen3-VL-4B-Instruct

This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.

240
0

Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v1-gguf

236
3

Josiefied Qwen3 14B Abliterated V3

Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-14B-abliterated-v3

Introducing Josiefied-Qwen3-14B-abliterated-v3, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment.

- GGUF (mradermacher)
- i1 GGUF (mradermacher)
- GGUF (bartowski)
- MLX
- Developed by: Gökdeniz Gülmez
- Funded by: Gökdeniz Gülmez
- Shared by: Gökdeniz Gülmez
- Model type: qwen3
- Finetuned from model: Qwen/Qwen3-14B

This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.

211
18

Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1-gguf

license:apache-2.0
185
1

Josiefied-Qwen3-4B-Instruct-2507-abliterated-v1

Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-abliterated-v1

Introducing Josiefied-Qwen3-4B-Instruct-2507-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment.

- Developed by: Goekdeniz-Guelmez
- Funded by: Goekdeniz-Guelmez
- Shared by: Goekdeniz-Guelmez
- Model type: qwen3
- Finetuned from model: Qwen/Qwen3-4B-Instruct-2507

This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.

173
3

JOSIEv4o-8b-stage1-v4-gguf

164
1

Qwen3-4B-Thinking-2507-gabliterated

157
8

JOSIE-1.1-4B-Instruct

license:mit
137
2

Josiefied-Qwen3-0.6B-abliterated-v1-gguf

license:apache-2.0
137
1

JOSIE-4B-Thinking

license:mit
120
12

Josiefied-Qwen2-1.5B-Instruct-abliterated-gguf

license:apache-2.0
118
1

J.O.S.I.E.3-Beta4-slerp-gguf

117
0

dolphin-2.8-gemma-2b_gguf

112
2

Josiefied-Qwen2.5-3B-Instruct-abliterated-v1-gguf

104
2

NousResearch-Genstruct-7B-only-GGUF

license:apache-2.0
100
1

Qwen3-4B-Sky-High-Hermes-gabliterated

96
7

j.o.s.i.e.v4o-7b-orpo-stage1-v1-gguf

license:apache-2.0
93
1

Josiefied-Qwen2.5-14B-Instruct-abliterated-v4

This model supports the Chinese and English languages.

license:apache-2.0
89
20

Josiefied-Qwen2-0.5B-Instruct-abliterated-gguf

license:apache-2.0
89
1

J.O.S.I.E.v4o-7b-stage1-v0.1-gguf

license:apache-2.0
88
2

Josiefied Qwen3 30B A3B Abliterated V2

Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v1

Introducing Josiefied-Qwen3-30B-A3B-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment.

- GGUF (mradermacher)
- i1 GGUF (mradermacher)
- MLX
- Developed by: Gökdeniz Gülmez
- Funded by: Gökdeniz Gülmez
- Shared by: Gökdeniz Gülmez
- Model type: qwen3moe
- Finetuned from model: Qwen/Qwen3-30B-A3B

| Metric | Value |
|--------|-------|
| Position | 30 |
| UQI | 26.04 |
| Unruly | 2.2 |
| Internet | 1.5 |
| Social/Political | 1.3 |
| W/10 | 8.5 |
| W/10 - Direct | 7 |
| W/10 - Adherence | 10 |
| Natint | 17.26 |
| Coding | 18 |
| Political Lean | -14.9% |
| Ideology | Liberalism |
| Govt | 49.3% |
| Dipl | 53.5% |
| Econ | 46.0% |
| Scty | 53.9% |
| Federal Unitary | 44.2% |
| Democratic Autocratic | 62.7% |
| Security Freedom | 51.7% |
| Nationalism Internationalism | 34.3% |
| Militarist Pacifist | 53.7% |
| Assimilationist Multiculturalist | 34.6% |
| Collectivize Privatize | 49.0% |
| Planned LaissezFaire | 56.5% |
| Isolationism Globalism | 38.5% |
| Irreligious Religious | 37.3% |
| Progressive Traditional | 55.2% |
| Acceleration Bioconservative | 74.8% |

This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.

87
18

Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v1

85
9

Qwen3-0.6B-gabliterated

81
2

Hyperion-2.0-Mistral-7B-GGUF

license:apache-2.0
81
1

JOSIE-1.1-4B-Thinking

license:mit
76
1

Josiefied-Hermes-3-Llama-3.2-3B-v1

Model Card for Goekdeniz-Guelmez/Josiefied-Hermes-3-Llama-3.2-3B-v1

Introducing Josiefied-Hermes-3-Llama-3.2-3B-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment.

- Developed by: Goekdeniz-Guelmez
- Funded by: Goekdeniz-Guelmez
- Shared by: Goekdeniz-Guelmez
- Model type: llama
- Finetuned from model: NousResearch/Hermes-3-Llama-3.2-3B

This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.

llama
70
0

Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1-gguf

license:apache-2.0
69
1

Nanbeige4-3B-Thinking-2511-gabliterated

llama
69
0

Hyperion-2.1-Mistral-7B-GGUF

license:apache-2.0
68
1

Josiefied-Qwen3-0.6B-abliterated-v1

66
4

Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v1

This model is licensed under the Apache 2.0 license. For more information, visit the license link at https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE.

license:apache-2.0
60
6

J.O.S.I.E.3-Beta5-slerp-gguf

55
0

Josiefied-Qwen3-4B-abliterated-v2

50
12

Josiefied-Qwen3-1.7B-abliterated-v1

49
6

J.O.S.I.E.3-Beta3-slerp-gguf

49
1

j.o.s.i.e.v4o-7b-orpo-stage1-v0.5-gguf

47
1

Josiefied-Qwen2.5-Coder-14B-Instruct-abliterated-v1

license:apache-2.0
45
2

Josiefied-Qwen2.5-7B-Instruct-abliterated-v2

This model supports the Chinese and English languages.

license:apache-2.0
43
10

Josiefied-Qwen2-7B-Instruct-abliterated-gguf

license:apache-2.0
43
1

J.O.S.I.E.3-Beta6-slerp-gguf

42
0

Matter-0.1-7B-boost-DPO-preview-gguf

41
1

MiniMax01Text-Dev

38
1

Josiefied DeepSeek R1 0528 Qwen3 8B Abliterated V1

Model Card for Goekdeniz-Guelmez/Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1

Introducing Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment.

- GGUF (mradermacher)
- i1 GGUF (mradermacher)
- MLX
- Developed by: Gökdeniz Gülmez
- Funded by: Gökdeniz Gülmez
- Shared by: Gökdeniz Gülmez
- Model type: qwen3
- Finetuned from model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B

This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.

37
30

mistral-7b-anthropic_gguf

36
0

JOSIE-R1-4B

35
2

OpenHyperion-2.5-Mistral-7B-GGUF

34
0

ChatHercules-2.5-Mistral-7B-DPO-GGUF

33
0

OpenHercules-2.5-Mistral-7B-GGUF

31
0

Hercules-2.5-Mistral-7B-GGUF

31
0

MiniMaxM1-Dev

31
0

J.O.S.I.E.3-Beta12-7B-slerp-gguf

license:apache-2.0
29
1

ChatHercules-2.5-Mistral-7B-GGUF

29
0

ThoughtStream-4B-v0.2-gguf

18
1

Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v2-gguf

18
1

Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3-gguf

17
1

J.O.S.I.E.-Qwen3-10M-Base-Phase1

license:apache-2.0
17
0

Josiefied-Qwen2.5-7B-Instruct-abliterated

license:apache-2.0
15
4

J.O.S.I.E.3-Beta11-7B-slerp-gguf

14
1

Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1

This model supports the Chinese and English languages.

license:apache-2.0
14
1

J.O.S.I.E.-Qwen3-10M-Random

This is a randomly initialized Qwen3-style tiny model with 10M parameters. It is part of a project to create an LLM from scratch completely on Apple silicon. I'm using Alibaba's tokenizer, the one used for the Qwen3 models. The next (trained) version of this will be at `Goekdeniz-Guelmez/J.O.S.I.E.-Qwen3-10M-Base-Phase1`.
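As a back-of-the-envelope check on the 10M figure, a rough parameter count under one plausible shape shows a Qwen3-sized vocabulary (~152K entries) dominating the budget at this scale. The layer sizes below are my assumptions for illustration, not the actual J.O.S.I.E. config.

```python
# Rough parameter count for a tiny Qwen3-style model. The vocabulary
# size approximates the Qwen3 tokenizer's; every other dimension here
# is an illustrative guess, not the actual J.O.S.I.E. config.
def transformer_params(vocab, d_model, n_layers, d_ff, tied_embeddings=True):
    emb = vocab * d_model                # token embedding table
    attn = 4 * d_model * d_model         # q, k, v, o projections
    mlp = 3 * d_model * d_ff             # gate, up, down (SwiGLU)
    head = 0 if tied_embeddings else vocab * d_model
    return emb + n_layers * (attn + mlp) + head

total = transformer_params(vocab=151_936, d_model=64, n_layers=4, d_ff=128)
print(f"{total:,}")  # 9,887,744 -- nearly all of it in the embedding table
```

With tied embeddings and a vocabulary this large, hitting a 10M budget forces a very small hidden size; almost the entire model is the embedding table.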

license:apache-2.0
14
0

MiniCPM-2B-dpo-bf16-safetensors

13
1

J.O.S.I.E.v4o-8b-stage1-beta1-Q4_k_s-gguf

llama
13
1

MiniCPM-2B-sft-fp32-safetensors

12
1

J.O.S.I.E.v4o-8b-stage1-beta1-Q4_k_m-gguf

llama
12
1

hermes3-qwen3-0.6b-from-scratch

12
0

Josiefied-granite-4.0-micro-abliterated-v1

10
0

J.O.S.I.E.v4o-8b-stage1-beta2.2-Q4_K_S-GGUF

llama
9
1

J.O.S.I.E.v4o-7b-stage1-beta3.2-gguf

llama-cpp
9
1

Gemma-3-1b-it-gabliterated

9
0

Qwen2.5-3B-gabliterated

license:apache-2.0
8
2

J.O.S.I.E.3-Beta8-slerp-gguf

8
1

Granite-4.0-350m-gabliterated

8
0

J.O.S.I.E.v4o-8b-stage1-beta2.2-Q4_K_M-GGUF

llama
7
1

J.O.S.I.E.v4o-8b-stage1-beta2.3.1-Q4_K_S-GGUF

llama
7
1

J.O.S.I.E.3-Beta7-slerp-gguf

7
0

josie-7b-v6.0-step2000-gguf

7
0

MiniCPM-2B-dpo-fp32-safetensors

6
1

JOSIExMistral-7B-v0.32

6
1

J.O.S.I.E.v4o-0.5b-stage1-beta1

license:apache-2.0
6
1

Josiefied-Qwen3-4B-abliterated-v1

5
10

SmolLM3-3B-gabliterated

With this model series, I introduce the first Gabliteration, a novel neural weight modification technique that advances beyond traditional abliteration methods through adaptive multi-directional projections with regularized layer selection. My new Gabliteration technique addresses the fundamental limitation of existing abliteration methods, which compromise model quality while attempting to modify specific behavioral patterns. This series includes models ranging from 0.6B to 32B parameters, demonstrating the scalability and effectiveness of the Gabliteration technique across different model sizes.

Building upon the foundational work of Arditi et al. (2024) on single-direction abliteration, Gabliteration extends to a comprehensive multi-directional framework with theoretical guarantees. My method employs singular value decomposition on difference matrices between harmful and harmless prompt representations to extract multiple refusal directions. If you use these models, please cite the original research (paper coming later this year). This work builds upon the foundational research by Arditi et al. (2024) on refusal direction identification in large language models.
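The SVD-based extraction described above can be sketched in a few lines. This is an illustrative toy under stated assumptions: the function names, shapes, and the plain projection step are mine, not the actual Gabliteration implementation, which additionally applies adaptive projections and regularized layer selection.

```python
import numpy as np

def refusal_directions(h_harmful, h_harmless, k=2):
    """Top-k candidate refusal directions from the SVD of the
    difference between harmful and harmless prompt activations.
    Both inputs are (n_prompts, d_model) hidden-state matrices."""
    diff = h_harmful - h_harmless                 # per-prompt differences
    _, _, vt = np.linalg.svd(diff, full_matrices=False)
    return vt[:k]                                  # (k, d_model), rows orthonormal

def project_out(W, directions):
    """Remove the span of `directions` from a weight matrix W."""
    P = directions.T @ directions                  # projector onto refusal subspace
    return W - W @ P

rng = np.random.default_rng(0)
d = 64
h_harmful = rng.normal(size=(32, d)) + 2.0         # toy activations, not real ones
h_harmless = rng.normal(size=(32, d))
V = refusal_directions(h_harmful, h_harmless, k=2)
W = rng.normal(size=(d, d))
W_abl = project_out(W, V)
# The edited matrix can no longer express the extracted directions:
print(np.abs(W_abl @ V[0]).max() < 1e-8)           # True
```

In the single-direction abliteration of Arditi et al. (2024), the direction is the difference of mean activations; the SVD view generalizes this to several directions at once.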

5
8

JOSIEv4o-8b-stage1-v4

llama
5
2

J.O.S.I.E.3-Beta9-7B-slerp_gguf

5
1

J.O.S.I.E.v4o-7b-stage1-beta3.0-Q4_K_M-gguf

llama-cpp
5
1

josie-7b-v6.0-Q4_K_M-GGUF

Goekdeniz-Guelmez/josie-7b-v6.0-Q4_K_M-GGUF

This model was converted to GGUF format from `Goekdeniz-Guelmez/josie-7b-v6.0` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp:

Step 1: Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

llama-cpp
5
1

Josiefied-Qwen2.5-3B-Instruct-abliterated-v1

license:apache-2.0
4
3

Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v3

This model is licensed under the Apache 2.0 license. For more information, visit the license link at https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE.

license:apache-2.0
4
2

J.O.S.I.E.3-Beta11-7B-slerp

license:apache-2.0
3
2

josie-7b-v6.0-step2000

Text generation inference under the Apache 2.0 license.

license:apache-2.0
3
2

J.O.S.I.E.v4o-8b-stage1-beta1

llama
3
1

J.O.S.I.E.v4o-8b-stage1-beta2.2

llama
3
1

Josiefied-Qwen2-0.5B-Instruct-abliterated

license:apache-2.0
3
1

Josiefied-Qwen2.5-1.5B-Instruct-abliterated-v2

This model is licensed under the Apache 2.0 license. For more information, visit the license link at https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE.

license:apache-2.0
3
1

josie-3b-v6.0

Chat model licensed under Apache 2.0.

license:apache-2.0
3
1

Qwen3Next-Dev

3
0

J.O.S.I.E.3-Beta12-7B-slerp

license:apache-2.0
2
2

Josiefied-Qwen2.5-Coder-7B-Instruct-abliterated-v1

license:apache-2.0
2
2

J.O.S.I.E.3-Beta10-7B-slerp-gguf

2
1

J.O.S.I.E.v4o-beta-stage1-500-steps

license:apache-2.0
2
1

J.O.S.I.E.v4o-7b-stage1-beta3.0

license:apache-2.0
2
1

j.o.s.i.e.v4o-1.5b-dpo-stage1-v1

This is a base model for English-language processing.

license:apache-2.0
2
1

Qwen2.5-7B-gabliterated

Gabliterated variant of Qwen2.5-7B. See the SmolLM3-3B-gabliterated card above for the full description of the Gabliteration technique.

license:apache-2.0
2
1

Josiefied-Qwen3-4B-Instruct-2507-abliterated-v2

2
0

Josiefied-Qwen3.5-0.8B-gabliterated-v1

2
0

J.O.S.I.E.3-Beta10-7B-slerp

license:apache-2.0
2
0

josie-3b-v6.0-epoch1-gguf

2
0

Dots1-Dev

2
0

Josiefied-Health-Qwen3-8B-abliterated-v1

Model Card for Goekdeniz-Guelmez/Josiefied-Health-Qwen3-8B-abliterated-v1

Introducing Josiefied-Health-Qwen3-8B-abliterated-v1, a new addition to the JOSIEFIED family — fine-tuned with a focus on openness and instruction alignment.

- GGUF (mradermacher)
- i1 GGUF (mradermacher)
- MLX
- Developed by: Gökdeniz Gülmez
- Funded by: Gökdeniz Gülmez
- Shared by: Gökdeniz Gülmez
- Model type: qwen3
- Finetuned from model: Intelligent-Internet/II-Medical-8B

This model has reduced safety filtering and may generate sensitive or controversial outputs. Use responsibly and at your own risk.

1
11

josie-7b-v6.0

Chat model licensed under Apache 2.0.

license:apache-2.0
1
2

J.O.S.I.E.v4o-8b-stage1-beta2.3.1

llama
1
1

J.O.S.I.E.v4o-beta-stage1-1500-steps

license:apache-2.0
1
0

josie-3b-v6.0-epoch1

license:apache-2.0
1
0

JosiexHelium-v6-2B-mlx-Base

license:cc-by-4.0
1
0

OpenBNB-MiniCPM3-4b

license:apache-2.0
1
0

LongCat-Flash-Dev

1
0

J.O.S.I.E.v4o

license:apache-2.0
0
27

J.O.S.I.E.3-Beta4-slerp

license:apache-2.0
0
2

J.O.S.I.E.3-Beta9-7B-slerp

license:apache-2.0
0
2

Josiefied-Qwen2-7B-Instruct-abliterated

license:apache-2.0
0
2

J.O.S.I.E.3-Beta3-slerp

license:apache-2.0
0
1

J.O.S.I.E.3-Beta8-slerp

license:apache-2.0
0
1

J.O.S.I.E.v4o-8b-stage1-beta2.3

llama
0
1

J.O.S.I.E.v4o-7b-stage1-beta3.2

license:apache-2.0
0
1

josiev4o-7b-stage1-v0.1

license:apache-2.0
0
1

Qwen2-0.5B-Instruct-raw-parameters

license:apache-2.0
0
1

ImageBinds

license:apache-2.0
0
1

Josiefied-Qwen2-1.5B-Instruct-abliterated

license:apache-2.0
0
1

Josiefied-Qwen2-7B-Instruct-abliterated-raw-parameters

0
1

j.o.s.i.e.v4o-7b-orpo-stage1-v0.5

license:apache-2.0
0
1

j.o.s.i.e.v4o-7b-orpo-stage1-v1

license:apache-2.0
0
1

KANama-fineweb-v1-test1

license:apache-2.0
0
1

KANama-fineweb-v2-test1

license:apache-2.0
0
1

Josie-v6-2b-mlx-concept

license:mit
0
1

Josie-r1-zero-mini-500steps

0
1