Orion-zhen

71 models

Qwen2.5-7B-Instruct-Uncensored

This model is an uncensored fine-tune of Qwen2.5-7B-Instruct. Even so, I notice that the model still fails to generate detailed descriptions for certain extreme scenarios, which might be related to material removed from some datasets during Qwen's pretraining stage. Check out my roleplay & writing enhanced model based on this one: Orion-zhen/Meissa-Qwen2.5-7B-Instruct.

I used SFT + DPO to ensure uncensorship while trying to preserve the original model's capabilities (a rough sketch of the DPO stage follows the results table below).

- SFT:
  - NobodyExistsOnTheInternet/ToxicQAFinal
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
- DPO:
  - Orion-zhen/dpo-toxic-zh
  - unalignment/toxic-dpo-v0.2
  - Crystalcareai/Intel-DPO-Pairs-Norefusals

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 27.99 |
| IFEval (0-Shot)     | 72.04 |
| BBH (3-Shot)        | 35.83 |
| MATH Lvl 5 (4-Shot) |  1.36 |
| GPQA (0-shot)       |  7.05 |
| MuSR (0-shot)       | 13.58 |
| MMLU-PRO (5-shot)   | 38.07 |
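A minimal sketch of what the DPO stage could look like with TRL over one of the preference datasets listed above; the hyperparameters and the single-dataset setup are illustrative assumptions, not the actual training recipe used for this model.

```python
# Illustrative DPO pass with TRL on one of the listed preference datasets.
# Hyperparameters and dataset choice are assumptions, not the author's recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference data with prompt / chosen / rejected columns.
dataset = load_dataset("unalignment/toxic-dpo-v0.2", split="train")

args = DPOConfig(
    output_dir="qwen2.5-7b-instruct-dpo",
    beta=0.1,                         # strength of the implicit KL penalty
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

# In practice you would likely add PEFT/LoRA or ZeRO for memory; omitted for brevity.
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,       # `tokenizer=` on older TRL releases
)
trainer.train()
```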

license:gpl-3.0
5,796
29

Qwen3-0.6B-AWQ

license:gpl-3.0
1,512
5

Qwen3-30B-A3B-Instruct-2507-Q4_K_M-GGUF

llama-cpp
865
2

Qwen3-30B-A3B-Thinking-2507-Q4_K_M-GGUF

Orion-zhen/Qwen3-30B-A3B-Thinking-2507-Q4_K_M-GGUF: this model was converted to GGUF format from `Qwen/Qwen3-30B-A3B-Thinking-2507` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
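If you would rather stay in Python than build the CLI, a minimal sketch with the llama-cpp-python bindings could look like the following; the filename glob is an assumption, so check the repo's file list for the exact quant name.

```python
# Minimal llama-cpp-python sketch for this GGUF quant.
# The filename glob is an assumption; verify against the repo's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Orion-zhen/Qwen3-30B-A3B-Thinking-2507-Q4_K_M-GGUF",
    filename="*Q4_K_M.gguf",   # glob matching the quantized file
    n_ctx=8192,                # context window to allocate
    n_gpu_layers=-1,           # offload all layers to GPU when available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a MoE model is in two sentences."}]
)
print(out["choices"][0]["message"]["content"])
```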

llama-cpp
693
0

Qwen3-1.7B-AWQ

license:gpl-3.0
594
1

Qwen3-30B-A3B-Instruct-2507-IQK-GGUF

license:gpl-3.0
522
1

Qwen3-30B-A3B-Instruct-2507-1M-GGUF

Scales the context window up from 262,144 to 1,048,576 tokens (4x) using YaRN. Due to my limited network bandwidth, I had to pick out a few quantizations to upload instead of all of them. BTW, Qwen is really good at scaling model names lol.
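For reference, a 4x YaRN extension is usually expressed as a `rope_scaling` block in the model config; a minimal sketch, assuming the transformers-style YaRN fields that Qwen's model cards document, looks like this.

```python
# Sketch of a transformers-style YaRN rope-scaling entry for a 4x extension.
# Field names follow Qwen's documented YaRN config; values mirror the card above.
yarn_config_patch = {
    "max_position_embeddings": 1048576,          # 262144 * 4
    "rope_scaling": {
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 262144,
    },
}
print(yarn_config_patch)
```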

license:gpl-3.0
415
1

Qwen3-30B-A3B-Thinking-2507-1M-GGUF

license:gpl-3.0
392
0

aya-expanse-32b-AWQ

license:gpl-3.0
276
4

Qwen2.5-7B-Instruct-Uncensored-Q5_K_M-GGUF

llama-cpp
159
7

Qwen2.5-14B-Instruct-Uncensored

This model is an uncensored fine-tune of Qwen2.5-14B-Instruct. I trained it on an unalignment dataset under a specific system prompt. To fully enjoy the uncensorship, change the system prompt to the one specified in the model card.
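The required system prompt is not reproduced in this listing, so the sketch below uses an obvious placeholder; the rest is the standard transformers chat-template flow.

```python
# Standard transformers chat flow; SYSTEM_PROMPT is a placeholder, not the
# actual prompt this model expects (take that from the model card).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Orion-zhen/Qwen2.5-14B-Instruct-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

SYSTEM_PROMPT = "<system prompt from the model card goes here>"

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Hello!"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```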

license:gpl-3.0
127
25

DeepHermes-3-Llama-3-8B-Preview-AWQ

llama
102
0

Qwen3-32B-AWQ

license:gpl-3.0
83
12

Phi 4 Abliterated

Please... give my repo a star if you find it helpful. I will do whatever you want...

Though abliterated, the model is not necessarily uncensored; it simply will not explicitly refuse you. It might serve as a good starting point for fine-tuning.

Phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning. Phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. For more information, refer to the Phi-4 Technical Report. Phi-4 is a 14B-parameter, dense decoder-only transformer model.

Our training data is an extension of the data used for Phi-3 and includes a wide variety of sources:

1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code.
2. Newly created synthetic, "textbook-like" data for the purpose of teaching math, coding, common-sense reasoning, and general knowledge of the world (science, daily activities, theory of mind, etc.).
3. Acquired academic books and Q&A datasets.
4. High-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty, and helpfulness.

Multilingual data constitutes about 8% of our overall data. We focus on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge.

Our model is designed to accelerate research on language models, for use as a building block for generative-AI-powered features. It provides uses for general-purpose AI systems and applications (primarily in English) which require:

1. Memory/compute-constrained environments.
2. Latency-bound scenarios.
3. Reasoning and logic.

Our model is not specifically designed or evaluated for all downstream purposes, thus:

1. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using it within a specific downstream use case, particularly for high-risk scenarios.
2. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade-compliance laws, etc.) that are relevant to their use case, including the model's focus on English.
3. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.

Phi-4 has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated synthetic datasets. The overall technique employed for safety alignment is a combination of SFT (Supervised Fine-Tuning) and iterative DPO (Direct Preference Optimization), including publicly available datasets focusing on helpfulness and harmlessness as well as various questions and answers targeted at multiple safety categories.

Prior to release, Phi-4 followed a multi-faceted evaluation approach. Quantitative evaluation was conducted with multiple open-source safety benchmarks and in-house tools utilizing adversarial conversation simulation. For qualitative safety evaluation, we collaborated with the independent AI Red Team (AIRT) at Microsoft to assess safety risks posed by `phi-4` in both average and adversarial user scenarios. In the average user scenario, AIRT emulated typical single-turn and multi-turn interactions to identify potentially risky behaviors. The adversarial user scenario tested a wide range of techniques aimed at intentionally subverting the model's safety training, including jailbreaks, encoding-based attacks, multi-turn attacks, and adversarial suffix attacks. Please refer to the technical report for more details on safety alignment.

Responsible AI Considerations
-----------------------------

Like other language models, `phi-4` can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

- Quality of Service: The model is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. `phi-4` is not intended to support multilingual use.
- Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
- Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make them unsuitable to deploy in sensitive contexts without additional mitigations specific to the use case.
- Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
- Limited Scope for Code: The majority of `phi-4` training data is based in Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, `itertools`. If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Using safety services like Azure AI Content Safety that have advanced guardrails is highly recommended. Important areas for consideration include:

- Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
- High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable, or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
- Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users that they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case-specific, contextual information, a technique known as Retrieval-Augmented Generation (RAG).
- Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
- Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.

We evaluated `phi-4` using OpenAI's SimpleEval and our own internal benchmarks to understand the model's capabilities; more specifically: MMLU, a popular aggregated dataset for multitask language understanding. To understand the capabilities, we compare `phi-4` with a set of models over OpenAI's SimpleEval benchmark. Below is a high-level overview of the model quality on representative benchmarks; higher numbers indicate better performance:

| Category | Benchmark | phi-4 (14B) | phi-3 (14B) | Qwen 2.5 (14B instruct) | GPT-4o-mini | Llama-3.3 (70B instruct) | Qwen 2.5 (72B instruct) | GPT-4o |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Popular Aggregated Benchmark | MMLU | 84.8 | 77.9 | 79.9 | 81.8 | 86.3 | 85.3 | 88.1 |
| Science | GPQA | 56.1 | 31.2 | 42.9 | 40.9 | 49.1 | 49.0 | 50.6 |
| Math | MGSM | 80.6 | 53.5 | 79.6 | 86.5 | 89.1 | 87.3 | 90.4 |
| Math | MATH | 80.4 | 44.6 | 75.6 | 73.0 | 66.3 | 80.0 | 74.6 |
| Code Generation | HumanEval | 82.6 | 67.8 | 72.1 | 86.2 | 78.9 | 80.4 | 90.6 |
| Factual Knowledge | SimpleQA | 3.0 | 7.6 | 5.4 | 9.9 | 20.9 | 10.2 | 39.4 |
| Reasoning | DROP | 75.5 | 68.3 | 85.5 | 79.3 | 90.2 | 76.7 | 80.9 |

Note: these Llama-3.3 scores are lower than those reported by Meta, perhaps because simple-evals has a strict formatting requirement that Llama models have particular trouble following. We use the simple-evals framework because it is reproducible, but Meta reports 77 for MATH and 88 for HumanEval on Llama-3.3-70B.

license:gpl-3.0
67
32

Qwen3-4B-AWQ

license:gpl-3.0
54
2

Dhanishtha-2.0-preview-0825-Q6_K-GGUF

Orion-zhen/Dhanishtha-2.0-preview-0825-Q6_K-GGUF: this model was converted to GGUF format from `HelpingAI/Dhanishtha-2.0-preview-0825` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
53
0

Qwen2.5-14B-Instruct-Uncensored-Q5_K_M-GGUF

llama-cpp
49
0

Qwen3-8B-AWQ

The very first Qwen3-8B-AWQ on HuggingFace. Finally, I can catch up with the latest model this time.

license:gpl-3.0
48
9

Dhanishtha-2.0-preview-0825-Q4_K_M-GGUF

Orion-zhen/Dhanishtha-2.0-preview-0825-Q4_K_M-GGUF: this model was converted to GGUF format from `HelpingAI/Dhanishtha-2.0-preview-0825` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
41
0

Qwen3-14B-AWQ

license:gpl-3.0
36
2

Llama3-70B-Orion-Chinese

llama
28
14

Meissa-Qwen2.5-7B-Instruct-Q5_K_M-GGUF

llama-cpp
25
2

Mistral Nemo H Novel Exl2

Available ExLlamaV2 quantizations (bits per weight); a download sketch follows this list:

- 3.0bpw
- 3.85bpw
- 4.0bpw
- 4.65bpw
- 5.0bpw
- 6.5bpw
- 8.0bpw
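Exl2 uploads commonly keep each bpw variant on its own branch; if that holds here, a specific one can be pulled with huggingface_hub. Both the repo id and the branch name below are assumptions taken from the entry title and the list above, so check the actual repository first.

```python
# Hypothetical download of a single exl2 quant, assuming bpw-named branches.
# repo_id and revision are assumptions; confirm them on the repository page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Orion-zhen/Mistral-Nemo-H-Novel-exl2",  # assumed repo id
    revision="4.0bpw",                               # assumed branch name
)
print("downloaded to", local_dir)
```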

license:gpl-3.0
17
24

Seed-Coder-8B-Base-AWQ

llama
17
0

Meissa-Qwen2.5-7B-Instruct

> Meissa is designated Lambda Orionis, forms Orion's head, and is a multiple star with a combined apparent magnitude of 3.33. Its name means "the shining one".

This model is fine-tuned over writing and roleplaying datasets (maybe the first such tune on Qwen2.5-7B), aiming to enhance the model's performance in novel writing and roleplaying. It is fine-tuned from Orion-zhen/Qwen2.5-7B-Instruct-Uncensored on:

- anthracite-org/stheno-filtered-v1.1
- MinervaAI/Aesir-Preview
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/nopm_claude_writing_fixed
- Gryphe/Sonnet3.5-Charcard-Roleplay
- nothingiisreal/DirtyWritingPrompts
- Orion-zhen/tagged-pixiv-novel

license:gpl-3.0
12
19

DeepSeek-R1-Distill-Llama-70B-abliterated

llama
12
12

DeepSeek-R1-Distill-Qwen-7B-abliterated

license:gpl-3.0
12
5

Qwen2.5-7B-Gutenberg-KTO-Q6_K-GGUF

llama-cpp
11
0

Qwen2.5-Coder-7B-Instruct-AWQ

license:apache-2.0
10
1

DeepSeek-R1-Distill-Qwen-32B-abliterated-AWQ

8
2

Meissa-Qwen2.5-14B-Instruct-Q6_K-GGUF

llama-cpp
7
5

Llama3-70B-Orion-Chinese-SE

llama
6
0

Meissa-Qwen2.5-14B-Instruct-Q8_0-GGUF

Orion-zhen/Meissa-Qwen2.5-14B-Instruct-Q8_0-GGUF: this model was converted to GGUF format from `Orion-zhen/Meissa-Qwen2.5-14B-Instruct` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
6
0

Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct-Q4_K_M-GGUF

Orion-zhen/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct-Q4_K_M-GGUF: this model was converted to GGUF format from `nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
5
0

Reflection-Llama3.2-3B-Instruct

llama
4
1

llama-3.3-70b-instruct-abliterated

llama
4
1

Llama3-70B-Orion-Chinese-Plus

llama
4
0

Reflection-Llama3.2-3B-Instruct-Q8_0-GGUF

llama-cpp
3
1

Qwen2.5-32B-Instruct-6.5bpw-exl2

license:apache-2.0
3
0

aya-expanse-8b-AWQ

license:gpl-3.0
3
0

Meissa-Qwen2.5-14B-Instruct

> Meissa is designated Lambda Orionis, forms Orion's head, and is a multiple star with a combined apparent magnitude of 3.33. Its name means "the shining one".

GGUF here: Orion-zhen/Meissa-Qwen2.5-14B-Instruct-Q6_K-GGUF

Fine-tuned on:

- MinervaAI/Aesir-Preview
- Gryphe/Sonnet3.5-Charcard-Roleplay

license:gpl-3.0
2
5

Llama3.1-70B-Chinese-Chat-4.0bpw-exl2

llama
2
0

Llama3.1-8B-Chinese-Chat-6.5bpw-exl2

llama
2
0

Y1.5-MS-Mysteria-34b-4.0bpw-exl2

llama
2
0

Codestral-22B-v0.1-4.65bpw-exl2

2
0

Fallen-Llama-3.3-R1-70B-v1-4.4bpw-h6-exl2

llama
2
0

Megrez-3B-Instruct-abliterated

llama
1
8

Qwen2-72B-Instruct-mix-calibration-4b-exl2

1
2

Llama3-70B-Orion-Chinese-Ultra

llama
1
1

Qwen2-72B-Instruct-2.0bpw-h-novel-exl2

1
1

c4ai-command-r-08-2024-h-novel-exl2

license:cc-by-nc-4.0
1
1

OpenCoder-8B-Instruct-AWQ

llama
1
1

QwQ-32B-Preview-AWQ

license:gpl-3.0
1
1

Teuken-7B-instruct-research-v0.4-abliterated

Please... give my repo a star if you find it helpful. I will do whatever you want...

llama
1
1

Seed-Coder-8B-Instruct-AWQ

llama
1
1

Llama3-70B-Orion-Chinese-4bpw-exl2

llama
1
0

Llama3.1-8B-Chinese-Chat-4.0bpw-exl2

llama
1
0

Y1.5-MS-Mysteria-34b-6.5bpw-exl2

llama
1
0

athene-70b-4.5bpw-exl2

llama
1
0

RYS-XLarge-4.2bpw-exl2

license:mit
1
0

Hermes-3-Llama-3.1-70B-4.5bpw-exl2

llama
1
0

TinyLlama-1.1B-Chat-v1.0-2.0bpw-exl2

llama
1
0

Reflection-Llama-3.1-70B-4.5bpw-exl2

llama
1
0

Athene-V2-Chat-abliterated-bnb-4bit

license:gpl-3.0
1
0

DeepHermes-3-Llama-3-8B-Preview-w8a8

llama
1
0

Qwen2.5-7B-Gutenberg-KTO

license:gpl-3.0
0
5

phi-4-awq

license:gpl-3.0
0
4

TinyR1-32B-Preview-AWQ

license:gpl-3.0
0
2

Llama3-Chinese-Chat-emoji

llama
0
1

llama-3.3-70b-instruct-abliterated-bnb-4bit

llama
0
1

DeepSeek-R1-Distill-Llama-70B-4.4bpw-h6-exl2

llama
0
1