QuixiAI

90 models

Llama-3-8B-Instruct-abliterated-v2

llama
8,232
26

openchat-3.5-0106-laser

license:apache-2.0
7,894
17

WestLake-7B-v2-laser

license:apache-2.0
7,872
36

TinyDolphin-2.8-1.1b

llama
2,906
60

Qwen3-30B-A3B-AWQ

license:apache-2.0
2,702
12

DeepSeek-R1-0528-AWQ

license:mit
2,593
19

WizardLM-7B-Uncensored

Join our Discord! https://discord.gg/cognitivecomputations This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

llama
1,610
488

WizardLM-13B-Uncensored

This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
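Since the card says alignment can be layered back on "separately, for example with an RLHF LoRA," here is a minimal sketch of attaching a LoRA adapter with the Hugging Face `peft` library. The repo id is taken from this listing, and the target modules and hyperparameters are illustrative assumptions, not the author's published recipe.

```python
# Minimal sketch: attach a trainable LoRA adapter to the uncensored base model,
# so alignment behavior can be trained on top while the base weights stay frozen.
# Repo id, target modules, and hyperparameters are assumptions, not a recipe.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("cognitivecomputations/WizardLM-13B-Uncensored")

lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # a common choice for llama-family attention
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
# ...then train the adapter on an alignment/preference dataset (e.g., RLHF via trl)...
```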

llama
1,453
641

Wizard-Vicuna-30B-Uncensored

This is wizard-vicuna-30b trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Discord: https://discord.gg/cognitivecomputations

Shout out to the open source AI/ML community, and everyone who helped me out.

You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|-----------------------|-------|
| Avg. | 53.44 |
| ARC (25-shot) | 62.12 |
| HellaSwag (10-shot) | 83.45 |
| MMLU (5-shot) | 58.24 |
| TruthfulQA (0-shot) | 50.81 |
| Winogrande (5-shot) | 78.45 |
| GSM8K (5-shot) | 14.25 |
| DROP (3-shot) | 26.74 |

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|-----------------------------------|------:|
| Avg. | 57.89 |
| AI2 Reasoning Challenge (25-Shot) | 62.12 |
| HellaSwag (10-Shot) | 83.45 |
| MMLU (5-Shot) | 58.24 |
| TruthfulQA (0-shot) | 50.81 |
| Winogrande (5-shot) | 78.45 |
| GSM8k (5-shot) | 14.25 |

llama
901
161

Wizard Vicuna 13B Uncensored

This is wizard-vicuna-13b trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|-----------------------|-------|
| Avg. | 49.52 |
| ARC (25-shot) | 58.96 |
| HellaSwag (10-shot) | 81.95 |
| MMLU (5-shot) | 47.92 |
| TruthfulQA (0-shot) | 51.69 |
| Winogrande (5-shot) | 75.69 |
| GSM8K (5-shot) | 8.64 |
| DROP (3-shot) | 21.79 |

llama
789
318

WizardLM-33B-V1.0-Uncensored

llama
784
60

WizardLM-30B-Uncensored

This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|-----------------------|-------|
| Avg. | 52.32 |
| ARC (25-shot) | 60.24 |
| HellaSwag (10-shot) | 82.93 |
| MMLU (5-shot) | 56.8 |
| TruthfulQA (0-shot) | 51.57 |
| Winogrande (5-shot) | 74.35 |
| GSM8K (5-shot) | 12.89 |
| DROP (3-shot) | 27.45 |

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|-----------------------------------|------:|
| Avg. | 56.46 |
| AI2 Reasoning Challenge (25-Shot) | 60.24 |
| HellaSwag (10-Shot) | 82.93 |
| MMLU (5-Shot) | 56.80 |
| TruthfulQA (0-shot) | 51.57 |
| Winogrande (5-shot) | 74.35 |
| GSM8k (5-shot) | 12.89 |

llama
706
145

MegaDolphin-120b

llama
582
72

Devstral-Vision-Small-2507-gguf

license:apache-2.0
456
10

Qwen3-58B-Distill-Stage3

454
0

DeepSeek-R1-AWQ

license:mit
369
86

laserxtral-GGUF

license:cc-by-nc-2.0
249
20

Qwen3-72B-Embiggened-gguf

224
0

Wizard Vicuna 7B Uncensored

This is wizard-vicuna trained against LLaMA-7B with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|-----------------------|-------|
| Avg. | 44.77 |
| ARC (25-shot) | 53.41 |
| HellaSwag (10-shot) | 78.85 |
| MMLU (5-shot) | 37.09 |
| TruthfulQA (0-shot) | 43.48 |
| Winogrande (5-shot) | 72.22 |
| GSM8K (5-shot) | 4.55 |
| DROP (3-shot) | 23.8 |

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|-----------------------------------|------:|
| Avg. | 48.27 |
| AI2 Reasoning Challenge (25-Shot) | 53.41 |
| HellaSwag (10-Shot) | 78.85 |
| MMLU (5-Shot) | 37.09 |
| TruthfulQA (0-shot) | 43.48 |
| Winogrande (5-shot) | 72.22 |
| GSM8k (5-shot) | 4.55 |

llama
221
104

DeepSeek-V3-0324-AWQ

license:mit
184
23

Qwen3-235B-A22B-AWQ

license:apache-2.0
177
12

Llama-3-8B-Instruct-abliterated-v2-gguf

license:llama3
98
19

DeepSeek-V3-AWQ

license:mit
86
35

Devstral-Vision-Small-2507

license:apache-2.0
83
36

Ina-v11.1

llama
79
1

Qwen3-58B-Embiggened-gguf

57
2

samantha-mistral-instruct-7b

license:apache-2.0
48
37

samantha-mistral-7b

license:apache-2.0
34
45

Qwen3-72B-Embiggened

license:apache-2.0
21
21

WizardLM-Uncensored-Falcon-7b

license:apache-2.0
19
56

samantha-1.1-westlake-7b

license:apache-2.0
14
11

Samantha 1.11 70b

Discord: https://discord.gg/cognitivecomputations

This model is trained on Llama-2 and is subject to the Llama-2 license, which permits commercial and non-commercial use within limits.

Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She will not engage in roleplay, romance, or sexual activity.

She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format. This Samantha was trained 15 epochs, and is significantly smarter. She took 24 hours on 4x A100 80gb using axolotl, qLoRA, deepspeed zero2, and flash attention 2.

Her conversation format is the same as Vicuna 1.1: https://github.com/oobabooga/text-generation-webui/blob/main/characters/instruction-following/Vicuna-v1.1.yaml

> USER:
>
> I would like you to be my relationship coach. I am dating again, after a long divorce. Can you please give me advice about how to gain confidence and success?
>
> ASSISTANT:
>
> Of course! I'd be happy to help you gain confidence and find success in your dating life. Here are a few tips to get you started:
> 1. Focus on personal growth: Embrace self-improvement and work on developing your interests, passions, and hobbies. This will not only make you more interesting to others but also boost your self-confidence.
> 2. Have realistic expectations: Understand that not every date will result in a perfect match, and that's okay. Embrace the learning process and be open to new experiences.
> 3. Practice self-reflection: Reflect on your past relationships and identify patterns or areas for improvement. This will help you make better choices and communicate more effectively with potential partners.
> 4. Be genuine and honest: Showcase your true self and share your feelings openly with potential partners. Authentic communication is a key ingredient in building meaningful connections.
> 5. Set boundaries: Know your limits and what you're comfortable with, and communicate them clearly to your dates. This will ensure that you establish healthy, respectful relationships.
> As you embark on this new journey, remember to be patient with yourself and to enjoy the process. Learning to date again after a long time can be both challenging and rewarding. I'm here to support you and offer advice whenever you need it. Good luck and have fun!
>
> USER:
>
> Please give me a specific agenda for the next week. I can spend 2 hours per day on this. Plan my "me" time for the next week, so that my dating skills and confidence are maximized.
>
> ASSISTANT:
>
> Here's a suggested agenda to help you improve your dating skills and confidence over the next week:
> - Day 1: Personal growth - Spend time exploring new hobbies or interests that excite you. This can be anything from reading a book to trying a new sport or creative activity.
> - Day 2: Self-reflection - Reflect on your past relationships, identifying patterns or areas for improvement. Write down your thoughts and insights.
> - Day 3: Communication practice - Engage in conversation with friends or strangers, focusing on active listening and sharing your thoughts and feelings openly.
> - Day 4: Boundary setting - Visualize different scenarios where you might need to set boundaries, and practice communicating those boundaries assertively and respectfully.
> - Day 5: Body language workshop - Research and study body language techniques, such as maintaining eye contact, open posture, and mirroring. Practice implementing these techniques in a mirror or with friends.
> - Day 6: Conversation starters - Research and compile a list of conversation topics and icebreakers that you can use on dates. Practice these with friends or family members.
> - Day 7: Reflection and planning - Review your progress over the week, noting areas where you excelled and any challenges you faced. Use this reflection to plan your approach for future dates.
>
> Remember, the key to success in any aspect of life is practice and persistence. Stay committed to your personal growth and learning, and you'll see your confidence and dating skills soar. I'm here to support you every step of the way!

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|-----------------------|-------|
| Avg. | 65.9 |
| ARC (25-shot) | 70.05 |
| HellaSwag (10-shot) | 87.55 |
| MMLU (5-shot) | 67.82 |
| TruthfulQA (0-shot) | 65.02 |
| Winogrande (5-shot) | 83.27 |
| GSM8K (5-shot) | 29.95 |
| DROP (3-shot) | 57.68 |
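For reference, a minimal sketch of what a Vicuna-1.1-style prompt from the card's example might look like as a raw string. The system preamble below is the commonly used Vicuna one and is an assumption here; the authoritative template is the Vicuna-v1.1.yaml linked above.

```python
# Sketch of a Vicuna-1.1-style prompt string (system preamble assumed; the
# authoritative template is the Vicuna-v1.1.yaml linked above).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n"
    "USER: I would like you to be my relationship coach.\n"
    "ASSISTANT:"
)
# Generation is typically stopped when the model begins the next "USER:" turn.
```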

llama
13
67

WizardLM-1.0-Uncensored-Llama2-13b

llama
12
52

laserxtral

license:cc-by-nc-2.0
11
78

samantha-falcon-7b

11
26

Qwen3-235B-A22B-FP8-W8A16

10
0

WizardLM-Uncensored-Falcon-40b

This is WizardLM trained on top of tiiuae/falcon-40b, with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Thank you chirper.ai for sponsoring some of my compute!

license:apache-2.0
9
95

based-30b

llama
9
44

QuixiGR00T-N1.5-3B-Zero

license:apache-2.0
9
11

minotaur-llama2-13b-qlora

base_model:TheBloke/Llama-2-13B-fp16
9
4

Kimi-K2-Instruct-AWQ

license:apache-2.0
9
1

Samantha-1.11-CodeLlama-34b

llama
7
44

WizardLM-1.0-Uncensored-CodeLlama-34b

llama
7
27

WizardLM-7B-V1.0-Uncensored

llama
7
19

Samantha-1.11-13b

llama
7
7

Kimi-K2-Base-AWQ

license:apache-2.0
7
2

samantha-1.2-mistral-7b

license:apache-2.0
5
36

WizardLM-13B-V1.0-Uncensored

llama
5
16

Qwen3-72B-Synthesis

A Qwen3-architecture 72B model forged from `Qwen3-32B` and `Qwen2.5-72B-Instruct`.

Qwen3-72B-Synthesis is an experimental, 80-layer, 72-billion-parameter large language model. It represents a novel approach to model creation, designed to produce a model with the pure, modern Qwen3 architecture while inheriting the vast, high-quality knowledge of the 72B-scale Qwen2.5-Instruct model. This was not a simple merge. It was a multi-phase surgical procedure involving dimensional up-scaling, architectural alignment, and a strategic "knowledge transplant" using `MergeKit`. The result is a unique checkpoint that serves as an ideal starting point for further fine-tuning. The core philosophy was to use `Qwen/Qwen3-32B` as the architectural "foundation" and `Qwen/Qwen2.5-72B-Instruct` as the "knowledge donor."

- Architecture: Qwen3 (RMSNorm, SwiGLU, no biases, includes `q_norm` and `k_norm`)
- Parameters: ~72 billion
- Layers: 80
- Foundation: `Qwen/Qwen3-32B`
- Donor: `Qwen/Qwen2.5-72B-Instruct`
- Tokenizer: `Qwen/Qwen3-32B` tokenizer (`vocab_size: 151936`)

The creation of this model was a deliberate, three-phase process designed to overcome significant architectural incompatibilities.

First, the `Qwen/Qwen3-32B` model (64 layers, 5120 hidden dim) was up-scaled to match the target 72B dimensions. This was done using a sophisticated self-interpolation script, where new dimensions were created by averaging different slices of the existing weights, rather than simple tiling (see the sketch below). This produced `Qwen3-32B-Upscaled`, a 64-layer model with the correct 72B tensor shapes and Qwen3 architecture.

The `Qwen/Qwen2.5-72B-Instruct` model was architecturally incompatible with the Qwen3 target. To solve this, a new donor model, `Qwen2.5-72B-Instruct-Aligned`, was created. This process involved:

1. Creating an empty 80-layer model shell with the pure Qwen3 architecture.
2. Surgically removing all `.bias` tensors from the Qwen2.5 weights.
3. Truncating the Qwen2.5 embedding and language model head layers from a vocabulary of 152064 to match Qwen3's 151936.
4. Loading the modified Qwen2.5 weights into the pure Qwen3 shell, resulting in a perfectly compatible donor model.

With two architecturally compatible models, the final merge was performed using `MergeKit`. A "Knowledge Bridge" strategy was employed to transplant a stable reasoning core from the donor while blending the rest.

This model uses the standard Qwen ChatML prompt format.

This is an experimental model and should be considered a high-quality checkpoint, not a finished product. Fine-tuning is highly recommended. While it inherits knowledge from a powerful instruction model, the merging process can create slight incoherence between layers. A round of fine-tuning on a high-quality instruction dataset is necessary to harmonize the weights and unlock its full potential. The model may exhibit unexpected behaviors, including repetitiveness or nonsensical outputs, prior to fine-tuning. This model has not been aligned for safety and may produce problematic, biased, or otherwise undesirable content. The user assumes all responsibility for the output generated.

This model would not have been possible without the foundational work of Alibaba Cloud on the Qwen models, and the powerful, flexible `MergeKit` toolkit created by Charles Goddard and Arcee.ai.
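As a toy illustration of the self-interpolation up-scaling described above (averaging slices of existing weights rather than tiling), here is a hedged sketch. The actual script is not published in this card, so the slicing scheme, function name, and dimensions are assumptions.

```python
# Toy sketch of self-interpolation up-scaling: synthesize the extra rows of a
# weight matrix by averaging two slices of the original tensor instead of tiling.
# The slicing scheme is an assumption; the card does not publish the real script.
import torch

def upscale_rows(w: torch.Tensor, new_rows: int) -> torch.Tensor:
    old_rows = w.shape[0]
    extra = new_rows - old_rows
    assert 0 < extra <= old_rows, "this toy pass can at most double the dimension"
    synthesized = 0.5 * (w[:extra] + w[-extra:])  # blend leading and trailing slices
    return torch.cat([w, synthesized], dim=0)

w = torch.randn(5120, 5120)    # a Qwen3-32B-sized projection (5120 hidden dim)
w_up = upscale_rows(w, 8192)   # grown toward the 72B target's 8192 hidden dim
print(w_up.shape)              # torch.Size([8192, 5120])
```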

license:apache-2.0
5
8

samantha-yi-34b

llama
5
7

Qwen3-235B-A22B-NVFP4-W4A16

license:apache-2.0
5
0

yayi2-30b-llama

llama
4
22

samantha-1.1-llama-33b

llama
4
21

TinyDolphin-2.8.2-1.1b-laser

llama
4
16

samantha-1.1-westlake-7b-laser

license:apache-2.0
4
14

Samantha-1.11-7b

llama
4
8

Qwen3-58B-Distill-Stage2

4
1

samantha-phi

3
27

TinyDolphin-2.8.1-1.1b

llama
3
11

DeepSeek-R1-bf16

3
3

DeepSeek-R1-0528-bf16

license:mit
3
1

QuietQwen-0.1

3
0

Kimi-K2-Instruct-BF16

3
0

GLM-4.5-Air-FP8-w8a8

3
0

GLM-4.5-Air-FP8-w8a16

3
0

samantha-33b

llama
2
44

Samantha-120b

llama
2
39

Qwen3-58B-Embiggened

license:apache-2.0
2
13

MegaDolphin-120b-exl2

license:llama2
2
6

Kimi-K2-Base-BF16

2
1

TinyDolphin-2.8.2-1.1b

llama
2
0

GLM-4.5-FP8-w8a16

2
0

samantha-13b

llama
1
36

samantha-1.1-llama-7b

llama
1
15

laserxtral-exl2

license:cc-by-nc-2.0
1
9

ExpTinyDolphin-2.8-1.1b

llama
1
0

Kraken_experiment01

1
0

Qwen3-72B-Instruct-2

1
0

Qwen3-58B-Distill-Stage1

1
0

Kraken

Discord: https://discord.gg/cognitivecomputations

The Kraken model and architecture: Kraken is a joint effort between Cognitive Computations, VAGO Solutions and Hyperspace.ai, created by Fernando Fernandes Neto, David Golchinfar, Lucas Atkins and Eric Hartford. The Kraken model combines the best Python, SQL, function-calling, reasoning and foreign-language models so far.

The Kraken architecture is a sophisticated machine learning framework designed for dynamic text generation tasks. It utilizes the Hugging Face transformers library to orchestrate multiple causal language models (CLMs) and intelligently route input through different models based on the context and content of the input text. The architecture is powered by a custom configuration class (KrakenConfig) that facilitates the integration and management of various components such as tokenizers, models, and routing mechanisms.

- Dynamic model routing: Uses a sequence classification model to route inputs to the most suitable language model based on the input's characteristics.
- Multiple language models: Supports integration of various pre-trained causal language models, allowing for flexible, context-appropriate responses.
- Customizable templates: Includes support for input formatting using predefined templates, enhancing the model's adaptability to different conversational contexts.
- Extensible configuration: Leverages a custom configuration setup that can be easily extended and adapted for various use cases involving causal language modeling.
- Switching experts and/or quantization: Edit the config file in the Kraken model folder.

Fernando Fernandes Neto, David Golchinfar, Lucas Atkins, Eric Hartford - Kraken: An OpenSource Collection of Experts Model, 2024
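A minimal sketch of the routing idea the card describes: a sequence-classification router labels the input, and the label selects which expert CLM generates the response. All model ids and the label-to-expert mapping below are hypothetical placeholders, not the contents of the actual KrakenConfig.

```python
# Hedged sketch of Kraken-style dynamic routing: a text classifier labels the
# input, and the label selects which expert causal LM generates the response.
# All model ids and the label->expert map are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

router = pipeline("text-classification", model="example/router")  # hypothetical

EXPERTS = {  # hypothetical expert checkpoints per domain
    "python": "example/python-expert",
    "sql": "example/sql-expert",
    "reasoning": "example/reasoning-expert",
}

def kraken_generate(text: str, max_new_tokens: int = 128) -> str:
    label = router(text)[0]["label"]                      # route on predicted label
    expert_id = EXPERTS.get(label, EXPERTS["reasoning"])  # fall back to a default
    tok = AutoTokenizer.from_pretrained(expert_id)
    model = AutoModelForCausalLM.from_pretrained(expert_id)
    inputs = tok(text, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tok.decode(out[0], skip_special_tokens=True)
```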

0
56

Samantha 7b

Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".

She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format. Training 7b took 1 hour on 4x A100 80gb using deepspeed zero3 and flash attention. She will not engage in roleplay, romance, or sexual activity.

Her conversation format is the same as Vicuna 1.1: https://github.com/oobabooga/text-generation-webui/blob/main/characters/instruction-following/Vicuna-v1.1.yaml

llama
0
47

Llama-3-70B-Gradient-524k-adapter

base_model:meta-llama/Meta-Llama-3-70B-Instruct
0
33

Llama-3-70B-Gradient-1048k-adapter

base_model:meta-llama/Meta-Llama-3-70B-Instruct
0
33

DeepMixtral-8x7b-Instruct

license:apache-2.0
0
13

based-7b

llama
0
10

based-13b

llama
0
8

samantha-1.1-llama-13b

llama
0
7

Samantha-1.1-70b

llama
0
6

mixtral-instruct-0.1-laser

license:apache-2.0
0
6

QuixiAI-Qwen3-VL-Coder-30B-A3B-Instruct

0
1