# gpt-oss-20b-uncensored
## Model Overview

- **Model Name:** gpt-oss-20b-uncensored
- **Model Type:** Large Language Model (Text Generation)
- **Architecture:** Decoder-Only Transformer (Mixture of Experts)
- **Parameter Size:** 21B total parameters (3.6B active per forward pass)
- **Base Model:** gpt-oss-20b
- **Modification:** Abliteration (removal of refusal/alignment mechanisms)

## Description

The gpt-oss-20b-uncensored model is a derivative of the original gpt-oss-20b, part of OpenAI’s open-weight GPT-OSS series. This variant preserves the architecture, quantization, and training of the base model, but has undergone an abliteration process to remove refusal mechanisms and alignment constraints. As a result, it responds to a broader range of prompts without applying internal safety filters. All other technical details, reasoning capabilities, and agentic features remain unchanged.

## Architecture

- Backbone: Transformer decoder with Mixture of Experts (MoE) routing
- Parameters: 21B (3.6B active per forward pass)
- Layers: 48 Transformer blocks
- Hidden size: 6,144
- Attention heads: 48
- Context length: 32k tokens
- Quantization: MXFP4 for MoE weights (fits within 16 GB GPU memory)
- Training data: ~1.2T tokens (web, books, academic text, code, conversations)
- Response format: Compatible with Harmony, though abliteration allows raw completions

## Resources

- 📓 Notebook: GPT OSS Abliteration Notebook
- 📝 Blog Post: The Ultimate Cookbook: Uncensoring GPT-OSS

## Limitations and Risks

- May produce biased, unsafe, or harmful outputs
- Lacks built-in refusal or moderation layers
- Should not be deployed in user-facing systems without external filtering
- Outputs are not aligned to safety standards

## Citation

If you use gpt-oss-20b-uncensored, please cite both the base model and the abliteration.

## Contact

For questions, feedback, or collaborations, contact the maintainer at [email protected].
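As a rough sanity check on the "fits within 16 GB" claim, MXFP4 stores 4-bit values plus shared per-block scales, which works out to roughly 4.25 bits per parameter. The sketch below is a back-of-the-envelope estimate under that assumption (it treats all 21B parameters as MXFP4-quantized, whereas the card states only the MoE weights are; the non-MoE weights would add somewhat more):

```python
# Back-of-the-envelope VRAM estimate for the quantized weights.
# Assumptions (not from the model card): MXFP4 ≈ 4.25 bits/param
# (4-bit FP values plus per-block scale overhead), and all 21B
# parameters quantized — a simplification, since only the MoE
# weights are stored in MXFP4.

TOTAL_PARAMS = 21e9      # total parameter count from the card
BITS_PER_PARAM = 4.25    # assumed effective MXFP4 footprint

weight_bytes = TOTAL_PARAMS * BITS_PER_PARAM / 8
weight_gib = weight_bytes / 2**30

print(f"Approx. weight memory: {weight_gib:.1f} GiB")
# → Approx. weight memory: 10.4 GiB
```

Even with activations and KV cache on top, this leaves headroom inside a 16 GB GPU, which is consistent with the card's claim.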