gemma-4-E4B-it-The-DECKARD-V2-Strong-HERETIC-UNCENSORED-Instruct-mxfp8-mlx
by nightmedia · license: apache-2.0 · 4B params · 509 downloads
Quick Summary
An instruction-tuned, 4B-parameter Gemma variant ("DECKARD-V2") with refusal behavior removed via the HERETIC abliteration method, quantized to mxfp8 for MLX inference on Apple silicon.
Device Compatibility

| Device | Requirement |
|--------|-------------|
| Mobile | 4-6 GB RAM  |
| Laptop | 16 GB RAM   |
| Server | GPU         |

Minimum recommended: 4 GB+ RAM
Training Data Analysis
🟡 Average (4.3/10)
Quality assessment of the training datasets used by gemma-4-E4B-it-The-DECKARD-V2-Strong-HERETIC-UNCENSORED-Instruct-mxfp8-mlx
Specialized For
general
science
multilingual
reasoning
Training Datasets (3)
common crawl
🔴 2.5/10
general
science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
- Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
wikipedia
🟡 5/10
science
multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
arxiv
🟡 5.5/10
science
reasoning
Key Strengths
- Scientific Authority: Peer-reviewed content from an established repository
- Domain-Specific: Specialized vocabulary and concepts
- Mathematical Content: Includes complex equations and notation
Considerations
- Specialized: Primarily technical and mathematical content
- English-Heavy: Predominantly English-language papers
Benchmarks (brainwaves)
gemma-4-E4B-it-The-DECKARD-V2-Strong-HERETIC-UNCENSORED-Instruct-mxfp8-mlx
| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| bf16    | 0.434 | 0.554 | 0.831 |       |       |       |       |
| mxfp8   | 0.444 | 0.553 | 0.831 | 0.646 | 0.412 | 0.751 | 0.630 |
| q8-hi   | 0.436 | 0.558 | 0.833 | 0.642 | 0.422 | 0.755 | 0.631 |
| q8      | 0.439 | 0.556 | 0.829 | 0.644 | 0.418 | 0.755 | 0.620 |
| qx86-hi | 0.434 | 0.562 | 0.827 | 0.642 | 0.428 | 0.756 | 0.631 |
| qx64-hi | 0.422 | 0.531 | 0.813 | 0.623 | 0.396 | 0.742 | 0.619 |
| mxfp4   | 0.438 | 0.564 | 0.842 | 0.634 | 0.412 | 0.741 | 0.636 |
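A quick way to read the score table is to average each quant's seven task scores; a minimal sketch in plain Python (values copied from the table above; the bf16 row is omitted because its hswag/obkqa/piqa/wino cells are missing on this page):

```python
# Per-task scores copied from the benchmark table (7 tasks per quant).
tasks = ["arc", "arc/e", "boolq", "hswag", "obkqa", "piqa", "wino"]
scores = {
    "mxfp8":   [0.444, 0.553, 0.831, 0.646, 0.412, 0.751, 0.630],
    "q8-hi":   [0.436, 0.558, 0.833, 0.642, 0.422, 0.755, 0.631],
    "q8":      [0.439, 0.556, 0.829, 0.644, 0.418, 0.755, 0.620],
    "qx86-hi": [0.434, 0.562, 0.827, 0.642, 0.428, 0.756, 0.631],
    "qx64-hi": [0.422, 0.531, 0.813, 0.623, 0.396, 0.742, 0.619],
    "mxfp4":   [0.438, 0.564, 0.842, 0.634, 0.412, 0.741, 0.636],
}

def mean(xs):
    return sum(xs) / len(xs)

# Rank quantizations by mean accuracy across the seven tasks.
ranked = sorted(scores, key=lambda q: mean(scores[q]), reverse=True)
for q in ranked:
    print(f"{q:8s} {mean(scores[q]):.4f}")
```

On these numbers all quants except qx64-hi land within about a point of each other, so the choice mostly comes down to the memory and speed table below rather than accuracy.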
| Quant   | Perplexity     | Peak Memory | Tokens/sec |
|---------|----------------|-------------|------------|
| bf16    | 9.127 ± 0.095  | 22.01 GB    | 1438       |
| mxfp8   | 8.936 ± 0.091  | 14.91 GB    | 1172       |
| q8-hi   | 9.164 ± 0.096  | 15.30 GB    | 1220       |
| q8      | 9.120 ± 0.095  | 15.04 GB    | 1219       |
| qx86-hi | 9.246 ± 0.097  | 14.79 GB    | 1190       |
| qx64-hi | 10.965 ± 0.120 | 12.92 GB    | 1211       |
| mxfp4   | 9.588 ± 0.100  | 12.81 GB    | 1219       |
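The perplexity/memory table reads as a memory-vs-quality trade-off relative to bf16; a minimal sketch of that comparison (numbers copied from the table above, restricted to a few representative quants):

```python
# (perplexity, peak memory in GB, tokens/sec) per quant, copied from the table.
results = {
    "bf16":  (9.127, 22.01, 1438),
    "mxfp8": (8.936, 14.91, 1172),
    "q8":    (9.120, 15.04, 1219),
    "mxfp4": (9.588, 12.81, 1219),
}

base_ppl, base_mem, _ = results["bf16"]
summary = {}
for quant, (ppl, mem, tps) in results.items():
    mem_saving = 100 * (1 - mem / base_mem)        # % peak memory saved vs bf16
    ppl_delta = 100 * (ppl - base_ppl) / base_ppl  # % perplexity change vs bf16
    summary[quant] = (mem_saving, ppl_delta)
    print(f"{quant:6s} saves {mem_saving:5.1f}% memory, perplexity {ppl_delta:+5.1f}%")
```

By these numbers mxfp8 cuts peak memory by roughly a third while perplexity actually improves slightly, which is presumably why the card ships mxfp8 as the primary quant.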
Thinking mode

| Quant | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|-------|-------|-------|-------|-------|-------|-------|-------|
| bf16  | 0.513 | 0.707 | 0.778 | 0.656 | 0.428 | 0.768 | 0.635 |
| q8    | 0.509 | 0.705 | 0.779 | 0.656 | 0.432 | 0.768 | 0.638 |

Previous model (brainwaves)
gemma-4-E4B-it-The-DECKARD-HERETIC-UNCENSORED-Thinking
| Quant | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|-------|-------|-------|-------|-------|-------|-------|-------|
| mxfp8 | 0.436 | 0.528 | 0.839 | 0.637 | 0.416 | 0.748 | 0.627 |
| mxfp4 | 0.432 | 0.547 | 0.846 | 0.626 | 0.396 | 0.735 | 0.627 |
| Quant | Perplexity    |
|-------|---------------|
| mxfp8 | 8.836 ± 0.091 |
| mxfp4 | 9.980 ± 0.108 |

Baseline model (brainwaves)
gemma-4-E4B-it
| Quant | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|-------|-------|-------|-------|-------|-------|-------|-------|
| mxfp8 | 0.404 | 0.489 | 0.825 | 0.586 | 0.392 | 0.734 | 0.661 |
| mxfp4 | 0.414 | 0.508 | 0.854 | 0.562 | 0.378 | 0.717 | 0.645 |
| Quant | Perplexity     | Peak Memory | Tokens/sec |
|-------|----------------|-------------|------------|
| mxfp8 | 34.652 ± 0.502 | 14.80 GB    | 1146       |
| mxfp4 | 35.203 ± 0.506 | 11.06 GB    | 1200       |
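At matching quantization, the baseline table reports perplexity roughly four times higher than the DECKARD-V2 variant's; this may reflect a different evaluation corpus rather than a like-for-like gain from the fine-tune. A minimal check of the gap (mxfp8-row numbers copied from the two tables above):

```python
# mxfp8-row perplexities copied from the variant and baseline tables.
variant_ppl = 8.936    # DECKARD-V2 variant, mxfp8
baseline_ppl = 34.652  # gemma-4-E4B-it baseline, mxfp8

ratio = baseline_ppl / variant_ppl
print(f"baseline / variant perplexity ratio: {ratio:.2f}x")
```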
Chat template excerpt (thinking enabled by default):

```text
{%- set enable_thinking = true -%}
```