phi-4-reasoning-awq
by ronantakizawa · license: MIT · 2 languages · 293 downloads
Edge AI targets: Mobile · Laptop · Server
Quick Summary
This is a 4-bit AWQ quantized version of microsoft/Phi-4-reasoning.
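The practical payoff of 4-bit AWQ is weight-memory footprint. A back-of-envelope sketch (the ~14B parameter count is an assumption about the Phi-4 base model, not a figure stated on this card):

```python
# Approximate weight-only memory for fp16 vs. 4-bit AWQ.
# N_PARAMS (~14B) is an assumption about the Phi-4 base, not from this card.
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Memory needed to store the weights alone, in GB (ignores activations/KV cache)."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 14e9
print(f"fp16 weights:  ~{weight_memory_gb(N_PARAMS, 16):.0f} GB")  # ~28 GB
print(f"4-bit weights: ~{weight_memory_gb(N_PARAMS, 4):.0f} GB")   # ~7 GB
```

Actual VRAM use will be higher once activations and the KV cache are included, but the 4x reduction in weight storage is what makes laptop- and edge-class deployment plausible.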
Training Data Analysis
🟡 Average (5.2/10)
A quality assessment of the training datasets behind phi-4-reasoning-awq (inherited from the base model, microsoft/Phi-4-reasoning).
Specialized For
code
general
science
multilingual
Training Datasets (3)
The Pile
🟢 8/10
code
general
science
multilingual
Key Strengths
- Deliberate Diversity: Explicitly curated to include diverse content types (academia, code, Q&A, book...
- Documented Quality: Each component dataset is thoroughly documented with rationale for inclusion, en...
- Epoch Weighting: Component datasets receive different training epochs based on perceived quality, al...
Common Crawl
🔴 2.5/10
general
science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
- Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
Wikipedia
🟡 5/10
science
multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
Code Examples
Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig
import torch

model_id = "ronantakizawa/phi-4-reasoning-awq"

quantization_config = AwqConfig(
    bits=4,
    fuse_max_seq_len=2048,
    do_fuse=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
    quantization_config=quantization_config,
)

# Reasoning task
prompt = "Solve step-by-step: If a train travels 120 miles in 2 hours, what is its average speed?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Using AutoAWQ
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "ronantakizawa/phi-4-reasoning-awq"

# Load the pre-quantized checkpoint directly with AutoAWQ
model = AutoAWQForCausalLM.from_quantized(
    model_id,
    fuse_layers=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Generate
prompt = "Explain the logic: All dogs are mammals. All mammals are animals. Therefore..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
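Before running either example, a quick dependency check can save a failed multi-gigabyte model download. This is a convenience sketch, not part of the original card; it only verifies that the required packages are importable:

```python
# Pre-flight check (convenience sketch, not from the original card):
# confirm the packages the examples above rely on are importable.
import importlib.util

REQUIRED = ["torch", "transformers", "awq", "accelerate"]

def missing_packages(names=REQUIRED):
    """Return the subset of package names that cannot be imported."""
    return [name for name in names if importlib.util.find_spec(name) is None]

missing = missing_packages()
if missing:
    print("Missing dependencies:", ", ".join(missing))
else:
    print("All example dependencies found.")
```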
Installation

```bash
pip install autoawq transformers accelerate
```

Citation
```bibtex
@misc{phi-4-reasoning-awq,
  author = {Ronan Takizawa},
  title = {Phi-4-reasoning AWQ 4-bit Quantized},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/ronantakizawa/phi-4-reasoning-awq}}
}
```