Llama-3.1_OpenScholar-8B-AWQ
Language Model (llama architecture) by NeuML
License: OTHER
Parameters: 8B
Status: New, early-stage, 4 downloads
Quick Summary
This is Llama-3.1_OpenScholar-8B with AWQ quantization applied using the code shown below.
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 8GB+ RAM
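These figures are roughly consistent with the size of 4-bit weights for an 8B-parameter model. A minimal back-of-the-envelope sketch (the ~20% runtime overhead factor is an assumption, not a measured value):

# Rough weight-memory estimate for a 4-bit AWQ quantization of an 8B-parameter model
params = 8e9              # 8B parameters
bits_per_weight = 4       # matches the w_bit=4 setting in the quantization config below
weight_gb = params * bits_per_weight / 8 / 1e9
print(f"4-bit weights alone: ~{weight_gb:.1f} GB")                          # ~4.0 GB
print(f"With an assumed ~20% runtime overhead: ~{1.2 * weight_gb:.1f} GB")  # ~4.8 GB, in line with the 4-6GB mobile figure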
Training Data Analysis
🟡 Average (4.8/10)
Analysis of the training datasets used by Llama-3.1_OpenScholar-8B-AWQ, with a quality assessment for each.
Specialized for: general, science, multilingual, reasoning
Training Datasets (4)
Common Crawl
🔴 2.5/10
general, science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data.
- Diversity: The dataset captures billions of web pages across multiple domains and content types.
- Comprehensive Coverage: Despite its limitations, Common Crawl attempts to represent the broader web.
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, so content from less-linked sites and communities is under-represented.
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, and violent content.
C4
🔵 6/10
general, multilingual
Key Strengths
- Scale and Accessibility: 750GB of publicly available, filtered text
- Systematic Filtering: Documented heuristics enable reproducibility
- Language Diversity: Despite being English-only, captures diverse writing styles
Considerations
- English-Only: Limits multilingual applications
- Filtering Limitations: Offensive content and low-quality text remain despite filtering
Wikipedia
🟡 5/10
science, multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation requirements.
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and generate many languages.
- Structured Knowledge: Articles follow consistent formatting with clear sections, helping models learn document structure.
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality and fewer articles.
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and interests are over-represented.
arXiv
🟡 5.5/10
science, reasoning
Key Strengths
- Scientific Authority: Scholarly papers from an established research repository
- Domain-Specific: Specialized vocabulary and concepts
- Mathematical Content: Includes complex equations and notation
Considerations
- Specialized: Primarily technical and mathematical content
- English-Heavy: Predominantly English-language papers
Code Examples
AWQ quantization was applied with the following Python script (using transformers and AutoAWQ):
import torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
# Input and output path
path = "OpenScholar/Llama-3.1_OpenScholar-8B"
output = "Llama-3.1_OpenScholar-8B-AWQ"
# Quantization config
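# zero_point: use asymmetric quantization with per-group zero points
# q_group_size: quantize weights in groups of 128 values
# w_bit: 4-bit weight precision
# version: kernel implementation; "GEMM" is the common general-purpose choice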
config = {
"zero_point": True,
"q_group_size": 128,
"w_bit": 4,
"version": "GEMM"
}
# Load model
model = AutoAWQForCausalLM.from_pretrained(
model_path=path,
low_cpu_mem_usage=True,
use_cache=False,
safetensors=False,
device_map="cuda",
torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(path)
# Quantize
model.quantize(tokenizer, quant_config=config)
# Save quantized model
model.save_quantized(output)
# Save tokenizer
# Note: Transformers >= 4.45.0 doubles size of tokenizer.json
# See https://github.com/huggingface/transformers/issues/34744
tokenizer.save_pretrained(output)
print(f'Model is quantized and saved to "{output}"')
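Once saved, the quantized model can be loaded for inference. Below is a minimal sketch using the transformers library, which can load AWQ checkpoints when the autoawq package is installed; the prompt is a placeholder and the path assumes the output directory produced above:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the AWQ-quantized model saved by the script above
path = "Llama-3.1_OpenScholar-8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, device_map="cuda", torch_dtype=torch.float16)

# Placeholder prompt; OpenScholar targets scientific-literature questions
prompt = "Summarize recent approaches to retrieval-augmented generation for scientific QA."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))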
Deploy This Model
Production-ready deployment in minutes.
Together.ai: Instant API access to this model. Production-ready inference API; start free, scale to millions.
Replicate: One-click model deployment. Run models in the cloud with a simple API; no DevOps required.
Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.