Llama-3.2-3B-Instruct-gptqmodel-4bit-vortex-mlx-v3

3.0B-parameter llama model by ModelCloud
License: Other
Status: New · 0 downloads · Early-stage
Edge AI: Mobile · Laptop · Server · 7GB+ RAM

Quick Summary

This model was quantized to 4-bit with GPTQModel and exported to MLX format.

Device Compatibility

  • Mobile: 4-6GB RAM
  • Laptop: 16GB RAM
  • Server: GPU
  • Minimum recommended: 3GB+ RAM
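
These figures are consistent with a back-of-the-envelope estimate. A minimal sketch, assuming roughly 4.5 bits per weight (4-bit values plus quantization scales/zero-points) and ~50% runtime headroom for activations and KV cache; both ratios are illustrative assumptions, not measurements:

# Rough RAM estimate for a 4-bit-quantized 3.0B-parameter model (illustrative assumptions)
params = 3.0e9           # parameter count
bits_per_weight = 4.5    # 4-bit weights + ~0.5 bit/weight of quantization metadata (assumed)

weights_gb = params * bits_per_weight / 8 / 1e9
total_gb = weights_gb * 1.5  # ~50% headroom for activations and KV cache (assumed)

print(f"weights: ~{weights_gb:.1f} GB")  # ~1.7 GB
print(f"total:   ~{total_gb:.1f} GB")    # ~2.5 GB, in line with the 3GB+ minimum above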

Training Data Analysis

Overall quality: 🟡 Average (4.8/10)

Quality assessment of the training datasets identified for Llama-3.2-3B-Instruct-gptqmodel-4bit-vortex-mlx-v3.
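
For reference, the headline score appears to be the simple mean of the four per-dataset scores listed below: (2.5 + 6.0 + 5.0 + 5.5) / 4 ≈ 4.8.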

Specialized For

general · science · multilingual · reasoning

Training Datasets (4)

Common Crawl — 🔴 2.5/10 (general, science)
Key Strengths
  • Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
  • Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
  • Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
  • Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
  • Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
C4 — 🔵 6/10 (general, multilingual)
Key Strengths
  • Scale and Accessibility: 750GB of publicly available, filtered text
  • Systematic Filtering: Documented heuristics enable reproducibility
  • Stylistic Diversity: Despite being English-only, it captures diverse writing styles
Considerations
  • English-Only: Limits multilingual applications
  • Filtering Limitations: Offensive content and low-quality text remain despite filtering
Wikipedia — 🟡 5/10 (science, multilingual)
Key Strengths
  • High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
  • Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
  • Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
  • Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
  • Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
arXiv — 🟡 5.5/10 (science, reasoning)
Key Strengths
  • Scientific Authority: Scholarly preprints from an established research repository (arXiv content is not formally peer-reviewed)
  • Domain-Specific: Specialized vocabulary and concepts
  • Mathematical Content: Includes complex equations and notation
Considerations
  • Specialized: Primarily technical and mathematical content
  • English-Heavy: Predominantly English-language papers


Code Examples

How to run this model (bash)

# install the mlx_lm package
pip install mlx_lm
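
Note that MLX is Apple's machine-learning framework for Apple silicon, so these examples assume a Mac with an M-series chip.
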
How to run this model (Python)

from mlx_lm import load, generate

# load the MLX-format model and tokenizer from the Hub
mlx_path = "ModelCloud/Llama-3.2-3B-Instruct-gptqmodel-4bit-vortex-mlx-v3"
mlx_model, tokenizer = load(mlx_path)
prompt = "The capital of France is"

text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
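
Because this is an instruct-tuned model, chat-formatted prompts generally behave better than raw completions. A minimal sketch following the chat-template pattern documented for mlx_lm (the question is an arbitrary example):

from mlx_lm import load, generate

mlx_model, tokenizer = load("ModelCloud/Llama-3.2-3B-Instruct-gptqmodel-4bit-vortex-mlx-v3")

# wrap the user turn in the model's chat template before generating
messages = [{"role": "user", "content": "What is the capital of France?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
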
Export GPTQ to MLX (bash)

# install gptqmodel with mlx support
pip install gptqmodel[mlx] --no-build-isolation
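
If your shell is zsh, quote the extra so the brackets are not treated as a glob pattern: pip install 'gptqmodel[mlx]' --no-build-isolation.
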
Export GPTQ to MLX (Python)

from gptqmodel import GPTQModel

# GPTQ-quantized source model on the Hub
gptq_model_path = "ModelCloud/Llama-3.2-3B-Instruct-gptqmodel-4bit-vortex-v3"
# local output directory for the MLX export
mlx_path = "./vortex/Llama-3.2-3B-Instruct-gptqmodel-4bit-vortex-mlx-v3"

# export the quantized weights to mlx format
GPTQModel.export(gptq_model_path, mlx_path, "mlx")
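
Once the export completes, the directory at mlx_path should be loadable directly with mlx_lm.load, as in the run example above; generating a short completion is a quick way to verify the conversion end to end.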

Deploy This Model

Production-ready deployment in minutes:

  • Together.ai (fastest API): instant API access to this model; production-ready inference with a free tier that scales.
  • Replicate (easiest setup): one-click model deployment; run the model in the cloud behind a simple API, no DevOps required.
