unsloth-Qwen3-VL-30B-A3B-Instruct-qx86x-hi-mlx

Author: nightmedia
Model type: Image model (vision-language)
Parameters: 30B (30.0B)
License: apache-2.0
Downloads: 32
Status: New, early-stage
Edge AI targets: Mobile, Laptop, Server
RAM: 68GB+
Quick Summary

Let's break down the differences between:
- unsloth-Qwen3-VL-30B-A3B-Instruct-qx86x-hi-mlx
- unsloth-Qwen3-VL-30B-A3B-Instruct-qx86-hi-mlx
- unsloth-Qwen3-VL-30...

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 28GB+ RAM
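
Given the 28GB+ recommendation, it can help to check available memory before attempting a load. A minimal sketch in Python, assuming the third-party psutil package is installed (pip install psutil); the 28 GiB threshold simply mirrors the table above:

import psutil

# Recommended minimum from the compatibility table above (in GiB).
REQUIRED_GIB = 28

def has_enough_ram(required_gib: float = REQUIRED_GIB) -> bool:
    """Return True if total system memory meets the recommended minimum."""
    total_gib = psutil.virtual_memory().total / (1024 ** 3)
    return total_gib >= required_gib

if not has_enough_ram():
    print("Below the 28GB+ recommendation; loading may swap or fail.")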

Code Examples

Use with mlx (shell)

pip install mlx-lm
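
After installing, mlx-lm's command-line entry point offers a quick smoke test without writing any Python. A minimal sketch using the standard mlx_lm.generate CLI (the prompt is arbitrary):

mlx_lm.generate --model unsloth-Qwen3-VL-30B-A3B-Instruct-qx86x-hi-mlx --prompt "hello"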
Use with mlx (Python)
from mlx_lm import load, generate

# Download (if needed) and load the quantized weights and tokenizer.
model, tokenizer = load("unsloth-Qwen3-VL-30B-A3B-Instruct-qx86x-hi-mlx")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Generate a completion; verbose=True streams tokens to stdout.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
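
For more control over decoding, generate accepts additional keyword arguments. A minimal sketch along the same lines as the example above; max_tokens is a standard mlx-lm parameter, and 512 is an arbitrary cap chosen for illustration:

from mlx_lm import load, generate

model, tokenizer = load("unsloth-Qwen3-VL-30B-A3B-Instruct-qx86x-hi-mlx")

# Format the request with the model's chat template, as in the example above.
messages = [{"role": "user", "content": "Summarize what this model does."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# max_tokens caps the completion length; verbose streams tokens to stdout.
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)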
