Qwen3-235B-A22B-Instruct-2507-int4-mixed-AutoRound

by Intel · license: apache-2.0 · 235B params · 85 downloads
Quick Summary

An int4 mixed-precision quantization of Qwen/Qwen3-235B-A22B-Instruct-2507, produced with Intel's AutoRound: mlp.gate router layers are kept at 16-bit, non-expert linear layers at 8-bit, and the remaining layers at 4-bit.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 219GB+ RAM
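As a back-of-the-envelope check on the memory figure above (a rough sketch only; the 219GB+ recommendation presumably also covers the 8-bit layers, KV cache, and runtime overhead):

```python
# Rough estimate of weight storage for a 235B-parameter model at 4 bits.
# This ignores the layers kept at 8/16 bits and all runtime overhead.
params = 235e9
int4_bytes = params * 0.5  # 4 bits = 0.5 bytes per parameter
print(f"~{int4_bytes / 1e9:.0f} GB for pure int4 weights")  # ~118 GB
```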

Code Examples

Generate the model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/Qwen3-235B-A22B-Instruct-2507"

model = AutoModelForCausalLM.from_pretrained(model_name,
                                             device_map="cpu", torch_dtype="auto")

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Per-layer overrides: the MoE router (mlp.gate) stays at 16-bit, and
# non-expert linear layers (plus shared experts) are kept at 8-bit;
# everything else falls back to the 4-bit default below.
layer_config = {}
for n, m in model.named_modules():
    if "mlp.gate" in n:  # vLLM only supports 16-bit for this layer
        layer_config[n] = {"bits": 16}
    elif isinstance(m, torch.nn.Linear) and ("expert" not in n or "shared_experts" in n) and n != "lm_head":
        layer_config[n] = {"bits": 8, "group_size": 128}

autoround = AutoRound(model, tokenizer, iters=0, group_size=64, layer_config=layer_config)
output_dir = "/dataset/Qwen3-235B-A22B-Instruct-2507-int4-mixed"
autoround.quantize_and_save(output_dir)

# Workaround for the qkv fusing issue; to be fixed in vLLM later.
# vLLM fuses the q/k/v projections into a single qkv_proj module, so the
# fused name needs an explicit entry in the saved quantization config.
import os
import json

config_path = os.path.join(output_dir, "config.json")

with open(config_path, "r") as file:
    config = json.load(file)
extra_config = config["quantization_config"]["extra_config"]
num_hidden_layers = config["num_hidden_layers"]
for i in range(num_hidden_layers):
    qkv_name = f"model.layers.{i}.self_attn.qkv_proj"
    extra_config[qkv_name] = {"bits": 8, "group_size": 128}
with open(config_path, "w") as file:
    json.dump(config, file, indent=2)
```
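The qkv_proj patch at the end of the script can be illustrated on a minimal in-memory config (a sketch with placeholder values; a real config.json contains many more fields):

```python
# Minimal stand-in for the saved config.json, with only the fields the
# patch loop touches.
config = {
    "num_hidden_layers": 2,
    "quantization_config": {"extra_config": {}},
}

# Same loop as in the script: register the fused qkv_proj name for each
# layer so vLLM finds an 8-bit entry for it.
extra_config = config["quantization_config"]["extra_config"]
for i in range(config["num_hidden_layers"]):
    qkv_name = f"model.layers.{i}.self_attn.qkv_proj"
    extra_config[qkv_name] = {"bits": 8, "group_size": 128}

print(sorted(extra_config))
# ['model.layers.0.self_attn.qkv_proj', 'model.layers.1.self_attn.qkv_proj']
```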
