GLM-4.7-int4-mixed-AutoRound
by Intel · Language Model · 66 downloads · Early-stage
Edge AI targets: Mobile, Laptop, Server
Quick Summary
An int4/int8 mixed-precision quantization of zai-org/GLM-4.7, produced with Intel's AutoRound.
Code Examples
Generate the model

```python
import torch
from auto_round import AutoRound
from auto_round.utils import llm_load_model

model_name = "zai-org/GLM-4.7"
model, tokenizer = llm_load_model(model_name, device="cpu")

# Mixed-precision recipe: 4-bit for routed-expert linears, 8-bit for all
# other linears; lm_head is left unquantized.
layer_config = {}
for n, m in model.named_modules():
    if isinstance(m, torch.nn.Linear):
        if "expert" in n and "shared_experts" not in n:
            layer_config[n] = {"bits": 4}
            print(n, 4)
        elif n != "lm_head":
            layer_config[n] = {"bits": 8}
            print(n, 8)

# iters=0 skips the tuning loop, so quantization is round-to-nearest only.
autoround = AutoRound(model, tokenizer, iters=0, layer_config=layer_config, disable_opt_rtn=True)
autoround.quantize_and_save(format="auto_round", output_dir="tmp_autoround")
```
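The mixed-precision rule in the script above depends only on module names: routed-expert linears get 4 bits, `lm_head` stays unquantized, and everything else gets 8 bits. A minimal sketch of that selection logic, using hypothetical GLM-style module names for illustration (no torch required):

```python
def pick_bits(name: str):
    """Bit width the recipe above assigns to a linear layer by name.

    Routed experts -> 4-bit; lm_head -> None (unquantized); all else -> 8-bit.
    """
    if "expert" in name and "shared_experts" not in name:
        return 4
    if name == "lm_head":
        return None
    return 8

# Hypothetical module names, for illustration only.
examples = {
    "model.layers.0.mlp.experts.3.gate_proj": 4,     # routed expert -> int4
    "model.layers.0.mlp.shared_experts.up_proj": 8,  # shared expert -> int8
    "model.layers.0.self_attn.q_proj": 8,            # attention -> int8
    "lm_head": None,                                 # left unquantized
}
for name, expected in examples.items():
    assert pick_bits(name) == expected
```

This mirrors why MoE checkpoints quantize well at low bit widths: the routed experts dominate the parameter count, so pushing only them to 4-bit captures most of the size reduction while the always-active layers keep 8-bit precision.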