Huihui-GLM-4.5-Air-abliterated-mlx-mxfp4
by huihui-ai · License: MIT · Language Model · 3 languages · ~1K downloads
Early-stage · Edge AI targets: Mobile, Laptop, Server
Quick Summary
This is an uncensored (abliterated) version of zai-org/GLM-4.5-Air, quantized to MXFP4 for Apple's MLX framework.
Code Examples
Usage (Python):

```python
# Requires the mlx-lm package: pip install mlx-lm
from mlx_lm import load, generate

# Downloads the quantized weights on first use, then loads model and tokenizer
model, tokenizer = load("huihui-ai/Huihui-GLM-4.5-Air-abliterated-mlx-mxfp4")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]

# Render the chat messages into the model's expected prompt format
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
print(text)
```
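The snippet above uses default generation settings. As a hedged sketch, recent versions of mlx-lm also let you cap response length and tune sampling; `make_sampler` and the `sampler`/`max_tokens` parameters below are from mlx-lm's API and are worth verifying against your installed version:

```python
# Sketch only: assumes a recent mlx-lm that exposes make_sampler and
# accepts sampler/max_tokens in generate(); check your installed version.
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("huihui-ai/Huihui-GLM-4.5-Air-abliterated-mlx-mxfp4")

messages = [{"role": "user", "content": "Write a story about Einstein"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Moderate temperature plus nucleus sampling for steadier output
sampler = make_sampler(temp=0.7, top_p=0.9)

# max_tokens bounds the response so long generations don't run unbounded
text = generate(model, tokenizer, prompt=prompt,
                max_tokens=512, sampler=sampler, verbose=False)
print(text)
```

Note that this requires Apple Silicon hardware, since MLX targets Apple's unified-memory GPUs.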
Deploy This Model
Production-ready deployment in minutes.

Together.ai — Instant API access to this model. Production-ready inference API; start free, scale to millions. [Try Free API]

Replicate — One-click model deployment. Run models in the cloud with a simple API; no DevOps required. [Deploy Now]

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.