twkeed-vision

License: apache-2.0
Author: twkeed-sa
Parameters: 4B
Quick Summary

A 4B-parameter vision-language model distributed as LoRA adapters for Qwen3-VL-4B-Instruct (loaded via MLX in the example below), with Arabic instruction-following support.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 4GB+ RAM
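The RAM figures above line up with a back-of-envelope estimate for a 4B-parameter model (a rough sketch; the fixed ~2 GB overhead for activations and KV cache is our assumption, not a measured value for this model):

```python
def model_memory_gb(n_params: float, bits_per_weight: float,
                    overhead_gb: float = 2.0) -> float:
    """Weight storage plus a rough fixed overhead for activations/KV cache."""
    weights_gb = n_params * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

print(model_memory_gb(4e9, 4))   # 4-bit weights: ~4 GB total (mobile range)
print(model_memory_gb(4e9, 16))  # fp16 weights: ~10 GB total (laptop range)
```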

Code Examples

Usage (Python)
import mlx.core as mx
from mlx_vlm import load, generate
from mlx_vlm.trainer import get_peft_model

# Load base model
model, processor = load("mlx-community/Qwen3-VL-4B-Instruct-4bit")

# Apply the LoRA structure the adapters expect (rank, alpha, and target
# projections must match the configuration used at training time)
target_modules = ["q_proj", "v_proj", "k_proj", "o_proj"]
model = get_peft_model(model, linear_layers=target_modules, rank=16, alpha=2.0, dropout=0.05, freeze=True)

# Load adapters (download from this repo)
adapter_weights = mx.load("path/to/adapters.safetensors")
# Strip language_model prefix
stripped_weights = {k.replace('language_model.', ''): v for k, v in adapter_weights.items()}
model.language_model.load_weights(list(stripped_weights.items()), strict=False)

# Generate with an Arabic prompt ("من أنت؟" means "Who are you?")
prompt = "<|im_start|>user\nمن أنت؟<|im_end|>\n<|im_start|>assistant\n"
result = generate(model, processor, prompt, max_tokens=256)
print(result.text)
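The ChatML-style prompt string above can be built with a small helper. This is a sketch of our own (the function name is not part of mlx_vlm; processors typically expose an apply_chat_template method for the same purpose, shown here as a raw string for clarity):

```python
def qwen_chat_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the ChatML format used above.

    Hypothetical helper, not part of mlx_vlm.
    """
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(qwen_chat_prompt("من أنت؟"))
```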
