Qwen3.5-0.8B-vision-LORA-16bit
License: apache-2.0
by Mustafaege
Image model · 0.8B params · Early-stage
Edge AI: runs on mobile, laptop, or server with 2GB+ RAM.
Quick Summary
A 16-bit LoRA adapter for the Qwen3.5-0.8B vision model, fine-tuned for image-to-LaTeX OCR: given an image of a formula, it generates the corresponding LaTeX source.
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 1GB+ RAM
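The RAM figures above follow from the parameter count. A quick back-of-the-envelope estimate (illustrative arithmetic only; real usage adds activation and KV-cache overhead on top of the weights):

```python
# Rough weight-memory estimate for a 0.8B-parameter model at 16-bit precision.
params = 0.8e9          # 0.8 billion parameters
bytes_per_param = 2     # fp16/bf16 = 2 bytes per parameter

weight_gb = params * bytes_per_param / 1e9
print(f"Weights alone: ~{weight_gb:.1f} GB")  # ~1.6 GB
```

About 1.6 GB for the weights alone, which is why 2GB+ RAM is the practical floor for this model.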
Code Examples
Installation:

```bash
pip install unsloth transformers peft trl torch pillow
```

How to Get Started:

```python
from unsloth import FastVisionModel
from PIL import Image

# Load the LoRA adapter together with its base model
model, tokenizer = FastVisionModel.from_pretrained(
    model_name="Mustafaege/Qwen3.5-0.8B-vision-LORA-16bit",
)
FastVisionModel.for_inference(model)  # switch to inference mode

image = Image.open("formula.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Write the LaTeX representation for this image."},
        ],
    }
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(image, input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
# Example output: \frac{d}{dx}\left(e^{x}\right) = e^{x}
```
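Generated LaTeX can be lightly validated before rendering. A minimal brace-balance check (a generic sketch, not part of the model's tooling):

```python
def braces_balanced(latex: str) -> bool:
    """Return True if '{' and '}' pair up correctly in the string."""
    depth = 0
    for ch in latex:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth < 0:  # a '}' appeared before its matching '{'
                return False
    return depth == 0

print(braces_balanced(r"\frac{d}{dx}\left(e^{x}\right) = e^{x}"))  # True
print(braces_balanced(r"\frac{d}{dx"))                             # False
```

Failed checks are a cheap signal to retry generation or flag the output for review.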
Loading with transformers + PEFT:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "unsloth/Qwen3.5-0.8B"
adapter_id = "Mustafaege/Qwen3.5-0.8B-vision-LORA-16bit"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype="auto",
    device_map="auto",
)
# Attach the LoRA adapter on top of the frozen base weights
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```
Merge and Export (for GGUF conversion or deployment):

```python
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    model_name="Mustafaege/Qwen3.5-0.8B-vision-LORA-16bit",
)
# Merge the LoRA adapter into the base weights and save the result
model.save_pretrained_merged("Qwen3.5-0.8B-vision-OCR-merged", tokenizer)
```
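Conceptually, merging folds the low-rank update into the dense weight: W' = W + (alpha/r)·B·A. A NumPy sketch of that operation (shapes, rank, and scaling here are illustrative assumptions, not the adapter's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16      # toy dimensions; real layers are much larger

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # LoRA down-projection
B = rng.standard_normal((d_out, r))      # LoRA up-projection

# Fold the adapter into the base weight, as save_pretrained_merged does at scale
W_merged = W + (alpha / r) * (B @ A)

# After merging, a single matmul reproduces base-plus-adapter output
x = rng.standard_normal(d_in)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

Merging removes the adapter's extra matmuls at inference time, which is why it is the usual first step before GGUF conversion.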
Citation

```bibtex
@misc{mustafaege2026qwen35visionocr,
  title  = {Qwen3.5-0.8B Vision OCR: 16-bit LoRA Adapter for Image-to-LaTeX},
  author = {Mustafaege},
  year   = {2026},
  url    = {https://huggingface.co/Mustafaege/Qwen3.5-0.8B-vision-LORA-16bit}
}

@misc{qwen3_5,
  title     = {Qwen3.5 Technical Report},
  author    = {Qwen Team},
  year      = {2025},
  publisher = {Alibaba Cloud}
}
```