qwen3vl-open-schematics-lora

license: apache-2.0 · by kingabzpro · Image Model · 8B params
Quick Summary

A LoRA fine-tune of Qwen3-VL (8B) for reading electronics schematics: given a schematic image, it extracts component labels and identifiers (part numbers, values, footprints, and net labels such as +5V/GND) and returns them as a comma-separated list.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 8GB+ RAM
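The RAM tiers above follow from the parameter count and weight precision. A quick back-of-the-envelope check (the bytes-per-parameter figures are standard for bf16/int8/4-bit; the 20% overhead factor for activations and KV cache is an assumption):

```python
def est_ram_gb(params_b: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Rough memory estimate: parameters (in billions) times bytes per
    parameter, plus ~20% headroom (overhead factor is an assumption)."""
    return params_b * bytes_per_param * overhead

# 8B parameters at different precisions:
print(round(est_ram_gb(8, 2), 1))    # bf16  -> ~19.2 GB (server / big GPU)
print(round(est_ram_gb(8, 1), 1))    # int8  -> ~9.6 GB  (laptop)
print(round(est_ram_gb(8, 0.5), 1))  # 4-bit -> ~4.8 GB  (mobile tier)
```

This lines up with the tiers listed: full-precision inference needs a server-class GPU, while 4-bit quantization brings the weights into the 4-6GB mobile range.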

Code Examples

Usage (Python, transformers)
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image

MODEL_ID = "kingabzpro/qwen3vl-open-schematics-lora"  # change me

# Note: if this repo ships only the LoRA adapter weights (not a merged
# checkpoint), load the base Qwen3-VL model first and attach the adapter
# with peft instead of loading MODEL_ID directly.
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # bf16 halves memory vs fp32
    device_map="auto",           # place layers on available GPU/CPU
).eval()

def build_prompt(example):
    name = example.get("name") or "Unknown project"
    ftype = example.get("type") or "unknown format"
    return (
        f"Project: {name}\nFormat: {ftype}\n"
        "From the schematic image, extract all component labels and identifiers exactly as shown "
        "(part numbers, values, footprints, net labels like +5V/GND).\n"
        "Output only a comma-separated list. Do not generalize or add extra text."
    )

def run_inference(model_, example, max_new_tokens=256):
    prompt = build_prompt(example)
    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "image": example["image"]},
            {"type": "text", "text": prompt},
        ],
    }]

    inputs = processor.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model_.device)

    with torch.inference_mode():
        out = model_.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)

    gen = out[0][inputs["input_ids"].shape[1]:]
    return processor.decode(gen, skip_special_tokens=True)

# ---- Small usage example ----
example = {
    "name": "Arduino-like Board",
    "type": "kicad",
    "image": Image.open("schematic.png").convert("RGB"),
}

print(run_inference(model, example))
After (Fine-tuned):

ATMEGA328P-PU, +5V, GND, R, C, C16MHz,
SERVO_A, SERVO_B, SERVO_C, SERVO_D, SERVO_E, SERVO_F

Target (Dataset):

+5V, 7.62MM-3P, 7.62MM-3P_1, ..., ATMEGA328P-PU, ATMEGA328P-PU_1,
GND, MBB02070C1002FCT00, ..., Y5P102K2KV16CC0224_2
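The gap between the fine-tuned output and the dataset target can be quantified with a simple set-based score over the comma-separated labels. This scoring helper is illustrative, not part of the model card; the sample strings below are shortened from the outputs above:

```python
def parse_labels(csv_text: str) -> set[str]:
    """Split a comma-separated label list, dropping blanks and whitespace."""
    return {tok.strip() for tok in csv_text.split(",") if tok.strip()}

def label_scores(pred: str, target: str) -> tuple[float, float]:
    """Return (precision, recall) of predicted labels against the target set."""
    p, t = parse_labels(pred), parse_labels(target)
    hits = len(p & t)
    return hits / len(p), hits / len(t)

# Shortened examples of the model output and dataset target:
pred = "ATMEGA328P-PU, +5V, GND, R, C"
target = "+5V, ATMEGA328P-PU, GND, MBB02070C1002FCT00"
precision, recall = label_scores(pred, target)
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.60 recall=0.75
```

Exact string matching is deliberately strict, mirroring the prompt's "exactly as shown" instruction; near-misses like C vs C16MHz count as errors.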
