smolvla-libero-QNN-CPU

by xpuenabler · 133 downloads
Early-stage · Edge AI: Mobile, Laptop, Server
Quick Summary

SmolVLA vision-language-action policy for the LIBERO manipulation benchmark, exported to run on the Qualcomm QNN SDK (v2.43.0) CPU backend.

Code Examples

Requirements

```bash
# Set QNN SDK path
export QNN_SDK_ROOT=/path/to/qnn-sdk-v2.43.0

# Add QNN libraries to the library path
export LD_LIBRARY_PATH=$QNN_SDK_ROOT/lib/x86_64-linux-clang:$LD_LIBRARY_PATH

# Verify QNN tools are available
which qnn-net-run
```
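The export steps above can be sanity-checked with a small script. This is a sketch, not part of the repo; the SDK directory layout (`lib/x86_64-linux-clang`) is taken from the `LD_LIBRARY_PATH` line above and may differ on other platforms:

```shell
# check_qnn: report whether the QNN library directory exists under the
# given SDK root (falls back to $QNN_SDK_ROOT if no argument is passed).
check_qnn() {
    dir="${1:-$QNN_SDK_ROOT}/lib/x86_64-linux-clang"
    if [ -d "$dir" ]; then
        echo "found"
    else
        echo "missing"
    fi
}

check_qnn /path/to/qnn-sdk-v2.43.0
```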
Python Dependencies

```bash
pip install numpy torch transformers huggingface_hub
```
Basic Usage

```python
from infer_libero_episode_qnn_cpu import QNNSmolVLAInference

# Initialize the inference engine with the exported QNN models and configs
inference = QNNSmolVLAInference(
    qnn_models_dir="./qnn_models",
    config_path="./config.json",
    preprocessor_config="./policy_preprocessor.json",
    postprocessor_config="./policy_postprocessor.json"
)

# Run inference on an image and a language instruction
image = ...  # PIL Image or numpy array [H, W, 3]
instruction = "pick up the red cube"

action = inference.predict(image, instruction)
print(f"Predicted action: {action}")  # [8, 7] velocity vector
```
Version Compatibility

```bash
# Ensure QNN SDK v2.43.0 is installed
# Models may not be compatible with other versions
qnn-net-run --version
```
