smolvla-libero-QNN-CPU
by xpuenabler
133 downloads
Tags: Early-stage · Edge AI (Mobile, Laptop, Server)
Quick Summary
A SmolVLA vision-language-action policy for the LIBERO benchmark, packaged to run on the Qualcomm QNN SDK (CPU backend).
Code Examples
Requirements

```bash
# Set QNN SDK path
export QNN_SDK_ROOT=/path/to/qnn-sdk-v2.43.0

# Add QNN libraries to the library path
export LD_LIBRARY_PATH=$QNN_SDK_ROOT/lib/x86_64-linux-clang:$LD_LIBRARY_PATH

# Verify QNN tools are available
which qnn-net-run
```

Install the Python dependencies:

```bash
pip install numpy torch transformers huggingface_hub
```

Run inference:
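Before running inference, the environment setup above can be sanity-checked from Python. This helper is a minimal sketch of our own, not part of the model's tooling; it only verifies the `QNN_SDK_ROOT` variable and that `qnn-net-run` is on the `PATH`:

```python
import os
import shutil

def check_qnn_env():
    """Return a list of problems found with the QNN environment (empty = OK)."""
    problems = []
    sdk_root = os.environ.get("QNN_SDK_ROOT")
    if not sdk_root:
        problems.append("QNN_SDK_ROOT is not set")
    elif not os.path.isdir(sdk_root):
        problems.append(f"QNN_SDK_ROOT is not a directory: {sdk_root}")
    if shutil.which("qnn-net-run") is None:
        problems.append("qnn-net-run not found on PATH")
    return problems

for problem in check_qnn_env():
    print("WARNING:", problem)
```

An empty result means the basic setup looks usable; it does not verify the SDK version itself.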
```python
from infer_libero_episode_qnn_cpu import QNNSmolVLAInference

# Initialize inference engine
inference = QNNSmolVLAInference(
    qnn_models_dir="./qnn_models",
    config_path="./config.json",
    preprocessor_config="./policy_preprocessor.json",
    postprocessor_config="./policy_postprocessor.json",
)

# Run inference on an image and language instruction
image = ...  # PIL Image or numpy array [H, W, 3]
instruction = "pick up the red cube"
action = inference.predict(image, instruction)
print(f"Predicted action: {action}")  # [8, 7] velocity vector
```
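The `[8, 7]` output suggests each call returns a chunk of 8 seven-dimensional actions. A control loop consuming such chunks might look like the sketch below; the `env` interface and `run_episode` helper are illustrative assumptions, not part of this repository:

```python
import numpy as np

def run_episode(predict, env, instruction, max_steps=200, chunk_len=8):
    """Illustrative loop: re-query the policy after executing each action chunk."""
    obs = env.reset()
    for _ in range(0, max_steps, chunk_len):
        # predict() is assumed to return a (chunk_len, 7) array of actions
        chunk = np.asarray(predict(obs["image"], instruction))
        for action in chunk:
            obs, done = env.step(action)
            if done:
                return True  # task completed
    return False  # step budget exhausted
```

Re-querying after each chunk keeps the policy's observations fresh while amortizing inference cost over several control steps.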
Version check:

```bash
# Ensure QNN SDK v2.43.0 is installed;
# models may not be compatible with other versions
qnn-net-run --version
```
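Since the models are pinned to SDK v2.43.0, the version check can be automated. This grep-based test is our suggestion, and the exact `--version` output format may differ:

```shell
# Warn if the installed QNN SDK does not report version 2.43.0
# (adjust the pattern if qnn-net-run formats its version string differently)
if qnn-net-run --version 2>/dev/null | grep -q "2\.43\.0"; then
    echo "QNN SDK version OK"
else
    echo "WARNING: expected QNN SDK v2.43.0"
fi
```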