# MiniCPM-SALA

by openbmb · license: apache-2.0 · Language Model · Early-stage · Edge AI: Mobile, Laptop, Server
## Quick Summary

MiniCPM-SALA is an early-stage language model from OpenBMB targeting edge deployment (mobile, laptop, and server). It is served through a dedicated SGLang fork (branch `minicpm_sala`) and builds on sparse- and linear-attention kernels, as reflected by its `infllmv2_cuda_impl`, `sparse_kernel`, and `flash-linear-attention` dependencies.
## Code Examples

### SGLang Installation

```bash
# Clone repository (MiniCPM-SALA branch)
git clone -b minicpm_sala https://github.com/OpenBMB/sglang.git
cd sglang

# One-click installation (creates venv and compiles all dependencies)
bash install_minicpm_sala.sh

# Or specify a PyPI mirror
bash install_minicpm_sala.sh https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
```

### Usage

```bash
# Activate environment
source sglang_minicpm_sala_env/bin/activate

# Launch inference server (replace MODEL_PATH with the actual path)
MODEL_PATH=/path/to/your/MiniCPM-SALA
python3 -m sglang.launch_server \
    --model ${MODEL_PATH} \
    --trust-remote-code \
    --disable-radix-cache \
    --attention-backend minicpm_flashinfer \
    --chunked-prefill-size 8192 \
    --max-running-requests 32 \
    --skip-server-warmup \
    --port 31111 \
    --dense-as-sparse
```

### Manual Installation

```bash
# 0. Ensure uv is installed
pip install uv

# 1. Create venv
uv venv --python 3.12 sglang_minicpm_sala_env
source sglang_minicpm_sala_env/bin/activate

# 2. Install SGLang
uv pip install --upgrade pip setuptools wheel
uv pip install -e ./python[all]

# 3. Compile CUDA extensions
# (ensure the dependencies are cloned into 3rdparty/)
cd 3rdparty/infllmv2_cuda_impl && python setup.py install && cd ../..
cd 3rdparty/sparse_kernel && python setup.py install && cd ../..

# 4. Install extra dependencies
uv pip install tilelang flash-linear-attention
```
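Once the server is up, it can be queried through SGLang's OpenAI-compatible HTTP API. The sketch below is a minimal stdlib-only client, assuming the launch command above (port `31111`); the helper names `build_chat_request` and `chat` are illustrative, not part of SGLang.

```python
import json
import urllib.request

def build_chat_request(prompt, host="http://localhost:31111", max_tokens=64):
    """Return the URL and JSON payload for a chat completion request."""
    url = f"{host}/v1/chat/completions"
    payload = {
        "model": "default",  # SGLang serves the single model it was launched with
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return url, payload

def chat(prompt, **kwargs):
    """POST the request to the running server and return the reply text."""
    url, payload = build_chat_request(prompt, **kwargs)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server from the Usage section running, `chat("Hello!")` returns the model's reply as a string.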