AntAngelMed

by MedAIBase · license: apache-2.0 · Code Model
Quick Summary

AntAngelMed is an AI model from MedAIBase with specialized capabilities, released under the Apache-2.0 license.

Code Examples

**vLLM**

```shell
pip install vllm==0.11.0
```
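Once vLLM is installed, the model can be served through vLLM's OpenAI-compatible server. A minimal sketch, assuming the model weights are available locally (the path below is a placeholder, not a confirmed repository name):

```shell
# Hypothetical sketch: serve the model via vLLM's OpenAI-compatible API.
# Replace the placeholder path with the actual model directory or Hub ID.
vllm serve /path/to/AntAngelMed --port 8000
```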
**SGLang**

```shell
pip install sglang -U
```
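After installation, SGLang's standard launcher can host the model behind an OpenAI-compatible endpoint. A minimal sketch, assuming a local copy of the weights (the path and port are placeholders):

```shell
# Hypothetical sketch: launch an SGLang server for the model.
# --model-path is a placeholder; substitute the actual model directory or Hub ID.
python -m sglang.launch_server \
    --model-path /path/to/AntAngelMed \
    --port 30000
```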
**SGLang (Docker)**

```shell
docker pull lmsysorg/sglang:latest
```
**vLLM (Docker, Ascend NPU)**

```shell
# Container name and absolute model path on the host.
NAME=your_container_name
MODEL_PATH=/absolute/path/to/model   # set this if you already have the model locally

docker run -itd --privileged --name="$NAME" --net=host \
    --shm-size=1000g \
    --device /dev/davinci0 \
    --device /dev/davinci1 \
    --device /dev/davinci2 \
    --device /dev/davinci3 \
    --device /dev/davinci4 \
    --device /dev/davinci5 \
    --device /dev/davinci6 \
    --device /dev/davinci7 \
    --device /dev/davinci_manager \
    --device /dev/hisi_hdc \
    --device /dev/devmm_svm \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
    -v /etc/ascend_install.info:/etc/ascend_install.info \
    -v /usr/local/sbin:/usr/local/sbin \
    -v /etc/hccn.conf:/etc/hccn.conf \
    -v "$MODEL_PATH":"$MODEL_PATH" \
    quay.io/ascend/vllm-ascend:v0.11.0rc2 \
    bash

# Open a root shell in the running container.
docker exec -u root -it "$NAME" bash
```
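Whichever backend you pick, both vLLM and SGLang expose an OpenAI-compatible HTTP API once the server is running. The sketch below builds a chat-completion request body; the served model name `AntAngelMed` and the `localhost:8000` endpoint in the comment are assumptions, not confirmed values.

```python
import json

# Build an OpenAI-compatible chat-completion request body.
# "AntAngelMed" is a hypothetical served model name; match it to whatever
# name your vLLM/SGLang server registered at startup.
payload = {
    "model": "AntAngelMed",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
    "temperature": 0.2,
    "max_tokens": 128,
}
body = json.dumps(payload)
print(body)

# To send it (server assumed at localhost:8000):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:8000/v1/chat/completions",
#       data=body.encode(),
#       headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```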
