Hymba 1.5B Instruct

1.5B params · Language Model · by nvidia · 774 downloads · Early-stage · New

Edge AI targets: Mobile, Laptop, Server (4GB+ RAM)
Quick Summary

💾 GitHub | 📄 Paper | 📜 Blog

Device Compatibility

Minimum: 2GB+ RAM. Recommended:
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
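The RAM figures above can be sanity-checked with a back-of-the-envelope rule: memory for the weights alone is roughly parameter count times bytes per parameter. A minimal sketch (the helper name and the quantization levels shown are illustrative; real usage also needs headroom for the KV cache, activations, and the runtime itself):

```python
def weights_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory for model weights alone, in GB.

    Excludes KV cache, activations, and runtime overhead, so treat the
    result as a lower bound on required RAM.
    """
    return n_params * bits_per_param / 8 / 1e9

PARAMS = 1.5e9  # Hymba-1.5B

# Weights-only footprint at common precisions:
for bits, name in [(16, "fp16/bf16"), (8, "int8"), (4, "int4")]:
    print(f"{name}: ~{weights_gb(PARAMS, bits):.2f} GB")
# fp16/bf16: ~3.00 GB, int8: ~1.50 GB, int4: ~0.75 GB
```

This is consistent with the table: a 1.5B model in fp16 needs about 3GB for weights, which with overhead lands in the 4-6GB mobile range, while quantized variants can fit in the 2GB+ minimum.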

Code Examples

Download and run the official setup script (requires a Hugging Face access token with access to the repo):

```sh
wget --header="Authorization: Bearer YOUR_HF_TOKEN" https://huggingface.co/nvidia/Hymba-1.5B-Base/resolve/main/setup.sh
bash setup.sh
```

Alternatively, pull the prebuilt Docker image and start a container with GPU access and your home directory mounted:

```sh
docker pull ghcr.io/tilmto/hymba:v1
docker run --gpus all -v /home/$USER:/home/$USER -it ghcr.io/tilmto/hymba:v1 bash
```

For fine-tuning, install LMFlow:

```sh
git clone https://github.com/OptimalScale/LMFlow.git
cd LMFlow
conda create -n lmflow python=3.9 -y
conda activate lmflow
conda install mpi4py
pip install -e .
```
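Once the environment is set up, the model can be loaded through the standard Hugging Face `transformers` API. A minimal sketch, assuming the `nvidia/Hymba-1.5B-Instruct` repo name and that its custom modeling code requires `trust_remote_code=True` (check the official model card before relying on these details):

```python
def build_messages(user_prompt: str):
    # Chat-format message list consumed by tokenizer.apply_chat_template.
    return [{"role": "user", "content": user_prompt}]

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Imported lazily so the pure helper above works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "nvidia/Hymba-1.5B-Instruct"  # assumed repo id for the instruct variant
    tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

    # Apply the model's chat template and generate a completion.
    inputs = tokenizer.apply_chat_template(
        build_messages(prompt), return_tensors="pt", add_generation_prompt=True
    )
    outputs = model.generate(inputs.to(model.device), max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens.
    return tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)

# Usage (downloads ~3GB of weights; needs RAM per the compatibility table):
#   print(generate("Summarize Hymba's hybrid architecture in one sentence."))
```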
