LFM2-8B-A1B-GGUF
by unsloth

Language Model · License: Other · 8.0B params · BF16 · 9 languages
16.9K downloads · 37 likes · Fair · Community-tested
Edge AI: Mobile · Laptop · Server · 18GB+ RAM
Quick Summary

> [!NOTE]
> Includes Unsloth chat template fixes! For `llama.cpp`, use `--jinja`.
> Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.
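For example, a minimal `llama.cpp` invocation (a sketch: the `llama-cli` binary name and the `Q4_K_M` quant tag are illustrative; substitute whichever quant you actually downloaded):

```bash
# Pull a quant of this repo from Hugging Face and chat using the
# model's embedded (Unsloth-fixed) Jinja chat template
./llama-cli -hf unsloth/LFM2-8B-A1B-GGUF:Q4_K_M --jinja -p "What is C. elegans?"
```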
Device Compatibility

| Device | Requirement |
| ------ | ----------- |
| Mobile | 4-6GB RAM |
| Laptop | 16GB RAM |
| Server | GPU |

Minimum recommended: 8GB+ RAM
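As a rough sanity check on these tiers (an estimate, ignoring KV-cache and runtime overhead): GGUF memory use is approximately params × bits-per-weight / 8. For this 8.0B-parameter model, a ~4.5-bit quant needs about 8.0 × 4.5 / 8 ≈ 4.5 GB, which fits the 4-6GB mobile tier, while the full BF16 weights need 8.0 × 16 / 8 = 16 GB, matching the laptop tier and the 18GB+ headroom noted above.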
Code Examples
🏃 How to run LFM2

1. Transformers

Install the pinned `transformers` commit:

```bash
pip install git+https://github.com/huggingface/transformers.git@0c9a72e4576fe4c84077f066e585129c97bfd4e6
```
2. vLLM

Build vLLM from source:

```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e . -v
```

Then run offline batched chat inference:

```python
from vllm import LLM, SamplingParams

# Each prompt is a chat conversation: a list of role/content messages
prompts = [
    [
        {
            "content": "What is C. elegans?",
            "role": "user",
        },
    ],
    [
        {
            "content": "Say hi in JSON format",
            "role": "user",
        },
    ],
    [
        {
            "content": "Define AI in Spanish",
            "role": "user",
        },
    ],
]

sampling_params = SamplingParams(
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_tokens=30,
)

llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)

for i, output in enumerate(outputs):
    prompt = prompts[i][0]["content"]
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm
from vllm import LLM, SamplingParams
prompts = [
[
{
"content": "What is C. elegans?",
"role": "user",
},
],
[
{
"content": "Say hi in JSON format",
"role": "user",
},
],
[
{
"content": "Define AI in Spanish",
"role": "user",
},
],
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_tokens=30
)
llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")
outputs = llm.chat(prompts, sampling_params)
for i, output in enumerate(outputs):
prompt = prompts[i][0]["content"]
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")pythonvllm