GPT-OSS-Swallow-120B-RL-v0.1-gguf

by elvezjp · Language Model · 120B params · llama.cpp (GGUF)
15 downloads · Early-stage
Edge AI (Mobile / Laptop / Server): 269GB+ RAM
Quick Summary

GGUF conversion of tokyotech-llm/GPT-OSS-Swallow-120B-RL-v0.1, a 120B-parameter language model, packaged for local inference with llama.cpp.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 112GB+ RAM
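As a rough sanity check on the RAM figures above, weight memory scales with parameter count times bits per weight. A minimal shell sketch (the 120B parameter count comes from this card; 16 and 8 bits per weight are the standard sizes for F16 and Q8_0, and real usage adds overhead for the KV cache and runtime buffers):

```bash
# Rough weight-memory estimate: params (billions) * bits-per-weight / 8 = GB.
params_b=120
f16_gb=$(( params_b * 16 / 8 ))   # F16: 2 bytes per weight
q8_gb=$(( params_b * 8 / 8 ))     # Q8_0: ~1 byte per weight (plus quantization metadata)
echo "F16: ~${f16_gb}GB, Q8_0: ~${q8_gb}GB"
```

The ~240GB F16 estimate is roughly consistent with the 269GB+ figure once runtime overhead is added, and the ~120GB Q8_0 estimate with the 112GB+ recommendation.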

Code Examples

Usage

```bash
llama-cli -m GPT-OSS-Swallow-120B-RL-v0.1-Q8_0.gguf \
  -n 512 -c 4096 \
  -p "You are an excellent assistant. / あなたは優秀なアシスタントです。" \
  --cnv
```
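Before running, it is worth confirming that a downloaded file really is a GGUF file: the format begins with the 4-byte ASCII magic `GGUF`. A minimal sketch (the `sample.gguf` stand-in is created here only for illustration; point `head` at the real model file instead):

```bash
# GGUF files begin with the 4-byte ASCII magic "GGUF".
# sample.gguf is a stand-in created for illustration; use the real model file.
printf 'GGUFxxxx' > sample.gguf
magic=$(head -c 4 sample.gguf)
[ "$magic" = "GGUF" ] && echo "looks like a GGUF file"
```

A truncated or failed download will usually fail this check immediately, which is faster than waiting for llama.cpp to error out while loading a 100GB+ file.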
GGUF F16 conversion (llama.cpp)

```bash
# Using llama.cpp's convert_hf_to_gguf.py
python3 convert_hf_to_gguf.py \
  tokyotech-llm/GPT-OSS-Swallow-120B-RL-v0.1 \
  --outfile GPT-OSS-Swallow-120B-RL-v0.1-F16.gguf \
  --outtype f16
```
GGUF Q8_0 (8-bit quantization)

```bash
# Using llama.cpp's llama-quantize
llama-quantize \
  GPT-OSS-Swallow-120B-RL-v0.1-F16.gguf \
  GPT-OSS-Swallow-120B-RL-v0.1-Q8_0.gguf \
  Q8_0
```
