Gumini-1B-Base-i1-GGUF

by GuminiResearch

Language Model | 1B params | llama-cpp (GGUF) | Early-stage

Edge AI targets: Mobile, Laptop, Server (3GB+ RAM)
Quick Summary

Gumini-1B-Base is a 1B-parameter bilingual language model built with Qwen via Inheritune, distributed here as quantized GGUF files for llama.cpp.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 1GB+ RAM
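As a rough way to turn the compatibility figures above into a concrete number, required RAM can be estimated as the GGUF file size plus runtime overhead for the KV cache and runtime state. The ~700 MB file size for a 1B Q4_K_M model and the 20% overhead below are illustrative assumptions, not measured figures:

```shell
# Rough RAM estimate for running a GGUF model locally (illustrative heuristic)
model_mb=700                            # hypothetical size of a 1B Q4_K_M file, in MB
overhead_mb=$(( model_mb / 5 ))         # ~20% guess for KV cache and runtime state
needed_mb=$(( model_mb + overhead_mb ))
echo "Estimated RAM needed: ${needed_mb} MB"
```

Actual usage also depends on context length, so treat this as a lower-bound sanity check rather than a guarantee.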

Code Examples

llama.cpp

```bash
# Download the Q4_K_M quantization from Hugging Face
huggingface-cli download GuminiResearch/Gumini-1B-Base-i1-GGUF Gumini-1B-Base.i1-Q4_K_M.gguf

# Run with llama-cli; the prompt is Korean for "I am Gumini.", -n 100 caps generation at 100 tokens
./llama-cli -m Gumini-1B-Base.i1-Q4_K_M.gguf -p "저는 구미니입니다." -n 100
```
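Beyond the interactive CLI, the same GGUF file can be served over HTTP with llama.cpp's llama-server. A minimal sketch, assuming llama-server was built alongside llama-cli and listens on port 8080; the request body uses the server's /completion fields (prompt, n_predict):

```shell
# Start an HTTP server for the model (run in the background; assumes llama-server is built)
# ./llama-server -m Gumini-1B-Base.i1-Q4_K_M.gguf --port 8080 &

# Build a completion request: same Korean prompt ("I am Gumini.") and 100-token limit
cat > request.json <<'EOF'
{"prompt": "저는 구미니입니다.", "n_predict": 100}
EOF

# Sanity-check that the payload is valid JSON, then send it
python3 -m json.tool request.json
# curl http://localhost:8080/completion -d @request.json
```

The curl call is commented out because it requires the server and model file to be present locally.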
Ollama

```bash
# Create an Ollama Modelfile pointing at the downloaded GGUF, then build and run the model
echo 'FROM ./Gumini-1B-Base.i1-Q4_K_M.gguf' > Modelfile
ollama create gumini-1b -f Modelfile
ollama run gumini-1b
```
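Once `ollama run` works, the model is also reachable through Ollama's local REST API (default port 11434). A sketch, assuming the `gumini-1b` model created above and a running Ollama daemon; `"stream": false` requests a single JSON response instead of the default streamed chunks:

```shell
# Build a request for Ollama's /api/generate endpoint
cat > ollama-request.json <<'EOF'
{"model": "gumini-1b", "prompt": "저는 구미니입니다.", "stream": false}
EOF

# Validate the payload, then query the local Ollama server
python3 -m json.tool ollama-request.json
# curl http://localhost:11434/api/generate -d @ollama-request.json
```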
Citation

```bibtex
@misc{gumini2025,
  title={Gumini-1B: Bilingual Language Model Built with Qwen via Inheritune},
  author={Gumin Kwon},
  year={2025},
  note={Built with Qwen},
  url={https://huggingface.co/GuminiResearch/Gumini-1B-Base-i1-GGUF}
}
```
