# LightOnOCR-2-1B-gguf

GGUF quantizations of [lightonai/LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B) for use with llama.cpp.

## Code Examples

### Build llama.cpp

```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```
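GPU offload (`-ngl 99` in the examples below) requires a GPU-enabled build. A minimal sketch, assuming an NVIDIA GPU with the CUDA toolkit installed; on Apple Silicon, Metal is enabled by default and no extra flag is needed:

```bash
# Rebuild with CUDA support so -ngl can offload layers to the GPU
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```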
### Run OCR with llama-mtmd-cli

```bash
# Using F16 (highest quality)
./build/bin/llama-mtmd-cli \
    -m LightOnOCR-2-1B-f16.gguf \
    --mmproj LightOnOCR-2-1B-mmproj-f16.gguf \
    --image your-document.png \
    -ngl 99 \
    -c 4096 \
    -n 1000 \
    --temp 0.2 \
    --repeat-penalty 1.15 \
    --repeat-last-n 128

# Using Q4_K_M (smaller, faster)
./build/bin/llama-mtmd-cli \
    -m LightOnOCR-2-1B-Q4_K_M.gguf \
    --mmproj LightOnOCR-2-1B-mmproj-f16.gguf \
    --image your-document.png \
    -ngl 99 \
    -c 4096 \
    -n 1000 \
    --temp 0.2 \
    --repeat-penalty 1.15
```
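For batch OCR or integration into other tools, the same model files can be served over HTTP. A sketch, assuming a recent llama.cpp build in which `llama-server` accepts a multimodal projector via `--mmproj`:

```bash
./build/bin/llama-server \
    -m LightOnOCR-2-1B-Q4_K_M.gguf \
    --mmproj LightOnOCR-2-1B-mmproj-f16.gguf \
    -ngl 99 \
    -c 4096 \
    --port 8080

# Images can then be sent to the OpenAI-compatible
# /v1/chat/completions endpoint as image_url content parts.
```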

## Recommended Parameters

| Parameter | Value | Description |
|-----------|-------|-------------|
| `--temp` | 0.2 | Official recommended temperature |
| `--repeat-penalty` | 1.15 | Prevents repetition (1.1-1.2 optimal) |
| `--repeat-last-n` | 128 | Tokens to consider for penalty |
| `-n` | 1000 | Max output tokens (avoid >1500) |
| `-ngl` | 99 | GPU layers (use all for best speed) |

### Parameter Notes

- **repeat-penalty**: Values above 1.2 may reduce OCR quality
- **-n (max tokens)**: Limiting to ~1000 prevents repetition at end of long documents
- **Image preprocessing**: Render PDFs to PNG at 1540px longest edge
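The preprocessing step above can be scripted. A sketch using `pdftoppm` from poppler-utils (assumed installed); `-scale-to 1540` renders each page with its longer side at 1540 px:

```bash
# Render every page of a PDF to PNG, longest edge 1540 px
pdftoppm -png -scale-to 1540 your-document.pdf page

# Produces page-1.png, page-2.png, ... ready to pass to --image
```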

## Performance (Apple M4 Max)

| Metric | Value |
|--------|-------|
| Image encoding | ~435 ms |
| Image decoding | ~45 ms |
| Prompt processing | ~1,850 tokens/s |
| Text generation | ~228 tokens/s |
| Total time (1000 tokens) | ~8-10 sec |

## Quantization Details

| Format | Bits/Weight | Size Reduction | Quality Impact |
|--------|-------------|----------------|----------------|
| F16 | 16 | - | Baseline |
| Q8_0 | 8 | 45% | Nearly lossless |
| Q4_K_M | 4.5 | 66% | Minimal |
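The quantized variants can be reproduced from the F16 file with llama.cpp's `llama-quantize` tool (file names here follow this repo's convention):

```bash
./build/bin/llama-quantize LightOnOCR-2-1B-f16.gguf LightOnOCR-2-1B-Q8_0.gguf Q8_0
./build/bin/llama-quantize LightOnOCR-2-1B-f16.gguf LightOnOCR-2-1B-Q4_K_M.gguf Q4_K_M

# The mmproj (vision projector) file stays at F16;
# only the language model is quantized.
```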

## Credits

- Original model: [lightonai/LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B)
- GGUF conversion: produced with the [llama.cpp](https://github.com/ggml-org/llama.cpp) conversion tools
- Paper: [LightOnOCR: A 1B End-to-End Multilingual Vision-Language Model](https://arxiv.org/pdf/2601.14251)

## License

Apache License 2.0 (same as original model)
