phogpt-0.13b
by amaury-delille · license: apache-2.0
Language Model · 0.13B params · 74 downloads · New · Early-stage
Edge AI: Mobile, Laptop, Server · 1GB+ RAM minimum
Quick Summary

A compact 0.13B-parameter sequence-to-sequence language model. The demo below uses it for English-to-Vietnamese translation, with separate SentencePiece tokenizers for each language.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 1GB+ RAM

Code Examples

Demo (Python, transformers)
from transformers import AutoModelForSeq2SeqLM
from huggingface_hub import hf_hub_download
import sentencepiece as spm
import torch

model = AutoModelForSeq2SeqLM.from_pretrained("amaury-delille/phogpt-0.13b", trust_remote_code=True)

# The repo ships separate SentencePiece models for English and Vietnamese.
en_spm_path = hf_hub_download("amaury-delille/phogpt-0.13b", "tokenizer_en/spm.model")
vi_spm_path = hf_hub_download("amaury-delille/phogpt-0.13b", "tokenizer_vi/spm.model")

en_tokenizer = spm.SentencePieceProcessor(model_file=en_spm_path)
vi_tokenizer = spm.SentencePieceProcessor(model_file=vi_spm_path)

text = "Hello, how are you?"
encoded = en_tokenizer.Encode(text)
encoded.append(3)  # append EOS (id 3)
max_len = 128
encoded = encoded[:max_len]                        # truncate to the fixed length
encoded = encoded + [0] * (max_len - len(encoded))  # right-pad with PAD (id 0)
input_ids = torch.tensor([encoded], dtype=torch.long)

model.eval()
with torch.no_grad():
    outputs = model.generate(input_ids, max_length=128)

# Strip special tokens (PAD=0, BOS=2, EOS=3) before decoding.
out_tokens = [t for t in outputs[0].tolist() if t not in (0, 2, 3)]
translation = vi_tokenizer.Decode(out_tokens)
print(f"English: {text}")
print(f"Vietnamese: {translation}")
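The fixed-length preprocessing and the post-generation cleanup in the demo can be factored into small reusable helpers. This is a sketch; the special-token IDs (PAD=0, BOS=2, EOS=3) are assumptions inferred from the demo code, not confirmed by the repository:

```python
def pad_to_length(ids, max_len=128, pad_id=0, eos_id=3):
    """Append EOS, then truncate/right-pad to a fixed length, as the demo does."""
    ids = list(ids) + [eos_id]
    ids = ids[:max_len]
    return ids + [pad_id] * (max_len - len(ids))

def strip_special(ids, special=(0, 2, 3)):
    """Drop PAD/BOS/EOS ids before handing tokens to the SentencePiece decoder."""
    return [t for t in ids if t not in special]

print(pad_to_length([5, 6, 7], max_len=6))   # → [5, 6, 7, 3, 0, 0]
print(strip_special([2, 5, 6, 7, 3, 0, 0]))  # → [5, 6, 7]
```

Keeping these steps in named functions makes it easier to batch-translate several sentences with the same padding convention.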

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.