albert-khmer-small

by seanghay
License: apache-2.0
Type: Language Model (OTHER)
Parameters: 2B
Status: New, early-stage
Downloads: 131
Edge AI targets: Mobile, Laptop, Server (5GB+ RAM)
Quick Summary

albert-khmer-small is a compact ALBERT masked language model for the Khmer language, released by seanghay under the Apache 2.0 license. It predicts masked tokens in Khmer text, as shown in the code example below.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 2GB+ RAM
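
Before committing to one of these targets, it can help to measure the actual weight footprint. The sketch below is an assumption-laden illustration, not part of the official model card: it loads the checkpoint on CPU with transformers and PyTorch and sums the memory its weights occupy; activations and runtime overhead come on top.

from transformers import AlbertForMaskedLM

# Load on CPU and measure the raw weight footprint (illustrative sketch)
model = AlbertForMaskedLM.from_pretrained("seanghay/albert-khmer-small")

n_params = sum(p.numel() for p in model.parameters())
weight_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

print(f"Parameters: {n_params / 1e6:.1f}M")
print(f"Weight memory: {weight_bytes / 1024**2:.1f} MiB (excludes activations and runtime overhead)")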

Code Examples

How to Use (Python, transformers)
import torch
from transformers import AlbertForMaskedLM, AlbertTokenizer

# Load model and tokenizer (the tokenizer wraps the model's SentencePiece vocabulary)
model = AlbertForMaskedLM.from_pretrained("seanghay/albert-khmer-small")
tokenizer = AlbertTokenizer.from_pretrained("seanghay/albert-khmer-small")

# "Phnom Penh is the [MASK] of Cambodia."
text = "ភ្នំពេញគឺជា[MASK]នៃប្រទេសកម្ពុជា។"

# Tokenize; the tokenizer adds [CLS]/[SEP] and maps [MASK] to the mask token id
inputs = tokenizer(text, return_tensors="pt")

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

# Locate the [MASK] token and extract predictions
mask_token_index = torch.where(inputs.input_ids == tokenizer.mask_token_id)[1]
mask_token_logits = logits[0, mask_token_index, :]

top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()

print(f"Original text: {text}")
print(f"Input pieces: {tokenizer.tokenize(text)}")
for i, token_id in enumerate(top_5_tokens):
    predicted_token = tokenizer.decode([token_id]).strip()
    print(f"{i + 1}. {text.replace('[MASK]', predicted_token)}")
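
If the manual mask lookup above is more than you need, the transformers fill-mask pipeline wraps the same steps. This is a minimal sketch, assuming the checkpoint's config exposes the standard ALBERT mask token so the pipeline can locate it automatically.

from transformers import pipeline

# The pipeline loads the model and tokenizer and handles mask lookup internally
fill_mask = pipeline("fill-mask", model="seanghay/albert-khmer-small")

# "Phnom Penh is the [MASK] of Cambodia."
for prediction in fill_mask("ភ្នំពេញគឺជា[MASK]នៃប្រទេសកម្ពុជា។", top_k=5):
    print(f"{prediction['score']:.3f}  {prediction['sequence']}")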

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.


Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.


Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.