novoyaz-20b

by ZennyKenny
License: apache-2.0 · Language Model (Other) · 20B params
New · Early-stage · 142 downloads
Edge AI: Mobile · Laptop · Server (45GB+ RAM)
Quick Summary

A 20B-parameter language model (served on top of openai/gpt-oss-20b) specialized in rewriting pre-reform Russian text into modern orthography while preserving meaning and punctuation.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 19GB+ RAM
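The RAM figures above are consistent with back-of-the-envelope arithmetic: 20B parameters at 16-bit precision occupy about 40GB of weights, and 8-bit or 4-bit quantization cuts that roughly in half or in quarter. The sketch below assumes a hypothetical 1.2× runtime overhead factor, which is an illustration rather than a measured value:

```python
PARAMS = 20e9  # 20B parameters, per the model card

def approx_ram_gb(bits_per_param: float, overhead: float = 1.2) -> float:
    """Rough working-set estimate: weight bytes times an overhead factor."""
    return PARAMS * bits_per_param / 8 / 1e9 * overhead

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{approx_ram_gb(bits):.0f} GB")
```

At 16-bit this lands near the 45GB+ figure quoted for full-size deployment, and 8-bit lands near the 19GB+ minimum recommendation.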

Code Examples

Python (transformers)

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "openai/gpt-oss-20b"  # base model used by the handler

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    use_fast=True,
    trust_remote_code=True,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto" if torch.cuda.is_available() else torch.float32,
    device_map="auto" if torch.cuda.is_available() else None,
    trust_remote_code=True,
    low_cpu_mem_usage=True,
)

if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
if not torch.cuda.is_available():
    # Trade generation speed for lower memory use on CPU-only hosts
    model.config.use_cache = False

# Instruction (in Russian): "You are a model that strictly rewrites
# pre-reform Russian text into modern orthography, without changing the
# meaning or punctuation. Do not add comments and do not translate the text."
PROMPT_PREFIX = (
    "Ты – модель, которая строго переписывает дореформенный русский текст "
    "в современную орфографию, не меняя смысл и пунктуацию. "
    "Не добавляй комментарии и не переводь текст.\n\nТекст:\n"
)
PROMPT_SUFFIX = "\n\nСовременный орфографический вариант:"  # "Modern orthographic variant:"

pre_reform = "Въ началѣ бѣ Слово, и Слово бѣ къ Богу..."
prompt = f"{PROMPT_PREFIX}{pre_reform}{PROMPT_SUFFIX}"

inputs = tokenizer(prompt, return_tensors="pt", padding=True).to(model.device)

# Greedy decoding: deterministic output, so no sampling temperature is set
gen_kwargs = dict(
    do_sample=False,
    num_beams=1,
    max_new_tokens=512,
    repetition_penalty=1.0,
)

with torch.inference_mode():
    outputs = model.generate(**inputs, **gen_kwargs)

# Strip the prompt tokens from the generated sequence
in_len = inputs["input_ids"].shape[-1]
gen_only = outputs[0][in_len:]

modern_text = tokenizer.decode(gen_only, skip_special_tokens=True).strip()
print(modern_text)
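The example above processes a single text, while the endpoint payload below accepts a list. Prompt construction for a batch can be factored into a small helper that reuses the same prefix and suffix strings; this is a sketch, not part of the model's own handler code:

```python
# Prompt strings from the example above. The Russian instruction reads:
# "You are a model that strictly rewrites pre-reform Russian text into
# modern orthography, without changing meaning or punctuation. Do not add
# comments and do not translate the text."
PROMPT_PREFIX = (
    "Ты – модель, которая строго переписывает дореформенный русский текст "
    "в современную орфографию, не меняя смысл и пунктуацию. "
    "Не добавляй комментарии и не переводь текст.\n\nТекст:\n"
)
PROMPT_SUFFIX = "\n\nСовременный орфографический вариант:"

def build_prompts(texts: list[str]) -> list[str]:
    """Wrap each input text in the model's prefix/suffix template."""
    return [f"{PROMPT_PREFIX}{t}{PROMPT_SUFFIX}" for t in texts]

prompts = build_prompts(["Въ началѣ бѣ Слово...", "Ещё одинъ текстъ."])
```

The resulting list can be passed straight to `tokenizer(prompts, return_tensors="pt", padding=True)` for batched generation.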
JSON (endpoint request payload)

{
  "inputs": ["текст 1", "текст 2"]
}
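A deployed endpoint accepting this payload can be called with the standard library alone. The endpoint URL and token below are placeholders, and the response shape is whatever the handler returns; only the request format follows the payload shown above:

```python
import json
import urllib.request

# Placeholders -- substitute your own endpoint URL and access token.
ENDPOINT_URL = "https://YOUR-ENDPOINT.example.com"
API_TOKEN = "hf_xxx"

def build_request(texts: list[str]) -> urllib.request.Request:
    """Build a POST request carrying the handler's batch payload format."""
    body = json.dumps({"inputs": texts}, ensure_ascii=False).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def modernize(texts: list[str]):
    """Send the batch and decode the JSON response from the endpoint."""
    with urllib.request.urlopen(build_request(texts)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Separating request construction from sending keeps the payload format easy to inspect and test without network access.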

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.