QVikhr-3-4B-Instruction

by Vikhrmodels · Language Model · 4B params · 2 languages · license: apache-2.0
243 downloads · New · Early-stage
Edge AI: Mobile / Laptop / Server (9GB+ RAM)
Quick Summary

QVikhr-3-4B-Instruction is a 4-billion-parameter instruction-tuned language model from Vikhrmodels, released under the Apache 2.0 license. It supports two languages (Russian and English) and is small enough to run on consumer hardware, from laptops to higher-end mobile devices.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 4GB+ RAM
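
Fitting 4B parameters into the smaller budgets above generally requires quantization: at fp16 the weights alone need roughly 8GB, while 4-bit quantization brings them near 2-3GB. A minimal sketch using transformers with a bitsandbytes 4-bit config (assumes a CUDA GPU and that the bitsandbytes and accelerate packages are installed; the config values are illustrative choices, not figures from this page):

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "Vikhrmodels/QVikhr-3-4B-Instruction"

# 4-bit NF4 quantization cuts weight memory to about a quarter of fp16,
# bringing a 4B-parameter model into roughly the 2-3GB range
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",  # place layers on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained(model_name)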

Code Examples

Sample code to run (Python, transformers):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "Vikhrmodels/QVikhr-3-4B-Instruction"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare the input text ("Write a brief description of the Harry Potter book.")
input_text = "Напиши краткое описание книги Гарри Поттер."

messages = [
    {"role": "user", "content": input_text},
]

# Apply the model's chat template and tokenize
input_ids = tokenizer.apply_chat_template(
    messages, truncation=True, add_generation_prompt=True, return_tensors="pt"
)

# Generate a response. do_sample=True is required for temperature, top_k,
# and top_p to take effect; without it generation is greedy and they are ignored.
output = model.generate(
    input_ids,
    max_length=1512,
    do_sample=True,
    temperature=0.3,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
    top_k=50,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the echoed prompt
generated_text = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(generated_text)
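
For interactive use you may want tokens printed as they are generated instead of waiting for the full completion. A minimal sketch using transformers' TextStreamer, reusing the model, tokenizer, and input_ids from the example above (max_new_tokens is an illustrative choice, not a value from this page):

from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated;
# skip_prompt=True suppresses echoing the input prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.3,
    streamer=streamer,
)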

Deploy This Model

Production-ready deployment in minutes

Together.ai (Fastest API)

Instant API access to this model. Production-ready inference API; start free, scale to millions.

Try Free API
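
Together's inference API is OpenAI-compatible, so calling a hosted model is a single HTTPS request. A hedged sketch (the model ID below is a guess at how this model might be listed there, and TOGETHER_API_KEY is assumed to be set; check the provider's catalog for the real identifier):

import os
import requests

# Hypothetical model ID -- verify the exact name in Together's model catalog
MODEL_ID = "Vikhrmodels/QVikhr-3-4B-Instruction"

resp = requests.post(
    "https://api.together.xyz/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": "Напиши краткое описание книги Гарри Поттер."}],
        "max_tokens": 512,
        "temperature": 0.3,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])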

Replicate (Easiest Setup)

One-click model deployment. Run models in the cloud with a simple API; no DevOps required.

Deploy Now
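
Replicate's Python client follows a similar one-call pattern. A hedged sketch (the model slug and input fields are hypothetical; the model would need to exist in Replicate's catalog under some slug, and REPLICATE_API_TOKEN must be set):

import replicate

# Hypothetical slug and input schema -- check Replicate's catalog for the real ones
output = replicate.run(
    "vikhrmodels/qvikhr-3-4b-instruction",
    input={
        "prompt": "Напиши краткое описание книги Гарри Поттер.",
        "max_tokens": 512,
    },
)
# Language models on Replicate typically yield tokens as an iterator
print("".join(output))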

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.