GigaChat3-10B-A1.8B
by ai-sage · Language Model · 10B params · license: MIT · 8.4K downloads

New · Early-stage · Edge AI: Mobile / Laptop / Server · 23GB+ RAM
Quick Summary

GigaChat3-10B-A1.8B is a sparse Mixture-of-Experts language model from ai-sage: 10B total parameters with, per the model name, roughly 1.8B active per token. The examples below cover chat-style generation and function calling, with Russian-language prompts.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 10GB+ RAM
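
The 23GB+ figure in the header lines up with bf16 weights for 10B parameters (roughly 20GB); the smaller tiers presumably assume quantized weights. A minimal sketch of 4-bit loading with transformers + bitsandbytes follows; whether this model quantizes cleanly is an assumption, not something the card states:

python

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "ai-sage/GigaChat3-10B-A1.8B"

# NF4 4-bit weights put ~10B params on the order of 6GB plus activation
# overhead, inside the 10GB+ "minimum recommended" tier above.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
)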

Code Examples

Usage example (Quickstart)

python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "ai-sage/GigaChat3-10B-A1.8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)

messages = [
    {"role": "user", "content": "Докажи теорему о неподвижной точке"}  # "Prove the fixed-point theorem"
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=1000)

# Decode only the newly generated tokens; special tokens are kept so the
# role/message separators used by the chat format stay visible.
result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=False)
print(result)
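
The curl examples that follow assume a local OpenAI-compatible server is already running. Typical launch commands for the two engines the ports suggest, assuming both support this model (the card does not confirm that):

bash

# vLLM: serves an OpenAI-compatible API on port 8000 by default
vllm serve ai-sage/GigaChat3-10B-A1.8B

# SGLang: serves on port 30000 by default
python -m sglang.launch_server --model-path ai-sage/GigaChat3-10B-A1.8B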
Against the endpoint on port 8000 (vLLM's default):

bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai-sage/GigaChat3-10B-A1.8B",
    "messages": [
      {
        "role": "user",
        "content": "Докажи теорему о неподвижной точке"
      }
    ],
    "max_tokens": 400,
    "temperature": 0
  }'
Against the endpoint on port 30000 (SGLang's default):

bash
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai-sage/GigaChat3-10B-A1.8B",
    "messages": [
      {
        "role": "user",
        "content": "Докажи теорему о неподвижной точке"
      }
    ],
    "max_tokens": 1000,
    "temperature": 0
  }'
Function call

python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
import json
import re
REGEX_FUNCTION_CALL_V3 = re.compile(r"function call<\|role_sep\|>\n(.*)$", re.DOTALL)
REGEX_CONTENT_PATTERN = re.compile(r"^(.*?)<\|message_sep\|>", re.DOTALL)
def parse_function_and_content(completion_str: str):
    """
    Extract a function call and any plain-text content from a raw completion.
    Returns (function_call_dict_or_None, content_str_or_None).
    """

    function_call = None
    content = None

    m_func = REGEX_FUNCTION_CALL_V3.search(completion_str)
    if m_func:
        try:
            function_call = json.loads(m_func.group(1))
            if isinstance(function_call, dict) and "name" in function_call and "arguments" in function_call:
                if not isinstance(function_call["arguments"], dict):
                    function_call = None
            else:
                function_call = None
        except json.JSONDecodeError:
            function_call = None

            # Malformed function-call JSON: give back the raw completion as content.
            return function_call, completion_str

    m_content = REGEX_CONTENT_PATTERN.search(completion_str)
    if m_content:
        content = m_content.group(1)
    else:
        # Fallback: take everything before the first message separator, if any.
        if "<|message_sep|>" in completion_str:
            content = completion_str.split("<|message_sep|>")[0]
        else:
            content = completion_str

    return function_call, content
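
# Quick self-check of the parser on a synthetic completion. The surface form
# is inferred from the regexes above, not taken from captured model output.
_demo = (
    "Проверяю погоду.<|message_sep|>"  # "Checking the weather."
    "function call<|role_sep|>\n"
    '{"name": "get_weather", "arguments": {"city": "Москва"}}'
)
assert parse_function_and_content(_demo) == (
    {"name": "get_weather", "arguments": {"city": "Москва"}},
    "Проверяю погоду.",
)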

model_name = "ai-sage/GigaChat3-10B-A1.8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
tools = [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Получить информацию о текущей погоде в указанном городе.",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "Название города (например, Москва, Казань)."
            }
          },
          "required": ["city"]
        }
      }
    }
]
messages = [
    {"role": "user", "content": "Какая сейчас погода в Москве?"}  # "What's the weather in Moscow right now?"
]
input_tensor = tokenizer.apply_chat_template(messages, tools=tools, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=1000)

# [0] keeps only the parsed function call; [1] would be the plain-text content.
result = parse_function_and_content(tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=False))[0]
print(result)
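
A hedged sketch of closing the loop: if a call was parsed, execute a local stub and feed the result back for a final answer. The assistant/tool message shapes and the "tool" role name below are assumptions; the model's chat template defines the real ones.

python

# Sketch only: message shapes and role names here are assumptions, not taken
# from the model card; check the chat template before relying on them.
if result is not None:
    def get_weather(city: str) -> str:
        # Hypothetical local stub for the advertised tool.
        return f"В городе {city} сейчас +5°C, облачно."  # "+5°C and cloudy in {city}"

    tool_output = get_weather(**result["arguments"])

    messages.append({"role": "assistant", "function_call": result})  # assumed shape
    messages.append({"role": "tool", "content": tool_output})        # assumed role name

    input_tensor = tokenizer.apply_chat_template(
        messages, tools=tools, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(input_tensor.to(model.device), max_new_tokens=200)
    print(tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True))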
The same function-calling request over the OpenAI-compatible HTTP API:

bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
  "model": "ai-sage/GigaChat3-10B-A1.8B",
  "temperature": 0,
  "messages": [
    {
      "role": "user",
      "content": "Какая сейчас погода в Москве?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Получить информацию о текущей погоде в указанном городе.",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "Название города (например, Москва, Казань)."
            }
          },
          "required": ["city"]
        }
      }
    }
  ]
}'
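
If the endpoint speaks the OpenAI API, as its /v1/chat/completions path suggests, the official openai Python client can replace raw curl. A sketch, assuming the server translates the model's function-call format into OpenAI-style tool_calls:

python

from openai import OpenAI

# Points at the local server from the curl example; no real key is needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for the specified city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="ai-sage/GigaChat3-10B-A1.8B",
    messages=[{"role": "user", "content": "Какая сейчас погода в Москве?"}],
    tools=tools,
    temperature=0,
)
# If the server parsed the model's function-call output, it shows up here.
print(resp.choices[0].message.tool_calls)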
