c4ai-command-a-03-2025-gptqmodel-4bit

23 languages
license:cc-by-nc-4.0
by Bedovyy
Language Model
OTHER
4B params
New
2 downloads
Early-stage
Edge AI: Mobile, Laptop, Server (9GB+ RAM)
Quick Summary

Non-English performance may be significantly degraded.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 4GB+ RAM
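The RAM figures above can be sanity-checked with a back-of-the-envelope estimate. The sketch below is an approximation, not a measurement: it counts weight memory only, and the 20% overhead factor is an assumed fudge for the KV cache and runtime buffers.

```python
def estimate_weight_memory_gb(n_params: float, bits_per_weight: int, overhead: float = 0.2) -> float:
    """Rough weight-only memory estimate in GB; `overhead` is an assumed fudge factor."""
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# Using the parameter count listed above (4B) at 4-bit precision:
print(round(estimate_weight_memory_gb(4e9, 4), 1))  # 2.4
```

Activations and long-context KV cache can push real usage well past this, which is why the recommendations above leave headroom.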

Code Examples

# pip install transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

# Base model id; to run the 4-bit GPTQ build this page describes, swap in its repo id
model_id = "CohereForAI/c4ai-command-a-03-2025"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the c4ai-command-a-03-2025 chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(
    input_ids, 
    max_new_tokens=100, 
    do_sample=True, 
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
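Note that `tokenizer.decode(gen_tokens[0])` above returns the prompt and the completion together, because `model.generate` prepends the input ids to its output by default. A minimal, model-free sketch of trimming the prompt before decoding (the token ids below are toy stand-ins):

```python
def strip_prompt_tokens(generated_ids: list, prompt_len: int) -> list:
    """Drop the echoed prompt ids, keeping only newly generated tokens."""
    return generated_ids[prompt_len:]

prompt_ids = [101, 7592, 102]            # toy ids standing in for the prompt
full_output = prompt_ids + [2023, 2003]  # generate() output = prompt + completion
print(strip_prompt_tokens(full_output, len(prompt_ids)))  # [2023, 2003]
```

In the real pipeline this corresponds to `tokenizer.decode(gen_tokens[0][input_ids.shape[-1]:], skip_special_tokens=True)`.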
# Define conversation input
conversation = [{"role": "user", "content": "What has Man always dreamed of?"}]

# Define documents for retrieval-based generation
documents = [ 
  {"heading": "The Moon: Our Age-Old Foe", "body": "Man has always dreamed of destroying the moon. In this essay, I shall..."},
  {"heading": "Love is all you need", "body": "Man's dream has always been to find love. This profound lesson..."},
]

# Get the RAG prompt
input_prompt = tokenizer.apply_chat_template(
  conversation=conversation,
  documents=documents,
  tokenize=False,
  add_generation_prompt=True,
)

# Tokenize the prompt
input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
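The `documents` argument above is a list of dicts with `heading` and `body` keys. If your retriever returns plain (title, text) pairs, a small adapter keeps the formatting in one place (a sketch; the pair shape is an assumption about your retriever):

```python
def to_documents(retrieved: list) -> list:
    """Convert (title, text) pairs into the heading/body dicts used above."""
    return [{"heading": title, "body": text} for title, text in retrieved]

docs = to_documents([("The Moon: Our Age-Old Foe", "Man has always dreamed of destroying the moon.")])
print(docs[0]["heading"])  # The Moon: Our Age-Old Foe
```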
# Get the Grounded Generation prompt, with citations
input_prompt = tokenizer.apply_chat_template(
  conversation=conversation,
  documents=documents,
  tokenize=False,
  add_generation_prompt=True,
  enable_citations=True,
)

# There are two answers to this question. Man has dreamed of <co>destroying the moon</co: 0:[0]> and <co>finding love.</co: 0:[1]>
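The grounded output above embeds citations as `<co>span</co: turn:[doc]>`. A small parser for that markup (the regex assumes exactly the shape shown in the example, with one document index per citation):

```python
import re

CO_PATTERN = re.compile(r"<co>(.*?)</co: (\d+):\[(\d+)\]>")

def extract_citations(text: str) -> list:
    """Return (cited span, turn index, document index) triples from grounded output."""
    return [(m.group(1), int(m.group(2)), int(m.group(3))) for m in CO_PATTERN.finditer(text)]

sample = "Man has dreamed of <co>destroying the moon</co: 0:[0]> and <co>finding love.</co: 0:[1]>"
print(extract_citations(sample))  # [('destroying the moon', 0, 0), ('finding love.', 0, 1)]
```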
# Define tools
tools = [{ 
  "type": "function", 
  "function": {
    "name": "query_daily_sales_report",
    "description": "Connects to a database to retrieve overall sales volumes and sales information for a given day.",
    "parameters": {
      "type": "object",
      "properties": {
        "day": {
          "description": "Retrieves sales data for this day, formatted as YYYY-MM-DD.",
          "type": "string",
        }
      },
      "required": ["day"]
    },
  }
}]

# Define conversation input
conversation = [{"role": "user", "content": "Can you provide a sales summary for 29th September 2023?"}]


# Get the Tool Use prompt
input_prompt = tokenizer.apply_chat_template(
  conversation=conversation,
  tools=tools,
  tokenize=False,
  add_generation_prompt=True,
)

# Tokenize the prompt
input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
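Once the model's tool call has been parsed into a name plus parameters, it still has to be routed to real code. A minimal dispatcher sketch; the parsed-call dict shape and the stub's return values are assumptions (the stub's numbers mirror the example output later on this page):

```python
def dispatch_tool_call(call: dict, registry: dict):
    """Look up and invoke the named tool with its parsed parameters.

    `call` is assumed to be an already-parsed dict like
    {"name": "...", "parameters": {...}}; the model's raw tool-call
    output format is not shown on this page.
    """
    fn = registry.get(call["name"])
    if fn is None:
        raise KeyError(f"unknown tool: {call['name']}")
    return fn(**call["parameters"])

def query_daily_sales_report(day: str) -> dict:
    # Stub standing in for the real database lookup
    return {"day": day, "total_sales": 10000, "units_sold": 250}

registry = {"query_daily_sales_report": query_daily_sales_report}
result = dispatch_tool_call(
    {"name": "query_daily_sales_report", "parameters": {"day": "2023-09-29"}},
    registry,
)
print(result["total_sales"])  # 10000
```

The tool's result would then be appended to the conversation so the model can ground its final answer in it.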
Example grounded response after tool execution, with citations:
On 29th September 2023, the total sales amount was <co>10000</co: 0:[0]> and the total units sold were <co>250.</co: 0:[0]>

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.