LongCat-Flash-Thinking-2601-int4-mixed-AutoRound
License: MIT
by INC4AI
Language Model
Edge AI: Mobile · Laptop · Server
Quick Summary
An INT4 mixed-precision quantization of LongCat-Flash-Thinking produced with AutoRound, packaged for use with Hugging Face Transformers.
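The exact quantization recipe for this checkpoint (including which layers are kept at higher precision in the "mixed" configuration) is not documented here. As a rough illustration only, a plain 4-bit AutoRound run over a Transformers model typically looks like the sketch below; the base model id, bits, and group_size are assumptions, not the settings actually used for this release, and a model of this size would in practice need multi-GPU memory or offloading.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Assumed base checkpoint, for illustration only
base_model = "meituan-longcat/LongCat-Flash-Thinking"
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Calibrate and tune 4-bit weight rounding with AutoRound (illustrative settings)
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)
autoround.quantize()

# Save in the auto_round format so the checkpoint can be reloaded with Transformers
autoround.save_quantized("./LongCat-Flash-Thinking-int4-autoround", format="auto_round")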
Code Examples
How to Use (Python, transformers)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "INC4AI/LongCat-Flash-Thinking-2601-int4-mixed-AutoRound"
# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Please tell me what is $$1 + 1$$ and $$2 \\times 2$$?"},
    {"role": "assistant", "reasoning_content": "This question is straightforward: $$1 + 1 = 2$$ and $$2 \\times 2 = 4$$.", "content": "The answers are 2 and 4."},
    {"role": "user", "content": "Check again?"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=True,
    add_generation_prompt=True,
    save_history_reasoning_content=False  # Discard reasoning history to save tokens
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate response
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
print(tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n"))
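For interactive use you may prefer to stream tokens as they are generated instead of waiting for the full completion. A minimal sketch reusing the tokenizer, model, and model_inputs from the example above; the generation length here is illustrative.

from transformers import TextStreamer

# Print decoded tokens to stdout as they are produced, skipping the prompt echo
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**model_inputs, max_new_tokens=4096, streamer=streamer)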