Arcana-Qwen3-2.4B-A0.6B

by suayptalha · 2.4B params · 3 languages · license: apache-2.0 · 8 downloads
Tags: Language Model · Other · New · Early-stage
Edge AI: Mobile · Laptop · Server · 6GB+ RAM
Quick Summary

"We are all experts at something, but we’re all also beginners at something else.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 3GB+ RAM
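
To fit the model into the lower end of these RAM budgets, loading the weights in 4-bit precision is one common option. A minimal sketch, assuming the optional bitsandbytes and accelerate packages are installed; the quantization settings here are illustrative, not taken from the model card:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit settings; requires a CUDA GPU plus bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "suayptalha/Qwen3-2.4B-A0.6B",
    quantization_config=bnb_config,
    device_map="auto",  # requires the accelerate package
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("suayptalha/Qwen3-2.4B-A0.6B")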

Code Examples

Usage (Python, transformers):
import torch
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Download the model files into a local directory
local_dir = snapshot_download(
    repo_id="suayptalha/Qwen3-2.4B-A0.6B",
)

model = AutoModelForCausalLM.from_pretrained(
    local_dir,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    local_dir,
)

model.to(device)
model.eval()

prompt = "I have pain in my chest, what should I do?"
messages = [{"role": "user", "content": prompt}]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# generate() expects token IDs, not raw text, so tokenize the rendered prompt first
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=1024,
        do_sample=True,  # temperature/top_p only take effect when sampling is enabled
        temperature=0.6,
        top_p=0.95,
    )
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
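
For interactive use, the transformers TextStreamer utility can print tokens as they are generated instead of waiting for the full completion. A short variant of the call above, reusing the same model, tokenizer, and inputs:

from transformers import TextStreamer

# Prints decoded tokens to stdout as they are produced
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(
        **inputs,
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.6,
        top_p=0.95,
        streamer=streamer,
    )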

Deploy This Model

Production-ready deployment in minutes.

Together.ai (Fastest API): instant API access to this model through a production-ready inference API. Start free, scale to millions.
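
For illustration only: Together's inference API is OpenAI-compatible, so a hosted model can be called with the standard openai client. The model ID below is a placeholder; this page does not confirm that this particular model is hosted on Together.

from openai import OpenAI

# Together exposes an OpenAI-compatible endpoint; supply your Together API key
client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",
)

response = client.chat.completions.create(
    model="suayptalha/Qwen3-2.4B-A0.6B",  # placeholder ID, availability not verified
    messages=[{"role": "user", "content": "I have pain in my chest, what should I do?"}],
    max_tokens=1024,
    temperature=0.6,
)
print(response.choices[0].message.content)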

Replicate (Easiest Setup): one-click model deployment. Run models in the cloud with a simple API; no DevOps required.
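
In the same spirit, Replicate's Python client can invoke a hosted model in a few lines; the model identifier here is likewise a placeholder, since availability on Replicate is not confirmed by this page.

import replicate  # reads the REPLICATE_API_TOKEN environment variable

# Placeholder model identifier; check Replicate for the actual listing
output = replicate.run(
    "suayptalha/qwen3-2.4b-a0.6b",
    input={"prompt": "I have pain in my chest, what should I do?"},
)
# Language models on Replicate typically stream a sequence of text chunks
print("".join(output))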

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.