SmallThinker-3B-Preview-GGUF

by QuantFactory · License: Other · 3B params · 322 downloads · New · Early-stage

Edge AI targets: Mobile, Laptop, Server (7GB+ RAM)
Quick Summary

SmallThinker-3B-Preview-GGUF provides GGUF-format quantizations of SmallThinker-3B-Preview, a 3B-parameter model intended for on-device (edge) inference on mobile, laptop, and server hardware.

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 3GB+ RAM
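
As a sanity check on these figures, weight memory for a GGUF quant is roughly parameters × bits-per-weight ÷ 8, plus KV cache and runtime overhead. The sketch below uses approximate bits-per-weight values for common llama.cpp quant types; none of these numbers come from the model card itself:

```python
# Back-of-envelope RAM estimate for a 3B-parameter GGUF model.
# Bits-per-weight values are rough approximations for common llama.cpp
# quant types, not figures taken from this model card.
PARAMS = 3e9

for quant, bits_per_weight in [("Q4_K_M", 4.8), ("Q6_K", 6.6), ("Q8_0", 8.5)]:
    weights_gb = PARAMS * bits_per_weight / 8 / 1e9
    print(f"{quant}: ~{weights_gb:.1f} GB for weights, plus KV cache and overhead")
```

That puts 4-bit builds comfortably inside the 3GB+ minimum, and higher-precision quants inside the 4-6GB mobile budget.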

Code Examples

Training Details

```text
neat_packing: true
cutoff_len: 16384
per_device_train_batch_size: 2
gradient_accumulation_steps: 1
learning_rate: 1.0e-5
num_train_epochs: 3
lr_scheduler_type: cosine
warmup_ratio: 0.02
bf16: true
ddp_timeout: 180000000
weight_decay: 0.0
```
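
The key names above resemble a LLaMA-Factory-style fine-tuning config, though the card does not say which trainer was used. For actually running the GGUF build locally, a minimal sketch with llama-cpp-python follows; the model file name is an assumption based on typical QuantFactory naming, so substitute the file you download:

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="SmallThinker-3B-Preview.Q4_K_M.gguf",  # assumed file name
    n_ctx=4096,   # context window; training used cutoff_len 16384
    n_threads=8,  # tune to your CPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in two sentences."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```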

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API
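
If you go the hosted-API route, a minimal sketch with the official `together` Python SDK looks like the following. Whether this particular model is available on Together.ai, and under what model slug, is an assumption to verify against their model list:

```python
# Hedged sketch of calling a hosted inference API with the `together` SDK
# (pip install together). The model slug below is hypothetical.
import os
from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])

resp = client.chat.completions.create(
    model="PowerInfer/SmallThinker-3B-Preview",  # hypothetical model slug
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```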

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now
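
For Replicate, the equivalent sketch uses the `replicate` Python client, authenticated via the REPLICATE_API_TOKEN environment variable. The model identifier below is hypothetical; this GGUF build may not be published on Replicate, in which case you would push your own version first:

```python
# Hedged sketch of calling a model on Replicate (pip install replicate).
# Requires REPLICATE_API_TOKEN in the environment.
import replicate

output = replicate.run(
    "youraccount/smallthinker-3b-preview",  # hypothetical model reference
    input={"prompt": "Write a haiku about edge AI."},
)
# Language models on Replicate typically return a sequence of string chunks.
print("".join(output))
```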

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.