TinyLlama_pt
by Abby-Woodring
Other · New · Early-stage · 44 downloads
Edge AI: Mobile, Laptop, Server
Quick Summary
TinyLlama_pt is a TinyLlama 1.1B variant further pretrained on web text, with training data spanning general, science, multilingual, and reasoning domains.
Training Data Analysis
🟡 Average (4.8/10)
A quality assessment of the training datasets used by TinyLlama_pt; the overall score is the mean of the four dataset scores below.
Specialized For: general, science, multilingual, reasoning
Training Datasets (4)
Common Crawl
🔴 2.5/10 · general, science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data.
- Diversity: The dataset captures billions of web pages across multiple domains and content types, enabling broad coverage.
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web across regions and languages.
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from digitally marginalized communities less visible.
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, and violent content.
C4
🔵 6/10 · general, multilingual
Key Strengths
- Scale and Accessibility: 750GB of publicly available, filtered text
- Systematic Filtering: Documented heuristics enable reproducibility
- Language Diversity: Despite being English-only, the corpus captures diverse writing styles
Considerations
- English-Only: Limits multilingual applications
- Filtering Limitations: Offensive content and low-quality text remain despite filtering
Wikipedia
🟡 5/10 · science, multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation requirements.
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and generate text across many languages.
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to learn document structure.
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles, and less editorial oversight.
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and the English-speaking world are overrepresented.
arXiv
🟡 5.5/10 · science, reasoning
Key Strengths
- Scientific Authority: Research papers from an established preprint repository
- Domain-Specific: Specialized vocabulary and concepts
- Mathematical Content: Includes complex equations and notation
Considerations
- Specialized: Primarily technical and mathematical content
- English-Heavy: Predominantly English-language papers
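To get a quick feel for these corpora before training, here is a minimal sketch using the Hugging Face datasets library; the dataset IDs (allenai/c4, wikimedia/wikipedia) are assumptions, so substitute whichever mirrors you actually use:

# Stream a few records from C4 and Wikipedia without downloading the full corpora.
# Dataset IDs are assumptions; check the Hugging Face Hub for current mirrors.
from datasets import load_dataset

c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
for i, record in enumerate(c4):
    print(record["text"][:200])  # first 200 characters of each document
    if i == 2:
        break

wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
print(next(iter(wiki))["title"])  # title of the first streamed article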
Code Examples
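Below is the axolotl YAML configuration for this run: full-parameter continued pretraining of a merged TinyLlama 1.1B checkpoint on the Abby-Woodring/fineweb_50M FineWeb sample, in bf16 with flash attention, FSDP, and Liger kernels.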
base_model: /work/awoodring1/l1b_merged0/ # TinyLlama_1.1v
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
# special_tokens: # null in tokenizer_config.json for Llama-tiny
# pad_token: "</s>" # match mistral
load_in_8bit: false
load_in_4bit: false
strict: false
pretraining_dataset:
  - path: Abby-Woodring/fineweb_50M
    data_files:
      - CC-MAIN-2023-50/data.jsonl
    text_column: text
    type: pretrain
dataset_prepared_path: /hpc/home/awoodring1/hf_data/pretrain/1
output_dir: /work/awoodring1/l1b_fineweb/t1
hub_model_id: # upload in batch script after
hf_use_auth_token: true
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: true
eval_sample_packing: false
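# Note: with sample_packing disabled, each example is a single document padded
# out to sequence_len (pad_to_sequence_len: true); batch shapes stay static, at
# the cost of compute spent on pad tokens compared with packed pretraining.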
max_steps: 4000
adapter: # full pretraining
lora_model_dir:
use_wandb: true
wandb_project: fineweb_full_pt
wandb_entity:
wandb_watch:
wandb_name: t1
wandb_log_model:
wandb_mode:
gradient_accumulation_steps: 1
micro_batch_size: 35
num_epochs: 2
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00003
max_grad_norm: 1.0
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
early_stopping_patience:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
loss_watchdog_threshold: 20.0
loss_watchdog_patience: 5
warmup_ratio: 0.01
evals_per_epoch: 0
eval_table_size:
eval_max_new_tokens: 2048
saves_per_epoch: 3
debug:
deepspeed:
weight_decay: 0.1 # increase because we are not using LoRA
fsdp_version: 1
fsdp_config:
  activation_checkpointing: false
  offload_params: false
  cpu_ram_efficient_loading: true
  use_orig_params: true
  state_dict_type: FULL_STATE_DICT
  auto_wrap_policy: TRANSFORMER_BASED_WRAP
  transformer_layer_cls_to_wrap: LlamaDecoderLayer
  reshard_after_forward: true
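# Note: TRANSFORMER_BASED_WRAP makes each LlamaDecoderLayer its own FSDP unit,
# so parameters are gathered and resharded one decoder layer at a time.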
special_tokens:
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
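A config like this is typically launched through axolotl's CLI, for example: accelerate launch -m axolotl.cli.train config.yaml (the exact entry point depends on your axolotl version). For scale: with micro_batch_size: 35, gradient_accumulation_steps: 1, and sequence_len: 2048, each optimizer step sees 35 × 2048 = 71,680 tokens per device, so max_steps: 4000 covers roughly 287M tokens per device before accounting for data-parallel replication.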