Mistral-7B-v0.1-q-sparse-fineweb-edu-table2

by skymizer · Language Model · 7.0B params · license: apache-2.0
New · Early-stage · 2 downloads
Edge AI: Mobile · Laptop · Server · 16GB+ RAM
Quick Summary

A continued-pretraining (q-sparse) variant of Mistral-7B-v0.1 released by skymizer, trained on a pre-tokenized FineWeb-Edu corpus for 10,000 steps at a 4,096-token sequence length with an approximately 4M-token global batch. The full Axolotl training configuration is reproduced in the Code Examples section below.
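
As a minimal usage sketch, the checkpoint can be loaded with the standard transformers classes named in the training configuration (AutoModelForCausalLM, AutoTokenizer). This assumes the repository works with stock transformers; a custom q-sparse runtime may additionally require trust_remote_code or a dedicated inference stack.

# Minimal text-generation sketch; assumes the checkpoint loads with the
# standard transformers classes listed in the training config below.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "skymizer/Mistral-7B-v0.1-q-sparse-fineweb-edu-table2"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)  # config sets tokenizer_use_fast: false
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",
)

inputs = tokenizer("The FineWeb-Edu dataset is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))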

Device Compatibility

Mobile:  4-6GB RAM
Laptop:  16GB RAM
Server:  GPU
Minimum recommended: 7GB+ RAM
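
These tiers roughly track the weight footprint of a 7B-parameter model at different numeric precisions. A back-of-the-envelope check (weights only, ignoring activation memory and KV cache):

# Rough weight-memory estimate for a 7B-parameter model at common precisions.
params = 7.0e9
for name, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{name:>9}: ~{gib:.1f} GiB")
# fp16/bf16: ~13.0 GiB  -> fits the 16GB "Laptop" tier
#      int8: ~ 6.5 GiB  -> consistent with the "7GB+ RAM" minimum
#      int4: ~ 3.3 GiB  -> consistent with the 4-6GB "Mobile" tier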

Training Data Analysis

🟡 Average (5.3/10)

A review of the training datasets associated with Mistral-7B-v0.1-q-sparse-fineweb-edu-table2, with a quality assessment for each.

Specialized For

general
science
code
multilingual
reasoning

Training Datasets (4)

Common Crawl — 🔴 2.5/10 (general, science)
Key Strengths
  • Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training d...
  • Diversity: The dataset captures billions of web pages across multiple domains and content types, ena...
  • Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web acro...
Considerations
  • Biased Coverage: The crawling process prioritizes frequently linked domains, making content from dig...
  • Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent c...
The Pile — 🟢 8/10 (code, general, science, multilingual)
Key Strengths
  • Deliberate Diversity: Explicitly curated to include diverse content types (academia, code, Q&A, book...
  • Documented Quality: Each component dataset is thoroughly documented with rationale for inclusion, en...
  • Epoch Weighting: Component datasets receive different training epochs based on perceived quality, al...
Wikipedia — 🟡 5/10 (science, multilingual)
Key Strengths
  • High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citatio...
  • Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and ...
  • Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to ...
Considerations
  • Language Inequality: Low-resource language editions have significantly lower quality, fewer articles...
  • Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and ...
arXiv — 🟡 5.5/10 (science, reasoning)
Key Strengths
  • Scientific Authority: Peer-reviewed content from established repository
  • Domain-Specific: Specialized vocabulary and concepts
  • Mathematical Content: Includes complex equations and notation
Considerations
  • Specialized: Primarily technical and mathematical content
  • English-Heavy: Predominantly English-language papers
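
Beyond the datasets listed above, the Axolotl configuration in the Code Examples section points to a pre-tokenized FineWeb-Edu corpus used for this continued-pretraining run. A sketch of how to peek at it with the Hugging Face datasets library, assuming it is a standard datasets repository; the "input_ids" column name is an assumption based on the corpus being pre-tokenized:

# Sketch: inspect the pre-tokenized FineWeb-Edu corpus referenced in the config below.
from datasets import load_dataset

ds = load_dataset(
    "skymizer/Mistral-7B-v0.1-base-tokenized-fineweb-edu-45B-4096",
    split="train",
    streaming=True,  # avoid downloading the full ~45B-token corpus
)
first = next(iter(ds))
print(first.keys())
print(len(first.get("input_ids", [])), "tokens in the first packed example")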


Code Examples

Axolotl training configuration (YAML) used for the continued-pretraining run:
base_model: mistralai/Mistral-7B-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false
resize_token_embeddings_to_32x: false

flash_attention: true
xformers_attention:

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: skymizer/Mistral-7B-v0.1-base-tokenized-fineweb-edu-45B-4096
    train_on_split: train
    type: completion

test_datasets:
  - path: skymizer/Mistral-7B-v0.1-base-tokenized-fineweb-edu-test-4K
    split: test
    type: completion

is_preprocess: true
skip_prepare_dataset: true

dataset_prepared_path:

hf_use_auth_token: true
output_dir: /mnt/home/model-team/models/Mistral-7B-v0.1-q-sparse-fineweb-edu-10000steps-4M-bs-table2
resume_from_checkpoint:
auto_resume_from_checkpoints: true

sequence_len: 4096
sample_packing: true
sample_packing_group_size: 100000
sample_packing_bin_size: 200
pad_to_sequence_len: true

eval_sample_packing: false
# eval_causal_lm_metrics: ["perplexity"]

wandb_project: "sparse-tuning-cpt"
wandb_entity:
wandb_watch:
wandb_name: "Mistral-7B-v0.1-q-sparse-fineweb-edu-10000steps-4M-bs-table2"
wandb_log_model:

# global batch size = 2 * 8 * 8 GPUs * 8 Nodes * 4096 = 4M
gradient_accumulation_steps: 8
micro_batch_size: 2
eval_batch_size: 1
max_steps: 10000
optimizer: adamw_torch
learning_rate: 0.00005
lr_scheduler: cosine
cosine_min_lr_ratio: 0.2 
weight_decay: 0.01
adam_beta1: 0.9
adam_beta2: 0.95
adam_eps: 0.000001
max_grad_norm: 2.0

train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: false

hub_model_id: "skymizer/Mistral-7B-v0.1-q-sparse-fineweb-edu-table2"

save_strategy: "steps"
save_steps: 500

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
local_rank:
logging_steps: 1

warmup_steps: 375
eval_steps: 500
eval_table_size:
debug:
deepspeed: /root/train/axolotl/deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:
seed: 42
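
A quick check of the global-batch-size comment in the config above, and of the implied total token budget over the 10,000 training steps (the 8-GPU, 8-node figures are taken from that comment):

# Sanity-check the "global batch size = ... = 4M" comment in the config above.
micro_batch_size = 2
gradient_accumulation_steps = 8
gpus_per_node = 8   # from the config comment
nodes = 8           # from the config comment
sequence_len = 4096
max_steps = 10_000

tokens_per_step = (micro_batch_size * gradient_accumulation_steps
                   * gpus_per_node * nodes * sequence_len)
print(f"tokens per optimizer step: {tokens_per_step:,}")              # 4,194,304 (~4M)
print(f"total training tokens:     {tokens_per_step * max_steps:,}")  # ~41.9B
# ~42B tokens over 10,000 steps, in line with the 45B-token FineWeb-Edu corpus name.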

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.