Mistral-7B-sarcasm-scrolls-v2
by pszemraj · license: apache-2.0 · Language Model · 7B params · 19 downloads · Early-stage
Edge AI: runs on mobile, laptop, or server; 16GB+ RAM suggested
Quick Summary
A completion-style fine-tune of mistralai/Mistral-7B-v0.3 on the BEE-spoke-data/sarcasm-scrolls dataset (per the training config below).
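A minimal inference sketch, assuming the checkpoint is public on the Hugging Face Hub under the hub_model_id from the training config below (untested against this exact model):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pszemraj/Mistral-7B-v0.3-sarcasm-scrolls-v2"  # hub_model_id from the config below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bf16 training setting
    device_map="auto",
)

# Completion-style fine-tune (type: completion), so feed plain text, not a chat template.
prompt = "Oh, wonderful. Another meeting that could have been an email."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))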
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
Minimum recommended: 7GB+ RAM
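For the tighter memory budgets above, 4-bit quantization via bitsandbytes is a common way to shrink the roughly 14 GB bf16 weight footprint of a 7B model. A hedged sketch (the model id comes from the config below; the memory figures above are the listing's estimates, not measurements):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "pszemraj/Mistral-7B-v0.3-sarcasm-scrolls-v2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # ~4x smaller weights than bf16
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for quality
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)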
Training Data Analysis
🟡 Average (5.3/10)
Quality assessment of the training datasets behind Mistral-7B-sarcasm-scrolls-v2. The overall score appears to be the mean of the four per-dataset scores below: (2.5 + 8 + 5 + 5.5) / 4 = 5.25, shown as 5.3.
Specialized for: general, science, code, multilingual, reasoning
Training Datasets (4)
Common Crawl
🔴 2.5/10 · general, science
Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data.
- Diversity: The dataset captures billions of web pages across multiple domains and content types.
- Comprehensive Coverage: Despite its limitations, Common Crawl attempts to represent the broader web.
Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, so coverage skews toward well-connected sites.
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, and violent content.
The Pile
🟢 8/10 · code, general, science, multilingual
Key Strengths
- Deliberate Diversity: Explicitly curated to include diverse content types (academia, code, Q&A, books, and more).
- Documented Quality: Each component dataset is thoroughly documented, with a rationale for its inclusion.
- Epoch Weighting: Component datasets receive different numbers of training epochs based on perceived quality.
Wikipedia
🟡 5/10 · science, multilingual
Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation requirements.
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and generate many languages.
- Structured Knowledge: Articles follow consistent formatting with clear sections.
Considerations
- Language Inequality: Low-resource language editions have significantly lower quality and far fewer articles.
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture receive disproportionate coverage.
arXiv
🟡 5.5/10 · science, reasoning
Key Strengths
- Scientific Authority: Research papers from an established scholarly preprint repository
- Domain-Specific: Specialized vocabulary and concepts
- Mathematical Content: Includes complex equations and notation
Considerations
- Specialized: Primarily technical and mathematical content
- English-Heavy: Predominantly English-language papers
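Note that the four datasets above describe the base model's pretraining mix; the fine-tune itself trains on BEE-spoke-data/sarcasm-scrolls, per the datasets stanza in the config below. A minimal sketch for inspecting that corpus with the Hugging Face datasets library (the "train" split name is an assumption; "text" matches the config's field setting):

from datasets import load_dataset

# Peek at the fine-tuning corpus named in the Axolotl config.
ds = load_dataset("BEE-spoke-data/sarcasm-scrolls", split="train")
print(ds)                  # dataset size and columns
print(ds[0]["text"][:500]) # first 500 characters of the first record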
Code Examples
Training configuration (Axolotl-style YAML):
base_model: mistralai/Mistral-7B-v0.3
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
strict: false
# dataset
datasets:
- path: BEE-spoke-data/sarcasm-scrolls
type: completion # format from earlier
field: text
val_set_size: 200
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
train_on_inputs: false
group_by_length: false
# WANDB
wandb_project: sarcasm-scrolls
wandb_entity: pszemraj
wandb_watch: gradients
wandb_name: Mistral-7B-v0.3-sarcasm-scrolls-v2a
hub_model_id: pszemraj/Mistral-7B-v0.3-sarcasm-scrolls-v2
hub_strategy: every_save
gradient_accumulation_steps: 32
micro_batch_size: 1
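# effective batch size = gradient_accumulation_steps (32) x micro_batch_size (1) = 32 per device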
num_epochs: 2
optimizer: adamw_torch_fused # paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-5
load_in_8bit: false
load_in_4bit: false
bf16: true
tf32: true
torch_compile: true
torch_compile_backend: inductor # Optional[str]
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
logging_steps: 3
xformers_attention:
flash_attention: true
warmup_steps: 20
# hyperparams for freq of evals, saving, etc
evals_per_epoch: 4
saves_per_epoch: 4
save_safetensors: true
save_total_limit: 1 # Checkpoints saved at a time
output_dir: ./output-axolotl/output-model-chaz
resume_from_checkpoint:
deepspeed:
weight_decay: 0.06
special_tokens:
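With the YAML above saved as, say, config.yaml (the filename is illustrative), a run of this kind is typically launched through Axolotl's standard entry point, e.g. accelerate launch -m axolotl.cli.train config.yaml. This is a sketch of the usual Axolotl invocation, not a command taken from the model card itself.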