tiny-random-LlamaForCausalLM

by ccmodular · llama · License: Other · New · 6 downloads
Early-stage · Edge AI: Mobile, Laptop, Server
Quick Summary

A tiny, randomly initialized LlamaForCausalLM (one layer, one attention head) intended for smoke tests and pipeline development, not for meaningful text generation; the script under Code Examples below builds and pushes it.
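A minimal sketch of loading it for a smoke test, assuming the standard transformers API (the model ID is this repo; everything else is illustrative). Because the weights are random, the decoded text is gibberish; the point is only that the tokenize → generate → decode round trip runs:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ccmodular/tiny-random-LlamaForCausalLM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Keep generation short: max_position_embeddings is only 32.
inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))  # nonsense by design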

Training Data Analysis

🟡 Average (4.8/10)

Training datasets reported for tiny-random-LlamaForCausalLM, with per-dataset quality assessments. The overall score is the mean of the four per-dataset ratings (see the check after the list).

Specialized For

general
science
multilingual
reasoning

Training Datasets (4)

Common Crawl
🔴 2.5/10
general
science
Key Strengths
  • Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data.
  • Diversity: The dataset captures billions of web pages across multiple domains and content types.
  • Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the breadth of the web.
Considerations
  • Biased Coverage: The crawling process prioritizes frequently linked domains, so content from less-linked sites and communities is underrepresented.
  • Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, and violent content.
C4
🔵 6/10
general
multilingual
Key Strengths
  • Scale and Accessibility: 750GB of publicly available, filtered text
  • Systematic Filtering: Documented heuristics enable reproducibility
  • Language Diversity: Despite English-only, captures diverse writing styles
Considerations
  • English-Only: Limits multilingual applications
  • Filtering Limitations: Offensive content and low-quality text remain despite filtering
Wikipedia
🟡 5/10
science
multilingual
Key Strengths
  • High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation requirements.
  • Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and generate text in many languages.
  • Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to learn document structure.
Considerations
  • Language Inequality: Low-resource language editions have significantly lower quality, fewer articles, and less active review.
  • Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and the English-speaking world are overrepresented.
arXiv
🟡 5.5/10
science
reasoning
Key Strengths
  • Scientific Authority: Peer-reviewed content from established repository
  • Domain-Specific: Specialized vocabulary and concepts
  • Mathematical Content: Includes complex equations and notation
Considerations
  • Specialized: Primarily technical and mathematical content
  • English-Heavy: Predominantly English-language papers
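
As a quick arithmetic check (mine, not the page's), the headline 4.8/10 is the mean of the four per-dataset ratings above:

# Mean of the four per-dataset ratings listed above.
scores = {"Common Crawl": 2.5, "C4": 6.0, "Wikipedia": 5.0, "arXiv": 5.5}
print(round(sum(scores.values()) / len(scores), 1))  # 19.0 / 4 = 4.75 -> 4.8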


Code Examples

Shrink the tokenizer to a smaller vocab (Python, transformers):
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

out = "ccmodular/tiny-random-LlamaForCausalLM"
template = "meta-llama/Llama-3.1-8B-Instruct"

print("Loading original tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(template)

# Shrink the tokenizer to a smaller vocab.
print("Shrinking tokenizer...")
tokenizer = tokenizer.train_new_from_iterator(
    text_iterator=[],
    # Effective minimum.
    # BPE requires 256 + special tokens, rounded up to the next power of 2 => 512.
    vocab_size=512,
)

# Minimize the chat template (remove tools and other context).
tokenizer.chat_template = (
    "{{- bos_token }}"
    "{%- for message in messages %}"
    "{{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n' + message['content'] | trim + '<|eot_id|>' }}"
    "{%- endfor %}"
    "{%- if add_generation_prompt %}"
    "{{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}"
    "{%- endif %}"
)

# Configure the model.
config = AutoConfig.from_pretrained(template)
config.num_attention_heads = 1
config.head_dim = 2  # minimum required by RoPE in MAX
config.hidden_size = config.num_attention_heads * config.head_dim
config.num_hidden_layers = 1
config.intermediate_size = 1
config.max_position_embeddings = 32
config.num_key_value_heads = 1
config.vocab_size = len(tokenizer)

# Create the model.
model = AutoModelForCausalLM.from_config(config)

# Print some stats.
print(f"Model parameters: {model.num_parameters():,}")
print(f"Vocab size: {config.vocab_size}")

# Push to HF hub (requires auth).
print(f"Pushing to {out}...")
model.push_to_hub(out, private=False)
tokenizer.push_to_hub(out, private=False)
config.push_to_hub(out, private=False)
print(f"Successfully pushed minimal model to {out}")

Deploy This Model

Production-ready deployment in minutes

Together.ai (Fastest API): Instant API access to this model. Production-ready inference API; start free, scale to millions.

Replicate (Easiest Setup): One-click model deployment. Run models in the cloud with a simple API; no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.