phi2_humor_merged_model

by insaabbas · Language Model · 2B params · License: MIT · 16 downloads · New · Early-stage
Edge AI: Mobile · Laptop · Server · 5GB+ RAM
Quick Summary

A 2B-parameter language model based on Phi-2, with a humor-generation LoRA fine-tune merged into the weights. It expects the structured prompt format shown in the code example below.

Device Compatibility

• Mobile: 4-6GB RAM
• Laptop: 16GB RAM
• Server: GPU
• Minimum recommended: 2GB+ RAM (see the quantization sketch below)
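
To fit the tighter mobile and laptop budgets above, one option is 4-bit quantization. A minimal sketch, assuming the optional bitsandbytes package is installed and a CUDA GPU is available; 4-bit support for this specific model is an assumption, not something this page confirms:

# Sketch: load the model in 4-bit to cut memory roughly 4x vs float16.
# Requires: pip install transformers accelerate bitsandbytes (CUDA GPU assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "insaabbas/phi2_humor_merged_model"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4 bits
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in half precision
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)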

Training Data Analysis

Overall quality: 🟡 Average (5.2/10)

An assessment of the training datasets used by phi2_humor_merged_model, with per-dataset quality scores below.

Specialized for: code · general · science · multilingual

Training Datasets (3)

The Pile: 🟢 8/10 · code, general, science, multilingual
Key Strengths
  • Deliberate Diversity: Explicitly curated to include diverse content types (academic text, code, Q&A, books).
  • Documented Quality: Each component dataset is thoroughly documented with a rationale for inclusion.
  • Epoch Weighting: Component datasets receive different training epochs based on perceived quality, so higher-quality components are seen more often.
Common Crawl: 🔴 2.5/10 · general, science
Key Strengths
  • Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data.
  • Diversity: The dataset captures billions of web pages across multiple domains and content types, enabling broad topical coverage.
  • Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web across regions and languages.
Considerations
  • Biased Coverage: The crawling process prioritizes frequently linked domains, so content from less-linked and digitally underrepresented communities is undersampled.
  • Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, and violent content.
Wikipedia: 🟡 5/10 · science, multilingual
Key Strengths
  • High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation requirements.
  • Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and generate many languages.
  • Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to learn document structure.
Considerations
  • Language Inequality: Low-resource language editions have significantly lower quality and fewer articles.
  • Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and interests are overrepresented.


Code Examples

How to Use (Python · transformers)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# from peft import PeftModel  # only needed if a LoRA adapter is published separately (see the sketch after this example)

# Load the model and tokenizer
model_id = "insaabbas/phi2_humor_merged_model" 
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, 
                                            torch_dtype=torch.float16,
                                            trust_remote_code=True,
                                            device_map="auto")

# Example Prompt (Use your exact structured prompt format here!)
prompt = """
### Input:
Topic: The difference between a politician and a normal person.
Constraints: Must be a one-liner.
### Output:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate the humor
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id # Important for Phi-2
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # hide special tokens like <|endoftext|>
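
The commented peft import at the top of this example applies only if the LoRA adapter is published on its own rather than merged into the weights. A minimal sketch of that alternative path; the adapter repo id below is hypothetical, since this page only documents the merged model:

# Hypothetical alternative: apply a separately published LoRA adapter to base Phi-2.
# "insaabbas/phi2_humor_lora" is an assumed adapter repo id, not confirmed here.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "insaabbas/phi2_humor_lora")  # hypothetical repo id
model = model.merge_and_unload()  # optionally bake the adapter into the base weights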


Deploy This Model

Production-ready deployment in minutes

Together.ai (Fastest API): Instant API access to this model through a production-ready inference API. Start free, scale to millions.

Replicate (Easiest Setup): One-click model deployment. Run models in the cloud with a simple API; no DevOps required.
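
For a feel of what hosted inference looks like, here is a rough sketch against an OpenAI-compatible completions endpoint. The URL is Together.ai's public completions API; whether this particular model appears in a provider's catalog is an assumption you should verify first.

# Sketch: query a hosted deployment over an OpenAI-compatible HTTP API.
# Model availability on the provider is an assumption; check the catalog.
import os
import requests

resp = requests.post(
    "https://api.together.xyz/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": "insaabbas/phi2_humor_merged_model",  # assumed catalog name
        "prompt": "### Input:\nTopic: Mondays.\nConstraints: Must be a one-liner.\n### Output:\n",
        "max_tokens": 100,
        "temperature": 0.7,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["text"])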

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.