flan-t5-base-parkinson-abstain-curriculum-v1

License: apache-2.0
Author: furkanyagiz
Type: Language Model (seq2seq)
Parameters: ~250M (FLAN-T5 base)
Status: New · early-stage · 0 downloads
Quick Summary

A FLAN-T5-base model fine-tuned to summarize Parkinson's disease (PD) research abstracts. It is trained with an abstain objective: when an abstract does not contain results or conclusions, the model outputs the sentinel string INSUFFICIENT_RESULT_INFORMATION instead of guessing.

Device Compatibility

Mobile:  4-6GB RAM
Laptop:  16GB RAM
Server:  GPU
Minimum recommended: 5GB+ RAM
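The RAM figures above can be sanity-checked from the parameter count. A back-of-envelope sketch, assuming flan-t5-base's roughly 250M parameters and counting weights only (activations, tokenizer, and runtime overhead come on top):

```python
def est_weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Rough weight-only memory footprint in GB (ignores activations and overhead)."""
    return n_params * bytes_per_param / 1e9

# ~250M params: float32 needs ~1.0 GB, float16/bfloat16 ~0.5 GB
fp32_gb = est_weight_memory_gb(250e6, 4)
fp16_gb = est_weight_memory_gb(250e6, 2)
print(f"fp32 ≈ {fp32_gb:.1f} GB, fp16 ≈ {fp16_gb:.1f} GB")
```

This is why the model comfortably fits the 4-6GB mobile tier even before quantization.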

Training Data Analysis

🔵 Good (6.0/10)

An assessment of the training datasets used by flan-t5-base-parkinson-abstain-curriculum-v1, with per-dataset quality ratings.

Specialized For

general
English

Training Datasets (1)

c4
🔵 6/10
general
English
Key Strengths
  • Scale and Accessibility: 750GB of publicly available, filtered text
  • Systematic Filtering: Documented heuristics enable reproducibility
  • Language Diversity: Despite English-only, captures diverse writing styles
Considerations
  • English-Only: Limits multilingual applications
  • Filtering Limitations: Offensive content and low-quality text remain despite filtering
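The "documented heuristics" bullet refers to C4's rule-based cleaning of Common Crawl. A simplified illustration of that style of filtering (this is our sketch, not the actual C4 pipeline, which also applies page-level, deduplication, and bad-word filters):

```python
def keep_line(line: str) -> bool:
    """Simplified C4-style line filter: require terminal punctuation
    and a minimum word count, dropping menu/navigation fragments."""
    line = line.strip()
    return line.endswith((".", "!", "?", '"')) and len(line.split()) >= 3

doc = ["A valid sentence here.", "nav menu", "Another good one!"]
kept = [line for line in doc if keep_line(line)]  # drops "nav menu"
```

Rules like these are cheap and reproducible, but, as the considerations note, they still let through offensive and low-quality text.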


Code Examples

Recommended usage

```text
Summarize the following Parkinson's disease (PD) abstract in 2-3 sentences.
Use ONLY information that appears in the abstract.
Do NOT add recommendations or speculation.
If results/conclusions are not present, output exactly: INSUFFICIENT_RESULT_INFORMATION

<ABSTRACT>
```
Transformers example (inference)

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hugging Face repo id (from_pretrained expects the repo id, not the full URL)
MODEL_ID = "furkanyagiz/flan-t5-base-parkinson-abstain-curriculum-v1"
ABSTAIN = "INSUFFICIENT_RESULT_INFORMATION"

tok = AutoTokenizer.from_pretrained(MODEL_ID)
mdl = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

abstract = "..."  # paste the PD abstract text here

prompt = (
    "Summarize the following Parkinson's disease (PD) abstract in 2-3 sentences.\n"
    "Use ONLY information that appears in the abstract.\n"
    "Do NOT add recommendations or speculation.\n"
    f"If results/conclusions are not present, output exactly: {ABSTAIN}\n\n"
    + abstract
)

inputs = tok(prompt, return_tensors="pt", truncation=True, max_length=512)
out = mdl.generate(
    **inputs,
    max_new_tokens=256,
    num_beams=4,              # deterministic beam search
    do_sample=False,
    no_repeat_ngram_size=4,
    length_penalty=0.8,
    repetition_penalty=1.05,
)
summary = tok.decode(out[0], skip_special_tokens=True).strip()
print(summary)
```
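Because abstention is signalled by an exact sentinel string, downstream handling stays simple. A minimal sketch (the helper name is ours, not part of the model's API):

```python
from typing import Optional

ABSTAIN = "INSUFFICIENT_RESULT_INFORMATION"

def accept_summary(raw: str) -> Optional[str]:
    """Return a cleaned summary, or None when the model abstained."""
    text = raw.strip()
    return None if ABSTAIN in text else text
```

Callers can then branch on None, e.g. to skip the record or flag it for manual review rather than storing a non-summary.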

Deploy This Model

Production-ready deployment in minutes.

Together.ai (Fastest API): instant API access to this model. Production-ready inference API; start free, scale to millions.

Replicate (Easiest Setup): one-click model deployment. Run models in the cloud with a simple API; no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.