# Model Card for keerthikoganti/distilbert-24679-text-finetuned

This model is a DistilBERT-based text classifier fine-tuned on samder03/2025-24679-text-dataset. It predicts one of 4 class labels from input text. The project demonstrates fine-tuning of pretrained transformer models for supervised classification tasks in a classroom setting.

## Model Details

- Developed by: Keerthi Koganti
- Shared by: Keerthi Koganti
- Model type: Transformer-based sequence classifier
- Language(s) (NLP): English
- License: Carnegie Mellon
- Task: Multiclass text classification
- Labels: 4 integer-coded categories (0, 1, 2, 3)
- Framework: Hugging Face Transformers (Trainer API)
- Repo artifacts: config.json, pytorch_model.bin, tokenizer files, metrics.json

## Intended Uses

- Benchmarking fine-tuned models against zero-shot/few-shot prompting pipelines
- Educational demonstration of the Hugging Face Trainer API

## Out-of-Scope Uses

- Production deployment in safety-critical or high-stakes settings
- Use outside the domain/context of the training dataset

## Bias, Risks, and Limitations

- Small dataset: the dataset was curated within class; total examples are limited.
- Domain bias: texts come from a narrow domain and may not generalize.
- Label ambiguity: some examples may be ambiguous or mislabeled.
- Overfitting risk: with few training samples, validation metrics may not reflect real-world performance.

### Recommendations

- Evaluate on external test sets before any real-world use.
- Complement automated predictions with human review.

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "keerthikoganti/distilbert-24679-text-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
clf = pipeline("text-classification", model=model, tokenizer=tokenizer)

# Example
text = "Sample input text to classify"
pred = clf(text)
print(pred)
```

## Training Details

- Training regime: fine-tuned with the Hugging Face Transformers Trainer API
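Because the labels are integer-coded (0–3), a downstream consumer ultimately maps the model's raw logits to a class index. The `pipeline` helper does this internally; the sketch below shows the same post-processing step in pure Python with hypothetical logits (no model download required), so the label-selection logic is visible on its own:

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    """Return the integer class index (0-3) with the highest probability."""
    probs = softmax(logits)
    return max(range(len(probs)), key=probs.__getitem__)

# Hypothetical logits for one input over the 4 classes; in practice they
# come from model(**tokenizer(text, return_tensors="pt")).logits
example_logits = [0.2, 0.1, 3.5, -0.5]
print(predict_label(example_logits))  # → 2
```

In practice the same result is obtained from the `pipeline` output's `label` field; this sketch only illustrates what that field encodes for an integer-labeled classifier.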
