bert-work-ethic-analysis

by AventIQ-AI
Quick Summary

A DistilBERT-based text classification model that categorizes employee feedback by work ethic.

Code Examples

Inference Example

```python
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

def load_model(model_path):
    tokenizer = DistilBertTokenizer.from_pretrained(model_path)
    # FP16 halves memory on GPU; drop .half() if you run on CPU only.
    model = DistilBertForSequenceClassification.from_pretrained(model_path).half()
    model.eval()
    return model, tokenizer

def classify_ethic(feedback, model, tokenizer, device="cuda"):
    inputs = tokenizer(
        feedback,
        max_length=256,
        padding="max_length",
        truncation=True,
        return_tensors="pt"
    ).to(device)
    with torch.no_grad():  # inference only; skip gradient tracking
        outputs = model(**inputs)
    predicted_class = torch.argmax(outputs.logits, dim=1).item()
    # Map the class index to its label name from the model config
    return model.config.id2label[predicted_class]

# Example usage
if __name__ == "__main__":
    model_path = "your-username/work-ethic-analysis"  # Replace with your HF repo
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model, tokenizer = load_model(model_path)
    model.to(device)

    feedback = "John consistently meets deadlines and takes initiative."
    category = classify_ethic(feedback, model, tokenizer, device)
    print(f"Feedback: {feedback}")
    print(f"Predicted Work Ethic Category: {category}")
```

Expected output:

```plaintext
Feedback: John consistently meets deadlines and takes initiative.
Predicted Work Ethic Category: Strong Initiative
```
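For scoring many feedback texts at once, the same argmax-over-logits step generalizes to a batch. The sketch below shows only that mapping step in isolation; the three category names in `ID2LABEL` are placeholders for illustration, since the card does not publish the model's actual classes.

```python
import torch

# Hypothetical label map -- the model's real classes are not published,
# so these three categories are illustrative placeholders.
ID2LABEL = {0: "Needs Improvement", 1: "Reliable", 2: "Strong Initiative"}

def logits_to_labels(logits: torch.Tensor) -> list:
    """Map a (batch_size, num_classes) logits tensor to label strings."""
    class_ids = torch.argmax(logits, dim=1).tolist()  # one class id per row
    return [ID2LABEL[i] for i in class_ids]

# A fabricated batch of logits for two feedback texts
logits = torch.tensor([[0.1, 2.0, -1.0],
                       [3.2, 0.4, 0.9]])
print(logits_to_labels(logits))  # ['Reliable', 'Needs Improvement']
```

In practice you would obtain the batched logits by passing a list of strings to the tokenizer (it pads them into one tensor) and reading `model.config.id2label` instead of a hand-written map.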
