roberta-based-sentiment-analysis-for-twitter-tweets

by AventIQ-AI
Quick Summary

RoBERTa-Base Quantized Model for Sentiment Analysis

This repository hosts a quantized version of the RoBERTa-base model, fine-tuned for sentiment analysis of Twitter tweets.
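
The card does not state which quantization method was used. A minimal sketch of one common approach, PyTorch dynamic quantization for CPU inference, is shown below; the repo ID is assumed from this page's title and may need adjusting:

```python
import torch
from transformers import RobertaForSequenceClassification

# Load the fine-tuned full-precision model
# (repo ID assumed from this page; replace if your copy lives elsewhere)
model = RobertaForSequenceClassification.from_pretrained(
    "AventIQ-AI/roberta-based-sentiment-analysis-for-twitter-tweets"
)

# Dynamic quantization: linear-layer weights are stored as int8 and
# activations are quantized on the fly at inference time
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
quantized_model.eval()
```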

Code Examples

Installation

```bash
pip install transformers torch
```
Usage

```python
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification
import torch

# Load tokenizer
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# Load the fine-tuned (quantized) model
# NOTE: the repo ID below is assumed from this page; replace it with the
# actual model path if yours differs
quantized_model = RobertaForSequenceClassification.from_pretrained(
    "AventIQ-AI/roberta-based-sentiment-analysis-for-twitter-tweets"
)
quantized_model.eval()

# Define a test sentence
test_sentence = "The food was absolutely delicious and the service was amazing!"

# Tokenize input
inputs = tokenizer(test_sentence, return_tensors="pt", padding=True, truncation=True, max_length=128)

# Ensure input tensors are the correct dtype
inputs["input_ids"] = inputs["input_ids"].long()
inputs["attention_mask"] = inputs["attention_mask"].long()

# Make prediction
with torch.no_grad():
    outputs = quantized_model(**inputs)

# Get predicted class
predicted_class = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted Class: {predicted_class}")

# Map class index to sentiment label
label_mapping = {0: "Negative", 1: "Neutral", 2: "Positive"}
predicted_label = label_mapping[predicted_class]
print(f"Predicted Label: {predicted_label}")
```
Repository Structure

```text
.
├── config.json
├── tokenizer_config.json
├── special_tokens_map.json
├── tokenizer.json
├── model.safetensors    # Fine-tuned model weights
└── README.md            # Model documentation
```
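
If you download these files into a local directory, the model and tokenizer can be loaded from that path instead of the Hub (the directory name below is hypothetical):

```python
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

# Load from a local checkout of the files listed above
# ("./roberta-tweet-sentiment" is an illustrative local path)
local_dir = "./roberta-tweet-sentiment"
tokenizer = RobertaTokenizerFast.from_pretrained(local_dir)
model = RobertaForSequenceClassification.from_pretrained(local_dir)
```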
