herbert-base-cased-sentiment
by Voicelab · license: cc-by-4.0

123M parameters · 1 language (Polish) · small context window
78.9K downloads · community-tested

Edge AI compatibility: Mobile · Laptop · Server (1GB+ RAM minimum)
Quick Summary

Overview
- Base model: allegro/herbert-base-cased
- Language: Polish (pl)
- Training data: product reviews plus Voicelab's own data
- Blog post: Sentiment analysis - COVID-19 – the source...
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 1GB+ RAM
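The 1GB+ minimum is consistent with the model's parameter count. A rough back-of-envelope estimate, assuming 4 bytes per parameter (fp32 weights) and ignoring activation memory and runtime overhead:

```python
# Approximate weight memory for a 123M-parameter model in fp32.
# Assumption: 4 bytes per parameter; activations and framework
# overhead are not included, so real usage will be higher.
params = 123_000_000
bytes_per_param = 4
gb = params * bytes_per_param / 1024**3
print(f"~{gb:.2f} GB for fp32 weights")
```

Loading the weights in fp16 or an int8 quantization would roughly halve or quarter this figure, which is why the model fits comfortably on mobile-class devices.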
Code Examples
Sentiment Classification in Polish (Python, transformers)

import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Map class indices to sentiment labels
id2label = {0: "negative", 1: "neutral", 2: "positive"}

tokenizer = AutoTokenizer.from_pretrained("Voicelab/herbert-base-cased-sentiment")
model = AutoModelForSequenceClassification.from_pretrained("Voicelab/herbert-base-cased-sentiment")

# "Ale fajnie, spadł dzisiaj śnieg! Ulepimy dziś bałwana?"
# ("How nice, it snowed today! Shall we build a snowman?")
sentences = ["Ale fajnie, spadł dzisiaj śnieg! Ulepimy dziś bałwana?"]

encoding = tokenizer(
    sentences,
    add_special_tokens=True,
    return_token_type_ids=True,
    truncation=True,
    padding="max_length",
    return_attention_mask=True,
    return_tensors="pt",
)

# Run inference and pick the highest-scoring class
logits = model(**encoding).logits.detach().cpu().numpy()
prediction = id2label[np.argmax(logits)]
print(sentences, "--->", prediction)
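The example above prints only the top label. To inspect class probabilities instead, the logits can be passed through a softmax. The sketch below uses hypothetical logit values standing in for a real `model(**encoding).logits` output:

```python
import numpy as np

id2label = {0: "negative", 1: "neutral", 2: "positive"}

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits for a single sentence; in practice these
# come from model(**encoding).logits.detach().cpu().numpy()
logits = np.array([[-1.2, 0.3, 2.1]])
probs = softmax(logits)

for idx, p in enumerate(probs[0]):
    print(f"{id2label[idx]}: {p:.3f}")
```

Reporting probabilities rather than a bare argmax makes it easy to apply a confidence threshold, e.g. falling back to "neutral" when no class clearly dominates.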
Output:
['Ale fajnie, spadł dzisiaj śnieg! Ulepimy dziś bałwana?'] ---> positive

Deploy This Model
Production-ready deployment in minutes

Together.ai - Instant API access to this model. Production-ready inference API. Start free, scale to millions. [Try Free API]

Replicate - One-click model deployment. Run models in the cloud with a simple API. No DevOps required. [Deploy Now]

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.