ced-small
by mispeech · Audio Model
license: apache-2.0 · 396 downloads
Edge AI targets: Mobile · Laptop · Server
Quick Summary
ced-small is an audio tagging model from the CED (Consistent Ensemble Distillation) family by mispeech. It classifies 16 kHz audio clips into AudioSet sound-event categories (e.g. "Finger snapping").
Code Examples
Inference (Python, transformers)
>>> import torch
>>> import torchaudio
>>> from transformers import AutoModelForAudioClassification, AutoFeatureExtractor
>>> model_name = "mispeech/ced-small"
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(model_name, trust_remote_code=True)
>>> model = AutoModelForAudioClassification.from_pretrained(model_name, trust_remote_code=True)
>>> audio, sampling_rate = torchaudio.load("/path-to/JeD5V5aaaoI_931_932.wav")
>>> assert sampling_rate == 16000
>>> inputs = feature_extractor(audio, sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = torch.argmax(logits, dim=-1).item()
>>> model.config.id2label[predicted_class_id]
'Finger snapping'
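Both code examples assert that the clip is already sampled at 16 kHz. In practice, a clip at another rate can be converted with `torchaudio.functional.resample(audio, sampling_rate, 16000)`. Purely as an illustration of what resampling does (not the model's actual preprocessing), a minimal linear-interpolation sketch:

```python
def resample_linear(samples, src_rate, dst_rate):
    # Naive linear-interpolation resampler, for illustration only.
    # Real preprocessing should use torchaudio.functional.resample.
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional position in the source
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

# One second of 44.1 kHz audio (silence as a stand-in) down to 16 kHz:
clip_16k = resample_linear([0.0] * 44100, 44100, 16000)
print(len(clip_16k))  # → 16000
```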
Inference (ONNX; Python, PyTorch)
>>> import torch
>>> import torchaudio
>>> from optimum.onnxruntime import ORTModelForAudioClassification
>>> model_name = "mispeech/ced-small"
>>> model = ORTModelForAudioClassification.from_pretrained(model_name, trust_remote_code=True)
>>> audio, sampling_rate = torchaudio.load("/path-to/JeD5V5aaaoI_931_932.wav")
>>> assert sampling_rate == 16000
>>> input_name = model.session.get_inputs()[0].name
>>> output = model(**{input_name: audio})  # feed the loaded 1-second clip
>>> logits = output.logits.squeeze()
>>> top2 = logits.argsort()[-2:]
>>> for idx in reversed(top2.tolist()):
...     print(f"{model.config.id2label[idx]}: {logits[idx]:.4f}")
'Finger snapping: 0.9155'
'Slap: 0.0567'
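The ONNX example keeps the two highest-scoring classes. That top-k step is just a sort over (index, score) pairs; a dependency-free sketch (the `id2label` dict below is a made-up stand-in for `model.config.id2label`):

```python
def top_k(logits, id2label, k=2):
    # Pair each score with its index, sort by score descending, map to labels.
    ranked = sorted(enumerate(logits), key=lambda pair: pair[1], reverse=True)
    return [(id2label[i], score) for i, score in ranked[:k]]

# Hypothetical stand-in for model.config.id2label:
id2label = {0: "Slap", 1: "Finger snapping", 2: "Speech"}
print(top_k([0.0567, 0.9155, 0.0100], id2label))
# → [('Finger snapping', 0.9155), ('Slap', 0.0567)]
```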