ultraVAD

by fixie-ai
615 downloads
Early-stage
Edge AI: Mobile, Laptop, Server
Quick Summary

ultraVAD is an end-of-turn detection model for voice agents. Given an audio clip and the conversation history, it scores the probability of the <|eot_id|> token at the last audio position; thresholding that probability decides whether the speaker has finished their turn, as shown in the code example below.

Code Examples

Build model inputs via pipeline preprocess (Python, transformers)
import transformers
import torch
import librosa
import os

pipe = transformers.pipeline(model='fixie-ai/ultraVAD', trust_remote_code=True, device="cpu")

sr = 16000
wav_path = os.path.join(os.path.dirname(__file__), "sample.wav")
audio, sr = librosa.load(wav_path, sr=sr)

turns = [
  {"role": "assistant", "content": "Hi, how are you?"},
]

# Build model inputs via pipeline preprocess
inputs = {"audio": audio, "turns": turns, "sampling_rate": sr}
model_inputs = pipe.preprocess(inputs)

# Move tensors to model device
device = next(pipe.model.parameters()).device
model_inputs = {k: (v.to(device) if hasattr(v, "to") else v) for k, v in model_inputs.items()}

# Forward pass (no generation)
with torch.inference_mode():
  output = pipe.model.forward(**model_inputs, return_dict=True)

# Compute last-audio token position
logits = output.logits  # (1, seq_len, vocab)
audio_pos = int(
  model_inputs["audio_token_start_idx"].item() +
  model_inputs["audio_token_len"].item() - 1
)

# Resolve <|eot_id|> token id and compute probability at last-audio index
token_id = pipe.tokenizer.convert_tokens_to_ids("<|eot_id|>")
if token_id is None or token_id == pipe.tokenizer.unk_token_id:
  raise RuntimeError("<|eot_id|> not found in tokenizer.")

audio_logits = logits[0, audio_pos, :]
audio_probs = torch.softmax(audio_logits.float(), dim=-1)
eot_prob_audio = audio_probs[token_id].item()
print(f"P(<|eot_id|>) = {eot_prob_audio:.6f}")
threshold = 0.1
if eot_prob_audio > threshold:
  print("Is End of Turn")
else:
  print("Is Not End of Turn")
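The threshold check above runs on a single clip. In a streaming agent you would score successive audio windows and, to avoid cutting the speaker off on one noisy reading, require the probability to clear the threshold for several consecutive windows. A minimal sketch of that debouncing logic (the model call is omitted; `probs` stands in for successive P(<|eot_id|>) values, and the `threshold` and `consecutive` parameters are illustrative choices, not part of the model):

```python
from collections import deque

def end_of_turn(probs, threshold=0.1, consecutive=3):
    """Return the window index at which end-of-turn is declared, or None.

    probs: iterable of P(<|eot_id|>) values, one per audio window.
    The turn is considered ended once `consecutive` successive values
    exceed `threshold`.
    """
    recent = deque(maxlen=consecutive)  # rolling window of above-threshold flags
    for i, p in enumerate(probs):
        recent.append(p > threshold)
        if len(recent) == consecutive and all(recent):
            return i
    return None

# Indices 4, 5, 6 are the first three consecutive readings above 0.1.
print(end_of_turn([0.01, 0.02, 0.15, 0.04, 0.2, 0.3, 0.4]))  # → 6
```

Tuning `consecutive` trades latency for robustness: a larger value waits longer before ending the turn but is less likely to trigger on a brief pause.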

Deploy This Model

Production-ready deployment in minutes

Together.ai — Instant API access to this model (Fastest API). Production-ready inference API; start free, scale to millions.

Replicate — One-click model deployment (Easiest Setup). Run models in the cloud with a simple API; no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.