---
language:
- ar
- be
- bg
- bn
- cs
- cy
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- gl
- hi
- hu
- it
- ja
- ka
- lt
- lv
- mk
- mr
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sr
- sv
- sw
- ta
- th
- tr
- uk
- ur
- vi
- zh
library_name: transformers
license: mit
metrics:
- bleu
pipeline_tag: audio-text-to-text
---
# Ultravox Model Card

Ultravox is a multimodal Speech LLM built around a pretrained GLM-4.5 and whisper-large-v3-turbo backbone.

See https://ultravox.ai for the GitHub repo and more information.

## Model Description

Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and a voice user message). The input to the model is given as a text prompt with a special `<|audio|>` pseudo-token, and the model processor replaces this magic token with embeddings derived from the input audio. Using the merged embeddings as input, the model then generates output text as usual.

In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output. No preference tuning has been applied to this revision of the model.

## Training Details

The model uses a pretrained GLM-4.5 backbone as well as the encoder part of whisper-large-v3-turbo. The multimodal adapter is trained and the Whisper encoder is fine-tuned, while the GLM model is kept frozen. We use a knowledge-distillation loss in which Ultravox tries to match the logits of the text-based GLM backbone.

The training dataset is a mix of ASR datasets, extended with continuations generated by Llama 3.1 8B, and speech translation datasets, which yield a modest improvement in translation evaluations. Training is supervised speech instruction fine-tuning via knowledge distillation; for more details, see the training code in the Ultravox repo.

- **Training regime:** BF16 mixed precision training
- **Hardware used:** 8x B200 GPUs

## Usage

Think of the model as an LLM that can also hear and understand speech. As such, it can be used as a voice agent, as well as for speech-to-speech translation, analysis of spoken audio, and more.
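Below is a minimal usage sketch following the `transformers` pipeline pattern documented for other Ultravox releases. The repo id (taken from the `ultravox-v0_5-glm-4_5-355b` listing), the input file name, and the exact keys passed to the pipeline are assumptions, not verified against this checkpoint:

```python
# pip install transformers peft librosa
import transformers
import librosa

# Loading requires trust_remote_code because Ultravox ships a custom pipeline.
# The checkpoint name below is an assumption based on this collection's listing.
pipe = transformers.pipeline(
    model="fixie-ai/ultravox-v0_5-glm-4_5-355b",
    trust_remote_code=True,
)

# Load the spoken input at 16 kHz, the sample rate the Whisper encoder expects.
# "question.wav" is a placeholder path.
audio, sr = librosa.load("question.wav", sr=16000)

# The text side of the conversation; the processor splices the audio embeddings
# in place of the audio pseudo-token in the user turn.
turns = [
    {
        "role": "system",
        "content": "You are a friendly and helpful character. You love to answer questions for people.",
    },
]

print(pipe({"audio": audio, "turns": turns, "sampling_rate": sr}, max_new_tokens=30))
```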