# neuphonic

## neucodec
Created by Neuphonic - building faster, smaller, on-device voice AI. A video of NeuCodec in action is available on YouTube.

A lightweight neural codec that encodes audio at just 0.8 kbps - perfect for researchers and builders who need something that just works for training high-quality text-to-speech models.

- 🔊 **Low bit-rate compression** - a speech codec that compresses and reconstructs audio with near-inaudible reconstruction loss
- 🌍 **Ready for real-world use** - train your own SpeechLMs without needing to build your own codec
- 🏢 **Commercial use permitted** - use it in your own tools or products
- 📊 **Released with large pre-encoded datasets** - we've compressed Emilia-YODAS from 1.7 TB to 41 GB using NeuCodec, significantly reducing the compute requirements needed for training

NeuCodec is a Finite Scalar Quantisation (FSQ) based 0.8 kbps audio codec for speech tokenization. It offers the following features:

- FSQ quantisation resulting in a single codebook, making it ideal for downstream modelling with Speech Language Models.
- Trained on Creative Commons data, so there are no non-commercial data restrictions.
- At 50 tokens/sec and 16 bits per token, the overall bit-rate is 0.8 kbps.
- The codec takes 16 kHz input and outputs 24 kHz audio using an upsampling decoder.
- The FSQ encoding scheme allows for bit-level error resistance suitable for unreliable and noisy channels.

NeuCodec is largely based on extending the work of X-Codec2.0.

- Developed by: Neuphonic
- Model type: Neural Audio Codec
- License: apache-2.0
- Repository: https://github.com/neuphonic/neucodec
- Paper: arXiv
- Pre-encoded datasets:
  - Emilia-YODAS-EN
  - More coming soon!

To install from PyPI in a dedicated environment, use Python 3.10 or above.

The model was trained on the following data:

- Emilia-YODAS
- MLS
- LibriTTS
- Fleurs
- CommonVoice
- HUI
- An additional proprietary set

All publicly available data was covered by either the CC-BY-4.0 or CC0 license.
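The FSQ and bit-rate arithmetic described above can be sketched as a toy example. The per-dimension level counts below are illustrative assumptions chosen so the codebook works out to 16 bits, not NeuCodec's actual configuration:

```python
# Toy Finite Scalar Quantisation (FSQ): each latent dimension is rounded
# to one of a small number of evenly spaced levels; the product of the
# level counts gives the size of a single implicit codebook.
import numpy as np

def fsq_quantise(z, levels):
    """Quantise each dimension of z to `levels[d]` evenly spaced values in [-1, 1]."""
    z = np.tanh(z)  # bound each dimension to (-1, 1)
    q = []
    for d, L in enumerate(levels):
        k = np.round((z[..., d] + 1) / 2 * (L - 1))  # level index in {0, ..., L-1}
        q.append(2 * k / (L - 1) - 1)                # back to the grid in [-1, 1]
    return np.stack(q, axis=-1)

levels = [4] * 8                                # assumed: 4 levels x 8 dims
codebook_size = int(np.prod(levels))            # 4^8 = 65,536 codes per token
bits_per_token = int(np.log2(codebook_size))    # 16 bits
tokens_per_second = 50                          # from the model card
bitrate_kbps = tokens_per_second * bits_per_token / 1000  # 0.8 kbps
```

Because every token is just an integer index into one 65,536-entry grid, downstream speech LMs can treat codec output as an ordinary single-vocabulary token stream.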
## neucodec-onnx-decoder
This is an ONNX-compiled version of the NeuCodec decoder. Its main use case is providing a low-footprint decoder for on-device TTS.
## neutts-air
Created by Neuphonic - building faster, smaller, on-device voice AI.

State-of-the-art voice AI has been locked behind web APIs for too long. NeuTTS Air is the world's first super-realistic, on-device TTS speech language model with instant voice cloning. Built off a 0.5B LLM backbone, NeuTTS Air brings natural-sounding speech, real-time performance, built-in security, and speaker cloning to your local device - unlocking a new category of embedded voice agents, assistants, toys, and compliance-safe apps.

- 🗣 **Best-in-class realism for its size** - produces natural, ultra-realistic voices that sound human
- 📱 **Optimised for on-device deployment** - provided in GGML format, ready to run on phones, laptops, or even Raspberry Pis
- 👫 **Instant voice cloning** - create your own speaker with as little as 3 seconds of audio
- 🚄 **Simple LM + codec architecture built off a 0.5B backbone** - the sweet spot between speed, size, and quality for real-world applications

> [!CAUTION]
> Websites like neutts.com are popping up, and they are not affiliated with Neuphonic, our GitHub, or this repo.
>
> We are on neuphonic.com only. Please be careful out there! 🙏

NeuTTS Air is built off Qwen 0.5B - a lightweight yet capable language model optimised for text understanding and generation - as well as a powerful combination of technologies designed for efficiency and quality:

- Audio codec: NeuCodec - our proprietary neural audio codec that achieves exceptional audio quality at low bit rates using a single codebook
- Format: available in GGML format for efficient on-device inference
- Responsibility: watermarked outputs
- Inference speed: real-time generation on mid-range devices
- Power consumption: optimised for mobile and embedded devices

Please refer to the following link for instructions on how to install `espeak`.

The requirements file includes the dependencies needed to run the model with PyTorch. When using an ONNX decoder or a GGML model, some dependencies (such as PyTorch) are no longer required.
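The LM + codec split described above can be sketched with stand-in functions. Every name below is a hypothetical placeholder rather than the real API (which lives in the neutts-air repository); the only numbers taken from these model cards are NeuCodec's 50 tokens per second and 24 kHz decoder output:

```python
# Stub sketch of an LM + codec TTS pipeline: a language model emits codec
# tokens, and the codec decoder turns them into a waveform.
TOKENS_PER_SECOND = 50        # NeuCodec frame rate (from the model card)
OUTPUT_SAMPLE_RATE = 24_000   # NeuCodec's upsampling decoder outputs 24 kHz

def lm_generate_codec_tokens(text, ref_tokens=()):
    # Placeholder: a real 0.5B backbone would autoregressively emit one
    # 16-bit NeuCodec token per 20 ms of speech, conditioned on the
    # reference speaker's tokens. Here we just budget 15 tokens per word.
    n = max(1, len(text.split())) * 15
    return list(range(n))

def codec_decode(tokens):
    # Placeholder: each token corresponds to 24,000 / 50 = 480 output samples.
    samples_per_token = OUTPUT_SAMPLE_RATE // TOKENS_PER_SECOND
    return [0.0] * (len(tokens) * samples_per_token)

def tts(text, ref_tokens=()):
    return codec_decode(lm_generate_codec_tokens(text, ref_tokens))

audio = tts("Hello from the sketch")
duration_s = len(audio) / OUTPUT_SAMPLE_RATE
```

The point of the sketch is the interface: because the codec uses a single codebook, the LM's output vocabulary is just one token stream, which is what keeps the backbone small and the pipeline simple.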
Inference is compatible with and tested on `python>=3.11`. To specify a particular model repo for the backbone or codec, add the `--backbone` argument. Available backbones are listed in the NeuTTS-Air Hugging Face collection.

Several examples are available, including a Jupyter notebook in the `examples` folder. The model takes two inputs:

1. A reference audio sample (`.wav` file)
2. A text string

The model then synthesises the text as speech in the style of the reference audio. This is what enables NeuTTS Air's instant voice cloning capability. You can find some ready-to-use samples in the `examples` folder.

For optimal performance, reference audio samples should be:

1. Mono channel
2. 16-44 kHz sample rate
3. 3-15 seconds in length
4. Saved as a `.wav` file
5. Clean - minimal to no background noise
6. Natural, continuous speech - like a monologue or conversation, with few pauses, so the model can capture tone effectively

Every audio file generated by NeuTTS Air includes the Perth (Perceptual Threshold) watermarker.
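The format-related reference-audio guidelines above (mono, 16-44 kHz, 3-15 s, `.wav`) can be checked programmatically before cloning. This is a hypothetical helper using only the Python standard library, not part of the NeuTTS Air API:

```python
# Check a reference .wav against the mechanical guidelines above.
# (Noise level and speech continuity still need a human ear.)
import wave

def check_reference(path):
    """Return a list of guideline violations; an empty list means the file passes."""
    issues = []
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        duration = wf.getnframes() / rate
        if wf.getnchannels() != 1:
            issues.append("audio should be mono")
        if not 16_000 <= rate <= 44_100:
            issues.append("sample rate should be 16-44 kHz")
        if not 3.0 <= duration <= 15.0:
            issues.append("duration should be 3-15 seconds")
    return issues
```

Running this before synthesis gives an early, readable error instead of degraded cloning quality from an out-of-spec reference.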
## neutts-nano

## distill-neucodec

## neutts-air-q8-gguf
An 8-bit (Q8) GGUF quantisation of NeuTTS Air for efficient on-device inference with GGML-based runtimes. See the `neutts-air` card above for the full model description, installation notes, and reference-audio guidelines.
## neutts-nano-q4-gguf

## neucodec-onnx-decoder-int8

## neutts-air-q4-gguf
A 4-bit (Q4) GGUF quantisation of NeuTTS Air for efficient on-device inference with GGML-based runtimes. See the `neutts-air` card above for the full model description, installation notes, and reference-audio guidelines.