theo77186

10 models

Llama-3-70B-Instruct-norefusal

llama
8
3

Qwen2.5-Coder-7B-Instruct-20241106

This is a reupload of the 20241106 weights, which were uploaded by mistake and taken down with a force push. To my surprise, with the release of the 0.5B-32B series, there is no official update of the 1.5B or 7B weights.

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). For Qwen2.5-Coder, we release base and instruction-tuned language models at 1.5, 7, and 32 (coming soon) billion parameters. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:

- Significant improvements in code generation, code reasoning, and code fixing. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc.
- A more comprehensive foundation for real-world applications such as code agents, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- Long-context support up to 128K tokens.

This repo contains the instruction-tuned 7B Qwen2.5-Coder model, which has the following features:

- Type: Causal Language Model
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens

Please refer to this section for detailed instructions on how to deploy Qwen2.5 for handling long texts. For more details, please refer to our blog, GitHub, Documentation, and Arxiv.

The code of Qwen2.5-Coder is in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: `KeyError: 'qwen2'`.

A code snippet using `apply_chat_template` to load the tokenizer and model and to generate content is sketched after this card.

The current `config.json` is set for a context length of up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you can add a `rope_scaling` block to `config.json` to enable YaRN (also sketched after this card).

For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the `rope_scaling` configuration only when processing long contexts is required.

Detailed evaluation results are reported in this 📑 blog. For requirements on GPU memory and the respective throughput, see the results here. If you find our work helpful, feel free to give us a cite.
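The quickstart referenced above follows the standard Hugging Face `transformers` chat pattern from the upstream Qwen2.5-Coder card; this is a minimal sketch, and the repo id, prompt, and `max_new_tokens` value are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for this reupload; substitute the path you actually use.
model_name = "theo77186/Qwen2.5-Coder-7B-Instruct-20241106"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # picks bf16/fp16 automatically when supported
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": "Write a quick sort algorithm."},
]
# Render the chat messages into a single prompt string.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Strip the prompt tokens so only the completion is decoded.
generated_ids = [
    output[len(inp):] for inp, output in zip(model_inputs.input_ids, generated_ids)
]
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

For long contexts, the upstream Qwen2.5 cards document a `rope_scaling` block like the following (a factor of 4.0 stretches the 32,768-token window toward 131,072 tokens); per the note above, add it to `config.json` only when you actually need long inputs:

```json
{
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```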

license:apache-2.0
5
4

Llama-3.2-8B-Instruct

llama
4
2

VibeVoice-Large

For some reason, Microsoft decided to take down the weights for the large model. Reuploading it. Original model card below.

VibeVoice: A Frontier Open-Source Text-to-Speech Model

VibeVoice is a novel framework designed for generating expressive, long-form, multi-speaker conversational audio, such as podcasts, from text. It addresses significant challenges in traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking.

A core innovation of VibeVoice is its use of continuous speech tokenizers (acoustic and semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details. The model can synthesize speech up to 90 minutes long with up to 4 distinct speakers, surpassing the typical 1-2 speaker limits of many prior models. (A toy sketch of this generation loop follows this card.)

Training Details

Transformer-based Large Language Model (LLM) integrated with specialized acoustic and semantic tokenizers and a diffusion-based decoding head.

- LLM: Qwen2.5 for this release.
- Tokenizers:
  - Acoustic Tokenizer: Based on a σ-VAE variant (proposed in LatentLM), with a mirror-symmetric encoder-decoder structure featuring 7 stages of modified Transformer blocks. Achieves 3200x downsampling from 24 kHz input. Encoder/decoder components are ~340M parameters each.
  - Semantic Tokenizer: Encoder mirrors the Acoustic Tokenizer's architecture (without VAE components). Trained with an ASR proxy task.
- Diffusion Head: Lightweight module (4 layers, ~600M parameters) conditioned on LLM hidden states. Predicts acoustic VAE features using a Denoising Diffusion Probabilistic Models (DDPM) process. Uses Classifier-Free Guidance (CFG) and DPM-Solver (and variants) during inference.
- Context Length: Trained with a curriculum increasing up to 32,768 tokens.
- Training Stages:
  - Tokenizer Pre-training: Acoustic and semantic tokenizers are pre-trained separately.
  - VibeVoice Training: Pre-trained tokenizers are frozen; only the LLM and diffusion head parameters are trained. A curriculum learning strategy is used for input sequence length (4K -> 16K -> 32K).

The text tokenizer is not explicitly specified, but the LLM (Qwen2.5) typically uses its own. Audio is "tokenized" via the acoustic and semantic tokenizers.

Models

| Model | Context Length | Generation Length | Weight |
|-------|----------------|-------------------|--------|
| VibeVoice-0.5B-Streaming | - | - | On the way |
| VibeVoice-1.5B | 64K | ~90 min | HF link |
| VibeVoice-Large | 32K | ~45 min | You are here. |

Responsible Usage

Direct intended uses: The VibeVoice model is limited to research use exploring highly realistic audio dialogue generation, as detailed in the tech report.

Out-of-scope uses: Use in any manner that violates applicable laws or regulations (including trade compliance laws); use in any other way that is prohibited by the MIT License; use to generate any text transcript. Furthermore, this release is not intended or licensed for any of the following scenarios:

- Voice impersonation without explicit, recorded consent: cloning a real individual's voice for satire, advertising, ransom, social engineering, or authentication bypass.
- Disinformation or impersonation: creating audio presented as genuine recordings of real people or events.
- Real-time or low-latency voice conversion: telephone or video-conference "live deep-fake" applications.
- Unsupported languages: the model is trained only on English and Chinese data; outputs in other languages are unsupported and may be unintelligible or offensive.
- Generation of background ambience, Foley, or music: VibeVoice is speech-only and will not produce coherent non-speech audio.

Risks and limitations

While efforts have been made to optimize the model through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model.

- Potential for deepfakes and disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
- English and Chinese only: Transcripts in languages other than English or Chinese may result in unexpected audio outputs.
- Non-speech audio: The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects.
- Overlapping speech: The current model does not explicitly model or generate overlapping speech segments in conversations.

Recommendations

We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.

To mitigate the risks of misuse, we have:

- Embedded an audible disclaimer (e.g. "This segment was generated by AI") automatically into every synthesized audio file.
- Added an imperceptible watermark to generated audio so third parties can verify VibeVoice provenance. Please see contact information at the end of this model card.
- Logged inference requests (hashed) for abuse pattern detection, publishing aggregated statistics quarterly.

Users are responsible for sourcing their datasets legally and ethically. This may include securing appropriate rights and/or anonymizing data prior to use with VibeVoice. Users are reminded to be mindful of data privacy concerns.

Contact

This project was conducted by members of Microsoft Research. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at [email protected]. If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.
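To make the next-token diffusion loop described above concrete, here is a toy, runnable sketch. Everything in it is a stand-in (a GRU cell for the Qwen2.5 backbone, an MLP for the diffusion head, a linear layer for the acoustic VAE decoder); it is not the VibeVoice API, and it only illustrates the control flow: the LLM's hidden state conditions a denoiser that emits one continuous acoustic latent per 7.5 Hz frame, which the decoder upsamples 3200x back to 24 kHz audio.

```python
import torch
import torch.nn as nn

LATENT_DIM, HIDDEN_DIM, FRAME_RATE_HZ = 64, 128, 7.5

llm = nn.GRUCell(LATENT_DIM, HIDDEN_DIM)   # stand-in for the Qwen2.5 backbone
diffusion_head = nn.Sequential(            # stand-in for the ~600M-param head
    nn.Linear(HIDDEN_DIM + LATENT_DIM, HIDDEN_DIM), nn.GELU(),
    nn.Linear(HIDDEN_DIM, LATENT_DIM),
)
vae_decoder = nn.Linear(LATENT_DIM, 3200)  # stand-in for 3200x upsampling to 24 kHz

def denoise(hidden: torch.Tensor, steps: int = 8) -> torch.Tensor:
    """One frame of DDPM-style sampling conditioned on the LLM state (toy schedule)."""
    x = torch.randn(LATENT_DIM)
    for _ in range(steps):
        x = x - 0.1 * diffusion_head(torch.cat([hidden, x]))  # crude denoising step
    return x

hidden = torch.zeros(HIDDEN_DIM)  # pretend the transcript has been prefilled
frames = []
for _ in range(int(FRAME_RATE_HZ * 2)):   # ~2 seconds => 15 latent frames
    frames.append(denoise(hidden))
    hidden = llm(frames[-1], hidden)      # feed the latent back, next-token style
waveform = torch.cat([vae_decoder(f) for f in frames])
print(waveform.shape)  # 15 frames x 3200 samples = torch.Size([48000]), i.e. 2 s at 24 kHz
```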

license:mit
3
0

recurrentgemma-9b-it-bnb-4bit

2
0

Qwen2.5-Coder-1.5B-Instruct-20241106

license:apache-2.0
2
0

chatterbox-f16-sf

09/04 🔥 Introducing Chatterbox Multilingual in 23 languages! We're excited to introduce Chatterbox and Chatterbox Multilingual, Resemble AI's production-grade open-source TTS models. Chatterbox Multilingual supports Arabic, Danish, German, Greek, English, Spanish, Finnish, French, Hebrew, Hindi, Italian, Japanese, Korean, Malay, Dutch, Norwegian, Polish, Portuguese, Russian, Swedish, Swahili, Turkish, and Chinese out of the box.

Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs and is consistently preferred in side-by-side evaluations. Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open-source TTS model to support emotion exaggeration control, a powerful feature that makes your voices stand out. Try it now on our Hugging Face Gradio app.

If you like the model but need to scale or tune it for higher accuracy, check out our competitively priced TTS service (link). It delivers reliable performance with ultra-low latency of sub-200ms, ideal for production use in agents, applications, or interactive media.

Key Details

- Multilingual, zero-shot TTS supporting 23 languages
- SoTA zero-shot English TTS
- 0.5B Llama backbone
- Unique exaggeration/intensity control
- Ultra-stable with alignment-informed inference
- Trained on 0.5M hours of cleaned data
- Watermarked outputs
- Easy voice conversion script
- Outperforms ElevenLabs

Tips

- General use (TTS and voice agents):
  - The default settings (`exaggeration=0.5`, `cfg=0.5`) work well for most prompts.
  - If the reference speaker has a fast speaking style, lowering `cfg` to around `0.3` can improve pacing.
- Expressive or dramatic speech:
  - Try lower `cfg` values (e.g. `~0.3`) and increase `exaggeration` to around `0.7` or higher.
  - Higher `exaggeration` tends to speed up speech; reducing `cfg` helps compensate with slower, more deliberate pacing.

Note: Ensure that the reference clip matches the specified language tag. Otherwise, language-transfer outputs may inherit the accent of the reference clip's language. To mitigate this, set the CFG weight to 0. (A usage sketch follows this card.)

Every audio file generated by Chatterbox includes Resemble AI's Perth (Perceptual Threshold) Watermarker: imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy.

Disclaimer: Don't use this model to do bad things. Prompts are sourced from freely available data on the internet.
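A minimal usage sketch, assuming the upstream `chatterbox-tts` Python package's documented pattern also applies to this f16 safetensors repack; the reference clip path is an assumption, and `cfg_weight` is the package's name for the `cfg` knob mentioned in the tips above:

```python
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS  # upstream package: pip install chatterbox-tts

model = ChatterboxTTS.from_pretrained(device="cuda")

text = "Chatterbox brings your content to life."
# Defaults work for most prompts; lower cfg_weight (~0.3) for fast reference
# speakers, raise exaggeration (~0.7+) for more dramatic delivery.
wav = model.generate(
    text,
    audio_prompt_path="reference_voice.wav",  # assumed local reference clip
    exaggeration=0.5,
    cfg_weight=0.5,
)
ta.save("output.wav", wav, model.sr)  # generated audio carries the Perth watermark
```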

license:mit
2
0

Qwen3-Next-70M-TinyStories

2
0

Llama-3-8B-Instruct-norefusal

llama
0
4

dolphin-2.9.1-mistral-22b

license:apache-2.0
0
2