01-ai

13 models

Yi-34B

Building the Next Generation of Open-Source and Bilingual LLMs

- What is Yi?
  - Introduction
  - Models
    - Chat models
    - Base models
    - Model info
  - News
- How to use Yi?
  - Quick start
    - Choose your path
    - pip
    - docker
    - llama.cpp
    - conda-lock
    - Web demo
  - Fine-tuning
  - Quantization
  - Deployment
  - FAQ
  - Learning hub
- Why Yi?
  - Ecosystem
    - Upstream
    - Downstream
      - Serving
      - Quantization
      - Fine-tuning
      - API
  - Benchmarks
    - Base model performance
    - Chat model performance
  - Tech report
    - Citation
- Who can use Yi?
- Misc.
  - Acknowledgements
  - Disclaimer
  - License

- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.
- 🙌 Targeted as a bilingual language model and trained on a 3T multilingual corpus, the Yi series models rank among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example:
  - The Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, and Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).
  - The Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, and Claude) in both English and Chinese on various benchmarks, including the Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).
- 🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, which reduce the effort required to build from scratch and enable the use of the same tools within the AI ecosystem. If you're interested in Yi's adoption of the Llama architecture and license usage policy, see Yi's relation with Llama. ⬇️

> 💡 TL;DR
>
> The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.

- Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
- Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models thanks to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
- Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
- However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.
- As Llama's structure is employed by the majority of open-source models, the key factors determining model performance are training datasets, training pipelines, and training infrastructure.
- Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up.
This effort has led to excellent performance, with the Yi series models ranking just behind GPT-4 and surpassing Llama on the AlpacaEval Leaderboard in December 2023.

- 🔥 2024-07-29: The Yi Cookbook 1.0 is released, featuring tutorials and examples in both Chinese and English.
- 🎯 2024-05-13: The Yi-1.5 series models are open-sourced, further improving coding, math, reasoning, and instruction-following abilities.
- 🎯 2024-03-16: The Yi-9B-200K is open-sourced and available to the public.
- 🔔 2024-03-07: The long-text capability of the Yi-34B-200K has been enhanced. In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance improved by 10.5 percentage points, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on a 5B-token long-context data mixture and demonstrate near-all-green performance.
- 🎯 2024-03-06: The Yi-9B is open-sourced and available to the public. Yi-9B stands out as the top performer among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5, and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.
- 🎯 2024-01-23: The Yi-VL models, Yi-VL-34B and Yi-VL-6B, are open-sourced and available to the public. Yi-VL-34B has ranked first among all existing open-source models in the latest benchmarks, including MMMU and CMMMU (based on data available up to January 2024).
- 🎯 2023-11-23: Chat models are open-sourced and available to the public. This release contains two chat models based on previously released base models, two 8-bit models quantized by GPTQ, and two 4-bit models quantized by AWQ.
  - `Yi-34B-Chat`
  - `Yi-34B-Chat-4bits`
  - `Yi-34B-Chat-8bits`
  - `Yi-6B-Chat`
  - `Yi-6B-Chat-4bits`
  - `Yi-6B-Chat-8bits`
- 🔔 2023-11-23: The Yi Series Models Community License Agreement is updated to v2.1.
- 🔥 2023-11-08: Invited test of the Yi-34B chat model. Application form:
- 🎯 2023-11-05: The base models Yi-6B-200K and Yi-34B-200K are open-sourced and available to the public. This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K.
- 🎯 2023-11-02: The base models Yi-6B and Yi-34B are open-sourced and available to the public. The first public release contains two bilingual (English/Chinese) base models with parameter sizes of 6B and 34B. Both of them are trained with a 4K sequence length and can be extended to 32K during inference time.

Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements. If you want to deploy Yi models, make sure you meet the software and hardware requirements.

| Model | Download |
|---|---|
| Yi-34B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-34B-Chat-4bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-34B-Chat-8bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-Chat-4bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-Chat-8bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |

- 4-bit series models are quantized by AWQ.
- 8-bit series models are quantized by GPTQ.
- All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., RTX 3090 and RTX 4090).
| Model | Download |
|---|---|
| Yi-34B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-34B-200K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-9B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-9B-200K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-6B-200K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |

- 200K is roughly equivalent to 400,000 Chinese characters.
- If you want to use the previous version of the Yi-34B-200K (released on Nov 5, 2023), run `git checkout 069cd341d60f4ce4b07ec394e82b79e94f656cf` to download the weights.

| Model | Intro | Default context window | Pretrained tokens | Training data date |
|---|---|---|---|---|
| 6B series models | They are suitable for personal and academic use. | 4K | 3T | Up to June 2023 |
| 9B series models | It is the best at coding and math in the Yi series models. Yi-9B is continuously trained based on Yi-6B, using 0.8T tokens. | 4K | — | — |
| 34B series models | They are suitable for personal, academic, and commercial (particularly for small and medium-sized enterprises) purposes. It's a cost-effective solution that's affordable and equipped with emergent ability. | 4K | 3T | Up to June 2023 |

For chat model limitations, see the explanations below. ⬇️

The released chat model has undergone exclusive training using Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher-quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.

However, this higher diversity might amplify certain existing issues, including:

- Hallucination: the model generates factually incorrect or nonsensical information. With the model's responses being more varied, there is a higher chance of hallucinations that are not based on accurate data or logical reasoning.
- Non-determinism in re-generation: when attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.
- Cumulative error: this occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks such as extended reasoning and mathematical problem-solving.

To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as `temperature`, `top_p`, or `top_k`. These adjustments help strike a balance between creativity and coherence in the model's outputs.
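As a rough illustration (the specific values are placeholders, not official recommendations), these parameters can be set through the Hugging Face Transformers `GenerationConfig`:

```python
# Illustrative sampling settings; tune per task. Assumes `model` and `inputs`
# are created as in the quick start below.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.6,   # lower values make outputs more deterministic
    top_p=0.8,         # nucleus sampling: sample from the top 80% probability mass
    top_k=40,          # consider only the 40 most likely next tokens
    max_new_tokens=256,
)
# outputs = model.generate(**inputs, generation_config=gen_config)
```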
- Quick start
  - Choose your path
  - pip
  - docker
  - conda-lock
  - llama.cpp
  - Web demo
- Fine-tuning
- Quantization
- Deployment
- FAQ
- Learning hub

> 💡 Tip: If you want to get started with the Yi model and explore different methods for inference, check out the Yi Cookbook.

Select one of the following paths to begin your journey with Yi!

- 🙋‍♀️ If you prefer to deploy Yi models locally and you have sufficient resources (for example, an NVIDIA A800 80GB), you can choose one of the following methods: pip, Docker, or conda-lock.
- 🙋‍♀️ If you prefer to deploy Yi models locally but have limited resources (for example, a MacBook Pro), you can use llama.cpp.

If you prefer not to deploy Yi models locally, you can explore Yi's capabilities using any of the following options.

If you want to explore more features of Yi, you can adopt one of these methods:

- Yi APIs (Yi official) - Early access has been granted to some applicants. Stay tuned for the next round of access!

If you want to chat with Yi with more customizable options (e.g., system prompt, temperature, repetition penalty, etc.), you can try one of the following options:

- Yi-34B-Chat-Playground (Yi official) - Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).

If you want to chat with Yi, you can use one of these online services, which offer a similar user experience:

- Yi-34B-Chat (Yi official on Hugging Face) - No registration is required.
- Yi-34B-Chat (Yi official beta) - Access is available through a whitelist. Welcome to apply (fill out a form in English or Chinese).

Quick start - pip

This tutorial guides you through every step of running Yi-34B-Chat locally on an A800 (80G) and then performing inference.

- Make sure Python 3.10 or a later version is installed.
- If you want to run other Yi models, see the software and hardware requirements.

To set up the environment and install the required packages, execute the following command. You can download the weights and tokenizer of Yi models from the download sources listed above.

You can perform inference with Yi chat or base models as below.

1. Create a file named `quickstart.py` and copy the following content to it (a representative sketch appears at the end of this section).
2. Run `quickstart.py`. Then you can see an output similar to the one below. 🥳

To perform inference with a Yi base model, the steps are similar. Then you can see an output similar to the one below. 🥳

Prompt: Let me tell you an interesting story about cat Tom and mouse Jerry,

Generation: Let me tell you an interesting story about cat Tom and mouse Jerry, which happened in my childhood. My father had a big house with two cats living inside it to kill mice. One day when I was playing at home alone, I found one of the tomcats lying on his back near our kitchen door, looking very much like he wanted something from us but couldn’t get up because there were too many people around him! He kept trying for several minutes before finally giving up...
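A minimal sketch of such a `quickstart.py`, using the standard Transformers chat-template API (the model path and generation length are placeholders to adapt):

```python
# quickstart.py - minimal chat inference sketch; adapt the model path as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "01-ai/Yi-34B-Chat"  # or a local directory containing the weights

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", torch_dtype="auto"
).eval()

messages = [{"role": "user", "content": "hi"}]
input_ids = tokenizer.apply_chat_template(
    conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```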
Run Yi-34B-Chat locally with Docker: a step-by-step guide. ⬇️

This tutorial guides you through every step of running Yi-34B-Chat locally on an A800 GPU or 4 x RTX 4090 GPUs and then performing inference.

Step 0: Prerequisites. Make sure you've installed Docker and nvidia-container-toolkit.

Step 1: Start Docker.

```bash
docker run -it --gpus all \
    -v <your-model-path>:/models \
    ghcr.io/01-ai/yi:latest
```

Alternatively, you can pull the Yi Docker image from `registry.lingyiwanwu.com/ci/01-ai/yi:latest`.

Step 2: Perform inference. You can perform inference with Yi chat or base models as below.

- Perform inference with the Yi chat model: the steps are similar to pip - Perform inference with Yi chat model. Note that the only difference is to set `model_path = '<your-model-mount-path>'` instead of `model_path = '<your-model-path>'`.
- Perform inference with the Yi base model: the steps are similar to pip - Perform inference with Yi base model. Note that the only difference is to set `--model <your-model-mount-path>` instead of `--model <your-model-path>`.

You can use conda-lock to generate fully reproducible lock files for conda environments. ⬇️ You can refer to conda-lock.yml for the exact versions of the dependencies. Additionally, you can utilize micromamba for installing these dependencies.

1. Install micromamba by following the instructions available here.
2. Execute `micromamba install -y -n yi -f conda-lock.yml` to create a conda environment named `yi` and install the necessary dependencies.

Quick start - llama.cpp

The following tutorial will guide you through every step of running a quantized model (Yi-chat-6B-2bits) locally and then performing inference.

Run Yi-chat-6B-2bits locally with llama.cpp: a step-by-step guide. ⬇️

- Step 0: Prerequisites
- Step 1: Download llama.cpp
- Step 2: Download Yi model
- Step 3: Perform inference

Step 0: Prerequisites

- This tutorial assumes you use a MacBook Pro with 16GB of memory and an Apple M2 Pro chip.
- Make sure `git-lfs` is installed on your machine.

Step 1: To clone the `llama.cpp` repository, run the following command.

Step 2: Download the Yi model.

2.1 To clone XeIaso/yi-chat-6B-GGUF with just pointers, run the following command.

2.2 To download a quantized Yi model (`yi-chat-6b.Q2_K.gguf`), run the following command.

Step 3: To perform inference with the Yi model, you can use one of the following methods.

To compile `llama.cpp` using 4 threads and then conduct inference, navigate to the `llama.cpp` directory, and run the following command.

> ##### Tips
>
> - Replace `/Users/yu/yi-chat-6B-GGUF/yi-chat-6b.Q2_K.gguf` with the actual path of your model.
> - By default, the model operates in completion mode.
> - For additional output customization options (for example, system prompt, temperature, repetition penalty, etc.), run `./main -h` to check detailed descriptions and usage.

Now you have successfully asked a question to the Yi model and got an answer! 🥳

1. To initialize a lightweight and swift chatbot, run the following command.
2. To access the chatbot interface, open your web browser and enter `http://0.0.0.0:8080` into the address bar.
3. Enter a question, such as "How do you feed your pet fox? Please answer this question in 6 simple steps", into the prompt window, and you will receive a corresponding answer.

You can build a web UI demo for Yi chat models (note that Yi base models are not supported in this scenario). To start a web service locally, run the following command. You can access the web UI by entering the address provided in the console into your browser.

Once finished, you can compare the fine-tuned model and the base model with the following command:

For advanced usage (like fine-tuning based on your custom data), see the explanations below. ⬇️

By default, we use a small dataset from BAAI/COIG to fine-tune the base model. You can also prepare your customized dataset in the following `jsonl` format (an illustrative example appears at the end of this section):

And then mount them in the container to replace the default ones:

For the Yi-6B model, a node with 4 GPUs, each with GPU memory larger than 60GB, is recommended. For the Yi-34B model, because the usage of the zero-offload technique consumes a lot of CPU memory, please be careful to limit the number of GPUs in the 34B fine-tune training. Please use `CUDA_VISIBLE_DEVICES` to limit the number of GPUs (as shown in `scripts/run_sft_Yi_34b.sh`). A typical hardware setup for fine-tuning the 34B model is a node with 8 GPUs (limited to 4 at runtime by `CUDA_VISIBLE_DEVICES=0,1,2,3`), each with GPU memory larger than 80GB, and total CPU memory larger than 900GB.

Download an LLM base model to `MODEL_PATH` (6B and 34B). A typical folder of models is like:

Download a dataset from Hugging Face to local storage `DATA_PATH`, e.g., Dahoas/rm-static.
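As referenced above, each line of the customized `jsonl` dataset is a JSON object with a prompt and a chosen response. The field names below follow the bundled example datasets; treat them as an assumption and verify against `finetune/yi_example_dataset`:

```json
{ "prompt": "Human: Who are you? Assistant:", "chosen": "I'm Yi." }
```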
`finetune/yi_example_dataset` has example datasets, which are modified from BAAI/COIG.

`cd` into the scripts folder, copy and paste the script, and run it. For example:

For the Yi-6B base model, setting `training_debug_steps=20` and `num_train_epochs=4` can output a chat model, which takes about 20 minutes. For the Yi-34B base model, it takes a relatively long time for initialization. Please be patient.

Then you'll see the answer from both the base model and the fine-tuned model.

Once finished, you can then evaluate the resulting model as follows:

GPTQ is a PTQ (Post-Training Quantization) method. It saves memory and provides potential speedups while retaining the accuracy of the model. Yi models can be GPTQ quantized without much effort. We provide a step-by-step tutorial below. To run GPTQ, we will use AutoGPTQ and ExLlama. Hugging Face Transformers has integrated optimum and auto-gptq to perform GPTQ quantization on language models.

The `quant_autogptq.py` script is provided for you to perform GPTQ quantization:

You can run a quantized model using `eval_quantized_model.py`:

Once finished, you can then evaluate the resulting model as follows:

AWQ is a PTQ (Post-Training Quantization) method. It's an efficient and accurate low-bit weight quantization (INT3/4) for LLMs. Yi models can be AWQ quantized without much effort. We provide a step-by-step tutorial below.

The `quant_autoawq.py` script is provided for you to perform AWQ quantization:

You can run a quantized model using `eval_quantized_model.py`. Illustrative sketches of both quantization flows appear below.
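The quantization scripts themselves are not reproduced here. As a hedged sketch of what such scripts typically do (the model IDs, bit widths, calibration dataset, and output paths are illustrative assumptions):

```python
# GPTQ sketch via the Transformers/optimum integration of auto-gptq.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_path = "01-ai/Yi-6B"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_path)
gptq_config = GPTQConfig(bits=8, dataset="c4", tokenizer=tokenizer)  # 8-bit, C4 calibration
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", quantization_config=gptq_config
)
model.save_pretrained("yi-6b-gptq-8bit")

# AWQ sketch via the AutoAWQ library.
from awq import AutoAWQForCausalLM

awq_model = AutoAWQForCausalLM.from_pretrained(model_path)
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
awq_model.quantize(tokenizer, quant_config=quant_config)
awq_model.save_quantized("yi-6b-awq-4bit")
```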
If you want to deploy Yi models, make sure you meet the software and hardware requirements.

Before using Yi quantized models, make sure you've installed the correct software listed below.

| Model | Software |
|---|---|
| Yi 4-bit quantized models | AWQ and CUDA |
| Yi 8-bit quantized models | GPTQ and CUDA |

Before deploying Yi in your environment, make sure your hardware meets the following requirements.

| Model | Minimum VRAM | Recommended GPU Example |
|:---|:---|:---|
| Yi-6B-Chat | 15 GB | 1 x RTX 3090 (24 GB), 1 x RTX 4090 (24 GB), 1 x A10 (24 GB), or 1 x A30 (24 GB) |
| Yi-6B-Chat-4bits | 4 GB | 1 x RTX 3060 (12 GB) or 1 x RTX 4060 (8 GB) |
| Yi-6B-Chat-8bits | 8 GB | 1 x RTX 3070 (8 GB) or 1 x RTX 4060 (8 GB) |
| Yi-34B-Chat | 72 GB | 4 x RTX 4090 (24 GB) or 1 x A800 (80 GB) |
| Yi-34B-Chat-4bits | 20 GB | 1 x RTX 3090 (24 GB), 1 x RTX 4090 (24 GB), 1 x A10 (24 GB), 1 x A30 (24 GB), or 1 x A100 (40 GB) |
| Yi-34B-Chat-8bits | 38 GB | 2 x RTX 3090 (24 GB), 2 x RTX 4090 (24 GB), or 1 x A800 (40 GB) |

Below are detailed minimum VRAM requirements under different batch use cases.

| Model | batch=1 | batch=4 | batch=16 | batch=32 |
|---|---|---|---|---|
| Yi-6B-Chat | 12 GB | 13 GB | 15 GB | 18 GB |
| Yi-6B-Chat-4bits | 4 GB | 5 GB | 7 GB | 10 GB |
| Yi-6B-Chat-8bits | 7 GB | 8 GB | 10 GB | 14 GB |
| Yi-34B-Chat | 65 GB | 68 GB | 76 GB | > 80 GB |
| Yi-34B-Chat-4bits | 19 GB | 20 GB | 30 GB | 40 GB |
| Yi-34B-Chat-8bits | 35 GB | 37 GB | 46 GB | 58 GB |

| Model | Minimum VRAM | Recommended GPU Example |
|:---|:---|:---|
| Yi-6B | 15 GB | 1 x RTX 3090 (24 GB), 1 x RTX 4090 (24 GB), 1 x A10 (24 GB), or 1 x A30 (24 GB) |
| Yi-6B-200K | 50 GB | 1 x A800 (80 GB) |
| Yi-9B | 20 GB | 1 x RTX 4090 (24 GB) |
| Yi-34B | 72 GB | 4 x RTX 4090 (24 GB) or 1 x A800 (80 GB) |
| Yi-34B-200K | 200 GB | 4 x A800 (80 GB) |

If you have any questions while using the Yi series models, the answers provided below could serve as a helpful reference for you. ⬇️

💡 Fine-tuning

- Base model or chat model - which to fine-tune? The choice of pre-trained language model for fine-tuning hinges on the computational resources you have at your disposal and the particular demands of your task.
  - If you are working with a substantial volume of fine-tuning data (say, over 10,000 samples), the base model could be your go-to choice.
  - On the other hand, if your fine-tuning data is not quite as extensive, opting for the chat model might be a more fitting choice.
  - It is generally advisable to fine-tune both the base and chat models, compare their performance, and then pick the model that best aligns with your specific requirements.
- Yi-34B versus Yi-34B-Chat for full-scale fine-tuning - what is the difference? The key distinction between full-scale fine-tuning on `Yi-34B` and `Yi-34B-Chat` comes down to the fine-tuning approach and outcomes.
  - Yi-34B-Chat employs a Supervised Fine-Tuning (SFT) method, resulting in responses that mirror human conversation style more closely.
  - The base model's fine-tuning is more versatile, with a relatively high performance potential.
  - If you are confident in the quality of your data, fine-tuning with `Yi-34B` could be your go-to.
  - If you are aiming for model-generated responses that better mimic human conversational style, or if you have doubts about your data quality, `Yi-34B-Chat` might be your best bet.

💡 Quantization

- Quantized model versus original model - what is the performance gap?
  - The performance variance largely depends on the quantization method employed and the specific use cases of these models. For instance, for the models provided by the AWQ official, from a benchmark standpoint, quantization might result in a minor performance drop of a few percentage points.
  - Subjectively speaking, in situations like logical reasoning, even a 1% performance shift could impact the accuracy of the output results.

💡 General

- Where can I source fine-tuning question answering datasets?
  - You can find fine-tuning question answering datasets on platforms like Hugging Face, with datasets like m-a-p/COIG-CQIA readily available.
  - Additionally, GitHub offers fine-tuning frameworks, such as hiyouga/LLaMA-Factory, which integrates pre-made datasets.
- What is the GPU memory requirement for fine-tuning Yi-34B FP16? The GPU memory needed for fine-tuning 34B FP16 hinges on the specific fine-tuning method employed. For full-parameter fine-tuning, you'll need 8 GPUs, each with 80 GB; however, more economical solutions like LoRA require less. For more details, check out hiyouga/LLaMA-Factory. Also, consider using BF16 instead of FP16 for fine-tuning to optimize performance.
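As a hedged sketch of the LoRA route mentioned above (the model name, rank, and target modules are illustrative assumptions, not recommendations from this FAQ):

```python
# Minimal LoRA setup sketch with the PEFT library; hyperparameters are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B", torch_dtype="auto", device_map="auto"
)
lora_config = LoraConfig(
    r=16,                     # adapter rank: small r keeps trainable parameters (and VRAM) low
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```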
- Are there any third-party platforms that support chat functionality for the Yi-34B-200K model? If you're looking for third-party chat platforms, options include fireworks.ai.

If you want to learn Yi, you can find a wealth of helpful educational resources here. ⬇️

Whether you're a seasoned developer or a newcomer, you can find a wealth of helpful educational resources to enhance your understanding and skills with Yi models, including insightful blog posts, comprehensive video tutorials, hands-on guides, and more. The content you find here has been generously contributed by knowledgeable Yi experts and passionate enthusiasts. We extend our heartfelt gratitude for your invaluable contributions! At the same time, we also warmly invite you to join our collaborative effort by contributing to Yi. If you have already made contributions to Yi, please don't hesitate to showcase your remarkable work in the table below. With all these resources at your fingertips, you're ready to start your exciting journey with Yi. Happy learning! 🥳

| Deliverable | Date | Author |
| --- | --- | --- |
| 使用 Dify、Meilisearch、零一万物模型实现最简单的 RAG 应用(三):AI 电影推荐 | 2024-05-20 | 苏洋 |
| 使用autodl服务器,在A40显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度18 words/s | 2024-05-20 | fly-iot |
| Yi-VL 最佳实践 | 2024-05-20 | ModelScope |
| 一键运行零一万物新鲜出炉Yi-1.5-9B-Chat大模型 | 2024-05-13 | Second State |
| 零一万物开源Yi-1.5系列大模型 | 2024-05-13 | 刘聪 |
| 零一万物Yi-1.5系列模型发布并开源!34B-9B-6B 多尺寸,魔搭社区推理微调最佳实践教程来啦! | 2024-05-13 | ModelScope |
| Yi-34B 本地部署简单测试 | 2024-05-13 | 漆妮妮 |
| 驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(上) | 2024-05-13 | Words worth |
| 驾辰龙跨Llama持Wasm,玩转Yi模型迎新春过大年(下篇) | 2024-05-13 | Words worth |
| Ollama新增两个命令,开始支持零一万物Yi-1.5系列模型 | 2024-05-13 | AI工程师笔记 |
| 使用零一万物 200K 模型和 Dify 快速搭建模型应用 | 2024-05-13 | 苏洋 |
| (持更) 零一万物模型折腾笔记:社区 Yi-34B 微调模型使用 | 2024-05-13 | 苏洋 |
| Python+ERNIE-4.0-8K-Yi-34B-Chat大模型初探 | 2024-05-11 | 江湖评谈 |
| 技术布道 Vue及Python调用零一万物模型和Prompt模板(通过百度千帆大模型平台) | 2024-05-11 | MumuLab |
| 多模态大模型Yi-VL-plus体验 效果很棒 | 2024-04-27 | 大家好我是爱因 |
| 使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,并使用vllm优化加速,显存占用42G,速度23 words/s | 2024-04-27 | fly-iot |
| Getting Started with Yi-1.5-9B-Chat | 2024-04-27 | Second State |
| 基于零一万物yi-vl-plus大模型简单几步就能批量生成Anki图片笔记 | 2024-04-24 | 正经人王同学 |
| 【AI开发:语言】一、Yi-34B超大模型本地部署CPU和GPU版 | 2024-04-21 | My的梦想已实现 |
| 【Yi-34B-Chat-Int4】使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s,vllm要求算力在7以上的显卡就可以 | 2024-03-22 | fly-iot |
| 零一万物大模型部署+微调总结 | 2024-03-22 | v_wus |
| 零一万物Yi大模型vllm推理时Yi-34B或Yi-6b-chat重复输出的解决方案 | 2024-03-02 | 郝铠锋 |
| Yi-34B微调训练 | 2024-03-02 | lsjlnd |
| 实测零一万物Yi-VL多模态语言模型:能准确“识图吃瓜” | 2024-02-02 | 苏洋 |
| 零一万物开源Yi-VL多模态大模型,魔搭社区推理&微调最佳实践来啦! | 2024-01-26 | ModelScope |
| 单卡 3 小时训练 Yi-6B 大模型 Agent:基于 Llama Factory 实战 | 2024-01-22 | 郑耀威 |
| 零一科技Yi-34B Chat大模型环境搭建&推理 | 2024-01-15 | 要养家的程序员 |
| 基于LLaMA Factory,单卡3小时训练专属大模型 Agent | 2024-01-15 | 机器学习社区 |
| 双卡 3080ti 部署 Yi-34B 大模型 - Gradio + vLLM 踩坑全记录 | 2024-01-02 | 漆妮妮 |
| 【大模型部署实践-3】3个能在3090上跑起来的4bits量化Chat模型(baichuan2-13b、InternLM-20b、Yi-34b) | 2024-01-02 | aq_Seabiscuit |
| 只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型 | 2023-12-28 | 漆妮妮 |
| 零一万物模型官方 Yi-34B 模型本地离线运行部署使用笔记(物理机和docker两种部署方式),200K 超长文本内容,34B 干翻一众 70B 模型,打榜分数那么高,这模型到底行不行? | 2023-12-28 | 代码讲故事 |
| LLM - 大模型速递之 Yi-34B 入门与 LoRA 微调 | 2023-12-18 | BIT666 |
| 通过vllm框架进行大模型推理 | 2023-12-18 | 土山炮 |
| CPU 混合推理,非常见大模型量化方案:“二三五六” 位量化方案 | 2023-12-12 | 苏洋 |
| 零一万物模型折腾笔记:官方 Yi-34B 模型基础使用 | 2023-12-10 | 苏洋 |
| Running Yi-34B-Chat locally using LlamaEdge | 2023-11-30 | Second State |
| 本地运行零一万物 34B 大模型,使用 Llama.cpp & 21G 显存 | 2023-11-26 | 苏洋 |

| Deliverable | Date | Author |
| --- | --- | --- |
| yi-openai-proxy | 2024-05-11 | 苏洋 |
| 基于零一万物 Yi 模型和 B 站构建大语言模型高质量训练数据集 | 2024-04-29 | 正经人王同学 |
| 基于视频网站和零一万物大模型构建大语言模型高质量训练数据集 | 2024-04-25 | 正经人王同学 |
| 基于零一万物yi-34b-chat-200k输入任意文章地址,点击按钮即可生成无广告或推广内容的简要笔记,并生成分享图给好友 | 2024-04-24 | 正经人王同学 |
| Food-GPT-Yi-model | 2024-04-21 | Hubert S |

| Deliverable | Date | Author |
| --- | --- | --- |
| Run dolphin-2.2-yi-34b on IoT Devices | 2023-11-30 | Second State |
| 只需 24G 显存,用 vllm 跑起来 Yi-34B 中英双语大模型 | 2023-12-28 | 漆妮妮 |
| Install Yi 34B Locally - Chinese English Bilingual LLM | 2023-11-05 | Fahd Mirza |
| Dolphin Yi 34b - Brand New Foundational Model TESTED | 2023-11-27 | Matthew Berman |
| Yi-VL-34B 多模态大模型 - 用两张 A40 显卡跑起来 | 2024-01-28 | 漆妮妮 |
| 4060Ti 16G显卡安装零一万物最新开源的Yi-1.5版大语言模型 | 2024-05-14 | titan909 |
| Yi-1.5: True Apache 2.0 Competitor to LLAMA-3 | 2024-05-13 | Prompt Engineering |
| Install Yi-1.5 Model Locally - Beats Llama 3 in Various Benchmarks | 2024-05-13 | Fahd Mirza |
| how to install Ollama and run Yi 6B | 2024-05-13 | Ridaa Davids |
| 地表最强混合智能AI助手:llama3_70B+Yi_34B+Qwen1.5_110B | 2024-05-04 | 朱扎特 |
| ChatDoc学术论文辅助--基于Yi-34B和langchain进行PDF知识库问答 | 2024-05-03 | 朱扎特 |
| 基于Yi-34B的领域知识问答项目演示 | 2024-05-02 | 朱扎特 |
| 使用RTX4090+GaLore算法 全参微调Yi-6B大模型 | 2024-03-24 | 小工蚂创始人 |
| 无内容审查NSFW大语言模型Yi-34B-Chat蒸馏版测试,RolePlay,《天龙八部》马夫人康敏,本地GPU,CPU运行 | 2024-03-20 | 刘悦的技术博客 |
| 无内容审查NSFW大语言模型整合包,Yi-34B-Chat,本地CPU运行,角色扮演潘金莲 | 2024-03-16 | 刘悦的技术博客 |
| 量化 Yi-34B-Chat 并在单卡 RTX 4090 使用 vLLM 部署 | 2024-03-05 | 白鸽巢 |
| Yi-VL-34B(5):使用3个3090显卡24G版本,运行Yi-VL-34B模型,支持命令行和web界面方式,理解图片的内容转换成文字 | 2024-02-27 | fly-iot |
| Win环境KoboldCpp本地部署大语言模型进行各种角色扮演游戏 | 2024-02-25 | 魚蟲蟲 |
| 无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P2 | 2024-02-23 | 魚蟲蟲 |
| 【wails】(2):使用go-llama.cpp 运行 yi-01-6b大模型,使用本地CPU运行,速度还可以,等待下一版本更新 | 2024-02-20 | fly-iot |
| 【xinference】(6):在autodl上,使用xinference部署yi-vl-chat和qwen-vl-chat模型,可以使用openai调用成功 | 2024-02-06 | fly-iot |
| 无需显卡本地部署Yi-34B-Chat进行角色扮演游戏 P1 | 2024-02-05 | 魚蟲蟲 |
| 2080Ti部署YI-34B大模型 xinference-oneapi-fastGPT本地知识库使用指南 | 2024-01-30 | 小饭护法要转码 |
| Best Story Writing AI Model - Install Yi 6B 200K Locally on Windows | 2024-01-22 | Fahd Mirza |
| Mac 本地运行大语言模型方法与常见问题指南(Yi 34B 模型+32 GB 内存测试) | 2024-01-21 | 小吴苹果机器人 |
| 【Dify知识库】(11):Dify0.4.9改造支持MySQL,成功接入yi-6b 做对话,本地使用fastchat启动,占8G显存,完成知识库配置 | 2024-01-21 | fly-iot |
| 这位LLM先生有点暴躁,用的是YI-6B的某个量化版,#LLM #大语言模型 #暴躁老哥 | 2024-01-20 | 晓漫吧 |
| 大模型推理 NvLink 桥接器有用吗\|双卡 A6000 测试一下 | 2024-01-17 | 漆妮妮 |
| 大模型推理 A40 vs A6000 谁更强 - 对比 Yi-34B 的单、双卡推理性能 | 2024-01-15 | 漆妮妮 |
| C-Eval 大语言模型评测基准 - 用 LM Evaluation Harness + vLLM 跑起来 | 2024-01-11 | 漆妮妮 |
| 双显卡部署 Yi-34B 大模型 - vLLM + Gradio 踩坑记录 | 2024-01-01 | 漆妮妮 |
| 手把手教学!使用 vLLM 快速部署 Yi-34B-Chat | 2023-12-26 | 白鸽巢 |
| 如何训练企业自己的大语言模型?Yi-6B LORA微调演示 #小工蚁 | 2023-12-21 | 小工蚂创始人 |
| Yi-34B(4):使用4个2080Ti显卡11G版本,运行Yi-34B模型,5年前老显卡是支持的,可以正常运行,速度 21 words/s | 2023-12-02 | fly-iot |
| 使用autodl服务器,RTX 3090 * 3 显卡上运行, Yi-34B-Chat模型,显存占用60G | 2023-12-01 | fly-iot |
| 使用autodl服务器,两个3090显卡上运行, Yi-34B-Chat-int4模型,用vllm优化,增加 --num-gpu 2,速度23 words/s | 2023-12-01 | fly-iot |
| Yi大模型一键本地部署 技术小白玩转AI | 2023-12-01 | 技术小白玩转AI |
| 01.AI's Yi-6B: Overview and Fine-Tuning | 2023-11-28 | AI Makerspace |
| Yi 34B Chat LLM outperforms Llama 70B | 2023-11-27 | DLExplorer |
| How to run open source models on mac Yi 34b on m3 Max | 2023-11-26 | TECHNO PREMIUM |
| Yi-34B - 200K - The BEST & NEW CONTEXT WINDOW KING | 2023-11-24 | Prompt Engineering |
| Yi 34B : The Rise of Powerful Mid-Sized Models - Base,200k & Chat | 2023-11-24 | Sam Witteveen |
| 在IoT设备运行破解版李开复大模型dolphin-2.2-yi-34b(还可作为私有OpenAI API服务器) | 2023-11-15 | Second State |
| Run dolphin-2.2-yi-34b on IoT Devices (Also works as a Private OpenAI API Server) | 2023-11-14 | Second State |
| How to Install Yi 34B 200K Llamafied on Windows Laptop | 2023-11-11 | Fahd Mirza |

- Ecosystem
  - Upstream
  - Downstream
    - Serving
    - Quantization
    - Fine-tuning
    - API
- Benchmarks
  - Chat model performance
  - Base model performance
    - Yi-34B and Yi-34B-200K
    - Yi-9B

Yi has a comprehensive ecosystem, offering a range of tools, services, and models to enrich your experiences and maximize productivity.

The Yi series models follow the same model architecture as Llama. By choosing Yi, you can leverage existing tools, libraries, and resources within the Llama ecosystem, eliminating the need to create new tools and enhancing development efficiency. For example, the Yi series models are saved in the format of the Llama model, so you can directly use `LlamaForCausalLM` and `LlamaTokenizer` to load the model. For more information, see Use the chat model.

> 💡 Tip
>
> - Feel free to create a PR and share the fantastic work you've built using the Yi series models.
> - To help others quickly understand your work, it is recommended to describe it in a brief, consistent format.

If you want to get up and running with Yi in a few minutes, you can use the following services built upon Yi.

- Yi-34B-Chat: you can chat with Yi using one of the following platforms:
  - Yi-34B-Chat | Hugging Face
  - Yi-34B-Chat | Yi Platform: note that it is currently available through a whitelist. Welcome to apply (fill out a form in English or Chinese) and experience it firsthand!
- Yi-6B-Chat (Replicate): you can use this model with more options by setting additional parameters and calling APIs.
- ScaleLLM: you can use this service to run Yi models locally with added flexibility and customization.

If you have limited computational capabilities, you can use Yi's quantized models as follows. These quantized models have reduced precision but offer increased efficiency, such as faster inference speed and smaller RAM usage.

- TheBloke/Yi-34B-GPTQ
- TheBloke/Yi-34B-GGUF
- TheBloke/Yi-34B-AWQ

If you're seeking to explore the diverse capabilities within Yi's thriving family, you can delve into Yi's fine-tuned models as below.

- TheBloke Models: this site hosts numerous fine-tuned models derived from various LLMs, including Yi. This is not an exhaustive list for Yi, but to name a few sorted by downloads:
  - TheBloke/dolphin-2_2-yi-34b-AWQ
  - TheBloke/Yi-34B-Chat-AWQ
  - TheBloke/Yi-34B-Chat-GPTQ
- SUSTech/SUS-Chat-34B: this model ranked first among all models below 70B and outperformed deepseek-llm-67b-chat, a model twice its size. You can check the result on the Open LLM Leaderboard.
- OrionStarAI/OrionStar-Yi-34B-Chat-Llama: this model excelled beyond other models (such as GPT-4, Qwen-14B-Chat, and Baichuan2-13B-Chat) in C-Eval and CMMLU evaluations on the OpenCompass LLM Leaderboard.
- NousResearch/Nous-Capybara-34B: this model is trained with 200K context length and 3 epochs on the Capybara dataset.

- amazing-openai-api: this tool converts Yi model APIs into the OpenAI API format out of the box.
- LlamaEdge: this tool builds an OpenAI-compatible API server for Yi-34B-Chat using a portable Wasm (WebAssembly) file, powered by Rust.

For detailed capabilities of the Yi series models, see Yi: Open Foundation Models by 01.AI.

The Yi-34B-Chat model demonstrates exceptional performance, ranking first among all existing open-source models on benchmarks including MMLU, CMMLU, BBH, GSM8k, and more.

- Evaluation methods: we evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA.
- Zero-shot vs. few-shot: in chat models, the zero-shot approach is more commonly employed.
- Evaluation strategy: our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text.
- Challenges faced: some models are not well-suited to produce output in the specific format required by instructions in a few datasets, which leads to suboptimal results.

Note: C-Eval results are evaluated on the validation datasets.

The Yi-34B and Yi-34B-200K models stand out as the top performers among open-source models, especially excelling in MMLU, CMMLU, common-sense reasoning, reading comprehension, and more.

- Disparity in results: while benchmarking open-source models, a disparity has been noted between results from our pipeline and those reported by public sources such as OpenCompass.
- Investigation findings: a deeper investigation reveals that variations in prompts, post-processing strategies, and sampling techniques across models may lead to significant outcome differences.
- Uniform benchmarking process: our methodology aligns with the original benchmarks: consistent prompts and post-processing strategies are used, and greedy decoding is applied during evaluations without any post-processing of the generated content.
- Efforts to retrieve unreported scores: for scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline.
- Extensive model evaluation: to evaluate the model's capability extensively, we adopted the methodology outlined in Llama 2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common-sense reasoning. SQuAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension.
- Special configurations: CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code".
- Falcon-180B caveat: Falcon-180B was not tested on QuAC and OBQA due to technical constraints. Its performance score is an average from the other tasks, and considering the generally lower scores of these two tasks, Falcon-180B's capabilities are likely not underestimated.
Yi-9B is nearly the best among a range of similar-sized open-source models (including Mistral-7B, SOLAR-10.7B, Gemma-7B, DeepSeek-Coder-7B-Base-v1.5, and more), particularly excelling in code, math, common-sense reasoning, and reading comprehension.

- In terms of overall ability (Mean-All), Yi-9B performs the best among similarly sized open-source models, surpassing DeepSeek-Coder, DeepSeek-Math, Mistral-7B, SOLAR-10.7B, and Gemma-7B.
- In terms of coding ability (Mean-Code), Yi-9B's performance is second only to DeepSeek-Coder-7B, surpassing Yi-34B, SOLAR-10.7B, Mistral-7B, and Gemma-7B.
- In terms of math ability (Mean-Math), Yi-9B's performance is second only to DeepSeek-Math-7B, surpassing SOLAR-10.7B, Mistral-7B, and Gemma-7B.
- In terms of common sense and reasoning ability (Mean-Text), Yi-9B's performance is on par with Mistral-7B, SOLAR-10.7B, and Gemma-7B.

The code and weights of the Yi series models are distributed under the Apache 2.0 license, which means the Yi series models are free for personal usage, academic purposes, and commercial use.

A heartfelt thank you to each of you who have made contributions to the Yi community! You have helped make Yi not just a project, but a vibrant, growing home for innovation.

We use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability. Due to complex data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns.

The code and weights of the Yi-1.5 series models are distributed under the Apache 2.0 license. If you create derivative works based on this model, please include the following attribution in your derivative works:

This work is a derivative of [The Yi Series Model You Base On] by 01.AI, used under the Apache 2.0 License.

llama
11,510
1,299

Yi-1.5-34B-32K

Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.

| Model | Context Length | Pre-trained Tokens |
| :---: | :---: | :---: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |

| Name | Download |
| --- | --- |
| Yi-1.5-34B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-34B-Chat-16K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B-Chat-16K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-6B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |

| Name | Download |
| --- | --- |
| Yi-1.5-34B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-34B-32K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B-32K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-6B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |

Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks. Yi-1.5-9B-Chat is the top performer among similarly sized open-source models. Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks. Yi-1.5-9B is the top performer among similarly sized open-source models.

For getting up and running with Yi-1.5 models quickly, see the README.

llama
10,073
37

Yi-1.5-9B

Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.

| Model | Context Length | Pre-trained Tokens |
| :---: | :---: | :---: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |

| Name | Download |
| --- | --- |
| Yi-1.5-34B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-34B-Chat-16K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B-Chat-16K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-6B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |

| Name | Download |
| --- | --- |
| Yi-1.5-34B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-34B-32K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B-32K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-6B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |

Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks. Yi-1.5-9B-Chat is the top performer among similarly sized open-source models. Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks. Yi-1.5-9B is the top performer among similarly sized open-source models.

For getting up and running with Yi-1.5 models quickly, see the README.

llama
9,405
51

Yi-Coder-9B

llama
9,393
44

Yi-1.5-6B-Chat

Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.

| Model | Context Length | Pre-trained Tokens |
| :---: | :---: | :---: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |

| Name | Download |
| --- | --- |
| Yi-1.5-34B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-34B-Chat-16K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B-Chat-16K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-6B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |

| Name | Download |
| --- | --- |
| Yi-1.5-34B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-34B-32K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-9B-32K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-1.5-6B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |

Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks. Yi-1.5-9B-Chat is the top performer among similarly sized open-source models. Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks. Yi-1.5-9B is the top performer among similarly sized open-source models.

For getting up and running with Yi-1.5 models quickly, see the README.

llama
1,848
42

Yi-6B-Chat-4bits

llama
921
22

Yi-VL-6B

license:apache-2.0
550
124

Yi-Coder-1.5B-Chat

llama
356
40

Yi-6B-Chat-8bits

llama
334
9

Yi-VL-34B

- What is Yi-VL?
  - Overview
  - Models
  - Features
  - Architecture
  - Training
  - Limitations
- Why Yi-VL?
  - Tech report
  - Benchmarks
  - Showcases
- How to use Yi-VL?
  - Quick start
  - Hardware requirements
- Misc.
  - Acknowledgements and attributions
    - List of used open-source projects
  - License

- The Yi Vision Language (Yi-VL) model is the open-source, multimodal version of the Yi Large Language Model (LLM) series, enabling content comprehension, recognition, and multi-round conversations about images.
- Yi-VL demonstrates exceptional performance, ranking first among all existing open-source models in the latest benchmarks, including MMMU in English and CMMMU in Chinese (based on data available up to January 2024).
- Yi-VL-34B is the first open-source 34B vision language model worldwide.

| Model | Download |
|---|---|
| Yi-VL-34B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
| Yi-VL-6B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |

- Multi-round text-image conversations: Yi-VL can take both text and images as inputs and produce text outputs. Currently, it supports multi-round visual question answering with one image.
- Bilingual text support: Yi-VL supports conversations in both English and Chinese, including text recognition in images.
- Strong image comprehension: Yi-VL is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images.
- Fine-grained image resolution: Yi-VL supports image understanding at a higher resolution of 448×448.

Yi-VL adopts the LLaVA architecture, which is composed of three primary components:

- Vision Transformer (ViT): initialized with the CLIP ViT-H/14 model and used for image encoding.
- Projection Module: designed to align image features with the text feature space, consisting of a two-layer Multilayer Perceptron (MLP) with layer normalizations.
- Large Language Model (LLM): initialized with Yi-34B-Chat or Yi-6B-Chat, demonstrating exceptional proficiency in understanding and generating both English and Chinese.

Yi-VL is trained to align visual information well to the semantic space of the Yi LLM, and undergoes a comprehensive three-stage training process:

- Stage 1: The parameters of the ViT and the projection module are trained using an image resolution of 224×224. The LLM weights are frozen. The training leverages an image caption dataset comprising 100 million image-text pairs from LAION-400M. The primary objective is to enhance the ViT's knowledge acquisition within our specified architecture and to achieve better alignment between the ViT and the LLM.
- Stage 2: The image resolution of the ViT is scaled up to 448×448, and the parameters of the ViT and the projection module are trained. This aims to further boost the model's capability for discerning intricate visual details. The dataset used in this stage includes about 25 million image-text pairs, such as LAION-400M, CLLaVA, LLaVAR, Flickr, VQAv2, RefCOCO, Visual7w, and so on.
- Stage 3: The parameters of the entire model (that is, the ViT, the projection module, and the LLM) are trained. The primary goal is to enhance the model's proficiency in multimodal chat interactions, thereby endowing it with the ability to seamlessly integrate and interpret visual and linguistic inputs. To this end, the training dataset encompasses a diverse range of sources, totalling approximately 1 million image-text pairs, including GQA, VizWiz VQA, TextCaps, OCR-VQA, Visual Genome, LAION GPT4V, and so on. To ensure data balancing, we impose a cap on the maximum data contribution from any single source, restricting it to no more than 50,000 pairs.
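To make the three-component layout concrete, here is a conceptual sketch of the projection module and the image-text sequence assembly. The dimensions and activation function are assumptions for illustration, not the actual LLaVA-based implementation:

```python
# Conceptual sketch of Yi-VL's LLaVA-style pipeline (ViT -> projection MLP -> LLM).
import torch
import torch.nn as nn

class ProjectionModule(nn.Module):
    """Two-layer MLP with layer normalization, aligning image features
    with the text feature space (as described above)."""
    def __init__(self, vision_dim: int = 1280, text_dim: int = 7168):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_dim, text_dim),
            nn.LayerNorm(text_dim),
            nn.GELU(),                    # activation choice is an assumption
            nn.Linear(text_dim, text_dim),
            nn.LayerNorm(text_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim) from the CLIP ViT
        return self.net(image_features)

# The projected image tokens are concatenated with the text embeddings and fed
# to the Yi LLM as one sequence (shapes below are placeholders).
proj = ProjectionModule()
image_features = torch.randn(1, 257, 1280)   # placeholder ViT output
text_embeddings = torch.randn(1, 32, 7168)   # placeholder text embeddings
llm_inputs = torch.cat([proj(image_features), text_embeddings], dim=1)
print(llm_inputs.shape)  # torch.Size([1, 289, 7168])
```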
Below are the parameters configured for each stage.

| Stage | Global batch size | Learning rate | Gradient clip | Epochs |
|---|---|---|---|---|
| Stage 1, 2 | 4096 | 1e-4 | 0.5 | 1 |
| Stage 3 | 256 | 2e-5 | 1.0 | 2 |

- The training consumes 128 NVIDIA A800 (80G) GPUs.
- The total training time amounted to approximately 10 days for Yi-VL-34B and 3 days for Yi-VL-6B.

This is the initial release of Yi-VL, which comes with some known limitations. It is recommended to carefully evaluate potential risks before adopting any models.

- Visual question answering is supported. Other features like text-to-3D and image-to-video are not yet supported.
- A single image, rather than several images, can be accepted as input.
- There is a certain possibility of generating content that does not exist in the image.
- In scenes containing multiple objects, some objects might be incorrectly identified or described with insufficient detail.
- Yi-VL is trained on images with a resolution of 448×448. During inference, inputs of any resolution are resized to 448×448. Low-resolution images may result in information loss, and more fine-grained images (above 448×448) do not bring in extra knowledge.

For detailed capabilities of the Yi series models, see Yi: Open Foundation Models by 01.AI.

Yi-VL outperforms all existing open-source models in MMMU and CMMMU, two advanced benchmarks that include massive multi-discipline multimodal questions (based on data available up to January 2024).

Below are some representative examples of detailed description and visual question answering, showcasing the capabilities of Yi-VL.

For model inference, the recommended GPU examples are:

This project makes use of open-source software/components. We acknowledge and are grateful to these developers for their contributions to the open-source community.

1. LLaVA
   - Authors: Haotian Liu, Chunyuan Li, Qingyang Wu, Yuheng Li, and Yong Jae Lee
   - Source: https://github.com/haotian-liu/LLaVA
   - License: Apache-2.0 license
   - Description: The codebase is based on LLaVA code.
2. OpenClip
   - Authors: Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt
   - Source: https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K
   - License: MIT
   - Description: The ViT is initialized using the weights of OpenClip.

- This attribution does not claim to cover all open-source components used. Please check individual components and their respective licenses for full details.
- The use of the open-source components is subject to the terms and conditions of the respective licenses.

We appreciate the open-source community for their invaluable contributions to the technology world. Please refer to the acknowledgements and attributions, as well as individual components, for the license of the source code.

The Yi series models are fully open for academic research and free for commercial use, with permissions automatically granted upon application. For free commercial use, you only need to send an email to get official commercial permission.

license:apache-2.0
286
263

Yi-Coder-1.5B

llama
241
22

Yi-34B-Chat-8bits

llama
94
28

Yi-34B-Chat-4bits

llama
93
60