ZuluVision

3 models

MoviiGen1.1

[šŸ¤— Hugging Face](https://huggingface.co/ZuluVision/MoviiGen1.1) [GitHub](https://github.com/ZulutionAI/MoviiGen1.1/stargazers)

MoviiGen 1.1: Towards Cinematic-Quality Video Generative Models

MoviiGen 1.1 is a cutting-edge video generation model that excels in cinematic aesthetics and visual quality, fine-tuned from Wan2.1. In comprehensive evaluations by 11 professional filmmakers and AIGC creators, including industry experts, across 60 aesthetic dimensions, MoviiGen 1.1 demonstrates superior performance in key cinematic aspects:

- šŸ‘ Superior Cinematic Aesthetics: MoviiGen 1.1 outperforms competitors in three critical dimensions: atmosphere creation, camera movement, and object detail preservation, making it the preferred choice for professional cinematic applications.
- šŸ‘ Visual Coherence & Quality: MoviiGen 1.1 excels in clarity (+14.6%) and realism (+4.3%), making it ideal for high-fidelity scenarios such as real-scene conversion and portrait detail. Wan2.1 stands out in smoothness and overall visual harmony, making it better suited for tasks emphasizing composition, coherence, and artistic style. The two models have close overall scores, so users can select MoviiGen 1.1 for clarity and realism, or Wan2.1 for style and structural consistency.
- šŸ‘ Comprehensive Visual Capabilities: MoviiGen 1.1 performs stably in complex visual scenarios, ensuring consistent subject and scene representation while maintaining high-quality motion dynamics.
- šŸ‘ High-Quality Output: The model generates videos with exceptional clarity and detail at both 720P and 1080P, maintaining consistent visual quality throughout the sequence.
- šŸ‘ Professional-Grade Results: MoviiGen 1.1 is particularly well suited for applications where cinematic quality, visual coherence, and aesthetic excellence are paramount, offering superior overall quality compared to other models.
This repository features our latest model, which establishes new benchmarks in cinematic video generation. Through extensive evaluation by industry professionals, it has demonstrated exceptional capabilities in creating high-quality visuals with natural motion dynamics and consistent aesthetic quality, making it an ideal choice for professional video production and creative applications.

šŸ”„ Latest News!!

- May 17, 2025: šŸ‘‹ We've released the inference code and training code of MoviiGen1.1.
- May 12, 2025: šŸ‘‹ We've released the weights of MoviiGen1.1.

Install FastVideo according to its instructions.

T2V-14B Model: šŸ¤— Huggingface

The MoviiGen 1.1 model supports both 720P and 1080P. For more cinematic quality, we recommend using 1080P and a 21:9 aspect ratio (1920×832).

We provide a prompt extension model for MoviiGen 1.1, a Qwen2.5-7B-Instruct model fine-tuned on our internal data. The model is available on šŸ¤— Huggingface.

- Prompt Length: The prompt should be around 100–200 words.
- Prompt Content: The prompt should contain a scene description, main subject, events, aesthetics description, and camera movement.

Our training framework is built on FastVideo, with a custom implementation of sequence parallelism to optimize memory usage and training efficiency.
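The prompt guidelines above can be illustrated with a small helper that assembles the five recommended components into a single prompt and checks the suggested word budget. The function and the example text are illustrative assumptions, not part of the repository:

```python
def build_prompt(scene, subject, events, aesthetics, camera):
    """Assemble a video prompt from the five components the model card
    recommends: scene description, main subject, events, aesthetics
    description, and camera movement. Hypothetical helper, not repo code."""
    parts = [scene, subject, events, aesthetics, camera]
    prompt = " ".join(p.strip().rstrip(".") + "." for p in parts)
    n_words = len(prompt.split())
    if not (100 <= n_words <= 200):
        # The card suggests roughly 100-200 words; warn, don't fail.
        print(f"note: prompt is {n_words} words; ~100-200 is recommended")
    return prompt

prompt = build_prompt(
    scene="A rain-soaked neon street in a coastal city at night, reflections shimmering on wet asphalt",
    subject="a lone courier in a translucent raincoat pushing a bicycle",
    events="she pauses under a flickering sign, checks a paper map, then walks on through the thinning crowd",
    aesthetics="cinematic teal-and-amber palette, shallow depth of field, soft volumetric haze, subtle film grain",
    camera="slow dolly-in from street level, easing into a gentle upward tilt as she passes the sign",
)
```

In practice the prompt extension model would expand a short description like this into the fuller 100–200 word form automatically.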
The sequence-parallel approach distributes the computational load across multiple GPUs, enabling efficient training of large-scale video generation models.

- Sequence Parallel & Ring Attention: Our custom implementation divides the temporal dimension across multiple GPUs, reducing per-device memory requirements while maintaining model quality.
- Efficient Data Loading: Optimized data pipeline for handling high-resolution video frames (latent cache and text-embedding cache).
- Multi-Resolution Training Buckets: Support for training at multiple resolutions.
- Mixed Precision Training: Support for BF16/FP16 training to accelerate computation.
- Distributed Training: Seamless multi-node, multi-GPU training support.

We cache the videos and their corresponding text prompts as latents and text embeddings to optimize the training process. This preprocessing step significantly improves training efficiency by reducing computational overhead during the training phase. You need to provide a merge.txt file specifying the dataset path, and the dataset should be a JSON file like trainingdata.json. Finally, you will get videocaption.json, which contains the paths to the latents and text embeddings. For multi-node training, you need to set the number of nodes and the number of processes per node manually; we provide a sample script for multi-node training.

Citation

If you find our work helpful, please cite us.
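The latent/text-embedding caching step described above can be sketched roughly as follows. File naming, array shapes, and the index format are illustrative assumptions, not the repository's actual layout (which is recorded in videocaption.json by its preprocessing scripts):

```python
import os
import tempfile
import numpy as np

def cache_sample(out_dir, name, video_latent, text_embedding):
    """Save precomputed latents and text embeddings to disk so training can
    skip the expensive VAE / text-encoder forward passes (illustrative sketch)."""
    latent_path = os.path.join(out_dir, f"{name}_latent.npy")
    embed_path = os.path.join(out_dir, f"{name}_text.npy")
    np.save(latent_path, video_latent)
    np.save(embed_path, text_embedding)
    # An index entry like this would be collected into a JSON manifest.
    return {"latent": latent_path, "text_embedding": embed_path}

def load_sample(entry):
    """Training-time loader: read cached arrays instead of re-encoding."""
    return np.load(entry["latent"]), np.load(entry["text_embedding"])

# Usage: cache one synthetic sample, then reload it (shapes are made up).
out_dir = tempfile.mkdtemp()
latent = np.random.randn(16, 4, 60, 104).astype(np.float32)  # (T, C, H, W) latent
embed = np.random.randn(77, 4096).astype(np.float32)         # text embedding
entry = cache_sample(out_dir, "clip0001", latent, embed)
loaded_latent, loaded_embed = load_sample(entry)
```

Because each training step then only reads two small arrays from disk, the VAE and text encoder never need to be resident during training, which is where most of the preprocessing speedup comes from.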

license:apache-2.0

MoviiGen1.1_Prompt_Rewriter

license:apache-2.0

RaCig

license:apache-2.0