fudan-generative-ai

7 models

WAM-Flow

hallo2

license:mit

hallo

license:mit

Hallo3

Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Diffusion Transformer Networks

Jiahao Cui¹, Hui Li¹, Yun Zhan¹, Hanlin Shang¹, Kaihui Cheng¹, Yuqi Ma¹, Shan Mu¹, Hang Zhou², Jingdong Wang², Siyu Zhu¹ ✉️

## Requirements

- System: Ubuntu 20.04 / Ubuntu 22.04, CUDA 12.1
- Tested GPUs: H100

## Pretrained models

You can easily get all pretrained models required for inference from our HuggingFace repo (see the download sketch after this README), or download them separately from their source repos:

- hallo3: our checkpoints.
- cogvideox: the CogVideoX-5b-i2v pretrained model, consisting of a transformer and a 3D VAE.
- t5-v1_1-xxl: the text encoder; you can download the text encoder and tokenizer separately.
- audio_separator: the Kim_Vocal_2 MDX-Net vocal-removal model.
- wav2vec: the wav-audio-to-vector model from Facebook.
- insightface: 2D and 3D face analysis models, placed into `pretrained_models/face_analysis/models/` (thanks to deepinsight).
- face landmarker: the face detection and mesh model from MediaPipe, placed into `pretrained_models/face_analysis/models/`.

Finally, these pretrained models should be organized under `pretrained_models/` as described above.

## Inference

Hallo3 has a few simple requirements for its inference inputs (a pre-flight check sketch follows this README):

1. The reference image must have a 1:1 or 3:2 aspect ratio.
2. The driving audio must be in WAV format.
3. The audio must be in English, since our training datasets cover only this language.
4. Ensure the vocals in the audio are clear; background music is acceptable.

Animation results are saved to `./output`. You can find more inference examples in the `examples` folder.

## Training

Prepare the training data by organizing your raw videos into a directory structure containing a `videos` directory and a `caption` directory. You can use any dataset name, but the `videos` and `caption` directories must be named exactly that.

Next, process the videos with the preprocessing commands provided in the repository, then update the data meta path settings in the configuration YAML files `configs/sft_s1.yaml` and `configs/sft_s2.yaml` (a config-patching sketch follows this README).

## Citation

If you find our work useful for your research, please consider citing the paper.

## Social risks and mitigations

The development of portrait image animation technologies driven by audio inputs poses social risks, such as the ethical implications of creating realistic portraits that could be misused for deepfakes. To mitigate these risks, it is crucial to establish ethical guidelines and responsible-use practices. Privacy and consent concerns also arise from using individuals' images and voices. Addressing these concerns requires transparent data-usage policies, informed consent, and safeguards for privacy rights. By addressing these risks and implementing these mitigations, this research aims to ensure the responsible and ethical development of portrait animation technology.

## License

This model is a fine-tuned derivative of the CogVideoX-5B I2V model. CogVideoX-5B is an open-source text-to-video generation model developed by the CogVideoX team. Its original code and model parameters are governed by the CogVideoX-5B license. As a derivative work of CogVideoX-5B, the use, distribution, and modification of this model must comply with those license terms.

## Acknowledgements

Thank you to all the contributors who have helped make this project better!
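A minimal download sketch for the pretrained models, assuming the inference checkpoints are published as a single Hugging Face repo at `fudan-generative-ai/hallo3` and that `pretrained_models/` is the expected target directory; verify both against the repo's README before use.

```python
# Sketch: pull every inference checkpoint in one call via huggingface_hub.
# Assumptions: repo id "fudan-generative-ai/hallo3" and target directory
# "pretrained_models"; adjust both if your setup differs.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="fudan-generative-ai/hallo3",
    local_dir="pretrained_models",
)
```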
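The inference input rules are easy to check before launching a run. Below is a hypothetical pre-flight helper (`validate_inputs` is not part of the Hallo3 codebase): it assumes the 1:1/3:2 ratio may hold in either orientation, and it uses the standard-library `wave` module to reject non-WAV audio.

```python
import wave

from PIL import Image


def validate_inputs(image_path: str, audio_path: str) -> None:
    """Hypothetical pre-flight check for Hallo3's input constraints."""
    width, height = Image.open(image_path).size
    ratio = width / height
    # Rule 1: 1:1 or 3:2 aspect ratio. The README does not state an
    # orientation, so both 3:2 and 2:3 are accepted here (assumption).
    if not any(abs(ratio - r) < 0.01 for r in (1.0, 3 / 2, 2 / 3)):
        raise ValueError(f"{width}x{height} is neither 1:1 nor 3:2")
    # Rule 2: WAV format. wave.open raises wave.Error on other containers.
    with wave.open(audio_path, "rb") as audio:
        print(f"audio ok: {audio.getnframes()} frames "
              f"at {audio.getframerate()} Hz")


# Placeholder paths; rules 3 and 4 (English speech, clear vocals) still
# need a human ear.
validate_inputs("reference.png", "driving.wav")
```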
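As a rough illustration of the config step, this sketch rewrites a data-meta path in both stage configs with PyYAML. The key name `data_meta_path` and the metadata file path are placeholders, not the repo's actual schema; check the keys inside `configs/sft_s1.yaml` before adapting this.

```python
import yaml  # pip install pyyaml

META_PATH = "data/my_dataset/meta.json"  # placeholder metadata path

for cfg_file in ("configs/sft_s1.yaml", "configs/sft_s2.yaml"):
    with open(cfg_file) as f:
        cfg = yaml.safe_load(f)
    cfg["data_meta_path"] = META_PATH  # assumed key name
    with open(cfg_file, "w") as f:
        yaml.safe_dump(cfg, f, sort_keys=False)
```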

license:mit

champ

license:apache-2.0

DicFace_model

license:mit

hallo4
