Alissonerdx

35 models

BFS-Best-Face-Swap

The BFS (Best Face Swap) LoRA series was created for Qwen Image Edit 2509 and is designed to perform high-fidelity face and head swaps with natural tone blending and consistent lighting. Each version focuses on a different level of replacement, from subtle face swaps to full head transfers. All models can be used directly with Qwen Image Edit 2509 or any compatible image-editing workflow.

👉 Workflows for all BFS versions can be downloaded here: Head/Face Swap Workflow — Qwen Image Edit 2509 (Civitai)

| Version | File | Description |
| --- | --- | --- |
| BFS Face V1 | `bfsfacev1qwenimageedit2509.safetensors` | Swaps only the face. Keeps target hair, lighting, and background. |
| BFS Head V1 | `bfsheadv1qwenimageedit2509.safetensors` | Full head swap with detailed blending of face and hair. |
| BFS Head V2 | `bfsheadv2qwenimageedit2509.safetensors` | Improved tone and pose alignment; stronger anatomical consistency. |
| BFS Head V3 (Recommended) | `bfsheadv3qwenimageedit2509.safetensors` | The most stable and accurate version. In this one, the input order is inverted: body first, then face. |

| Version | Image 1 | Image 2 | Notes |
| --- | --- | --- | --- |
| Face V1 | Face | Body | Standard order. Swaps face only. |
| Head V1 | Face | Body | Standard order. Full head swap with blending. |
| Head V2 | Face | Body | Standard order. Improved tone and alignment. |
| Head V3 | Body | Face | ⚠️ Inverted order: send body first, then face. |

Works best with the Qwen Image Edit 2509 base model. For most versions, Image 1 = face and Image 2 = body; only Head V3 uses the opposite order (Image 1 = body, Image 2 = face). For improved alignment, you may optionally use pose or face-mesh control images.
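The per-version input order above is easy to get wrong, so here is a minimal sketch of it in code. The function and version keys are my own illustration, not part of any BFS workflow file:

```python
# Illustrative helper (not from the BFS workflows): map each BFS version
# to the (Image 1, Image 2) roles described in the table above.
BFS_INPUT_ORDER = {
    "face_v1": ("face", "body"),  # standard order
    "head_v1": ("face", "body"),  # standard order
    "head_v2": ("face", "body"),  # standard order
    "head_v3": ("body", "face"),  # inverted: body first, then face
}

def input_order(version: str) -> tuple[str, str]:
    """Return which role Image 1 and Image 2 play for a given BFS version."""
    try:
        return BFS_INPUT_ORDER[version]
    except KeyError:
        raise ValueError(f"unknown BFS version: {version!r}")

print(input_order("head_v3"))  # ('body', 'face')
```

Wiring this check into a workflow before dispatching images is a cheap way to avoid silently swapping the wrong direction with Head V3.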
Do not use or share results involving real people or public figures. I take no responsibility for any misuse of this model. Please use it responsibly and respect all likeness rights.

license:mit
59,430
317

flux.1-dev-SRPO-LoRas

These LoRAs were extracted from three sources:

- the original SRPO (Flux.1-Dev): tencent/SRPO
- community checkpoint: rockerBOO/flux.1-dev-SRPO
- community checkpoint (quantized/refined): wikeeyang/SRPO-Refine-Quantized-v1.0

They are designed to provide modular, lightweight adaptations you can mix with other LoRAs, reducing storage and enabling fast experimentation across ranks (8, 16, 32, 64, 128).

Notes:
- The Nunchaku versions were converted with the official Nunchaku conversion tool, but they are experimental; I still need to test them and analyze the results, so for now I recommend them only for testing.
- These LoRAs let you get SRPO quality while using the official Flux.1-Dev as the base, without needing the SRPO base model itself. In my opinion there is little advantage to combining these LoRAs with the SRPO base model, unless you specifically want to apply, say, the RockerBOO extraction on top of the base SRPO model.
- I recommend the RockerBOO version, but I advise testing the others as well, because the original extraction gives different results than the community versions.
- According to some reports it also works well with Flux Krea (reported with rank 256); I have not tested this yet to confirm.

Example comparison between Flux.1-Dev baseline and LoRA extractions:
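Applying one of these extractions on top of the official base model follows the usual diffusers LoRA flow. The sketch below is a hedged illustration: the repo ID and weight file name are placeholders I made up (substitute the actual file for the source and rank you choose), and the load is wrapped in a function since it needs the Flux weights and a GPU:

```python
# Hedged sketch: applying an SRPO LoRA extraction on top of the official
# Flux.1-Dev base with diffusers. Repo ID and weight_name are placeholders.
RANKS = [8, 16, 32, 64, 128]  # ranks the extractions are provided in

def load_flux_with_srpo_lora(weight_name: str):
    """Load Flux.1-Dev and apply one SRPO LoRA extraction (GPU + weights required)."""
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    # Pick the source (original / rockerBOO / wikeeyang) and a rank from
    # RANKS that fits your VRAM and quality needs.
    pipe.load_lora_weights(
        "Alissonerdx/flux.1-dev-SRPO-LoRas",  # this repository
        weight_name=weight_name,              # e.g. a hypothetical rank-32 file
    )
    return pipe.to("cuda")
```

Since the LoRA is applied at load time, switching ranks or sources only means swapping `weight_name`, rather than keeping a separate full SRPO checkpoint on disk.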

14,443
74

Wan2.1-HuMo-GGUF

---
license: apache-2.0
pipeline_tag: image-to-video
---

HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning

[Paper](https://arxiv.org/abs/2509.08519) | [Project Page](https://phantom-video.github.io/HuMo/)

> HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning
> Liyang Chen, Tianxiang Ma, Jiawei Liu, Bingchuan Li†, Zhuowei Chen, Lijie Liu, Xu He, Gen Li, Qian He, Zhiyong Wu§
> Equal contribution, † Project lead, § Corresponding author
> Tsinghua University | Intelligent Creation Team, ByteDance

✨ Key Features

HuMo is a unified, human-centric video generation framework designed to produce high-quality, fine-grained, and controllable human videos from multimodal inputs, including text, images, and audio. It supports strong text-prompt following, consistent subject preservation, and synchronized audio-driven motion.

> - VideoGen from Text-Image: customize character appearance, clothing, makeup, props, and scenes using text prompts combined with reference images.
> - VideoGen from Text-Audio: generate audio-synchronized videos solely from text and audio inputs, removing the need for image references and enabling greater creative freedom.
> - VideoGen from Text-Image-Audio: achieve the highest level of customization and control by combining text, image, and audio guidance.
📑 Todo List
- [x] Release Paper
- [x] Checkpoint of HuMo-17B
- [x] Inference Codes
- [ ] Text-Image Input
- [x] Text-Audio Input
- [x] Text-Image-Audio Input
- [x] Multi-GPU Inference
- [ ] Release Prompts to Generate Demo of Faceless Thrones
- [ ] HuMo-1.7B

Model Preparation

| Models | Download Link | Notes |
| --- | --- | --- |
| HuMo-17B | 🤗 Huggingface | Released before September 15 |
| HuMo-1.7B | 🤗 Huggingface | To be released soon |
| Wan-2.1 | 🤗 Huggingface | VAE & text encoder |
| Whisper-large-v3 | 🤗 Huggingface | Audio encoder |
| Audio separator | 🤗 Huggingface | Removes background noise (optional) |

Our model is compatible with both 480P and 720P resolutions; 720P inference achieves much better quality.

> Some tips
> - Please prepare your text, reference images, and audio as described in testcase.json.
> - We support multi-GPU inference using FSDP + sequence parallel.
> - The model is trained on 97-frame videos at 25 FPS. Generating videos longer than 97 frames may degrade performance. We will provide a new checkpoint for longer generation.

HuMo's behavior and output can be customized by modifying the generate.yaml configuration file. Its parameters control generation length, video resolution, and how text, image, and audio inputs are balanced.

Acknowledgements

Our work builds upon and is greatly inspired by several outstanding open-source projects, including Phantom, SeedVR, MEMO, Hallo3, OpenHumanVid, and Whisper. We sincerely thank the authors and contributors of these projects for generously sharing their excellent code and ideas. If you find this project useful for your research, please consider citing our paper.

📧 Contact

If you have any comments or questions regarding this open-source project, please open a new issue or contact Liyang Chen and Tianxiang Ma.
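The 97-frame / 25 FPS training horizon mentioned in the tips is worth enforcing programmatically when scripting generation. A minimal sketch (the helper function is my own illustration, not part of the HuMo codebase):

```python
# Illustrative helper (not part of HuMo): convert a requested clip duration
# into a frame count, capped at the 97-frame training horizon noted above.
TRAIN_FRAMES = 97  # HuMo is trained on 97-frame videos
FPS = 25           # at 25 frames per second

def frames_for_duration(seconds: float, max_frames: int = TRAIN_FRAMES) -> int:
    """Return the frame count for a clip, capped at the training length."""
    requested = round(seconds * FPS)
    return min(requested, max_frames)

print(frames_for_duration(2.0))   # 50 frames
print(frames_for_duration(10.0))  # capped at 97 frames (~3.88 s maximum)
```

Capping at 97 frames keeps requests inside the regime the checkpoint was trained on, avoiding the quality degradation the authors warn about for longer clips.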

license:apache-2.0
814
8

CustomLightning

553
4

YuE-s1-7B-anneal-en-cot-exl2-8.0bpw

llama
133
1

YuE-s2-1B-general-exl2-8.0bpw

llama
130
1

Dia1.6-pt_BR-v1

license:apache-2.0
120
23

BFS-Best-Face-Swap-Video

72
203

YuE-s1-7B-anneal-en-cot-int8

llama
42
2

YuE-s1-7B-anneal-jp-kr-cot-int8

llama
30
0

YuE-s1-7B-anneal-zh-cot-int8

llama
28
0

YuE-s2-1B-general-int8

llama
25
2

YuE-s1-7B-anneal-en-cot-nf4

llama
22
1

YuE-s1-7B-anneal-en-icl-int8

llama
22
0

YuE-s1-7B-anneal-en-icl-nf4

llama
22
0

YuE-s1-7B-anneal-jp-kr-icl-nf4

llama
21
0

YuE-s2-1B-general-nf4

llama
21
0

YuE-s1-7B-anneal-jp-kr-icl-int8

llama
20
0

YuE-s1-7B-anneal-zh-icl-int8

llama
20
0

YuE-s1-7B-anneal-zh-cot-nf4

llama
20
0

YuE-s1-7B-anneal-zh-icl-nf4

llama
20
0

YuE-s1-7B-anneal-jp-kr-cot-nf4

llama
19
0

YuE-s2-1B-general-exl2-4.0bpw

llama
9
0

YuE-s1-7B-anneal-en-cot-exl2-4.0bpw

llama
8
0

TryAnything

license:apache-2.0
7
1

YuE-s1-7B-anneal-en-cot-exl2-3.0bpw

llama
4
0

YuE-s1-7B-anneal-en-cot-exl2-6.0bpw

llama
4
0

YuE-s1-7B-anneal-en-cot-exl2-5.0bpw

llama
2
0

YuE-s2-1B-general-exl2-3.0bpw

llama
1
0

YuE-s2-1B-general-exl2-5.0bpw

llama
1
0

YuE-s2-1B-general-exl2-6.0bpw

llama
1
0

LTX-LoRAs

license:apache-2.0
0
24

UltraWanComfy

license:cc-by-4.0
0
17

flux.1-fill-OneReward-LoRAs

0
3

sageattention-2.1.0-cu128torch270-cp312-cp312-linux_x86_64.whl

license:apache-2.0
0
2