lym00
Wan2.2_T2V_A14B_VACE-test
> [!IMPORTANT]
> ⚠️ Notice
> This project is intended for experimental use only.

This is an add-on experiment that injects the VACE scopes from Wan2.1 VACE T2V 14B into Wan2.2 T2V A14B, using scripts provided by wsbagnsv1. All GGUF quantized versions were created from the FP16 model using the conversion scripts provided by city96, available at the ComfyUI-GGUF GitHub repository.

Tested with 2-step High-Noise and 2-step Low-Noise dual sampling with the LightX2V LoRA; it works fine in ComfyUI. The VACE team may release a fix for the color-shifting issue (per discussions on the Banodoco Discord server); further testing will wait for the official fix.

- Wan2.2 separates expert models by timestep:
  - The High-Noise expert generates the overall layout and motion.
  - The Low-Noise expert refines textures and details.
- The A14B model includes both High-Noise and Low-Noise experts, which are activated at different denoising stages.
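The timestep-based expert split above can be sketched as a simple routing function. This is an illustrative sketch only: the function name and the boundary value are placeholders, not the actual Wan2.2 implementation or its official switching threshold.

```python
def select_expert(t: float, boundary: float = 0.875) -> str:
    """Route a denoising step to an expert by normalized timestep t in [0, 1].

    High t = early, noisy steps (overall layout and motion);
    low t = late steps (texture and detail refinement).
    The boundary value here is a placeholder, not the official Wan2.2 setting.
    """
    return "high_noise" if t >= boundary else "low_noise"

# Early (noisy) steps go to the High-Noise expert, late steps to Low-Noise.
schedule = [1.0, 0.9, 0.5, 0.1]
experts = [select_expert(t) for t in schedule]
print(experts)  # ['high_noise', 'high_noise', 'low_noise', 'low_noise']
```

In a 2-step/2-step dual-sampling setup, the first sampler runs only High-Noise steps and the second only Low-Noise steps, which is what this routing expresses.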
DEPRECATED-qwen-image-gguf-test
> [!IMPORTANT]
> ⚠️ Deprecation Notice
>
> This project is now deprecated and was intended for experimental use only.
> It contains non-official and suboptimal patches.
> Please visit city96's repo https://huggingface.co/city96/Qwen-Image-gguf/tree/main for the full updated quants.

ComfyUI Initial GGUF Tests

Update ComfyUI to pull the relevant updates (initial support for the Qwen-Image model).

| Type         | Name           | Location                       | Download    |
| ------------ | -------------- | ------------------------------ | ----------- |
| Main Model   | QwenImage-GGUF | `ComfyUI/models/unet`          | GGUF        |
| Text Encoder | qwen2.5vl7b    | `ComfyUI/models/text_encoders` | Safetensors |
| VAE          | qwenimagevae   | `ComfyUI/models/vae`           | Safetensors |

References

- Tensors: https://huggingface.co/Qwen/Qwen-Image/blob/main/transformer/diffusion_pytorch_model.safetensors.index.json
- Tools: https://github.com/city96/ComfyUI-GGUF/tree/main/tools
- Patches for unknown model (referring to the last commit for Cosmos)
- ComfyUI implementation: https://github.com/comfyanonymous/ComfyUI/commit/c012400240d4867cd63a45220eb791b91ad47617
- Apply patch, recompile, and quantize: https://github.com/city96/ComfyUI-GGUF/tree/main/tools#quantizing-using-custom-llamacpp
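The safetensors index file referenced above maps each tensor name to the shard file that contains it, which is useful when checking tensor names before conversion. Below is a minimal sketch of inspecting such an index; the sample `weight_map` is made up for illustration and is not the actual Qwen-Image contents.

```python
import json

# A tiny, made-up stand-in for diffusion_pytorch_model.safetensors.index.json.
index_json = """
{
  "metadata": {"total_size": 123456},
  "weight_map": {
    "transformer_blocks.0.attn.to_q.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
    "transformer_blocks.0.attn.to_k.weight": "diffusion_pytorch_model-00001-of-00002.safetensors",
    "transformer_blocks.1.ff.net.0.proj.weight": "diffusion_pytorch_model-00002-of-00002.safetensors"
  }
}
"""

index = json.loads(index_json)

# Group tensor names by shard to see how the checkpoint is split.
shards: dict[str, list[str]] = {}
for tensor, shard in index["weight_map"].items():
    shards.setdefault(shard, []).append(tensor)

for shard, tensors in sorted(shards.items()):
    print(shard, len(tensors))
```

For the real model, the same grouping over the downloaded index file shows which shards must be present before running the conversion tools.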
HunyuanVideo-Avatar-GGUF-Experiment
> [!IMPORTANT]
> ⚠️ Important
>
> This project is intended for experimental use only.
>
> Not yet supported in ComfyUI: https://github.com/comfyanonymous/ComfyUI/issues/8311

This repository contains a GGUF conversion of https://huggingface.co/tencent/HunyuanVideo-Avatar/blob/main/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt from tencent/HunyuanVideo-Avatar. The conversion scripts are provided by city96, available at the ComfyUI-GGUF GitHub repository. The process involved first converting the pickled tensors to a BF16 GGUF, then quantizing it, and finally applying the 5D fixes.

Notes

- As this is a quantized model, not a finetune, all the same restrictions and original license terms still apply.
- For an overview of quantization types, please see the GGUF quantization types.
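The 5D fixes mentioned above are needed because GGUF tensors are limited to four dimensions, while video models often carry 5-D weights (e.g. temporal convolutions). The sketch below illustrates the general idea of collapsing leading axes to fit the 4-D limit; the merge strategy shown is an assumption for illustration, not necessarily the exact fix used in ComfyUI-GGUF.

```python
def squash_to_4d(shape: tuple[int, ...]) -> tuple[int, ...]:
    """Collapse leading dimensions so a >4-D tensor shape fits GGUF's 4-D limit.

    Illustrative only: the real ComfyUI-GGUF 5D fix may merge different axes
    and must record the original shape so it can be restored at load time.
    """
    if len(shape) <= 4:
        return shape
    # Merge leading dims until only four remain; total element count is kept.
    merged = shape[0]
    for dim in shape[1:len(shape) - 3]:
        merged *= dim
    return (merged,) + shape[-3:]

# A 5-D video-model weight, e.g. (out_ch, in_ch, t, h, w), becomes 4-D.
print(squash_to_4d((128, 64, 3, 3, 3)))  # (8192, 3, 3, 3)
```

Because the element count is unchanged, the flattened tensor can be quantized like any 4-D weight and reshaped back when the model is loaded.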
Wan2.1_T2V_1.3B_SelfForcing_VACE-GGUF
Wan2.1_T2V_1.3B_SelfForcing_VACE
comfyui_nunchaku_lora_patch
qwen-image-diffsynth-studio-distill-lora-extract-experiment
flux.1-kontext-dev-gpt-image-edit-training-lora-extract-experiments
nunchaku_svdquant_deepcompressor_0.1.0_quantization_flux.1_kontext_dev_test
Windows AMD64 Prebuilt Wheels
| Prebuilt Wheels                    | Python Versions | PyTorch Versions | CUDA Versions | Source                                              |
| ---------------------------------- | --------------- | ---------------- | ------------- | --------------------------------------------------- |
| Flash-Attention 2.7.4.post1        | 3.12            | 2.8.0.dev        | 12.8.1        | Dao-AILab/flash-attention                           |
| SageAttention 2.2.0                | 3.12            | 2.9.0.dev        | 12.9.1        | thu-ml/SageAttention or jt-zhang/SageAttention2plus |
| SageAttention 3 (pending approval) | 3.12            | 2.9.0.dev        | 12.9.1        | jt-zhang/SageAttention3                             |
| Flash-Attention 2.8.1              | 3.12            | 2.9.0.dev        | 12.9.1        | Dao-AILab/flash-attention                           |
| xformers 0.0.31.post1              | 3.12            | 2.9.0.dev        | 12.9.1        | facebookresearch/xformers                           |