city96
umt5-xxl-encoder-gguf
---
base_model: google/umt5-xxl
library_name: gguf
license: apache-2.0
quantized_by: city96
language: en
---
Wan2.1-I2V-14B-480P-gguf
---
base_model: Wan-AI/Wan2.1-I2V-14B-480P
library_name: gguf
quantized_by: city96
tags:
  - video
  - video-generation
license: apache-2.0
pipeline_tag: image-to-video
language:
  - en
  - zh
---
This is a direct GGUF conversion of [Wan-AI/Wan2.1-I2V-14B-480P](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P).
Wan2.1-T2V-14B-gguf
This is a direct GGUF conversion of Wan-AI/Wan2.1-T2V-14B. All quants are created from the FP32 base file, though I only uploaded the FP16, since the FP32 file exceeds the 50GB max file size limit and gguf-split loading is not currently supported in ComfyUI-GGUF. The model files can be used with the ComfyUI-GGUF custom node. Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions. The VAE can be downloaded from this repository by Kijai. Please refer to this chart for a basic overview of quantization types.
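Where these entries say to place the GGUF files in `ComfyUI/models/unet`, the download itself can be scripted with the `huggingface_hub` library. A minimal sketch, assuming a local ComfyUI install at `./ComfyUI`; the `.gguf` filename is an example only, so check the repository's file list for the quant you actually want:

```python
# Sketch: fetch one quantized file from the Hub and place it where the
# ComfyUI-GGUF custom node looks for diffusion models.
from pathlib import Path
from huggingface_hub import hf_hub_download

comfy_root = Path("ComfyUI")  # adjust to your ComfyUI install location
target_dir = comfy_root / "models" / "unet"
target_dir.mkdir(parents=True, exist_ok=True)

hf_hub_download(
    repo_id="city96/Wan2.1-T2V-14B-gguf",
    filename="wan2.1-t2v-14b-Q4_K_M.gguf",  # example quant name; verify on the repo page
    local_dir=target_dir,
)
```

Manually downloading the file and copying it into that folder works just as well.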
FLUX.1-dev-gguf
---
base_model: black-forest-labs/FLUX.1-dev
library_name: gguf
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
quantized_by: city96
tags:
  - text-to-image
  - image-generation
  - flux
---
t5-v1_1-xxl-encoder-gguf
---
base_model: google/t5-v1_1-xxl
library_name: gguf
license: apache-2.0
quantized_by: city96
language: en
---
Qwen-Image-gguf
This is a direct GGUF conversion of Qwen/Qwen-Image.
FLUX.1-schnell-gguf
This is a direct GGUF conversion of black-forest-labs/FLUX.1-schnell. The model files can be used with the ComfyUI-GGUF custom node. Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions.
FLUX.2-dev-gguf
Wan2.1-I2V-14B-720P-gguf
This is a direct GGUF conversion of Wan-AI/Wan2.1-I2V-14B-720P. All quants are created from the FP32 base file, though I only uploaded the FP16, since the FP32 file exceeds the 50GB max file size limit and gguf-split loading is not currently supported in ComfyUI-GGUF. The model files can be used with the ComfyUI-GGUF custom node. Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions. The other required files can be downloaded from this repository by Comfy-Org. Please refer to this chart for a basic overview of quantization types.
Wan2.1-FLF2V-14B-720P-gguf
Wan2.1-Fun-14B-Control-gguf
HiDream-I1-Full-gguf
This is a direct GGUF conversion of HiDream-ai/HiDream-I1-Full. The model files can be used with the ComfyUI-GGUF custom node. Place model files in `ComfyUI/models/diffusion_models` - see the GitHub readme for further install instructions. The VAE and additional files can be downloaded from Comfy-Org/HiDream-I1_ComfyUI. Please refer to this chart for a basic overview of quantization types.
t5-v1_1-xxl-encoder-bf16
stable-diffusion-3.5-large-gguf
HiDream-I1-Fast-gguf
HiDream-I1-Dev-gguf
HunyuanVideo-gguf
stable-diffusion-3.5-large-turbo-gguf
LTX-Video-gguf
HunyuanVideo-I2V-gguf
stable-diffusion-3.5-medium-gguf
LTX-Video-0.9.6-distilled-gguf
FastHunyuan-gguf
This is a direct GGUF conversion of FastVideo/FastHunyuan. It is intended to be used with the native, built-in ComfyUI HunyuanVideo nodes. As this is a quantized model, not a finetune, all of the same restrictions/original license terms still apply. The model files can be used with the ComfyUI-GGUF custom node. Place model files in `ComfyUI/models/unet` - see the GitHub readme for further install instructions. The VAE can be downloaded from this repository by Kijai. Please refer to this chart for a basic overview of quantization types.
LTX-Video-0.9.5-gguf
llava-llama-3-8b-v1_1-imat-gguf
stable-diffusion-3-medium-gguf
LTX-Video-0.9.6-dev-gguf
Wan2.1-Fun-14B-InP-gguf
flux.1-lite-8B-alpha-gguf
AuraFlow-v0.3-gguf
Cosmos-Predict2-14B-Text2Image-gguf
Flux.1-Heavy-17B
mt5-xl-fp16
SD Latent Interposer
This repository contains the models required to directly convert latents between SDv1.5-based models and SDXL models. For more info, please see https://github.com/city96/SD-Latent-Interposer
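The idea can be illustrated with a toy sketch. The module below is a hypothetical stand-in, not the architecture from the linked repository; it only shows how such a converter is applied and the tensor shapes involved (both SDv1.5 and SDXL use 4-channel latents at 1/8 of the image resolution):

```python
# Conceptual sketch only: a latent interposer maps a latent from one model
# family's latent space to another's without decoding to pixels first.
import torch
import torch.nn as nn

class ToyInterposer(nn.Module):
    """Hypothetical stand-in for an SDv1.5 -> SDXL latent converter."""
    def __init__(self, channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        return self.net(latent)

# A 512x512 SDv1.5 image corresponds to a 1x4x64x64 latent.
sd15_latent = torch.randn(1, 4, 64, 64)
sdxl_latent = ToyInterposer()(sd15_latent)  # same shape, different latent space
print(sdxl_latent.shape)  # torch.Size([1, 4, 64, 64])
```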