city96

41 models

umt5-xxl-encoder-gguf

base_model: google/umt5-xxl · library: gguf · quantized by city96

license:apache-2.0 · 129,177 downloads · 124 likes

Wan2.1-I2V-14B-480P-gguf

This is a direct GGUF conversion of [Wan-AI/Wan2.1-I2V-14B-480P](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P). Tags: video, video-generation · pipeline: image-to-video · languages: en, zh

license:apache-2.0 · 105,153 downloads · 251 likes

Wan2.1-T2V-14B-gguf

This is a direct GGUF conversion of Wan-AI/Wan2.1-T2V-14B. All quants are created from the FP32 base file, though only FP16 is uploaded, since the FP32 file exceeds the 50 GB per-file limit and gguf-split loading is not currently supported in ComfyUI-GGUF. The model files can be used with the ComfyUI-GGUF custom node: place them in `ComfyUI/models/unet` and see the GitHub readme for further install instructions. The VAE can be downloaded from this repository by Kijai. Please refer to this chart for a basic overview of quantization types.
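The install steps above can be sketched as a short shell session. The repo id comes from this listing; the exact quant filename is an assumption and should be checked against the repository's file list first:

```shell
# Target directory expected by the ComfyUI-GGUF custom node.
MODEL_DIR="ComfyUI/models/unet"
mkdir -p "$MODEL_DIR"

# Run the actual download only when explicitly requested (needs network
# access and the huggingface_hub CLI: pip install huggingface_hub).
# NOTE: the quant filename below is a guess; list the repo's files first.
if [ "${DO_DOWNLOAD:-0}" = "1" ]; then
    huggingface-cli download city96/Wan2.1-T2V-14B-gguf \
        "wan2.1-t2v-14b-Q4_K_M.gguf" --local-dir "$MODEL_DIR"
fi
```

Smaller quants (Q4/Q5) trade some quality for fitting the 14B model into less VRAM; pick one based on your GPU.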

license:apache-2.0 · 74,789 downloads · 178 likes

FLUX.1-dev-gguf

base_model: black-forest-labs/FLUX.1-dev · license: flux-1-dev-non-commercial-license · tags: text-to-image, image-generation, flux

65,773 downloads · 1,224 likes

t5-v1_1-xxl-encoder-gguf

base_model: google/t5-v1_1-xxl · library: gguf · quantized by city96

license:apache-2.0 · 63,325 downloads · 468 likes

Qwen-Image-gguf

This is a direct GGUF conversion of Qwen/Qwen-Image.

license:apache-2.0 · 46,695 downloads · 260 likes

FLUX.1-schnell-gguf

This is a direct GGUF conversion of black-forest-labs/FLUX.1-schnell. The model files can be used with the ComfyUI-GGUF custom node: place them in `ComfyUI/models/unet` and see the GitHub readme for further install instructions.

license:apache-2.0 · 35,073 downloads · 285 likes

FLUX.2-dev-gguf

27,059 downloads · 55 likes

Wan2.1-I2V-14B-720P-gguf

This is a direct GGUF conversion of Wan-AI/Wan2.1-I2V-14B-720P. All quants are created from the FP32 base file, though only FP16 is uploaded, since the FP32 file exceeds the 50 GB per-file limit and gguf-split loading is not currently supported in ComfyUI-GGUF. The model files can be used with the ComfyUI-GGUF custom node: place them in `ComfyUI/models/unet` and see the GitHub readme for further install instructions. The other files required can be downloaded from this repository by Comfy-Org. Please refer to this chart for a basic overview of quantization types.

license:apache-2.0 · 22,680 downloads · 147 likes

Wan2.1-FLF2V-14B-720P-gguf

license:apache-2.0 · 11,877 downloads · 27 likes

Wan2.1-Fun-14B-Control-gguf

license:apache-2.0 · 10,195 downloads · 17 likes

HiDream-I1-Full-gguf

This is a direct GGUF conversion of HiDream-ai/HiDream-I1-Full. The model files can be used with the ComfyUI-GGUF custom node: place them in `ComfyUI/models/diffusion_models` and see the GitHub readme for further install instructions. The VAE and additional files can be downloaded from Comfy-Org/HiDream-I1_ComfyUI. Please refer to this chart for a basic overview of quantization types.

license:mit · 10,036 downloads · 66 likes

t5-v1_1-xxl-encoder-bf16

8,327 downloads · 29 likes

stable-diffusion-3.5-large-gguf

5,868 downloads · 118 likes

HiDream-I1-Fast-gguf

license:mit · 4,187 downloads · 30 likes

HiDream-I1-Dev-gguf

license:mit · 3,720 downloads · 60 likes

HunyuanVideo-gguf

2,222 downloads · 181 likes

stable-diffusion-3.5-large-turbo-gguf

1,921 downloads · 64 likes

LTX-Video-gguf

1,790 downloads · 24 likes

HunyuanVideo-I2V-gguf

1,604 downloads · 37 likes

stable-diffusion-3.5-medium-gguf

1,318 downloads · 51 likes

LTX-Video-0.9.6-distilled-gguf

1,149 downloads · 13 likes

FastHunyuan-gguf

This is a direct GGUF conversion of FastVideo/FastHunyuan, intended to be used with the native, built-in ComfyUI HunyuanVideo nodes. As this is a quantized model, not a finetune, all the original license terms and restrictions still apply. The model files can be used with the ComfyUI-GGUF custom node: place them in `ComfyUI/models/unet` and see the GitHub readme for further install instructions. The VAE can be downloaded from this repository by Kijai. Please refer to this chart for a basic overview of quantization types.

899 downloads · 48 likes

LTX-Video-0.9.5-gguf

665 downloads · 13 likes

llava-llama-3-8b-v1_1-imat-gguf

base_model: xtuner/llava-llama-3-8b-v1_1-transformers · 628 downloads · 29 likes

stable-diffusion-3-medium-gguf

512 downloads · 7 likes

LTX-Video-0.9.6-dev-gguf

510 downloads · 6 likes

Wan2.1-Fun-14B-InP-gguf

license:apache-2.0 · 419 downloads · 20 likes

flux.1-lite-8B-alpha-gguf

365 downloads · 51 likes

AuraFlow-v0.3-gguf

license:apache-2.0 · 286 downloads · 6 likes

Cosmos-Predict2-14B-Text2Image-gguf

101 downloads · 9 likes

Flux.1-Heavy-17B

31 downloads · 25 likes

mt5-xl-fp16

license:apache-2.0 · 1 download · 0 likes

SD Latent Interposer

This repository contains the models required to directly convert latents between SDv1.5-based models and SDXL models. For more info, please see https://github.com/city96/SD-Latent-Interposer

license:apache-2.0 · 0 downloads · 18 likes

SD-Latent-Upscaler

license:apache-2.0 · 0 downloads · 13 likes

CityAesthetics

license:apache-2.0 · 0 downloads · 5 likes

DiT

license:cc-by-nc-4.0 · 0 downloads · 4 likes

AnimeClassifiers

license:apache-2.0 · 0 downloads · 2 likes

CityVAE

license:apache-2.0 · 0 downloads · 1 like

RevDiff

license:apache-2.0 · 0 downloads · 1 like

mt5-xl-encoder-fp16

license:apache-2.0 · 0 downloads · 1 like