silveroxides

65 models

Chroma-GGUF

base_model: lodestones/Chroma
pipeline_tag: text-to-image

license:apache-2.0
176,126
222

FLUX.2-dev-fp8_scaled

17,989
48

Chroma-Misc-Models

3,229
30

flan-t5-xxl-encoder-only

license:apache-2.0
2,530
9

Chroma1-HD-GGUF

Chroma1-HD is an 8.9B-parameter text-to-image foundational model based on FLUX.1-schnell. It is fully Apache 2.0 licensed, ensuring that anyone can use, modify, and build upon it. As a base model, Chroma1 is intentionally designed to be an excellent starting point for finetuning: a strong, neutral foundation for developers, researchers, and artists to create specialized models. For the fast CFG-"baked" version, see Chroma1-Flash.

Key Features
- High-Performance Base: 8.9B parameters, built on the powerful FLUX.1 architecture.
- Easily Finetunable: designed as an ideal checkpoint for creating custom, specialized models.
- Community-Driven & Open-Source: fully transparent, with an Apache 2.0 license and training history.
- Flexible by Design: provides a flexible foundation for a wide range of generative tasks.

Special Thanks
A massive thank-you to the supporters who make this project possible: an anonymous donor whose incredible generosity funded the pretraining run and data collection (your support has been transformative for open-source AI), and Fictional.ai for their fantastic support and for helping push the boundaries of open-source AI. You can try Chroma on their platform.

To use Chroma from Python, install the dependencies:
`pip install transformers diffusers sentencepiece accelerate`

ComfyUI
For advanced users and customized workflows, you can use Chroma with ComfyUI.
Requirements: a working ComfyUI installation, the Chroma checkpoint (latest version), the T5 XXL text encoder, the FLUX VAE, and the Chroma workflow JSON.
Setup:
1. Place the `T5xxl` model in your `ComfyUI/models/clip` folder.
2. Place the FLUX VAE in your `ComfyUI/models/vae` folder.
3. Place the Chroma checkpoint in your `ComfyUI/models/diffusion_models` folder.
4. Load the Chroma workflow file into ComfyUI and run.

Model Details
- Architecture: based on the 8.9B-parameter FLUX.1-schnell model.
- Training Data: trained on a 5M-sample dataset curated from a 20M pool, including artistic, photographic, and niche styles.
- Technical Report: a comprehensive technical paper detailing the architectural modifications and training process is forthcoming.

Intended Use
Chroma is intended as a base model for researchers and developers to build upon. It is ideal for:
- finetuning on specific styles, concepts, or characters;
- research into generative model behavior, alignment, and safety;
- use as a foundational component in larger AI systems.

Limitations and Bias Statement
Chroma is trained on a broad, filtered dataset from the internet and, as such, may reflect the biases and stereotypes present in its training data. The model is released as-is and has not been aligned with a specific safety filter. Users are responsible for their own use of this model: it has the potential to generate content that may be considered harmful, explicit, or offensive. I encourage developers to implement appropriate safeguards and ethical considerations in their downstream applications.

Summary of Architectural Modifications (for a full breakdown, tech report soon-ish)
- 12B → 8.9B parameters: TL;DR: I replaced a 3.3B-parameter timestep-encoding layer with a more efficient 250M-parameter FFN, as the original was vastly oversized for its task.
- MMDiT masking: TL;DR: masking T5 padding tokens enhanced fidelity and increased training stability by preventing the model from focusing on irrelevant padding tokens.
- Custom timestep distribution: TL;DR: I implemented a custom timestep sampling distribution (`-x^2`) to prevent loss spikes and ensure the model trains effectively on both high-noise and low-noise regions.

P.S. Chroma1-HD is not the old Chroma-v.50; it has been retrained from v.48.
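The three architectural notes above can be sketched in a few lines of toy Python: the parameter arithmetic, a padding-masked softmax, and an endpoint-weighted timestep sampler. This is an illustrative sketch under stated assumptions (the quadratic density is one plausible reading of the `-x^2` note), not Chroma's actual training code.

```python
import math
import random

random.seed(0)

# 12B -> 8.9B: drop the 3.3B timestep-encoding layer, add a 250M FFN.
params_b = 12.0 - 3.3 + 0.25   # = 8.95, i.e. ~8.9B

# MMDiT masking: a softmax that assigns zero attention weight to
# T5 padding positions, so the model cannot focus on them.
def masked_softmax(scores, key_is_pad):
    exps = [0.0 if pad else math.exp(s) for s, pad in zip(scores, key_is_pad)]
    z = sum(exps)
    return [e / z for e in exps]

weights = masked_softmax([0.3, 1.2, -0.5, 0.8], [False, False, True, True])
# weights[2] and weights[3] are exactly 0: no mass on padding tokens.

# Custom timestep distribution: rejection-sample t in [0, 1) from a
# quadratic "bowl" density 1 + 4*(t - 0.5)**2, which oversamples both
# the high-noise and low-noise ends relative to the midpoint.
# (A hypothetical reading of the `-x^2` note, not the exact Chroma code.)
def sample_timestep():
    while True:
        t = random.random()
        if random.random() * 2.0 <= 1.0 + 4.0 * (t - 0.5) ** 2:
            return t

samples = [sample_timestep() for _ in range(10_000)]
edges = sum(t < 0.1 or t > 0.9 for t in samples)
middle = sum(0.45 < t < 0.55 for t in samples)
print(params_b, weights[2], edges > middle)
```

With the bowl density, roughly a quarter of the samples land in the two outermost 10% bins, several times more than the matching bin around the midpoint.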

license:apache-2.0
1,691
3

flan-t5-xxl-encoder-only-GGUF

license:apache-2.0
1,255
5

OmniGen-V1

license:mit
881
13

Chroma1-Radiance-GGUF

license:apache-2.0
775
10

Qwen3.5-QuantOpsQuants

license:apache-2.0
740
2

Anima-Quantized

649
4

Chroma1-Base-GGUF

Chroma1-Base is an 8.9B-parameter text-to-image foundational model based on FLUX.1-schnell, fully Apache 2.0 licensed and intentionally designed as an excellent starting point for finetuning. For the fast CFG-"baked" version, see Chroma1-Flash. The rest of the card (key features, acknowledgements, ComfyUI setup, training data, intended use, limitations, and the architectural-modification summary) is identical to the Chroma1-HD-GGUF card above.

license:apache-2.0
454
0

Chroma1-Flash-GGUF

license:apache-2.0
381
2

pony-v7-base-fp8_scaled-and-GGUF

284
1

chroma-debug-development-only-GGUF

Don't ask about specifics. This is just for my own testing, and I'm sharing it in case anyone else wants to try it out.

license:apache-2.0
143
2

big-asp-v2

105
0

Wan2.2_TI2V_5B-GGUF

license:apache-2.0
97
3

sdxl-gguf

87
0

Z-Image-De-Turbo-fp8_scaled

license:apache-2.0
57
1

furrence2-large

license:mit
27
3

OmniGen-V1-fp8_e4m3fn

license:mit
24
5

NoobAI-XL-EPS-1.0-Vwe

21
0

T5xxl Flan Enc

7
11

Custom_SDXL_GGUF

6
0

SD3-modclip

5
0

RealHybridPony

5
0

Chroma_tests_non_official

license:apache-2.0
5
0

CLIP-ViT-bigG-14-laion2B-39B-b160k-fp16

license:mit
4
0

NoobAI-XL-V-Pred-0.5

4
0

RNS_RealPonyV20

3
1

JTP PILOT2 Onnx

2
1

Ultimate-Creative-ReAbsorb-107-107

2
0

taef1

TAEF1 is a very tiny autoencoder that uses the same "latent API" as FLUX.1's VAE. TAEF1 is useful for real-time previewing of the FLUX.1 generation process. This repo contains `.safetensors` versions of the TAEF1 weights.

license:mit
2
0

Ultimate-Creator

1
0

Absolute-Creator-RealCreator-furclip

1
0

Absolute-Creator-RealCreator-testclip

1
0

veloxide

1
0

Capable_XL_Lucky

1
0

SD3-PonyCLIP-forfun

1
0

RNS_PonyUltimateV20

1
0

flux1-nf4-weights

Checkpoints for ComfyUI have `bnb` in the file name. The ones without it are preliminary weights for a yet-to-be-implemented NF4 loader for UNet-only models.

0
108

Chroma-LoRAs

LoRAs under the 2k-test directories are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0.

0
28

CLIP-Collection

0
15

Chroma1-HD-fp8-scaled

This is a model repository for scaled fp8 quantized versions of Chroma1-HD. To load the Chroma1-HD-fp8scaledoriginalhybrid large or small models, you will need to use this custom node in place of the "Load Diffusion Model" node: ComfyUIHybrid-Scaledfp8-Loader. I currently recommend using "smallrev3". The large model can only use the pruned flash-heun LoRAs. The fp8scaled model without "hybrid" in its name can be loaded normally without issue.

Chroma1-HD itself is an 8.9B-parameter text-to-image foundational model based on FLUX.1-schnell, fully Apache 2.0 licensed; the rest of the card (key features, ComfyUI setup, training data, intended use, limitations, and architectural notes) is identical to the Chroma1-HD-GGUF card above. P.S. Chroma1-HD is not the old Chroma-v.50; it has been retrained from v.48.

license:apache-2.0
0
14

flux1-nf4-unet

0
13

Z-Image-Turbo-quants-plus

0
9

Z-Image-Turbo-SingleFile

0
9

LoRA-Collection

0
9

GNER-T5-xxl-encoder-only

license:apache-2.0
0
7

Wan_2.2-fp8_scaled_hybrid

license:apache-2.0
0
5

Wan_2.2-distilled-lightx2v-fp8_scaled_hybrid

license:apache-2.0
0
5

Chroma1-Radiance-fp8-scaled

This is a model repository for scaled fp8 quantized versions of Chroma1-Radiance. To load Chroma1-Radiance-v0.4-fp8scaledoriginalhybridlarge, you will need to use this custom node in place of the "Load Diffusion Model" node: ComfyUIHybrid-Scaledfp8-Loader.

license:apache-2.0
0
3

Z3D-E621-Convnext

0
3

Chroma-fp8-bf16-mixed-quant

license:apache-2.0
0
3

Qwen-Image-fp8-scaled-quants

license:apache-2.0
0
2

sdxl-safetensors

0
2

Flux1_fp8_e4mefn_unets

0
2

flux1-research

0
2

ChromaXL-Vpred-Mixes

0
2

Experience-Realistic-v3-LCM-Inpaint

0
2

pruned-models

0
2

Chroma2-Kaleidoscope-Merges

license:apache-2.0
0
1

Qwen2.5-VL-7B-MixedPrecision-ComfyUI

license:apache-2.0
0
1

sd3-safetensors

0
1

Chroma-Custom-Merges

license:apache-2.0
0
1

WAN-14B-merges

license:apache-2.0
0
1