jasperai

10 models

Flux.1-dev-Controlnet-Upscaler

This is a Flux.1-dev ControlNet for low-resolution images, developed by the Jasper research team. How to use: this model can be used directly with the `diffusers` library. Training: this model was trained with a synthetic data degradation scheme, taking a real-life image as input and artificially degrading it by combining several degradations, including, among others, image noising (Gaussian, Poisson), image blurring, and JPEG compression, in a similar spirit to [1]. [1] Wang, Xintao, et al. "Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. Licence: this model falls under the Flux.1-dev model licence.
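A minimal sketch of loading this ControlNet with `diffusers` (the `FluxControlNetModel` and `FluxControlNetPipeline` classes exist in recent `diffusers` releases; the conditioning scale, step count, and input URL here are illustrative assumptions, not values from the model card):

```python
import torch
from diffusers import FluxControlNetModel
from diffusers.pipelines import FluxControlNetPipeline
from diffusers.utils import load_image

# Load the upscaler ControlNet and attach it to the Flux.1-dev base model
controlnet = FluxControlNetModel.from_pretrained(
    "jasperai/Flux.1-dev-Controlnet-Upscaler",
    torch_dtype=torch.bfloat16,
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
).to("cuda")

# The low-resolution image itself is the conditioning input
control_image = load_image("low_res_input.png")  # placeholder path

image = pipe(
    prompt="",
    control_image=control_image,
    controlnet_conditioning_scale=0.6,  # assumed value
    num_inference_steps=28,
    guidance_scale=3.5,
    height=control_image.size[1],
    width=control_image.size[0],
).images[0]
image.save("upscaled.png")
```

Note that running this requires accepting the Flux.1-dev licence on the Hub and a GPU with sufficient memory for the bfloat16 weights.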

12,766
848

Flux.1-dev-Controlnet-Surface-Normals

This is a Flux.1-dev ControlNet for surface normals maps, developed by the Jasper research team. How to use: this model can be used directly with the `diffusers` library. 💡 Note: you can compute the conditioning map using the `NormalBaeDetector` from the `controlnet_aux` library. Training: this model was trained on surface normals maps computed with Clipdrop's surface normals estimator model, as well as with an open-source surface normals estimation model, Boundary Aware Encoder (BAE). Licence: this model falls under the Flux.1-dev model licence.
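A short sketch of computing the conditioning map with `controlnet_aux` as the note suggests (`NormalBaeDetector` is a real class in `controlnet_aux`, conventionally loaded from the `lllyasviel/Annotators` checkpoint; the input path is a placeholder):

```python
from controlnet_aux import NormalBaeDetector
from diffusers.utils import load_image

# Load the BAE-based surface normals estimator (downloads weights on first use)
processor = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")

image = load_image("input.png")  # placeholder path
normal_map = processor(image)    # PIL image encoding per-pixel surface normals
normal_map.save("normals.png")
```

The resulting `normal_map` can then be passed as `control_image` to a `FluxControlNetPipeline` built on this ControlNet, in the same way as for the other Flux.1-dev ControlNets in this collection.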

1,146
96

flash-sd3

Flash Diffusion is a diffusion distillation method proposed in Flash Diffusion: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation by Clément Chadebec, Onur Tasar, Eyal Benaroche, and Benjamin Aubin from Jasper Research. This model is a 90.4M-parameter LoRA-distilled version of the SD3 model that can generate 1024x1024 images in 4 steps. See our live demo and official GitHub repo. The model can be used directly with the `StableDiffusion3Pipeline` from the `diffusers` library, reducing the number of required sampling steps to 4. ⚠️ First, you need to install a specific version of `diffusers`. ⚠️ Then, you can run the pipeline to generate an image. Training details: the model was trained for ~50 hours on 2 H100 GPUs. 💡 Training hint: the model could perform much better on text if distilled on a dataset of images containing text; feel free to try it yourself. Citation: if you find this work useful or use it in your research, please consider citing us. License: this model is released under the Creative Commons BY-NC license.
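A hedged sketch of what the 4-step usage could look like with the standard `diffusers` LoRA API. Note the card says a specific pinned `diffusers` version is required (likely shipping a dedicated few-step scheduler), so this uses the generic `load_lora_weights`/`fuse_lora` calls as an assumption, not the card's exact recipe:

```python
import torch
from diffusers import StableDiffusion3Pipeline

# SD3 base model (gated on the Hub; requires accepting its licence)
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

# Assumed loading path for the 90.4M-parameter distillation LoRA
pipe.load_lora_weights("jasperai/flash-sd3")
pipe.fuse_lora()

# Distillation removes the need for classifier-free guidance,
# hence guidance_scale=0 and only 4 sampling steps
image = pipe(
    "a raccoon reading a book in a lush forest",
    num_inference_steps=4,
    guidance_scale=0,
).images[0]
image.save("flash_sd3.png")
```

Refer to the model card and the pinned `diffusers` install for the exact scheduler configuration the authors use.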

license:cc-by-nc-4.0
1,122
112

Flux.1 Dev Controlnet Depth

947
118

LBM Relighting

Latent Bridge Matching (LBM) is a new, versatile, and scalable method proposed in LBM: Latent Bridge Matching for Fast Image-to-Image Translation that relies on Bridge Matching in a latent space to achieve fast image-to-image translation. This model was trained to relight a foreground object according to a provided background. See our live demo and official GitHub repo. How to use: to use this model, you first need to install the associated `lbm` library; then you can run inference on your input images. License: this code is released under the Creative Commons BY-NC 4.0 license. Citation: if you find this work useful or use it in your research, please consider citing us.

license:cc-by-nc-4.0
386
86

flash-pixart

license:cc-by-nc-4.0
235
27

flash-sdxl

license:cc-by-nc-nd-4.0
86
35

flash-sd

license:cc-by-nc-4.0
52
19

LBM_normals

license:cc-by-nc-4.0
16
11

LBM_depth

license:cc-by-nc-4.0
8
7