renderartist

22 models

retrocomicflux

786
42

toyboxflux

578
110

coloringbookflux

507
51

simplevectorflux

Simple Vector Flux was trained on a curated dataset of ~50 synthetic images in a classic vector style: 17 epochs, 2 repeats, ~1,700 steps. This is a work in progress and can be a little temperamental. Captioning was done with Joy Caption Batch, using the trigger "v3ct0r" and the word "vector" in the caption prefixes. You have to work a little to get the desired results, and subjects sometimes bleed or blend together, but the style comes through and the results can be really good. Expect a couple of tries adjusting your prompt and adding tokens to match the style. You should use `v3ct0r` to trigger the image generation. You should use `vector` to trigger the image generation. Weights for this model are available in Safetensors format.
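For reference, a minimal sketch of how a Flux LoRA like this one is typically loaded with the diffusers library, with both trigger tokens prepended to the prompt as the card suggests. The LoRA repo id is an assumption, and the heavy pipeline work stays inside a function so the prompt helper runs anywhere:

```python
# Hedged sketch: the repo id "renderartist/simplevectorflux" is an assumption.

def build_prompt(subject: str) -> str:
    """Prepend the trigger tokens the card asks for ("v3ct0r" and "vector")."""
    return f"v3ct0r, vector style, {subject}"

def generate(subject: str):
    # Heavy dependencies kept inside the function so the prompt helper
    # stays usable without a GPU or model download.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights("renderartist/simplevectorflux")  # assumed repo id
    return pipe(build_prompt(subject), num_inference_steps=28).images[0]
```

If a generation misses the style, the card suggests iterating on the prompt and adding style tokens rather than changing samplers.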

472
134

weirdthingsflux

344
7

coloringbookhidream

This HiDream LoRA is LyCORIS-based and produces great line-art styles and coloring book images. I found the results much stronger than my Coloring Book Flux LoRA; I hope this clearly demonstrates the quality that can be achieved with this awesome model. I recommend the LCM sampler with the simple scheduler: for some reason, other samplers produced hallucinations that hurt quality when LoRAs were applied. Some of the images in the gallery have prompt examples.

This model was trained to 2,000 steps, 2 repeats, with a learning rate of 4e-4, using SimpleTuner (main branch). The dataset was around 90 synthetic images in total, all 1:1 aspect ratio at 1024x1024 to fit into VRAM. Training took around 3 hours on an RTX 4090 with 24GB VRAM; training times are on par with Flux LoRA training. Captioning was done with Joy Caption Batch using modified instructions and a token limit of 128 (anything longer gets truncated during training).

The resulting LoRA can produce some really great coloring book images, with either simple or more intricate designs depending on the prompt. I'm not here to troubleshoot installation issues or field endless questions; every environment is completely different. I trained the model with HiDream Full and ran inference in ComfyUI using the Dev model, which is said to be the best strategy for high-quality outputs. Testing and training take a lot of time and personal resources. If you can afford it, please contribute to my Ko-fi (https://ko-fi.com/renderartist); contributing will give me more flexibility to train in the cloud and continue experimenting and sharing. You should use `c0l0ringb00k` to trigger the image generation. You should use `coloring book` to trigger the image generation. Weights for this model are available in Safetensors format.
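The card's tested path is ComfyUI with the LCM sampler and simple scheduler; a rough diffusers analogue might look like the sketch below. The pipeline class availability and the repo ids are assumptions, so treat this as a starting point, not the author's workflow:

```python
# Hedged sketch: HiDreamImagePipeline support and both repo ids are assumptions.

def coloring_book_prompt(subject: str) -> str:
    # Both trigger phrases from the card, plus a line-art hint.
    return f"c0l0ringb00k, coloring book, {subject}, clean line art"

def generate(subject: str):
    import torch
    from diffusers import HiDreamImagePipeline, LCMScheduler

    pipe = HiDreamImagePipeline.from_pretrained(
        "HiDream-ai/HiDream-I1-Dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights("renderartist/coloringbookhidream")  # assumed repo id
    # Rough stand-in for ComfyUI's "lcm + simple" sampler/scheduler choice.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    return pipe(coloring_book_prompt(subject)).images[0]
```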

license:apache-2.0
153
19

ROYGBIVFlux

69
22

sculptureflux

Sculpture Flux draws from the Art Institute of Chicago's prestigious collection of high-resolution public-domain images, capturing the nuanced textures, dynamic forms, and timeless beauty of classical sculpture. While it masterfully renders the exquisite details of carved stonework, this versatile concept model also transcends traditional materials to generate bold contemporary pieces in marble, bronze, steel, and beyond. It is great for visualizing everything from classical busts to abstract modernist forms, and each generation maintains the weight, presence, and dimensional complexity inherent to sculptural art.

Experiment with lowering the guidance scale from the default 3.0-3.5 down to 2.5 to allow for more creative prompting; doing this allows more flexibility in both form and materials. The LoRA was trained primarily on busts but seems flexible enough for figures and full bodies as well. It was trained with 22 images captioned with Joy Caption Batch, 2 repeats, batch size 2, 80 epochs, 32 DIM / 32 alpha, for a total of 1,760 steps.

Next phases: since the outputs are so strong, I want to make a v2 of this LoRA with a mixture of real and AI-generated images to allow for more flexibility in the styles by default. Please support my research; I want to explore fine-tuning Flux to create better LoRAs. Cloud compute companies don't accept Buzz! 🤣 (https://ko-fi.com/renderartist) ⬅ Thank you! You should use `sculptur3` to trigger the image generation. You should use `sculpture` to trigger the image generation. Weights for this model are available in Safetensors format.
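The guidance-scale tip above can be sketched as a small helper: the card's default range is 3.0-3.5, dropping to 2.5 for more creative, flexible outputs. The LoRA repo id in the generation function is an assumption:

```python
# Hedged sketch of the card's guidance-scale advice.

def sculpture_prompt(subject: str, creative: bool = False):
    """Return the triggered prompt plus the guidance scale the card suggests:
    3.5 by default, 2.5 when you want more creative freedom in form/materials."""
    prompt = f"sculptur3, sculpture, {subject}"
    guidance = 2.5 if creative else 3.5
    return prompt, guidance

def generate(subject: str, creative: bool = False):
    import torch
    from diffusers import FluxPipeline

    prompt, guidance = sculpture_prompt(subject, creative)
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights("renderartist/sculptureflux")  # assumed repo id
    return pipe(prompt, guidance_scale=guidance).images[0]
```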

54
5

simplevectorhidream

license:apache-2.0
49
8

creature-shock-flux

41
0

technically-color-qwen

Technically Color Qwen is meticulously crafted to capture the unmistakable essence of classic film. This LoRA was trained on approximately 180 stills to excel at generating images imbued with the signature vibrant palettes, rich saturation, and dramatic lighting that defined an era of legendary filmmaking. It greatly enhances the depth and brilliance of hues, creating realistic yet dreamlike textures, lush greens, brilliant blues, and sometimes even the distinctive glow seen in classic productions, making your outputs look like they stepped right off a silver screen. I used ai-toolkit for training; the entire run took approximately 6 hours. Images were captioned using Joy Caption Batch, and the model was tested in ComfyUI. The gallery contains examples with workflows attached; I'm running a very simple 2-pass workflow that uses some advanced samplers for most of them, so drag and drop any gallery image into ComfyUI to see the workflow. This is my first time training a LoRA for Qwen; I think it works pretty well, but I'm sure there will be improvements. You should use `t3chnic4lly` to trigger the image generation.
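The card's tested setup is ComfyUI, but a minimal diffusers sketch for loading a Qwen-Image LoRA with the trigger token might look like this; the LoRA repo id is an assumption:

```python
# Hedged sketch: the repo id "renderartist/technically-color-qwen" is an assumption.

def technicolor_prompt(subject: str) -> str:
    # Prepend the trigger token from the card.
    return f"t3chnic4lly, {subject}, vibrant saturated palette, dramatic lighting"

def generate(subject: str):
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights("renderartist/technically-color-qwen")  # assumed
    return pipe(technicolor_prompt(subject)).images[0]
```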

31
3

sketchpaintflux

28
16

rubberhose-ruckus-hidream

license:apache-2.0
26
7

classic-painting-flux

22
5

technically-color-flux

21
7

retroadflux

17
8

saturday-morning-qwen

6
1

floating-heads-hidream

The Floating Heads HiDream LoRA is LyCORIS-based and trained on stylized, human-focused 3D bust renders. The idea came from a trending prompt I spotted on the Sora explore page. The intent is to isolate the head and neck with precise framing, natural accessories, detailed facial structures, and soft studio lighting. Results are 1760x2264 when using the workflow embedded in the first image of the gallery. The workflow prioritizes visual richness, consistency, and quality over mass output; that said, outputs are generally very clean, sharp, and detailed, with consistent character placement and predictable lighting behavior. It is best used for expressive character design, editorial assets, or any project that benefits from high-quality facial renders, and it's perfect for img2vid, LivePortrait, or lip syncing.

The first image in the gallery includes an embedded multi-pass workflow that uses multiple schedulers and samplers in sequence to maximize facial structure, accessory clarity, and texture fidelity; every image in the gallery was generated using this process. The LoRA wasn't explicitly trained around this workflow (I developed the model and the multi-pass approach in parallel), so I haven't tested it extensively in a single-pass setup. The CFG in the final pass is set to 2, which gives crisper details and more defined qualities like wrinkles and pores; if your outputs look overly sharp, set CFG to 1. The process is not fast: expect around 300 seconds of diffusion for all 3 passes on an RTX 4090 (sometimes the second pass gives enough detail). I'm still exploring ways to cut inference time; you're more than welcome to adjust settings to achieve your desired results, and please share your settings in the comments if you figure something out.

v1: Training focused on isolated, neck-up renders across varied ages, facial structures, and ethnicities. Good subject diversity (age, ethnicity, and gender range) with consistent style. v2 (in progress): I plan on incorporating results from v1 into v2 to foster more consistency.

Training details: 3,000 steps, 2 repeats at 2e-4 using SimpleTuner (took around 3 hours); dataset of 71 generated synthetic images at 1024x1024; training and inference completed on an RTX 4090 24GB; captioning via Joy Caption Batch with a 128-token limit. I trained this LoRA with HiDream Full using SimpleTuner and ran inference in ComfyUI using the HiDream Dev model.

If you appreciate the quality or want to support future LoRAs like this, you can contribute here: 🔗 https://ko-fi.com/renderartist 🔗 renderartist.com You should use `h3adfl0at` to trigger the image generation. You should use `3D floating head` to trigger the image generation. Weights for this model are available in Safetensors format.
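The final-pass CFG choice described above can be captured in a tiny helper; the real multi-pass workflow lives in the first gallery image, and nothing beyond the final-pass values (2, or 1 for over-sharp results) is stated on the card:

```python
# Hedged sketch of the card's CFG guidance for the final pass only.

def final_pass_cfg(overly_sharp: bool = False) -> float:
    """Card guidance: CFG 2 on the final pass for crisper wrinkles/pores;
    drop to 1 if outputs look overly sharp."""
    return 1.0 if overly_sharp else 2.0
```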

license:apache-2.0
5
6

doodletoonflux

4
2

saturday-morning-flux

4
0

Coloring-Book-Z-Image-Turbo-LoRA

license:apache-2.0
3
0

saturday-morning-wan

Saturday Morning WAN is a video LoRA trained on WAN 2.2 14B T2V; use text prompts to generate fun short cartoon animations with distinct modern American illustration styles. Both the high-noise and low-noise versions of the LoRA are included; download both. This model took over 8 hours to train on around 40 AI-generated video clips and 70 AI-generated stills. It was trained with ai-toolkit on an RTX Pro 6000 and tested in ComfyUI. Use it with your preferred workflow; it should work well with regular base models and GGUF models. You should use `saturd4ym0rning` to trigger the image generation. You should use `cartoon` to trigger the image generation.
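The card was tested in ComfyUI; a rough diffusers sketch for the same idea follows. The LoRA repo id and the weight filename are hypothetical, and Wan 2.2 T2V splits denoising across high-noise and low-noise experts, which is why the card says to grab both files:

```python
# Hedged sketch: repo ids and the weight filename below are hypothetical.

def animation_prompt(subject: str) -> str:
    # Both trigger words from the card.
    return f"saturd4ym0rning, cartoon, {subject}"

def generate(subject: str):
    import torch
    from diffusers import WanPipeline

    pipe = WanPipeline.from_pretrained(
        "Wan-AI/Wan2.2-T2V-A14B-Diffusers", torch_dtype=torch.bfloat16
    ).to("cuda")
    # Filename is hypothetical; apply the matching low-noise file per your
    # workflow's handling of the second (low-noise) transformer.
    pipe.load_lora_weights(
        "renderartist/saturday-morning-wan", weight_name="high_noise.safetensors"
    )
    return pipe(prompt=animation_prompt(subject), num_frames=81).frames[0]
```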

0
1