neta-art
Neta-Lumina
Neta Lumina is a high-quality anime-style image-generation model developed by Neta.art Lab. Building on the open-source Lumina-Image-2.0 released by the Alpha-VLLM team at Shanghai AI Laboratory, we fine-tuned the model on a vast corpus of high-quality anime images and multilingual tag data. The preliminary result is a compelling model with powerful comprehension and interpretation abilities (thanks to the Gemma text encoder), ideal for illustration, posters, storyboards, character design, and more.

- Optimized for diverse creative scenarios such as Furry, Guofeng (traditional-Chinese aesthetics), pets, etc.
- Wide coverage of characters and styles, from popular to niche concepts (Danbooru tags are still supported!).
- Accurate natural-language understanding with excellent adherence to complex prompts.
- Native multilingual support; Chinese, English, and Japanese are recommended first.

For models in alpha testing, request access at https://huggingface.co/neta-art/NetaLuminaAlpha if you are interested. We will keep updating.

Raw model
- Primary goal: general knowledge and anime-style optimization
- Dataset: >13 million anime-style images
- >46,000 A100 hours
- Higher upper limit; suited to pro users. Check the Neta Lumina Prompt Book for better results.

First beta release candidate
- Primary goal: enhanced aesthetics, pose accuracy, and scene detail
- Dataset: hundreds of thousands of handpicked high-quality anime images (fine-tuned on an older version of the raw model)
- User-friendly; suitable for most people.

ComfyUI
Neta Lumina is built on the Lumina2 Diffusion Transformer (DiT) framework; please follow these steps precisely. Currently Neta Lumina runs only on ComfyUI.

Requirements:
- Latest ComfyUI installation
- ≥ 8 GB VRAM

1. Neta Lumina-Beta
   - Download link: https://huggingface.co/neta-art/Neta-Lumina/blob/main/Unet/neta-lumina-v1.0.safetensors
   - Save path: `ComfyUI/models/unet/`
2. Text Encoder (Gemma-2B)
   - Download link: https://huggingface.co/neta-art/Neta-Lumina/blob/main/Text%20Encoder/gemma22bfp16.safetensors
   - Save path: `ComfyUI/models/text_encoders/`
3. VAE Model (16-Channel FLUX VAE)
   - Download link: https://huggingface.co/neta-art/Neta-Lumina/blob/main/VAE/ae.safetensors
   - Save path: `ComfyUI/models/vae/`

Workflow nodes:
- `UNETLoader` – loads the UNet `.safetensors`
- `VAELoader` – loads `ae.safetensors`
- `CLIPLoader` – loads `gemma22bfp16.safetensors`
- `Text Encoder` – connects positive/negative prompts to the KSampler

Simple merged release
Download `neta-lumina-v1.0-all-in-one.safetensors` (`md5sum = dca54fef3c64e942c1a62a741c4f9d8a`); you may then use ComfyUI's simple checkpoint-loader workflow.

Recommended settings:
- Sampler: `res_multistep` / `euler_ancestral`
- Scheduler: `linear_quadratic`
- Steps: 30
- CFG (guidance): 4 – 5.5
- EmptySD3LatentImage resolution: 1024 × 1024, 768 × 1532, 968 × 1322, or ≥ 1024

Detailed prompt guidelines: Neta Lumina Prompt Book

Community:
- Discord: https://discord.com/invite/TTTGccjbEa
- QQ group: 1039442542

Roadmap:
- Continuous base-model training to raise reasoning capability.
- Aesthetic-dataset iteration to improve anatomy, background richness, and overall appeal.
- Smarter, more versatile tagging tools to lower the creative barrier.
- LoRA training tutorials and components (experienced users may already fine-tune via Lumina-Image-2.0's open code).
- Development of advanced control / style-consistency features (e.g., Omini Control).

Call for Collaboration!
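Before loading the merged checkpoint, it is worth confirming the download against the published md5sum. A minimal Python sketch (the save path in the commented example is illustrative, not prescribed by the release):

```python
import hashlib

# Digest published for neta-lumina-v1.0-all-in-one.safetensors
EXPECTED_MD5 = "dca54fef3c64e942c1a62a741c4f9d8a"

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a multi-GB checkpoint never loads fully into RAM."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example (adjust the path to wherever you saved the file):
# assert md5sum("ComfyUI/models/checkpoints/neta-lumina-v1.0-all-in-one.safetensors") == EXPECTED_MD5
```

A mismatch usually indicates a truncated or corrupted download; re-download before troubleshooting anything else in the workflow.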
Acknowledgments

- Special thanks to the Alpha-VLLM team for open-sourcing Lumina-Image-2.0
- Model development: Neta.art Lab (Civitai)
- Core trainer: lili (Civitai ・ Hugging Face)
- Partners:
  - nebulae: Civitai ・ Hugging Face
  - 生姜: Hugging Face
  - 孙一
  - narugo1992 & deepghs: open datasets, processing tools, and models
  - Mikubill: Naifu trainer
- Evaluators & developers: 二小姐, spawner, Rnglg2
- Other contributors: 沉迷摸鱼, poi, AshenWitch, 十分无奈, GHOSTLX, wenaka, iiiiii, 年糕特工队, 恩匹希, 奶冻, mumu, yizyin, smile, Yang, 古神, 灵之药, LyloGummy, 雪时

Resources:
- TeaCache: https://github.com/spawner1145/CUI-Lumina2-TeaCache
- Advanced samplers & TeaCache guide (by spawner): https://docs.qq.com/doc/DZEFKb1ZrZVZiUmxw?nlc=1
- Neta Lumina ComfyUI Manual (in Chinese): https://docs.qq.com/doc/DZEVQZFdtaERPdXVh
neta-lumina-gguf
Put the model in `models/unet` and load it with the nodes from https://github.com/spawner1145/ComfyUI-GGUF.
neta-noob-1.0
neta-xl-2.0
gemma2-2b-gguf
Load with the nodes from https://github.com/spawner1145/ComfyUI-GGUF/tree/gemma2test.
Neta-Lumina-diffusers
Anime-style image generation model based on Lumina2.

- Model Type: Lumina2Pipeline
- Base Model: Lumina-Image-2.0
- Scheduler: FlowMatchEulerDiscreteScheduler
- Data Type: bfloat16
- Resolution: 1024x1024

⚠️ Performance Note: Currently, diffusers' support for the default Lumina2 sampler doesn't match the generation quality of ComfyUI. For the best generation experience, we recommend using ComfyUI.
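A minimal usage sketch, assuming a diffusers release that includes `Lumina2Pipeline`. The `model_id` default is an assumption (check this repository's page for the actual id), and the settings mirror the ComfyUI recommendations above; as noted, diffusers sampling may not match ComfyUI's output quality.

```python
# Settings carried over from the ComfyUI recommendations in this card.
SETTINGS = {
    "num_inference_steps": 30,
    "guidance_scale": 4.5,  # recommended CFG range is 4 - 5.5
    "width": 1024,
    "height": 1024,
}

def generate(prompt: str, model_id: str = "neta-art/Neta-Lumina"):
    """Generate one image; model_id is an assumption, not a confirmed repo id."""
    # Imported lazily so the settings above can be inspected without
    # torch/diffusers installed.
    import torch
    from diffusers import Lumina2Pipeline

    pipe = Lumina2Pipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    pipe.to("cuda")  # bfloat16 inference expects a GPU device
    return pipe(prompt, **SETTINGS).images[0]
```

For example, `generate("1girl, guofeng, detailed background")` returns a PIL image that can be saved with `.save("out.png")`.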