openfree
flux-chatgpt-ghibli-lora
claude-monet
I developed a FLUX-based model, LoRA fine-tuned on a curated collection of high-resolution masterpieces by renowned artists. The fine-tuning leveraged the exceptional quality of open-access imagery released by prestigious institutions, including the Art Institute of Chicago. The resulting model captures nuanced artistic techniques and stylistic elements across diverse historical art movements.

- https://huggingface.co/openfree/claude-monet
- https://huggingface.co/openfree/pierre-auguste-renoir
- https://huggingface.co/openfree/paul-cezanne
- https://huggingface.co/openfree/van-gogh
- https://huggingface.co/openfree/winslow-homer

Use `claude monet` to trigger the image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
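Fusing a LoRA into a base weight amounts to adding a scaled low-rank update, W' = W + scale · (B @ A). A minimal NumPy sketch of that arithmetic (the shapes here are toy placeholders, not actual FLUX layer sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes standing in for one projection matrix; real FLUX layers are larger.
d_out, d_in, rank = 8, 8, 2
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((rank, d_in))    # LoRA down-projection
B = rng.standard_normal((d_out, rank))   # LoRA up-projection
scale = 0.8                              # LoRA weighting factor

# "Fusing" bakes the scaled low-rank update into the base weight.
W_fused = W + scale * (B @ A)

# The fused matrix matches running the base and LoRA branches separately.
x = rng.standard_normal(d_in)
assert np.allclose(W_fused @ x, W @ x + scale * (B @ (A @ x)))
```

In diffusers, `load_lora_weights` followed by `fuse_lora` performs the analogous operation on the pipeline's actual layers; see the diffusers LoRA documentation for the exact API.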
president-pjh
Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF
Gemma-3-R1984-27B-Q4_K_M-GGUF
Gemma-3-R1984-27B-Q8_0-GGUF
Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M-GGUF
winslow-homer
Gemma-3-R1984-12B-Q8_0-GGUF
pierre-auguste-renoir
van-gogh
I developed a FLUX-based model, LoRA fine-tuned on a curated collection of high-resolution masterpieces by renowned artists. The fine-tuning leveraged the exceptional quality of open-access imagery released by prestigious institutions, including the Art Institute of Chicago. The resulting model captures nuanced artistic techniques and stylistic elements across diverse historical art movements.

- https://huggingface.co/openfree/claude-monet
- https://huggingface.co/openfree/pierre-auguste-renoir
- https://huggingface.co/openfree/paul-cezanne
- https://huggingface.co/openfree/van-gogh
- https://huggingface.co/openfree/winslow-homer

Use `gogh` to trigger the image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
paul-cezanne
I developed a FLUX-based model, LoRA fine-tuned on a curated collection of high-resolution masterpieces by renowned artists. The fine-tuning leveraged the exceptional quality of open-access imagery released by prestigious institutions, including the Art Institute of Chicago. The resulting model captures nuanced artistic techniques and stylistic elements across diverse historical art movements.

- https://huggingface.co/openfree/claude-monet
- https://huggingface.co/openfree/pierre-auguste-renoir
- https://huggingface.co/openfree/paul-cezanne
- https://huggingface.co/openfree/van-gogh
- https://huggingface.co/openfree/winslow-homer

Use `Cezanne` to trigger the image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
Gemma-3-R1984-27B-Q6K-GGUF
openfree/Gemma-3-R1984-27B-Q6K-GGUF — this model was converted to GGUF format from `VIDraft/Gemma-3-R1984-27B` using llama.cpp, via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux), or use this checkpoint directly through the usage steps listed in the llama.cpp repo: first clone llama.cpp, then move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
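Those steps can be sketched as shell commands. This is a minimal sketch: the `.gguf` filename is a placeholder (check the repo's file listing), and newer llama.cpp versions use a CMake-based build in place of `make`.

```shell
# Option 1: prebuilt llama.cpp via Homebrew (Mac and Linux)
brew install llama.cpp

# Option 2: build from source
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make            # add LLAMA_CUDA=1 for Nvidia GPUs on Linux

# Run the checkpoint straight from the Hub
llama-cli --hf-repo openfree/Gemma-3-R1984-27B-Q6K-GGUF \
  --hf-file <model-file>.gguf -p "Hello"
```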
Qwen2.5-VL-32B-Instruct-Q8_0-GGUF
QwQ-R1984-32B-Q4_K_M-GGUF
Mistral-Small-3.1-24B-Instruct-2503-Q8_0-GGUF
pepe
Use `pepe` to trigger the image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
QwQ-32B-Q4_K_M-GGUF
flux-lora-korea-palace
Darwin-Qwen3-4B
openfree/Darwin-Qwen3-4B — this model was automatically merged using the evolutionary algorithm 'Darwin A2AP' v3.2.

Overview: This work introduces a new paradigm of AI model fusion. Traditional model-merging techniques have been restricted to models of the same family (e.g., transformer-based LLMs). We transcend this limitation by proposing a method to directly collide and fuse the core representational structures ("DNA") of entirely different species, such as transformers and diffusion models. The approach acts as an "AI particle accelerator," colliding fundamentally distinct elements of intelligence to uncover new possibilities. The paper and source code are under preparation and will be released on GitHub and Hugging Face in a reproducible, extensible form for anyone to explore.

Contributions:
- Breaking the species barrier: fusion of fundamentally different models, such as transformer and diffusion architectures; realization of cross-species model merging once deemed impossible.
- AI embryo creation: formation of an initial "AI embryo" from the fused DNA. The embryo is not confined to a single domain or function but serves as the foundation for multi-capability intelligence.
- Virtual evolutionary environment: AI embryos are placed into a simulated environment spanning thousands of generations. Through survival and adaptation, natural selection drives evolution beyond the limitations of the parent models, producing new offspring models.

Merge information:
- Father model: Qwen/Qwen3-4B-Instruct-2507
- Mother model: Qwen/Qwen3-4B-Thinking-2507
- Validation task accuracy: 88.56% (a proxy metric used for merge-ratio optimization, not an LLM benchmark score)
- Algorithm version: Darwin A2AP Enhanced v3.2

⚠️ Notice: The actual language-generation performance of this model requires separate evaluation.
Strengths & features:
- Cross-domain intelligence: e.g., legal LLM × medical LLM → a "forensic LLM" produced instantly. This is not mere knowledge aggregation but the creation of new intelligence at the intersection of domains.
- Extreme efficiency: achieves results at roughly 1/10,000 of the time and cost of training a new foundation model, via a simple click-based process.
- Unified intelligence: escapes confinement to a single domain by organically merging multiple areas of expertise, providing an experimental basis for integrated reasoning and creativity with AGI-like qualities.
- Reproducibility & openness: source code and models will be fully released on GitHub and Hugging Face so that researchers and developers can freely reproduce, experiment, and extend.

Outlook: This research opens the door to a new generation of model creation, expressed as "Foundation a + Foundation b = Foundation abXc." It represents far more than a reduction in training costs, serving as a critical turning point for future studies on the evolution and fusion of AI intelligence.
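The Darwin A2AP algorithm itself is not yet released; purely as a loose illustration of evolutionary merge-ratio search, the toy sketch below evolves per-layer interpolation ratios between two "parent" weight sets against a proxy fitness function. All names, shapes, and the fitness function are invented for illustration and are not the actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two toy "parent" models: a few layers of weights each.
layers = 4
parent_a = [rng.standard_normal((6, 6)) for _ in range(layers)]
parent_b = [rng.standard_normal((6, 6)) for _ in range(layers)]

# Hidden "ideal" merge ratios the search should recover (stand-in for task fitness).
target = np.array([0.2, 0.5, 0.7, 0.9])

def merge(ratios):
    # Per-layer linear interpolation between the parents' weights.
    return [r * a + (1 - r) * b for r, a, b in zip(ratios, parent_a, parent_b)]

def fitness(ratios):
    # Proxy metric: closeness of the merged model to the "ideal" merge.
    ideal = merge(target)
    cand = merge(ratios)
    return -sum(np.abs(c - i).sum() for c, i in zip(cand, ideal))

# Simple elitist evolutionary loop with Gaussian mutation.
pop = [rng.uniform(0, 1, layers) for _ in range(16)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]                      # survivors kept unchanged (elitism)
    children = [np.clip(p + rng.normal(0, 0.05, layers), 0, 1)
                for p in parents for _ in range(3)]
    pop = parents + children

best = max(pop, key=fitness)
assert np.abs(best - target).max() < 0.1   # the search recovers the ratios
```

A real merge would replace the proxy fitness with a validation-task score, which is what the 88.56% figure above corresponds to in this card.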
lee-min-ho
boris-yeltsin
Use `Boris Yeltsin` to trigger the image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
korea-president-yoon
myt-flux-fantasy
Use `fantasy` to trigger the image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
sergey-lazarev
voice-crown-necklace
bruce-lee
Mistral-Small-3.1-24B-Instruct-2503-Q6_K-GGUF
QwQ-32B-Q8_0-GGUF
casey
Use `casey` to trigger the image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
Llama-3_3-Nemotron-Super-49B-v1-Q4_K_M-GGUF
Gemma-3-R1984-12B-Q6_K-GGUF
president-k-dj
Use `presidentKDJ` to trigger the image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
adsd1
Merge information
- Base model 1: microsoft/Orca-2-7b
- Base model 2: HuggingFaceH4/zephyr-7b-beta
- Final accuracy: 82.22%
- Algorithm version: Enhanced v3.1 with Late-Stage Improvements
z-tao
vcnl
TinyLlama-SmolLM
Merge information
- Base model 1: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Base model 2: HuggingFaceTB/SmolLM-1.7B-Instruct
- Final accuracy: 58.61%
- Algorithm version: Enhanced v3.1 with Late-Stage Improvements
QwQ-R1984-32B-Q8_0-GGUF
Llama-3_3-Nemotron-Super-49B-v1-Q6_K-GGUF
leonardo-dicaprio
Qwen2.5-Mistral-7B-Instr
Merge information
- Base model 1: Qwen/Qwen2.5-7B-Instruct
- Base model 2: mistralai/Mistral-7B-Instruct-v0.3
- Final accuracy: 82.22%
- Algorithm version: Enhanced v3.1 with Late-Stage Improvements
string-sandal
keisi
hongdae-beoseukingjon
DarwinAI-gemma-3-270m
Merge information
- Base model 1: google/gemma-3-270m-it
- Base model 2: google/gemma-3-270m
- Merge method: Evolutionary Algorithm
- Final accuracy: 81.39%

Experimental results
- Final test accuracy: 81.39%
- Method: Evolutionary Algorithm
- Total evolution steps: 5000
gpt2-bert
Merge information
- Base model 1: openai-community/gpt2
- Base model 2: google-bert/bert-base-uncased
- Final accuracy: 84.44%
jungkook
Use `Jungkook` to trigger the image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
WizardMath-AgentEvol-7B
dasdsds2
FLUX.1-schnell-training-adapter
morgenstern
Gemma-3-R1984-1B-0613
This model is a fine-tuned version of VIDraft/Gemma-3-R1984-1B, trained using TRL.
- TRL: 0.18.1
- Transformers: 4.52.4
- PyTorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
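The pinned library versions above can be captured in a requirements file for reproducing the training environment (package names are the standard PyPI ones; the dataset and training script are not specified in the card):

```text
trl==0.18.1
transformers==4.52.4
torch==2.7.1
datasets==3.6.0
tokenizers==0.21.1
```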