openfree

52 models

flux-chatgpt-ghibli-lora

724
320

claude-monet

I developed a FLUX-based LoRA trained on a curated collection of high-resolution masterpieces by renowned artists. The fine-tuning process leveraged the exceptional quality of open-access imagery released by prestigious institutions, including the Art Institute of Chicago. The resulting model captures the nuanced techniques and stylistic elements of diverse historical art movements.

- https://huggingface.co/openfree/claude-monet
- https://huggingface.co/openfree/pierre-auguste-renoir
- https://huggingface.co/openfree/paul-cezanne
- https://huggingface.co/openfree/van-gogh
- https://huggingface.co/openfree/winslow-homer

You should use `claude monet` to trigger image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.
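For diffusers users, loading one of these LoRAs looks roughly like the sketch below. It assumes FLUX.1-dev as the base model (the card does not name one explicitly) and the repo's default adapter file naming; running it requires a GPU and downloads several gigabytes of weights.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX base model (assumed base; check the card for the exact one).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the style LoRA from this collection.
pipe.load_lora_weights("openfree/claude-monet")

# Include the trigger word `claude monet` in the prompt.
image = pipe(
    "claude monet, a garden with water lilies at dusk",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("monet_style.png")
```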

170
67

president-pjh

96
7

Qwen2.5-VL-32B-Instruct-Q4_K_M-GGUF

llama-cpp
87
9

Gemma-3-R1984-27B-Q4_K_M-GGUF

llama-cpp
72
16

Gemma-3-R1984-27B-Q8_0-GGUF

llama-cpp
56
14

Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M-GGUF

llama-cpp
52
7

winslow-homer

50
6

Gemma-3-R1984-12B-Q8_0-GGUF

llama-cpp
42
14

pierre-auguste-renoir

41
7

van-gogh

I developed a FLUX-based LoRA trained on a curated collection of high-resolution masterpieces by renowned artists. The fine-tuning process leveraged the exceptional quality of open-access imagery released by prestigious institutions, including the Art Institute of Chicago. The resulting model captures the nuanced techniques and stylistic elements of diverse historical art movements.

- https://huggingface.co/openfree/claude-monet
- https://huggingface.co/openfree/pierre-auguste-renoir
- https://huggingface.co/openfree/paul-cezanne
- https://huggingface.co/openfree/van-gogh
- https://huggingface.co/openfree/winslow-homer

You should use `gogh` to trigger image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.

38
8

paul-cezanne

I developed a FLUX-based LoRA trained on a curated collection of high-resolution masterpieces by renowned artists. The fine-tuning process leveraged the exceptional quality of open-access imagery released by prestigious institutions, including the Art Institute of Chicago. The resulting model captures the nuanced techniques and stylistic elements of diverse historical art movements.

- https://huggingface.co/openfree/claude-monet
- https://huggingface.co/openfree/pierre-auguste-renoir
- https://huggingface.co/openfree/paul-cezanne
- https://huggingface.co/openfree/van-gogh
- https://huggingface.co/openfree/winslow-homer

You should use `Cezanne` to trigger image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.

35
8

Gemma-3-R1984-27B-Q6_K-GGUF

openfree/Gemma-3-R1984-27B-Q6K-GGUF

This model was converted to GGUF format from `VIDraft/Gemma-3-R1984-27B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp:
Step 1: Install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repository.
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
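Following the steps above, a typical invocation looks like this. It is a sketch based on the standard GGUF-my-repo instructions; the exact `--hf-file` name is an assumption and should be checked against the repo's file listing before use.

```shell
# Step 1: install llama.cpp (macOS/Linux)
brew install llama.cpp

# Run inference directly from the Hugging Face repo (CLI)
llama-cli --hf-repo openfree/Gemma-3-R1984-27B-Q6K-GGUF \
  --hf-file gemma-3-r1984-27b-q6_k.gguf \
  -p "The meaning to life and the universe is"

# Or serve a local OpenAI-compatible endpoint
llama-server --hf-repo openfree/Gemma-3-R1984-27B-Q6K-GGUF \
  --hf-file gemma-3-r1984-27b-q6_k.gguf \
  -c 2048
```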

llama-cpp
27
10

Qwen2.5-VL-32B-Instruct-Q8_0-GGUF

llama-cpp
27
7

QwQ-R1984-32B-Q4_K_M-GGUF

llama-cpp
25
14

Mistral-Small-3.1-24B-Instruct-2503-Q8_0-GGUF

llama-cpp
25
6

pepe

You should use `pepe` to trigger image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.

24
36

QwQ-32B-Q4_K_M-GGUF

llama-cpp
17
4

flux-lora-korea-palace

16
40

Darwin-Qwen3-4B

openfree/Darwin-Qwen3-4B

This model was automatically merged using the evolutionary algorithm 'Darwin A2AP' v3.2.

Overview
This study introduces a new paradigm of AI model fusion. Traditional model-merging techniques have been restricted to models of the same family (e.g., transformer-based LLMs). We transcend this limitation by proposing a method to directly collide and fuse the core representational structures (DNA) of entirely different species, such as transformers and diffusion models. This approach acts as an "AI particle accelerator," colliding fundamentally distinct elements of intelligence to uncover new possibilities. The paper and source code (to be released on GitHub and Hugging Face) are currently under preparation and will be made publicly available soon, in a reproducible and extensible form for anyone to explore.

Contribution
- Breaking the Species Barrier: fusion of fundamentally different models such as transformer and diffusion architectures; realization of cross-species model merging once deemed impossible.
- AI Embryo Creation: formation of an initial "AI embryo" based on fused DNA. The embryo is not confined to a single domain or function but serves as the foundation for multi-capability intelligence.
- Virtual Evolutionary Environment: AI embryos are placed into a simulated environment spanning thousands of generations. Through survival and adaptation, natural selection drives evolution beyond the limitations of the parent models, producing new offspring models.

Merge Information
- Father Model 1: Qwen/Qwen3-4B-Instruct-2507
- Mother Model 2: Qwen/Qwen3-4B-Thinking-2507
- Validation Task Accuracy: 88.56% (a proxy metric used for merge-ratio optimization)
- Algorithm Version: Darwin A2AP Enhanced v3.2

⚠️ Notice: the actual language-generation performance of this model requires separate evaluation; the validation score above is not an LLM benchmark score.

Strengths & Features
- Cross-Domain Intelligence: for example, Legal LLM × Medical LLM → a "Forensic LLM." This is not mere knowledge aggregation but the creation of new intelligence at the intersection of domains.
- Extreme Efficiency: achieves results at roughly 1/10,000 of the time and cost of training a new foundation model, accessible via a simple click-based process.
- Unified Intelligence: escapes confinement to a single domain by organically merging multiple expertises, providing an experimental basis for integrated reasoning and creativity with AGI-like qualities.
- Reproducibility & Openness: source code and models will be fully released on GitHub and Hugging Face, so researchers and developers can freely reproduce, experiment, and extend.

Outlook
This research opens the door to a new generation of model creation, expressed as "Foundation a + Foundation b = Foundation abXc." It represents far more than a reduction in training costs, serving as a critical turning point for future studies on the evolution and fusion of AI intelligence.
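The evolutionary merge is described above only in prose (the code is not yet released). As a toy illustration of the general idea, here is a minimal sketch in which an evolutionary search tunes per-parameter interpolation ratios between two parent weight vectors against a proxy fitness. All names and the fitness function are hypothetical and not the actual Darwin A2AP algorithm.

```python
import random

def merge(parent_a, parent_b, alphas):
    # Interpolate each parameter with its own mixing ratio alpha in [0, 1].
    return [t * a + (1 - t) * b for a, b, t in zip(parent_a, parent_b, alphas)]

def fitness(weights, target):
    # Proxy metric: negative squared error against a target "task" vector.
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def evolve(parent_a, parent_b, target, generations=300, pop_size=16, seed=0):
    rng = random.Random(seed)
    # Start from a random population of per-parameter mixing ratios.
    population = [[rng.random() for _ in parent_a] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness; keep the best half unchanged (elitism).
        population.sort(
            key=lambda al: fitness(merge(parent_a, parent_b, al), target),
            reverse=True,
        )
        survivors = population[: pop_size // 2]
        # Each survivor produces one mutated child (Gaussian noise, clipped).
        children = [
            [min(1.0, max(0.0, t + rng.gauss(0.0, 0.05))) for t in s]
            for s in survivors
        ]
        population = survivors + children
    return max(
        population,
        key=lambda al: fitness(merge(parent_a, parent_b, al), target),
    )

parent_a = [1.0, 0.0, 0.5]
parent_b = [0.0, 1.0, 0.5]
target = [0.25, 0.75, 0.5]
best_alphas = evolve(parent_a, parent_b, target)
merged = merge(parent_a, parent_b, best_alphas)
```

In a real merge, each "parameter" would be a whole tensor (or layer) and the fitness would be accuracy on a validation task, as in the proxy metric reported above.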

license:apache-2.0
16
4

lee-min-ho

15
1

boris-yeltsin

You should use `Boris Yeltsin` to trigger image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.

13
3

korea-president-yoon

12
12

myt-flux-fantasy

You should use `fantasy` to trigger image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.

10
1

sergey-lazarev

10
0

voice-crown-necklace

9
8

bruce-lee

9
2

Mistral-Small-3.1-24B-Instruct-2503-Q6_K-GGUF

llama-cpp
8
4

QwQ-32B-Q8_0-GGUF

llama-cpp
7
8

casey

You should use `casey` to trigger image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.

7
3

Llama-3_3-Nemotron-Super-49B-v1-Q4_K_M-GGUF

llama-3
6
5

Gemma-3-R1984-12B-Q6_K-GGUF

llama-cpp
5
13

president-k-dj

You should use `presidentKDJ` to trigger image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.

5
7

adsd1

Merge information
- Base model 1: microsoft/Orca-2-7b
- Base model 2: HuggingFaceH4/zephyr-7b-beta
- Final accuracy: 82.22%
- Algorithm version: Enhanced v3.1 with Late-Stage Improvements

llama
5
0

z-tao

4
3

vcnl

4
2

TinyLlama-SmolLM

Merge information
- Base model 1: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Base model 2: HuggingFaceTB/SmolLM-1.7B-Instruct
- Final accuracy: 58.61%
- Algorithm version: Enhanced v3.1 with Late-Stage Improvements

llama
4
0

QwQ-R1984-32B-Q8_0-GGUF

llama-cpp
3
10

Llama-3_3-Nemotron-Super-49B-v1-Q6_K-GGUF

llama-3
3
8

leonardo-dicaprio

3
2

Qwen2.5-Mistral-7B-Instr

Merge information
- Base model 1: Qwen/Qwen2.5-7B-Instruct
- Base model 2: mistralai/Mistral-7B-Instruct-v0.3
- Final accuracy: 82.22%
- Algorithm version: Enhanced v3.1 with Late-Stage Improvements

license:apache-2.0
3
0

string-sandal

2
6

keisi

2
2

hongdae-beoseukingjon

2
1

DarwinAI-gemma-3-270m

Merge information
- Base model 1: google/gemma-3-270m-it
- Base model 2: google/gemma-3-270m
- Merge method: Evolutionary Algorithm
- Final accuracy: 81.39%

Experimental results
- Final test accuracy: 81.39%
- Experiment method: Evolutionary Algorithm
- Total evolution steps: 5000

license:apache-2.0
2
0

gpt2-bert

Merge information
- Base model 1: openai-community/gpt2
- Base model 2: google-bert/bert-base-uncased
- Final accuracy: 84.44%

license:apache-2.0
2
0

jungkook

You should use `Jungkook` to trigger image generation. Download the model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.

1
4

WizardMath-AgentEvol-7B

llama
1
0

dasdsds2

llama
1
0

FLUX.1-schnell-training-adapter

license:apache-2.0
0
2

morgenstern

0
2

Gemma-3-R1984-1B-0613

This model is a fine-tuned version of VIDraft/Gemma-3-R1984-1B, trained using TRL.
- TRL: 0.18.1
- Transformers: 4.52.4
- PyTorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1

0
1