Darwin-Qwen3-4B
4B params · apache-2.0 · by openfree · Other
New · Early-stage · 16 downloads
Edge AI: Mobile, Laptop, Server (9GB+ RAM)
Quick Summary
openfree/Darwin-Qwen3-4B — this model was automatically merged using the evolutionary algorithm "Darwin A2AP" v3.
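For intuition, weight-space merging combines parent checkpoints directly in parameter space. The sketch below shows a generic linear interpolation, not the Darwin A2AP procedure itself (which is not documented here); both parent model IDs are placeholder assumptions.

from transformers import AutoModelForCausalLM

# Illustrative parents only; the actual parents of this merge are not stated here.
parent_a = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B")               # assumed base
parent_b = AutoModelForCausalLM.from_pretrained("example-org/qwen3-4b-tuned")  # hypothetical

# One global coefficient; an evolutionary merger would instead search over many such
# coefficients (e.g., per layer) and keep the candidates that score best on a fitness set.
alpha = 0.5
state_b = parent_b.state_dict()
merged = {name: alpha * tensor + (1.0 - alpha) * state_b[name]
          for name, tensor in parent_a.state_dict().items()}

parent_a.load_state_dict(merged)
parent_a.save_pretrained("merged-qwen3-4b")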
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 4GB+ RAM
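To approach the smaller RAM budgets above, the model can be loaded in reduced precision. A minimal sketch, assuming a CUDA-capable device with the bitsandbytes and accelerate packages installed; actual memory use also depends on context length:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "openfree/Darwin-Qwen3-4B"

# 4-bit weights keep a 4B-parameter model to roughly 2-3GB, before activation/KV-cache overhead.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",   # automatically place layers on the available GPU/CPU
)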
Code Examples
Basic usage with the Hugging Face Transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("openfree/Darwin-Qwen3-4B")
tokenizer = AutoTokenizer.from_pretrained("openfree/Darwin-Qwen3-4B")

# Inference example
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
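The base Qwen3 family ships a chat template, so conversational prompts can also be built with apply_chat_template. A minimal sketch, assuming the merged checkpoint keeps that template and the accelerate package is installed:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openfree/Darwin-Qwen3-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Format a single-turn conversation with the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain model merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))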
Deploy This Model
Production-ready deployment in minutes
Together.ai
Instant API access to this model
Production-ready inference API. Start free, scale to millions.
Try Free API
Replicate
One-click model deployment
Run models in the cloud with a simple API. No DevOps required.
Deploy Now
Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.