Qwen3-Jan-RA-20x-6B-qx86-hi-mlx
6B params · 2 languages · license: apache-2.0
by nightmedia · Language Model · Other
New · 105 downloads
Early-stage
Edge AI: Mobile, Laptop, Server (14GB+ RAM)
Quick Summary
This model is a merge of janhq/Jan-V1-4B and Gen-Verse/Qwen3-4B-RA-SFT, with 2B of Brainstorming20x added by DavidAU. We are comparing four agentic hybrid model...
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 6GB+ RAM
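As a quick sanity check against the memory recommendations above, the snippet below is a minimal sketch that verifies available RAM before loading the model. It assumes the third-party psutil package, which is not part of this model card; the thresholds simply mirror the figures listed here.

# Minimal sketch: check free memory before loading (psutil is an assumed dependency).
import psutil

MIN_BYTES = 6 * 1024**3  # 6GB+ minimum recommended for this model

available = psutil.virtual_memory().available
if available < MIN_BYTES:
    raise RuntimeError(
        f"Only {available / 1024**3:.1f}GB RAM available; "
        "this model recommends 6GB+ (14GB+ for comfortable edge use)."
    )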
Code Examples
Use with mlx

pip install mlx-lm
from mlx_lm import load, generate

# Load the quantized weights and tokenizer (use the full Hub repo id,
# e.g. "nightmedia/Qwen3-Jan-RA-20x-6B-qx86-hi-mlx", if not loading from a local path).
model, tokenizer = load("Qwen3-Jan-RA-20x-6B-qx86-hi-mlx")

prompt = "hello"

# Format the prompt with the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
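For a slightly longer session, the same load/generate API can carry a system message through the chat template. The sketch below is an assumption-level example: the system prompt and max_tokens value are arbitrary choices, and whether a system role is honored depends on this model's template.

from mlx_lm import load, generate

model, tokenizer = load("Qwen3-Jan-RA-20x-6B-qx86-hi-mlx")

# Hypothetical agentic-style prompt; role handling depends on the model's chat template.
messages = [
    {"role": "system", "content": "You are a concise coding assistant."},
    {"role": "user", "content": "Write a one-line Python list comprehension that squares 1..10."},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# max_tokens caps the reply length; 256 is an arbitrary choice.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)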