moot20
SmolVLM-256M-Instruct-MLX
SmolVLM-256M-Base-MLX
SmolVLM-500M-Base-MLX
Dolphin3.0-Mistral-24B-MLX-8bits
The model moot20/Dolphin3.0-Mistral-24B-MLX-8bits was converted to MLX format from cognitivecomputations/Dolphin3.0-Mistral-24B using mlx-lm version 0.21.1.
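A minimal usage sketch for these mlx-lm conversions, following the standard mlx-lm snippet (requires Apple silicon; the checkpoint is downloaded on first use, and the prompt text is illustrative):

```python
# Load a converted checkpoint and run text generation with mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("moot20/Dolphin3.0-Mistral-24B-MLX-8bits")

prompt = "Write a haiku about quantization."

# Apply the chat template if the tokenizer provides one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```

The same pattern applies to any of the mlx-lm conversions listed here; only the repo name changes.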
paligemma2-3b-mix-224-MLX-4bits
Qwen2.5-Coder-14B-Instruct-MLX-6bits
Dolphin3.0-R1-Mistral-24B-MLX-4bits
The model moot20/Dolphin3.0-R1-Mistral-24B-MLX-4bits was converted to MLX format from cognitivecomputations/Dolphin3.0-R1-Mistral-24B using mlx-lm version 0.21.1.
Dolphin3.0-Mistral-24B-MLX-4bits
Qwen2.5-VL-3B-Instruct-MLX-8bits
SmolVLM-256M-Base-MLX-4bits
DeepSeek-R1-Distill-Qwen-32B-MLX-6bits
Dolphin3.0-R1-Mistral-24B-MLX-8bits
Dolphin3.0-Mistral-24B-MLX-6bits
DeepSeek-R1-Distill-Qwen-14B-MLX-4bit
DeepSeek-R1-Distill-Qwen-1.5B-MLX-4bit
phi-4-MLX-4bit
Qwen2.5-Coder-7B-Instruct-MLX-4bits
Llama-3.1-Tulu-3-8B-MLX-4bits
Qwen2.5-Coder-32B-Instruct-MLX-8bits
DeepSeek-R1-Distill-Qwen-14B-MLX-6bits
DeepSeek-R1-Distill-Qwen-14B-MLX-8bits
SmolVLM-256M-Instruct-MLX-4bits
SmolVLM-500M-Instruct-MLX
Dolphin3.0-R1-Mistral-24B-MLX-6bits
The model moot20/Dolphin3.0-R1-Mistral-24B-MLX-6bits was converted to MLX format from cognitivecomputations/Dolphin3.0-R1-Mistral-24B using mlx-lm version 0.21.1.
Qwen2.5-Coder-7B-Instruct-MLX-6bits
Qwen2.5-VL-7B-Instruct-MLX-8bits
Qwen2.5-Coder-32B-Instruct-MLX-4bits
DeepSeek-R1-Distill-Qwen-1.5B-MLX-6bits
The model moot20/DeepSeek-R1-Distill-Qwen-1.5B-MLX-6bits was converted to MLX format from deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B using mlx-lm version 0.21.1.
DeepSeek-R1-Distill-Qwen-7B-MLX-6bits
DeepSeek-R1-Distill-Qwen-32B-MLX-4bits
The model moot20/DeepSeek-R1-Distill-Qwen-32B-MLX-4bits was converted to MLX format from deepseek-ai/DeepSeek-R1-Distill-Qwen-32B using mlx-lm version 0.21.1.
DeepSeek-R1-Distill-Qwen-32B-MLX-8bits
paligemma2-28b-mix-224-MLX-6bits
The model moot20/paligemma2-28b-mix-224-MLX-6bits was converted to MLX format from `google/paligemma2-28b-mix-224` using mlx-vlm version 0.1.13. Refer to the original model card for more details.
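For the vision-language conversions, a sketch based on the standard mlx-vlm usage snippet (requires Apple silicon; the image URL and prompt are illustrative assumptions):

```python
# Load a converted vision-language checkpoint and caption an image with mlx-vlm.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "moot20/paligemma2-28b-mix-224-MLX-6bits"
model, processor = load(model_path)
config = load_config(model_path)

# One image per prompt; URLs or local paths both work.
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Format the prompt with the model's chat template before generation.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```

The Qwen2.5-VL, SmolVLM, and other paligemma2 conversions in this listing follow the same pattern.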
DeepSeek-R1-Distill-Qwen-7B-MLX-4bit
Qwen2.5-14B-Instruct-1M-MLX-4bit
Mistral-Small-24B-Base-2501-MLX-4bit
Qwen2.5-Coder-14B-Instruct-MLX-4bits
Qwen2.5-Coder-14B-Instruct-MLX-8bits
The model moot20/Qwen2.5-Coder-14B-Instruct-MLX-8bits was converted to MLX format from Qwen/Qwen2.5-Coder-14B-Instruct using mlx-lm version 0.21.1.
Qwen2.5-VL-7B-Instruct-MLX-4bits
The model moot20/Qwen2.5-VL-7B-Instruct-MLX-4bits was converted to MLX format from `Qwen/Qwen2.5-VL-7B-Instruct` using mlx-vlm version 0.1.12. Refer to the original model card for more details.
Llama-3.1-Tulu-3-8B-MLX-6bits
phi-4-MLX-6bits
phi-4-MLX-8bits
Mistral-Small-24B-Base-2501-MLX-6bits
DeepSeek-R1-Distill-Qwen-1.5B-MLX-8bits
The model moot20/DeepSeek-R1-Distill-Qwen-1.5B-MLX-8bits was converted to MLX format from deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B using mlx-lm version 0.21.1.
DeepSeek-R1-Distill-Qwen-7B-MLX-8bits
SmolVLM-500M-Instruct-MLX-4bits
SmolVLM-500M-Instruct-MLX-6bits
SmolVLM-500M-Instruct-MLX-8bits
SmolVLM-256M-Instruct-MLX-8bits
Velvet-14B-MLX-4bits
DeepHermes-3-Llama-3-8B-Preview-MLX-4bits
paligemma2-3b-mix-448-MLX-4bits
paligemma2-28b-mix-224-MLX-4bits
The model moot20/paligemma2-28b-mix-224-MLX-4bits was converted to MLX format from `google/paligemma2-28b-mix-224` using mlx-vlm version 0.1.13. Refer to the original model card for more details.
DeepSeek-R1-Distill-Llama-8B-MLX-4bit
Mistral-Small-24B-Instruct-2501-MLX-4bit
The model moot20/Mistral-Small-24B-Instruct-2501-MLX-4bit was converted to MLX format from mistralai/Mistral-Small-24B-Instruct-2501 using mlx-lm version 0.21.1.
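A sketch of how such a quantized conversion is typically produced with mlx-lm's convert command (flags as in recent mlx-lm releases; the output path is an assumption matching this repo's naming):

```shell
# Install mlx-lm, then quantize a Hugging Face checkpoint to 4-bit MLX format.
pip install mlx-lm

mlx_lm.convert \
  --hf-path mistralai/Mistral-Small-24B-Instruct-2501 \
  -q --q-bits 4 \
  --mlx-path Mistral-Small-24B-Instruct-2501-MLX-4bit
```

Passing `--q-bits 6` or `--q-bits 8` instead yields the 6-bit and 8-bit variants that appear throughout this listing.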
Qwen2.5-Coder-3B-Instruct-MLX-4bits
Llama-3.1-Tulu-3-8B-MLX-8bits
Mistral-Small-24B-Instruct-2501-MLX-6bits
Qwen2.5-14B-Instruct-1M-MLX-8bits
DeepSeek-R1-Distill-Llama-8B-MLX-6bits
DeepSeek-R1-Distill-Llama-8B-MLX-8bits
The model moot20/DeepSeek-R1-Distill-Llama-8B-MLX-8bits was converted to MLX format from deepseek-ai/DeepSeek-R1-Distill-Llama-8B using mlx-lm version 0.21.1.
SmolVLM-256M-Instruct-MLX-6bits
DeepHermes-3-Llama-3-8B-Preview-MLX-6bits
DeepHermes-3-Llama-3-8B-Preview-MLX-8bits
paligemma2-3b-mix-224-MLX-8bits
paligemma2-10b-mix-224-MLX-6bits
paligemma2-10b-mix-448-MLX-8bits
paligemma2-28b-mix-448-MLX-4bits
s1-32B-MLX-4bits
Qwen2.5-Coder-7B-Instruct-MLX-8bits
Qwen2.5-Coder-3B-Instruct-MLX-6bits
Qwen2.5-Coder-3B-Instruct-MLX-8bits
Qwen2.5-Coder-0.5B-Instruct-MLX-4bits
Qwen2.5-Coder-0.5B-Instruct-MLX-6bits
Qwen2.5-Coder-0.5B-Instruct-MLX-8bits
Qwen2.5-VL-7B-Instruct-MLX-6bits
The model moot20/Qwen2.5-VL-7B-Instruct-MLX-6bits was converted to MLX format from `Qwen/Qwen2.5-VL-7B-Instruct` using mlx-vlm version 0.1.12. Refer to the original model card for more details.
Qwen2.5-VL-3B-Instruct-MLX-4bits
Qwen2.5-VL-3B-Instruct-MLX-6bits
Mistral-Small-24B-Base-2501-MLX-8bits
The model moot20/Mistral-Small-24B-Base-2501-MLX-8bits was converted to MLX format from mistralai/Mistral-Small-24B-Base-2501 using mlx-lm version 0.21.1.
Qwen2.5-7B-Instruct-1M-MLX-6bits
Qwen2.5-7B-Instruct-1M-MLX-8bits
Qwen2.5-14B-Instruct-1M-MLX-6bits
The model moot20/Qwen2.5-14B-Instruct-1M-MLX-6bits was converted to MLX format from Qwen/Qwen2.5-14B-Instruct-1M using mlx-lm version 0.21.1.
Qwen2.5-Coder-32B-Instruct-MLX-6bits
The model moot20/Qwen2.5-Coder-32B-Instruct-MLX-6bits was converted to MLX format from Qwen/Qwen2.5-Coder-32B-Instruct using mlx-lm version 0.21.1.
SmolVLM-256M-Base-MLX-6bits
SmolVLM-256M-Base-MLX-8bits
SmolVLM-500M-Base-MLX-4bits
SmolVLM-500M-Base-MLX-6bits
SmolVLM-500M-Base-MLX-8bits
s1-32B-MLX-6bits
DeepScaleR-1.5B-Preview-MLX-4bits
paligemma2-3b-mix-224-MLX-6bits
paligemma2-10b-mix-224-MLX-4bits
paligemma2-10b-mix-224-MLX-8bits
paligemma2-10b-mix-448-MLX-4bits
paligemma2-28b-mix-224-MLX-8bits
The model moot20/paligemma2-28b-mix-224-MLX-8bits was converted to MLX format from `google/paligemma2-28b-mix-224` using mlx-vlm version 0.1.13. Refer to the original model card for more details.
paligemma2-28b-mix-448-MLX-8bits
The model moot20/paligemma2-28b-mix-448-MLX-8bits was converted to MLX format from `google/paligemma2-28b-mix-448` using mlx-vlm version 0.1.13. Refer to the original model card for more details.