moot20

95 models

SmolVLM-256M-Instruct-MLX (license: apache-2.0)
SmolVLM-256M-Base-MLX
SmolVLM-500M-Base-MLX
Dolphin3.0-Mistral-24B-MLX-8bits
  Converted to MLX format from cognitivecomputations/Dolphin3.0-Mistral-24B using mlx-lm 0.21.1.
paligemma2-3b-mix-224-MLX-4bits
Qwen2.5-Coder-14B-Instruct-MLX-6bits (license: apache-2.0)
Dolphin3.0-R1-Mistral-24B-MLX-4bits
  Converted to MLX format from cognitivecomputations/Dolphin3.0-R1-Mistral-24B using mlx-lm 0.21.1.
Dolphin3.0-Mistral-24B-MLX-4bits

Qwen2.5-VL-3B-Instruct-MLX-8bits
SmolVLM-256M-Base-MLX-4bits
DeepSeek-R1-Distill-Qwen-32B-MLX-6bits (license: mit)
Dolphin3.0-R1-Mistral-24B-MLX-8bits
Dolphin3.0-Mistral-24B-MLX-6bits
DeepSeek-R1-Distill-Qwen-14B-MLX-4bit (license: mit)
DeepSeek-R1-Distill-Qwen-1.5B-MLX-4bit (license: mit)
phi-4-MLX-4bit (license: mit)

Qwen2.5-Coder-7B-Instruct-MLX-4bits (license: apache-2.0)
Llama-3.1-Tulu-3-8B-MLX-4bits
Qwen2.5-Coder-32B-Instruct-MLX-8bits (license: apache-2.0)
DeepSeek-R1-Distill-Qwen-14B-MLX-6bits (license: mit)
DeepSeek-R1-Distill-Qwen-14B-MLX-8bits (license: mit)
SmolVLM-256M-Instruct-MLX-4bits (license: apache-2.0)
SmolVLM-500M-Instruct-MLX (license: apache-2.0)
Dolphin3.0-R1-Mistral-24B-MLX-6bits
  Converted to MLX format from cognitivecomputations/Dolphin3.0-R1-Mistral-24B using mlx-lm 0.21.1.

Qwen2.5-Coder-7B-Instruct-MLX-6bits (license: apache-2.0)
Qwen2.5-VL-7B-Instruct-MLX-8bits (license: apache-2.0)
Qwen2.5-Coder-32B-Instruct-MLX-4bits (license: apache-2.0)
DeepSeek-R1-Distill-Qwen-1.5B-MLX-6bits (license: mit)
  Converted to MLX format from deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B using mlx-lm 0.21.1.
DeepSeek-R1-Distill-Qwen-7B-MLX-6bits (license: mit)
DeepSeek-R1-Distill-Qwen-32B-MLX-4bits (license: mit)
  Converted to MLX format from deepseek-ai/DeepSeek-R1-Distill-Qwen-32B using mlx-lm 0.21.1.
DeepSeek-R1-Distill-Qwen-32B-MLX-8bits (license: mit)
paligemma2-28b-mix-224-MLX-6bits
  Converted to MLX format from google/paligemma2-28b-mix-224 using mlx-vlm 0.1.13; see the original model card for details.

DeepSeek-R1-Distill-Qwen-7B-MLX-4bit (license: mit)
Qwen2.5-14B-Instruct-1M-MLX-4bit (license: apache-2.0)
Mistral-Small-24B-Base-2501-MLX-4bit (license: apache-2.0)
Qwen2.5-Coder-14B-Instruct-MLX-4bits (license: apache-2.0)
Qwen2.5-Coder-14B-Instruct-MLX-8bits (license: apache-2.0)
  Converted to MLX format from Qwen/Qwen2.5-Coder-14B-Instruct using mlx-lm 0.21.1.
Qwen2.5-VL-7B-Instruct-MLX-4bits (license: apache-2.0)
  Converted to MLX format from Qwen/Qwen2.5-VL-7B-Instruct using mlx-vlm 0.1.12; see the original model card for details.
Llama-3.1-Tulu-3-8B-MLX-6bits
phi-4-MLX-6bits (license: mit)
phi-4-MLX-8bits (license: mit)

Mistral-Small-24B-Base-2501-MLX-6bits (license: apache-2.0)
DeepSeek-R1-Distill-Qwen-1.5B-MLX-8bits (license: mit)
  Converted to MLX format from deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B using mlx-lm 0.21.1.
DeepSeek-R1-Distill-Qwen-7B-MLX-8bits (license: mit)
SmolVLM-500M-Instruct-MLX-4bits (license: apache-2.0)
SmolVLM-500M-Instruct-MLX-6bits (license: apache-2.0)
SmolVLM-500M-Instruct-MLX-8bits (license: apache-2.0)
SmolVLM-256M-Instruct-MLX-8bits (license: apache-2.0)
Velvet-14B-MLX-4bits (license: apache-2.0)
DeepHermes-3-Llama-3-8B-Preview-MLX-4bits
paligemma2-3b-mix-448-MLX-4bits
paligemma2-28b-mix-224-MLX-4bits
  Converted to MLX format from google/paligemma2-28b-mix-224 using mlx-vlm 0.1.13; see the original model card for details.

DeepSeek-R1-Distill-Llama-8B-MLX-4bit
Mistral-Small-24B-Instruct-2501-MLX-4bit (license: apache-2.0)
  Converted to MLX format from mistralai/Mistral-Small-24B-Instruct-2501 using mlx-lm 0.21.1.
Qwen2.5-Coder-3B-Instruct-MLX-4bits
Llama-3.1-Tulu-3-8B-MLX-8bits
Mistral-Small-24B-Instruct-2501-MLX-6bits (license: apache-2.0)
Qwen2.5-14B-Instruct-1M-MLX-8bits (license: apache-2.0)
DeepSeek-R1-Distill-Llama-8B-MLX-6bits
DeepSeek-R1-Distill-Llama-8B-MLX-8bits
  Converted to MLX format from deepseek-ai/DeepSeek-R1-Distill-Llama-8B using mlx-lm 0.21.1.
SmolVLM-256M-Instruct-MLX-6bits (license: apache-2.0)

DeepHermes-3-Llama-3-8B-Preview-MLX-6bits
DeepHermes-3-Llama-3-8B-Preview-MLX-8bits
paligemma2-3b-mix-224-MLX-8bits
paligemma2-10b-mix-224-MLX-6bits
paligemma2-10b-mix-448-MLX-8bits
paligemma2-28b-mix-448-MLX-4bits
s1-32B-MLX-4bits (license: apache-2.0)
Qwen2.5-Coder-7B-Instruct-MLX-8bits (license: apache-2.0)
Qwen2.5-Coder-3B-Instruct-MLX-6bits
Qwen2.5-Coder-3B-Instruct-MLX-8bits

Qwen2.5-Coder-0.5B-Instruct-MLX-4bits (license: apache-2.0)
Qwen2.5-Coder-0.5B-Instruct-MLX-6bits (license: apache-2.0)
Qwen2.5-Coder-0.5B-Instruct-MLX-8bits (license: apache-2.0)
Qwen2.5-VL-7B-Instruct-MLX-6bits (license: apache-2.0)
  Converted to MLX format from Qwen/Qwen2.5-VL-7B-Instruct using mlx-vlm 0.1.12; see the original model card for details.
Qwen2.5-VL-3B-Instruct-MLX-4bits
Qwen2.5-VL-3B-Instruct-MLX-6bits
Mistral-Small-24B-Base-2501-MLX-8bits (license: apache-2.0)
  Converted to MLX format from mistralai/Mistral-Small-24B-Base-2501 using mlx-lm 0.21.1.
Qwen2.5-7B-Instruct-1M-MLX-6bits (license: apache-2.0)
Qwen2.5-7B-Instruct-1M-MLX-8bits (license: apache-2.0)

Qwen2.5-14B-Instruct-1M-MLX-6bits (license: apache-2.0)
  Converted to MLX format from Qwen/Qwen2.5-14B-Instruct-1M using mlx-lm 0.21.1.
Qwen2.5-Coder-32B-Instruct-MLX-6bits (license: apache-2.0)
  Converted to MLX format from Qwen/Qwen2.5-Coder-32B-Instruct using mlx-lm 0.21.1.
SmolVLM-256M-Base-MLX-6bits
SmolVLM-256M-Base-MLX-8bits
SmolVLM-500M-Base-MLX-4bits
SmolVLM-500M-Base-MLX-6bits
SmolVLM-500M-Base-MLX-8bits
s1-32B-MLX-6bits (license: apache-2.0)
DeepScaleR-1.5B-Preview-MLX-4bits (dataset: KbsdJames/Omni-MATH)
paligemma2-3b-mix-224-MLX-6bits

paligemma2-10b-mix-224-MLX-4bits
paligemma2-10b-mix-224-MLX-8bits
paligemma2-10b-mix-448-MLX-4bits
paligemma2-28b-mix-224-MLX-8bits
  Converted to MLX format from google/paligemma2-28b-mix-224 using mlx-vlm 0.1.13; see the original model card for details.
paligemma2-28b-mix-448-MLX-8bits
  Converted to MLX format from google/paligemma2-28b-mix-448 using mlx-vlm 0.1.13; see the original model card for details.
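The repos above follow one naming convention: the upstream model name with an "-MLX-<n>bits" suffix (a few entries use the singular "-<n>bit"), published under the moot20 namespace. A minimal sketch of that mapping for the plural-suffix case; `mlx_repo_name` is a hypothetical helper written for illustration, not part of mlx-lm or any library:

```python
def mlx_repo_name(base_repo: str, bits: int, namespace: str = "moot20") -> str:
    """Map an upstream Hugging Face repo id to the converted-model id
    used in this index."""
    model = base_repo.split("/")[-1]  # drop the upstream org prefix
    return f"{namespace}/{model}-MLX-{bits}bits"

print(mlx_repo_name("cognitivecomputations/Dolphin3.0-Mistral-24B", 8))
# moot20/Dolphin3.0-Mistral-24B-MLX-8bits
```

The inverse direction works the same way: stripping the namespace and the "-MLX-<n>bits" suffix recovers the upstream model name, as the conversion notes in the list confirm.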