lmstudio-community

500 models

gemma-4-26B-A4B-it-GGUF

license: apache-2.0 • 851,193 downloads • 19 likes

Qwen3-VL-4B-Instruct-MLX-4bit

base model: Qwen/Qwen3-VL-4B-Instruct • pipeline: image-text-to-text • tags: mlx
license: apache-2.0 • 650,101 downloads • 5 likes

Qwen3-VL-4B-Instruct-MLX-8bit

base model: Qwen/Qwen3-VL-4B-Instruct • pipeline: image-text-to-text • tags: mlx
license: apache-2.0 • 635,949 downloads • 0 likes

Qwen3-VL-4B-Instruct-MLX-5bit

base model: Qwen/Qwen3-VL-4B-Instruct • pipeline: image-text-to-text • tags: mlx
license: apache-2.0 • 635,056 downloads • 0 likes

Qwen3-VL-4B-Instruct-MLX-6bit

base model: Qwen/Qwen3-VL-4B-Instruct • pipeline: image-text-to-text • tags: mlx
license: apache-2.0 • 634,603 downloads • 0 likes

gemma-4-E4B-it-GGUF

license: apache-2.0 • 429,725 downloads • 23 likes

DeepSeek-R1-0528-Qwen3-8B-MLX-4bit

base model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B • library: mlx • pipeline: text-generation
license: mit • 253,221 downloads • 6 likes

gemma-4-31B-it-GGUF

license: apache-2.0 • 240,735 downloads • 20 likes

DeepSeek-R1-0528-Qwen3-8B-MLX-8bit

base model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B • library: mlx • pipeline: text-generation
license: mit • 239,715 downloads • 10 likes

Qwen3-4B-Thinking-2507-MLX-4bit

base model: Qwen/Qwen3-4B-Thinking-2507 • library: transformers • pipeline: text-generation • tags: mlx
license: apache-2.0 (https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE) • 232,103 downloads • 9 likes

Qwen3-4B-Thinking-2507-MLX-8bit

base model: Qwen/Qwen3-4B-Thinking-2507 • library: transformers • pipeline: text-generation • tags: mlx
license: apache-2.0 (https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE) • 228,020 downloads • 7 likes

Qwen3-4B-Thinking-2507-MLX-6bit

base model: Qwen/Qwen3-4B-Thinking-2507 • library: transformers • pipeline: text-generation • tags: mlx
license: apache-2.0 (https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE) • 227,031 downloads • 2 likes

Qwen3-VL-8B-Instruct-MLX-4bit

base model: Qwen/Qwen3-VL-8B-Instruct • pipeline: image-text-to-text • tags: mlx
license: apache-2.0 • 202,916 downloads • 3 likes

gpt-oss-20b-GGUF

base model: openai/gpt-oss-20b • tags: gguf
license: apache-2.0 • 200,505 downloads • 63 likes

Magistral-Small-2509-MLX-4bit

base model: mistralai/Magistral-Small-2509 • library: vllm • tags: vllm, mistral-common, mlx
languages: en, fr, de, es, pt, it, ja, ko, ru, zh, ar, fa, id, ms, ne, pl, ro, sr, sv, tr, uk, vi, hi, bn
license: apache-2.0 • 196,139 downloads • 0 likes

Magistral-Small-2509-MLX-8bit

base model: mistralai/Magistral-Small-2509 • library: vllm • tags: vllm, mistral-common, mlx
languages: en, fr, de, es, pt, it, ja, ko, ru, zh, ar, fa, id, ms, ne, pl, ro, sr, sv, tr, uk, vi, hi, bn
license: apache-2.0 • 191,136 downloads • 1 like

Magistral-Small-2509-MLX-6bit

base model: mistralai/Magistral-Small-2509 • library: vllm • tags: vllm, mistral-common, mlx
languages: en, fr, de, es, pt, it, ja, ko, ru, zh, ar, fa, id, ms, ne, pl, ro, sr, sv, tr, uk, vi, hi, bn
license: apache-2.0 • 190,143 downloads • 0 likes

Magistral-Small-2509-MLX-5bit

base model: mistralai/Magistral-Small-2509 • library: vllm • tags: vllm, mistral-common, mlx
languages: en, fr, de, es, pt, it, ja, ko, ru, zh, ar, fa, id, ms, ne, pl, ro, sr, sv, tr, uk, vi, hi, bn
license: apache-2.0 • 189,804 downloads • 0 likes

Qwen3-VL-8B-Instruct-MLX-8bit

base model: Qwen/Qwen3-VL-8B-Instruct • pipeline: image-text-to-text • tags: mlx
license: apache-2.0 • 187,299 downloads • 1 like

Qwen3-VL-8B-Instruct-MLX-6bit

base model: Qwen/Qwen3-VL-8B-Instruct • pipeline: image-text-to-text • tags: mlx
license: apache-2.0 • 184,030 downloads • 0 likes

Qwen3-VL-8B-Instruct-MLX-5bit

base model: Qwen/Qwen3-VL-8B-Instruct • pipeline: image-text-to-text • tags: mlx
license: apache-2.0 • 183,460 downloads • 0 likes

Qwen3-Coder-30B-A3B-Instruct-MLX-4bit

💫 Community Model> Qwen3-Coder-30B-A3B-Instruct by Qwen

👾 LM Studio Community models highlights program: highlighting new & noteworthy models by the community. Join the conversation on Discord.

Model creator: Qwen
Original model: Qwen3-Coder-30B-A3B-Instruct
MLX quantization: provided by the LM Studio team using mlx-lm

4-bit quantized version of Qwen3-Coder-30B-A3B-Instruct using MLX, optimized for Apple Silicon.

🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate, or otherwise inappropriate or deceptive. Each Community Model is the sole responsibility of the person or entity who originated it. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Community Models. LM Studio further disclaims any warranty that a Community Model will meet your requirements; be secure, uninterrupted, or available at any time or location; or be error-free or virus-free, or that any errors will be corrected. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or your use of any other Community Model provided by or through LM Studio.

license: apache-2.0 • 173,859 downloads • 7 likes

Qwen3-Coder-30B-A3B-Instruct-MLX-5bit

💫 Community Model> Qwen3-Coder-30B-A3B-Instruct by Qwen. 5-bit MLX quantization provided by the LM Studio team using mlx-lm, optimized for Apple Silicon.

license: apache-2.0 • 170,467 downloads • 2 likes

Qwen3-Coder-30B-A3B-Instruct-MLX-8bit

license: apache-2.0 • 166,230 downloads • 8 likes

Qwen3-Coder-30B-A3B-Instruct-MLX-6bit

💫 Community Model> Qwen3-Coder-30B-A3B-Instruct by Qwen. 6-bit MLX quantization provided by the LM Studio team using mlx-lm, optimized for Apple Silicon.

license: apache-2.0 • 164,341 downloads • 2 likes

gemma-3n-E4B-it-MLX-4bit

Model creator: google. Original model: gemma-3n-E4B-it. 4-bit MLX quantization provided by the LM Studio team using mlx-vlm, optimized for Apple Silicon.

163,236 downloads • 1 like

gemma-3n-E4B-it-MLX-bf16

Model creator: google. Original model: gemma-3n-E4B-it. Original bfloat16 version in MLX format, provided by the LM Studio team using mlx-vlm, optimized for Apple Silicon.

159,631 downloads • 3 likes

gemma-3n-E4B-it-MLX-8bit

159,629 downloads • 0 likes

gemma-3n-E4B-it-MLX-6bit

Model creator: google. Original model: gemma-3n-E4B-it. 6-bit MLX quantization provided by the LM Studio team using mlx-vlm, optimized for Apple Silicon.

159,435 downloads • 0 likes

gemma-4-E2B-it-GGUF

license: apache-2.0 • 148,609 downloads • 9 likes

Qwen3-VL-30B-A3B-Instruct-MLX-4bit

license: apache-2.0 • 116,023 downloads • 0 likes

Qwen3-4B-Instruct-2507-MLX-4bit

Model creator: Qwen. Original model: Qwen3-4B-Instruct-2507. 4-bit MLX quantization provided by the LM Studio team using mlx-lm, optimized for Apple Silicon.

license: apache-2.0 • 115,100 downloads • 2 likes

Qwen3-4B-Instruct-2507-MLX-8bit

license: apache-2.0 • 112,198 downloads • 1 like

Qwen3-4B-Instruct-2507-MLX-5bit

license: apache-2.0 • 112,084 downloads • 0 likes

Qwen3-4B-Instruct-2507-MLX-6bit

Model creator: Qwen. Original model: Qwen3-4B-Instruct-2507. 6-bit MLX quantization provided by the LM Studio team using mlx-lm, optimized for Apple Silicon.

license: apache-2.0 • 111,983 downloads • 0 likes

Qwen3-VL-30B-A3B-Instruct-MLX-8bit

license: apache-2.0 • 109,072 downloads • 0 likes

Qwen3-VL-30B-A3B-Instruct-MLX-6bit

license: apache-2.0 • 106,953 downloads • 0 likes

Qwen3-VL-30B-A3B-Instruct-MLX-5bit

💫 Community Model> Qwen3-VL-30B-A3B-Instruct by Qwen. 5-bit MLX quantization provided by the LM Studio team using mlx-vlm, optimized for Apple Silicon. LM Studio model page: Qwen3-VL.

license: apache-2.0 • 106,691 downloads • 0 likes

Seed-OSS-36B-Instruct-MLX-8bit

license: apache-2.0 • 93,445 downloads • 2 likes

GLM-4.7-Flash-MLX-8bit

license: mit • 93,131 downloads • 2 likes

Seed-OSS-36B-Instruct-MLX-4bit

💫 Community Model> Seed-OSS-36B-Instruct by ByteDance-Seed. 4-bit MLX quantization provided by the LM Studio team using mlx-lm, optimized for Apple Silicon.

license: apache-2.0 • 93,014 downloads • 0 likes

Seed-OSS-36B-Instruct-MLX-5bit

license: apache-2.0 • 92,378 downloads • 1 like

GLM-4.7-Flash-MLX-6bit

license: mit • 92,301 downloads • 1 like

Seed-OSS-36B-Instruct-MLX-6bit

💫 Community Model> Seed-OSS-36B-Instruct by ByteDance-Seed. 6-bit MLX quantization provided by the LM Studio team using mlx-lm, optimized for Apple Silicon.

license: apache-2.0 • 92,280 downloads • 0 likes

Qwen3-8B-MLX-4bit

license: apache-2.0 • 82,157 downloads • 0 likes

Qwen3-8B-MLX-8bit

license: apache-2.0 • 78,234 downloads • 2 likes

LFM2-24B-A2B-MLX-4bit

62,342 downloads • 1 like

gpt-oss-120b-MLX-8bit

license: apache-2.0 • 60,719 downloads • 11 likes

Hermes-4-70B-MLX-4bit

license: llama • 60,571 downloads • 1 like

Qwen3-14B-GGUF

60,112 downloads • 13 likes

Hermes-4-70B-MLX-8bit

license: llama • 59,973 downloads • 1 like

DeepSeek-R1-0528-Qwen3-8B-GGUF

license: mit • 59,824 downloads • 44 likes

Hermes-4-70B-MLX-5bit

license: llama • 59,553 downloads • 0 likes

Hermes-4-70B-MLX-6bit

Model creator: NousResearch. Original model: Hermes-4-70B. 6-bit MLX quantization provided by the LM Studio team using mlx-lm, optimized for Apple Silicon.

license: llama • 59,420 downloads • 0 likes

Qwen3-30B-A3B-Instruct-2507-MLX-4bit

license: apache-2.0 • 58,182 downloads • 6 likes

Qwen3-30B-A3B-Instruct-2507-MLX-8bit

license: apache-2.0 • 56,691 downloads • 4 likes

Qwen3-30B-A3B-Instruct-2507-MLX-6bit

💫 Community Model> Qwen3-30B-A3B-Instruct-2507 by Qwen. 6-bit MLX quantization provided by the LM Studio team using mlx-lm, optimized for Apple Silicon.

license: apache-2.0 • 56,412 downloads • 0 likes

Qwen3-Coder-30B-A3B-Instruct-GGUF

💫 Community Model> Qwen3 Coder 30B A3B Instruct by Qwen. GGUF quantization provided by bartowski, based on `llama.cpp` release b6014. 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

53,809 downloads • 19 likes

Qwen3-8B-GGUF

Model creator: Qwen. Original model: Qwen3-8B. GGUF quantization: provided by bartowski based on `llama.cpp` release b5200. Supports a context length of up to 131,072 tokens with YaRN (default 32k). Supports `/nothink` to disable reasoning; just add it at the end of your prompt. Supports both thinking and non-thinking modes, with enhanced reasoning in both for significantly improved mathematics, coding, and commonsense performance. Excels at creative writing, role-playing, multi-turn dialogues, and instruction following. Advanced agent capabilities and support for over 100 languages and dialects.

license:apache-2.0
52,686
7
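The `/nothink` switch described above is just a plain-text suffix on the user turn. A minimal sketch, assuming LM Studio's OpenAI-compatible local server is running at its default `localhost:1234` and the model is loaded under the name `qwen3-8b` (both assumptions; adjust to your setup):

```python
# Sketch: toggling Qwen3's reasoning with the `/nothink` suffix.
# The URL and model name below are assumptions based on LM Studio's
# local-server defaults; adjust them to your configuration.

import json
from urllib import request

def with_think_switch(prompt: str, think: bool) -> str:
    """Append the soft switch only when reasoning should be disabled."""
    return prompt if think else f"{prompt} /nothink"

def build_payload(model: str, prompt: str, think: bool = True) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": with_think_switch(prompt, think)}],
    }

def ask(payload: dict, url: str = "http://localhost:1234/v1/chat/completions") -> str:
    """POST the chat request to the local server and return the reply text."""
    req = request.Request(url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Build a non-thinking request; call ask(payload) with the server running.
payload = build_payload("qwen3-8b", "Summarize YaRN in one sentence.", think=False)
```

Because the switch is part of the prompt itself, it works identically through any client that can send chat completions to the server.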

Magistral-Small-2509-GGUF

💫 Community Model> Magistral-Small-2509 by mistralai. Model creator: mistralai. Original model: Magistral-Small-2509. GGUF quantization: provided by the LM Studio team using `llama.cpp` release b6503.

license:apache-2.0
52,631
2

gemma-3-12b-it-GGUF

Model creator: google. Original model: gemma-3-12b-it. GGUF quantization: provided by bartowski based on `llama.cpp` release b4877. Supports a context length of 128k tokens, with a max output of 8192 tokens. Multimodal, with input images normalized to 896 x 896 resolution. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Requires the latest (currently beta) llama.cpp runtime.

52,269
33

Qwen3-VL-8B-Instruct-GGUF

Model creator: Qwen. Original model: Qwen3-VL-8B-Instruct. GGUF quantization: provided by the LM Studio team using `llama.cpp` release b6890.

license:apache-2.0
52,246
1

Mistral-Small-3.2-24B-Instruct-2506-MLX-6bit

license:apache-2.0
50,751
0

Mistral-Small-3.2-24B-Instruct-2506-MLX-4bit

license:apache-2.0
50,004
3

LFM2-1.2B-MLX-8bit

49,388
3

Mistral-Small-3.2-24B-Instruct-2506-MLX-8bit

license:apache-2.0
49,079
1

LFM2-1.2B-MLX-bf16

Model creator: LiquidAI. Original model: LFM2-1.2B. MLX quantization: provided by the LM Studio team using mlx-lm. Original bfloat16 version of LFM2-1.2B using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

48,240
5

Phi-4-mini-reasoning-MLX-4bit

license:mit
45,212
3

Qwen3-30B-A3B-Instruct-2507-GGUF

💫 Community Model> Qwen3-30B-A3B-Instruct-2507 by Qwen. Model creator: Qwen. Original model: Qwen3-30B-A3B-Instruct-2507. GGUF quantization: provided by the LM Studio team using `llama.cpp` release b6022.

license:apache-2.0
43,881
7

Qwen3-VL-4B-Instruct-GGUF

Model creator: Qwen. Original model: Qwen3-VL-4B-Instruct. GGUF quantization: provided by the LM Studio team using `llama.cpp` release b6890.

license:apache-2.0
43,840
0

Qwen2.5-Coder-14B-Instruct-MLX-4bit

license:apache-2.0
42,526
1

Qwen2.5-VL-7B-Instruct-GGUF

Model creator: Qwen. Original model: Qwen2.5-VL-7B-Instruct. GGUF quantization: provided by bartowski based on `llama.cpp` release b5317. Not only proficient at recognizing common objects such as flowers, birds, fish, and insects, but also highly capable of analyzing text, charts, icons, graphics, and layouts within images. Can act as a visual agent that reasons and dynamically directs tools, enabling computer and phone use. Useful for generating structured outputs, including stable JSON.

license:apache-2.0
40,047
2

Qwen2.5-Coder-14B-Instruct-MLX-8bit

license:apache-2.0
39,163
1

Qwen3-VL-30B-A3B-Instruct-GGUF

💫 Community Model> Qwen3-VL-30B-A3B-Instruct by Qwen. Model creator: Qwen. Original model: Qwen3-VL-30B-A3B-Instruct. GGUF quantization: provided by the LM Studio team using `llama.cpp` release b6890.

license:apache-2.0
38,805
0

Devstral-Small-2507-MLX-4bit

license:apache-2.0
37,740
3

Devstral-Small-2507-MLX-8bit

license:apache-2.0
37,322
2

Devstral-Small-2507-MLX-6bit

💫 Community Model> Devstral-Small-2507 by mistralai. Model creator: mistralai. Original model: Devstral-Small-2507. MLX quantization: provided by the LM Studio team using mlx-lm. LM Studio model page: mistralai/devstral-small-2507. 6-bit quantized version of Devstral-Small-2507 using MLX, optimized for Apple Silicon.

license:apache-2.0
37,110
1

Devstral-Small-2507-MLX-bf16

license:apache-2.0
36,867
0

Qwen3-4B-Thinking-2507-GGUF

35,229
22

QwQ-32B-MLX-4bit

Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: QwQ-32B. MLX quantizations: provided by bartowski from mlx-examples.

34,042
0

QwQ-32B-MLX-8bit

license:apache-2.0
33,624
0

gemma-3-27B-it-qat-GGUF

Model creator: google. Original model: gemma-3-27b-it. GGUF quantization: provided by Google. Optimized with Quantization-Aware Training for improved 4-bit performance. Supports a context length of 128k tokens, with a max output of 8192 tokens. Multimodal, with input images normalized to 896 x 896 resolution. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning.

30,588
16

ERNIE-4.5-21B-A3B-MLX-4bit

Model creator: baidu. Original model: ERNIE-4.5-21B-A3B-PT. MLX quantization: provided by the LM Studio team using mlx-lm. 4-bit quantized version of ERNIE-4.5-21B-A3B-PT using MLX, optimized for Apple Silicon.

license:apache-2.0
30,105
1

ERNIE-4.5-21B-A3B-MLX-8bit

license:apache-2.0
29,870
1

ERNIE-4.5-21B-A3B-MLX-6bit

license:apache-2.0
29,840
1

gemma-3-4b-it-GGUF

27,428
27

gemma-3n-E4B-it-text-GGUF

27,061
11

granite-4.0-h-tiny-GGUF

license:apache-2.0
26,924
1

GLM-4.6V-Flash-MLX-4bit

license:mit
26,733
1

Phi-4-reasoning-plus-MLX-4bit

license:mit
26,002
1

Devstral-Small-2-24B-Instruct-2512-GGUF

license:apache-2.0
25,144
1

Qwen3-14B-MLX-4bit

This model lmstudio-community/Qwen3-14B-4bit was converted to MLX format from Qwen/Qwen3-14B using mlx-lm version 0.24.0.

license:apache-2.0
25,084
4
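For the MLX quantizations throughout this list, the bit width maps almost directly to weight size on disk and in memory. A back-of-the-envelope estimate for a nominal 14B-parameter model, ignoring activation memory and per-group quantization scales (so real usage runs somewhat higher):

```python
# Rough weight-memory estimate per precision; this is a sketch for sizing
# intuition, not an exact measurement of any particular conversion.

def weight_gib(params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB for a given precision."""
    return params * bits_per_weight / 8 / 2**30

params = 14e9  # nominal parameter count for Qwen3-14B
for label, bits in [("bf16", 16), ("8-bit", 8), ("6-bit", 6), ("4-bit", 4)]:
    print(f"{label:>5}: ~{weight_gib(params, bits):.1f} GiB")
```

The 4-bit conversion lands around a quarter of the bf16 footprint, which is what makes 14B-class models practical on 16 GB Apple Silicon machines.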

gpt-oss-120b-GGUF

Model creator: openai. Original model: gpt-oss-120b. GGUF quantization: provided by the LM Studio team using `llama.cpp`.

license:apache-2.0
25,060
12

GLM-4.6V-Flash-MLX-6bit

license:mit
25,041
0

Qwen3-Next-80B-A3B-Instruct-GGUF

license:apache-2.0
24,628
0

Qwen3-14B-MLX-8bit

This model lmstudio-community/Qwen3-14B-MLX-8bit was converted to MLX format from Qwen/Qwen3-14B using mlx-lm version 0.24.0.

license:apache-2.0
23,857
1

Qwen3-32B-MLX-4bit

This model lmstudio-community/Qwen3-32B-MLX-4bit was converted to MLX format from Qwen/Qwen3-32B using mlx-lm version 0.24.0.

license:apache-2.0
21,989
3

Qwen3-4B-Instruct-2507-GGUF

Model creator: Qwen. Original model: Qwen3-4B-Instruct-2507. GGUF quantization: provided by bartowski based on `llama.cpp` release b6096.

21,558
11

Qwen3-32B-MLX-8bit

This model lmstudio-community/Qwen3-32B-MLX-8bit was converted to MLX format from Qwen/Qwen3-32B using mlx-lm version 0.24.0.

license:apache-2.0
21,409
2

Qwen3-1.7B-MLX-8bit

This model lmstudio-community/Qwen3-1.7B-MLX-8bit was converted to MLX format from Qwen/Qwen3-1.7B using mlx-lm version 0.24.0.

license:apache-2.0
21,385
1

Mistral-7B-Instruct-v0.3-GGUF

💫 Community Model> Mistral 7B Instruct v0.3 by Mistral AI. Model creator: Mistral AI. Original model: Mistral-7B-Instruct-v0.3. GGUF quantization: provided by bartowski based on `llama.cpp` release b2965. Mistral 7B Instruct is an excellent high-quality model tuned for instruction following, and release v0.3 is no different. This iteration adds function calling support, which should extend its use cases and allow for a more useful assistant. Choose the `Mistral Instruct` preset in LM Studio. Under the hood, the model sees a prompt formatted as `[INST] {prompt} [/INST]`. Version 0.3 has a few changes over release 0.2, including: an extended vocabulary (32000 -> 32768), a new tokenizer, and support for function calling. Function calling is made possible through the new extended vocabulary, including the control tokens [TOOL_CALLS], [AVAILABLE_TOOLS], and [TOOL_RESULTS]. This model maintains the v0.2 context length of 32768. 🙏 Special thanks to Kalomaze, Dampf and turboderp for their work on the dataset (linked here) that was used for calculating the imatrix for all sizes.

license:apache-2.0
20,686
47
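The extended-vocabulary control tokens make tool use a formatting convention rather than a model change. A simplified sketch of how tool definitions and results are framed; in practice the chat template shipped with the model handles this formatting, and `get_weather` is a hypothetical tool chosen for illustration:

```python
# Illustrative rendering of Mistral v0.3's tool-call framing. This hand-rolls
# the strings for clarity; real applications should rely on the model's own
# chat template rather than constructing these segments manually.

import json

def render_available_tools(tools: list) -> str:
    """Wrap the tool schema list in the v0.3 control tokens."""
    return f"[AVAILABLE_TOOLS]{json.dumps(tools)}[/AVAILABLE_TOOLS]"

def render_tool_result(result: dict) -> str:
    """Wrap a tool's return value so the model can read it back."""
    return f"[TOOL_RESULTS]{json.dumps(result)}[/TOOL_RESULTS]"

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

segment = render_available_tools([weather_tool])
```

Because the tokens live in the vocabulary, the model can emit and consume these boundaries as single tokens instead of spelling them out character by character.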

Qwen3-1.7B-MLX-4bit

This model lmstudio-community/Qwen3-1.7B-MLX-4bit was converted to MLX format from Qwen/Qwen3-1.7B using mlx-lm version 0.24.0.

license:apache-2.0
20,606
0

Qwen3-1.7B-MLX-bf16

This model lmstudio-community/Qwen3-1.7B-MLX-bf16 was converted to MLX format from Qwen/Qwen3-1.7B using mlx-lm version 0.24.0.

license:apache-2.0
20,527
1

Qwen3-Next-80B-A3B-Instruct-MLX-4bit

license:apache-2.0
19,414
6

Qwen3-4B-MLX-4bit

This model lmstudio-community/Qwen3-4B-MLX-4bit was converted to MLX format from Qwen/Qwen3-4B using mlx-lm version 0.24.0.

license:apache-2.0
18,986
0

Qwen2.5-Coder-32B-Instruct-MLX-4bit

💫 Community Model> Qwen2.5 Coder 32B Instruct by Qwen. Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-Coder-32B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Long-context support up to 128K tokens with a YaRN rope scaling factor of 4.0. Trained on up to 5.5 trillion tokens, including source code, text-code grounding, and synthetic data.

license:apache-2.0
18,721
5
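The 128K long-context support mentioned above is enabled by adding a YaRN `rope_scaling` entry to the model's `config.json`. A stdlib-only sketch of applying that patch; the field names and values follow the Qwen2.5-Coder model card (a 4.0 factor over the native 32K window), so treat them as assumptions to verify against the card:

```python
import json

# YaRN rope-scaling block as described on the Qwen2.5-Coder card:
# factor 4.0 over a 32K native window -> ~128K effective context.
yarn_patch = {
    "rope_scaling": {
        "type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    }
}

def patch_config(config: dict) -> dict:
    """Return a copy of the config.json contents with YaRN scaling applied."""
    patched = dict(config)
    patched.update(yarn_patch)
    return patched

config = {"max_position_embeddings": 32768}  # minimal stand-in config
patched = patch_config(config)
print(json.dumps(patched["rope_scaling"], indent=2))
```

Note that static YaRN scaling applies at all sequence lengths, so the Qwen cards generally recommend enabling it only when long-context inference is actually needed.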

Qwen3-4B-MLX-8bit

This model lmstudio-community/Qwen3-4B-MLX-8bit was converted to MLX format from Qwen/Qwen3-4B using mlx-lm version 0.24.0.

license:apache-2.0
18,693
0

Qwen2.5-Coder-32B-Instruct-MLX-8bit

💫 Community Model> Qwen2.5 Coder 32B Instruct by Qwen 👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-Coder-32B-Instruct MLX quantizations: provided by bartowski from mlx-examples Long-context Support up to 128K tokens with yarn rope scaling factor of 4.0 Up to 5.5 trillion training tokens including source code, text-code grounding, and synthetic data

license:apache-2.0
18,068
4
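The practical difference between the 4-bit and 8-bit quantizations above is memory footprint: weight size scales roughly linearly with bit width. A back-of-the-envelope estimate (weights only; it ignores KV cache, activations, and per-group quantization overhead, and rounds the ~32B parameter count to 32e9):

```python
def approx_weight_gb(n_params: float, bits: int) -> float:
    """Rough weight-only memory estimate: params * bits / 8 bytes."""
    return n_params * bits / 8 / 1e9

# Qwen2.5-Coder-32B, parameter count rounded to 32e9 for illustration.
for bits in (4, 8, 16):
    print(f"{bits}-bit: ~{approx_weight_gb(32e9, bits):.0f} GB")
# 4-bit: ~16 GB, 8-bit: ~32 GB, 16-bit: ~64 GB
```

This is why the 4-bit MLX build fits comfortably on a 32 GB Mac while the 8-bit build wants 48 GB or more of unified memory.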

GLM-4.6V-Flash-MLX-8bit

license:mit
18,022
0

Magistral-Small-2506-MLX-4bit

👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: mistral-ai Original model: magistral-small MLX quantization: provided by LM Studio team using mlx-lm 4-bit quantized version of magistral-small using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

license:apache-2.0
16,095
15

magistral-small-2506-mlx-bf16

👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: mistral-ai Original model: magistral-small MLX quantization: provided by LM Studio team using mlx-lm Original bfloat16 version of magistral-small using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

license:apache-2.0
15,981
1

Qwen3-Next-80B-A3B-Instruct-MLX-8bit

💫 Community Model> Qwen3-Next-80B-A3B-Instruct by Qwen 👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: Qwen Original model: Qwen3-Next-80B-A3B-Instruct MLX quantization: provided by LM Studio team using mlx-lm LM Studio Model Page: https://lmstudio.ai/models/qwen/qwen3-next-80b 8-bit quantized version of Qwen3-Next-80B-A3B-Instruct using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

license:apache-2.0
15,838
1

Qwen3-Next-80B-A3B-Instruct-MLX-6bit

license:apache-2.0
15,352
0

Qwen3-30B-A3B-MLX-4bit

This model lmstudio-community/Qwen3-30B-A3B-MLX-4bit was converted to MLX format from Qwen/Qwen3-30B-A3B using mlx-lm version 0.24.0.

license:apache-2.0
15,233
24

Qwen3-Next-80B-A3B-Instruct-MLX-5bit

💫 Community Model> Qwen3-Next-80B-A3B-Instruct by Qwen 👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: Qwen Original model: Qwen3-Next-80B-A3B-Instruct MLX quantization: provided by LM Studio team using mlx-lm LM Studio Model Page: https://lmstudio.ai/models/qwen/qwen3-next-80b 5-bit quantized version of Qwen3-Next-80B-A3B-Instruct using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

license:apache-2.0
15,213
0

Qwen3-30B-A3B-MLX-8bit

This model lmstudio-community/Qwen3-30B-A3B-MLX-8bit was converted to MLX format from Qwen/Qwen3-30B-A3B using mlx-lm version 0.24.0.

license:apache-2.0
14,957
9

Devstral-Small-2505-MLX-4bit

💫 Community Model> Devstral-Small-2505 by mistralai 👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: mistralai Original model: Devstral-Small-2505 MLX quantization: provided by LM Studio team using mlx-lm 4-bit quantized version of Devstral-Small-2505 using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

license:apache-2.0
13,897
7

gemma-3-1B-it-qat-GGUF

👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: google Original model: gemma-3-1b-it GGUF quantization: provided by Google Optimized with Quantization Aware Training for improved 4-bit performance. Supports a context length of 32k tokens, with a max output of 8192 tokens. Gemma 3 models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

13,701
13

Meta-Llama-3.1-8B-Instruct-GGUF

llama
11,212
250

Llama-3.3-70B-Instruct-GGUF

llama
11,048
52

Mistral-Small-3.2-24B-Instruct-2506-GGUF

💫 Community Model> Mistral Small 3.2 24B Instruct 2506 by Mistralai 👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: mistralai Original model: Mistral-Small-3.2-24B-Instruct-2506 GGUF quantization: provided by lmmy based on `llama.cpp` release b5726 Supports dozens of languages, including English, French, German, Spanish, Portuguese, Italian, Japanese, Korean, Russian, Chinese, Arabic, Persian, Indonesian, Malay, Nepali, Polish, Romanian, Serbian, Swedish, Turkish, Ukrainian, Vietnamese, Hindi, and Bengali. This model's tool calling performance may be degraded. Stay tuned for more updates from the team. 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

license:apache-2.0
10,942
7

phi-4-GGUF

license:mit
10,531
56

Qwen2.5-Coder-14B-Instruct-GGUF

license:apache-2.0
10,240
5

Phi-4-mini-reasoning-GGUF

license:mit
10,216
0

Seed-OSS-36B-Instruct-GGUF

💫 Community Model> Seed-OSS-36B-Instruct by ByteDance-Seed 👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: ByteDance-Seed Original model: Seed-OSS-36B-Instruct GGUF quantization: provided by LM Studio team using `llama.cpp` release b6292 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

license:apache-2.0
9,333
2
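GGUF models like those in this list can be served through LM Studio's OpenAI-compatible local API (by default at http://localhost:1234/v1). A stdlib-only sketch that builds the chat-completion request; the model identifier shown is an assumption and should match whatever name LM Studio reports for the loaded model, and the actual POST is left commented out so the snippet runs without a server:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request for LM Studio."""
    payload = {
        "model": model,  # hypothetical identifier; check LM Studio's model list
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("seed-oss-36b-instruct", "Say hello.")
print(req.full_url)
# To actually send (requires LM Studio running with the model loaded):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint mirrors the OpenAI API shape, official OpenAI client libraries pointed at the local base URL generally work as well.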

DeepSeek-Coder-V2-Lite-Instruct-GGUF

9,126
58

Meta-Llama-3-8B-Instruct-GGUF

llama
9,069
187

GLM-4.7-Flash-MLX-4bit

license:mit
9,035
5

Llama-3.2-1B-Instruct-GGUF

llama
8,614
41

Qwen3-1.7B-GGUF

8,182
7

DeepSeek-R1-Distill-Qwen-7B-GGUF

8,155
88

Hermes-4-70B-GGUF

👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: NousResearch Original model: Hermes-4-70B GGUF quantization: provided by LM Studio team using `llama.cpp` release b6287 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

7,691
6

Codestral-22B-v0.1-GGUF

license:apache-2.0
7,681
26

Qwen3-32B-GGUF

license:apache-2.0
7,470
10

GLM-4.5-Air-MLX-8bit

👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: zai-org Original model: GLM-4.5-Air MLX quantization: provided by LM Studio team using mlx-lm 8-bit quantized version of GLM-4.5-Air using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

license:mit
7,372
1

Qwen3-4B-GGUF

7,076
13

NVIDIA-Nemotron-3-Nano-30B-A3B-GGUF

6,544
11

Phi-4-reasoning-plus-GGUF

license:mit
6,443
6

Qwen3-Coder-480B-A35B-Instruct-MLX-6bit

license:apache-2.0
6,426
5

QwQ-32B-GGUF

license:apache-2.0
6,331
46

Qwen3-Coder-480B-A35B-Instruct-MLX-4bit

license:apache-2.0
6,153
3

Qwen3-235B-A22B-Instruct-2507-MLX-4bit

license:apache-2.0
6,106
1

Qwen3-235B-A22B-Instruct-2507-MLX-6bit

license:apache-2.0
5,930
0

Devstral-Small-2507-GGUF

license:apache-2.0
5,922
5

Qwen3-235B-A22B-Instruct-2507-MLX-8bit

license:apache-2.0
5,918
0

Qwen3-Coder-480B-A35B-Instruct-MLX-8bit

license:apache-2.0
5,839
3

Mistral-Nemo-Instruct-2407-GGUF

license:apache-2.0
5,621
33

SmolLM2-1.7B-Instruct-GGUF

license:apache-2.0
5,325
6

Qwen3-0.6B-GGUF

license:apache-2.0
4,925
6

DeepSeek-R1-Distill-Qwen-14B-GGUF

4,755
38

gpt-oss-20b-MLX-8bit

👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: openai Original model: gpt-oss-20b MLX quantization: provided by LM Studio team using mlx-lm 8-bit quantized version of gpt-oss-20b using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

license:apache-2.0
4,627
43

Qwen2.5-VL-3B-Instruct-GGUF

👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: Qwen Original model: Qwen2.5-VL-3B-Instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5317 Not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but also highly capable of analyzing texts, charts, icons, graphics, and layouts within images. Can act as a visual agent that reasons and dynamically directs tools, enabling computer use and phone use. Useful for generating structured outputs, including stable JSON. 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

4,453
0

DeepSeek-R1-Distill-Qwen-32B-GGUF

💫 Community Model> DeepSeek R1 Distill Qwen 32B by Deepseek-Ai 👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: deepseek-ai Original model: DeepSeek-R1-Distill-Qwen-32B GGUF quantization: provided by bartowski based on `llama.cpp` release b4514 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

4,438
35

Llama-3.2-3B-Instruct-GGUF

llama
4,337
38

gemma-2-9b-it-GGUF

4,301
27

Qwen3-30B-A3B-GGUF

license:apache-2.0
4,046
26

gemma-3-270m-it-MLX-8bit

👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: google Original model: gemma-3-270m-it MLX quantization: provided by LM Studio team using mlx-lm 8-bit quantized version of gemma-3-270m-it using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

3,993
2

Llama-3.1-Tulu-3-8B-GGUF

base_model:allenai/Llama-3.1-Tulu-3-8B
3,740
4

EXAONE-3.5-2.4B-Instruct-GGUF

3,616
1

Qwen3-Coder-Next-GGUF

license:apache-2.0
3,571
0

Yi-Coder-1.5B-GGUF

license:apache-2.0
3,548
1

gemma-3-4B-it-qat-GGUF

3,546
18

EXAONE-3.5-7.8B-Instruct-GGUF

3,532
2

OpenCoder-1.5B-Instruct-GGUF

3,516
0

gpt-oss-safeguard-20b-MLX-MXFP4

💫 Community Model> gpt-oss-safeguard-20b by openai 👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord. Model creator: openai Original model: gpt-oss-safeguard-20b MLX quantization: provided by LM Studio team using mlx-lm MXFP4 quantized version of gpt-oss-safeguard-20b using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

3,480
1

AMD-OLMo-1B-SFT-DPO-GGUF

license:apache-2.0
3,465
1

NuExtract-v1.5-GGUF

license:mit
3,444
4

Qwen2.5-0.5B-Instruct-GGUF

license:apache-2.0
3,272
4

DeepSeek-R1-Distill-Llama-8B-GGUF

NaNK
base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B
3,219
44

Qwen2.5-0.5B-Instruct-MLX-4bit

Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-0.5B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Long context: support for 32k tokens and 8k token generation. Large-scale training dataset encompassing a huge range of knowledge, with enhanced structured-data understanding and generation. Over 29 languages including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.

license:apache-2.0
3,052 downloads · 0 likes

gemma-3-1b-it-GGUF

3,045 downloads · 6 likes

Qwen2.5-Coder-32B-GGUF

license:apache-2.0
2,967 downloads · 2 likes

DeepSeek-R1-Distill-Qwen-1.5B-GGUF

2,914 downloads · 15 likes

granite-4.0-h-small-MLX-8bit

💫 Community Model> granite-4.0-h-small by ibm-granite Model creator: ibm-granite Original model: granite-4.0-h-small MLX quantization: provided by LM Studio team using mlxlm 8-bit quantized version of granite-4.0-h-small using MLX, optimized for Apple Silicon.

license:apache-2.0
2,834 downloads · 0 likes

deepseek-coder-6.7B-kexer-GGUF

license:apache-2.0
2,662 downloads · 4 likes

ERNIE-4.5-21B-A3B-PT-GGUF

2,552 downloads · 5 likes

dolphin-2.8-mistral-7b-v02-GGUF

license:apache-2.0
2,502 downloads · 9 likes

gemma-3-12B-it-qat-GGUF

2,490 downloads · 10 likes

DeepSeek-R1-Distill-Llama-70B-GGUF

base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B
2,443 downloads · 4 likes

gpt-oss-safeguard-20b-GGUF

💫 Community Model> gpt-oss-safeguard-20b by openai Model creator: openai Original model: gpt-oss-safeguard-20b GGUF quantization: provided by LM Studio team using `llama.cpp` release b6868 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

license:apache-2.0
2,411 downloads · 2 likes

Qwen2.5-Coder-7B-Instruct-GGUF

license:apache-2.0
2,397 downloads · 20 likes

gemma-2-27b-it-GGUF

2,312 downloads · 15 likes

GLM-4.5-Air-MLX-4bit

Model creator: zai-org Original model: GLM-4.5-Air MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of GLM-4.5-Air using MLX, optimized for Apple Silicon.

license:mit
2,289 downloads · 1 like

GLM-4.5-Air-GGUF

Model creator: zai-org Original model: GLM-4.5-Air GGUF quantization: provided by bartowski based on `llama.cpp` release b6085

2,278 downloads · 2 likes

gemma-3-27b-it-GGUF

2,161 downloads · 48 likes

Qwen3-235B-A22B-Instruct-2507-GGUF

2,115 downloads · 10 likes

Qwen3-VL-32B-Instruct-MLX-8bit

Model creator: Qwen Original model: Qwen3-VL-32B-Instruct MLX quantization: provided by LM Studio team using mlxvlm LM Studio model page: Qwen3-VL 8-bit quantized version of Qwen3-VL-32B-Instruct using MLX, optimized for Apple Silicon.

license:apache-2.0
2,051 downloads · 0 likes

Qwen3-VL-32B-Instruct-GGUF

license:apache-2.0
2,031 downloads · 0 likes

Qwen3-VL-2B-Instruct-GGUF

Model creator: Qwen Original model: Qwen3-VL-2B-Instruct GGUF quantization: provided by LM Studio team using `llama.cpp` release b6888

license:apache-2.0
1,988 downloads · 1 like

gemma-2-2b-it-GGUF

1,890 downloads · 22 likes

granite-3.1-8b-instruct-GGUF

💫 Community Model> granite 3.1 8b instruct by ibm-granite Model creator: ibm-granite Original model: granite-3.1-8b-instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b4381 Intended for general instructions, summarization, text classification and extraction, Q/A, RAG, coding, function calling, and long-context tasks. Supported languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may fine-tune Granite 3.1 models for languages beyond these 12.

license:apache-2.0
1,799 downloads · 16 likes

granite-3.2-8b-instruct-GGUF

license:apache-2.0
1,783 downloads · 3 likes

gpt-oss-safeguard-120b-MLX-MXFP4

💫 Community Model> gpt-oss-safeguard-120b by openai Model creator: openai Original model: gpt-oss-safeguard-120b MLX quantization: provided by LM Studio team using mlxlm MXFP4 quantized version of gpt-oss-safeguard-120b using MLX, optimized for Apple Silicon.

1,763 downloads · 0 likes

Qwen3-30B-A3B-Thinking-2507-GGUF

license:apache-2.0
1,757 downloads · 1 like

Llama-4-Scout-17B-16E-Instruct-GGUF

llama
1,707 downloads · 33 likes

Qwen3-VL-30B-A3B-Thinking-GGUF

license:apache-2.0
1,707 downloads · 0 likes

Qwen2.5-Coder-7B-Instruct-MLX-4bit

license:apache-2.0
1,669 downloads · 1 like

Devstral-Small-2505-GGUF

💫 Community Model> Devstral Small 2505 by Mistralai Model creator: mistralai Original model: Devstral-Small-2505 GGUF quantization: provided by mattjcly based on `llama.cpp` release b5426 "Devstral excels at using tools to explore codebases, editing multiple files and power software engineering agents. The model debuts as the #1 open source model on SWE-bench. Despite its compact size of just 24 billion parameters, Devstral outperforms much larger models in agentic coding tasks. These tasks require exploring a codebase and making complex modifications to resolve issues."

license:apache-2.0
1,666 downloads · 30 likes

Magistral-Small-2506-GGUF

Model creator: mistralai Original model: magistral-small GGUF quantization: provided by lmmy based on `llama.cpp` release b5606

license:apache-2.0
1,660 downloads · 17 likes

Qwen3-Next-80B-A3B-Thinking-MLX-8bit

license:apache-2.0
1,620 downloads · 3 likes

Qwen3-VL-8B-Thinking-GGUF

license:apache-2.0
1,539 downloads · 0 likes

GLM-4.5-Air-MLX-6bit

Model creator: zai-org Original model: GLM-4.5-Air MLX quantization: provided by LM Studio team using mlxlm 6-bit quantized version of GLM-4.5-Air using MLX, optimized for Apple Silicon.

license:mit
1,510 downloads · 0 likes

embeddinggemma-300m-qat-GGUF

1,389 downloads · 5 likes

Qwen2.5-VL-32B-Instruct-GGUF

💫 Community Model> Qwen2.5 VL 32B Instruct by Qwen Model creator: Qwen Original model: Qwen2.5-VL-32B-Instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5284 Not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but also highly capable of analyzing texts, charts, icons, graphics, and layouts within images. Can act as a visual agent that reasons and dynamically directs tools, including computer and phone use. Useful for generating structured outputs and stable JSON outputs.

license:apache-2.0
1,366 downloads · 1 like

granite-4.0-h-small-GGUF

license:apache-2.0
1,327 downloads · 3 likes

gemma-3n-E2B-it-text-GGUF

1,325 downloads · 4 likes

Qwen3-0.6B-MLX-4bit

Model creator: Qwen Original model: Qwen3-0.6B MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of Qwen3-0.6B using MLX, optimized for Apple Silicon.

license:apache-2.0
1,309 downloads · 0 likes

granite-4.0-h-tiny-MLX-8bit

license:apache-2.0
1,279 downloads · 1 like

Qwen2.5-7B-Instruct-GGUF

license:apache-2.0
1,278 downloads · 5 likes

Mistral-Small-3.1-24B-Instruct-2503-GGUF

license:apache-2.0
1,269 downloads · 38 likes

Qwen3-Coder-480B-A35B-Instruct-GGUF

💫 Community Model> Qwen3 Coder 480B A35B Instruct by Qwen Model creator: Qwen Original model: Qwen3-Coder-480B-A35B-Instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5962

1,259 downloads · 2 likes

Phi-4-mini-instruct-GGUF

1,228 downloads · 11 likes

Qwen3-VL-2B-Instruct-MLX-8bit

Model creator: Qwen Original model: Qwen3-VL-2B-Instruct MLX quantization: provided by LM Studio team using mlxvlm LM Studio model page: Qwen3-VL 8-bit quantized version of Qwen3-VL-2B-Instruct using MLX, optimized for Apple Silicon.

license:apache-2.0
1,212 downloads · 1 like

medgemma-27b-text-it-MLX-4bit

Model creator: google Original model: medgemma-27b-text-it MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of medgemma-27b-text-it using MLX, optimized for Apple Silicon.

NaNK
1,082
3

Phi-3.1-mini-4k-instruct-GGUF

license:mit
992
23

Qwen3-235B-A22B-GGUF

license:apache-2.0
992
14

Qwen3-VL-32B-Thinking-GGUF

license:apache-2.0
989
0

mathstral-7B-v0.1-GGUF

license:apache-2.0
968
9

granite-4.0-h-small-MLX-6bit

💫 Community Model> granite-4.0-h-small by ibm-granite. Model creator: ibm-granite. Original model: granite-4.0-h-small. MLX quantization: provided by LM Studio team using mlx-lm. 6-bit quantized version of granite-4.0-h-small using MLX, optimized for Apple Silicon.

license:apache-2.0
951
1

Qwen2.5-7B-Instruct-1M-GGUF

Model creator: Qwen. Original model: Qwen2.5-7B-Instruct-1M. GGUF quantization: provided by bartowski based on `llama.cpp` release b4546. Significantly improved performance on long-context tasks while maintaining capability on short tasks. Accuracy degradation may occur for sequences exceeding 262,144 tokens until improved support is added.

license:apache-2.0
945
41

Qwen2.5-VL-72B-Instruct-GGUF

💫 Community Model> Qwen2.5 VL 72B Instruct by Qwen. Model creator: Qwen. Original model: Qwen2.5-VL-72B-Instruct. GGUF quantization: provided by bartowski based on `llama.cpp` release b5317. Not only proficient at recognizing common objects such as flowers, birds, fish, and insects, but also highly capable of analyzing texts, charts, icons, graphics, and layouts within images. Can act as a visual agent that reasons and dynamically directs tools, enabling computer and phone use. Useful for generating structured outputs and stable JSON outputs.

893
1

Llama-3-Groq-8B-Tool-Use-GGUF

llama
871
16

DeepSeek-Coder-V2-Instruct-0724-GGUF

856
3

Qwen3-VL-32B-Instruct-MLX-4bit

Model creator: Qwen. Original model: Qwen3-VL-32B-Instruct. MLX quantization: provided by LM Studio team using mlx-vlm. LM Studio model page: Qwen3-VL. 4-bit quantized version of Qwen3-VL-32B-Instruct using MLX, optimized for Apple Silicon.

license:apache-2.0
846
0

Llama-4-Scout-17B-16E-MLX-text-8bit

llama4
843
0

gemma-3-270m-it-MLX-bf16

Model creator: google. Original model: gemma-3-270m-it. MLX quantization: provided by LM Studio team using mlx-lm. Original bfloat16 version of gemma-3-270m-it using MLX, optimized for Apple Silicon.

817
2

Qwen2.5-14B-Instruct-1M-GGUF

💫 Community Model> Qwen2.5 14B Instruct 1M by Qwen. Model creator: Qwen. Original model: Qwen2.5-14B-Instruct-1M. GGUF quantization: provided by bartowski based on `llama.cpp` release b4546. Significantly improved performance on long-context tasks while maintaining capability on short tasks. Accuracy degradation may occur for sequences exceeding 262,144 tokens until improved support is added.

license:apache-2.0
801
19

codegemma-7b-it-GGUF

787
10

Qwen3-0.6B-MLX-bf16

license:apache-2.0
787
0

Qwen3-30B-A3B-Thinking-2507-MLX-4bit

license:apache-2.0
786
2

KAT-Dev-MLX-8bit

Model creator: Kwaipilot. Original model: KAT-Dev. MLX quantization: provided by LM Studio team using mlx-lm. 8-bit quantized version of KAT-Dev using MLX, optimized for Apple Silicon.

784
1

gpt-oss-safeguard-120b-GGUF

💫 Community Model> gpt-oss-safeguard-120b by openai. Model creator: openai. Original model: gpt-oss-safeguard-120b. GGUF quantization: provided by LM Studio team using `llama.cpp` release b6866.

license:apache-2.0
779
1

Qwen3-VL-235B-A22B-Thinking-GGUF

license:apache-2.0
772
0

InternVL3_5-14B-GGUF

766
1

Qwen2-VL-7B-Instruct-GGUF

Model creator: Qwen. Original model: Qwen2-VL-7B-Instruct. GGUF quantization: provided by bartowski based on `llama.cpp` release b4327. Vision model capable of understanding images of various resolutions and aspect ratios, with complex reasoning for agentic automation with vision.

license:apache-2.0
756
6

Qwen3-Next-80B-A3B-Thinking-MLX-4bit

license:apache-2.0
726
4

gemma-3-270m-it-qat-GGUF

💫 Community Model> gemma-3-270m-it-qat-q4_0 by google. Model creator: google. Original model: gemma-3-270m-it-qat-q4_0-unquantized. GGUF quantization: provided by LM Studio team using `llama.cpp` release b6153.

725
3

granite-4.0-h-micro-GGUF

license:apache-2.0
719
0

Qwen2.5-Coder-7B-Instruct-MLX-8bit

💫 Community Model> Qwen2.5 Coder 7B Instruct by Qwen. Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-Coder-7B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Long-context support up to 128K tokens with a YaRN rope scaling factor of 4.0. Up to 5.5 trillion training tokens, including source code, text-code grounding, and synthetic data.

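The long-context support noted above relies on YaRN rope scaling. Per Qwen's model documentation, this is typically enabled by adding a `rope_scaling` entry to the model's `config.json`; the sketch below shows the commonly documented shape (exact fields and support vary by inference stack):

```json
{
  "rope_scaling": {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

With a factor of 4.0 over a 32,768-token base window, the effective context extends to roughly 131,072 (128K) tokens.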
license:apache-2.0
716
0

Qwen3-Next-80B-A3B-Thinking-GGUF

license:apache-2.0
692
0

Qwen2.5-Coder-1.5B-Instruct-GGUF

license:apache-2.0
687
2

GLM-4-9B-0414-GGUF

license:mit
679
1

InternVL3_5-30B-A3B-GGUF

💫 Community Model> InternVL3_5 30B A3B by OpenGVLab. Model creator: OpenGVLab. Original model: InternVL3_5-30B-A3B. GGUF quantization: provided by bartowski based on `llama.cpp` release b6258.

677
5

gemma-3-270m-it-GGUF

Model creator: google. Original model: gemma-3-270m-it. GGUF quantization: provided by LM Studio team using `llama.cpp` release b6153.

656
4

EXAONE-4.0-32B-MLX-4bit

625
0

Qwen2.5-14B-Instruct-GGUF

license:apache-2.0
616
10

pixtral-12b-GGUF

license:apache-2.0
608
4

granite-vision-3.2-2b-GGUF

💫 Community Model> granite vision 3.2 2b by ibm-granite. Model creator: ibm-granite. Original model: granite-vision-3.2-2b. GGUF quantization: provided by bartowski based on `llama.cpp` release b4778. Designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more. Use cases include analyzing tables and charts, performing OCR, and answering questions based on document content; it also has general image understanding.

license:apache-2.0
605
7

Qwen2-VL-2B-Instruct-GGUF

592
3

GLM-4-32B-0414-GGUF

license:mit
586
1

Qwen3-30B-A3B-Thinking-2507-MLX-8bit

license:apache-2.0
585
1

Qwen3-VL-235B-A22B-Instruct-GGUF

license:apache-2.0
585
0

Qwen3-VL-32B-Instruct-MLX-6bit

Model creator: Qwen. Original model: Qwen3-VL-32B-Instruct. MLX quantization: provided by LM Studio team using mlx-vlm. LM Studio model page: Qwen3-VL. 6-bit quantized version of Qwen3-VL-32B-Instruct using MLX, optimized for Apple Silicon.

license:apache-2.0
574
0

medgemma-4b-it-GGUF

547
5

gemma-3-270m-it-qat-MLX-4bit

💫 Community Model> gemma-3-270m-it-qat-q4_0 by google. Model creator: google. Original model: gemma-3-270m-it-qat-q4_0-unquantized. MLX quantization: provided by LM Studio team using mlx-lm. 4-bit quantized version of gemma-3-270m-it-qat-q4_0-unquantized using MLX, optimized for Apple Silicon.

545
3

granite-4.0-h-small-MLX-4bit

license:apache-2.0
533
0

CodeLlama-7B-KStack-GGUF

base_model:JetBrains/CodeLlama-7B-KStack
525
2

WizardLM-2-7B-GGUF

license:apache-2.0
520
17

Qwen3-VL-4B-Thinking-GGUF

license:apache-2.0
520
0

Meta-Llama-3.1-70B-Instruct-GGUF

llama
516
36

c4ai-command-r-v01-GGUF

license:cc-by-nc-4.0
506
22

Qwen3-VL-32B-Thinking-MLX-8bit

Model creator: Qwen. Original model: Qwen3-VL-32B-Thinking. MLX quantization: provided by LM Studio team using mlx-vlm. 8-bit quantized version of Qwen3-VL-32B-Thinking using MLX, optimized for Apple Silicon.

license:apache-2.0
506
0

Qwen2.5-Coder-1.5B-Instruct-MLX-8bit

💫 Community Model> Qwen2.5 Coder 1.5B Instruct by Qwen. Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-Coder-1.5B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Up to 5.5 trillion training tokens including source code, text-code grounding, and synthetic data.

NaNK
license:apache-2.0
503
1

Qwen3-30B-A3B-Thinking-2507-MLX-6bit

NaNK
license:apache-2.0
493
2

Qwen3-235B-A22B-Thinking-2507-GGUF

💫 Community Model> Qwen3 235B A22B Thinking 2507 by Qwen. Model creator: Qwen. Original model: Qwen3-235B-A22B-Thinking-2507. GGUF quantization: provided by bartowski based on `llama.cpp` release b5962. 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

NaNK
492
2

olmOCR-7B-0225-preview-GGUF

NaNK
license:apache-2.0
490
12

Llama3-ChatQA-1.5-8B-GGUF

NaNK
llama-3
487
6

starcoder2-15b-instruct-v0.1-GGUF

NaNK
483
4

Llama-4-Scout-17B-16E-MLX-text-4bit

NaNK
llama4
483
0

Qwen2-VL-72B-Instruct-GGUF

NaNK
480
0

Meta-Llama-3-70B-Instruct-GGUF

NaNK
llama
476
111

Qwen2-500M-Instruct-GGUF

NaNK
license:apache-2.0
468
6

Qwen3-235B-A22B-Thinking-2507-MLX-8bit

💫 Community Model> Qwen3-235B-A22B-Thinking-2507 by Qwen. Model creator: Qwen. Original model: Qwen3-235B-A22B-Thinking-2507. MLX quantization: provided by LM Studio team using mlx-lm. 8-bit quantized version of Qwen3-235B-A22B-Thinking-2507 using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
466
1

InternVL3_5-8B-GGUF

NaNK
465
1

Yi-Coder-9B-Chat-GGUF

NaNK
license:apache-2.0
463
17

MiniCPM-o-2_6-GGUF

Model creator: openbmb. Original model: MiniCPM-o-2_6. GGUF quantization: provided by bartowski based on `llama.cpp` release b4585. Supports images of any aspect ratio up to 1.8 million pixels (e.g. 1344x1344). See more in their technical report: https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9

NaNK
454
9

codegemma-7b-GGUF

NaNK
454
3

codegemma-1.1-7b-it-GGUF

NaNK
449
5

deepseek-coder-1.3B-kexer-GGUF

NaNK
license:apache-2.0
438
4

Qwen3-VL-2B-Thinking-GGUF

Model creator: Qwen. Original model: Qwen3-VL-2B-Thinking. GGUF quantization: provided by LM Studio team using `llama.cpp` release b6889.

NaNK
license:apache-2.0
435
0

gemma-1.1-2b-it-GGUF

NaNK
432
15

LFM2-350M-MLX-8bit

NaNK
421
0

SmolLM3-3B-MLX-8bit

Model creator: HuggingFaceTB. Original model: SmolLM3-3B. MLX quantization: provided by LM Studio team using mlx-lm. 8-bit quantized version of SmolLM3-3B using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
415
4

Qwen2.5-Coder-0.5B-Instruct-GGUF

NaNK
license:apache-2.0
412
3

Qwen3-VL-32B-Instruct-MLX-5bit

Model creator: Qwen. Original model: Qwen3-VL-32B-Instruct. MLX quantization: provided by LM Studio team using mlx-vlm. LM Studio model page: Qwen3-VL. 5-bit quantized version of Qwen3-VL-32B-Instruct using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
412
0

granite-embedding-107m-multilingual-GGUF

license:apache-2.0
410
1

Yi-1.5-9B-Chat-GGUF

NaNK
license:apache-2.0
407
10

Qwen2.5-Coder-3B-Instruct-MLX-4bit

💫 Community Model> Qwen2.5 Coder 3B Instruct by Qwen. Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-Coder-3B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Up to 5.5 trillion training tokens including source code, text-code grounding, and synthetic data.

NaNK
396
0

nomic-embed-code-GGUF

Model creator: nomic-ai. Original model: nomic-embed-code. GGUF quantization: provided by bartowski based on `llama.cpp` release b5284. 7B parameter embedding model designed for code retrieval. Based on Qwen2 and trained for multiple programming languages such as Python, Java, Ruby, PHP, JavaScript, and Go.

NaNK
license:apache-2.0
392
2

Qwen2.5-14B-Instruct-MLX-4bit

NaNK
license:apache-2.0
391
1

MiniCPM-V-2_6-GGUF

NaNK
376
14

DeepSeek-R1-GGUF

Model creator: deepseek-ai. Original model: DeepSeek-R1. GGUF quantization: provided by bartowski based on `llama.cpp` release b4514. DeepSeek R1 represents the current SOTA for open reasoning models.

NaNK
374
15

Qwen2.5-3B-Instruct-GGUF

NaNK
371
5

Qwen3-VL-2B-Instruct-MLX-bf16

Model creator: Qwen. Original model: Qwen3-VL-2B-Instruct. MLX quantization: provided by LM Studio team using mlx-vlm. LM Studio model page: Qwen3-VL. Original bfloat16 version of Qwen3-VL-2B-Instruct using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
365
0

codegemma-2b-GGUF

NaNK
363
4

gemma-3n-E2B-it-MLX-6bit

Model creator: google. Original model: gemma-3n-E2B-it. MLX quantization: provided by LM Studio team using mlx-vlm. 6-bit quantized version of gemma-3n-E2B-it using MLX, optimized for Apple Silicon.

NaNK
362
0

Phi-3.1-mini-128k-instruct-GGUF

NaNK
license:mit
361
7

InternVL3_5-4B-GGUF

NaNK
361
0

InternVL3_5-2B-GGUF

NaNK
359
2

granite-4.0-micro-GGUF

license:apache-2.0
353
0

granite-4.0-h-tiny-MLX-6bit

NaNK
license:apache-2.0
352
0

Qwen2.5-Coder-32B-Instruct-GGUF

NaNK
license:apache-2.0
350
5

gemma-3-270m-it-MLX-4bit

Model creator: google. Original model: gemma-3-270m-it. MLX quantization: provided by LM Studio team using mlx-lm. 4-bit quantized version of gemma-3-270m-it using MLX, optimized for Apple Silicon.

NaNK
345
2

LFM2-VL-1.6B-GGUF

NaNK
341
1

GLM-4.5-GGUF

NaNK
341
0

medgemma-4b-it-MLX-4bit

Model creator: google. Original model: medgemma-4b-it. MLX quantization: provided by LM Studio team using mlx-lm. 4-bit quantized version of medgemma-4b-it using MLX, optimized for Apple Silicon.

NaNK
339
2

Qwen2.5-32B-Instruct-GGUF

NaNK
license:apache-2.0
336
5

granite-embedding-278m-multilingual-GGUF

license:apache-2.0
324
2

granite-4.0-h-tiny-MLX-4bit

NaNK
license:apache-2.0
323
0

stable-code-instruct-3b-GGUF

NaNK
322
2

gemma-3n-E2B-it-MLX-4bit

Model creator: google. Original model: gemma-3n-E2B-it. MLX quantization: provided by LM Studio team using mlx-vlm. 4-bit quantized version of gemma-3n-E2B-it using MLX, optimized for Apple Silicon.

NaNK
320
0

GLM-Z1-9B-0414-GGUF

NaNK
license:mit
315
3

KAT-Dev-GGUF

Model creator: Kwaipilot. Original model: KAT-Dev. GGUF quantization: provided by LM Studio team using `llama.cpp` release b6644.

315
2

OREAL-DeepSeek-R1-Distill-Qwen-7B-GGUF

NaNK
license:apache-2.0
305
0

aya-expanse-8b-GGUF

Model creator: CohereForAI. Original model: aya-expanse-8b. GGUF quantization: provided by bartowski based on `llama.cpp` release b3930. Aya Expanse offers highly advanced multilingual capabilities. License: CC-BY-NC; also requires adhering to C4AI's Acceptable Use Policy. Supports 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.

NaNK
license:cc-by-nc-4.0
297
6

aya-23-8B-GGUF

NaNK
license:cc-by-nc-4.0
296
7

Starling-LM-7B-beta-GGUF

NaNK
license:apache-2.0
289
6

Qwen2.5-Math-7B-Instruct-GGUF

license:apache-2.0
288
3

Meta-Llama-3-120B-Instruct-GGUF

base_model:mlabonne/Meta-Llama-3-120B-Instruct
286
48

SmolLM2-135M-Instruct-GGUF

license:apache-2.0
285
0

Qwen2.5-Coder-3B-Instruct-GGUF

283
6

internlm2-math-plus-20b-GGUF

281
1

Qwen3-235B-A22B-Thinking-2507-MLX-4bit

💫 Community Model> Qwen3-235B-A22B-Thinking-2507 by Qwen. Model creator: Qwen. Original model: Qwen3-235B-A22B-Thinking-2507. MLX quantization: provided by LM Studio team using mlx-lm. 4-bit quantized version of Qwen3-235B-A22B-Thinking-2507 using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.
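To see why a 4-bit MLX build of a model this size matters on Apple Silicon, a back-of-envelope, weights-only estimate helps. This is not from the card: real quantized files run somewhat larger (per-group scales, embeddings and some layers often kept at higher precision), so treat these numbers as a rough lower bound.

```python
# Rough weights-only memory estimate for a 235B-parameter model
# (Qwen3-235B-A22B: 235B total parameters, 22B active per token)
# at several quantization bit widths. Actual quantized file sizes
# are somewhat larger due to quantization-group scales and any
# layers stored at higher precision.
TOTAL_PARAMS = 235e9

def weights_gib(bits_per_weight: float) -> float:
    """Approximate weight storage in GiB at a given bit width."""
    return TOTAL_PARAMS * bits_per_weight / 8 / 2**30

for bits in (4, 6, 8, 16):
    print(f"{bits:>2}-bit: ~{weights_gib(bits):,.0f} GiB")
```

At 4 bits the weights alone land around 109 GiB versus roughly 438 GiB at bf16, which is the difference between fitting on a high-memory Mac Studio and not fitting at all.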

license:apache-2.0
280
1

granite-4.0-h-small-MLX-5bit

license:apache-2.0
280
0

c4ai-command-r-08-2024-GGUF

license:cc-by-nc-4.0
274
22

EXAONE-4.0.1-32B-GGUF

271
0

DeepSeek-V2.5-GGUF

258
10

Qwen2.5-7B-Instruct-MLX-4bit

license:apache-2.0
256
0

Llama-3.1-Tulu-3-405B-GGUF

base_model:allenai/Llama-3.1-Tulu-3-405B
254
3

openchat-3.6-8b-20240522-GGUF

llama3
253
7

InternVL3_5-1B-GGUF

253
0

Qwen2.5-1.5B-Instruct-GGUF

license:apache-2.0
249
1

Llama-3.1-Nemotron-70B-Instruct-HF-GGUF

llama3.1
248
38

EuroLLM-9B-Instruct-GGUF

💫 Community Model> EuroLLM 9B Instruct by Utter-Project. Model creator: utter-project. Original model: EuroLLM-9B-Instruct. GGUF quantization: provided by bartowski based on `llama.cpp` release b4240. Capable of generating text in all 24 official EU languages (Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, and Swedish) as well as Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.

license:apache-2.0
247
0

Yi-1.5-6B-Chat-GGUF

license:apache-2.0
241
7

EXAONE-4.0-1.2B-GGUF

239
0

LFM2-VL-450M-GGUF

Model creator: LiquidAI. Original model: LFM2-VL-450M. GGUF quantization: provided by bartowski based on `llama.cpp` release b6214.

239
0

UI-TARS-7B-DPO-GGUF

license:apache-2.0
238
8

zeta-GGUF

license:apache-2.0
234
4

Qwen3-Next-80B-A3B-Thinking-MLX-6bit

license:apache-2.0
234
0

aya-23-35B-GGUF

license:cc-by-nc-4.0
233
14

Mistral-Large-Instruct-2411-GGUF

230
13

Qwen2.5-14B-Instruct-MLX-8bit

license:apache-2.0
229
0

Llama3-ChatQA-1.5-70B-GGUF

llama-3
222
6

medgemma-27b-text-it-GGUF

222
2

Qwen2.5-Coder-0.5B-Instruct-MLX-4bit

💫 Community Model> Qwen2.5 Coder 0.5B Instruct by Qwen. Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-Coder-0.5B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Trained on up to 5.5 trillion tokens including source code, text-code grounding, and synthetic data.

license:apache-2.0
219
0

gemma-3n-E2B-it-MLX-8bit

Model creator: google. Original model: gemma-3n-E2B-it. MLX quantization: provided by LM Studio team using mlx-vlm. 8-bit quantized version of gemma-3n-E2B-it using MLX, optimized for Apple Silicon.

217
0

Qwen2.5-72B-Instruct-GGUF

212
4

Qwen2.5-Coder-3B-Instruct-MLX-8bit

💫 Community Model> Qwen2.5 Coder 3B Instruct by Qwen. Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-Coder-3B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Trained on up to 5.5 trillion tokens including source code, text-code grounding, and synthetic data.

212
0

Meta-Llama-3-8B-Instruct-BPE-fix-GGUF

llama
210
11

c4ai-command-r-plus-08-2024-GGUF

license:cc-by-nc-4.0
210
5

gemma-3n-E2B-it-MLX-bf16

208
0

cogito-v2-preview-llama-70B-GGUF

💫 Community Model> cogito v2 preview llama 70B by Deepcogito. Model creator: deepcogito. Original model: cogito-v2-preview-llama-70B. GGUF quantization: provided by bartowski based on `llama.cpp` release b6014.

base_model:deepcogito/cogito-v2-preview-llama-70B
208
0

Magistral-Small-2507-GGUF

💫 Community Model> Magistral-Small-2507 by mistralai. Model creator: mistralai. Original model: Magistral-Small-2507. GGUF quantization: provided by LM Studio team using `llama.cpp` release b5980.

license:apache-2.0
204
4

Qwen2.5-Coder-1.5B-Instruct-MLX-4bit

💫 Community Model> Qwen2.5 Coder 1.5B Instruct by Qwen. Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-Coder-1.5B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Trained on up to 5.5 trillion tokens including source code, text-code grounding, and synthetic data.

license:apache-2.0
202
0

cogito-v2-preview-llama-109B-MoE-GGUF

💫 Community Model> cogito v2 preview llama 109B MoE by Deepcogito. Model creator: deepcogito. Original model: cogito-v2-preview-llama-109B-MoE. GGUF quantization: provided by bartowski based on `llama.cpp` release b6014.

base_model:deepcogito/cogito-v2-preview-llama-109B-MoE
200
0

Qwen2.5-32B-Instruct-MLX-4bit

Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-32B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Long context: support for 32k tokens and 8k token generation. Large-scale training dataset encompassing a huge range of knowledge. Enhanced structured data understanding and generation. Supports over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
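The "32k tokens and 8k token generation" limits on this card imply a simple prompt-budgeting rule: whatever you reserve for the reply (capped at the generation limit) comes out of the context window. A minimal sketch, assuming 32k means 32,768 tokens and 8k means 8,192 (the usual powers of two; exact tokenizer counts will vary by prompt):

```python
# Context budgeting for the limits stated on the card:
# a 32k-token context window and up to 8k generated tokens.
CONTEXT_WINDOW = 32_768
MAX_GENERATION = 8_192

def max_prompt_tokens(reserved_output: int = MAX_GENERATION) -> int:
    """Prompt tokens that still leave `reserved_output` room for the reply,
    never reserving more than the model's generation limit."""
    reserved = min(reserved_output, MAX_GENERATION)
    return CONTEXT_WINDOW - reserved

print(max_prompt_tokens())       # 24576 when reserving the full 8k
print(max_prompt_tokens(1_024))  # 31744 when a short reply suffices
```

Reserving less output room when you only need a short answer frees most of the window for long documents or chat history.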

license:apache-2.0
199
2

SmolLM3-3B-GGUF

license:apache-2.0
195
2

Qwen3-235B-A22B-Thinking-2507-MLX-6bit

💫 Community Model> Qwen3-235B-A22B-Thinking-2507 by Qwen. Model creator: Qwen. Original model: Qwen3-235B-A22B-Thinking-2507. MLX quantization: provided by LM Studio team using mlx-lm. 6-bit quantized version of Qwen3-235B-A22B-Thinking-2507 using MLX, optimized for Apple Silicon.

license:apache-2.0
195
1

wavecoder-ultra-6.7b-GGUF

194
11

Qwen3-VL-32B-Thinking-MLX-4bit

Model creator: Qwen. Original model: Qwen3-VL-32B-Thinking. MLX quantization: provided by LM Studio team using mlx-vlm. 4-bit quantized version of Qwen3-VL-32B-Thinking using MLX, optimized for Apple Silicon.

license:apache-2.0
194
0

Apriel-Nemotron-15b-Thinker-GGUF

license:mit
191
0

Qwen3-VL-2B-Thinking-MLX-8bit

license:apache-2.0
186
1

DeepSeek-V3-0324-GGUF

license:mit
181
11

Qwen1.5-32B-Chat-GGUF

179
15

Qwen3-Next-80B-A3B-Thinking-MLX-5bit

license:apache-2.0
175
0

granite-3.0-3b-a800m-instruct-GGUF

license:apache-2.0
174
1

GLM-Z1-Rumination-32B-0414-GGUF

license:mit
172
2

Athene-V2-Chat-GGUF

171
14

OlympicCoder-32B-GGUF

license:apache-2.0
170
3

Qwen2.5-7B-Instruct-MLX-8bit

license:apache-2.0
170
0

Hunyuan-A13B-Instruct-GGUF

168
1

granite-4.0-h-tiny-MLX-5bit

license:apache-2.0
162
0

Phi-4-reasoning-MLX-4bit

This model lmstudio-community/Phi-4-reasoning-MLX-4bit was converted to MLX format from microsoft/Phi-4-reasoning using mlx-lm version 0.24.0.

license:mit
160
0

Yi 1.5 34B Chat GGUF

Model creator: 01-ai. Original model: Yi-1.5-34B-Chat. GGUF quantization: provided by bartowski based on `llama.cpp` release b2854. Model Summary: Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. This model should perform well on a wide range of tasks such as coding, math, reasoning, and instruction following, while still maintaining excellent language understanding, commonsense reasoning, and reading comprehension. 🙏 Special thanks to Kalomaze for his dataset (linked here), which was used to calculate the imatrix for the IQ1_M and IQ2_XS quants and makes them usable even at their tiny size!

license:apache-2.0
158
9

Phi-4-reasoning-GGUF

license:mit
158
2

DeepSeek-V2.5-1210-GGUF

156
5

SmolLM2-360M-Instruct-GGUF

license:apache-2.0
155
0

internlm2-math-plus-mixtral8x22b-GGUF

154
1

granite-3.3-8b-instruct-GGUF

💫 Community Model> granite 3.3 8b instruct by Ibm-Granite. Model creator: ibm-granite. Original model: granite-3.3-8b-instruct. GGUF quantization: provided by bartowski based on `llama.cpp` release b5147. Fine-tuned for improved reasoning and instruction-following capabilities. Supports English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Capable of thinking, classification, extraction, coding (including FIM), function calling, and long-context tasks such as summarization, RAG, and long-document Q/A.

license:apache-2.0
148
0

aya-expanse-32b-GGUF

Model creator: CohereForAI. Original model: aya-expanse-32b. GGUF quantization: provided by bartowski based on `llama.cpp` release b3930. Aya Expanse offers highly advanced multilingual capabilities. License: CC-BY-NC; also requires adherence to C4AI's Acceptable Use Policy. Supports 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.

NaNK
license:cc-by-nc-4.0
146
8

KAT-Dev-MLX-4bit

Model creator: Kwaipilot Original model: KAT-Dev MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of KAT-Dev using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.

NaNK
145
0

Athene-70B-GGUF

NaNK
license:cc-by-nc-4.0
144
7

LFM2-350M-MLX-bf16

143
1

Qwen2.5-32B-Instruct-MLX-8bit

NaNK
license:apache-2.0
142
1

OpenReasoning-Nemotron-7B-GGUF

💫 Community Model> OpenReasoning Nemotron 7B by Nvidia Model creator: nvidia Original model: OpenReasoning-Nemotron-7B GGUF quantization: provided by bartowski based on `llama.cpp` release b5934

NaNK
138
1

AFM-4.5B-GGUF

Model creator: arcee-ai Original model: AFM-4.5B GGUF quantization: provided by bartowski based on `llama.cpp` release b6014 Sampling parameters: temperature 0.5, top_k 50, top_p 0.95, repeat_penalty 1.1, min_p 0.05

NaNK
134
2
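The sampling parameters in the AFM-4.5B card map directly onto the request fields of LM Studio's OpenAI-compatible local server. A minimal sketch that only builds the JSON request body; the model identifier `afm-4.5b` and the default port 1234 are assumptions (check LM Studio's Developer tab for the values on your machine), and the non-standard fields (`top_k`, `repeat_penalty`, `min_p`) follow llama.cpp-style extensions, so verify them against your server version:

```python
import json

# Sampling settings taken from the AFM-4.5B card above.
AFM_SAMPLING = {
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 0.95,
    "repeat_penalty": 1.1,
    "min_p": 0.05,
}

def build_chat_request(prompt: str, model: str = "afm-4.5b") -> str:
    """Build the JSON body for POST http://localhost:1234/v1/chat/completions.

    The model identifier and port are assumptions for this sketch; LM Studio
    shows the real identifier once the model is loaded.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **AFM_SAMPLING,
    }
    return json.dumps(payload)

body = build_chat_request("Summarize GGUF quantization in one sentence.")
```

Sending `body` with any HTTP client (e.g. `urllib.request`) to the local endpoint applies these sampling settings per request, without changing the model's defaults in the UI.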

granite-embedding-30m-english-GGUF

💫 Community Model> granite embedding 30m english by Ibm-Granite Model creator: ibm-granite Original model: granite-embedding-30m-english GGUF quantization: provided by bartowski based on `llama.cpp` release b4381 30-million-parameter model for extremely fast performance

license:apache-2.0
134
0
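An embedding model like granite-embedding-30m-english is typically served through the `/v1/embeddings` endpoint of LM Studio's local server. A hedged sketch that only constructs the OpenAI-style request body; the model identifier and port are assumptions, not values confirmed by this listing:

```python
import json

def build_embeddings_request(texts: list[str],
                             model: str = "granite-embedding-30m-english") -> str:
    """Build the JSON body for POST http://localhost:1234/v1/embeddings.

    Follows the OpenAI embeddings request shape; the model identifier here
    is an assumption -- use whatever name LM Studio shows after loading.
    """
    return json.dumps({"model": model, "input": texts})

body = build_embeddings_request(["fast retrieval", "tiny embedding model"])
```

The response (when posted to a running server) contains one vector per input string under `data[i].embedding`, which is what you would index for retrieval.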

Qwen2-Math-72B-Instruct-GGUF

NaNK
130
2

Qwen2.5-Coder-3B-GGUF

NaNK
130
0

DeepCoder-14B-Preview-GGUF

NaNK
license:mit
129
17

KAT-Dev-MLX-6bit

Model creator: Kwaipilot Original model: KAT-Dev MLX quantization: provided by LM Studio team using mlxlm 6-bit quantized version of KAT-Dev using MLX, optimized for Apple Silicon.

NaNK
129
0

Mistral-Small-24B-Instruct-2501-GGUF

💫 Community Model> Mistral Small 24B Instruct 2501 by Mistralai Model creator: mistralai Original model: Mistral-Small-24B-Instruct-2501 GGUF quantization: provided by bartowski based on `llama.cpp` release b4585 Multilingual: Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.

NaNK
license:apache-2.0
128
32

Llama-3-Groq-70B-Tool-Use-GGUF

NaNK
llama
126
7

openhands-lm-7b-v0.1-GGUF

NaNK
license:mit
126
3

Falcon3-10B-Instruct-GGUF

NaNK
125
1

Qwen2.5-Coder-0.5B-Instruct-MLX-8bit

💫 Community Model> Qwen2.5 Coder 0.5B Instruct by Qwen Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-Coder-0.5B-Instruct MLX quantizations: provided by bartowski from mlx-examples Up to 5.5 trillion training tokens including source code, text-code grounding, and synthetic data

NaNK
license:apache-2.0
124
0

Qwen3-VL-32B-Thinking-MLX-6bit

Model creator: Qwen Original model: Qwen3-VL-32B-Thinking MLX quantization: provided by LM Studio team using mlxvlm 6-bit quantized version of Qwen3-VL-32B-Thinking using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
122
0

gemma-7b-aps-it-GGUF

NaNK
120
4

internlm2-math-plus-7b-GGUF

NaNK
119
0

Qwen3-VL-8B-Thinking-MLX-4bit

Model creator: Qwen Original model: Qwen3-VL-8B-Thinking MLX quantization: provided by LM Studio team using mlxvlm LM Studio model page: Qwen3-VL 4-bit quantized version of Qwen3-VL-8B-Thinking using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
119
0

Meta-Llama-3-70B-Instruct-BPE-fix-GGUF

NaNK
llama
117
4

granite-3.1-1b-a400m-instruct-GGUF

NaNK
license:apache-2.0
117
0

Mistral-Large-Instruct-2407-GGUF

NaNK
116
13

GLM-Z1-32B-0414-GGUF

NaNK
license:mit
115
0

Qwen3-VL-30B-A3B-Thinking-MLX-8bit

NaNK
license:apache-2.0
115
0

Hermes-4-405B-GGUF

Model creator: NousResearch Original model: Hermes-4-405B GGUF quantization: provided by LM Studio team using `llama.cpp` release b6292

NaNK
license:llama3
114
2

DiscoPOP-zephyr-7b-gemma-GGUF

NaNK
113
6

Llama-3.1-Nemotron-Nano-4B-v1.1-GGUF

💫 Community Model> Llama 3.1 Nemotron Nano 4B v1.1 by Nvidia Model creator: nvidia Original model: Llama-3.1-Nemotron-Nano-4B-v1.1 GGUF quantization: provided by bartowski based on `llama.cpp` release b5432 Created from Llama 3.1 8B with pruning and distilling. Tuned for reasoning, human chat preferences, and tasks such as RAG and tool calling.

NaNK
llama-3
111
2

ERNIE-4.5-0.3B-GGUF

Model creator: baidu Original model: ERNIE-4.5-0.3B-PT GGUF quantization: provided by bartowski based on `llama.cpp` release b5780 Optimized for general-purpose language understanding and generation

NaNK
license:apache-2.0
111
1

olmOCR-2-7B-1025-GGUF

NaNK
license:apache-2.0
111
0

Qwen2.5-Coder-14B-GGUF

NaNK
license:apache-2.0
111
0

Llama-3.1-8B-UltraLong-1M-Instruct-GGUF

NaNK
base_model:nvidia/Llama-3.1-Nemotron-8B-UltraLong-1M-Instruct
106
1

UI-TARS-72B-DPO-GGUF

NaNK
license:apache-2.0
105
3

Qwen2-Math-1.5B-Instruct-GGUF

NaNK
license:apache-2.0
105
1

c4ai-command-a-03-2025-GGUF

💫 Community Model> c4ai command a 03 2025 by Cohereforai Model creator: CohereForAI Original model: c4ai-command-a-03-2025 GGUF quantization: provided by bartowski based on `llama.cpp` release b4877 License: CC-BY-NC, which also requires adhering to C4AI's Acceptable Use Policy The model has been trained on 23 languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian. Trained for conversation, RAG, tool use, and coding.

NaNK
license:cc-by-nc-4.0
103
17

Phi-3.5-MoE-instruct-GGUF

license:mit
103
4

Mistral-Small-4-119B-2603-GGUF

NaNK
103
1

Hermes-4-405B-MLX-4bit

Model creator: NousResearch Original model: Hermes-4-405B MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of Hermes-4-405B using MLX, optimized for Apple Silicon.

NaNK
llama
103
0

OpenThinker3-7B-GGUF

NaNK
llama-factory
102
6

Qwen2.5-0.5B-Instruct-MLX-8bit

Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-0.5B-Instruct MLX quantizations: provided by bartowski from mlx-examples Long context: Support for 32k tokens and 8k token generation Large-scale training dataset: Encompasses a huge range of knowledge. Enhanced structured data understanding and generation. Over 29 languages including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.

NaNK
license:apache-2.0
102
0

DeepSWE-Preview-GGUF

💫 Community Model> DeepSWE Preview by Agentica-Org Model creator: agentica-org Original model: DeepSWE-Preview GGUF quantization: provided by bartowski based on `llama.cpp` release b5760 Trained on top of Qwen3-32B with thinking mode enabled Coding agent trained with only reinforcement learning (RL) to excel at software engineering (SWE) tasks Achieves an impressive 59.0% on SWE-Bench-Verified, which is currently #1 in the open-weights category

license:mit
100
4

Qwen3-VL-8B-Thinking-MLX-8bit

Model creator: Qwen Original model: Qwen3-VL-8B-Thinking MLX quantization: provided by LM Studio team using mlxvlm LM Studio model page: Qwen3-VL 8-bit quantized version of Qwen3-VL-8B-Thinking using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
100
0

CodeLlama-7B-KStack-clean-GGUF

NaNK
base_model:JetBrains/CodeLlama-7B-KStack-clean
99
1

Dhanishtha-2.0-preview-GGUF

NaNK
license:apache-2.0
98
1

Qwen2.5-3B-Instruct-MLX-4bit

NaNK
98
0

EXAONE-Deep-7.8B-GGUF

NaNK
98
0

reka-flash-3.1-GGUF

NaNK
license:apache-2.0
94
1

DeepSeek-R1-0528-GGUF

💫 Community Model> DeepSeek R1 0528 by Deepseek-Ai Model creator: deepseek-ai Original model: DeepSeek-R1-0528 GGUF quantization: provided by bartowski based on `llama.cpp` release b5524

NaNK
license:mit
92
9

EXAONE-3.5-32B-Instruct-GGUF

NaNK
92
1

Qwen2-Math-7B-Instruct-GGUF

NaNK
license:apache-2.0
90
5

r1-1776-distill-llama-70b-GGUF

NaNK
base_model:perplexity-ai/r1-1776-distill-llama-70b
87
3

Llama-3.1-8B-UltraLong-4M-Instruct-GGUF

NaNK
base_model:nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct
87
0

EXAONE-4.0-32B-MLX-8bit

NaNK
87
0

Qwen2.5-Math-1.5B-Instruct-GGUF

NaNK
license:apache-2.0
86
1

OpenReasoning-Nemotron-14B-GGUF

💫 Community Model> OpenReasoning Nemotron 14B by Nvidia Model creator: nvidia Original model: OpenReasoning-Nemotron-14B GGUF quantization: provided by bartowski based on `llama.cpp` release b5934

NaNK
85
1

Skywork-R1V3-38B-GGUF

NaNK
license:mit
84
3

openhands-lm-32b-v0.1-GGUF

NaNK
license:mit
83
11

Llama-3.1-Tulu-3-70B-GGUF

NaNK
base_model:allenai/Llama-3.1-Tulu-3-70B
83
0

Qwen3-VL-30B-A3B-Thinking-MLX-4bit

NaNK
license:apache-2.0
83
0

Qwen3-VL-32B-Thinking-MLX-5bit

Model creator: Qwen Original model: Qwen3-VL-32B-Thinking MLX quantization: provided by LM Studio team using mlx-vlm 5-bit quantized version of Qwen3-VL-32B-Thinking using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
83
0

Qwen2.5-Coder-0.5B-GGUF

NaNK
license:apache-2.0
82
0

cogito-v1-preview-qwen-32B-GGUF

NaNK
license:apache-2.0
81
2

AceReason-Nemotron-1.1-7B-GGUF

💫 Community Model> AceReason Nemotron 1.1 7B by Nvidia Model creator: nvidia Original model: AceReason-Nemotron-1.1-7B GGUF quantization: provided by bartowski based on `llama.cpp` release b5674 Thanks to its stronger SFT backbone, AceReason-Nemotron-1.1-7B significantly outperforms its predecessor and sets a record-high performance among Qwen2.5-7B-based reasoning models on challenging math and code reasoning benchmarks Technical report available here: https://arxiv.org/abs/2506.13284

NaNK
81
1

EXAONE-4.0-32B-GGUF

NaNK
81
0

Jedi-7B-1080p-GGUF

Model creator: xlangai Original model: Jedi-7B-1080p GGUF quantization: provided by bartowski based on `llama.cpp` release b5524 Trained from Qwen 2.5 VL on 4 million synthesized computer-use examples

NaNK
license:apache-2.0
80
2

granite-3.3-2b-instruct-GGUF

💫 Community Model> granite 3.3 2b instruct by Ibm-Granite Model creator: ibm-granite Original model: granite-3.3-2b-instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5147 Fine-tuned for improved reasoning and instruction-following capabilities Supports English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese Capable of thinking, classification, extraction, coding (including FIM), function calling, and long context tasks such as summarization, RAG, and long document Q/A

NaNK
license:apache-2.0
78
3

Qwen2.5-Math-72B-Instruct-GGUF

NaNK
78
2

Falcon3-7B-Instruct-GGUF

NaNK
78
1

SmolLM3-3B-MLX-4bit

Model creator: HuggingFaceTB Original model: SmolLM3-3B MLX quantization: provided by LM Studio team using mlx-lm 4-bit quantized version of SmolLM3-3B using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
78
1

LFM2-700M-MLX-8bit

NaNK
78
0

Yi-Coder-1.5B-Chat-GGUF

NaNK
license:apache-2.0
74
5

granite-embedding-125m-english-GGUF

license:apache-2.0
73
1

MindLink-32B-0801-GGUF

Model creator: Skywork Original model: MindLink-32B-0801 GGUF quantization: provided by bartowski based on `llama.cpp` release b6014

NaNK
72
4

Devstral-Small-2505-MLX-6bit

💫 Community Model> Devstral-Small-2505 by mistralai Model creator: mistralai Original model: Devstral-Small-2505 MLX quantization: provided by LM Studio team using mlx-lm 6-bit quantized version of Devstral-Small-2505 using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
71
6

ZR1-1.5B-GGUF

NaNK
license:mit
70
2

EXAONE-4.0-1.2B-MLX-8bit

NaNK
68
0

Hermes-4-405B-MLX-8bit

Model creator: NousResearch Original model: Hermes-4-405B MLX quantization: provided by LM Studio team using mlx-lm 8-bit quantized version of Hermes-4-405B using MLX, optimized for Apple Silicon.

NaNK
llama
66
0

KAT-Dev-MLX-5bit

Model creator: Kwaipilot Original model: KAT-Dev MLX quantization: provided by LM Studio team using mlx-lm 5-bit quantized version of KAT-Dev using MLX, optimized for Apple Silicon.

NaNK
66
0

internlm2_5-20b-chat-GGUF

NaNK
65
5

cogito-v1-preview-llama-8B-GGUF

NaNK
base_model:deepcogito/cogito-v1-preview-llama-8B
65
4

EXAONE-4.0-1.2B-MLX-4bit

NaNK
65
0

OlympicCoder-7B-GGUF

Model creator: open-r1 Original model: OlympicCoder-7B GGUF quantization: provided by bartowski based on `llama.cpp` release b4867

NaNK
license:apache-2.0
61
7

Qwen3-VL-30B-A3B-Thinking-MLX-6bit

NaNK
license:apache-2.0
61
0

OpenCoder-8B-Instruct-GGUF

NaNK
60
1

Qwen2.5-1.5B-Instruct-MLX-8bit

NaNK
license:apache-2.0
60
0

OpenReasoning-Nemotron-32B-GGUF

💫 Community Model> OpenReasoning Nemotron 32B by Nvidia Model creator: nvidia Original model: OpenReasoning-Nemotron-32B GGUF quantization: provided by bartowski based on `llama.cpp` release b5934

NaNK
60
0

AM-Thinking-v1-GGUF

NaNK
license:apache-2.0
56
1

LFM2-700M-MLX-bf16

56
0

internlm2_5-1_8b-chat-GGUF

NaNK
55
3

Intern-S1-GGUF

Model creator: internlm Original model: Intern-S1 GGUF quantization: provided by bartowski based on `llama.cpp` release b6139

NaNK
55
1

AceReason-Nemotron-7B-GGUF

💫 Community Model> AceReason Nemotron 7B by Nvidia Model creator: nvidia Original model: AceReason-Nemotron-7B GGUF quantization: provided by bartowski based on `llama.cpp` release b5466 A math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-7B Technical report available here: https://arxiv.org/abs/2505.16400

NaNK
53
1

SmolLM3-3B-MLX-6bit

Model creator: HuggingFaceTB Original model: SmolLM3-3B MLX quantization: provided by LM Studio team using mlx-lm 6-bit quantized version of SmolLM3-3B using MLX, optimized for Apple Silicon.

NaNK
license:apache-2.0
53
1

Falcon3-1B-Instruct-GGUF

NaNK
53
0

granite-3.1-3b-a800m-instruct-GGUF

NaNK
license:apache-2.0
53
0

Falcon3-3B-Instruct-GGUF

NaNK
52
0

Qwen2.5-3B-Instruct-MLX-8bit

NaNK
51
0

cogito-v1-preview-llama-3B-GGUF

NaNK
base_model:deepcogito/cogito-v1-preview-llama-3B
49
3

Hermes-4-405B-MLX-5bit

Model creator: NousResearch Original model: Hermes-4-405B MLX quantization: provided by LM Studio team using mlx-lm 5-bit quantized version of Hermes-4-405B using MLX, optimized for Apple Silicon.

llama
49
0

Qwen3-VL-30B-A3B-Thinking-MLX-5bit

license:apache-2.0
49
0

granite-3.1-2b-instruct-GGUF

license:apache-2.0
48
1

Llama-3_1-Nemotron-Ultra-253B-v1-GGUF

llama-3
48
1

cogito-v1-preview-qwen-14B-GGUF

license:apache-2.0
47
3

DeepCoder-1.5B-Preview-GGUF

license:mit
47
3

granite-3.0-2b-instruct-GGUF

license:apache-2.0
47
1

OpenCodeReasoning-Nemotron-32B-GGUF

license:apache-2.0
47
1

Hyperion-3.0-Mistral-7B-DPO-GGUF

license:apache-2.0
45
0

UI-TARS-2B-SFT-GGUF

license:apache-2.0
44
3

Llama-3_1-Nemotron-51B-Instruct-GGUF

llama-3
44
1

granite-3.0-8b-instruct-GGUF

license:apache-2.0
44
1

Hermes-4-405B-MLX-6bit

Model creator: NousResearch
Original model: Hermes-4-405B
MLX quantization: provided by the LM Studio team using mlx-lm

6-bit quantized version of Hermes-4-405B using MLX, optimized for Apple Silicon. Community Model Program disclaimer applies.

llama
44
0

Llama-3.1-8B-UltraLong-2M-Instruct-GGUF

base_model:nvidia/Llama-3.1-Nemotron-8B-UltraLong-2M-Instruct
43
0

txgemma-9b-chat-GGUF

42
1

Skywork-OR1-7B-Preview-GGUF

42
1

OREAL-7B-GGUF

license:apache-2.0
42
0

Mistral-Small-Instruct-2409-GGUF

40
21

Skywork-SWE-32B-GGUF

license:apache-2.0
40
1

Qwen2.5-1.5B-Instruct-MLX-4bit

license:apache-2.0
40
0

Magistral-Small-2507-MLX-8bit

💫 Community Model> Magistral-Small-2507 by mistralai

Model creator: mistralai
Original model: Magistral-Small-2507
MLX quantization: provided by the LM Studio team using mlx-lm

8-bit quantized version of Magistral-Small-2507 using MLX, optimized for Apple Silicon. Community Model Program disclaimer applies.

license:apache-2.0
40
0

Qwen3-VL-4B-Thinking-MLX-4bit

license:apache-2.0
40
0