lmstudio-community
gemma-4-26B-A4B-it-GGUF
Qwen3-VL-4B-Instruct-MLX-4bit
---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
base_model: Qwen/Qwen3-VL-4B-Instruct
---
Qwen3-VL-4B-Instruct-MLX-8bit
---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
base_model: Qwen/Qwen3-VL-4B-Instruct
---
Qwen3-VL-4B-Instruct-MLX-5bit
---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
base_model: Qwen/Qwen3-VL-4B-Instruct
---
Qwen3-VL-4B-Instruct-MLX-6bit
---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
base_model: Qwen/Qwen3-VL-4B-Instruct
---
gemma-3n-E4B-it-GGUF
DeepSeek-R1-0528-Qwen3-8B-MLX-4bit
---
license: mit
library_name: mlx
tags:
- mlx
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
pipeline_tag: text-generation
---
gemma-4-31B-it-GGUF
DeepSeek-R1-0528-Qwen3-8B-MLX-8bit
---
license: mit
library_name: mlx
tags:
- mlx
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
---
Qwen3-4B-Thinking-2507-MLX-4bit
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Thinking-2507
---
Qwen3-4B-Thinking-2507-MLX-8bit
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Thinking-2507
---
Qwen3-4B-Thinking-2507-MLX-6bit
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-4B-Thinking-2507
---
Qwen3-VL-8B-Instruct-MLX-4bit
---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
base_model: Qwen/Qwen3-VL-8B-Instruct
---
gpt-oss-20b-GGUF
---
base_model: openai/gpt-oss-20b
license: apache-2.0
tags:
- gguf
---
Magistral-Small-2509-MLX-4bit
---
base_model: mistralai/Magistral-Small-2509
language: [en, fr, de, es, pt, it, ja, ko, ru, zh, ar, fa, id, ms, ne, pl, ro, sr, sv, tr, uk, vi, hi, bn]
library_name: vllm
license: apache-2.0
inference: false
extra_gated_description: If you want to learn more about how we process your personal data, please read our Privacy Policy.
tags:
- vllm
- mistral-common
- mlx
---
Magistral-Small-2509-MLX-8bit
---
base_model: mistralai/Magistral-Small-2509
language: [en, fr, de, es, pt, it, ja, ko, ru, zh, ar, fa, id, ms, ne, pl, ro, sr, sv, tr, uk, vi, hi, bn]
library_name: vllm
license: apache-2.0
inference: false
extra_gated_description: If you want to learn more about how we process your personal data, please read our Privacy Policy.
tags:
- vllm
- mistral-common
- mlx
---
Magistral-Small-2509-MLX-6bit
---
base_model: mistralai/Magistral-Small-2509
language: [en, fr, de, es, pt, it, ja, ko, ru, zh, ar, fa, id, ms, ne, pl, ro, sr, sv, tr, uk, vi, hi, bn]
library_name: vllm
license: apache-2.0
inference: false
extra_gated_description: If you want to learn more about how we process your personal data, please read our Privacy Policy.
tags:
- vllm
- mistral-common
- mlx
---
Magistral-Small-2509-MLX-5bit
---
base_model: mistralai/Magistral-Small-2509
language: [en, fr, de, es, pt, it, ja, ko, ru, zh, ar, fa, id, ms, ne, pl, ro, sr, sv, tr, uk, vi, hi, bn]
library_name: vllm
license: apache-2.0
inference: false
extra_gated_description: If you want to learn more about how we process your personal data, please read our Privacy Policy.
tags:
- vllm
- mistral-common
- mlx
---
Qwen3-VL-8B-Instruct-MLX-8bit
---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
base_model: Qwen/Qwen3-VL-8B-Instruct
---
Qwen3-VL-8B-Instruct-MLX-6bit
---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
base_model: Qwen/Qwen3-VL-8B-Instruct
---
Qwen3-VL-8B-Instruct-MLX-5bit
---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- mlx
base_model: Qwen/Qwen3-VL-8B-Instruct
---
Qwen3-Coder-30B-A3B-Instruct-MLX-4bit
💫 Community Model> Qwen3-Coder-30B-A3B-Instruct by Qwen
👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord.
Model creator: Qwen
Original model: Qwen3-Coder-30B-A3B-Instruct
MLX quantization: provided by LM Studio team using mlx-lm
4-bit quantized version of Qwen3-Coder-30B-A3B-Instruct using MLX, optimized for Apple Silicon.
🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
Qwen3-Coder-30B-A3B-Instruct-MLX-5bit
💫 Community Model> Qwen3-Coder-30B-A3B-Instruct by Qwen
Model creator: Qwen
Original model: Qwen3-Coder-30B-A3B-Instruct
MLX quantization: provided by LM Studio team using mlx-lm
5-bit quantized version of Qwen3-Coder-30B-A3B-Instruct using MLX, optimized for Apple Silicon.
Qwen3-Coder-30B-A3B-Instruct-MLX-8bit
Qwen3-Coder-30B-A3B-Instruct-MLX-6bit
💫 Community Model> Qwen3-Coder-30B-A3B-Instruct by Qwen
Model creator: Qwen
Original model: Qwen3-Coder-30B-A3B-Instruct
MLX quantization: provided by LM Studio team using mlx-lm
6-bit quantized version of Qwen3-Coder-30B-A3B-Instruct using MLX, optimized for Apple Silicon.
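The 4/5/6/8-bit variants above trade download size for fidelity. As a rough back-of-the-envelope sketch (not an official LM Studio calculation), the weights-only footprint of a k-bit quantization is parameters × bits / 8; real files run somewhat larger because quantization scales, biases, and any tensors kept at higher precision add overhead:

```python
def quant_size_gib(n_params: float, bits: float) -> float:
    """Approximate weights-only size of a k-bit quantization, in GiB.

    Ignores quantization scales/biases and tensors kept at higher
    precision, so actual downloads run somewhat larger.
    """
    return n_params * bits / 8 / 2**30

# The 30B-parameter model above, at the bit-widths offered:
for bits in (4, 5, 6, 8):
    print(f"{bits}-bit: ~{quant_size_gib(30e9, bits):.1f} GiB")
```

For a 30B-parameter model this works out to roughly 14, 17, 21, and 28 GiB for the 4-, 5-, 6-, and 8-bit variants respectively, which is why multiple bit-widths are published for different Apple Silicon memory configurations.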
gemma-3n-E4B-it-MLX-4bit
Model creator: google
Original model: gemma-3n-E4B-it
MLX quantization: provided by LM Studio team using mlx-vlm
4-bit quantized version of gemma-3n-E4B-it using MLX, optimized for Apple Silicon.
gemma-3n-E4B-it-MLX-bf16
Model creator: google
Original model: gemma-3n-E4B-it
MLX conversion: provided by LM Studio team using mlx-vlm
Original bfloat16 version of gemma-3n-E4B-it using MLX, optimized for Apple Silicon.
gemma-3n-E4B-it-MLX-8bit
gemma-3n-E4B-it-MLX-6bit
Model creator: google
Original model: gemma-3n-E4B-it
MLX quantization: provided by LM Studio team using mlx-vlm
6-bit quantized version of gemma-3n-E4B-it using MLX, optimized for Apple Silicon.
gemma-3n-E2B-it-GGUF
Qwen3-VL-30B-A3B-Instruct-MLX-4bit
Qwen3-4B-Instruct-2507-MLX-4bit
Model creator: Qwen
Original model: Qwen3-4B-Instruct-2507
MLX quantization: provided by LM Studio team using mlx-lm
4-bit quantized version of Qwen3-4B-Instruct-2507 using MLX, optimized for Apple Silicon.
Qwen3-4B-Instruct-2507-MLX-8bit
Qwen3-4B-Instruct-2507-MLX-5bit
Qwen3-4B-Instruct-2507-MLX-6bit
Model creator: Qwen
Original model: Qwen3-4B-Instruct-2507
MLX quantization: provided by LM Studio team using mlx-lm
6-bit quantized version of Qwen3-4B-Instruct-2507 using MLX, optimized for Apple Silicon.
Qwen3-VL-30B-A3B-Instruct-MLX-8bit
Qwen3-VL-30B-A3B-Instruct-MLX-6bit
Qwen3-VL-30B-A3B-Instruct-MLX-5bit
💫 Community Model> Qwen3-VL-30B-A3B-Instruct by Qwen
Model creator: Qwen
Original model: Qwen3-VL-30B-A3B-Instruct
MLX quantization: provided by LM Studio team using mlx-vlm
LM Studio model page: Qwen3-VL
5-bit quantized version of Qwen3-VL-30B-A3B-Instruct using MLX, optimized for Apple Silicon.
Seed-OSS-36B-Instruct-MLX-8bit
GLM-4.7-Flash-MLX-8bit
Seed-OSS-36B-Instruct-MLX-4bit
💫 Community Model> Seed-OSS-36B-Instruct by ByteDance-Seed
Model creator: ByteDance-Seed
Original model: Seed-OSS-36B-Instruct
MLX quantization: provided by LM Studio team using mlx-lm
4-bit quantized version of Seed-OSS-36B-Instruct using MLX, optimized for Apple Silicon.
Seed-OSS-36B-Instruct-MLX-5bit
GLM-4.7-Flash-MLX-6bit
Seed-OSS-36B-Instruct-MLX-6bit
💫 Community Model> Seed-OSS-36B-Instruct by ByteDance-Seed
Model creator: ByteDance-Seed
Original model: Seed-OSS-36B-Instruct
MLX quantization: provided by LM Studio team using mlx-lm
6-bit quantized version of Seed-OSS-36B-Instruct using MLX, optimized for Apple Silicon.
Qwen3-8B-MLX-4bit
Qwen3-8B-MLX-8bit
LFM2-24B-A2B-MLX-4bit
gpt-oss-120b-MLX-8bit
Hermes-4-70B-MLX-4bit
Qwen3-14B-GGUF
Hermes-4-70B-MLX-8bit
DeepSeek-R1-0528-Qwen3-8B-GGUF
Hermes-4-70B-MLX-5bit
Hermes-4-70B-MLX-6bit
Model creator: NousResearch
Original model: Hermes-4-70B
MLX quantization: provided by LM Studio team using mlx-lm
6-bit quantized version of Hermes-4-70B using MLX, optimized for Apple Silicon.
Qwen3-30B-A3B-Instruct-2507-MLX-4bit
Qwen3-30B-A3B-Instruct-2507-MLX-8bit
Qwen3-30B-A3B-Instruct-2507-MLX-6bit
💫 Community Model> Qwen3-30B-A3B-Instruct-2507 by Qwen
Model creator: Qwen
Original model: Qwen3-30B-A3B-Instruct-2507
MLX quantization: provided by LM Studio team using mlx-lm
6-bit quantized version of Qwen3-30B-A3B-Instruct-2507 using MLX, optimized for Apple Silicon.
Qwen3-Coder-30B-A3B-Instruct-GGUF
💫 Community Model> Qwen3 Coder 30B A3B Instruct by Qwen
Model creator: Qwen
Original model: Qwen3-Coder-30B-A3B-Instruct
GGUF quantization: provided by bartowski based on `llama.cpp` release b6014
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
Qwen3-8B-GGUF
Model creator: Qwen
Original model: Qwen3-8B
GGUF quantization: provided by bartowski based on `llama.cpp` release b5200
- Supports a context length of up to 131,072 tokens with YaRN (default 32k)
- Supports `/nothink` to disable reasoning; just add it at the end of your prompt
- Supports both thinking and non-thinking modes, with enhanced reasoning in both for significantly improved mathematics, coding, and commonsense performance
- Excels at creative writing, role-playing, multi-turn dialogues, and instruction following
- Advanced agent capabilities and support for over 100 languages and dialects
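Two of the Qwen3-8B features above can be sketched concretely: the `/nothink` soft switch is just a suffix appended to the prompt, and the 131,072-token figure corresponds to a 4x YaRN extension of the native 32k window (the helper name below is illustrative, not part of any API):

```python
def disable_thinking(prompt: str) -> str:
    """Append Qwen3's /nothink soft switch to the end of a prompt,
    disabling the reasoning trace for that turn."""
    return prompt.rstrip() + " /nothink"

print(disable_thinking("Summarize this file."))

# YaRN scaling factor implied by 32,768 native -> 131,072 extended:
yarn_factor = 131072 / 32768
print(yarn_factor)  # 4.0
```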
Magistral-Small-2509-GGUF
💫 Community Model> Magistral-Small-2509 by mistralai
Model creator: mistralai
Original model: Magistral-Small-2509
GGUF quantization: provided by LM Studio team using `llama.cpp` release b6503
gemma-3-12b-it-GGUF
Model creator: google Original model: gemma-3-12b-it GGUF quantization: provided by bartowski based on `llama.cpp` release b4877 Supports a context length of 128k tokens, with a max output of 8192. Multimodal, supporting images normalized to 896 x 896 resolution. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Requires the latest (currently beta) llama.cpp runtime.
Qwen3-VL-8B-Instruct-GGUF
Model creator: Qwen Original model: Qwen3-VL-8B-Instruct GGUF quantization: provided by LM Studio team using `llama.cpp` release b6890
Mistral-Small-3.2-24B-Instruct-2506-MLX-6bit
Mistral-Small-3.2-24B-Instruct-2506-MLX-4bit
LFM2-1.2B-MLX-8bit
Mistral-Small-3.2-24B-Instruct-2506-MLX-8bit
LFM2-1.2B-MLX-bf16
Model creator: LiquidAI Original model: LFM2-1.2B MLX quantization: provided by LM Studio team using mlxlm Original bfloat16 version of LFM2-1.2B using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.
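To see why a full-precision bf16 build is published alongside quantized variants, a rough back-of-the-envelope weight-memory estimate (parameter count approximated as 1.2 billion; real usage also includes KV cache, activations, and quantization metadata):

```python
PARAMS = 1.2e9  # approximate parameter count of LFM2-1.2B

def weight_memory_gb(params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB; ignores KV cache and activations."""
    return params * bits_per_weight / 8 / 1e9

print(f"bf16 : {weight_memory_gb(PARAMS, 16):.1f} GB")  # ~2.4 GB
print(f"8-bit: {weight_memory_gb(PARAMS, 8):.1f} GB")   # ~1.2 GB
print(f"4-bit: {weight_memory_gb(PARAMS, 4):.1f} GB")   # ~0.6 GB
```

At this model size even the bf16 weights fit comfortably on most Apple Silicon machines, which is why the unquantized variant is practical here.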
Phi-4-mini-reasoning-MLX-4bit
Qwen3-30B-A3B-Instruct-2507-GGUF
💫 Community Model> Qwen3-30B-A3B-Instruct-2507 by Qwen Model creator: Qwen Original model: Qwen3-30B-A3B-Instruct-2507 GGUF quantization: provided by LM Studio team using `llama.cpp` release b6022
Qwen3-VL-4B-Instruct-GGUF
Model creator: Qwen Original model: Qwen3-VL-4B-Instruct GGUF quantization: provided by LM Studio team using `llama.cpp` release b6890
Qwen2.5-Coder-14B-Instruct-MLX-4bit
Qwen2.5-VL-7B-Instruct-GGUF
Model creator: Qwen Original model: Qwen2.5-VL-7B-Instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5317 Not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but also highly capable of analyzing texts, charts, icons, graphics, and layouts within images. Can act as a visual agent that reasons and dynamically directs tools, enabling computer and phone use. Useful for generating structured outputs and stable JSON outputs.
Qwen2.5-Coder-14B-Instruct-MLX-8bit
Qwen3-VL-30B-A3B-Instruct-GGUF
💫 Community Model> Qwen3-VL-30B-A3B-Instruct by Qwen Model creator: Qwen Original model: Qwen3-VL-30B-A3B-Instruct GGUF quantization: provided by LM Studio team using `llama.cpp` release b6890
Devstral-Small-2507-MLX-4bit
Devstral-Small-2507-MLX-8bit
Devstral-Small-2507-MLX-6bit
💫 Community Model> Devstral-Small-2507 by mistralai Model creator: mistralai Original model: Devstral-Small-2507 MLX quantization: provided by LM Studio team using mlxlm LM Studio model page: mistralai/devstral-small-2507 6-bit quantized version of Devstral-Small-2507 using MLX, optimized for Apple Silicon.
Devstral-Small-2507-MLX-bf16
Qwen3-4B-Thinking-2507-GGUF
QwQ-32B-MLX-4bit
Compatibility: Apple Silicon Macs Model creator: Qwen Original model: QwQ-32B MLX quantizations: provided by bartowski from mlx-examples
QwQ-32B-MLX-8bit
gemma-3-27B-it-qat-GGUF
Model creator: google Original model: gemma-3-27b-it GGUF quantization: provided by Google Optimized with Quantization Aware Training for improved 4-bit performance. Supports a context length of 128k tokens, with a max output of 8192. Multimodal, supporting images normalized to 896 x 896 resolution. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning.
ERNIE-4.5-21B-A3B-MLX-4bit
Model creator: baidu Original model: ERNIE-4.5-21B-A3B-PT MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of ERNIE-4.5-21B-A3B-PT using MLX, optimized for Apple Silicon.
ERNIE-4.5-21B-A3B-MLX-8bit
ERNIE-4.5-21B-A3B-MLX-6bit
gemma-3-4b-it-GGUF
gemma-3n-E4B-it-text-GGUF
granite-4.0-h-tiny-GGUF
GLM-4.6V-Flash-MLX-4bit
Phi-4-reasoning-plus-MLX-4bit
Devstral-Small-2-24B-Instruct-2512-GGUF
Qwen3-14B-MLX-4bit
This model lmstudio-community/Qwen3-14B-4bit was converted to MLX format from Qwen/Qwen3-14B using mlx-lm version 0.24.0.
gpt-oss-120b-GGUF
Model creator: openai Original model: gpt-oss-120b GGUF quantization: provided by LM Studio team using `llama.cpp`
GLM-4.6V-Flash-MLX-6bit
Qwen3-Next-80B-A3B-Instruct-GGUF
Qwen3-14B-MLX-8bit
This model lmstudio-community/Qwen3-14B-MLX-8bit was converted to MLX format from Qwen/Qwen3-14B using mlx-lm version 0.24.0.
Qwen3-32B-MLX-4bit
This model lmstudio-community/Qwen3-32B-MLX-4bit was converted to MLX format from Qwen/Qwen3-32B using mlx-lm version 0.24.0.
Qwen3-4B-Instruct-2507-GGUF
Model creator: Qwen Original model: Qwen3-4B-Instruct-2507 GGUF quantization: provided by bartowski based on `llama.cpp` release b6096
Qwen3-32B-MLX-8bit
This model lmstudio-community/Qwen3-32B-MLX-8bit was converted to MLX format from Qwen/Qwen3-32B using mlx-lm version 0.24.0.
Qwen3-1.7B-MLX-8bit
This model lmstudio-community/Qwen3-1.7B-MLX-8bit was converted to MLX format from Qwen/Qwen3-1.7B using mlx-lm version 0.24.0.
Mistral-7B-Instruct-v0.3-GGUF
💫 Community Model> Mistral 7B Instruct v0.3 by Mistral AI
Model creator: Mistral AI
Original model: Mistral-7B-Instruct-v0.3
GGUF quantization: provided by bartowski based on `llama.cpp` release b2965
Mistral 7B Instruct is an excellent high-quality model tuned for instruction following, and release v0.3 is no different. This iteration adds function calling support, which should extend the use cases further and allow for a more useful assistant. Choose the `Mistral Instruct` preset in LM Studio, which applies the model's prompt format for you under the hood.
Version 0.3 has a few changes over release 0.2, including:
- An extended vocabulary (32000 -> 32768)
- A new tokenizer
- Support for function calling
Function calling support is made possible through the new extended vocabulary, including the tokens [TOOL_CALLS], [AVAILABLE_TOOLS], and [TOOL_RESULTS]. This model maintains the v0.2 context length of 32768.
🙏 Special thanks to Kalomaze, Dampf and turboderp for their work on the dataset (linked here) that was used for calculating the imatrix for all sizes.
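The preset hides the raw prompt format, but for the curious, a sketch of Mistral's instruct template for a single user turn (based on Mistral's published chat format; exact whitespace and BOS handling vary by runtime, so treat the details as assumptions and rely on the preset in practice):

```python
def format_mistral_instruct(user_prompt: str) -> str:
    """Wrap a single user turn in Mistral's [INST] ... [/INST] instruct tags."""
    return f"<s>[INST] {user_prompt} [/INST]"

prompt = format_mistral_instruct("Write a haiku about autumn.")
print(prompt)
```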
Qwen3-1.7B-MLX-4bit
This model lmstudio-community/Qwen3-1.7B-MLX-4bit was converted to MLX format from Qwen/Qwen3-1.7B using mlx-lm version 0.24.0.
Qwen3-1.7B-MLX-bf16
This model lmstudio-community/Qwen3-1.7B-MLX-bf16 was converted to MLX format from Qwen/Qwen3-1.7B using mlx-lm version 0.24.0.
Qwen3-Next-80B-A3B-Instruct-MLX-4bit
Qwen3-4B-MLX-4bit
This model lmstudio-community/Qwen3-4B-MLX-4bit was converted to MLX format from Qwen/Qwen3-4B using mlx-lm version 0.24.0.
Qwen2.5-Coder-32B-Instruct-MLX-4bit
💫 Community Model> Qwen2.5 Coder 32B Instruct by Qwen Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-Coder-32B-Instruct MLX quantizations: provided by bartowski from mlx-examples Long-context support up to 128K tokens with a YaRN rope scaling factor of 4.0 Up to 5.5 trillion training tokens including source code, text-code grounding, and synthetic data
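The 128K figure follows directly from applying the YaRN factor to the model's native window. A sketch of the arithmetic, with a `rope_scaling` fragment shaped like the one Qwen's model READMEs suggest adding to `config.json` for long-context inference (verify the exact keys against the README before use):

```python
NATIVE_CONTEXT = 32768  # native context window before YaRN scaling
YARN_FACTOR = 4.0

# Config fragment of the kind Qwen documents for enabling YaRN
# (field names should be checked against the model's own README):
rope_scaling = {
    "type": "yarn",
    "factor": YARN_FACTOR,
    "original_max_position_embeddings": NATIVE_CONTEXT,
}

extended_context = int(NATIVE_CONTEXT * YARN_FACTOR)
print(extended_context)  # 131072, i.e. the advertised 128K tokens
```

Note that YaRN scaling is static: Qwen recommends enabling it only when your prompts actually exceed the native window, since it can slightly degrade short-context quality.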
Qwen3-4B-MLX-8bit
This model lmstudio-community/Qwen3-4B-MLX-8bit was converted to MLX format from Qwen/Qwen3-4B using mlx-lm version 0.24.0.
Qwen2.5-Coder-32B-Instruct-MLX-8bit
💫 Community Model> Qwen2.5 Coder 32B Instruct by Qwen Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-Coder-32B-Instruct MLX quantizations: provided by bartowski from mlx-examples Long-context support up to 128K tokens with a YaRN rope scaling factor of 4.0 Up to 5.5 trillion training tokens including source code, text-code grounding, and synthetic data
GLM-4.6V-Flash-MLX-8bit
Magistral-Small-2506-MLX-4bit
Model creator: mistral-ai Original model: magistral-small MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of magistral-small using MLX, optimized for Apple Silicon.
magistral-small-2506-mlx-bf16
Model creator: mistral-ai Original model: magistral-small MLX quantization: provided by LM Studio team using mlxlm Original bfloat16 version of magistral-small using MLX, optimized for Apple Silicon.
Qwen3-Next-80B-A3B-Instruct-MLX-8bit
💫 Community Model> Qwen3-Next-80B-A3B-Instruct by Qwen Model creator: Qwen Original model: Qwen3-Next-80B-A3B-Instruct MLX quantization: provided by LM Studio team using mlxlm LM Studio Model Page: https://lmstudio.ai/models/qwen/qwen3-next-80b 8-bit quantized version of Qwen3-Next-80B-A3B-Instruct using MLX, optimized for Apple Silicon.
Qwen3-Next-80B-A3B-Instruct-MLX-6bit
Qwen3-30B-A3B-MLX-4bit
This model lmstudio-community/Qwen3-30B-A3B-MLX-4bit was converted to MLX format from Qwen/Qwen3-30B-A3B using mlx-lm version 0.24.0.
Qwen3-Next-80B-A3B-Instruct-MLX-5bit
💫 Community Model> Qwen3-Next-80B-A3B-Instruct by Qwen Model creator: Qwen Original model: Qwen3-Next-80B-A3B-Instruct MLX quantization: provided by LM Studio team using mlxlm LM Studio Model Page: https://lmstudio.ai/models/qwen/qwen3-next-80b 5-bit quantized version of Qwen3-Next-80B-A3B-Instruct using MLX, optimized for Apple Silicon.
Qwen3-30B-A3B-MLX-8bit
This model lmstudio-community/Qwen3-30B-A3B-MLX-8bit was converted to MLX format from Qwen/Qwen3-30B-A3B using mlx-lm version 0.24.0.
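The mlx-lm conversion step described in these cards can be sketched as a single command. This is an illustrative invocation, not the exact command the LM Studio team ran; it assumes `mlx-lm` is installed and that you have the disk space and an Apple Silicon Mac for the weights (`--q-bits 8` for this 8-bit variant, `4` for the 4-bit one):

```shell
# Download Qwen/Qwen3-30B-A3B from the Hugging Face Hub, quantize it,
# and write the MLX-format weights to a local directory.
python -m mlx_lm.convert \
    --hf-path Qwen/Qwen3-30B-A3B \
    --mlx-path Qwen3-30B-A3B-MLX-8bit \
    -q --q-bits 8
```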
Devstral-Small-2505-MLX-4bit
💫 Community Model> Devstral-Small-2505 by mistralai Model creator: mistralai Original model: Devstral-Small-2505 MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of Devstral-Small-2505 using MLX, optimized for Apple Silicon.
gemma-3-1B-it-qat-GGUF
Model creator: google Original model: gemma-3-1b-it GGUF quantization: provided by Google Optimized with Quantization Aware Training (QAT) for improved 4-bit performance. Supports a context length of 32k tokens, with a max output of 8192 tokens. Gemma 3 models are well-suited for a variety of text-generation tasks, including question answering, summarization, and reasoning. 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
Meta-Llama-3.1-8B-Instruct-GGUF
Llama-3.3-70B-Instruct-GGUF
Mistral-Small-3.2-24B-Instruct-2506-GGUF
💫 Community Model> Mistral Small 3.2 24B Instruct 2506 by Mistralai Model creator: mistralai Original model: Mistral-Small-3.2-24B-Instruct-2506 GGUF quantization: provided by lmmy based on `llama.cpp` release b5726 Supports dozens of languages, including English, French, German, Spanish, Portuguese, Italian, Japanese, Korean, Russian, Chinese, Arabic, Persian, Indonesian, Malay, Nepali, Polish, Romanian, Serbian, Swedish, Turkish, Ukrainian, Vietnamese, Hindi, and Bengali. This model's tool calling performance may be degraded. Stay tuned for more updates from the team.
phi-4-GGUF
Qwen2.5-Coder-14B-Instruct-GGUF
Phi-4-mini-reasoning-GGUF
Seed-OSS-36B-Instruct-GGUF
💫 Community Model> Seed-OSS-36B-Instruct by ByteDance-Seed Model creator: ByteDance-Seed Original model: Seed-OSS-36B-Instruct GGUF quantization: provided by LM Studio team using `llama.cpp` release b6292
DeepSeek-Coder-V2-Lite-Instruct-GGUF
Meta-Llama-3-8B-Instruct-GGUF
GLM-4.7-Flash-MLX-4bit
Llama-3.2-1B-Instruct-GGUF
Qwen3-1.7B-GGUF
DeepSeek-R1-Distill-Qwen-7B-GGUF
Hermes-4-70B-GGUF
Model creator: NousResearch Original model: Hermes-4-70B GGUF quantization: provided by LM Studio team using `llama.cpp` release b6287
Codestral-22B-v0.1-GGUF
Qwen3-32B-GGUF
GLM-4.5-Air-MLX-8bit
Model creator: zai-org Original model: GLM-4.5-Air MLX quantization: provided by LM Studio team using mlxlm 8-bit quantized version of GLM-4.5-Air using MLX, optimized for Apple Silicon.
Qwen3-4B-GGUF
NVIDIA-Nemotron-3-Nano-30B-A3B-GGUF
Phi-4-reasoning-plus-GGUF
Qwen3-Coder-480B-A35B-Instruct-MLX-6bit
QwQ-32B-GGUF
Qwen3-Coder-480B-A35B-Instruct-MLX-4bit
Qwen3-235B-A22B-Instruct-2507-MLX-4bit
Qwen3-235B-A22B-Instruct-2507-MLX-6bit
Devstral-Small-2507-GGUF
Qwen3-235B-A22B-Instruct-2507-MLX-8bit
Qwen3-Coder-480B-A35B-Instruct-MLX-8bit
Mistral-Nemo-Instruct-2407-GGUF
SmolLM2-1.7B-Instruct-GGUF
Qwen3-0.6B-GGUF
DeepSeek-R1-Distill-Qwen-14B-GGUF
gpt-oss-20b-MLX-8bit
Model creator: openai Original model: gpt-oss-20b MLX quantization: provided by LM Studio team using mlxlm 8-bit quantized version of gpt-oss-20b using MLX, optimized for Apple Silicon.
Qwen2.5-VL-3B-Instruct-GGUF
Model creator: Qwen Original model: Qwen2.5-VL-3B-Instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5317 Not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but also highly capable of analyzing texts, charts, icons, graphics, and layouts within images. Can act as a visual agent that reasons and dynamically directs tools, enabling computer and phone use. Useful for generating structured outputs, including stable JSON.
DeepSeek-R1-Distill-Qwen-32B-GGUF
💫 Community Model> DeepSeek R1 Distill Qwen 32B by Deepseek-Ai Model creator: deepseek-ai Original model: DeepSeek-R1-Distill-Qwen-32B GGUF quantization: provided by bartowski based on `llama.cpp` release b4514
Llama-3.2-3B-Instruct-GGUF
gemma-2-9b-it-GGUF
Qwen3-30B-A3B-GGUF
gemma-3-270m-it-MLX-8bit
Model creator: google Original model: gemma-3-270m-it MLX quantization: provided by LM Studio team using mlxlm 8-bit quantized version of gemma-3-270m-it using MLX, optimized for Apple Silicon.
Llama-3.1-Tulu-3-8B-GGUF
EXAONE-3.5-2.4B-Instruct-GGUF
Qwen3-Coder-Next-GGUF
Yi-Coder-1.5B-GGUF
gemma-3-4B-it-qat-GGUF
EXAONE-3.5-7.8B-Instruct-GGUF
OpenCoder-1.5B-Instruct-GGUF
gpt-oss-safeguard-20b-MLX-MXFP4
💫 Community Model> gpt-oss-safeguard-20b by openai Model creator: openai Original model: gpt-oss-safeguard-20b MLX quantization: provided by LM Studio team using mlxlm MXFP4 quantized version of gpt-oss-safeguard-20b using MLX, optimized for Apple Silicon.
AMD-OLMo-1B-SFT-DPO-GGUF
NuExtract-v1.5-GGUF
Qwen2.5-0.5B-Instruct-GGUF
DeepSeek-R1-Distill-Llama-8B-GGUF
Qwen2.5-0.5B-Instruct-MLX-4bit
Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-0.5B-Instruct MLX quantizations: provided by bartowski from mlx-examples Long context: supports 32k input tokens and 8k-token generation. Large-scale training dataset encompassing a broad range of knowledge, with enhanced structured-data understanding and generation. Supports over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
gemma-3-1b-it-GGUF
Qwen2.5-Coder-32B-GGUF
DeepSeek-R1-Distill-Qwen-1.5B-GGUF
granite-4.0-h-small-MLX-8bit
💫 Community Model> granite-4.0-h-small by ibm-granite Model creator: ibm-granite Original model: granite-4.0-h-small MLX quantization: provided by LM Studio team using mlxlm 8-bit quantized version of granite-4.0-h-small using MLX, optimized for Apple Silicon.
deepseek-coder-6.7B-kexer-GGUF
ERNIE-4.5-21B-A3B-PT-GGUF
dolphin-2.8-mistral-7b-v02-GGUF
gemma-3-12B-it-qat-GGUF
DeepSeek-R1-Distill-Llama-70B-GGUF
gpt-oss-safeguard-20b-GGUF
💫 Community Model> gpt-oss-safeguard-20b by openai Model creator: openai Original model: gpt-oss-safeguard-20b GGUF quantization: provided by LM Studio team using `llama.cpp` release b6868
Qwen2.5-Coder-7B-Instruct-GGUF
gemma-2-27b-it-GGUF
GLM-4.5-Air-MLX-4bit
Model creator: zai-org Original model: GLM-4.5-Air MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of GLM-4.5-Air using MLX, optimized for Apple Silicon.
GLM-4.5-Air-GGUF
Model creator: zai-org Original model: GLM-4.5-Air GGUF quantization: provided by bartowski based on `llama.cpp` release b6085
gemma-3-27b-it-GGUF
Qwen3-235B-A22B-Instruct-2507-GGUF
Qwen3-VL-32B-Instruct-MLX-8bit
Model creator: Qwen Original model: Qwen3-VL-32B-Instruct MLX quantization: provided by LM Studio team using mlxvlm LM Studio model page: Qwen3-VL 8-bit quantized version of Qwen3-VL-32B-Instruct using MLX, optimized for Apple Silicon.
Qwen3-VL-32B-Instruct-GGUF
Qwen3-VL-2B-Instruct-GGUF
Model creator: Qwen Original model: Qwen3-VL-2B-Instruct GGUF quantization: provided by LM Studio team using `llama.cpp` release b6888
gemma-2-2b-it-GGUF
granite-3.1-8b-instruct-GGUF
💫 Community Model> granite-3.1-8b-instruct by ibm-granite Model creator: ibm-granite Original model: granite-3.1-8b-instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b4381 Intended for general instruction following, summarization, text classification and extraction, Q/A, RAG, coding, function calling, and long-context tasks. Supported languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may fine-tune Granite 3.1 models for languages beyond these 12.
granite-3.2-8b-instruct-GGUF
gpt-oss-safeguard-120b-MLX-MXFP4
💫 Community Model> gpt-oss-safeguard-120b by openai Model creator: openai Original model: gpt-oss-safeguard-120b MLX quantization: provided by LM Studio team using mlxlm MXFP4 quantized version of gpt-oss-safeguard-120b using MLX, optimized for Apple Silicon.
Qwen3-30B-A3B-Thinking-2507-GGUF
Llama-4-Scout-17B-16E-Instruct-GGUF
Qwen3-VL-30B-A3B-Thinking-GGUF
Qwen2.5-Coder-7B-Instruct-MLX-4bit
Devstral-Small-2505-GGUF
💫 Community Model> Devstral Small 2505 by mistralai Model creator: mistralai Original model: Devstral-Small-2505 GGUF quantization: provided by mattjcly based on `llama.cpp` release b5426 "Devstral excels at using tools to explore codebases, editing multiple files and power software engineering agents. The model debuts as the #1 open source model on SWE-bench. Despite its compact size of just 24 billion parameters, Devstral outperforms much larger models in agentic coding tasks. These tasks require exploring a codebase and making complex modifications to resolve issues."
Magistral-Small-2506-GGUF
Model creator: mistralai Original model: Magistral-Small-2506 GGUF quantization: provided by lmmy based on `llama.cpp` release b5606
Qwen3-Next-80B-A3B-Thinking-MLX-8bit
Qwen3-VL-8B-Thinking-GGUF
GLM-4.5-Air-MLX-6bit
Model creator: zai-org Original model: GLM-4.5-Air MLX quantization: provided by LM Studio team using mlxlm 6-bit quantized version of GLM-4.5-Air using MLX, optimized for Apple Silicon.
embeddinggemma-300m-qat-GGUF
Qwen2.5-VL-32B-Instruct-GGUF
💫 Community Model> Qwen2.5 VL 32B Instruct by Qwen Model creator: Qwen Original model: Qwen2.5-VL-32B-Instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5284 Not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but also highly capable of analyzing texts, charts, icons, graphics, and layouts within images. Can act as a visual agent that reasons and dynamically directs tools, enabling computer and phone use. Useful for generating structured outputs, including stable JSON.
granite-4.0-h-small-GGUF
gemma-3n-E2B-it-text-GGUF
Qwen3-0.6B-MLX-4bit
Model creator: Qwen Original model: Qwen3-0.6B MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of Qwen3-0.6B using MLX, optimized for Apple Silicon.
granite-4.0-h-tiny-MLX-8bit
Qwen2.5-7B-Instruct-GGUF
Mistral-Small-3.1-24B-Instruct-2503-GGUF
Qwen3-Coder-480B-A35B-Instruct-GGUF
💫 Community Model> Qwen3 Coder 480B A35B Instruct by Qwen Model creator: Qwen Original model: Qwen3-Coder-480B-A35B-Instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5962
Phi-4-mini-instruct-GGUF
Qwen3-VL-2B-Instruct-MLX-8bit
Model creator: Qwen Original model: Qwen3-VL-2B-Instruct MLX quantization: provided by LM Studio team using mlxvlm LM Studio model page: Qwen3-VL 8-bit quantized version of Qwen3-VL-2B-Instruct using MLX, optimized for Apple Silicon.
medgemma-27b-text-it-MLX-4bit
Model creator: google Original model: medgemma-27b-text-it MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of medgemma-27b-text-it using MLX, optimized for Apple Silicon.
Phi-3.1-mini-4k-instruct-GGUF
Qwen3-235B-A22B-GGUF
Qwen3-VL-32B-Thinking-GGUF
mathstral-7B-v0.1-GGUF
granite-4.0-h-small-MLX-6bit
💫 Community Model> granite-4.0-h-small by ibm-granite Model creator: ibm-granite Original model: granite-4.0-h-small MLX quantization: provided by LM Studio team using mlxlm 6-bit quantized version of granite-4.0-h-small using MLX, optimized for Apple Silicon.
Qwen2.5-7B-Instruct-1M-GGUF
Model creator: Qwen Original model: Qwen2.5-7B-Instruct-1M GGUF quantization: provided by bartowski based on `llama.cpp` release b4546 Significantly improved performance in handling long-context tasks while maintaining its capability in short tasks. Accuracy degradation may occur for sequences exceeding 262,144 tokens until improved support is added.
Qwen2.5-VL-72B-Instruct-GGUF
💫 Community Model> Qwen2.5 VL 72B Instruct by Qwen Model creator: Qwen Original model: Qwen2.5-VL-72B-Instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5317 Not only proficient in recognizing common objects such as flowers, birds, fish, and insects, but also highly capable of analyzing texts, charts, icons, graphics, and layouts within images. Can act as a visual agent that reasons and dynamically directs tools, enabling computer and phone use. Useful for generating structured outputs, including stable JSON.
Llama-3-Groq-8B-Tool-Use-GGUF
DeepSeek-Coder-V2-Instruct-0724-GGUF
Qwen3-VL-32B-Instruct-MLX-4bit
Model creator: Qwen Original model: Qwen3-VL-32B-Instruct MLX quantization: provided by LM Studio team using mlxvlm LM Studio model page: Qwen3-VL 4-bit quantized version of Qwen3-VL-32B-Instruct using MLX, optimized for Apple Silicon.
Llama-4-Scout-17B-16E-MLX-text-8bit
gemma-3-270m-it-MLX-bf16
Model creator: google
Original model: gemma-3-270m-it
MLX conversion: provided by LM Studio team using mlx-lm
Original bfloat16 version of gemma-3-270m-it using MLX, optimized for Apple Silicon.
Qwen2.5-14B-Instruct-1M-GGUF
💫 Community Model> Qwen2.5 14B Instruct 1M by Qwen
Model creator: Qwen
Original model: Qwen2.5-14B-Instruct-1M
GGUF quantization: provided by bartowski based on `llama.cpp` release b4546
Significantly improved performance on long-context tasks while maintaining its capability on short tasks. Accuracy degradation may occur for sequences exceeding 262,144 tokens until improved support is added.
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
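Million-token context windows like the one above are usually limited by KV-cache memory rather than model weights. A back-of-the-envelope estimate (the layer and head counts below are illustrative placeholders, not Qwen2.5-14B's published config):

```python
def kv_cache_bytes(context_len, layers, kv_heads, head_dim, bytes_per_elem=2):
    """Estimate KV-cache size: two tensors (K and V) per layer, each of
    shape context_len x kv_heads x head_dim, stored at bytes_per_elem."""
    return 2 * layers * context_len * kv_heads * head_dim * bytes_per_elem

# Hypothetical mid-size config: 48 layers, 8 KV heads of dim 128, fp16 cache.
gib = kv_cache_bytes(1_000_000, 48, 8, 128, 2) / 2**30
# A 1M-token cache at these settings needs on the order of 180 GiB,
# which is why long-context serving leans on grouped-query attention,
# cache quantization, or sparse-attention schemes.
```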
codegemma-7b-it-GGUF
Qwen3-0.6B-MLX-bf16
Qwen3-30B-A3B-Thinking-2507-MLX-4bit
KAT-Dev-MLX-8bit
Model creator: Kwaipilot
Original model: KAT-Dev
MLX quantization: provided by LM Studio team using mlx-lm
8-bit quantized version of KAT-Dev using MLX, optimized for Apple Silicon.
gpt-oss-safeguard-120b-GGUF
💫 Community Model> gpt-oss-safeguard-120b by openai
Model creator: openai
Original model: gpt-oss-safeguard-120b
GGUF quantization: provided by LM Studio team using `llama.cpp` release b6866
Qwen3-VL-235B-A22B-Thinking-GGUF
InternVL3_5-14B-GGUF
Qwen2-VL-7B-Instruct-GGUF
Model creator: Qwen
Original model: Qwen2-VL-7B-Instruct
GGUF quantization: provided by bartowski based on `llama.cpp` release b4327
Vision model capable of understanding images of various resolutions and ratios, with complex reasoning for agentic automation with vision.
Qwen3-Next-80B-A3B-Thinking-MLX-4bit
gemma-3-270m-it-qat-GGUF
💫 Community Model> gemma-3-270m-it-qat-q4_0 by google
Model creator: google
Original model: gemma-3-270m-it-qat-q4_0-unquantized
GGUF quantization: provided by LM Studio team using `llama.cpp` release b6153
granite-4.0-h-micro-GGUF
Qwen2.5-Coder-7B-Instruct-MLX-8bit
💫 Community Model> Qwen2.5 Coder 7B Instruct by Qwen
Compatibility: Apple Silicon Macs
Model creator: Qwen
Original model: Qwen2.5-Coder-7B-Instruct
MLX quantizations: provided by bartowski from mlx-examples
Long-context support up to 128K tokens with a YaRN rope scaling factor of 4.0. Trained on up to 5.5 trillion tokens including source code, text-code grounding, and synthetic data.
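RoPE scaling schemes like YaRN extend context by remapping positions onto the range the model saw in training. A simplified position-interpolation sketch (real YaRN additionally interpolates per frequency band with a ramp; the head dimension and factor here just mirror the card's numbers):

```python
def rope_frequencies(head_dim, base=10000.0):
    """Standard RoPE frequency ladder: one frequency per dimension pair."""
    return [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]

def scaled_angle(pos, freq, factor=4.0):
    # Position interpolation: squeeze positions by the scaling factor so a
    # 128K-token window maps onto the 32K range the model was trained on.
    return (pos / factor) * freq

freqs = rope_frequencies(128)
# With factor 4.0, the rotation angle at position 131072 equals the
# unscaled angle at position 32768.
```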
Qwen3-Next-80B-A3B-Thinking-GGUF
Qwen2.5-Coder-1.5B-Instruct-GGUF
GLM-4-9B-0414-GGUF
InternVL3_5-30B-A3B-GGUF
💫 Community Model> InternVL3_5 30B A3B by OpenGVLab
Model creator: OpenGVLab
Original model: InternVL3_5-30B-A3B
GGUF quantization: provided by bartowski based on `llama.cpp` release b6258
gemma-3-270m-it-GGUF
Model creator: google
Original model: gemma-3-270m-it
GGUF quantization: provided by LM Studio team using `llama.cpp` release b6153
EXAONE-4.0-32B-MLX-4bit
Qwen2.5-14B-Instruct-GGUF
pixtral-12b-GGUF
granite-vision-3.2-2b-GGUF
💫 Community Model> granite-vision-3.2-2b by ibm-granite
Model creator: ibm-granite
Original model: granite-vision-3.2-2b
GGUF quantization: provided by bartowski based on `llama.cpp` release b4778
Designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more. Use cases include analyzing tables and charts, performing OCR, and answering questions based on document content. It also has general image understanding.
Qwen2-VL-2B-Instruct-GGUF
GLM-4-32B-0414-GGUF
Qwen3-30B-A3B-Thinking-2507-MLX-8bit
Qwen3-VL-235B-A22B-Instruct-GGUF
Qwen3-VL-32B-Instruct-MLX-6bit
Model creator: Qwen
Original model: Qwen3-VL-32B-Instruct
MLX quantization: provided by LM Studio team using mlx-vlm
LM Studio model page: Qwen3-VL
6-bit quantized version of Qwen3-VL-32B-Instruct using MLX, optimized for Apple Silicon.
medgemma-4b-it-GGUF
gemma-3-270m-it-qat-MLX-4bit
💫 Community Model> gemma-3-270m-it-qat-q4_0 by google
Model creator: google
Original model: gemma-3-270m-it-qat-q4_0-unquantized
MLX quantization: provided by LM Studio team using mlx-lm
4-bit quantized version of gemma-3-270m-it-qat-q4_0-unquantized using MLX, optimized for Apple Silicon.
granite-4.0-h-small-MLX-4bit
CodeLlama-7B-KStack-GGUF
WizardLM-2-7B-GGUF
Qwen3-VL-4B-Thinking-GGUF
Meta-Llama-3.1-70B-Instruct-GGUF
c4ai-command-r-v01-GGUF
Qwen3-VL-32B-Thinking-MLX-8bit
Model creator: Qwen
Original model: Qwen3-VL-32B-Thinking
MLX quantization: provided by LM Studio team using mlx-vlm
8-bit quantized version of Qwen3-VL-32B-Thinking using MLX, optimized for Apple Silicon.
Qwen2.5-Coder-1.5B-Instruct-MLX-8bit
💫 Community Model> Qwen2.5 Coder 1.5B Instruct by Qwen
Compatibility: Apple Silicon Macs
Model creator: Qwen
Original model: Qwen2.5-Coder-1.5B-Instruct
MLX quantizations: provided by bartowski from mlx-examples
Trained on up to 5.5 trillion tokens including source code, text-code grounding, and synthetic data.
Qwen3-30B-A3B-Thinking-2507-MLX-6bit
Qwen3-235B-A22B-Thinking-2507-GGUF
💫 Community Model> Qwen3 235B A22B Thinking 2507 by Qwen
Model creator: Qwen
Original model: Qwen3-235B-A22B-Thinking-2507
GGUF quantization: provided by bartowski based on `llama.cpp` release b5962
olmOCR-7B-0225-preview-GGUF
Llama3-ChatQA-1.5-8B-GGUF
starcoder2-15b-instruct-v0.1-GGUF
Llama-4-Scout-17B-16E-MLX-text-4bit
Qwen2-VL-72B-Instruct-GGUF
Meta-Llama-3-70B-Instruct-GGUF
Qwen2-500M-Instruct-GGUF
Qwen3-235B-A22B-Thinking-2507-MLX-8bit
💫 Community Model> Qwen3-235B-A22B-Thinking-2507 by Qwen
Model creator: Qwen
Original model: Qwen3-235B-A22B-Thinking-2507
MLX quantization: provided by LM Studio team using mlx-lm
8-bit quantized version of Qwen3-235B-A22B-Thinking-2507 using MLX, optimized for Apple Silicon.
InternVL3_5-8B-GGUF
Yi-Coder-9B-Chat-GGUF
MiniCPM-o-2_6-GGUF
Model creator: openbmb
Original model: MiniCPM-o-2_6
GGUF quantization: provided by bartowski based on `llama.cpp` release b4585
Supports images of any aspect ratio up to 1.8 million pixels (e.g. 1344x1344). See more in their technical report [here](https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9).
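A pixel budget like the one above (1344 x 1344 = 1,806,336 pixels) is usually enforced by downscaling oversized inputs while preserving aspect ratio. A small illustrative helper (the function name and the exact cap derived from the 1344x1344 example are assumptions, not the model's published preprocessing code):

```python
import math

def fit_to_pixel_budget(width, height, max_pixels=1_806_336):
    """Downscale (preserving aspect ratio) until width * height fits
    within max_pixels; images already under budget pass through."""
    if width * height <= max_pixels:
        return width, height
    # Scaling both sides by sqrt(budget / area) lands exactly on the budget;
    # int() truncation keeps the result at or below it.
    scale = math.sqrt(max_pixels / (width * height))
    return int(width * scale), int(height * scale)
```

For example, a 4000x3000 photo (12 MP) would be shrunk to roughly 1551x1163 before being handed to the vision encoder.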
codegemma-7b-GGUF
codegemma-1.1-7b-it-GGUF
deepseek-coder-1.3B-kexer-GGUF
Qwen3-VL-2B-Thinking-GGUF
Model creator: Qwen
Original model: Qwen3-VL-2B-Thinking
GGUF quantization: provided by LM Studio team using `llama.cpp` release b6889
gemma-1.1-2b-it-GGUF
LFM2-350M-MLX-8bit
SmolLM3-3B-MLX-8bit
Model creator: HuggingFaceTB Original model: SmolLM3-3B MLX quantization: provided by LM Studio team using mlxlm 8-bit quantized version of SmolLM3-3B using MLX, optimized for Apple Silicon. 🙏 Special thanks to the Apple Machine Learning Research team for creating MLX.
Qwen2.5-Coder-0.5B-Instruct-GGUF
Qwen3-VL-32B-Instruct-MLX-5bit
Model creator: Qwen Original model: Qwen3-VL-32B-Instruct MLX quantization: provided by LM Studio team using mlxvlm LM Studio model page: Qwen3-VL 5-bit quantized version of Qwen3-VL-32B-Instruct using MLX, optimized for Apple Silicon.
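The bit widths in these quantized releases (4-bit, 5-bit, 6-bit, 8-bit) map roughly to on-disk and in-memory weight size. As a rule of thumb, weight memory is parameters × bits / 8 bytes; the sketch below applies that formula, deliberately ignoring quantization-group scale overhead and any layers left unquantized, so real files run somewhat larger:

```python
def approx_weight_gb(params_billion: float, bits: int) -> float:
    # parameters * bits/8 bytes, converted to gigabytes (1 GB = 1e9 bytes);
    # ignores per-group scales and unquantized layers, so this is a lower bound
    return params_billion * 1e9 * bits / 8 / 1e9

# A 32B-parameter model at 5-bit quantization, as in the card above.
print(approx_weight_gb(32, 5))  # → 20.0 (GB)
```

This explains why the same base model is published at several bit widths: each step down trades a little accuracy for a proportional drop in required memory.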
granite-embedding-107m-multilingual-GGUF
Yi-1.5-9B-Chat-GGUF
Qwen2.5-Coder-3B-Instruct-MLX-4bit
💫 Community Model> Qwen2.5 Coder 3B Instruct by Qwen Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-Coder-3B-Instruct MLX quantizations: provided by bartowski from mlx-examples Up to 5.5 trillion training tokens including source code, text-code grounding, and synthetic data
nomic-embed-code-GGUF
Model creator: nomic-ai Original model: nomic-embed-code GGUF quantization: provided by bartowski based on `llama.cpp` release b5284. A 7B-parameter embedding model designed for code retrieval. Based on Qwen2 and trained on multiple programming languages such as Python, Java, Ruby, PHP, JavaScript, and Go.
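Embedding-based code retrieval, as described above, comes down to embedding a query and candidate snippets into vectors and ranking the snippets by similarity. A minimal sketch of that ranking step, using made-up toy vectors rather than actual nomic-embed-code output:

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: a natural-language query and two code snippets.
query = [0.9, 0.1, 0.2]
snippets = {
    "parse_json.py": [0.85, 0.15, 0.25],  # close to the query direction
    "matrix_mul.go": [0.1, 0.9, 0.4],     # far from the query direction
}

# Rank snippets by similarity to the query, best match first.
ranked = sorted(snippets, key=lambda k: cosine(query, snippets[k]), reverse=True)
print(ranked[0])  # → parse_json.py
```

A real pipeline would obtain the vectors from the embedding model (e.g. via a llama.cpp embedding endpoint) instead of hard-coding them; the ranking logic is unchanged.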
Qwen2.5-14B-Instruct-MLX-4bit
MiniCPM-V-2_6-GGUF
DeepSeek-R1-GGUF
Model creator: deepseek-ai Original model: DeepSeek-R1 GGUF quantization: provided by bartowski based on `llama.cpp` release b4514. DeepSeek R1 represents the current SOTA for open reasoning models.
Qwen2.5-3B-Instruct-GGUF
Qwen3-VL-2B-Instruct-MLX-bf16
Model creator: Qwen Original model: Qwen3-VL-2B-Instruct MLX quantization: provided by LM Studio team using mlxvlm LM Studio model page: Qwen3-VL Original bfloat16 version of Qwen3-VL-2B-Instruct using MLX, optimized for Apple Silicon.
codegemma-2b-GGUF
gemma-3n-E2B-it-MLX-6bit
Model creator: google Original model: gemma-3n-E2B-it MLX quantization: provided by LM Studio team using mlxvlm 6-bit quantized version of gemma-3n-E2B-it using MLX, optimized for Apple Silicon.
Phi-3.1-mini-128k-instruct-GGUF
InternVL3_5-4B-GGUF
InternVL3_5-2B-GGUF
granite-4.0-micro-GGUF
granite-4.0-h-tiny-MLX-6bit
Qwen2.5-Coder-32B-Instruct-GGUF
gemma-3-270m-it-MLX-4bit
Model creator: google Original model: gemma-3-270m-it MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of gemma-3-270m-it using MLX, optimized for Apple Silicon.
LFM2-VL-1.6B-GGUF
GLM-4.5-GGUF
medgemma-4b-it-MLX-4bit
Model creator: google Original model: medgemma-4b-it MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of medgemma-4b-it using MLX, optimized for Apple Silicon.
Qwen2.5-32B-Instruct-GGUF
granite-embedding-278m-multilingual-GGUF
granite-4.0-h-tiny-MLX-4bit
stable-code-instruct-3b-GGUF
gemma-3n-E2B-it-MLX-4bit
Model creator: google Original model: gemma-3n-E2B-it MLX quantization: provided by LM Studio team using mlxvlm 4-bit quantized version of gemma-3n-E2B-it using MLX, optimized for Apple Silicon.
GLM-Z1-9B-0414-GGUF
KAT-Dev-GGUF
Model creator: Kwaipilot Original model: KAT-Dev GGUF quantization: provided by LM Studio team using `llama.cpp` release b6644
OREAL-DeepSeek-R1-Distill-Qwen-7B-GGUF
aya-expanse-8b-GGUF
Model creator: CohereForAI Original model: aya-expanse-8b GGUF quantization: provided by bartowski based on `llama.cpp` release b3930. Aya Expanse offers highly advanced multilingual capabilities. License: CC-BY-NC; use also requires adhering to C4AI's Acceptable Use Policy. Supports 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
aya-23-8B-GGUF
Starling-LM-7B-beta-GGUF
Qwen2.5-Math-7B-Instruct-GGUF
Meta-Llama-3-120B-Instruct-GGUF
SmolLM2-135M-Instruct-GGUF
Qwen2.5-Coder-3B-Instruct-GGUF
internlm2-math-plus-20b-GGUF
Qwen3-235B-A22B-Thinking-2507-MLX-4bit
💫 Community Model> Qwen3-235B-A22B-Thinking-2507 by Qwen Model creator: Qwen Original model: Qwen3-235B-A22B-Thinking-2507 MLX quantization: provided by LM Studio team using mlxlm 4-bit quantized version of Qwen3-235B-A22B-Thinking-2507 using MLX, optimized for Apple Silicon.
granite-4.0-h-small-MLX-5bit
c4ai-command-r-08-2024-GGUF
EXAONE-4.0.1-32B-GGUF
DeepSeek-V2.5-GGUF
Qwen2.5-7B-Instruct-MLX-4bit
Llama-3.1-Tulu-3-405B-GGUF
openchat-3.6-8b-20240522-GGUF
InternVL3_5-1B-GGUF
Qwen2.5-1.5B-Instruct-GGUF
Llama-3.1-Nemotron-70B-Instruct-HF-GGUF
EuroLLM-9B-Instruct-GGUF
💫 Community Model> EuroLLM 9B Instruct by Utter-Project Model creator: utter-project Original model: EuroLLM-9B-Instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b4240. Capable of generating text in all 24 official EU languages (Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, and Swedish), as well as Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
Yi-1.5-6B-Chat-GGUF
EXAONE-4.0-1.2B-GGUF
LFM2-VL-450M-GGUF
Model creator: LiquidAI Original model: LFM2-VL-450M GGUF quantization: provided by bartowski based on `llama.cpp` release b6214
UI-TARS-7B-DPO-GGUF
zeta-GGUF
Qwen3-Next-80B-A3B-Thinking-MLX-6bit
aya-23-35B-GGUF
Mistral-Large-Instruct-2411-GGUF
Qwen2.5-14B-Instruct-MLX-8bit
Llama3-ChatQA-1.5-70B-GGUF
medgemma-27b-text-it-GGUF
Qwen2.5-Coder-0.5B-Instruct-MLX-4bit
💫 Community Model> Qwen2.5 Coder 0.5B Instruct by Qwen Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-Coder-0.5B-Instruct MLX quantizations: provided by bartowski from mlx-examples Up to 5.5 trillion training tokens including source code, text-code grounding, and synthetic data
gemma-3n-E2B-it-MLX-8bit
Model creator: google Original model: gemma-3n-E2B-it MLX quantization: provided by LM Studio team using mlxvlm 8-bit quantized version of gemma-3n-E2B-it using MLX, optimized for Apple Silicon.
Qwen2.5-72B-Instruct-GGUF
Qwen2.5-Coder-3B-Instruct-MLX-8bit
💫 Community Model> Qwen2.5 Coder 3B Instruct by Qwen Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-Coder-3B-Instruct MLX quantizations: provided by bartowski from mlx-examples Up to 5.5 trillion training tokens including source code, text-code grounding, and synthetic data
Meta-Llama-3-8B-Instruct-BPE-fix-GGUF
c4ai-command-r-plus-08-2024-GGUF
gemma-3n-E2B-it-MLX-bf16
cogito-v2-preview-llama-70B-GGUF
💫 Community Model> cogito v2 preview llama 70B by Deepcogito. Model creator: deepcogito. Original model: cogito-v2-preview-llama-70B. GGUF quantization: provided by bartowski based on `llama.cpp` release b6014. 🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
Magistral-Small-2507-GGUF
💫 Community Model> Magistral-Small-2507 by mistralai. Model creator: mistralai. Original model: Magistral-Small-2507. GGUF quantization: provided by the LM Studio team using `llama.cpp` release b5980.
Qwen2.5-Coder-1.5B-Instruct-MLX-4bit
💫 Community Model> Qwen2.5 Coder 1.5B Instruct by Qwen. Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-Coder-1.5B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Trained on up to 5.5 trillion tokens, including source code, text-code grounding, and synthetic data.
cogito-v2-preview-llama-109B-MoE-GGUF
💫 Community Model> cogito v2 preview llama 109B MoE by Deepcogito. Model creator: deepcogito. Original model: cogito-v2-preview-llama-109B-MoE. GGUF quantization: provided by bartowski based on `llama.cpp` release b6014.
Qwen2.5-32B-Instruct-MLX-4bit
Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-32B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Long context: support for 32k-token context and 8k-token generation. Large-scale training dataset encompassing a huge range of knowledge, with enhanced structured-data understanding and generation. Supports over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
SmolLM3-3B-GGUF
Qwen3-235B-A22B-Thinking-2507-MLX-6bit
💫 Community Model> Qwen3-235B-A22B-Thinking-2507 by Qwen. Model creator: Qwen. Original model: Qwen3-235B-A22B-Thinking-2507. MLX quantization: provided by the LM Studio team using mlx-lm. 6-bit quantized version of Qwen3-235B-A22B-Thinking-2507 using MLX, optimized for Apple Silicon.
wavecoder-ultra-6.7b-GGUF
Qwen3-VL-32B-Thinking-MLX-4bit
Model creator: Qwen. Original model: Qwen3-VL-32B-Thinking. MLX quantization: provided by the LM Studio team using mlx-vlm. 4-bit quantized version of Qwen3-VL-32B-Thinking using MLX, optimized for Apple Silicon.
Apriel-Nemotron-15b-Thinker-GGUF
Qwen3-VL-2B-Thinking-MLX-8bit
DeepSeek-V3-0324-GGUF
Qwen1.5-32B-Chat-GGUF
Qwen3-Next-80B-A3B-Thinking-MLX-5bit
granite-3.0-3b-a800m-instruct-GGUF
GLM-Z1-Rumination-32B-0414-GGUF
Athene-V2-Chat-GGUF
OlympicCoder-32B-GGUF
Qwen2.5-7B-Instruct-MLX-8bit
Hunyuan-A13B-Instruct-GGUF
granite-4.0-h-tiny-MLX-5bit
Phi-4-reasoning-MLX-4bit
This model lmstudio-community/Phi-4-reasoning-MLX-4bit was converted to MLX format from microsoft/Phi-4-reasoning using mlx-lm version 0.24.0.
Yi 1.5 34B Chat GGUF
Model creator: 01-ai. Original model: Yi-1.5-34B-Chat. GGUF quantization: provided by bartowski based on `llama.cpp` release b2854. Model Summary: Yi-1.5 is an upgraded version of Yi, continuously pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. It should perform well on a wide range of tasks, such as coding, math, reasoning, and instruction following, while maintaining excellent language understanding, commonsense reasoning, and reading comprehension. 🙏 Special thanks to Kalomaze for his dataset (linked here), used to calculate the imatrix for the IQ1_M and IQ2_XS quants, which makes them usable even at their tiny size!
Phi-4-reasoning-GGUF
DeepSeek-V2.5-1210-GGUF
SmolLM2-360M-Instruct-GGUF
internlm2-math-plus-mixtral8x22b-GGUF
granite-3.3-8b-instruct-GGUF
💫 Community Model> granite 3.3 8b instruct by Ibm-Granite. Model creator: ibm-granite. Original model: granite-3.3-8b-instruct. GGUF quantization: provided by bartowski based on `llama.cpp` release b5147. Fine-tuned for improved reasoning and instruction-following capabilities. Supports English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Capable of thinking, classification, extraction, coding (including FIM), function calling, and long-context tasks such as summarization, RAG, and long-document Q/A.
aya-expanse-32b-GGUF
Model creator: CohereForAI. Original model: aya-expanse-32b. GGUF quantization: provided by bartowski based on `llama.cpp` release b3930. Aya Expanse offers highly advanced multilingual capabilities. License: CC-BY-NC; also requires adhering to C4AI's Acceptable Use Policy. Supports 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
KAT-Dev-MLX-4bit
Model creator: Kwaipilot. Original model: KAT-Dev. MLX quantization: provided by the LM Studio team using mlx-lm. 4-bit quantized version of KAT-Dev using MLX, optimized for Apple Silicon.
Athene-70B-GGUF
LFM2-350M-MLX-bf16
Qwen2.5-32B-Instruct-MLX-8bit
OpenReasoning-Nemotron-7B-GGUF
💫 Community Model> OpenReasoning Nemotron 7B by Nvidia. Model creator: nvidia. Original model: OpenReasoning-Nemotron-7B. GGUF quantization: provided by bartowski based on `llama.cpp` release b5934.
AFM-4.5B-GGUF
Model creator: arcee-ai. Original model: AFM-4.5B. GGUF quantization: provided by bartowski based on `llama.cpp` release b6014. Recommended sampling parameters: temperature 0.5, top_k 50, top_p 0.95, repeat_penalty 1.1, min_p 0.05.
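As a minimal sketch, the recommended AFM-4.5B sampling settings above can be collected into a single config mapping. The key names here follow common llama.cpp-style conventions and are an assumption; adapt them to whatever inference client or API you actually use:

```python
# Recommended AFM-4.5B sampler settings from the model card above.
# Key names are llama.cpp-style (an assumption); rename to match
# your client's API before passing them to a request.
afm_sampler = {
    "temperature": 0.5,     # lower temperature for more focused output
    "top_k": 50,            # sample from the 50 most likely tokens
    "top_p": 0.95,          # nucleus sampling cutoff
    "repeat_penalty": 1.1,  # discourage verbatim repetition
    "min_p": 0.05,          # drop tokens below 5% of the top probability
}

print(afm_sampler["temperature"])
```

These values are starting points from the card, not hard requirements; tightening `temperature` or `top_p` further is a common tweak for more deterministic output.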
granite-embedding-30m-english-GGUF
💫 Community Model> granite embedding 30m english by Ibm-Granite. Model creator: ibm-granite. Original model: granite-embedding-30m-english. GGUF quantization: provided by bartowski based on `llama.cpp` release b4381. A 30-million-parameter model for extremely fast performance.
Qwen2-Math-72B-Instruct-GGUF
Qwen2.5-Coder-3B-GGUF
DeepCoder-14B-Preview-GGUF
KAT-Dev-MLX-6bit
Model creator: Kwaipilot. Original model: KAT-Dev. MLX quantization: provided by the LM Studio team using mlx-lm. 6-bit quantized version of KAT-Dev using MLX, optimized for Apple Silicon.
Mistral-Small-24B-Instruct-2501-GGUF
💫 Community Model> Mistral Small 24B Instruct 2501 by Mistralai. Model creator: mistralai. Original model: Mistral-Small-24B-Instruct-2501. GGUF quantization: provided by bartowski based on `llama.cpp` release b4585. Multilingual: supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
Llama-3-Groq-70B-Tool-Use-GGUF
openhands-lm-7b-v0.1-GGUF
Falcon3-10B-Instruct-GGUF
Qwen2.5-Coder-0.5B-Instruct-MLX-8bit
💫 Community Model> Qwen2.5 Coder 0.5B Instruct by Qwen. Compatibility: Apple Silicon Macs. Model creator: Qwen. Original model: Qwen2.5-Coder-0.5B-Instruct. MLX quantizations: provided by bartowski from mlx-examples. Trained on up to 5.5 trillion tokens, including source code, text-code grounding, and synthetic data.
Qwen3-VL-32B-Thinking-MLX-6bit
Model creator: Qwen Original model: Qwen3-VL-32B-Thinking MLX quantization: provided by LM Studio team using mlx-vlm 6-bit quantized version of Qwen3-VL-32B-Thinking using MLX, optimized for Apple Silicon.
gemma-7b-aps-it-GGUF
internlm2-math-plus-7b-GGUF
Qwen3-VL-8B-Thinking-MLX-4bit
Model creator: Qwen Original model: Qwen3-VL-8B-Thinking MLX quantization: provided by LM Studio team using mlx-vlm LM Studio model page: Qwen3-VL 4-bit quantized version of Qwen3-VL-8B-Thinking using MLX, optimized for Apple Silicon.
Meta-Llama-3-70B-Instruct-BPE-fix-GGUF
granite-3.1-1b-a400m-instruct-GGUF
Mistral-Large-Instruct-2407-GGUF
GLM-Z1-32B-0414-GGUF
Qwen3-VL-30B-A3B-Thinking-MLX-8bit
Hermes-4-405B-GGUF
Model creator: NousResearch Original model: Hermes-4-405B GGUF quantization: provided by LM Studio team using `llama.cpp` release b6292
DiscoPOP-zephyr-7b-gemma-GGUF
Llama-3.1-Nemotron-Nano-4B-v1.1-GGUF
💫 Community Model> Llama 3.1 Nemotron Nano 4B v1.1 by NVIDIA Model creator: nvidia Original model: Llama-3.1-Nemotron-Nano-4B-v1.1 GGUF quantization: provided by bartowski based on `llama.cpp` release b5432 Created from Llama 3.1 8B via pruning and distillation Tuned for reasoning, human chat preferences, and tasks such as RAG and tool calling.
ERNIE-4.5-0.3B-GGUF
Model creator: baidu Original model: ERNIE-4.5-0.3B-PT GGUF quantization: provided by bartowski based on `llama.cpp` release b5780 Optimized for general-purpose language understanding and generation
olmOCR-2-7B-1025-GGUF
Qwen2.5-Coder-14B-GGUF
Llama-3.1-8B-UltraLong-1M-Instruct-GGUF
UI-TARS-72B-DPO-GGUF
Qwen2-Math-1.5B-Instruct-GGUF
c4ai-command-a-03-2025-GGUF
💫 Community Model> C4AI Command A 03-2025 by CohereForAI Model creator: CohereForAI Original model: c4ai-command-a-03-2025 GGUF quantization: provided by bartowski based on `llama.cpp` release b4877 License: CC-BY-NC; also requires adherence to C4AI's Acceptable Use Policy The model has been trained on 23 languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian. Trained for conversation, RAG, tool use, and coding.
Phi-3.5-MoE-instruct-GGUF
Mistral-Small-4-119B-2603-GGUF
Hermes-4-405B-MLX-4bit
Model creator: NousResearch Original model: Hermes-4-405B MLX quantization: provided by LM Studio team using mlx-lm 4-bit quantized version of Hermes-4-405B using MLX, optimized for Apple Silicon.
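As a rough sanity check (an illustration, not a figure from the model card): the weight-only footprint of a b-bit quantization is about params × b / 8 bytes, so a 405B-parameter model at 4 bits needs on the order of 200 GB for weights alone, before per-group quantization scales, the KV cache, and runtime overhead.

```python
def quantized_weight_gb(n_params: float, bits: int) -> float:
    """Rough weight-only footprint in decimal GB: n_params * bits / 8 bytes.

    Ignores per-group quantization scales/zeros, the KV cache, and
    activation memory, so treat the result as a lower bound.
    """
    return n_params * bits / 8 / 1e9

# 405B parameters at 4 bits per weight:
print(quantized_weight_gb(405e9, 4))  # 202.5
```

The same estimate explains why the 8-bit variant of this model roughly doubles the download size.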
OpenThinker3-7B-GGUF
Qwen2.5-0.5B-Instruct-MLX-8bit
Compatibility: Apple Silicon Macs Model creator: Qwen Original model: Qwen2.5-0.5B-Instruct MLX quantizations: provided by bartowski from mlx-examples Long context: supports a 32k-token context and 8k-token generation Large-scale training dataset: encompasses a huge range of knowledge. Enhanced structured data understanding and generation. Over 29 languages including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
DeepSWE-Preview-GGUF
💫 Community Model> DeepSWE Preview by Agentica Model creator: agentica-org Original model: DeepSWE-Preview GGUF quantization: provided by bartowski based on `llama.cpp` release b5760 Trained on top of Qwen3-32B with thinking mode enabled Coding agent trained with only reinforcement learning (RL) to excel at software engineering (SWE) tasks Achieves an impressive 59.0% on SWE-Bench-Verified, currently #1 in the open-weights category
Qwen3-VL-8B-Thinking-MLX-8bit
Model creator: Qwen Original model: Qwen3-VL-8B-Thinking MLX quantization: provided by LM Studio team using mlx-vlm LM Studio model page: Qwen3-VL 8-bit quantized version of Qwen3-VL-8B-Thinking using MLX, optimized for Apple Silicon.
CodeLlama-7B-KStack-clean-GGUF
Dhanishtha-2.0-preview-GGUF
Qwen2.5-3B-Instruct-MLX-4bit
EXAONE-Deep-7.8B-GGUF
reka-flash-3.1-GGUF
DeepSeek-R1-0528-GGUF
💫 Community Model> DeepSeek R1 0528 by DeepSeek Model creator: deepseek-ai Original model: DeepSeek-R1-0528 GGUF quantization: provided by bartowski based on `llama.cpp` release b5524
EXAONE-3.5-32B-Instruct-GGUF
Qwen2-Math-7B-Instruct-GGUF
r1-1776-distill-llama-70b-GGUF
Llama-3.1-8B-UltraLong-4M-Instruct-GGUF
EXAONE-4.0-32B-MLX-8bit
Qwen2.5-Math-1.5B-Instruct-GGUF
OpenReasoning-Nemotron-14B-GGUF
💫 Community Model> OpenReasoning Nemotron 14B by NVIDIA Model creator: nvidia Original model: OpenReasoning-Nemotron-14B GGUF quantization: provided by bartowski based on `llama.cpp` release b5934
Skywork-R1V3-38B-GGUF
openhands-lm-32b-v0.1-GGUF
Llama-3.1-Tulu-3-70B-GGUF
Qwen3-VL-30B-A3B-Thinking-MLX-4bit
Qwen3-VL-32B-Thinking-MLX-5bit
Model creator: Qwen Original model: Qwen3-VL-32B-Thinking MLX quantization: provided by LM Studio team using mlx-vlm 5-bit quantized version of Qwen3-VL-32B-Thinking using MLX, optimized for Apple Silicon.
Qwen2.5-Coder-0.5B-GGUF
cogito-v1-preview-qwen-32B-GGUF
AceReason-Nemotron-1.1-7B-GGUF
💫 Community Model> AceReason Nemotron 1.1 7B by NVIDIA Model creator: nvidia Original model: AceReason-Nemotron-1.1-7B GGUF quantization: provided by bartowski based on `llama.cpp` release b5674 Thanks to its stronger SFT backbone, AceReason-Nemotron-1.1-7B significantly outperforms its predecessor and sets record-high performance among Qwen2.5-7B-based reasoning models on challenging math and code reasoning benchmarks. Technical report: https://arxiv.org/abs/2506.13284
EXAONE-4.0-32B-GGUF
Jedi-7B-1080p-GGUF
Model creator: xlangai Original model: Jedi-7B-1080p GGUF quantization: provided by bartowski based on `llama.cpp` release b5524 Trained from Qwen2.5-VL on 4 million synthesized computer-use examples
granite-3.3-2b-instruct-GGUF
💫 Community Model> Granite 3.3 2B Instruct by IBM Granite Model creator: ibm-granite Original model: granite-3.3-2b-instruct GGUF quantization: provided by bartowski based on `llama.cpp` release b5147 Fine-tuned for improved reasoning and instruction-following capabilities Supports English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese Capable of thinking, classification, extraction, coding (including FIM), function calling, and long-context tasks such as summarization, RAG, and long-document Q&A
Qwen2.5-Math-72B-Instruct-GGUF
Falcon3-7B-Instruct-GGUF
SmolLM3-3B-MLX-4bit
Model creator: HuggingFaceTB Original model: SmolLM3-3B MLX quantization: provided by LM Studio team using mlx-lm 4-bit quantized version of SmolLM3-3B using MLX, optimized for Apple Silicon.
LFM2-700M-MLX-8bit
Yi-Coder-1.5B-Chat-GGUF
granite-embedding-125m-english-GGUF
MindLink-32B-0801-GGUF
Model creator: Skywork Original model: MindLink-32B-0801 GGUF quantization: provided by bartowski based on `llama.cpp` release b6014
Devstral-Small-2505-MLX-6bit
💫 Community Model> Devstral-Small-2505 by mistralai Model creator: mistralai Original model: Devstral-Small-2505 MLX quantization: provided by LM Studio team using mlxlm 6-bit quantized version of Devstral-Small-2505 using MLX, optimized for Apple Silicon.
ZR1-1.5B-GGUF
EXAONE-4.0-1.2B-MLX-8bit
Hermes-4-405B-MLX-8bit
Model creator: NousResearch Original model: Hermes-4-405B MLX quantization: provided by LM Studio team using mlxlm 8-bit quantized version of Hermes-4-405B using MLX, optimized for Apple Silicon.
KAT-Dev-MLX-5bit
Model creator: Kwaipilot Original model: KAT-Dev MLX quantization: provided by LM Studio team using mlxlm 5-bit quantized version of KAT-Dev using MLX, optimized for Apple Silicon.
internlm2_5-20b-chat-GGUF
cogito-v1-preview-llama-8B-GGUF
EXAONE-4.0-1.2B-MLX-4bit
OlympicCoder-7B-GGUF
Model creator: open-r1 Original model: OlympicCoder-7B GGUF quantization: provided by bartowski based on `llama.cpp` release b4867
Qwen3-VL-30B-A3B-Thinking-MLX-6bit
OpenCoder-8B-Instruct-GGUF
Qwen2.5-1.5B-Instruct-MLX-8bit
OpenReasoning-Nemotron-32B-GGUF
💫 Community Model> OpenReasoning Nemotron 32B by Nvidia Model creator: nvidia Original model: OpenReasoning-Nemotron-32B GGUF quantization: provided by bartowski based on `llama.cpp` release b5934
AM-Thinking-v1-GGUF
LFM2-700M-MLX-bf16
internlm2_5-1_8b-chat-GGUF
Intern-S1-GGUF
Model creator: internlm Original model: Intern-S1 GGUF quantization: provided by bartowski based on `llama.cpp` release b6139
AceReason-Nemotron-7B-GGUF
💫 Community Model> AceReason Nemotron 7B by Nvidia Model creator: nvidia Original model: AceReason-Nemotron-7B GGUF quantization: provided by bartowski based on `llama.cpp` release b5466 A math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-7B. Technical report available here: https://arxiv.org/abs/2505.16400
SmolLM3-3B-MLX-6bit
Model creator: HuggingFaceTB Original model: SmolLM3-3B MLX quantization: provided by LM Studio team using mlxlm 6-bit quantized version of SmolLM3-3B using MLX, optimized for Apple Silicon.
Falcon3-1B-Instruct-GGUF
granite-3.1-3b-a800m-instruct-GGUF
Falcon3-3B-Instruct-GGUF
Qwen2.5-3B-Instruct-MLX-8bit
cogito-v1-preview-llama-3B-GGUF
Hermes-4-405B-MLX-5bit
Model creator: NousResearch Original model: Hermes-4-405B MLX quantization: provided by LM Studio team using mlxlm 5-bit quantized version of Hermes-4-405B using MLX, optimized for Apple Silicon.
Qwen3-VL-30B-A3B-Thinking-MLX-5bit
granite-3.1-2b-instruct-GGUF
Llama-3_1-Nemotron-Ultra-253B-v1-GGUF
cogito-v1-preview-qwen-14B-GGUF
DeepCoder-1.5B-Preview-GGUF
granite-3.0-2b-instruct-GGUF
OpenCodeReasoning-Nemotron-32B-GGUF
Hyperion-3.0-Mistral-7B-DPO-GGUF
UI-TARS-2B-SFT-GGUF
Llama-3_1-Nemotron-51B-Instruct-GGUF
granite-3.0-8b-instruct-GGUF
Hermes-4-405B-MLX-6bit
Model creator: NousResearch Original model: Hermes-4-405B MLX quantization: provided by LM Studio team using mlxlm 6-bit quantized version of Hermes-4-405B using MLX, optimized for Apple Silicon.
Llama-3.1-8B-UltraLong-2M-Instruct-GGUF
txgemma-9b-chat-GGUF
Skywork-OR1-7B-Preview-GGUF
OREAL-7B-GGUF
Mistral-Small-Instruct-2409-GGUF
Skywork-SWE-32B-GGUF
Qwen2.5-1.5B-Instruct-MLX-4bit
Magistral-Small-2507-MLX-8bit
💫 Community Model> Magistral-Small-2507 by mistralai Model creator: mistralai Original model: Magistral-Small-2507 MLX quantization: provided by LM Studio team using mlxlm 8-bit quantized version of Magistral-Small-2507 using MLX, optimized for Apple Silicon.