cerebras
btlm-3b-8k-base
GLM-4.5-Air-REAP-82B-A12B
# 🌳 REAP 🌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing GLM-4.5-Air-REAP-82B-A12B, a memory-efficient compressed variant of GLM-4.5-Air that maintains near-identical performance while being 25% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 106B model
- 25% Memory Reduction: Compressed from 106B to 82B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionality, including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM; no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## Model Overview

GLM-4.5-Air-REAP-82B-A12B has the following specifications:

- Base Model: GLM-4.5-Air
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 25% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 82B total, 12B activated per token
- Number of Layers: 46
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 96 (uniformly pruned from 128)
- Number of Activated Experts: 8 per token
- Context Length: 131,072 tokens
- License: MIT

🚩 This checkpoint maintains almost identical performance while being 25% lighter. For more details on the evaluation setup, refer to the REAP arXiv preprint.
## Deployment

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you encounter insufficient memory when running this model, you may need to set a lower value for the `--max-num-seqs` flag (e.g., set it to 64).

## Compression Details

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of GLM-4.5-Air, with a 25% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: How frequently and strongly the router activates each expert
- Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.

Advantages of this approach:

- One-Shot Compression: No fine-tuning required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `zai-org/GLM-4.5-Air` and distributed under the MIT license. If you use this checkpoint, please cite the REAP paper:
Qwen3-Coder-REAP-25B-A3B
# 🌳 REAP 🌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing Qwen3-Coder-REAP-25B-A3B, a memory-efficient compressed variant of Qwen3-Coder-30B-A3B-Instruct that maintains near-identical performance while being 20% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 30B model
- 20% Memory Reduction: Compressed from 30B to 25B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionality, including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM; no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## Model Overview

Qwen3-Coder-REAP-25B-A3B has the following specifications:

- Base Model: Qwen3-Coder-30B-A3B-Instruct
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 20% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 25B total, 3B activated per token
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 103 (uniformly pruned from 128)
- Number of Activated Experts: 8 per token
- Context Length: 262,144 tokens natively (extendable to 1M with YaRN)
- License: Apache 2.0

| Benchmark | Qwen3-Coder-30B-A3B-Instruct | Qwen3-Coder-REAP-25B-A3B |
| :--- | :---: | :---: |
| Compression | – | 20% |
| HumanEval | 92.1 | 94.5 |
| HumanEval+ | 87.8 | 89.0 |
| MBPP | 87.6 | 87.3 |
| MBPP+ | 73.5 | 72.8 |
| LiveCodeBench (25.01 - 25.05) | 35.2 | 35.2 |
| BFCL-v3 (Non-Live) | 83.9 | 82.2 |
| BFCL-v3 (Live) | 76.2 | 74.0 |
| BFCL-v3 (Multi-Turn) | 29.6 | 30.5 |
| BFCL-v3 (Overall) | 63.2 | 62.2 |
| τ²-bench (Airline) | 39.3 | 40.7 |
| τ²-bench (Retail) | 62.6 | 62.0 |
| τ²-bench (Telecom) | 33.6 | 32.2 |

🚩 This checkpoint maintains almost identical performance while being 20% lighter. For more details on the evaluation setup, refer to the REAP arXiv preprint.

## Deployment

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you encounter insufficient memory when running this model, you may need to set a lower value for the `--max-num-seqs` flag (e.g., set it to 64).

## Compression Details

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of Qwen3-Coder-30B-A3B-Instruct, with a 20% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: How frequently and strongly the router activates each expert
- Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.
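The saliency criterion described above can be sketched numerically. The snippet below is an illustrative reconstruction, not the reference implementation: per the card's description, each expert's score combines the router's gate value with the norm of the expert's output, averaged over calibration tokens, and the lowest-scoring experts are removed. Function names and the exact aggregation are assumptions for illustration.

```python
import numpy as np

def reap_saliency(gates, expert_out_norms):
    """Illustrative REAP-style saliency: router gate value times the
    expert's output-activation norm, averaged over calibration tokens.

    gates:            (tokens, experts) router gate values (0 where an
                      expert is not selected for that token)
    expert_out_norms: (tokens, experts) L2 norms of each expert's output
    """
    return (gates * expert_out_norms).mean(axis=0)

def prune_experts(saliency, prune_ratio):
    """Keep the highest-saliency experts; return their indices, sorted."""
    n_keep = int(round(len(saliency) * (1 - prune_ratio)))
    keep = np.argsort(saliency)[-n_keep:]
    return np.sort(keep)

# Toy example: 8 experts, 4 calibration tokens, prune 25% (8 -> 6 experts).
rng = np.random.default_rng(0)
gates = rng.random((4, 8))
norms = rng.random((4, 8))
kept = prune_experts(reap_saliency(gates, norms), prune_ratio=0.25)
print(len(kept))  # 6 experts remain
```

In a real checkpoint the surviving experts' weights are copied into a smaller MoE block and the router's output dimension is reduced to match, which is what preserves the router's independent control over the remaining experts.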
Advantages of this approach:

- One-Shot Compression: No fine-tuning required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `Qwen3-Coder-30B-A3B-Instruct` and distributed under the Apache 2.0 License. If you use this checkpoint, please cite the REAP paper:
Cerebras-GPT-111M
GLM-4.5-Air-REAP-82B-A12B-FP8
# 🌳 REAP 🌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing GLM-4.5-Air-REAP-82B-A12B-FP8, a memory-efficient compressed variant of GLM-4.5-Air-FP8 that maintains near-identical performance while being 25% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 106B model
- 25% Memory Reduction: Compressed from 106B to 82B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionality, including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM; no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

Note: a BF16 version for more accurate downstream low-bit quantization is also available on HF.

## Model Overview

GLM-4.5-Air-REAP-82B-A12B-FP8 has the following specifications:

- Base Model: GLM-4.5-Air-FP8
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 25% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 82B total, 12B activated per token
- Number of Layers: 46
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 96 (uniformly pruned from 128)
- Number of Activated Experts: 8 per token
- Context Length: 131,072 tokens
- License: MIT

Evaluation is TBD for the FP8 model; evaluation results are available for the BF16 variant. For more details on the evaluation setup, refer to the REAP arXiv preprint.
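As a rough sanity check on the FP8-versus-BF16 trade-off mentioned above, weight storage scales with parameter count times bytes per parameter. The figures below are back-of-the-envelope estimates for weights only (KV cache and activation memory are ignored, and 1 GB is taken as 1e9 bytes):

```python
def weight_gb(params_billions, bytes_per_param):
    """Approximate weight-only memory footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * bytes_per_param

# GLM-4.5-Air (106B) vs. the REAP-pruned 82B variant, in FP8 (1 byte/param).
full_fp8 = weight_gb(106, 1)     # ~106 GB
pruned_fp8 = weight_gb(82, 1)    # ~82 GB

# The BF16 variant (2 bytes/param) roughly doubles the footprint.
pruned_bf16 = weight_gb(82, 2)   # ~164 GB

print(pruned_fp8 / full_fp8)     # ~0.77, i.e. roughly 25% lighter
```

These are estimates, not measured numbers; actual serving memory also depends on context length, `--max-num-seqs`, and runtime overheads.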
## Deployment

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you encounter insufficient memory when running this model, you may need to set a lower value for the `--max-num-seqs` flag (e.g., set it to 64).

## Compression Details

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of GLM-4.5-Air-FP8, with a 25% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: How frequently and strongly the router activates each expert
- Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.

Advantages of this approach:

- One-Shot Compression: No fine-tuning required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `zai-org/GLM-4.5-Air-FP8` and distributed under the MIT license. If you use this checkpoint, please cite the REAP paper:
Cerebras-GPT-590M
Cerebras-GPT-6.7B
Cerebras-GPT-1.3B
GLM 4.6 REAP 218B A32B FP8
# 🌳 REAP 🌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing GLM-4.6-REAP-218B-A32B-FP8, a memory-efficient compressed variant of GLM-4.6-FP8 that maintains near-identical performance while being 40% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 355B model
- 40% Memory Reduction: Compressed from 355B to 218B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionality, including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM; no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

For downstream low-bit quantization, we suggest using the BF16 variant.

## Model Overview

GLM-4.6-REAP-218B-A32B-FP8 has the following specifications:

- Base Model: GLM-4.6-FP8
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 40% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 218B total, 32B activated per token
- Number of Layers: 92
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 96 (uniformly pruned from 160)
- Number of Activated Experts: 8 per token
- Context Length: 202,752 tokens
- License: MIT

| Benchmark | GLM-4.6-FP8 | GLM-4.6-REAP-268B-A32B-FP8 | GLM-4.6-REAP-252B-A32B-FP8 | GLM-4.6-REAP-218B-A32B-FP8 |
| :--- | :---: | :---: | :---: | :---: |

🚩 This checkpoint maintains almost identical performance while being 40% lighter. For more details on the evaluation setup, refer to the REAP arXiv preprint.

## Deployment

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you encounter insufficient memory when running this model, you may need to set a lower value for the `--max-num-seqs` flag (e.g., set it to 64).

## Compression Details

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of GLM-4.6-FP8, with a 40% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: How frequently and strongly the router activates each expert
- Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.

Advantages of this approach:

- One-Shot Compression: No fine-tuning required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `zai-org/GLM-4.6-FP8` and distributed under the MIT license.
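For a checkpoint of this size, multi-GPU tensor parallelism is typically required. The launch command below is an illustrative sketch: the repository id, GPU count, and the lowered `--max-num-seqs` value are assumptions, not recommendations from this card.

```shell
# Illustrative vLLM (v0.11.0+) launch for the 218B FP8 checkpoint.
# The repo id "cerebras/GLM-4.6-REAP-218B-A32B-FP8" and 8-way tensor
# parallelism are assumed for this example; adjust to your hardware.
vllm serve cerebras/GLM-4.6-REAP-218B-A32B-FP8 \
  --tensor-parallel-size 8 \
  --max-num-seqs 64 \
  --max-model-len 202752
```

Lowering `--max-num-seqs` (and, if needed, `--max-model-len`) reduces peak memory at the cost of batch throughput.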
If you use this checkpoint, please cite the REAP paper:
Cerebras-GPT-256M
Cerebras-GPT-2.7B
Cerebras-GPT-13B
GLM 4.6 REAP 218B A32B
# 🌳 REAP 🌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing GLM-4.6-REAP-218B-A32B, a memory-efficient compressed variant of GLM-4.6-FP8 that maintains near-identical performance while being 40% lighter.

Note: this is a BF16 version for more accurate downstream low-bit quantization. An FP8 version is also available on HF.

This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 355B model
- 40% Memory Reduction: Compressed from 355B to 218B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionality, including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM; no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## Model Overview

GLM-4.6-REAP-218B-A32B has the following specifications:

- Base Model: GLM-4.6-FP8
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 40% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 218B total, 32B activated per token
- Number of Layers: 92
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 96 (uniformly pruned from 160)
- Number of Activated Experts: 8 per token
- Context Length: 202,752 tokens
- License: MIT

Evaluation is TBD for the BF16 model; evaluation results are available for the FP8 variant. For more details on the evaluation setup, refer to the REAP arXiv preprint.
## Deployment

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you encounter insufficient memory when running this model, you may need to set a lower value for the `--max-num-seqs` flag (e.g., set it to 64).

## Compression Details

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of GLM-4.6-FP8, with a 40% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: How frequently and strongly the router activates each expert
- Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.

Advantages of this approach:

- One-Shot Compression: No fine-tuning required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `zai-org/GLM-4.6-FP8` and distributed under the MIT license. If you use this checkpoint, please cite the REAP paper:
Qwen3 Coder REAP 246B A35B FP8
# 🌳 REAP 🌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing Qwen3-Coder-REAP-246B-A35B-FP8, a memory-efficient compressed variant of Qwen3-Coder-480B-A35B-Instruct-FP8 that maintains near-identical performance while being 50% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 480B model
- 50% Memory Reduction: Compressed from 480B to 246B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionality, including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM; no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## Model Overview

Qwen3-Coder-REAP-246B-A35B-FP8 has the following specifications:

- Base Model: Qwen3-Coder-480B-A35B-Instruct
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 50% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 246B total, 35B activated per token
- Number of Layers: 62
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 80 (uniformly pruned from 160)
- Number of Activated Experts: 8 per token
- Context Length: 262,144 tokens natively (extendable to 1M with YaRN)
- Quantization: FP8
- License: Apache 2.0

| Benchmark | Qwen3-Coder-480B-A35B-Instruct-FP8 | Qwen3-Coder-REAP-363B-A35B-FP8 | Qwen3-Coder-REAP-246B-A35B-FP8 |
| :--- | :---: | :---: | :---: |
| Compression | – | 25% | 50% |
| HumanEval | 95.1 | 95.7 | 93.9 |
| HumanEval+ | 89.0 | 89.0 | 87.2 |
| MBPP | 92.3 | 91.7 | 91.0 |
| MBPP+ | 79.1 | 77.2 | 77.2 |
| LiveCodeBench (25.01 - 25.05) | 43.1 | 41.6 | 41.5 |
| SWE-Bench-Verified (w/ mini-swe-agent) | 54.0 | 54.0 | 52.2 |
| BFCL-v3 (Non-Live) | 86.6 | 87.8 | 84.9 |
| BFCL-v3 (Live) | 82.5 | 82.3 | 80.1 |
| BFCL-v3 (Multi-Turn) | 38.0 | 39.2 | 37.1 |
| BFCL-v3 (Overall) | 69.0 | 69.8 | 67.4 |
| τ²-bench (Airline) | 46.0 | 48.7 | 44.7 |
| τ²-bench (Retail) | 64.3 | 66.1 | 63.2 |
| τ²-bench (Telecom) | 50.0 | 52.9 | 47.1 |
| TerminalBench 0.1.1 (Terminus agent) | 30.5 | 30.5 | 30.0 |

🚩 This checkpoint maintains comparable performance while being 50% lighter. For more details on the evaluation setup, refer to the REAP arXiv preprint.

## Deployment

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you encounter insufficient memory when running this model, you may need to set a lower value for the `--max-num-seqs` flag (e.g., set it to 64).

## Compression Details

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of Qwen3-Coder-480B-A35B-Instruct, with a 50% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: How frequently and strongly the router activates each expert
- Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.
Advantages of this approach:

- One-Shot Compression: No fine-tuning required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `Qwen/Qwen3-Coder-480B-A35B-Instruct` and distributed under the Apache 2.0 License. If you use this checkpoint, please cite the REAP paper:
GLM 4.6 REAP 268B A32B
# 🌳 REAP 🌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing GLM-4.6-REAP-268B-A32B, a memory-efficient compressed variant of GLM-4.6 that maintains near-identical performance while being 25% lighter.

Note: this is a BF16 version for more accurate downstream low-bit quantization. An FP8 version is also available on HF.

This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 355B model
- 25% Memory Reduction: Compressed from 355B to 268B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionality, including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM; no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## Model Overview

GLM-4.6-REAP-268B-A32B has the following specifications:

- Base Model: GLM-4.6
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 25% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 268B total, 32B activated per token
- Number of Layers: 92
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 120 (uniformly pruned from 160)
- Number of Activated Experts: 8 per token
- Context Length: 202,752 tokens
- License: MIT

Evaluation is TBD for the BF16 model; evaluation results are available for the FP8 variant. For more details on the evaluation setup, refer to the REAP arXiv preprint.
## Deployment

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you encounter insufficient memory when running this model, you may need to set a lower value for the `--max-num-seqs` flag (e.g., set it to 64).

## Compression Details

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of GLM-4.6, with a 25% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: How frequently and strongly the router activates each expert
- Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.

Advantages of this approach:

- One-Shot Compression: No fine-tuning required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `zai-org/GLM-4.6` and distributed under the MIT license. If you use this checkpoint, please cite the REAP paper:
GLM 4.6 REAP 268B A32B FP8
# 🌳 REAP 🌳 the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing GLM-4.6-REAP-268B-A32B-FP8, a memory-efficient compressed variant of GLM-4.6-FP8 that maintains near-identical performance while being 25% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 355B model
- 25% Memory Reduction: Compressed from 355B to 268B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionality, including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM; no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

For downstream low-bit quantization, we suggest using the BF16 variant.

## Model Overview

GLM-4.6-REAP-268B-A32B-FP8 has the following specifications:

- Base Model: GLM-4.6-FP8
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 25% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 268B total, 32B activated per token
- Number of Layers: 92
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 120 (uniformly pruned from 160)
- Number of Activated Experts: 8 per token
- Context Length: 202,752 tokens
- License: MIT

| Benchmark | GLM-4.6-FP8 | GLM-4.6-REAP-268B-A32B-FP8 | GLM-4.6-REAP-252B-A32B-FP8 | GLM-4.6-REAP-218B-A32B-FP8 |
| :--- | :---: | :---: | :---: | :---: |

🚩 This checkpoint maintains almost identical performance while being 25% lighter. For more details on the evaluation setup, refer to the REAP arXiv preprint.

## Deployment

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you encounter insufficient memory when running this model, you may need to set a lower value for the `--max-num-seqs` flag (e.g., set it to 64).

## Compression Details

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of GLM-4.6-FP8, with a 25% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: How frequently and strongly the router activates each expert
- Expert activation norms: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.

Advantages of this approach:

- One-Shot Compression: No fine-tuning required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `zai-org/GLM-4.6-FP8` and distributed under the MIT license.
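The parameter counts reported across the GLM-4.6 REAP variants can be cross-checked with simple arithmetic, if one assumes total parameters decompose linearly into a shared (attention, dense, router) portion plus a per-expert portion. This linear model and the derived figures are back-of-the-envelope assumptions, not numbers from the card:

```python
def solve_expert_size(total_a, experts_a, total_b, experts_b):
    """Treat total params (in billions) as shared + n_experts * per_expert
    and solve from two (total, expert-count) data points.
    This is a crude linear model, assumed for illustration only."""
    per_expert = (total_a - total_b) / (experts_a - experts_b)
    shared = total_a - experts_a * per_expert
    return per_expert, shared

# Data points from the GLM-4.6 cards: 355B with 160 experts (base),
# 218B with 96 experts (40% REAP variant).
per_expert, shared = solve_expert_size(355, 160, 218, 96)
# per_expert ~2.14B, shared ~12.5B under this model.

# Cross-check against this card's 25% variant: 120 experts kept.
pred_268 = shared + 120 * per_expert  # ~269B, close to the reported 268B
```

The small gap between the predicted ~269B and the reported 268B is consistent with rounding in the published parameter counts.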
If you use this checkpoint, please cite the REAP paper:
# Qwen3-Coder-REAP-363B-A35B-FP8
REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing Qwen3-Coder-REAP-363B-A35B-FP8, a memory-efficient compressed variant of Qwen3-Coder-480B-A35B-Instruct-FP8 that maintains near-identical performance while being 25% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 480B model
- 25% Memory Reduction: Compressed from 480B to 363B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionalities including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM - no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## Model Overview

Qwen3-Coder-REAP-363B-A35B-FP8 has the following specifications:

- Base Model: Qwen3-Coder-480B-A35B-Instruct
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 25% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 363B total, 35B activated per token
- Number of Layers: 62
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 120 (uniformly pruned from 160)
- Number of Activated Experts: 8 per token
- Context Length: 262,144 tokens natively (extendable to 1M with YaRN)
- Quantization: FP8
- License: Apache 2.0

| Benchmark | Qwen3-Coder-480B-A35B-Instruct-FP8 | Qwen3-Coder-REAP-363B-A35B-FP8 | Qwen3-Coder-REAP-246B-A35B-FP8 |
| :--- | :---: | :---: | :---: |
| Compression | – | 25% | 50% |
| HumanEval | 95.1 | 95.7 | 93.9 |
| HumanEval+ | 89.0 | 89.0 | 87.2 |
| MBPP | 92.3 | 91.7 | 91.0 |
| MBPP+ | 79.1 | 77.2 | 77.2 |
| LiveCodeBench (25.01 - 25.05) | 43.1 | 41.6 | 41.5 |
| SWE-Bench-Verified (w/ mini-swe-agent) | 54.0 | 54.0 | 52.2 |
| BFCL-v3 (Non-Live) | 86.6 | 87.8 | 84.9 |
| BFCL-v3 (Live) | 82.5 | 82.3 | 80.1 |
| BFCL-v3 (Multi-Turn) | 38.0 | 39.2 | 37.1 |
| BFCL-v3 (Overall) | 69.0 | 69.8 | 67.4 |
| τ²-bench (Airline) | 46.0 | 48.7 | 44.7 |
| τ²-bench (Retail) | 64.3 | 66.1 | 63.2 |
| τ²-bench (Telecom) | 50.0 | 52.9 | 47.1 |
| TerminalBench 0.1.1 (Terminus agent) | 30.5 | 30.5 | 30.0 |

This checkpoint maintains almost identical performance while being 25% lighter. For more details on the evaluation setup, refer to the REAP arXiv preprint.

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you run out of memory when serving this model, try lowering the `--max-num-seqs` flag (e.g., set it to 64).

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of Qwen3-Coder-480B-A35B-Instruct, with a 25% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: how frequently and strongly the router activates each expert
- Expert activation norms: the magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.
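The saliency criterion described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: for each expert, average the router gate value times the norm of the expert's output over the calibration tokens routed to it, then prune the lowest-scoring experts in each MoE block.

```python
import numpy as np

def reap_saliency(gates, expert_outputs):
    """Illustrative REAP-style saliency: mean over routed tokens of
    (router gate value) * (L2 norm of the expert's output contribution).

    gates:          (n_tokens, n_experts) gate values, 0 where not routed
    expert_outputs: (n_tokens, n_experts, d_model) per-expert outputs
    """
    norms = np.linalg.norm(expert_outputs, axis=-1)  # (n_tokens, n_experts)
    weighted = gates * norms                         # router-weighted activation
    counts = np.maximum((gates > 0).sum(axis=0), 1)  # tokens routed per expert
    return weighted.sum(axis=0) / counts             # (n_experts,)

def experts_to_prune(saliency, prune_ratio=0.25):
    """Uniformly prune the lowest-saliency experts in this block."""
    n_prune = int(len(saliency) * prune_ratio)
    return np.argsort(saliency)[:n_prune]

# Toy example: 4 experts, 6 calibration tokens, d_model=8.
# Expert 2 never receives any routing weight, so it scores 0.
rng = np.random.default_rng(0)
gates = np.abs(rng.normal(size=(6, 4)))
gates[:, 2] = 0.0
outs = rng.normal(size=(6, 4, 8))
scores = reap_saliency(gates, outs)
print(experts_to_prune(scores, prune_ratio=0.25))  # indices of experts to drop
```

With a 25% ratio over 4 experts, exactly one expert is dropped; here the never-routed expert has zero saliency and is the natural candidate.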
- One-Shot Compression: No fine-tuning is required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `Qwen/Qwen3-Coder-480B-A35B-Instruct` and distributed under the Apache 2.0 License. If you use this checkpoint, please cite the REAP paper:
# GLM-4.6-REAP-252B-A32B-FP8
REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing GLM-4.6-REAP-252B-A32B-FP8, a memory-efficient compressed variant of GLM-4.6-FP8 that maintains near-identical performance while being 30% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 355B model
- 30% Memory Reduction: Compressed from 355B to 252B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionalities including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM - no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

For downstream low-bit quantization, we suggest using the BF16 variant.

---

## Model Overview

GLM-4.6-REAP-252B-A32B-FP8 has the following specifications:

- Base Model: GLM-4.6-FP8
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 30% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 252B total, 32B activated per token
- Number of Layers: 92
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 112 (uniformly pruned from 160)
- Number of Activated Experts: 8 per token
- Context Length: 202,752 tokens
- License: MIT

*(Benchmark table comparing GLM-4.6-FP8 against GLM-4.6-REAP-268B-A32B-FP8, GLM-4.6-REAP-252B-A32B-FP8, and GLM-4.6-REAP-218B-A32B-FP8; values not preserved in this copy.)*

This checkpoint maintains almost identical performance while being 30% lighter.
For more details on the evaluation setup, refer to the REAP arXiv preprint.

You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you run out of memory when serving this model, try lowering the `--max-num-seqs` flag (e.g., set it to 64).

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of GLM-4.6-FP8, with a 30% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: how frequently and strongly the router activates each expert
- Expert activation norms: the magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.

- One-Shot Compression: No fine-tuning is required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `zai-org/GLM-4.6-FP8` and distributed under the MIT license.
If you use this checkpoint, please cite the REAP paper:
# GLM-4.6-REAP-252B-A32B
# Qwen3-Coder-REAP-246B-A35B
REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing Qwen3-Coder-REAP-246B-A35B, a memory-efficient compressed variant of Qwen3-Coder-480B-A35B-Instruct that maintains near-identical performance while being 50% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 480B model
- 50% Memory Reduction: Compressed from 480B to 246B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionalities including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM - no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## Model Overview

Qwen3-Coder-REAP-246B-A35B has the following specifications:

- Base Model: Qwen3-Coder-480B-A35B-Instruct
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 50% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 246B total, 35B activated per token
- Number of Layers: 62
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 80 (uniformly pruned from 160)
- Number of Activated Experts: 8 per token
- Context Length: 262,144 tokens natively (extendable to 1M with YaRN)
- Quantization: FP8
- License: Apache 2.0

Benchmarks for the BF16 model are TBD; evaluation results are available for the FP8 variant. For more details on the evaluation setup, refer to the REAP arXiv preprint.
You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you run out of memory when serving this model, try lowering the `--max-num-seqs` flag (e.g., set it to 64).

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of Qwen3-Coder-480B-A35B-Instruct, with a 50% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: how frequently and strongly the router activates each expert
- Expert activation norms: the magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.

- One-Shot Compression: No fine-tuning is required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `Qwen/Qwen3-Coder-480B-A35B-Instruct` and distributed under the Apache 2.0 License. If you use this checkpoint, please cite the REAP paper:
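The calibration mixture mentioned above can be illustrated with a small sketch. The dataset contents and the round-robin interleaving scheme here are placeholders and assumptions, not the authors' actual recipe:

```python
from itertools import cycle, islice

# Placeholder stand-ins for the three calibration sources named above.
# In practice these would be prompts drawn from evol-codealpaca,
# xlam-function-calling, and SWE-smith-trajectories.
sources = {
    "evol-codealpaca": [f"code-{i}" for i in range(100)],
    "xlam-function-calling": [f"fncall-{i}" for i in range(100)],
    "swe-smith-trajectories": [f"traj-{i}" for i in range(100)],
}

def build_calibration_mixture(sources, n_samples):
    """Round-robin interleave samples so every domain is represented
    evenly in the calibration set used to estimate expert saliency."""
    iterators = [iter(samples) for samples in sources.values()]
    return list(islice((next(it) for it in cycle(iterators)), n_samples))

mix = build_calibration_mixture(sources, 9)
print(mix)  # alternates code-, fncall-, and traj- samples
```

An even mixture matters because saliency is estimated from routing statistics: a calibration set skewed toward one domain would bias which experts look redundant.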
# Qwen3-Coder-REAP-363B-A35B
REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression

Introducing Qwen3-Coder-REAP-363B-A35B, a memory-efficient compressed variant of Qwen3-Coder-480B-A35B-Instruct that maintains near-identical performance while being 25% lighter. This model was created using REAP (Router-weighted Expert Activation Pruning), a novel expert pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- Near-Lossless Performance: Maintains almost identical accuracy on code generation, agentic coding, and function calling tasks compared to the full 480B model
- 25% Memory Reduction: Compressed from 480B to 363B parameters, significantly lowering deployment costs and memory requirements
- Preserved Capabilities: Retains all core functionalities including code generation, agentic workflows, repository-scale understanding, and function calling
- Drop-in Compatibility: Works with vanilla vLLM - no source modifications or custom patches required
- Optimized for Real-World Use: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## Model Overview

Qwen3-Coder-REAP-363B-A35B has the following specifications:

- Base Model: Qwen3-Coder-480B-A35B-Instruct
- Compression Method: REAP (Router-weighted Expert Activation Pruning)
- Compression Ratio: 25% expert pruning
- Type: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- Number of Parameters: 363B total, 35B activated per token
- Number of Layers: 62
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 120 (uniformly pruned from 160)
- Number of Activated Experts: 8 per token
- Context Length: 262,144 tokens natively (extendable to 1M with YaRN)
- Quantization: FP8
- License: Apache 2.0

Benchmarks for the BF16 model are TBD; evaluation results are available for the FP8 variant. For more details on the evaluation setup, refer to the REAP arXiv preprint.
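As a rough back-of-the-envelope check on the headline numbers above (assuming weight storage dominates, at roughly 2 bytes per parameter in BF16 and 1 byte in FP8, and ignoring activations and KV cache):

```python
def weight_memory_gb(n_params_billions, bytes_per_param):
    """Approximate weight memory in GB: 1e9 params * B bytes = B GB."""
    return n_params_billions * bytes_per_param

base, pruned = 480, 363  # total parameters, in billions
reduction = 1 - pruned / base
print(f"{reduction:.0%} fewer parameters")            # ~24%, reported as ~25%
print(weight_memory_gb(pruned, 2), "GB weights in BF16")
print(weight_memory_gb(pruned, 1), "GB weights in FP8")
```

The exact parameter reduction is 24.4%, which the card rounds to 25%; actual serving memory is higher once activations, KV cache, and framework overhead are included.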
You can deploy the model directly using the latest vLLM (v0.11.0); no source modifications or custom patches are required. If you run out of memory when serving this model, try lowering the `--max-num-seqs` flag (e.g., set it to 64).

This checkpoint was created by applying the REAP (Router-weighted Expert Activation Pruning) method uniformly across all Mixture-of-Experts (MoE) blocks of Qwen3-Coder-480B-A35B-Instruct, with a 25% pruning rate. REAP selects experts to prune based on a novel saliency criterion that considers both:

- Router gate values: how frequently and strongly the router activates each expert
- Expert activation norms: the magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.

- One-Shot Compression: No fine-tuning is required after pruning; the model is immediately ready for deployment
- Preserved Router Control: Unlike expert merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- Generative Task Superiority: REAP significantly outperforms expert merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while maintaining competitive performance on discriminative tasks

The model was calibrated using a diverse mixture of domain-specific datasets, including:

- Code generation samples (evol-codealpaca)
- Function calling examples (xlam-function-calling)
- Agentic multi-turn trajectories (SWE-smith-trajectories)

For more details, refer to the following resources:

- 🧾 arXiv Preprint
- 🧾 REAP Blog
- 💻 REAP Codebase (GitHub)

This model is derived from `Qwen/Qwen3-Coder-480B-A35B-Instruct` and distributed under the Apache 2.0 License. If you use this checkpoint, please cite the REAP paper: