RESMP-DEV
Qwen3-Next-80B-A3B-Thinking-NVFP4
Quantized version of Qwen/Qwen3-Next-80B-A3B-Thinking using LLM Compressor and the NVFP4 (E2M1 + E4M3) format. This is intended as the first in a series of NVFP4 quantizations, released as hardware with native FP4 support becomes more widely available.

| Property | Value |
|-----------|--------|
| Base model | Qwen/Qwen3-Next-80B-A3B-Thinking |
| Quantization | NVFP4 (FP4 microscaling, block size = 16, scale = E4M3) |
| Method | Post-training quantization with LLM Compressor |
| Toolchain | LLM Compressor |
| Hardware target | NVIDIA Blackwell (untested on RTX cards) / GB200 Tensor Cores |
| Precision | Weights & activations = FP4 • Scales = FP8 (E4M3) |
| Maintainer | RESMP.DEV |

This model is a drop-in replacement for Qwen/Qwen3-Next-80B-A3B-Thinking that runs in NVFP4 precision. Accuracy remains within ≈ 1 % of the FP8 baseline on standard reasoning and coding benchmarks.
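For reference, a quantization along these lines can be reproduced with LLM Compressor's one-shot PTQ flow. The sketch below is a minimal example, assuming the `NVFP4` scheme identifier and the `open_platypus` calibration set as placeholders; the exact recipe and calibration data used for this release are not published here.

```python
# Minimal NVFP4 PTQ sketch with LLM Compressor (assumed recipe; the exact
# settings used for this release may differ).
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen3-Next-80B-A3B-Thinking"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# NVFP4: FP4 (E2M1) weights and activations, per-16-value-block FP8 (E4M3) scales.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",
    ignore=["lm_head"],  # keep the output head in higher precision
)

# One-shot PTQ; activation scales are calibrated from a small sample set.
oneshot(
    model=model,
    recipe=recipe,
    dataset="open_platypus",  # placeholder calibration dataset
    max_seq_length=2048,
    num_calibration_samples=512,
)

model.save_pretrained("Qwen3-Next-80B-A3B-Thinking-NVFP4", save_compressed=True)
tokenizer.save_pretrained("Qwen3-Next-80B-A3B-Thinking-NVFP4")
```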
GLM-4.6-NVFP4
Quantized version of GLM-4.6 using LLM Compressor and the NVFP4 (E2M1 + E4M3) format. This is intended as part of a series of NVFP4 quantizations, released as hardware with native FP4 support becomes more widely available.

| Property | Value |
|-----------|--------|
| Base model | GLM-4.6 |
| Quantization | NVFP4 (FP4 microscaling, block size = 16, scale = E4M3) |
| Method | Post-training quantization with LLM Compressor |
| Toolchain | LLM Compressor |
| Hardware target | NVIDIA Blackwell (untested on RTX cards) / GB200 Tensor Cores |
| Precision | Weights & activations = FP4 • Scales = FP8 (E4M3) |
| Maintainer | RESMP.DEV |

This model is a drop-in replacement for GLM-4.6 that runs in NVFP4 precision, enabling up to 6× faster GEMM throughput and around 65 % lower memory use compared with BF16. Accuracy remains within ≈ 1 % of the FP8 baseline on standard reasoning and coding benchmarks.
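Because the checkpoint is a drop-in replacement, serving it should only require pointing an NVFP4-capable runtime at the repo. Below is a minimal vLLM sketch; the repo id `RESMP-DEV/GLM-4.6-NVFP4`, the tensor-parallel degree, and NVFP4 kernel availability in your vLLM build are all assumptions to adjust for your setup.

```python
# Minimal inference sketch with vLLM (assumes Blackwell GPUs and a vLLM
# build with NVFP4 kernel support; the repo id below is an assumption).
from vllm import LLM, SamplingParams

llm = LLM(model="RESMP-DEV/GLM-4.6-NVFP4", tensor_parallel_size=8)
params = SamplingParams(temperature=0.7, max_tokens=512)

outputs = llm.generate(["Explain FP4 microscaling in two sentences."], params)
print(outputs[0].outputs[0].text)
```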
Qwen3-Next-80B-A3B-Instruct-NVFP4
gpt-oss-20b-bash-unsloth
This model is a fine-tuned version of unsloth/gpt-oss-20b-unsloth-bnb-4bit. It has been trained using TRL.

Framework versions:
- PEFT: 0.17.0
- TRL: 0.21.0
- Transformers: 4.55.0
- PyTorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
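Since this is a PEFT adapter, it can be loaded for inference with PEFT's auto class. A minimal sketch, assuming the repo id `RESMP-DEV/gpt-oss-20b-bash-unsloth` (adjust to the adapter's actual Hub location):

```python
# Minimal loading sketch (the repo id is an assumption; adjust to the
# actual Hub location of this adapter).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "RESMP-DEV/gpt-oss-20b-bash-unsloth",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("RESMP-DEV/gpt-oss-20b-bash-unsloth")

prompt = "Write a bash one-liner to count files in a directory."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```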