Qwen3-Next-80B-A3B-Instruct-split-GGUF

by lefromage · Language Model · 80B params · Q4 quant · license: apache-2.0 · 99 downloads · early-stage

Edge AI targets: Mobile · Laptop · Server (179GB+ RAM)
Quick Summary

Another way to download the Q2_K quant model pieces: check https://huggingface.
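Split GGUF files follow llama.cpp's shard naming convention, `<base>-%05d-of-%05d.gguf`, so the full set of piece filenames can be derived from the base name and shard count. A minimal sketch (the base name and 3-way split below are hypothetical, not read from the actual repository):

```python
# Sketch: enumerate the shard filenames of a split GGUF, following llama.cpp's
# "<base>-%05d-of-%05d.gguf" convention. Base name and shard count here are
# hypothetical examples, not taken from the repository.

def shard_names(base: str, n_shards: int) -> list[str]:
    return [f"{base}-{i:05d}-of-{n_shards:05d}.gguf"
            for i in range(1, n_shards + 1)]

for name in shard_names("Qwen3-Next-80B-A3B-Instruct-Q2_K", 3):
    print(name)
```

Once all pieces are downloaded into one directory, llama.cpp can load the first shard directly, or the pieces can be recombined with its `llama-gguf-split --merge <first-shard> <output>` tool.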

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 75GB+ RAM

Code Examples

Sample llama.cpp chat output (prompt and generation):

```text
...

user
explain quantum computing in a paragraph
assistant
Quantum computing is a revolutionary approach to computation that leverages the principles of quantum mechanics—such as superposition, entanglement, and interference—to process information in fundamentally different ways than classical computers. Instead of using binary bits (0 or 1), quantum computers use quantum bits, or qubits, which can exist in a combination of 0 and 1 simultaneously thanks to superposition. This allows a quantum computer to explore many possible solutions at once. When qubits become entangled, their states become interdependent, meaning the state of one instantly influences the other, even at a distance. By manipulating these qubits with precise microwave or laser pulses, quantum algorithms can solve certain problems—like factoring large numbers, simulating molecules, or optimizing complex systems—exponentially faster than classical computers. While still in early development and highly sensitive to environmental noise, quantum computing holds the potential to transform fields like cryptography, drug discovery, artificial intelligence, and financial modeling. [end of text]
```


Performance stats from the same run (Apple M4 Max, Metal backend):

```text
llama_perf_sampler_print:    sampling time =      13.05 ms /   210 runs   (    0.06 ms per token, 16093.19 tokens per second)
llama_perf_context_print:        load time =   12190.98 ms
llama_perf_context_print: prompt eval time =    5201.06 ms /    14 tokens (  371.50 ms per token,     2.69 tokens per second)
llama_perf_context_print:        eval time =   31579.94 ms /   195 runs   (  161.95 ms per token,     6.17 tokens per second)
llama_perf_context_print:       total time =   36857.21 ms /   209 tokens
llama_perf_context_print: graphs reused =          0
llama_memory_breakdown_print: | memory breakdown [MiB]   | total    free     self   model   context   compute    unaccounted |
llama_memory_breakdown_print: |   - Metal (Apple M4 Max) | 98304 = 70034 + (28151 = 27675 +     171 +     304) +         117 |
llama_memory_breakdown_print: |   - Host                 |                    167 =    97 +       0 +      70                |
ggml_metal_free: deallocating
```
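The tokens-per-second figures in the perf output are simply runs divided by elapsed time; a quick sketch reproducing them from the raw millisecond values printed above:

```python
# Recompute throughput from the llama.cpp perf counters shown above:
# tokens/second = tokens / (elapsed_ms / 1000).
eval_ms, eval_runs = 31579.94, 195       # generation phase
prompt_ms, prompt_tokens = 5201.06, 14   # prompt-processing phase

eval_tps = eval_runs / (eval_ms / 1000)        # -> ~6.17 tok/s
prompt_tps = prompt_tokens / (prompt_ms / 1000)  # -> ~2.69 tok/s

print(f"eval: {eval_tps:.2f} tok/s, prompt: {prompt_tps:.2f} tok/s")
```

Both values match llama.cpp's printed 6.17 and 2.69 tokens per second.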

Deploy This Model

Production-ready deployment in minutes:

Together.ai — instant API access to this model. Production-ready inference API; start free, scale to millions.

Replicate — one-click model deployment. Run models in the cloud with a simple API; no DevOps required.

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.