KimChen
3 models
Bge M3 GGUF
KimChen/bge-m3-GGUF — This model was converted to GGUF format from `BAAI/bge-m3` using llama.cpp. Refer to the original model card for more details on the model.

Use with llama.cpp:

Step 1: Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
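The two steps above can be sketched as a shell session. This is a minimal sketch, not the card's exact commands: the `.gguf` filename is an assumption (check the repo's file list), and `LLAMA_CUDA=1` applies only if you have an NVIDIA GPU.

```shell
# Step 1: install llama.cpp via Homebrew (works on Mac and Linux)
brew install llama.cpp

# Alternative: build from source with CURL support plus hardware-specific flags
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_CURL=1 -DLLAMA_CUDA=1   # LLAMA_CUDA=1 only for NVIDIA GPUs on Linux
cmake --build build --config Release

# bge-m3 is an embedding model, so use the llama-embedding binary.
# The --hf-file value below is an assumed quant filename -- check the repo.
./build/bin/llama-embedding \
  --hf-repo KimChen/bge-m3-GGUF \
  --hf-file bge-m3-q8_0.gguf \
  -p "Hello, world"
```

With `--hf-repo`/`--hf-file`, llama.cpp downloads the checkpoint from the Hub on first use (this is why the `LLAMA_CURL=1` build flag is needed).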
llama-cpp
488
12
c4ai-command-r-08-2024
license:cc-by-nc-4.0
22
2
gemma-2-27b-it-Q8_0-GGUF
llama-cpp
6
2