concedo
llama-joycaption-beta-one-hf-llava-mmproj-gguf
These GGUF quants were made from https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava and are designed for use in KoboldCpp 1.91 and above. Contains 3 GGUF quants of Joycaption Beta One, as well as the associated mmproj file.

To use:
- Download the main model (Llama-Joycaption-Beta-One-Hf-Llava-Q4K.gguf) and the mmproj (Llama-Joycaption-Beta-One-Hf-Llava-F16.gguf)
- Launch KoboldCpp and go to the Loaded Files tab
- Select the main model as "Text Model" and the mmproj as "Vision mmproj"
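The GUI steps above can also be done from the command line; a minimal sketch, assuming the two GGUF files have been downloaded into the working directory alongside the KoboldCpp script:

```shell
# Launch KoboldCpp with the Joycaption text model and its vision mmproj.
# File names are the ones listed on this card; --model and --mmproj are
# the KoboldCpp flags for the text model and vision projector respectively.
python koboldcpp.py \
  --model Llama-Joycaption-Beta-One-Hf-Llava-Q4K.gguf \
  --mmproj Llama-Joycaption-Beta-One-Hf-Llava-F16.gguf
```

Once the server is up, images can be attached in the KoboldCpp web UI and captioned by the model.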
Beepo-22B-GGUF
KobbleTinyV2-1.1B-GGUF
Pythia-70M-ChatSalad
Vicuzard-30B-Uncensored
OPT-19M-ChatSalad
koboldcpp
Mini-Magnum-Unboxed-12B-GGUF
CabbageSoup-24B-GGUF
KobbleSmall-2B-GGUF
CrabSoup-GGUF
Huihui-GLM-4.5-Air-abliterated-lossytensors
huihui-ai/Huihui-GLM-4.5-Air-abliterated-lossytensors

This is a lossy safetensors version of Huihui-GLM-4.5-Air-abliterated-GGUF that can be run with Hugging Face transformers, since the original release did not include safetensors files. It was re-converted back into .safetensors manually from the Q4KM GGUF file. As such, although the weights are now stored in BF16, it is a "lossy" version of inferior quality to the full-precision model. However, you should now be able to use it with backends that cannot load GGUF and require safetensors (e.g. MLX, vLLM).

Avoid requantizing it to formats above Q4KM - you will NOT gain any additional quality. If you require the max-precision version, you'll have to buy it from huihui.
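A minimal sketch of loading this repo with Hugging Face transformers; the repo id comes from this card, while the dtype and generation settings are illustrative assumptions:

```python
# Sketch: load the lossy BF16 safetensors re-conversion with transformers.
# device_map="auto" and trust_remote_code=True are assumptions for a large
# GLM-family MoE model; adjust for your hardware and transformers version.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "huihui-ai/Huihui-GLM-4.5-Air-abliterated-lossytensors"

def load_model(model_id: str = MODEL_ID):
    """Return (tokenizer, model) for the lossy safetensors release."""
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # weights were re-converted to BF16
        device_map="auto",
        trust_remote_code=True,
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The heavy download and generation are gated behind the main guard so the module can be imported without pulling the weights.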
Phi-SoSerious-Mini-V1-GGUF
KobbleTinyV2-1.1B
CabbageSoup-24B
This is a merge of Broken-Tutu-24B-Unslop-v2.0 and Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated, created using mergekit. It mellows out some of the biases of Broken Tutu and steers it back towards baseline Mistral Small 3.2 24B. Note that the resulting model is still censored by default - it will require an appropriate system prompt or jailbreak to obtain unrestricted responses, similar to Broken Tutu.

GGUF quants can be found at https://huggingface.co/concedo/CabbageSoup-24B-GGUF

This model was merged using the Linear merge method with Broken-Tutu-24B-Unslop-v2.0 as the base. The following models were included in the merge:
- Huihui-Mistral-Small-3.2-24B-Ablit-Novision

The following YAML configuration was used to produce this model: