concedo

27 models

llama-joycaption-beta-one-hf-llava-mmproj-gguf

These GGUF quants were made from https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava and are designed for use in KoboldCpp 1.91 and above. Contains 3 GGUF quants of JoyCaption Beta One, as well as the associated mmproj file.

To use:
- Download the main model (Llama-Joycaption-Beta-One-Hf-Llava-Q4K.gguf) and the mmproj (Llama-Joycaption-Beta-One-Hf-Llava-F16.gguf)
- Launch KoboldCpp and go to the Loaded Files tab
- Select the main model as "Text Model" and the mmproj as "Vision mmproj"

4,310
44
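The download-and-launch steps for the JoyCaption quants above can be sketched from the command line. This is a sketch, not part of the original listing: the repo id is inferred from the model name on this page, and the flags follow KoboldCpp's CLI rather than the GUI tab described above.

```shell
# Fetch both GGUF files (repo id assumed from the listing above)
huggingface-cli download concedo/llama-joycaption-beta-one-hf-llava-mmproj-gguf \
  Llama-Joycaption-Beta-One-Hf-Llava-Q4K.gguf \
  Llama-Joycaption-Beta-One-Hf-Llava-F16.gguf \
  --local-dir .

# Launch KoboldCpp with the text model and vision mmproj paired,
# mirroring the "Text Model" / "Vision mmproj" GUI selections
python koboldcpp.py \
  --model Llama-Joycaption-Beta-One-Hf-Llava-Q4K.gguf \
  --mmproj Llama-Joycaption-Beta-One-Hf-Llava-F16.gguf
```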

Beepo-22B-GGUF

2,416
48

KobbleTinyV2-1.1B-GGUF

license:apache-2.0
902
14

Pythia-70M-ChatSalad

784
6

Vicuzard-30B-Uncensored

llama
779
11

OPT-19M-ChatSalad

772
18

koboldcpp

126
5

Mini-Magnum-Unboxed-12B-GGUF

license:apache-2.0
107
4

CabbageSoup-24B-GGUF

81
0

KobbleSmall-2B-GGUF

52
2

CrabSoup-GGUF

51
0

Huihui-GLM-4.5-Air-abliterated-lossytensors

huihui-ai/Huihui-GLM-4.5-Air-abliterated-lossytensors

This is a lossy safetensors version of Huihui-GLM-4.5-Air-abliterated-GGUF that can be run with Hugging Face transformers, since the original release did not include safetensors files. It was re-converted back into .safetensors manually from the Q4KM GGUF file. As such, although the weights are now stored in BF16, it is a "lossy" version of inferior quality to the full-precision model. However, you should now be able to use it with backends that cannot load GGUF and require safetensors (e.g. MLX, vLLM).

Avoid requantizing it to formats above Q4KM - you will NOT gain any additional quality. If you require the max-precision version, you'll have to buy it from huihui.

license:mit
48
4
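A minimal loading sketch for the safetensors release above, using Hugging Face transformers. The repo id is assumed from the model name in this listing, and GLM-4.5-Air is a large MoE model, so this requires substantial GPU memory; treat it as an illustration, not a verified recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the listing above
repo = "concedo/Huihui-GLM-4.5-Air-abliterated-lossytensors"

tokenizer = AutoTokenizer.from_pretrained(repo)
# The weights are stored in BF16 per the description, so load them as-is
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0]))
```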

Phi-SoSerious-Mini-V1-GGUF

20
6

KobbleTinyV2-1.1B

llama
9
11

CabbageSoup-24B

This is a merge of Broken-Tutu-24B-Unslop-v2.0 and Huihui-Mistral-Small-3.2-24B-Instruct-2506-abliterated created using mergekit. It mellows out some of the biases of Broken Tutu and steers it back towards baseline Mistral Small 3.2 24B. Note that the resultant model is still censored per se - it will require an appropriate system prompt or jailbreak in order to get unrestricted responses, similar to Broken Tutu. GGUF quants can be found at https://huggingface.co/concedo/CabbageSoup-24B-GGUF

This model was merged using the Linear merge method with Broken-Tutu-24B-Unslop-v2.0 as a base. The following models were included in the merge:
- Huihui-Mistral-Small-3.2-24B-Ablit-Novision

The following YAML configuration was used to produce this model:

7
0
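The CabbageSoup-24B description above ends by referring to a YAML configuration that did not survive in this listing. A plausible mergekit config consistent with the description might look like the following; the merge method, base model, and included model come from the text, while the weights and exact repo paths are assumptions.

```yaml
# Hypothetical mergekit config - the actual file was not included in this listing.
# Method and model names come from the description; weights are assumptions.
merge_method: linear
base_model: Broken-Tutu-24B-Unslop-v2.0
models:
  - model: Broken-Tutu-24B-Unslop-v2.0
    parameters:
      weight: 0.5
  - model: Huihui-Mistral-Small-3.2-24B-Ablit-Novision
    parameters:
      weight: 0.5
dtype: bfloat16
```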

Beepo-22B

5
55

Phi-SoSerious-Mini-V1

2
8

pygmalion-6bv3-ggml-ggjt

0
13

cerebras-111M-ggml

0
7

Mini-Magnum-Unboxed-12B

license:apache-2.0
0
5

janeway-6b-ggml

0
4

rwkv-v4-169m-ggml

0
4

pythia-70m-chatsalad-ggml

0
3

FireGoatInstruct

0
3

KobbleSmall-2B

0
3

cerebras-2.7b-ggml

0
1

OpenLLAMA-3B-GGML

0
1