xxx777xxxASD
GGUF quants:
- ChaoticSoliloquy-4x8B-GGUF
- L3.1-ClaudeMaid-4x8B-GGUF
- L3-ChaoticSoliloquy-v1.5-4x8B-GGUF
- L3-ChaoticSoliloquy-v2-4x8B-test-GGUF
- NeuralKunoichi-EroSumika-4x7B-128k-GGUF
- PrimaMonarch-EroSumika-2x10.7B-128k-GGUF

Original models:
- L3-SnowStorm-v1.15-4x8B-A
- L3_SnowStorm_4x8B
- PrimaMonarch-EroSumika-2x10.7B-128k
- L3.1-ClaudeMaid-4x8B
<style>
.image-container {
  position: relative;
  display: inline-block;
}
.image-container img {
  display: block;
  border-radius: 10px;
  box-shadow: 0 0 1px rgba(0, 0, 0, 0.3);
}
.image-container::before {
  content: "";
  position: absolute;
  top: 0px;
  left: 20px;
  width: calc(100% - 40px);
  height: calc(100%);
  background-image: url("https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/O0FlWv4L8ZnehOGETw7qt.png");
  background-size: cover;
  filter: blur(10px);
  z-index: -1;
}
</style>

It seems like koboldcpp 1.71 can't run GGUFs of Llama-3.1 MoE models yet, or perhaps I'm just dumb and messed something up. If anyone has a similar problem - run the model directly from llama.cpp; here's a simple open-source GUI (Windows) you can use if the console is your worst enemy -

- NeverSleep/Lumimaid-v0.2-8B
- Undi95/Meta-Llama-3.1-8B-Claude
- Nitral-AI/SekhmetBet-L3.1-8B-v0.2
- aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored
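If you'd rather skip a GUI entirely, running the GGUF straight from llama.cpp's bundled server looks roughly like this. This is a sketch, not from this card: the quant filename, context size, and GPU layer count are placeholders you should adjust to your own download and hardware.

```shell
# Serve the GGUF with llama.cpp's llama-server (built alongside llama-cli).
# The model path below is a placeholder - point it at your actual quant file.
./llama-server \
  -m ./L3.1-ClaudeMaid-4x8B.Q4_K_M.gguf \
  -c 8192 \
  -ngl 33 \
  --port 8080
# Then connect your frontend to http://localhost:8080
```

`-ngl` controls how many layers are offloaded to the GPU; set it to 0 for a CPU-only run, and lower `-c` if you run out of memory.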