Yaroster


Secunda 0.1 GGUF

| Version | Type | Strengths | Weaknesses | Recommended Use |
|---------|------|-----------|------------|-----------------|
| Secunda-0.1-GGUF / RAW | Instruction | Most precise; coherent code; perfected Modelfile | Smaller context / limited flexibility | Production / baseline |
| Secunda-0.3-F16-QA | QA-based input | Acceptable for question-based generation | Less accurate than 0.1; not as coherent | Prototyping (QA mode) |
| Secunda-0.3-F16-TEXT | Text-to-text | Flexible for freeform tasks | Slightly off; Modelfile-dependent | Experimental / text rewrite |
| Secunda-0.3-GGUF | GGUF build | Portable GGUF of 0.3 | Inherits 0.3 weaknesses | Lightweight local testing |
| Secunda-0.5-RAW | QA natural | Best QA understanding; long-form generation potential | Inconsistent output length; some instability | Research / testing LoRA |
| Secunda-0.5-GGUF | GGUF build | Portable, inference-ready version of 0.5 | Shares issues of 0.5 | Offline experimentation |
| Secunda-0.1-RAW | Instruction | Same base as 0.1-GGUF | Same as 0.1 | Production backup |

Secunda-0.1-GGUF is a fully merged and quantized release of Secunda's original Ren'Py `.rpy` story generator, built from the LoRA adapters of Secunda-0.1-RAW merged with LLaMA 3.1 8B, and packaged in GGUF format for lightweight local inference via llama.cpp, llamafile, Ollama, or LM Studio.

This model produces:

- Full `define` character blocks with color
- Background and sprite image declarations
- A narrative arc starting from `label start:`
- Menus, jumps, and emotional dialogue
- A Ren'Py script that actually runs

/!\ NO HUMAN-MADE DATA WAS USED TO TRAIN THIS AI! Secunda takes much pride in making sure the training data is scripted! /!\

If you like Visual Novels, please visit itch.io and support independent creators!
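To make the feature list above concrete, here is a hand-written sketch of the kind of `.rpy` skeleton the card describes. The character, image, and label names are illustrative assumptions, not actual model output:

```renpy
# Illustrative Ren'Py skeleton matching the structure listed above (not real model output).

# Character definition with color
define e = Character("Eileen", color="#c8ffc8")

# Background and sprite image declarations
image bg festival = "bg_festival.png"
image eileen happy = "eileen_happy.png"

# Narrative arc starting from label start:
label start:
    scene bg festival
    show eileen happy
    e "The lanterns are finally lit!"

    # Menu with jumps and emotional dialogue
    menu:
        "Stay and watch":
            jump watch_scene
        "Head home":
            jump ending

label watch_scene:
    e "Let's stay a little longer."
    return

label ending:
    e "Maybe next year."
    return
```

A script shaped like this runs as-is when dropped into a Ren'Py project's `game/` directory (with matching image files present).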
| 🌕 Variant | 🔧 Quantization Type | 💾 Filename | 💬 Notes |
|------------|----------------------|-------------|----------|
| 8-bit Quantized | `q80` | `secunda-0.1-q80.gguf` | Balanced. Great quality & performance tradeoff. |
| 2-bit Tiny | `tq20` | `secunda-0.1-tq20.gguf` | Ultra-light. Use on small devices; lower fidelity. |
| 1-bit Minimalist | `tq10` | `secunda-0.1-tq10.gguf` | Experimental. For extreme edge deployments. |

---

First, make sure you've installed Ollama and cloned the model:

- Generated 1000+ `.rpy` files
- Passed human review for structure, creativity & syntax
- 90% valid output with minimal manual tweaks

---

🪐 Constellation Companions

- Secunda-0.3-F16-QA — experimental question-answer variant
- Secunda-0.3-F16-TEXT — for less structured generation
- Primétoile — full VN pipeline

> ✧ Because stories can spark from a single phrase ✧

⚠️ This repo contains only the LoRA adapter weights. To use the model, download the base `LLaMA 3.1` from Meta (terms apply): https://ai.meta.com/resources/models-and-libraries/llama-downloads/
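The "install Ollama and clone the model" step above stops before showing the actual commands. A minimal sketch of the usual Ollama workflow follows; the model tag `yaroster/secunda-0.1` and the Modelfile path are assumptions, so adjust them to the repo's actual names:

```shell
# Create the model from a local GGUF + Modelfile (paths are assumed, not confirmed by this card)
ollama create secunda-0.1 -f ./Modelfile

# Or, if the model is published to a registry, pull it directly (tag is an assumption)
ollama pull yaroster/secunda-0.1

# Generate a Ren'Py script from a one-line prompt
ollama run secunda-0.1 "Write a short .rpy scene where two friends meet at a festival."
```

For llama.cpp or LM Studio, point the tool at the `.gguf` file directly instead of going through `ollama create`.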
