s3nh
Gryphe-MythoMax-L2-13b-GGUF
UTENA-7B-NSFW-V2-GGUF
AdaptLLM-law-LLM-13B-GGUF
AdaptLLM-finance-LLM-13B-GGUF
AdaptLLM-medicine-LLM-13B-GGUF
NSFW-Panda-7B-GGUF
s3nh-nsfw-noromaid-zephyr-GGUF
Noromaid-Aeryth-7B-GGUF
decapoda-research-Antares-11b-v1-GGUF
Tensoic-TinyLlama-1.1B-3T-openhermes-GGUF
EduHelp 8B
EduHelper is a child-friendly tutoring assistant fine-tuned from the Qwen3-8B base model using parameter-efficient fine-tuning (PEFT) with LoRA on the ajibawa-2023/Education-Young-Children dataset.

- Base model: Qwen3-8B
- Method: PEFT (LoRA), adapters merged into the final weights
- Training data: ajibawa-2023/Education-Young-Children
- Intended use: gentle, age-appropriate explanations and basic tutoring for young learners
- Language: primarily English
- Safety: requires adult supervision; not a substitute for professional advice
- Architecture: decoder-only LLM (chat/instruction style), based on Qwen3-8B
- Training approach: supervised fine-tuning with LoRA (via PEFT), adapters merged into the base model for standalone deployment
- Focus: clear, simple, supportive answers for early-learning contexts (e.g., basic reading, counting, everyday knowledge)

Please refer to the Qwen3-8B base model card for detailed architecture and licensing.

Suitable for:
- Simple explanations and step-by-step guidance
- Basic arithmetic and counting practice
- Short reading comprehension and vocabulary support
- Everyday factual knowledge for children

Not suitable for:
- Medical, legal, or emergency advice
- Unsupervised use by children
- High-stakes or specialized professional tasks

The model can make mistakes or produce content that is not perfectly age-appropriate. Always supervise and review outputs.

- Dataset: ajibawa-2023/Education-Young-Children
- Description: educational prompts and responses oriented toward young children
- Notes: review the dataset card for curation details and license, and ensure compliance when redistributing or deploying

Tips:
- For more focused answers, try `temperature=0.2–0.5`.
- Add a clear system prompt to reinforce gentle, age-appropriate behavior.
- Supervision: children should use this model under adult supervision.
- Content filtering: consider additional filtering or guardrails to ensure age-appropriate outputs.
- Biases: the model may reflect biases present in the training data. Review outputs in your application context.

Limitations:
- Knowledge breadth and factuality are bounded by the base model and dataset.
- Not optimized for advanced reasoning or specialized domains.
- May occasionally produce overly complex or off-topic responses.

If you use EduHelper, please cite the model and its components:
- The Qwen3-8B base model (per its model card)
- The ajibawa-2023/Education-Young-Children dataset

Credits:
- Base model: Qwen3-8B by the Qwen team
- Dataset: ajibawa-2023/Education-Young-Children

Thanks to lium.io for the generous grant, and to basilica.ai for access to hardware.
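The usage tips in the card (a low temperature and an explicit gentle system prompt) can be sketched with transformers. A minimal sketch only: the repo id `s3nh/EduHelp-8B` is a hypothetical placeholder for the real one, and the system prompt wording is illustrative, not from the card.

```python
from typing import Dict, List

# Illustrative system prompt; adjust the wording for your application.
SYSTEM_PROMPT = (
    "You are EduHelper, a kind tutor for young children. "
    "Explain things in short, simple sentences and be encouraging."
)


def build_chat(question: str) -> List[Dict[str, str]]:
    """Assemble messages in the layout chat templates expect."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]


def ask(question: str, repo_id: str = "s3nh/EduHelp-8B") -> str:
    """Generate an answer with the card's suggested low temperature.

    `repo_id` is a hypothetical placeholder -- substitute the real
    repository name. Not called here, since it downloads the full
    8B checkpoint.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    inputs = tok.apply_chat_template(
        build_chat(question), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    # do_sample with temperature in the 0.2-0.5 range the card suggests.
    out = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.3)
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Keeping the system prompt in every request matters more than the exact sampling values: it is the cheapest guardrail for keeping answers age-appropriate.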
EstopianMaid-13B-GGUF
SanjiWatsuki-Silicon-Maid-7B-GGUF
MiniCPM-2B-dpo-fp32-GGUF
fnlp-moss-base-7b-GGUF
Kunocchini-7b-128k-test-GGUF
nsfw-noromaid-mistral-instruct-GGUF
Sao10K-Stheno-L2-13B-GGUF
HiTZ-GoLLIE-7B-GGUF
Sao10K-Sensualize-Solar-10.7B-GGUF
WestSeverus-7B-DPO-GGUF
s3nh-Law-Noromaid-13b-GGUF
flux-7b-v0.2-GGUF
ehartford-WizardLM-1.0-Uncensored-Llama2-13b-GGUF
Undi95-Unholy-v2-13B-GGUF
Guilherme34-Samantha-v2-GGUF
Tensoic-TinyLlama-1.1B-2.5T-openhermes-GGUF
intfloat-e5-mistral-7b-instruct-GGUF
whiterabbitneo-WhiteRabbitNeo-13B-GGUF
stabilityai-japanese-stablelm-instruct-gamma-7b-GGUF
GEITje-7B-ultra-GGUF
jeff31415-TinyLlama-1.1B-1.5T-OpenOrca-Alpha-GGUF
stabilityai-japanese-stablelm-base-gamma-7b-GGUF
Kunoichi-DPO-v2-7B-GGUF
Obrolin-Kesehatan-7B-GGUF
Fredithefish-CanarY-GGUF
openerotica-cockatrice-7b-v0.2-GGUF
cockatrice-7b-v0.3-GGUF
Llama-2-13b-chat-dutch-GGUF
NeuralNovel-Tanuki-7B-v0.1-GGUF
gizmo-ai-Starling-LM-7B-alpha-GGUF
ajibawa-2023-Uncensored-Jordan-13B-GGUF
NexoNimbus-7B-GGUF
SanjiWatsuki-Lelantos-7B-GGUF
ajibawa-2023-Uncensored-Jordan-7B-GGUF
MathLLM-MathCoder-CL-7B-GGUF
elonmollusk-neuralogix-neural-chat-v1-GGUF
mlabonne-Marcoro14-7B-slerp-GGUF
occultml-Helios-10.7B-v2-GGUF
jeonsworld-CarbonVillain-10.7B-v1-GGUF
Mistral-7B-Evol-Instruct-Chinese-GGUF
FlagAlpha-Llama2-Chinese-13b-Chat-GGUF
ToolBench-ToolLLaMA-2-7b-v2-GGUF
SanjiWatsuki-Sonya-7B-GGUF
s3nh-Medicine-Noromaid-13b-GGUF
Blurred-Beagle-7b-slerp-GGUF
TachyHealth-Thealth-SLERP-GGUF
DopeorNope-Mark1-10.7B-GGUF
mlabonne-Marcoro14-7B-ties-GGUF
DopeorNope-SOLARC-M-10.7B-GGUF
abideen-x-7B-GGUF
NarutoDolphin-10B-GGUF
diffnamehard-Psyfighter2-Noromaid-ties-13B-GGUF
hunkim-NousResearch-Llama-2-7b-hf-ko-7-koalpaca-v1.1a-kopen-platypus-GGUF
akjindal53244-Arithmo-Mistral-7B-GGUF
TinyLlama-1.1B-32k-GGUF
GeneZC-MiniChat-2-3B-GGUF
UTENA-7B-V3-GGUF
Spanicin-Fulcrum-7B-slerp-GGUF
TIGER-Lab-TIGERScore-7B-V1.0-GGUF
Masterjp123-NeuralMaid-7b-GGUF
s3nh-Finance-Noromaid-13b-GGUF
migtissera-Synthia-7B-v1.2-GGUF
Masterjp123-Clover3-13B-GGUF
vickt-LLama-chinese-med-chat-GGUF
NeverSleepHistorical-Noromaid-7B-0.4-GGUF
Azazelle-Tippy-Toppy-7b-GGUF
MaziyarPanahi-Seraph-7B-Mistral-7B-Instruct-v0.2-slerp-GGUF
s3nh-Noromaid-Panda-7B-GGUF
functionary-small-v2.2-GGUF
Edentns-DataVortexM-7B-Instruct-v0.1-GGUF
koala-7B-slerp-GGUF
Photolens-OpenOrcaxOpenChat-2-13b-langchain-chat-GGUF
Unbabel-TowerInstruct-7B-v0.1-GGUF
Novocode7b-GGUF
ToolBench-ToolLLaMA-2-7b-v1-GGUF
jan-hq-supermario-v2-GGUF
Ketak-ZoomRx-Drug_Ollama_v3-2-GGUF
Aeryth-7B-v0.1-GGUF
TinyLlama-de-stage2-v0.7-GGUF
Rhino-Mistral-7B-GGUF
NeuralDaredevil-7B-GGUF
Vikhr-7b-0.1-GGUF
azale-ai-Starstreak-7b-alpha-GGUF
Newton-7B-GGUF
Open-Orca-OpenOrca-Preview1-13B-GGUF
HamSter-0.2-GGUF
Chikuma_10.7B-GGUF
DaringLotus-GGUF
elonmollusk-neuralogix-openhermes-v2-GGUF
s3nh-Sonya-Panda-7B-slerp-GGUF
golaxy-gogpt2-7b-GGUF
totally-not-an-llm-AlpacaCielo2-7b-8k-GGUF
Sao10K-Winterreise-m7-GGUF
abacusai-Giraffe-13b-32k-v3-GGUF
beberik-Lonepino-11B-GGUF
SnowLotus-v2-10.7B-GGUF
likenneth-honest_llama2_chat_7B-GGUF
Lelantos-Maid-DPO-7B-GGUF
MathLLM-MathCoder-L-7B-GGUF
HiTZ-GoLLIE-13B-GGUF
Doctor-Shotgun-TinyLlama-1.1B-32k-Instruct-GGUF
lmsys-longchat-7b-v1.5-32k-GGUF
mlabonne-NeuralPipe-7B-slerp-GGUF
Delcos-Velara-11B-V2-GGUF
ALMA-7B-GGUF
YeungNLP-firefly-llama-13b-GGUF
OEvortex-HelpingAI-GGUF
bibidentuhanoi-BMO-7B-Instruct-GGUF
phanerozoic-Mistral-Pirate-7b-v0.3-GGUF
Azazelle-Yuna-7b-Merge-GGUF
Azazelle-Maylin-7b-GGUF
sethuiyer-Dr_Samantha_7b_mistral-GGUF
TencentARC-LLaMA-Pro-8B-Instruct-GGUF
MarkrAI-MarK2-10.7B-GGUF
Neuronovo-neuronovo-7B-v0.3-GGUF
Wernicke-7B-dpo-GGUF
WSB-GPT-7B-GGUF
garage-bAInd-Stable-Platypus2-13B-GGUF
Yash21-TinyYi-7b-GGUF
Henk717-spring-dragon-GGUF
Yash21-SuperChat-7B-GGUF
arkanbima-Aethizin-10.7B-GGUF
Loquace-tiny-1.1B-GGUF
Synatra-7B-v0.3-dpo-GGUF
TinyDolphin-2.8-1.1b-GGUF
PistachioAlt-Noromaid-Bagel-7B-Slerp-GGUF
Dr_Samantha-7b-GGUF
ALMA-13B-GGUF
Patronum-7B-GGUF
Mistral-7B-Instruct-v0.2-Neural-Story-GGUF
Faraday-7B-GGUF
zephyr-speakleash-007-pl-8192-32-16-0.05-GGUF
CapybaraHermes-2.5-Mistral-7B-GGUF
beksinski-style-stable-diffusion
artwork-arcane-stable-diffusion
NeuralBeagle-11B-GGUF
DistilabelBeagle14-7B-GGUF
NSFW-Panda-7B
MedChat3.5-GGUF
Gorgon-7b-v0.1-GGUF
TinyGauss-1.1B-GGUF
latxa-7b-v1-GGUF
WestLake-7B-v2-GGUF
UTENA-7B-UNA-V2-GGUF
multimaster-7b-GGUF
EduHelp_Beck_8B
Sydney_Pirate_Mistral_7b-GGUF
sethuiyer-SynthIQ-7b-GGUF
Voldemort-10B-DPO-GGUF
mistral-7b-lamia-v0.1b-GGUF
Hermaid-7B-GGUF
Novocode7b-v3-GGUF
Thespis-Mistral-7b-v0.7-GGUF
DuckDB-NSQL-7B-v0.1-GGUF
Tess-10.7B-v1.5b-GGUF
NonSense
edu_assistant_qwen1.7b__5lora_nf
- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
edu_assistant_qwen1.7b__6lora_nf
- Developed by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
edu_assistant_qwen1.7b
zelda-botw-stable-diffusion
I present a fine-tuned version of stable-diffusion-v1-5, trained heavily on artwork from The Legend of Zelda: Breath of the Wild. Use the token `botw style` in your prompts for the effect.

The model was trained using the diffusers library, based on its Dreambooth implementation. Training included:
- prior preservation loss
- text-encoder fine-tuning

This model can be used just like any other Stable Diffusion model. For more information, please have a look at Stable Diffusion. You can also export the model to ONNX, MPS and/or FLAX/JAX.

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

Please read the full license here.
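Usage with diffusers can be sketched as follows. The only detail taken from the card is the `botw style` prompt token; the repo id is a hypothetical placeholder, so substitute the actual repository name.

```python
def botw_prompt(subject: str) -> str:
    """Append the style token the model was trained on."""
    return f"{subject}, botw style"


def generate(subject: str, repo_id: str = "s3nh/zelda-botw-stable-diffusion"):
    """Render one image with a StableDiffusionPipeline.

    `repo_id` is a hypothetical placeholder. Not called here, since it
    downloads the full checkpoint and needs a GPU.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        repo_id, torch_dtype=torch.float16
    ).to("cuda")
    return pipe(botw_prompt(subject)).images[0]
```

For example, `generate("a ruined temple on a green cliffside")` would render the scene in the trained painterly style, because the helper appends the `botw style` token the fine-tune responds to.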