Ali-Yaser
Qwen-turkis
Llama-3.3-krix-v2
Gemma-3-27b-krix-v2
Qwen-turkish
This model was finetuned and converted to GGUF format using Unsloth.

Example usage:
- For text-only LLMs: `llama-cli --hf repoid/modelname -p "why is the sky blue?"`
- For multimodal models: `llama-mtmd-cli -m modelname.gguf --mmproj mmprojfile.gguf`

Available model files:
- `qwen3-4b-instruct-2507.F16.gguf`

Ollama: an Ollama Modelfile is included for easy deployment.

This model is a fine-tuned version for Turkish and is currently at version v0.1. The datasets used are publicly available.
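For deployment with Ollama, a minimal Modelfile for the F16 GGUF listed above might look like the sketch below. This is an assumption for illustration, not the Modelfile bundled with the repository; the sampling parameters are placeholder defaults to tune.

```
# Minimal Ollama Modelfile sketch (assumed values, not the bundled file)
FROM ./qwen3-4b-instruct-2507.F16.gguf

# Placeholder sampling defaults; adjust for your use case
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```

You would then build and run it with `ollama create qwen-turkish -f Modelfile` followed by `ollama run qwen-turkish` (the model name here is arbitrary).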
LLaMA-3.1-turkis-8b
(Important warning) If you are using the model in GGUF format, you need to configure the prompt template. The recommended stop parameters are:
- `PARAMETER stop "### Instruction:"`
- `PARAMETER stop "### Response:"`

GGUF models:
- https://huggingface.co/mradermacher/LLaMA-3.1-turkis-8b-GGUF
- https://huggingface.co/mradermacher/LLaMA-3.1-turkis-8b-i1-GGUF

Thanks to mradermacher for converting the model to GGUF format.

- Developed by: Ali-Yaser
- License: llama3.1
- Finetuned from model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

This Llama 3.1 8B model was fine-tuned on a dataset of roughly 1M tokens; the current model version is v0.2. The model is still new and may produce incorrect responses.
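The stop parameters above can be combined with an Alpaca-style template in an Ollama Modelfile. The sketch below is an assumption: the GGUF filename is hypothetical, and the `TEMPLATE` wording is inferred from the `### Instruction:`/`### Response:` markers rather than taken from the card.

```
# Sketch of an Ollama Modelfile for a GGUF build (filename is hypothetical)
FROM ./LLaMA-3.1-turkis-8b.gguf

# Alpaca-style template inferred from the stop markers recommended above
TEMPLATE """### Instruction:
{{ .Prompt }}

### Response:
"""

# Stop sequences recommended by the model card
PARAMETER stop "### Instruction:"
PARAMETER stop "### Response:"
```

Without these stop sequences, the model may keep generating past its answer and emit a new `### Instruction:` block on its own.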