mosama
Qwen3-VL-8B-Instruct-GGUF
Qwen2.5-VL-3B-Instruct-GGUF
LFM2-VL-450M-GGUF
Qwen3-Embedding-4B-GGUF
Yehia-7B-preview-W4A16_ASYM
Qwen3Guard-Gen-0.6B-GGUF
Yehia-7B-preview-GGUF
whisper_small_full_finetune
Full fine-tuning (in float32) of Whisper Small on the SADA 2022 dataset for Arabic transcription. The decoder is forced to use the Arabic special tokens. Fine-tuned from openai/whisper-small.
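A minimal sketch of using this checkpoint for Arabic transcription with the transformers library. The repo id `mosama/whisper_small_full_finetune` is an assumption based on the listing above, and forcing the Arabic language/task tokens via `get_decoder_prompt_ids` is the standard Whisper mechanism for what the card describes.

```python
def build_decoder_prompt(language_token: str = "<|ar|>",
                         task_token: str = "<|transcribe|>") -> str:
    """Whisper decoder prompts begin with <|startoftranscript|>, followed by
    the language and task special tokens; <|notimestamps|> disables
    timestamp prediction. This shows the token sequence being forced."""
    return "<|startoftranscript|>" + language_token + task_token + "<|notimestamps|>"


def transcribe(audio_array, sampling_rate=16_000,
               repo_id="mosama/whisper_small_full_finetune"):
    # repo_id is an assumed name for illustration; substitute the real one.
    from transformers import WhisperForConditionalGeneration, WhisperProcessor

    processor = WhisperProcessor.from_pretrained(repo_id)
    model = WhisperForConditionalGeneration.from_pretrained(repo_id)
    # Force the Arabic language and transcribe task tokens, matching training.
    forced_ids = processor.get_decoder_prompt_ids(language="arabic",
                                                  task="transcribe")
    inputs = processor(audio_array, sampling_rate=sampling_rate,
                       return_tensors="pt")
    generated = model.generate(inputs.input_features,
                               forced_decoder_ids=forced_ids)
    return processor.batch_decode(generated, skip_special_tokens=True)[0]
```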
AIN-Q4_K_M-GGUF
whisper-tiny-float32-sada-v1
Qwen2.5-0.5B-Pretraining-ar-eng-urd-LoRA-Adapters
Qwen2.5-1.5B-Instruct-CoT-Reflection
This model has been fine-tuned from Qwen2.5-1.5B-Instruct on data that elicits step-by-step chain-of-thought responses with reflections. It was trained with Unsloth using LoRA and 4-bit quantization. For the best responses, use the recommended prompt, which asks the model for two sections:
- Thinking: break down the task and develop a clear, step-by-step plan to solve it. Use chain-of-thought reasoning, working through each step thoughtfully and logically and reflecting on each part of the process as you go. Write each step thoroughly, addressing all key points, in numbered steps (1, 2, 3, ...). After each step, include a reflection that validates the reasoning; if any part of the thought process seems flawed, correct it there and continue.
- Output: once the thinking process is complete, provide the final solution in this section. Ensure that the final answer is concise and focused on the core solution.
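The prompting pattern the card recommends can be sketched as a small helper that wraps a user question in the CoT-with-reflection system prompt. The instruction wording follows the card's description, but the exact section labels ("Thinking:", "Output:") and delimiters of the original prompt are assumptions here.

```python
THINKING_INSTRUCTIONS = (
    "Break down the task and develop a clear, step-by-step plan to solve it. "
    "Use chain-of-thought reasoning, reflecting on each part of the process "
    "as you go. Write each step thoroughly in numbered steps (1, 2, 3, ...), "
    "and after each step include a reflection that validates your reasoning; "
    "if any part seems flawed, correct it there and continue."
)
OUTPUT_INSTRUCTIONS = (
    "Once the thinking process is complete, provide the final solution. "
    "Ensure the final answer is concise and focused on the core solution."
)


def build_messages(question: str) -> list:
    """Wrap a user question in the reconstructed CoT/reflection system prompt."""
    system = ("Thinking:\n" + THINKING_INSTRUCTIONS + "\n\n"
              "Output:\n" + OUTPUT_INSTRUCTIONS)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```

The resulting message list can be passed to `tokenizer.apply_chat_template(...)` before generation, as usual for Qwen2.5-Instruct models.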