# Orbita-v0.1

Orbita-v0.1 is a Turkish large language model with capabilities across multiple dimensions of the Turkish language, covering a variety of tasks such as coding and math. It is an extended version of a Qwen-based large language model (LLM) for Turkish, trained to carry out Turkish instructions in an accurate and organized manner. The model was fully fine-tuned on 8x H100 GPUs for 2 days using a cleaned and carefully annotated Turkish dataset.

## Model Details
- Base Model: Qwen 14B based LLM
- Training Dataset: Annotated Turkish Dataset
- Training Method: Full fine-tuning
## Benchmark Results

| Metric                        | Value |
|-------------------------------|------:|
| Avg.                          | 49.47 |
| AI2 Reasoning Challenge (tr)  | 41.97 |
| HellaSwag (tr)                | 48.00 |
| MMLU (tr)                     | 49.51 |
| TruthfulQA (tr)               | 50.78 |
| Winogrande (tr)               | 56.16 |
| GSM8K (tr)                    | 50.41 |