- Developed by: James Phifer (NexusMind.tech)
- Funded by: Tristian (Shuttle.ai)
- License: Apache-2.0
- Finetuned from: Qwen/Qwen2.5-VL-7B-Instruct
- Architecture: Transformer-based LLM
Overview: This model is designed to efficiently handle complex mathematical questions using Chain-of-Thought (CoT) reasoning. A minimal inference sketch follows the capability and limitation lists below.
- Capabilities:
  - General-purpose LLM
  - Strong performance on multi-step reasoning tasks
  - Designed to respond to requests ethically and to avoid harmful outputs
- Limitations:
  - Not yet extensively evaluated
  - May generate incorrect or biased outputs in some contexts
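The weights are a finetune of Qwen/Qwen2.5-VL-7B-Instruct, so they can be loaded with the standard Qwen2.5-VL classes in transformers. The sketch below is a minimal, text-only usage example under that assumption; the repo id is a placeholder, since this card does not state where the finetuned weights are published.

```python
# Minimal text-only inference sketch for a Qwen2.5-VL finetune.
# The repo id below is a placeholder; substitute the actual published model name.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "NexusMind/qwen2.5-vl-7b-math-cot"  # hypothetical repo id

processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires the `accelerate` package
)

messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": "Solve step by step: if 3x + 7 = 22, what is x?"}],
    }
]

# Render the chat template and tokenize the prompt.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], return_tensors="pt").to(model.device)

# Generate a chain-of-thought style answer and strip the prompt tokens.
output_ids = model.generate(**inputs, max_new_tokens=512)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```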
Dataset: Trained on a 120k-line Chain-of-Thought dataset for mathematical reasoning.
Training Hardware: 1x NVIDIA A100 80 GB GPU (provided by Tristian at Shuttle.ai)
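The schema of the CoT dataset is not documented in this card. Purely as an illustration, a single training example could be rendered into the base model's chat format as below; the `question`, `reasoning`, and `answer` fields are hypothetical placeholders, not the dataset's actual layout.

```python
# Illustrative sketch only: field names are placeholders, since the real
# 120k-line dataset's format is not described in this card.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")

example = {
    "question": "A train travels 120 km in 1.5 hours. What is its average speed?",
    "reasoning": "Average speed = distance / time = 120 km / 1.5 h = 80 km/h.",
    "answer": "80 km/h",
}

# Fold the chain-of-thought and final answer into the assistant turn so the
# model learns to produce its reasoning before the answer.
messages = [
    {"role": "user", "content": example["question"]},
    {
        "role": "assistant",
        "content": f"{example['reasoning']}\n\nFinal answer: {example['answer']}",
    },
]

text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)
```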
Status: Not yet formally evaluated.
Preliminary Results:
- Provides detailed, well-structured answers
- Performs well on long-form mathematical problems