LLMYourWay

sci-m-wang

4 models

Emotion_inferencer-Qwen2.5-7B-Instruct


Chief_chain_generator-Qwen2.5-7B-Instruct


Phi-3-mini-4k-instruct-sa-v0.1

This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on the LangGPTcommunity, the LangGPTalpaca and the LangGPTseed datasets.

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 5.0

Framework versions:

  • PEFT 0.10.0
  • Transformers 4.41.1
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric              | Value |
|---------------------|------:|
| Avg.                | 25.55 |
| IFEval (0-Shot)     | 50.21 |
| BBH (3-Shot)        | 36.61 |
| MATH Lvl 5 (4-Shot) | 13.14 |
| GPQA (0-shot)       | 10.51 |
| MuSR (0-shot)       |  9.65 |
| MMLU-PRO (5-shot)   | 33.17 |
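The hyperparameter names above follow the standard Hugging Face Trainer convention, where the reported total_train_batch_size is derived rather than set directly. A minimal sketch (not the author's actual training script, which is not shown here) checking that the listed values are internally consistent:

```python
# Hyperparameters as listed on the model card (Hugging Face Trainer fields).
hparams = {
    "learning_rate": 5e-05,
    "train_batch_size": 2,            # per-device batch size
    "eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 8, # updates applied every 8 micro-batches
    "lr_scheduler_type": "cosine",
    "num_epochs": 5.0,
}

# The effective (total) train batch size is the per-device batch size
# multiplied by the number of accumulation steps: 2 * 8 = 16, which
# matches the reported total_train_batch_size of 16.
effective_batch = hparams["train_batch_size"] * hparams["gradient_accumulation_steps"]
print(effective_batch)
```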

llama-factory

internlm2-7b-sa-v0.1


The definitive AI model comparison platform. Compare 12K+ models, track performance, and discover the perfect AI solution for your needs.


12,000+ AI Models Tracked & Updated Daily
© 2026 LLMYourWay. All rights reserved.
Data updated every 4 hours