
langgptai

5 models

Qwen-sft-la-v0.1

llama-factory

Yi-1.5-6B-Chat-sa-v0.1

llama-factory

Qwen-sft-ls-v0.1

llama-factory

Qwen-las-v0.1

This model is a fine-tuned version of /datas/huggingface/Qwen1.5-4B-Chat on the LangGPTcommunity, the LangGPTalpaca and the LangGPTseed datasets.

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 10.0
  • mixed_precision_training: Native AMP

Framework versions:

  • PEFT 0.10.0
  • Transformers 4.40.2
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1

llama-factory
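The hyperparameters on the Qwen-las-v0.1 card are internally consistent: the total_train_batch_size of 16 is the micro-batch size multiplied by the gradient accumulation steps, and the cosine scheduler decays the learning rate from its 5e-05 peak toward zero. A minimal sketch of that arithmetic, assuming a single-device run and the standard post-warmup cosine decay formula (the device count and the `cosine_lr` helper are illustrative, not from the card):

```python
import math

# Hyperparameters taken from the card above.
train_batch_size = 2            # per-device micro-batch size
gradient_accumulation_steps = 8
num_devices = 1                 # assumption: the card lists no multi-GPU setup

# Effective (total) train batch size: micro-batch x accumulation x devices.
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 16, matching total_train_batch_size on the card

def cosine_lr(step: int, total_steps: int, peak_lr: float = 5e-05) -> float:
    """Cosine learning-rate decay from peak_lr to 0 (no warmup)."""
    progress = step / total_steps
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(0, 1000))     # starts at the peak: 5e-05
print(cosine_lr(1000, 1000))  # decays to ~0 at the end of training
```

With only one optimizer update per 16 examples, a 10-epoch run on a small dataset produces relatively few scheduler steps, which is typical for LoRA-style PEFT fine-tunes like the ones in this listing.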

chatglm3-6b_sa_v0.1

llama-factory
© 2026 LLMYourWay. All rights reserved.