langgptai
5 models
Qwen-sft-la-v0.1 (llama-factory)
Yi-1.5-6B-Chat-sa-v0.1 (llama-factory)
Qwen-sft-ls-v0.1 (llama-factory)
Qwen-las-v0.1 (llama-factory)

This model is a fine-tuned version of /datas/huggingface/Qwen1.5-4B-Chat on the LangGPT community, LangGPT alpaca, and LangGPT seed datasets.

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10.0
- mixed_precision_training: Native AMP

Framework versions:
- PEFT 0.10.0
- Transformers 4.40.2
- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
chatglm3-6b_sa_v0.1 (llama-factory)
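The hyperparameters in the Qwen-las-v0.1 card above are related: the total train batch size of 16 is the per-device batch size of 2 multiplied by 8 gradient accumulation steps, and the cosine scheduler decays the learning rate from 5e-05 toward zero over training. A minimal sketch of both relationships, assuming a plain cosine decay with no warmup (the actual scheduler used by Transformers/LLaMA-Factory may add warmup steps); the `cosine_lr` helper is illustrative, not part of either library:

```python
import math

# Hyperparameters copied from the Qwen-las-v0.1 model card.
learning_rate = 5e-05
train_batch_size = 2            # micro-batch per optimizer sub-step
gradient_accumulation_steps = 8

# Gradients from 8 micro-batches of 2 are accumulated before each
# optimizer step, giving the card's total_train_batch_size of 16.
total_train_batch_size = train_batch_size * gradient_accumulation_steps

def cosine_lr(step: int, total_steps: int, peak_lr: float = learning_rate) -> float:
    """Cosine decay from peak_lr at step 0 to 0 at total_steps (no warmup).

    Illustrative only: mirrors the shape of lr_scheduler_type=cosine.
    """
    progress = step / total_steps
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

print(total_train_batch_size)        # effective batch size per update
print(cosine_lr(0, 1000))            # starts at the peak learning rate
print(cosine_lr(1000, 1000))         # decays to zero at the end
```

Halfway through training, `cosine_lr` returns exactly half the peak rate, which is the characteristic midpoint of a cosine schedule.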