lkoenig


BBAI_230_Xiaqwen

This model is a SLERP merge of pre-trained language models, created with mergekit from a YAML merge configuration. The following models were included in the merge:

- gz987/qwen2.5-7b-cabs-v0.3
- Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
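The per-model YAML configurations were not preserved on this page. As an illustration only, a minimal mergekit SLERP configuration for a merge like the one above might look like the following; the base_model choice, layer_range, interpolation weights t, and dtype here are assumptions, not the values actually used:

```yaml
# Hypothetical mergekit SLERP config -- illustrative values only
slices:
  - sources:
      - model: gz987/qwen2.5-7b-cabs-v0.3
        layer_range: [0, 28]   # assumed layer count for a Qwen2.5-7B-class model
      - model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
        layer_range: [0, 28]
merge_method: slerp
base_model: gz987/qwen2.5-7b-cabs-v0.3
parameters:
  t:                           # interpolation factor: 0 = base model, 1 = other model
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5               # default for all remaining tensors
dtype: bfloat16
```

A file like this is typically applied with the mergekit CLI, e.g. `mergekit-yaml config.yaml ./merged-model`.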

BBAI_212_Qwencore

This model is a SLERP merge of pre-trained language models, created with mergekit from a YAML merge configuration. The following models were included in the merge:

- bunnycore/Qwen2.5-7B-CyberRombos
- bunnycore/Qwen-2.5-7B-Deep-Stock-v4

BBAI_200_Gemma

This model is a SLERP merge of pre-trained language models, created with mergekit from a YAML merge configuration. The following models were included in the merge:

- allknowingroger/Gemma2Slerp2-27B
- allknowingroger/Gemma2Slerp3-27B

BBAI_145_

This model is a SLERP merge of pre-trained language models, created with mergekit from a YAML merge configuration. The following models were included in the merge:

- gz987/qwen2.5-7b-cabs-v0.3
- gz987/qwen2.5-7b-cabs-v0.4

BBAI_456_QwenKoen

This model is a SLERP merge of pre-trained language models, created with mergekit from a YAML merge configuration. The following models were included in the merge:

- Lawnakk/BBALAW1.61
- Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview

BBAI_7B_Qwen2.5koen

This model is a SLERP merge of pre-trained language models, created with mergekit from a YAML merge configuration. The following models were included in the merge:

- Qwen/Qwen2.5-7B-Instruct
- gz987/qwen2.5-7b-cabs-v0.3

BBAI_212_QwenLawLo

This model is a SLERP merge of pre-trained language models, created with mergekit from a YAML merge configuration. The following models were included in the merge:

- Lawnakk/BBALAW1.61
- gz987/qwen2.5-7b-cabs-v0.3

BBAI_7B_KoenQwenDyan

This model is a SLERP merge of pre-trained language models, created with mergekit from a YAML merge configuration. The following models were included in the merge:

- Qwen/Qwen2.5-7B-Instruct
- lkoenig/BBAI7BQwenDyancabsLAW
