Supichi
BBA99
Base models: Qwen/Qwen2.5-32B-Instruct, zetasepic/Qwen2.5-32B-Instruct-abliterated-v2.
BBAI_135_Gemma
Base models include allknowingroger/Gemma2Slerp2-27B and allknowingroger/Gemma2Slerp3-27B.
BBAI_525_Tsu_gZ_Xia0
Base models: Supichi/BBAI_275_Tsunami_gZ, Supichi/BBAI_250_Xia0_gZ.
HF_TOKEN
Base models: Qwen/Qwen2.5-32B-Instruct, tanliboy/lambda-qwen2.5-32b-dpo-test.
BBAI_250_Xia0_gZ
Base models: gz987/qwen2.5-7b-cabs-v0.4, Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview-v0.2.
NJS26
Base models: CorticalStack/pastiche-crown-clown-7b-dare-dpo, BioMistral/BioMistral-7B.
BBA-123
Base models: Qwen/Qwen2.5-Coder-32B-Instruct, maldv/Awqward2.5-32B-Instruct.
BBAI_QWEEN_V000000_LUMEN_14B
Base models: v000000/Qwen2.5-Lumen-14B, Qwen/Qwen2.5-14B.
BBA456
This is a merge of pre-trained language models created using mergekit, merged with the SLERP merge method. The following models were included in the merge: mistralai/Mathstral-7B-v0.1 and mistralai/Mistral-7B-v0.1. The following YAML configuration was used to produce this model:
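The actual configuration is not reproduced in the source. As a hedged sketch only, a mergekit SLERP config merging these two models typically looks like the following; the layer ranges, interpolation schedule `t`, choice of base model, and dtype below are illustrative assumptions, not the values used for BBA456:

```yaml
# Illustrative mergekit SLERP config (values are assumptions, not the original)
slices:
  - sources:
      - model: mistralai/Mathstral-7B-v0.1
        layer_range: [0, 32]
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    # Per-tensor interpolation factors; 0 = base model, 1 = the other model
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5   # default for all remaining tensors
dtype: bfloat16
```

SLERP (spherical linear interpolation) blends two checkpoints along the great-circle arc between weight vectors rather than a straight line, which is why it takes a single interpolation schedule `t` and exactly two source models.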
BBAI_275_Tsunami_gZ
Base models: Tsunami-th/Tsunami-1.0-7B-Instruct, gz987/qwen2.5-7b-cabs-v0.4.
BBAIK29
Base models include Supichi/BBAI_230_Xiaqwen and Supichi/BBAI_250_Xia0_gZ.