RonanMcGovern
7 models
- deepseek-coder-1.3b-base-chat-function-calling-v3-adapters-local (llama)
- TinyLlama-1.1B-intermediate-step-480k-1T-chat-llama-style-adapters (llama)
- Llama 2 7b Chat Hf Function Calling Adapters (base model: meta-llama/Llama-2-7b-chat-hf)
- TinyLlama-1.1B-intermediate-step-1431k-3T-SFT-adapters-local (llama)
- Llama-2-7b-hf-function-calling-adapters (base model: meta-llama/Llama-2-7b-hf)

  The following `bitsandbytes` quantization config was used during training:
  - load_in_8bit: False
  - load_in_4bit: True
  - llm_int8_threshold: 6.0
  - llm_int8_skip_modules: None
  - llm_int8_enable_fp32_cpu_offload: False
  - llm_int8_has_fp16_weight: False
  - bnb_4bit_quant_type: nf4
  - bnb_4bit_use_double_quant: True
  - bnb_4bit_compute_dtype: bfloat16
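As a minimal sketch, the quantization settings listed above correspond to the keyword arguments of `transformers.BitsAndBytesConfig`. The dict below just collects those values for illustration (so it runs without `transformers` or `torch` installed); the commented lines show how they would be passed for real use, where `bnb_4bit_compute_dtype` must be a torch dtype rather than a string.

```python
# The 4-bit NF4 settings from the model card above, as BitsAndBytesConfig kwargs.
quant_kwargs = {
    "load_in_4bit": True,                  # store weights in 4-bit
    "bnb_4bit_quant_type": "nf4",          # NormalFloat4 quantization
    "bnb_4bit_use_double_quant": True,     # also quantize the quantization constants
    "bnb_4bit_compute_dtype": "bfloat16",  # matmuls computed in bf16
    "llm_int8_threshold": 6.0,             # outlier threshold (int8 path)
}

# With transformers and torch installed, this would become:
# import torch
# from transformers import BitsAndBytesConfig
# bnb_config = BitsAndBytesConfig(
#     **{**quant_kwargs, "bnb_4bit_compute_dtype": torch.bfloat16}
# )
```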
- Llama-2-7b-chat-hf-function-calling-adapters-v2 (base model: meta-llama/Llama-2-7b-chat-hf)
- all-MiniLM-L12-v2-ft