mergekit-linear-tqwumtt
llama · by djuna-test-lab
Language Model · License: other · ~8B params
New · Early-stage · 1 download
Quick Summary
This model is a linear merge of two pre-trained Llama-based models, created with mergekit: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated (weight 0.7) and allenai/Llama-3.1-Tulu-3.1-8B (weight 0.3), with the tokenizer taken from the DeepSeek distill.
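The merged weights load like any other Llama-family checkpoint. Below is a minimal inference sketch using Hugging Face transformers; the repo id djuna-test-lab/mergekit-linear-tqwumtt is inferred from this page's name and author and is an assumption, as are the prompt and generation settings.

Example (Python)

# Minimal inference sketch using Hugging Face transformers.
# ASSUMPTION: the merge is published on the Hub as
# "djuna-test-lab/mergekit-linear-tqwumtt" (inferred, not confirmed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "djuna-test-lab/mergekit-linear-tqwumtt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # place weights on available GPU/CPU memory
)

messages = [{"role": "user", "content": "Explain linear model merging briefly."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))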
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 16GB+ RAM
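These tiers track the usual weight-only memory arithmetic for an ~8B-parameter model. The sketch below shows that arithmetic; the figures ignore activations and KV-cache overhead, so treat them as lower bounds.

Example (Python)

# Back-of-envelope weight memory for an ~8B-parameter model.
# Activations and KV cache add further overhead at inference time.
PARAMS = 8.0e9  # approximate parameter count of the merged model

for precision, bytes_per_param in [("fp16/bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{precision}: ~{gib:.1f} GiB")

# fp16/bf16: ~14.9 GiB -> matches the 16GB laptop tier
# int8:      ~7.5 GiB
# int4:      ~3.7 GiB  -> fits the 4-6GB mobile tier (quantized)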
Code Examples
Configuration (YAML)

models:
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
    parameters:
      weight: 0.7
  - model: allenai/Llama-3.1-Tulu-3.1-8B
    parameters:
      weight: 0.3
merge_method: linear
tokenizer_source: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
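This config can be rerun with mergekit's mergekit-yaml CLI (mergekit-yaml config.yaml ./output-model-directory). Conceptually, the linear method averages the two checkpoints tensor by tensor with the given weights; the sketch below illustrates that idea only and is not mergekit's actual implementation (which, among other things, normalizes weights and handles sharded loading).

Example (Python)

# Toy illustration of a linear merge: each tensor of the result is the
# weight-normalized average of the corresponding source tensors.
# Illustrative only; not mergekit's actual code.
import torch

def linear_merge(state_dicts, weights):
    total = sum(weights)  # normalize, as mergekit's linear method does by default
    merged = {}
    for name, tensor in state_dicts[0].items():
        merged[name] = sum(
            (w / total) * sd[name].to(torch.float32)
            for sd, w in zip(state_dicts, weights)
        ).to(tensor.dtype)
    return merged

# With the configuration above: weights 0.7 and 0.3 over the two source models.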