HMS-Slerp-12B-v2

by yamatazen · Language Model · 12.0B params · 2 languages · License: OTHER
New · 0 downloads · Early-stage

Edge AI: Mobile · Laptop · Server (27GB+ RAM)
Quick Summary

This model is a SLERP merge of pre-trained language models created with mergekit, combining yamatazen/Shisa-v2-Mistral-Nemo-12B-Abliterated (the base) with yamatazen/Himeyuri-Magnum-12B.
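A minimal sketch of loading the merged model with the Hugging Face transformers library, assuming the model is published under the repo id yamatazen/HMS-Slerp-12B-v2 (the id is taken from the page title, not verified):

```python
# Minimal loading sketch; assumes the repo id below exists on the Hub.
# bfloat16 weights for a 12B model need roughly 24 GB of memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yamatazen/HMS-Slerp-12B-v2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

prompt = "Hello!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```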

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 12GB+ RAM
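As a rough sanity check on these figures, weight memory for a 12B-parameter model scales with bytes per parameter. A short arithmetic sketch (assuming weights dominate memory and ignoring activation, KV-cache, and framework overhead):

```python
# Back-of-the-envelope weight memory for a 12B-parameter model.
# Real usage adds activations, KV cache, and framework overhead.
params = 12e9

for name, bytes_per_param in [("bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = params * bytes_per_param / 1024**3
    print(f"{name}: ~{gb:.1f} GB")

# bf16: ~22.4 GB -> consistent with the 27GB+ full-precision figure above
# int8: ~11.2 GB -> near the 12GB+ minimum recommendation
# int4: ~5.6 GB  -> within the 4-6GB mobile range
```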

Code Examples

Configuration (mergekit YAML):

```yaml
base_model: yamatazen/Shisa-v2-Mistral-Nemo-12B-Abliterated
models:
  - model: yamatazen/Himeyuri-Magnum-12B
merge_method: slerp
dtype: bfloat16
parameters:
  normalize: true
  t: [0.25, 0.3, 0.5, 0.3, 0.25]
tokenizer:
  source: union
```
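For intuition, SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than along a straight line, and the t list above sets a different interpolation factor per layer-depth band. A minimal NumPy sketch of the per-tensor operation, illustrative of the technique rather than mergekit's actual implementation:

```python
# Illustrative SLERP of two weight tensors (not mergekit's code).
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between tensors a and b with factor t in [0, 1]."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(a_unit @ b_unit, -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two weight vectors
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = np.sin(omega)
    mixed = (np.sin((1 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape)

# t: [0.25, 0.3, 0.5, 0.3, 0.25] keeps shallow and deep layers closer to the
# base model (t≈0.25) while middle layers mix more evenly (t=0.5).
w_base = np.random.randn(4, 4)
w_other = np.random.randn(4, 4)
print(slerp(w_base, w_other, t=0.5))
```

The config itself is typically executed with mergekit's command-line tool (per mergekit's documentation, something like `mergekit-yaml config.yaml ./output-model-directory`).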

Deploy This Model

Production-ready deployment in minutes

Together.ai

Instant API access to this model

Fastest API

Production-ready inference API. Start free, scale to millions.

Try Free API
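A sketch of querying the model through an OpenAI-compatible endpoint such as Together's, assuming the model were hosted there under the id below (both the id and its availability are assumptions, not confirmed):

```python
# Hypothetical OpenAI-compatible API call; model id and availability assumed.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",  # Together's OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="yamatazen/HMS-Slerp-12B-v2",  # assumed id; check the provider's catalog
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```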

Replicate

One-click model deployment

Easiest Setup

Run models in the cloud with simple API. No DevOps required.

Deploy Now
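Similarly, if the model were published on Replicate, a call through its Python client would look roughly like this (the model reference below is hypothetical):

```python
# Hypothetical Replicate call; the model reference is illustrative only.
import replicate

output = replicate.run(
    "yamatazen/hms-slerp-12b-v2",  # hypothetical reference; check replicate.com
    input={"prompt": "Hello!", "max_new_tokens": 64},
)
print("".join(output))  # LLM outputs stream as an iterator of strings
```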

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.