L3-Nymeria-v2-8B

by tannedbum

- Type: Language Model (llama architecture)
- Parameters: 8.0B
- Languages: 1
- License: OTHER
- Downloads: 1
- Status: New, early-stage
- Edge AI: Mobile, Laptop, Server (18GB+ RAM)
Quick Summary

- Upgraded SimPO.
- A touch of 3SOME, Lumimaid and Jamet Blackroot, resulting in slightly different prose and a wider RP vocabulary.
- Leans slightly more on NSFW than t...

Device Compatibility

- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 8GB+ RAM
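
To fit the lower memory tiers above, the model can be loaded in 4-bit. The following is a minimal sketch using Hugging Face transformers with bitsandbytes; the repo id is assumed from this card's title and author, and the quantization settings are illustrative rather than part of the card.

# Sketch: load L3-Nymeria-v2-8B in 4-bit so it fits the smaller RAM/VRAM tiers above.
# The repo id "tannedbum/L3-Nymeria-v2-8B" is assumed from this card's title and author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "tannedbum/L3-Nymeria-v2-8B"  # assumed repo id

# 4-bit NF4 quantization roughly quarters the memory footprint of the bf16 weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)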

Code Examples

SillyTavern (Text Completion presets)

temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
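
Outside SillyTavern, roughly the same sampling behaviour can be requested from a GGUF build through llama-cpp-python. This is a hedged sketch: the GGUF filename and prompt are hypothetical, min_p requires a reasonably recent llama-cpp-python, and SillyTavern's smoothing factor/curve have no direct equivalent in this API, so they are omitted.

# Sketch: approximate the SillyTavern sampler preset above with llama-cpp-python.
# The GGUF path is hypothetical; in practice you would also wrap the prompt in the
# Llama-3 Instruct chat format rather than sending raw text as done here.
from llama_cpp import Llama

llm = Llama(model_path="L3-Nymeria-v2-8B.Q4_K_M.gguf", n_ctx=8192)

out = llm(
    "You are Nymeria, a sharp-tongued roleplay partner.\nUser: Introduce yourself.\nNymeria:",
    max_tokens=256,
    temperature=0.9,
    top_k=30,
    top_p=0.75,
    min_p=0.2,
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])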
Configuration (YAML)

The final model is the last step in a chain of slerp merges: each config below produces the model named beneath it, and those intermediates feed the following merge.

slices:
  - sources:
      - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
        layer_range: [0, 32]
      - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
        layer_range: [0, 32]
merge_method: slerp
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0.7, 0.3, 0.3, 0.3]
    - filter: mlp
      value: [0.3, 0.7, 0.7, 0.7]
    - value: 0.4
dtype: bfloat16

L3-Lumimaid-Jamet-Blackroot-8B


slices:
  - sources:
      - model: tannedbum/L3-Lumimaid-Jamet-Blackroot-8B
        layer_range: [0, 32]
      - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Lumimaid-Jamet-Blackroot-8B
parameters:
  t:
    - filter: self_attn
      value: [0.3, 0.7, 0.7, 0.7]
    - filter: mlp
      value: [0.7, 0.3, 0.3, 0.3]
    - value: 0.6
dtype: bfloat16

L3-SimPO-Lumimaid-Jamet-Blackroot-8B


slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: TheDrummer/Llama-3SOME-8B-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  t:
    - filter: self_attn
      value: [0.3, 0.3, 0.7, 0.3]
    - filter: mlp
      value: [0.7, 0.7, 0.3, 0.7]
    - value: 0.4
dtype: bfloat16

L3-Stheno-3SOME-8B


slices:
  - sources:
      - model: tannedbum/L3-Stheno-3SOME-8B
        layer_range: [0, 32]
      - model: tannedbum/L3-SimPO-Lumimaid-Jamet-Blackroot-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Stheno-3SOME-8B
parameters:
  t:
    - filter: self_attn
      value: [0.4, 0.3, 0.3, 0.6]
    - filter: mlp
      value: [0.6, 0.7, 0.7, 0.4]
    - value: 0.4
dtype: bfloat16

L3-Nymeria-v2-8B
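
To reproduce any of these merges, the configs can be run through mergekit. Below is a minimal sketch for the final step, assuming mergekit is installed (pip install mergekit) and using illustrative file and output names.

# Sketch: reproduce the final slerp merge with mergekit (arcee-ai/mergekit).
# File and output directory names are illustrative; the YAML is copied from the card above.
import subprocess
from pathlib import Path

config = """\
slices:
  - sources:
      - model: tannedbum/L3-Stheno-3SOME-8B
        layer_range: [0, 32]
      - model: tannedbum/L3-SimPO-Lumimaid-Jamet-Blackroot-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Stheno-3SOME-8B
parameters:
  t:
    - filter: self_attn
      value: [0.4, 0.3, 0.3, 0.6]
    - filter: mlp
      value: [0.6, 0.7, 0.7, 0.4]
    - value: 0.4
dtype: bfloat16
"""

Path("nymeria-v2.yaml").write_text(config)

# mergekit-yaml downloads the source models and writes the merged weights to the output dir.
subprocess.run(["mergekit-yaml", "nymeria-v2.yaml", "./L3-Nymeria-v2-8B"], check=True)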

Deploy This Model

- Together.ai: instant, production-ready inference API access to this model; start free and scale up.
- Replicate: one-click cloud deployment through a simple API, no DevOps required.
