smollm2-135m-soup1

Architecture: llama
Author: ThomasTheMaker
Type: Language Model
License: Other
Parameters: 135M
Downloads: 3
Status: New, early-stage
Edge AI targets: Mobile, Laptop, Server
Quick Summary

This is a merge of pre-trained language models created using mergekit.
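
The merged checkpoint can be loaded like any other Hugging Face causal language model. A minimal sketch, assuming the model is published on the Hub as ThomasTheMaker/smollm2-135m-soup1 (repo id inferred from the author and model name above; verify before use):

Usage (Python)
# Minimal sketch: load the merged model and generate text with transformers.
# The repo id is an assumption based on the author/model name on this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ThomasTheMaker/smollm2-135m-soup1"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)

prompt = "Summarize: The meeting covered the Q3 roadmap and hiring plans."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))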

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: see the memory estimate below
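
The weights themselves are small for a 135M-parameter model: at 2 bytes per parameter in float16 they occupy roughly 270MB, before activations, KV cache, and framework overhead. A quick back-of-the-envelope check:

Memory estimate (Python)
# Rough weight-memory estimate only; runtime overhead (activations, KV cache,
# framework buffers) comes on top of this figure.
params = 135_000_000
bytes_per_param = 2  # float16
weight_bytes = params * bytes_per_param
print(f"~{weight_bytes / 1e6:.0f} MB of weights")  # ~270 MB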

Code Examples

Configuration (YAML)
models:
  - model: mnoukhov/SmolLM2-135M-Instruct_tldr-sft
    parameters:
      weight: 1.0
  - model: HuggingFaceTB/SmolLM2-135M-Instruct
    parameters:
      weight: 1.0
  - model: HuggingFaceTB/SmolLM2-135M
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
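
The config above combines the three checkpoints with mergekit's linear method at equal weights (1.0 each), i.e. a simple model soup. Below is a sketch of what that computation amounts to, written in plain transformers/PyTorch as an illustration; it is not the mergekit implementation, which also handles tokenizers, sharding, and other edge cases. Model ids are taken from the config.

Merge illustration (Python)
# Illustrative only: an equal-weight linear merge is an element-wise average
# of the parameter tensors across the source models.
import torch
from transformers import AutoModelForCausalLM

model_ids = [
    "mnoukhov/SmolLM2-135M-Instruct_tldr-sft",
    "HuggingFaceTB/SmolLM2-135M-Instruct",
    "HuggingFaceTB/SmolLM2-135M",
]
state_dicts = [
    AutoModelForCausalLM.from_pretrained(m, torch_dtype=torch.float32).state_dict()
    for m in model_ids
]

# Average floating-point tensors; copy any non-float buffers from the first model.
avg_state = {}
for name, tensor in state_dicts[0].items():
    if torch.is_floating_point(tensor):
        avg_state[name] = torch.stack([sd[name] for sd in state_dicts]).mean(dim=0)
    else:
        avg_state[name] = tensor

merged = AutoModelForCausalLM.from_pretrained(model_ids[0], torch_dtype=torch.float32)
merged.load_state_dict(avg_state)
merged = merged.to(torch.float16)  # matches dtype: float16 in the config
merged.save_pretrained("./smollm2-135m-soup1-local")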
