Qwen2.5-Ultra-1.5B-25.02-Exp-v0.2

by Xiaojian9992024
Language Model · Other license · 1.5B params
New · 2 downloads · Early-stage
Edge AI targets: Mobile, Laptop, Server (4GB+ RAM recommended)
Quick Summary

This model is a TIES merge of pre-trained Qwen2.5-1.5B language models, created with mergekit. It combines a short chain-of-thought instruct fine-tune (UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT) and a MATH-focused fine-tune (cutelemonlili) on top of the Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp base model.

Device Compatibility

Device   Recommended
Mobile   4-6GB RAM
Laptop   16GB RAM
Server   GPU
Minimum recommended: 2GB+ RAM
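For the tighter RAM budgets above, 4-bit quantization is the usual way to fit a 1.5B-parameter model, cutting weight memory roughly 4x versus float16. A minimal sketch with transformers and bitsandbytes, assuming a CUDA-capable device and that the merged weights are published under the repo ID shown:

# 4-bit load for low-memory devices (requires the bitsandbytes package and a CUDA GPU).
# Assumption: the repo ID below matches this page's title; adjust if the upload differs.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp-v0.2",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)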

Code Examples

Configuration (YAML)
models:
  - model: Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp
    # no parameters necessary for the base model
  - model: UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT
    parameters:
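      # density: fraction of this fine-tune's delta weights that TIES retains (standard mergekit semantics)
      # weight: relative strength of this model's task vector in the merged result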
      density: 0.5
      weight: 0.5
  - model: cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp
parameters:
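  # normalize: false leaves the summed task vectors unscaled
  # int8_mask: build the TIES sign mask in int8 to save memory during merging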
  normalize: false
  int8_mask: true
dtype: float16
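
Usage (Python)

The merge itself can be reproduced with mergekit's CLI (for example, mergekit-yaml config.yml ./merged-model with a current mergekit install). Once merged or downloaded, the model loads like any Qwen2.5 causal LM. A minimal sketch, assuming the weights are published under the repo ID below and use the standard Qwen2.5 chat template:

# Load the merged model and run one chat-style generation.
# Assumption: the repo ID matches this page's title; adjust if the upload differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Build a chat prompt with the model's built-in chat template.
messages = [{"role": "user", "content": "Solve 12 * 17 step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))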
