li-14b-v0.4

by wanlige · Language Model · 14.0B params · 5 languages · license: apache-2.0 · 26 downloads · New · Early-stage

Edge AI: Mobile · Laptop · Server (32GB+ RAM)
Quick Summary

> [!TIP] This model is currently ranked #1 among models up to 15B parameters and #50 among all models on the Open LLM Leaderboard.

Device Compatibility

- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 14GB+ RAM
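The RAM tiers above follow directly from the parameter count times bytes per weight at common quantization levels. A quick back-of-envelope calculation (weights only; KV cache and runtime overhead add more on top):

```python
# Approximate weight-memory footprint of a 14B-parameter model
# at common precisions. Weights only: KV cache and activations
# add further overhead at inference time.

PARAMS = 14_000_000_000

BYTES_PER_PARAM = {
    "bfloat16": 2.0,   # dtype used in the merge config below
    "int8": 1.0,
    "int4": 0.5,
}

def weight_gb(precision: str) -> float:
    """Weights-only memory in GiB for the given precision."""
    return PARAMS * BYTES_PER_PARAM[precision] / 2**30

for p in BYTES_PER_PARAM:
    print(f"{p:>8}: {weight_gb(p):.1f} GiB")
# bfloat16: ~26.1 GiB, int8: ~13.0 GiB, int4: ~6.5 GiB
```

These figures line up with the tiers: an int4 quant (~6.5 GiB) sits near the mobile budget, int8 (~13 GiB) fits the 14GB+/16GB laptop and server minimums, and full bfloat16 needs 32GB-class hardware.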

Code Examples

Merge configuration (mergekit-style YAML):

```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B                # logic
  - model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2   # uncensored
  - model: Qwen/Qwen2.5-14B                                        # text generation
  - model: Qwen/Qwen2.5-14B-Instruct                               # chat assistant
  - model: Qwen/Qwen2.5-Coder-14B                                  # coding
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M                 # math
  - model: tanliboy/lambda-qwen2.5-14b-dpo-test                    # dpo
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B-Instruct
normalize: true
int8_mask: true
dtype: bfloat16
```
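The config merges seven fine-tuned Qwen2.5-14B variants onto the instruct base with `merge_method: model_stock`. As a rough intuition, a stock-style merge combines corresponding weight tensors across the donor models. The real model_stock method weights the combination using the geometry of the task vectors relative to the base; the sketch below shows only a simplified uniform average on toy tensors, not the actual algorithm:

```python
# Toy illustration of averaging corresponding weight tensors
# across models to be merged. NOTE: real model_stock weighting
# is geometry-aware; this uniform average is only an intuition aid.

def average_weights(models):
    """Average corresponding parameter tensors (plain lists here)."""
    merged = {}
    for key in models[0]:
        columns = zip(*(m[key] for m in models))
        merged[key] = [sum(vals) / len(models) for vals in columns]
    return merged

# Two toy "checkpoints", each with one 3-element layer.
m1 = {"layer.weight": [1.0, 2.0, 3.0]}
m2 = {"layer.weight": [3.0, 4.0, 5.0]}

print(average_weights([m1, m2]))  # {'layer.weight': [2.0, 3.0, 4.0]}
```

In practice a config like the one above is executed with the mergekit tool rather than by hand, and `normalize`/`int8_mask` are options of that tool, not of the averaging math itself.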

Deploy This Model

Production-ready deployment in minutes.

Together.ai (Fastest API): instant API access to this model. Production-ready inference API; start free, scale to millions. [Try Free API]

Replicate (Easiest Setup): one-click model deployment. Run models in the cloud with a simple API; no DevOps required. [Deploy Now]

Disclosure: We may earn a commission from these partners. This helps keep LLMYourWay free.
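Hosted inference providers like these typically expose OpenAI-compatible chat endpoints, so the request body looks the same either way. A minimal sketch of constructing that payload — the model id `wanlige/li-14b-v0.4` and its availability on any given provider are assumptions, so check the provider's model catalog first:

```python
import json

def build_chat_payload(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Hypothetical model id, assuming the Hub naming "author/model".
payload = build_chat_payload(
    "wanlige/li-14b-v0.4",
    "Summarize model merging in one sentence.",
)
print(json.dumps(payload, indent=2))
# POST this body to the provider's /v1/chat/completions endpoint
# with an "Authorization: Bearer <API_KEY>" header.
```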