Tifa-DeepsexV2-7b-MGRPO-GGUF-Q4

7.0B params · Q4 GGUF · 4 languages · license: apache-2.0 · by ValueFX9507
~7K downloads

Edge AI targets: Mobile, Laptop, Server (16GB+ RAM recommended)
Quick Summary

- Base model: Qwen2.5-7B
- GGUF: F16 | Q8 | Q4 (Q4 incurs noticeable quality loss; Q8 or higher is recommended)
- Demo APK: download link
- Simple frontend: GitHub link
- Must-watch tutorial: BiliBili video tutorial

This model is a deeply optimized build of Qwen2.5 7B with a 1,000,000-character context capability, trained with data generated by Tifa220B...

Device Compatibility

- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 7GB+ RAM
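The RAM tiers above follow from the model's size once quantized. As a rough rule of thumb (an approximation, not an official figure from this model card), a GGUF file weighs about params × bits-per-weight / 8, plus runtime overhead for the KV cache and buffers:

```python
# Rough GGUF size estimate. The bits-per-weight figures below are
# approximate averages for common quant types, not values from this
# model card.

def gguf_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in GB for a quantized model."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

# Q4_K_M averages roughly 4.8 bits/weight; Q8_0 roughly 8.5 (approximate).
print(f"Q4: ~{gguf_size_gb(7.0, 4.8):.1f} GB")  # ~4.2 GB, tight on 4-6GB mobile RAM
print(f"Q8: ~{gguf_size_gb(7.0, 8.5):.1f} GB")  # ~7.4 GB, needs the laptop/server tiers
```

This is consistent with the card's advice that Q4 is the floor for mobile devices while Q8 wants the 16GB laptop tier.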

Code Examples

💡 Usage suggestions (Python):
generation_config = {
    "temperature": 0.75,
    "top_p": 0.6,
    "repetition_penalty": 1.08,
    "max_new_tokens": 1536,
    "do_sample": True
}
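Since the distributed artifact is a Q4 GGUF, these transformers-style sampling keys need translating to llama.cpp's flag names to run locally. A minimal sketch of that mapping, assuming the standard llama-cli flags (`--temp`, `--top-p`, `--repeat-penalty`, `-n`); the model filename here is a placeholder, not the exact release name:

```python
# Sketch: map the HF-style generation_config above to llama.cpp CLI
# flags. Flag names follow llama.cpp's command-line tool; the .gguf
# path is a hypothetical placeholder.

generation_config = {
    "temperature": 0.75,
    "top_p": 0.6,
    "repetition_penalty": 1.08,
    "max_new_tokens": 1536,
    "do_sample": True,
}

def to_llama_cpp_args(cfg: dict) -> list[str]:
    """Translate HF-style sampling keys to llama.cpp CLI flags."""
    return [
        "--temp", str(cfg["temperature"]),
        "--top-p", str(cfg["top_p"]),
        "--repeat-penalty", str(cfg["repetition_penalty"]),
        "-n", str(cfg["max_new_tokens"]),  # max tokens to generate
    ]

cmd = ["./llama-cli", "-m", "Tifa-DeepsexV2-7b-Q4.gguf"] + to_llama_cpp_args(generation_config)
print(" ".join(cmd))
```

Note that `do_sample=True` has no direct flag: llama.cpp samples by default whenever temperature is above zero.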
