dreamgen

31 models, sorted by downloads (each entry lists name, tags, downloads, and likes)

opus-v1.2-70b-gguf • 250 downloads • 2 likes
opus-v1.2-7b-gguf • 150 downloads • 13 likes
opus-v1.4-70b-llama3-gguf • license:cc-by-nc-nd-4.0 • 103 downloads • 4 likes
lucid-v1-nemo-GGUF • llama-cpp • 95 downloads • 0 likes

WizardLM-2-7B • license:apache-2.0 • 87 downloads • 36 likes

🤗 HF Repo • 🐱 GitHub Repo • 🐦 Twitter • 📃 [WizardLM] • 📃 [WizardCoder] • 📃 [WizardMath]

We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models:

- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance against leading proprietary models and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leading models 10x its size.

For more details on WizardLM-2, please read our release blog post and upcoming paper.

Model name: WizardLM-2 7B
Developed by: WizardLM@Microsoft AI
Base model: mistralai/Mistral-7B-v0.1
Parameters: 7B
Language(s): Multilingual
Blog: Introducing WizardLM-2
Repository: https://github.com/nlpxucan/WizardLM
Paper: WizardLM-2 (upcoming)
License: Apache 2.0

We adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by LMSYS, to assess model performance. WizardLM-2 8x22B demonstrates highly competitive performance even against the most advanced proprietary models, and WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B-to-70B scale.

We also carefully collected a complex and challenging evaluation set of real-world instructions covering the main areas of human use, such as writing, coding, math, reasoning, agent tasks, and multilingual tasks. We report the win:loss rate without ties:

- WizardLM-2 8x22B falls only slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT-4-0314.
- WizardLM-2 70B is better than GPT-4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable with Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.

Method Overview: We built a fully AI-powered synthetic training system to train the WizardLM-2 models; please refer to our blog for details of this system.

WizardLM-2 adopts the prompt format from Vicuna and supports multi-turn conversation; a sketch of the template follows below. We provide WizardLM-2 inference demo code on our GitHub.
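The template itself did not survive this page scrape. As a hedged reconstruction, the Vicuna-style multi-turn format the card refers to looks like the following; the exact wording of the system preamble is an assumption and should be checked against the WizardLM repository:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```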


opus-v1-34b-gguf • 64 downloads • 8 likes
opus-v0-70b-gguf • 58 downloads • 3 likes
opus-v1.2-7b • license:cc-by-nc-nd-4.0 • 34 downloads • 34 likes
opus-v1.2-llama-3-8b • llama • 15 downloads • 52 likes
opus-v0-7b • 13 downloads • 29 likes
lucid-v1-nemo • 12 downloads • 53 likes
opus-v1-34b • llama • 7 downloads • 16 likes
opus-v1.2-7b-awq • 6 downloads • 0 likes
opus-v0-7b-awq • 3 downloads • 3 likes
opus-v0-70b • llama • 2 downloads • 9 likes
opus-v1.4-70b-llama3-exl2-6.0bpw-h8 • llama • 2 downloads • 3 likes
opus-v1.4-70b-llama3-exl2-6.0bpw-h6 • llama • 2 downloads • 1 like
llama3-8b-instruct-align-test2-kto • llama • 2 downloads • 0 likes
opus-v1.4-70b-llama3-exl2-2.4bpw-h6 • llama • 2 downloads • 0 likes
WizardLM-2-8x22B • license:apache-2.0 • 1 download • 31 likes
opus-v1.2-70b • llama • 1 download • 4 likes
opus-v1-34b-awq • llama • 1 download • 1 like
opus-v1.4-70b-llama3-exl2-4.0bpw-h6 • llama • 1 download • 1 like
opus-v0-70b-awq • llama • 1 download • 0 likes
llama3-8b-instruct-align-test1-kto • llama • 1 download • 0 likes
llama3-8b-assistant-test-run1-sft-e2 • llama • 1 download • 0 likes
opus-v1.4-70b-llama3-exl2-2.25bpw-h6 • llama • 1 download • 0 likes
opus-v1.4-70b-llama3-exl2-5.0bpw-h8 • llama • 1 download • 0 likes
opus-v0.5-70b • llama • 0 downloads • 2 likes
opus-v1.2-70b-awq • llama • 0 downloads • 1 like
opus-v1.4-70b-llama3-exl2-4.25bpw-h6 • llama • 0 downloads • 1 like
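Several of the repos above (the -gguf entries) are llama.cpp-compatible quantizations. A minimal sketch of loading one with llama-cpp-python follows; the .gguf file name, context size, and sampling parameters are illustrative assumptions, not values taken from this listing:

```python
# Minimal llama.cpp loading sketch for one of the GGUF repos above.
# The exact quant file name inside the repo is an assumption; check the repo's file list.
from llama_cpp import Llama

llm = Llama(
    model_path="opus-v1.2-7b.Q4_K_M.gguf",  # hypothetical file from dreamgen/opus-v1.2-7b-gguf
    n_ctx=4096,        # context window; adjust to the base model's limit
    n_gpu_layers=-1,   # offload all layers if a GPU build of llama.cpp is installed
)

# Note: the Opus models document their own prompt template in each repo's model card;
# a raw completion prompt is used here purely for illustration.
out = llm(
    "Write the opening paragraph of a mystery story.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```

The -awq repos target AWQ-based runtimes such as AutoAWQ or vLLM, and the -exl2 repos target exllamav2; by that convention, the bpw suffix gives the quantization's bits per weight and h6/h8 the bit width used for the output head.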