elyza

16 models

Llama-3-ELYZA-JP-8B

Llama-3-ELYZA-JP-8B is a large language model trained by ELYZA, Inc. Based on meta-llama/Meta-Llama-3-8B-Instruct, it has been enhanced for Japanese use through additional pre-training and instruction tuning. (Built with Meta Llama 3)

Developers: Masato Hirakawa, Shintaro Horie, Tomoaki Nakamura, Daisuke Oba, Sam Passaglia, Akira Sasaki
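Since the model inherits the Meta Llama 3 chat format, a prompt for it can be assembled by hand as sketched below. This is a minimal illustration using the publicly documented Llama 3 header tokens; in practice the tokenizer's `apply_chat_template` should be preferred, and the Japanese system prompt shown is only a placeholder, not an official recommendation.

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a Llama 3 instruct chat prompt from its header tokens."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "あなたは誠実で優秀なアシスタントです。",  # placeholder system prompt
    "日本の首都はどこですか？",  # "What is the capital of Japan?"
)
print(prompt)
```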

llama · 25,298 downloads · 135 likes

ELYZA-japanese-Llama-2-7b-instruct

Model Description

ELYZA-japanese-Llama-2-7b is a model based on Llama 2 with additional pre-training to extend its Japanese language capabilities. See the blog post for details.

| Model Name | Vocab Size | #Params |
|:---------------------------------------------|:----------:|:-------:|
| elyza/ELYZA-japanese-Llama-2-7b | 32000 | 6.27B |
| elyza/ELYZA-japanese-Llama-2-7b-instruct | 32000 | 6.27B |
| elyza/ELYZA-japanese-Llama-2-7b-fast | 45043 | 6.37B |
| elyza/ELYZA-japanese-Llama-2-7b-fast-instruct | 45043 | 6.37B |

Developers: Akira Sasaki, Masato Hirakawa, Shintaro Horie, Tomoaki Nakamura

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
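The instruct variants in this family keep the standard Llama 2 chat format, so a prompt can be sketched as follows. The `[INST]`/`<<SYS>>` markup is the documented Llama 2 convention; treat the literal `<s>` BOS token as illustrative, since tokenizers normally prepend it themselves, and the Japanese system prompt is only a placeholder.

```python
def build_llama2_prompt(system: str, user: str) -> str:
    """Assemble a Llama 2 chat prompt ([INST] / <<SYS>> convention)."""
    # BOS is written literally here for illustration; tokenizers
    # usually add it automatically during encoding.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

print(build_llama2_prompt(
    "あなたは誠実で優秀なアシスタントです。",  # placeholder system prompt
    "富士山の高さを教えてください。",  # "How tall is Mt. Fuji?"
))
```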

llama · 4,976 downloads · 74 likes

Llama-3-ELYZA-JP-8B-GGUF

Llama-3-ELYZA-JP-8B is a large language model trained by ELYZA, Inc. Based on meta-llama/Meta-Llama-3-8B-Instruct, it has been enhanced for Japanese use through additional pre-training and instruction tuning. (Built with Meta Llama 3)

We have prepared two quantized model options, GGUF and AWQ. This is the GGUF (Q4_K_M) model, converted using llama.cpp. The following table shows the performance degradation due to quantization:

| Model | ELYZA-tasks-100 GPT-4 score |
|:--------------------------------|----:|
| Llama-3-ELYZA-JP-8B | 3.655 |
| Llama-3-ELYZA-JP-8B-GGUF (Q4_K_M) | 3.57 |
| Llama-3-ELYZA-JP-8B-AWQ | 3.39 |

Install llama.cpp through brew (works on Mac and Linux). Various desktop applications can handle GGUF models; here we introduce how to use the model in the no-code environment LM Studio.

- Installation: Download and install LM Studio.
- Downloading the model: Search for `elyza/Llama-3-ELYZA-JP-8B-GGUF` in the search bar on the home page 🏠 and download `Llama-3-ELYZA-JP-8B-q4_k_m.gguf`.
- Start chatting: Click 💬 in the sidebar, select `Llama-3-ELYZA-JP-8B-GGUF` under "Select a Model to load" in the header, and load the model. You can now chat freely with the local LLM.
- Setting options: Options are available in the right sidebar. Faster inference can be achieved by setting Quick GPU Offload to Max in the GPU Settings.
- (For developers) Starting an API server: Open the Local Server tab from the left sidebar, select the model, and click Start Server to launch an OpenAI-API-compatible server.

This demo showcases Llama-3-ELYZA-JP-8B-GGUF running smoothly on a MacBook Pro (M1 Pro), achieving an inference speed of approximately 20 tokens per second.

Developers: Masato Hirakawa, Shintaro Horie, Tomoaki Nakamura, Daisuke Oba, Sam Passaglia, Akira Sasaki
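The local server accepts standard OpenAI-style `/v1/chat/completions` requests. The sketch below only builds the JSON request body; the `localhost:1234` endpoint in the comment is LM Studio's usual default port and should be checked against your local settings.

```python
import json

def chat_completion_payload(model: str, user_message: str) -> dict:
    """Build an OpenAI-style /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }

payload = chat_completion_payload("elyza/Llama-3-ELYZA-JP-8B-GGUF", "こんにちは")
# POST this as JSON to e.g. http://localhost:1234/v1/chat/completions
print(json.dumps(payload, ensure_ascii=False, indent=2))
```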

llama-cpp · 2,429 downloads · 67 likes

ELYZA-japanese-Llama-2-7b

Model Description

ELYZA-japanese-Llama-2-7b is a model based on Llama 2 with additional pre-training to extend its Japanese language capabilities. See the blog post for details.

| Model Name | Vocab Size | #Params |
|:---------------------------------------------|:----------:|:-------:|
| elyza/ELYZA-japanese-Llama-2-7b | 32000 | 6.27B |
| elyza/ELYZA-japanese-Llama-2-7b-instruct | 32000 | 6.27B |
| elyza/ELYZA-japanese-Llama-2-7b-fast | 45043 | 6.37B |
| elyza/ELYZA-japanese-Llama-2-7b-fast-instruct | 45043 | 6.37B |

Developers: Akira Sasaki, Masato Hirakawa, Shintaro Horie, Tomoaki Nakamura

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

llama · 2,316 downloads · 96 likes

ELYZA-japanese-Llama-2-7b-fast-instruct

llama · 1,997 downloads · 80 likes

ELYZA-japanese-Llama-2-13b-fast-instruct

llama · 1,060 downloads · 24 likes

ELYZA-japanese-Llama-2-7b-fast

llama · 1,057 downloads · 23 likes

ELYZA-japanese-Llama-2-13b-instruct

Model Description

ELYZA-japanese-Llama-2-13b is a model based on Llama 2 with additional pre-training to extend its Japanese language capabilities. See the blog post for details.

| Model Name | Vocab Size | #Params |
|:---------------------------------------------|:----------:|:-------:|
| elyza/ELYZA-japanese-Llama-2-13b | 32000 | 13.02B |
| elyza/ELYZA-japanese-Llama-2-13b-instruct | 32000 | 13.02B |
| elyza/ELYZA-japanese-Llama-2-13b-fast | 44581 | 13.14B |
| elyza/ELYZA-japanese-Llama-2-13b-fast-instruct | 44581 | 13.14B |

Developers: Akira Sasaki, Masato Hirakawa, Shintaro Horie, Tomoaki Nakamura, Sam Passaglia, Daisuke Oba (intern)

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
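The parameter gap between the base and "fast" variants in the table is consistent with the enlarged vocabulary alone: extra tokens add rows to both the input embedding and the LM head. A quick back-of-the-envelope check, assuming Llama-2-13b's standard hidden size of 5120:

```python
# Extra parameters contributed by the enlarged vocabulary of the
# "fast" variants: new rows in both the embedding and the LM head.
base_vocab, fast_vocab = 32_000, 44_581
hidden_size = 5_120  # Llama-2-13b hidden size (assumed here)
extra = (fast_vocab - base_vocab) * hidden_size * 2
print(f"{extra / 1e9:.2f}B")  # ~0.13B, matching 13.14B - 13.02B in the table
```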

llama · 1,024 downloads · 42 likes

ELYZA-japanese-Llama-2-13b

llama · 862 downloads · 22 likes

ELYZA-japanese-Llama-2-13b-fast

llama · 823 downloads · 7 likes

ELYZA-Shortcut-1.0-Qwen-7B

license:apache-2.0 · 774 downloads · 0 likes

ELYZA-Thinking-1.0-Qwen-32B

license:apache-2.0 · 747 downloads · 7 likes

ELYZA-Shortcut-1.0-Qwen-32B

license:apache-2.0 · 401 downloads · 2 likes

Llama-3-ELYZA-JP-8B-AWQ

llama · 130 downloads · 4 likes

ELYZA-japanese-CodeLlama-7b

llama · 74 downloads · 6 likes

ELYZA-japanese-CodeLlama-7b-instruct

llama · 48 downloads · 18 likes