Moonlight-16B-A3B-Instruct-gguf

by mmnga

Parameters: 16B
Quantization: Q4
Languages: 2
License: MIT
Type: Language Model
Downloads: 604
Status: New / Early-stage
Edge AI targets: Mobile, Laptop, Server (36GB+ RAM)
Quick Summary

Moonlight-16B-A3B-Instruct-gguf is a gguf-format conversion of Moonlight-16B-A3B-Instruct, published by moonshotai. The imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm.
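
The quantized .gguf files are what the commands below load, so you first need to fetch one from the repository. A minimal sketch using huggingface-cli; the exact filename (Q4_K_M here) is an assumption and should be checked against the files actually published in mmnga/Moonlight-16B-A3B-Instruct-gguf:

# Download one quantization from the repository (the filename is illustrative)
huggingface-cli download mmnga/Moonlight-16B-A3B-Instruct-gguf \
  Moonlight-16B-A3B-Instruct-Q4_K_M.gguf --local-dir ./models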

Device Compatibility

Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 15GB+ RAM
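
On a 16GB laptop the full Q4 weights may not fit in VRAM, so a common compromise is to offload only part of the layers to the GPU and keep the rest in system RAM. A minimal sketch, assuming a CUDA build of llama.cpp and the illustrative filename from above; the -ngl value is just a starting point to tune per machine:

# Offload roughly 20 layers to the GPU, keep the remainder on the CPU
build/bin/llama-cli -m ./models/Moonlight-16B-A3B-Instruct-Q4_K_M.gguf \
  -ngl 20 -c 2048 -n 256 -p 'あなたはプロの料理人です。レシピを教えて'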

Code Examples

Usage (llama.cpp)

# Build llama.cpp with CUDA support
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
# Run inference; point -m at the downloaded .gguf file
build/bin/llama-cli -m 'Moonlight-16B-A3B-Instruct-gguf' -n 128 -c 128 -p 'あなたはプロの料理人です。レシピを教えて'
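Besides the one-shot CLI run above, the same build also produces llama-server, which exposes an OpenAI-compatible HTTP endpoint for local use. A minimal sketch; the gguf filename is illustrative:

# Serve the model over HTTP (OpenAI-compatible API on port 8080)
build/bin/llama-server -m ./models/Moonlight-16B-A3B-Instruct-Q4_K_M.gguf \
  -c 4096 --host 0.0.0.0 --port 8080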

Deploy This Model

Production-ready deployment in minutes.

Together.ai: hosted inference API with instant access to this model; start free and scale up as traffic grows.
Replicate: one-click model deployment in the cloud behind a simple API, with no DevOps required.
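
As a sketch of what hosted access looks like, assuming the model were exposed behind an OpenAI-compatible chat completions endpoint (the URL, model slug, and availability are placeholders, not confirmed listings):

# Illustrative request against an OpenAI-compatible endpoint; adjust URL, key, and model slug
curl https://api.together.xyz/v1/chat/completions \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "moonshotai/Moonlight-16B-A3B-Instruct",
    "messages": [{"role": "user", "content": "あなたはプロの料理人です。レシピを教えて"}],
    "max_tokens": 128
  }'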
