karakuri-vl-32b-instruct-2507-gguf
by mmnga · Language Model · 32B params · 2 languages · Q4 quantization
License: apache-2.0 · 63 downloads · New · Early-stage
Edge AI: Mobile, Laptop, Server (72GB+ RAM)
Quick Summary
karakuri-vl-32b-instruct-2507-gguf is the GGUF-format conversion of karakuri-vl-32b-instruct-2507, published by karakuri-ai. The imatrix data was created using TFMC/imatrix-dataset-for-japanese-llm...
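To fetch the files locally, the Hugging Face CLI can be used. A minimal sketch, assuming the repository id is mmnga/karakuri-vl-32b-instruct-2507-gguf and that a Q4_K_M file exists; check the repository's file list for the exact quantization names:

pip install -U "huggingface_hub[cli]"
# Download one Q4 quantization plus the vision projector (the filename patterns are assumptions)
huggingface-cli download mmnga/karakuri-vl-32b-instruct-2507-gguf \
  --include "*Q4_K_M*.gguf" "mmproj*.gguf" \
  --local-dir ./models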
Device Compatibility
Mobile: 4-6GB RAM
Laptop: 16GB RAM
Server: GPU
Minimum recommended: 30GB+ RAM
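As a sanity check on these tiers, a rough back-of-the-envelope for the Q4 weights (a sketch, assuming roughly 4.8 bits per weight for a Q4_K_M-style quantization):

# 32.0e9 weights * 4.8 bits / 8 bits-per-byte ≈ 19.2 GB for the weights alone,
# before KV cache, context buffers, and the mmproj vision projector.
echo "scale=1; 32.0 * 4.8 / 8" | bc    # prints 19.2

That puts the Q4 file around 19-20GB, which is consistent with the 30GB+ minimum once runtime overhead is added.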
Code Examples
Usage (llama.cpp)
# Build llama.cpp with CUDA enabled
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
# Run the multimodal CLI; the prompt asks, in Japanese, "What does it say?"
build/bin/llama-mtmd-cli -m 'karakuri-vl-32b-instruct-2507-gguf' -p '何が書いてある?' --mmproj mmproj.gguf --image test-image.png
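The build above targets NVIDIA GPUs. Without one (or on Apple Silicon, where the Metal backend is enabled by default), the CUDA flag can simply be dropped; a sketch, with an example thread count:

cmake -B build                          # no -DGGML_CUDA=ON: CPU, or Metal on macOS
cmake --build build --config Release
# <model.gguf> is a placeholder for the downloaded file; -t sets CPU threads (example value)
build/bin/llama-mtmd-cli -m <model.gguf> --mmproj mmproj.gguf \
  -p '何が書いてある?' --image test-image.png -t 8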
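For programmatic access rather than a one-shot CLI run, recent llama.cpp builds also expose the multimodal projector through llama-server. A sketch, reusing the assumed filenames from the download step (verify --mmproj support against your build):

build/bin/llama-server \
  -m ./models/karakuri-vl-32b-instruct-2507-Q4_K_M.gguf \
  --mmproj ./models/mmproj.gguf \
  --host 0.0.0.0 --port 8080
# OpenAI-style chat requests can then be POSTed to http://localhost:8080/v1/chat/completions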