WariHima

12 models

Qwen3-Embedding-0.6B-Q4_K_M-GGUF

WariHima/Qwen3-Embedding-0.6B-Q4_K_M-GGUF — this model was converted to GGUF format from `Qwen/Qwen3-Embedding-0.6B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: Step 1: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo. Step 2: move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
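The steps above can be sketched roughly as follows. This follows the upstream llama.cpp README conventions; the exact cmake flag names change between releases, and the GGUF filename passed to `--hf-file` is an assumption (check the repo's file list).

```shell
# Step 1: install through Homebrew (macOS and Linux) ...
brew install llama.cpp

# ... or clone the repo to build from source:
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# Step 2: build with CURL enabled (needed for --hf-repo downloads),
# plus hardware-specific flags, e.g. CUDA for NVIDIA GPUs on Linux:
cmake -B build -DLLAMA_CURL=ON   # add e.g. -DGGML_CUDA=ON for NVIDIA
cmake --build build --config Release

# Step 3: compute embeddings, pulling the model straight from the Hub
# (filename below is a guess based on the repo name):
./build/bin/llama-embedding \
    --hf-repo WariHima/Qwen3-Embedding-0.6B-Q4_K_M-GGUF \
    --hf-file qwen3-embedding-0.6b-q4_k_m.gguf \
    -p "hello world"
```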

llama-cpp · 110 downloads · 0 likes

qwen3vl-8b-ja-gptoss-glm45-qwen3-235b-distil-26_02_12-Q4_K_M

license:apache-2.0 · 88 downloads · 0 likes

furigna-accent-whisper-v0.1-lora

28 downloads · 0 likes

sarashina2.2-1b-instruct-v0.1-Q4_K_M-GGUF

llama-cpp · 23 downloads · 2 likes

Qwen3-14B-Q4_K_M-GGUF

llama-cpp · 20 downloads · 0 likes

Qwen3-8B-Q4_K_M-GGUF

llama-cpp · 18 downloads · 0 likes

sarashina2.2-0.5b-instruct-v0.1-Q4_K_M-GGUF

llama-cpp · 10 downloads · 1 like

sarashina2.2-3b-instruct-v0.1-Q4_K_M-GGUF

llama-cpp · 4 downloads · 0 likes

Qwen3-8B-ERP-v0.1-gptq

4 downloads · 0 likes

Qwen3-4B-Instruct-2507-Q4_K_M-GGUF

WariHima/Qwen3-4B-Instruct-2507-Q4_K_M-GGUF — this model was converted to GGUF format from `Qwen/Qwen3-4B-Instruct-2507` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp: Step 1: install llama.cpp through brew (works on Mac and Linux), or clone the llama.cpp repo. Step 2: move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

llama-cpp · 2 downloads · 0 likes

index-tts-japanese-prosody

Sample human voice scores: you can find the wav files in the Tsukuyomi-chan corpus datasets (this dataset was not in the training data). Average human voice scores for Japanese speakers are shown below for the Tsukuyomi-chan corpus sample and amitaro's corpus. A single-speaker fine-tuned model is not uploaded, but you can fine-tune on a single speaker and reach a similar score (DNSMOS, out-of-sample).

Training and inference (web UI) code is in this fork: https://github.com/q9uri/index-tts-ja. VRAM use is lower than the original, and this model only works with that repo.

Pretraining used the JVNV corpus, created by Shinnosuke Takamichi and Japanese voice actors. The original ReazonSpeech corpus was created by the Reazon team; the source audio is Japanese TV recordings used under an exception clause of Japanese copyright law, denoised with UVR5 by fishaudio and re-uploaded to Hugging Face by litagin02. anime-whisper-0.3 was used to create the text transcripts; kanji were placed in the suppress tokens, so the transcripts are nearly kana-only.

The model was trained on a single RTX 3060 (60% power limit, so power draw is similar to an RTX A2000) with batch size 1 and without AMP (I forgot to enable it; I recommend using AMP). I didn't have a GPU myself; it came from the ローカルLLMに向き合う会 (Local LLM meetup) hackathon. Thanks to サルドラ (@saldra) and ゆづき. Want to support me? Buy a GPU for me via my amazon.jp wish list.

Download the custom pretrained models: https://huggingface.co/WariHima/index-tts-japanese-prosody. Inference needs CUDA 12.8 and 8 GB of VRAM; generated voice length is 2 seconds. Rename 36000+6000x8x6cyclesteps.pth to gpt.pth and copy it to ./checkpoints; copy japanese-bpe.model to ./checkpoints without renaming it.
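The checkpoint layout described at the end of the card can be sketched as below. This is a sketch assuming the two files have been downloaded to the current directory; the `touch` line only creates stand-ins so the commands can be tried without the real downloads.

```shell
# Stand-ins for the real files downloaded from the model repo:
touch "36000+6000x8x6cyclesteps.pth" japanese-bpe.model

mkdir -p checkpoints
# The fine-tuned GPT weights must be renamed to gpt.pth:
mv "36000+6000x8x6cyclesteps.pth" checkpoints/gpt.pth
# The tokenizer model keeps its original name:
cp japanese-bpe.model checkpoints/
```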

license:apache-2.0 · 0 downloads · 3 likes

VoiceSpeechMaker-pretrain

license:agpl-3.0 · 0 downloads · 1 like