skymizer
Llama-3.2-1B-GGUF
DeepSeek-R1-Distill-Qwen-1.5B-GGUF
gemma-3-1b-pt-GGUF
Llama-3.1-8B-Instruct-GGUF
Llama-3.1-8B-GGUF
Qwen3-0.6B-Base-GGUF
Llama-3.2-3B-GGUF
These models are converted from `meta-llama/Llama-3.2-3B` and quantized with llama.cpp.
Qwen3-4B-Thinking-2507-GGUF
These models are quantized from Qwen/Qwen3-4B-Thinking-2507 with ggml-org/gguf-my-repo, and converted to bf16 and f16 with llama.cpp at commit `25ff6f7659f6a5c47d6a73eada5813f0495331f0`.
granite-4.0-tiny-base-preview-GGUF
granite-3.1-1b-a400m-GGUF
Qwen3-30B-A3B-Thinking-2507-GGUF
These models are converted from Qwen/Qwen3-30B-A3B-Thinking-2507. Please set up the generation config properly:

- temperature = 0.6
- top_p = 0.95
- top_k = 20
- min_p = 0.0
- output tokens: 32768

Best practices: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507#best-practices
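As a minimal sketch, the recommended settings above can be bundled into one dict and passed to a local runtime such as llama-cpp-python. The helper name is hypothetical, and mapping "output tokens" to `max_tokens` is an assumption, not part of the upstream card:

```python
# Recommended sampling settings for Qwen3-30B-A3B-Thinking-2507, taken from
# the list above. The keys match the keyword arguments accepted by
# llama-cpp-python's Llama.__call__.
def qwen3_thinking_sampling():
    return {
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 20,
        "min_p": 0.0,
        "max_tokens": 32768,  # "output tokens: 32768" from the card
    }

# Usage sketch (requires a local GGUF file; the path is a placeholder):
#   from llama_cpp import Llama
#   llm = Llama(model_path="Qwen3-30B-A3B-Thinking-2507-Q4_K_M.gguf")
#   out = llm("Hello", **qwen3_thinking_sampling())
```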
mistral-7B-v0.1-sft-slim-orca-sonnet-3.5-v4
DeepSeek-R1-Distill-Llama-8B-GGUF
gemma-3-1b-pt-qat-q4_0-gguf
Llama-3.2-1B-Instruct-GGUF
Qwen3-4B-Instruct-2507-GGUF
This repo collects GGUF models converted from Qwen/Qwen3-4B-Instruct-2507 with llama.cpp and ggml-org/gguf-my-repo.
Llama-3.2-3B-Instruct-GGUF
These models are converted from meta-llama/Llama-3.2-3B-Instruct.
Qwen2.5-1.5B-GGUF
Llama2-7b-sft-chat-custom-template-dpo
This model is a fine-tuned version of elichen3051/llama2-7b-sft-chat-no-template on the HuggingFaceH4/ultrafeedback_binarized, HuggingFaceH4/orca_dpo_pairs, and HuggingFaceH4/cai-conversation-harmless datasets. It achieves the following results on the evaluation set:

- Loss: 0.4717
- Rewards/chosen: -1.6807
- Rewards/rejected: -3.1957
- Rewards/accuracies: 0.6345
- Rewards/margins: 1.5150
- Logps/rejected: -519.5196
- Logps/chosen: -379.2986
- Logits/rejected: -2.7275
- Logits/chosen: -2.7213

The following hyperparameters were used during training:

- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 8
- total_train_batch_size: 448
- total_eval_batch_size: 56
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2

Training results:

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6727 | 0.2032 | 43 | 0.6714 | -0.0530 | -0.0999 | 0.5871 | 0.0470 | -209.9431 | -216.5270 | -2.2167 | -2.2006 |
| 0.6056 | 0.4064 | 86 | 0.6041 | -0.5876 | -0.8878 | 0.6023 | 0.3002 | -288.7347 | -269.9940 | -3.0277 | -3.0177 |
| 0.573 | 0.6096 | 129 | 0.5451 | -0.9286 | -1.6015 | 0.6174 | 0.6729 | -360.0960 | -304.0913 | -2.9301 | -2.9238 |
| 0.5239 | 0.8128 | 172 | 0.5123 | -1.2863 | -2.2358 | 0.6288 | 0.9495 | -423.5324 | -339.8588 | -2.9884 | -2.9803 |
| 0.4668 | 1.0159 | 215 | 0.4945 | -1.4994 | -2.6377 | 0.6439 | 1.1383 | -463.7195 | -361.1752 | -2.5910 | -2.5843 |
| 0.4607 | 1.2191 | 258 | 0.4816 | -1.5810 | -2.8887 | 0.6402 | 1.3077 | -488.8177 | -369.3280 | -2.8026 | -2.7951 |
| 0.5068 | 1.4223 | 301 | 0.4764 | -1.5805 | -3.0061 | 0.6402 | 1.4256 | -500.5590 | -369.2790 | -2.7586 | -2.7513 |
| 0.4724 | 1.6255 | 344 | 0.4730 | -1.6832 | -3.1741 | 0.6383 | 1.4909 | -517.3631 | -379.5493 | -2.6296 | -2.6237 |
| 0.4836 | 1.8287 | 387 | 0.4718 | -1.6795 | -3.1900 | 0.6420 | 1.5105 | -518.9514 | -379.1832 | -2.6434 | -2.6374 |

Framework versions:

- Transformers 4.42.0.dev0
- Pytorch 2.3.1
- Datasets 2.19.2
- Tokenizers 0.19.1
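The reported total batch sizes follow directly from the per-device batch size, device count, and gradient accumulation steps listed above; a quick arithmetic sanity check (no training code involved):

```python
# Verify the total batch sizes reported in the card:
# total_train = per-device train batch x num devices x grad accumulation steps
# total_eval  = per-device eval batch  x num devices
train_batch_size = 8
eval_batch_size = 8
num_devices = 7
gradient_accumulation_steps = 8

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)  # 448, matching the card
print(total_eval_batch_size)   # 56, matching the card
```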
mistral-7b-v0.1-sft-open-orca-dry-run
mistral-7b-v0.1-sft-open-orca-dry-run-2
mistral-7b-v0.1-csft-open-orca-on-open-orca
Mistral-7B-v0.1-q-sparse-fineweb-edu-table2
Mistral-7B-v0.1-q-sparse-fineweb-edu-table2-re
Qwen3-30B-A3B-Instruct-2507-GGUF
The models are converted from Qwen/Qwen3-30B-A3B-Instruct-2507. Before using these models, please set up the generation config properly:

- temperature = 0.7
- top_p = 0.8
- top_k = 20
- min_p = 0.0
- output token length: 16,384 tokens

Best practice: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507#best-practices
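Note that the Instruct variant's recommended settings differ from the Thinking variant above (higher temperature, lower top_p, and half the output budget). As a sketch, under the same assumptions (keys mirror llama-cpp-python keyword arguments; mapping "output token length" to `max_tokens` is an assumption):

```python
# Recommended sampling settings for Qwen3-30B-A3B-Instruct-2507, from the
# list above. The constant name is hypothetical.
QWEN3_INSTRUCT_SAMPLING = {
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
    "min_p": 0.0,
    "max_tokens": 16384,  # "output token length: 16,384 tokens" from the card
}
```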