ysn-rfd
Test_FIBO_IDEN_FFN-GGUF
Test_FIBO_IDEN_FFN. Model creator: ysn-rfd. Original model: ysn-rfd/Test_FIBO_IDEN_FFN. GGUF quantization: provided by ysn-rfd using `llama.cpp`. Special thanks 🙏 to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible. Use with Ollama.
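A minimal sketch, assuming these GGUF files are compatible with Ollama's direct Hugging Face pull (the quantization tag below is illustrative):

```sh
# Pull and run the GGUF straight from the Hugging Face Hub
ollama run hf.co/ysn-rfd/Test_FIBO_IDEN_FFN-GGUF

# Or pin a specific quantization tag
ollama run hf.co/ysn-rfd/Test_FIBO_IDEN_FFN-GGUF:Q8_0
```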
chatbench-distilgpt2-GGUF
FIBO IDEN MODEL GGUF
FIBO_IDEN_MODEL. Model creator: ysn-rfd. Original model: ysn-rfd/FIBO_IDEN_MODEL. GGUF quantization: provided by ysn-rfd using `llama.cpp`. Special thanks 🙏 to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible. Use with Ollama.
gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-Q4_0-GGUF
gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-Q8_0-GGUF
First_Persian_SLM_Big_Update_Version3_ULTIMATE_ysnrfd
Dolphin3.0-Qwen2.5-1.5B-GGUF
mommygpt-3B-GGUF
FIBONACCI PERSIAN MODEL GGUF
Example usage:
- For text-only LLMs: `llama-cli --hf repo_id/model_name -p "why is the sky blue?"`
- For multimodal models: `llama-mtmd-cli -m model_name.gguf --mmproj mmproj_file.gguf`

Ollama: an Ollama Modelfile is included for easy deployment.
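A minimal sketch of the Modelfile route (the GGUF file name and the `my-model` alias are placeholders, not this repo's actual names):

```sh
# Point a Modelfile at the downloaded GGUF (path is a placeholder)
cat > Modelfile <<'EOF'
FROM ./model_name.gguf
EOF

# Register the model with Ollama, then chat with it
ollama create my-model -f Modelfile
ollama run my-model "why is the sky blue?"
```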
ysnrfd-base-V2
WizardLM-7B-Uncensored-GGUF
ysn-rfd/WizardLM-7B-Uncensored-GGUF: This model was converted to GGUF format from `cognitivecomputations/WizardLM-7B-Uncensored` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
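A condensed sketch of those steps, using the make-style flag names quoted above (newer llama.cpp releases build with CMake instead, e.g. `cmake -B build -DLLAMA_CURL=ON`; the `--hf-file` name below is illustrative, so check the repo's file list):

```sh
# Option 1: prebuilt llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Option 2: build from source with CURL support enabled
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
LLAMA_CURL=1 make    # add LLAMA_CUDA=1 for Nvidia GPUs on Linux

# Run the quantized checkpoint directly from the Hub
llama-cli --hf-repo ysn-rfd/WizardLM-7B-Uncensored-GGUF \
  --hf-file wizardlm-7b-uncensored-q4_0.gguf \
  -p "why is the sky blue?"
```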
chatbench-llama3-8b-GGUF
WizardLM-7B-Uncensored-Q4_0-GGUF
ysnrfd-base
starcoder2-3b-GGUF
Qwen 3 1.7b Persian Q8 0 GGUF
ysn-rfd/qwen3_1.7b_persian-Q8_0-GGUF: This model was converted to GGUF format from `ysn-rfd/qwen3_1.7b_persian` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Dolphin3.0-Qwen2.5-3b-GGUF
calme-3.3-llamaloi-3b-GGUF
Dolphin3.0-Llama3.2-1B-GGUF
Dolphin3.0-Llama3.2-3B-GGUF
LlamaCorn-1.1B-Chat-GGUF
text2image-prompt-generator-GGUF
text2image-prompt-generator. Model creator: succinctly. Original model: succinctly/text2image-prompt-generator. GGUF quantization: provided by ysn-rfd using `llama.cpp`. Special thanks 🙏 to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible. Use with Ollama.
FIBO_IDEN_MODEL_GGUF
- Developed by: ysn-rfd
- License: apache-2.0
- Finetuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
NSFW-3B-GGUF
OpenThinker2-7B-GGUF
gpt2-xl-conversational-GGUF
NSFW-3B-Q2_K-GGUF
YASIN-Persian-Base
PersianMind-v1.0-Q4_0-GGUF
TinySwallow-1.5B-Instruct-GGUF
gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-Q8_0-GGUF
ReaderLM-v2-GGUF
Ministral-3b-instruct-GGUF
Refact-1_6B-fim-GGUF
Huihui-MoE-12B-A4B-abliterated-Q4_K_M-GGUF
ysn-rfd/Huihui-MoE-12B-A4B-abliterated-Q4_K_M-GGUF: This model was converted to GGUF format from `huihui-ai/Huihui-MoE-12B-A4B-abliterated` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
DeepSeek-R1-Distill-Qwen-7B-GGUF
ysn-rfd/DeepSeek-R1-Distill-Qwen-7B-GGUF: This model was converted to GGUF format from `deepseek-ai/DeepSeek-R1-Distill-Qwen-7B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
gemma-3-4b-it-GGUF
NSFW_DPO_Noromaid-7b-Q4_0-GGUF
stablelm-zephyr-3b-Q8_0-GGUF
xLAM-1b-fc-r-Q4_0-GGUF
Qwen 3 1.7b Persian
- Developed by: ysn-rfd
- License: apache-2.0
- Finetuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
openhands-lm-7b-v0.1-GGUF
PersianMind-v1.0-Q4_K_M-GGUF
Z1-7B-GGUF
FIBONACCI PERSIAN MODEL
- Developed by: ysn-rfd
- License: apache-2.0
- Finetuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
gemma-3-4b-it-qat-int4-unquantized-Q4_K_M-GGUF
ysn-rfd/gemma-3-4b-it-qat-int4-unquantized-Q4_K_M-GGUF: This model was converted to GGUF format from `google/gemma-3-4b-it-qat-int4-unquantized` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
gpt-oss-4.2b-specialized-harmful-pruned-moe-only-4-experts-Q8_0-GGUF
FIBO_IDEN_MODEL
- Developed by: ysn-rfd
- License: apache-2.0
- Finetuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Test_FIBO_IDEN_FFN
- Developed by: ysnrfd
- License: apache-2.0
- Finetuned from model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
txgemma-2b-predict-GGUF
BabyMistral-GGUF
calme-3.3-instruct-3b-GGUF
DAN-Qwen3-1.7B-Q8_0-GGUF
ysn-rfd/DAN-Qwen3-1.7B-Q8_0-GGUF: This model was converted to GGUF format from `UnfilteredAI/DAN-Qwen3-1.7B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
LFM2-8B-A1B-Q4_0-GGUF
EXAONE-Deep-2.4B-GGUF
ysnrfd-fa-persian-slm-3m
CodeLlama-7b-Python-hf-Q4_K_M-GGUF
gpt2-xl-Q8_0-GGUF
OpenELM-3B-Instruct-GGUF
HallOumi-8B-GGUF
beecoder-220M-python-Q8_0-GGUF
openchat_3.5-Q8_0-GGUF
bloomz-3b-GGUF
chargen-v2-GGUF
text2image-prompt-generator-Q8_0-GGUF
LFM2-8B-A1B-Q8_0-GGUF
granite-3.2-2b-instruct-GGUF
HelpingAI2.5-2B-GGUF
Arch-Function-Chat-3B-GGUF
gpt4all-falcon-Q2_K-GGUF
bloom-560m-RLHF-SD2-prompter-Q8_0-GGUF
LLuMi_Think_3B-GGUF
OpenELM-1_1B-Instruct-Q8_0-GGUF
TinyMistral-248M-v2.5-Instruct-GGUF
OpenELM-3B-Instruct-Q4_0-GGUF
Marco-o1-Q8_0-GGUF
ReaderLM-v2-Q8_0-GGUF
TinyLlama_v1.1_math_code-GGUF
Phi-4-mini-instruct-GGUF
DeepHermes-3-Llama-3-3B-Preview-GGUF
WizardLM-7B-Uncensored-Q4_K_M-GGUF
gemma-3-4b-it-qat-int4-unquantized-Q8_0-GGUF
ysn-rfd/gemma-3-4b-it-qat-int4-unquantized-Q8_0-GGUF: This model was converted to GGUF format from `google/gemma-3-4b-it-qat-int4-unquantized` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Athena-R3-1.5B-Q8_0-GGUF
AceInstruct-1.5B-GGUF
stable-code-instruct-3b-Q8_0-GGUF
ysn-rfd/stable-code-instruct-3b-Q8_0-GGUF: This model was converted to GGUF format from `stabilityai/stable-code-instruct-3b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Open-RS3-GGUF
First_Persian_SLM_Big_Update_Version3_ysnrfd
WARNING: This model is pre-trained only; it will be fine-tuned in the future. The first Persian SLM by YSNRFD (Yasin Aryanfard) and Amirhossein Mehrdoost. This model supports only Persian text inputs; English language support is planned for a future release.
- Developed by: ysnrfd (Yasin Aryanfard)
- Funded by: ysnrfd (Yasin Aryanfard) and Amirhossein Mehrdoost (https://huggingface.co/fibonacciai)
- Shared by: ysnrfd (Yasin Aryanfard) and Amirhossein Mehrdoost (https://huggingface.co/fibonacciai)
- Model type: SLM
- Language(s) (NLP): Persian
- License: ysnrfd LICENSE
- Sample Persian text link: https://huggingface.co/datasets/ysn-rfd/fibonaccialpacatosharegptgptformatconvertnewdatasetrelease
- Hardware type: Nvidia Tesla T4 (1)
- Hours used: 1
- Cloud provider: Google Colab
TinyMistral-248M-v2.5-GGUF
OpenCoder-1.5B-Instruct-GGUF
causal-language-modeling-Q2_K-GGUF
stablelm-zephyr-3b-Q4_K_M-GGUF
openhands-lm-1.5b-v0.1-GGUF
zeta-GGUF
PersianMind-v1.0-GGUF
stablelm-zephyr-3b-Q2_K-GGUF
CodeLlama-7b-Python-hf-Q2_K-GGUF
Qwen2.5-Coder-3B-Instruct-Q8_0-GGUF
Promt-generator-Q8_0-GGUF
ysn-rfd/Promt-generator-Q8_0-GGUF: This model was converted to GGUF format from `UnfilteredAI/Promt-generator` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Persian-to-English-Translation-mT5-V1-Q8_0-GGUF
NSFW_DPO_Noromaid-7b-GGUF
distilgpt2-Q8_0-GGUF
TinyLlama_v1.1_math_code-Q2_K-GGUF
Qwen2-7B-Instruct-Q4_K_M-GGUF
gpt2-large-Q2_K-GGUF
ysn-rfd/gpt2-large-Q2_K-GGUF: This model was converted to GGUF format from `openai-community/gpt2-large` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
DeepScaleR-1.5B-Preview-Q8_0-GGUF
ysn-rfd/DeepScaleR-1.5B-Preview-Q8_0-GGUF: This model was converted to GGUF format from `agentica-org/DeepScaleR-1.5B-Preview` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Ling-Coder-lite-Q8_0-GGUF
TinyLlama-1.1B-Chat-v1.0-Q8_0-GGUF
OpenELM-3B-Instruct-Q2_K-GGUF
open_llama_3b_v2-Q2_K-GGUF
gemma-3-1b-it-Q8_0-GGUF
Open-RS3-Q8_0-GGUF
ysn-rfd/Open-RS3-Q8_0-GGUF: This model was converted to GGUF format from `knoveleng/Open-RS3` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
openchat_3.5-GGUF
h2o-danube3.1-4b-chat-Q8_0-GGUF
h2o-danube3.1-4b-chat-Q5_0-GGUF
TinyLlama_v1.1-GGUF
OLMo-2-1124-7B-Instruct-GGUF
gpt2-Q8_0-GGUF
Qwen2.5-0.5B-Instruct-Q8_0-GGUF
HelpingAI2.5-5B-GGUF
gpt2-xl-Q4_K_M-GGUF
h2o-danube3-4b-chat-Q4_0-GGUF
open_llama_3b-Q4_K_M-GGUF
Qwen2.5-3B-Instruct-Q8_0-GGUF
Qwen2.5-1.5B-Instruct-Q8_0-GGUF
gemma-3-4b-it-Q8_0-GGUF
Mistral-7B-Instruct-v0.3-GGUF
h2o-danube3.1-4b-chat-Q4_K_M-GGUF
WizardLM-7B-Uncensored-Q5_K_M-GGUF
openchat-3.5-0106-Q8_0-GGUF
fibonacci_test_release-Q4_K_M-GGUF
ysn-rfd/fibonacci_test_release-Q4_K_M-GGUF: This model was converted to GGUF format from `ysn-rfd/fibonacci_test_release` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
BADMISTRAL-1.5B-GGUF
SmolLM-1.7B-Q8_0-GGUF
OpenELM-3B-Instruct-Q8_0-GGUF
causal-language-modeling-Q4_K_M-GGUF
gpt2-Q2_K-GGUF
OpenELM-450M-Instruct-Q8_0-GGUF
Qwen2.5-Coder-7B-Instruct-Q8_0-GGUF
h2o-danube3.1-4b-chat-Q4_0-GGUF
WizardLM-7B-Uncensored-Q5_0-GGUF
First_Persian_SLM_Big_Update_Version2_ysnrfd
OlympicCoder-7B-GGUF
TinyLlama_v1.1_math_code-Q8_0-GGUF
gpt2-medium-Q2_K-GGUF
ysn-rfd/gpt2-medium-Q2_K-GGUF: This model was converted to GGUF format from `openai-community/gpt2-medium` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
Sweet-mix_v2.2_flat-openvino
t5-v1_1-base-Q8_0-GGUF
TinyDolphin-2.8-1.1b-Q8_0-GGUF
SmolLM2-1.7B-Instruct-Q8_0-GGUF
DeepSeek-R1-Distill-Qwen-1.5B-Q8_0-GGUF
gemma-3-4b-persian-v0-Q8_0-GGUF
AceInstruct-7B-GGUF
WizardLM-7B-Uncensored-Q8_0-GGUF
fibonacci_test_release-Q2_K-GGUF
Fibonacci-Spiral-Positional-Encoding
AGI-Core
Test_FIBO_IDEN
- Developed by: ysn-rfd
- License: apache-2.0
- Finetuned from model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit

This Gemma 3n model was trained 2x faster with Unsloth and Hugging Face's TRL library.
YASIN-V2-GGUF
YASIN-V2
YASIN-V1
Spectral-Basis-Adapter
dreamshaper-xl-fp8-i
hassaku-xl-illustrious-fp8-i
wai-illustrious-sdxl-fp8-p
wai-illustrious-sdxl-fp8-i
ysnrfd-base-V3
RealRobot_LLM
RealRobot_LLM-Q8_0-GGUF
RealRobot_LLM_LoRA1
MODEL
llm-t97
finetune-smollm2-135m-instruct
openchat-3.5-0106-GGUF
pushed_to_hub_ysnrfd
tokenizer_ysnrfd
fibonacci_test_release
- Developed by: ysn-rfd
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
FIBO_IDEN_MODEL_ADAPTER
- Developed by: ysn-rfd
- License: apache-2.0
- Finetuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Lora Model Persian Qwen
- Developed by: ysn-rfd
- License: apache-2.0
- Finetuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.