ysn-rfd

176 models

Test_FIBO_IDEN_FFN-GGUF

Test_FIBO_IDEN_FFN Model creator: ysn-rfd Original model: ysn-rfd/Test_FIBO_IDEN_FFN GGUF quantization: provided by ysn-rfd using `llama.cpp` Special thanks 🙏 to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible. Use with Ollama; a run sketch follows this entry.

llama-cpp
688
2
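
A minimal sketch of the Ollama route, using Ollama's `hf.co/<user>/<repo>` syntax for pulling GGUF repos from the Hub; the `:Q8_0` quant tag is an assumption, so check the repo's file list for the quants it actually ships.

```bash
# Pull and run a GGUF repo directly from Hugging Face via Ollama.
# The :Q8_0 tag is an assumption; pick a quant the repo actually contains.
ollama run hf.co/ysn-rfd/Test_FIBO_IDEN_FFN-GGUF:Q8_0
```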

chatbench-distilgpt2-GGUF

llama-cpp
449
1

FIBO IDEN MODEL GGUF

FIBO_IDEN_MODEL Model creator: ysn-rfd Original model: ysn-rfd/FIBO_IDEN_MODEL GGUF quantization: provided by ysn-rfd using `llama.cpp` Special thanks 🙏 to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible. Use with Ollama.

llama-cpp
294
2

gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-Q4_0-GGUF

llama-cpp
270
2

gpt-oss-4.2b-specialized-all-pruned-moe-only-4-experts-Q8_0-GGUF

llama-cpp
165
2

First_Persian_SLM_Big_Update_Version3_ULTIMATE_ysnrfd

163
2

Dolphin3.0-Qwen2.5-1.5B-GGUF

llama-cpp
94
0

mommygpt-3B-GGUF

openllama
90
0

FIBONACCI_PERSIAN_MODEL_GGUF

Example usage:
- For text-only LLMs: `llama-cli --hf repo_id/model_name -p "why is the sky blue?"`
- For multimodal models: `llama-mtmd-cli -m model_name.gguf --mmproj mmproj_file.gguf`

Ollama: an Ollama Modelfile is included for easy deployment; a Modelfile sketch follows this entry.

llama.cpp
88
1
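
The card says a Modelfile is included but the preview cuts off, so here is a minimal hedged sketch of the Ollama deployment flow; `model_name.gguf` and the local model name are placeholders, not the repo's actual filenames.

```bash
# Point a minimal Ollama Modelfile at a locally downloaded GGUF file.
# model_name.gguf is a placeholder for whichever quant you downloaded.
cat > Modelfile <<'EOF'
FROM ./model_name.gguf
EOF

# Register the model under a local name, then chat with it.
ollama create fibonacci-persian -f Modelfile
ollama run fibonacci-persian "why is the sky blue?"
```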

ysnrfd-base-V2

81
2

WizardLM-7B-Uncensored-GGUF

ysn-rfd/WizardLM-7B-Uncensored-GGUF This model was converted to GGUF format from `cognitivecomputations/WizardLM-7B-Uncensored` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). A brew-and-run sketch follows this entry.

llama-cpp
76
2
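
A minimal sketch of the brew route the card describes; the `--hf-file` value is an assumption about which quant the repo ships, so check the repo's Files tab first.

```bash
# Install llama.cpp through Homebrew (works on macOS and Linux).
brew install llama.cpp

# Fetch a GGUF straight from the Hub and run a one-off prompt.
# The .gguf filename below is an assumption, not a verified file.
llama-cli --hf-repo ysn-rfd/WizardLM-7B-Uncensored-GGUF \
  --hf-file wizardlm-7b-uncensored-q4_0.gguf \
  -p "Once upon a time"
```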

chatbench-llama3-8b-GGUF

llama-cpp
75
0

WizardLM-7B-Uncensored-Q4_0-GGUF

llama-cpp
73
1

ysnrfd-base

69
1

starcoder2-3b-GGUF

llama-cpp
69
1

qwen3_1.7b_persian-Q8_0-GGUF

ysn-rfd/qwen3_1.7b_persian-Q8_0-GGUF This model was converted to GGUF format from `ysn-rfd/qwen3_1.7b_persian` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). A build sketch follows this entry.

llama-cpp
60
1
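
The card's truncated "Step 2" corresponds to the older Makefile build from the GGUF-my-repo template; a sketch, assuming that build path is still present in your checkout (`LLAMA_CUDA=1` is the optional Nvidia flag the card mentions).

```bash
# Step 1: clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Step 2: build with CURL enabled so llama-cli can download models
# from the Hub; append LLAMA_CUDA=1 on Linux machines with an Nvidia GPU.
LLAMA_CURL=1 make
```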

Dolphin3.0-Qwen2.5-3b-GGUF

llama-cpp
60
0

calme-3.3-llamaloi-3b-GGUF

llama
57
0

Dolphin3.0-Llama3.2-1B-GGUF

llama-cpp
57
0

Dolphin3.0-Llama3.2-3B-GGUF

llama-cpp
52
1

LlamaCorn-1.1B-Chat-GGUF

llama-cpp
45
2

text2image-prompt-generator-GGUF

text2image-prompt-generator Model creator: succinctly Original model: succinctly/text2image-prompt-generator GGUF quantization: provided by ysn-rfd using `llama.cpp` Special thanks 🙏 to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible. Use with Ollama.

llama-cpp
45
1

FIBO_IDEN_MODEL_GGUF

- Developed by: ysn-rfd
- License: apache-2.0
- Fine-tuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

license:apache-2.0
45
1

NSFW-3B-GGUF

llama-cpp
42
2

OpenThinker2-7B-GGUF

llama-factory
42
0

gpt2-xl-conversational-GGUF

llama-cpp
41
1

NSFW-3B-Q2_K-GGUF

llama-cpp
40
1

YASIN-Persian-Base

39
3

PersianMind-v1.0-Q4_0-GGUF

llama-cpp
39
1

TinySwallow-1.5B-Instruct-GGUF

llama-cpp
39
1

gpt-oss-10.8b-specialized-all-pruned-moe-only-15-experts-Q8_0-GGUF

37
1

ReaderLM-v2-GGUF

llama-cpp
35
0

Ministral-3b-instruct-GGUF

llama-cpp
34
1

Refact-1_6B-fim-GGUF

llama-cpp
34
0

Huihui-MoE-12B-A4B-abliterated-Q4_K_M-GGUF

ysn-rfd/Huihui-MoE-12B-A4B-abliterated-Q4_K_M-GGUF This model was converted to GGUF format from `huihui-ai/Huihui-MoE-12B-A4B-abliterated` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
34
0

DeepSeek-R1-Distill-Qwen-7B-GGUF

ysn-rfd/DeepSeek-R1-Distill-Qwen-7B-GGUF This model was converted to GGUF format from `deepseek-ai/DeepSeek-R1-Distill-Qwen-7B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). A server sketch follows this entry.

llama-cpp
31
1
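
These repos also work with the llama.cpp HTTP server; a hedged sketch, where the `--hf-file` name is again an assumption about which quant the repo ships.

```bash
# Serve the model over an OpenAI-compatible API on localhost:8080.
llama-server --hf-repo ysn-rfd/DeepSeek-R1-Distill-Qwen-7B-GGUF \
  --hf-file deepseek-r1-distill-qwen-7b-q4_k_m.gguf \
  -c 2048

# Query it from another shell.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "why is the sky blue?"}]}'
```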

gemma-3-4b-it-GGUF

llama-cpp
29
1

NSFW_DPO_Noromaid-7b-Q4_0-GGUF

llama-cpp
29
1

stablelm-zephyr-3b-Q8_0-GGUF

llama-cpp
29
0

xLAM-1b-fc-r-Q4_0-GGUF

llama-cpp
27
0

qwen3_1.7b_persian

- Developed by: ysn-rfd
- License: apache-2.0
- Fine-tuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

license:apache-2.0
25
1

openhands-lm-7b-v0.1-GGUF

llama-cpp
25
0

PersianMind-v1.0-Q4_K_M-GGUF

llama-cpp
24
1

Z1-7B-GGUF

llama-cpp
24
1

FIBONACCI_PERSIAN_MODEL

- Developed by: ysn-rfd
- License: apache-2.0
- Fine-tuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

license:apache-2.0
24
1

gemma-3-4b-it-qat-int4-unquantized-Q4_K_M-GGUF

ysn-rfd/gemma-3-4b-it-qat-int4-unquantized-Q4_K_M-GGUF This model was converted to GGUF format from `google/gemma-3-4b-it-qat-int4-unquantized` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). A download sketch follows this entry.

llama-cpp
23
1
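
If you prefer to download the file once instead of streaming it, the Hub CLI works too; the `.gguf` filename is an assumption, so list the repo files first if unsure.

```bash
# Download one GGUF file from the repo into the current directory.
huggingface-cli download \
  ysn-rfd/gemma-3-4b-it-qat-int4-unquantized-Q4_K_M-GGUF \
  gemma-3-4b-it-qat-int4-unquantized-q4_k_m.gguf \
  --local-dir .
```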

gpt-oss-4.2b-specialized-harmful-pruned-moe-only-4-experts-Q8_0-GGUF

llama-cpp
23
1

FIBO_IDEN_MODEL

- Developed by: ysn-rfd
- License: apache-2.0
- Fine-tuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

license:apache-2.0
23
1

Test_FIBO_IDEN_FFN

- Developed by: ysnrfd
- License: apache-2.0
- Fine-tuned from model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit

license:apache-2.0
21
2

txgemma-2b-predict-GGUF

llama-cpp
21
1

BabyMistral-GGUF

llama-cpp
21
1

calme-3.3-instruct-3b-GGUF

llama-cpp
21
0

DAN-Qwen3-1.7B-Q8_0-GGUF

ysn-rfd/DAN-Qwen3-1.7B-Q8_0-GGUF This model was converted to GGUF format from `UnfilteredAI/DAN-Qwen3-1.7B` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
21
0

LFM2-8B-A1B-Q4_0-GGUF

llama-cpp
21
0

EXAONE-Deep-2.4B-GGUF

llama-cpp
20
1

ysnrfd-fa-persian-slm-3m

license:apache-2.0
19
1

CodeLlama-7b-Python-hf-Q4_K_M-GGUF

llama-2
18
2

gpt2-xl-Q8_0-GGUF

llama-cpp
17
1

OpenELM-3B-Instruct-GGUF

llama-cpp
17
1

HallOumi-8B-GGUF

llama-cpp
17
1

beecoder-220M-python-Q8_0-GGUF

smol_llama
16
2

openchat_3.5-Q8_0-GGUF

llama-cpp
16
0

bloomz-3b-GGUF

llama-cpp
16
0

chargen-v2-GGUF

llama-cpp
15
1

text2image-prompt-generator-Q8_0-GGUF

llama-cpp
15
0

LFM2-8B-A1B-Q8_0-GGUF

llama-cpp
14
0

granite-3.2-2b-instruct-GGUF

llama-cpp
13
2

HelpingAI2.5-2B-GGUF

llama-cpp
13
0

Arch-Function-Chat-3B-GGUF

llama-cpp
12
1

gpt4all-falcon-Q2_K-GGUF

llama-cpp
12
0

bloom-560m-RLHF-SD2-prompter-Q8_0-GGUF

llama-cpp
12
0

LLuMi_Think_3B-GGUF

llama
11
1

OpenELM-1_1B-Instruct-Q8_0-GGUF

llama-cpp
11
0

TinyMistral-248M-v2.5-Instruct-GGUF

llama-cpp
10
2

OpenELM-3B-Instruct-Q4_0-GGUF

llama-cpp
10
1

Marco-o1-Q8_0-GGUF

llama-cpp
10
0

ReaderLM-v2-Q8_0-GGUF

llama-cpp
10
0

TinyLlama_v1.1_math_code-GGUF

llama-cpp
10
0

Phi-4-mini-instruct-GGUF

llama-cpp
9
1

DeepHermes-3-Llama-3-3B-Preview-GGUF

Llama-3
9
0

WizardLM-7B-Uncensored-Q4_K_M-GGUF

llama-cpp
9
0

gemma-3-4b-it-qat-int4-unquantized-Q8_0-GGUF

ysn-rfd/gemma-3-4b-it-qat-int4-unquantized-Q8_0-GGUF This model was converted to GGUF format from `google/gemma-3-4b-it-qat-int4-unquantized` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
9
0

Athena-R3-1.5B-Q8_0-GGUF

llama-cpp
8
2

AceInstruct-1.5B-GGUF

llama-cpp
8
1

stable-code-instruct-3b-Q8_0-GGUF

ysn-rfd/stable-code-instruct-3b-Q8_0-GGUF This model was converted to GGUF format from `stabilityai/stable-code-instruct-3b` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
8
0

Open-RS3-GGUF

llama-cpp
8
0

First_Persian_SLM_Big_Update_Version3_ysnrfd

WARNING: this model is pre-trained only and will be fine-tuned in the future. The first Persian SLM by YSNRFD (Yasin Aryanfard) and Amirhossein Mehrdoost. It supports only Persian text inputs; English language support is planned.
- Developed by: ysnrfd (Yasin Aryanfard)
- Funded by: ysnrfd (Yasin Aryanfard) and Amirhossein Mehrdoost (https://huggingface.co/fibonacciai)
- Shared by: ysnrfd (Yasin Aryanfard) and Amirhossein Mehrdoost (https://huggingface.co/fibonacciai)
- Model type: SLM
- Language(s) (NLP): Persian
- License: ysnrfd LICENSE
- Sample Persian text: https://huggingface.co/datasets/ysn-rfd/fibonaccialpacatosharegptgptformatconvertnewdatasetrelease
- Hardware type: Nvidia Tesla T4 (1)
- Hours used: 1
- Cloud provider: Google Colab

7
2

TinyMistral-248M-v2.5-GGUF

llama-cpp
7
1

OpenCoder-1.5B-Instruct-GGUF

llama-cpp
7
1

causal-language-modeling-Q2_K-GGUF

llama-cpp
7
0

stablelm-zephyr-3b-Q4_K_M-GGUF

llama-cpp
7
0

openhands-lm-1.5b-v0.1-GGUF

llama-cpp
7
0

zeta-GGUF

llama-cpp
7
0

PersianMind-v1.0-GGUF

llama-cpp
6
1

stablelm-zephyr-3b-Q2_K-GGUF

llama-cpp
6
0

CodeLlama-7b-Python-hf-Q2_K-GGUF

llama-2
6
0

Qwen2.5-Coder-3B-Instruct-Q8_0-GGUF

llama-cpp
6
0

Promt-generator-Q8_0-GGUF

ysn-rfd/Promt-generator-Q8_0-GGUF This model was converted to GGUF format from `UnfilteredAI/Promt-generator` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
6
0

Persian-to-English-Translation-mT5-V1-Q8_0-GGUF

llama-cpp
5
1

NSFW_DPO_Noromaid-7b-GGUF

llama-cpp
5
1

distilgpt2-Q8_0-GGUF

llama-cpp
5
0

TinyLlama_v1.1_math_code-Q2_K-GGUF

llama-cpp
5
0

Qwen2-7B-Instruct-Q4_K_M-GGUF

llama-cpp
5
0

gpt2-large-Q2_K-GGUF

ysn-rfd/gpt2-large-Q2_K-GGUF This model was converted to GGUF format from `openai-community/gpt2-large` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
5
0

DeepScaleR-1.5B-Preview-Q8_0-GGUF

ysn-rfd/DeepScaleR-1.5B-Preview-Q8_0-GGUF This model was converted to GGUF format from `agentica-org/DeepScaleR-1.5B-Preview` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
5
0

Ling-Coder-lite-Q8_0-GGUF

llama-cpp
5
0

TinyLlama-1.1B-Chat-v1.0-Q8_0-GGUF

llama-cpp
4
0

OpenELM-3B-Instruct-Q2_K-GGUF

llama-cpp
4
0

open_llama_3b_v2-Q2_K-GGUF

llama-cpp
4
0

gemma-3-1b-it-Q8_0-GGUF

llama-cpp
4
0

Open-RS3-Q8_0-GGUF

ysn-rfd/Open-RS3-Q8_0-GGUF This model was converted to GGUF format from `knoveleng/Open-RS3` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
4
0

openchat_3.5-GGUF

llama-cpp
4
0

h2o-danube3.1-4b-chat-Q8_0-GGUF

llama-cpp
4
0

h2o-danube3.1-4b-chat-Q5_0-GGUF

llama-cpp
4
0

TinyLlama_v1.1-GGUF

llama-cpp
4
0

OLMo-2-1124-7B-Instruct-GGUF

llama-cpp
4
0

gpt2-Q8_0-GGUF

llama-cpp
3
1

Qwen2.5-0.5B-Instruct-Q8_0-GGUF

llama-cpp
3
1

HelpingAI2.5-5B-GGUF

llama-cpp
3
1

gpt2-xl-Q4_K_M-GGUF

llama-cpp
3
0

h2o-danube3-4b-chat-Q4_0-GGUF

llama-cpp
3
0

open_llama_3b-Q4_K_M-GGUF

llama-cpp
3
0

Qwen2.5-3B-Instruct-Q8_0-GGUF

llama-cpp
3
0

Qwen2.5-1.5B-Instruct-Q8_0-GGUF

llama-cpp
3
0

gemma-3-4b-it-Q8_0-GGUF

llama-cpp
3
0

Mistral-7B-Instruct-v0.3-GGUF

llama-cpp
3
0

h2o-danube3.1-4b-chat-Q4_K_M-GGUF

llama-cpp
3
0

WizardLM-7B-Uncensored-Q5_K_M-GGUF

llama-cpp
3
0

openchat-3.5-0106-Q8_0-GGUF

llama-cpp
3
0

fibonacci_test_release-Q4_K_M-GGUF

ysn-rfd/fibonacci_test_release-Q4_K_M-GGUF This model was converted to GGUF format from `ysn-rfd/fibonacci_test_release` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
3
0

BADMISTRAL-1.5B-GGUF

llama-cpp
2
2

SmolLM-1.7B-Q8_0-GGUF

llama-cpp
2
0

OpenELM-3B-Instruct-Q8_0-GGUF

llama-cpp
2
0

causal-language-modeling-Q4_K_M-GGUF

llama-cpp
2
0

gpt2-Q2_K-GGUF

llama-cpp
2
0

OpenELM-450M-Instruct-Q8_0-GGUF

llama-cpp
2
0

Qwen2.5-Coder-7B-Instruct-Q8_0-GGUF

llama-cpp
2
0

h2o-danube3.1-4b-chat-Q4_0-GGUF

llama-cpp
2
0

WizardLM-7B-Uncensored-Q5_0-GGUF

llama-cpp
2
0

First_Persian_SLM_Big_Update_Version2_ysnrfd

1
3

OlympicCoder-7B-GGUF

llama-cpp
1
1

TinyLlama_v1.1_math_code-Q8_0-GGUF

llama-cpp
1
0

gpt2-medium-Q2_K-GGUF

ysn-rfd/gpt2-medium-Q2_K-GGUF This model was converted to GGUF format from `openai-community/gpt2-medium` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. Use with llama.cpp: install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

llama-cpp
1
0

Sweet-mix_v2.2_flat-openvino

1
0

t5-v1_1-base-Q8_0-GGUF

llama-cpp
1
0

TinyDolphin-2.8-1.1b-Q8_0-GGUF

llama-cpp
1
0

SmolLM2-1.7B-Instruct-Q8_0-GGUF

llama-cpp
1
0

DeepSeek-R1-Distill-Qwen-1.5B-Q8_0-GGUF

llama-cpp
1
0

gemma-3-4b-persian-v0-Q8_0-GGUF

llama-cpp
1
0

AceInstruct-7B-GGUF

llama-cpp
1
0

WizardLM-7B-Uncensored-Q8_0-GGUF

llama-cpp
1
0

fibonacci_test_release-Q2_K-GGUF

llama-cpp
1
0

Fibonacci-Spiral-Positional-Encoding

0
2

AGI-Core

0
2

Test_FIBO_IDEN

- Developed by: ysn-rfd
- License: apache-2.0
- Fine-tuned from model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit

This Gemma 3n model was trained 2x faster with Unsloth and Hugging Face's TRL library.

license:apache-2.0
0
2

YASIN-V2-GGUF

llama-cpp
0
1

YASIN-V2

0
1

YASIN-V1

0
1

Spectral-Basis-Adapter

license:apache-2.0
0
1

dreamshaper-xl-fp8-i

0
1

hassaku-xl-illustrious-fp8-i

0
1

wai-illustrious-sdxl-fp8-p

0
1

wai-illustrious-sdxl-fp8-i

0
1

ysnrfd-base-V3

license:apache-2.0
0
1

RealRobot_LLM

license:apache-2.0
0
1

RealRobot_LLM-Q8_0-GGUF

llama-cpp
0
1

RealRobot_LLM_LoRA1

license:apache-2.0
0
1

MODEL

license:mit
0
1

llm-t97

license:apache-2.0
0
1

finetune-smollm2-135m-instruct

llama
0
1

openchat-3.5-0106-GGUF

llama-cpp
0
1

pushed_to_hub_ysnrfd

license:apache-2.0
0
1

tokenizer_ysnrfd

0
1

fibonacci_test_release

- Developed by: ysn-rfd
- License: apache-2.0
- Fine-tuned from model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

license:apache-2.0
0
1

FIBO_IDEN_MODEL_ADAPTER

- Developed by: ysn-rfd
- License: apache-2.0
- Fine-tuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

license:apache-2.0
0
1

Lora_Model_Persian_Qwen

- Developed by: ysn-rfd
- License: apache-2.0
- Fine-tuned from model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

license:apache-2.0
0
1