John1604

42 models

DeepSeek-R1-Distill-Qwen-32B-gguf

This is distilled DeepSeek-R1. Make sure you have enough RAM/GPU to run it; the size of each quantized model is shown on the right of the model card.

To run with Ollama (Windows command line or Ubuntu terminal), type:

ollama run hf.co/John1604/DeepSeek-R1-Distill-Qwen-32B-gguf:q3km

(`q3km` is the quantization type; `q5ks`, `q4km`, etc. can also be used.)

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/DeepSeek-R1-Distill-Qwen-32B-gguf" with "GGUF" checked, and the model should be found. Download the model (you may choose the quantization type in the download window), load it into LM Studio, and ask questions.

| Type | Bits  | Quality          | Description                          |
| ---- | ----- | ---------------- | ------------------------------------ |
| Q2K  | 2-bit | 🟥 Low           | Minimal footprint; only for tests    |
| Q3KS | 3-bit | 🟧 Low           | “Small” variant (less accurate)      |
| Q3KM | 3-bit | 🟧 Low–Med       | “Medium” variant                     |
| Q4KS | 4-bit | 🟨 Med           | Small, faster, slightly less quality |
| Q4KM | 4-bit | 🟩 Med–High      | “Medium”; best 4-bit balance         |
| Q5KS | 5-bit | 🟩 High          | Slightly smaller than Q5KM           |
| Q5KM | 5-bit | 🟩🟩 High        | Excellent general-purpose quant      |
| Q6K  | 6-bit | 🟩🟩🟩 Very High | Almost FP16 quality, larger size     |
| Q80  | 8-bit | 🟩🟩🟩🟩         | Near-lossless baseline               |
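A rough rule of thumb for relating the quant types in the table to RAM/GPU needs is parameters × bits ÷ 8 bytes. The sketch below is illustrative only; real GGUF files run somewhat larger because some tensors are kept at higher precision:

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8 bytes.
# Bit widths taken from the quantization table above.
QUANT_BITS = {"q2k": 2, "q3ks": 3, "q3km": 3, "q4ks": 4, "q4km": 4,
              "q5ks": 5, "q5km": 5, "q6k": 6, "q80": 8}

def approx_size_gb(n_params: float, quant: str) -> float:
    """Approximate model file size in GB for a given quant type."""
    bits = QUANT_BITS[quant.lower()]
    return n_params * bits / 8 / 1e9

# e.g. a 32B-parameter model at q4km needs very roughly 16 GB of RAM/VRAM
print(round(approx_size_gb(32e9, "q4km"), 1))
```

This is why the 32B and 70B models above carry the "make sure you have enough RAM/GPU" warning: even at 4 bits they need tens of gigabytes.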

license:apache-2.0
1,677
3

DeepSeek-R1-Distill-Llama-70B-gguf

This is distilled DeepSeek-R1. Make sure you have enough RAM/GPU to run it; the size of each quantized model is shown on the right of the model card.

To run with Ollama (Windows command line or Ubuntu terminal), type:

ollama run hf.co/John1604/DeepSeek-R1-Distill-Llama-70B-gguf:q3km

(`q3km` is the quantization type; `q5ks`, `q4km`, etc. can also be used.)

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/DeepSeek-R1-Distill-Llama-70B-gguf" with "GGUF" checked, and the model should be found. Download the model, load it, and ask questions.

The quantization table (Q2K through Q80) is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above.

base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B
1,668
3

Qwen3-Coder-480B-A35B-Instruct-gguf

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/Qwen3-Coder-480B-A35B-Instruct-gguf" with "GGUF" checked, and the model should be found. Download the model (you may choose the quantization type in the download window), load it into LM Studio, and ask questions.

The quantization table is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above, except that no Q5 variants (Q5KS/Q5KM) are listed for this model.

license:apache-2.0
1,095
2

DeepSeek-R1-0528-Qwen3-8B-gguf

This is an LLM about HIPAA law; ask it about HIPAA.

To run with Ollama (Windows command line or Ubuntu terminal), type:

ollama run hf.co/John1604/DeepSeek-R1-0528-Qwen3-8B-gguf:q6k

(`q6k` is the quantization type; `q5ks`, `q4km`, etc. can also be used.)

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/DeepSeek-R1-0528-Qwen3-8B-gguf" with "GGUF" checked, and the model should be found. Download the model, load it, and ask questions.

The quantization table (Q2K through Q80) is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above.

license:apache-2.0
572
1

DeepSeek-R1-Distill-Qwen-14B-gguf

This is distilled DeepSeek-R1. Make sure you have enough RAM/GPU to run it; the size of each quantized model is shown on the right of the model card.

To run with Ollama (Windows command line or Ubuntu terminal), type:

ollama run hf.co/John1604/DeepSeek-R1-Distill-Qwen-14B-gguf:q3km

(`q3km` is the quantization type; `q5ks`, `q4km`, etc. can also be used.)

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/DeepSeek-R1-Distill-Qwen-14B-gguf" with "GGUF" checked, and the model should be found. Download the model, load it, and ask questions.

The quantization table (Q2K through Q80) is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above.

license:apache-2.0
408
1

Qwen3-235B-A22B-Instruct-2507-gguf

Make sure you have enough RAM/GPU to run it; the size of each quantized model is shown on the right of the model card.

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/Qwen3-235B-A22B-Instruct-2507-gguf" with "GGUF" checked, and the model should be found. Download the model (you may choose the quantization type in the download window), load it into LM Studio, and ask questions.

The quantization table is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above, except that no Q5 variants (Q5KS/Q5KM) are listed for this model.

license:apache-2.0
405
1

John1604-strongperson-negotiate-japanese-gguf

388
2

John1604-negotiation-with-strongpersons-korean-gguf

355
1

Qwen3-Next-80B-A3B-Instruct-gguf

license:apache-2.0
308
0

GLM-4.5-Air-gguf

license:apache-2.0
289
0

MiniMax-M2.1-gguf

288
0

Baichuan-M2-32B-gguf

LLMs are an invention as important as electricity, and more important than the internet, phones, and computers combined. Make sure you have enough RAM/GPU to run this model; the size of each quantized model is shown on the right of the model card.

To run with Ollama (Windows command line or Ubuntu terminal), type:

ollama run hf.co/John1604/Baichuan-M2-32B-gguf:q4ks

(`q4ks` is the quantization type; `q5ks`, `q4km`, etc. can also be used.)

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/Baichuan-M2-32B-gguf" with "GGUF" checked, and the model should be found. Download the model (you may choose a quantized variant), load it, and ask questions.

The quantization table (Q2K through Q80) is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above.

license:apache-2.0
282
1

Qwen3-14B-gguf

Make sure you have enough RAM/GPU to run it; the size of each quantized model is shown on the right of the model card.

To run with Ollama (Windows command line or Ubuntu terminal), type:

ollama run hf.co/John1604/Qwen3-14B-gguf:q5km

(`q5km` is the quantization type; `q5ks`, `q4km`, etc. can also be used.)

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/Qwen3-14B-gguf" with "GGUF" checked, and the model should be found. Download the model (you may choose a quantized variant), load it, and ask questions.

The quantization table (Q2K through Q80) is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above.

license:apache-2.0
277
0

Qwen3-30B-A3B-Instruct-2507-gguf

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604" with "GGUF" checked, and the model should be found. Download the model, load it, and ask questions.

To run with Ollama (Windows command line or Ubuntu terminal), type:

ollama run hf.co/John1604/Qwen3-30B-A3B-Instruct-2507-gguf:q6k

(`q6k` is the quantization type; `q5ks`, `q4km`, etc. can also be used.)

The quantization table (Q2K through Q80) is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above.

license:apache-2.0
258
1

John1604-AI-status-2025-gguf

Make sure you have enough RAM/GPU to run it; the size of each quantized model is shown on the right of the model card.

To run with Ollama (Windows command line or Ubuntu terminal), type:

ollama run hf.co/John1604/John1604-AI-status-2025-gguf:q4ks

(`q4ks` is the quantization type; `q5ks`, `q4km`, etc. can also be used.)

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/John1604-AI-status-2025-gguf" with "GGUF" checked, and the model should be found. Download the model (you may choose a quantized variant), load it, and ask questions.

The quantization table (Q2K through Q80) is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above.

238
1

strongperson-negotiate-zh-gguf

To run with Ollama (Windows command line or Ubuntu terminal), type:

ollama run hf.co/John1604/strongperson-negotiate-zh-gguf:q6k

(`q6k` is the quantization type; `q5ks`, `q4km`, etc. can also be used.)

International Inventor's License: if the use is not commercial, it is free to use without any fees. For commercial use, if the company or individual does not make any profit, no fees are required. For commercial use, if the company or individual has a net profit, they should pay 1% of the net profit or 0.5% of the sales revenue, whichever is less. For commercial use, we provide product-related services.

The quantization table (Q2K through Q80) is the same as for DeepSeek-R1-Distill-Qwen-32B-gguf above.
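The fee rule in the International Inventor's License (free when non-commercial or unprofitable; otherwise the lesser of 1% of net profit and 0.5% of sales revenue) can be sketched as a small function. The function name and the example figures are illustrative, not part of the license text:

```python
def inventor_license_fee(net_profit: float, sales_revenue: float) -> float:
    """Fee under the license terms quoted above: nothing when there is
    no net profit; otherwise the lesser of 1% of net profit and
    0.5% of sales revenue."""
    if net_profit <= 0:
        return 0.0
    return min(0.01 * net_profit, 0.005 * sales_revenue)

# A business with 100k profit on 1M revenue: min(1000, 5000) -> 1000.0
print(inventor_license_fee(100_000, 1_000_000))
```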

230
1

gpt-oss-120b-gguf

Make sure you have enough RAM/GPU to run the application. To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604/gpt-oss-120b-gguf" with "GGUF" checked, and the model should be found. Download the model, load it, and ask questions.
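Beyond the chat window, a model loaded in LM Studio can also be queried programmatically: LM Studio can expose a local OpenAI-compatible server (by default at http://localhost:1234/v1). The sketch below only builds the JSON body for its /v1/chat/completions endpoint; the model identifier shown is illustrative, and actually sending the request assumes the server is running with the model loaded:

```python
import json

def build_chat_request(model: str, question: str) -> str:
    """Build the JSON body for LM Studio's OpenAI-compatible
    /v1/chat/completions endpoint (default http://localhost:1234/v1)."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.7,
    }
    return json.dumps(body)

# Model identifier is hypothetical; use the one LM Studio shows after loading.
payload = build_chat_request("john1604/gpt-oss-120b-gguf", "What is GGUF?")
print(json.loads(payload)["messages"][0]["role"])
```

The body could then be POSTed with any HTTP client (e.g. `urllib.request` or `curl`).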

license:apache-2.0
207
0

MiniMax-M2-gguf

license:mit
171
0

John1604-Bible-Xpert-Korean-version-3.0-gguf

165
0

Qwen3-8B-gguf

license:apache-2.0
126
0

Qwen3-VL-2B-Thinking-gguf

license:apache-2.0
100
0

Qwen3-235B-A22B-Thinking-2507-gguf

license:apache-2.0
88
1

Qwen3-Coder-30B-A3B-Instruct-gguf

license:apache-2.0
68
1

HYPAA-John1604-en-2025-gguf

59
2

John1604-negotiation-with-strongpersons-korean

The model card includes a code snippet illustrating how to use the model to generate content from given inputs.

49
0

Qwen3-Next-80B-A3B-Thinking-gguf

license:apache-2.0
42
0

GLM-4.5-Air-f16-gguf

license:apache-2.0
36
0

John1604-AI-status-2025

26
1

art-of-negotiation-with-strongpersons-chinese

This model can answer questions and guide people in mastering negotiation skills when dealing with strongmen. GGUF files are available that can be used directly in Ollama or LM Studio; the model can also be used from vLLM or transformers code. For example, when asked a tough question, it answers:

A: In negotiations, the strong side lowers the other party's psychological defenses by disguising its own position as advice that benefits them. When one side presents an option that appears reasonable and persuasive, the other side tends to accept it even if the option does not truly serve their interests, because this framing makes them feel they made the choice themselves rather than being forced into a concession. For example, when stating conditions or demands, the strong side does not present them as the final plan but packages them as suggestions from "a best friend," emphasizing their fairness, reasonableness, and fit with both parties' shared interests. This tactic exploits the human psychological need for autonomous decision-making and emotional identification, so the other party believes it is defending its own position while being invisibly steered into concessions. As the example in the text shows, by cleverly designing the "options," the strong side turns what might otherwise be seen as coercive demands into an appearance of cooperation and mutual gain, compelling the opponent to unknowingly accept its true intent and ultimately achieving its goal.

International Inventor's License: if the use is not commercial, it is free to use without any fees. For commercial use, if the company or individual does not make any profit, no fees are required. For commercial use, if the company or individual has a net profit, they should pay 1% of the net profit or 0.5% of the sales revenue, whichever is less. For commercial use, we provide product-related services.

24
1

John1604

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- Developed by: Willam Hsu, John Xu
- Model type: Text Generation
- Language(s) (NLP): Chinese
- License: Apache 2.0 license (https://github.com/aml2lab/Inventor-s-License)

This model can provide in-depth web security and LLM security knowledge. The code for John1604 is included in Hugging Face `transformers` as of version 4.55.4, and we advise you to use the latest version of `transformers`. The model card includes a code snippet illustrating how to use the model to generate content from given inputs. Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]

license:apache-2.0
23
1

John1604-AI-status-japanese-2025

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- Developed by: Willam Hsu, John Xu
- Model type: Text Generation
- Language(s) (NLP): Japanese
- License: International Inventor's License

The code for John1604 is included in Hugging Face `transformers` as of version 4.55.4, and we advise you to use the latest version of `transformers`. The model card includes a code snippet illustrating how to use the model to generate content from given inputs.

23
0

Qwen3-VL-32B-Instruct-gguf

license:apache-2.0
18
0

John1604-strongperson-negotiate-ja

15
1

John1604-AML3-HYPAA

12
1

Qwen3-VL-30B-A3B-Thinking-gguf

license:apache-2.0
11
0

AML3-Security-Advisor

International Inventor's License: if the use is not commercial, it is free to use without any fees. For commercial use, if the company or individual does not make any profit, no fees are required. For commercial use, if the company or individual has a net profit, they should pay 1% of the net profit or 0.5% of the sales revenue, whichever is less. For commercial use, we provide product-related services.

8
2

Kimi-K2-Thinking-q6K-gguf

7
0

John1604-Deal-Bully-english-gguf

5
0

AML3-Security-and-databricks-dolly-15k

International Inventor's License: if the use is not commercial, it is free to use without any fees. For commercial use, if the company or individual does not make any profit, no fees are required. For commercial use, if the company or individual has a net profit, they should pay 1% of the net profit or 0.5% of the sales revenue, whichever is less. For commercial use, we provide product-related services.

5
0

John1604-AML3-HYPAA-English

2
0

John1604-Deal-Bully-chinese

1
0

gpt-oss-20b-gguf

To run with LM Studio, download and install it from https://lmstudio.ai/, click the "Discover" icon to open the "Mission Control" window, search for "John1604" with "GGUF" checked, and the model should be found. Download the model, load it, and ask questions.

license:apache-2.0
0
1