AlicanKiraz0

19 models

Cybersecurity BaronLLM Offensive Security LLM Q6_K GGUF

llama-cpp
641
111

Seneca-Cybersecurity-LLM-Q4_K_M-GGUF

**Links**
- Medium: https://alican-kiraz1.medium.com/
- LinkedIn: https://tr.linkedin.com/in/alican-kiraz
- X: https://x.com/AlicanKiraz0
- YouTube: https://youtube.com/@alicankiraz0

SenecaLLM was trained and fine-tuned over nearly one month (around 100 hours in total) on various systems such as 1x4090, 8x4090, and 3xH100, focusing on the cybersecurity topics below. Its goal is to think like a cybersecurity expert and assist with your questions. It has also been fine-tuned to counteract malicious use. Over time, it will specialize in the following areas:
- Incident Response
- Threat Hunting
- Code Analysis
- Exploit Development
- Reverse Engineering
- Malware Analysis

"Those who shed light on others do not remain in darkness..."

**AlicanKiraz0/SenecaLLM-Q4KM-GGUF**

This model was converted to GGUF format from `AlicanKiraz0/SenecaLLM` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

**Use with llama.cpp**

Step 1: Install llama.cpp through brew (works on Mac and Linux). Note: you can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
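The two steps above can be sketched as shell commands. This is a minimal sketch: the `--hf-file` name below is a placeholder assumption, not taken from the repo, so substitute the actual GGUF filename listed there.

```shell
# Step 1: install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Alternative: build from source with CURL support enabled, plus
# hardware-specific flags (LLAMA_CUDA=1 only for NVIDIA GPUs on Linux)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_CURL=1 LLAMA_CUDA=1

# Run the quantized checkpoint directly from the Hugging Face repo;
# the file name below is a placeholder, check the repo for the real one
llama-cli --hf-repo AlicanKiraz0/SenecaLLM-Q4KM-GGUF \
  --hf-file senecallm-q4_k_m.gguf \
  -p "Summarize the phases of an incident response plan."
```

Building with `LLAMA_CURL=1` is what lets `llama-cli` fetch the model over HTTPS via `--hf-repo`/`--hf-file` instead of requiring a manual download.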

llama-cpp
590
28

BaronLLM-v2-OffensiveSecurityLLM-Q8

**BaronLLM v2.0 - State-of-the-Art Offensive Security AI Model**

Developed by the Trendyol Group Security Team: Alican Kiraz, İsmail Yavuz, Melih Yılmaz, Mertcan Kondur, Rıza Sabuncu, Özgün Kultekin

> BaronLLM v2.0 is a state-of-the-art large language model fine-tuned specifically for offensive cybersecurity research and adversarial simulation, achieving strong performance on industry benchmarks while maintaining safety constraints.

**CS-Eval Global Rankings**
- 13th place globally among all cybersecurity AI models
- 4th place among publicly released models in its parameter class
- Comprehensive average score: 80.93%

| Category | BaronLLM v2.0 | vs. Industry Leaders |
|----------|---------------|----------------------|
| Standards & Regulations | 87.2% | Only 4.3 points behind Deepseek-v3 (671B) while 48× smaller |
| Application Security | 85.5% | Just 4.8 points behind GPT-4o (175B) while 12.5× more compact |
| Endpoint & Host | 88.1% | Only 1.4 points behind o1-preview (200B) at 14× higher efficiency |
| MCQ Overall | 86.9% | Within 2-6% of premium models |

The model was trained on 4 H100 GPUs for 65 hours.
**Performance Improvements (v1 → v2)**
- Base model performance boosted by ~1.5x on CyberSec-Eval benchmarks
- Enhanced with causal reasoning and Chain-of-Thought (CoT) capabilities

| Capability | Details |
|------------|---------|
| Adversary Simulation | Generates full ATT&CK chains, C2 playbooks, and social-engineering scenarios |
| Exploit Reasoning | Step-by-step vulnerability analysis with code-level explanations and PoC generation |
| Payload Optimization | Advanced obfuscation techniques and multi-stage payload logic |
| Threat Intelligence | Log analysis, artifact triage, and attack pattern recognition |
| Cloud-Native Security | Kubernetes, serverless, and multi-cloud environment testing |
| Emerging Threats | AI/ML security, quantum computing risks, and zero-day research |

| Specification | Details |
|---------------|---------|
| Base Model | Qwen3-14B |
| Parameters | 14 Billion |
| Context Length | 8,192 tokens |
| Training Data | 53,202 curated examples |
| Domains Covered | 200+ specialized cybersecurity areas |
| Languages | English |
| Fine-tuning Method | Instruction tuning with CoT |

The 53,202 meticulously curated instruction-tuning examples cover 200+ specialized cybersecurity domains.

**Topic Distribution**
- Cloud Security & DevSecOps: 18.5%
- Threat Intelligence & Hunting: 16.2%
- Incident Response & Forensics: 14.8%
- AI/ML Security: 12.3%
- Network & Protocol Security: 11.7%
- Identity & Access Management: 9.4%
- Emerging Technologies: 8.6%
- Platform-Specific Security: 5.3%
- Compliance & Governance: 3.2%

**Data Sources (Curated & Redacted)**
- Public vulnerability databases (NVD/CVE, VulnDB)
- Security research papers (Project Zero, PortSwigger, NCC Group)
- Industry threat reports (with permissions)
- Synthetic ATT&CK chains (auto-generated + human-vetted)
- Conference proceedings (Black Hat, DEF CON, RSA)

> Note: No copyrighted exploit code or proprietary malware datasets were used.
> Dataset filtering removed raw shellcode/binary payloads.
| Objective | Template | Parameters |
|-----------|----------|------------|
| Exploit Analysis | `ROLE: Senior Pentester\nOBJECTIVE: Analyze CVE-XXXX...` | `temperature=0.3, top_p=0.9` |
| Red Team Planning | `Generate ATT&CK chain for [target environment]...` | `temperature=0.5, top_p=0.95` |
| Threat Hunting | `Identify C2 patterns in [log type]...` | `temperature=0.2, top_p=0.85` |
| Incident Response | `Create response playbook for [threat scenario]...` | `temperature=0.4, top_p=0.9` |

**Ethical Framework**
- Policy-gradient RLHF with security domain experts
- OpenAI/Anthropic-style policies preventing malicious misuse
- Continuous red-teaming via SecEval v0.3
- Dual-use prevention mechanisms

**Responsible Disclosure**
- Model capabilities are documented transparently
- Access restricted to verified professionals
- Usage monitoring for compliance
- Regular security audits

The technical paper detailing BaronLLM v2.0's architecture, training methodology, and benchmark results will be available on arXiv within one month.

BaronLLM was originally developed to support the Trendyol Group Security Team and has evolved into a state-of-the-art offensive security AI model. We welcome collaboration from the security community:
- Bug Reports: via GitHub Issues
- Feature Requests: through community discussions
- Research Collaboration: contact for academic partnerships

Important: This model is designed for authorized security testing and research only. Users must comply with all applicable laws and obtain proper authorization before conducting any security assessments. The developers assume no liability for misuse.

Special thanks to:
- Trendyol Group Security Team
- The open-source security community
- The academic cybersecurity research community
- All contributors and testers

"Those who shed light on others do not remain in darkness..."
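As an aside on the `temperature` and `top_p` values in the table above: this is a generic pure-Python sketch of temperature scaling plus nucleus (top-p) filtering, illustrating what the parameters control, not BaronLLM's actual sampling implementation.

```python
import math
import random

def sample_top_p(logits, temperature=0.3, top_p=0.9, rng=None):
    """Temperature-scale logits, keep the smallest set of tokens whose
    cumulative probability reaches top_p, then sample from that set."""
    rng = rng or random.Random(0)

    # Softmax with temperature (subtract max for numerical stability)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sort token indices by probability, descending
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)

    # Keep tokens until cumulative probability reaches top_p (the "nucleus")
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Renormalize over the nucleus and sample from it
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

Lower temperature sharpens the distribution toward the top token and lower top_p shrinks the candidate set, so the Threat Hunting preset (`temperature=0.2, top_p=0.85`) is the most deterministic of the four, while Red Team Planning (`0.5`/`0.95`) allows the most variety.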

llama-cpp
248
7

Qwen3-14B-BaronLLM-v2-Q4_0-GGUF

llama-cpp
195
5

Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity-Q8_0-GGUF

**AlicanKiraz0/SenecaLLMxQwen2.5-7B-CyberSecurity-Q80-GGUF**

This model was converted to GGUF format from `AlicanKiraz0/SenecaLLMxQwen2.5-7B-CyberSecurity` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. The author links, SenecaLLM training summary, and llama.cpp usage steps are the same as on the SenecaLLM card above.

license:mit
75
12

Seneca-Cybersecurity-LLM-x-QwQ-32B-Q8_Max-Version

license:mit
51
9

Seneca-Cybersecurity-LLM-x-DeepSeek-R1-Distill-Qwen-32B-v1.3-Q4_K_M-GGUF

license:mit
43
9

Seneca-Cybersecurity-LLM-x-QwQ-32B-Q4_Medium-Version

license:mit
40
11

Seneca-Cybersecurity-LLM-x-DeepSeek-R1-Distill-Qwen-32B-v1.3-Safe-Q2_K-GGUF

license:mit
37
7

Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity-Q4_K_M-GGUF

**AlicanKiraz0/SenecaLLMxQwen2.5-7B-CyberSecurity-Q4KM-GGUF**

This model was converted to GGUF format from `AlicanKiraz0/SenecaLLMxQwen2.5-7B-CyberSecurity` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. The author links, SenecaLLM training summary, and llama.cpp usage steps are the same as on the SenecaLLM card above.

license:mit
34
4

Seneca-Cybersecurity-LLM-x-QwQ-32B-Q2_Light-Version

license:mit
28
11

Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity-Q2_K-GGUF

**AlicanKiraz0/SenecaLLMxQwen2.5-7B-CyberSecurity-Q2K-GGUF**

This model was converted to GGUF format from `AlicanKiraz0/SenecaLLMxQwen2.5-7B-CyberSecurity` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. The author links, SenecaLLM training summary, and llama.cpp usage steps are the same as on the SenecaLLM card above.

license:mit
15
3

QwQ-32B-Preview-Seneca-Cybersecurity-LLMv1.2-Q8_0-GGUF

**AlicanKiraz0/QwQ-32B-Preview-SenecaLLMv1.2-Q80-GGUF**

This model was converted to GGUF format from `AlicanKiraz0/QwQ-32B-Preview-SenecaLLMv1.2` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. The author links, SenecaLLM training summary, and llama.cpp usage steps are the same as on the SenecaLLM card above.

llama-cpp
14
3

Seneca-Cybersecurity-LLM-x-DeepSeek-R1-Distill-Qwen-32B-v1.3-Safe-Q8_0-GGUF

With the release of the new DeepSeek-R1, I quickly began training SenecaLLM v1.3 based on this model. Training time:
- About 20 hours in BF16 on 4xH200
- About 10 hours in BF16 on 8xA100
- About 12 hours in FP32 on 8xH200

Thanks to DeepSeek-R1's Turkish support and the dataset used in SenecaLLM v1.3, it can now provide Turkish support. With the new dataset I have prepared, it produces quite good outputs in the following areas:
- Information Security v1.4
- Incident Response v1.3
- Threat Hunting v1.3
- Ethical Exploit Development v1.2
- Purple Team Tactics v1.2
- Reverse Engineering v1.0

"Those who shed light on others do not remain in darkness..."

**AlicanKiraz0/Seneca-x-DeepSeek-R1-Distill-Qwen-32B-v1.3-Safe-Q80-GGUF**

This model was converted to GGUF format from `AlicanKiraz0/Seneca-x-DeepSeek-R1-Distill-Qwen-32B-v1.3-Safe` using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model. The author links and llama.cpp usage steps are the same as on the SenecaLLM card above.

license:mit
13
3

Seneca-Cybersecurity-LLM_x_gemma-2-9b-CyberSecurity-Q4

llama-cpp
11
3

QwQ-32B-Preview-Seneca-Cybersecurity-LLMv1.2-Q4_K_M-GGUF

llama-cpp
10
4

Seneca-Cybersecurity-LLM_x_Qwen2.5-7B-CyberSecurity

license:mit
2
14

Mihenk-LLM-14B-Turkish-Financial-Model

llama-cpp
2
0

Kara-Kumru-v1.0-2B

license:apache-2.0
0
9