Agnuxo

113 models · sorted by downloads

| Model | Tag | Downloads | Likes |
|---|---|---|---|
| Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit | llama | 220 | 3 |
| Mamba-Codestral-7B-v0.1-python_coding_assistant-GGUF_8bit | license:apache-2.0 | 91 | 1 |
| Llama-3.1-Minitron-4B-Instruct_CODE_Python-GGUF_Spanish_English_8bit | llama | 62 | 1 |
| Mamba-Codestral-7B-v0.1-instruct-python_coding_assistant-GGUF_4bit | license:apache-2.0 | 55 | 3 |
| Agente-Llama-3.1-Spanish_English_GGUF_32bit | llama | 54 | 0 |
| Mamba-Codestral-7B-v0.1-instruct-python_coding_assistant-GGUF_16bit | license:apache-2.0 | 53 | 0 |
| Llama-3.1-Minitron-4B-Instruct_CODE_Python-Spanish_English_GGUF_q5_k | llama | 38 | 0 |
| Agente-GPT-Qwen-2.5-7B-Spanish_English_GGUF_4bit | license:apache-2.0 | 37 | 0 |
| Agente-GPT-Qwen2.5-3B-Instruct-Spanish_English_GGUF_q5_k | license:apache-2.0 | 37 | 0 |
| Mistral-NeMo-Minitron-8B-Alpaca-CODE-Python-GGUF-16bit | license:apache-2.0 | 35 | 0 |
| Mistral-Nemo-CODE-Python_assistant-GGUF_16bit | license:apache-2.0 | 32 | 0 |
| Qwen2-1.5B-Instruct_MOE_assistant-GGUF_4bit | license:apache-2.0 | 31 | 0 |
| Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-GGUF_4bit | license:apache-2.0 | 31 | 0 |
| Llama-3.1-Minitron-4B-Instruct_CODE_Python-Spanish_English_GGUF_4bit | llama | 31 | 0 |
| Agente-Llama-3.1-Spanish_English_GGUF_q5_k | llama | 30 | 0 |
| Agente-GPT-Qwen-2.5-3B-Spanish_English_GGUF_32bit | license:apache-2.0 | 30 | 0 |
| Qwen2-1.5B-Instruct_MOE_CODE_assistant-GGUF_4bit | license:apache-2.0 | 28 | 0 |
| Mamba-Codestral-7B-Instruct_CODE_Python-Spanish_English_GGUF_16bit | license:apache-2.0 | 27 | 0 |
| Agente-Llama-3.1-Spanish_English_GGUF_q6_k | llama | 27 | 0 |
| Llama-3.1-Minitron-4B-Instruct_CODE_Python-Spanish_English_GGUF_16bit | llama | 25 | 0 |
| Agente-GPT-Qwen2.5-3B-Instruct-Spanish_English_GGUF_q6_k | license:apache-2.0 | 25 | 0 |
| gemma-2-2b-instruct-python_CODE_assistant-GGUF_4bit | license:apache-2.0 | 24 | 0 |
| Mamba-Codestral-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit | license:apache-2.0 | 22 | 0 |
| Mamba-Codestral-7B-Instruct_CODE_Python-Spanish_English_GGUF_4bit | license:apache-2.0 | 22 | 0 |
| Agente-GPT-Qwen-2.5-3B_GGUF_16bit | license:apache-2.0 | 22 | 0 |
| Agente-GPT-Qwen2.5-3B-Instruct-Spanish_English_GGUF_4bit | license:apache-2.0 | 21 | 0 |
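Most of these models ship as GGUF files in several quantizations (4bit, 8bit, 16bit, q5_k, q6_k). As a rough rule of thumb, the file size scales with bits per weight; the sketch below is illustrative only — the function name and the approximate bits-per-weight figures (k-quants average around 5.5 and 6.5 bits) are my own assumptions, and real GGUF files add metadata and keep some layers at higher precision.

```python
# Rough GGUF file-size estimate: parameter count × average bits per weight.
# Bits-per-weight values are approximations (assumption, not exact GGUF figures);
# 4-bit/8-bit quants carry extra scale data, hence the extra ~0.5 bit.
BITS_PER_WEIGHT = {"4bit": 4.5, "8bit": 8.5, "16bit": 16.0, "q5_k": 5.5, "q6_k": 6.5}

def approx_gguf_gb(n_params: float, quant: str) -> float:
    """Approximate GGUF file size in GiB for a given quantization."""
    bits = BITS_PER_WEIGHT[quant]
    return n_params * bits / 8 / 1024**3

# e.g. a 7B-parameter model at 4-bit vs 16-bit:
print(f"{approx_gguf_gb(7e9, '4bit'):.1f} GB")   # ~3.7 GB
print(f"{approx_gguf_gb(7e9, '16bit'):.1f} GB")  # ~13.0 GB
```

This is why the same model appears here at several quantization levels: the 4-bit file fits on far smaller devices (e.g. a Raspberry Pi) at some cost in output quality.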

NEBULA-X-DEMO

🌌 NEBULA-X: Enhanced Unified Holographic Neural Network

NEBULA-X is a revolutionary AI architecture that combines holographic memory, quantum computing, and optical neural networks to create the world's first production-ready photonic neural network system.

Holographic Neural Networks
- Holographic Memory: information stored as interference patterns in 3D space
- Light-based Processing: neurons represented as points of light with optical properties
- Interferometric Computing: calculations performed through wave interference

Quantum-Enhanced Processing
- 4 Qubits per Neuron: distributed quantum memory for enhanced processing
- Quantum Entanglement: non-local correlations between neural components
- Superposition States: parallel processing of multiple possibilities

Optical Raytracing
- GPU-Accelerated: CUDA kernels for Monte Carlo raytracing
- Real-time Physics: accurate simulation of light propagation
- Material Properties: reflectivity, transmittance, and phase shifts

| Benchmark | Score | Improvement vs Baseline |
|-----------|-------|-------------------------|
| MMLU | 85.0% | +240% |
| GSM8K | 78.0% | +∞% (baseline: 0%) |
| HellaSwag | 92.3% | +152% |
| ARC | 88.7% | +198% |

Francisco Angulo de Lafuente (Agnuxo)
- Research Focus: Holographic Computing, Quantum AI, Optical Neural Networks
- NVIDIA LlamaIndex Developer Contest 2024 Winner
- 27+ repositories in advanced AI architectures

NEBULA-X represents a paradigm shift in AI architecture, combining the power of light, quantum mechanics, and evolutionary algorithms to create truly intelligent systems.

license:apache-2.0 · 21 downloads · 0 likes

| Model | Tag | Downloads | Likes |
|---|---|---|---|
| Qwen2-7B-v2-Instruct_CODE_Python-Spanish_English_GGUF_32bit | license:apache-2.0 | 20 | 0 |
| Tinytron-ORCA-7B-TinyLlama-Instruct_CODE_Python-extra_small_quantization_GGUF_3bit | llama | 20 | 0 |
| Agente-GPT-Qwen-2.5-7B-Spanish_8bit | license:apache-2.0 | 20 | 0 |
| Qwen2_0.5B-Spanish_English_raspberry_pi_GGUF_4bit | license:apache-2.0 | 19 | 0 |
| Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit | license:apache-2.0 | 18 | 0 |
| Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_16bit | license:apache-2.0 | 18 | 0 |
| Agente-Director-Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_4bit | | 18 | 0 |
| Agente-GPT-Qwen-2.5-7B-Spanish_English_GGUF_32bit | license:apache-2.0 | 18 | 0 |
| Qwen2-1.5B-Instruct_MOE_Director-GGUF_4bit | license:apache-2.0 | 17 | 0 |
| Qwen2_0.5B-Spanish_English_raspberry_pi_GGUF_16bit | license:apache-2.0 | 17 | 0 |
| Qwen2-7B-v2-Instruct_CODE_Python-Spanish_English_GGUF_4bit | license:apache-2.0 | 17 | 0 |
| Tinytron-1B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_4bit | llama | 17 | 0 |
| Agente-Llama-3.1-Spanish_8bit | llama | 17 | 0 |
| Agente-GPT-Qwen-2.5-7B-Spanish_English_GGUF_q5_k | license:apache-2.0 | 17 | 0 |
| Phi-3.5-mini-instruct-python_coding_assistant-GGUF_4bit | llama | 16 | 0 |
| Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-GGUF_16bit | license:apache-2.0 | 16 | 0 |
| tiny-llama-Spanish_English_raspberry_pi_GGUF_4bit | llama | 16 | 0 |
| Agente-GPT-Qwen-2.5-7B-Spanish_English_GGUF_q6_k | license:apache-2.0 | 16 | 0 |
| Agente-GPT-Qwen-2.5-7B_GGUF_16bit | license:apache-2.0 | 16 | 0 |
| Qwen2-1.5B-Instruct_MOE_CODE_assistant-GGUF_8bit | license:apache-2.0 | 15 | 0 |
| Phi-3.5-mini-instruct-python_coding_assistant-GGUF_8bit | llama | 14 | 0 |
| Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_4bit | license:apache-2.0 | 14 | 0 |
| Agente-Director-Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_q5_k | | 14 | 0 |
| Mistral-NeMo-Minitron-8B-Alpaca-CODE-Python-GGUF-8bit | license:apache-2.0 | 13 | 0 |
| Qwen2-1.5B-Instruct_MOE_Director-GGUF_8bit | license:apache-2.0 | 13 | 0 |
| Tinytron-ORCA-7B-Instruct_CODE_Python-Spanish_English_GGUF_4bit | llama | 13 | 0 |
| Agente-Director-Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit | license:apache-2.0 | 13 | 0 |
| gemma-2-2b-instruct-python_CODE_assistant-GGUF_16bit | license:apache-2.0 | 12 | 0 |
| Mistral-Nemo-CODE-Python_assistant-GGUF_8bit | license:apache-2.0 | 12 | 0 |
| Agente-Llama-3.1_GGUF_16bit | llama | 12 | 0 |
| Agente-GPT-Qwen2.5-3B-Instruct-Spanish_8bit | license:apache-2.0 | 12 | 0 |
| Meta-Llama-3.1-8B-CODE-Alpaca-Python-8bit-GGUF | llama | 11 | 0 |
| Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-GGUF_8bit | license:apache-2.0 | 11 | 0 |
| Tinytron-ORCA-3B-TinyLlama-Instruct_CODE_Python-extra_small_quantization_GGUF_3bit | llama | 11 | 0 |
| Qwen2-1.5B-Instruct_MOE_CODE_assistant-GGUF_16bit | license:apache-2.0 | 10 | 0 |
| Qwen2_0.5B-GGUF_Spanish_English_raspberry_pi_8bit | license:apache-2.0 | 10 | 0 |
| gemma-2-2b-Python_CODE_assistant-GGUF_8bit | license:apache-2.0 | 9 | 0 |
| Qwen2-1.5B-Instruct_MOE_assistant-GGUF_16bit | license:apache-2.0 | 9 | 0 |
| Tinytron-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q6_k | llama | 9 | 0 |
| Qwen2-1.5B-Instruct_MOE_assistant-GGUF_8bit | license:apache-2.0 | 8 | 0 |
| Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-CODE-Python_16bit | license:apache-2.0 | 8 | 0 |
| Qwen2-7B-v2-Instruct_CODE_Python-Spanish_English_GGUF_16bit | license:apache-2.0 | 8 | 0 |
| Tinytron-1B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_16bit | llama | 8 | 0 |
| Qwen2-1.5B-Instruct_MOE_Director-GGUF_16bit | license:apache-2.0 | 7 | 0 |
| tiny-llama-Spanish_English_raspberry_pi_GGUF_16bit | llama | 7 | 0 |
| Qwen2-7B-v2-Instruct_CODE_Python-GGUF_Spanish_English_8bit | license:apache-2.0 | 7 | 0 |
| Tinytron-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q5_k | llama | 7 | 0 |
| Tinytron-1B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q5_k | llama | 7 | 0 |
| Tinytron-1B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q6_k | llama | 7 | 0 |
| Tinytron-ORCA-3B-Instruct_CODE_Python_English_Asistant-16bit-v2 | llama | 7 | 0 |
| Tinytron-1B-TinyLlama-Instruct_CODE_Python-GGUF_Spanish_English_8bit | llama | 6 | 0 |
| Tinytron-ORCA-3B-Instruct_CODE_Python-Spanish_English_GGUF_4bit | llama | 6 | 0 |
| Agente-Director-Qwen2-7Bron-Instruct_CODE_Python_English_GGUF_16bit | license:apache-2.0 | 6 | 0 |
| tiny-llama-GGUF_Spanish_English_raspberry_pi_8bit | llama | 5 | 0 |
| Tinytron-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_16bit | llama | 5 | 0 |
| Tinytron-ORCA-3B-Instruct_CODE_Python-GGUF_Spanish_English_8bit | llama | 5 | 0 |
| Agente-Llama-3.1-Spanish_English_GGUF_4bit | llama | 5 | 0 |
| Tinytron-TinyLlama-Instruct_CODE_Python_Spanish_English_Asistant-16bit-v2 | llama | 4 | 0 |
| Tinytron-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_4bit | llama | 4 | 0 |
| HAL_9000-Qwen2-0.5B-Instruct_Spanish_English_lora_model | license:apache-2.0 | 4 | 0 |
| Agente-GPT-Qwen2.5-3B-Instruct-Spanish_16bit | license:apache-2.0 | 4 | 0 |
| Mistral-NeMo-Minitron-8B-Base-Nebulal | | 3 | 0 |
| Mamba-Codestral-7B-v0.1-python_coding_assistant_16bit | | 3 | 0 |
| Meta-Llama-3.1-8B-Instruct-Depth-Base-Instruct_CODE_Python_Spanish_English_lora_model | base_model:meta-llama/Llama-3.1-8B-Instruct | 3 | 0 |
| Agente-Director-Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_q6_k | | 3 | 0 |

NEBULA-HRM-DEMO

NEBULA-HRM-DEMO: Hybrid Photonic + Hierarchical Reasoning Model

NEBULA-HRM is a compact research model (~30.13M parameters) that explores a hybrid architecture combining a hierarchical reasoning module (HRM) with additional structured processing blocks. This repository contains training scripts, checkpoints, and an example inference pipeline.

Important: internal benchmarks (ARC-AGI/Sudoku/Mazes) are small synthetic subsets intended for quick smoke tests, not official leaderboards. GLUE SST-2 validation accuracy is reported from a short training run in a controlled environment.

- Parameters: 30.13M
- Inference speed (local): ~163 samples/sec (batch-dependent)
- Memory footprint (peak, local): ~0.32 GB
- Framework: PyTorch
- Checkpoints: `nebulahrmfinal.pt`, `nebulahrmcomplete.pth`, `pytorch_model.bin`

Option A: download the files with `huggingface_hub` and load the PyTorch checkpoint.

- `NEBULAHRMCompleteFixed.py`: full training/inference implementation in PyTorch
- `nebulahrmcomplete.pth`: PyTorch checkpoint (state_dict)
- `nebulahrmfinal.pt`: additional serialized artifact
- `pytorch_model.bin`: standard binary for Hub compatibility
- `config.json`, `tokenizer_config.json`, `special_tokens_map.json`: auxiliary config files
- `inference.py`: ready-to-run script that pulls artifacts from the Hub
- `modelcard.md`: extended model card

Environment
- Environment: Windows 11, Python 3.10, CUDA 11.8, RTX 3090
- Key env vars: `TOKENIZERS_PARALLELISM=false`, `WANDB_DISABLED=true`, `OMP_NUM_THREADS=1`, `PYTHONUTF8=1`
- Debugging aids: `CUDA_LAUNCH_BLOCKING=1`, `TORCH_SHOW_CPP_STACKTRACES=1`, `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256`
- Data: GLUE SST-2 (validation accuracy ~0.51 from a brief run). Internal synthetic subsets used for quick sanity checks.

Notes
- This is a research prototype. Internal benchmarks are not substitutes for official leaderboards.
- Not optimized for production latency; use as a reference for architecture and training-loop design.

license:apache-2.0 · 3 downloads · 0 likes
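The environment variables listed for NEBULA-HRM can be set programmatically before importing the training stack. A minimal sketch, assuming the values from the card above (the `apply_env` helper name is my own):

```python
import os

# Environment documented in the NEBULA-HRM card above.
ENV = {
    "TOKENIZERS_PARALLELISM": "false",  # silence HF tokenizers fork warnings
    "WANDB_DISABLED": "true",           # no experiment tracking
    "OMP_NUM_THREADS": "1",
    "PYTHONUTF8": "1",
    # Debugging aids (optional; CUDA_LAUNCH_BLOCKING serializes kernel launches):
    "CUDA_LAUNCH_BLOCKING": "1",
    "TORCH_SHOW_CPP_STACKTRACES": "1",
    "PYTORCH_CUDA_ALLOC_CONF": "max_split_size_mb:256",
}

def apply_env(env: dict) -> None:
    """Set each variable, keeping any value already present in the shell."""
    for key, value in env.items():
        os.environ.setdefault(key, value)

apply_env(ENV)
```

Setting these before `import torch`/`import transformers` matters: several of them (notably `PYTORCH_CUDA_ALLOC_CONF` and `TOKENIZERS_PARALLELISM`) are only read at library initialization.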

| Model | Tag | Downloads | Likes |
|---|---|---|---|
| Agente-GPT-Qwen2.5-3B-Instruct-Asistant-16bit-v2 | license:apache-2.0 | 2 | 1 |
| Qwen2-1.5B-Instruct_MOE_Director_16bit | license:apache-2.0 | 2 | 0 |
| Qwen2-1.5B-Instruct_MOE_Director-Conductor_16bit | license:apache-2.0 | 2 | 0 |
| tiny-llama_Spanish_English_ESP32_16bit | llama | 2 | 0 |
| Tinytron-ORCA-7B-Instruct_CODE_Python_English_GGUF_16bit | llama | 2 | 0 |
| Tinytron-ORCA-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit | llama | 2 | 0 |
| Mistral-NeMo-Minitron-8B-Base-CODE-Python | license:apache-2.0 | 1 | 0 |
| Qwen2-1.5B-Instruct_MOE_assistant_16bit | license:apache-2.0 | 1 | 0 |
| Llama-3.1-Minitron-4B-Instruct_CODE_Python_Spanish_English_16bit | llama | 1 | 0 |
| Tinytron-TinyLlama-Instruct_CODE_Python_Spanish_English_16bit | llama | 1 | 0 |
| Tinytron-TinyLlama-Instruct_CODE_Python-GGUF_Spanish_English_8bit | llama | 1 | 0 |
| Tinytron-1B-TinyLlama-Instruct_CODE_Python_Spanish_English_Asistant-16bit-v2 | llama | 1 | 0 |
| HAL_9000-QWEN2-0.5_Spanish_English_lora_model | license:apache-2.0 | 1 | 0 |
| HAL_9000-Qwen2-0.5B-Instruct_Spanish_English_16bit | | 1 | 0 |
| HAL_9000-Qwen2-0.5B-Instruct_Asistant-16bit-v2 | | 1 | 0 |
| Agente-GPT-Qwen-2.5-7B-Asistant-16bit-v2 | license:apache-2.0 | 1 | 0 |

nebula-photonic-v1-DEMO

Model Description: NEBULA-Photonic-v1.0 is an authentic photonic neural network for spatial reasoning tasks.

Team: Francisco Angulo de Lafuente - Project NEBULA Team

Performance
- Test Accuracy: 50.0%
- Random Baseline: 36.0%
- Improvement: +14.0 percentage points

Architecture
- Model Type: PhotonicMazeSolver
- Parameters: 14,430
- Photonic Neurons: 16
- Quantum Memory: 64 neurons (4-qubit each)
- Hidden Dimensions: 160

Applications
- Robotics navigation
- Game AI spatial reasoning
- Route optimization
- Research in photonic computing

Project Philosophy: "Soluciones sencillas para problemas complejos, sin placeholders y con la verdad por delante" ("Simple solutions for complex problems, without placeholders and with the truth up front")

1 download · 0 likes

| Model | Tag | Downloads | Likes |
|---|---|---|---|
| Tinytron-Qwen-0.5B-Instruct_CODE_Python_English_GGUF_16bit | llama | 0 | 1 |
| Agente-Llama-3.1-Asistant-16bit-v2 | llama | 0 | 1 |

Tiny Llama Spanish English Raspberry Pi5 16bit

- Developed by: [Agnuxo](https://github.com/Agnuxo1)
- License: apache-2.0
- Finetuned from model: Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library. It has been fine-tuned for various tasks and evaluated on the following benchmarks:

- Model Size: 4,124,864 parameters
- Required Memory: 0.02 GB

llama · 0 downloads · 1 like
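The "Required Memory: 0.02 GB" figure above is consistent with storing every parameter as a 32-bit float. A quick check (the `required_gb` helper is my own, purely for the arithmetic):

```python
# Memory estimate: parameter count × bytes per parameter (fp32 = 4 bytes).
PARAMS = 4_124_864  # model size stated in the card above

def required_gb(params: int, bytes_per_param: int = 4) -> float:
    """Weights-only memory in GiB; activations and KV cache are extra."""
    return params * bytes_per_param / 1024**3

print(round(required_gb(PARAMS), 2))  # → 0.02
```

At ~4M parameters this is three orders of magnitude smaller than the 7B models above, which is what makes the Raspberry Pi and ESP32 targets in this collection plausible.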