Agnuxo
Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit
Mamba-Codestral-7B-v0.1-python_coding_assistant-GGUF_8bit
Llama-3.1-Minitron-4B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Mamba-Codestral-7B-v0.1-instruct-python_coding_assistant-GGUF_4bit
Agente-Llama-3.1-Spanish_English_GGUF_32bit
Mamba-Codestral-7B-v0.1-instruct-python_coding_assistant-GGUF_16bit
Llama-3.1-Minitron-4B-Instruct_CODE_Python-Spanish_English_GGUF_q5_k
Agente-GPT-Qwen-2.5-7B-Spanish_English_GGUF_4bit
Agente-GPT-Qwen2.5-3B-Instruct-Spanish_English_GGUF_q5_k
Mistral-NeMo-Minitron-8B-Alpaca-CODE-Python-GGUF-16bit
Mistral-Nemo-CODE-Python_assistant-GGUF_16bit
Qwen2-1.5B-Instruct_MOE_assistant-GGUF_4bit
Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-GGUF_4bit
Llama-3.1-Minitron-4B-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Agente-Llama-3.1-Spanish_English_GGUF_q5_k
Agente-GPT-Qwen-2.5-3B-Spanish_English_GGUF_32bit
Qwen2-1.5B-Instruct_MOE_CODE_assistant-GGUF_4bit
Mamba-Codestral-7B-Instruct_CODE_Python-Spanish_English_GGUF_16bit
Agente-Llama-3.1-Spanish_English_GGUF_q6_k
Llama-3.1-Minitron-4B-Instruct_CODE_Python-Spanish_English_GGUF_16bit
Agente-GPT-Qwen2.5-3B-Instruct-Spanish_English_GGUF_q6_k
gemma-2-2b-instruct-python_CODE_assistant-GGUF_4bit
Mamba-Codestral-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Mamba-Codestral-7B-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Agente-GPT-Qwen-2.5-3B_GGUF_16bit
Agente-GPT-Qwen2.5-3B-Instruct-Spanish_English_GGUF_4bit
NEBULA-X-DEMO
🌌 NEBULA-X: Enhanced Unified Holographic Neural Network

NEBULA-X is a revolutionary AI architecture that combines holographic memory, quantum computing, and optical neural networks to create the world's first production-ready photonic neural network system.

Holographic Neural Networks
- Holographic Memory: Information stored as interference patterns in 3D space
- Light-based Processing: Neurons represented as points of light with optical properties
- Interferometric Computing: Calculations performed through wave interference

Quantum-Enhanced Processing
- 4 Qubits per Neuron: Distributed quantum memory for enhanced processing
- Quantum Entanglement: Non-local correlations between neural components
- Superposition States: Parallel processing of multiple possibilities

Optical Raytracing
- GPU-Accelerated: CUDA kernels for Monte Carlo raytracing
- Real-time Physics: Accurate simulation of light propagation
- Material Properties: Reflectivity, transmittance, and phase shifts

| Benchmark | Score | Improvement vs Baseline |
|-----------|-------|-------------------------|
| MMLU      | 85.0% | +240%                   |
| GSM8K     | 78.0% | +∞% (baseline: 0%)      |
| HellaSwag | 92.3% | +152%                   |
| ARC       | 88.7% | +198%                   |

Francisco Angulo de Lafuente (Agnuxo)
- Research Focus: Holographic Computing, Quantum AI, Optical Neural Networks
- NVIDIA LlamaIndex Developer Contest 2024 Winner
- 27+ Repositories in Advanced AI Architectures

NEBULA-X represents a paradigm shift in AI architecture, combining the power of light, quantum mechanics, and evolutionary algorithms to create truly intelligent systems.
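The holographic-memory idea above (associations stored as superimposed, interference-like patterns and recovered by correlation) has a simple classical analogue in holographic reduced representations. The NumPy sketch below illustrates only that generic analogue; it is not the NEBULA-X implementation, and the dimensionality and pattern names are illustrative assumptions.

```python
# Toy holographic associative memory via circular convolution (HRR).
# NOT NEBULA-X code: a generic illustration of storing several key->value
# associations in one superimposed trace and reading them back by correlation.
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # pattern dimensionality (assumed)

def bind(a, b):
    """Circular convolution: fold pattern b into pattern a."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, a):
    """Circular correlation: approximately recover what was bound to a."""
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(a))))

keys = [rng.standard_normal(D) / np.sqrt(D) for _ in range(2)]
vals = [rng.standard_normal(D) / np.sqrt(D) for _ in range(2)]

# Both associations live in the same distributed, interference-like trace.
trace = bind(keys[0], vals[0]) + bind(keys[1], vals[1])

recovered = unbind(trace, keys[0])
print([float(recovered @ v) for v in vals])  # clearly larger for vals[0]
```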
Qwen2-7B-v2-Instruct_CODE_Python-Spanish_English_GGUF_32bit
Tinytron-ORCA-7B-TinyLlama-Instruct_CODE_Python-extra_small_quantization_GGUF_3bit
Agente-GPT-Qwen-2.5-7B-Spanish_8bit
Qwen2_0.5B-Spanish_English_raspberry_pi_GGUF_4bit
Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_16bit
Agente-Director-Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Agente-GPT-Qwen-2.5-7B-Spanish_English_GGUF_32bit
Qwen2-1.5B-Instruct_MOE_Director-GGUF_4bit
Qwen2_0.5B-Spanish_English_raspberry_pi_GGUF_16bit
Qwen2-7B-v2-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Tinytron-1B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Agente-Llama-3.1-Spanish_8bit
Agente-GPT-Qwen-2.5-7B-Spanish_English_GGUF_q5_k
Phi-3.5-mini-instruct-python_coding_assistant-GGUF_4bit
Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-GGUF_16bit
tiny-llama-Spanish_English_raspberry_pi_GGUF_4bit
Agente-GPT-Qwen-2.5-7B-Spanish_English_GGUF_q6_k
Agente-GPT-Qwen-2.5-7B_GGUF_16bit
Qwen2-1.5B-Instruct_MOE_CODE_assistant-GGUF_8bit
Phi-3.5-mini-instruct-python_coding_assistant-GGUF_8bit
Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Agente-Director-Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_q5_k
Mistral-NeMo-Minitron-8B-Alpaca-CODE-Python-GGUF-8bit
Qwen2-1.5B-Instruct_MOE_Director-GGUF_8bit
Tinytron-ORCA-7B-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Agente-Director-Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
gemma-2-2b-instruct-python_CODE_assistant-GGUF_16bit
Mistral-Nemo-CODE-Python_assistant-GGUF_8bit
Agente-Llama-3.1_GGUF_16bit
Agente-GPT-Qwen2.5-3B-Instruct-Spanish_8bit
Meta-Llama-3.1-8B-CODE-Alpaca-Python-8bit-GGUF
Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-GGUF_8bit
Tinytron-ORCA-3B-TinyLlama-Instruct_CODE_Python-extra_small_quantization_GGUF_3bit
Qwen2-1.5B-Instruct_MOE_CODE_assistant-GGUF_16bit
Qwen2_0.5B-GGUF_Spanish_English_raspberry_pi_8bit
gemma-2-2b-Python_CODE_assistant-GGUF_8bit
Qwen2-1.5B-Instruct_MOE_assistant-GGUF_16bit
Tinytron-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q6_k
Qwen2-1.5B-Instruct_MOE_assistant-GGUF_8bit
Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant-CODE-Python_16bit
Qwen2-7B-v2-Instruct_CODE_Python-Spanish_English_GGUF_16bit
Tinytron-1B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_16bit
Qwen2-1.5B-Instruct_MOE_Director-GGUF_16bit
tiny-llama-Spanish_English_raspberry_pi_GGUF_16bit
Qwen2-7B-v2-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Tinytron-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q5_k
Tinytron-1B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q5_k
Tinytron-1B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q6_k
Tinytron-ORCA-3B-Instruct_CODE_Python_English_Asistant-16bit-v2
Tinytron-1B-TinyLlama-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Tinytron-ORCA-3B-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Agente-Director-Qwen2-7Bron-Instruct_CODE_Python_English_GGUF_16bit
tiny-llama-GGUF_Spanish_English_raspberry_pi_8bit
Tinytron-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_16bit
Tinytron-ORCA-3B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Agente-Llama-3.1-Spanish_English_GGUF_4bit
Tinytron-TinyLlama-Instruct_CODE_Python_Spanish_English_Asistant-16bit-v2
Tinytron-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_4bit
HAL_9000-Qwen2-0.5B-Instruct_Spanish_English_lora_model
Agente-GPT-Qwen2.5-3B-Instruct-Spanish_16bit
Mistral-NeMo-Minitron-8B-Base-Nebulal
Mamba-Codestral-7B-v0.1-python_coding_assistant_16bit
Meta-Llama-3.1-8B-Instruct-Depth-Base-Instruct_CODE_Python_Spanish_English_lora_model
Agente-Director-Qwen2-7B-Instruct_CODE_Python-Spanish_English_GGUF_q6_k
NEBULA-HRM-DEMO
NEBULA-HRM-DEMO: Hybrid Photonic + Hierarchical Reasoning Model

NEBULA-HRM is a compact research model (~30.13M params) that explores a hybrid architecture combining a hierarchical reasoning module (HRM) with additional structured processing blocks. This repository contains training scripts, checkpoints, and an example inference pipeline.

Important: internal benchmarks (ARC-AGI/Sudoku/Mazes) are small synthetic subsets intended for quick smoke tests, not official leaderboards. GLUE SST-2 validation accuracy is reported from a short training run in a controlled environment.

- Parameters: 30.13M
- Inference speed (local): ~163 samples/sec (batch-dependent)
- Memory footprint (peak, local): ~0.32 GB
- Framework: PyTorch
- Checkpoints: `nebulahrmfinal.pt`, `nebulahrmcomplete.pth`, `pytorch_model.bin`

Option A: download the files with `huggingface_hub` and load the PyTorch checkpoint (see the sketch after this card).

Repository contents:
- `NEBULAHRMCompleteFixed.py`: full training/inference implementation in PyTorch
- `nebulahrmcomplete.pth`: PyTorch checkpoint (state_dict)
- `nebulahrmfinal.pt`: additional serialized artifact
- `pytorch_model.bin`: standard binary for Hub compatibility
- `config.json`, `tokenizer_config.json`, `special_tokens_map.json`: auxiliary config files
- `inference.py`: ready-to-run script that pulls artifacts from the Hub
- `modelcard.md`: extended model card

Training environment:
- Environment: Windows 11, Python 3.10, CUDA 11.8, RTX 3090
- Key env vars: `TOKENIZERS_PARALLELISM=false`, `WANDB_DISABLED=true`, `OMP_NUM_THREADS=1`, `PYTHONUTF8=1`
- Debugging aids: `CUDA_LAUNCH_BLOCKING=1`, `TORCH_SHOW_CPP_STACKTRACES=1`, `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256`
- Data: GLUE SST-2 (validation accuracy ~0.51 from a brief run); internal synthetic subsets used for quick sanity checks.

Caveats:
- This is a research prototype. Internal benchmarks are not substitutes for official leaderboards.
- Not optimized for production latency; use as a reference for architecture and training-loop design.
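Picking up "Option A" above, here is a minimal sketch of pulling the checkpoint with `huggingface_hub` and loading it in PyTorch. The repo id is assumed from this card's name, and `pytorch_model.bin` is assumed to hold a state_dict; the actual model class lives in `NEBULAHRMCompleteFixed.py`, so the loading lines at the end use hypothetical names.

```python
# Minimal sketch of Option A (assumptions: repo id "Agnuxo/NEBULA-HRM-DEMO",
# pytorch_model.bin contains a state_dict). Not the repo's own inference.py.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(
    repo_id="Agnuxo/NEBULA-HRM-DEMO",  # assumed Hub repo id
    filename="pytorch_model.bin",
)

state_dict = torch.load(ckpt_path, map_location="cpu")
print(f"loaded {len(state_dict)} tensors")

# The architecture is defined in NEBULAHRMCompleteFixed.py in this repo:
# model = NebulaHRM(config)          # hypothetical class/config names
# model.load_state_dict(state_dict)
# model.eval()
```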
Agente-GPT-Qwen2.5-3B-Instruct-Asistant-16bit-v2
Qwen2-1.5B-Instruct_MOE_Director_16bit
Qwen2-1.5B-Instruct_MOE_Director-Conductor_16bit
tiny-llama_Spanish_English_ESP32_16bit
Tinytron-ORCA-7B-Instruct_CODE_Python_English_GGUF_16bit
Tinytron-ORCA-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Mistral-NeMo-Minitron-8B-Base-CODE-Python
Qwen2-1.5B-Instruct_MOE_assistant_16bit
Llama-3.1-Minitron-4B-Instruct_CODE_Python_Spanish_English_16bit
Tinytron-TinyLlama-Instruct_CODE_Python_Spanish_English_16bit
Tinytron-TinyLlama-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Tinytron-1B-TinyLlama-Instruct_CODE_Python_Spanish_English_Asistant-16bit-v2
HAL_9000-QWEN2-0.5_Spanish_English_lora_model
HAL_9000-Qwen2-0.5B-Instruct_Spanish_English_16bit
HAL_9000-Qwen2-0.5B-Instruct_Asistant-16bit-v2
Agente-GPT-Qwen-2.5-7B-Asistant-16bit-v2
nebula-photonic-v1-DEMO
Model Description

NEBULA-Photonic-v1.0 is an authentic photonic neural network for spatial reasoning tasks.
Team: Francisco Angulo de Lafuente - Project NEBULA Team

Performance (a minimal evaluation sketch follows this card)
- Test Accuracy: 50.0%
- Random Baseline: 36.0%
- Improvement: +14.0 percentage points

Architecture
- Model Type: PhotonicMazeSolver
- Parameters: 14,430
- Photonic Neurons: 16
- Quantum Memory: 64 neurons (4-qubit each)
- Hidden Dimensions: 160

Applications
- Robotics navigation
- Game AI spatial reasoning
- Route optimization
- Research in photonic computing

Project Philosophy
"Soluciones sencillas para problemas complejos, sin placeholders y con la verdad por delante" ("Simple solutions for complex problems, without placeholders and with the truth up front.")
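As referenced in the Performance list above, here is a minimal sketch of how a maze-move classifier's test accuracy can be compared against a random baseline. `PhotonicMazeSolver` itself is not reproduced; the model, data loader, and action distribution are stand-in assumptions, not the project's actual evaluation code.

```python
# Generic accuracy-vs-random-baseline evaluation (assumptions: `model` maps maze
# states to move logits, `loader` yields (states, moves) batches of tensors).
import torch
from collections import Counter

@torch.no_grad()
def test_accuracy(model, loader):
    correct = total = 0
    for states, moves in loader:
        preds = model(states).argmax(dim=-1)
        correct += (preds == moves).sum().item()
        total += moves.numel()
    return correct / total

def random_baseline(loader):
    # Expected accuracy of guessing moves according to their empirical frequency;
    # with a skewed move distribution this can exceed 1/num_actions (e.g. ~0.36).
    counts = Counter()
    for _, moves in loader:
        counts.update(moves.view(-1).tolist())
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

# improvement = test_accuracy(model, loader) - random_baseline(loader)
```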
Tinytron-Qwen-0.5B-Instruct_CODE_Python_English_GGUF_16bit
Agente-Llama-3.1-Asistant-16bit-v2
Tiny Llama Spanish English Raspberry Pi5 16bit
- Developed by: Agnuxo (https://github.com/Agnuxo1)
- License: apache-2.0
- Finetuned from model: Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library (a hedged sketch of that setup follows below). This model has been fine-tuned for various tasks and evaluated on the following benchmarks:

Model Size: 4,124,864 parameters
Required Memory: 0.02 GB
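The "trained 2x faster with Unsloth and TRL" note above refers to the standard Unsloth + TRL supervised fine-tuning pattern. The sketch below follows the classic Unsloth/TRL example API; the dataset, LoRA hyperparameters, and training arguments are placeholders (assumptions), not this model's actual recipe.

```python
# Hedged sketch of an Unsloth + TRL SFT run (classic API; newer TRL versions move
# some arguments into SFTConfig). Dataset and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal",  # base model named above
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (Unsloth's patched PEFT path).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)

# Placeholder dataset, assumed to expose a "text" column with formatted prompts.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```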