# Eunoia-4B-mini

- **Author:** shvgroups
- **License:** apache-2.0
- **Type:** Language model
- **Parameters:** 4B
- **Status:** Early-stage
## Quick Summary

A 4B-parameter language model focused on instruction following and multi-step reasoning, guided by external goal-management modules.

## Device Compatibility

| Device | Requirement |
|--------|-------------|
| Mobile | 4-6GB RAM |
| Laptop | 16GB RAM |
| Server | GPU |

**Minimum recommended:** 4GB+ RAM
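As a rough rule of thumb (an approximation, not a figure from this card), weight memory scales with parameter count times bytes per parameter, which is why quantization matters for the lower-RAM devices above:

```python
# Back-of-the-envelope weight-memory estimate for a 4B-parameter model.
# Rule of thumb only: ignores activations, KV cache, and runtime overhead.

PARAMS = 4e9  # 4B parameters

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GiB for a given precision."""
    return params * bytes_per_param / 1024**3

fp16_gb = weight_memory_gb(PARAMS, 2)    # half precision (fp16/bf16)
int4_gb = weight_memory_gb(PARAMS, 0.5)  # 4-bit quantized
print(round(fp16_gb, 1), round(int4_gb, 1))
```

At half precision the weights alone land near 7.5 GiB (laptop/server territory), while 4-bit quantization brings them under 2 GiB, consistent with the mobile row above.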
## Code Examples

### How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("shvgroups/Eunoia-4B-mini")
model = AutoModelForCausalLM.from_pretrained("shvgroups/Eunoia-4B-mini")

# Build a prompt and generate a response
prompt = "Explain photosynthesis step by step."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Training Details
### Training Data
Eunoia-4B-mini is built on top of the base model's training corpus and further refined through:
- Instruction-following supervision
- Reasoning-structured prompts
- Iterative evaluation and retry loops
- Goal-decomposition templates
No private or user data was used in training or refinement.
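The goal-decomposition templates used in refinement are not published. Purely as an illustration, a template of this kind might look like the following; the wording, function name, and structure below are assumptions, not the actual training artifacts:

```python
# Hypothetical illustration of a goal-decomposition prompt template.
# The actual templates used for Eunoia-4B-mini are not published;
# the wording and structure here are assumptions.

GOAL_DECOMPOSITION_TEMPLATE = """\
Task: {task}

Break this task into an ordered list of sub-goals.
For each sub-goal, state:
1. What must be produced.
2. How to verify it is complete.

Sub-goals:"""

def build_decomposition_prompt(task: str) -> str:
    """Fill the template with a concrete task description."""
    return GOAL_DECOMPOSITION_TEMPLATE.format(task=task)

prompt = build_decomposition_prompt("Write a summary of a research paper.")
print(prompt.splitlines()[0])
```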
### Training Procedure
#### Training Hyperparameters
- **Training regime:** Mixed-precision fine-tuning (fp16 / bf16)
- **Architecture:** Decoder-only transformer with an external reasoning controller
---
## Evaluation
### Metrics
The model is evaluated primarily on the following qualitative and behavioral metrics:
- Long-horizon coherence
- Instruction adherence over extended outputs
- Multi-step reasoning stability
- Retry and recovery behavior under failure
Formal benchmark results will be released in future updates.
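The evaluation harness is likewise unpublished. As a minimal sketch of how the retry-and-recovery metric could be measured, the loop below counts attempts until an output passes a behavioral check; `generate` and `passes_check` are stand-ins, not the model's real interfaces:

```python
# Minimal sketch of measuring retry-and-recovery behavior.
# generate() and passes_check() are stubs standing in for the model
# and for a real behavioral check; both are assumptions.

def generate(prompt: str, attempt: int) -> str:
    """Stub generator: fails the first attempt, recovers on the second."""
    return "INCOMPLETE" if attempt == 0 else "Step 1 ... Step 2 ... Done."

def passes_check(output: str) -> bool:
    """Stand-in behavioral check: output must reach a terminal state."""
    return output.endswith("Done.")

def run_with_retries(prompt: str, max_retries: int = 3):
    """Return (output, attempts_used) after retrying failed generations."""
    for attempt in range(max_retries):
        output = generate(prompt, attempt)
        if passes_check(output):
            return output, attempt + 1
    return output, max_retries

output, attempts = run_with_retries("Explain photosynthesis step by step.")
print(attempts)  # attempts used until a passing output
```

A real harness would aggregate attempt counts and recovery rates across a prompt suite rather than a single stubbed call.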
---
## Environmental Impact
- **Hardware:** NVIDIA GPUs
- **Training setup:** Research-scale fine-tuning
- **Carbon impact:** Not formally measured
---
## Technical Specifications
### Model Architecture and Objective
- **Base transformer:** Qwen3-4B-Instruct
- **External reasoning modules:**
- Goal Tree
- Execution Gate
- Goal Evaluator
- Adaptive Goal Mutation Engine
These components operate outside the core transformer and guide generation iteratively through structured goal management and adaptive control logic.
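The internals of these modules are not published. To make the control flow concrete, here is a heavily simplified sketch of how an external controller of this shape might wrap a generation step: the module names come from the card, but every class, function, and check below is an assumption (the Adaptive Goal Mutation Engine is omitted for brevity):

```python
# Illustrative sketch of an external reasoning controller wrapping a
# generate() call. Module names (Goal Tree, Execution Gate, Goal
# Evaluator) come from the model card; their internals are not
# published, so everything below is an assumption.
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    done: bool = False
    subgoals: list = field(default_factory=list)  # the "Goal Tree"

def execution_gate(goal: Goal) -> bool:
    """Only release a goal for generation once its subgoals are done."""
    return all(g.done for g in goal.subgoals)

def goal_evaluator(goal: Goal, output: str) -> bool:
    """Stand-in check that the output addresses the goal."""
    return goal.description.split()[0].lower() in output.lower()

def generate(prompt: str) -> str:
    """Stub for the underlying transformer's generation step."""
    return f"Answer covering: {prompt}"

def run_controller(root: Goal) -> list:
    """Post-order pass over the goal tree: gate, generate, evaluate."""
    outputs = []
    for sub in root.subgoals:
        outputs.extend(run_controller(sub))
    if execution_gate(root):
        out = generate(root.description)
        root.done = goal_evaluator(root, out)
        outputs.append(out)
    return outputs

root = Goal("summarize findings", subgoals=[Goal("collect evidence")])
results = run_controller(root)
print(len(results))  # one output per gated goal
```

The point of the sketch is the separation of concerns the card describes: gating and evaluation happen outside the transformer, which is only asked to generate once a goal is released.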
---
## Citation
If you use this model in academic work, please cite:
### BibTeX