Keak CRO Llama 3.1 8B Instruct
A LoRA-fine-tuned variant of Meta’s Llama 3.1 8B Instruct, optimized for Conversion Rate Optimization (CRO) and A/B testing automation. Developed by Keak AI, this model generates high-converting website copy, structured business insights, and persuasive content that aligns with CRO best practices.
- Base Model: `meta-llama/Meta-Llama-3.1-8B-Instruct`
- Adapter Type: LoRA (Low-Rank Adaptation) via PEFT
- Trained By: Keak AI
- Specialization: Conversion rate optimization, persuasive copywriting, and structured web analysis
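As a minimal sketch of how the adapter can be loaded on top of the base model with PEFT (the adapter repo id below is a placeholder; substitute the actual Hugging Face repo path for this model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "Keak-AI/keak-cro-llama-3.1-8b-instruct"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the Keak CRO LoRA adapter to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```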
This model enhances Llama 3.1’s reasoning and language generation with CRO-specific knowledge, enabling it to:
- Extract business and visual identity context from webpages
- Generate optimized A/B testing variants of copy and CTAs
- Apply CRO principles such as clarity, curiosity, urgency, and benefit framing
The model is designed for:
1. Webpage Analysis & Context Extraction – identify the core offering, audience, pain points, and design tone
2. Variant Generation – produce high-performing alternatives for headlines, CTAs, and product descriptions
It performs best when used in two steps: (1) Analyze context → (2) Generate optimized variant.
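The sketch below illustrates this two-step workflow, assuming `model` and `tokenizer` are loaded as shown above. The system messages, `page_text`, and `original_headline` are illustrative placeholders, not the exact prompts used in training.

```python
def chat(model, tokenizer, system, user, max_new_tokens=512):
    """Run one chat turn against the Llama 3.1 chat template and return the reply."""
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Step 1: analyze context from the raw page copy (page_text is a placeholder).
context = chat(
    model, tokenizer,
    system="You are a CRO analyst. Extract the core offering, audience, pain points, and design tone.",
    user=page_text,
)

# Step 2: generate an optimized variant using the extracted context.
variant = chat(
    model, tokenizer,
    system="You are a CRO copywriter. Rewrite the given element applying clarity, urgency, and benefit framing.",
    user=f"Context:\n{context}\n\nElement to optimize:\n{original_headline}",
)
```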
Fine-tuned on Keak AI’s proprietary A/B testing dataset, containing real conversion experiments and human-evaluated high-performing variants. This dataset reflects diverse industries (e-commerce, SaaS, marketing) and is continuously updated for improvement.
- Model: `meta-llama/Meta-Llama-3.1-8B-Instruct`
- Quantization: 4-bit (NF4) with bitsandbytes
- Compute dtype: bfloat16
- Double quantization: Enabled
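Expressed as a bitsandbytes quantization config, these settings correspond to roughly the following (a sketch of the setup described above, not the original training script):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute and double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```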
| Setting                | Value                                                |
| ---------------------- | ---------------------------------------------------- |
| Epochs                 | 3                                                    |
| Learning Rate          | 2e-5                                                 |
| Batch Size             | 1 (per GPU, effective 8 with gradient accumulation)  |
| Optimizer              | paged_adamw_8bit                                     |
| Scheduler              | cosine, 10% warmup                                   |
| Weight Decay           | 0.01                                                 |
| Max Grad Norm          | 0.3                                                  |
| Sequence Length        | 2048                                                 |
| Gradient Checkpointing | ✅ Enabled                                            |
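The same hyperparameters can be approximated with Hugging Face `TrainingArguments` as follows (an illustration of the table above, not the original training code; the 2048-token sequence length is applied at the tokenizer/trainer level rather than here):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="keak-cro-lora",          # placeholder output path
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,       # effective batch size of 8
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,                    # 10% warmup
    weight_decay=0.01,
    max_grad_norm=0.3,
    gradient_checkpointing=True,
    bf16=True,
)
```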
- Always include the recommended system message
- Use the two-step workflow (analyze → generate)
- Provide clear optimization constraints (tone, length, principles)
- Specify selectors when optimizing page elements
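A hypothetical request following these recommendations might look like the following; the selector, text, and constraints are illustrative only:

```python
user_prompt = """Optimize the element below.
Selector: #hero-cta
Current text: "Sign up"
Constraints: confident tone, maximum 4 words, apply urgency and benefit framing."""
```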
- Proprietary dataset: limited open benchmarking
- Optimal for English; multilingual support is still experimental
- May underperform on niches unseen in the training data
- Follows the Llama 3.1 Community License restrictions
Released under the Llama 3.1 Community License Agreement. Use is permitted for commercial and research applications within the license terms. See Meta Llama 3.1 License for details.
For questions or collaboration inquiries, contact Keak AI via https://huggingface.co/Keak-AI or open an issue in the model repository.