LLMYourWay

oobabooga

2 models

CodeBooga-34B-v0.1

CodeBooga-34B-v0.1 is a merge of the following two models:

1) Phind-CodeLlama-34B-v2
2) WizardCoder-Python-34B-V1.0

It was created with the BlockMerge Gradient script, the same one that was used to create MythoMax-L2-13b, and with the same settings. The following YAML was used:

Both base models use the Alpaca format, so it should be used for this one as well.

I made a quick experiment where I asked a set of 3 Python and 3 JavaScript questions (real-world, difficult questions with nuance) to the following models:

1) This one
2) A second variant generated with `modelpath1` and `modelpath2` swapped in the YAML above, which I called CodeBooga-Reversed-34B-v0.1
3) WizardCoder-Python-34B-V1.0
4) Phind-CodeLlama-34B-v2

Specifically, I used 4.250b EXL2 quantizations of each. I then sorted the responses for each question by quality and attributed the following scores:

4th place: 0
3rd place: 1
2nd place: 2
1st place: 4

The resulting totals were:

CodeBooga-34B-v0.1: 22
WizardCoder-Python-34B-V1.0: 12
Phind-CodeLlama-34B-v2: 7
CodeBooga-Reversed-34B-v0.1: 1

CodeBooga-34B-v0.1 performed very well, while its reversed variant performed poorly, so I uploaded the former but not the latter. TheBloke has kindly provided GGUF quantizations for llama.cpp.
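The gradient-merge idea is to blend the two parent models' weights layer by layer, with the blend ratio varying across the depth of the network. A minimal sketch of that technique follows; this is not the actual BlockMerge Gradient script, and the tensor names and linear ratio schedule are illustrative assumptions (the real script reads its ratios from the YAML config):

```python
import numpy as np

def gradient_merge(weights_a, weights_b, start=0.75, end=0.25):
    """Blend two models' per-layer weight tensors.

    The blend ratio for model A moves linearly from `start` at the
    first layer to `end` at the last layer (an assumed schedule,
    for illustration only).
    """
    n = len(weights_a)
    merged = {}
    for i, name in enumerate(weights_a):
        t = i / max(n - 1, 1)              # position across layers, 0.0 .. 1.0
        ratio = start + (end - start) * t  # weight given to model A
        merged[name] = ratio * weights_a[name] + (1 - ratio) * weights_b[name]
    return merged

# Toy example with two "layers" (hypothetical tensors, not real model weights):
a = {"layer.0": np.ones(2), "layer.1": np.ones(2)}
b = {"layer.0": np.zeros(2), "layer.1": np.zeros(2)}
m = gradient_merge(a, b)  # first layer is 75% model A, last layer 25%
```

The point of varying the ratio is that early and late layers can inherit more from one parent than the other, which is also why swapping `modelpath1` and `modelpath2` (as in the reversed variant) produces a genuinely different model.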
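For reference, the Alpaca prompt format mentioned above wraps each request in a fixed instruction/response template. This sketch uses the commonly circulated wording of that template:

```python
# Standard Alpaca instruction template (no-input variant).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Write a Python function that reverses a string."
)
```

The model then generates its answer immediately after the `### Response:` header.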
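The ranking-to-points scheme above (0/1/2/4 points for 4th through 1st place per question) can be expressed directly. Only the scoring rule comes from the text; the per-question placements in the example below are hypothetical, not the author's actual data:

```python
# Points awarded by placement: 1st -> 4, 2nd -> 2, 3rd -> 1, 4th -> 0.
POINTS = {1: 4, 2: 2, 3: 1, 4: 0}

def total_scores(rankings):
    """rankings: one list per question, models ordered best-first."""
    totals = {}
    for ordering in rankings:
        for place, model in enumerate(ordering, start=1):
            totals[model] = totals.get(model, 0) + POINTS[place]
    return totals

# Hypothetical placements for two questions (illustrative only):
rankings = [
    ["CodeBooga", "WizardCoder", "Phind", "Reversed"],
    ["CodeBooga", "Phind", "WizardCoder", "Reversed"],
]
totals = total_scores(rankings)
# -> {"CodeBooga": 8, "WizardCoder": 3, "Phind": 3, "Reversed": 0}
```

Note that the 4-point first place (rather than 3) deliberately rewards winning a question more than consistently placing second.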

llama-tokenizer

© 2026 LLMYourWay. All rights reserved.