# OptiMind-SFT

## Quick Summary

OptiMind-SFT is a language model fine-tuned by Microsoft (released under the MIT license) for operations research modeling: it translates natural-language optimization problems into mixed-integer linear program (MILP) formulations and corresponding gurobipy code.

## Code Examples

### Usage

```bash
pip install "sglang[all]" openai gurobipy

# Make sure you have a valid Gurobi license and Python >= 3.12
python -m sglang.launch_server \
  --model-path microsoft/OptiMind-SFT \
  --host 0.0.0.0 \
  --port 30000 \
  --tensor-parallel-size 1 \
  --trust-remote-code
```
Querying the running server with a natural-language OR problem (using the recommended default settings) returns a response that first describes the mathematical model and then includes a Python code block implementing it in gurobipy.
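
A minimal client sketch is shown below. It assumes the sglang server launched above exposes its OpenAI-compatible endpoint at `http://localhost:30000/v1`; the example problem and sampling values are illustrative placeholders rather than settings from this model card.

```python
# Query the locally launched sglang server through its OpenAI-compatible API.
# The problem text below is an illustrative placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

problem = (
    "A factory makes chairs and tables. Each chair needs 2 labor hours and each "
    "table needs 3; 40 labor hours are available. Profit is $30 per chair and "
    "$50 per table. How many of each should be made to maximize profit?"
)

response = client.chat.completions.create(
    model="microsoft/OptiMind-SFT",
    messages=[{"role": "user", "content": problem}],
    temperature=0.0,   # assumed value; adjust to the recommended defaults
    max_tokens=2048,
)

# The reply should describe the MILP (variables, constraints, objective),
# then give a gurobipy code block implementing it.
print(response.choices[0].message.content)
```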


## Primary Use Cases

- Translating natural-language Operations Research (OR) problems into mixed-integer linear programs (MILPs) and corresponding `gurobipy` code for research and prototyping (a toy example of such code is sketched after this list).
- Studying and benchmarking natural-language-to-MILP modeling pipelines on public OR datasets such as IndustryOR, Mamo-Complex, and OptMATH.
- Educational use for teaching how to derive optimization models (variables, constraints, objectives) from informal problem descriptions.
- Performing ablations and research on solver-in-the-loop prompting and multi-turn correction in domain-specific modeling tasks.
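
The sketch below is a hypothetical toy production-planning problem (not drawn from the benchmarks) illustrating the kind of gurobipy program the model is expected to emit; running it assumes a working Gurobi license.

```python
# Toy MILP of the kind OptiMind-SFT is meant to generate from a natural-language
# description: maximize profit from chairs and tables under a labor-hour budget.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("toy_production")

# Decision variables: integer, non-negative production quantities
chairs = m.addVar(vtype=GRB.INTEGER, lb=0, name="chairs")
tables = m.addVar(vtype=GRB.INTEGER, lb=0, name="tables")

# Objective: maximize total profit ($30 per chair, $50 per table)
m.setObjective(30 * chairs + 50 * tables, GRB.MAXIMIZE)

# Constraint: 2 labor hours per chair, 3 per table, 40 hours available
m.addConstr(2 * chairs + 3 * tables <= 40, name="labor_hours")

m.optimize()

if m.Status == GRB.OPTIMAL:
    print(f"chairs={chairs.X:.0f}, tables={tables.X:.0f}, profit={m.ObjVal:.2f}")
```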

## Out-of-Scope Use Cases

- General-purpose chat, open-domain reasoning, or tasks unrelated to optimization modeling.
- Safety-critical or regulated applications (e.g., healthcare, finance, legal decisions, credit scoring) without expert human review of both the model output and the resulting optimization.
- Fully automated deployment where optimization results are used directly for real-world decisions without human oversight.
- Automatic execution of generated code in production systems without sandboxing, logging, and appropriate security controls.


## Technical Requirements & Integration

We recommend **≥32GB GPU VRAM** (e.g., A100/H100/B200) for comfortable inference, especially for long prompts and multi-turn interactions. 
Please check out our [GitHub page](https://github.com/microsoft/OptiGuide) for instructions on the inference pipeline.

# Data Overview
## Training and Validation Data
We fine-tune OptiMind-SFT on cleaned versions of the OR-Instruct and OptMATH training sets, and validate on a held-out validation split drawn from the same cleaned corpora.

## Testing Data
For testing, we use manually cleaned and expert-validated versions of the IndustryOR, Mamo-Complex, and OptMATH benchmarks. Please visit our [GitHub page](https://github.com/microsoft/OptiGuide) to download the cleaned benchmarks.

# Known Technical Limitations

- The model can still produce incorrect formulations or invalid code, or declare feasibility/optimality incorrectly.   
- It is specialized to OR benchmarks; behavior on general text or other problem domains is not guaranteed.
- No dedicated red-teaming against unsafe content categories (e.g., hate, violence, self-harm) or jailbreak attacks has been performed; the paper focuses on technical robustness metrics. 

Users **must** keep a human in the loop for all consequential decisions and carefully review any generated code before execution.
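
As one concrete safeguard, the sketch below (assuming the reviewed, generated code has built and solved a gurobipy model; the helper name is hypothetical) checks the solver's own reported status rather than trusting the model's prose claim of feasibility or optimality:

```python
# Hypothetical helper to run after executing reviewed, model-generated gurobipy
# code: report the solver-verified outcome instead of the LLM's claim.
import gurobipy as gp
from gurobipy import GRB

def report_status(model: gp.Model) -> None:
    """Print the solver-verified outcome for an already-optimized model."""
    names = {
        GRB.OPTIMAL: "optimal",
        GRB.INFEASIBLE: "infeasible",
        GRB.UNBOUNDED: "unbounded",
        GRB.TIME_LIMIT: "stopped at time limit",
    }
    status = names.get(model.Status, f"other (code {model.Status})")
    if model.Status == GRB.OPTIMAL:
        print(f"Solver-verified optimal objective: {model.ObjVal:.4f}")
    else:
        print(f"No verified optimum; solver status: {status}")
```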

# Other Sources & Maintenance
- Evaluation code and cleaned benchmarks: [GitHub page](https://github.com/microsoft/OptiGuide)
- Paper: [arXiv link](https://arxiv.org/abs/2509.22979)

For questions, issues, or feature requests, please use the GitHub issue tracker or the Hugging Face “Community” tab.

# Citation
If you use OptiMind-SFT or the associated datasets/benchmarks in your work, please cite the accompanying paper ([arXiv:2509.22979](https://arxiv.org/abs/2509.22979)).
