MOOSE-Star-IR-R1D-7B
by ZonglinY · license: apache-2.0
Language Model · 7B params · 38 downloads
Quick Summary
A 7B language model from the MOOSE-Star project, served via SGLang and used to compute selection probabilities over candidate research ideas (up to 15, labeled A-O) given a research question and background survey.
Device Compatibility
- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
Minimum recommended: 7GB+ RAM
Code Examples

Usage

```bash
git clone https://github.com/ZonglinY/MOOSE-Star.git && cd MOOSE-Star
# See requirements.txt for full dependencies; at minimum:
pip install transformers torch
```

```bash
# SGLang requires a separate environment; see https://github.com/sgl-project/sglang for installation
# Start the server
python -m sglang.launch_server --model-path ZonglinY/MOOSE-Star-IR-R1D-7B --port 1235
```
```python
import sys
sys.path.insert(0, "./Inference")
from ir_probability_extractor import IRProbabilityExtractor

extractor = IRProbabilityExtractor(base_urls=["http://localhost:1235/v1"])
result = extractor.get_selection_probabilities(
    research_question="Your research question",
    background_survey="Your background survey",
    candidates=[
        {"title": "Candidate A title", "abstract": "Candidate A abstract"},
        {"title": "Candidate B title", "abstract": "Candidate B abstract"},
        # ... up to 15 candidates (labeled A-O)
    ],
)
print(f"Selected: [{result.selected_label}]")
print(f"Probabilities: {result.probabilities}")
```
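If you want a full ranking rather than only the top pick, the probabilities can be sorted directly. This is a minimal sketch assuming `result.probabilities` is a mapping from candidate labels to selection probabilities (that structure is an assumption, not confirmed by the repository); the literal dict below stands in for the extractor's output.

```python
# Hypothetical post-processing: rank candidates by selection probability.
# The dict stands in for result.probabilities, assumed to map
# label -> probability.
probabilities = {"A": 0.61, "B": 0.27, "C": 0.12}

# Sort labels from most to least likely.
ranking = sorted(probabilities, key=probabilities.get, reverse=True)
print(f"Ranking: {ranking}")
print(f"Top candidate: [{ranking[0]}]")
```

The top-ranked label should agree with `result.selected_label`; a mismatch would suggest the probabilities and the selection were produced by different calls.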