# RxStruct Gemma 1B

by Shiva7706 · 1B params · 1 language · License: CC BY-NC 2.0 · Early-stage

Edge AI: runs on mobile, laptop, and server devices (3GB+ RAM).
## Quick Summary

Fine-tuned model: RxStruct-Gemma-1B | Quantized version: GGUF release.
A fine-tuned variant of Gemma-3-1B-IT optimized for structured medical data extraction from unstructured clinical text.
## Device Compatibility

- Mobile: 4-6GB RAM
- Laptop: 16GB RAM
- Server: GPU
- Minimum recommended: 1GB+ RAM
## Training Data Analysis

Overall score: 🟡 Average (4.3/10)

Researched the training datasets used by RxStruct Gemma 1B, with a quality assessment for each.

Specialized for: general, science, multilingual, reasoning
### Training Datasets (3)

#### Common Crawl — 🔴 2.5/10 · general, science

Key Strengths
- Scale and Accessibility: At 9.5+ petabytes, Common Crawl provides unprecedented scale for training data.
- Diversity: The dataset captures billions of web pages across multiple domains and content types, enabling broad topical and linguistic coverage.
- Comprehensive Coverage: Despite limitations, Common Crawl attempts to represent the broader web across regions and languages.

Considerations
- Biased Coverage: The crawling process prioritizes frequently linked domains, making content from digitally underrepresented communities less visible.
- Large-Scale Problematic Content: Contains significant amounts of hate speech, pornography, violent content, and other material that must be filtered before training.
#### Wikipedia — 🟡 5/10 · science, multilingual

Key Strengths
- High-Quality Content: Wikipedia articles are subject to community review, fact-checking, and citation requirements.
- Multilingual Coverage: Available in 300+ languages, enabling training of models that understand and generate text across many languages.
- Structured Knowledge: Articles follow consistent formatting with clear sections, allowing models to learn well-organized document structure.

Considerations
- Language Inequality: Low-resource language editions have significantly lower quality, fewer articles, and less editorial oversight.
- Biased Coverage: Reflects biases in contributor demographics; topics related to Western culture and interests are overrepresented.
#### arXiv — 🟡 5.5/10 · science, reasoning

Key Strengths
- Scientific Authority: Scholarly content from an established preprint repository.
- Domain-Specific: Specialized vocabulary and concepts.
- Mathematical Content: Includes complex equations and notation.

Considerations
- Specialized: Primarily technical and mathematical content.
- English-Heavy: Predominantly English-language papers.
## Code Examples
### Example Usage (Python · transformers)

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Load the Gemma-3-1B-IT base model together with the RxStruct adapter.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-3-1b-it",
    adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

# Unstructured clinical text to extract structured prescription data from.
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)  # stream tokens to stdout as they are generated
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
```
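The summary above also mentions a quantized GGUF release. For CPU or edge deployment, a minimal sketch using llama-cpp-python follows; it assumes the GGUF file has been downloaded locally from the release, and the file name and context size below are hypothetical, not confirmed by this card:

```python
from llama_cpp import Llama

# Hypothetical local path to the quantized GGUF release of RxStruct-Gemma-1B.
llm = Llama(model_path="RxStruct-Gemma-1B-Q4_K_M.gguf", n_ctx=2048)

prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning."""

# Run completion and print the generated text.
out = llm(prompt, max_tokens=512)
print(out["choices"][0]["text"])
```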
## Example Output
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example OutputExample Usagepythontransformers
from unsloth import FastLanguageModel
from transformers import TextStreamer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="google/gemma-3-1b-it",
adapter_name="Shiva7706/RxStruct-Gemma-1B",
)
FastLanguageModel.for_inference(model)
prompt = """Mr. Shah, your blood pressure is quite high at 160/100.
I'm starting you on Amlodipine 5mg once daily in the morning.
Also take Atorvastatin 10mg at bedtime for your cholesterol.
Get your lipid profile and kidney function tests done after 1 month.
Reduce salt intake and exercise regularly."""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=512)
## Example Output
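For the dialogue above, the model is intended to return the extracted entities as structured JSON. The sketch below shows the general shape of such an output; the field names and nesting are illustrative assumptions, not the model's confirmed schema:

```text
# Illustrative shape only (hypothetical field names, not captured model output)
{
  "medications": [
    {"name": "Amlodipine", "dose": "5mg", "frequency": "once daily", "timing": "morning"},
    {"name": "Atorvastatin", "dose": "10mg", "frequency": "once daily", "timing": "bedtime"}
  ],
  "tests": ["lipid profile", "kidney function tests"],
  "follow_up": "after 1 month",
  "advice": ["reduce salt intake", "exercise regularly"]
}
```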
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
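These two headline figures are consistent: perplexity is exp(validation loss), and exp(0.2435) ≈ 1.28.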
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Ambiguous dialogues can yield redundant or mis-categorized entities (e.g., a lab test duplicated in the medicines list).
* JSON format adherence is strong but not perfect; pairing the model with a small post-processor (see the next section) is recommended.
## Recommended Post-Processing (Optional)
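As a starting point, the sketch below extracts the first balanced JSON object from a generation and removes duplicate entities. It is a minimal illustration, assuming the model emits a single top-level object; the helper names and the deduplication key are hypothetical, not part of the released model:

```python
import json


def extract_json(raw: str) -> dict | None:
    """Recover the first balanced top-level JSON object from raw model output.

    Sketch only: brace counting ignores braces inside string values, which is
    usually acceptable for this model's short, flat outputs.
    """
    start = raw.find("{")
    if start == -1:
        return None
    depth = 0
    for i in range(start, len(raw)):
        if raw[i] == "{":
            depth += 1
        elif raw[i] == "}":
            depth -= 1
            if depth == 0:
                try:
                    return json.loads(raw[start : i + 1])
                except json.JSONDecodeError:
                    return None  # structurally broken generation
    return None  # object never closed (e.g., truncated at max_new_tokens)


def dedupe_entities(items: list) -> list:
    """Drop duplicate entities (case-insensitive) while preserving order."""
    seen, out = set(), []
    for item in items:
        key = json.dumps(item, sort_keys=True).lower()
        if key not in seen:
            seen.add(key)
            out.append(item)
    return out
```

A typical flow: decode the generation, call `extract_json`, retry or fall back to rule-based parsing when it returns `None`, then run `dedupe_entities` over each list-valued field to address the redundant-entity limitation noted above.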
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text
All conversations are synthetic and do not contain any personally identifiable or real patient data.
## Model Performance
* Validation Loss: 0.2435
* Validation Perplexity: 1.28
* JSON Structural Accuracy: ~94% (measured on 50 random generations)
* Inference Latency (RTX 3050): ~1.9s per 300-token generation
## Limitations
* The model is trained only on synthetic data, not real medical transcripts.
* It should not be used for clinical decision-making.
* Certain ambiguous dialogues may lead to redundant entities (e.g., mixing tests and medicines).
* JSON format adherence is strong but not perfect; a small post-processor is recommended.
## Recommended Post-Processing (Optional)text