mbart-audi-diagnosis-agent

License: MIT
Author: MahmutCanBoran
Base model: mBART (arXiv:2001.08210)
Quick Summary

AI-powered assistant for diagnosing chronic issues in Audi vehicles.

Device Compatibility

Mobile
4-6GB RAM
Laptop
16GB RAM
Server
GPU
Minimum Recommended
1864GB+ RAM
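The tiers above can be selected at runtime rather than hard-coded. A minimal sketch (the `pick_device` helper is illustrative, not part of this repository) that degrades gracefully to CPU when no accelerator, or even no torch install, is present:

```python
def pick_device() -> str:
    """Return the best available torch device string, falling back to CPU."""
    try:
        import torch  # optional: only needed for the GPU/MPS checks
        if torch.cuda.is_available():
            return "cuda"  # server tier with a CUDA GPU
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            return "mps"   # Apple-silicon laptops
    except ImportError:
        pass
    return "cpu"           # mobile / low-RAM tier

if __name__ == "__main__":
    print(pick_device())
```

The `try/except ImportError` keeps the helper importable in environments where torch is not installed, which matches the mobile end of the compatibility table.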

Code Examples

Dependencies (requirements.txt)

streamlit>=1.36
transformers>=4.41
torch>=2.2
sentencepiece>=0.2  # Required for mBART tokenization
accelerate>=0.31    # Optional but recommended (for device_map="auto")
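Before launching the app it can help to confirm these floors are actually met. A stdlib-only sketch (the `check_requirements` helper and the inline requirement table are illustrative, mirroring the requirements.txt above):

```python
from importlib.metadata import version, PackageNotFoundError

REQUIREMENTS = {  # major.minor floors from requirements.txt
    "streamlit": (1, 36),
    "transformers": (4, 41),
    "torch": (2, 2),
    "sentencepiece": (0, 2),
}

def parse_major_minor(v: str) -> tuple:
    """'4.41.2' -> (4, 41); tolerates local suffixes like '2.2.0+cu121'."""
    parts = v.split("+")[0].split(".")
    return tuple(int(p) for p in parts[:2])

def check_requirements(reqs=REQUIREMENTS) -> list:
    """Return human-readable problems; an empty list means all floors are met."""
    problems = []
    for name, floor in reqs.items():
        try:
            installed = parse_major_minor(version(name))
        except PackageNotFoundError:
            problems.append(f"{name} is not installed")
            continue
        if installed < floor:
            problems.append(f"{name} {version(name)} is below {floor[0]}.{floor[1]}")
    return problems
```

Running `check_requirements()` at startup and surfacing the returned list (e.g. via `st.error`) fails fast instead of crashing mid-request.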
Streamlit app example (app.py)
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    if not symptom.strip():
        st.warning("Please describe a symptom first.")
        st.stop()
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
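The same flow also works without Streamlit, e.g. for batch runs from a terminal. A sketch assuming the checkpoint downloads from the Hub (the `normalize_symptom` helper and `diagnose` wrapper are illustrative, not part of this repository):

```python
import sys

def normalize_symptom(text: str) -> str:
    """Trim whitespace and collapse internal runs so tokenization is stable."""
    return " ".join(text.split())

def diagnose(symptom: str,
             model_id: str = "MahmutCanBoran/mbart-audi-diagnosis-agent") -> str:
    # Heavy imports stay inside the function so the module imports without torch.
    import torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)

    inputs = tokenizer(normalize_symptom(symptom), return_tensors="pt").to(device)
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    text = " ".join(sys.argv[1:]) or "rattling noise during acceleration"
    print(diagnose(text))
```

Keeping the model imports inside `diagnose` means the script can be imported (e.g. for testing the normalizer) on machines that do not have torch installed.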
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
Required for mBART tokenizationpythontransformers
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "MahmutCanBoran/mbart-audi-diagnosis-agent"

@st.cache_resource(show_spinner=True)
def load_model():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    if torch.cuda.is_available():
        model = model.to("cuda")
    return tokenizer, model

st.set_page_config(page_title="Audi AI Diagnosis", page_icon="🚘")
st.title("🚘 Audi AI Diagnosis Agent")
st.caption("mBART-50 based: symptom → likely diagnosis")

symptom = st.text_area(
    "Enter symptom:",
    height=110,
    placeholder="in my A4 40 TDI there is a rattling noise during acceleration"
)

if st.button("Diagnose", type="primary", use_container_width=True):
    tokenizer, model = load_model()
    inputs = tokenizer(symptom.strip(), return_tensors="pt")
    if torch.cuda.is_available():
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
    with torch.inference_mode():
        outputs = model.generate(**inputs, max_new_tokens=96)
    st.success("Likely diagnosis")
    st.write(tokenizer.decode(outputs[0], skip_special_tokens=True))
